Saturday, April 11, 2009

Mo Ro Ko

"Roko" takes another shot:
Right. Artificial intelligence is not intelligence. Dale, this is a *logical* contradiction.

To point out what is, by my lights, the misapplication of the term “intelligence” to denote things that either aren’t intelligent or don’t actually exist is far from a contradiction on my part; it is the exposure of an error.

"And tens of thousands of scientists are wrong and Dale Carrico is right."

Reality check -- how many of these “thousands” of scientists you suddenly seem to imagine you speak for would actually share your superlative futurological interpretation of their work?

Singularitarians, transhumanists, techno-immortalists are a marginal minority of Robot Cultists who go to one another’s conferences and cite one another’s papers and make a lot of sensationalist claims to attract media attention to themselves.

But consensus science will never spit out the ponies the singularitarians are actually looking for, the aspirations that actually make singularitarians singularitarians -- the superintelligent post-biological Robot God, the mind uploading that immortalizes you into cyberspace, and so on.

Superlativity is not science, and to repudiate it is not to repudiate science. The marginality of a position on matters of morals, aesthetics, or politics isn't disqualifying in the way marginality is in matters of science, where candidate descriptions for warranted belief in the service of prediction and control should achieve the provisional consensus at which they aim. I elaborate this point here, among other places. Superlativity is a discourse through which selected scientific results, among many other things, are rendered meaningful to the futurological faithful and those they would cajole by connecting these results to congenial narratives, frames, assumptions, aspirations.

Discourse analysis and cultural criticism is what I do -- it isn’t science, and doesn't properly either pretend or aspire to be, but it needn’t be science to be relevant to what superlative futurologists are up to; indeed, it is far more relevant as such.

As it happens, although I am a reasonably technoscientifically literate person, and a great champion of science education, funding, and progress, I am definitely not a scientist myself. Like most people in my position (including many Robot Cultists who apparently lack my modesty in this admission but who have scarcely more knowledge to justify immodesty on this score), I can only assess the proximity to consensus of would-be candidate descriptions for scientific warrant beyond my knowledge and judge them accordingly. On those grounds, needless to say, every specifically superlative futurological claim fails to pass muster as a warrantedly assertible belief where matters of prediction and control are concerned.

But this is far from the end of the story. I can also grasp the ways in which scientific descriptions are taken up by aesthetic, moral, ethical, and political discourses that are not scientific in themselves and are deployed in efforts to make meaning, reconcile hopes and histories, mobilize collective efforts, and so on by my peers in the world caught up with me in the storm-churn of disruptive technoscientific change. The strategies and conventions of these sorts of efforts differ from those of science, and here my training puts me on much more solid ground. And here, I must say that superlative discourses reveal themselves to be unwarranted in many more ways still and far more outrageously. Hence the Superlativity critique.

"Tell me, is the University of Birmingham teaching pseudo-science, or are you going to start retracting some of your claims?"

I would describe theirs, like many comparable attributions of intelligence by coders and computer scientists to their robots and their software, as bad poetry at best and pseudo-science at worst.

The misapplication of the word “intelligence” to the complexity of behavior governed by a program is not itself an act of science or coding; it is a rhetorical choice made in an effort to communicate the salience of those results to an audience by connecting them to more familiar experiences in the world. As evidenced by the pathologically extreme variations of this discourse one finds among Robot Cultists, this rhetoric, as it turns out, was incomparably more trouble than it was worth.
