Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Thursday, August 09, 2007

More on "Intelligences" -- Whether Human, Artificial, Normative, or Otherwise Imaginary

Bruce Klein kindly asked me to participate in a survey today and I, perhaps less kindly, responded by going meta. It's kinda sorta what I do. The survey and the results (of those who were actually willing to play by the rules) are available here. I post my own reply to the survey here, because it takes up the discussion of intelligence where we left off before (thanks to Anne and Jim for their Comments, by the way, as always -- I'll reply in some form somewhere soon).

The Question:
Dale, quick question... when do you think AI will surpass human-level intelligence?

[ ] 2010-20
[ ] 2020-30
[ ] 2030-50
[ ] 2050-70
[ ] 2070-2100
[ ] Beyond 2100
[ ] Prefer not to make predictions
[ ] Other: __

Survey sent to a few friends to gain a better perspective on time-frame... results posted: www.novamente.net/bruce_blog Thanks! Bruce

My (Rather Non-Responsive) Response:

I can't answer the question because too many of the terms aren't clear. There are a limited number of things that can be usefully said while standing on one foot.

What counts as "human-level" here? It seems to me humans exhibit "intelligence" in radically different ways, measurable (perhaps not always nor well) via many different metrics.

As humans continue -- not begin, mind you, continue -- prosthetically (through medical interventions, network practices, embodied archives, and so on) to modify their perceptual, associative, and problem-solving capacities, their memories, their moods, and so on, is this a matter of "surpassing" "human"-"level" "intelligence," really, or a matter of humans continuing to change what counts as "human," as they always have done, through their ongoing social and cultural intercourse with the made world?

If human "intelligences," properly so-called, are multi-dimensional phenomena, facilitating multiple ends that are not properly reducible to one another (instrumental ends, yes, but also moral, esthetic, ethical, and political ends), then is it right to speak of "surpassing" current norms and "levels" even if one has only modified one dimension or capacity in an effort to facilitate some particular end, but at the expense of, or in a way that is indifferent to, the other dimensions of intelligence and the other ends to which intelligence is responsive?

Futurists like to bark out their glib predictions like auctioneers soliciting bids, and while I can get caught up in the excitement of such a scene quite as much as the next guy, it seems to me that too often this noisy spectacle becomes a too-vacuous display of competing clevernesses and inner-circle citations that become substitutes for open deliberation, distractions from thinking clearly.

What if the primary effect of offering up my own prediction from your kindly-provided checklist of dates when "AI will surpass human-level intelligence" is not to provide a contribution to the resources available for foresight, but to help distract us all from becoming better aware that we cannot actually adequately characterize "the human," nor the idea of "intelligence" so freighted with significance in these formulations, in the first place? How can participation in such a survey contribute clarity, even to those who ask the question because they are sincerely looking for clarity?

By way of conclusion, let me make a typical rhetorician's point that we all need to be wary of the ways in which metaphors like "level" connected to verbs like "surpassing" paint a picture in figures that has all the compelling concreteness of fact -- this is much of what metaphors do, drawn as they are through analogies to the everyday factual furniture of the world. This "reality effect" that figurative language sometimes lends to our abstractions and our cases for action, however edifying, however clarifying it might appear to be for the moment, may for all that be giving us the false impression that we know more and know better about the "intelligence" at the heart of this question than we really do.

Always think about what your metaphors, quite as much as your logical premises, have committed you to. Now that's an important lesson that can be uttered and even understood standing on one foot.

2 comments:

Robin said...

Possibly the biggest problem I find with this question (today, anyway!) is that it's sort of like asking when Zeus is going to resume throwing thunderbolts. The loadedness of the question is amazing - it asks when something that doesn't exist is going to become better than something that currently exists. Maybe we should just start with when/if there will be anything like Strong AI to begin with?

ZARZUELAZEN said...

We can be sure, Dale, that whatever 'Intelligence' is, it is not at all what the SIAI folks think it is ;)

My best current informed opinion is that there is not, in fact, a 'single' g-factor behind a monolithic concept of 'Intelligence,' but rather that general intelligence is a combination of *three* different kinds of intelligence (multiple intelligences theory).

(Incidentally, I believe the three kinds of intelligence are indirectly related to the '3 forms of causality' I mentioned in my last post.)

You have Symbolic Intelligence (deductive rules), Bayesian Intelligence (the ability to recognize patterns via inductive reasoning), and Reflective Intelligence (the ability of knowledge systems to reflect on their own operations).

Of course 'Bayesian reductionists' try to reduce all three types to 'Bayes,' but there is no basis for thinking that Bayesian reasoning is all-encompassing (when I pointed this fact out a number of times on the SL4 list years ago I was repeatedly ridiculed in the nastiest possible way) -- despite well-known problems arising from the fact that probability theory apparently cannot be applied to itself reflectively without paradoxes. Further, it is not at all clear that Deduction is merely a 'single case of Bayesian Induction' at all.

And the three kinds of cognitive intelligence can in turn be applied over a range of different knowledge domains (for instance social, physical, etc.), leading to a multitude of different possible definitions of intelligence.

Thus, as you rightly point out, Dale, it is far more fruitful to consider many different definitions of intelligence when reasoning about different domains, and not fall for the narrow, reductionistic definitions foisted upon us by "self-appointed elite organizations and self-described soopergeniuses" ;)