Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Monday, April 20, 2009

Hannah Arendt on AI

And now for the third and final excerpt of something like an unexpected trilogy of excerpts from Hannah Arendt today. Although this is likely to be the first many readers encounter in consequence of the reverse-chronological arrangement of posts in a blog, I do want to stress that this is the third, and in many ways least interesting, of the trilogy, an excerpt that needs the earlier two (first here and second here) to take on its real salience as a complement to what I criticize as superlative futurology. This passage appears in The Human Condition, on pp. 171-172, and nicely ties together some of the themes from the preceding discussion.
If it were true that man is an animal rationale in the sense in which the modern age understood the term, namely, an animal species which differs from other animals in that it is endowed with superior brain power, then the newly invented electronic machines, which, sometimes to the dismay and sometimes to the confusion of their inventors, are so spectacularly more "intelligent" than human beings, would indeed be homunculi. As it is, they are, like all machines, mere substitutes and artificial improvers of human labor power, following the time-honored device of all division of labor to break down every operation into its simplest constituent motions, substituting, for instance, repeated addition for multiplication. The superior power of the machine is manifest in its speed, which is far greater than that of human brain power; because of this superior speed, the machine can dispense with multiplication, which is the pre-electronic technical device to speed up addition. All that the giant computers prove is that the modern age was wrong to believe with Hobbes that rationality, in the sense of "reckoning with consequences," is the highest and most human of man's capacities, and that the life and labor philosophers, Marx or Bergson or Nietzsche, were right to see in this type of intelligence, which they mistook for reason, a mere function of the life process itself, or, as Hume put it, a mere "slave of the passions." Obviously, this brain power and the compelling logical processes it generates are not capable of erecting a world, are as worldless as the compulsory processes of life, labor, and consumption.
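
Arendt's passing example is concrete enough to spell out: trading a distinct operation of multiplication for sheer speed at repeated addition is a real decomposition. A minimal sketch, purely illustrative (the Python is mine, not Arendt's):

    def multiply(a, b):
        # Decompose multiplication into its "simplest constituent
        # motion," repeated addition; assumes b is a nonnegative integer.
        total = 0
        for _ in range(b):
            total += a
        return total

    assert multiply(6, 7) == 42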

Again, in the reference to the "worldlessness" of instrumental calculation and its effects we have a distinctively Arendtian usage. For Arendt the "world" is profoundly political in its substance, akin to the sense in which, when we speak of "worldly" concerns, we often mean to indicate more than just planetary or natural concerns but public and cultural affairs more generally. On p. 52 of The Human Condition, she writes that "the term 'public' signifies the world itself." She continues:
This world... is not identical with the earth or with nature... It is related, rather, to the human artifact, the fabrication of human hands, as well as to affairs which go on among those who inhabit the man-made world together [emphasis added --d]. To live together in the world means essentially that a world of things is between those who have it in common, as a table is located between those who sit around it; the world, like every in-between, relates and separates men at the same time.

Among other things, it seems worthwhile to draw attention to Arendt's idiosyncratic understanding of the "world," especially since this is the world the love of which Arendt announced in her personal motto, Amor Mundi. Think of the way in which we are born into a language, a "mother tongue," the existence of which long precedes our birth and will continue long after our death, but which, for all that, still consists entirely of our own performances of it, performances that at once sustain it in its existence but also change it (through coinages, figurative deviations, and so on).

1 comment:

jimf said...

> [T]he newly invented electronic machines. . . are, like
> all machines, mere substitutes and artificial improvers of human
> labor power, following the time-honored device of all division of
> labor to break down every operation into its simplest constituent
> motions. . . All that the giant computers prove is that the modern
> age was wrong to believe with Hobbes that rationality, in the sense of
> "reckoning with consequences," is the highest and most human of man's
> capacities, and that the life and labor philosophers, Marx or Bergson
> or Nietzsche, were right to see in this type of intelligence, which
> they mistook for reason, a mere function of the life process itself,
> or, as Hume put it, a mere "slave of the passions." Obviously. . .
> the compelling logical processes it generates are not capable of erecting
> a world. . .

"Being comes first, describing second... [N]ot only is it impossible to
generate being by mere describing, but, in the proper order of things, being
precedes describing both ontologically and chronologically. . .

Doing... precedes understanding... [A]nimals can solve problems that they
certainly do not understand logically... [W]e [humans] choose the right
strategy before we understand why... [W]e use a [grammatical] rule before
we understand what it is; and, finally... we learn how to speak before we
know anything about syntax. . .

Selectionism precedes logic. . . Logic is... a human activity of great
power and subtlety... [but] [l]ogic is not necessary for the emergence
of animal bodies and brains, as it obviously is to the construction and
operation of a computer... [S]electionist principles apply to brains
and... logical ones are learned later by individuals with brains. . ."

Gerald M. Edelman and Giulio Tononi,
_A Universe of Consciousness_, pp. 15-16


Interview with Michael Anissimov
Questions by Sander Olson. Answers by
Michael Anissimov
http://www.nanotech.biz/i.php?id=michaelanissimov

Michael Anissimov is a Singularity analyst and advocate.
He is interested in futurist issues, especially the interrelationships
between accelerating change, nanotechnology, transhumanism, and the
creation of smarter-than-human intelligence. . .

I'm Michael Anissimov, 21 years old as of 2005. . .

> Question 3: Artificial Intelligence (AI) proponents have been predicting
> AI breakthroughs ever since the first AI conference in 1956. As a result,
> many believe that artificial intelligence is 10 years away, and always
> will be. How do you respond to such skepticism?

Like psychology and the other soft sciences, Artificial Intelligence
is a field that has historically contained a lot of quacks. The definition of
"Artificial Intelligence," "the ability of a computer or other machine to perform
those activities that are normally thought to require intelligence," is so broad
that it is hard to tell where AI begins and everyday software programming ends.
AI researchers often fall prey to anthropomorphism: they project human characteristics
onto their nonhuman programs, much in the same way that mythology projects spirits
onto natural phenomena. Researchers put tons of work into creating a doll. . .
AI researchers can easily get carried away and think completion is around the
corner when in fact it's a long way off. . . [S]trong competition for research
grants and public attention. . . can encourage researchers to exaggerate their
current results and future prospects.

Despite all these false alarms, we must be realistic and acknowledge that
Artificial Intelligence will be created eventually. . . The human brain is a
structure with a function, or rather a set of functions. Like any other
functional structure, the human brain is susceptible to reductive analyses
and eventually reverse-engineering. There is no Ghost in the Machine,
no immaterial soul. . . Mathematically rigorous metrics of intelligence
have been formulated, and computer scientists continue to create programs that
display progressively better performance in tasks related to induction,
sequence prediction, pattern detection, and other areas relevant to intelligence.
If all else fails, we will use high-resolution brain scans to uncover the
structure of a specific human brain and emulate it on a substrate with superior
performance relative to the original organics. Analog functioning could be
perfectly duplicated in a digital context. A computer-emulated human mind with
the ability to reprogram itself would be an Artificial Intelligence for most
practical purposes. . .
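
As a toy illustration of the sequence-prediction work gestured at above, an order-1 Markov predictor fits in a few lines; the code and data here are illustrative assumptions, not any particular research system:

    from collections import Counter, defaultdict

    def train(sequence):
        # Count how often each symbol follows each other symbol:
        # an order-1 Markov model of the sequence.
        transitions = defaultdict(Counter)
        for prev, cur in zip(sequence, sequence[1:]):
            transitions[prev][cur] += 1
        return transitions

    def predict(transitions, symbol):
        # Predict the most frequently observed successor of `symbol`.
        if symbol not in transitions:
            return None
        return transitions[symbol].most_common(1)[0][0]

    model = train("abcabcabcab")
    assert predict(model, "a") == "b"
    assert predict(model, "c") == "a"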

Part of recognizing progress in Artificial Intelligence is keeping your eyes on
the right areas. Oftentimes the areas where real progress is occurring are not
called "Artificial Intelligence" at all, but theoretical computer science,
evolutionary psychology, information systems, or mathematics. Creating Artificial
General Intelligence will require researchers with serious knowledge of Natural
General Intelligence. It will require an awareness of the underlying mathematics
of intelligence, not just programming savvy. It will require lots of computing power,
probably somewhere between one-thousandth and ten times the computing power of
the human brain. The overenthusiastic Artificial Intelligence researchers of the
60s and 70s were using computers with the computational power of a cockroach brain.
How could we have expected them to create intelligence, even if they had the right
program? Conversely, even a bright graduate student might be able to create a functioning
Artificial Intelligence with ten or a hundred times human brain power at her disposal.
Many so-called "AI skeptics" are just thinkers afraid of the prospect that the human
brain is no more than neurological machinery, much as the biologists of the early
1800s were terrified by the prospect that the patterns of life were entirely
rooted in mere chemistry; but the fact that the brain is machinery has been
established in cognitive science for decades.
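
To put rough numbers on the range quoted above: taking, purely for illustration, a Moravec-style estimate of about 1e14 operations per second for the human brain (the interview itself commits to no figure), the band works out as follows:

    # Back-of-envelope only: the 1e14 ops/s brain estimate is an assumed,
    # Moravec-style figure for illustration, not one given in the interview.
    HUMAN_BRAIN_OPS_PER_SEC = 1e14
    low = HUMAN_BRAIN_OPS_PER_SEC / 1000    # "one-thousandth"
    high = HUMAN_BRAIN_OPS_PER_SEC * 10     # "ten times"
    print(f"range: {low:.0e} to {high:.0e} ops/s")   # range: 1e+11 to 1e+15 ops/s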

Another issue in Artificial Intelligence progress is what we might call a threshold
effect. A certain threshold of functional complexity must be assembled before we have
anything we can reasonably call an AI. Human-equivalent and human-surpassing Artificial
Intelligences will be concrete inventions, just as the light bulb and the steam engine
were. 50% of a light bulb does not produce more light than 25% of a light bulb; both
produce none. There is no such thing as "50% of an Artificial Intelligence": you either
have an AI or you don't.
Although there may be important intermediary milestones which produce interesting results,
it is not fair to expect constant, even progress when technology often proceeds in
fits and starts. Technologies that have proven themselves in the past, such as the computing
industry, can continue to attract brainpower and funding even if no critical breakthroughs
occur, because workers in the field are confident that eventually a threshold will be
passed and a breakthrough will occur. . .
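
The threshold claim amounts to saying that progress toward an AI looks like a step function rather than a ramp. A one-line caricature, mine rather than the interview's:

    def light_output(fraction_built):
        # Threshold effect: a partly assembled bulb emits nothing;
        # output appears only once the whole artifact exists.
        return 1.0 if fraction_built >= 1.0 else 0.0

    assert light_output(0.25) == light_output(0.5) == 0.0
    assert light_output(1.0) == 1.0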

Question 6: Your website discusses the links between AI and nanotechnology. If,
as some skeptics have argued, Drexlerian nanotechnology never becomes feasible, how
will that hamper the development of machine intelligence?

I regard it as extremely unlikely that Drexlerian nanotechnology will never become
feasible. . . The human body is made up of working nanomachines. Among the
simplest applications of Drexlerian nanotechnology is the creation of powerful
nanocomputers, nanocomputers which will offer orders of magnitude more computing power
than the human brain, even if we use the most liberal estimates for human brain-equivalent
computing power and the most conservative estimates for nanocomputers. These
nanocomputers will allow the development of Artificial Intelligence to accelerate rapidly,
if AI hasn't already been developed by then, and if general-purpose nanocomputers are
available to AI researchers.

If Drexlerian nanotechnology is delayed, say, arriving in 2025 instead of 2015,
then whether or not AI is delayed will be dependent upon how much computing power is
necessary to produce AI, and whether or not Moore's law grinds to a halt due to the
absence of nanotechnology. Even if nanotechnology is not developed by 2025, I'm
sure that computer manufacturers will exploit every possible alternative technological
avenue to make our computers faster. . .
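
For scale, on the common 18-month-doubling reading of Moore's law (the interview names no doubling time), a ten-year slip corresponds to roughly a hundredfold difference in available computing power:

    # The 18-month doubling time is one common reading of Moore's law;
    # the interview commits to no figure.
    DOUBLING_TIME_YEARS = 1.5
    years_delayed = 2025 - 2015
    factor = 2 ** (years_delayed / DOUBLING_TIME_YEARS)
    print(f"~{factor:.0f}x growth over {years_delayed} years")  # ~102x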

Will AI be a computing-power-hungry area to do research in, requiring nanocomputers
to succeed? Compared to conventional software applications, yes. But how much more?
Giving a ballpark estimate would require a theory of intelligence that explicitly
says how much computing power would be necessary to produce a prototype that solves
challenging problems on human-relevant timescales. For the reasons stated earlier,
I doubt that human-equivalent computing power will be necessary to create AI. Getting
AI right is more a matter of having the right theory than pouring tons of computing power
into theories which don't work. If your theory is garbage, it could require hundreds
or thousands of times human brain computing power to produce intelligence. Conversely,
if your theory is good, you might be able to create intelligence on the computing hardware
of the day, only to see it running in comparatively slow motion. But if the researcher
could prove to others that the software really is intelligent, then funds would no
doubt be readily available, allowing the researcher to purchase better machines to let
the AI run in real time or faster.
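
The "slow motion" point is ultimately just a ratio: an AI whose real-time operation demands more operations per second than the hardware supplies still runs, only proportionally slower. A sketch with made-up numbers:

    # Both figures are placeholders chosen purely for illustration.
    required_ops_per_sec = 1e14    # assumed need for real-time operation
    available_ops_per_sec = 1e12   # assumed hardware throughput
    slowdown = required_ops_per_sec / available_ops_per_sec
    print(f"runs {slowdown:.0f}x slower than real time")   # 100x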

To be perfectly honest, I would prefer that (human-friendly) AI be developed before
Drexlerian nanotechnology ever arrives. This is the standard opinion of the Singularity
Institute and the majority of Singularity activists. The risk of nanotechnology being
introduced to our current society is high, and the safe development of superintelligence
is the best way of preparing for that risk. If we confront nanotechnology first,
then we face that risk alone, in addition to the risk of creating AI in a possibly
chaotic post-nanotech world. . .