Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Tuesday, May 18, 2004

More "Singularity" Talk

Michael Anissimov, a director of the Singularity Institute for Artificial Intelligence, commented on my earlier post on Roboethics, and this led to a series of exchanges over the course of this afternoon which I wanted to blog here. The language gets a little precious and technical, so you'll have to forgive all that, but the issues perplex me deeply and I would welcome further comments and questions.

Michael writes: “The term "Singularity", when used in the correct, Vingean sense, as it was used repeatedly at this recent Foresight Gathering, can be a quite useful term. It simply refers to the fact that our model of the future gets a lot fuzzier when a smarter-than-human intelligence hits the block. In the same way that chimps could never have imagined the detailed consequences of the expanding of the prefrontal cortex, humans can certainly not imagine the detailed consequences of a mind with a completely different design than our own, running at totally different speeds, with the ability to improve its own architecture. Asking what the "cultural consequences" of such an event could be sort of misses the point; our entire history of culture, art, thinking, science, and technology is based on a 3-lb cluster of nerve cells running at 200Hz, without the ability to self-modify, filled with evolutionary luggage and ancestral brainware adapted to our specific niche. When you step outside of that same-old same-old, you are playing with different rules.

“Smooth function of postulated technologies is not necessary for the creation of transhuman intelligence. That would eventually be possible with, say, linearly accelerating technology. It's a problem at the intersection of cognitive science and engineering. I can sympathize with your distrust of the more extreme-sounding Singularity discourses, but let me remind you that we have much the same issue in nanotech - the survival of the human species is literally at stake in both issues. We live in a time where issues that superficially sound like "techno-apocalyptic survivalist ranting conjoined to the tropic paraphernalia of transcendental theology" now correspond to actual issues in reality. Just look at CRN, for example.

“Anyway, the serious concerns of Singularity activists (to be kept distinct from random technology enthusiasts using the word for fun) are summed up just perfectly in the Transhumanist FAQ:

"The arrival of superintelligence will clearly deal a heavy blow to anthropocentric worldviews. Much more important than its philosophical implications, however, would be its practical effects. Creating superintelligence may be the last invention that humans will ever need to make, since superintelligences could themselves take care of further scientific and technological development. They would do so more effectively than humans. Biological humanity would no longer be the smartest life form on the block."

To which I responded:

“Since even Vinge's canonical formulations on "singularity" contain both claims about "exponential runaways" as well as claims about the impact of an arrival of either artificial or augmented greater-than-normatively-human intelligence, I think it is important not to be so sure confusions are just a result of sloppy or "incorrect" terminology. The confusions go deeper than that, it seems to me.

“Remember that to an important extent "fuzziness" in prediction has always been central to our understanding of what "futurity" means. To the extent that singularity-talk proposes a kind of hyper-futurity in this way, I think it makes sense to interrogate just what inspires us to expect this (and apparently why so many seem to desire it so).

“I think I do miss your point that "the entire history of culture, art, thinking, science, and technology is based on a 3-lb cluster of nerve cells running at 200Hz," since, depending on precisely what you mean by the phrase "based on," this is not a point that makes much sense to me. Heap up a pile of 3-lb nerve cell clusters and you're not going to find culture or art or science conjured up there. (And I say this as a materialist.)

“Your point that smooth function is not a necessary assumption, since linearly accelerating progress could also produce greater-than-normatively-human intelligences, seems absolutely right to me, but then it is no longer clear to me why, under such a state of affairs, it clarifies much to reach for "singularity"-talk at all when thinking these eventualities through. This is, of course, why I called attention to the "roboethics" rubric in the first place.

“I think we agree about how the more extreme, complacent, transcendentalizing freightings of the term singularity are often contrary to clear thinking and necessary collaborative work to ensure the developmental outcomes we desire from technology. I guess ultimately my trouble with "singularity" ends up being that it is hard for me to figure out just what singularity-talk contributes to discourses about the quandaries of technological intelligence augmentation that are actually on offer.

“Finally, I don't see how the arrival of greater-than-normatively-human technoconstituted intelligence will deal a greater or more decisive blow to anthropocentrism than did, say, Copernicus or Freud already. To the extent that it makes sense to be anthropocentric right now (which is to say, not very much), it seems to me new technologies, by expanding the reach of our capacity to re-write ourselves in the image of our values, will if anything expand (while certainly changing) what anthropocentrism might mean to us.”

Later this afternoon, I posted a comment in an unrelated conversational thread on wta-talk:

"Roboethics," unlike some superlative-state AI activism/speculation seems concerned less with questions such as "should AIs have rights" or "will AIs be righteous" than with questions like what are the ethical uses to which robots and automation more generally can be put and with what consequences (read Marshall Brain's blogs to see just how real-world relevant such questions are already), and I doubt ordinary people will laugh at these questions inasmuch as their livelihoods and sometimes their lives are on the line here right now.

And Michael responded to this in a way that more or less picked up the conversation reproduced above where we left off. I quote my own response to him (and Michael’s comments and objections are interspersed within my responses to him):

[> Michael wrote:]

> The problem with overfocus on nearer-term applications of robots and
> automation is that it neglects the slightly-longer-term risk of software
> programs with human-surpassing intelligence.

I responded:

This would only be true if something about a nearer-term focus precludes deliberation about longer-term and superlative-state risks/benefits. I don't agree that it does. In fact, I suspect that a nearer-term focus provides tools for more reasonable expectations about the longer term than does a more purely "long-term" discussion that leapfrogs all the proximate developmental stages that will likely stand between where we are and where we'll be.

> This also ties in with
> conventional issues of robots and automation - say I use nanocomputers,
> in 2014 or whenever they are available, to run an intelligent software
> program whose purpose is to perform weather surveillance. Without the
> complex goal system structure necessary to see human beings as sentient
> entities worthy of respect, there may be little stopping such a software
> program from lapsing into recursive self-improvement and paving over the
> surface of the Earth with sensors in order to maximize its ability to
> determine weather patterns.

There are all kinds of assumptions about just what form superlative-state AI would take in this scenario. How do you know that the kinds of superintelligence that will perform sophisticated expert functionality will be of a kind to produce entitative goal systems for which the category "respect for biological sentience" is even relevant? Why not assume a pre-entitative monomaniacal expert system for which the relevant safeguards will be sequestration and an elaborate (likely also machinic) oversight regime? Why assume in advance that you are making a kind of being that requires a superego, rather than a big lumbering piece of machinery that might go out of control unless you can shut it off or stop it from doing irreparable damage? (Assume for the moment that I already know all of the obvious things you think this objection of mine signals I don't know, and then take the objection seriously anyhow.)

> Even if you figure that the probability of
> this happening in a given year or whatever is only 1%, 6 billion lives
> could be on the line. Which would make that given scenario more worthy
> of attention than a scenario in which, say, a given automation advance
> seems to entail a 50% risk that the salaries of a mere 100,000 people
> drop by 10%. (Which seems to be what you are talking about.)

Is the scenario you are talking about worthy of attention? Certainly it is! Don't mistake my objections. I just don't estimate the kinds of predictions that tend to preoccupy singularity enthusiasts/worriers as plausible enough to devote much of my own attention to them. There's nothing wrong with the fact that bright, earnest people take them more seriously than I do. Still, it is perfectly appropriate to argue about why people weight these expectations differently, what kinds of motivations may contribute to these different weightings, what the rhetorical effects of various kinds of arguments for them are, what we can say about cultures-of-belief in which these preoccupations and not others dominate, etc. Now, about the specific numbers you are throwing around here, permit me to return to that in a moment.
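[A note for readers following the arithmetic: the comparison Michael sketches above is a back-of-the-envelope expected-value calculation. The little Python sketch below just spells it out using the hypothetical figures from his own message; the function name and the framing are mine, and none of these numbers should be mistaken for actual risk estimates.]

# A minimal sketch (in Python) of the expected-value comparison Michael gestures
# at above. Every number is his own hypothetical, not a real risk estimate, and
# the "magnitudes" (lives versus fractional salary losses) are not obviously
# commensurable -- which is part of what is at issue in this exchange.

def expected_harm(probability, magnitude):
    """Expected harm: probability of the bad outcome times its magnitude."""
    return probability * magnitude

# Scenario A: a 1% chance per year of a runaway AI with 6 billion lives at stake.
ai_scenario = expected_harm(0.01, 6_000_000_000)

# Scenario B: a 50% chance that an automation advance costs 100,000 people
# 10% of their salaries.
automation_scenario = expected_harm(0.5, 100_000 * 0.10)

print(f"Scenario A expected harm: {ai_scenario:,.0f} (in lives)")
print(f"Scenario B expected harm: {automation_scenario:,.0f} (in salary-tenths lost)")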

> Just to quickly answer the two questions that were brought up;
>
> Q: Should AIs have rights?
> A: It probably doesn't matter what humans say except insofar as they
> can have an input today in the creation process, since any
> human-surpassing AI will be able to do whatever it wants because of its
> superior intelligence and the technologies it will develop.

See, I think this demonstrates a really pernicious effect of taking singularity-talk too seriously in its transcendentalizing mode. Look at all that you're apparently willing to give up here, presumably reluctantly, as a consequence of what you no doubt imagine to be a "hard-boiled" contemplation of various developmental extrapolations.

If greater-than-normatively-human intelligence emerges as a function of augmentations, or as a property of technologically-assisted collaboration, say, then certainly it will still matter what humans say -- and those are only two superintelligence scenarios, not even counting the more incrementalist accounts in which the human (whatever that will come to mean) still manifestly matters. I am not sure I can even grant what you mean by "input in the creation process" for an autonomous superintelligence, to the extent that I am not ready to grant that even there the most plausible scenarios will involve artificial-conscience-engineering rather than analogues to the sorts of failsafes that big, dumb, more-powerful-than-human machines already require. Look at how very few actual superlative projection-possibilities are inspiring how sweeping a sense of what the future is likely to look like for you, and look at how this is restricting what you are willing to entertain as likely and as worthy of intervening in now. You've done a lot of math, but have you done the right math?

> Q: Will AIs be righteous?
> A: Hopefully in ways that either all or the vast majority of humans see
> as desirable. This depends most on how initial conditions are set up.

> Software programs that can develop
> nanotechnology (or better manufacturing technologies) with their
> accelerated, transhuman brains and use them to kill off all humans
> unless they specifically see humans as entities worthy of value, are
> worth worrying about.

Well, sure, I guess. But why assume that this is a different sort of discussion than gray goo already is? Why be so confident that more-than-normatively human intelligent expert systems will have "brains" "they" use in the first place? Why be so sure anything will be "worthy" for "them" in the sense you mean? A rat gnaws a wire in Kamchatka and a thermonuclear device is launched into space precipitating apocalypse (or whatever). Why are we talking about some geeks in Sunnyvale creating a possibly malicious superhuman AI again? (I know, I know -- I kid because I love!)

> Whether or not normal people laugh at the topic
> is an issue of memetics to be taken up *after* we determine the
> differential importance of addressing any given risk. If I can spend 20
> minutes convincing someone that the negative impact of a given
> automation process could put 100,000 people out of a job, thereby
> lowering the likelihood of that negative impact by, say, 1%, then that
> is peanuts relative to the utility of spending 20 minutes convincing
> someone that the potential negative impact of superintelligence could
> entail the demise of humanity, and having even a 0.00001% positive
> effect in that direction.

People who talk about "singularities" with one breath should hesitate to start flinging out hard numbers characterizing risk-assessments of superlative state tech with the other breath. That sounds harsh, but I don't mean it that way at all! I am registering a real and ongoing perplexity of mine (after a decade of talking to singularitarian-types). Where are your caveats? Where can your confidence be coming from here? Isn't your whole point of contention with me that my frame of reference for assessment is derived from too modest and linear a set of developmental assumptions? But precisely to the extent that you break from such a predictive frame aren't you required to qualify your claims and weight your assessments in light of those qualifications? It isn't enough to posit existential threats willy-nilly and imagine that their scale alone justifies a primary focus on just those developmental scenarios that have come to dominate your fancy.
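[Again, for readers who want the weighting spelled out: the sketch below restates Michael's 20-minute comparison using only his own hypothetical figures. The function name and framing are mine; the point is simply how much work those bare probability estimates are being asked to do.]

# The same kind of sketch for the 20-minute persuasion comparison quoted above,
# again using only Michael's hypothetical numbers. The conclusion turns entirely
# on the unargued probability shifts and on how one weighs job losses against
# lives -- none of which the arithmetic itself supplies.

def persuasion_value(probability_shift, magnitude_of_harm):
    """Expected harm averted by nudging the odds of a bad outcome."""
    return probability_shift * magnitude_of_harm

# 20 minutes on the automation case: a 1% reduction in the odds of a harm
# touching 100,000 jobs.
automation_talk = persuasion_value(0.01, 100_000)

# 20 minutes on the superintelligence case: a 0.00001% (= 1e-7) reduction in
# the odds of a harm touching 6 billion lives.
superintelligence_talk = persuasion_value(0.0000001, 6_000_000_000)

print(f"Automation talk: ~{automation_talk:,.0f} expected job-harms averted")
print(f"Superintelligence talk: ~{superintelligence_talk:,.0f} expected lives saved")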

> People can laugh all they want, our goal
> should not be to get every last person to take us seriously, but to
> actually maximize the probability that we transition into a peaceful and
> enjoyable future.

I definitely agree with you. People find laughable any number of things I worry about and expect to happen, too. But this gives me special responsibilities in making my case, it seems to me. I need to be especially generous in explaining my reasons to those who disagree with me, I need to be especially attentive to the ways in which non-rational factors (fancies, fears) may contribute to my own sense of the allure of my very unconventional beliefs, I need to expect to chart very detailed argumentative pathways to lead my interlocutors from where they are to where I want them to be, I need to take very seriously the objections and alternatives to my own very controversial beliefs, etc.

> It is my belief that lowering the probability of existential risk is
> what transhumanists should *really* be concerned about. (I think Nick
> Bostrom might agree.) There is already enough cultural and technological
> momentum already present that the eventual availability of transhuman
> modifications seems extremely likely, *given that we don't wipe
> ourselves out first*.

I definitely agree with you. For the next half-century my own expectation is that post-humanist (and, let us hope, not de-humanizing) technologies will be primarily genetic, prosthetic, and cognitive. I personally think the focus needs to be on ensuring that the costs, risks, and benefits of technology development are fairly distributed, so as to limit their destabilizing effects and maximize general welfare, and on ensuring that funding and oversight are internationalized, so that these technologies don't incubate new kinds of devastating arms races and unmanageable terrorism. I honestly don't see how the word "singularity" can introduce much but confusion and distraction into these many necessary conversations. That is why I express these concerns aloud, not because I think there is anything logically the matter with the ideas in the abstract, or with the good people (among them friends of mine!) who enjoy them.

> If we mess up and kill ourselves, then no
> transhuman future, no space colonization, no uploading, no immortality,
> no nothing. The instant your probability estimate of superintelligent
> AIs wiping out humanity goes from zero to anything above that, say one
> in a million, it immediately becomes something worth worrying about.

I only disagree with you because the existential threats that seem to me more likely than the particular threat of a hard-takeoff totalizing superintelligent AI singularity are themselves so overabundantly numerous (bioengineered pathogens, proliferating nukes and other WMD, weaponized nanoscale devices, catastrophic climate change, etc.) that I can't justify spending too much time on singularity-talk, however compelling it might be in principle. YMMV, of course!

> Robots that take our order at McDonalds are basically worth ignoring.
> Existential risk is the number one issue, and I think that a wise
> strategy for countering such risk involves covering two main bases -
> bits and atoms - AI and nano - which is what we have SIAI and CRN for.

My guess is that dealing with robots at McDonalds now is more likely than thinking about superlative-state tech to provide the practical, strategic, cultural, problem-solving, conceptual, and networking resources from which will eventually come the very tools we will later need to deal with the actually superlative-state tech that evolves out of our contemporary quandaries to become our shared future. I agree with you about CRN, and hope you do not take offense at my hesitancy to extend comparable support to the efforts of SIAI. For now, my focus will remain on the bioethics, neuroethics, and roboethics of proximate technology development, rather than on superlative states.
