Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Saturday, May 02, 2009

Advice to a Shaken Robot Cultist

JimF offers up a helping hand to a shaken Robot Cultist in the Moot:
My advice to you: take a step back from the breathless gee-whizzery, the conferences, the Web sites, the Second Life fantasies; take a step back from the (would-be) technical arguments for or against nanorobot-based Star Trek replicators, rejuvenation machines, and AI "superintelligences", and take a page from Dale's book about the sociological and psychological context and function of these (interlocking) beliefs.

To which "Anonymous" (possibly a different person altogether, who can say?) responds:
That might be a valid activity for persons like Dale and yourself, and would help to keep religious tendencies, which are innate in all of us, in check. But that is not the focus of so-called "Transhumanist" organisations or their advocates. As I understand it, the purpose is to make the public more aware of possible impending technological developments, in the medium term to long term, and to try to steer public policy in a progressive direction. I think we should endeavour to support such a cause, because incumbent interests will develop new technologies as our scientific knowledge progresses to entrench their position to the detriment of the rest of society.

Of course, "Anonymous" here is mistaking idealized technodevelopmental outcomes with which he has identified personally for reasonable consensus science and progressive activism. These idealized outcomes are the ones he is describing as "impending" and as "new."

I absolutely will not ever allow these discussions to become distracted from the fact that the technodevelopmental focus of superlative futurology is always on imaginary and idealized technodevelopmental outcomes that answer to superpredicated aspirations -- superintelligence, superlongevity, superabundance.

The transhumanists, extropians, singularitarians, techno-immortalists, nano-cornucopiasts, and the rest spend the majority of their time discussing the "impending" and "new" development of techniques for "migrating" their embodied brains without loss into cyberspace or robot bodies thereby achieving a radical longevization as good as immortality, to be spent in immersive better-than-real virtualities or nanobotic treasure cities, under the gaze of history-ending superintelligent post-biological Robot Gods.

There is no available sense in the world in which any of this is either "impending" or "new." While it is true, of course, that there are innumerable problems for secular democratic progressive politics to be found in ongoing technodevelopmental social struggle, there is no (or only an accidental) contribution to be found in this work by turning to the deranged and deranging discourse of superlativity and its various faith-based initiatives.

It is actually quite ludicrous to say, as you do, that transhumanism (a would-be -ism and marginal movement of all things, members of which evangelize their faith in "The Future" rather than engaging in any of the actual worldly processes through which progressive technodevelopment actually takes place) is essentially an "educational" and "policy-making" enterprise. Transhumanism is a marginal sub(cult)ure whose membership organizations often seek to acquire legitimacy (and therefore attention and money) by selling themselves as "educational" and "policy-making" enterprises. But there is obviously no need to join a Robot Cult to facilitate technodevelopmental progress at the level of research or policy, while just as obviously there is every reason not to do so.

Progressive activism certainly includes activism [1] to facilitate consensus science through public funding, regulation, and education, [2] to facilitate progressive distributions of the costs, risks, and benefits of technodevelopment to all the stakeholders to those developments, [3] to democratize the processes through which those stakeholders have a say in the technodevelopmental decisions that affect them, and [4] to consensualize to the greatest possible extent the terms on the basis of which technodevelopmental change is incorporated into people's lives.

Taking this sort of progressive technodevelopmental activism seriously looks nothing like indulging in superlative futurological discourses or investing your identity in marginal defensive sub(cult)ural superlative futurological membership organizations.

5 comments:

Nato Welch said...

So they're putting the cart of warning about superlative outcomes before the horse of analyzing the probability they will occur?

Sounds like Pastafarians feeling a heartfelt duty to warn us about the correlation between global warming and the decline of piracy on the high seas.

A very, very large set of things are merely possible. But imagining a few mere possibilities is no way to draft policy, regardless of how high-impact your particular pet scenarios are.

If the likelihood of your scenario is very uncertain or very small, you cannot compensate by inflating the stakes. That way lies madness, because, well, ANYBODY can propose a merely possible apocalyptic scenario that "changes everything." If we let the stakes override the odds, our imaginations will get the better of us.
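
A toy sketch of that arithmetic, with invented numbers (a minimal Python illustration, not a model of any real scenario):

# Toy illustration, all numbers invented: why "inflating the stakes"
# cannot rescue an arbitrarily improbable scenario. Under naive
# expected-value reasoning, EV = probability * stakes, so for ANY
# probability floor, however tiny, a claimant can always name stakes
# big enough to dominate every mundane priority.

def expected_value(probability: float, stakes: float) -> float:
    """Naive expected value of acting on a scenario."""
    return probability * stakes

mundane = expected_value(0.9, 1_000)        # a likely, modest-payoff policy
apocalypse = expected_value(1e-12, 1e20)    # a wildly improbable "changes everything" scenario

print(f"mundane policy: {mundane:,.0f}")    # 900
print(f"pet apocalypse: {apocalypse:,.0f}") # 100,000,000

# The pet apocalypse "wins" by five orders of magnitude, and a rival
# doomsayer can always add another zero to the stakes, so letting
# stakes override odds yields no usable ranking of priorities at all.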

Anonymous said...

"superintelligence, superlongevity, superabundance"

Some individuals who call themselves "transhumanists," on their personal blogs, engage in the kind of hyperbole that you describe for various psychological or social reasons.

However, on the whole the stuff that I read from groups such as the IEET seems well reasoned.

In a time of so much apathy and negativity, especially with regard to the potential to transform society, I think that it is good to develop a positive attitude to technology and to look at how scientific progress could lead to transforming social, political and technological developments.

Transhumanists do shoot themselves in the foot by engaging in wishful thinking. This detracts from the real progress being made in neuroscience and artificial "intelligence", genetic engineering and manufacturing capabilities.

They have unfortunately framed these discussions in terms of "superintelligence, superlongevity, superabundance."

The universe is stranger than we can imagine and so will be our technological progress.

This does not mean that, by thinking and writing about these issues, one ignores current problems that require mundane solutions free of hyperbole. The solution to many of the problems we face today will require radical re-thinking, including how we think about technology and its possibilities, notwithstanding the role of human agency in historical progress.

Dale Carrico said...

Nato wrote:

If the likelihood of your scenario is very uncertain or very small, you cannot compensate by inflating the stakes. That way lies madness.

This point cannot be emphasized enough.

Dale Carrico said...

Anonymous wrote:

Some individuals who call themselves "transhumanists," on their personal blogs, engage in the kind of hyperbole that you describe for various psychological or social reasons. However, on the whole the stuff that I read from groups such as the IEET seems well reasoned.

How is it that the transhumanist "movement" and belief-system with which IEET is decisively associated provides any contribution at all to the things you find reasonable among its publications? Would it be possible to say these reasonable things without this affiliation with transhumanism? If it is possible to say these reasonable things without affiliation with a Robot Cult, then what is gained through that affiliation exactly?

Since you don't deny that the superlative futurology I analyze is typical of transhumanism, I wonder how you would go on to deny (if you do) that it is not only typical but in fact definitive of transhumanism. Some transhumanists say some reasonable things about technoscience that are also said by many (indeed, mostly) non-transhumanist people, but they also say flabbergastingly unreasonable things -- about robot-bodied superlongevity and robot-slave superabundance and Robot God superintelligence -- that pretty much only transhumanists and other superlative futurologists say. I think it is the unreasonable things that define the discourse, then, while the reasonable things are incidental to it.

I think that it is good to develop a positive attitude to technology.

I couldn't disagree with you more. I think it is good to insist on progressive, democratizing, consensualizing, equitable, diversifying technodevelopment while resisting regressive, anti-democratizing, exploitative, incumbent-elitist, eugenic technodevelopment. And I think this has nothing at all to do with having a "positive" or "negative" attitude toward some abstraction called "technology" in general that doesn't exist, and which functions in fact to mystify actually-ongoing technodevelopmental social struggle into an alienated relation of consumers to a socially-indifferent accumulating toypile that will "emancipate" through brute-force amplification, even though freedom has in fact nothing whatever to do with force.

The universe is stranger than we can imagine and so will be our technological progress.

That doesn't mean the Robot Cultists get their ponies.

The solution to many of the problems we face today will require radical re-thinking, including how we think about technology and its possibilities...

No doubt, no doubt. But this re-thinking will happen in the midst of the work itself, and the thinking of what we are doing as we are doing that work. It won't be engineered in the abstract, far in advance, absolutely separate from the struggle itself.

"The Future" superlative futurologists are genuflecting to and blueprinting in the present is just a funhouse mirror reflecting the present and their alienation to the present (including the openness which is the substantial futurity actually inhering in the diversity of stakeholders to the present) back at them.

jimf said...

Nato Welch wrote:

> A very, very large set of things are merely possible. But imagining
> the few mere possibilities is no way to draft policy, regardless of
> how high-impact of your particular pet scenarios are.
>
> If the likelihood of your scenario is very uncertain or very small,
> you cannot compensate by inflating the stakes. That way lies madness,
> because, well, ANYBODY can propose a merely possible apocalyptic
> scenario that "changes everything." If we let the stakes override the
> odds, our imaginations will get the better of us.

To which Dale replied:

> This point cannot be emphasized enough.


I once said to a certain Singularitarian:


> If the Singularity is the fulcrum determining humanity's
> future, and **you** are the fulcrum of the Singularity,
> the point at which dy/dx -> infinity, the very inflection
> point itself, then **ALL** morality goes out the window.
>
> You might as well be dividing by zero.
>
> You could justify **anything** on that basis
>
> . . .
>
> The more hysterical things seem, the more desperate,
> the more apocalyptic, the more the discourse **and**
> moral valences get distorted (a singularity indeed!)
> by the weight of importance bearing down on one human
> pair of shoulders. Which happens to belong to you (what
> a coincidence).
>
> Don't go there. . . Back slowly away from the precipice.


To which my interlocutor replied:


> > You could justify **anything** on that basis
>
> No, *you* could justify anything on that basis. I am much more careful
> with my justifications. . .
>
> Ethics doesn't change as the stakes go to infinity.


[Shrug.]


WOODROW WYATT: If you're not enthusiastic, you don't get
things done, but if you're over-enthusiastic, you
run the danger of becoming fanatical. Well, now, how
do you make certain that what you're doing is all
right, and that you haven't become, uh, in a fanatical
state?

BERTRAND RUSSELL: Certainty is not ascertainable. But what
you can do, I think, is this: you can make it a
principle that you will only act upon what you think
is **probably** true... if it would be utterly disastrous
if you were mistaken, then it is better to withhold
action. I should apply that, for instance, to burning
people at the stake. I think, uh, if the received
theology of the Ages of Persecution had been **completely**
true, it would've been a good act to burn heretics
at the stake. But if there's the slightest little
chance that it's not true, then you're doing a bad
thing. And so, I think that's the sort of principle
on which you've got to go.

WYATT: Would this apply to political parties and
governments?

RUSSELL: Oh, certainly it would. I mean, everybody who
belongs to a political party thinks the other party's
in the wrong. But, uh, he wouldn't say "therefore,
you have a right to go and assassinate them". You, uh...
there are certain things you **may** do when you think a
party's in the wrong, and certain things you mayn't.

WYATT: But what do you think of the limits of toleration?
I mean, you can get into a situation where you have
complete license and chaos.

RUSSELL: Well, the general principle **there** is,
that, uh, people should be allowed to advocate any change
in the law that they like. But in **general** --
though I don't say this always, by any means -- in
**general**, you should not permit the agitation for a
definitely illegal action prior to a change in the law.
You may advocate a change in the law, but you shouldn't
advocate an act which is illegal while the law stands
as it is. I don't say this as an absolute principle,
but usually.


"Bertrand Russell Speaking" 1959 52 min.
Woodrow Wyatt Interviews
Published in _Bertrand Russell Speaks His Mind_