Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Saturday, November 10, 2007

Business of Death

[via GOODMagazine h/t Shane!]



Discuss...

69 comments:

Roko said...

Or, and I know that this sounds a bit radical, you could work towards not dying in the first place!

If your body happens to pack it in, you could go and be cryopreserved at Alcor.

Dale Carrico said...

Well, strictly speaking, in terms of the actual case they are making, this would just shift their focus to questions of the environmental impact of dewars, cryoprotectants, and vitrification, the costs of "suspensions" compared with funerals, and so on, correct?

By the way, you do realize that you are going to die, don't you?

Roko said...

"you do realize that you are going to die, don't you?"

This seems to be a loaded question; if I answer in the negative, you ridicule me for entertaining delusions of grandeur, hubris, etc. Therefore, rather than answering it directly I'll present a challenge to you. Prove to me, beyond all reasonable doubt, that I will necessarily die, and you may proceed with the ridicule.

Anne Corwin said...

I think that when people nowadays say "You're going to die", they're speaking in terms of "for all practical purposes, this is the best assumption based on past data, and the one you should probably get used to the idea of".

I'm very literal so it took me a long, long time to figure this out -- I used to think people were claiming special knowledge of the distant future when they said things like, "You're going to die", and I used to get all indignant and huffy about it, but really, they're just making a practical, pragmatic statement. At least that's my impression.

I think that arguments over whether someone will or will not EVER die are kind of pointless; nobody needs to "prove immortality" (now or ever) in order to support, or work toward, enabling people access to medicine that will allow them the longest, healthiest lives possible.

I think that longevity discourse that tries to "promise" that people will someday live forever is what Dale would probably call "deranging" -- not only is it totally impossible to promise anything of the sort, nobody needs to be able to make a case for the ultimate fulfillment of such a promise in order to make longevity-oriented healthcare a worthwhile pursuit.

People will live as long as they live, and all we can do is our best in that regard.

Dale Carrico said...

The animation was drawing our attention to the incredible profit- and waste-driven enterprise of modern processing and disposal of the dead, and the basic irrationality that drives so much of it. Roko mentioned cryopreservation as a more recently available alternative, and I pointed out that, in the terms of the piece we were presumably talking about, the issue with Alcor would be the costs and environmental impacts associated with their method.

Even if genetic, prosthetic, and cognitive modification yields unprecedented rejuvenation and longevity gains, these would not circumvent the issues raised by the piece, since greater longevity isn't immortality after all, and since chances are the techniques would benefit some but not all (leaving most of the issues with which the animation was dealing intact, right?), especially in overexploited regions of the world, and so on.

In other words, I actually tried to take Roko seriously on their own terms.

Asking whether or not Roko realized we were indeed going to die was, as they put the point, "a loaded question," I guess. It is crucial to disarticulate the basic irrationality of The Denial of Death for embodied, sociable, narratively coherent beings in a finite universe from things like informed, non-duressed, non-normalizing consensual healthcare in an era of unprecedented emerging genetic, prosthetic, and cognitive therapy. Read that again if you haven't gotten that last sentence yet -- for me it is enormously important.

"[R]ather than answering it directly,"

Actually, you just did, didn't you?

"I'll present a challenge to you. Prove to me, beyond all reasonable doubt, that I will necessarily die, and you may proceed with the ridicule."

Roko, you don't understand how this works at all! It's extraordinary claims that demand extraordinary evidence. Every person has always died, and no person as far as we know has eluded this fate: the dying goes relentlessly on across the globe like a stream of grains tapping a drumhead; the beat goes on and on.

The radical interventions of genetic, prosthetic, and cognitive medicines in coming decades -- from which you and I may be lucky enough to benefit, should North Atlantic civilizations not idiotically destroy themselves in warfare or disruptive energy descent or climate change -- will neither deliver immortality, even on their most reasonably optimistic construal, nor nudge anyone past the hurdle of some as yet unfathomed breakthrough that delivers omnipotence. We people are all of us finite beings, forever prone to disease, accident, violence, betrayal, and novelty; and fantasies about shiny robot bodies or angelic digital ones rest on deep confusions about the actually embodied status of mind.

You radically misconstrue my purposes if you imagine ridicule is my aim. What's the pleasure in bullying people who are likely doing their best, however confused they may be? I tend to extend the benefit of the doubt until people start organizing or behaving badly.

I don't know you and I can't fairly form expectations about you, but I will say that as a general matter: If you fear death that fear will drive your life and offer you a life lived in fear of death, a death in life. Stop casting about after invulnerability and find your aliveness in the terms that inevitably present themselves. Only connect, as the poet says. Beware the ones who promise you immortality -- they want to fix you in a prolonged present like a bug pinned to a board, they want to sell you something with no value, they want your energy, they want your time, they want your regard to feed on. Be careful. Immortality is squaring the circle; stop wasting your time. Enjoy yourself, feel lucky, pay attention, be generous.

As for your argumentative gambit that I present you with evidence "beyond reasonable doubt" that you will die (I'll don my rhetorician's cap for a moment) -- this framing suggests that you don't understand what "reasonable doubt" consists of. Gosh, that was quick.

I wish you all the best, honestly, and I will happily find my way to the barricades with you so long as you are fighting corporatists and theocrats who would deny the world the benefits of new therapies to end unnecessary suffering and help people live longer healthier lives (either by making research illegal, refusing to fund it, or pricing it prohibitively to benefit only the elites with which they identify).

All the immortality stuff, however, is a sad scam. Stop falling for it and keep your eye on the ball.

Roko said...

@Dale: "Every person has always died"

This is not a logically coherent clause. If you mean "every person has died," you are asserting a falsehood. If you instead mean "every person who is no longer alive has died," you are asserting a pointless triviality.

Let me get to the crux of the matter: You made a very strong assertion - that I will, for sure, die; in all possible worlds, there is some time at which Roko ceases to be. Then you failed to come up with any logical argument to support this strong claim. This is typical deathist behaviour. Now you are trying to back your claim up with the "evidence" that a lot of people in the past have died - in fact something like 80% of the people who ever existed have died, the rest are still alive. Sure, a lot of people have died. A lot of people never had the internet. A lot of people did not have a university education. A lot of people have never been into space (if I had £100,000, I could book a ticket to go into space in the next two years), so your argument from past experiences is not conclusive. Technology changes the world; a static extrapolation of the past into the future will lead you to believe falsehoods.

Now you're saying that "All the immortality stuff, however, is a sad scam"

- in order for it to be a scam, it has to be demonstrably false. Logically speaking, you have failed to demonstrate the falsehood of the concept of indefinite life extension, aka immortality.

When you ridicule someone else's worldview as a "sad scam", you had better back your mockery up with reasoned arguments. I just don't see those arguments. I see sloppy, logically incoherent assertions like "Every person has always died".

Roko said...

@Anne: "I think that longevity discourse that tries to "promise" that people will someday live forever is what Dale would probably call deranging"

WOW! Hold your horses! No-one, especially not me, said that immortality is somehow guaranteed. That would be a very silly, indefensible claim to make. This whole discussion was prompted by Dale's comment:

"you do realize that you are going to die, don't you?"

Dale is promising me mortality. Dale is promising me that I'm definitely going to die, and on top of that he's implying that I've been "scammed" if I entertain any hope in my mind that I might, in fact, not die. All I'm asking is for him to back up the strong position he has taken. If he wants to retract his statement, and instead say "well, you might live forever, good luck with that", then that's great!

(note: the clause "people will someday live forever" is also incoherent. Living forever is not something that happens to you at a particular time; living forever is simply the absence of death at all future times)

@Anne: "I think that arguments over whether someone will or will not EVER die are kind of pointless"

Well, I wouldn't throw that particular issue away so quickly. If someone thinks that there is some chance of them living forever, their attitudes to life will change an awful lot. If the question of everlasting life is so irrelevant, then why do all the world's major religions harp on about it so much (and, indeed, so successfully)? The simple answer is because *DEATH IS A BAD THING*, although it takes courage to strip away the layers of deathist rationalization to come to this realization.

I, personally, am interested in knowing whether this bad thing (death) is avoidable, and in avoiding it.

Dale Carrico said...

Well, that didn't take very long. One more Robot Techno-Immortalist Cultist brings on the crazy.

If you instead mean "every person who is no longer alive has died," you are asserting a pointless triviality.

On the question at hand this is the furthest imaginable thing from a pointless triviality. Honestly, are you kidding me?

This is typical deathist behaviour.

I'm a "Deathist" now, whatever that's supposed to mean. Just because you're caught up in some weird cult doesn't mean everybody is, you know. (I wasn't ridiculing you before, but now it is clear that you are ridiculous.)

The universe is finite, bodies are mortal, the coherence of narrative selfhood is fraught with limits. To acknowledge these things is a matter of basic sanity. You've gone off the rails, and it is very much a matter of the company you are keeping.

In order for it to be a scam, it has to be demonstrably false.

What would count for you as such a demonstration? I'm sure you fancy yourself veeeeery scientific.

Logically speaking, you have failed to demonstrate the falsehood of the concept of indefinite life extension, aka immortality.

Prove Jeebus won't hold your hand forever in heaven if you just pray hard enough. Prove the flying spaghetti monster doesn't watch over all human affairs. C'mon show us all your stainless steel logic, cultist.

Another comment thread derailed by Superlativity. What a pack of loons.

Dale Carrico said...

Dale is promising me mortality. Dale is promising me that I'm definitely going to die, and on top of that he's implying that I've been "scammed" if I entertain any hope in my mind that I might, in fact, not die.

Don't blame me for your mortality, poor sad beleaguered True Believer... I'm coping with human finitude as much as everybody else in the world -- though for me exploitation and violence preoccupy my attention as expressions of finitude far more than the fact that I will eventually die does.

And, yes, of course, you are almost certainly going to die, like all the rest of us are almost sure to do. From some accidental misfortune or violence, or from some disease -- possibly from one of the conditions we have long "naturalized" and associated with aging that are only now recognized as possibly susceptible to therapeutic intervention.

Even if every disease were therapeutically overcome (the smart money says that fine day will not arrive in time for you, poor mite), there would remain serious questions about just what kind of selfhood would emerge from such prolongation, whether it would still be sensibly called a "life" at all in the sense we mean by the term.

My point is not to question the desirability or not of such a vanishingly unlikely eventuality but to point out -- yet again -- just how conceptually confused Superlative technocentrics tend to be when they start glibly tossing about ill-understood inapt concepts (intelligence, abundance, optimal health, eternal life and so on) on which they depend for the basic intelligibility of their views.

And, yes, in my view, Techno-Immortalists are indeed caught up in a scam -- the usual Priestly bs and Amway bs siphoning off funds from scared suffering rubes.

But, sure, pretend this is nothing but an ad hominem attack -- it's easier. Pretend I'm just not smart enough or courageous enough to grasp your awesome vision -- it's easier. Pretend you've got logic or science on your side while you offer up facile fallacies, handwaving hype, and ignore scientific consensus -- it's easier.

Best of luck to you.

Anonymous said...

While it may be possible in principle that you'll never die (I would say it's at least more likely than the Flying Spaghetti Monster), in practice you'd be a fool to expect it with any high probability. Aging could be harder to crack than expected, or you might die from an accident or disease or murder or war or any number of mundane things, or some less mundane existential disaster, or the technological infrastructure making medicine possible could collapse, and even if you make it through all that, and never choose to die for whatever reason, and never change yourself into some form that the present you wouldn't recognize as you, there's the heat death of the Universe. You can't be certain about this or indeed anything, but I expect to die, eventually, and would advise you to expect the same.

I'm what Dale would probably call a Robot Cultist, but I do think "immortality" is a poorly chosen word because it's so likely to activate irrational, transcendental passions, and encourage people to think they're not going to die when they probably will. (Also, is it accurate to say you've achieved immortality if you haven't already lived an infinite amount of time?) How is "immortality" in any way superior to "negligible senescence" or even "significantly longer healthy life"?

Also, it's pretty silly to call Dale "deathist". He never said it was a good thing that you would die, did he?

Michael Anissimov said...

Dale, it would be a bummer to think of you rotting after your heart stops. Why not sign up for cryonics? Freezing all your neural interconnections in place could make it possible for future scientists to revive you one day.

www.alcor.org

Best,
Michael

Dale Carrico said...

That death is a "bummer" is actually a relatively widespread sentiment, you know.

Also, there are already a number of actual circumstances in which my stopped heart might not occasion my immediate death.

But: "[F]reezing all your neural interconnections in place... for future reviv[al]" is handwaving.

I don't expect to go to heaven, nor to be reincarnated, nor to have my vitrified brain unwrapped like a potato in foil and uploaded into a shiny robot body.

Longer, healthier lives for all in an era of unprecedented consensual genetic, prosthetic, and cognitive medicine? Sure.

Techno-Immortalism? Nope.

jimf said...

Anne Corwin wrote:

> I think that when people nowadays say "You're going to die",
> they're speaking in terms of "for all practical purposes,
> this is the best assumption based on past data, and the
> one you should probably get used to the idea of".
>
> I'm very literal so it took me a long, long time to figure this
> out -- I used to think people were claiming special knowledge
> of the distant future when they said things like, "You're going to die",
> and I used to get all indignant and huffy about it, but really,
> they're just making a practical, pragmatic statement. At least
> that's my impression.

Yes, of course.

It's a practical, pragmatic statement, and accommodating oneself
to that as-certain-as-certain-ever-gets likelihood is part of
growing up -- **even if** one is simultaneously being a doctor,
or participating in medical or biological research
(or research into chemistry or physics or computer
technology, for that matter), and **even if** one can suspend
disbelief and enter into full sympathetic engagement with a
novel by J. R. R. Tolkien in which there are beings who
are destined to live as long as the Earth remains inhabitable,
or a novel by Olaf Stapledon in which there are beings who
live as long (tens of thousands of years, perhaps) as they can significantly
contribute to their society, and who then voluntarily choose the time
of their death. And **even if** one is not averse to the
idea, and does not find it utterly implausible, that there
could be beings in real life, either somewhere else in the
universe, or here on Earth in the future (whether directly
descended from the humans of today or not) who have (or will have)
lives something like those of the beings in those works of
imaginative fiction.


“The war creates no absolutely new situation:
it simply aggravates the permanent human situation
so that we can no longer ignore it. Human life
has always been lived on the edge of a precipice. . .

Never, in peace or in war, commit your virtue or your
happiness to the future. . . Happy work is best
done by those who take their long-term plans somewhat
lightly and work from moment to moment. . . The present
is the only time in which any duty can be done or any
grace received. . .

What does war do to death? It certainly does not
make it more frequent: 100 percent of us die and the
percentage cannot be increased. Yet war does
do something to death. It forces us to remember it.
The only reason that cancer at sixty or paralysis
at 75 do not bother us is that we forget them. . .
All schemes of happiness centered in this world
were always doomed to final frustration. In ordinary
times only a wise man can realize it. Now the
stupidest of us knows.”

-- C. S. Lewis, "Learning in War-Time"

jimf said...

Dale wrote:

> . . .to have my vitrified brain unwrapped like a potato in foil. . .

---------------------------
Dr. Melik: Well, he's fully recovered, except
for a few minor kinks.

Dr. Orva: Has he asked for anything special?

Dr. Melik: Yes. This morning for breakfast,
he requested something called "wheat germ",
"organic honey" and "Tiger's Milk".

Dr. Orva: [Laughs.] Oh, yes. Those were the
charmed substances that some years ago were
felt to contain life-preserving properties.

Dr. Melik: You mean there was no deep fat?
No steak or cream pies or hot fudge?

Dr. Orva: Those were thought to be unhealthy.
Precisely the opposite of what we now know to be
true.

Dr. Melik: Incredible! Well, he wants to know
where he is and what's going on. I think it's
time to tell him.

Miles Monroe: I can't believe this! My doctor
said I'd be up and on my feet in five days.
He was off by 200 years.

Dr. Melik: I know it's hard, Miles, but try to
think of this experience as a miracle of science.

Miles: A "miracle of science" is going to the
hospital for a minor operation, I come out the
next day, my rent isn't two thousand months
overdue. **That's** a miracle of science. This
is what I call a cosmic screwing. And where
am I anyhow? What happened to everybody?
Where are all my friends?

Dr. Orva: You must understand that everyone you
knew in the past has been dead nearly 200
years.

Miles: But they all ate organic rice!

Dr. Orva: You are now in the year 2173. Now this
is the Central Parallel of the American Federation.
This district is what you'd probably call the
southwestern United States. That was before it
was destroyed by the war.

Miles: War?

Dr. Orva: Yes. According to history, over 100
years ago, a man named Albert Shanker got hold
of a nuclear warhead.

Dr. Melik: You will remain in hiding
here for two weeks while we run a battery of tests
on you. Then, when we think you've fully recovered
your strength, we'll discuss the plan. . .

Miles: I still can't believe this. What do you
mean, "hiding"? Who am I hiding from? What
did she mean, "hiding"?

Dr. Orva: Well, you might as well know, Miles,
that reviving you was in strict opposition to government
policy.

Dr. Melik: What we've done is highly illegal, Miles,
and if we get caught, we'll be destroyed, along
with you.

Miles: What do you mean, "destroyed"?
What do you mean, "destroyed"?

Dr. Orva: Your brain will be electronically
simplified.

Miles: My brain? That's my second-favorite
organ!

Dr. Orva: Resisters to mind reprogramming will
be exterminated, for the good of the state.

Miles: What kind of government you guys got
here? This is worse than California!

Dr. Melik: There is a growing underground,
Miles, and some day the revolution will come when
we can overthrow our "great leader".

Miles: Look, you gotta be kidding. I wanna
go back to sleep. If I don't get at least
300 years, I'm grouchy all day.

Dr. Orva: We're taking him along too fast.
He's still emotionally unstable.

Miles: I can't believe this! I go into a hospital
for a lousy ulcer operation, I lay around in a
Bird's Eye wrapper for 200 years, I wake up,
suddenly I'm on the ten-most-wanted list.

Dr. Orva: He's ranting. We'd better tranquilize
him.

Miles: I knew it was too good to be true! I parked
right near the hospital.

Dr. Orva: Here. Smoke this, and be sure you get
the smoke deep down into your lungs.

Miles: I don't smoke.

Dr. Orva: It's tobacco! It's one of the healthiest
things for your body. Now go ahead. You need all
the strength you can get.

Miles: [Puffing the cigarette.] You know, I
bought Polaroid at seven. It's probably up
millions by now.

Anonymous said...

But: "[F]reezing all your neural interconnections in place... for future reviv[al]" is handwaving.

I don't expect to go to heaven, nor to be reincarnated, nor to have my vitrified brain unwrapped like a potato in foil and uploaded into a shiny robot body.


Even a handwavey chance at revival is better than a virtual certainty of death. OTOH, I'm sympathetic to the argument that cryonics is a poor use of money compared to effective charitable giving, and mentally easier to forgo than other luxuries.

jimf said...

> Even a handwavey chance at revival is better than a
> virtual certainty of death.

Why?

"I live and have my day, my son succeeds me and
has his day, his son in turn succeeds him.
What is there in all this to make a tragedy about?
On the contrary, if I lived forever the joys
of life would inevitably in the end lose all
their savor. As it is, they remain perennially fresh.

'I warmed both hands before the fire of life,
It sinks, and I am ready to depart.'

This attitude is quite as rational as that of
indignation with death. If therefore moods were
to be decided by reason, there would be quite
as much reason for cheerfulness as for despair."

-- Bertrand Russell, _The Conquest of Happiness_,
Chapter 2, "Byronic Unhappiness"

Something I posted on the Extropians' list 6 or 7 years ago:

---------------------------------------------------------------
[Another list member] wrote:

> ... Cryonics patients are DEAD.... The euphemisms are not fooling anyone
> and I think it sounds darn stupid, like wearing a sign that says... "Hi,
> this is a cult"....
>
> Saying that Alcor has sixty (or whatever) "reversibly dead patients" is,
> in itself, a powerful sign that cryonics is not cultish or denial or an
> Egyptian mummification sham....

I know it's not PC on this list, and I don't claim
that anybody else should feel the same way, but I have
a hard time taking cryonics seriously -- not least because
it provokes too many humorous associations for me.

For one thing, speaking of Egyptian mummification, there's
the altogether too close parallel with the outfit I saw a
TV show about a while ago, called Summum:
http://www.summum.org/mummification/

Then there's the whole association in my mind with
those aspects of California culture which New Yorkers
like Woody Allen have been making fun of for years
(and cf. "The Californian Ideology":
http://www.alamut.com/subj/ideologies/pessimism/califIdeo_I.html
and the rebuttal by Louis Rossetto, editor of _Wired_:
http://www.alamut.com/subj/ideologies/pessimism/califIdeo_II.html )

Woody Allen has also made the serious and somber point,
in the comedy _Sleeper_, that the kind of world a cryonaut
might wake into, or the purposes for which one might be
revived, might not be what ve might wish for prior to
cryonic suspension.

One reflection that's crossed my mind that
isn't just a matter of media associations is the thought
that the nastiest aspect of death for me personally (and
probably for many other people) isn't the idea of nonexistence
per se, but rather the likelihood of having to go through a
fair amount of unpleasantness in order to reach that state.
Cryonics doesn't get you around any of that; in fact, if
anything, there'd likely enough be a fair amount of unpleasantness
at the other end, assuming one got revived (I'd anticipate
something at least as bad as the discomfort of being "rebooted"
following my phenobarbital suicide attempt 20 years ago).

Of course, the wish to avoid the unpleasantness surrounding
death (on whichever side of it) can be written off as
mere cowardice; but the belief that one's own personality
is important enough to take extreme measures to preserve
from nonexistence strikes me as -- unseemly, somehow.
Maybe I feel that way for the same reason I'm not a fan
of Ayn Rand, and maybe that feeling marks me out as somebody
who doesn't **deserve** such perpetuation (that has a
satisfyingly Darwinian ring to it!).

Unlike HAL in _2001_, I find nothing particularly alarming
in the fact that my consciousness does, in fact, cease
to exist every night when I'm in delta sleep. Nor do I
find the prospect that the universe will go on for billions
of years without me in it particularly alarming. **Strange**,
yes, but no stranger than contemplating the fact that
the universe existed for billions of years **before** I came
into it, or contemplating the unlikelihood that I should exist
at all, or contemplating the fact that the person I was
at age 5 or 15 has already almost altogether disappeared,
and is only dimly reflected in the person I am now.

In fact, I even get into moods sometimes in which I'm
struck by the sheer strangeness of being limited to my
own conscious **perspective** on the universe -- I'm overwhelmed
by the thought of the sheer **simultaneity** of billions
of other human beings going about their business at this
exact moment, to say nothing about the trillions upon
trillions of other biological organisms on this planet,
and the unknown number of organisms, intelligent or otherwise,
on other worlds throughout the universe. I mentioned this
feeling once in an oddball telephone conversation I was
having with a friend many years ago, saying something
like "doesn't it ever make you feel **claustrophobic** to
be stuck in your own infinitesimal corner of spacetime?",
whereupon my friend asked "have you been dropping windowpane
acid?" ;->

However, getting back to media influences, the dominant
image in my mind when it comes to cryonics has got
to be the 1965 film _The Loved One_, based on the
Evelyn Waugh novel of the same name:
http://www.amazon.com/Loved-One-Robert-Morse/dp/B000ERVK4O
If I were about to sign a contract in the business
office of a cryonics outfit, the anticipation that
Liberace might appear at any moment in the proceedings
would, no doubt, reduce me to a fit of hysterical giggling.
For that matter, I don't think I could ever be certain
that I wouldn't wake up to discover that I **was**
Liberace (*). ;-> ;-> ;->

Jim F.

(*) or Elvis! ;->

-----------------------

Even though it offers little hope of resurrection, J. G. Ballard's
portrayal of a sort of cybernetic version of cryo-preservation in
his story "The Time-Tombs" is far more romantically appealing than the
prospect of having one's head preserved in liquid nitrogen
at Alcor:

"There were no corpses in the time-tombs, no dusty skeletons.
The cyber-architectonic ghosts which haunted them were embalmed
in the metallic codes of memory tapes, three-dimensional
molecular transcriptions of their living originals, stored
among the dunes as a stupendous act of faith, in the hope that
one day the physical recreation of the coded personalities
would be possible. After five thousand years the attempt had
been reluctantly abandoned, but out of respect for the tomb-
builders their pavilions were left to take their own hazard
with time in the Sea of Vergil...

The furnishings of the tomb differed from that of the previous
one. Sombre black marble panels covered the walls, inscribed
with strange gold-leaf hieroglyphics, and the inlays in the
floor represented stylized astrological symbols, at once
eerie and obscure. Shepley leaned against the altar, watching
the cone of light reach out towards him from the chancel as
the curtains parted. The predominant colours were gold and
carmine, mingled with a vivid powdery copper that gradually
resolved itself into the huge, harp-like headdress of a
reclining woman. She lay in the centre of what seemed to
be a sphere of softly luminous gas, inclined against a massive
black catafalque, from the sides of which flared two enormous
heraldic wings. The woman's copper hair was swept straight
back from her forehead, some five or six feet long, and
merged with the plumage of her wings, giving her an impression
of tremendous contained speed -- like a goddess arrested in
a moment of flight in a cornice of some great temple-city
of the dead.

Her eyes stared forward expressionlessly at Shepley. Her
arms and shoulders were bare, and the white skin, like
compacted snow, had a brilliant surface sheen, the
reflected light glaring against the black base of the
catafalque and the long sheath-like gown that swept
around her hips to the floor. Her face, like an exquisite
porcelain mask, was tilted upward slightly, the half-closed
eyes suggesting that the woman was asleep or dreaming.
No background had been provided for the image, but the
bowl of luminescence invested the persona with immense
power and mystery."
---------------------------------------------------------------

jimf said...

Not that I myself wouldn't take advantage of whatever life-prolonging
technology becomes available. As I said on the same list,
if I could toddle through an Introdus portal [a Greg Egan
upload interface] with a swipe of the
credit card same as everybody else is doing, I'd probably
do it, for the same reason I go to work in the morning,
or go to the dentist regularly, or change the oil in the car,
or keep the computer backed up -- namely, to avoid or
postpone unpleasant consequences.

Roko said...

Nick Tarelton said:

"in practice you'd be a fool to expect it with any high probability. Aging could be harder to crack than expected, or you might die from an accident or disease or murder or war or any number of mundane things, or some less mundane existential disaster, or the technological infrastructure making medicine possible could collapse, and even if you make it through all that, and never choose to die for whatever reason, and never change yourself into some form that the present you wouldn't recognize as you, there's the heat death of the Universe. You can't be certain about this or indeed anything,"

Now there is a nice, reasoned argument that comes to a sensible conclusion about whether or not I will die. Dale, you should pay attention to how this works: what Nick did there was analyze the space of possibilities and scenarios, weigh up the evidence, and come to a probabilistic conclusion.

He did not ad-hom me ("poor sad beleaguered True Believer"), go off on irrelevant hyperbole ("Prove Jeebus won't hold your hand forever"), or simply re-assert the proposition he was trying to prove in a manner which implied that anyone who did not believe it was mentally deficient. ("To acknowledge these things is a matter of basic sanity").

@Nick: I agree with what you say, depending on what you mean by high probability. I'd estimate the probability that I will never die at 10^(-1). I’ve deliberately written that as a power of ten to reinforce the low degree of confidence it comes with.

Some specific quibbles I have with your reasoning: 1. Deliberately changing myself to a form that my present self would not recognize does not count as death in my book, but you’re free to define that as death if you want. We might have to invent distinct words to represent these two slightly different concepts. 2. My survival doesn’t have to depend very much on medical technology; it could be a simple matter of brain scanning. Once I’m on a computer, I am much safer from all sorts of nasty things, because computers can store information in a distributed fashion, be backed up, etc.

Dale Carrico said...

Even a handwavey chance at revival is better than a virtual certainty of death.

One reaches a point, don't you think, though, where the "chance" at techno-immortality looked at honestly is really so truly "handwavey" that its only practical impact is little more than to distract people from efforts to make a difference for the better on terms that are actually available in the world and to disconnect people from opportunities for flourishing on terms that are actually available in the world? In other words it defers lived life in the name of a "life eternal"? And all this quite apart from the other things superlative technology discourses do in the way of fostering technocratic elitism, reductionism, hyperbole, True Belief, and so on?

Dale Carrico said...

Now there is a nice, reasoned argument that comes to a sensible conclusion about whether or not I will die. Dale, you should pay attention to how this works:

These are all arguments I've made myself over and over and over again for fifteen years. Let's see how patient Nick is feeling with these arguments in a decade's time, once he's had time really to savor the endlessly recycled fallacies and frames of Superlativity, to observe the curious psychic and political subcultural quirks that play out over and over and over again.

The problem with many Robot Cultists is that they have seriously lost track of just how crazy some of what they believe really is. Playing nicey-nice with you guys is all very well when I'm in the mood for magnanimity, but it's not like I'm going to apologize for ridiculing the ridiculous when I lack the saintly patience to slather on the honey rather than the vinegar.

Buck up! Consider my brickbats a wake-up call, or a lark. Haven't you ever met anybody with a bit of gumption for heaven's sake?

Once I’m on a computer, I am much safer from all sorts of nasty things, because computers can store information in a distributed fashion, be backed up, etc.

Dag, they must have some special sooper dooper reliable computers where you're from, then. You might benefit from re-reading Jaron Lanier's critique of "Cybernetic Totalism", from the now superannuated "One Half a Manifesto".

Anonymous said...

If you mean "immortality" taken literally, I agree. However, cryonics isn't synonymous with "immortality" (even if some cryonicists use that word), and is no more innately transcendentalizing than, and only moderately more, ah, handwavey than, something like SENS. However again, as I mentioned, signing yourself up for cryonics isn't a great use of money from a large-scale, moral perspective, so you still have a good point about distraction.

Buck up! Consider my brickbats a wake-up call, or a lark. Haven't you ever met anybody with a bit of gumption for heaven's sake?

Have you considered that you may have an abnormally thick skin?

jimf said...

Nick Tarleton wrote:

> Have you considered that you may have an abnormally thick skin?

Cake-crumbs in bed:
http://www.boop.org/jan/justso/rhino.htm

jimf said...

Roko wrote:

> I'd estimate the probability that I will never die at 10^(-1).
> I’ve deliberately written that as a power of ten to reinforce
> the low degree of confidence it comes with.

You could've said 10% and been clearer -- small integer powers
near zero aren't really what scientific notation is for
(except maybe in the math textbooks that teach it).

And you consider 10% a **low degree** for a contingency
like this??!!

You expecting to be hired by Diziet Sma, or something? ;->

Roko said...

@jfehlinger "you expecting to be hired by Diziet Sma, or something?"

As much as Dale will pour his rhetorical vinegar on me for saying this, I would love to be hired by Diziet Sma. Aaah, Use of Weapons -- an excellent read. Thank you for taking me back to that most pleasant and excellent book.

As to my probability estimate, I think you're conflating two different things. First, there's the probability you assign to an event; second, there's the amount of confidence you have in that probability estimate, which you can think of as a measure of how likely you think it is that you will have to adjust the estimate as new evidence comes in, or equivalently of how much evidence your estimate is based on.

I didn't want to say 10% because it suggests a precision which I just don't have. Based on the extremely high degree of uncertainty involved in making this prediction, I might just as well have said 10^(-2) or 10^(-0.5).

Let us write p for the probability that I will live forever. What I want to say is that p=10^(-2) or p=10^(-0.5) or p=10^(-1) would all count as reasonable beliefs to hold, but p=10^(-5) would be unreasonably pessimistic.

Roko said...

Dale: "These are all arguments I've made myself over and over and over again for fifteen years."

OK, well, why not make them again here so I can read them? Also, notice that the output of Nick's argument is not as strong as the things that you are claiming! Nick concluded that there is some small probability that I might live forever, and I suggested that 10^(-2) or 10^(-1) are plausible estimates for this probability. You might want to chip in and say what you think the probability that I will live forever is. But if you say something really small, like 1 in a billion, I will show that you have made an error, and if you choose something in the same range as my estimates, you cannot sustain your critique that wanting to live forever is delusional.

In fact, while I'm at it, let me write down a short justification for the probability estimate that I gave. Barring civilizational disaster (P>0.5, say?), and assuming no advances in medical technology, I will probably (p>0.5) live until the year 2060. So, what is the probability that there will be sufficient brain-scanning technology to record my mind-state at that date, achieved by incremental advances in nanotech? Let us call this P1. Then there is the possibility of the development of a friendly AI at some point between now and 2060; call this P2. Then there is the probability that medical technologies such as SENS simply extend my life further and further, and that I get to some point in the fairly distant future (2120, say) where more advanced techniques can take over and successfully overcome biological senescence. Let us call this P3.

The probability that I'll get to a non-senescent form is at least max(P1,P2,P3). Now things could still go wrong from here; but the arguments from biological senescence don't apply, so if we're being honest we should apply perhaps a factor of 0.5 for "all the things you haven't thought of that can kill a non-senescent person". There is of course the "heat death of the universe" argument. But it is not watertight: first of all, we don't have a fully unified theory of physics, so we don't actually know whether entropy increases in all physical systems. Secondly, we don't know what other possibilities future technological and scientific research will throw up; remember that these advances could be made in a million years' time -- we have a very, very long time before the universe ceases to be useful for entropy reasons. Other universes? Wormholes? Extra dimensions? It suffices to say that we should not assign a very high probability to death by entropy; being generous, let us say that we can be 80% certain that all living things will eventually die because of heat death.

If we say that one of P1, P2, or P3 is at least 20%, then we end up with

0.2*0.5*0.5*0.2 = 1/100.

So I claim that any probability estimate which is smaller than the above is unreasonable.
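For concreteness, the chain of factors above can be multiplied out in a few lines. This is only a restatement of the comment's own assumed numbers, not an endorsement of them:

```python
# Back-of-envelope reproduction of the probability estimate argued above.
# Every factor value is the comment's own assumption, not an established fact.

p_tech = 0.2               # max(P1, P2, P3): at least one survival route works
p_no_collapse = 0.5        # no civilizational disaster in the meantime
p_survive_unknowns = 0.5   # fudge factor for unanticipated fatal risks
p_escape_heat_death = 0.2  # 80% certain entropy wins, so 20% it doesn't

p_never_die = p_tech * p_no_collapse * p_survive_unknowns * p_escape_heat_death
print(round(p_never_die, 4))  # 0.01, i.e. the claimed 1/100 lower bound
```

Note that multiplying the factors treats them as independent, which is itself an assumption; correlated risks (say, a disaster that both collapses civilization and halts the technology) would change the arithmetic.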

Anne Corwin said...

Jim F. said:

"the belief that one's own personality
is important enough to take extreme measures to preserve
from nonexistence strikes me as -- unseemly, somehow.
Maybe I feel that way for the same reason I'm not a fan
of Ayn Rand, and maybe that feeling marks me out as somebody
who doesn't **deserve** such perpetuation (that has a
satisfyingly Darwinian ring to it!)."

Actually, I think Ayn Rand would probably agree that heroic lifesaving efforts were "unseemly" -- after all, per her stated philosophies, aren't people who command more resources (medical or otherwise) than they can "earn" better off leaving the world to the young, strong, and "productive"?

Peter Singer has also written that increases in longevity might be a negative thing from a utilitarian standpoint, seeing as he believes younger people to be more capable of happiness, etc.

I don't agree with this assessment, seeing as it over-abstractifies and makes a lot of inappropriate assumptions with regard to what kinds of lives might be worth living at any given time.

I realize you really don't like Ayn Rand, but you don't need to conflate a healthy survival instinct (something you certainly "deserve" to have!) with narcissism. :)

ZARZUELAZEN said...

Roko son,

Take it from me. No one cares about your probabilistic calculations.

There's a steady stream of potential 'robot cultists' coming on-line all the time and the first thing they all do is start spouting off in geek-talk sure that they're full of wisdom. Trouble is that there's so many other geeks spouting off, all sure that they're right and every-one else is wrong, all the geek talk gets lost in the shuffle.

The robo-geeks (new word Dale! - 'Robo-geek') can't seem to help themselves - they just can't seem to stop spewing probability calculations on-line.

Son, it's all *talk*. You and the robot cultists are sitting there at a keyboard tapping *words* on a crappy keyboard to an audience of perhaps 100 (90 of whom probably don't care anyway). Meanwhile, all the time you are aging quite normally, and brain cells are dying.

The human body peaks at 20, and after that your chances of dying double relentlessly every 7 or 8 years. You're 'past it' (both physically and intellectually) at 36. From 36 to 60 it's all downhill, and survival rates plummet after 60. It ain't worth it past 70.

---

Tip to the robo-geeks, SIAI etc: Leave the probability calculations to the machines. To achieve your goals you need to start thinking in terms of prototypes and archetypes (the highest level of logical abstraction, and what humans do best). The probability calculations are *secondary*.

Or to re-phrase: A sufficiently high-level programming language should not deal in probabilities at all. The compiler should be generating the probability distributions automatically from your source code.

Hope that helps.

jimf said...

Anne Corwin wrote:

> [A]fter all, per [Ayn Rand's] stated philosophies, aren't
> people who command more resources (medical or otherwise)
> than they can "earn" better off leaving the world to the
> young, strong, and "productive"?

Gack, no! People who "command" more resources have ipso
facto "earned" them, and they have no altruistic obligations
to "the world" or to the young and the strong.

Ayn Rand, according to the reports of her friends at the time,
seems to have been rather embarrassed that she was stricken
with lung cancer. She seems to have had the notion that
physical illness is the result of "bad premises", and that
she should have been above all that. (It seems to be part and
parcel with her belief that she, in her 60s, **should** continue
to be sexually attractive to a man 25 years her junior.)

There's a scene at the beginning of the Showtime movie adaptation
of Barbara Branden's _The Passion of Ayn Rand_ (Helen Mirren as
Ayn Rand is probably scarier than Rand was in real life ;-> )
http://www.amazon.com/Passion-Ayn-Rand-Helen-Mirren/dp/B000056BP0/
in which the Barbara Branden character, accompanied by
introductory voice-over, is in a queue of people at a Manhattan
funeral parlor in 1982 waiting to pay their respects to Rand
in her casket. An official comes out and reminds everybody
when the viewing will begin and how long it will last, and
asks that, in view of the large number of people waiting in
line, people should keep moving as quickly as possible
"for the sake of the other mourners." At that, a woman behind
Branden snorts "She would've **loved** that!" Branden,
absorbed in her own thoughts, replies "I beg your pardon?"
The woman explains, "'For the sake of the others. She
would've loved that.'"

Roko said...

@Marc: "Son, it's all *talk*."

I can see where you're coming from. Me making "crappy" probability estimates is not making me any younger, it's just wasting time I could be spending actually trying to push reality in the direction I want.

But then again, "just talk" is very important. Whether or not humanity achieves an end to aging, suffering, material scarcity, etc. DEPENDS upon how many people can be persuaded to work on these problems.

A lot of the comments that Dale makes serve to reduce the number of people who will work on the technology necessary to get us out of problems like death, so it's important to expose those comments as being ungrounded.

Thus probability estimates do actually matter. It's hard to persuade anyone to work on solving a problem that is almost insoluble. For that matter, I'm not so sure that I want to spend my life working on something that will almost certainly fail. The probabilities matter.

jimf said...

> A lot of the comments that Dale makes serve to reduce the number
> of people who will work on the technology necessary to get us out
> of problems like death, so it's important to expose those comments
> as being ungrounded.

It seems highly unlikely to me that Dale's observations on this
blog or anywhere else "serve to reduce the number of people who
work on. . . technology". He is certainly not advocating that
anyone give up a career in science or medicine to pursue
rhetoric, or ballet, or stock-brokering.

On the other hand, I suspect that anybody who takes too seriously
the idea that he is working "on the technology necessary to
get us out of problems like death" is reducing his own
effectiveness as a scientist or technologist by virtue of his
own grandiosity. Reality simply will not be "pushed" into fulfilling
anybody's grandiose plans, and those who get too wrapped up in the
grandiose will find themselves unable to accomplish even the mundane.

OTOH, there is an incredible amount of "fat" in human society as
it exists today. Rackets like Scientology (or, some might say,
the Catholic Church) can extract big budgets, buy real estate,
and monopolize people's time and attention, while promising
the moon and delivering. . . well, I suppose the people
who are running those treadmills are persuaded, at least for
a while, that they're getting something in return --
and maybe they are, even if that something is no more than
being shielded from unpleasant truths. And other people take
up stamp collecting, or attend Star Trek conventions.

"The world must construe according to its wits."

Don't quit your day job just yet. ;->

Roko said...

@jfehlinger:

You just made one very bad point and then one good point.

To deal with the bad first: If you say that the solution to a problem is "grandiose", you are effectively giving up on it. To put this another way, you will never make the world a better place if you lack the willingness to take a bit of a risk, and try something that may well go nowhere. You might reply that there are plenty of mundane ways in which one can make the world better, like by doing an ordinary "office job", making those people close to you happy, or even just engaging in 'vanilla' scientific research. But a lot of these things demonstrably have ZERO effect on the big problems in life, the ones that we really care about. I'll take grandiose over pointless any day of the week.

Then you made a very good point, which is often overlooked: billions and billions of people believe things that are infinitesimally unlikely and almost maximally ridiculous/grandiose. You mentioned the Catholic Church and Scientology, but I'll add all Christians and Muslims to that for good measure. Transhumanism is, I'll admit, stretching rationality by interpreting reality in a fairly optimistic way. But religious belief tramples all over reality, and as for "grandiose pie-in-the-sky ideas" and "superlative discourses", religious belief beats the crap out of transhumanism.

At the end of the day, it is psychologically unhealthy not to interpret the "big picture" in an optimistic way. The alternative - to take a pessimistic view on the ultimate questions in life - leads to a sour, cynical outlook where one's only source of validation is to try and destroy other people's hopes and dreams; a kind of schadenfreude.

jimf said...

Roko wrote:

> At the end of the day, it is psychologically unhealthy not
> to interpret the "big picture" in an optimistic way. The
> alternative - to take a pessimistic view on the ultimate
> questions in life - leads to a sour, cynical outlook where
> one's only source of validation is to try and destroy other
> people's hopes and dreams; a kind of schadenfreude.

This observation contains a remarkable echo of:

“The Gospels contain a fairy-story, or a story of a larger kind
that embraces all the essence of fairy-stories. They contain many
marvels - peculiarly artistic, beautiful and moving: ‘mythical’
in their perfect, self-contained significance; and among the
marvels is the greatest and most complete conceivable eucatastrophe.
But this story has entered History and the primary world; the
desire and aspiration of sub-creation has been raised to the
fulfillment of Creation. The Birth of Christ is the eucatastrophe
of Man’s history. The Resurrection is the eucatastrophe of the
story of the Incarnation. This story begins and ends in joy. It has
pre-eminently the ‘inner consistency of reality’. There is no
tale ever told that men would rather find was true, and none which
so many sceptical men have accepted as true on its own merits. For
the Art of it has the supremely convincing tone of Primary Art,
that is, of Creation. To reject it leads to sadness or wrath.”

-- J. R. R. Tolkien, "On Fairy-Stories"

I'll admit that you've touched here on some very deep questions,
which as far as I know do not have unequivocal answers.
To explore such questions, one might start by reading
William James' _The Varieties of Religious Experience_.

A "pessimistic" observer might well conclude that there's
a fundamental conflict between science (**real** science,
not scientistic PR that functions as what H. L. Mencken
might have called "perfumery" for that old-time religion) and
human psychology.

> If you say that the solution to a problem is "grandiose",
> you are effectively giving up on it. To put this another way,
> you will never make the world a better place if you lack the
> willingness to take a bit of a risk, and try something that
> may well go nowhere.

It's a matter of degree. And when you go shopping for your
Pied Piper, do not forget that:

"Wasted Lives"
"Narcissists are as gifted as they come. The problem is to
disentangle their tales of fantastic grandiosity from the reality
of their talents and skills.

They always tend either to over-estimate or to devalue
their potency. They often emphasize the wrong traits
and invest in their mediocre or (dare I say) less than
average capacities. Concomitantly, they ignore their real
potential, squander their advantage and under-rate their gifts.

The narcissist decides which aspects of his self to nurture
and which to neglect. He gravitates towards activities
commensurate with his pompous auto-portrait. He suppresses
these tendencies and aptitudes in him which don't conform
to his inflated view of his uniqueness, brilliance, might,
sexual prowess, or standing in society. He cultivates these
flairs and predilections which he regards as befitting his
overweening self-image and ultimate grandeur."
http://samvak.tripod.com/journal11.html

> Transhumanism is, I'll admit, stretching rationality by
> interpreting reality in a fairly optimistic way. But religious
> belief tramples all over reality, and as for "grandiose
> pie-in-the-sky ideas" and "superlative discourses", religious
> belief beats the crap out of transhumanism.

Transhumanism is simply more "modern" in that it reflects
a recognition (whether conscious or not on the part of its
adherents) that science -- that bringer of inconvenient
truths -- simply isn't going to go away. So -- if you
can't beat 'em, join 'em! It's a tribute to the inventiveness
of the human psyche, in a way -- wishful thinking can't
be expunged because of its deep roots in the emotional parts
of the brain; science can't be discarded because it's proved
too useful in practical life; but human self-deception proves
once again its supreme adaptability by blending the two in a way that
seems superficially plausible if you don't look too closely.
(And who really wants to look closely?

[Dorothy Michaels' screen test]
Rita [soap opera producer]: I'd like to make her look a little
more attractive, how far can you pull back?

Cameraman: How do you feel about Cleveland?

Rita: Knock it off.

-- _Tootsie_ )

I'm continually struck by the deep resonances between Transhumanism
and the religious apologetics of, say, C. S. Lewis.

From my e-mail archive:

Subject: Let's be angels

[An Extropian] wrote:

> ...I've been thinking about the "long view" of my life...
> It is almost as if we are aliens here, just passing through. Or we are
> immortals for whom a brief flash of a single generation seems less
> important... [W]e may be the only long-term relationships
> we have in the future.

And you wrote:

> Right now, there are six billion people alive in the world...
> When the Afterglow of the Big Bang is over and matter
> has stopped gathering in clumps and burning by fusion, there
> will still be only six billion of us.

Your remark about the "Afterglow of the Big Bang", of course,
echoes that memorable passage in Arthur C. Clarke's _Profiles
of the Future_ . . .:

"Our galaxy is now in the brief springtime of its life, Arthur Clarke
wrote, two-thirds of my lifetime ago, in the closing passage of _Profiles
of the Future_. With the lyrical melancholy that marks the finest
scientific and science fiction writing, he had kept the strangest magic
until last. It is not until these stars have guttered out, he told us,
not until Vega and Sirius and the Sun have burned low, that the true
history of the universe will begin:

     It will be a history illuminated only by the reds and infrareds
     of the dully-glowing stars that would be almost invisible to
     our eyes; yet the sombre hues of that all-but-eternal universe
     may be full of colour and beauty to whatever strange beings
     have adapted to it ...

          They will have time enough, in those endless aeons, to
     attempt all things, and to gather all knowledge. They will not
     be like gods, because no gods imagined by our minds have
     ever possessed the powers they will command. But for all
     that, they may envy us, basking in the bright afterglow of
     Creation; for we knew the Universe when it was young."

But the same lyricism and portentousness can be found in,
of all places, a passage from C. S. Lewis' "The Weight of Glory"
(though with a completely different metaphysical underpinning,
of course ;-> ). . .:

"It may be possible for each to think too
much of his own potential glory hereafter;
it is hardly possible for him to think too
often or too deeply about that of his neighbour...
It is a serious thing to live in a society of
possible gods and goddesses, to remember that
the dullest and most uninteresting person you
can talk to may one day be a creature which, if
you saw it now, you would be strongly tempted
to worship, or else a horror and a corruption
such as you now meet, if at all, only in a
nightmare. All day long we are, in some degree,
helping each other to one or other of these
destinations. It is in the light of these
overwhelming possibilities, it is with the awe
and the circumspection proper to them, that
we should conduct all our dealings with one
another, all friendships, all loves, all play, all
politics. There are no **ordinary** people. You have
never met a mere mortal. Nations, cultures, arts,
civilizations - these are mortal, and their life is
to ours as the life of a gnat. But it is immortals
whom we joke with, work with, marry, snub, and
exploit - immortal horrors or everlasting
splendors."

I can't deny the emotional appeal of this sort of thing.
But there's no getting around it: Transhumanism and
Christianity are joined at the amygdala here.

Roko said...

jfehlinger said: "But there's no getting around it: Transhumanism and Christianity are joined at the amygdala here."

Oh definitely. This is something that I've been realizing incrementally over the past year or so -- especially when I talk to friends who are Christians and realize that, aside from factual disagreements, we're on almost the same wavelength. [Note: these are educated, British, moderate, liberal Christians]

But you may be pushing the analogy too far. All human goals are joined at the amygdala. As you say, it's all a matter of degree. How wishful is Transhuman thinking compared to, say, most people's life goals? Related question: how much do outcomes depend on our desire to achieve them? There are some parts of life where one can achieve extraordinary things by having enough faith and determination that one will succeed. Talk to most self-made millionaires to discover this, for example. Then again other things in life do not depend at all on our desire to make them happen: no matter how much I want to win the national lottery, I stand exactly the same chance as everyone else (per ticket purchased).

Here lies the crucial difference between the kind of wishful thinking that a Transhuman engages in and the kind that a religious believer engages in. By going out on a limb and allowing ourselves to
dream of all the nice things that a positive singularity might bring, we motivate ourselves to MAKE IT HAPPEN. Even if you, like Dale, think that transhumanist goals (mind uploading, unlimited lifespan, end to suffering etc) are hugely unlikely, you still have to accept that the existence of motivated transhumanists makes them more likely.

The best way to predict the future is to create it.

The best way to create the future is to have an unflinching belief that you will (somehow) succeed.

jimf said...

Roko wrote:

> [W]hen I talk to friends who are Christians [I] realize that,
> aside from factual disagreements, we're on almost the same
> wavelength. [Note: these are educated, British, moderate,
> liberal Christians]

Well, it's honest of you to acknowledge that. I have no beef
with educated, British, moderate, liberal Christians (in fact,
some of my favorite authors -- C. S. Lewis, J. R. R. Tolkien,
are, or were, of that persuasion). Now the uneducated, American,
fundamentalist, conservative Christians on the other hand. . .
;->

OTOH, the kind of vituperative, militant atheism displayed
by some >Hists irritates me ("pot to kettle" comes to mind).
I'm especially irritated by people who, when they find out
you like to read C. S. Lewis or J. R. R. Tolkien, think you
must be a moron (which is perhaps one reason why I like
to quote CSL and JRRT so much ;-> ).

> All human goals are joined at the amygdala. As you say,
> it's all a matter of degree. How wishful is Transhuman thinking
> compared to, say, most people's life goals?

I have no argument with optimism, per se. I could probably
use a bit more of it myself.

I also have no doubt that optimism must be kept on a leash for
sanity's sake. When optimism shakes off its reins completely, the
result is what the shrinks call hypomania, or outright
mania.

> Related question: how much do outcomes depend on our desire to achieve
> them?

And another related question: at what point does this assumption
cross the line into cultism?

"Tone 40:
Intention without reservation or limit; an execution of intention.
The top of the emotional tone scale, seen as a godlike state of
command and control of others.

Tone Scale, the list of all human emotional states, arranged beside
an arbitrary scale number from -40 (total failure), through 0
(death), to +40. "S[uppressive]P[erson]s are at 1.1, Covert Hostility,
on the Tone Scale."

-- L. Ron and the Uptones, a.k.a the Scientologists

> There are some parts of life where one can achieve extraordinary
> things by having enough faith and determination that one will succeed.
> Talk to most self-made millionaires to discover this [or Dale Carnegie,
> or Maxwell Maltz, or Norman Vincent Peale, or Wayne Dyer,
> or Deepak Chopra, or Anthony Robbins], for example. Then again other
> things in life do not depend at all on our desire to make them happen. . .

"Unluckily, it is difficult for a certain type of mind to grasp
the concept of insolubility. Thousands...keep pegging away at
perpetual motion. The number of persons so afflicted is far
greater than the records of the Patent Office show, for beyond the
circle of frankly insane enterprise there lie circles of more and
more plausible enterprise, until finally we come to a circle which
embraces the great majority of human beings.... The fact is that
some of the things that men and women have desired most ardently
for thousands of years are not nearer realization than they were
in the time of Rameses, and that there is not the slightest reason
for believing that they will lose their coyness on any near
to-morrow. Plans for hurrying them on have been tried since the
beginning; plans for forcing them overnight are in copious and
antagonistic operation to-day; and yet they continue to hold off
and elude us, and the chances are that they will keep on holding
off and eluding us until the angels get tired of the show, and the
whole earth is set off like a gigantic bomb, or drowned, like a
sick cat, between two buckets."

-- Mencken, "The Cult of Hope"

> Here lies the crucial difference between the kind of wishful thinking
> that a Transhuman engages in and the kind that a religious believer
> engages in. By going out on a limb and allowing ourselves to
> dream of all the nice things that a positive singularity might bring,
> we motivate ourselves to MAKE IT HAPPEN. Even if you, like Dale,
> think that transhumanist goals (mind uploading, unlimited lifespan,
> end to suffering etc) are hugely unlikely, you still have to accept
> that the existence of motivated transhumanists makes them more likely.

On the contrary, I think that to the extent that "motivation"
approaches **fanaticism**, it can work **against** the professed
goals of the fanatics themselves.

And it's certainly all too prone to manipulation both by self-deluded
self-aggrandizers (gurus) and un-self-deluded but completely manipulative
and cynical opportunists.

That kool-aid may taste sweet, but it has a kick. Watch out!

(Or, on the other hand, maybe Dale and I are just Suppressive Persons.
Take your pick. ;-> )

Roko said...

@jfehlinger:

"That kool-aid may taste sweet, but it has a kick. Or, on the other hand, maybe Dale and I are just Suppressive Persons. Take your pick. ;-> "

I don't quite see the analogy between trying to realize the Transhuman vision and drinking the kool-aid of some suicide cult. I mean if I spend the next 20 years of my life researching AI systems (because I think that this is a good way to contribute to accelerating change), and suddenly become disillusioned, I will be (1) well educated and very well qualified and (2) philosophically well read. It's not like being a transhumanist means selling your worldly possessions and cutting off all ties with reality. I still don't see any good reason to liken transhumanism to Scientology or to call it a cult. I get the impression that you call it a cult because you know it will evoke an emotional reaction and cause your rhetorical adversary to lose control of what they say; a kind of rhetorical tactic. I don't think that you or Dale actually think transhumanism is a cult. Maybe you can provide me with evidence to show otherwise.

As for why you and Dale reject the transhumanist vision, I don't know. Let me ask: jfehlinger, what do you think is really important in life, and what actions are you taking given this value?

Roko said...

@jfehlinger: you quoted Mencken:

"beyond the circle of frankly insane enterprise there lie circles of more and more plausible enterprise, until finally we come to a circle which
embraces the great majority of human beings"

But don't forget how pointless and tedious that last circle of "most plausible" activity is. That is the circle of the apathetic, the collection of people who passively let life lap over them. You HAVE to engage in a non-maximally-plausible enterprise to count as a truly good person, because the most plausible aim to have about the future is no aim at all.

Anne Corwin said...

Roko, I think Dale and Jim are mainly trying to help "transhumanist-identified" folks see how certain aspects of what you might call the "transhumanist vision" can (as you say) provoke emotional reactions and perhaps lead to cultish thinking and behavior.

From the Skeptic's Dictionary, a "cult" is defined as follows:

Three ideas seem essential to the concept of a cult.

One is thinking in terms of us versus them with total alienation from "them."

The second is the intense, though often subtle, indoctrination techniques used to recruit and hold members.

The third is the charismatic cult leader.

Cultism usually involves some sort of belief that outside the cult all is evil and threatening; inside the cult is the special path to salvation through the cult leader and his teachings.


Now, nothing in that definition seems to describe H+ on the whole -- but there are pockets and subgroups that seem at least potentially capable of germinating a bit of cultiness.

For instance, the definition above refers to an "us vs. them" mentality. In transhumanism, I can see (at the very least) vague shades of this among those who try to cast the whole thing in terms of the "progress-minded scientific types" versus the "progress-averse bioconservative deathists" -- when really, there isn't any particular group of people that are somehow a threat to "transhumanism" or "progress".

Yes, there are certainly bioconservatives, there are silly creationists and IDers, and there are people who think everything from homosexuality to in-vitro fertilization is "yucky" -- but it's probably a lot more reasonable to engage these individuals on either an issue-by-issue basis, or under the mantle of already-existing frameworks (like civil rights, bodily autonomy, etc.) than to dream up a whole new subculture that effectively invites people to think that they're the first ones to ever really think about this stuff (when that isn't actually the case).

A lot of the points Dale makes, I think, have to do with how a lot of what transhumanists describe wanting is really just mainstream stuff dressed up in funky vocabulary.

And the stuff that isn't exactly mainstream (like uploading) tends to be so "out there" that it shouldn't be brought into conversations about how to actually improve the world in the near term, lest we risk alienating those who might otherwise ally with us on issues that are actually important (such as longevity medicine, which already exists in some form as "basic health care"; people can now live a lot longer than they used to on average, and be revived following things like cardiac arrest that used to be defined as "death" -- and all that happened without anyone trying or needing to "prove" that someday in the distant future invulnerable robopeople might find a way to escape the fizzling universe through a wormhole).

And it's not that it's bad to talk about uploading or near-indestructible robot bodies -- I think these subjects are fascinating and a lot of fun, and as Jim F. has noted, they can lead to a lot of interesting philosophical ponderings on the nature of personal identity, etc. But there do seem to be some people who develop a skewed sense of reality, to the point where they consider things like the future legal status of uploaded consciousnesses and finding a way to "escape the heat death" to be just as immanently important as issues like poverty, illiteracy, health care, etc.

I'm not going to disidentify myself completely from the term "transhumanist", mind you -- I still see it as a kind of "amateur futurological blue-skying salon", and I find that sort of thing to be a whole lot of fun. But it would be nice to see more people embracing the fact that they are doing something for fun, and that no, it's not all Serious Business.

jimf said...

Anne Corwin wrote:

> But there do seem to be some people who develop a skewed
> sense of reality. . .

Precisely. And when enough of them get together and manage
to convince each other that their sense of reality is the
**normative** one, and get (quite irrationally) defensive
toward anyone who dares to suggest otherwise (even when it
comes to quite idiosyncratic details, like how and when
AI is likely to come about), then those red lights start
blinking on my panel.

There's another thing. Each of the "planks" of what might
be called the current "mainstream" >Hist "party platform" --
1) AI leading to >H intelligence, then ballooning up to superintelligence
2) molecular nanotechnology leading to abundance with
Krell-like effortlessness, together with artifactual complexities
equalling or exceeding those of living things, and 3) (most
likely courtesy of 1 and 2 above) personal immortality --
has a certain degree of plausibility (or implausibility) all
on its own, which can be independently and more-or-less
entertainingly speculated about.

But I get uncomfortable when I see these things being
"force-fitted" into some kind of composite view of the future
which, it seems to me, is considerably less plausible as
a package deal even than the component pieces. This kind
of "force fitting" -- also rather hysterically defended
from criticism -- rings alarm bells for me.

I've seen comments made by folks in the cryonics community
who are uncomfortable with the fact that their fellow cryonicists sort
of "jumped on" nanotech as the magic spell that would allow
cryonauts to be repaired upon being thawed. These are people who were
interested enough in cryonics on its own but who were not thrilled
to have this "composite" scenario foisted on them, and who
thought it was a step backward for cryonics.

jimf said...

> And the stuff that isn't exactly mainstream (like uploading) tends
> to be so "out there" that it shouldn't be brought into conversations
> about how to actually improve the world in the near term. . .

Right, although uploading is one of the more entertaining >Hist
topics to **speculate** about (as long as we know that we're
speculating).

Moravec's bush-robot transfer procedure in _Mind Children_ was
indeed a very compelling and memorable thought experiment, however
far it may be from practical reality.

And, in the hands of a talented SF author like Greg Egan,
these things can make for extremely entertaining story-telling.
_Permutation City_ and _Diaspora_ are **terrific** SF.

> But it would be nice to see more people embracing the fact that
> they are doing something for fun, and that no, it's not all
> Serious Business.

Hear, hear! SF is great ("For those who like that sort of
thing, that is the sort of thing they like" as Miss Jean Brodie
would say ;-> ) but it's not going to Save The World.

Anne Corwin said...

Jim F. said: But I get uncomfortable when I see these things being
"force-fitted" into some kind of composite view of the future
which, it seems to me, is considerably less plausible as
a package deal even than the component pieces. This kind
of "force fitting" -- also rather hysterically defended
from criticism -- rings alarm bells for me.


Yeah...I know what you mean about the "composite view". Is that sort of what is meant by "cybernetic totalism", perhaps?

One problem I have with it (the composite view) personally, though, has more to do with me than with any particular technological prediction/speculation -- and that is the fact that I simply don't know enough about artificial intelligence, nanotechnology theory, etc. to feel justified in "getting behind" any superlative predictions associated with those things. Which obviously means that I can't stand in solidarity with folks who seem to have a composite, totalist vision in mind.

I'd certainly like to keep learning more about developments in AI, robotics, nanotech, etc., and maybe at some point I'll be in a better position to judge the plausibility and likely effects of each, but in the meantime I just can't bring myself to get overly freaked or enraptured about their eventual(?) deployment over a wide scale.

To be fair, though, I do sometimes wonder if maybe the people who are all atizzy about such things simply know something I don't (by virtue of their own study and understanding). I don't know everything, and hold no illusions that I do. I only have access to my part of the elephant, after all. :)

If I know someone is being ridiculous, I will certainly say so (e.g., here is a fairly stunning example of both lunacy and grandiosity, right down to the plea for money at the end so that the site author can build a "permanent headquarters for [her] international ministry", and a guilt trip for those who "..have benefited from [her] ministry yet don't support it.").

But if a person is simply advocating closer attention to the speculative technology that interests them most, that doesn't tend to set off my alarms quite so much -- there is a fair bit of important stuff going on that rarely reaches the mainstream (autistic self-advocacy probably falls into this category to some extent!), and I've definitely reaped some interesting returns via a lifetime of "fringe-watching". Sure, there's a lot of fluff there, but ever so often something interesting will pop out like gold dust in a dry lake bed.

Roko said...

@Anne: "I've definitely reaped some interesting returns via a lifetime of "fringe-watching". Sure, there's a lot of fluff there, but ever so often something interesting will pop out like gold dust in a dry lake bed."

The frame that you're setting up here is that of a passive watcher. Perhaps I'm wrong, but I get the impression that you think transhumanism is about reading some "way out" sci-fi, joining the SL4 list and mentally masturbating about

(a) how great things will be and

(b) how superior you are to everyone else in the world for realizing what's coming.

I might call this *armchair transhumanism*. I too dislike armchair transhumanism, so perhaps I've spent 40 comments talking at cross-purposes with various people. Maybe 10 years ago this was what transhumanism was all about (you guys will have to tell me, because I was still a kid then), but it certainly isn't today. Look at the Lifeboat Foundation. Look at SIAI. Look at Goertzel's Novamente. This is real transhumanism.

Roko said...

Perhaps I should clarify that: today, Transhumanism is about going out and changing the world. It's about actually doing something that will influence the future in a big way, not just about pontificating.

Anne Corwin said...

Roko, I'm a volunteer for the Methuselah Foundation. I helped edit "Ending Aging", I occasionally try and write things that bring some aspects of biogerontology down to layman's terms, etc., and I am helping work on a document detailing the foundation's strategy for encouraging/funding real research into medicine that might help real people avoid suffering from heart disease, atherosclerosis, etc. So I'm very much involved to the extent that I can be in longevity advocacy and still keep my day job.

I also try to do my best to break down stereotypes and advocate for the rights of neuro-atypical individuals (and generally atypical individuals). And old people too!

Those two "serious" pursuits are pretty much all I have the bandwidth for in terms of "going out and doing something" that may (I hope, but I'm definitely not certain) help improve people's lives. If you're working on actually designing nanotech or something, that's great! Keep it up. But as for me, for the stuff that I find interesting but don't know enough about to make judgment calls on (and which sometimes strike me as kind of silly to discuss in the same context as more immediate concerns), I maintain the right to engage in "armchair pontification" as it suits me, and as I find it enjoyable.

I hold no illusions that such pontification is going to save the world -- it doesn't need to, any more than Hello Kitty needs to in order to be worth its existence. I can't be a super-expert in everything, and honestly, neither can anyone else -- which is part of the problem with the "totalist" viewpoint. I do what I can in the areas I can, but as for the rest, I am content to watch.

Anne Corwin said...

Oh, and just to clarify -- when I say that I like to engage in "armchair pontification", I certainly don't mean anything like what you (Roko) described as:

mentally masturbating about

(a) how great things will be and

(b) how superior you are to everyone else in the world for realizing what's coming.


...I just mean having fun thinking about and discussing ideas that bring up interesting questions about what consciousness is, what personal identity is, what present-day questions/anxieties contemporary speculations about speculative technosocial developments reveal, etc. In other words, "amateur futurological blue-skying".

A lot of H+ identified folks do this, as do some SF enthusiasts, etc. There's nothing wrong with it at all, in my estimation.

But what I do see as slightly suspect (and this is just my impression right now -- it's not like I've stopped gathering new data or anything) is when organizations form with the intent of trying to "save the world" via the efforts of a smallish group of like-minded folks.

In some respects there's an idealistic quality to this, which I can respect -- I don't see quite as much "grandiosity" as Dale and Jim seem to, perhaps because I tend to recoil from viewing things in pathological terms due to having been "pathologized" so much myself. But I can see how some of it could become grandiose (to the point of dangerous disconnection from reality) if certain attitudes aren't kept in check.

For one thing, there's also a tendency for such persons/groups to think they have reality (and other people) figured out to a greater extent than they actually do. Or at least to appear overconfident in that regard. I know everyone does this to some extent, but when you combine "we know the recipe for hacking reality down to its most basic elements" with "we are going to build some massively influential device that will radically remake the world", you get a recipe for...well, a hefty dose of cockiness, at the very least.

For another thing, particularly on fora populated by AI enthusiasts, there seems to be what looks like a bias toward trying to distill reality down to a "function" or set of functions that can (presumably) be fed into some kind of super-AI that will be almost guaranteed to know what would benefit everyone the most. The amount of trust people put into this level of reductionism scares me, as does the tendency to dismiss accusations of "reductionism" as mere fluff.

Critiquing reductionism is not the same thing as suggesting that some aspects of matter, energy, and consciousness are "magic" or ultimately so mysterious that they can never be properly understood. Rather, such critiques are meant to be caveats against thinking that every aspect of reality can be squashed into a single conceptual framework and evaluated as to its "utility".

Maybe I just haven't read enough of the AI literature. Maybe I'm mischaracterizing what I've seen due to not having (as I acknowledged in my last comment) the bandwidth to investigate everything remotely of interest in equal depth. But it does seem very much like some in the AI circles especially tend to pre-dismiss criticism as the result of critics simply not being smart enough to understand what they are talking about, or as the result of critics employing "emotional reasoning" (when what's really going on is that the critics are pointing out that the "right" answer to an ethical puzzle often depends greatly on what one chooses to value most).

(Apologies to Dale if this has gone way off-topic, but this discussion is extremely interesting..)

Roko said...

Indeed; I didn't mean to imply that you yourself were an "armchair transhumanist", I just meant that maybe you thought that a lot of other people were, and that your critique is aimed at such people. I highly commend the fact that you work actively on longevity advocacy.

But you must surely realize that although no-one knows everything about everything, everyone ought to know something about the big initiatives involved in the transhuman vision. Even though you're not an expert in nanotech (neither am I, my preference lies towards AI), you must surely realize that what goes on in that field affects the decisions you take with respect to longevity.

What I'm trying to say is

(1) You cannot cut the future up into little disjoint areas and pretend that they won't affect each other, because reality respects no such distinctions. You have to see the big picture.

(2) You cannot plan for the near term future without thinking about what will happen in the medium and long term future; right now I want to go out and party, but I know that it is better in the long term if I engage in the less exciting activity of studying, as in the long term I will reap a greater reward.

In the context of the Transhumanist vision, this means that we should spend a lot more time working on acceleration-enabling technologies. That is the legacy that I. J. Good and Vernor Vinge have bequeathed to us. If you make either of mistakes (1) or (2) that I outlined above, you won't see this.

You said:

"people develop a skewed sense of reality, to the point where they consider things like the future legal status of uploaded consciousnesses and finding a way to "escape the heat death" to be just as immanently important as issues like poverty, illiteracy, health care, etc."

Which I take issue with on this basis: poverty, illiteracy and health care are very, very hard problems that we are unlikely to solve without some serious technological advance. It is thus, at the moment, MORE important to think about acceleration-enabling technologies (for example how to upload someone) than to make the nth ill-fated attempt to solve poverty. A human world will always have some members in poverty because human motivation is largely zero-sum. The only way out of these problems {poverty, illiteracy, war, death and illness} is by leaving behind the limitations of homo sapiens.

As for discussions of immortality, (even to the extent of discussing how to escape the heat death of the universe) you have to realize that ordinary people really care whether they will cease to exist or not. Transhumanism is an emotionally powerful meme, and we should use that power to garner support. Think about how much money ($ hundreds of billions) the various religious groups on this planet make by promising people "life after death", and think how they waste that money and influence. If the transhumanist movement could make 1/100th of that money, we could bring about a positive singularity very, very quickly.

You also said:

"But it would be nice to see more people embracing the fact that they are doing something for fun, and that no, it's not all Serious Business."

I could not disagree more. If this is the case, then what on earth are Eliezer & co doing at SIAI? Are they just entertaining themselves in a bizarre way? Do you think that the transhumanist movement is all just some big Role Playing Game? Or do you think that the creation of a recursively self-improving AI or a universal nanoassembler is somehow not serious?

Overall, I think that you are mistaken in rejecting the forward-looking, integrated vision of the future that is Transhumanism. You are trying to cut Transhuman thinking up into little non-interacting "departments", and you are trying to ignore the medium- and long-term consequences of actions; such a drastic mutilation of the transhuman meme inevitably results in it completely dissolving.

Anne Corwin said...

Whoah! Okay...well, um, I have to go run some errands but what you just wrote sounds pretty True Believerish. I'll elaborate later, but eek! The "integrated vision" is precisely the problem, and frankly, if "transhumanism" dissolves in the face of even constructive criticism, one has to question whether it was at all very robust in the first place.

If I can single-handedly "dissolve" the "integrated (T)ranshumanist (when did it become capitalized?) Vision", just by trying to point out some of the flaws in the very idea of an Integrated Vision (very much what Jim describes as the "force-fit composite view"), then there can't really be much to the "integrated view", can there?

Roko said...

@Anne: Sorry, didn't see your post there!

As regards AGI people and their zealous reductionism, I can see where you are coming from. My personal opinions are more towards the "AGIs as people" side of the spectrum, and less towards the "AGIs as optimizers" view. I've had a lot of disagreements with people over at SIAI about this. I particularly dislike the "coherent extrapolated volition" scenario that SIAI people like. I suppose you could say that I'm halfway between fluffy and greedy reductionist.


Nevertheless, AGI is an accelerant. Whatever your view on reductionism/holism, it is a hugely important practical technology.

Roko said...

@Anne: "The "integrated vision" is precisely the problem"

So how do you view the future? Do you just not think about how different aspects of life will interact? I'm not arguing that one particular scenario is guaranteed to happen, but I'm saying that we need to think about a range of scenarios, each one of which includes interactions between various technology, values, and personal outcomes. An integrated vision is a provisional plan for how to deal with a complex, interacting future. This is what Transhumanism is.

@Anne: "if "transhumanism" dissolves in the face of even constructive criticism"

- No, I'm saying that your re-constituted, muted transhumanism where we don't think about the long term future and where we don't think about outcomes in an integrated way does, in fact, dissolve into nothing.

ZARZUELAZEN said...

Jim said:

"Narcissists are as gifted as they come. The problem is to
disentangle their tales of fantastic grandiosity from the reality
of their talents and skills.

They always tend either to over-estimate or to devalue
their potency. They often emphasize the wrong traits
and invest in their mediocre or (dare I say) less than
average capacities. Concomitantly, they ignore their real
potential, squander their advantage and under-rate their gifts."

Interesting points. Of course there's a bit of narcissism in all of us... a small dollop of narcissism is no bad thing.

I for instance, once made the mistake of thinking I had a chance of taking on Yudkowsky on his own terms... in terms of raw 'IQ/Logical/Mathematical' firepower. But the 'battering ram' of Yudkowsky's raw IQ is enough to crush any challenger in the narrow reductionistic realm of logical manipulations.

So I had to constantly flail around, switching, searching, probing like water, until at last I realized a strategy which played to *my* strengths, a strategy capable of winning any 'race to AI'.

I shall not say exactly what my own strategy is... but here's a hint: when doing software development I learned about the concept of 'User Stories' and 'Interactive Design'.... and a light-bulb switched on :D

Here's another hint: I recently picked up the encyclopedia of fantasy and in the introduction, there is this sentence:

'It's imagination that makes us human, not intelligence'.

---

The SIAI is really the perfect example of grandiosity gone mad. They seek to 'save the world', but I only seek to 'save myself'.

When did I ever sign a social contract with the rest of the world saying that I have to 'save' them? Where in fact in all of human culture is such a social contract even implicit? Nowhere.

The 'morality' of a super-AI is only really a problem if it sought to impose its own morality on the rest of the world. If it doesn't seek to do that, there's not a problem.

Anne Corwin said...

Roko said:

So how do you view the future?

Well, I'm not there yet so I don't really "view" it at all. But when I think about the future, I guess I sort of figure that the more features one attempts to add to their "integrated vision", the less accurate that vision is likely to be.

Do you just not think about how different aspects of life will interact?

I think about how different aspects of life might interact. For instance, I do "get" the whole thing about how advances in one area of science can lead to further advances in other, seemingly unrelated areas. I can see how, for instance, faster computers and the decreasing cost of digital storage could speed medical progress in areas like drug discovery and accuracy of disease diagnosis (since now we can store more data -- including negative data, which can be quite useful -- and search through it more quickly than previously).

But: I don't put a whole lot of stock in people being able to actually make particular integrated visions happen, or even wanting to make particular integrated visions happen once it becomes possible to do so. I mean, when I was growing up, everyone was going on and on about "video phones", and how eventually we'd all be able to look at people while on the phone with them, and how this would "revolutionize communication" and all that.

This technology now exists, but it isn't exactly popular -- many people consider the lack of visual data on the telephone to be an advantage (you can answer the telephone in your undies if your boss calls, but heaven forbid you answer the videophone in same)! But one thing that is wildly popular is the cellular phone camera -- it is very common for people to take still shots and zap them to their friends over the airwaves. And I don't recall anyone ever talking about that when I was growing up. It was all about the Videophone then, not the cellular camera phone, and Cameraphone World looks dramatically different than the predictions of Videophone World did.

Additionally, I think integrated visions tend to be very poor at incorporating potential social change -- back in the 1950s depictions of Fabulous Future Worlds, you still didn't see women doing much more than fetching coffee, while the Serious (generally white) Male Scientists pored over Fascinating Discoveries.

Along these lines, I hope to see a future in which there is greater social flexibility to the point where neuro-atypical folks, etc., are more incorporated and accepted in social, educational, and employment situations, but most of the "integrated visions" I've seen proposed are somewhat more homogenizing (though not all of them are, to be sure).

So...maybe I do have an "integrated vision" that is just different from some of the popular ones I've seen, but that in itself ought to be a clue that the future is more likely to consist of multiple coexisting, equally valid formulations than of one over-arching Vision.

I'm not arguing that one particular scenario is guaranteed to happen, but I'm saying that we need to think about a range of scenarios, each one of which includes interactions between various technology, values, and personal outcomes.

And I'm saying that people do this all the time, regardless of whether they call themselves "transhumanists" or not -- and that transhumanism has no special claim on scenario planning, except in the sense that they/we might be more likely to posit scenarios involving particular speculative technologies.

Believe me, when I first encountered H+ I was very attracted to the idea of being at the "cutting edge of what might be possible", and perhaps having something of a jump-start in comparison to folks who mostly just read mainstream media, etc. As noted in an earlier comment, I've always been a fringe-watcher for this very reason -- I think there's a lot going on that never sees the light of day, and that is definitely very interesting and worthy of discussion.

But eventually I realized that while a fair number of H+ seemed to be making sensible arguments and (seemingly) astute observations, there were a lot of other people who were as well -- who had no association with transhumanism.

And I guess part of me started to see at that point how insular H+ could be. I've seen a lot of "Hey, so-and-so is really a transhumanist -- they just won't admit it!" leveraged at random people like, say, Richard Dawkins, and at first I thought similar things, but now I see it more as a sign that there's less originality in H+ than insiders would like to believe. That is -- no, we're not the only folks who think about this stuff, and no, just because other people think about this stuff doesn't make them transhumanists (or obligated to call themselves H+, etc.).

This is in part why I see "armchair transhumanism" (of the non-masturbatory variety, of course) as perhaps the only form of transhumanism likely to (a) survive, and (b) not become creepy/culty. And I don't think that should be taken as insulting, but as liberating. Nobody needs to prove that their favorite pastimes are Serious Business in order to feel justified in engaging in them -- and nobody needs to call themselves a "transhumanist" in order to make scientific discoveries or promote positive technosocial outcomes.

I do think there's a lot of exciting stuff going on right now, and I do think some things are changing for the better, and that there's potential for people today to make a tremendous positive impact on the future. But in order to do that, I think we have to make our actions about the actual improvements we want to see, not about a particular "movement", and definitely not about defending "transhumanism" from criticism as a monolithic entity. The way I see it, if you feel at all threatened by superlativity critique, you're taking transhumanism WAY too seriously.

It won't be "isms" that matter in the long run, but individuals (and groups, and particular technosocial developments), so to me, transhumanism makes the most sense as a loose social grouping of science and technology enthusiasts, not as a grand tidal wave of evolution in thought. Your mileage may vary, of course -- I'm well aware that I find "isms" a lot less useful than most people do, so I'm probably biased here. But it's tremendously confusing to have discovered that something I thought was a very open system is quite possibly more of a "closed system" constrained by particular dominant "integrated visions" that I'm supposed to jump on the bandwagon of lest I be considered dismissive of the notion of scenario planning in general.

I'm happy to count myself among H+ers so long as I'm still allowed to keep my own visions of what might be, and what things might interact, and how to approach such issues using presently-available resources, but sometimes it feels like there's this weird pressure to support things I might not support (such as "getting rid of disability" -- disability is, and always will be, relative to social norms, so of course I don't advocate getting rid of everything that today might be considered "disabling"!), or claim certainties I don't have (such as that "uploading is possible" -- maybe someday it might be, but I don't have the neuroscience or computing background to say much about it at this point!), etc.

An integrated vision is a provisional plan for how to deal with a complex, interacting future. This is what Transhumanism is.

Again with the capital "T"! But that aside, I am glad you acknowledge that the plan is "provisional" at least. I've been working in industry as an electrical engineer for going on six years now, and even though I'd hardly call myself a veteran (I'm very much a n00b, and well aware of it!), I have seen firsthand how "provisional plans" seem to go even when your timeframe is mere months.

Which is to say that such plans are indeed essential for structuring how people work cooperatively, but that they absolutely must remain flexible, and nobody can afford to get "locked into" some early idea just because they invested a lot of time or other resources on its behalf initially.

I'm saying that your re-constituted, muted transhumanism where we don't think about the long term future and where we don't think about outcomes in an integrated way does, in fact, dissolve into nothing.

Who said anything about not thinking about the long-term future? I spend a lot of time thinking about the long-term future, though less so than I did in my late teens/early 20s when I was one of those people who lost sleep over the prospect of the universe's heat death, etc. I guess at this point in my life I've come to terms with the inherent unknowability of the long-term future and I'm at peace with it. I'm at peace with uncertainty, and with the understanding that if I spend all my time worrying about a billion zillion years down the line, I am likely to (a) miss out on life as it is happening now, and (b) miss out on chances to do things that actually have a chance of improving things in the near-term.

For me, it's a matter of "how much information do we really have right now, and what can we do with it?" And maybe this is a sign that my "IQ" is too low for my brain to be of much use to the Real Serious Capital-T Transhumanists, but I've never been very good at memorizing symbol systems used to abstractly represent reality. I much prefer the concreteness of actual reality, or something as close to it as possible.

Sometimes it seems like people who talk about "integrated visions of the future" are basically building (by way of analogy) big, intricate towers out of delicate glass toothpicks, and then trying to balance more glass towers (made of even thinner toothpicks) upon the initial foundation, scaling upwards and upwards until they're essentially dealing with the finest gossamer and trying to make bridges and cat's cradles and orb-spider webs in spaces they themselves can't even see.

In that sense, I think that the most strictly-defined "integrated visions" dissolve into nothing on their own, whereas down here on the ground, real life still happens, and real people are still quite capable of effecting change.

So, to continue this analogy just a bit further, I see those glass towers (and all the people running around them, guarding them nervously from all who might so much as poke at them) as less likely to bring about neat stuff like supercomputers and better medical care leading to longer, healthier lives than those who focus less on coming up with maximally-integrated visions (that depend almost entirely on stuff that hasn't yet been invented) and more on coming up with incremental, interim plans that are flexible as opposed to fragile.

And...I'm fully aware that I could be wrong about all this here. But if I am, it won't matter anyway, and I'm frankly not worried about "looking silly" or "realizing that the [superlatives] were right all along".

ZARZUELAZEN said...

Roko said:

"I too dislike armchair transhumanism, so perhaps I've spent 40 comments talking cross-purposes with various people. Maybe 10 years ago this was what transhumanism was all about; (you guys will have to tell me because I was still a kid then) but it certainly isn't today. Look at the lifeboat foundation. Look at SIAI. Look at Goertzel's Novamente. This is real transhumanism."

Some might say that 'Lifeboat foundation, SIAI, Novamente etc' are *real crackpottery* ;)

I can only give my personal experience. I came to the transhumanist lists in 2002, was finally kicked off all lists and 'excommunicated' from the 'movement' by 2006.

I never detected any 'transhumanist community'. What I did detect was basically a small group of highly egotistical and idiosyncratic high-IQ types pushing all sorts of weird and wacky ideologies and ideas (some of which I quickly got sucked into myself).

After I started to realize most of them were speaking bullshit, I started to critique them more and more harshly - was only ever ignored, talked down to or insulted. Got fed up, got kicked off the lists, that was it. Not interested in 'transhumanism' any more - or any 'ism' for that matter.

Basically I'd say that you've a small group of people who want to feel 'special' and do it by playing up their high IQs and a set of specialized jargon and superlative terms.

SL4ers (who are basically crack-pots who have never achieved a single thing intellectually in the real world) often opine to some poor newcomer: 'You're not worth talking to because you're ignorant of basic bayesian decision theory' (or whatever strategy the robot-cult is currently pursuing... it changes completely every other year).

Roko my son, listen to Dale, Anne and Me: I wouldn't waste your time with 'isms'.

ZARZUELAZEN said...

>At the end of the day, it is psychologically unhealthy not to interpret the "big picture" in an optimistic way. The alternative - to take a pessimistic view on the ultimate questions in life - leads to a sour, cynical outlook where one's only source of validation is to try and destroy other people's hopes and dreams; a kind of schadenfreude.

The heart of the matter! As jim said, deep questions!

The human capacity for optimism is really quite amazing. I don't know how most people do it - I suspect that the really depressing truth is that a lot of it might be just self-delusion borne of ignorance.

In this respect I strongly recommend that you all read 'The Chronicles of Thomas Covenant' by Stephen Donaldson. It's a best selling fantasy series. The first two chronicles (spanning 6 books) are a brilliant metaphorical exploration of despair versus hope (or optimism versus realism/pessimism).

'The Land' (the fantasy world of the series) is a metaphor for the human mind and the creatures and powers the main character encounters are archetypes - symbols of mental characteristics. Despair/pessimism is personified by 'Lord Foul'. Free Will is personified by a 'White Gold' ring of power. The idea is that by wielding 'wild magic' (free will) we all have the capacity to choose either optimism or despair - but both choices can destroy us....

There's a 3rd chronicles too, still being written. (Only the first 2 books in the new series are out). The latest books explore the theme of knowledge versus ignorance.

Take my word for it - the books are absolutely brilliant. See...

http://www.stephenrdonaldson.com/

'A simple charm will master time,
A cantrip clean and cold as snow.
It melts upon the brow of thought,
As plain as death, and so as fraught,
Leaving its implications' rime,
For understanding makes it so.'

Anne Corwin said...

Whoah! No offense, Marc -- but I haven't figured out your communication style yet, and as such, I am somewhat uncomfortable at being lumped into a list as if we're on some kind of "team".

I wouldn't want to reinforce the wrongheaded framing of all this as a battle between two groups of people -- the Naysayers and the Idealists, perhaps -- duking it out "for no reason" other than blind hostility and/or personal feelings of being slighted.

That framing makes it impossible to get at the substance here, and I'd not want to encourage it or appear as if I'm encouraging it.

ZARZUELAZEN said...

>Whoah! No offense, Marc -- but I haven't figured out your communication style yet, and as such, I am somewhat uncomfortable at being lumped in in a list as if we're on some kind of "team".

Hey no offense detected at all. My communication style is simple when dealing with any-one even vaguely associated with transhumanism:

Every man (or woman!) for him/herself. ;)

The only 'team' I'm on is mine.

Cheers

Roko said...

@Anne:

"I don't put a whole lot of stock in people being able to actually make particular integrated visions happen"

"down here on the ground, real life still happens,"

"Well, I'm not there yet so I don't really view [the future] at all."

"I'm at peace with uncertainty,"

The only way that I can sum up your overall point/attitude is that you find it mentally easier to let the future wash over you like gentle waves on a beach than to actually go out there and take the risk of being wrong and actually push for something great.

Your notion of what counts as progress is telling:

"neat stuff like supercomputers and better medical care"

supercomputers won't make my life better. Better medical care will also make little difference to me; I will naturally live to the age of, say, 50 or 60 with a reasonable quality of life, and then I'll get old and my quality of life will plummet, and then I'll die. Better medical care makes very little difference to this; it might prolong my twilight years a little bit, but I'm not very interested in that.

And neither are most people! The above explains why so many people spend their time and money on maximizing the quality of life they have in their good years; they buy make-up, fashion-items, fast cars, computer games, holidays, etc. This explains the "consumer society" that many people complain about.

What happened to the real goodies that a positive technological singularity can bring us? Let's hear it for not getting old! Let's hear it for being able to change whatever you don't like about your physical appearance! Let's hear it for positive-sum motivational systems! Why don't you mention them? Perhaps you're frightened of sounding like a true believer?

Maybe the difference between me and you is that I am a risk taker.

Sure, you can hope for a future with a few nice supercomputers (just in case you wanted to know pi to 10^10 decimal places) and slightly better medical care ("hey I'm 90 and I'm still alive but I'm on a life support machine and I can't even go to the toilet without help"). But I just think that that future is aiming too low.

I am taking the risk of aiming for a future that actually lives up to my standards, you are simply lowering your standards.

To sum up the entire debate, I might say this: the world we live in today is broken in many, many ways. An intelligent person can react to this in three ways:

(1) She can lower her standards, or "be at peace" with those bad things.

(2) She can delude herself completely. Most people choose this option, it’s called religion.

(3) She can go out on a limb and take up the risky aim of actually fixing the problems. Since there are so many problems and they are all so hard, this option is necessarily highly speculative and ambitious.

It seems that there is simply no rational way to choose between these options. Take your pick people!

jimf said...

Roko wrote:

> (2) She [Anne Corwin] can delude herself completely. Most people choose this option,
> it’s called religion.
>
> (3) She can go out on a limb and take up the risky aim of actually fixing
> the problems. Since there are so many problems and they are all so hard, this
> option is necessarily highly speculative and ambitious.

Some people think it's worth attempting to make sure that (3) doesn't become
indistinguishable from (a potentially malignant, for that matter) version of (2).

There are down sides when that happens, you know. Generally, it doesn't
"fix" any problems, and it can create new ones.

Does that count as a "rational" consideration?

jimf said...

Roko wrote:

> "beyond the circle of frankly insane enterprise there lie circles
> of more and more plausible enterprise, until finally we come to a
> circle which embraces the great majority of human beings"
> [-- H. L. Mencken]
>
> But don't forget how pointless and tedious that last circle of
> "most plausible" activity is. That is the circle of the apathetic, the
> collection of people who passively let life lap over them. You HAVE to engage
> in a non-maximally-plausible enterprise to count as a truly good person,
> because the most plausible aim to have about the future is no aim at all.

I'm reminded here of another scene in the Showtime movie adaptation of
Barbara Branden's _The Passion of Ayn Rand_.

Ayn Rand (Helen Mirren) and Nathaniel Branden have just announced
to Rand's husband Frank (Peter Fonda) and Branden's wife Barbara
that they're in love and wish to conduct a "rational" affair,
with the permission of all parties involved. "Ordinary people
could not do this," says Rand, "but we are **not** ordinary!"

Frank and Barbara retire to an all-night diner across the street
from the apartment building (in Manhattan). Barbara looks around
at the other diner customers and recoils from them in disgust.
"I can't -- I **won't** -- be 'ordinary'". Thus she makes her
decision to go along with Rand's proposal.

Not wanting to belong to "that pointless and tedious. . .
circle. . . of people who passively let life lap over them"
**could** be no more than the voice of vanity speaking.

Vanity, which becomes "elitism" in the political sphere.

Roko said...

@jfehlinger said: "...She [Anne Corwin]..."

No, I wasn't specifically referring to Anne, I was using the word "she" to refer to a generic intelligent person.

@jfehlinger: "Ayn Rand Ayn Rand Ayn Rand Ayn Rand ... "

who invited her to this discussion!? not me!

@jfehlinger: "Some people think it's worth attempting to make sure that (3) doesn't become
indistinguishable from (a potentially malignant, for that matter) version of (2)."

Right, so we come down to it: Transhumanism is a (3) not a (2). If you disagree, you haven't been listening to the carefully framed rational arguments that I [and many, many others] have been making. If you think that transhumanism is "malignant" then come out with some specific criticisms, rather than the generic "transhumanism is a cult" smear tactic.

jimf said...

Roko wrote:

> "Ayn Rand Ayn Rand Ayn Rand Ayn Rand ... "
> who invited her to this discussion!?

Who, indeed?

http://www.wired.com/wired/archive/11.04/start_pr.html

(search down the page for "Transhumanus aeternis")

;->

> If you think that transhumanism is "malignant" then come out with
> some specific criticisms, rather than the generic "transhumanism is a cult"
> smear tactic.

Well, if you'd been listening. . .

Sigh.

jimf said...

Anne Corwin wrote (among many sensible things):

> Yeah...I know what you mean about the "composite view".
> Is that sort of what is meant by "cybernetic totalism", perhaps?

Not precisely. I think what Jaron Lanier (who coined the phrase)
means by "cybernetic totalism" is the tendency to reach for
digital computer models and metaphors to interpret and explain --
well, everything.

Ted Nelson poked gentle and respectful fun at this sort of thing
back in 1975. _Computer Lib_, p. 46:

"The strange language of computer people makes more sense than
laymen necessarily realize. It's a generalized analytical way of
looking at time, space and reality. Consider the following.

'THERE IS INSIGNIFICANT BUFFER SPACE IN THE FRONT HALL.'
(Buffer: place to put something temporarily.)

'BEFORE I ACKNOWLEDGE YOUR INTERRUPT, LET ME TAKE THIS PROCESS
TO TERMINATION.'

'COOKING IS AN ART OF INTERLEAVING TIME-BOUND OPERATIONS'
(i.e., doing parts of separate jobs in the right order with
an eye on the clock)"

Nelson also remarks (on the same page -- it's a dense book):

"THE HEARTS AND MINDS OF COMPUTER PEOPLE

Computer people are a mystery to others, who see them
as somewhat frightening, somewhat ridiculous. Their concerns
seem so peculiar, their hours so bizarre, their language so
incomprehensible.

Computer people may best be thought of as a new ethnic group,
very much unto themselves. Now, it is very hard to characterize
ethnic groups in words, and certain to give offense, but if I
had to choose one word for them it would be **elfin**. We are
like those little people down among the mushrooms, skittering
around completely preoccupied with unfathomable concerns
and seemingly indifferent to normal humanity. In the moonlight
(i.e., pretty late, with snacks around the equipment) you may
hear our music.

Most importantly, the first rule in dealing with leprechauns applies
_ex hypothesi_ to computer people: when one promises to do you a
magical favor, **keep your eyes fixed on him until he has delivered**.
Or you will get what you deserve. Programmers' promises are
notoriously unkept.

But the dippy glories of this world, the earnestness and whimsy, are
something else. A real computer freak, if you ask him for a program
to print calendars, will write a program that gives you your choice
of Gregorian, Julian, Old Russian, and French Revolutionary, in
either small reference printouts or big ones you can write in.

Computer people have many ordinary traits that show up in extraordinary
ways -- loyalty, pride, temper, vengefulness and so on. They have
particular qualities, as well, of doggedness and constrained fantasy
that enable them to produce in their work. (Once at lunch I asked
a table-full of programmers what plane figures they could get out of
one cut through a cube. I got about three times as many answers
as I thought there were.)

Unfortunately, there is no room or time to go on about all these
things. . . but in this particular area of fantasy and emotion I
have observed some interesting things.

. . .

Perhaps a certain disgruntlement with the world of people fuses with
fascination for (and envy of?) machines. Anyway, many of us who have
gotten along badly with people find here a realm of abstractions
to invent and choreograph, privately and with continuing control.
A strange house for the emotions, this. Like Hegel, who became most
eloquent and ardent when he was lecturing at his most theoretical,
it is interesting to be among computer freaks boisterously explaining
the cross-tangled ramifications of some system they have seen or
would like to build.

(A syndrome to ponder. I have seen it more than once: the technical
person who, with someone he cares about, cannot stop talking about his
ideas for a project. A poignant type of Freudian displacement.)

A sad aspect of this, incidentally, is by no means obvious. This is
that the same computer folks who chatter eloquently about systems
that fascinate them tend to fall dark and silent while someone **else**
is expounding his own fascinations. You would expect that the person
with effulgent technical enthusiasms would really click with kindred
spirits. In my experience this happens briefly: hostilities and
disagreements boil out of nowhere to cut the good mood. My only conclusion
is that the same spirit that originally drives us muttering into
the clockwork feels threatened when others start monkeying with what
has been controlled and private fantasy.

This can be summed up as follows: NOBODY LIKES TO HEAR ABOUT ANOTHER
GUY'S SYSTEM. Here, as elsewhere, things fuse to block human communication:
envy, dislike of being dominated, refusal to relate emotionally, and
whatever else. Whatever computer people hear about, it seems they
immediately try to top.

Which is not to say that computer people are mere clockwork lemons or
Bettelheimian robot-children. But the tendencies are there."


This stuff may have been "cute" back in 1975 (although Joseph Weizenbaum
or Hubert L. Dreyfus would not have thought so, even then), but 30
years on it's getting a little old. And I say that as somebody
who **is** (a very insignificant example of) one of these people,
out of a sense of personal disillusionment.

Oh, and one other (more down-to-earth) thing that's part of Jaron
Lanier's definition of "cybernetic totalism" is the tendency for
some programmers to attempt to take control away from the user
and give it to the computer. An example he uses is Microsoft
Word, which insists on correcting your spelling, grammar, and
formatting, instead of just shutting up and following **your**
instructions. He thinks this leads to worse software than
necessary.

BTW, the word "cybernetic" (or the prefix "cyber-") applied **solely**
to computers, is a bit of a misnomer. As coined by Norbert Wiener
(from the greek word for "helmsman"), it just means any device
which, by means of feedback, is able to self-govern or self-steer
to some extent. Computers can do this, of course, but so can other
things (like steam engines with governors).

jimf said...

Anne Corwin wrote (a beautiful figure):

> Sometimes it seems like people who talk about "integrated
> visions of the future" are basically building (by way of analogy)
> big, intricate towers out of delicate glass toothpicks, and
> then trying to balance more glass towers (made of even thinner
> toothpicks) upon the initial foundation, scaling upwards and
> upwards until they're essentially dealing with the finest
> gossamer and trying to make bridges and cat's cradles and
> orb-spider webs in spaces they themselves can't even see.

Yes, I've often thought this. They start with elements of
initially rather dubious (or at least seriously questionable)
plausibility (like molecular nano-assemblers, or recursively
self-improving artificial intelligence), and then use those
notions as the **foundation** of ever more elaborate houses
of cards, of ever higher orders of implausibility. Then they
move into those houses!

> Who said anything about not thinking about the long-term future?
> I spend a lot of time thinking about the long-term future, though
> less so than I did in my late teens/early 20s when I was one of
> those people who lost sleep over the prospect of the universe's
> heat death, etc.

I hope you've seen _Annie Hall_. ;->

-----------------------------------------------------
Alvy's mother: He's been depressed. All of a sudden, he can't do anything.

Doctor: Why are you depressed, Alvy?

Alvy's mother: Tell Dr. Flicker. (To the doctor) It's something he read.

Doctor: Something he read, huh?

Alvy: The universe is expanding...Well, the universe is everything,
and if it's expanding, some day it will break apart and that will be
the end of everything.

Alvy's mother: What is that your business? (To the doctor) He stopped
doing his homework.

Alvy: What's the point?

Alvy's mother: What has the universe got to do with it? You're here in Brooklyn.
Brooklyn is not expanding.

Doctor: It won't be expanding for billions of years yet, Alvy. And we've
got to try to enjoy ourselves while we're here, huh, huh? Ha, ha, ha.
(He gives an artificial laugh before taking another drag on his cigarette)
-----------------------------------------------------
http://www.filmsite.org/anni.html

OTOH, Isaac Asimov's short story "The Last Question" (one of his
most memorable pieces of fiction) also made a big impression on me.
http://en.wikipedia.org/wiki/The_Last_Question

As did "Eyes Do More Than See"
http://en.wikipedia.org/wiki/Eyes_Do_More_Than_See


There's a related trend (related, that is, to the multiplying
of implausibilities by weaving them together in grandiose
tapestries) in the thinking of some >Hists. In their deductive
reasoning, they tend to want to put the cart before the horse.
(I believe the technical term is "begging the question." ;-> )

In an article I posted to the Extropians' in mid-2000, I
mentioned Eliezer Yudkowsky's then-current "Coding a Transhuman AI"
(CaTAI):

-----------------------------
Returning to CaTAI: "AI has an embarrassing tendency to predict
success where none materializes,... and to assert that some
simpleminded pattern of suggestively-named LISP tokens completely
explains some incredibly high-level thought process... There are
several ways to avoid making this class of mistake... One is an
intuition of causal analysis... that says 'This cause does not
have sufficient complexity to explain this effect.' One is to be
instinctively wary of trying to implement cognition directly on
the token level". "Any form of cognition which can be
mathematically formalized, or which has a provably correct
implementation, is too simple to contribute materially to
intelligence". "Clearly [there] is vastly more mental material,
more cognitive "stuff", than classical-AI propositional logic
involves". And, "special-purpose low-level code that directly
implements a high-level case is usually a Bad Thing". On the
other hand, the title of Eliezer's article has an awfully
top-down ring due to the word "coding", and his notion of a
"codic cortex" or "codic sensory modality" seems to swamp the
usual notion of a sensory modality. I wonder how, using
Edelman's schema, one would select "values" for a codic sensory
modality without instantly facing all the difficulties of
traditional AI, and in an even more concentrated form than usual
(we don't want a robot that can just sweep the floor, we want a
robot that can write a computer program for a newer robot that
can program better than it does!). It seems like a bit of a leap
from "Blue is bad, red is good" to "COBOL is bad, Java is good"
;->.
-----------------------------

And more recently, I e-mailed somebody:

-----------------------------
You know, one of the problems here is that artificial
intelligence has gotten irretrievably entangled with
the notion of the technological Singularity. You can
blame Vernor Vinge for that, even more than Eliezer
(of course, it was Vinge's _True Names_ that got
Eliezer on the bandwagon in the first place, sez he).
It's hard enough thinking about the issues involved
with cognition. But that's not enough for these
people. They have to do that (as an appetizer)
and **then** save the world (or the Universe)
too -- by inventing a friendly God, basically.
The grandiosity bubble inflates faster than
Alan Guth's early universe.

You can see it with Eliezer -- he's forever putting
the cart before the horse. He implicitly believes
Moore's Law (or its nanotechnological extension)
is going to get us effectively infinite
computing power in a short time (the world will be
then made out of "computronium" free for the grabbing),
so then he just takes that infinite computer power
as given when he talks about building an AI. His musings
on an AI "seed" require it to have all the power
of a superintelligence in order to become intelligent
in the first place. And so on.
-----------------------------

jimf said...

Anne Corwin wrote:

> Additionally, I think integrated visions tend to be very
> poor at incorporating potential social change -- back in
> the 1950s depictions of Fabulous Future Worlds, you still
> didn't see women doing much more than fetching coffee, while
> the Serious (generally white) Male Scientists pored over
> Fascinating Discoveries.

Boy, you said it!

I wrote once (to Dale, it seems):

-------------------------------------------------------------
It. . . seems hysterically funny to me that people who
affect to nonchalantly contemplate the radical transformation
of human bodies and brains can **allow** themselves
to be squeamish about a relatively innocuous natural
human variation [such as homosexuality] (without, moreover,
realizing the full extent of their hypocrisy in doing so).
Nevertheless, I once had an e-mail exchange with a very high-profile
transhumanist, who informed me in a tone dripping with
contempt that he did **not** value variation for
its own sake, only variation that could be rationally
justified (thus implying that homosexuality only
merited favorable mention if it could be so justified,
and further implying that in his opinion it could
not -- shades of Ayn Rand announcing in public that
she personally found homosexuality "deesgoosting").
There is, of course, no arguing with such people.

Even more dismaying, because so hard to come to
grips with, is the deafening silence and lack of
engagement one is greeted with when one broaches
certain topics in transhumanist circles. Homosexuality,
I believe, is one of these no-comment zones.
Any political attitude contravening the libertarian
orthodoxy is liable either to be flamed to a
crisp or met with annoyed murmurs of "take the
politics elsewhere. It doesn't belong here." or
"You're projecting your own problems here. Fix
your own thinking, and come back when you've
embraced rationality." You've gotta wonder, though,
what's in store for some of the cryonics enthusiasts
if they turn out actually to be "lucky" enough
to be woken up 100, 200, or 500 years from now,
or whenever the techno-rapture arrives. Are
they really arrogant enough to think that their
own prejudices about the "good life" are **necessarily**
going to be the ones embodied by the folks who
revive them? Answer: yes, a lot of them **are**
that arrogant (haven't they seen Woody Allen's
_Sleeper_?).

Anyway, just thought I'd offer my appreciation
for your contrarianism. Also, I think it would be
cool if gay transhumanists stuck together more
than they do (I know there are some gay and
transgendered folks on the Extropians'. But they
don't talk about it much. It's, as I said,
mostly a no-comment zone.)
-------------------------------------------------------------

I was browsing just the other day in Barnes & Noble in a
book: _Gut Feelings: The Intelligence of the Unconscious_
by Gerd Gigerenzer
http://www.amazon.com/Gut-Feelings-Intelligence-Gerd-Gigerenzer/dp/0670038636
(and see http://www.nytimes.com/2007/08/28/science/28conv.html )
and the author mentions in passing the scandal of Maria Sklodowska-Curie
(Marie Curie) who, after having been awarded her **second** Nobel
prize (in chemistry; the first was in physics -- she remains the only
person ever to have gotten a Nobel in two different fields of science),
was still denied membership in the French Academy of Sciences
(to which her husband had been elected a year before his
death in a street accident). Oh, those uppity females! :-/

Here's another story to make your blood boil, this time from the
community of SF authors.

There's an almost unutterably smug episode recounted
in Julie Phillips' biography of SF author "James Tiptree, Jr."
(Alice B. Sheldon):
_James Tiptree, Jr.: The Double Life of Alice B. Sheldon_
( http://www.amazon.com/James-Tiptree-Jr-Double-Sheldon/dp/0312426941 )
in which Arthur C. Clarke comes off particularly badly.
This happened in 1975, more than a decade after
Valentina Tereshkova had gone into orbit for the
Russians. And Christ, it was years after _Star
Trek_ had come and gone. And _2001: A Space Odyssey_,
for that matter.

(pp. 330 - 331):

"The science fiction community as a whole was in an
odd position regarding feminism. On one hand, most
of the writers and fans were men. In 1974, women still
made up less than 20 percent of SFWA's membership.
And most of those men, even those who were using SF
to address other social issues, were still not ready
to question gender relationships. The "rocket jocks"
(who had also hated the New Wave) insisted women
couldn't write real, "hard" science fiction and
probably shouldn't even be reading it. Other men
were more open in theory, but had trouble understanding
the problem.

Arthur C. Clarke, for example, had recently sent a
letter to the editor of _Time_ magazine agreeing with
astronaut Mike Collins. Collins had told _Time_ that
women could never be in the space program, since in
zero G a woman's breasts would bounce and keep the men
from concentrating. Clarke proudly claimed he had
already predicted this "problem." In his novel
_Rendezvous with Rama_ he had written, "Some women,
Commander Norton had decided long ago, should not
be allowed aboard ship: weightlessness did things
to their breasts that were too damn distracting."
When Joanna Russ tried privately to explain why this
was insulting, Clarke, responding publicly in the
SFWA newsletter, asked why Commander Norton shouldn't
be attracted to women -- didn't Russ want him to be?
He added that though some of his best friends
were women, the level of discourse of the "women's
libbers" clearly wasn't helping their cause.

The whole exchange appeared in the _SFWA Forum_ in
February and March 1975. It drew a storm of comment
from all directions, most of it expressive of how
new feminism was to most men and how automatically
many reacted by kicking slush. The newsletter's
editor, Ted Cogswell, illustrated an issue with
pictures of naked women -- intended, he said, as a
joke. [SF author] Suzy Charnas informed him that
this kind of "joke" was aggression disguised as
humor. Some of the letters, from men and women,
were open and intelligent, but even the more reasonable
men often reduced the argument to the sexual or
the physical, as if all sexism was about was, as
one man put it, the shape of a person's plumbing.

On the other hand, the SF community had a great deal
invested in the idea of tolerance. It was, and is,
in principle sympathetic to all who feel themselves
different. Science fiction itself is, at its best
moments, a literature of difference, alienation,
change. Russ said in _Khatru_ that she wrote it
because she could make it hers. 'I felt that I
knew nothing about "real life" as defined in college
writing courses (whaling voyages, fist fights, war,
bar-room battles, bull-fighting, &c.) and if I
wrote about Mars nobody could tell me it was (1) trivial,
or (2) inaccurate.'"

jimf said...

Anne Corwin wrote (so **many** sensible things!)

> I don't put a whole lot of stock in people being able to
> actually make particular integrated visions happen, or even
> wanting to make particular integrated visions happen once
> it becomes possible to do so. I mean, when I was growing up,
> everyone was going on and on about "video phones", and
> how eventually we'd all be able to look at people while on
> the phone with them, and how this would "revolutionize
> communication" and all that.
>
> This technology now exists, but it isn't exactly popular --
> many people consider the lack of visual data on the telephone
> to be an advantage (you can answer the telephone in your undies
> if your boss calls, but heaven forbid you answer the videophone
> in same)! But one thing that is wildly popular is the cellular
> phone camera -- it is very common for people to take still shots
> and zap them to their friends over the airwaves. And I don't
> recall anyone ever talking about that when I was growing up.
> It was all about the Videophone then, not the cellular camera phone,
> and Cameraphone World looks dramatically different than the
> predictions of Videophone World did.

Excellent example. When I was growing up, you could actually see
a demo of the AT&T Picturephone at the 1964-65 New York World's
Fair:
http://www.porticus.org/bell/telephones-picturephone.html

There was, BTW, a humorous acknowledgement of the potential
problems of the video-telephone in one episode of that futuristic
prime-time comedy cartoon spin-off of _The Flintstones_,
_The Jetsons_ (1962): "In the episode where they buy
the car, it starts out with Jane putting on her 'morning mask'
to answer the video phone and her friend Gloria. An untimely
sneeze on Gloria's part reveals she too is wearing a mask."
http://www.maskon.com/kerry/masks/mediacom.htm
"One thing that shows up in sci-fi going back years and years
that we still do not have is videophones. We do have web cams and
video conferencing, and more people now are getting internet
phone service, so I guess you could make an argument that that
counts. But I think The Jetsons hit the nail on the head with
the "Morning Mask" for use with video phones, in case you were
not yet "presentable". Who really wants people to be able to watch
them when they are on the phone?"
http://sf.theboard.net/cgi-bin/yabb2/YaBB.pl?num=1142200426/0
"There are consumer video phones for use with high speed Internet
connections: several shown at CES. I wonder when Jane Jetson's
morning mask will also become common..."
http://www.jerrypournelle.com/archives2/archives2mail/mail292.html

A similar mis-prediction occurred with computers. The SF world
was fantasizing, back in the days of floorspace-devouring mainframes,
that everybody was going to be connected up to huge central computers.
(e.g., PLATO Homelink from Control Data Corporation,
http://www.slideshare.net/wuzziwug/how-college-students-influenced-gaming
[Slide 23] ). Of course, we **are**, in a sense, but Google is just
the **index** to a vast network of decentralized computers, which
we access from a vast decentralized network of **personal**
computers. Nobody, but nobody, was predicting that in the 60s.
Even Ted Nelson got that one wrong. He may have envisioned
hypertext systems (though he didn't invent the idea -- I suppose
Vannevar Bush's Memex is credited as the progenitor of that
one), but he imagined people accessing (his version of) the Web
by getting in the car and driving to something like a McDonald's
franchise where you'd **rent** a terminal.

jimf said...

Anne Corwin wrote:

> To be fair, though, I do sometimes wonder if maybe the people
> who are all atizzy about such things simply know something I don't
> (by virtue of their own study and understanding). I don't know
> everything, and hold no illusions that I do. I only have access
> to my part of the elephant, after all. :)

One rather unfortunate thing about human beings is that they're
apt to get "all atizzy" about stuff for reasons **other**
than "by virtue of their own study and understanding."

There are clues to this. One clue is that if something is worth
getting atizzy about for "good" reasons, then the interest
and enthusiasm will tend to spread even among the most ferocious
skeptics (i.e., the scientific community). Eventually, it becomes
so uncontroversial as to be taken for granted. There's
nothing so mysterious about the operation of, say, a CAT scanner
that it needs to be bolstered and defended by a PR apparatus so that
its benefit will not be sabotaged by know-nothing Luddites.
Even though you can see a fossil, just in the word X-ray, of
how mysterious that particular part of the electromagnetic
spectrum once seemed.

I know you're sensitive to having once been "pathologized" by
psychological labelling, but it is worth knowing about some
of the darker capacities of the human mind.

A good introduction to the seductions of cult thinking, IMO,
is Kramer & Alstad's _The Guru Papers_.
http://www.amazon.com/Guru-Papers-Masks-Authoritarian-Power/dp/1883319005

The trouble seems to be that some people are extremely
susceptible to signs of certainty and self-confidence in **others**.
This makes it possible for seriously deluded folks, if they give off
the right "vibes", to accrete groups of followers who will
defend them aggressively.

ZARZUELAZEN said...

>Right, so we come down to it: Transhumanism is a (3) not a (2). If you disagree, you haven't been listening to the carefully framed rational arguments that I [and many, many others] have been making. If you think that transhumanism is "malignant" then come out with some specific criticisms, rather than the generic "transhumanism is a cult" smear tactic.

# posted by Roko : 7:04 AM

---

As regards Roko's points, I don't view transhumanism as a cult. It just proved to be not my thing. Like I said, all that ever happened to me was I was talked down to, ignored or ridiculed. Me and a lot of the people calling themselves 'transhumanists' just don't get along, so I had to go away and do something else instead.

I don't think 'transhumanism' will ever be anything other than a tiny sub-culture.

---

Singularitarianism definitely did display signs of cult-hood in the past.

Roko,

I see a lot of guys coming online trying to 'imitate' Eliezer - they all seem to want to be mini-Yudkowskys. All that happens to these poor sods is that they end up trying desperately to be as good at Bayes as Eliezer. The inevitable happens and the sad Yudkowsky wannabes end up as 'true believers'.

If you're interested in AGI, I think Internet message boards are the last place you want to look. You're a lot better off developing your own strengths elsewhere rather than spending the next 20 years stoking the egos of self-appointed online gurus.

---

ZARZUELAZEN said...

jfehlinger said>>>

>On the other hand, the SF community had a great deal
invested in the idea of tolerance. It was, and is,
in principle sympathetic to all who feel themselves
different. Science fiction itself is, at its best
moments, a literature of difference, alienation,
change.

Yes. Excellent observation. You really do need to read Donaldson's 'Chronicles of Thomas Covenant' if you haven't already. It's the best fantasy ever, and it deals directly with the issue you mentioned. The hero (or 'anti-hero') is a leper (in the real world), a definite outcast. In the fantasy world he is transported to, he is also marked as 'different', but for completely different reasons....

http://www.stephenrdonaldson.com/

jfehlinger said>>>

>But that's not enough for these
people. They have to do that (as an appetizer)
and **then** save the world (or the Universe)
too -- by inventing a friendly God, basically.
The grandiosity bubble inflates faster than
Alan Guth's early universe.

Therein lies Yudkowsky/SIAI's mistake. No one can 'save the world'. That grandiose mission can only end in tears. I think though, that one might be able to save oneself.

If there's hope for the wild visions of transhumanism, this hope does lie in science fiction and story telling... really ;)

It was Donaldson who said that all his stories were designed to illustrate the fact that: 'man is an effective passion'.

Remember these words:

'A simple charm will master time,
A cantrip clean and cold as snow.
It melts upon the brow of thought,
As plain as death, and so as fraught,
Leaving its implications' rime,
For understanding makes it so.'

Roko said...

I have tried hard to find the logical content of the "grandiosity critique" that Marc and jfehlinger are leveling at transhumanism, and I have failed to find anything. It's just hot air and name-calling.

As for Anne, I just don't share her "I'm so laid back I'm horizontal" attitude; I'm sure that works well for Anne, but I can't take that stance. I actually care about what happens to the world, and thus I do think that Transhumanism is "serious business".

Earlier on I said that it was psychologically unhealthy to view the big picture in a pessimistic way. I think that Marc and jfehlinger are testament to this. Your criticisms of Transhumanism seem, to me, to be a way for you guys to revalidate yourselves and to release some of the underlying tension that a pessimistic view on life brings.

Now, I will take your advice and go work on AI.

jimf said...

Roko wrote:

> I have tried hard to find the logical content of the
> "grandiosity critique" . . . [levelled] at transhumanism,
> and I have failed to find anything.

You haven't looked very hard.

> It's just hot air and name-calling.

It always is, to the True Believer.

But I'm afraid there's more substance to the charge than
you're allowing yourself to see. I could give you a
bibliography, or you could Google one up yourself
easily enough, but you're in no frame of mind to
absorb that sort of critique. So it goes.

> Now, I will take your advice and go work on AI.

What, you mean you think you could do a better job
than [Bugs Bunny] or [Elmer Fudd]? What hubris!
What presumption! Or as Eliza Dolittle would say,
"The impudence!" ;->

ZARZUELAZEN said...

>I have tried hard to find the logical content of the "grandiosity critique" that Marc and jfehlinger are leveling at transhumanism, and I have failed to find anything. It's just hot air and name-calling.

Roko, I only state my personal experience. All I ever got from people on transhumanist lists was meanness, nastiness and dismissiveness.

It seems that the people that frequent those lists are mainly focused on their own egos and on high-IQ intellectual pursuits.

Myself, I think an ounce of EQ is worth more than all the IQ in the world. If you want to be happy the transhumanist lists are definitely the wrong place for it. So 'transhumanism' was quite useless to me personally.


>Earlier on I said that it was psychologically unhealthy to view the big picture in a pessimistic way. I think that Marc and jfehlinger are testament to this. Your criticisms of Transhumanism seem, to me, to be a way for you guys to revalidate yourselves and to release some of the underlying tension that a pessimistic view on life brings.

No, I can't speak for jim, but for myself see above. It was just the case that 'Transhumanism' proved quite useless to me personally.

>Now, I will take your advice and go work on AI.

AI seems to be every geek's wet dream.

What are you going to do that thousands of people with super-high IQs haven't already been doing for the last 50 years? ;)

What resources do you have that DARPA (which pours hundreds of millions into advanced IT research annually) doesn't have?

---

The battle between me and Eliezer for the fate of the universe has raged since 2002.

How can you hope to match our puissance - such puissance as the universe has never seen until now? ;)

Eliezer, the master of math, versus Geddes, the master of emotion. It's been a strange battle, silent and terrible. The crushing boulders of Eliezer's logic versus the flowing waters of Geddes's conscious reflections.

Steer clear boy, lest you be caught in the crossfire.