Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Thursday, December 03, 2009

Let A Bazillion Flowers Bloom

Upgraded from the Moot:

"Martin" in the Moot: So you wouldn't care about transhumanists if they dropped all pretenses of respecting science and just called their views a faith?

Yeah, pretty much, actually.

Of course, one has to remember that not all religious people disrespect science just because they are religious. The problem isn't that people deeply respect beliefs/values other than scientific ones: moral, aesthetic, ethical, and political beliefs are indispensable and can, in my view, even be reasonable or unreasonable in ways that are open to reasonable discussion, but none of them is properly construed by way of the criteria of reasonable scientific belief.

The problem arises when one confuses or seeks to reduce the terms proper to various modes of belief/value into those of another mode. I don't doubt that there are Catholics or Singularitarians who respect science (or sensible policy-making in relatively accountable and consensual diverse-stakeholder democracies) in a non-negligible way, but this wouldn't make them right to describe their Catholicism or Singularitarianism as scientific enterprises or policy think-tanks.

Even if the superlative futurologists admitted that they are essentially science fiction fanboy salons that have taken on the definitive character of sub(cult)ural faith-based initiatives aspiring toward faithly personal transcendence, I would surely still think them quite silly -- as I do most forms of religiosity (being a crusty atheistical type myself) -- indeed, in their case, not just silly but also marginal and defensive faith-based communities aspiring somewhat sadly to become at most something like Mormonism or Scientology when they grow up.

Among the more interesting or reasonable adherents of their faith I would probably just quietly translate their expressions of faith into signals of moral/subcultural membership or assertions of aesthetic taste in order to make sense of them in a way that enabled me to engage with them as my peers on a case-by-case basis and not let their weird religiosity get too much in the way of keeping up the conversation. That's how I am with most religious folks so long as they don't try to convert me, or with nice and/or interesting people about whom I discover at some point that they have an unexpected religious side, which they thankfully almost never bring up because it is for them a personal matter and they grasp that it is not one of the things about them I am interested in.

But, again, if a person of faith tries to convert me, tries to insist their faith is not a faith but a kind of scientific or policy practice, or if their faith has an organizational life with anti-democratizing impacts, I will publicly criticize and oppose those dimensions of their faith, while respecting their right as citizens to be faithful and celebrating the perfect reasonableness of their faith in its moral and aesthetic dimensions, however much the form of those beliefs/values happens to differ from my own morals or tastes.

So long as people don't seek to undermine warrantable scientific belief-ascription by substituting the faithful/normative (the dissensus which depends on consent) for the factual (the consensus which depends on dissent) and so long as people don't work against consensual democratic equity in diversity, peer to peer, my rule is let a bazillion flowers bloom, hell, let a bazillion powers vroom.

However, to the extent that a faith takes on an organizational and political life, I will of course disapprove hierarchical or authoritarian tendencies that harm its members and I will criticize and oppose its work in the service of anti-democratizing ends, including anti-democratizing impacts it denies having a hand in either out of dishonesty or incomprehension.

13 comments:

jimf said...

> I would probably just quietly translate their expressions
> of faith into signals of moral/subcultural membership or
> assertions of aesthetic taste in order to make sense of them. . .

Making sense of them:

http://lists.extropy.org/pipermail/extropy-chat/2009-October/053043.html
---------------------------------------
Fri Oct 2

I am a Singularitian who does not believe in the Singularity

Natasha Vita-More:

> Stefano Vaj wrote:
>
> > I do think instead that I have a "moral" obligation, on the basis of. . . my
> > value system, to advocate an effort towards as much "superlativity" as
> > possible, the degree thereof, or my anticipations and expectations
> > thereabout, being immaterial to such stance.
>
> Attempting to foster an acme of a very large paradigmatic swerve brings the
> Singularity to the "now". I can see where this fostering is a valuable link[age of]
> behavioral and psychological approach[es], and which addresses my deepest
> concerns about an overt and symbolic erection. Rather, the ebb and flow of
> accelerating advances, across the converging fields, are the perturbations
> which beckon an emotional response.
>
> Stefano wrote:
>
> > And I do not believe that any meaningful distinction really exists, in our
> > extended phenotype, between our technology and ourselves, let alone in any
> > diacronical sense.
>
> Plasticity is the ability to change our phenotype. Emphasizing this
> behavioral characteristic might be [a] meaningful approach.
>

Oh dear, this is going to break my bayesian filter.

Emlyn
---------------------------------------

Love those Extropians!

I too have deep concerns about overt and symbolic erections.

(Credit to Dale for "superlativity"!)

jimf said...

> However, to the extent that a faith takes on an organizational
> and political life, I will of course disapprove hierarchical or
> authoritarian tendencies that harm its members and I will criticize
> and oppose its work in the service of anti-democratizing ends,
> including anti-democratizing impacts it denies having a hand in
> either out of dishonesty or incomprehension.

From the Mar. 17, 2008 interview with Michael Anissimov
on Phil Bowermaster's and Stephen Gordon's (of
http://www.blog.speculist.com) "FastForward Radio":

http://www.blog.speculist.com/archives/001675.html

Phil: OK, so let's move on to another "ism" versus
Transhumanism, which, uh, I didn't know how to describe
it, so I'm gonna give it a really bad, ugly, nasty
name, Michael, and you can correct me and give it a
better name, but I was reading your analysis of
Dale Carrico's essay about what all Transhumanists
believe and what's wrong with what they believe, and
you kind of took him down point by point, and, um,
so I called that Transhumanism vs. Buzz-kill-ism.
Um. . .

Stephen: Ha, ha, ha.

Phil: Is that a fair assessment of that philosophy,
or would that better be described as something else?

Michael: No, I don't think you could characterize Dale's,
um, criticism so one-dimensionally, I mean. . . like, I
**like** buzz-killing sometimes, 'cause I think some
people will get a little too enthusiastic for no real reason.

Um, I, yeah, I don't know about, uh. . . any, any one
that wants to read Dale's blog can, um, find all of his
positions, so, you know, they can do all that, but I still
want to. . . Well, I don't want to, like, be joshed into
saying anything negative about the guy, so. . . I mean,
he has his concerns, he thinks. . . I think the [lots?] of his
criticisms of Transhumanism are completely legitimate,
also I think that he has made somewhat of a [quite? plight?
polite?] obsession of criticizing Transhumanism, and it
doesn't even seem to be helping his own causes. Um, like
he's a big socialist activist, essentially, or he kind of
puts himself out as one, but the way he constantly
criticizes Transhumanism means that he's alienated from
his own group [of] socialist activists, 'cause most of them don't
even know or care what Transhumanism is, and he's obviously
alienating himself from Transhumanists by, um, attacking
them so consistently, and I think that a lot of the reason
why he's attacked Transhumanism is because of his perception
that Transhumanism is closely allied with Libertarian, uh,
philosophies, which is. . . it's true that Extropianism did
grow out of Libertarian politics, and there were a lot of
Libertarians [who] became Extropians, but that was the
mid-90s, so it's kind of like. . . the Transhumanist movement
has become politically more diverse in recent years and, um,
I think a survey showed that about 47% of Transhumanists
identify themselves as left-wing, so I don't think that the
original reason why he started attacking Transhumanism is
still [the thing?] and, uh, I think he almost sees any
people on the Left as betraying other Leftists if they even,
like, enter organizations with, um, Libertarians or right-wing
people. And I think that's just, like, divisive and polarizing
and I'm tired of seeing our country ripped in half by
polar. . . politics, and I don't want to see the Transhumanist
community be ripped in half just 'cause of politics either.

jimf said...

Phil: Well, that's an excellent point. Yeah, I've. . . One
of the reasons that we don't ever get into the specifics of
politics on The Speculist is for exactly that reason, because
it always turns into this argument that everyone's making
the other side out to be the bad guy, and, and, there's this
kind of finger-pointing and vilifying around the whole, uh,
around the whole political process. I think that's really
interesting, the, the history of how Extropianism did in fact
kind of arise out of Libertarian philosophy, and obviously
there's still a lot of, uh, Libertarian-type thinking in the
Transhumanist movement, but, yeah, we had James Hughes on
from the World Transhumanist Association -- when was that Stephen,
earlier this year or was that late last year?

Stephen: I think that was late last year, wasn't it?

Phil: Late last year -- and, you know, the kinds of issues
politically that are being addressed by folks who are concerned
about Transhumanism, or who, who take that philosophical
approach, you know, it runs the gamut, it would seem to me,
anymore, there's definitely room. It's a. . .

Stephen: The word he used to describe most of the people who
are involved in his group I didn't particularly care for, he
called 'em techno-progressives, which would leave out the
Libertarian folks, you know? So, I mean, it's hard to come up
with a label, isn't it?

Phil: Well, I'm not sure that it would leave them out though.
Maybe a "techno-progressive" is different from a just plain
"progressive" see, so. . .

Stephen: That's true.

Phil: But that is the danger of labels. But I thought it was
real interesting, um, the title of Dale's post, this time 'round --
'cause I read his blog frequently, and I, the Buzz-kill-ism
was intended humorously, I'm a fairly regular reader of his
and have swapped e-mails with him in the past. He's let me
know when things we've written on The Speculist have just been
**outrageous** in terms of his political point of view, but, um,
the title of his post was "Transhumanists believe it is bad
to be sick or to suffer, if this can be avoided". And, um,. . .

Stephen: I've never known anyone who, who didn't **want** to avoid
sickness.

Phil: I. . . Could we [tran, , ,?]. . . Could we. . .

Stephen: That's a big tent there.

Phil: **Human beings** believe it is bad to be sick or to suffer.

Michael: Well that, that was actually his point. Like, that's why
he says that Transhumanism doesn't offer anything new, because it's
already common knowledge, but I disagree with him that it really
**is** so standard because, um, most people think it's normal for
old people to be afflicted by diseases for instance, more normal
than for a young person, and Transhumanists don't make that, um,
don't use that double standard, so. . . Yeah, I mean he's basically saying
there's nothing new that Transhumanism offers, but I completely
disagree.

jimf said...

Phil: Absolutely. Well, I. . . thanks for that clarification, I
think that helps quite a bit. So he was saying that that's the
obvious, that. . . He was saying what I was just saying, which
is **everyone** would agree that, uh, nobody wants to be sick
or to suffer, but this is one of the things we've talked about
in the past when we get into, uh, the subject of life extension,
um -- there seems to be this thought that "Well, yeah, we want
people not to suffer and not to die but, of course, people do need
to die." Right? There's this other side of the argument that
says that it is necessary. So, uh, what was your response to
that? How did you respond to what Dale wrote?

Michael: Oh, well, heh, it's there on the blog, but, um, let's
see. . . Oh, I just went through it point by point, I basically
responded in the way I just said, which is that, um, everyone
claims to be against sickness, death, et cetera, but you'll
find lots of excuses when it comes to certain things, for instance,
I dunno, like saying that God took someone away or something when
they get, die in a car crash when they're young, um, or, uh. . .
it's, you know, people get numb to all this suffering, and the
difference with Transhumanists is that we don't have any artificial
limits, and the piece I would recommend on this topic is
"Transhumanism As Simplified Humanism" by Eliezer Yudkowsky,
which, uh, elucidates the points that, uh, this point really really
well. So. . .

Phil: OK. And that was "Transhumanism As **Simplified** Humanism"?

Michael: Yeah, and that's the way I've often looked at it really.

Phil: And, uh, actually, Stephen, that goes to one of your points about
Transhumanism, right?

Stephen: Yeah, that basically, uh, in order to **be** a Transhumanist,
you have to be a Humanist first. There's, you know, there's not gonna
be too many fundamentalist, uh -- pick your religion -- there's not gonna
be fundamentalists that they're gonna, you know, um, sign up
necessarily for Transhumanism unless, I don't know, if their faith
or whatever allows them to be a Humanist first. Um, it's, that's, you
know I just, uh, see that as, like, a prerequisite.

Phil: And I like the idea. . .

Stephen: The individual is **important**. Right? If you don't
believe that, if you don't accept that, then you can't take the
next step.

Phil: I like the idea that, um, that Transhumanism would be not a,
some weird offshoot or extension, but is actually just, just
Humanism, which, uh, is that the gist of what Eliezer argues in
his essay?

Michael: Yeah. And, um, I think that it's important for people
**outside** of Transhumanism to also recognize that it's related
to Humanism? Because, fully to understand. . . sometimes when you
bring up the idea that, um, enhancing humans, immediately [it]
hearkens back to the Nazis and the eugenics projects? And we have
to make it utterly clear that Transhumanism is based on Humanism
**first**, so that also means that Transhumanists would completely
defend the right of anyone to **not** modify themselves, as well
as modify themselves, like, even though we have a lot of enthusiasm
for the possibilities of modification, it doesn't mean that we
consider it to be mandatory, it just means that we ourselves find
it an interesting prospect and want to pursue it, and, um, we don't
want to force anyone else into following our route if they don't
want to, and also that everyone has a responsibility both socially
and legally to coexist with each other.

jimf said...

Phil: OK, well, that's really bummin' me out. But you're not. . .

Stephen: Ha, ha, ha.

Phil: Uh, you're not eliminating Utility Fog altogether, here?

Michael: No, no, it's different. Utility Fog wouldn't construct
something nanotechnologically **itself**. It would be a **product**
of nanotechnology instead.

Phil: OK, so once you've created those swarm-bots that might
be one of the applications of a swarm bot?

Michael: Yeah, I mean there will be swarm-bots, but not, uh, I
doubt that they will be involved in manufacturing anything, at least
initially.

Phil: OK, so. . . But the Utility Fog does show up eventually,
that, that's the point.

Michael: Yeah. Ha, ha. Most likely.

Phil: OK. OK, 'cause I'm really countin' on that stuff. I've
got a lot of big plans for Utility Fog in the future.

Michael: Can I make just one more remark on the nanotech helping people?
Um, I was especially, I'm kind of offended, because the reason
why I initially became interested in nanotechnology was because
of the humanitarian angle. And I think that third-world
countries have the most to gain by creating a decentralized
method of manufacturing. 'Cause today, manufacturing is
extremely centralized, and makes it difficult to get parts that
people need to them, so particularly Dale's kind of insinuating
that all the support of nanotechnology has to do [with] greed --
but that's not the case at all, as you mentioned, Foresight
Institute and the Center for Responsible Nanotechnology make
a bigger deal about the humanitarian applications than, uh,
making rich people richer.

Phil: Yeah, well, hold that thought. We're gonna come back to
that. This is FastForward Radio, on the BlogTalk Radio Network,
and the phone lines are open. We're talking with Michael
Anissimov. . .

Stephen: Michael, do you think that the more likely path to the
Singularity at this point would be human enhancement? 'Cause it seems
to me that, you know, just, like you say, just the right drugs
would make a human more intelligent. That's an easier route
to go at this moment in time than building the AI.

Michael: Hmm. . . You know, I'm not sure, but I don't think
that's necessarily the case, because, uh, biology is really
complicated, and our brains are already highly optimized by
evolution, so it could be relatively difficult, like, the
brain depends, every part of the brain depends on every other
part of the brain working essentially normally?

Stephen: Uh huh.

Michael: And I think once you start to mess around with things,
biology's extremely complex, and if it were really that easy to make
drugs that made people smarter then we'd already have them by now.
And all the nootropics, these so-called smart drugs, not a
single study that I've seen has been published that proves
that these actually make people smarter. So, I think it's quite
a challenge. Meanwhile, for AI, it might be possible to use
relatively simple algorithms that are, um, or actually probably
not simple, but inference algorithms that are a hell of a
lot simpler than human intelligence, so, kind of in the same
way that an airplane is a lot simpler than a sparrow, so I
think that the AI route is possible, the human intelligence
enhancement is possible. The other barrier to human intelligence
enhancement is that a lot of the approaches that would be likely
to actually work, like deep brain stimulation and interfaces
between the deep brain and computers, would be ethically
questionable, and I think they'd be shut down if they were
in any developed country.

. . .

jimf said...

Stephen: I wrote a short story that was published on The Speculist
a while back. It, heh heh, it's not **particularly** good, I don't
know if I've ever been able to write good fiction just yet,
but, uh, it was basically Leon Kass's family, you know, giving him
life extension treatments against his will. Ha, ha, ha.

Michael: What's it called?

Stephen: I'll have. . . It's been, you know it's been -- when
did I publish it? You know it would've been about 2005, so I
forget the name of it. . .

Phil: That might be back on the, back on speculist.com, rather than. . .

Stephen: Yeah, I might have. . . But I'll put it in the show notes,
if I can find it. But it's, uh. . .

Phil: Or Google it. . .

Stephen: Yeah, I guess the point is, **exactly** what you just said,
that if someone doesn't want, you know, whatever technology can
provide, then the Transhumanist would be the **first** person to
say -- hey, you don't have to have it.

Michael: Right.

Phil: So, one point that Dale writes, that "swarms of multi-purpose
programmable nanobots will soon make everybody who counts rich beyond
the dreams of avarice", well, uh, to me the big disagreement was
that it's not about making "everyone who counts" rich beyond the
dreams of avarice, but that one of the great goals of nanotechnology, uh,
molecular nanotechnology as it's been advocated by Foresight Nanotech
and other groups who look ahead to what that future might bring, um,
is that it could have a **substantially** positive effect, impact, on
helping to eliminate hunger, helping to eliminate poverty, and, and
you would think that, uh, whether one was a techno-progressive
or a progressive or -- whatever one's political viewpoint is -- that
that would sound like an awfully good thing to do. But I was interested
in your response to him, because you didn't take the political angle,
but you came back and said that you don't think that swarms of
nanobots **will** be used in the near term. Tell us why that's the
case.

Michael: All right. Well, initially when Drexler started presenting
his ideas he kinda made it seem as if nanotechnology would consist
of a bunch of swarm-bots flying around. And, um, since then, we've -- or
they, I guess, I'm not in on the technical angle of it, but -- they've
determined that it would be a hell of a lot easier to, basically,
create a nano-factory, which is where everything's stapled down, in
place, 'cause the amount of infrastructure you'd need to make a 'bot
fly is pretty significant, and that's like a lot of wasted space.
A nano-factory can be sealed in a vacuum. Nano-factories can be
bought and sold, and authenticated, and regulated, which is also
really important, and, um, the idea of nanobots flying around is --
I mean, I think that nanofactories will build small swarm-bots --
basically, like insect-bots, which we already have today, actually,
but we'll just see them in greater numbers and more effective, but
um I don't think we'll actually use swarms of 'bots to construct
things, it's too complicated, it's not necessary, and it's, the
fact they can't be regulated, so we, is a big reason for why they
probably won't come about in the near term. You'd have to have
some way of figuring out, keeping tabs on every single one, so
until we figure out how to do that, and also overcome the technical
problems, uh, I just don't think we're going to see nanobots
flying around.

jimf said...

[Michael:] I'm actually worried that maybe humans just inherently aren't
smart enough to build Friendly AI, so we might need a leg up with
primitive intelligence-enhancement techniques, and then
those enhanced human beings could create a Friendly AI
[morally? more alike? more alive?] [Good thing? Could be?] we have only one
chance to do this. It'd be nice not to mess it up and
kill ourselves.

Stephen: Well, we had on the program last week Ben Goertzel,
a big, **big** AI researcher, and uh, that's a huge concern
of his, and uh, what he wants to do is -- he says that humans
have a capacity for empathy, but that evolution gave us the
capacity for empathy particularly within our own tribes.
If there was another tribe with whom we were at war we
could easily turn off empathy and go to war with 'em. And
he says that a goal of AI ought to be to create something
that has a **greater** power of empathy than humans do,
and therefore would tend to be friendly.

Michael: Absolutely. Eh, you might call that a kinder-than-human
intelligence.

Stephen: Exactly. It seems like a good way to go, if
you're gonna build a machine like that.

Phil: Well that's, obviously, that's what we're all hoping for.
I've been saying, lately, after, uh. . . The more I learn about
financial markets, it feels like, if anyone's working on the
robots that are gonna build better robots it's these quants who
are designing algorithms that are ultimately gonna be, **limited**
AI programs, but that are designed to build more intelligent, uh,
algorithms that are designed to develop even more intelligent
algorithms to get an edge in trading. And you think, well, how
much empathy would an AI that develop from that particular
tribe have for humanity, or even for another AI, you know. . .?

Stephen: They're cutthroat.

Phil: Yeah, if it came from that. . .

jimf said...

Michael: Yeah, I often [read?] by example, like, a stock-market
trading AI. Unlikely. One guy I talked to about a year ago
was the founder of Hanson Robotics. He was about to come out
with a personal robot, called Zeno I believe, and he says that
if we do AIs in such a way that they come from toys and
entertainment things that are designed to interact with humans,
then we would be in a way-better situation than if,
for instance, the military came out with them. And I was
thinking about it and, it is possible but I don't think
there'll be a serious project to create AI used for entertainment
'cause it's easy to entertain people with something a lot
less smart than a human intelligence, and I think that, uh,
you'd have people crafting Artificial General Intelligence
for something more serious, like stock-market trading or
military purposes, so. . . I want to believe it's likely
that AI will come in a toy form first, but I just don't
think it's too plausible.

. . .

I think that virtual worlds are an ideal training-ground for
AGI because of sensory richness and the environment and human
interaction and stuff like that, but it wouldn't happen without
a lot of deliberate work. I think that maybe, like, a tag-team
of the virtual-world-pet approach, and actual mathematicians
working on the structure of AGI could actually result in AGI,
and that's exactly the approach that Ben [Goertzel] is taking,
which is why I think that everyone should keep a really close
eye on Novamente, because it could be going place[s] real
fast.

Phil: Absolutely. So, yeah, we're lookin' forward to seeing
developments from that. I've just been alerted to the
fact that we've gone over our hour. I was meaning to look
down at the clock to see if it was time to start wrapping
up and I realized that we've actually gone over. It just,
uh, speaks well for you Michael how interesting it is
talking to you, and how we're going to need to have you on
the program again real soon, hopefully we'll schedule something
in the very near future. Any parting thoughts for us before
we wrap up?

Michael: Hmm. . . Well, my parting thought would be to
encourage people to take a look at the Lifeboat Foundation.
I am the fund-raising director for the Lifeboat Foundation
so I encourage people to consider donating. It's
lifeboat.com. And anyone that's concerned about the risks
of nanotechnology should, uh, there's something you can
do, not just sitting there, donating to an organization that's
exclusively focused on dealing with it.

Phil: So if you want to be involved with these kinds of issues,
that's a great place to start. We will be providing links
to Lifeboat Foundation as well as World Transhumanist Association
on The Speculist for those who are listening. Go to our
show notes and you can follow the link there, and **do**
make a donation to Lifeboat Foundation. Well, Michael, thank
you very much for being with us this evening.

Michael: Thanks, Phil.

jimf said...

Dale wrote below in "Don't Even Try To Draw Me In":

> [T]here is a world of actually sensible and intelligent
> medical researchers, bioethicists, progressive healthcare
> policy wonks, media theorists, network security wonks,
> [and] computer scientists free from the taint of cybernetic
> totalism. . .

One of whom is, of course, Jaron Lanier. Who has by now, of
course, tangled directly with the True Believers.

From
"A Non-Half-Assed Response to 'One Half a Manifesto'”
Tuesday, Apr 29 2008
by Michael Anissimov
http://www.acceleratingfuture.com/michael/blog/2008/04/a-non-half-assed-response-to-one-half-a-manifesto/
----------------------------------
[quoting Jaron Lanier]

> The dogma I object to is composed of a set of
> interlocking beliefs and doesn’t have a generally
> accepted overarching name as yet, though I sometimes
> call it “cybernetic totalism”. It has the potential
> to transform human experience more powerfully than
> any prior ideology, religion, or political system
> ever has, partly because it can be so pleasing to the
> mind, at least initially, but mostly because it gets
> a free ride on the overwhelmingly powerful technologies
> that happen to be created by people who are, to a large
> degree, true believers.

It isn’t really a dogma. The “interlocking beliefs” include an
interest in evolutionary psychology, heuristics and biases,
statistical inference, and the challenge of human-friendly AI.
[Well, those are Eliezer Yudkowsky's interests, anyway. JF]
It’s unfair to call us true believers, because we’re not.
From the Wikipedia entry for _The True Believer_ [by Eric Hoffer]:

> Part of Hoffer’s thesis is that movements are interchangeable
> and that fanatics will often flip from one movement to another.

Singularitarians, like the author of these words, aren’t fanatics,
nor True Believers, and we tend to regard our goals as non-interchangeable.
We are focused on ensuring that general AI is safe to humans,
and that its actions are widely seen as beneficial. The goal
is largely technical. Subcultures sometimes form around groups
of people that work together. A tenuously connected subculture
does pursue safe AI in a unified manner, but we aren’t fanatics.
A fanatic is “one who can’t change his mind and won’t change
the subject”, but I, and others in this camp, are open-minded
and willing to discuss any number of subjects. Surely, I would
bore quite a few people if I continuously went on about the
likely cognitive differences between human-equivalent AIs and
human beings. Still, I think that the way humanity addresses
the AI challenge is a matter of life and death.

I could go on about the differences between AI advocates and
True Believers, but I already posted Steven’s
“Rapture of the Nerds, Not” the other day.

I would very much like to keep a catalog of those calling
Singularitarians “True Believers”, however. Please, if there’s
anyone else in the audience who thinks we are, please step forward.
So far on my list, there’s Dale Carrico, James Hughes, Greg Egan,
and John Smart. I’m here to kick ass, but especially to take names.

It’s exhausting to know so many people who think you’re a
True Believer, but it helps to have confidence in the face
of social ridicule. All the Disney movies I watched as a kid
taught me to believe in myself. At least I have that.
----------------------------------

Well, here's one more for the list -- John Horgan.
http://www.stevens.edu/csw/cgi-bin/blogs/csw/?p=388

jimf said...

JOHN TO MICHAEL: Come on Michael, listen to yourself! We’ve got problems
that threaten us right now! Fundamentalist suicidal religious cults,
collapsing states, proliferating nukes and other deadly weapons in
unstable regions, surging populations in some of those same regions,
global warming and other more tangible forms of pollution. And
you’re fretting about supersmart cyborgs or bots or cyberentities or
whatever, stuff that MAY–and may not–happen within 500 years?
Why not waste your life agonizing over the dangers of time travel
or evil aliens?

Also it pisses me off when you and your ilk–including Kurzweil–accuse
me of “fearing” the Singularity or of merely dismissing it as “weird.”
That’s bullshit. Sure, I make fun of you guys, because I’m trying
to entertain people. But in my Spectrum article and even that crappy
little Newsweek piece I also present specific counterarguments to the
wild extrapolation upon which the Singularity is based. My first
two books also have a detailed critique of the fields you think
will produce the Singularity, including AI, neuroscience, genetics
and so on. You Singularitarians, for all your vaunted cleverness,
display an extraordinary and I can only assume willful ignorance of
the complexities of biology, including how the genetic code produces
bodies and how the neural code produces minds. When someone draws
your attention to these issues, you respond with what you accuse
critics of, ad hominem attacks. There’s the cult-like insularity
and arrogance I talked about before. And that’s why you don’t deserve
to be taken seriously.
----------------------------------

jimf said...

> > [Michael Anissimov wrote:]
> >
> > I would very much like to keep a catalog of those calling
> > Singularitarians “True Believers”. . . So far on my list,
> > there’s Dale Carrico, James Hughes, Greg Egan, and
> > John Smart. I’m here to kick ass, but especially to take names.
>
> Well, here's one more for the list -- John Horgan.
> http://www.stevens.edu/csw/cgi-bin/blogs/csw/?p=388

----------------------------------
"MICHAEL TO JOHN: . . . I think it’s dishonest to pump up the
“cult” meme as a rhetorical device. You could be putting us in
personal danger down the road. Some deep greens already want to kill us:
http://www.greenanarchy.org/index.php?action=viewwritingdetail&writingId=182

If our ideas are so wrong, you should be able to criticize them
without the “cult” label… what about real cults like Scientology,
Raelians, etc? If anything, our “cult” is more analogous to the
enthusiasm that caused the dot com bubble — futurist enthusiasm
about technology.

JOHN TO MICHAEL: Michael, it’s not a rhetorical device. I do indeed
think the Singularity movement, as represented by Kurzweil and Vinge
and others who talk about the end of everything as we know it
real soon, is a cult. A harmful one, as I’ve said, in an age when
science’s reputation is under attack. Cults often coalesce around
these sorts of apocalyptic fantasies, and true believers display
us-vs-them insularity, hostility and arrogance toward non-believers
etc. You seem quite reasonable yourself. You seem to think merely
that AI is gonna happen some day, but then I’d say you’re not a
Singularitarian. And that’s not the sort of prediction that sells
books, generates cover stories, gets the attention of nasty a-holes
like me, etc. John

MICHAEL TO JOHN: A “cult” is a centrally organized, physically in-contact
group that does things like alienate you from your family. A “movement”
is a general ideological thrust. It’s absurd to call a loosely
connected, online movement a “cult” when its members disagree so
thoroughly on the details and aren’t even attempting anything dangerous.

Superintelligence is not a fantasy, it’s a real possibility — the
question is when. It makes sense to worry that superintelligence
could wipe us out — did we not wipe out the Neanderthals? . . .

I do think we could be facing the end of the world as we know it,
due to the threat we face from recursively self-improving AI.
If AI is very difficult and takes centuries — great — that gives
us more time to prepare to ensure that it’s programmed to be human-friendly.

The extreme fuzziness within the “Singularity movement” belies your
claim that it is a cult. I am right in the thick of it — I’ve been
promoting Singularity-related ideas for a decade. If there were
a cult, I would be in its leadership. But there is no such thing.
[Michael, of course, **fronts** for the "leadership". He does
not claim to be a Bear of Great Brain himself. JF]

. . .

JOHN TO MICHAEL: Michael, you protest too much. My guess is that you’re
going thru a crisis of faith, as well you should. You’re obviously
a very smart, knowledgeable guy. Is this really what you want to spend
your life on? I asked Eliezer the same thing on Bloggingheads, and I
meant it, it’s not a rhetorical ploy. And I mean it when I say that
the Singularity is giving science a bad name, because it’s not
based on a rational appraisal of current science. If it’s any
consolation, I also consider Christianity, Buddhism, Islam,
psychoanalysis and lots of other belief systems to be irrational
and hence cultish. They just have more adherents than you do.
Go ahead and keep the faith, but don’t blame others who find your
faith absurd and wasteful. John

MICHAEL TO JOHN: Yes, among other things, this is what I want to
spend my life on. . . When superintelligence does come about,
it could be a big deal, as in having the potential to threaten all
humanity. . .

Anne Corwin said...

Dale wrote: "But, again, if a person of faith tries to convert me, tries to insist their faith is not a faith but a kind of scientific or policy practice, or if their faith has an organizational life with anti-democratizing impacts..."

For me this is the thing as well... I think of it as being a kind of "sticking things in the wrong context". Lots and lots of things are fine in their proper domains, but when you try and take them out of those domains and shove them into others, they make no sense.

(To invoke an analogy, some of what I see in superlative futurology looks like a case of someone on a Star Trekkian Holodeck trying to claim that the holograms are REAL and could very easily just be taken off the holodeck if only you'd stop being such a pessimist about it. Granted this did happen in at least one TNG episode, but still. I don't think there's any denigration to a hologram to call it what it is, and appreciate it in its proper context, but then switch it off when it's time to do something off in actual reality.)

jimf said...

In a nutshell:

http://www.acceleratingfuture.com/michael/blog/2009/08/a-boring-disagreement/
-----------------------------------
[Michael Anissimov writes:]

My disagreement with Dale Carrico, Mike Treder, James Hughes, Ray Kurzweil,
Richard Jones, Charles Stross, Kevin Kelly, Max More, David Brin, and many
others is relatively boring and straightforward, I think. It is simply this:
I believe that it is possible that a being more powerful than the entirety
of humanity could emerge in a relatively covert and quick way, and they don’t.
A singleton, a Maximillian [as in _The Black Hole_? The robot, presumably,
not the Schell], an unrivaled superintelligence, a transcending upload, whatever
you want to call it.

If you believe that such a being could be created and become unrivaled,
then it is obvious that you would want to have some impact on its motivations.
If you don’t, then clearly you would see such preparations to be silly
and misguided.

Why do people make this more complex than it needs to be? It has nothing
to do with politics. It has everything to do with our estimated probabilities
of the likelihood of a more-powerful-than-humanity being emerging quickly.
I am practically willing to concede all other points, because I think
that this is the crux of the argument. Boring and simple, if I am indeed correct.

I am fairly confident that, at this point in history, superintelligence
is the MacGuffin — the key element that determines how the story of humanity
will go. I could be entirely wrong, of course, but that is my current position,
and it is derived from cogsci and economics-based arguments about takeoff
curves, not political nonsense. If it is wrong, it should be entirely simple
to refute the hard takeoff hypothesis at the locus of cogsci and
economics-based arguments rather than political or sociological arguments.
-----------------------------------