Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Saturday, July 14, 2012

My Little Steampony Singularity


PZ Myers had this to say (among other more substantive, less amusing things) earlier today on the sooper-genius of Singularitarians:
If singularitarians were 19th century engineers, they’d be the ones talking about our glorious future of transportation by proposing to hack up horses and replace their muscles with hydraulics. Yes, that’s the future: steam-powered robot horses. And if we shovel more coal into their bellies, they’ll go faster!
Just as a reminder, from my own most recent of many singu-skewerings:
I think Moore's "Law" is a skewed perspectival effect and that even on its own terms, to quote Jaron Lanier, "As processors become faster and memory becomes cheaper, software becomes correspondingly slower and more bloated, using up all available resources." I think "accelerating change" is just what neoliberal precarity and neoconservative militarism looks like from the perspective of its beneficiaries (or those dupes who wrongly fancy themselves its potential beneficiaries). I think the real substance of the Turing Test is that the attribution of intelligence to non-intelligent artifacts results in artificial imbecillence among the humans who fall for it. I think that those who are really materialists about consciousness should have to grant that all actually existing intelligence so far worthy of that name has been incarnated in organismic material brains and in social struggles in material history and that those who are really materialists about information should have to grant that all information is non-negligibly instantiated on material carriers and that "cyberspace" is run on coal fires and accessed on toxic devices made by slave labor in exploited regions of the world. Maybe alien or non-biological modes of consciousness and intelligence are logically possible, but that doesn't mean that it won't turn out to make sense to come up with a different word than "consciousness" or "intelligence" that does justice both to what it is and who we are. Maybe forms of narrative selfhood instantiated on silicon can be eternalized (not that there is any reason to think so if we are using actually existing computers as our reference point), but that doesn't mean we can "migrate" our own organismically materialized consciousness onto a different substrate without violating it or that we can extend the narrative form of our own selfhood beyond its present bounds without losing its integrity. Just declaring techno-immortality or super-intelligence a matter of how things are now, only Better, only Longer, only More is a way of refusing to grasp what intelligence, consciousness, selfhood actually materially are, and then plugging the hole of that refusal with a bunch of infantile fears and fantasies and pretending that somehow this constitutes Very Serious intellectual activity…. Singularitarianism [is] a conceptually confused, scientifically superficial, emotionally infantile, politically pernicious farrago of ill-digested science fiction conceits and hyperbolic corporate press releases and wooly theology and shrieking id[.]

8 comments:

Summerspeaker said...

I don't think PZ Myers is saying the same thing you are, beyond that Singularitarians are t3h st00pid. Such 19th-century engineers would have gotten the substance of contemporary transportation correct (minus the glorious part) if not the mechanism. Myers' argument in the linked piece resembles the position of friendly-AI folks, who often question the reverse-engineering model.

I'm also amused to see a biologist mocking the technique of hacking things apart to understand them. It makes me smell the formaldehyde again.

Dale Carrico said...

Robot Cultism sure makes people dumb.

jimf said...

To be fair, there are more sober analyses of the problem than
the recent bit of naive cheerleading on freethoughtblogs.com
posted by Chris Hallquist that was lampooned by P. Z. Myers.

For example:

"Does Personal Identity Survive Cryopreservation?"
by "Mike Darwin" (Mike Federowicz)
http://chronopause.com/index.php/2011/02/23/does-personal-identity-survive-cryopreservation/

The gist of Darwin's (very technical) article is that while it
is unlikely, as Myers points out, that the functioning state of
a living brain could be captured by any instantaneous scanning
process (a la a Star Trek transporter), enough information might
nevertheless be preserved in a properly vitrified brain to contain
the subject's personal identity, in some meaningful sense.
**Reconstructing** a duplicate brain (either a
biological one, or a software simulation) from
that information is a different question, of course, and Darwin
properly leaves that problem to the future (as cryonicists always have).

And if you believe, as Kurzweil and others seem to, that computers
(even supercomputers, by today's standards -- the next or later generation
Blue Gene exascale processor, let's say) and robotic devices carrying
them, together with high-bandwidth communication links between
them, will be shrunk to sizes smaller than human neurons, and that
billions of them could be injected into a working human brain
and powered somehow (and cooled somehow, so that the brain doesn't
get cooked), then something like the Moravec Transfer becomes
plausible at least as a thought experiment (though the technology
to make that work seems pretty damned implausible at this point in
time). Each nanoscale exaprocessor/robot
might be able to hook up to, monitor, and finally mimic a given
neuron (or Edelmanian neuronal group of the "primary repertoire",
or whatever) well enough so that it could make a permanent connection
to adjoining biological neurons while a pruning nanobot disconnects the
hapless biological neuron and aspirates it into a drainage tube leading
to the sink. Rinse and repeat for all the 100 billion neurons in
a human brain. The subject might not, in theory, even notice anything happening
(other than, you know, being connected up to a bundle of tubes
and wires, having a pounding headache, and having his or her
head immersed in a bucket of ice).

Something like this is used in some of Greg Egan's stories ("Learning
to Be Me" primarily). In that story (and in others where the technology is
merely alluded to), something called an "Ndoli Dual" (or "Jewel"
as it's nicknamed) is inserted into a child's head and at some
point near adulthood, the duplicate takes over the body. (The
whole brain, or the cerebral cortex, or something, being disconnected
at that point -- whether it gets flushed down the sink, or
routed into the gut, like a post-nasal drip, to end up in the
toilet, I can't remember. ;-> )

Anonymous said...

I always thought Kurzweil is smart, and someone smart like him should understand the problems involved in pumping a brain full of clanking diamondoid machines and somehow "interfacing" with it -- both in general and specifically in the early 2020s.

And how can anyone believe in something like the Moravec transfer? I don't just mean that it's ridiculous, but it's also so overkill: Replacing every neuron with a 'nanobot' that acts like it? Why not just slice the brain, scan it, edge-detect the slices, build a connectivity graph out of that, and run it in some kind of neuromorphic machine? That sure is far closer to reality than the diamond machines that will never be.
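
For concreteness, a toy version of that pipeline can be sketched in a few lines of Python -- with synthetic arrays standing in for micrograph slices, a threshold-and-label pass standing in for the edge detection, and overlap between consecutive slices standing in for tracing a process through the stack. Every number in it is invented for illustration:

    # Toy "slice, scan, edge-detect, build a connectivity graph" pipeline.
    # Purely illustrative: the "slices" are synthetic blobs, not micrographs.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(42)

    def synthetic_slice(n_blobs=6, size=64):
        """A fake cross-section: a few bright blobs on a dark background."""
        img = np.zeros((size, size))
        for _ in range(n_blobs):
            y, x = rng.integers(8, size - 8, size=2)
            img[y - 4:y + 4, x - 4:x + 4] = 1.0
        return ndimage.gaussian_filter(img, sigma=1.5)

    def segment(img):
        """Threshold and label connected regions (the 'edge-detect' step)."""
        labels, count = ndimage.label(img > 0.25)
        return labels, count

    slice_a, slice_b = synthetic_slice(), synthetic_slice()
    labels_a, n_a = segment(slice_a)
    labels_b, n_b = segment(slice_b)

    # Link region i in slice A to region j in slice B wherever they overlap,
    # as a crude proxy for the same process continuing through the stack.
    wiring = {}
    for i in range(1, n_a + 1):
        for j in range(1, n_b + 1):
            if np.logical_and(labels_a == i, labels_b == j).any():
                wiring.setdefault(("A", i), []).append(("B", j))

    print(wiring)   # bare topology only -- no synaptic weights in sight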

jimf said...

> [H]ow can anyone believe in something like the Moravec transfer?
> I don't just mean that it's ridiculous, but it's also so overkill

Well, the attraction of the Moravec Transfer is primarily
philosophical.

All scenarios for "uploading" or copying a human brain before Moravec's
had involved duplicating a brain. Even in the original
_Star Trek_ series we had Roger Korby's (and Captain Kirk's) android
duplicates in "What Are Little Girls Made Of?" (Robert Bloch)
and later in TNG Riker getting duplicated in a "transporter
accident" and so on.

The problem with uploading or immortalization via wholesale
duplication is that **somebody** still has to die -- the one still
stuck with the organic body, presumably. Whether you consider both
entities after the process "duplicates" or consider one the
"original" and one the "duplicate", **one** of them is still headed
for the grave in the usual way.

The coolness of the Moravec Transfer (setting aside its
practical implausibility) is that -- at least at our
**current** level of understanding of biological nervous
systems (and even that might turn out to be wrong) --
it seems plausible (at least as a **thought experiment**)
that a brain's physical substrate could
be replaced neuron by neuron without the overall
consciousness supported thereby ever being interrupted
or even disturbed, let alone duplicated. So there's
just one "entity" throughout the whole process,
anywhere on the spectrum -- the same person whether the
brain is 1% nanobots and 99% organic, or 50/50, or
99% nanobots and 1% organic, or finally 100% nanobots
and 0% organic. The **same person** is sitting there
even after an entire brain has been flushed down the
sink, neuron by neuron. No **person** ever dies (individual,
isolated neurons having no consciousness or personhood).
So the thought experiment goes, anyway.
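
The gradual-replacement intuition is easy to make concrete in a toy simulation, for whatever a cartoon is worth: the "neurons" below are just tanh units wired by a random matrix, and the "nanobot" replacements copy their weights exactly, so the network's behavior is preserved at every stage of the swap-out by construction -- which is exactly what the thought experiment asserts about the person:

    # Cartoon Moravec Transfer: swap units out one at a time for
    # functionally identical "mimics" and check that the network's
    # behavior never changes. Not a brain model in any sense.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 20
    W_bio = rng.normal(scale=0.5, size=(N, N))   # "organic" wiring
    W_nano = W_bio.copy()                        # the mimics' identical wiring
    replaced = np.zeros(N, dtype=bool)           # which units are nanobots

    def run(replaced, steps=50):
        """Run the network; each unit uses whichever substrate hosts it."""
        W = np.where(replaced[:, None], W_nano, W_bio)   # row i = unit i's inputs
        x = np.full(N, 0.1)
        trajectory = []
        for _ in range(steps):
            x = np.tanh(W @ x)
            trajectory.append(x.copy())
        return np.array(trajectory)

    reference = run(replaced)                    # all-organic behavior

    for unit in range(N):                        # one neuron per "operation"
        replaced[unit] = True
        assert np.array_equal(run(replaced), reference)

    print("100% replaced; behavior identical at every stage.")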

Again, somebody who knows (now) or will know (in 25 years) more
about how biological nervous systems actually function (whether
subcellular structures are necessary for the actual
signal processing going on in the brain, for example,
and are not just there because, like all living stuff, neurons
have to build proteins from transcribed DNA and make ATP from
glucose, and all the rest of it, just to exist at all) might
have other reasons (besides practicality) to say that a
Moravec Transfer, as envisioned above, simply wouldn't work.

jimf said...

> . . .a brain's physical substrate could
> be replaced neuron by neuron without the overall
> consciousness supported thereby ever being interrupted
> or even disturbed, let alone duplicated. . .

Note, however, that even granting a Moravec Transfer could
work with respect to the replaced neurons, those nanobot-neurons
would still have to respond to and generate the chemical
and electrical signals that bind an organic brain to its
body. The hormones of the endocrine system, and so on and
so forth. So there would still have to be a sophisticated
"interface" to an organic body.

Unless you went on to simulate a body.

And then you'd need a sophisticated "interface" to the real
world.

Unless you went on to simulate the world.

Greg Egan recognizes all this in _Permutation City_, of
course.

;->

jimf said...

> [W]hy not just slice the brain, scan it, edge-detect the slices,
> build a connectivity graph out of that, and run it in some kind of
> neuromorphic machine?

Why not, indeed? Well, getting a basic wiring diagram (what Gerald Edelman
calls the "primary repertoire") is just part of the problem.
Those synapses, in a living brain, are **weighted** too -- they contribute
positively or negatively in various degrees to the likelihood of the postsynaptic
neuron firing (and even, apparently, at least in some cases, to the
behavior of the presynaptic neuron -- surprises like that are always turning up,
it seems). That's what Edelman calls the "secondary repertoire", and
it's presumably a substantial part of whatever "memory" consists
of. If "edge-detect[ing] the slices" isn't enough to recover that
information, you're probably screwed.
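
A toy comparison makes the point: two networks with exactly the same wiring diagram but different synaptic weights behave nothing alike. (Threshold units and made-up numbers, purely for illustration.)

    # Same wiring diagram, different weights: the "primary repertoire"
    # alone underdetermines behavior.
    import numpy as np

    adjacency = np.array([[0, 1, 1],      # who talks to whom -- the wiring
                          [1, 0, 1],      # you might hope to recover from
                          [1, 1, 0]])     # edge-detected slices

    weights_a = adjacency * np.array([[0.0,  0.9,  0.9],
                                      [0.9,  0.0,  0.9],
                                      [0.9,  0.9,  0.0]])   # all excitatory

    weights_b = adjacency * np.array([[0.0, -0.9,  0.9],
                                      [0.9,  0.0, -0.9],
                                      [-0.9, 0.9,  0.0]])   # mixed inhibition

    def run(W, steps=8):
        x = np.array([1.0, 0.0, 0.0])            # poke the first unit
        states = []
        for _ in range(steps):
            x = (W @ x > 0.5).astype(float)      # crude threshold "firing"
            states.append(tuple(x))
        return states

    print(run(weights_a))   # quickly saturates: everything firing
    print(run(weights_b))   # a rotating pattern instead -- same wiring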

Also "some kind of neuromorphic machine" -- nobody knows **what** the
hell that would be! ;->

> . . .it's also so overkill. . .

In other words, nobody knows at this point what's "overkill".

Anonymous said...

>Also "some kind of neuromorphic machine" -- nobody knows **what** the
hell that would be! ;->

I imagine a bunch of processors that have a hardware (i.e., faster) implementation of some of the many models of neurons, hooked up to a router that emulates the connectivity.
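
Something along those lines is easy to caricature in software: simple leaky integrate-and-fire units standing in for the hardware neuron model, and a routing table standing in for the interconnect. All the constants below are invented for illustration:

    # Cartoon "neuron cores + spike router": LIF units plus a routing table.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 50

    class LIFCore:
        """One 'hardware' neuron: leaky integrate-and-fire."""
        def __init__(self):
            self.v = 0.0
        def step(self, input_current):
            self.v = 0.9 * self.v + input_current   # leak, then integrate
            if self.v >= 1.0:                       # threshold crossed
                self.v = 0.0                        # reset
                return True                         # emit a spike
            return False

    # Routing table: neuron i -> list of (target, weight) pairs.
    routes = {i: [(int(t), float(rng.normal(0.3, 0.1)))
                  for t in rng.choice(N, size=5, replace=False)]
              for i in range(N)}

    cores = [LIFCore() for _ in range(N)]
    inbox = np.zeros(N)                             # spikes delivered this tick

    for t in range(100):
        inbox[:10] += 0.3                           # drive a few input neurons
        spikes = [i for i, core in enumerate(cores) if core.step(inbox[i])]
        inbox[:] = 0.0
        for i in spikes:                            # the "router" at work
            for target, weight in routes[i]:
                inbox[target] += weight
        if spikes:
            print(f"t={t:3d} spiking: {spikes}")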

> Why not, indeed? Well, getting a basic wiring diagram (what Gerald Edelman
> calls the "primary repertoire") is just part of the problem.
> Those synapses, in a living brain, are **weighted** too -- they contribute
> positively or negatively in various degrees to the likelihood of the postsynaptic
> neuron firing (and even, apparently, at least in some cases, to the
> behavior of the presynaptic neuron -- surprises like that are always turning up,
> it seems).

Well, the Izhikevich model has a rather small number of parameters, which might presumably be recovered from electron microscopy plus some extra stains and whatnot. Although it's just a spiking-neuron model, and it doesn't have the complexity of the more detailed models Myers talks about, it replicates spike patterns just fine. And it certainly doesn't require you to measure the concentration of ions across every square micron of membrane.
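
For concreteness, the whole model is two update equations and four parameters per cell. Here it is in a few lines of Python, using Izhikevich's published "regular spiking" parameter values; the injected current and simulation length are arbitrary:

    # Izhikevich (2003) spiking-neuron model, "regular spiking" parameters.
    a, b, c, d = 0.02, 0.2, -65.0, 8.0      # the four per-neuron parameters
    dt = 0.5                                # ms per Euler step
    T = 1000                                # total simulated ms

    v = -65.0                               # membrane potential (mV)
    u = b * v                               # recovery variable
    spike_times = []

    for step in range(int(T / dt)):
        t = step * dt
        I = 10.0 if t >= 100 else 0.0       # step current switched on at 100 ms
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                       # spike: record it, then reset
            spike_times.append(t)
            v, u = c, u + d

    print(f"{len(spike_times)} spikes; first few at {spike_times[:5]} ms")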

Then again, whether it preserves memories is another thing.