Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Friday, September 07, 2007

More on Superlativity (The Exchange Continues)

My conversation with friend of blog Michael Anissimov has continued on, and I am enjoying the way it is taking us to what look to me like essential differences of and quandaries for technodevelopmental politics. I do hope people can look beyond the pyrotechnical sparring and focus on the issues illuminated by the sparks.

Michael recommends that I have a look at a short text, Technology: Four Possible Perspectives, most of which I quote here:
1. Technology will lead to extremely good outcomes (technophile)
2. Technology will lead to extremely bad outcomes (technophobe)
3. Technology will lead to outcomes that are on the whole neutral (technonormal?)
4. Technology will lead to extreme outcomes, either good or bad (technovolatile?)

I happen to consider all four stances offered here rather unserious. That is because they seem to me to reduce actual technodevelopmental complexity and dynamism to something of a monolith ("technology [in general]"). Worse, they seem to go on to invest that monolith with a kind of intentionality and agency ("will [automatically?] lead"). This gesture is all the more worrisome since it seems (as happens so often among Superlative Technocentrics) to go hand in hand with "neutral" (so-called), "autonomous," "apolitical" views of technodevelopmental change that would diminish the agency (or at any rate our awareness of and responsiveness to the agency) of the actually-existing actors on that terrain: the people who collectively research, study, invest, invent, test, publish, edit, teach, criticize, regulate, facilitate, promote, distribute, apply and appropriate technoscientific outcomes. As it happens, none of the -- presumably "only" -- "four possible stances" offered up in the recommended formulation describes my own position and, hence, I daresay the title of the recommended post may need revision.

Michael chides me for expressing what he paraphrases as the accusation that "[Transhumanists] are all 'feeding an enormous amount of irrational delusive careless and damaging thinking' according to you, which is too bad."
Needless to say, I say this very thing on a regular basis and have done for years, so I can't quite see how this would surprise anyone. What I will insist on, however, is that when I say these things it isn't an ad hominem insult but a conclusion supported by reasons (which, as you all know by now, I regularly reiterate in my writing).

In a nutshell, lately I have been distinguishing two broad perspectives on technodevelopmental politics (surely there are more, but these are the two that preoccupy me in such discussions for now). First, some people assume a sub(cult)ural perspective on technodevelopmental politics, that is to say, they assume an identity politics frame directing itself to the implementation of particular concrete futural scenarios with which they personally identify. I distinguish this sub(cult)ural perspective from technoprogressive perspectives that assume instead a democratizing frame devoted to what Jamais Cascio would describe as "open futures," and the substance of which demands, in my view, social struggle directing itself to the safest, fairest, most consensual, most democratically responsive distribution of technodevelopmental risks, costs, and benefits possible. There is, it seems to me, an undue linearity, elitism, and utopianism that sub(cult)ural perspectives lend themselves to, and I would suggest that sub(cult)ural technocentricity is the likeliest political (a better word might be depoliticizing) expression of what I call Technocentric Superlativity. (That's a lot of jargon, I know, but many long critiques are being telescoped here in a rather breezy way since this is a conversation that Michael and I have been having a long time by now.)

Michael continues: I just find it oddly fascinating that someone who doesn't believe that technology can cause extreme, sweeping, transformative change ("Superlative" in your rhetoric) obviously identifies with a community that does.

That simple overgeneralization ("radical change is likely") is, of course, not what I mean by the term "Superlative" in my own (oft-delineated) usage. Again, in a nutshell: [1] Superlativity is a focus on an idealized farther-future over what I would regard as a more useful focus on futures emerging from and shaped by proximate problems and ongoing problem-solving. [2] Superlativity also tends to be a discourse invested with hyperbolizing and transcendentalizing significances of a kind once associated primarily with religious worldviews and still vulnerable to appropriation by social formations with the trappings of authoritarian religiosity (like cults). To the extent that Superlativity [3] solicits personal identification with rather than deliberation over particular futural scenarios (and this is very regularly the case in my view) it seems to be attractive to certain socially marginalized (many of them especially vulnerable to True Belief) and explicitly anti-social (among them, market fundamentalists and technocratic elitists) personalities. I find this very interesting, and more interesting still are the ways in which [1], [2], and [3] tend to support one another.

It is abundantly clear from my writings that I consider technodevelopmental social struggle enormously sweeping and transformative in many of its historical and current and likely formations (an observation in any case so obvious that it hardly even qualifies as an insight in my view), contrary to Michael's impression, gleaned who knows how from who knows what writings of mine. And you can trust me when I say that I don't identify with "transhumanists" in the least (even if I count a few among my friends and colleagues), however "obviously" it might seem to them that I really truly must do so, presumably just because I happen to take them seriously enough to worry about the impact they have on technodevelopmental policy language, efforts at education and organizing, and so on.

The point of saying this sort of thing is not to indulge in facile name-calling, however much folks who feel targeted by these critiques may wish to dismiss them as such, but to undermine tendencies to Superlativity that seem to me to inhere in technocentricity (any social worldview defined by a focus on technodevelopmental questions) at a time when technocentricity is demanded of serious progressives in a changed and changing world. I say these things to transhumanists and other futurists in particular, by the way, precisely because I think many of them are quite open to these critiques, would benefit from them, and once enlightened would make better technoprogressive allies. I should have thought all that would be obvious by now.

Solutions to poverty, neglected disease and militarism will require a *combination* of political and technological solutions, Michael insists.

But the "technological solutions," so-called, are already available to eliminate poverty (and in any case technical problem solving is already ineradicably political), meanwhile the barriers to the solution of these problems are indeed profoundly political questions of laziness, greed, short-sightedness, parochialism, and ruthless incumbency. New technologies will not alter that basic state of affairs one bit. Technodevelopmental outcomes express politics, they don't circumvent them. Until my Superlative Technocentric interlocutors grasp and come to terms with such basic propositions it is, I fear, rather difficult to take them very seriously for very long.

Michael proposes that: One either accepts the possibility [of Molecular Manufacturing in a strict Drexlerian construal] or not, and in your case, it seems you don't.

But the fact is that this particular logical alternative is not one I invest with much in the way of significance, personally. Is the particular scenario that preoccupies Michael's attention here logically possible, or at any rate not logically disallowed -- as it certainly is, for now, practically unavailable -- given our present knowledge?

Sure. But to be a wee bit provocative here: So what?

There are a bazillion equally logically possible outcomes that seem to me as likely as or likelier than this one at the level of detail where life is actually lived and will continue to be. And I am, in any case (as a rhetorician and technocritical theorist by trade, recall) far more interested personally in the fascinating displays of loose argumentation, the surrogate commentaries on contemporary circumstances, the symptoms of social alienation, collective wish fulfillment, authoritarian religiosity, and so on that tend to freight Superlative Technology discourses, than I could possibly be interested in making promises I can't keep or listening to others make such promises where technodevelopmental outcomes are concerned.

Michael admits, I find it odd that you recommend the materials at CRN that are obviously so "Superlative".

But there is nothing mysterious in all this. I simply see more in the materials at the Center for Responsible Nanotechnology than he seems to do. For all I know Mike and Chris (the founders and directors of the Center) wouldn't agree with me at all about the texts of theirs and the discussions they have facilitated that seem to me to be the most valuable ones. Be that as it may, I certainly don't agree that everything of interest discussed at CRN is properly identified with Superlativity in my sense of the term, even if some of it is (as some of my own writing could no doubt justly be criticized for as well). I find the things Mike and Chris write about quite interesting on a regular basis, but it may be that I simply skip right past some discussions out of complete lack of interest which are the very passages that for Michael define the very spirit of the place altogether. Different things interest different people, which is nothing odd in the least but, to the contrary, surely an obvious commonplace.

9 comments:

Mike Treder said...

Dale, I'm glad I wasn't the only one to see those "Four Possible Stances" as problematic. My comment over on that blog says:

I strongly urge singularitarians, transhumanists, technophiles, technophobes, and everyone else to get off the technological determinism bandwagon. Take responsibility for leading — or at least influencing — societal choices about technology, instead of passively observing a supposedly preordained outcome.

Anonymous said...

You're reading way too much into a wording that I didn't agonize over much because it didn't matter much for the point I wanted to make in that particular post (which is that many transhumanists are closer to a mix of technophilia and technophobia than to pure technophilia). Yes, I could have emphasized that there are many different technologies that should each be evaluated individually, and I could have emphasized that the consequences of technological change will depend on people and the choices they make. I just chose to be concise, hoping all that would be implicit.

Note that the post isn't titled "the only four possible stances", and I said explicitly that other mixes should also be options. These are just four particular poles in a spectrum of views.

It does seem to me, however, that any view of the future that makes predictions as to the future state of the world (together with some sort of criterion for evaluating outcomes as more or less desirable) also makes predictions as to the desirability of the effects of advanced technology, and so lets itself be described in terms like I used in my post (or other mixes, e.g., "technology will probably lead either to very good outcomes or to neutral outcomes"). That such descriptions are crude and not the whole story doesn't mean they're unserious.

Dale, since as far as I can tell you reject all the "utopian"/"apocalyptic"-sounding possibilities, I would place you closest to #3.

Unknown said...

I'm too many years out of academia to feel properly geared to grapple with Dale's arguments at his level, so I'll stumble through this and hope that it makes sense.

It seems to me that Dale's argument about "superlative" technologism is not so much a rejection of "utopia" or "apocalypse" as it is a rejection of the notion of simplification. The real world is messy and ambiguous, with unanticipated results and complicated outcomes. Much of the literature that Dale has called "superlative" either glosses over such complexities, or outright declares that they'd no longer apply.

One argument I've seen, for example, is that we needn't worry about access to water and energy, because nanotech will allow us to pave the deserts with solar panels and use the energy for desalination, so as to turn megagallons of ocean water into pure drinkable H2O. As Dale would point out, access to water and energy isn't really a technological problem, it's a political and economic problem; that is to say, this is something that doesn't take nano to solve, as using nano wouldn't solve it to begin with. Similarly, such a plan ignores the environmental impact of paving the desert and desalinating so much ocean water; in the real world, it wouldn't get past the first EIR. In short, the superlative view seems to forget that actions have consequences, consequences that aren't mitigated just by applying more cowbell.

So (as I understand him) Dale isn't arguing that molecular nanotech won't happen or couldn't do wondrous things; he's arguing that the nanotechnological wondrous things won't in and of themselves be wondrous solutions, are likely to cause their own complex repercussions, and that focusing on the wonderment to the exclusion of thinking about the context leads to conclusions about policy and behavior that are at best irrelevant, can be counter-productive, or may even be dangerous.

Anonymous said...

As I see it, there is no a priori law that says no political problems have technological solutions, just as there is no a priori law that says all political problems have technological solutions. Looking at these things on a case-by-case basis (like by arguing about the environmental impact of nano-solar power, or whether transhuman AI would face the same obstacles as human intelligence) seems to me to be much more helpful than dismissing people's views out of hand as involving "robot armies" or "nanosanta".

As for the real world being messy and full of unintended consequences -- absolutely, and I think most of what Dale refers to as "superlative technology discourse" recognizes this. But there are arguments why, for example, transhuman AI might be more capable of managing messes and predicting repercussions than humans are, if we get it right. It's worth dealing with these arguments on the object level rather than just mocking them at the meta level for uncoolness or disliked political implications or anything like that.

Dale Carrico said...

You're reading way too much into a wording...

There is no more commonplace complaint about theoretical readings of texts. Perhaps it will help to realize that from my theoretical perspective the meaning of a text is far from exhausted by trotting out the author's intentions for that piece (to the extent that these intentions are even determinable in the relevant sense, even, sometimes, for the author herself), that the work that texts do depends deeply also on the context of their production and on the contexts of their reception and so on.

I am not making the claim that every text means every possible thing or some such facile relativist point you may want to attribute to me upon hearing this, but I am saying that just as authors rely on the archive of past creativity they rely as well on the ongoing collaborative expressivity of readers for whatever force, meaning, relevance, and abiding life their texts attain to.

What you intended to communicate in an off-the-cuff way in your text (believe me, I understand) seems to have resonated with Michael Anissimov enough that he recommended your text to me in the midst of a conversation we were having on technodevelopmental politics. Maybe Michael is reading more into your text as well when he finds it such a useful way to summarize his own sense of the issues, as you would say I am when I say your text symptomizes some problems I grapple with when technology-talk goes "superlative" in various ways.

Writers have less say than they would like as to the ways in which their work will be taken up by the world and the work it will do once it has been released into the dynamic reception of that world. Even if that yields frustrations for me as it does any other writer from time to time, I'll admit that I wouldn't have it any other way. Of such vulnerabilities, pleasures, frustrations, and serendipities is freedom made.

Dale Carrico said...

more helpful than dismissing people's views out of hand as involving "robot armies" or "nanosanta"...

just mocking them at the meta level for uncoolness or disliked political implications

Ridicule is a perfectly appropriate response to the ridiculous.

You would be foolish indeed to fool yourself into thinking you can circumvent this critique of mine by pretending it a facile bit of name-calling or an attribution to Superlative Technocentrics of "uncoolness," whatever that might amount to in this sort of context (I daresay few versions of technocentricity, including my own, manage to rate as "cool" in any general accounting, as such things go).

As for the real world being messy and full of unintended consequences -- absolutely, and I think most of what Dale refers to as "superlative technology discourse" recognizes this.

I don't agree that Superlative Technocentricity recognizes such points at all, except perhaps occasionally in a superficial way that has no discernible impact on its actual analyses, proposals, or rhetoric apart from genuflections to critics who demand a little nuance and social responsibility in arguments in public places.

If "Superlativity" were really as reasonable as you want to assure us it is, I suspect it would sound altogether more reasonable than it does.

[T]here are arguments why, for example, transhuman AI might be more capable of managing messes and predicting repercussions than humans are, if we get it right. It's worth dealing with these arguments...

I completely disagree. Of all the variations of Superlative Technology discourse the Singularitarians offer up the most arrant and damaging foolishness of all. I think the Singularitarians need to get more of a handle on "I" before they bray any more about "AI," and they need to direct their attention to actually-existing malware rather than made up non-risks in which Bad Robot Gods foment ugly apocalypses.

At least the Technological Immortalists -- another Superlative Technocentricity about which I have many critical things to say -- might just manage to increase medical R&D in ways that will improve actually existing healthcare and lifespan for all. But the Singularitarians, I'm afraid, all too often seem to me just this side of batshit crazy.

But, uh, you know, good luck with the whole Robot Rapture thing and stuff.

Unknown said...

Dale, it might be illustrative for you to write up a brief example of a non-"Superlative technology" argument that still recognizes the potential large-scale impacts of the NBIC technologies. You offer a hint of one with the reference to technological immortalists, and I suspect that it would be useful for the various participants in this conversation to see an example of a non-Superlative argument that can't be dismissed as Luddism.

Anonymous said...

Dale, if you believe you have provided a substantive critique of singularity predictions anywhere in these posts, one that's about the world itself rather than about people who hold particular beliefs about the world, I'm genuinely curious as to what you think it is. Just noting that singularitarianism is dangerous under the assumption that its claims are false doesn't count -- I know that, and what I'd like to know is why the claims are false.

I have some objections to your use of the words robot and rapture.

Anonymous said...

BTW -- it's also possible that we'll just have to agree to disagree on what constitutes an "argument", and in that case there's probably just not much point in us talking. Either way is OK with me.