The art of the impossible
So the question of last weekend very quickly became: how do you tell plausible fantasy from wild possibility? It's a good conversation starter.
One friend had a simple assessment: "They are all nuts," he said, after glancing over the weekend's program. The problem is that 150 years ago anyone predicting today's airline economy class would also have sounded nuts.
Last weekend's (un)conference was called Convergence, but the description tried to convey the danger inherent in crossing the streams. The four elements that were supposed to converge: computing, biotech, cognitive technology, and nanotechnology. Or, as the four-colored conference buttons and T-shirts had it, biotech, infotech, cognotech, and nanotech.
Unconferences seem to be the current trend. I'm guessing, based on very little knowledge, that it was started by Tim O'Reilly's FOO camps or possibly the long-running invitation-only Hackers conference. The basic principle is: collect a bunch of smart, interesting, knowledgeable people and they'll construct their own program. After all, isn't the best part of all conferences the hallway chats and networking, rather than the talks? Having been to one now (yes, a very small sample), I think in most cases I'm going to prefer the organized variety: there's a lot to be said for a program committee that reviews the proposals.
The day before, the Center for Responsible Nanotechnology ran a much smaller seminar on Global Catastrophic Risks. It made a nice counterweight: the weekend was all about wild visions of the future; the seminar was all about the likelihood of our being wiped out by biological agents, astronomical catastrophe, or, most likely, our own stupidity. Favorite quote of the day, from Anders Sandberg: "Very smart people make very stupid mistakes, and they do it with surprising regularity." Sandberg learned this, he said, at Oxford, where he is a philosopher in the Future of Humanity Institute.
Ralph Merkle, co-inventor of public key cryptography, now working on diamond mechanosynthesis, said to start with physics textbooks, most notably the evergreen classic by Halliday and Resnick. You can see his point: if whatever-it-is violates the laws of physics it's not going to happen. That at least separates the kinds of ideas flying around at Convergence and the Singularity Summit from most paranormal claims: people promoting dowsing, astrology, ghosts, or ESP seem to be about as interested in the laws of physics as creationists are in the fossil record.
A sidelight: after years of editing The Skeptic, I'm tempted to dismiss as fantasy any claim whose proponents tell you that it's only your fear that's preventing you from believing it. I've had this a lot: ghosts, alien spacecraft, alien abductions - apparently these things are happening all over the place and I'm just too phobic to admit it. Unfortunately, the behavior of a belief's adherents just isn't evidence that the belief is wrong.
Similarly, an idea isn't wrong just because its requirements are annoying. Do I want to believe that my continued good health depends on emulating Ray Kurzweil by taking 250 pills a day and getting a load of injections weekly? Certainly not. But I can't prove it's not helping him. I can, however, joke that it's like those caloric restriction diets - doing it makes your life *seem* longer.
Merkle's other criterion: "Is it internally consistent?" This one's harder to assess, particularly if you aren't a scientific expert yourself.
But there is the technique of playing the man instead of the ball. Take Merkle's own current project, diamond mechanosynthesis: put simply, he's busy designing the tools that will be needed to build things atom by atom when - if - molecular manufacturing becomes a reality. If that (or his cryonicism) sounds nutty, well, Merkle has earned the right to steam ahead unworried: his ideas about cryptography, widely dismissed at first, have become part of the technology we use every day to protect ecommerce transactions.
Analyzing language is also open to the scientifically less well-educated: do the theory's proponents use a lot of non-standard terms that sound impressive but on inspection don't seem to mean anything? It helps if they can spell, but that's not a reliable indicator - snake oil salesmen can be very professional, and some excellent, well-educated scientists can't spell worth a damn.
The Risks seminar threw out a useful criterion for assessing scenarios: would it make a good movie? If your threat to civilization can be easily imagined as a line delivered by Bruce Willis, it's probably unlikely. It's not a scientifically defensible principle, of course, but it has a lot to recommend it. In human history, what's killed the most people while we're worrying about dramatic events like climate change and colliding asteroids? Wars and pandemics.
So, where does that leave us? Waiting for deliverables, of course. Even if a goal sounds ludicrous, working towards it may still produce useful results. Aubrey de Grey's project of "curing aging" by developing techniques for directly repairing damage (SENS, for Strategies for Engineered Negligible Senescence) seems a case in point. And life extension is the best hope for all of these crazy ideas. Because, let's face it: if it doesn't happen in our lifetime, it was impossible.
Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to firstname.lastname@example.org (but please turn off HTML).