Hey Marc,
Yeah, being a Singularity advocate ain't easy. Luckily, the transhumanist community is radically more tolerant and accepting than most, so I haven't gotten any mail bombs lately. (Transhumanists being the main audience of my Singularity advocacy.)
One of the foundational aspects of Singularity activism is creating a model of how much you expect a Friendly AI to cost in terms of money, time, and brainpower. I realized this from day one, so my model has been under development and revision for quite some time. It turns out that you almost certainly don't need to convince the world of the value of Singularity advocacy in order to make a smarter-than-human, kinder-than-human intelligence a reality.
Eliezer Yudkowsky started off by assuming that it would take a planet-wide open source movement, because, unlike many silly AI researchers of the past decades, he saw the *real size* of the problem of general intelligence. (See the document "Plan to Singularity".) When I first discovered the idea of Friendly AI, my specialty was still nanotechnology, so I had a very fuzzy model of what a "general intelligence" is. (As the vast majority of transhumanists still do.) I assumed that FAI would take a planet-wide open source movement as well, and was indeed quite shocked when I heard that Yudkowsky was starting to think that it could be done by a smaller research team.
But in the past few years, my model has improved radically. I've done a ton of reading in the field of cognitive science, and have a vastly improved theoretical understanding of the latest research on general intelligence and empirical research on its known subsystems. Better brain scanning devices have especially given us plenty of great knowledge. Exponentially accelerating computing power makes things easier too, no doubt. On the WTA-talk mailing list, a poster mentioned an interesting scale for scientific literature that goes like this:
- intraspecialist (papers written by and for scientists within a specific field)
- interspecialist (papers written for scientists in other fields)
- textbook (college textbooks)
- popular science (90% of what you find in bookstores)
- mass-mediated (magazines, newspaper articles)
In the domain of cognitive science, 99.999% of society, including maybe around 95% of all transhumanists, is down in mass-media land. They are forced to rely on guesses, intuitions, and grapevine rumors regarding AI prediction timeframes because they lack personal visualizations of the systems that are being modeled. That is why my papers go on and on and on about the precise differences between human brains and likely AI brains. But the stuff I write only scratches the surface; you have to hit the books - *lots* of them - if you *really* want to see why people like myself, Yudkowsky, and Nick Bostrom are all expecting AI to hit the scene so soon. The fun thing is that once you *do* do the required reading, you converge to practically the same viewpoint on the issue as the 7 or 8 other people (almost all Bayesians, interestingly) who did the reading too. SIAI is also supported by several dozen people who haven't done the cognitive science reading, but I notice that they don't contribute too much.
If we know so much about general intelligence and the more precise details of its known subsystems, then why don't Singularitarians speak up about it a bit more often? The fact of the matter is that most of us are horribly terrified of communicating this knowledge because of the risk of it falling into the wrong minds. I mean, there's LOGI, which some Singularitarians regard as the most dangerous document on Earth. That should be enough to convince the right people to continue reading in the field independently. In fact, I really hope that nothing more along those lines is ever published again.
Anyway, the main point of that whole line of thought is that we don't need to convince the world to build a Friendly AI. A medium-sized circle of regular donors and a super-bright programming team, plus futuristic computing hardware, should be more than enough. Anyone reading this should seriously consider becoming one of those donors, and pay attention to the possible size of the gap between their guess of the difficulty of AI and the more educated guesses of people who spend thousands of hours of their personal time reading up on the topic. *We have no incentive to be overoptimistic*. If I thought AI was going to be really hard, then I would advocate the global open-source movement option, because that would be the most rational way of pursuing the Singularity. Now to respond to some of the specific points you made...
> The thing is, most people are stuck at SL0, and it will take a lot of convincing to move even a small number of people to SL4. In fact, I now even doubt that it's worth the expenditure of energy required to try to persuade anyone.
Eternal life and the elimination of poverty, disease, ignorance, pain, unhappiness, and annoyance are not worth it? A recursively self-improving benevolent Artificial Intelligence, if built tomorrow, could do all of these things very quickly. I notice you mention the alternative of focusing on the technical side rather than the activist side, though. Unfortunately, I worry that at the current rate, we may not have enough resources to implement Friendly AI in time. A small circle of donors and programmers should be all that we need - and as always, anyone reading this should consider being one of those people.
> Persuading this proverbial 'man on the street' would require a major reconstruction of the memetic structure of his mind: from Metaphysics (belief in a rational universe, the Multiverse) through Epistemology (strict rationality, Bayesian reasoning) to Ethics (Altruism). The sort of memetic framework needed might just be well beyond most people.
I don't think that all of this is necessary to create a Singularity activist; I became one before I understood any of that stuff. Although I must admit, possessing the memetic structure you refer to is shockingly powerful. It is *only in the past year* that I have accepted Bayes as the only self-consistent standard of rationality, MWI as the only coherent physics, and volition-based Friendliness as the referent that human altruists throughout history have unknowingly been approximating. And my understanding and continued practice of these complex disciplines is still very much a work in progress.
> So: do you think that the expenditure of time and energy needed to be a 'Singularity activist' is really worth the trouble? Or would energies be better spent simply working on the technical side?
Care to send me the money to work full time on the challenges of Friendly AI? If you did, I might consider it. Money not available? Looks like more activism is needed, then. I'm not even sure I'm smart enough to tackle the technical side. But the Singularity Institute is nearing its fourth anniversary and only one other potential FAI programmer has emerged (maybe two), so the situation may be bleak. In an emergency, people like me might actually be able to help, though I hope it won't come down to that...
> Don't get me wrong, I'm glad you've taken on the activist task, but not everyone has the temperament to be an activist. I myself recently decided I'm not cut out to be a political activist, and so I've stopped arguing about Libertarianism.
Oh, I have the temperament. It's just keeping a roof over my head and food on my plate without wasting time on a conventional career that I'm starting to get worried about.
> Being an activist is tough work. You'll be ridiculed, you'll be abused, etc. It's definitely something that can be hazardous to your health if you upset the wrong people.
And here's where I say exactly what you would expect someone in my position to say...
I'm willing to suffer if I think it will lead to a better world for everyone!