  LongeCity
              Advocacy & Research for Unlimited Lifespans





Many new writings


12 replies to this topic

#1 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 04 May 2004 - 03:24 AM


Hi y'all, I'm back after a three-week retreat from a lot of internet use. I wrote a bunch of stuff; please feel free to check it out or make comments:

Beyond Defaults

My personal catchphrase is: "Never expect the default to be optimal!" Just because it's there doesn't mean we should accept it. The default is just what happens to be around when we get here, but why not try doing better? We apply this reasoning very well in certain contexts and very poorly in others. For example, some people assume that Homo sapiens is an optimal species, or complain about Homo sapiens without offering potential improvements or alternatives. The latter is what this essay is mostly about.

Who Are Singularity Activists?

Long paper describing what "Singularity activists" are. Singularity activists are folks trying to create smarter-than-human, kinder-than-human intelligence, because we view the human mind as one particular type of mind, a mind potentially subject to improvement along intellectual, emotional, and moral axes, among others. We figure that transhuman intelligences, properly constructed, will be far more capable of coming up with ideas to help people with their problems than we are. This is not worship of some speculative higher being, but straightforward common sense. We are not so arrogant as to assume that the human species represents some ideal of inferential, moral, or intellectual optimality.

Shock Level Analysis

Analysis and expansion of a popular classification scale for various types of futurism. The "Future Shock Levels" scale helps one make decisions about which futurological concepts to present to which audiences; for example, you wouldn't want to talk about nanotechnology to someone who isn't familiar with genetic engineering or nuclear fusion. The interesting thing about the Shock Levels is that there seems to be a fairly regular migration from one end of the spectrum to the other, although this is not entirely certain. If this phenomenon does not hold for the whole scale, it at least holds for a major chunk of it.
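
To make the audience-matching heuristic concrete, here is a minimal Python sketch. The level assignments below are illustrative rather than canonical, and the "one level above" rule is a paraphrase of the usual rule of thumb, not a quotation from the scale's author:

    # Toy model of the Future Shock Levels heuristic: decide which
    # futurological concepts to raise with a given audience.
    # Level assignments are illustrative, not canonical.
    SHOCK_LEVELS = {
        0: ["the mainstream present"],
        1: ["virtual reality", "commercial space travel"],
        2: ["interstellar travel", "major genetic engineering"],
        3: ["nanotechnology", "human-equivalent AI", "uploading"],
        4: ["the Singularity"],
    }

    def concepts_to_present(audience_level):
        """Rule of thumb: introduce ideas at most one level above
        the level the audience is already comfortable with."""
        target = min(audience_level + 1, max(SHOCK_LEVELS))
        return SHOCK_LEVELS[target]

    # Someone comfortable with genetic engineering (SL2) can hear
    # about nanotechnology (SL3), but not yet the Singularity (SL4).
    print(concepts_to_present(2))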

Accomplishments of Transhumanist Organizations

A series of overviews of the concrete accomplishments of transhumanist organizations and individuals associated with them. I feel that having a list of these accomplishments would be a good idea, because it's convenient to see them all in one place, plus it will help encourage people to get involved. Nine organizations reviewed so far. Please send me an email if I missed anything.

Transhuman Intelligence and Altruism

A relatively short piece on transhuman intelligence and altruism. A rant of sorts - it winds through several related topics. I'm generally arguing that transhuman intelligence and altruism really need to go together, that altruism is a real thing, and that the world could become far better if it were saturated with altruistic transhuman intelligence. Whether or not this eventually comes to pass, of course, will depend upon our actions in the present.

#2 reason

  • Guardian Reason
  • 1,101 posts
  • 251
  • Location:US

Posted 04 May 2004 - 10:54 PM

Aha - so it's all this Internet that's stopping me from writing swathes of stuff. Interesting. Good work there.

One thing I think you're missing from your analysis of accomplishments of transhumanist organizations is the estimate of eyeballs - how many people are looking at these things? For example, Betterhumans content is better than my content (or at least far more frequently expanded), but the reason they are doing a far better job at promoting transhumanist ideas is that a hundred times as many people are reading.
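
Reason's point reduces to a multiplication: promotional impact scales with readership as well as quality. A back-of-envelope sketch, with all figures hypothetical placeholders rather than real site stats:

    # Impact ~ content quality x readership. Numbers are invented
    # placeholders, not actual web stats for any site.
    sites = {
        "high-quality niche site": (2.0, 1_000),
        "average mass-market site": (1.0, 100_000),  # 100x the eyeballs
    }

    for name, (quality, monthly_readers) in sites.items():
        print(name, "impact ~", quality * monthly_readers)
    # The mass-market site wins by ~50x despite lower quality:
    # reach multiplies everything.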

Reason
Founder, Longevity Meme
reason@longevitymeme.org
http://www.longevitymeme.org


#3 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 05 May 2004 - 02:43 PM

Hm, very true. I should email everybody and ask for a rough sketch of their web stats.

#4 cafinatedcowboy

  • Guest
  • 5 posts
  • 0
  • Location:San Francisco

Posted 09 May 2004 - 08:13 PM

I've got some questions about your Transhuman Intelligence and Altruism essay. Let me just start by letting people know how paranoid I am (an evolutionary relic, maybe, but until I have no need for it, I plan to cling to it). If altruism were the norm instead of just a concept, I agree that everyone would be having much more fun. (Here comes the paranoia, bear with me.) I just kind of want to know what we're supposed to do if we encounter some variety of threatening malevolent intelligence when we start exploring the universe. True altruists would have some serious problems with eliminating a threat completely, and since most forms of threat containment involve involuntary unpleasantries, altruism doesn't really help there either. I guess I'm just wondering how widespread altruism will be beyond humanity and transhumans, and how a population consisting only of altruists will deal with a serious threat to its well-being. Like I said... I'm paranoid.

#5 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 09 May 2004 - 09:11 PM

I think paranoia is a great quality to have ^_^

#6 John_Ventureville

  • Guest
  • 279 posts
  • 6
  • Location:Planet Earth

Posted 09 May 2004 - 10:10 PM

Michael,

You forgot to list the Society for Venturism. We have been around for over a decade but are only now "coming out of a long dormancy" to take an active role in events. We have two primary goals: one is to help our members get quality cryonic suspensions, and the other is to try to persuade the larger world out there to embrace our worldview so they too can be saved.

Construction for our cryonics community will begin in about 2-3 years and in time we hope to create a thriving community of several hundred people. The inhabitants will be of a wide age range and a key aim is to make living there affordable. We will pool our talents together as we work to get the word out.

Our quarterly magazine Physical Immortality has been around for one year now and the subscription base is steadily growing. I am always on the lookout for new subscribers, and currently we make it into several hundred independent bookstores across the United States.

The money for these projects will come largely from the small luxury resort we have in Arizona. Actually having a business engine to empower our goals is going to be a key thing which differentiates us from other immortalist/transhumanist organizations, which tend to struggle financially.

So next time you draw up a list, please keep us in mind. : )

Best wishes,

John Grigg
Alcor member
Society for Venturism Advisor and Secretary
www.venturist.org

#7 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 10 May 2004 - 07:18 AM

John,

D'oh, sorry for forgetting about the Society for Venturism, as well as Alcor, come to think of it! Very silly, considering I recently got the PI magazine in the mail. However, you must admit that it is unfortunate how few self-identifying Venturists are coming out and getting involved with the larger transhumanist community. Hopefully that will change in the future.

I agree that having a business component does indeed differentiate you from other transhumanist organizations, and I hope that if you demonstrate sufficient levels of success, other transhumanists will consider following that model. The problem with the business route is that it requires 1) good management and 2) a nice level of funds to get started. Cryonicists also tend to be uniquely confident in their point of view, unlike the many wishy-washy "transhumanists" out there, resulting in greater commitment, money-wise and time-wise. Early cryonicists especially had to deal with a society "advocating complacency in the face of massive, continuous loss of human life" largely on their own, to paraphrase the WTA FAQ. So kudos to them.

#8 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 10 May 2004 - 07:20 AM

Mikey!

Paranoia in this context definitely makes sense. The solution to the problem would be a radical ability to self-revise in the face of threats. A rational altruistic society might continue researching the brain structures necessary to prevail in violent conflicts without ever deploying them. This could probably be done through advanced simulations, and the extent of the simulations could depend upon the estimated probability of eventually encountering malevolent intelligence elsewhere in the universe.

Ideally, should a malevolent alien intelligence be detected, the first persons to detect the aliens could send out alert signals at the speed of light. This massive altruistic civilization could then take action to temporarily self-revise in the face of the threat, replacing blissfully altruistic brain structures with paranoid, battle-ready cognitive systems. Beings with complete access to their source code would experience far less "philosophical inertia" than human beings do, with the capability for rapid cognitive restructuring based on the immediate situation.

It is worth noting that for an ideal goal system, "be ready to defend your loved ones" falls immediately and naturally out of altruistic ethics. There would be no human-style psychological dilemmas, because doing the necessary preparatory research for combat-capable cognitive structures (and associated weaponry) would simply be the rational thing to do. It just really pisses me off when those cognitive structures are being used in the *absence* of life-threatening malevolent intelligence.

The real problem with "true" altruism is that it's only really sensible when everyone else in the universe is a true altruist too. If some sentient being(s) are trying to massacre you, you can either fight back or lie down and die. Any altruist who wants altruism in the universe to continue will indeed do the rational thing - fight back.

#9 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 10 May 2004 - 08:42 AM

Hey Marc,

Yeah, being a Singularity advocate ain't easy. Luckily, the transhumanist community is radically more tolerant and accepting than most, so I haven't gotten any mail bombs lately. (Transhumanists being the main audience of my Singularity advocacy.)

One of the foundational aspects of Singularity activism is creating a model of how much you expect a Friendly AI to cost in terms of money, time, and brainpower. I realized this from day one, so my model has been under development and revision for quite some time. It turns out that you almost certainly don't need to convince the world of the value of Singularity advocacy in order to make a smarter-than-human, kinder-than-human intelligence a reality.

Eliezer Yudkowsky started off by assuming that it would take a planet-wide open source movement, because, unlike many silly AI researchers of past decades, he saw the *real size* of the problem of general intelligence. (See the document "The Plan to Singularity".) When I first discovered the idea of Friendly AI, my specialty was still nanotechnology, so I had a very fuzzy model of what a "general intelligence" is. (As the vast majority of transhumanists still do.) I assumed that FAI would take a planet-wide open source movement as well, and was indeed quite shocked when I heard that Yudkowsky was starting to think it could be done by a smaller research team.

But in the past few years, my model has improved radically. I've done a ton of reading in the field of cognitive science, and have a vastly improved theoretical understanding of the latest research on general intelligence and empirical research on its known subsystems. Better brain-scanning devices in particular have given us plenty of great knowledge. Exponentially accelerating computing power makes things easier too, no doubt. On the WTA-talk mailing list, a poster mentioned an interesting scale for scientific literature that goes like this:

- intraspecialist (papers written by and for scientists within a specific field)
- interspecialist (papers written for scientists in other fields)
- textbook (college textbooks)
- popular science (90% of what you find in bookstores)
- mass-mediated (magazines, newspaper articles)

In the domain of cognitive science, 99.999% of society, including maybe around 95% of all transhumanists, are down in mass-media land. They are forced to rely on guesses, intuitions, and grapevine rumors regarding AI prediction timeframes because they lack personal visualizations of the systems that are being modeled. That is why my papers go on and on and on about the precise differences between human brains and likely AI brains. But the stuff I write only scratches the surface; you have to hit the books - *lots* of them - if you *really* want to see why people like myself, Yudkowsky, and Nick Bostrom are all expecting AI to hit the scene so soon. The fun thing is that once you do do the required reading, you converge to practically the same viewpoint on the issue as the 7 or 8 other people (almost all Bayesians, interestingly) who did the reading too. SIAI is also supported by several dozen people who haven't done the cognitive science reading, but I notice that they don't contribute too much.

If we know so much about general intelligence and the more precise details of its known subsystems, then why don't Singularitarians speak up about it a bit more often? The fact of the matter is that most of us are horribly terrified of communicating this knowledge because of the risk of it falling into the wrong minds. I mean, there's LOGI, which some Singularitarians regard as the most dangerous document on Earth. That should be enough to convince the right people to continue reading in the field independently. In fact, I really hope that nothing more along those lines is ever published again.

Anyway, the main point of that whole line of thought is that we don't need to convince the world to build a Friendly AI. A medium-sized circle of regular donors and a super-bright programming team, plus futuristic computing hardware, should be more than enough. Anyone reading this should seriously consider becoming one of those donors, and pay attention to the possible size of the gap between their guess of the difficulty of AI and the more educated guesses of people who spend thousands of hours of their personal time reading up on the topic. *We have no incentive to be overoptimistic*. If I thought AI was going to be really hard, then I would advocate the global open-source movement option, because that would be the most rational way of pursuing the Singularity. Now to respond to some of the specific points you made...

> The thing is, most people are stuck at SL0, and it will take a lot of convincing to move even a small number of people to SL4. In fact I now even doubt that it's worth the expenditure of energy required to try to persuade anyone.


Eternal life and the elimination of poverty, disease, ignorance, pain, unhappiness, and annoyance is not worth it? A recursively self-improving benevolent Artificial Intelligence, if built tomorrow, could do all of these things very quickly. I notice you mention the alternative of focusing on the technical side rather than the activist side, though. Unfortunately I worry that going at the current rate, we may not have enough resources to implement Friendly AI in time. A small circle of donors and programmers should be all that we need - and as always, anyone reading this should consider being one of those people.

> Persuading this proverbial 'man on the street' would require a major reconstruction of the memetic structure of his mind. From Metaphysics (belief in a rational universe, the Multiverse) through Epistemology (strict rationality, Bayesian reasoning) through Ethics (Altruism), the sort of memetic framework needed might just be well beyond most people.


I don't think that all of this is necessary to create a Singularity activist. I became one before I understood any of the stuff. Although I must admit, it is shockingly powerful to possess the memetic structure you refer to. But I must confess, it is *only in the past year* that I have accepted Bayes as the only self-consistent standard of rationality, MWI as the only coherent physics, and volition-based Friendliness as the referent that human altruists throughout history have unknowingly been approximating. And my understanding and continued practice of these complex disciplines is still very much in progress.

> So: do you think that the expenditure of time and energy needed to be a 'Singularity activist' is really worth the trouble? Or would energies be better spent simply working on the technical side?


Care to send me the money to work full time on the challenges of Friendly AI? If you did, I might consider it. Money not available? Looks like more activism is needed, then. I'm not even sure I'm smart enough to tackle the technical side. But the Singularity Institute is nearing its fourth anniversary and only one other potential FAI programmer has emerged (maybe two), so the situation may be bleak. In an emergency, people like me might actually be able to help, although I hope it won't come down to that...

> Don't get me wrong, I'm glad you've taken on the activist task, but not everyone has the temperament to be an activist. I myself recently decided I'm not cut out to be a political activist, and so I've stopped arguing about Libertarianism.


Oh, I have the temperament. It's just keeping a roof over my head and food on my plate without wasting time on a conventional career that I'm starting to get worried about.

> Being an activist is tough work. You'll be ridiculed, you'll be abused, etc. Definitely something which can even be hazardous to your health if you upset the wrong people.


And here's where I say exactly what you would expect someone in my position to say...

I'm willing to suffer if I think it will lead to a better world for everyone!

#10 bitster

  • Guest
  • 29 posts
  • 0

Posted 10 May 2004 - 09:21 PM

Re: Altruism and Transhuman intelligence

I tend to frame the concept of altruism as simply the "selfishness" of a larger organism.

Biological evolution seems to have demanded barriers between agents in order to preserve diversity. If genetic connectivity were too promiscuous, then genetic mutations would affect more than just the creature they were born in. Since most mutations are not beneficial, this would have been a big threat to the survival of the species, and then of life in general.
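
The containment argument can be made concrete with a toy simulation: a minimal sketch comparing isolated genomes against "promiscuous" horizontal transfer of a single deleterious mutation. All parameters here are invented for illustration:

    import random

    # Toy model: if genetic material flows freely between individuals,
    # one deleterious mutation spreads far beyond its originator before
    # selection can purge it. Parameters are invented for illustration.
    def carriers_left(pop_size=1000, generations=20, horizontal_rate=0.0):
        mutants = {0}  # individual 0 acquires the mutation
        for _ in range(generations):
            # promiscuous connectivity: each carrier passes the mutation
            # to one random neighbor with probability horizontal_rate
            spread = {random.randrange(pop_size)
                      for _ in mutants if random.random() < horizontal_rate}
            mutants |= spread
            # selection: each carrier dies off with probability 0.3
            mutants = {m for m in mutants if random.random() > 0.3}
        return len(mutants)

    random.seed(1)
    print("isolated genomes:", carriers_left(horizontal_rate=0.0))
    print("promiscuous flow:", carriers_left(horizontal_rate=0.9))
    # With isolated genomes the mutation dies with its lineage; with
    # free flow it persists across a large chunk of the population.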

The same principle applies to a certain extent in information space. We have inherited strong individualistic concepts of identity from our bodies. The skull itself is a great example of how disconnected we are from one another, and it shows up in the hedonism debate. The very term "SELF-ishness" points directly at concepts of separate identity.

Humans, however, with their remarkable communications abilities, have begun to circumvent the biological concept of self - and thus, "self-ishness". We now recognize many different forms of collective entities which we can participate in - nation states, corporations (literally, "bodies"), clubs, communities, and families. Many of these are constructed in a democratic fashion whereby they can be understood to have their own interests, motivations, perceptions, and even consciousness distinct from any of the single individual humans that comprise them - often in much the same way that subdivisions of the brain on numerous physical scales collectively comprise a person's mind.

With this model in mind, I tend to see "altruism" as simply a self-interest framed in a larger scope. Given the natural advantages that altruistic behaviour grants to the larger organism, it's no wonder that it is encouraged and selected within human organizations, despite how irrational it may sound on the level of the individual human. Individual altruistic acts, then, seem to be evidence of that individual's connection to and recognition of the larger entity's value and existence.

Needless to say, this model has tremendous implications for the immediate future. The Internet is creating an environment where connectivity between computational systems - humans included - is becoming ubiquitous. Connectivity is the prime ingredient for collective organization and the altruism that results. At present, our computer systems are more promiscuously connected to each other over the Internet than they are to us, as evidenced by comparing the bandwidth of Internet links to the bandwidth of human-computer interfaces. Because of this, it is easy to continue to support our concepts of individualistic, separate identity as humans.
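
The bandwidth comparison is easy to put rough numbers on. A back-of-envelope sketch, using round order-of-magnitude figures chosen for illustration rather than measurements:

    # Machine-to-machine vs human-to-machine bandwidth, circa 2004.
    # All figures are rough order-of-magnitude estimates.
    lan_link_bps = 100e6   # ~100 Mbit/s Ethernet link
    typing_bps = 40.0      # ~60 words/min of ASCII, a few dozen bits/s
    reading_bps = 500.0    # skimming text, a generous estimate

    print("link vs typing: %dx" % (lan_link_bps / typing_bps))
    print("link vs reading: %dx" % (lan_link_bps / reading_bps))
    # Computers exchange data millions of times faster with each other
    # than we can exchange it with them - hence the "bandwidth gap".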

In contrast, our computers and their networks are increasingly drifting towards unification. The ecology of computer viruses is prompting users and vendors alike to constantly be ready to download software updates to plug the holes the malware exploits. These updates take what was once a user-controlled "product" that could exist on its own and turn it into a "service" provided by just one entity - in this case, the vendor. Furthermore, the phenomenon of copyright piracy is forcing media "vendors" to work with hardware & software makers to lock down previously assumed consumer freedoms with "Digital Rights Management" systems. Laws like the American Digital Millennium Copyright Act criminalize certain activities that were previously permitted to people who bought, and thus owned, consumer media devices. The devices are now migrating to a point where they are not so much owned by the consumers who buy them as rented (a la cable boxes) from the central entity that dictates what you can or can't do with them. The entities that the devices serve are consolidating away from individual consumers into the hands of fewer, monolithic organizations.

I suspect that when neurotransistors bridge the bandwidth gap between our brains and our computer systems, a similar impact will be had on human individualism. The technology that allows us more intimate connections with machines will also allow us more intimate connections with one another. If we can still create these "altruistic" collective organizations despite the connectivity limitations we evolved with, how much more altruistic can organizations become with even greater connectivity?

#11 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 10 May 2004 - 11:38 PM

That's some great insight.

#12 d_m_radetsky

  • Guest
  • 17 posts
  • 0

Posted 15 May 2004 - 06:04 AM

Marc, has anything really impressive happened in AI theorem-proving recently? I ask because the last thing I'd heard of was Gelernter's program proving pons asinorum (although that was decades ago). Unless something really impressive has happened since then, I'd be pretty skeptical of anything as significant as the Hypothesis being proved. I admit, I don't keep up with the field, but I'd expect I'd have heard of, say, a machine proving the fundamental theorem of arithmetic.

Anyhow, if you're going to aim high, why aren't you working on P=NP? That seems like it'd be a bit more relevant.


#13 Infernity

  • Guest
  • 3,322 posts
  • 11
  • Location:Israel (originally from Amsterdam, Holland)

Posted 14 February 2005 - 07:18 PM

cafinatedcowboy, as long as you are aware of your paranoia - you are not a lost case :)
But if you know it is all just paranoia - how come you are letting it happen and control you...?

Yours
~Infernity



