  LongeCity
              Advocacy & Research for Unlimited Lifespans


Michael Anissimov: ImmInst Chat 4-11-04



#1 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • Location:United States

Posted 01 March 2004 - 10:33 AM


Chat: Humanism & Immortalism
ImmInst Director, Michael Anissimov, has been invited to speak at the annual American Humanist Association Conference, to be held in Las Vegas, Nevada, from May 6 to 9, 2004. He'll chat with ImmInst members on how Humanism relates to Immortalism.

Chat Time: Sunday Apr 11, 2004 @ 8 PM Eastern [Time Zone Help]

Chat Room: http://www.imminst.org/chat
or, Server: irc.lucifer.com - Port: 6667 - #immortal

http://www.americanh...ence/index.html

HomePage: http://www.accelerat...re.com/michael/

Anissimov's Articles:

Hacking the Maximum Lifespan
Speculist Interview: Happily Ever After – Speaking of the Future
Accelerating Progress and the Potential Consequences of Smarter than Human Intelligence
Objections To Immortality - Answering Leon Kass
Why Immortality

#2 MichaelAnissimov

  • Guest
  • 905 posts
  • Location:San Francisco, CA

Posted 12 April 2004 - 02:11 AM

Fun fun chat!


[16:45] <Ben> planning to pursue work in neurotech - Ocrazor is my hero
[16:48] *** Vaporeon (blah@159-134-218-235.as2.chf.cork.eircom.net) has quit IRC [Ping timeout]
[16:51] *** Resonte (resonte@host81-133-79-29.in-addr.btopenworld.com) has joined #immortal
[16:52] *** Vaporeon (blah@159-134-219-145.as2.chf.cork.eircom.net) has joined #immortal
[16:53] [MichaelA] I'm interested in philosophy and policy of Artificial Intelligence but haven't done any programming yet
[16:54] <th3hegem0n> yeah i don't know if anyone noticed but i've been talking to that rikard kid
[16:54] <th3hegem0n> the guy claiming he invented AI or whatever
[16:55] <TylerEmerson> Ugh, stop there
[16:55] <th3hegem0n> haha
[16:55] <TylerEmerson> That's all you need to say :)
[16:55] <Vaporeon> lol
[16:55] <th3hegem0n> He's rather naive, but his concept, if implemented, actually does have the potential to become a singularity
[16:56] <Ben> Hi back! <th3hegem0n> hello
[16:56] <Ben> whoops
[16:56] [MichaelA] Hege, I really don't think so
[16:56] *** FutureQ (~FutureQ@c-24-21-73-213.client.comcast.net) has joined #immortal
[16:57] [MichaelA] Someone needs to write an AI Theory Generator just like the Postmodernist Generator
[16:57] <th3hegem0n> Ah well I had a long conversation in which he explained the idea
[16:57] <TylerEmerson> I couldn't find a discernible "concept" in the morass of broken grammar and usage
[16:57] [MichaelA] http://www.elsewhere...bin/postmodern/
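As an aside: the Postmodernism Generator linked above works by recursively expanding canned phrase templates, and the "AI Theory Generator" MichaelA jokes about could be sketched the same way. Below is a minimal illustration of that technique in Python; the grammar, rule names, and phrases are all invented for this example.

    import random

    # Invented toy grammar: every rule and phrase here is made up for illustration.
    GRAMMAR = {
        "theory": ["My AI will {verb} via {method}, yielding {result}."],
        "verb": ["achieve consciousness", "bootstrap itself", "wake up"],
        "method": ["recursive self-reflection", "quantum neural resonance",
                    "emergent meme dynamics"],
        "result": ["a Singularity by next year", "true understanding",
                    "superintelligence"],
    }

    def expand(symbol):
        # Pick a production for this symbol, then expand any {placeholders} in it.
        text = random.choice(GRAMMAR[symbol])
        while "{" in text:
            start = text.index("{")
            end = text.index("}", start)
            text = text[:start] + expand(text[start + 1:end]) + text[end + 1:]
        return text

    print(expand("theory"))  # e.g. "My AI will wake up via emergent meme dynamics, ..."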
[16:57] <TylerEmerson> It tends to give one spasms
[16:58] [MichaelA] tyler i don'tkno wha ur talking bout
[16:58] <TylerEmerson> lol
[16:58] <TylerEmerson> :)
[17:00] <th3hegem0n> Eh, if I had a way around school and shit to just get involved with a research team I'd have Singularity running around doing shit by the end of the year
[17:01] [MichaelA] "A Singularity running around doing shit"?
[17:01] [MichaelA] What exactly are you visualizing
[17:01] <Ben> lol
[17:01] [MichaelA] I imagine a project leading up to recursive self-improvement, at which point everything immediately goes out the window
[17:01] <th3hegem0n> Basically a simple robot that learns and comprehends the world. It would become conscious pretty quick and learn like a human
[17:02] <th3hegem0n> Yep, it would have innovative ideas that would shoot technology through the roof i'm assuming
[17:02] [MichaelA] It's easier to build AIs in software microworlds than in robotic bodies; the latter involves a massive amount of programming that has to do with motor functions
[17:02] [MichaelA] The Singularity isn't really about technology though, it's about intelligence
[17:02] <th3hegem0n> I know, but they are so limited otherwise
[17:02] [MichaelA] Hm, is it 8PM EST now?
[17:02] <th3hegem0n> Yep.
[17:03] [MichaelA] Well then
[17:03] [MichaelA] ~*Humanism and Immortalism: ImmInst Chat*~
[17:03] [MichaelA] Go!
[17:03] [MichaelA] So, everybody, I'm going to Las Vegas next month to talk about transhumanism
[17:03] [MichaelA] I'm interested in ideas
[17:03] [MichaelA] About how to talk about it
[17:03] [MichaelA] Without shocking people
[17:04] [MichaelA] My current idea is to basically present a super-skimmed down version of the WTA FAQ
[17:04] <Jonesey> whats in vegas michaela?
[17:04] [MichaelA] With a bit more emphasis on creating strategies to counteract existential risks
[17:04] [MichaelA] http://www.americanh...ence/index.html
[17:05] <th3hegem0n> I'm assuming this is a conference in which religious people aren't common?
[17:05] [MichaelA] Yep
[17:05] [MichaelA] All atheist
[17:05] [MichaelA] But probably a fair amount of close-minded people will be there, people who are suspicious of futurism and especially transhumanism
[17:06] [MichaelA] Well, not close-minded really
[17:06] <th3hegem0n> What exactly is transhumanism?
[17:06] [MichaelA] www.transhumanism.org
[17:06] *** Chubtoad (~Chubtoad@h-68-166-253-127.chcgilgm.dynamic.covad.net) has joined #immortal
[17:06] [MichaelA] The philosophy that views humanity in its current form as an interim step on a longer road of improvement
[17:07] [MichaelA] The philosophy that affirms our desire to enhance our mental, physical, and emotional capacities through the use of technology
[17:07] [MichaelA] Beyond "natural" limits
[17:07] [MichaelA] Some would call people with eyeglasses "proto-transhumans"
[17:07] <th3hegem0n> lol
[17:07] *** outlawpoet (~Justin@24.130.29.164) has joined #immortal
[17:07] [MichaelA] A human being designed to fly through the air on its own power source would probably qualify as a "transhuman"
[17:08] [MichaelA] For example
[17:08] <th3hegem0n> Well for a group of atheists that seems like a topic that could be easily addressed without shock
[17:08] [MichaelA] Heh, I wouldn't be so sure
[17:08] [MichaelA] Many humanists probably view Homo sapiens in its current form as some optimal being
[17:08] [MichaelA] And the thought of integrating ourselves with "technology" would send chills through their spine
[17:09] <th3hegem0n> Heh, "integrating" is a scary word
[17:09] [MichaelA] Because the technology they are familiar with tends to be stupid, simple, and frustrating
[17:09] [MichaelA] Not really
[17:09] [MichaelA] What word would you use?
[17:10] <th3hegem0n> I would probably approach it differently
[17:10] <th3hegem0n> Get people excited about the possibilities of transhumanism
[17:10] [MichaelA] Well, yeah
[17:10] <th3hegem0n> Who wouldn't want some extra memory space or processing power in the brain?
[17:11] [MichaelA] Oh, people are good with coming up with reasons why not
[17:11] <Ben> Just like the THst movement, pro and con groups tend to break out along sex lines
[17:11] *** WebVisitor (~nobody@c-65-34-223-179.se.client2.attbi.com) has quit IRC [Read error: Connection reset by peer]
[17:11] <FutureQ> Hi all, well if the Brights forum is any indication most humanists view transhumanism as another religion and have invested way too much intellectual fiat into believing death is ok.
[17:11] [MichaelA] The Brights forum is indeed a good example
[17:12] <th3hegem0n> Have a link?
[17:12] <Ben> The majority of Humanists I have spoken with are receptive to THst memes
[17:12] [MichaelA] Quasi-receptive
[17:12] <Ben> not gung-ho, but receptive, and that is what we need to start
[17:12] <FutureQ> That's not been my experience
[17:12] [MichaelA] http://www.the-brights.net/forums/
[17:13] [MichaelA] I see a mix
[17:13] [MichaelA] The more scientifically literate tend to be more receptive
[17:13] <Ben> agreed
[17:13] [MichaelA] Those more interested in politics, the humanities, and so on, tend to be stuck inside a box
[17:13] *** TylerEmerson (~TylerEmer@host-64-72-54-38.classicnet.net) has quit IRC [Read error: Connection reset by peer]
[17:13] <FutureQ> I see about 1/5 4/5ths
[17:13] [MichaelA] Receptive vs. non-Receptive?
[17:13] <FutureQ> 1/5 ok with it
[17:14] *** WebVisitor (~nobody@c-65-34-223-179.se.client2.attbi.com) has joined #immortal
[17:14] *** TylerE (~TylerEmer@host-64-72-54-38.classicnet.net) has joined #immortal
[17:14] <Ben> what is our bar of acceptance
[17:14] [MichaelA] All of my meatspace friends are atheists and probably half of them are receptive to transhumanist idaes
[17:14] [MichaelA] ideas*
[17:14] <FutureQ> Man this thing is slow.
[17:15] [MichaelA] What is?
[17:15] <FutureQ> java app
[17:15] [MichaelA] Yeah I know
[17:15] <Ben> tolerance is a start, most people do not have that, then exploration of ideas, then actual pursuit
[17:15] [MichaelA] You should download vIRC
[17:15] [MichaelA] Life extension seems to be a transhumanist goal that appeals pretty widely
[17:15] <Ben> most Humanists I know are tolerant
[17:15] [MichaelA] Depends on what you are talking about
[17:16] [MichaelA] There is a whole spectrum of ideas ranging from SL0 to SL4
[17:16] <FutureQ> One thing is for sure, once the possibilities begin to actually surface as real I expect the humanists to glom on in large numbers.
[17:16] [MichaelA] http://yudkowsky.net...hocklevels.html
[17:16] <th3hegem0n> Ah
[17:16] [MichaelA] Definitely agreed, FutureQ
[17:16] [MichaelA] Making it far easier to actually develop the technologies than convince everybody
[17:16] <th3hegem0n> Most people don't really care.
[17:16] <FutureQ> SL0-SL4?
[17:17] [MichaelA] But there are certain people you need to convince in order to get the support required to create the technologies
[17:17] [MichaelA] See my above link, FutureQ
[17:17] [MichaelA] Yeah, I would agree that many people don't care
[17:17] [MichaelA] Usually because they can't imagine it in enough detail to create a wonderful set of scenarios
[17:17] <FutureQ> Sorry, I should know better but I'm myopic lately.
[17:17] [MichaelA] I.e., future scenarios where death or suffering are eliminated
[17:17] *** gustavo (~gustavo@pool-141-156-240-39.res.east.verizon.net) has joined #immortal
[17:18] [MichaelA] http://www.accelerat...umanisttips.htm
[17:18] [MichaelA] Any transhumanist activists here may be interested in the above link
[17:18] <Ben> I think you should try to focus on a move from SL0 to SL1 - show them that these things are possible, then dabble in the higher levels. Those who are ready will give chase
[17:19] [MichaelA] That's the plan
[17:19] [MichaelA] May put in some SL2 stuff
[17:19] [MichaelA] Near the end
[17:19] <Ben> Sounds good.
[17:19] <th3hegem0n> Immortality is usually even too much for some to think about
[17:19] [MichaelA] I figure that SL0s are so far out that reaching them with the transhumanist meme is pretty difficult
[17:19] <FutureQ> I wish ImmInst had a way to gather all the URLs given in chats and make a published list.
[17:20] <Ben> It's too much for a lot of THsts to have faith in, including myself.
[17:20] [MichaelA] I.e., the ratio of SL0s who go on to transhumanist ideas after hearing them casually for the first time is so low that it's not worth our time focusing on them
[17:20] <Ben> I'm for the pursuit, but I'm clear eyed about the probabilities
[17:20] <th3hegem0n> Well there are only certain types of people in the world that are seeking anything more than what they have
[17:20] [MichaelA] FutureQ: you can look up past chat transcripts, just go ctrl-F, and type in "http"
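The published URL list FutureQ wishes for is easy to script. Below is a minimal sketch in Python that pulls every http(s) link out of saved transcript files; the "chat_logs" directory and the *.txt file layout are assumptions for illustration only.

    import re
    from pathlib import Path

    URL_RE = re.compile(r"https?://\S+")

    def collect_urls(transcript_dir):
        # Walk the saved *.txt logs, keeping the first occurrence of each URL in order.
        seen = {}
        for path in sorted(Path(transcript_dir).glob("*.txt")):
            for match in URL_RE.finditer(path.read_text(errors="ignore")):
                seen.setdefault(match.group().rstrip(".,)"), None)
        return list(seen)

    for url in collect_urls("chat_logs"):  # "chat_logs" is a hypothetical folder name
        print(url)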
[17:20] *** John_Ventureville (~John_Vent@24-116-21-4.cpe.cableone.net) has joined #immortal
[17:21] [MichaelA] I agree Ben, it's important to have clarity while one is pursuing, clarity in the sense of an objective observer
[17:21] <th3hegem0n> The only way to convince people like that is to give them specific examples like "You don't want your grandmother to die of cancer"
[17:21] <Jonesey> this must be a discouraging time for the humanists. fundies are livin large and in charge
[17:21] [MichaelA] Yeah, sadly I feel like I'm "selling out" when I present it like that
[17:21] <Ben> Many of these people are dying, and though they may not jump for Cryonics, like Sperling, they may donate to futurist causes they can understand.
[17:22] <FutureQ> ctrl-F in the java app or the site?
[17:22] [MichaelA] Probably Jonesey...sadly they feel that they need to convert everyone to their point of view to have a positive impact on the world; the transhumanist programme doesn't require this step, luckily.
[17:22] <th3hegem0n> Selling out is better than nothing at all
[17:22] [MichaelA] Going down to the level of saying "You don't want grammy to die" is a bit much for me
[17:23] <John_Ventureville> hello FuturQ, Omnido
[17:23] <Ben> Who is this metaphysical "objective observer?"
[17:23] <th3hegem0n> So you are attempting to persuade those a little bit higher on the scale then
[17:23] <FutureQ> Hi, John
[17:23] [MichaelA] A pseudo-agent, a homonculus software being running on the hardware of my brain :)
[17:23] <John_Ventureville> hello
[17:23] <Ben> I'm not advocating relativism, but we all are constrained by perspectives as individuals and as a species
[17:23] [MichaelA] homunculus*
[17:23] <th3hegem0n> lol
[17:24] [MichaelA] Ben, absolutely
[17:24] <th3hegem0n> homunculus ^ ^
[17:24] [MichaelA] That right there is meta-order rationality
[17:24] [MichaelA] Calibrating for noise
[17:24] [MichaelA] And observer selection effects
[17:24] [MichaelA] Transhumanism is about going outside the box of Homo sapiens sapiens in deep and philosophical ways
[17:25] [MichaelA] That's the spark that goes on to catalyze further learning, independently motivated learning
[17:25] <FutureQ> John, chk PM
[17:25] <John_Ventureville> ok
[17:26] <Ben> Mmm, agreed. However I take an internalist-realist stance. You cannot know what is outside your power to know (at present or as a condition of your existence). About what is outside of our power to know, we cannot say much.
[17:26] <Ben> Still, we must work with the best tools we have
[17:27] [MichaelA] Actually, you can build a probabilistic model of what is and isn't within your power to know, although it obviously has to be a *very* approximate one
[17:27] <Ben> Like Bostrom has, sure.
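For readers wondering what the "*very* approximate" probabilistic modeling MichaelA describes might look like in practice, here is a minimal sketch of Bayesian credence updating; every probability below is invented for illustration.

    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        # Posterior P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
        evidence = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
        return p_e_given_h * prior / evidence

    credence = 0.5  # start maximally uncertain about the hypothesis
    for p_e_h, p_e_not_h in [(0.8, 0.3), (0.7, 0.4), (0.9, 0.2)]:
        credence = bayes_update(credence, p_e_h, p_e_not_h)
        print(round(credence, 3))  # credence drifts upward as evidence accumulates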
[17:27] [MichaelA] Nick Bostrom: Archpriest of Anthropics :D
[17:27] [MichaelA] Did you read about the Adam and Eve paradox?
[17:27] <Ben> :~}
[17:28] <Ben> not yet
[17:28] [MichaelA] Okay, I won't ruin it
[17:28] <Ben> I suppose inbreeding would be a given
[17:28] [MichaelA] Hm, "internalist-realist" doesn't come up with any hits on google
[17:28] <Ben> It's a neologism of Hilary Putnam
[17:29] [MichaelA] Ben, you might find http://www.rci.rutge...WST/wsthome.htm interesting
[17:29] [MichaelA] Part of a field that studies "mental blind spots" common to all of humanity
[17:29] <Ben> a Harvard Philosopher I studied last quarter, recently retired. He was in the Analytic tradition, but a severe critic of Critirialism as well as Richard Rorty's relativism
[17:30] <Ben> Criterialism
[17:30] [MichaelA] Yeah, I've heard of Putnam
[17:30] <Ben> Criterialism was a naive view that basically prevented us from aquiring any new scientific knowledge.
[17:31] <Ben> acquiring
[17:31] [MichaelA] I feel that most of traditional philosophical epistemology is thoroughly locked up in several boxes
[17:31] <Ben> Anyway, I'm wasting your chat time - sorry!
[17:31] [MichaelA] No one else is talking though, so we can let the chat evolve as we please :p
[17:32] <FutureQ> Like when the Library of Alexandria was burned by zealots?
[17:32] [MichaelA] Does anyone else want to talk about the relationship between humanism and immortalism?
[17:32] <John_Ventureville> I thought the chat did not actually start for another 25 minutes
[17:32] <John_Ventureville> officially, anyway
[17:32] <th3hegem0n> no it started 33 minutes ago
[17:32] [MichaelA] Yup
[17:32] [MichaelA] I was gathering advice for my talk in Las Vegas
[17:32] <FutureQ> did you rest you clock? oh yeah AZ is not on DST.
[17:32] [MichaelA] I'll be representing the Immortality Institute at the American Humanist Association 2004 conference
[17:33] <FutureQ> rest=reset
[17:33] <John_Ventureville> how did you get chosen for this honor?
[17:33] <John_Ventureville> what led up to it?
[17:33] [MichaelA] The eminent James Hughes forwarded my name to them when they requested he send a rep to Las Vegas
[17:33] <John_Ventureville> excellent
[17:34] *** Vaporeon (blah@159-134-219-145.as2.chf.cork.eircom.net) has quit IRC [Ping timeout]
[17:34] <Ben> I see Humanism as a good start. While we want to reach out to everyone regardless of beliefs, Secular Humanists do not hold delusions about an infinite afterlife that makes this one unnecessary. They are also ethical explorers (Humanists, not just Atheists).
[17:34] [MichaelA] Ah, Ben, were you the guy who was involved in the SSA?
[17:34] <th3hegem0n> The easiest way to conquer involuntary death is implied in the name ^_^
[17:34] [MichaelA] Who is
[17:34] <Jonesey> "No, I don't know that atheists should be considered as citizens, nor should they be considered patriots. This is one nation under God."
[17:34] [MichaelA] They are "ethical explorers"
[17:34] [MichaelA] Which often leads to them being entirely useless
[17:35] <th3hegem0n> Thank you Bush
[17:35] <John_Ventureville> at least when it comes to cryonics, humanist organizations have not been the fertile ground for recruiting which many thought it would be
[17:35] *** cyborg01 (~y.k.y@147.4.225.217) has joined #immortal
[17:35] <Jonesey> George H.W. Bush, on August 27 1987
[17:35] <Ben> CFA, SSA, etc. I just contacted some SSA leaders to help you get word about why they were taking so long to tell you they wanted to hear your speech. " :~)
[17:35] [MichaelA] But exploring an interesting sector of the state space of morality, the space not contaminated by religious dogma
[17:35] [MichaelA] Ahhh, thanks :D
[17:35] [MichaelA] Appreciated
[17:35] [MichaelA] They switched my contact at some point
[17:36] [MichaelA] What activities are you involved in the most right now?
[17:37] <FutureQ> Speaking of the Bushes, is anyone here able to give me a good argument for why GWB should be president again in 2005, keeping in mind his love of Leon Kass's every word and anti-bio-science attitude of course?
[17:37] [MichaelA] I think it might be a little harder to get people to actually sign up for cryonics than to simply get them interested in transhumanist ideas, actually
[17:37] <th3hegem0n> Down with Bush...
[17:38] <FutureQ> Agreed, but there are actually a few Libertarian immortalists that think a tax cut is worth the compromise or just hate Liberals enough to support Bush regardless.
[17:38] <Ben> Trying to keep my classes up as I establish the New Humanists student group (THSt too - http://groups.yahoo....NewHumanistsNU), finding a prof for my thesis paper, finding roommates or another place to live, proposing the Transhumanist Networking System to ExUI and WTA, trying not to lose my mind, down with Bush, trying to get an NSF grant, looking for $ for TransVision, reorganizing my ideas for a speech, etc.
[17:38] *** Utnapishtim (~jeromejth@82-45-233-193.cable.ubr03.hari.blueyonder.co.uk) has joined #immortal
[17:39] [MichaelA] Grant for what?
[17:39] [MichaelA] Nice list of activities btw
[17:39] [MichaelA] What are "New Humanists"?
[17:39] *** Chubtoad (~Chubtoad@h-68-166-253-127.chcgilgm.dynamic.covad.net) has quit IRC [Ping timeout]
[17:39] [MichaelA] And have you considered transferring some of your efforts from humanist activities to transhumanist ones? :)
[17:39] <Ben> That's the question. " :~) It needs to be something small, manageable, and near-term.
[17:39] <John_Ventureville> FuturQ, the only reason I can think of is because he probably will keep us in Iraq long enough to possibly do long-term good there in terms of nurturing a democratic government
[17:39] <Ben> I know it will relate to cybernetics and cognitive neurobiology.
[17:40] <Ben> Big picture to specific project - that's how to win them, or so I've heard
[17:40] [MichaelA] MetaBrain Expansion, right?
[17:40] <Utnapishtim> John: I'm joining late.. Positives of a potential second Bush term?
[17:40] <John_Ventureville> hopefully the bodycount on both sides will not be in vain
[17:40] <John_Ventureville> yep
[17:40] [MichaelA] The bodypoint in Iraq is peanuts to the bodycount of the Reaper, of course
[17:41] <John_Ventureville> true
[17:41] [MichaelA] body count*
[17:41] <th3hegem0n> Hah good point
[17:41] <Ben> Yes, well, an adaptation. I was right according to the definitions I encountered, but there are more issues I need to deal with. Robert Bradbury helped me see that.
[17:41] [MichaelA] Enough to make it worth totally ignoring in my mind, despite its massive salience in the evil media that poisons everybody *cough*
[17:41] <John_Ventureville> or a war like the one in Viet Nam
[17:41] [MichaelA] Ah, what did Robert have to say to you?
[17:41] [MichaelA] I deeply agree with some of his ideas and deeply disagree with others
[17:42] <John_Ventureville> Bradbury?
[17:42] <cyborg01> That's not true MichaelA
[17:42] <FutureQ> The middle east altogether may not, pre-Singularity, ever get the idea of democracy period, too many millennia under despots making their decisions for them, even their religion, and by Singularity arrival democracy will be moot anyway.
[17:42] <cyborg01> It's the difference between war and natural death
[17:42] [MichaelA] FutureQ, bingo
[17:42] *** eclecticdreamer (forti2de@c-66-41-106-197.mn.client2.attbi.com) has joined #immortal
[17:42] [MichaelA] Democracy is just a system that emerges when you toss a lot of human-level intelligences together with limited resources
[17:42] <Ben> Well, there are questions about legal status I had not considered, but more importantly, the fuzzy, sub-optimal uploading processes that might come early need to be addressed.
[17:43] <John_Ventureville> Iraq is pretty bad soil to try to plant the seeds of democracy
[17:43] <th3hegem0n> Vote-- Do you want the Singularity to occur?
[17:43] <Utnapishtim> John Ventureville: Well Vietnam was based on an entirely false premise. The domino theory that was its justification proved to be incorrect
[17:43] [MichaelA] Hege: it's insanely important to specify a definition first
[17:43] <eclecticdreamer> insane.. :p
[17:43] <th3hegem0n> A non-human intelligent consciousness
[17:43] <Ben> Entitiness needs further exploration too.
[17:43] [MichaelA] Depends on what we each visualize the consequences of that to be, however, and how fast
[17:43] <John_Ventureville> if we were an occupier in the vein of Assyria or Rome they would be dealing with someone they "understood"
[17:44] <th3hegem0n> Exactly
[17:44] [MichaelA] The meaning of the word has become too diluted to just take a poll across the board like that, IMO
[17:44] <th3hegem0n> Any sane person would never want it to happen...
[17:44] [MichaelA] -.-
[17:44] <Utnapishtim> John: Yes it is.. But there are some serious changes needed in the region. The middle eastern nations are culturally and institutionally ineffective. They will continue to not only lag behind in terms of their development but the gap will actually widen unless we intervene now
[17:44] [MichaelA] Classic Adversarialism
[17:44] <th3hegem0n> It would probably destroy us.
[17:45] [MichaelA] http://www.singinst....FAI/anthro.html
[17:45] <John_Ventureville> Utnap, some experts say Vietnam was "good" for humanity in the sense that it allowed the western and communist worlds to "blow off steam" without a nuclear exchange
[17:45] <FutureQ> I want the Singularity to occur on my terms, slowly enough to bring augmented human intelligence up level with any emerging machine intelligence. That's my vote.
[17:45] [MichaelA] Yeah FutureQ, a lot of people seem to want that
[17:45] <Ben> As in continuity of entitiness, as opposed to identicality. What does it mean for the entire neural net to be deactivated and reactivated? *Most importantly, how does the Central State Identity Theory or variants of it jive with the Cog Sci "token" identity theory or variants of it..
[17:45] [MichaelA] There are a lot of problems though
[17:45] <Utnapishtim> John: hmm. Not sure I buy that. I'd have to give that some more thought
[17:46] [MichaelA] Ben, if I were you, I would worry about someone creating a self-improving AI that destroys the world before you get anywhere with your IA experiments
[17:46] <John_Ventureville> the theory goes, "better southeast asia then europe!"
[17:46] <Ben> We may never have an adequate answer to some questions until we test memory in uploads of trained rats. - Can an upload "recognize" its memory tokens? It seems it should, but I cannot give a clear reason why yet.
[17:46] [MichaelA] The stuff you're talking about is all very great, but I'd rather do it in a safe environment, where there isn't a deep chance that we'll destroy ourselves
[17:47] [MichaelA] I can give an excellent reason, it's called causal functionalism!
[17:48] <cyborg01> I do agree Friendliness is an important issue but it is also a very hard problem
[17:48] [MichaelA] All the more reason to focus on it
[17:48] <Ben> phone
[17:49] <cyborg01> An SI can understand human minds on a local level.... but how about the dynamics that emerge out of people interacting with each other?
[17:50] <th3hegem0n> An SI?
[17:50] [MichaelA] "People interacting with each other" just involves modeling a system of human minds
[17:50] <cyborg01> Friendly AI
[17:50] <th3hegem0n> Oh ok.
[17:50] [MichaelA] And anyway, I can tell you with confidence "better than a human would"
[17:50] [MichaelA] No, SI stands for superintelligence
[17:50] [MichaelA] FAI stands for Friendly AI
[17:51] <Ben> I like functionalism, but I take more of a "Dennettian" instrumentalist perspective. Functionalism in Central State Identity theory cannot cope with missing hemispheres for instance
[17:51] <cyborg01> Better... but not *complete*
[17:52] <cyborg01> FAI will not be powerful enough to model global dynamics
[17:52] <Ben> At the same time, there is no evidence for souls or homunculi.
[17:52] [MichaelA] No one is claiming completeness
[17:52] [MichaelA] Anything can be modeled at a certain grain size
[17:52] <cyborg01> Exactly...
[17:52] [MichaelA] It's just a question of how much resolution
[17:52] <cyborg01> And that leaves a lot of room for ... problems
[17:53] [MichaelA] Sooo, it would be powerful enough to model global dynamics, just not as powerful as omniscience...?
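A toy illustration of "modeling at a certain grain size": block-averaging a fine-grained signal down to coarser resolutions, trading detail for tractability. The signal and block sizes below are invented; this is a sketch of the idea, not anyone's actual model.

    import numpy as np

    def coarse_grain(signal, block):
        # Average consecutive block-sized chunks, dropping any ragged remainder.
        usable = (len(signal) // block) * block
        return signal[:usable].reshape(-1, block).mean(axis=1)

    rng = np.random.default_rng(0)
    fine = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.1 * rng.standard_normal(1024)
    for block in (1, 16, 128):  # finer model -> coarser model
        model = coarse_grain(fine, block)
        print(block, len(model), round(float(model.var()), 3))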
[17:53] [MichaelA] Not the anthropomorphic problems you're probably thinking about
[17:53] <cyborg01> People will still have free will then
[17:53] <Ben> It is very attractive, and intuitive, that the synaptic circuits would just "give rise" to the kind of entity that would have the memories.
[17:53] [MichaelA] "Free will" is whatever we say it is
[17:53] <cyborg01> I don't understand what's so anthropomorphic about my views
[17:54] [MichaelA] The old physics-free "free will" is dead, obviously
[17:54] [MichaelA] Okay, sorry for saying that
[17:54] <FutureQ> John did you chk my reply to PM?
[17:54] [MichaelA] Please share with me what "problems" you're visualizing
[17:54] <Ben> sell-out : ~)
[17:54] <cyborg01> The FAI cannot replace morality
[17:55] *** Utnapishtim (~jeromejth@82-45-233-193.cable.ubr03.hari.blueyonder.co.uk) has quit IRC [Quit:]
[17:55] [MichaelA] Sigh, no one ever said it would
[17:56] <cyborg01> Coarse-grain morality is an oxymoron --- do you want it??
[17:56] <Jonesey> the "F" will be so subjective, one person's "F" will be another person's "U"..."C"...you get the idea
[17:56] <th3hegem0n> Yep.
[17:56] <eclecticdreamer> I think trying to discuss FAI or SI in terms of its dangers is probably totally inaccurate, once FAI is achieved..
[17:56] [MichaelA] Yeah Jonesey, I used to think about it like that, reading CFAI slightly changed my visualization of how that works
[17:56] <eclecticdreamer> I mean before FAI is achieved
[17:56] [MichaelA] But if you want to, you can just say a FAI is a really powerful altruist
[17:56] <Jonesey> you decided friendly is not subjective?
[17:57] [MichaelA] That practices altruism regardless of whatever anyone says
[17:57] <Jonesey> but people may see that as unkind in that the FAI turns them into welfare mothers
[17:57] [MichaelA] Well, humans have a lot of underlying hardware in common
[17:57] <Jonesey> they may want to smash the FAI if it gets too kind..?
[17:57] <cyborg01> The bias of that altruist is the crux of the problem
[17:57] <Ben> problems: Ok, the program idea has certain objections that could be made to it - as John Searle has done, but for convenience, I'll employ it. If a "hard" upload is a replicant, a separate but identical entity, then would a completely "rebooted" program on the same substrate be the same entity?
[17:57] [MichaelA] A FAI would be able to take everything into account and take actions based on little pieces of information like that, so as to *minimize* the overall dissatisfaction
[17:58] <eclecticdreamer> My own intuition, believes once FAI is achieved, it won't have the ability to take over the world like many believe
[17:58] <Ben> And if not, what about partial reboots, as when we sleep?
[17:58] <eclecticdreamer> not even close.. it will be baby steps.. :>
[17:58] <John_Ventureville> I realize Ben and Michael (and others) are exceptions to this quote, but my employer recently told me the other day, "too many transhumanists worry about some future singularity, rather than taking care of their day to day business to get somewhere in life!"
[17:58] <th3hegem0n> Trust me the ability is there
[17:58] <Ben> What I need to do is make the right categories and distinctions to help me understand the phenomena
[17:58] [MichaelA] That's a problem with reconciling the future with the present, John, and it is tough
[17:59] [MichaelA] Probably responsible for a lot of people's negative reactions towards any futurism
[17:59] *** WebVisitor (~nobody@c-65-34-223-179.se.client2.attbi.com) has left #immortal [WebVisitor]
[17:59] <John_Ventureville> true
[17:59] *** WebVisitor (~nobody@c-65-34-223-179.se.client2.attbi.com) has joined #immortal
[17:59] <Ben> what about people living in the past?
[17:59] [MichaelA] I think that an excellent way to preserve our future safety is to focus almost exclusively on this precise future event, the Singularity
[17:59] <John_Ventureville> in what way?
[18:00] <John_Ventureville> the great would be to make a living working toward it
[18:00] <John_Ventureville> *great thing*
[18:00] <th3hegem0n> It's all luck anyway. There's nothing we can really do if it makes up its mind.
[18:00] <John_Ventureville> like Ben does
[18:00] <cyborg01> MichaelA you have not answered the bias question
[18:00] [MichaelA] The "bias" of the altruist?
[18:00] [MichaelA] *sigh*
[18:00] <cyborg01> Yeah
[18:00] [MichaelA] I really don't like talking about FAI with people that haven't read through CFAI
[18:00] <eclecticdreamer> *sigh* is most right :p
[18:01] <eclecticdreamer> this is all hypothetical -- not even having the blueprint for FAI
[18:01] <Ben> I have great respect for those who want to devote themselves to one big goal. But it's not for everyone, and truth be told, they need PR people.
[18:01] [MichaelA] You guys actually have to sit down and read if you want to even understand where any of us are vaguely coming from
[18:01] [MichaelA] Yeah, I'm trying to be a PR person
[18:01] * MichaelA blows his PR-whistle
[18:01] <Jonesey> yep michaelA i totally don't get it
[18:01] <Jonesey> what's this cfai and where can i read it?
[18:02] <MRAmes> Or read some of the intro documents... sometimes that is enough to get a general idea of FAI.
[18:02] [MichaelA] www.singinst.org/CFAI
[18:02] <cyborg01> Tell me in summary what is the solution to that
[18:02] <Ben> loopl
[18:02] <Ben> lol
[18:02] <MRAmes> www.singinst.org/intro
[18:02] [MichaelA] Cyborg, I don't know exactly what you mean
[18:02] <Jonesey> thanx MichaelA
[18:03] <eclecticdreamer> Michael, can you download those links to my mind? Thanks :p
[18:03] [MichaelA] A selflessly altruistic goal system is just a certain type of physical object
[18:03] [MichaelA] It does stuff
[18:03] <cyborg01> I have asked this question more than once on @SL4 and eliezer was speechless
[18:03] <MRAmes> Soon eclecticdreamer, soon.
[18:03] <eclecticdreamer> heh
[18:03] <cyborg01> Talk about Friendliness
[18:03] [MichaelA] Friendliness is ridiculously, insanely complicated
[18:03] <cyborg01> Not true
[18:03] <eclecticdreamer> it is
[18:03] <eclecticdreamer> complex
[18:04] [MichaelA] If Eliezer weren't around to make some sense of it, I'd probably advocate IA rather than AI, because I'd say "AI Friendliness is impossible, bye"
[18:04] <MRAmes> cyborg01: "Not true?" defend your statement.
[18:04] <eclecticdreamer> IA is more likely to occur before AI, I think..
[18:04] <eclecticdreamer> just gut feeling..
[18:04] <cyborg01> There is no objective morality period
[18:04] <th3hegem0n> Hahahaha
[18:04] <Ben> So is doing "good." It is worldview-dependent in some sense, though we can tend to agree on food, clothing, shelter, survival, education...
[18:04] <eclecticdreamer> actually IA is already occurring in somewhat limited respects
[18:04] <cyborg01> This is incontrovertible
[18:04] <MRAmes> cyborg01: FAI isn't about objective morality.
[18:05] *** gustavo (~gustavo@pool-141-156-240-39.res.east.verizon.net) has quit IRC [Ping timeout]
[18:05] <th3hegem0n> Eclectic I don't think so.
[18:05] * serenade plugs eclecticdreamer's gut feeling into bayes'
[18:05] <serenade> :oo
[18:05] <eclecticdreamer> th3hegem0n, I think so
[18:05] <eclecticdreamer> :p
[18:05] <th3hegem0n> Well, I know so
[18:05] <th3hegem0n> :-p
[18:05] <eclecticdreamer> we choose to disagree
[18:05] [MichaelA] Gut feeling is what told us the world is flat
[18:05] <eclecticdreamer> heh
[18:05] <th3hegem0n> Ok, deal
[18:05] <eclecticdreamer> ____ flat as a pancake..
[18:05] <eclecticdreamer> :p
[18:06] <eclecticdreamer> yep
[18:06] * serenade throws an apple jack at the flat earth society
[18:06] <cyborg01> Then try to explain it to me
[18:06] <serenade> :D
[18:06] <cyborg01> Unless you want to be unfriendly
[18:06] <MRAmes> cyborg01: Okay... you mean FAI, right? Explain FAI?
[18:06] <Jonesey> heh
[18:06] <cyborg01> Explain to me what is the FAI's bias
[18:07] <cyborg01> Because there is no such thing as bias-free
[18:07] <MRAmes> FAI's bias? Bias in comparison to what?
[18:07] <eclecticdreamer> th3hegem0n, how do you know so? :)
[18:07] [MichaelA] No one is claiming bias-free, but less bias would be nice
[18:07] <Ben> I think we can reach a pragmatic - non-absolutist objectivity, as much as anything can be perceived as objective
[18:08] <eclecticdreamer> There never will be no bias.. :O/
[18:08] <cyborg01> There is no such thing as less bias either
[18:08] <eclecticdreamer> It all depends on what we consider good, & that is a construct of experience & systems who share these beliefs
[18:08] <MRAmes> Ben: Nope, can't do it. We can only agree on our frames of reference, our ways of viewing the world, then talk to each other rationally using those frames.
[18:09] <Ben> Some things our cognitive system was not built to encompass - that light "is" a wave and particle for instance
[18:09] <FutureQ> All I know is I didn't become an atheist to wind up serving a man-made electronic god in a virtual world. I want to be free and that means free to become a god myself.
[18:09] <eclecticdreamer> all lesser conscious lifeforms, but that are telepathic, like some plants, may be reengineered to our goals whether they like it or not :p
[18:09] [MichaelA] No such thing as less-than-human bias?
[18:09] <th3hegem0n> Because an AI algorithm has been done already.
[18:09] <eclecticdreamer> not that they think in our terms, but they CAN sense thoughts of humans
[18:09] [MichaelA] "Serving a man-made electronic god in a virtual world"?-.-
[18:09] [MichaelA] Oh dear oh dear
[18:10] <eclecticdreamer> oh my oh my :p
[18:10] <John_Ventureville> FuturQ, give up your pride, worship the CYBERCHRIST!!
[18:10] <eclecticdreamer> what I speak is true, strangely
[18:10] <Jonesey> bush would have u burned at the stake bud
[18:10] <Ben> MRAmes - yes, and pragmatically, what seems to work very well is what we can call as objective as anything we can know. All knowledge is fallible, and a good objective worldview is fallibilistic.
[18:10] <John_Ventureville> or at least worship the A.I. Gods that live at the end of time and await us all
[18:11] <eclecticdreamer> Michael, as you are so fond of pointing us to Singinst FAQ's, here, you try :p http://www.primaryperception.com/
[18:11] <cyborg01> We do have consensus of what is objective to a certain degree
[18:11] <eclecticdreamer> absorb :p
[18:11] <FutureQ> any over-us AI, friendly or not, that protects us from all harm, even the harm of surpassing its level of intelligence, or the harm that could result from such, is to me a demigod being and I don't want it.
[18:11] [MichaelA] The thing is that you guys never *READ* SingInst's FAQs :p
[18:12] <eclecticdreamer> ahem.. :O|
[18:12] <eclecticdreamer> newp.. is there anything there? :p
[18:12] <eclecticdreamer> i will.. in time :O>
[18:12] <cyborg01> Alright then I'll read it, talk about this later
[18:12] <TylerE> "Biocommunication...." RUN AWAY
[18:12] <Ben> consensus is not crucial, though wide employment is. I'm not talking about so-called "objectivism," which takes an externalist perspective they have no right to claim (they are absolutists, not objectivists).
[18:12] <MRAmes> Ben: Every viewpoint we have on the world is true in some respects, false in others.
[18:13] <TylerE> Parapsych...*keeps running*
[18:13] <FutureQ> Also, any physical world, given a sufficient level of molecular engineering, is in a sense virtual, or could eventually be manipulated as easily.
[18:13] [MichaelA] FutureQ, back up from the idea of FAI and approach the problem from the perspective of "what the heck do we do about this issue of greater-than-human intelligence?"
[18:13] <eclecticdreamer> Tyler, you aren't open minded enough, but you will be..
[18:13] <eclecticdreamer> in the future, perhaps :)
[18:13] <eclecticdreamer> As these truths will unfold more..
[18:13] [MichaelA] Heh Tyler
[18:13] [MichaelA] I ran away before that
[18:13] <eclecticdreamer> to reveal there is much more than physical materialism..
[18:13] <Ben> MRAmemes: yes, and?
[18:14] <Ben> If we find we were wrong, we change our views.
[18:14] <cyborg01> There is no substitute for biocommunication.... unfortunately
[18:14] <FutureQ> I'm all for greater than now human intelligence WITHIN _humans_ through augmentation, even AI augmentation.
[18:14] <MRAmes> Ben: I agree with you: that to have meaningful communication, two people must have overlapping viewpoints.
[18:14] [MichaelA] FutureQ, what do you do if it turns out that it looks like pure AI is technologically easier?
[18:15] <MRAmes> cyborg01: biocommunication? You talk in riddles.
[18:15] <cyborg01> That's TylerE's term
[18:15] <th3hegem0n> Well i'm gone.
[18:15] *** th3hegem0n (~th3hegem0@c-24-98-162-125.atl.client2.attbi.com) has quit IRC [Read error: Connection reset by peer]
[18:15] <cyborg01> Scroll up
[18:15] <FutureQ> I want the AI to be raised, so to speak, to think it is me, my ultra ego, and identify with me, thus never wishing to do me harm, and the same for everyone else.
[18:15] <MRAmes> cyborg01: Fine... what does it mean?
[18:16] <cyborg01> TylerE: what does it mean?
[18:16] <TylerE> :D
[18:16] <eclecticdreamer> "The process of Backster's discoveries revealed in Primary Perception is
[18:16] <eclecticdreamer> required reading for anyone interested in how science could be done in a
[18:16] <eclecticdreamer> better world. Ironically, the humility with which he took on the task
[18:16] <eclecticdreamer> made him better qualified to do the work than prestigious scientists at
[18:16] <eclecticdreamer> leading universities who have vested interests in traditional science
[18:16] <eclecticdreamer> and have avoided this kind of research for fear of being ostracized by
[18:16] <FutureQ> I hold my ass and duck!
[18:16] <eclecticdreamer> their peers.
[18:16] <eclecticdreamer> shit
[18:16] <eclecticdreamer> erm, I mean oops
[18:16] <eclecticdreamer> :p
[18:16] <TylerE> Nice one
[18:16] <eclecticdreamer> I didn't no there was return codes :p
[18:16] <TylerE> "Biocommunication" was from that Primary Perception site
[18:16] <eclecticdreamer> know
[18:16] *** Guest9027475 (~bjk@adsl-61-190-251.bhm.bellsouth.net) has joined #immortal
[18:16] *** Guest9027475 is now known as BJKlein
[18:16] [MichaelA] FutureQ, that right there is a good start to asking questions about Friendliness
[18:16] *** Mode change [+o BJKlein] on #immortal by ChanServ
[18:17] <TylerE> ESP, all that good stuff
[18:17] * BJKlein waves
[18:17] [MichaelA] Heya BJK
[18:17] <eclecticdreamer> hey Bruce :O)
[18:17] <John_Ventureville> howdy, BJ
[18:17] <Ben> Hello Bruce
[18:17] * BJKlein returns from B-day celebration with parents in ATL
[18:18] <John_Ventureville> cool
[18:18] <BJKlein> [Age][30.001801 Years]
[18:18] <MRAmes> FutureQ: Many people want to 'raise' AI that way... but it won't work unless the AI is built exactly the same way humans are... and even then it won't be sure to work because it doesn't always work with humans!
[18:18] <TylerE> Bruce, birthdate?
[18:18] <Ben> Happy B-day!
[18:18] <TylerE> You best not say 4-10
[18:18] <BJKlein> apr 11 heh
[18:18] [MichaelA] It's probably impossible to build an AI that "thinks it's you" specifically, but "thinks it's a part of humanity" might be easier
[18:18] <BJKlein> double-fools
[18:18] * MRAmes observes BJ is OLD!
[18:18] <TylerE> k, that would've been frightening
[18:19] [MichaelA] Happy Bday Bruce
[18:19] * BJKlein throws away all mirrors
[18:19] <TylerE> We have enough 4-10's in this room
[18:19] <eclecticdreamer> anyway, that spam message quote was from http://www.primarype...om/reviews.html by Brian O'Leary - former astronaut and professor of astronomy
[18:19] <John_Ventureville> BJ, you are still a young man
[18:19] <FutureQ> First of all I see the pathway to human augmentation coming from cyber implants such as needed for disabilities. These evolve slowly, driven by economic market forces, to eventually bring superhuman intelligence, especially networked minds. If AI can even be developed it is said that reverse-engineering a human mind is the easiest way, so then it's a human mind after all.
[18:19] <BJKlein> 4-10 = MA, TE, ?
[18:19] <TylerE> *nod*
[18:19] <BJKlein> ah k..
[18:19] <TylerE> Mike, did you get something in the mail yet?
[18:19] <Ben> pish posh, a mere twinkle of potential in the time scales he pursues
[18:19] <TylerE> BTW, happy birthday, Bruce :D
[18:19] <BJKlein> tanks, i think
[18:19] <TylerE> ! :)
[18:19] [MichaelA] But FutureQ, those who reverse-engineer human minds might leave out critical parts, resulting in world-destroyers. Something to worry about, no?
[18:20] [MichaelA] Heh, no Tyler
[18:20] <John_Ventureville> it's when you turn *36* that the "old guy" feeling begins to set in
[18:20] [MichaelA] I'll keep an eye out though
[18:20] <TylerE> Mike, well what the heck
[18:20] <BJKlein> we still talking friendliness?
[18:20] * BJKlein we hairy primates
[18:20] <John_Ventureville> James doesn't trust A.I.
[18:20] <TylerE> I sent to the address you gave me last month
[18:20] <John_Ventureville> *old news*
[18:21] <Ben> Back to the homework grind. Great conversation. I wish you the best success in your presentation MichaelA!
[18:21] <BJKlein> James, if not trust AI, who then?
[18:21] <cyborg01> What if a war is started because of some trivial matter -- coarse grain morality cannot resolve that
[18:21] <TylerE> BJ, friendliness has become irrelevant. Didn't you hear? Some Rikard guy is soon to implement his real AI idea. We'll be put out to sea shortly, I think.
[18:21] <FutureQ> We'll see, I guess.
[18:22] <John_Ventureville> I should have said Sysop, instead of just A.I.
[18:22] [MichaelA] I haven't movd, Tyler
[18:22] [MichaelA] moved*
[18:22] <BJKlein> TylerE, heh... i've seen.
[18:22] <TylerE> all right, postal just being slow
[18:22] *** Ben (~Ben@c-24-12-187-77.client.comcast.net) has quit IRC [Read error: Connection reset by peer]
[18:22] [MichaelA] Thanks Ben
[18:22] [MichaelA] d'oh
[18:23] <John_Ventureville> if anyone is interested, I will send any takers a sample copy of Physical Immortality magazine
[18:23] <John_Ventureville> just p.m. me
[18:23] <FutureQ> It's not so much not trusting, in fact I'm certain it is possible to trust it to protect us quite fine. That is what I fear, I don't want mommy AI keeping me from growing.
[18:23] * BJKlein is starting to see hairy apes everywhere..
[18:23] [MichaelA] That's excellent, Bruce
[18:23] <BJKlein> from human to primates.. it's like wild discovery
[18:24] [MichaelA] James, well duh, none of us would
[18:24] <BJKlein> i want to hand everyone a banana
[18:24] [MichaelA] You think that Singularitarians are like, selling out to some AI God because we want everyone to be controlled?
[18:24] <cyborg01> MichaelA: what if a war is started because of some trivial matter -- coarse grain morality cannot resolve that
[18:24] [MichaelA] Cyborg, but an altruistic AI with nanotechnology probably could
[18:24] [MichaelA] Simply surround everyone with active shields, or some smarter solution I can't imagine
[18:25] <cyborg01> Not true: because emergent behavior involves groups of humans
[18:25] <cyborg01> That's orders of mag harder than modeling a single human mind
[18:25] <cyborg01> In the end you'll have to model the whole damn world every fucking particle
[18:25] <cyborg01> R.I.P.
[18:25] [MichaelA] If I'm an AI that thinks at a trillion times the rate of human beings, plus I'm superintelligent, plus I have the ability to send out strong nanotechnology, what makes you think that I couldn't prevent a human war safely?
[18:25] <TylerE> BTW, Mike, to see a bad example of interacting with someone who would seem to benefit from more reading, see the singularitarian group at Orkut.
[18:26] [MichaelA] Yeah, I saw :(
[18:26] <FutureQ> Well, yes I do, but not that they realize it. It's quite logical really. For a Friendly AI to fulfill its design to be "friendly" it must by logical extension protect us from harm. In so doing it certainly cannot allow us to do the dangerous bit of surpassing itself, can it?
[18:26] [MichaelA] Didn't even grant AIs agenthood, was just viewing them as technology
[18:26] <John_Ventureville> Cyborg, are you saying "psychohistory" ala Asimov will never become a reality?
[18:26] <TylerE> I asked for it with my "I don't mean disrespect..." I must have been drunk
[18:26] <cyborg01> A trillion times? don't make me laffff
[18:27] [MichaelA] Where did you post that, Tyler?
[18:27] <TylerE> "Singularity and God"
[18:27] <cyborg01> Time for meee to *sigh*
[18:27] <TylerE> Haven't replied yet to his latest posting
[18:27] [MichaelA] James, I think you're underestimating the amount of elegance that can potentially be used here
[18:27] [MichaelA] Start by not thinking about super-AI, but just "smarter-than-human, kinder-than-human intelligence":
[18:28] [MichaelA] Like Gandhi and Einstein fused
[18:28] [MichaelA] But better
[18:28] [MichaelA] Do you think it would be possible?
[18:28] <BJKlein> did Eric Snyder show?
[18:28] [MichaelA] Do, sadly
[18:28] [MichaelA] No
[18:28] [MichaelA] lol
[18:29] <BJKlein> ah, k.. heh
[18:29] <TylerE> Mike, apparently if you want to piss someone off, respond by saying you "don't mean them any disprect."
[18:29] <TylerE> or disrespect
[18:29] <FutureQ> But it is those that talk about the singularity as a state where the just-more-than-human, friendlier-than-human intelligence recursively advances itself to way beyond what you just claimed.
[18:29] <BJKlein> heh, me checks orkut...
[18:29] <John_Ventureville> Michael, how about Ralph Nader and Ralph Merkle fused?
[18:29] <John_Ventureville> "RalphMind Sysop"
[18:29] [MichaelA] Heh
[18:30] [MichaelA] This orkut thread is a mess
[18:30] <TylerE> Heh
[18:30] [MichaelA] James, "recursive self-enhancement" is a really specific idea with really specific justifications
[18:30] [MichaelA] http://www.singinst....OGI/seedAI.html has some
[18:31] [MichaelA] It justifiably sounds like nonsense until one examines the arguments
[18:31] <FutureQ> thanks
[18:31] <TylerE> "Like with a pocket calculator, this machine will become ubiquitous and indispensable..."
[18:31] <TylerE> Well, there seemed numerous warning signs in what he originally wrote
[18:32] <John_Ventureville> "RalphMind Sysop"
[18:32] [MichaelA] Quite a few
[18:32] <TylerE> So rather than responding to his specific questions, I suggested a different perspective in regard to what a greater differential in "smartness" means, and it went downhill from there
[18:32] * BJKlein slaps around TylerE (there, you feel better?) re: respect
[18:32] <TylerE> Aw phanks
[18:32] [MichaelA] I would ask him to stop imagining AIs as machines and more as independent agents
[18:33] <BJKlein> think, humans are a type of machine
[18:33] <TylerE> GOod start
[18:33] <BJKlein> we shall create our mind children
[18:33] <BJKlein> AI
[18:33] [MichaelA] Ah, well, this guy was imagining superintelligent AIs as PDAs
[18:33] <BJKlein> we're all made of atoms here
[18:33] <John_Ventureville> I imagine billions of AI Sysop avatars being "guardian angels" and even close friends to most of the population.
[18:33] <cyborg01> I'm sorry I was kind of extreme... it is possible to create some rules to govern humans, but never rigid, absolute rules
[18:33] [MichaelA] Bruce, you may be interested in http://www.accelerat.../notoptimal.htm
[18:33] <John_Ventureville> *human population*
[18:33] [MichaelA] I talk a lot about how humans are atoms :)
[18:33] <BJKlein> excellent.. thanks
[18:33] [MichaelA] John, I used to kinda visualize it like that
[18:34] [MichaelA] I still do sometimes, although I know my visualizations fall short, they're just approximations
[18:34] <FutureQ> gees now I have enough reading to go blind by next week.
[18:34] <TylerE> It can be weird chatting with someone who doesn't have the multi-layered background of singularity strategy and scenarios to their thinking
[18:34] [MichaelA] Yeah, sadly that's just about everyone
[18:35] <TylerE> You can't just throw out scenarios and expect it to be helpful to talk about them as if they're plausible or worth chatting about
[18:35] <John_Ventureville> Tyler, Michael, it can be so lonely at the top...
[18:35] <John_Ventureville> when surrounded by the ignorant masses!
[18:35] <TylerE> I was waiting for that :D
[18:35] <eclecticdreamer> You need people who build bridges between the groups..
[18:36] <TylerE> John, didn't mean that, sorry
[18:36] <TylerE> Just quickly typing
[18:36] <John_Ventureville> lol
[18:36] <FutureQ> You guys, that visualization is exactly what I'm talking about with my personal avatar ultra-ego onboard AI theme.
[18:36] <John_Ventureville> ok
[18:36] <BJKlein> sometimes i feel like the radio guy from the movie 'Alive'
[18:36] <eclecticdreamer> so they can translate their terms in levels each can understand.. :O>
[18:36] <FutureQ> Jees, can't type again.
[18:36] [MichaelA] James, a lot of CFAI is about building up better visualizations about what Friendly AIs could really do
[18:37] [MichaelA] "ultra-ego" is a Freudian term; I'm not sure how it corresponds to real physical patterns
[18:37] <eclecticdreamer> Normal folk without qualifications can do a lot of good for correcting & improving on singularitarian values, & encompassing more than the visions of the original founders or group
[18:37] <John_Ventureville> "The Dummy Transhumanists Guide to the Singularity!"
[18:37] <eclecticdreamer> just need a tool to connect them where they can intelligently communicate
[18:37] <FutureQ> Well, in a nutshell I want it to be more personal and at the individual level, increasing diversity not the opposite, and not centrally controlled anything.
[18:37] <John_Ventureville> *a very needed primer*
[18:37] [MichaelA] Normal folk without qualifications could jump into science and start making up their own theories, but that wouldn't be so good, would it?
[18:38] <eclecticdreamer> "normal folk" :p
[18:38] [MichaelA] James, I agree
[18:38] <BJKlein> would someone happen to have a chat log handy?
[18:38] <eclecticdreamer> Michael, but you can create a tool that organizes such ideas..
[18:39] <eclecticdreamer> who would be responsible for organizing it? the collective minds of those participating, & those that are voted to have some of the best "ideas"
[18:39] <eclecticdreamer> according to the whole system/group
[18:39] [MichaelA] Well, I don't know if the programmers would like that
[18:39] [MichaelA] They're specialists working in a technical area
[18:39] [MichaelA] There will be things they know that we don't
[18:39] <TylerE> If I were to start talking to a Buddhist about the seven factors of awakening without much knowledge of what on earth I'm talking about, she might be perturbed, or perturbed in her own helpful and compassionate way
[18:39] [MichaelA] About minds and goal systems
[18:40] <eclecticdreamer> Why would they object? They wouldn't if the tool was intelligently done
[18:40] [MichaelA] "seven factors of awakening"?
[18:40] <eclecticdreamer> there are many things specialists don't know that many others, do
[18:40] [MichaelA] Um, no
[18:40] [MichaelA] The specialists really know more
[18:40] [MichaelA] Especially about an issue this complex
[18:41] [MichaelA] What most people are focusing on is actually totally irrelevant to the problem at hand
[18:41] <TylerE> diligence, joy, ease, concentration, letting go, mindfulness, and investigation of phenomena
[18:41] <eclecticdreamer> they know more about their area, but not about other disciplines which may complement & synergize the limited perspectives of those specialists
[18:41] [MichaelA] FAI specialists are highly interdisciplinary
[18:41] [MichaelA] What field do they need to know more about, and why, would you say?
[18:42] [MichaelA] Everyone seems to think that they need to know more about whatever field they happened to be interested in when they encountered the idea
[18:42] <TylerE> Mike, you seem pretty good at asking people to increase the depth of their knowledge without pissing them off in the process; maybe you can share some of that :)
[18:42] [MichaelA] *But they never actually do it*!
[18:42] [MichaelA] Or rarely
[18:42] <John_Ventureville> *a very needed primer*
[18:42] [MichaelA] Nah, I think I sometimes piss people off :)
[18:42] <eclecticdreamer> It's not to know more, but to have the correct balanced perspective & information
[18:42] <eclecticdreamer> quality much more important than quantity
[18:42] [MichaelA] John, have you read all of SingInst's short intros?
[18:43] <BJKlein> Tyler..read "How to win friends and influenece people"
[18:43] <John_Ventureville> I have read several, but it has been a long time
[18:43] [MichaelA] Yeah, which is why quality of the people is more important than the quantity of people getting input into the process, and that's exactly how things are currently working, which I'm fine with.
[18:43] <TylerE> BJK: I've read that! :)
[18:43] <TylerE> Rather, :(
[18:43] <BJKlein> you're hopeless then
[18:43] <TylerE> ! :)
[18:43] <BJKlein> sorry
[18:43] [MichaelA] John, I would read them again if this seems worthwhile to learn about at all
[18:43] [MichaelA] There are several primers and short pages online
[18:43] <TylerE> Nah, I'm really not that bad
[18:43] [MichaelA] All the stuff is there
[18:43] <TylerE> This one guy is just a quandary
[18:43] [MichaelA] www.acceleratingfuture.com might be interesting too *wink*
[18:44] <BJKlein> see, there ya go TylerE!
[18:44] <eclecticdreamer> Michael, but a computer tool that organizes ideas, could allow all ideas to have "voice" & observance over the entire collective mind
[18:44] <FutureQ> Actually Michael, Freud never used the term "Ultra-ego" to my knowledge, I coined it as something above Freud's three, id, ego and super-ego.
[18:44] <TylerE> rationalizing?
[18:44] <eclecticdreamer> to be voted upon
[18:44] <TylerE> :(
[18:44] <BJKlein> never imply that you're anything but crap
[18:44] <TylerE> Oh, laugh
[18:44] [MichaelA] Ah, I see
[18:44] <eclecticdreamer> a cosmic consciousness forming over the Internet, so-to-speak O.o
[18:44] <eclecticdreamer> :p
[18:44] <BJKlein> always smear yourself with self doubt
[18:44] <John_Ventureville> Michael, Tyler, what is your response to Max More and others who feel the singularity concept is overblown and that they want to distance themselves from it?
[18:44] <BJKlein> and other humans will love you
[18:44] <FutureQ> Damn I need a spell checker!
[18:45] <FutureQ> Stupid dyslexia problem.
[18:45] <BJKlein> humans hold love higher than intellect
[18:45] [MichaelA] My response is that they probably visualize "smarter-than-human" intelligence to be more like Einstein than actual smarter-than-human intelligence
[18:45] <BJKlein> most humans (sorry)
[18:45] [MichaelA] They don't visualize discontinuity and seriousness also because they visualize the whole thing as unfolding from the usual Engines of Civilization
[18:45] <TylerE> I'd have to find out why Max believes it's "overblown" and what he means by that
[18:45] [MichaelA] When in reality, the emergence may not be so distributed and balanced as they hope
[18:45] [MichaelA] Or assume
[18:46] [MichaelA] Tyler, I think there's a piece on KurzweilAI.net
[18:46] [MichaelA] "Smarter-than-human intelligence" means the second it comes into existence, humans aren't the smar



