  LongeCity
              Advocacy & Research for Unlimited Lifespans


The Future of Death with Dr. James Hughes


2 replies to this topic

#1 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 27 May 2003 - 05:40 PM


The Future of Death with Dr. James Hughes
Immortality Institute Online Chat :: Sun. June 8, 2003
Location: Cyberspace - http://www.imminst.org/chat2

Dr James Hughes
http://www.changesur...com/Hughes.html

On June 8th 2003 at 8:00 PM (Eastern) the Immortality Institute held a moderated chat with Dr. James Hughes to discuss his essay, The Future of Death: Cryonics and the Telos of Liberal Individualism ( http://www.jetpress....lume6/death.htm ) and other topics related to immortalism.

#2 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 09 June 2003 - 01:53 AM

Chat Archive:

<BJKlein> Welcome Dr. James Hughes. It's a pleasure to have you with us. Thanks for taking the time to join us in the chat.
<jhughes> my pleasure really
<BJKlein> Before we get started, let me dispense with the formalities.
<BJKlein> caliban will be helping me with the questions from the other chat room
<BJKlein> The topic of discussion will center around Dr. Hughes' essay 'The Future of Death: Cryonics and the Telos of Liberal Individualism'.
<BJKlein> We'll also discuss topics concerning the prospect of physical immortality in general.
<BJKlein> About Dr. James Hughes: (from his homepage) http://www.changesur...com/Hughes.html
<BJKlein> James J. Hughes Ph.D. is a sociologist, bioethicist, health care policy analyst, and producer of the public affairs program Changesurfer Radio. He currently serves as the Associate Director of Institutional Research and Planning at Trinity College, where he teaches health policy in the graduate public policy program.
<BJKlein> Dr. Hughes... ya ready for our first question?
<jhughes> sure
<BJKlein> Just wondering…
<BJKlein> you seem to be amazingly active with your writings, producing shows for Changesurfer.com and acting as Sec. of the World Transhumanist Assoc.
<BJKlein> Do you ever sleep?
<jhughes> Ha - not enough - abuse of caffeine mainly
<BJKlein> heh
<jhughes> the people need me - no rest for the wicked
<jhughes> ...just joking about the people...
<BJKlein> could you tell us more about your driving force..
<BJKlein> heh understand..
<BJKlein> what gets you up in the morning?
<jhughes> I was just thinking about it today, that the main thing that drives me is the need to create a meaningful life in a meaningless universe. Sounds bleak, but it's the human condition. Without a vision, the people perish (of anomie).
<jhughes> In other words, it makes me feel happier to imagine that I'm creating better things for our descendants, even if it's all meaningless
<BJKlein> interesting..
<BJKlein> so you see the main goal of humans is to try and make sense of it all..
<BJKlein> and to make things 'better' for the future..
<BJKlein> but defining what 'better' is is sometimes hard to do it seems
<jhughes> Well I don't think there is a "main goal" but making sense of the universe and creating meaning, spreading life, these all seem to float my boat - I think one of the questions about posthuman values for me is whether these contingent values will make any sense to our descendants - Maybe navel gazing will seem a lot more plausible as a career
<BJKlein> heh, hard to say..
<BJKlein> we're going to open the discussion to one questioner at a time..
<BJKlein> I think Chestnut is our first..
<caliban> yes
<caliban> current roll 1) Chestnut, 2)hkhenson 3)PD 4)MRAmesx2
<Chestnut> Dr. Hughes-is there an event that started your drive -interest in this field of medicine, the drive to make a difference...and focusing on ethics and life/death versus other areas?
<jhughes> Not so much an event, but a context. I have always been interested in futurological and utopian speculation, and increasingly discovered the Left and the bioethicists (my friends and my colleagues) focused on the immediate past. I wasn't satisfied working to recreate an ideal version of 1970.
<BJKlein> have you taught in the past.. and do you teach now, per chance?
<jhughes> I taught bioethics and health policy at the U of Chicago for five years, and then my wife got a job here in Connecticut. So since 1996 I've been adjuncting in Sociology and Health Policy. Right now I teach a class in health policy at Trinity.
<jhughes> I wanna get the transhumanists to set up a distance learning course so we can start doing online instruction
<BJKlein> that sounds like a great idea
<BJKlein> I think I remember that extropy was thinking of that as well
<jhughes> its a lotta work, but we have a lot of potential instructors
<BJKlein> the Transvision 2003 conference seems to have a prelude to this..
<BJKlein> the Thr session?
<jhughes> I'm very excited - a dozen journalists, two film crews, 60 academics from all spheres
<BJKlein> yes, and this is the first TV in the US?
<jhughes> I think TV03 is gonna help make transhumanism a common term in academia and journalism
<jhughes> as you know, we're talking about having another one in Toronto next summer - so let's keep the momentum
<caliban> current roll in #immortal2- ** 1)hkhenson ** 2)PD ** 3)MRAmes ** 4) Utnapishtim ** 5) Chestnut (and then there is always BJ :-) )
<jhughes> About my article - just to summarize - tech will make death murky, and then it will make personal continuity murky - thoughts?
<BJKlein> yes.. this seems to be the start of something big
<hkhenson> Do you intend to try to be one of the people in the deep future? Personally that is.
<jhughes> Me in the deep future - I'd like to live a long time, but I'm kinda pessimistic whether I'll make it past 2040. Then there is the personal continuity problem - I'm not sure that the continuity of "me" will mean much past a certain point (1000 years?) - kind of the same issues Lazarus Long struggled with in TEFL
<hkhenson> Heh. I suspect you like everyone else will take it one day at a time. :-)
<PD1> What did you mean when you said "even if it's all meaningless?"
<jhughes> Exactly - I'm sure that if I make it to 3000 I won't mind living to 3001. But at a certain point, I think our backing up, copying, sharing, vastening of consciousness will just make the whole "personal continuity" thing seem pointless. Like gardening.
<BJKlein> quick question, i'm assuming you're not signed up for cryonics?
<jhughes> Meaninglessness - I think all value comes from the human mind, and that there are no absolute referents for value, just internal consistency in value structures. So there is no real meaning to life, just what we make of it (Spike actually said that nicer to Buffy, but anyway...)
<PD1> Why wouldn't the meaning that we "make" be real meaning, though?
<jhughes> Sure - its as real as it gets
<MRAmes> Your writings rely on there being time for the progress of public and political opinion, and you posit various types of non-technological 'Singularities'. How likely do you think it is that there *will* in fact be enough time before a technological Singularity make the issues moot?
<MRAmes> *makes
<jhughes> I'm not a hard-take-off Singularitarian - too Xian for my taste. I think its going to be a slower diffusion - faster than the steam engine and the plow, but slower than most here expect. And the political and ethical "singularities" I address will have time to develop. But there are always existential risks that might make it all moot (asteroids, bombs, etc.)
<MRAmes> True. I think you have said something similar to this before.
<MRAmes> Please be sure to update us if your opinion changes :)
<MRAmes> But... give us a likelihood, hmm?
<Utnapish> I noticed definite parallels between what you consider to be the consequences of technological change/progress and those of Francis Fukuyama.
<jhughes> Well, the likelihood of there being a fundamental problematization of our understanding of death is 100% since it's already happening. The brain death consensus is eroding, and the West doesn't know what to replace it with yet. The best bioethics can come up with is "Who should we treat as if they are dead, whether they are or not."
<Utnapish> both of you view them as resulting in a collapse of post-Enlightenment ethics
<Utnapish> why do you think these issues are so polarising even in the case of people who agree-as you and Fukuyama seem to- on the potential outcomes
<jhughes> Re: Fukuyama, yes. He is a smart guy, and I think he says some correct things. TH tech needs safety regs. Tech regs could significantly inhibit the technologies. And they do pose a fundamental threat to liberal democracy. But I think liberal democracy is more flexible, and based on different premises, than he does.
<Utnapish> thank you!
<Chestnut> Ethics-should an individual always make decisions when it affects his/her body (present and future body/mind)-In your mind who is best suited to analyse, decide and implement "ethical" issues-in the future with new tech surpassing even our understanding of life/death (which i think has already occurred) is our current governmental structure capable of answering these questions?
<jhughes> I think there are different causes of polarization. For the Left it is a consistent category error, targeting of tech instead of capitalism/patriarchy/etc.
<jhughes> For the right, they are trying to "conserve" traditional social values and institutions (and power structure) from the erosion of tech.
<jhughes> (welcome leon) re: individual body control, I think we need to enshrine far more bodily autonomy in jurisprudence: drug legalization, suicide, and so on. On the other hand, I believe in a democratic polity's right to regulate technology for safety and efficacy. You may have a right to shoot silicon into your face, but if I do I should be arrested.
<Chestnut> yes- i was particularly interested in the section of your article where you discuss issues regarding awakening after cryonics as either yourself or a new individual based on the information that was saved--
* Hugh_Bristic ***question*** "Personhood theory" seems to assume that we have a better idea of what consciousness is than we actually do. For instance, at what point exactly do we draw the line between the different types of life listed in the "Future Continuum of Consciousness and Rights." Is an infant, as listed in the chart, birth to 1 year, 1 year and 2 months, 2 years? How do you objectively make such a distinction-one which c
<jhughes> Drawing those lines is really hard, since it's fundamentally a phenomenological line - an experience had by people who can't remember or communicate them
<jhughes> But we can make parameter estimates based on neurology
<Hugh_Bristic> how?
<jhughes> re: post-neuro-reconstruction selves I think that will be a matter of individual and family choice - just as they are currently offered the choice of whether they would want to live "that way" (on machines) our families will be asked if we had any preferences about whether we would want a totally new personality to be regrown in our head and body - most people probably won't have given it any thought unless forced to.
<jhughes> re: parameter estimates (hey I'm a sociologist and ethicist, I take the docs guidance on this) but there is a lively debate about which neural connections in fetuses, animals and the brain dead have to be made in order for the being to experience "pain" (which itself is a problematic category) and then what the prereqs are for self-awareness
<hkhenson> On slow singularity: A recent Internet worm was found to be doubling in about 8.5 seconds. If that is speed of the final ascent to the singularity meat people are going to have a rough time keeping up unless augmented or something. Comments?
<jhughes> I think some things will move fast and others not so fast. Computing power may move fast, but I don't think the Net or planet will be gobbled by self-replicating AI in the next fifty years. For one thing, it ignores the agency of humans determined to stop it (including us).
<hkhenson> If it moves that fast . . . Human social cooperation evolved over a long time to something that works much of the time. I.e., humans mostly try to achieve status by being nice and cooperative. What do you think of modeling AI from human minds in this critical respect.
<hkhenson> ?
<jhughes> But I'm not really an expert on the tech spec on singularity - what strikes me more about the debate is that the same question is asked in every period of apocalyptic expectation - "Why plant crops if the Lord is coming?" I think we need to worry about the future of democracy, even if there are friendly or unfriendly AI in our future.
<Venezuela11> Venezuela, question for J. Hughes: what is your latest thinking regarding the singularity? By this I mean, scope, speed, timing? What impact will it have in the poorer regions of the world?
<jhughes> Let me re-phrase that - I am very concerned about the creation of a global order that diffuses the benefits of tech to the developing world as rapidly and equitably as possible. I think the prospect of a posthuman First World and human Third World is very disturbing.
<Venezuela11> So you do not seem to agree with Eliezer on his views about a "harder" Singularity. Maybe Eliezer can comment on his views later too..
* caliban kindly reminds the chatters that Dr.Hughes was not invited by Immist to speak on his views on "the Singularity"
<jhughes> I like the old futurist Delphi technique - ask a bunch of experts, tell them the results, and ask again. On the wta-talk we have polled members about the Singularity, and hard take-off is a 1/3, slow take-off is a 1/3 and no take-off (evolutionary) is a 1/3.
<Eliezer> What parallels do you see between the state of the world when it confronts these issues, and any historical periods? I.e., parallels between the present-day mood about, say, biotech and... 18th-century revolutionary France, or something.
<jhughes> Well I agree that spec about the Singularity is more rooted in real evidence of non-linear social dynamics than the millennialism of medieval France. But the social psychology tends to be the same - it leads to disastrous expectations that salvation will come from a deus ex machina, and that human planning and volition will have no role in building the desirable future. That way leads the Ghost Dance.
<caliban> Eliezer- you may pose a follow-up question if you wish?
<Eliezer> Are there any specific periods you'd recommend looking up to find interesting parallels? (I would, in all seriousness, recommend "Citizens" by Simon Schama, for example, which happens to be about the French Revolution.)
<caliban> jhughes? Are you still with us here?
<jhughes> I'm here - anybody wanna talk about the problem of personal continuity?
<caliban> Well I would have a question, then a word from BJ, and we could open the chat up if you agree?
<jhughes> K
<caliban> In your essay you go to great lengths to distinguish different stages of personhood and their legal protection. Does this adequately answer the problematic of handling cryonic patients? Especially considering that the cryonaut has lived a life that the embryo or the plant has not. Would an evolution of the law of trusts or succession not be the more fitting approach?
<jhughes> I can specify how my remains are to be handled in a trust, and as I say in the essay that will probably be the only available route for handling cryopreservation up until we start unfreezing people. But my article was partly a friendly critique of Alcor's position that the frozen should be seen as living patients - they are dead until proven living.
<caliban> That seems to be the problem does it not? E.g. in the interface between "freezing" patients while legally alive and assisted suicide. Do you see cryonics as a possible entry point into euthanasia?
<caliban> (that was my follow up ;-) )
<jhughes> If we had a society that was more tolerant of euthanasia and suicide, then people could enter death by freezing in a more advantageous way to their neurons. They don't have to be seen as living when frozen to be allowed to freeze themselves or be frozen.
<caliban> Thank you. *nods concerned*
<BJKlein> Dr Hughes, Thank you profusely for taking the time to join us tonight… Please feel free to stay with us as long as you’d like in #immortal . We’ll now declare this chat over in immortal2 after one last question:
<BJKlein> Could you possibly join us again in the future?
<jhughes> Sure.
<BJKlein> great!
<caliban> Dr. Hughes, it was a pleasure to have you here, good luck with the Transvision Conference and your future work!
<jhughes> Thanks
* caliban just regrets he cannot relate the applause of the audience in this medium
* BJKlein claps

#3 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 09 June 2003 - 01:30 PM

Unmoderated Chat Discussion:

<Venezuela> My real name is José Cordeiro, alias Yosé
<Utnapishtum> suma that I can handle. I'd want plenty of time to play around at each level of cognition
<BJKlein> welcome Yose
<Rotaerk> hola chicita :p
<Utnapishtum> roto: How old were you when you dropped Christianity?
<Rotaerk> I am 18 now
<hkhenson> interesting Rota, what flavor of Christianity?
<Rotaerk> I remember I quit when I was 16
<Rotaerk> my family still doesnt know
<Rotaerk> dont wanna tell them...
<hkhenson> I dropped out of it when I was 18
<hkhenson> it no longer made any sense to me
<Utnapishtum> was never in it
<hkhenson> but I do value religions in general
<Venezuela> Indeed, the immortalist meme is catching up. Later this year, we plan to invite Ken Sills from Imm. Inst to come to Venezuela and give several presentations, since he is fluent in Spanish:-)
<Rotaerk> I have a hatred for all religions myself
<hkhenson> they are highly evolved parasites to mutualist symbiotes
<Utnapishtum> I value some religious texts as works of literature
<hkhenson> I have been in a monumental battle with the scientology cult for years and yet I can say something good about it.
<Sumadartsun> I think religions are somewhat interesting, but not any good as a way of thinking about the world
<hkhenson> it is better than being sucked into a jim jones cult!
<hkhenson> or the aum cult . . . so far at least
<MRAmes> Why, are you now battling a Jim Jones cult?
<Venezuela> Jim Jones, well he came to Venezuela and finally died in Guyana, next to Venezuela...
<Rotaerk> being sucked into ANYTHING is undesirable
<hkhenson> true.
<Utnapishtum> Sumadartsun: They have no value in the sense of telling you anything about objective truth
<hkhenson> do you understand the attraction of cults? It is EP related
<Rotaerk> dont know much about EP
<Rotaerk> hey, is that a major in college yet?
<Rotaerk> or a class?
<Utnapishtum> but I do for instance think that the bible has tremendous value as art. There is some spectacular writing in it
<hkhenson> hmm no google bot here
<MRAmes> an oversight I'm sure.
<hkhenson> put "evolutionary psychology" in google and take the link to the primer
<hkhenson> MRAmes, try Keith Henson in google
<MRAmes> You mean, again?
<hkhenson> you will get more than you would ever want to know about why I am fighting scientology
<MRAmes> Ahh.. yes I see your point now.
<Venezuela> What time is it now in New York? In Venezuela it is 7:27pm
<hkhenson> same time.
<MRAmes> kh: Hadn't clued into your latest run-in.
<Utnapishtum> Keith: You were never personally involved with that crowd were you?
<MRAmes> "Henson is seeking political refugee status in Canada " have you got somewhere to stay?
<hkhenson> been here two years
<hkhenson> yes
<MRAmes> good good
<MRAmes> I'm in Ottawa
<hkhenson> ut, no, never a sucker for the cult
<hkhenson> but it has been inspiring
<hkhenson> like having cancer
<Utnapishtum> your compassion for the plight of the foolish and gullible is commendable...
<hkhenson> some people become extremely knowledgeable about their cancer, just like I learned a lot about scientology and cults
<Utnapishtum> scientology seems like such a blatant scam to me that I find it hard to muster much sympathy for those in its clutches
<hkhenson> I am about 100km south and west of Toronto, a place called Brantford
<MRAmes> Ha! Know it well.
<hkhenson> which is known mostly for being where Wayne Gretzky came from
<Utnapishtum> you get sick of hockey out there?
<MRAmes> Actually, quite a nice place.
<hkhenson> it is ok
<BJKlein> Venezuela...
<BJKlein> and others
<Utnapishtum> i guess... I think it is the least interesting of the major 4 sports myself by a long shot
<BJKlein> please join immortal2 as well...
<BJKlein> #immortal2
<BJKlein> Venezuela please goto http://www.imminst.org/chat2
<MRAmes> BJ: how come immortal2?
<BJKlein> and mIRC users type /join #immortal2
<BJKlein> this will be a moderated discussion.
<BJKlein> you'll not be able to type questions in 2
<BJKlein> but you will in immortal
<MRAmes> Ahh... gotcha
<Utnapishtim> this immortal 2?
<BJKlein> no..
<BJKlein> this is #immortal
<MRAmes> Keith, the 'resume' link doesn't work on your web page... is that deliberate?
<Venezuela> Keith, also the www.keithhenson.org/ has not been updated in 2 years. Why?
<hkhenson> no idea
<hkhenson> some web sites have not been updated for longer than that.
<MRAmes> 'deliberate'... I meant deliberate on your part, eh?
<EmilG> Hello, Eli.
* Eliezer nods in EmilG's general direction
<EmilG> I already mentioned this on #sl4, but http://woodyallenita...m/short-uk.html is a funny short story by Woody Allen you might like.
<Utnapishtum> whats the premise?
<EmilG> Men pay young women, not to have sex, but to have pseudo-intellectual conversation. :)
<MRAmes> Wow... does that cost more, or less than sex?
<Utnapishtum> more
<EmilG> It depends on the topic.
<MRAmes> figures
<Utnapishtum> I would think
<Utnapishtum> I think it is far easier to find women to sleep with than those willing to discuss goedel
<MRAmes> true
<BJKlein> irc users be sure to type /join #immortal2
<hkhenson> for sure
<MRAmes> Or even to recognise that Godel is a person's name.. :)
<hkhenson> finding people to talk godel of either sex would be a problem.
<Eliezer> I'm shocked that you would sleep with a woman before discussing Godel with her.
<hkhenson> lol
<Utnapishtum> that is a beautiful line
<MRAmes> Godel should only be discussed *after* sex!
<MRAmes> Philistine!
<Venezuela> MRAmes, you are right:-)
<EmilG> Is that Yudkowsky's Incompleteness Theorem?
<MRAmes> :D
* Eliezer says to Utnapishtim: "I note that you say 'beautiful', not 'efficient'."
<Eliezer> pretty good, Emil, you get bonus points for being able to come up with any pun at all along those lines
<Eliezer> I've been thinking and I still can't do it
<hkhenson> men are wired up to fuck anything that will hold still for it
<Utnapishtum> after sex the woman is even less likely to pay attention to pseudo intellectual ramblings. Absolutely no incentive to feign interest
<hkhenson> they don't *always* heed the wiring though.
<Eliezer> anyone here who does not already know the evolutionary psychology of the conflict of interest between the genders, raise your hand
<Utnapishtum> Henson: It is perfectly possible to be discerning in that sphere of life as in any other
<MRAmes> hk: And a woman who understands that is rare, a woman who can deal with it is... well, worth discussing Godel with :)
<BJKlein> welcome jhughes
<hkhenson> true, but men are not very choosy
jhughes is ~Java@cpe-68-118-188-13.ma.charter.com * irc.extropy.org
jhughes on #immortal
jhughes using irc.lucifer.com [127.0.0.1] Excalibur IRCd
jhughes has been idle 57secs, signed on Sun Jun 08 18:47:41
jhughes End of /WHOIS list.
<jhughes> howdy
<Eliezer> hi, james
<Utnapishtum> generally not, no
<BJKlein> jhughes.. can you try to join us in http://www.imminst.org/chat2
<BJKlein> that'll bring up the two chat rooms
<Utnapishtum> its the changesurfer himself!
* Eliezer says to hkhenson: "You *know* that's the kind of sweeping generalization laypersons are always chiding evolutionary psychologists over... it's not that "men are not very choosy" but that men have a strong innate tendency to be less choosy than women; what happens to that tendency is quite a separate issue."
<MRAmes> hk: you should be charging Eliezer for this discussion, n'est-ce pas?
<hkhenson> that's a way to put it, but I think most here would understand the more detailed explanation is implied by the shorter version
<Venezuela> Dr. J., we now have you cloned, with different names and in different channels:-)
<jhughes1> did I do that right (newbie)
*** Retrieving #immortal info...
<hkhenson> the real question is, can we get the neanderthals back
<Eliezer> you seem to have wound up with two copies in one channel
<Utnapishtum> MRAmes: Eliezer is not one of the pseudo intellectual hookers in the Woody Allen story! why should he charge
<MRAmes> Oh, right. This is actual intellectual, not pseudo-intellectual. Phew!
<jhughes1> Ok - just one of me now
<avantopia> hello james :)
<hkhenson> can you imagine a football team of neanderthals?
<avantopia> hello everyone
<Eliezer> I think the idea was that they wanted one of you in #immortal2 also, though
<Eliezer> what are you using to connect?
<BJKlein> http://www.imminst.org/chat2
<BJKlein> should do it..
<MRAmes> HK: *That's* hairy!
<Eliezer> ok, all works out now
<Sumadartsun> it worked
<Utnapishtum> I can't decide which chat room presence is the real me...
<BJKlein> great
<Eliezer> hk: I disagree that we should bring back the neanderthals, because I don't know how to handle the ethical issues associated with an unfinished sapient being
<BJKlein> So... just to let others know.. we'll have two chat rooms
<hkhenson> I want them for football players, not sapients
<Utnapishtim> that football team would not be too good on passing plays
<MRAmes> Eliezer: Then you have difficulty with the issues of lesser beings that already exist?
<hkhenson> the two are disjoint
* Eliezer notes to himself that hkhenson is at least partially evil
<BJKlein> if you're coming in from the webpage use http://www.imminst.org/chat2
<Utnapishtim> you'd have a lot of linemen but not a lot of quarterback
<hkhenson> :-)
<BJKlein> if your using irc type: /join #immortal2
* Eliezer says to MRAmes: "What lesser beings?"
<Venezuela> Ravi, you seem to have been cloned too:-(
<Eliezer> if you're cloned, one of your selves should type /join #immortal2
* MRAmes says to Eliezer: non-human primates
<hkhenson> I wouldn't bet on quarterbacks
<hkhenson> those dudes could wrap a hand all the way around a football
<Utnapishtim> you'd have to keep the playbook real simple too
<Utnapishtum> and there'd be a lot of unnecessary roughness calls
<hkhenson> they fought hand to hand with cave bears.
* Eliezer says to MRAmes: "I could be wrong, but I don't think they're sentient. If they got upgraded to sentience (there seems to be a widespread opinion they deserve that) then you'd have to carry out the upgrade in such a way as to resolve the ethical issues at that time."
* Eliezer says to MRAmes: "What worries me about the prospect of cloning Neanderthals is that someone might do it *before* the Singularity. Creating a permanent, perhaps even an eternal ethical dilemna. A person is forever, after all."
<hkhenson> actually the chances are we will recover a whole mess of our remote ancestors
* MRAmes says to Eliezer: What an interesting set of problems... for another (future) time perhaps?
<Sumadartsun> what is the argument behind neanderthals not being sentient?
<hkhenson> in fact, they had larger brains than we do
<EmilG> Eliezer: Aah -- is that the problem that a friendly Seed AI wouldn't sufficiently converge because not everyone would have the same "panhuman" layer, as it were?
* Eliezer says to Sumadartsun: "It's not the sentience I'm worried about, but the possibility that they don't have the potential for moral growth using their built-in set of emotions. I.e. they don't renormalize to anything interesting as they stand, just thermostat AIs that eat sugar forever."
<hkhenson> but if calvin is right, they might not have been as good at the kind of complicated thinking we do
* Eliezer says to EmilG: "Yes, that would also be a problem."
<hkhenson> since his thought is that thinking uses the same hardware that was evolved for projectile hunting
<EmilG> "they don't renormalize to anything interesting as they stand, just thermostat AIs that eat sugar forever." -- that could describe some neurologically abnormal humans existing now.
<Sumadartsun> Eliezer, I understand, but why do you not think they're sentient?
* Eliezer says to hkhenson: "I like Terrence Deacon's theory better than William Calvin's."
<hkhenson> making, of course, the baseball pitcher the ultimate in human evolution . . . .
* Eliezer says to Sumadartsun: "Neanderthals? I don't know whether they were sentient."
<hkhenson> where do you find Terrence deacon?
* Eliezer says to EmilG: "Yes, possibly. My point is that I don't want to make *additional* trouble."
<Sumadartsun> hm, you were probably referring to something else, then
* Eliezer says to hkhenson: "A book called "The Symbolic Species". Don't know if he has anything interesting online, but the book is good."
<Utnapishtum> how does a 95 mph fastball make you the pinnacle of the evolutionary food chain.. I'm afraid I don't follow?
<Sumadartsun> ah, yes; non-human primates
<hkhenson> it is the hitting of small targets.
<BJKlein> holla Chestnut1!!
<hkhenson> which involved very tight control over timing
* Eliezer says to hkhenson: "That's for the evolution of language. If you want the evolutionary psychology of general intelligence, see [url="http://singinst.org/LOGI/.""]http://singinst.org/LOGI/."[/url]
* Eliezer says to hkhenson: "That's me."
<Chestnut1> hola everyone :)
<BJKlein> once again: you'll wish to join #immortal2 to see the main discussion with Dr James
<hkhenson> which in turn requires a huge number of neurons in parallel to get the timing jitter down
<EmilG> Eliezer: I suspect that a Friendly Seed AI is going to need to depend on stuff as far up as the "Personality" layer of humans, and that it's not going to converge on anything without a fair amount of arbitrariness. That's why I can't confidently say that Friendly AI is more than a pipe dream.
<BJKlein> to do this from the webpage visit http://www.imminst.org/chat2
<Venezuela> Hello from Venezuela
<BJKlein> from irc type /join #immortal2
<BJKlein> we'll get started now...
<Utnapishtim> in that case barry bonds rather than roger clemens should be sitting at the top
<MRAmes> Ding! Ding!
<hkhenson> this, of course is mostly a joke
<Chestnut1> how are things, VZ?
<Utnapishtim> hitting a baseball is tougher than throwing it arguably
<Utnapishtim> I know
<PD> Bruce, I replied to that thread.
<Venezuela> Very bad:-(
<BJKlein> Ahh, Thank You.
<Utnapishtim> I am only debating it for fun
<hkhenson> but calvin does make a hell of a case for the sequencing equipment of throwing being the same as is used to form sentences
<Eliezer> btw, URL for JH's writing: http://www.jetpress....lume6/death.htm
* MRAmes passes around a tray of cool beverages.
<Sumadartsun> and PD, join #immortal2
<Chestnut1> why, VZ? the economy, political conditions etc...?
<Sumadartsun> (if you didn't know already)
<Venezuela> Because of Chavez, the country is in an enormous crisis:-(
<EmilG> Here's the real truth: Any seed AI is going to grind us up into atoms no matter what we do, period. Sorry; we're doomed. :)
<MRAmes> EmilG: Not if I have much to do with it!
<PD> How do you keep both chatrooms up at the same time? ;\
<avantopia> EmilG: how do you draw this conclusion?
<Chestnut1> do you see improvement coming soon? or a long period of the same?
<Sumadartsun> by using mIRC or imminst.org/chat2
<MRAmes> caliban: how do we formally ask a question
<MRAmes> ?
<EmilG> avantopia: Because I suspect that a sufficiently smart AI will come upon a philosophical crisis over the very idea of "convergence".
<caliban> We have two chat rooms tonight: #immortal and #immortal2 If you want to ask a question, make a comment or a point of information please type "/me ***question***" in #immortal Your name will be recorded and when your time comes, you will be "given voice" in #immortal2
<Venezuela> Chestnut, nothing good until Chavez leaves. And he doesn't want to leave, so there will be more blood:-(
<Chestnut1> I'm so sorry for you and your countrymen...
<EmilG> They're talking about Cezar Chaves' rule over Venezuela and how it's putting that country into the toilet.
<PD> philosophical crises... Ceza Chaves... Man am I lost.
<PD> Cezar even
<Chestnut1> Caliban: when, why and how did Hughes get staredand interesed in the field of medicine/dying/living etc...
<Chestnut1> "started"
<BJKlein> to ask a question of DR James type before ***question:
<EmilG> I saw them speaking Spanish, so I just thought I'd translate for the benefit of the channel. ;)
<BJKlein> and we'll copy paste into the chat
<Chestnut1> thanks EmilG ;)
<PD> I'm lost, I just read Chestnut's question as "medicine/lying/diving".
<caliban> @ Chestnut - you are first (after BJ) person on the list with your question
<PD> That's how lost I am.
<avantopia> EmilG: how do you conclude that the resolution to this 'philosophical crisis' is for a seed AI to ' grind us up into atoms no matter what we do' ?
<avantopia> I do not understand why that would be the case.
<PD> Why "philosophical crisis", anyway?
<avantopia> right
<Chestnut1> sorry PD i have this little laptop and cant type
<PD> No, you typed it right. I read it wrong.
<Venezuela1> Sorry I got disconnected before:-(
<caliban> We have two chat rooms tonight: #immortal and #immortal2 If you want to ask a question, make a comment or a point of information please type "/me ***question***" in #immortal Your name will be recorded and when your time comes, you will be "given voice" in #immortal2
<Hugh_Bristic> Venezuela... is that Yose
* hkhenson ***question: Do you intend to try to be one of the people in the deep future?
<PD> caliban, ask hughes what he means by "even if its all meaningless."
<Venezuela1> Yes, this is Yose from Caracas, Venezuela
<Hugh_Bristic> Hola
<caliban> I don't ask, YOU ask. current list 1) Chestnut, 2)hkhenson 3)PD
* MRAmes ***question*** Your writings rely on there being time for the progress of public and political opinion, and you posit various types of non-technological 'Singularities'.
<PD> Ok
<EmilG> Avantopia: I suspect it will do that, not because it wants to "grind up the humans", but because by that point it will have already amassed much nanotech as a convergent subgoal, and it will then just behave pseudo-randomly (see note 55 of CFAI).
<caliban> current roll 1) Chestnut, 2)hkhenson 3)PD 4)MRAmes
<Utnapishtum> ***question. I find it interesting that there seem to be parallels between your position and that of Fukuyama. You both seem to be in broad agreement that the logical outcome of technological progress is a collapse of the edifice of post-Enlightenment Western thought. Yet you regard this as good and he as bad. Why do you think this issue polarises views even when there is agreement on the implications?
* MRAmes ***question*** How likely do you think it is that there *will* in fact be enough time before a technological Singularity make the issues moot?
<PD> EmilG, why would it behave pseudo-randomly?
<EmilG> PD: See note 55 of CFAI, and also 56 and 57.
<Venezuela1> Chestnut, EmilG and Hugh_Bristic seem to be able to communicate in Spanish. Anyone else?
<caliban> current roll 1) Chestnut, 2)hkhenson 3)PD 4)MRAmesx2
<Utnapishtum> caliban you got mine on the list
<Utnapishtum> ?
<Hugh_Bristic> Actually, Hola is about it foe me. Sorry. I was just trying to make you feel welcome.
<Hugh_Bristic> foe is for
<Eliezer> EmilG: the old model doesn't really specify very well what happens under a philosophical crisis, the new model obsoletes the term 'philosophical crisis' and does specify what would happen, and that's not it
<caliban> current roll 1) Chestnut, 2)hkhenson 3)PD 4)MRAmes 5)Utnapishtim 6)MRAmes
<BJKlein> Hugh_Bristic.. you need to join us in immortal2
<PD> How do you open the notes page anyway?
<EmilG> Okay; scratch that.
<Eliezer> caliban: I thought MRAmes's 2 questions were one long question
<Hugh_Bristic> Done. What is the difference?
<Sumadartsun> http://www.singinst....AI.html#from-55
<MRAmes> Eli + caliban: that is correct... just background+question... thx
<caliban> current roll 1) Chestnut, 2)hkhenson 3)PD 4)MRAmes (loong) 5)Utnapishtim
<PD> EmilG, then I don't see why an AI smart enough to have philosophical crises would have a single supergoal of amassing nanotech. ;\
<Hugh_Bristic> Bye, Too confusing being in two chats.
<EmilG> Eliezer: If a FAI converges on something that depends solely on the panhuman and Gaussian layers of humans, what happens when after the Singularity, humans try to modify their own architecture? Renormalized morality goes out the window.
* EmilG imagines Eliezer's answer is going to be, (1) don't talk about "after the SIngularity", and (2) one of the things that an AI convergent upon the panhuman layer of existing humans can decide *is* what to do about posthumans.
<Eliezer> EmilG: it's not a voting algorithm, it's a way of cooperating today to establish a lasting system that includes a definition of those individual rights that are beyond democratic revocation
<caliban> current roll in #immortal2- 1)hkhenson 2)PD 3)MRAmes 4)Utnapishtim (and then there is always BJ :-) )
<Chestnut1> ****question: what is the biggest hurdle you see regarding the issue of "ethics" and advanced technology? who-in your mind- is best suited to analyse and make policy decisions regarding what should and should not be done-who draws the line?
<EmilG> And what happens after almost everyone no longer has a panhuman layer like the one we have today?
<Eliezer> EmilG: It is not my right - certainly not my personal right - to complain if other people choose to create moralities that lead them to wish me dead, so long as they cannot vote me out of existence
<Eliezer> here and now, though, is when we have to decide which aspects of the system are subject to majority vote in the first place
<BJKlein> hkhenson.. you can ask your question in #immortal2 now..
<BJKlein> if you wish?
<BJKlein> you have the + ;)
<Taskmaster> Eliezer: It's NOT your right to complain about people who wish you ill and may just act upon those wishes?
<EmilG> Task: He said as long as they can't actually act on those wishes.
<Taskmaster> ok
<Eliezer> Taskmaster: it's my right to complain, but not to interfere, providing that I am protected from direct harm (as opposed to people just bad-mouthing me and so on)
<Taskmaster> freedom of speech
<PD> Anyway, I still can't see for the life of me why hypothetical posthumans would rewire themselves to have radically different moral beliefs.
<Eliezer> freedom of thought, freedom of consensual self-modification, freedom of consensual interaction, freedom from nonconsensual violence
<Hugh_Bristic1> I can't seem to send messages to Immortal2 any idea why?
<caliban> We have two chat rooms tonight: #immortal and #immortal2 If you want to ask a question, make a comment or a point of information please type "/me ***question***" in #immortal Your name will be recorded and when your time comes, you will be "given voice" in #immortal2
<avantopia> you do not have + (voice)
<Utnapishtim> hugh bristic because you are one of the great unwashed
<Utnapishtim> like all of us here
<Sumadartsun> it seems to me this would quickly run into problems as described here, for example: http://www.daviddfri...Chapter_41.html
<avantopia> + is a permission given by ops (those with @ in from of their nick)
<avantopia> in front
<Taskmaster> I thought the chat did not officially start for another 35 minutes...
<BJKlein> hkhenson you can ask a follow up if you like..
<Sumadartsun> not that these problems are necessarily unsolvable
<Taskmaster> was the time changed?
<Eliezer> Suma: many of these questions go away in, for example, Greg Egan's Coalition of Polises, where everyone is running on protected memory, and individual rights are guaranteed by the local operating system
<BJKlein> there ya go PD
<BJKlein> you're up
<PD> Hmm?
<BJKlein> question for jhughes1
<Hugh_Bristic1> caliban: by / me, do you mean /Hugh_Bristic?
<Sumadartsun> I'll have to read that some time, then
<MRAmes> hk: he he... on *subjective* day at a time ;)
<PD> lol I forgot my question
<MRAmes> *one
<Taskmaster> BJ: Has the official chat start time changed??
<BJKlein> nope
<caliban> current roll in #immortal2- 1)PD 2)MRAmes 3)Utnapishtim 4)Chestnut 5)Hugh_Bristic (you may ask follow-up question if you wish)
<Eliezer> <PD> caliban, ask hughes what he means by "even if its all meaningless."
<PD> Ah
<Eliezer> i.e... what computation, exactly, is James Hughes saying returns the answer 'meaningless'? See, this is the thing that I failed to specify back in 1998, and that was, like, a big glaring philosophical error.
<hkhenson> does anyone know if Hughes is signed up with one of the cryonics organizations?
<Taskmaster> ask him
<caliban> Taskmaster it is 8:26 PM eastern
<Hugh_Bristic1> I still don't get how to send messages on immortality2.
<caliban> current roll in #immortal2- 1)MRAmes 2)Utnapishtim 3)Chestnut 4)Hugh_Bristic (you may ask follow-up question if you wish)
<avantopia> Hugh_Bristic1: you do not have permission to.
<avantopia> only those users with + or @ in front of their nick can.
<Hugh_Bristic1> Not much of a "discussion" then.
<avantopia> it is question time.
<jhughes1> Hughes is not signed up to be frozen - I plan to though in a couple years
<caliban> the discussion part will be later
<Hugh_Bristic1> oh
<MRAmes> Hugh: It's a *moderated* discussion... just a little structure, huh?
<Hugh_Bristic1> sorry. I'm a newbie with chat.
<PD> I can't help but wonder whether "absolute referents" is any sort of coherent notion in the first place.
<EmilG> My *biggest* fear about friendly AI is that convergence will *in fact* depend to some extent on the personality layer, and as the vast majority of humans are not transhumanists, the results will not be favorable to you or me.
<Hugh_Bristic1> my step-daughter spends all day on the shit though. I should ask her.
<BJKlein> MRAmes you're up
<PD> EmilG, whose personality?
<celindra> <PD> I can't help but wonder whether "absolute referents" is any sort of coherent notion in the first place.
<celindra> Why?
<EmilG> PD: http://singinst.org/...iendliness.html
<Sumadartsun> "too Xian for my taste" -- wonderful argument :/
<jhughes1> I don't think absolute referents are coherent, but they are at the base of a lot of previous thought on value and meaning, from Xian thought to Kant and Rand
<caliban> current roll in #immortal2- 1)Utnapishtim 2)Chestnut 3)Hugh_Bristic1 (you may ask follow-up question if you wish)
<PD> You needn't pummel me with links. My point was, I'm not aware polling was of growing into panhuman norms.
<PD> *part of
<hkhenson> Good for you James. Get the insurance lined up as soon as you can though.
<Venezuela1> Venezuela, question for J. Hughes: what is your latest thinking regarding the singularity? By this I mean, scope, speed, timing?
<Hugh_Bristic1> Utopian doesn't equal Christian.
<hkhenson> unless you are well enough off you don't need it.
<Eliezer> the problem is in giving a formally well-specified definition of what you mean by "absolute" or "objective". I now have a fair idea of what it really means to talk about, e.g., the "objective truth", and there is a nice logically consistent definition you can give of that, and it even coincides pretty much with our intuitions... but it is not trivial to make it well-specified
<Eliezer> what kind of magic wand do you wave over a sheet of brain cells to see how much truth it contains?
<PD> Not a deflationary definition, I hope.
<Hugh_Bristic1> model xj40
<MRAmes> jhughes: Good answer... of course.
<Eliezer> PD: no, not a deflationary definition - more like, if you think of Bayesian reasoning as an engine, then truth is the stuff the engine processes
<avantopia> Eliezer: will you write about this 'definition' of the 'idea of what it really means to talk about, e.g., the "objective truth"' ?
<PD> Anyway, I'm not sure whether "the objective truth" is a useful notion to apply to a sheet of brain cells. More like "learning", or "minimizing error", etc.
<Eliezer> avantopia: probably eventually
<avantopia> I would like to read your 'definition'.
<avantopia> excellent.
<caliban> current roll in #immortal2- 1)Chestnut 2)Hugh_Bristic1 (you may ask follow-up question if you wish) --- anyone else?
<LeonKass> I'm late !
<LeonKass> =(
<PD> lol
* BJKlein slaps around LeonKass
<caliban> We have two chat rooms tonight: #immortal and #immortal2 If you want to ask a question, make a comment or a point of information please type "/me ***question***" in #immortal Your name will be recorded and when your time comes, you will be "given voice" in #immortal2
* hkhenson question
<Utnapishtim> Utnapishtim provides Leon Kass with the human dignity of a conclusive end
<caliban> current roll in #immortal2- 1)Chestnut 2)Hugh_Bristic1 3)hkhenson (you may ask follow-up question if you wish)
<Eliezer> Brad Templeton: "Whether to grant machine intelligences human rights is, of course, a political decision, and under American democracy, of course, political decisions are made by the Supreme Court."
<Eliezer> sad but true
<hkhenson> heh heh
<hkhenson> that's not entirely true
<Chestnut1> don
<Eliezer> yeah? if qualiabearing AIs show up tomorrow, eight gets you three that the Supreme Court ends up making the decision
<Chestnut1> t you mean the congress ...then the Supreme Court would interpret under the Constitution??
<Eliezer> sad how the one remaining branch of government that still actually works is the one composed of unelected appointees with lifetime terms
<Eliezer> fyi, all, that's not the real Leon Kass :)
<Taskmaster> whew!
<caliban> Chestnut1: please repost your question in #immortal2
<hkhenson> eliezer, more likely the AI will have the power and be arguing over which if any of us to consider intelligent
<Eliezer> hkh: you don't need to tell me that, believe me
<Taskmaster> and can spell
<Taskmaster> : )
<hkhenson> or as sashcash said, "without us you freeze in the dark."
<caliban> current roll in #immortal2- 1)Hugh_Bristic1 2)hkhenson (you may ask follow-up question if you wish) -- We'll take TWO more questions
<BJKlein> After Hugh and Keith.. we'll have open question time
<Eliezer> I was simply speaking of a non-self-consistent state, "human-equivalent" AI, for purposes of the thought experiment
<hkhenson> skaskash
<hkhenson> sorry
<Venezuela1> Venezuela, question for J. Hughes: what is your latest thinking regarding the singularity? By this I mean, scope, speed, timing?
<Utnapishtim> what if true immortality is impossible and the AI therefore sees no moral difference between a long or a short interval before being annihilated
<Eliezer> Hugh, your question got cut off
<PD> How would you tell whether AI is "qualiabearing"? :(
<Eliezer> How do you objectively make such a distinction-one which c
<Eliezer> PD: ooh, tough question, give me a couple of months
<PD> Hah.
<Eliezer> actually... I'd settle for a way to know *for sure* that an AI was not qualiabearing
<PD> I'd settle for a coherent way of talking about qualia, for the time being.
<caliban> current roll in #immortal2- 1)hkhenson 2)Venezuela1 (you may ask follow-up question if you wish) -- We'll take ONE more question
<PD> Hughes has a David Parfit in his references. Who's that? ;)
<Utnapishtim> caliban. Stop being the faithful soldier and ask a question of your own!
<Eliezer> *** question: What historical parallels do you see between the current situation and any points in past history?
<Venezuela1> Caliban, do people tell you that your nick sounds too much like Taliban?
<PD> lol
<Chestnut1> no picking at the chat moderator -who is doing a superb job! :)
<Utnapishtim> lol
<PD> Burn.
<GEddie6561> Eli: wrt what?
<Eliezer> geddie: it's an official chat with James Hughes, in #immortal2, about http://www.jetpress....lume6/death.htm
<GEddie6561> not in here, then?
<BJKlein> GEddie6561 join #immortal2
<BJKlein> also
<PD> I liked the paper, by the way.
GEddie6561 is JavaUser@AC994D12.ipt.aol.com * irc.extropy.org
GEddie6561 on #immortal
GEddie6561 using irc.lucifer.com [127.0.0.1] Excalibur IRCd
GEddie6561 has been idle 28secs, signed on Sun Jun 08 19:45:13
GEddie6561 End of /WHOIS list.
<BJKlein> GEddie6561 actually goto http://www.imminst.org/chat2
<Chestnut1> i did also...brought up interesting questions to think about...
<caliban> current roll in #immortal2- 1)hkhenson 2)Venezuela1 3) Eliezer (you may ask follow-up question if you wish)
<Hugh_Bristic1> hkhenson: I think uplifting must accompany Singularity
<Hugh_Bristic1> something to help us keep pace, or else we face irrelevance.
<Eliezer> hkh: I don't think that, under a positive outcome, people need to "keep up" or fear being obsolete. Not everything that is smarter than I am, is necessarily a threat. The speed at which I grow up should be determined by my own choices about that matter... not fear of something bigger.
<PD> I still can't understand why people in cognitive science are at all interested in "qualia". :(
<Taskmaster> Eliezer: So you don't see postsingularity social darwinism going on?
<Eliezer> PD: because it's a term that refers to something we don't understand, and things we don't understand are cool
<Eliezer> Task: if there is postsingularity social darwinism there will be no postsingularity humans
<Eliezer> actually, let me just make that a definite 'no'
<caliban> current roll in #immortal2- 1)Venezuela1 2) Eliezer 3) caliban 4) BJKlein ***The chat will then open for general discussion!!!***
<PD> Eliezer, well that makes sense, I guess.
<Eliezer> because one thermostat AI turning the solar system into paperclips is not 'social darwinism' either
<PD> Of course, philosophers who go around talking about qualia think they *do* understand what they're talking about, and that bothers me. Because I don't.
<PD> Not even at the level of fuzzy introspective phenomenology.
<Guest> #operhelp
<PD> Open question time yet?
<Eliezer> no, I'm up next
<PD> lol @ caliban
<caliban> current roll in #immortal2- 1) Eliezer 2) caliban 3) BJKlein ***The chat will then open for general discussion!!!***
<Eliezer> sounds like a valid point to me, PD
<PD> All too valid.
<Eliezer> sometimes I wonder if Singularity discussion is not a high-entropy state of transhumanist discourse
<avantopia> :)
<PD> I generally think it is. ;)
<Utnapishtim> so do I
<Eliezer> despite all best efforts, no one is adopting strong, narrow definitions of Singularity terminology... so everyone uses it to mean whatever most appeals to them
<Taskmaster> Eliezer, you need to write "The idiot's guide to the Singularity" so everyone will be straightened out!
<avantopia> 'smarter-than-human intelligence'
* celindra suggests people start doing things on their own instead of waiting for Eliezer
<Taskmaster> the series has books on every other topic
<Taskmaster> He's better qualified to write it than I am, but I just might give it a try
<Utnapishtim> do we want idiots speculating about the singularity?
<Taskmaster> they need to be informed too!
<Sumadartsun> when they do, we want them to speculate in the right direction
<celindra> We don't need Singularity elitism, people
<Taskmaster> and that is why Eliezer should write the book I suggest
<hkhenson> ping
<PD> I actually think that's because what we're talking about is vague and amorphous. Re: what Eliezer said.
* Eliezer says to hkhenson: "Pong."
<hkhenson> my irc failed
<Venezuela1> Eliezer, why would discussing the Singularity be a high-entropy state?
<Taskmaster> I'm surprised Eliezer is looking for similarities with the past. I thought with the singularity there were no historical parallels and all bets were off.
<PD> Not the singularity. The present state of the world.
<Eliezer> vene: because it's easy, as long as you make the definitions vague, and everyone projects their own opinions onto the subject, so they can talk past each other indefinitely
<hkhenson> or the singularity is going on the same as always
<hkhenson> sloping up
<hkhenson> faster and faster :-)
<Eliezer> Task: because we have to deal with the Singularity in today's world, which requires historical literacy to comprehend
<Eliezer> btw, w/r/t Singularity timing:
<Eliezer> "When I was in the third grade somebody brought a rabbit to class for show and tell, a kid asked if it was a boy rabbit or a girl rabbit, nobody knew. The teacher thought she knew how to find out, the class would vote on it. Even at the time I wasn't sure that was the best way to determine the truth."
<Eliezer> -- John K. Clark
<Venezuela1> Eliezer, so what is your chortest definition of Singularity?
<hkhenson> amusing. there are not many people who would mention Ghost Dancers.
<Venezuela1> shortest
<hkhenson> but the point is a good one
<Eliezer> vene: "The Singularity: That which we should not be talking about, during a chat with James Hughes about the legal definition of personhood."
<Eliezer> hkh: James Hughes mentioned Buffy earlier, so we must consider the possibility that he is a fan of Shadowrun
<hkhenson> I have become far less concerned about trying to upgrade human intelligence through genes with the singularity looming
<jhughes1> what's shadowrun?
<Eliezer> hughes: RPG that mentions the Great Ghost Dance
<PD> I dislike the modded chats too much to contribute anything serious. :(
<hkhenson> I mentioned it in my 1987 analog article
<hkhenson> but a better example is the South African Cattle killing
<Utnapishtim> PD: What format would you like to see?
<PD> The usual.
<Utnapishtim> I sort of agree with you
<Utnapishtim> its not like it gets THAT hectic here to begin with
<Eliezer> What is the "problem" of personal continuity? Which problem?
<Utnapishtim> did your historical Q ever get answered?
<caliban> no
<jhughes1> If the continuity of personal identity is a fiction, then it problematizes the project of "immortalism."
<Eliezer> nope.. I only know a little history, but the little I know seems so enlightening that I would be interested if anyone has suggestions of good historical periods to check out
<Utnapishtim> if the continuity of personal identity is a fiction then I advocate suspension of disbelief
<hkhenson> James, I think that is just a non-starter as a question
<Sumadartsun> though not in the sense of wanting (trans)humanity to live forever
<PD> I think me, jhughes, and sumadartsun are the only people here who have seen Parfitian sanity.
<hkhenson> because it is like string.
<Eliezer> the parallels between today's Luddites and the Rousseauians of the French Revolution, for example
<hkhenson> no defined length
<jhughes1> In other words, what's the point of immortality if I bear the same relation to my 5000AD successor as I do to everybody else alive at that point?
<hkhenson> no defined amount of change makes one person into another
<hkhenson> I would venture to be that would be less change than from when you were one year old to now.
<hkhenson> to bet
<Utnapishtim> there is no meaningful connection any more with that past person other than the name
<Eliezer> hkh: that depends whether it's 5000AD objective or subjective
<Eliezer> Uth: continuity of memories, continuity of change, continuity of renormalization
<hkhenson> I suspect people's personalities are not going to change that much
<Utnapishtim> just like the Boston Celtics of today have pretty much no connection with the championship teams of the 1950s
<Eliezer> heck, continuity of qualia
<Eliezer> hkh: I suspect you're wildly wrong about that, btw
<hkhenson> could be
<Eliezer> the change might be gradual, but a gradual tremendous change is still a tremendous change
<Utnapishtim> but transferral of memories and experiences will erode the barriers that create personhood
<hkhenson> on the other hand, my plans include keeping goals for 1/4 million years.
<PD> I don't think "personal identity can only be made sense of in the context of social practices determining criteria for sameness of personhood" and "I bear the same relation to my 5000AD successor as I do to everybody else" is quite the same.
<ravi2> is dr. James still with us
<caliban> Eliezer- I enjoyed analysing the paradigm changes of the Lutheran revolution in southern France - but that's a classic
<Eliezer> james, what do you see as "the point of immortality"? For me the point of immortality is that if, at any point, I die, then (a) my wish to stay alive one more day has been violated, (b) potential has been destroyed
<hkhenson> which may take a lot of time in suspended animation.
<Eliezer> I want to live one more day; immortality is the proof by induction
<jhughes1> I'm here, but with divided attentions
<Eliezer> hkh: I kind of doubt there's anything interesting in the Milky Way outside Earth. Without evolution or intelligence, you don't get much interesting order.
<Sumadartsun> black holes have interesting properties
<PD> Black holes aren't very orderly.
<Eliezer> yeah, but not 100 years of objective travel time's worth of "interesting"
<Utnapishtim> there are likely to be some pretty spectacular views
<Eliezer> not worth the trip
<jhughes1> I don't see any point in "immortality" - but I do see the satisfactions of the illusion of personal continuity. But I'm sure that the wisdom and tech that would come with a couple hundred/thousand years of life would loosen my grip on the illusion of self.
<Utnapishtim> agreed
<hkhenson> well, I intend to give it a try
<Eliezer> not unless they can thread a wormhole there
<Sumadartsun> Eliezer: not sure, there have been proposals for achieving infinite computation using them
<Eliezer> Suma: then we'll make one at home
<PD> I don't think "the illusion of self" is a good idea.
<hkhenson> amusing factoid, I think we will have to duplicate people just to get enough to leave earth.
<Sumadartsun> we might not be able to make such a large one as is probably in the center of the galaxy
<Utnapishtim> James you said that you can't imagine your selfhood remaining intact in any substantial way past 2040 or so. Why?
<hkhenson> I doubt there are anywhere near enough
<Utnapishtim> that seems an exceptionally short time away
<Eliezer> I used to think space travel was very romantic and interesting, but today, physical distance just does not impress me
<PD> Because the social practices determining criteria for selfhood are likely to change radically by hat time.
<PD> Hat time is when everyone goes out and buys hats.
<Eliezer> it's the travel instinct, the frontier instinct, to boldly go exploring and all that... but it does not actually provide much opportunity for Fun
<Eliezer> Australia is interesting because there is life there; space is not
<PD> Well, I'm sure space walks are pretty fun.
<Sumadartsun> I'm not interested in space tourism, either
<hkhenson> eliezer, the problems I see here with or without the singularity are as much of a reason to get light years away as the draw of the interesting
<Sumadartsun> but there may be things with properties not found on Earth that can be used
<hkhenson> besides, you can't tell me that there is no life in the galaxy
<Eliezer> I can solve all those problems before you get to Proxima Centauri
<hkhenson> except here
<Eliezer> :)
<Eliezer> hkh, I find it highly probable there is no life in the galaxy except here; Fermi Paradox
<hkhenson> I agree there are almost certainly no techphiles in our light cone
<hkhenson> but the earth had life *long* before radio.
<Sumadartsun> hkhenson, if there is a nontechphile in our galaxy, then there are techphiles in nearby galaxies
<Sumadartsun> so, doesn't work
<jhughes1> re: selfhood - 2040 isn't my threshold for selfhood. I'm just pessimistic about living past the age of 80. But if I did, I think the issue of personal continuity would become fundamentally problematic in the 2100s.
<Eliezer> if life is that common, we should find ourselves in a very young galaxy
<hkhenson> sum, I rather doubt it
<Utnapishtim> You don't see radical steps forward in life extension by 2040?
<Venezuela1> James Hughes: why do you not expect to live past 80?
<Sumadartsun> hkhenson, do you think all intelligent civilizations destroy themselves quickly, or?
<Eliezer> James, I don't think I define "personal continuity" the way you do. I define it as neither dying, nor destroying my potential... not in terms of there always being someone similar to the person I am right now. The person similar to the one I am right now is *me*, Eliezer-2003.
<PD> I don't know what information you're using to determine how likely it is that there is life elsewhere in the galaxy. All I have are unsubstantiated hunches.
<hkhenson> sum, I don't know
<hkhenson> but I know how to find out.
<Eliezer> Since Eliezer-2003 already exists in 2003, why should it be necessary that Eliezer-2003 exist in 2100?
<Sumadartsun> Eliezer, the argument from young galaxies is an interesting one that I didn't know of
<hkhenson> James, how long did your grandparents live on average?
<hkhenson> i.e., is 80 reasonable without advanced medicine?
<BJKlein> jhughes1, you said "I don't see any point in 'immortality'" - do you mean that it's impossible?
<jhughes1> Yes I see radical life extension steps in the next ten years. But my Dad died of cancer at 59, and my mom had cancer in her late 30s, and died of a car accident at 43. I'm 42. My pat. grandmother died of cancer at 32, and my mat. grandmother and pat. grandfather at 70. So my clock started ticking about five years ago.
<hkhenson> I think his point is that the word may not have a very consistent meaning
<Eliezer> Or to put it another way... if Eliezer-1996 had fun solving a Rubik's Cube, it doesn't mean that I must go on solving Rubik's Cubes forever
<hkhenson> ouch!!!
<hkhenson> James if I were you I would not put off signing up.
<BJKlein> jhughes1, and you're not signed up for cryonics? right?
<hkhenson> cancer is a dire situation, but about ideal for being frozen.
<Eliezer> James: I had, I think, around five or six living great-grandparents when I was born... they died in their 80s or 90s. All four of my grandparents are still alive. Wanna trade?
<hkhenson> my parents died in the last two years. late 80s early 90s
<hkhenson> that gives me another 30 if I luck out.
<PD> Bruce should make the next chat about personal identity or something.
<hkhenson> and the scientologists fail to kill me
<BJKlein> do you resign yourself to the prospect of death and a Buddhist afterlife, reincarnation, etc.?
<BJKlein> just wondering jhughes1
<Eliezer> personally, I intend to have four living grandparents when the last star in the Milky Way is dead
<ravi1> James...have u signed up to be frozen upon ur death
<PD> BJKlein, I think his ontology of personhood precludes the idea of "the self" surviving indefinitely.
<Utnapishtim> Eliezer: how old are your grandparents now?
<jhughes1> I'm not comfortable signing up without my wife being more on board, both as a relationship thing and a financial issue (and I've considered the counter-arguments). I'm working on her, but she'll take some more time.
<hkhenson> reason I say that about cancer is that a substantial fraction of the 18 or so I helped freeze were cancer patients
<Hugh_Bristic> he's not even here now. ;-)
<Eliezer> ut: 60s, I think... maybe 70s
<hkhenson> james, it is small money to get life insurance
<hkhenson> at your age around a dollar a day will do it
<hkhenson> that's not term,
<ravi1> james-what do u mean counter-arguments
<hkhenson> term is even less expensive
<BJKlein> hkhenson, are you in touch with Ben Best?
<hkhenson> kids?
<hkhenson> often bj
<Utnapishtim> Eliezer: They still have a reasonable shot to stick around long enough then...
<BJKlein> you think he'd be interested in joining us sometime?
<Utnapishtim> my grandfather is in his 90s
<jhughes1> counter-arguments: that it's not that expensive, etc. But a healthy marriage also adds years to life, and my wife isn't there yet.
<Eliezer> Ut: I'll try and work fast, but I'd advise cryonics.
<hkhenson> hmm. james, you may want to get her to talk to others in the cryonics community.
<Hugh_Bristic> j: is your wife transhumanistically oriented?
<jhughes1> Thanks Eli - I say a little prayer for your deus ex machina every night.
<hkhenson> what area are you living in?
<BJKlein> Hugh_Bristic, she's very supportive..
<Taskmaster> Dr.J: Even if you don't get signed up for cryonics right now, you should at least get life insurance.
<BJKlein> Chestnut1, ya there?
<Taskmaster> Especially consi



