
Robin Zebrowski - AI & Qualia


3 replies to this topic

#1 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 27 July 2004 - 03:18 PM


Chat Topic: Robin Zebrowski - AI & Qualia
University of Oregon Ph.D. Candidate in the Dept. of Philosophy, Robin joins ImmInst to discuss Cognitive Science, Artificial Intelligence, and Qualia.

Chat Time: Sun. Oct 17 @ 8 PM Eastern Time [Time Zone Help]
Chat Room: http://www.imminst.org/chat (irc.lucifer.com port: 6667 #immortal)


Robin Zebrowski (Ph.D. candidate)
Interests: Embodiment, Cognitive Science, Artificial Intelligence, Qualia, Philosophy of Religion, Evolution.
http://www.uoregon.e...il/student.html

Robin presents at Transvision 2004: Posthuman AI: How Recognizing the Importance of the Body Will Change Things
http://www.transhuma...resenters.shtml

#2 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 23 August 2004 - 07:42 PM

Robin's update about TV04 - Toronto:

http://www.firepile....ekday/08082004/

#3 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 18 October 2004 - 01:07 AM

* BJKlein Official Chat Begins: Chat Topic: Robin Zebrowski - AI & Qualia
<BJKlein> University of Oregon Ph.D. Candidate in the Dept. of Philosophy, Robin joins ImmInst to discuss Cognitive Science, Artificial Intelligence, and Qualia
<BJKlein> http://www.imminst.o...&f=63&t=4027&s=
* BJKlein rubs hands
* RobinZ chuckles
<Cliff> Is a person necessarily a single, physically continuous entity or could a person theoretically have multiple physical manifestations?
<Foodle> what do you think about the paradox of Ben Best
<BJKlein> single entity
<RobinZ> I'd have a very hard time seeing how a person could have multiple physical manifestations. This hinges on our normal notion of "personhood" which is, admittedly, problematic
<BJKlein> Foodle, can you define Best's paradox?
<Foodle> yes one minute
<Cliff> Suppose a person is defined in terms of quality of sentient experience?
<Scott> Distributed identity or intelligence with redundant systems then is in the same category?
<RobinZ> I'd still want to insist the experience absolutely must be tied to a body (which isn't always the most popular view, but is definitely one I think can be defended)
<Cliff> Suppose two bodies are identical and have identical mind contents?
<Foodle> The Duplicates Paradox (The Duplicates Problem)
*** Joins: ct (~ct@c-67-171-36-123.client.comcast.net)
*** Joins: Gordon (~Gordon@64.192.74.132)
<Foodle> http://www.benbest.c...lo/doubles.html
<Foodle> I don't believe in Duplicates, same as Ben Best; that's why I signed up for CI
<Foodle> CI= cryonics.org
<RobinZ> Ahh, people love that one. I'm not convinced it's a possible scenario, but I'd still claim the "mind" is attached to the physical body and hence there are 2 different minds processing the exact same thing
* BJKlein nods to RobinZ
*** Joins: John_Ventureville (~John_Vent@24-117-201-237.cpe.cableone.net)
<BJKlein> Robin, why is understanding qualia important to creating AI?
<RobinZ> Well, qualia is really the very heart of what most people consider consciousness, and I can't imagine why anyone would want to bother building an AI that doesn't have that
<BJKlein> do we now have a good feeling for it, defined, etc?
<goomba> uh...they are good at math problems? :p
<Gordon> personally, I still don't understand what qualia is; I've seen plenty of explanations, but it never sinks in
<RobinZ> I also think our rationality and emotional processes are tied up with the survival of the organism, meaning that any AI worth its weight would need some sort of qualitative feedback
<Cliff> Could a non sentient entity be more intelligent and creative than a sentient entity?
<John_Ventureville> Eliezer Yudkowsky should be here
<BJKlein> who?
<Gordon> John_Ventureville: he's not online
<John_Ventureville> too bad
<BJKlein> :)
<John_Ventureville> lol
<RobinZ> I'd define it simply as the felt quality of experience. So when I have an experience of feeling sad, or seeing red, it's an experience of qualia (although I am not committed to the idea that there is actually some new process within the body that arises)
<RobinZ> Cliff, good question
<RobinZ> I guess it depends 100% on how you want to define both intelligent and creative
<John_Ventureville> so "self-awareness" is a simple description of qualia?
<RobinZ> which is sort of a trick answer, but I don't think I can answer it without defining both terms :)
<RobinZ> self-awareness is a little different
<BJKlein> self-aware = sentient
<RobinZ> that's simply knowing that there's something that it's like to be you. Whereas qualia is what the experience actually feels like to you. (Subtle, but I think importantly different)
<Scott> A machine that monitors its location, temperature, etc. isn't conscious but is self-aware to a degree
<BJKlein> qualia = redness of red
<John_Ventureville> there have been a near-infinite number of threads on the extropian list about qualia, but perhaps we will get somewhere now that we have a professional in the field among us
<RobinZ> there's still a lot of controversy about qualia, so it isn't surprising that people don't know what to do with it
<Cliff> The idea of atheistic evolution is a proposition that life was created without the benefit of any guidance from any sentient entity. Yet no one on earth has come anywhere near the level of creativity involved in the design of life on earth.
*** Joins: joshua (~who@c-24-20-117-141.client.comcast.net)
<Gordon> bah, no matter how many qualia threads there are on extropians, it will always be a mathematically small number, even if it seems large from the human perspective
<John_Ventureville> qualia = Mobius loop of all email list discussions!
<RobinZ> Interesting idea, Cliff. I'd be wary of the use of "creativity" there though
<Scott> nature's not creative...only adaptive
<John_Ventureville> Gordon, lol!
<RobinZ> Creativity sometimes implies directed activity or purpose, so I'm not convinced I'd call natural selection creative.
<John_Ventureville> ironic
<RobinZ> It sort of just explodes into every nook and cranny and the best ones survive!
<John_Ventureville> now it's time for complexity theory!!
<Foodle> In sleep, brain electrical activity and metabolism can be nearly as great as in the waking state
<BJKlein> RobinZ, do you think there is a god?
<Cliff> Evolution requires more than natural selection. It requires a substrate that has extensive properties capable of supporting it.
<RobinZ> Sleep is a puzzle that we'll be working on for a long time, I think
<mstriz> Only REM, not slow-wave sleep.
<Gordon> Robin: at some point qualia did not exist; what evolutionary path led to its creation?
<RobinZ> I'm an awe-filled atheist, which means I think the world is much bigger and more amazing than any religion has shown us :)
* BJKlein claps
<John_Ventureville> Robin, over the next twenty years what would you like to accomplish in terms of qualia research?
*** Joins: gustavo (~gustavo@pool-141-156-91-114.res.east.verizon.net)
<John_Ventureville> I'm awe-filled that I actually even exist!
<RobinZ> Gordon - I think that's exactly the question we need to answer to get AI more on the right path. I think it's related to the level of complexity in our bodies, but I also wouldn't rule out qualia in something like my cat, either. He clearly knows what it's like to feel hungry, etc.
<Scott> I'd like to follow up on one of your first statements: "Why would we want to create an AI without qualia?" Why not build one without qualia if we could? What would that look like?
<RobinZ> so I don't think it was one moment, but a gradual process by which qualia evolved to help us monitor our surroundings better
<John_Ventureville> Robin, cats can hold grudges and have definite emotional states
<Gordon> maybe this would be a better question: what kind of selection forces would cause qualia to arise?
<Cliff> What would be the absolute minimum requirements to create a system that would support artificial qualia?
<RobinZ> John, I think it would be really cool if in the next 20 years we were able to actually get enough of a handle on qualia to get a machine to have even the simplest of felt experiences (although I'm not so sure we could even conceive of this in 20 years time)
<RobinZ> I agree - my cat holds grudges every time I don't pet him enough
<John_Ventureville> the need to adapt to an environment which is growing more hostile?
*** Joins: daraknor (~daraknor@wbar8.sea1-4-4-094-214.sea1.dsl-verizon.net)
<John_Ventureville> or the need to better court females so successful matings will occur?
<Gordon> but the trouble is, I could program a simulated pet to behave the same way
<John_Ventureville> but it would be a *simulation*
<Gordon> if the petted counter is incremented often enough, act in a way that the owner will perceive as a grudge
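[A minimal sketch of the "petted counter" Gordon describes, in Python; the class name, threshold, and behavior strings are illustrative assumptions, not any real program:]

```python
# Minimal sketch of Gordon's "petted counter": a simulated pet whose
# grudge-like behavior is nothing but a branch on a counter. The class
# name, threshold, and behavior strings are illustrative assumptions.

class SimulatedPet:
    GRUDGE_THRESHOLD = 3  # pets per day needed to keep the pet "content"

    def __init__(self):
        self.petted_counter = 0

    def pet(self):
        self.petted_counter += 1

    def end_of_day_behavior(self) -> str:
        # Pure input-output: no felt experience, just a threshold check.
        if self.petted_counter < self.GRUDGE_THRESHOLD:
            behavior = "ignores owner and sulks in the corner"
        else:
            behavior = "purrs and sits on owner's lap"
        self.petted_counter = 0  # reset for the next day
        return behavior

pet = SimulatedPet()
pet.pet()  # petted only once today
print(pet.end_of_day_behavior())  # owner perceives a "grudge"
```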
<RobinZ> yeah, what John said - I think it developed because at some point simply processing incoming information wasn't enough (simple input-output) and a more immediate and quick form of processing needed to come into play
*** Joins: Natasha (~Natasha@dialup-4.230.147.98.Dial1.Houston1.Level3.net)
<BJKlein> Can we create AI qualia on a desktop computer?
<RobinZ> Gordon - you're asking the very biggest question philosophers of mind tackle - the problem of other minds.
* BJKlein waves to Natasha
<RobinZ> How do I know you aren't all really machines? Somehow I do know, but no one is 100% sure how yet
<John_Ventureville> howdy, Natasha!
<Gordon> Robin: so you're equating perception with qualia? (at least that's how your thing about input-output read)
<Natasha> Hello
<RobinZ> Bruce, I don't think so. I'm of the very radical (and I believe very correct) camp that believes the body plays an indispensable role in intelligence
<BJKlein> for feedback?
<Natasha> In what ways?
<RobinZ> Perception probably played a role in the development of qualia, yes, but they aren't the same.
<John_Ventureville> some people take this idea to the point of entering the paranormal
<daraknor> RobinZ: that is a Turing test for an AI really
<RobinZ> Yes, for feedback
<RobinZ> What is the turing test for ai? (sorry, I'm scrolling and trying to keep up but you're all asking such good questions)
*** Joins: Randolfe (~Randolfe@ool-44c1e97a.dyn.optonline.net)
<daraknor> RobinZ: If an AI mixed in with normal people communicating over terminals is indistinguishable from the humans, the AI wins.
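[A toy Python harness for the test daraknor describes: a judge converses over text with two hidden participants and must pick out the machine. Both responder functions are placeholder assumptions, not a real chatbot:]

```python
import random

# Toy version of the imitation game daraknor describes: the judge sees
# only text from two hidden participants, A and B, and must guess which
# one is the machine. Both responder functions are stand-in assumptions.

def human_responder(prompt: str) -> str:
    return input(f"(hidden human) {prompt} > ")

def machine_responder(prompt: str) -> str:
    return "Interesting question. What do you think?"  # placeholder chatbot

def run_trial(questions) -> bool:
    pair = [("human", human_responder), ("machine", machine_responder)]
    random.shuffle(pair)  # hide who is A and who is B
    for label, (_, respond) in zip("AB", pair):
        for q in questions:
            print(f"{label}: {respond(q)}")
    guess = input("Which participant is the machine, A or B? ").strip().upper()
    actual = "A" if pair[0][0] == "machine" else "B"
    return guess == actual  # True means the judge caught the machine

# The machine "passes" if, over many trials, judges do no better than chance.
```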
<Natasha> Robin -
<Natasha> Robin are you referring to the manner in which Deepak Chopra
<Natasha> references the intelligence in the body/brain
<Natasha> or are you referencing the ability of our
<RobinZ> oh, right. Sorry, i thought you were proposing a new turing test! I actually have very little faith that the turing test as conceived by turing himself would be a decent gauge for AI
<Natasha> senses to intake and output information which increases our
<Natasha> brain's cognitive processes?
<goomba> some people on IRC fail the turing test =p
*** Joins: Rotaerk (~Rotaerk@129.252.69.20)
<Cliff> Sentience is primarily an internal thing. Could a non-sentient Turing machine communicate to all external entities in a way that is identical to the way a sentient entity would?
<RobinZ> ahh, I've never read any Deepak Chopra (I have a distinct distaste for that sort of thing) so I'm not sure what sorts of things he says, but I don't mean this in any mystical way. The body is so important because we aren't passive creatures that simply take in information like a movie and process it and spit back out an answer
<daraknor> I'm not talking about Turing Machines, I'm talking about Turing's test to see if AI is successful or not.
<Natasha> Of course. But the body is not an intelligence, it functions to
<Natasha> provide the brain with information.
<RobinZ> Yeah i've seen people fail the turing test. it adds to my belief that the turing test is a bad test for AI. it's a good test for good programming and processing power, though!
<Randolfe> Human emotions can't be programmed into AI because AI is not truly human.
<Natasha> Goes to show that rating intelligence is fuzzy.
<Scott> Robin, could you provide your definition of sentience?
<daraknor> which Turing test are you talking about Robinz?
<John_Ventureville> I still have a hard time believing AI will never develop emotions on their own
<RobinZ> ahh, there's the rub. Natasha, i don't think it's so easy to separate the body and the mind. When I talk about the brain, I actually mean the embodied brain... They're entirely more interdependent than most people realize
<Natasha> Indeed.
<Natasha> John, Why?
<Natasha> Robin what do you think about John's view?
*** Joins: mike2050us (~mike2050u@lc-du220.cybermesa.com)
<Randolfe> John, an AI's emotions, separate from the human bodily environment, would be different and would betray the AI as being AI and not human.
<mstriz> Randolfe: Are you sure about that? Most mammals share the same emotions, and as long as consciousness is not substrate dependent, I see no theoretical barrier.
<BJKlein> Robin, any group working toward ai+body+qualia now?
<Natasha> Emotions stem from Love and Fear. If AI could love and fear, then it could
<Natasha> have emotions.
<daraknor> All mammals are also capable of reproduction and death, as well as emotions.
<RobinZ> I don't usually use the word sentience since it could mean so many things. I'd rather keep the terms more clear and easily defined, but I'd probably say sentience is self-awareness PLUS qualia
<Randolfe> But AI does not have a mammalian existence. It does not give live birth, nurse young, etc.
<John_Ventureville> first of all I admit to not having the vast & deep knowledge of the subject that some do, but I just see emotion (being anthropomorphic, I admit) as vitally important in perception, and I think that in time some AI would, either inadvertently or by actual choice as they self-upgrade, develop at least some human-like emotions
<Scott> I think that emotions are cognitive interpretations of physiological arousal.
<Natasha> Sense driven Scott?
<RobinZ> (I'm getting behind!) natasha - I think John asks an interesting question about AI and emotion - I do think AI will need some sort of emotional feedback loop to function in any way that we can recognize as intelligent, but it doesn't necessarily have to be directly reflective of our own human emotion
<Scott> Without the physiological component, they'd only have simulated emotional states.
<John_Ventureville> I see at least some future AI as being utterly fascinated by humanity (is that a display of emotion) and deciding to mimic our emotional responses to expand their own range of experience
<BJKlein> feel free to skip q's if needed, robin.. people can repost
<mstriz> Yes, but you're missing the point. Emotions aren't something special that only humans have, and any superordinate program (like an emotion) could, in principle, be written to consciousness on any substrate.
<Cliff> Sentience is very tricky to define because anything that is said about it could be imitated by a non-sentient process. I am not sure what the difference is between sentience and consciousness. However, I see a need to distill a definition of sentience or consciousness that separates it from all the cognitive processes that are intimately tied to it.
<Randolfe> If AI developed human emotions, they might be based on those supposedly "altruistic" genes that some animals exhibit--genes that benefit the herd over the individual and therefore are in the best interest of all.
<Scott> Natasha, in a sense :-)
<John_Ventureville> I am not saying powerful AI with emotions are necessarily a good thing
<Natasha> Cats and dogs developed human-like emotions when we
<RobinZ> Bruce, as to anyone working toward embodied AI right now, I'd say not really. the MIT AI lab under rodney brooks worked on it for a while, but they seem to have moved toward human-computer interaction more. there are a few independent researchers building androids in the hopes of something developing, but i think that's entirely too simplistic a hope
<Natasha> bred them
<John_Ventureville> just as emotions can derail human beings, emotions in powerful AI could lead to utter disaster
<Natasha> Forgive me Robin for being tardy in this forum, but could
<Natasha> you give a comment about what key concern or issue
<Natasha> motivates you today?
<Randolfe> With humans, emotions (like anticipation of danger) can be life-saving instincts based on unconscious thought processes.
<mstriz> John_Ventureville: But the proliferation and persistence of emotions suggests that they are adaptive more often than not.
<John_Ventureville> which is a good thing!
<Gordon> Cliff: I think that's just the question
<Cliff> Would altruism work contrary to immortalism by emphasising the survival of an evolving system of beings over the survival of individual beings?
<RobinZ> natasha - my main interest is in embodied AI, which I think is the single way to actually solve the problem of felt subjective experience in a machine
<Randolfe> Not all human emotions are adaptive. Take jealousy for example. Hate for another. Both are generally destructive.
<Natasha> Excellent.
<Gordon> Randolfe: who says they're not adaptive?
<RobinZ> randolfe - but that's still adaptive for the creature who experiences it.
<Gordon> if it's there, it is an adaption; that's evolution
<Randolfe> Cliff, I think a good argument could be made that (in the beginning) only a select and deserving few would actually get immortality.
<mstriz> Randolfe: Jealousy is adaptive in the only sense that it matters; it makes you covet what others have, thus increasing your likelihood of getting it, thus increasing your social status, thus increasing your reproductive success.
<John_Ventureville> Randolfe, but not always, jealousy could let a person know they need to be more attentive to their mate, and hate could motivate them to prepare defenses against an enemy who in the past horribly mistreated them
<daraknor> which means they are killed shortly thereafter Randolfe
<mike2050us> Robin, I'd like to get your thoughts on something that Rafal Smigrodzki said in an email exchange w/me re: Friendly Artificial Intelligence (FAI) -- (see next)
<Scott> Robin: Embodied AI? Sorry, but do you mean AI in an organic substrate?
<Randolfe> Jealousy is based on self interest and the desire to control another. It is not based on giving less than adequate attention to a loving mate.
<BJKlein> Robin, as you're an atheist, what do you think happens after death?
<RobinZ> Scott - I don't think an AI will *ever* develop as a desktop program
<mike2050us> "Conceptually, the FAI should be free of many of the limitations which prevent humans in positions of authority from effectively acting in the social interest. The FAI does not have a self-directed goal
<daraknor> haha BJKlein let's see how many parallel conversations RobinZ can engage in
<John_Ventureville> *B.J.'s big question!*
<Gordon> Randolfe: *bzzzt*, you're confusing psychological motivation with evolutionarily stable selection factors
<mike2050us> system, so its actions will not be self-serving (so public choice theory would not apply), and it could truly act in the distributed interests of all humans. The FAI might have sufficient computational power and the
<Scott> So are you talking about artificial human enhancement?
<mike2050us> ability to gather information, to not only understand humans in general better than humans themselves do, but even understand every single human better than she understands herself. By "understanding" I am mostly
<John_Ventureville> Robin is certainly a good sport about all of this!
<John_Ventureville> *I hope*
<John_Ventureville> : )
<mike2050us> referring to the ability to analyze volition - to answer the question "What do you really want?", thus overcoming the Hayekian limitation.
<RobinZ> bruce - i think that's about the end of it for subjective experience. end of the line, unfortunately (the only comfort in that is that i won't be able to feel the fact that there's nothing left to feel). (which is why i think the work you do rocks so much) :)
<mike2050us> Finally, its understanding of the world in general, from physics, to sociology, would be so far ahead of any human that completely new solutions would become possible.
<BJKlein> i've had the pleasure of meeting Robin, she's a great sport
<John_Ventureville> cool
<BJKlein> pearcings and all
<Natasha> Thank you Robin and good night to everyone.
<mike2050us> You might notice that these capabilities are of the kind that usually are ascribed to deities - but, being a techno-optimist, I don't hold it to be impossible." RS
<BJKlein> piercing
<John_Ventureville> ouch
<BJKlein> conversation and bodily
<Randolfe> "The law" as we hqve it in this society is human efforts to turn justice into systematic programs and rational decision making. Actually, human emotion would lead to more "just" o0utcomes in many instances.
<BJKlein> take care, Natasha!
<John_Ventureville> good night, Natasha
<Randolfe> good night Natasha
<RobinZ> sorry, still scrolling back - mike2050us - i'm not sure what the question is regarding that exchange... my only comment would be that any AI would necessarily need some amount of self-serving mechanisms built in, since intelligence isn't simply information, but also a degree of survival instinct (or at least it arises from such an instinct)
<Cliff> Suppose your precise physical form happens to occur again in a far distant universe in the far distant future. Would you not resume your qualitative experience there?
*** Joins: pakaran (~nathan@65-37-29-184.nrp3.roc.ny.frontiernet.net)
<pakaran> hi sorry for running late
<BJKlein> welcome pakaran
<John_Ventureville> howdy
<RobinZ> cliff - i would say no. i don't think a body double of me would in any way BE me.
<Randolfe> AI might be like a fire. It might just have a great hunger for knowledge, power, fuel and burn so brightly, so quickly, it acted against its long term interest.
<Cliff> Robin- does this mean that there is more to a person than the physical?
<RobinZ> ahh. well, we act against our long-term interests all the time, so I wouldn't have a hard time imagining that
<John_Ventureville> wow
<John_Ventureville> first qualia...
<John_Ventureville> then the classic identity vrs. xox's issue
<Scott> More than physical? The mind is what the brain does.
<Randolfe> Cliff, contrary to what most people believe here, an identical twin, born now or later, would share many parts of your genetic experience and capacities.
<mike2050us> Thanks, Robin. Your answer was clear even if the context of the quotation was not. You have expressed the same concern that nags at me: even a purported friendly AI must have some self-sense. This might create problems down the road.
<Cliff> Identical twins are physically very similar but mind content is still radically different.
<John_Ventureville> do we want AI with a sense of self-preservation?
<John_Ventureville> lol
<RobinZ> cliff - nope, not at all, but the chance that simply another body put together like mine would BE me is a bit different (and, if you really push the question, on some level, even atomistic, it really won't be my body just because it's put together the same way.) there's something mysterious about how our stream of consciousness works, and i don't think anyone (especially me) wants to pretend to understand it fully quite yet
<Randolfe> If we cloned an identical copy of one AI program and then pitted the two identical programs against one another, which one could possibly win?
<John_Ventureville> anyone here read about author John C. Wright?
<daraknor> I missed the beginning of the chat, so this may have been asked. If Qualia is unknowable, how do we program that in?
<John_Ventureville> *this blew me away!*
<John_Ventureville> he had quite an incredible experience
<Randolfe> John, what about John C. Wright?
<John_Ventureville> well...
<Schaefer> RobinZ: You believe that consciousness exists on the subatomic level?
<daraknor> Randolfe: black would win!
<mike2050us> Yes, John, I've read Wright's Golden Age trilogy -- loved it!
<John_Ventureville> he is the author of "The Golden Space" and several other classic extropian-style SF modern classics
<John_Ventureville> golden age
<John_Ventureville> thank you
<Randolfe> Daraknor, both would be black, equally black. They would be identically black.
<RobinZ> i wouldn't say qualia is unknowable - i'd actually liken it to something more physical, like digestion. just because i can never digest your food for you (since digestion is linked to the body in which those proteins are delivered) doesn't mean we can't understand digestion. qualia is like that: linked inextricably to your own body, but still something we can study in theory
<John_Ventureville> he was well known for being a die-hard atheist
<daraknor> Randolfe: black on black chess would be interesting.
<John_Ventureville> but last November he had a near-death experience which he will not reveal that much about, except to say he is now a believer in God & an afterlife and has become a practicing Christian
<Randolfe> Well, with black on black chess, we know that black would certainly win.
<BJKlein> Robin, do you think cryonics will work?
<John_Ventureville> I nearly fell out of my chair when I read this
<mike2050us> Wow! John, where did you hear this about Wright?
<John_Ventureville> yes
<RobinZ> schaefer - i wouldn't say consciousness is necessarily sub-atomic. i don't think we have any good theory of precisely where consciousness does its work (which is actually a bit depressing, given how long it's been studied) but at the same time, if someone wants to claim they're my body double, it seems like that would have to be true all the way down to the very tiniest bits, and not just the parts on the macro level that we're interested in
<John_Ventureville> Damien Broderick shared this on the extropian list just a few days ago
<Cliff> Does a person's fundamental identity change as the person undergoes physical and psychological changes?
<John_Ventureville> and he ended his posting with the words....
<John_Ventureville> "Could it happen to you??"
<RobinZ> bruce - i have a LOT more confidence in cryonics than in the idea that when i die some "soul" floats up to go through the pearly gates and live on a cloud with harp-playing babies! i'm not 100% convinced the technology is there yet, but i see no reason it can't happen
<John_Ventureville> lol
<Randolfe> RobinZ, fact is that some apparently identical beings can be quite different. Had the strange experience of seeing an "identical" clone of my own mutt. But three years later, she was turning white in a different pattern from my own dog and they were no longer identical.
<daraknor> John_Ventureville: Yes. I may die due to illness at any time. I'm not going to believe in god now just because I am sick.
<Schaefer> RobinZ: Would you object to undergoing a procedure that would replace, simultaneously, every atom of your body with a fresh new stock of atoms from some inanimate piece of matter, given that everything gets put in the right spot?
<Schaefer> RobinZ: By "body" I mean "brain", really.
<RobinZ> randolfe - no clone is actually identical
<John_Ventureville> daraknor, I am very sorry to learn that
<Scott> Won't we most likely figure out how to build AI before we figure out the nature of qualia? So AI will figure out how to program qualia before we do? Same for emotions. And then they'll decide whether or not they want to incorporate them into their programming.
<RobinZ> they may share dna, but to overestimate the role of genetics at the expense of development is to make a grave mistake, i think
<Randolfe> RobinZ, you are wrong.
<John_Ventureville> but you may never have the sort of N.D.E. John C. Wright experienced
<RobinZ> ahh schaefer - thats a great question.
<BJKlein> we all may die at any moment.. do everything you can to limit short and long-term risk to your life
<Randolfe> A woman who uses her cell and her egg to bear an identical twin will have an absolutely identical child conceived through cloning.
<daraknor> John_Ventureville: well i bring it up because I want to point out that I choose how events affect me. Wright chose to believe in something else. What happened to him?
<John_Ventureville> he had a heart attack I believe
<daraknor> lol
<Schaefer> For the purposes of this discussion we might want to be careful to distinguish between clones (genetic copies) and xoxes (copies down to the atomic level)
<daraknor> that is not serious
<John_Ventureville> every year thousands die from them
<Cliff> A clone is not truly identical. Genetics are identical but many details vary in the development process.
<RobinZ> in theory it seems like i shouldn't care if everything is being replaced bit by bit.. this is actually a problem that goes all the way back to Plato, i think. i'd rather not answer that since, as I said, we really don't know where consciousness is doing its work (although intuitively, i want to say i would undergo such a procedure)
<Randolfe> Heart attacks are genetics in action. Many health-nut runners, etc. die from them at an early age because that was their genetic destiny.
<John_Ventureville> BJ, what do you think of author John C. Wright's change of heart?
<BJKlein> seems humans will change if given the right opportunity
<daraknor> I'm diabetic and have to jam dull steel into my flesh several times a day. That is the easy part. The hard part is not being able to walk and having seizures from nerve decay. I still think Wright chose how he *wanted* to react to the situation.
<Schaefer> RobinZ: Especially considering that you _are_ being replaced bit-by-bit. The proteins in your brain are constantly getting recycled.
<John_Ventureville> daraknor, how old are you?
<RobinZ> right schaefer, sorry, i was referring to clones previously (which are only truly identical in science fiction - in reality the development process can trigger different protein expression even resulting in a clone that looks physically little like its "parent")
<daraknor> 27
<John_Ventureville> it sounds like you have a very severe case of diabetes
<Randolfe> It has been my experience that sick people actually "embrace" death in the end because it gives them relief from suffering.
<daraknor> been like this since 23
<John_Ventureville> I normally just hear stories like that when much older people have diabetes and for years have neglected it
<daraknor> Randolfe: many of us do, but some of us actually have something they want to do first.
<John_Ventureville> daraknor, do you have a cryonics policy in place?
<daraknor> yeah, i don't feel like talking about my health aside from counter to J.C. Wright.
<Randolfe> I had the power of attorney for a 33 year old friend with diabetes who decided to forego dialysis after losing his ability to walk, to see, and finally had kidney failure. I had to fight the doctors to let him die.
<daraknor> No Cryonics. I have to mooch food...
<Scott> Robin, I still don't understand your term, "embodied AI"
<RobinZ> schaefer - yeah, exactly. and yet the real puzzle lies in how i still have perception of a single subjective experience. that's the puzzle
<John_Ventureville> damn
<BJKlein> that was in OR, robin?
<John_Ventureville> so at this point you can't work?
<Schaefer> RobinZ: Do you think a mind would necessarily have to have qualia in order to believe that it does?
<John_Ventureville> sorry
<John_Ventureville> I don't mean to get too personal
<John_Ventureville> you just remind me of my friend James Swayze
<John_Ventureville> he is a quadriplegic from an accident which happened in his early twenties
<RobinZ> scott - when i say 'embodied' ai, i mean mostly that no one sitting at their desktop computer is likely to ever program a real consciousness, because there will be no meaning in the system and none of the symbols will be grounded. the creature needs to be in the world and able to experience and learn from the world itself to solve those problems
<daraknor> yeah it is possible to work yourself to death too...
<Randolfe> Yes, embodied AI would have to experience walking and the pain of falling and cracking its knees
<RobinZ> bruce - was what in OR? (i'm starting to doubt my irc skills in here - you folks are quick)
<Scott> So mobile and interactive but not necessarily organic, right? You think it must learn pretty much as we learn, hands on?
<RobinZ> schaefer - i think that a machine could report qualia without experiencing it, but i'd be wary of the word 'believe' there
<Cliff> Schaefer- I think a machine could be designed to "believe" it has qualia without actually possessing it, but possessing qualia makes belief in its possession much more likely.
<Scott> Robin: Richard Dawkins says that brains are busy representing objects in the environment. The human brain became sophisticated enough that it developed the ability to represent itself as one of the objects in the environment and this was the beginning of consciousness. Does this ring true with you?
<BJKlein> ah, the "33 year old friend with diabetes who decided to forego dialysis" is there not assisted suicide in OR
<RobinZ> scott - precisely. i'm not committed to saying it needs to be organic, because i think i'd be out of the AI game if i said that. i'm still committed to a belief that AI is quite possible, but its a lot harder and more complicated than most people think
<Scott> I hope I'm not misrepresenting his ideas.
<Randolfe> No. He had no quality of life. He hated suffering through dialysis. He was HIV positive also but that had not yet kicked in. He made his choice and I helped him keep it.
<mike2050us> Robin, if you think that a machine could report qualia without experiencing it, then why couldn't a human?
<Scott> Good question, Mike.
<BJKlein> ah, sorry.. that was Randolfe.
<BJKlein> ok
<RobinZ> scott - i think i'd say yes absolutely the brain has its own body-image that helps it monitor its processes. i've become wary of "representation" in recent years, so i'm not sure i'd necessarily want to use that term, but the idea you present from dawkins sounds very correct
<Cliff> Scott- I think that the Richard Dawkins idea of consciousness could have perhaps been done more simply without true qualia.
<Scott> Cliff: Does he use qualia to describe consciousness?
<RobinZ> mike2050us - a human probably *could* report it without experiencing it, but it would just be as if i told you i'm feeling sad when i'm not. it would be a lie, which just isnt terribly interesting
<Scott> There is something that it is like to think about qualia. :-)
<Cliff> He does not have qualia in his description but qualia is understood to be integral to consciousness.
<Schaefer> RobinZ: Do you believe that all lifeforms capable of genuinely believing that they have qualia (on the level that humans do) must have qualia?
<RobinZ> many people avoid the term qualia because it was used within philosophy to describe something much more problematic at one time
* BJKlein End Official Chat (feel free to stay)
<mike2050us> Robin, my problem with qualia is that there is no objective measurement for them. All we have is self-reporting: so far from humans, but in future probably from AIs as well. How would we know if they're lying?
<RobinZ> that's a great question schaefer. i'd have to say yes, but i'm not sure i can support that without some serious consideration first.
<Randolfe> BJK, what happens to conversations that continue after "official chat ends"? Don't want to seem ignorant, but are they kept on file also?
<Cliff> Signing off. Thanks much Robin.
<BJKlein> i post to imminst as long as the guest stays.. (sometimes a bit more if relevant)
<RobinZ> as much as i'm really enjoying this conversation (and i would love to continue it at another time) i sadly have to run off and translate about 10 pages of ancient greek now... but i would love to come back and chat more sometime. that last question is going to be on my mind for awhile!
<BJKlein> wonderful, wonderful chat.. thanks much Robin.
<BJKlein> please return!
<Randolfe> Good guideline. Congrats Bruce. I think I'll go back to ordinary life again. The atmosphere here is very balmy. I need simplicity sometimes. :)
<Schaefer> RobinZ: Thanks for your time.
<mike2050us> Thanks, Robin. BTW, it was nice meeting you at TV04! -- Mike LaTorra
<RobinZ> i will. thanks for having me! what a great group.
<Scott> Thanks Robin. Hope you come back soon.

#4 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 18 October 2004 - 12:23 PM

<John_Ventureville> Eliezer Yudkowsky should be here


John's a good guy, but always disrespects the guest if Eliezer doesn't show up. Not cool.



