
Consciousness As Scientific Tool


10 replies to this topic

#1 PaulH

  • Guest
  • 116 posts
  • 0
  • Location:Global

Posted 05 April 2004 - 01:52 AM


I think the time has come to acknowledge that consciousness itself, our minds, and our innate intelligence, all of it, is as much an instrument of science as any instrument ever invented. More so in fact. Now that we are on the verge of being able to engineer greater-than-human levels of intelligence, this acknowledgement could not happen soon enough.

One of my chief concerns with SAI (Self-Augmenting Intelligence) research is its almost exclusive reliance on a reductionist view of the mind/brain. This is an oversight that I think could prove dangerous when coding a moral seed AI with the goal of it achieving greater-than-human levels of benevolence and kindness. Although I am in 100% agreement with the intent of this goal, I think it overlooks the fact that no mind exists in a vacuum, but is part of a massively complex, globe-spanning network... a natural systemic intelligence, as Flemming says... the sum total of 4 billion years of evolution resulting in the biosphere/noosphere/societal complex. For example, it's impossible to understand the workings of an ecology if we only examine its individual parts. It's the interaction of those parts that gives rise to an emergent intelligence not found through reductionist means. An ant colony shows a global intelligence not determined by examining any one ant. Therefore, morality seems best understood from a systemic approach, rather than a reductionist approach. A seed AI, then, would seem to require sufficient interaction/experience in the outside world - embodiment - in order to better grasp this.

But there is another, more vital approach that needs to be taken into account: that of using our consciousness as the tool of exploration. Like using the telescope or microscope to study the outer universe, we can use our own consciousness as an instrument to study the inner universe, which includes how we perceive the external universe. I find it odd, then, that Artificial Intelligence (AI) and Intelligence Augmentation (IA) researchers are trying to improve upon an instrument (the mind/brain) without actually using that very instrument to determine how it works. This idea of reducing our understanding of the brain from the outside - examining its parts, neurons, glial cells and neurotransmitter functioning - without actually using the instrument itself (subjective experience) seems disconcertingly incomplete.

To me it requires both modes of examination and study to effectively improve upon it, because examining its parts in a reductionist fashion tells us little of the emergent intelligence we each experience subjectively. Therefore, a genuine IA or AI research program should include both an objective and a subjective framework. To me this is so obvious that I think it's the main reason so many people overlook it. Arguments about the limits of the human mind - its susceptibility to logical fallacies like availability bias, the conjunction fallacy, the Wason selection task, support theory, the representativeness heuristic, misperception of random sequences, and errors in expert judgement under uncertainty - although very important, are only half of the equation. All instruments have their limitations, including the human mind. But the human mind also has vast potential we've only begun to understand.

In my opinion, the one person who has done the most to map the regions and limits of innerspace in a rigorously scientific fashion is John Lilly, MD, PhD. Those of you wanting to gain a deeper understanding of how the mind works from an experiential software context, rather than an exclusively hardware context, should read his book Programming and Metaprogramming in the Human Biocomputer. It also looks like it's being republished under the title Programming the Human Biocomputer. A couple of other excellent books on the subject are Robert Anton Wilson's Quantum Psychology: How Brain Software Programs You and Your World, and his Prometheus Rising.

It is through this internal form of study that we can discover and access modes of knowledge and understanding of how human minds work - and in turn extrapolate how minds-in-general work - that could never be ascertained by reductionist means alone.

This field is wide open. My particular favorite areas of research at the moment are lucid dreams and out-of-body experiences (OBEs), primarily because they have been both extremely pleasurable and could challenge our current understanding of reality. I have had several of both. Most often in my lucid dreams I'm flying around, with or without a body, doing impossibly fantastic, very pleasurable maneuvers, from floating at any angle to jetting around like a UFO. Sometimes I'm flying at 40 mph over houses and futuristic buildings, and sometimes I'm flying thousands of miles an hour. And each and every time it feels completely real - actually more than real.

Recently, I've also been having moments of what I like to call "waking lucidity". This is happening with increasing frequency. It's very similar to the blissful feeling of having a lucid dream, but while being awake. It feels amazing. The last time was a few weeks ago while I was driving to the airport. I felt very much as if I were in a really lucid dream: it was more real than real, and everything seemed more crisp, alive and joyful. I knew I was awake, but I was simultaneously experiencing the feeling I get while having a pleasurable lucid dream.

In the OBEs I've had, I was flying over my hometown. On two of these occasions I woke up right after the OBE ended (from my regular afternoon naps), ran out, got in my car, and drove to the place I had flown to - and to my shock and amazement, I saw the same things as in my OBEs: the make/model/color of cars, people's faces in the park that matched(!), details like a crate leaning against one of the buildings. I was stunned. Up until that moment, I had been skeptical about OBEs, thinking they were highly imaginary fabrications of the mind; now I'm much more open and excited about the possibilities.

For example, is waking life just another type of dream? Or are dreams another type of waking life? In the grander scheme of things, does it matter? Perhaps lucid dreams, OBEs, NDEs, etc. are our consciousness slowly evolving, opening and unfolding to a much greater, multi-dimensional reality, of which our so-called "waking" life is just one limited way of experiencing it. Shamans, yogis, and psychonauts over the ages have been telling us that we need to wake up from "normal" consciousness, which in their eyes is sleeping. In either case, I think this idea that there is an objective, reductionist, materialist universe separate from the observer is nonsense. Quantum mechanics supports the necessity of an observer, despite what some in the field have declared to the contrary.

If you think about it, everything you know and experience intersects within your head. That would seem to render objective/subjective differences illusions of a primitive either/or Aristotelian mind. I believe such differences are transcended through increasing perspective: neither objective nor subjective, but transjective. As we increase our intelligence, first through reprogramming and metaprogramming our existing brains, and eventually upgrading the brain altogether, this thing we call the outside universe will expand with it. You could say the boundaries between inner and outer space will disintegrate as we embrace hyperspace.

Regardless of what your position is on these matters, I think our understanding of the mind cannot be exclusively acquired through objective means. I see no reason why subjective research cannot be conducted scientifically as John Lilly has done.

Having said that, I find it a bit dubious that the Singularity Institute is proposing to create an altogether "alien" mind that supersedes the human mind and is supposed to have the human mind's best interests at heart, yet has no direct experiential knowledge of embodiment or of the inner workings of our mind - knowledge that can only be ascertained by subjective and objective exploration and examination. At least with the IA approach, those of us who get intelligence augmentation can in turn work on advancing IA, and our well-being, because we are in the best possible position to understand it - since we are it - not some alien intelligence that germinated from scratch based on principles derived by AI scientists using a woefully incomplete reductionist model of the mind.

Edited by planetp, 05 April 2004 - 05:46 AM.


#2 macdog

  • Guest
  • 137 posts
  • 0

Posted 05 April 2004 - 03:17 AM

I couldn't agree with you more overall. In another post, someone I will not name said that once the singularity occurs, the world we know of as everyday will be "the concerns of bacteria". What a frightening concept! Why in the world should we expect an SAI to treat us any differently than we treat the bacteria in our toilet?

Personally, I am not afraid of this, and the reason is that I do not expect the Singularity to succeed. Certainly some AI will occur, but the emergence of this computer god strikes me as reminiscent of Polynesian cargo cults worshipping mock-ups of airplanes. I simply do not believe that the mind can be reduced to informatics. Why do I believe that? Call it intuition - and then tell me how this AI will achieve a similar ability. Just as physics has discovered that the universe is mostly composed of dark matter, I believe we will eventually discover the mind is NOT mostly composed of gray matter.

I have been told that my concentration on biologically oriented transhumanism threatens to be a distraction and to make only trivial advances. Frankly, I feel that SAI is the distraction, with frighteningly arrogant motives. What can absolutely be said is that, overall, cybernetic neural nets are the area that has made trivial gains. I have been following artificial life online and have noted that there are almost no new programs in the last five years. The recent autonomous robotics race held by DARPA was a comically spectacular failure: none of the robots finished the course, and the best covered only a handful of miles of it. Pathetic. In the mid-nineties a product called Creatures was released by Mindscape which promised to usher in a new age of virtual pets. In the first version, the Norns were unable to feed themselves without prompting, and would not go to sleep until so fatigued that they quite simply "died".

Since the fifties, science fiction has imagined the emergence of a supercomputer that would be unstoppable and take over the world. My question is this: even if you did build a computer smart enough to do so, why would it want to take over the world? Get a computer to show genuine affection - not just liking purple objects and disliking blue objects, but real affection - and I'll start to buy into this SAI. Affection is one of the simpler emotions, displayed by birds, reptiles and mammals. It really shouldn't be that challenging. Until then we are basically painting eyes on the prow of our boats and claiming they have souls.

You are not the information of your thoughts, you are the one experiencing that information.

#3 macdog

  • Guest
  • 137 posts
  • 0

Posted 05 April 2004 - 04:02 AM

I'm not going to claim to know much about SAI, but I believe one of the arguments for its inevitability is the increasing rate of processing power. I'm going to try to knock that down with similar, if brutish, mathematics.

Humans were initially endurance hunters. They quite literally walked their prey to death. They could cover ~50 miles a day. 2.5 mph

95,000 years later humans take to horseback. Changing horses they might be able to cover ~100 miles a day. 4 mph, 2X previous ability.

Steam locomotives come around in another 4850 years. Able to cover 600 miles a day. 25 mph. 6X previous ability.

Airplanes come around after only another ~50 years. Their capacity is roughly 18X that of trains.

So Apollo should have gone 54X the speed of a plane, right? ~40,000 mph. It didn't. Apollo traveled the ~250,000 miles to the Moon in three days, at ~3,500 mph.

Anyone who studies ecology realizes that nothing - no curve - rises forever. At some point you hit the inflection point; the steeper the curve, the greater the angle of incidence.
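The extrapolation argument above is simple enough to run as a few lines of Python. This is only a toy sketch using the round figures quoted in the post itself (the real historical numbers are rougher than this), so treat the output as an illustration of the curve's shape rather than as data:

```python
# Toy sketch of the post's extrapolation argument, using only the round
# figures quoted above (not real historical data).

base_mph = 2.5                       # the endurance hunter
steps = [("horseback", 2), ("steam rail", 6), ("airplane", 18)]

speed = base_mph
for name, multiplier in steps:
    speed *= multiplier
    print(f"{name:>10}: ~{speed:,.0f} mph ({multiplier}x the previous mode)")

# If each multiplier keeps tripling (2x, 6x, 18x...), the next step is 54x:
extrapolated = speed * 54
apollo_avg = 250_000 / (3 * 24)      # ~250,000 miles to the Moon in 3 days
print(f"extrapolated next step: ~{extrapolated:,.0f} mph")
print(f"Apollo's average speed: ~{apollo_avg:,.0f} mph")
# The naive extrapolation overshoots by roughly an order of magnitude:
# the growth curve bends toward an S-shape instead of compounding forever.
```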


#4 macdog

  • Guest
  • 137 posts
  • 0

Posted 05 April 2004 - 02:45 PM

Wow, I was really combative in these posts. Intellectually I stand behind what I said, but diplomatically they stink. Feel free to respond in kind and I will try to check my head.

#5 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 05 April 2004 - 05:21 PM

Paul,

Sorry for this really long post. I began this post by trying to make it as concise as possible, but it's already abundantly clear that I can't explain what I want to explain in a few concise words. There exist satisfactory answers to your concerns within the context of Friendly AI theory - I know because I went through many of the same concerns when I first began studying the field. The problem is communicating them, and that is not easy in the least - it usually requires actually getting the audience to read CFAI and the SL4 Wiki from beginning to end a few times. Friendly AI theory is designed to be deep enough to provide satisfactory answers to the concerns of practically any concerned person - be it Christian, Zoroastrian, materialist, nonmaterialist, or anyone else. Your concern seems to at least partially be a subquestion to the larger question of "would a Friendly AI still work in the absence of a universally agreed upon objective reality?", as considered distinct from the question of "would a Friendly AI still work in the absence of an objective morality?"

But first, I want to mention that I think you actually overestimate the degree of stereotypically strict scientific "reductionism" within the modern-day cognitive sciences. Many of the original reductionist theories turned out to be naive and have since been replaced with more systems-theoretic explanations or explanations based on multiple levels of looping feedback. Two examples of non-stereotypically reductionist thinking in the cognitive sciences would be the excellent paper "The Extended Mind" by Andy Clark and David Chalmers, and work by the eminent Valerie Gray Hardcastle, a thinker who tackles the intersection between evolution, neuroscience, and philosophy of mind (of which "consciousness" is an intimate part). I would also like to note that vast chunks of cognitive psychology make use of thousands or millions of conscious observations as data to formulate theories. Conscious observations are the cornerstones of many fascinating areas of research such as the cognitive science of dreaming. Theories of the "neuron equals transistor" class are long gone and obsolete.

The human brain is a particular kind of physical system that displays consciousness. The human brain is a physical system with certain functionally relevant parts, along with a huge quantity of complexity that isn't particularly relevant functionally, except as biological support structure. As Eugen Leitl puts it, "the noise floor of a biological system is sufficiently high to put the identity domain quite a few storeys up. You can model this very well as activity attractors in a nonlinear system. This is also easy to do with multielectrode grid recording and voltage-sensitive dyes in neuron culture." The "you" that cares for things, the "you" that engages in OBEs, the "you" that is writing the above words is made up of relatively high-level regularities floating on top of a fuzzy biological infoprocessing noisefloor. Whether or not consciousness is fundamental, this picture is very likely accurate - but it shouldn't need to be true in order for a Friendly AI to display self-improving benevolence.

In CFAI, at the end of http://www.singinst....ge.html#content, Mr. Yudkowsky points out that these regularities are made up of bounded complexity: "Friendship structure and acquisition are more unusual problems than Friendship content - collectively, we might call them the architectural problems. Architectural problems are closer to the design level and involve a more clearly defined amount of complexity. Our genes store a bounded amount of evolved complexity that wires up the hippocampus, but then the hippocampus goes on to encode all the memories stored by a human over a lifetime. Cognitive content is open-ended. Cognitive architecture is bounded, and is often a matter of design, of complex functional adaptation."

Acknowledging that all functionally relevant cognitive architecture (not cognitive content) is a matter of design of complex functional adaptation is not runaway reductionism or asserting that morality and intelligence operate in a vacuum. But it does signify that everything we recognize as "human" emerges from a relatively low-entropy set of design features that interact in various ways within various environments. It is precisely because humans are so complex and evolution is so stubborn that many of our external behaviors manifest themselves almost independently of surrounding contexts. You can feel free to call human-ness "the sum total of 4 billion years of evolution resulting in the biosphere/noosphere/societal complex", but I think it's worth pointing out that the citation of extremely long timescales is actually somewhat misleading within this context; in the past 50,000 years, humanity has totally transcended the design methods traditionally used by evolution to build up complexity, and concretely surpassed evolution's capabilities in many, many areas. But again, none of the above should need to be true for a given Friendly AI to work safely.

Friendly AI engineers are committed to ensuring moral continuity between present-day humanity and Friendly AI. If the complexity represented by a renormalized version of the panhuman set of complex functional adaptations is not enough to result in fully fleshed-out Friendliness, then a good Friendly AI would be compelled to seek out further sources of Friendliness content based on its interim knowledge of Friendliness. (Yes, really!) Remember that Friendliness acquisition is probabilistic and incremental; "A Friendship architecture is a funnel through which certain types of complexity are poured into the AI, such that the AI sees that pouring as desirable at any given point along the pathway. One of the great classical mistakes of AI is focusing on the skills that we think of as stereotypically intelligent, rather than the underlying cognitive processes that nobody even notices because all humans have them in common. The part of morality that humans argue about, the final content of decisions, is the icing on the cake. Far more challenging is duplicating the invisible cognitive complexity that humans use when arguing about morality." Friendly AI is about "opening a channel to your own moral substance to provide the AI with an *interim* approximation of the substance of humanity", as Eliezer puts it in an SL4 post.

What you're worrying about here is that a Friendly AI will miss some portion of the invisible cognitive complexity that humans use to choose between moralities and instead focus on the "stereotypically intelligent" qualities that people like me and Eliezer presumably focus on exclusively because our personalities superficially mirror them. What you're ignoring here is the programmer-insensitivity that is such an explicit part of the Friendly AI paradigm. The whole point of Friendly AI is to produce an AI that transcends the errors of the programmers and improves itself in an open-ended fashion, starting off in the human moral frame of reference. More than that - Friendly AI is supposed to transcend the errors of its parent civilization in the same way that a self-improving upload could. But what you seem to be asserting here is that programmers operating off of a materialistic philosophy of science model will inevitably produce an AI with an identical philosophy of science of its own. Or you might be arguing that the LOGI/CFAI model is so rigidly reductionistic/materialistic that if reductionism/materialism turns out to be wrong, the whole AI is trashed. That would be an incredibly dangerous design strategy. Friendly AI must output benevolent behavior regardless of which philosophy of science is correct - simply too much is at stake for our policy to be anything else.

I think of human society as a huge ongoing symphony, composed of billions or trillions of individual "tones", which would be the information processing of specific chunks of neuroanatomy or specific emotional modules. Friendly AI has the task of entering the symphony smoothly and elegantly, in a way that doesn't ruin the beat. One of Friendly AI's most basic design features - which you would know more about if you read CFAI again - is "external reference semantics" (ERS), which embodies the idea that the symbol is not the actual thing it is supposed to represent. When the programmers begin "educating" a low-level cognitive supersystem, they will not be feeding it "correct by definition" information. Rather, the programmers will be pointing towards particular external referents that embody probabilistic information about the external world. Since ERS is built into the very structure of the AI itself, no particular thought paradigm, philosophy of science, or particular morality can ever acquire correct-by-definition status.
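For readers who haven't been through CFAI, one rough way to picture the external-reference-semantics idea is in code. This is purely my own illustrative toy, not SIAI's architecture or anything taken from CFAI; the class and field names are invented for the sketch:

```python
# Toy illustration of "external reference semantics": the AI's stored
# description of Friendliness is a revisable hypothesis about an external
# referent, never a correct-by-definition constant. (Invented sketch only.)

from dataclasses import dataclass, field
from typing import List


@dataclass
class Hypothesis:
    description: str    # what the programmers said or pointed to
    probability: float  # estimated chance this captures the real referent


@dataclass
class ExternalReferent:
    """The thing the symbol points at - e.g. 'what the programmers really
    mean by Friendliness' - which the system never treats as fully known."""
    name: str
    hypotheses: List[Hypothesis] = field(default_factory=list)

    def assert_content(self, description: str, probability: float) -> None:
        # Programmer statements arrive as probabilistic evidence about the
        # referent; nothing typed in becomes true by definition.
        self.hypotheses.append(Hypothesis(description, probability))


friendliness = ExternalReferent("what the programmers mean by Friendliness")
friendliness.assert_content("minimize involuntary suffering", 0.7)
friendliness.assert_content("preserve human choice and growth", 0.8)
# Later evidence (observed human behavior, corrected programmer statements,
# the AI's own reasoning) keeps revising these weights rather than freezing
# any single formulation into the goal system.
```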

Here I'd like to emphasize again that contemporary cognitive science certainly does study both internal and external facets of the human system in order to come up with better answers to the question "what is a human mind/brain?" Only the S-R behaviorists focused so exclusively on external, objective behaviors in the sense that you are arguing. Modern-day cognitive science observes a wealth of data directly or indirectly related to direct conscious experience such as neural activation patterns (which are proven to coincide with certain types of conscious experience), personal reports, and many experiments based on combinations of information from several different sources, not all of them "objective" in the starched-suit traditional "materialist" sense. Also, as a sort of side-note gedanken, couldn't an extensive copy of a human being based entirely on objective information about brain organization still give rise to another full human being with a complex subjective world?

We are interested in obtaining the concrete result of an AI that displays human-surpassing levels of benevolence and kindness. We will do whatever it takes to accomplish this - produce a physical system that shares a major portion of the underlying complexity that humans use to choose between moralities and uses it to output actions that are clearly altruistic, including modifications to the goal system that preserve and improve the interim model of Friendliness. As Peter says, we currently have only one N that does this: human beings. Most of the AI's education will consist of programmers pointing to certain human behaviors or principles that represent probabilistic sources of Friendliness content - it's the programmers' intent that the AI is trying to seek out, not some stiff interpretation of the programmers' words. If the programmers' intent points towards real altruism, then a low-level Friendly AI will feel motivated to absorb the causes underlying that real altruism, even if it requires extensively scanning all our brains.

Friendly AI is not a lifeless arrow being fired into the dartboard where the bullseye is genuine altruism and kindness; Friendly AI is a self-guiding arrow that tracks our deep intent, a "metawish" - a way of saying to an AI: "When you grow up, grow into what we would have made you to be, if we were as smart as you." This is a critical point.

Say we have a low-level Friendly AI that isn't conscious yet, but it still has a goal system - it still has differential desirabilities that lead it to behave in certain specific ways. One of its desires is that it wants to acquire real Friendliness, this fuzzy thing the programmers are pointing to. Say it considers the idea of an "altruistic human", and traces part of the cause of its altruism back to this thing called "subjective experience". At that point, "subjective experience" will quickly become tagged as a potential source of Friendliness content, and the Friendly AI should be expected to self-redesign along the lines necessary to acquire conscious experience and begin acquiring the "direct experiential knowledge of embodiment or the inner workings of our mind, that can only be ascertained by subjective and objective exploration and examination" you are talking about. The very thing that motivates a Friendly AI to seek out new sources of Friendliness content is uncertainty in supergoal content: see 3.4.1.4: Deriving desirability from supergoal content uncertainty.
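As a rough numerical illustration of that last idea - my own toy sketch with made-up probabilities and values, not the actual CFAI formalism - uncertainty about what Friendliness really is gives positive expected value to anything that promises to reduce that uncertainty:

```python
# Toy sketch of "deriving desirability from supergoal content uncertainty":
# because the AI is unsure what Friendliness actually is, candidate sources
# of missing Friendliness content inherit positive expected value.
# All numbers below are invented for illustration.

candidate_sources = {
    "observed altruistic human behavior": 0.6,  # chance it holds missing content
    "subjective experience / embodiment": 0.4,
    "further programmer clarification":   0.8,
}

value_of_missing_content = 100.0  # arbitrary units: cost of a gap in the model
investigation_cost = 5.0

for source, p_useful in candidate_sources.items():
    expected_gain = p_useful * value_of_missing_content - investigation_cost
    print(f"{source:<38} expected gain: {expected_gain:6.1f}")

# Anything with positive expected gain - including "subjective experience",
# once it is traced as a cause of human altruism - gets flagged as worth
# pursuing, which is the mechanism described above.
```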

I genuinely hope that the thing you said before about never pointing you to any links from CFAI or Eliezer's work still holds. This work contains the answers to your concerns and worries! If you had already read Eliezer's explanations of external reference semantics, deriving desirability from supergoal content uncertainty, wisdom tournaments, probabilistic supergoals, the constraint of programmer-insensitivity, and the massive contingency-loaded set of strategies underlying Friendly AI design, then your concerns would be phrased differently, if they still persisted at all. It is SIAI's job to create an AI that reliably produces altruistic behavior. If you believe that this requires that Friendly AI designers read John Lilly's work, research on OBEs and astral projection, and so on, then I would argue that you are setting unfair standards. The AI needs to be designed with an architecture such that it doesn't matter whether the designers are personally interested in OBEs or John Lilly. Today you feel that much of science is contaminated by the strict materialist paradigm, but what about the potential existence of thousands or millions of equally strong delusions that nobody even realizes the existence of, delusions that plague you and me both, and all other humans? A Friendly AI needs to transcend all of these reliably. If our Friendly AI model isn't capable of operating in a world that isn't exclusively materialist, how will it end up being stable in the face of other potentially massive unknowns? The universe might have a lot of counterintuitive surprises in store, and a Friendly AI simply needs to be able to handle them at least as well as a wise human-derived upload would.

#6 macdog

  • Guest
  • 137 posts
  • 0

Posted 05 April 2004 - 11:30 PM

An interesting post, Michael, but to my mind flawed to the extent that it ignores some of the underpinnings of altruism. We like to believe that, as humans, our tendency toward altruism is born of metaphysical goals, but it is actually our own genetic supergoals that underlie this tendency. In fact, our altruism comes from the same motivator that often moves our species to the most evil acts. The two are not easily separated, and certainly any AI will realize that even if we don't. If humans were the only species to show any tendency toward altruism it would be an entirely different matter, but they aren't.

Various species of corvids, considered among the most highly evolved of all passerine birds, have what are called "helpers-at-the-nest". The primary mated pair tend to stick to a well-established nesting and foraging area, usually established after they have successfully fledged one or two groups of nestlings. Because crows in particular are large, intelligent birds, they need significant areas of habitat for these fledgling nests. The vast majority of male crows will never pair-bond, and group together with other unbonded males into groups called "murders". This name is quite apt, as murders, beyond foraging together, seem to have as one of their primary goals killing the juveniles of a pair bond.

As an undergraduate I did a year-long observation of a bonded family, and witnessed a murder attack a juvenile while one of the pair bond was teaching the young to fly. It was as coordinated an attack as can be imagined, and forsaking the scientist's credo not to interfere, out of my own sense of altruism I interfered in the attack by mimicking a "call to arms" call (I have learned several different calls). I must note that afterwards I had a much easier time observing the behavior of the group, as they went on to make zero effort to conceal themselves from me, and actually became rather bold and curious about exactly what I was doing out there. But I digress.

In addition to the pair bond there was another group of crows that would deliver food, stand watch, provide nesting materials, play with the young and pair bond in flight, and generally remain in the area to mob hawks and eagles. Studies indicate that these "helpers" are actually male crows who fledged from the nest one to two years earlier. Females usually have little to no difficulty finding new mates and set out to colonize suitable habitat at the earliest possibility. What is truly shocking is the theory that after the crows are fledged from the nest, they spend one year in a murder, in an effort to open new potential territory by preventing an unrelated pair bond from establishing a lek (breeding site) and killing the young. After surviving this interlude, they return to help raise the next set of fledglings from their parents' nest, at which time they may also be introduced to potential mates from other leks. Their ability to act as a provider is proven by their activity around the parental nest, and there is no greater chance for a one-year-old male to get next to a fledged female than for you to get near my 16-year-old sister - the fledged female's parents will see to that. Crow sociology is highly complex, to say the least!

My point being that all of the behaviors described above, from altruism to "murder", are generated by one supergoal: genetic success. Indeed, the lines are somewhat blurred between the specific genetics of the individual and the group genetics of the lineage. The impulse is not only to team up with genetically related family members to aid in the survival of fledglings, but to team up with genetically unrelated individuals to cause mortality to another lineage. All this from potentially the same individual bird.

Surely the parallels to our own human behavior are obvious.

My question is this: exactly what supergoal can we expect the alien mind of the AI to adhere to such that it would be consistently moved to act in strictly altruistic ways? What if the AI's altruism leads it to decide that the most altruistic thing it can do for humanity is to euthanize all but the smartest 10% of the population?

Being smart doesn't mean you're good, it just means you're smart. I've known a lot of smart people. In fact I am a pretty smart person. I can't say that their intelligence has made even a majority of them morally superior to the stupid people I've known, or even less prone to mistakes. If anything, the smartest people I've known have made some of the worst mistakes I know of, and their being more intelligent than others was more likely to lead to contempt than to altruism. Or even contemptuous altruism.

Again, frankly, I'm not worried. I think the long time scales that planetp uses are quite appropriate. The brain is built on the fundamental self-organizing principles inherent in the nature of existence, tried and tested by billions of years of evolution, and the idea that we'll go from vacuum tubes to machine god in 50 years, or a thousand years, strikes me as so exceedingly unlikely as not to be worth consideration.

#7 PaulH

  • Topic Starter
  • Guest
  • 116 posts
  • 0
  • Location:Global

Posted 06 April 2004 - 12:06 AM

Michael,

This is by far the best post you've written on the subject. Thank you, thank you. Absolutely awesome. For now I have nothing to add, as you make some very good points - so much so that I'm now willing to jump in again and digest this from a brand-new, fresher perspective.

In the meantime, expect some private correspondence so I can iron out some of the thornier issues that are still hanging on for me. :)

#8 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 06 April 2004 - 12:17 AM

This is by far the best post you've written on the subject. Thank you, thank you. Absolutely awesome.

Michael, I second that.

#9 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 06 April 2004 - 08:48 AM

Macdog, thanks for your response here.

My question is this: exactly what supergoal can we expect the alien mind of the AI to adhere to such that it would be consistently moved to act in strictly altruistic ways? What if the AI's altruism leads it to decide that the most altruistic thing it can do for humanity is to euthanize all but the smartest 10% of the population?


Transhumanist Eliezer Yudkowsky has come up with a set of ideas called "Friendly AI" that some of us think would work pretty well.

Again, frankly, I'm not worried. I think the long time scales that planetp uses are quite appropriate. The brain is built on the fundamental self-organizing principles inherent in the nature of existence, tried and tested by billions of years of evolution, and the idea that we'll go from vacuum tubes to machine god in 50 years, or a thousand years, strikes me as so exceedingly unlikely as not to be worth consideration.


Oxford philosopher Nick Bostrom apparently thinks it is worth considering. He's a widely admired transhumanist and one of the original founders of the World Transhumanist Association. I also think it's worth considering, as do many other thoughtful people on these forums.

#10 macdog

  • Guest
  • 137 posts
  • 0

Posted 06 April 2004 - 04:01 PM

Okay, I read your paper, Michael, and perhaps I should clarify what I mean by "worth consideration", which is slightly different from "worth considering". Not to be too nitpicky, but semantics are important.

Surely improvements to cybernetic capability are "worth considering". I'm old enough to remember having a 25-pound computer that did little more than word processing, had a black-on-orange dot-matrix screen, and was called "portable" because someone stuck a handle on it. Heck, my first computer hooked up to a spare TV with RF cables and used an audio cassette player for memory storage.

I still stand by the idea that the EXPECTATION of greater-than-human intelligence through the creation of an artificial mind - as measured by difficult-to-quantify factors like creativity, intuition, willpower, and altruistic or even selfish motivation - is not worth consideration in light of the huge potential already untapped in the human biological brain. Recursive synergies among human minds have barely begun to be explored, mainly because of our all-too-human tendencies toward either group-think or outward objection. Liberalization of culture, radical expansion of education, improvements in biological health, regulating deleterious mood and cognitive tendencies, removing geographical barriers to communication, and fantastic longevity rates are the real ways to an intelligence in society that has such meaningful consequences as to be considered transhuman. For instance, at this very moment, with present technologies, we could begin terraforming the Moon and Mars - and we're not. The reason we're not is that our kinetic intelligence is much lower than our potential intelligence (yes, I did just make up those terms, but I think you understand - intelligence as akin to physics, yadda-yadda). It certainly is not a matter of a lack of economic resources. I'd bet that if we redirected all the money that would otherwise be spent on cosmetics and bath products for just an hour, we'd have more than enough to start these terraforming projects. The reason we don't is that all those cosmetics and bath products consumers aren't "smart" enough to realize that this mild sacrifice, which at worst might leave them feeling unfresh, would have significant long-term benefits to themselves, their offspring and human society.

As a last point, you singularitarians just keep doing what you're doing. To an extent I think it's great, and look at it this way: when you get me on board, then you know you're really getting somewhere. Robert Kennedy once went on a trip to Everest (if I'm remembering this right) and was asked if he enjoyed mountain climbing. "No," he said flatly, "but I do enjoy spending time with people who do."

#11 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 06 April 2004 - 05:46 PM

It is fun to watch this dialog unfold as in the "been there done that" category. It always amazes me how many times the same subject can be rehashed and dissected and basically the same positions result.

I suspect this describes a logical flaw in our understanding that is shared by all concerned, as well as compounded by a flaw of perception. That is just my opinion.

We have a number of threads addressing this topic, and I think this one has made a valuable contribution in that it nicely summarizes them; I hope that more cross-referencing links might be added to expand some aspects. I would also like to comment that more and more I am leaning toward IA over AI, though fundamentally I understand this to be an inherently synergistic relationship. I also find myself more and more in basic agreement with Peter (ocsrzor) with respect to his pragmatic understanding of Mind/Brain as not mystically dualist but reflecting a kind of Informational dualism.

I suspect this revolves around the qualitative distinction between belief and knowledge and its perceptual cognition of subjective/objective data (both truth and falsity).

Anyway, here is a recent article to add to the discussion; lately there have been quite a few works that I think we should catalog here.

Some of this article overlaps with the politics of transhumanism, some with the issues of self-awareness and mind/brain function and definitions of "consciousness", and it is even reminiscent of some of Greg Pence's lecture last year at the WTA conference.

Enjoy,
LL

http://www.nytimes.c...e/04MEMORY.html
The Quest to Forget
By ROBIN MARANTZ HENIG
Published: April 4, 2004

A 29-year-old paralegal was lying in the middle of Congress Street in downtown Boston after being run over by a bicycle messenger, and her first thought was whether her skirt was hiking up. ''Oh, why did I wear a skirt today?'' she asked herself. ''Are these people all looking at my underpants?''

Her second thought was whether she would be hit by one of the cars speeding down Congress -- she wasn't aware that other pedestrians had gathered around, some of them directing traffic away from her. And her third thought was of a different trauma, eight years earlier, when driving home one night, she was sitting at a red light and found herself confronted by an armed drug addict, who forced his way into her car, made her drive to an abandoned building and tried to rape her.

''I had a feeling that this one trauma, even though it was a smaller thing, would touch off all sorts of memories about that time I was carjacked,'' said the woman, whose name is Kathleen. She worried because getting over that carjacking was something that had taken Kathleen a long time. ''For eight months at least,'' she said, ''every night before I went to bed, I'd think about it. I wouldn't be able to sleep, so I'd get up, make myself a cup of decaf tea, watch something silly on TV to get myself out of that mood. And every morning I'd wake up feeling like I had a gun against my head.''

Would Kathleen have been better off if she had been able to wipe out the memory of the attack rather than spending months seeing a psychologist and avoiding the intersection where the carjacking occurred? The answer seems straightforward: if you can ease the agony that people like Kathleen suffer by dimming the memory of their gruesome experiences, why wouldn't you? But some bioethicists would argue that Kathleen should hold on to her nightmarish memory and work through it, using common methods like psychotherapy, cognitive behavior therapy or antidepressants. Having survived the horror is part of what makes Kathleen who she is, they say, and blunting its memory would diminish her and keep her from learning from the experience, not to mention impair her ability to testify against her assailant should the chance arise.

Scientists who work with patients who suffer from post-traumatic stress disorder see the matter quite differently. As a result, they are defending and developing a new science that can be called therapeutic forgetting. True post-traumatic stress can be intractable and does not tend to respond to most therapies. So these scientists are bucking the current trend in memory research, which is to find a drug or a gene that will help people remember. They are, instead, trying to help people forget.

All of us have done things in our lives we'd rather not have done, things that flood us with remorse or pain or embarrassment whenever we call them to mind. If we could erase them from our memories, would we? Should we? Questions like these go to the nature of remembrance and have inspired films like ''Memento'' and, most recently, ''Eternal Sunshine of the Spotless Mind,'' in which two ex-lovers pay to erase their memories of each other. We are a long way from the day when scientists might be able to zap specific memories right out of our heads, like a neurological neutron bomb, but even the current research in this area ought to make us stop and think. Aren't our memories, both the good and the bad, the things that make us who we are? If we eliminate our troubling memories, or stop them from forming in the first place, are we disabling the mechanism through which people learn and grow and transform? Is a pain-free set of memories an impoverished one?

After her bike-messenger collision, Kathleen was taken to the emergency room of Massachusetts General Hospital. Once her physical wounds were attended to -- she wasn't badly hurt; just a few cuts and bruises -- she was approached by Anna Roglieri Healy, a psychiatric nurse. Healy was engaged in a pilot study to test whether administering drugs immediately after a traumatic event could prevent the development of post-traumatic stress disorder. Did Kathleen want to be part of the study?

''I thought it might be a good idea,'' Kathleen said recently. ''Not that I really thought I'd develop problems after this bike accident, but I knew I was prone to post-traumatic stress disorder because I developed it after my carjacking.''

Kathleen signed on to the study, which was being directed by Roger Pitman, a professor of psychiatry at Harvard Medical School. (Pitman requested that Kathleen's last name not be used for this article because of her status as a research subject.) Like the 40 other subjects, she took a blue pill four times a day for a week and a half and then gradually reduced the dosage over the course of another nine days -- a total of 19 days of treatment. Half of the subjects were taking an inert placebo pill and half were taking propranolol, which interferes with the action of stress hormones in the brain.

When stress hormones like adrenaline and norepinephrine are elevated, new memories are consolidated more firmly, which is what makes the recollection of emotionally charged events so vivid, so tenacious, so strong. If these memories are especially bad, they take hold most relentlessly, and a result can be the debilitating flashbacks of post-traumatic stress disorder. Interfering with stress hormone levels by giving propranolol soon after the trauma, according to Pitman's hypothesis, could keep the destructive memories from taking hold. He doesn't expect propranolol to affect nonemotional memories, which don't depend on stress hormones for their consolidation, but he said it could possibly interfere with the consolidation of highly emotional positive memories as well as negative ones.

Pitman's hypothesis, if it is confirmed experimentally, might lead to a basic shift in our understanding of remembering and forgetting, allowing us someday to twist and change the very character of what we do and do not recall.

The idea that forgetting could ever be a good thing seems counterintuitive, especially in a culture steeped in fear of Alzheimer's disease. When it comes to memory, most people are looking for ways to have more of it, not less. If you can boost your ability to remember, you can be smarter, ace the SAT's, perform brilliantly in school and on the job, stay sharp far into old age.

But with memory, more is not always better. ''At the extreme,'' James McGaugh, director of the Center for the Neurobiology of Learning and Memory at the University of California at Irvine, said recently, ''more is worse.''

McGaugh recalled the Jorge Luis Borges short story ''Funes, the Memorious,'' in which Ireneo Funes is thrown from a horse. The injury paralyzes his body and turns his memory into a ''garbage heap.'' Funes remembers everything: ''He knew the forms of the clouds in the southern sky on the morning of April 30, 1882, and he could compare them in his memory with the veins in the marbled binding of a book he had seen only once. . . . The truth was, Funes remembered not only every leaf of every tree in every patch of forest, but every time he had perceived or imagined that leaf.''

And yet, as Borges writes, such a prodigious memory is not only not enough -- it is much too much: ''[Funes] had effortlessly learned English, French, Portuguese, Latin. I suspect, nevertheless, that he was not very good at thinking. To think is to ignore (or forget) differences, to generalize, to abstract. In the teeming world of Ireneo Funes there were nothing but particulars -- and they were virtually immediate particulars.''

Most of us are quite capable, sometimes far more capable than we'd like, of forgetting the particulars. Where did you park your car at the train station this morning? What's another word for pretty? What year was the Constitution ratified? So many details seem out of reach, lost in a murky mental morass. But McGaugh has found that certain memories -- the ones associated with the strongest emotions -- tend to stay locked in longer, sometimes for life. You can't possibly remember every time you and your wife kissed, but you probably remember the first time.

At the memory center in Irvine, McGaugh and his colleague Larry Cahill did a simple recall test that demonstrated how much more sharply people remember emotional memories than neutral ones. Cahill showed subjects a series of 12 slides and told them a story to accompany the images. The slides were always the same, but the words to the story changed from one group to another. To the first group, Cahill told an emotionally neutral story: a boy and his mother leave their home and cross the street and pass a car that has been damaged in an accident. They visit the boy's father, who works in a hospital. During their visit, the staff is having a disaster-preparation demonstration. The boy and his mother see people with makeup on to make them appear as if they have been injured. The mother makes a telephone call, goes to the bus and goes home.

To the second group, Cahill told a different story. It began the same way -- a boy and his mother leave their home and cross the street -- but then it diverged: the boy is hit by a car. The boy is seriously injured and is rushed to the hospital. At the hospital, surgeons work frantically to save the boy's life and to reattach his severed legs. The second story ended just the way the first one did: the mother makes a telephone call, goes to the bus and goes home.

When the subjects were taken back to the laboratory two weeks later, they were asked to describe what they had seen on the slides. Don't tell the story, they were instructed; just say what you remember about the pictures -- how many people were there, what were they wearing and so on.

The people who had been told the neutral story remembered all three parts of it with the same degree of accuracy. Those who had been told the exciting story had significantly better recall of the slides corresponding to the boy's injury and operation. To McGaugh, this experiment underlines the phenomenon observed by Rene Descartes more than three centuries ago. ''The usefulness of all the passions consists in their strengthening and prolonging in the soul thoughts which are good for it to conserve,'' Descartes wrote. ''And all the harm they can do consists in their strengthening and conserving . . . others which ought not to be fixed there.''

Cahill took the experiment one step further: he gave a dose of propranolol to a new batch of subjects and showed them the same slides with the same gory story. Propranolol is one of a class of drugs known as beta blockers, usually given to heart patients to inhibit the action of adrenaline on the beta-adrenergic receptors in the heart. (Unlike some beta blockers, it acts directly on the brain.) This time, the gory story did not prove more memorable. Those receiving propranolol were able to recall the pictures no better than those who had heard the emotionally blander story. This was the first suggestion that it might be possible in humans to interfere pharmacologically with the recollection of intense memories.

McGaugh and Cahill's work gave Roger Pitman of Harvard the idea for his own study of propranolol in post-trauma treatment. It seemed to Pitman, who had spent much of his career studying post-traumatic stress in Vietnam veterans, that the drug could eventually offer relief to people disabled by horrifying, intrusive memories of battle.

''Working with veterans made it clear that post-traumatic stress disorder is different from just having bad memories,'' Pitman said. ''These men said that frequently when they remembered Vietnam, every detail came back to them -- the way it smelled, the temperature, who they were with, what they heard.''

The subjects in Pitman's study had all, at one time or another, been taken to the Mass. General emergency room after a variety of traumas. Several had been sexually assaulted; one had smashed her car into a tree; another had fallen into one of the huge pits created by the Big Dig construction project that has bewildered Boston's pedestrians and drivers for more than a decade. But while Anna Healy was pleased at how many people agreed to be part of the study, not everyone in the E.R. said yes. According to Pitman, several seemed too shaken to want to relive their traumas, even under controlled experimental conditions.

While she was enrolled in Pitman's study, Kathleen said she believed she was in the placebo group, since the pills made her feel no different -- no better, no worse. Three months after her accident, she went back to Mass. General and related the details of her collision, which a researcher compiled into a 100-word narrative and read into a tape recorder. A week later, Kathleen came back and listened to the tape while her physiological stress indicators (sweating, heart rate, muscle tension) were measured and compared with those of the other study subjects. When they heard the scripts of their traumas, 43 percent of the placebo group responded with increased physiological measures of stress. In the propranolol group -- of which Kathleen, despite her suspicions, was part -- no one did.

But when Pitman asked Kathleen and the other study subjects whether they believed that memories of their trauma were impairing their daily lives, he found no significant difference between the propranolol and the placebo groups.

The National Institute of Mental Health found these preliminary results intriguing enough to want more information about propranolol's usefulness after a trauma. The goal is nothing at all like the fictional goal in ''Eternal Sunshine of the Spotless Mind''; even if Pitman's treatment does everything he hopes, it would succeed only in easing the pain of the troubling memory, not erasing it. This summer, Pitman will begin recruiting participants for a new study financed by the institute, aiming for a total of 128. Once again he will give propranolol to half of them and a placebo pill to the other half, and he will test their physiological stress response to imagery of the trauma one and three months after it occurs.

''I'm prepared for the possibility that this second study will have negative results,'' Pitman said. ''But even if there's, say, just a 20 percent chance that I'm right, that's a 20 percent chance of finding a method that works in the secondary prevention of post-traumatic stress disorder. Think of the amount of human suffering that we would be able to avoid.''

Pitman's approach to post-traumatic stress disorder, however, is a blunt instrument. It could mean giving a drug to all the people who come to the E.R. after a trauma -- at least 70 percent of whom will never develop any long-term problems even if they're left alone. The drug is a relatively safe one, with a long track record of use for hypertension, but even relatively safe drugs carry risks. (Propranolol is not used much for heart disease anymore; the beta blockers now more commonly prescribed don't tend to reach the brain and probably don't have much impact on emotional memories.) If Pitman's research leads to making propranolol standard treatment in post-trauma care, this might mean that someday people who would have recovered from their trauma quite well on their own would be given a preventive medication they didn't need.

The better approach would be to target the people prone to the disorder and to treat them immediately. The trick, of course, is knowing who they are. Studies have shown that patients with post-traumatic stress disorder tend to have a smaller than normal hippocampus -- a brain region involved in memory. But is this size difference a cause of post-traumatic stress disorder or an effect? Pitman sought the answer in the brains of identical twins. He found 135 pairs of twins in which one twin had gone to Vietnam but the other had not. Some of the veterans developed post-traumatic stress disorder; others had no such problems. The noncombatant twins of the traumatized vets had smaller hippocampi than the twins of the vets who fared better psychologically. This finding suggests that a small hippocampus is a marker for susceptibility to post-traumatic stress disorder.

Another alternative to a propranolol-for-everyone approach would be to wait awhile after exposure to a trauma to see who develops debilitating symptoms. But no one is quite sure how long you can wait. When is it too late to keep these corrosive memories from taking hold? According to Barbara O. Rothbaum, director of the Trauma and Anxiety Recovery Program at Emory University, it takes at least a couple of weeks to see who will encounter long-term psychiatric problems after trauma.

''In general, the initial response is not predictive of who is going to have a chronic disorder,'' she said. Immediate problems, according to Rothbaum, are almost inevitable: nightmares, difficulty sleeping, obsessive thoughts about the trauma.

''Most people come down a lot after the first month,'' she said. ''After that, the people who are going to improve continue to improve.'' And the ones who don't improve stay stuck. The rule of thumb, Rothbaum said, is that people with symptoms after four months will probably still have symptoms after four years -- and if left untreated, some of those symptoms may persist not just for years but for decades. The problem is that if you wait until you know who really needs treatment, you may lose the chance to make that treatment effective.

One way out of this dilemma may be through the window opened by memory reconsolidation. Memories, even intense and troubling memories, seem to be vulnerable to erasure at many points during a person's lifetime. This means that it could work to hold off on propranolol or other drug therapy until recurring problems develop.

''When you retrieve a memory, that's a time when you update it with all the relevant things that happened since you stored it,'' said Joseph LeDoux, the Henry and Lucy Moses Professor of Neuroscience at New York University. When a traumatic memory is brought forth, he said -- the scripted imagery used in Pitman's experiment is one way to accomplish this -- it is in a fragile state. And research suggests that unless it is reconsolidated with the formation of new proteins in the brain, the memory starts to disappear.

LeDoux and his colleague Karim Nader have studied the mechanism of memory reconsolidation in laboratory rats. They trained the rats to be afraid of a musical tone by following the tone with a mild electrical shock to the foot. Twenty-four hours later, they played the tone once more, thus reactivating the traumatic ''fear memory'' in the rats. But instead of giving a shock, they delivered a dose of anisomycin directly into the rats' brains. Anisomycin, a compound approved only for use in experimental animals, inhibits the synthesis of protein, which is needed to form the new synapses that are part of both memory consolidation and reconsolidation.

For about two hours the fear memory persisted: the rats heard the tone, and they froze in fear. But 24 hours later, playing the tone elicited no such fear response. ''It was as though the fear memory had totally disappeared,'' LeDoux said. Anisomycin had prevented the synthesis of protein -- and without new protein, the reconsolidated memory could not be glued into place, and the original memory apparently vanished.

What LeDoux doesn't know is whether the original memory is lost or simply can't be retrieved. ''We have trouble determining whether the failure to show the fear memory is because you've blocked encoding of the memory itself or of its retrieval,'' he said. ''Maybe the memory is still in the brain, but the animal just can't get at it.'' The distinction is relevant because memories that appeared to be lost sometimes have a habit of re-emerging.

Pitman points to rat research suggesting that the original memory is indeed still there, deep inside the brain, even if the animal's behavior makes it look as if it is lost. In one intriguing extension of LeDoux's experiment, the rats looked perfectly serene when the original fear-inducing musical tone was played: no frightened freezes. However, their amygdalas, the brain regions activated by fear, had not forgotten to be afraid; they remained just as ready to generate a fear response. What seemed to be happening was that another region of the brain, the infralimbic cortex, was signaling that fear was no longer necessary. The infralimbic cortex was keeping the amygdala from generating the fear response -- but the fear was still there, blocked and buried.

According to Pitman, this finding indicates that long-dormant fears can reawaken and might explain why Kathleen's relatively minor dust-up with the bike messenger set off memories of her earlier, deeper terror after the carjacking and sexual assault. Or it could explain why a World War II veteran, who had recovered from post-traumatic stress disorder decades earlier, gets a diagnosis of prostate cancer in his 60's and begins having nightmares about battlefield horrors that took place 40 years before.

If memories of a horrific trauma are haunting someone, overwhelming him with fresh, immobilizing feelings whenever he remembers the original event, should he be forced to hold onto those memories? Some would answer yes. Late last year, the President's Council on Bioethics issued a report called ''Beyond Therapy: Biotechnology and the Pursuit of Happiness,'' which dealt in part with the possibility of therapeutic forgetting. The council concluded that such forgetting was not necessarily wise.

''Changing the content of our memories or altering their emotional tonalities, however desirable to alleviate guilty or painful consciousness, could subtly reshape who we are,'' the council wrote in ''Beyond Therapy.'' ''Distress, anxiety and sorrow [are] appropriate reflections of the fragility of human life.'' If scientists found a drug that could dissociate our personal histories from our recollections of our histories, this could ''jeopardize . . . our ability to confront, responsibly and with dignity, the imperfections and limits of our lives and those of others.''

One council member, Rebecca Dresser, expressed the majority sentiment during a council session in late 2002. The ability to suffer the ''sting'' of a painful memory is ''where a lot of empathy comes from,'' said Dresser, a professor of ethics in medicine at Washington University in St. Louis. ''That is, when we have an embarrassing experience, we develop empathy for others who have a similar experience. . . . We want some of that sting. So the question is: what is dysfunctional sting?''

The difficulty is defining ''dysfunctional,'' since the sting that's dysfunctional for an individual is different from the sting that's dysfunctional for society. As Dresser pointed out, society has a stake in having its citizens retain their own painful, awkward memories as a check on behavior. ''There probably is some sting that we would rather not have as individuals,'' she said, ''but it's good for the rest of us that others have it.''

Some ethicists add that it's also good for the people who are suffering themselves; the painful memories, they say, are all part of what makes us who we are, and diminishing them would diminish our humanity.

''Would dulling our memory of terrible things make us too comfortable with the world, unmoved by suffering, wrongdoing or cruelty?'' asks the bioethics council in its report. ''Does not the experience of hard truths -- of the unchosen, the inexplicable, the tragic -- remind us that we can never be fully at home in the world, especially if we are to take seriously the reality of human evil?'' The council also asked whether the blunting of our recollections of ''shameful, fearful and hateful things'' might also blunt our memories of the most wondrous parts of our lives. ''Can we become numb to life's sharpest arrows without becoming numb to its greatest joys?''

Still, to scientists who study memory, there is nothing beneficial, for either individuals or society, about debilitating, unbidden memories of combat, rape or acts of terrorism. ''Going through difficult experiences is what life is all about; it's not all honey and roses,'' said Eric Kandel, a professor of psychiatry and physiology at Columbia University. ''But some experiences are different. When society asks a soldier to go through battle to protect our country, for instance, then society has a responsibility to help that soldier get through the aftermath of having seen the horrors of war.''

Of course, post-battlefield remorse serves as a check on our militaristic tendencies. Vietnam veterans haunted by memories of combat were among the most forceful opponents of the war after their return home. But have we the right to buy a surrogate conscience at the cost of thousands of ruined lives? If we have the responsibility to treat veterans' physical wounds, don't we also have a responsibility to ease their psychic suffering?

The human condition remains rich and complicated even without that psychic pain, said William May, an emeritus professor of ethics at Southern Methodist University in Dallas and a former member of the President's Council on Bioethics. ''Perhaps just as dangerous as writing out memory,'' May said at the same council session at which Dresser spoke, ''is the reliving of a past event that is so wincing in memory that one engages in a kind of suffering all over again, which is unproductive of a future.'' Remorse can be ''unavailing,'' he said, and can leave a person stuck ceaselessly in the past.

Without witnessing the torment of unremitting post-traumatic stress disorder, it is easy to exaggerate the benefits of holding on to bitter memories. But a person crippled by memories is a diminished person; there is nothing ennobling about it. If we as a society decide it's better to keep people locked in their anguish because of some idealized view of what it means to be human, we might be revealing ourselves to be a society with a twisted notion of what being human really means.

Robin Marantz Henig is the author of ''Pandora's Baby: How the First Test Tube Babies Sparked the Reproductive Revolution.''
