  LongeCity
              Advocacy & Research for Unlimited Lifespans





Ethics of Artificial Sentience


11 replies to this topic

Poll: Is this futuristic experiment ethical? (18 members have cast votes)

  1. It is ethical (8 votes, 53.33%)

  2. It is not ethical (6 votes, 40.00%)

  3. Nonsense: intelligence-free sentience is impossible (1 vote, 6.67%)

#1 Clifford Greenblatt

  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 14 May 2006 - 12:02 PM


Suppose scientists make significant progress toward conquering the mysteries of sentience. The phenomenon of sentience is isolated through experiments in which scientists design and build artificial sentient entities. Sentience is so isolated in these experiments that the artificial sentient entities have no capacity for intelligence. Each of these entities is so simple that it is capable of only one kind of sentient experience. Some of these artificial sentient entities are designed to continually experience nothing but extremely intense torment. The tormented entities have no ability to express what they feel. However, the scientists have a good understanding of the physical conditions that generate a sentient experience of extremely intense torment. Therefore, they do not require any means of expression from the entities they have created to know that those entities are experiencing extremely intense torment. Is such experimentation ethical?

The scientists see nothing but positive benefits from the experiments, as more is being learned about sentience through them. Is this economic view justified? Should the private torment of the artificial sentient entities be of any concern to us? After all, their torment is nothing but a small, closed, physical process that is no threat to the economy, the environment, or any intelligent beings. Besides, these entities have no language or intelligence by which to report that they are suffering.

#2 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 15 May 2006 - 12:40 AM

I'm not even sure where to begin to address this question. It sounds like bottled emotions, distilled sensations, free of anything else. I hate to say that's a bridge we'll cross when we get there, but this hypothetical situation is just far too broad and open to interpretation, without knowing more about sentience.

#3

  • Lurker
  • 1

Posted 15 May 2006 - 01:07 AM

Love these thought experiments of yours, Clifford. I think we should have a special area just for them. Much good science has resulted from thought experiments alone. SENS, after all, emerged as one thought experiment.

Personally, I view such experimentation as redundant, but if such a scenario did exist it would be viscerally abhorrent to me. I voted "not ethical".


#4 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 15 May 2006 - 01:09 AM

If such experiments are bad, then society should abolish experimental torment of intelligent and sentient entities (such as lab animals), before we consider abolishing the torment of your hypothetical entities. I.e. if sentience without intelligence is consistent, then I would (intuitively) value intelligent sentience higher than non-intelligent sentience. (Assuming that the intensity of sentience is left constant)

#5 Clifford Greenblatt

  • Topic Starter
  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 15 May 2006 - 08:20 AM

If such experiments are bad, then society should abolish experimental torment of intelligent and sentient entities (such as lab animals), before we consider abolishing the torment of your hypothetical entities. I.e. if sentience without intelligence is consistent, then I would (intuitively) value intelligent sentience higher than non-intelligent sentience. (Assuming that the intensity of sentience is left constant)

Would you value non-sentient intelligence above non-intelligent sentience? From your qualification about equal intensity of sentience, I think your answer would be no. However, suppose the non-intelligent sentience is extremely sentient and the non-sentient intelligence is extremely intelligent. Which would you value more? Would intelligent life in a world without sentience have any meaning, or would it be an intelligent oblivion?

#6 Clifford Greenblatt

  • Topic Starter
  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 15 May 2006 - 08:35 AM

Love these thought experiments of yours, Clifford. I think we should have a special area just for them. Much good science has resulted from thought experiments alone. SENS, after all, emerged as one thought experiment.

Personally, I view such experimentation as redundant, but if such a scenario did exist it would be viscerally abhorrent to me. I voted "not ethical".

I'm not even sure where to begin to address this question. It sounds like bottled emotions, distilled sensations, free of anything else. I hate to say that's a bridge we'll cross when we get there, but this hypothetical situation is just far too broad and open to interpretation, without knowing more about sentience.

Sentience remains a great mystery to science, despite Daniel Dennett's efforts to reduce and minimise it. This is why the futuristic experiment I presented is a thought experiment, rather than a current congressional or parliamentary debate. However, it is a much simpler thought experiment than any thought experiment about person copying, because it reduces sentience to its simplest terms, rather than connecting it with an enormous complexity of intentional mental states.

#7 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 15 May 2006 - 01:56 PM

Sentience remains a great mystery to science, despite Daniel Dennett’s efforts to reduce and minimise it.

This I wholeheartedly agree with. I suppose my initial hesitation was precisely because of the mystery, and my concerns about underlying assumptions made by either the question-asker or by third parties viewing my responses.

Pure emotions or sensations, experienced by a sentient "thing" in the absence of intelligence, are difficult but not impossible to comprehend. After all, in moments of extreme emotion or sensation (extreme pain, or, e.g., enjoying a particularly well-made peanut butter milkshake), the sensation can be so overwhelming as to block out most other aspects of sentience and intelligence anyway.

In such a case, if this pure torment inflicted by these future scientists can reasonably be determined to be real (as opposed to "simulated" but otherwise non-real, something people like Dennett would say is impossible: there is no simulation; it's either real, or you failed to make it), then I would go along the lines of what John said. Sentience in the absence of intelligence shouldn't be held to a higher standard than sentience with intelligence, and may even be held to a lower standard. So we really need to answer the question of the ethics of this with respect to intelligent sentients, and then use that as a guide to answer the original question.

But degrees weigh into this as well: what if this lab-made sentience is more sentient or aware (not sure about the terminology, it gets all mixed up after reading Dennett, Chalmers, etc.) and the torment being inflicted is more exquisite than any ever inflicted on a human being in history?

#8 roof01

  • Guest
  • 41 posts
  • 1

Posted 15 May 2006 - 02:34 PM

I think there would be possibilities to study related sentiences (those isolated emotional states) rather than the extreme torment directly. You could at least do it at a very low intensity or over a minimal timeframe, if you really need this experiment for some reason.
And the reason should be a very good one.

#9 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 15 May 2006 - 06:05 PM

Of course, subjecting any entity that has qualia to torment is unacceptable. Who cares whether it can talk or not?

This experiment is not really "open to interpretation" - something obvious is happening - a conscious being is being tortured. Torture is something that should be banished from this universe.

Of course, the corollary is that subjecting lab animals and food animals to torment is also unacceptable.

#10 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 15 May 2006 - 06:28 PM

Welcome to the cost/benefit ratios of lesser evil argument and the Game Theory of Human Capitalism, the supply and demand side of life.

If Human Selection imperatives achieve a state of Human Racism for social value context there will always be this type of associated selection and the dilemma falls on anything that aspires to higher intelligence as the choice to make and live with.

While Natural Selection might place the ethical onus of predator-prey relationships upon simpler minds within their genes a higher intelligence would have no such refuge from the ethical consequences of making such choices. With great power comes perhaps even greater responsibility. Noblesse oblige?

Or power doth tend to corrupt and absolute power tends to corrupt absolutely as the burden of God?

Saying something is wrong to do does not mean in itself that no good can come of it. All bad (or evil if you like) choices are not of equal worth and as such people seek to justify wrong with right and sometimes this may even be true. Living with the consequence is the burden of memory and our definition of self worth.

Of course, the corollary is that subjecting lab animals and food animals to torment is also unacceptable.


Is it?

Or is it inevitably the dilemma of ever more complex choice for higher and higher intelligence?

Our Sisyphean burden, or perhaps just the chains binding us to the Earth so that our insides may be ripped out while we yet live.

#11 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 15 May 2006 - 06:40 PM

Is any taking of a higher life a torment if they are aware of it?

Do we define life categorically such that higher intelligence *feeds* off lower?

Is this a Darwinian imperative and could a truly higher intelligence *live* beyond the dependency of sustenance?

When ignorant of torment it is easier to rationalize the end result but once we are aware we cause suffering by our decisions it becomes a burden of choice predicated on larger analysis. That is why we call it higher intelligence as we recognize both the need to make such choice and to be able to with respect to the life that has sustained us both personally and throughout evolution.

The higher the intelligence, the more *aware* and the more difficult to ignore consequence but also the more power to make such decisions and even necessity to. Perhaps it is possible for a high enough intelligence to rise above the predator nature it evolved from but it may never be above having to make such choices.

So, Cliff, I must suggest that you forgot at least one selection I would prefer: the choice of being Beyond Good and Evil, in other words beyond ethics, since ethics depends more on consequence, that hazardously slippery slope of "ends justifying means."

#12 Clifford Greenblatt

  • Topic Starter
  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 16 May 2006 - 01:57 AM

The thought experiment has become complicated, as suffering on the part of some sentient beings brings benefits to other sentient beings. I did not have this in mind when I started this thread, but I inadvertently introduced it from the outset, when I wrote that the scientists saw benefit in learning more about sentience. There has been much good discussion above that is relevant to weighing the issues involved in letting some sentient entities suffer to benefit other sentient entities. I can see that intelligence is a powerful consideration in such decisions, with the provision that intelligent beings take care to employ their superior means to avoid needless suffering by any sentient being. Intelligence is of great value to sentience, so separating sentience from intelligence is not a good thing to do.

Animals do show some intelligence, and some of them can be quite affectionate. We may assume that more advanced animals are sentient, because they display some of the same emotions that we do, but we have no way of truly determining the presence or absence of sentience in them. If only humans are sentient, then the issue of cruelty to animals is strictly a matter of how it affects the mind of the human observer, and not an issue of how it affects the animal being mistreated. If some animals are sentient, then the cruelty to animals issue is of paramount importance, even before factoring in the effect on a human observer.

Perhaps I should have written that the scientists had no useful purpose in tormenting the sentient entities they were creating, except to marvel at their accomplishments. However, I have a scenario that may be more useful, although quite unrealistic. Imagine a community of extremely intelligent entities. They are of great benefit to each other in achieving their purposes. None of them is sentient. None of them has any harmful or beneficial effect on any sentient entity, except for one. This one sentient entity has no intelligence. The intelligent entities do not intend any harm to the intelligence-free sentient entity. However, unknown to them, their prosperity imposes a state of extreme torment on the intelligence-free sentient entity. Circumstances could be changed if the sentience-free intelligent entities were exposed to extremely hostile forces that would impose relentless and severe hardship on them. The intelligence-free sentient entity would then experience a peaceful existence. If the intelligence-free sentient entity is kept in extreme torment, then the sentience-free intelligent entities could continue to prosper, with amazing productivity and creativity, but with no effect on any sentient being. If you could decide between the two possibilities, which would you choose?

Edited by Clifford Greenblatt, 16 May 2006 - 02:49 AM.




