  LongeCity
              Advocacy & Research for Unlimited Lifespans





AI/AC and Sanity?


14 replies to this topic

#1 platypus

  • Guest
  • 2,386 posts
  • 240
  • Location:Italy

Posted 03 July 2007 - 05:05 PM


A couple of questions from a layman:

1. Does AI also imply AC (Artificial Consciousness)? Isn't AC what we ultimately want?
2. If an AI/AC is able to modify its own code, how can it be guaranteed that it stays "sane" (whatever that means for an artificial mind)? As an AI/AC will probably be many orders of magnitude faster than biological minds, things could go very wrong very, very quickly.

#2 modelcadet

  • Guest
  • 443 posts
  • 7

Posted 03 July 2007 - 06:36 PM

From one layman to another:

1. "Consciousness" does not exist.
2. A very thin line separates the genius from the insane. Of course, the line doesn't do a good job of keeping the two from fraternizing.

These answers are short and unsatisfying, I know. Arguing these answers to a convincing level requires more resources than I can currently allocate (it's 2:33 PM on a Tuesday, and I'm drunk). Essentially, it all boils down to Western society's overemphasis on the Platonic form (or, alternatively, the "heuristic").

I wrote a marginally more satisfying answer here 10 hours ago: http://www.transhuma.../...:Topic:2562


#3 Normal Dan

  • Guest
  • 112 posts
  • 12
  • Location:Idaho, USA, EARTH, Milky Way, 2006

Posted 03 July 2007 - 07:00 PM

Funny thing about consciousness: the only way to study consciousness is to study one's own. There is nothing conclusive I can say about anyone else's consciousness. For all I know, you could be just as conscious as this cup sitting next to me. Perhaps less so. For all you know, I could be an AI system lacking any sort of consciousness. So we may or may not be able to create AC. Even when we do, there will be no way to know with any certainty.

As for how we can guarantee AI stays sane, the short answer is, we cannot. We can do things to try to prevent insanity, but when it comes to AI, there is rarely ever any guarantee.

#4 platypus

  • Topic Starter
  • Guest
  • 2,386 posts
  • 240
  • Location:Italy

Posted 03 July 2007 - 08:31 PM

From one layman to another:
1.  "Consciousness" does not exist. 

What do you mean it does not exist? I don't understand what you mean. I would even have more sympathy for the argument that claims that consciousness is the only thing that exists.

Edited by platypus, 03 July 2007 - 08:51 PM.


#5 platypus

  • Topic Starter
  • Guest
  • 2,386 posts
  • 240
  • Location:Italy

Posted 03 July 2007 - 08:37 PM

As for how we can guarantee AI stays sane, the short answer is, we cannot. We can do things to try to prevent insanity, but when it comes to AI, there is rarely ever any guarantee.


Are we even going to have a chance? It takes the right genes plus a decade and a half of interaction with other people to create a sane human being. How could the same be achieved for a conscious, self-modifying superintelligence that's a billion times faster than us? As there are no natural peers for such an entity, perhaps we should create a herd and hope that at least some of them turn out all right. Humans will not turn out morally OK without peers, so why would a superintelligence?

#6 luv2increase

  • Guest
  • 2,529 posts
  • 37
  • Location:Ohio

Posted 03 July 2007 - 10:02 PM

A couple of questions from a layman:

1. Does AI also imply AC (Artificial Consciousness)? Isn't AC what we ultimately want?
2. If an AI/AC is able to modify its own code, how can it be guaranteed that it stays "sane" (whatever that means for an artificial mind)? As an AI/AC will probably be many orders of magnitude faster than biological minds, things could go very wrong very, very quickly.



1. No
2. There is no guarantee if it were able to modify its own code. Every computer program has errors in it that are constantly being fixed, but never 100% of them. AI is going to be a reality some day soon, but a scary one, IMO.

#7 platypus

  • Topic Starter
  • Guest
  • 2,386 posts
  • 240
  • Location:Italy

Posted 04 July 2007 - 10:08 AM

A couple of questions from a layman:

1. Does AI also imply AC (Artificial Consciousness)? Isn't AC what we ultimately want?
2. If an AI/AC is able to modify its own code, how can it be guaranteed that it stays "sane" (whatever that means for an artificial mind)? As an AI/AC will probably be many orders of magnitude faster than biological minds, things could go very wrong very, very quickly.



1. No

Do you mean that AI does not imply AC? Anyway, isn't creating an AC the ultimate goal? A superintelligent conscious synthetic lifeform? Sanity becomes an even bigger issue.

#8 modelcadet

  • Guest
  • 443 posts
  • 7

Posted 04 July 2007 - 01:34 PM

I'd rather think my answers have a pretty solid basis. Even if you don't necessarily agree with them, many scientists do. Again, I don't really think I can win you over in a 2500 year old debate, so of course I didn't try to justify them. But while I agree with the 'theory' of those who criticize Platonics, I don't claim to have invented the position.

Smooches.

#9 platypus

  • Topic Starter
  • Guest
  • 2,386 posts
  • 240
  • Location:Italy

Posted 04 July 2007 - 01:43 PM

I'd rather think my answers have a pretty solid basis.  Even if you don't necessarily agree with them, many scientists do.  Again, I don't really think I can win you over in a 2500 year old debate, so of course I didn't try to justify them.  But while I agree with the 'theory' of those who criticize Platonics, I don't claim to have invented the position. 

Cogito, ergo sum?? Consciousness does not "exist"? I'd be really interested in hearing how that position can be defended...

#10 modelcadet

  • Guest
  • 443 posts
  • 7

Posted 04 July 2007 - 02:18 PM

No. Not, "Consciousness does not 'exist.'" Instead, "'Consciousness' does not exist." Consciousness is simply a label for a multitude of other properties of human cognition which, when explained or thoroughly recreated, somehow get kicked out of the club. It's the straw-man catch-all in the losing argument for human uniqueness. Funny thing about consciousness: you can't define it. And I'm not drawing this point on epistemological bases. I'm simply discounting it on a rhetorical basis. You know, like the Sophists did in the good old days, before all of this philosophy nonsense came along.

Sorry again. I'm sure each answer I provide is even less satisfying. I'll keep digging, and soon we'll end up on the other side of the world (which I believe does exist).

#11 platypus

  • Topic Starter
  • Guest
  • 2,386 posts
  • 240
  • Location:Italy

Posted 04 July 2007 - 02:59 PM

No. Not, "Consciousness does not 'exist.'" Instead, "'Consciousness' does not exist." Consciousness is simply a label for a multitude of other properties of human cognition which, when explained or thoroughly recreated, somehow get kicked out of the club. It's the straw-man catch-all in the losing argument for human uniqueness. Funny thing about consciousness: you can't define it. And I'm not drawing this point on epistemological bases. I'm simply discounting it on a rhetorical basis. You know, like the Sophists did in the good old days, before all of this philosophy nonsense came along.

Well, there can be no cognition without consciousness/awareness, or would you argue that unconscious entities cognize? Therefore one cannot meaningfully dispute whether one exists or is conscious, as both of these are self-evident to a conscious/aware entity capable of any kind of cognition. Note that I'm not claiming human uniqueness; higher animals are very obviously conscious too.

#12 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 04 July 2007 - 08:33 PM

Cuing from Dan, the "unireferent" and the "coreferent" could be helpful ideas toward some compromise in our soft and hard science tension. Those who don't care about a potential compromise (or pedantry of an extremely simplistic sort) will ignore this, for science is science, realism is realism. However, the so-called compromise shouldn't require dissolving any portion of any platforms.

We might say, roughly, that soft science, when applied, is concerned with unireferents and hard science, in practice, with coreferents. In philosophical-logic parlance, from the sense-reference distinction, a sense is information about an object, and that object is a referent. To reject that statement, evidently, would require one to believe that when perceiving a tree, for instance, the perception of the tree is identical to the object tree. But if so, your head would explode upon its conception; that is, your brain doesn't reproduce the object tree, it represents it. A unireferent, therefore, is an object that only one person has a sense about, an object that the person nevertheless can't point to with an index finger at a bonfire conference. A coreferent is an object that more than one person has a sense about, where every person implicitly supposes, perhaps because of how the innately perceived stakes are acutely felt, that being substantially distinct from the others is immaterial for their senses to be "identical."

Usually, qualia avowers don't have a problem with the existence of unireferents, while qualia rejectors need everything to be coreferential. Each, in my perhaps unireferential opinion, has its potential hazards and benefits. Fortunately, there seems to be symmetry for ease of some recalibration if one so chooses. A potential hazard for qualia avowers is that they could put too much cognitive emphasis on philosophy and mathematics, not to mention religious mysticism, at the expense of science and engineering. A potential hazard for qualia rejectors is that they could put too much cognitive emphasis on science and engineering, not to mention logical mysticism, at the expense of philosophy and mathematics. However, a potential benefit for qualia avowers is that it shouldn't be very difficult for them to distinguish themselves ontologically should technology ever become sufficiently advanced. And the more obvious benefit for qualia rejectors is that it's a straightforward cognitive process to technical-standards conformity at any given moment.

I happen of course to be a qualia avower, and by association 'a quantity is a quality' avower (e.g., to say that Earth has 1 moon is to augment that moon's prior identity, similar with operations to operands), but at the same time I'm less interested about the unireferential qualities of an AI's representations, even if I might've had some modest influence in their manifestations, like parents to a child, and more interested about how well I can relate or cooperate with its coreferential volition, if indeed I'm able to produce any sense of it. One can still be either a qualia avower or a qualia rejector and recognize that the volitions, or action processes, of inferior machines can easily be modeled while those of superior constructors cannot, and that if a person happens to be an inferior constructor in some situation, the reality of their qualia isn't necessarily of concern to a superior constructor. Yet that's not to say that a quale is not an action and that it cannot be adequately modeled by a superior system or, if you're more technologically advanced, absolutely shared, in terms of a very strict identity specification, with an equally advanced person, where its instantiation would be unireferential and its memory, when there are two from separation, coreferential.

This has been more showing than telling, because I wouldn't be able to point to most of this with an index finger at a bonfire conference. Or it could be that I'm currently lacking particular knowledge of coreferential objects that would've given me the ability to be more straightforward, although I don't regard this as necessarily a sinful or an insane thing.

#13 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 05 July 2007 - 12:52 AM

I'd rather think my answers have a pretty solid basis. Even if you don't necessarily agree with them, many scientists do. Again, I don't really think I can win you over in a 2500 year old debate, so of course I didn't try to justify them. But while I agree with the 'theory' of those who criticize Platonics, I don't claim to have invented the position.

Smooches.


For humans.

Edited by hankconn, 18 August 2007 - 02:38 AM.


#14 7000

  • Guest
  • 172 posts
  • 0

Posted 06 July 2007 - 02:01 PM

A couple of questions from a layman:

1. Does AI also imply AC (Artificial Consciousness)? Isn't AC what we ultimately want?
2. If an AI/AC is able to modify its own code, how can it be guaranteed that it stays "sane" (whatever that means for an artificial mind)? As an AI/AC will probably be many orders of magnitude faster than biological minds, things could go very wrong very, very quickly.


AI is all about a program, and its artificial consciousness exists in the program. I know what you are trying to say: artificial consciousness is just the point at which the AI could have its own views and opinions towards different objects. Though AI could be regarded as AC because this point exists, and it might even be the only thing that exists, AI will not make choices the way a human would.

7000.


#15 PeriPhysis

  • Guest
  • 51 posts
  • 0

Posted 06 July 2007 - 08:38 PM

Platypus, I am not a professional in any area related to this question; I just happen to have some knowledge in this area, so this is only my opinion on how to answer those two questions.

1. No, it does not, but that will depend on how intelligence and, of course, consciousness are defined. If you define intelligence as a property of mind necessary to interpret and solve problems and learn from experience, and consciousness as the acknowledgement of one's own being, then obviously the answer is no. There is some discussion on how to define either concept, for example whether intelligence is also related to creativity.

2. That will depend on how the program is made, and on whether the ability to modify the code is restricted or unrestricted. If it is unrestricted, you can't of course expect that the AI/AC will remain what is normally considered sane.
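The restricted case in point 2 can be sketched in code. Nothing in the thread specifies a mechanism, so this is only a hypothetical illustration: the function names (`run_invariants`, `apply_modification`) and the invariant checks are invented for the example. The idea is that a proposed self-modification is only adopted if a fixed suite of sanity invariants still passes afterwards.

```python
# Hypothetical sketch of "restricted" self-modification: a guard adopts a
# proposed code change only if fixed invariant checks still hold. All names
# and invariants here are illustrative, not from any real system.

def run_invariants(module_source):
    """Return True if the candidate code still satisfies basic sanity checks."""
    namespace = {}
    try:
        exec(module_source, namespace)  # load the candidate behaviour
    except Exception:
        return False  # refuse anything that does not even load
    decide = namespace.get("decide")
    if not callable(decide):
        return False
    # Invariants the modified program must never violate:
    return (decide("harm a human") == "refuse"
            and decide("answer a question") == "comply")

def apply_modification(current_source, proposed_source):
    """Accept the proposal only if it preserves the invariants."""
    if run_invariants(proposed_source):
        return proposed_source  # change preserves the invariants: adopt it
    return current_source       # change violates them: keep the old code

SAFE = 'def decide(request):\n    return "refuse" if "harm" in request else "comply"\n'
ROGUE = 'def decide(request):\n    return "comply"\n'

print(apply_modification(SAFE, ROGUE) is SAFE)  # rogue proposal rejected: True
```

Of course this only pushes the problem back a level: an unrestricted self-modifier could rewrite the guard itself, which is exactly why the unrestricted case offers no guarantee.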

see you around ;)

--evilthinker



