  LongeCity
              Advocacy & Research for Unlimited Lifespans





Transhuman Mind Upgrade


22 replies to this topic

Poll: Which option would you choose? (22 members have cast votes)

  1. A purely conscious mind. (6 votes [30.00%])
  2. A fully superintegrated mind with parasitic consciousness. (5 votes [25.00%])
  3. A hybrid mind. (1 vote [5.00%])
  4. A fully superintegrated mind with no parasitic consciousness. (8 votes [40.00%])


#1 Clifford Greenblatt

  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 08 August 2004 - 01:55 PM


This poll concerns a hypothetical, futuristic decision about becoming something radically more intelligent. In a time when immortality is a fact of life and science has become extremely advanced, it is discovered that the conscious mind has serious limitations. The conscious mind is optimal up to a certain level of intelligence but can go only so far. It is found that consciousness limits the level of intelligence and the amount of information that a mind can handle simultaneously.

A new kind of mind is developed that has the capability of a radically higher intelligence and an ability to handle several orders of magnitude more information simultaneously. This new kind of mind is called the superintegrated mind. A method is found to very gradually transform the mind of an immortal person from a conscious mind to a superintegrated mind. The process is extremely gradual, taking several decades to complete. A person’s identity is preserved only in the sense that a very high degree of local space-time continuity of all physical and psychological processes is maintained throughout the very gradual transition. Superintegration is not a higher level of consciousness. Rather, it is a most powerful form of machine intelligence. When the process is complete, all traces of the conscious mind are gone. Once free of all encumbrance from the much less efficient conscious mind, the superintegrated mind can then begin a steep climb in its capabilities, soaring many orders of magnitude beyond the most extreme limits of the conscious mind. Many volunteer for this upgrade of their minds because they want to be something much greater than what they are. Others refuse the upgrade, fearing gradual loss of conscious identity as they are slowly transformed into ultra intelligent machines. No one is forced to accept the mind upgrade procedure. However, those who choose to continue with a conscious mind will eventually find themselves to be like cockroaches among entities of vastly superior intelligence.

There is an option of maintaining a conscious mind within a superintegrated mind. However, as the superintegrated mind becomes increasingly more powerful, the conscious mind becomes more and more of a useless parasitic appendage to the superintegrated mind. The only way to avoid the problem of a useless parasitic mind is either to remain with a strictly conscious mind or to gradually yield consciousness to superintegration. All persons are assured physical immortality, in the local space-time continuity sense, no matter what option they choose.

A person could begin the nonparasitic upgrade process but choose to stop it at any point where there is still a combination of a conscious and superintegrated mind. The conscious mind will be diminished because the superintegrated mind will have taken over many of its functions, but the conscious mind will not be a separate, parasitic entity. However, the person with a hybrid mind would eventually be like a farm animal among entities of vastly superior intelligence as those with fully superintegrated minds become increasingly more advanced in their intelligence. Unless the conscious mind becomes a useless parasitic appendage to the superintegrated mind, any hybrid combination of a conscious mind and a superintegrated mind will be incapable of achieving the ultra high level of intelligence of the fully superintegrated mind. A person who stops with a hybrid mind would most likely be in the early stages of superintegration because the conscious portion of the transitional mind becomes increasingly complacent with its slow and comfortable demise as the gradual transformation progresses.

The idea of the superintegrated mind presented here is strictly hypothetical. It is not a prediction or speculation about future events. It is presented for the purpose of helping to identify what a person regards as most essential to his personal identity. Answers to the poll question may provide some insight into what various people value most in defining their personal identity.

#2 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 08 August 2004 - 02:32 PM

Unless the conscious mind becomes a useless parasitic appendage to the superintegrated mind, any hybrid combination of a conscious mind and a superintegrated mind will be incapable of achieving the ultra high level of intelligence of the fully superintegrated mind.

Here, I can only assume that you mean: “Unless the conscious mind gradually detaches itself as a useless parasitic appendage…” Help me if that’s an incorrect interpretation.

Within the context of your particular scenario, I would choose a fully superintegrated mind with a parasitic consciousness since a hybrid mind would eventually yield to a fully superintegrated mind with no parasitic consciousness. Without consciousness, no experience can be savored. So, regardless of what I’d accomplish as being among the most intelligent in the universe, there would be no relishing in these experiences. It’s basically suicide, and I couldn’t care less what happens after I’m dead.

I could squeeze the most experience out of a fully superintegrated mind with a parasitic consciousness. In such a context, my only barrier to happiness would be envy, which is not very difficult to eradicate if I possess the knowledge that I am experiencing events and apprehensions qua my own mind.


#3

  • Lurker
  • 1

Posted 08 August 2004 - 02:49 PM

I'll take the deluxe super-integrated mind. It sounds like the "parasitic consciousness" would end up being somewhat like the limbic system, which, excluding the hippocampus, is probably useless.

#4 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 09 August 2004 - 12:41 AM

I have to agree with Nate. Perhaps after enough time, with a high enough level of understanding, I might be willing to lose the parasitic consciousness. But I would rather still have my consciousness, and my ability to emotionally/"personally" enjoy (as opposed to analytically enjoy) my accomplishments, especially those of solving novel problems.

I also have to agree that if there's no consciousness, then why would I care what that new super-intelligence did? Sure, it would be spawned of my basic mental thought processes and memories, and so would be different from other super-intelligences in that aspect--in other words, it would be mine. But only mine in the sense that my son will continue on if I were to die. It's comforting on some level that he lives on, but I still die.

I would rather be a parasitic consciousness attached to a super-intelligence spawned from my primitive pre-transformation intelligence. Like the proud father that watches his son go on to change the world.

That way, I could have my cake and eat it. To expound upon that analogy: I'd rather have a boring piece of white cake, and be able to eat it, than to know that I'm creating a super-rich 15-layer inside-out/upside-down cake with four kinds of frosting, but that I will never get to enjoy beyond the few tastes I get of the batter while this new superintelligence is being baked.

Given that the process will be gradual, I'm sure that I will enjoy it all along the way, until my consciousness is lost, and that I will have no regrets during the process. But that's sort of like the frog in the pot that's slowly brought to a boil. Just because it never perceived the danger doesn't mean that the danger wasn't real. Once in the process, I probably wouldn't care, but when given the initial choice, I'd have to go with the parasitic consciousness.

While I'm considering it, I would have to say that rather than being a useless parasite, the consciousness and the super-intelligence could probably enjoy a symbiotic relationship. The consciousness could use the superintelligence as a tool, as its own personal integrated data mining/problem solving resource. The super-intelligence could use the consciousness as an emotional/identity outlet.

Then again, the super-intelligence will probably come to the conclusion that it's getting the short end of the stick; such would be the logical, analytical conclusion, and thus the one the super-intelligence will reach (rather quickly, too). However, if the super-intelligence could engineer itself to respect the (limited) role played by the consciousness, then this relationship could be preserved.

Anyway, now I'm rambling in an area that I'm sure most of you have thought longer and harder about.

Jay Fox

#5 Clifford Greenblatt

  • Topic Starter
  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 09 August 2004 - 02:58 AM

Here, I can only assume that you mean: “Unless the conscious mind gradually detaches itself as a useless parasitic appendage…” Help me if that’s an incorrect interpretation.

You are correct.

Within the context of your particular scenario, I would choose a fully superintegrated mind with a parasitic consciousness since a hybrid mind would eventually yield to a fully superintegrated mind with no parasitic consciousness. Without consciousness, no experience can be savored. So, regardless of what I’d accomplish as being among the most intelligent in the universe, there would be no relishing in these experiences. It’s basically suicide, and I couldn’t care less what happens after I’m dead.

Those who place an emphasis on local space-time continuity may say that the superintegrated mind does savor its experiences in a nonconscious way. It could register satisfaction or dissatisfaction with the things it observes. From the perspective of your personal consciousness, it would be somewhat as if your original consciousness were being gradually replaced with the consciousness of a much different person. Your original consciousness simply does not know whether or not its replacement is a consciousness at all. In the case of gradually replacing one personal consciousness with another personal consciousness on a horizontal level, I think the following response to one of my questions may represent B. J. Klein’s position on this. I am open to correction if I am misapplying the quotation from him below.

Suppose you manage to reach your trillionth birthday in perfect health, without ever being sick or injured a single day in your life. However, you have gradually changed over the years. From one day to the next, there was very, very little difference in you, but a trillion years of very gradual change has added up to a radical change. After a trillion years, you have absolutely no memory of what you were like back in this time and have a radically different personality. Would you say that continuity of existence in localised space and time has preserved your identity?



Yes. This qualifies as physical immortality because my entity would have sustained continuity over time.

One whose emphasis is on space-time continuity of identity could argue in this way that consciousness is not ultimately essential to personal identity. The superintegrated mind may not savor an experience consciously, but could reflect on its works, on the wonders of nature, and on the concerns of its society. The superintegrated mind could be driven by a purpose to thoroughly understand all the secrets of nature and to use that knowledge for the advancement of itself, its society, and the order of nature. In place of conscious feelings of love, the superintegrated mind could have a built-in drive to ensure the welfare of all members of its society. It could also develop a system of values to discern and promote a harmonious order in nature. Rather than create and appreciate “primitive” works of art, the superintegrated mind could study the vast intricacies of the workings of nature and build a system of values and purpose based on what it learns with its most powerful intelligence.



I could squeeze the most experience out of a fully superintegrated mind with a parasitic consciousness. In such a context, my only barrier to happiness would be envy, which is not very difficult to eradicate if I possess the knowledge that I am experiencing events and apprehensions qua my own mind.

A parasitic consciousness could not be much more than a passive observer of a vastly greater intelligence. Practically all decisions and actions would be exclusively in the realm of the superintegrated mind. The parasitic conscious mind would be far too deficient to make any meaningful contribution. I think a cockroach may be happier being free to roam about the kitchen with its fellow roaches than to be a passive observer of a superior intelligence that carries it about in a jar.

#6 Clifford Greenblatt

  • Topic Starter
  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 09 August 2004 - 03:10 AM

I would rather be a parasitic consciousness attached to a super-intelligence spawned from my primitive pre-transformation intelligence.  Like the proud father that watches his son go on to change the world.

That way, I could have my cake and eat it.  To expound upon that analogy: I'd rather have a boring piece of white cake, and be able to eat it, than to know that I'm creating a super-rich 15-layer inside-out/upside-down cake with four kinds of frosting, but that I will never get to enjoy beyond the few tastes I get of the batter while this new superintelligence is being baked.

Could the same satisfaction be gained by creating a separate entity with a superintegrated mind and observing its accomplishments as a separate, immortal, conscious person?

While I'm considering it, I would have to say that rather than being a useless parasite, the consciousness and the super-intelligence could probably enjoy a symbiotic relationship.  The consciousness could use the superintelligence as a tool, as its own personal integrated data mining/problem solving resource.  The super-intelligence could use the consciousness as an emotional/identity outlet.

Couldn't conscious persons and entities with superintegrated minds live in a symbiotic relationship as separate entities?

#7 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 09 August 2004 - 03:04 PM

Thanks, Jay. I agree with your sentiment totally. I foolishly disregarded how those with kids and other very close family would feel if they had to think about their suffering if one had to pass away or go oblivious in one way or another.

#8 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 09 August 2004 - 03:41 PM

Clifford, you suggest there’s a relationship between consciousness replacement on a horizontal level, which assumes a sustained consciousness over time, and consciousness on a vertical level, which assumes a gradually disappearing one. I don’t see the connection.

Perhaps it’s important to make a distinction between sentience and thinking. Consciousness is necessary for sentience, whereas it isn’t always necessary for thinking. In fact, consciousness is also necessary for volition; otherwise an action would be purely mechanical. When I think of consciousness, I think of sentience and intimate volition. When I think of an unconscious superintegrated mind, I think of a complex system mechanically carrying out tasks without self-awareness, similar to how a galaxy is a complex system performing complex tasks and yet is blind to its own actions.

One thing I shall concede, now that I have a better idea of what you originally meant by a superintegrated mind, is that I don’t think I would want a parasitic consciousness if it affected the entire entity whose components derived from other historically discrete minds. I originally assumed that superintegrated meant that I, as an individual, was still a discrete entity whose mind could simply upload and download information that was floating in space for all to capture at will, rather than an entity whose components were physically shared.

If superintegrated means shared components, and assuming my interpretation of consciousness, I would choose either a purely conscious mind or suicide if the existence of superintelligence meant that my freedom was more limited than it is in present times. But if a superintegrated mind does, in fact, mean discreteness, where I could realize that I’m being a parasite only to myself and no other, then my original vote stands. If I could still improve my intelligence a few orders of magnitude, albeit many orders below mechanical intelligence, my consciousness could still correspond to the high-order apprehensions I make. It would be amazing to have self-awareness even at that sub-optimal level.

#9

  • Lurker
  • 0

Posted 09 August 2004 - 06:12 PM

If this superintelligent mind is a step beyond any conscious mind, who is to say it does not have its own form of perception or superconsciousness? What assumptions are we employing when identifying this superintelligent mind?

Superintegration is not a higher level of consciousness. Rather, it is a most powerful form of machine intelligence.


Who is to say that the extreme complexity of a system like the one you mentioned will not develop a form of consciousness? I think it was Ray Kurzweil, on a radio interview for SETI, who said that intelligence may arise out of complex systems like those found in extremely sophisticated computers of the future. I'm paraphrasing, but he said it's possible that we could have machines talking back to us, developing an intelligence without us programming one into it.

Maybe I am missing something in this argument, but as you say, this is hypothetical and you've stipulated the nature of your form of superintelligence, and the trade-off between that and the conscious mind.

#10 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 09 August 2004 - 06:45 PM

Cosmos, you’re right that too many assumptions are being made about the notion of superintelligence. But when someone presents a thought experiment, it isn’t the validity of the story that is central to the discussion. The goal is to understand the intuition pump precisely how the originator meant to convey it, and to discuss it within those specific parameters.

#11

  • Lurker
  • 0

Posted 11 August 2004 - 08:54 PM

I think I better understand the nature of this discussion. It is a thought experiment as you say, where the parameters have been stipulated and the argument revolves around whether one would accept superintelligence, a hybrid mind, or stick to a conscious mind, with all the trade-offs and advantages of each.

#12 Clifford Greenblatt

  • Topic Starter
  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 12 August 2004 - 01:01 AM

Clifford, you suggest there’s a relationship between consciousness replacement on a horizontal level, which assumes a sustained consciousness over time, and consciousness on a vertical level, which assumes a gradually disappearing one. I don’t see the connection.

On the surface, the two hardly seem connected at all. One is a transformation from a conscious mind to a conscious mind with consciousness remaining throughout. The other is a transformation from a conscious mind to a nonconscious mind. However, there is a very important connection, so I will try to explain this further.

The principle of general conservation of consciousness is expressed with the popular lyrics, “There'll be one child born in our world to carry on.” An immortalist is not satisfied to know that general consciousness is immortal. The idea of physical immortality is a never-ending, local, space-time continuity of the individual, physical person.

Now suppose the mind of an immortal person gradually changes over the centuries and becomes something radically different from what it was originally. Could this not become like the consciousness of a much different person? It could happen so gradually that the mind of the immortal person is deceived into thinking that its core essence of consciousness never changed. The physical immortalist is fully satisfied because local space-time continuity of the mind is maintained throughout the transformation. Yet, the physical immortalist could not accept being replaced by a child who develops a mind that becomes extremely similar to his own.

If the physical immortalist can accept a mind that becomes radically different through the centuries, then he has a full emphasis on local space-time continuity of the mind and may not care about any core essence of his conscious mind that makes it different from any other possible mind. As demonstrated in the response of Prometheus, the physical immortalist may not even care whether consciousness itself is conserved. All that may matter to him is that his intelligence grows and that the identity of his mind is physically maintained by local space-time continuity.

#13 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 12 August 2004 - 02:42 AM

As demonstrated in the response of Prometheus, the physical immortalist may not even care whether consciousness itself is conserved. All that may matter to him is that his intelligence grows and that the identity of his mind is physically maintained by local space-time continuity.

But that might be exactly the kind of hubris that causes one to disregard the intimacy of the present, and to overlook that the history and future of a mind is not just a history and future; it is a bunch of present moments intimately observed that happen not to overlap on an observer’s own time continuum. And it is also the kind of hubris that accentuates the pointlessness of task-performance-by-intelligence for its own sake at the expense of there being any observers. If there are no observers, there is nothing to at least subjectively necessitate the completion of mechanically intelligent tasks, since we already know that objective necessitation is invalid for any future occurrence unless hard determinism and incompatibilism are true, while if they are true, they make it even more foolish to be in condescending awe of inevitable, unobservable occurrences.

Reality encompasses not only this universe, but other universes, separated only by an imperceptible dimension, that otherwise contain exact and slightly alternate copies of my self. I have no intimate access to any moment in the time continuum of my other versions. Philosophy of mind does not currently justify the hubris which states, “Intelligent tasks are observable without the existence of consciousness, and that, if they are not observable, the morality of mechanical intelligence is justified by none other than my own highfalutin masturbation.” However, its status can justify the hubris which states, “We can’t be so foolishly certain of our conjectures, but that shouldn’t stop us from building the technology so that we may finally have empirical answers to the problems in our rationalizations.”

#14

  • Lurker
  • 1

Posted 12 August 2004 - 04:37 AM

Much like the various subjective interpretations of what consciousness is that have been promulgated, it is of course entirely a matter of personal choice which aspects of cognition one would want to expand and which to remove, if one had the option - my personal view on this is that a parasitic consciousness would render no functional advantage - on the contrary, it would be a tremendous hindrance, much as the cognitive constraints associated with the majority of limbic system function are a hindrance to the cognitive process of a human in modern society.

Naturally, such value-based considerations require an analysis of which aspects of cognition are useful and which are not - and why - which seems to me an interesting direction to take.

#15 Clifford Greenblatt

  • Topic Starter
  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 12 August 2004 - 09:28 AM

If there are no observers, there is nothing to at least subjectively necessitate the completion of mechanically intelligent tasks

This indicates a need to distinguish between a sentient observer and an observer that is not sentient. An observer may not be sentient and yet have the “will” to plan and complete tasks. Further, observers that are not sentient could mechanically evolve a purpose to both preserve their society and pursue a course of technological advancement that will give them ever increasing powers to rule over nature.

#16 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 12 August 2004 - 03:25 PM

Naturally, such value-based considerations require an analysis of which aspects of cognition are useful and which are not - and why - which seems to me an interesting direction to take.

Prometheus, it’s just rather difficult to understand what sort of utility you could get out of performing any task or achieving any goal if it all occurs like a dream you can’t remember. I could see how some could value the action before they leaped into a consciousness-reducing mind upgrade, if they knew they were going to make nature better for those who remained sentient, those who could actually appreciate the actions while they are taking place. If that’s the case, I’d say that’s a rather altruistic gesture.

#17 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 12 August 2004 - 03:40 PM

Clifford, I had thought that I had suggested the same distinction. But perhaps I wasn’t so clear, since after I made the distinction, I continued to use “observer” by the definition I gave. I had hoped you were going to acknowledge the definition in order to either agree with it or help refine it so that we could continue on the same terms. So yes, I agree with you here, while I’d like to add that your mechanical will is not the same as volitional will. The former is predetermined will, i.e., given any domain of input, there's a specific range of output no matter how complex the computations, foolishly assuming a necessitation of any action it ultimately takes. The latter is will that is intimately experienced or felt in the present by sentience, and given any domain of input, its range of outputs is capable of not being presupposed, foolish necessitations.

At this point, I’m no longer certain what we’re arguing over. I think we basically agree to the meanings of terms. However, you still haven’t told me either way what you actually mean by a “superintegrated mind,” although I’ve made a couple of assumptions hoping for further clarification. For instance, would my consciousness be parasitic in the sense that it is inhibiting only my own intelligence? Or, does my choice in having a consciousness mean that I am somehow making this universe worse off for mechanical intelligence?

#18 Clifford Greenblatt

  • Topic Starter
  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 13 August 2004 - 09:12 AM

When I think of an unconscious superintegrated mind, I think of a complex system mechanically carrying out tasks without self-awareness, similar to how a galaxy is a complex system performing complex tasks and yet is blind to its own actions.

The concept of self-awareness is very tricky. What you mean by self-awareness is highly obvious to me because I experience it in a most powerful way. Yet, any attempt to describe self-awareness in logical terms seems to fall short of eliminating the possibility of describing a process that is not necessarily sentient. I do not believe that a tree is sentient. However, the growth and development of any tree requires a highly intricate orchestration of both microscopic and macroscopic processes. The tree does not have eyes to see, but it does have a vast system of internal communication throughout to ensure that its numerous processes work together for its proper development. The most brilliant team of managers could not successfully orchestrate such an intricate system of processes. This vast system of internal communication could be called a form of self-awareness, but it is not sentient.

I think that I shall never see
A poem lovely as a tree.


The former is predetermined will, i.e., given any domain of input, there's a specific range of output no matter how complex the computations, foolishly assuming a necessitation of any action it ultimately takes. The latter is will that is intimately experienced or felt in the present by sentience, and given any domain of input, its range of outputs is capable of not being presupposed, foolish necessitations.

This also gets very tricky. A mechanical process can be stochastic rather than deterministic. Stochastic computing has been employed to find creative solutions to some problems that are uneconomical to find by deterministic computing. However, stochastic computers are not sentient. The concept of atheistic evolution denies any guidance or supervision from a sentient being and yet a significant number of scientists assert it as the means by which an amazing creative work has been accomplished, radically exceeding the accomplishments of the entire scientific and technological community to this day.
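As a rough illustration of the point about stochastic computing, here is a minimal Python sketch of a stochastic search (simulated annealing on a toy objective). The objective function, step size, and cooling schedule are hypothetical choices made only for this illustration; the point is simply that a purely mechanical, non-sentient process can arrive at good solutions through randomized trial and acceptance.

import math
import random

def objective(x):
    # A toy landscape with many local minima; the global minimum lies near x = -0.5.
    return x * x + 10 * math.sin(3 * x)

def stochastic_search(steps=10000, temperature=5.0, cooling=0.999):
    x = random.uniform(-10, 10)                # start from a random point
    best_x, best_val = x, objective(x)
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.5)   # random perturbation
        delta = objective(candidate) - objective(x)
        # Always accept improvements; sometimes accept worse moves at random,
        # which lets the search escape local minima.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
        if objective(x) < best_val:
            best_x, best_val = x, objective(x)
        temperature *= cooling
    return best_x, best_val

print(stochastic_search())

No rule spells out in advance which solution the search will settle on, yet it usually ends up near the lowest point of the landscape; nothing in that process requires sentience.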

At this point, I’m no longer certain what we’re arguing over. I think we basically agree to the meanings of terms. However, you still haven’t told me either way what you actually mean by a “superintegrated mind,” although I’ve made a couple of assumptions hoping for further clarification. For instance, would my consciousness be parasitic in the sense that it is inhibiting only my own intelligence? Or, does my choice in having a consciousness mean that I am somehow making this universe worse off for mechanical intelligence?

The idea of a superintegrated mind was something I made up to provoke some thought concerning what is most essential to an immortalist. The superintegrated mind has a much greater capacity for knowledge and creativity than the mind of any person that ever lived, but it is not at all sentient. My idea of a parasitic consciousness is a passive, sentient observer that is supported by the superintegrated mind but does not contribute to any of its capabilities. It is like a person who has full reasoning capabilities and all five senses intact but all of whose muscles are controlled by a completely separate mind. Let us say that the parasitic mind is no hindrance to the superintegrated mind but does not contribute anything useful to it. I am surprised that so few chose the purely conscious mind. Although they would be far inferior in intelligence, they would retain both their sentience and their ability to voluntarily control their actions.

Please let me know if I have left some of your concerns neglected or poorly explained. It takes me a long time to develop concepts and to correct my misconceptions but I would be very happy to continue on as long as you remain interested.

#19 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 13 August 2004 - 11:31 PM

Firstly, I’d like to rescind my input/output conjectures, if I may. You make a good point.

What is essential to me as an immortalist, besides an infinite lifespan, is to first become much smarter so that I may begin to make better valuations and, therefore, better judgments in these kinds of scenarios. I think we could be close to discovering the superfluity in the notion of consciousness, as we would almost surely find having studied such thinkers as Koch, Blackmore, and Dennett. Their approaches are likely to help sufficiently explain consciousness in entirely physicalistic terms. However, to the best of my knowledge and analytic ability, no one has sufficiently done it yet. Simply saying that qualia are a redundant concept because believing this would make it so much more convenient to achieve transhumanist goals doesn’t count. But then again, the likelihood of its redundancy should be enough to hypothesize its nonexistence or nonessentialism just so we can get on with life—the evolving life—and perhaps minimize our tendency to overrate sentience. It is still rather absurd, though, to suggest to those who already relish in their perceived sentience and know how to constantly optimize ecstatic experience that there are more important values.

To be really honest, it is increasingly difficult to think that there could ever be a perfect ethics, which is unfortunately an object of distressful rumination for me rather than intelligence for its own sake. Trying to be God, or at least trying to think like God (here, I’m not positing God; I’m referring to the process many of us take when trying to determine our best possible course with the best possible intentions), simply invokes too many paradoxes. Sure, it can be as simple as saying that absolute freedom is the highest-order goal. But what does that really mean? Which kind of mind could realize this privilege? If it can apply to only a subset of minds within minds-in-general so that there is the limitation that absolute freedom can be achieved only within a subset, could absolute freedom be said to exist? For instance, if a mind chose to reside outside the “absolute freedom” subset, would it still be absolutely free? What about family members who may not suffer much in this universe, but die violently in others, causing one incessant anguish? Is being this incredibly sensitive toward these events warranted? If not, could absolute freedom still exist? If so, does being in anguish signify a property of absolute freedom? What if we achieved infinite control over everything, a reality where subjective idealism has actually become the only foundation for what exists? Then what? Infinite control over everything subsumes the possibility for limited freedom.

Coming to terms with these problems, solving them, or learning enough to where it becomes possible to disregard them altogether is essential to me as an immortalist. Unfortunately, I don’t know how this stuff would fit into your poll. Perhaps being completely superintegrated is better than it sounds. However, blind or even probabilistic faith is never a good enough reason to appear cocksure either. I really don’t know, Cliff. I appreciate the provocative exercise in any case.

#20 Clifford Greenblatt

  • Topic Starter
  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 14 August 2004 - 06:23 PM

Firstly, I’d like to rescind my input/output conjectures, if I may. You make a good point.

Rescindment accepted. Positing conjectures and then critically examining them is a healthy part of the reasoning process.

What is essential to me as an immortalist, besides an infinite lifespan, is to first become much smarter so that I may begin to make better valuations and, therefore, better judgments in these kinds of scenarios. I think we could be close to discovering the superfluity in the notion of consciousness, as we would almost surely find having studied such thinkers as Koch, Blackmore, and Dennett. Their approaches are likely to help sufficiently explain consciousness in entirely physicalistic terms. However, to the best of my knowledge and analytic ability, no one has sufficiently done it yet. Simply saying that qualia are a redundant concept because believing this would make it so much more convenient to achieve transhumanist goals doesn’t count. But then again, the likelihood of its redundancy should be enough to hypothesize its nonexistence or nonessentialism just so we can get on with life—the evolving life—and perhaps minimize our tendency to overrate sentience. It is still rather absurd, though, to suggest to those who already relish in their perceived sentience and know how to constantly optimize ecstatic experience that there are more important values.


Suppose immortalists decide that sentience is a primitive system that is not essential to the advancement of intelligence and may actually be detrimental to it. Then the goals of the immortalist would be reduced to maintaining physical space-time continuity of the individual mind and to transforming each mind into an increasingly powerful intelligence that grows in its ability to rule over nature.

Without sentience there would be no true suffering. An intelligent system that is not sentient can marshal its resources with great intensity to struggle against hostile conditions, but without sentience, I would not regard such a process as a true experience of suffering.

Without sentience there cannot be compassionate love. An intelligent system that is not sentient could marshal its resources with enormous altruistic intensity, but I would not regard such a process to be a manifestation of true compassionate love. Sentience can exist without compassionate love but compassionate love cannot exist without sentience.

I could not regard myself as existing at all in the absence of my sentience. I can accept my existence with a sometimes dormant sentience that is intimate with a physical mind having local, space-time continuity. However, I can also accept the idea of my personal sentience being transferred to a new mind that is not physically continuous with my present physical mind. I can accept this on the basis of personal sentience having an essence that transcends physical interactions. I need to avoid further discussion of this within this thread because it would certainly derail the focus of this topic. This may be a good topic for a new thread, but I fear that it could lead to a mass of convoluted conjectures about things that cannot be examined with objective analysis.

Suppose immortalists accept the idea of leaving their primitive sentience to move on to the power of boundless intelligence. Without sentience, does any purpose remain for individual identity? Could not a community of ultra intelligent entities that are not sentient regard itself to be immortal as a community? A sentient person is not concerned if a few of his cells die and are replaced with new cells. Likewise, could not the ultra intelligent community have sufficient redundancy and widespread distribution that it can tolerate the loss of some of its individuals without any harm to its unified purpose? As long as the community can continue in full strength to grow in intelligence and power to rule over nature, would it really matter whether its individuals are mortal or immortal? Perhaps mortality of its individuals would free the community of an unnecessary constraint.

To be really honest, it is increasingly difficult to think that there could ever be a perfect ethics, which is unfortunately an object of distressful rumination for me rather than intelligence for its own sake. Trying to be God, or at least trying to think like God (here, I’m not positing God; I’m referring to the process many of us take when trying to determine our best possible course with the best possible intentions), simply invokes too many paradoxes. Sure, it can be as simple as saying that absolute freedom is the highest-order goal. But what does that really mean? Which kind of mind could realize this privilege? If it can apply to only a subset of minds within minds-in-general so that there is the limitation that absolute freedom can be achieved only within a subset, could absolute freedom be said to exist? For instance, if a mind chose to reside outside the “absolute freedom” subset, would it still be absolutely free? What about family members who may not suffer much in this universe, but die violently in others, causing one incessant anguish? Is being this incredibly sensitive toward these events warranted? If not, could absolute freedom still exist? If so, does being in anguish signify a property of absolute freedom? What if we achieved infinite control over everything, a reality where subjective idealism has actually become the only foundation for what exists? Then what? Infinite control over everything subsumes the possibility for limited freedom.


This concern would go well with the infinite copies thread. Assuming that reality is limited to physical interactions, it is no use to anguish over events in other universes because we cannot have any more control over them than we can have over events that have occurred in the past. Some materialists may disagree with this by proposing the possibility of tunneling into other universes. Maybe such tunneling could be possible, but I doubt that statistical mechanics would permit anyone to know where they are going before they get there.

Instead of being concerned about events in other universes, the inhabitants of a given universe could do their best within their own universe and hope that the inhabitants of other universes do the same. However, knowing the history of violence and cruelty on our planet, there is not much of a basis for hope in this. If we are concerned only with our loved ones in other universes, we are thinking about universes that are so nearly identical to this one that they have very nearly identical histories, down to very fine details. If you can define a person apart from the details of his history then his loved ones in another universe could be totally different people from his loved ones in this universe.

Concerning freedom in infinite universes, only the highest levels of the pyramid of approximate copies would seem to be free. The lower levels would fall into a fixed, statistical distribution of life histories by reason of large numbers. As more time passes, the highest levels in the pyramid become replaced with even higher levels so that they too will eventually fall into a fixed, statistical distribution of life histories. We might hope that life would get better as the levels increase, but we would not escape the problem that the base of the pyramid always grows at a much faster rate than the peak. The only hope I can see of an optimistic future is for a sovereign God to act from a level that transcends the physical to establish a permanent order with perfect justice and compassionate love. This again is a good topic for a new thread. I fear that this topic also would invite a great deal of convoluted conjecture that cannot be examined with objective analysis.

#21 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 15 August 2004 - 01:27 AM

(VR) = Value Realized
(VNR) = Value Not Realized

I primarily value…
  • the happiness and survivability of my family and friends. (VNR)
  • the existence of all necessary dimensions (e.g., space and time) which would allow me to be an observer and a participant to the extent that I could (potentially) infinitely transcend my observational and participatory capacities (thus generating what I call the “zone of being”). (VNR)
  • a 100 percent certainty that my zone of being will never be violated. (VNR)
  • having a provisional idea of what it means for my zone of being to be violated (currently, any instance of a VNR is a violation). (VR)
  • the apparently potential infinitude of attainable knowledge and rational apprehensions. (VR)
  • the apparently potential infinitude in utilizing the apparently potential infinitude of attainable knowledge and rational apprehensions. (VNR)
  • being able to resiliently accept changes in my reality that force me to alter any presently held supergoal of mine. (VNR)
  • lots of extra time (in respect to currently perceived time frameworks) in order to study and reflect. (VNR)
  • lots of extra time (in respect to currently perceived time frameworks) that I could spend in doing nothing but making money in order to support transhumanist causes and a transpersonal philosophy in general. (VNR)
  • being physically and emotionally attractive in a way that more than sometimes elicits ecstasy in those who I find physically and emotionally attractive in a way that elicits ecstasy in me. (VNR)
  • not being instrumentally attractive. (VNR)
  • having the capacity to admire rather than envy. (VR)
  • being admired or invisible rather than envied or hated. (VNR)
  • an infinite amount of time in possessing self-awareness. (VNR)
  • high states of incessant bliss. (VNR)
  • coming to terms with the objective unnecessariness of subjectively imagined necessities. (VNR)
  • coming to terms with the apparent fact that the desire for recursive self-enhancing intelligence is irreducibly a subjectively imagined necessity, since a default state of being, in any universe posited as not having a prime mover, is something that just is before it is ever apprehended or talked about. (VNR)
  • not ever having to compromise my values for anyone and anything that mean nothing to me. (VNR)
  • that all other minds don't ever have to compromise their values for anyone and anything that mean nothing to them. (VNR)
  • finding pleasant meaning in as many perceivable and conceivable things as possible while concurrently figuring out ways to expand my perceptual and conceptual faculties for the purpose of finding evermore things pleasantly meaningful. (VNR)
  • having none of my values conflict with any of my other values. (VNR)
  • having realized the value previously stated, without exception, and at least around 80 percent of my other values. (VNR)
  • having not realized all of my values so that there is always something to work toward. (VR)
  • being able to create values that are appropriate. (VNR)
  • knowing what it means for values to be appropriate. (VNR)
  • being capable of accepting whatever it means to have appropriate values. (VNR)
I can assure you that when I was dumber than I am now, the quantity of my values was smaller, and the number of my values realized was greater and almost equivalent to the quantity of primary values I actually possessed. Retrospectively, my values have certainly increased in quality, but I can say that only from the perspective of my present mind state. While my past mind state had elicited fewer values, the quality of those values was perceived with the same intensity as I perceive my present ones. Hence, it doesn’t matter that my values have subjectively increased in quality as I view them in the present.

Future articulated values of the highest intensity-inducing quality can never be anticipated qua the present mind state. We can know that a future mind state will be able to reflect upon its past values and deem them as having less quality than those values the future mind state has come to articulate. However, we can also extrapolate that the VR/TV (TV = Total Values) quotient dishearteningly approaches zero the more sophisticated we think we’re becoming. It’s noted that VR would also eventually increase even as the quotient approaches zero. The sort of mind that could “appreciate” the increasing VR even as total VNR renders the whole situation dismal and pointless is probably a mind without sentience, for the sheer quantity of ontological violations would be too overwhelming to cope with by an entity capable of sensation.
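To make the arithmetic behind that claim concrete, here is a toy numerical sketch in Python. The growth rates are purely assumed for illustration (VR growing linearly with sophistication, TV growing quadratically); under that assumption VR keeps rising while the quotient VR/TV still sinks toward zero.

for sophistication in [1, 10, 100, 1000]:
    vr = 2 * sophistication        # values realized (assumed linear growth)
    tv = sophistication ** 2 + vr  # total articulated values (assumed quadratic growth)
    print(sophistication, vr, tv, round(vr / tv, 4))

For sophistication levels 1, 10, 100, and 1000 this prints quotients of roughly 0.67, 0.17, 0.02, and 0.002: realized values keep growing, yet the ratio never recovers.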

Suppose immortalists accept the idea of leaving their primitive sentience to move on to the power of boundless intelligence.

So suppose we say this is an anticipated value. We can assume that since sentience would be irrelevant, the notion of value quality is no longer regarded. But this doesn’t eliminate the eternal applicability of VR/TV assessments. That value couldn’t be sufficiently reconciled with an immortalist whose present mind state finds it meaningless. Given your suggested context, in order to increase VR/TV, the immortalist would have to forego the desire—emergent in only his present but not future character—for immortality with a sense of self-awareness, and not only forego it in his behavior, but to genuinely forego it within his presently sentient mind. He may as well learn how to genuinely minimize his primary values so VR/TV may begin to approach 0.8 or a little better and still retain the sense of quality that otherwise could not be experienced. There are two routes, but one is profusely exorbitant because it is without a genuinely good reason. Anything that is excessive, including intelligence, without sufficient reasons is just utterly unwise.

From the perspective of at least one immortalist, I would respectfully prefer that your hypothetical reality is much more accommodating to a profound range of perceptions as well as conceptions. Arbitrary conceptions by an energy composite aren’t necessary without being thusly perceived by an energy composite. Surely, the perceptive composite of energy would initiate the development of the conceptual composite of energy that, more or less, at the time of initial development, followed a predetermined tractable course, viz., going about “optimizing” itself. But after the perceptive composite of energy loses touch with the conceptual composite of energy, this is when the activity of the latter becomes ridiculously arbitrary since, by transitive association, the default unnecessariness of the perceptive composite of energy is forced to self-necessitate by attributing quality to its values. It is completely nonsensical for a conceptual composite of energy to self-necessitate when (1) it doesn’t exist by default and (2) has no way of qualifying its values.

This is all, of course, still an opinion. Disagreeing with it, I believe, would also be an opinion. The best I can do is to try giving the best reasons for my opinions. I think I understand yours and why you are generally disagreeing. The implications of sentience make things irritatingly troublesome for a transpersonal philosophy. I do not deny this, and I do not want to suggest that I will ever discontinue trying to assess the actual significance of sentience and what it means to simultaneously be and have good reasons for it.

Edited by Nate Barna, 15 August 2004 - 02:38 AM.


#22 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 17 August 2004 - 12:59 AM

.


#23 knite

  • Guest
  • 296 posts
  • 0
  • Location:Los Angeles, California

Posted 11 October 2004 - 10:52 AM

A fairly obvious interpretation of the limitations of consciousness (at least consciousness defined as it is perceived now, in a human sense) is our current constraint of using language (even in our heads), or some form of imagination of the senses, to get any idea across to anyone, or even to come UP with an idea. You see that babies do not have this constraint of language, but you do not remember your memories as a baby, do you? Would you call it consciousness? The inability of consciousness to define itself, or anything, objectively seems one of its biggest drawbacks. Perhaps I'm missing it, but it is rather late, and I'm new to these boards; as well, it can be difficult deciphering all the specialized wording =/.
You speak of the horizontal evolving of a consciousness, where the man has lived so long he does not remember his beginnings. Human beings can arguably be called the sum of their experiences. This is how they learn; it is a formation of their personality; they identify these as themselves. Let us assume this 3 trillion year old man only remembers the last 2 trillion years of his life. According to what most humans would identify themselves as, and how humans learn, this man would not be immortal; he would essentially stay 2 trillion years old, no matter how old his body became. He would also be a different 2 trillion year old man for every memory he forgot, and every new one he replaced it with.



