  LongeCity
              Advocacy & Research for Unlimited Lifespans





An Immortalist Proposition


14 replies to this topic

#1 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 03 March 2005 - 05:17 PM


All nonsense. I vow not to have ideas again until I have the skills to think rigorously. Thank you for your patience.

Edited by Nate Barna, 20 July 2005 - 07:07 PM.


#2 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 04 March 2005 - 12:12 AM

Nate

Free will is presumed.


I think there is the possibility that I am misunderstanding you. Is your starting premise FW? Or are you positing free will as a subjectively perceived feature of a cognition? Your line of reasoning is of great interest to me because I was recently trying to explain to Infernity the reason why the will for the eternal is such an influential component of the human psyche. I didn't come up with an answer I found satisfactory...

You state:

It’s axiomatic that intelligence is possible, and wherever it’s possible, by a facet of its description, its imperative is to persist on the basis of self-interested, but not necessarily selfish, choices.


Could you elaborate on this point further? Why is the imperative to persist a necessary correlate of volitional existence?

If a cognition (i.e., a series of events that function as volitional) denies this imperative, it’s morally neither right nor wrong, but it abdicates its status as intelligent and perhaps, in some cases, withdraws from the moral consideration due intelligent agents. However, as long as it, instead, desires to be intelligent, it’s necessary that its ethic derives from the foregoing imperative in order to maintain its status as an intelligent agent.


This line of reasoning depends on what the agent perceives as "maintaining itself" as a rational agent. A Christian would not view death as a termination of one's "intelligence". Conversely, Dawkins, who has repressed his "imperative to persist", would be seen [from this perspective] as being irrational, since he is not actively pursuing his continuation (ie, he has resigned himself to a finite existence).

It seems to follow, then, that an ethical faculty is intrinsic of intelligent agents, so that there is nothing particularly arbitrary about the choices of an agent while it’s in a state of being intelligent. The only arbitrary choice would be to renounce any present state of intelligence on a sure course toward death.


Creating value structures seems to be a universal among volitional agents.

Question: Isn't there a difference between being "logical" (ie, intelligent) and being *rational*?

#3 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 04 March 2005 - 12:16 AM

Isn't there a difference between being "logical" (ie, intelligent) and being *rational*?


And by this I mean one can be completely logical, but at the same time absolutely wrong and absolutely rigid in how they maintain their conceptual reality.

Rationality, to me, implies flexibility in the ability to change one's base values if necessary.


#4

  • Lurker
  • 0

Posted 04 March 2005 - 12:27 AM

I lost a large portion of my text when my computer crashed. I'll have to prepare my response over again.

#5 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 04 March 2005 - 12:28 AM

Oh man, I feel for you buddy. Losing posts makes me want to SCCCCCREaaammmMM!! [ang] [:)] [:o]

#6

  • Lurker
  • 0

Posted 04 March 2005 - 12:50 AM

This post may cover issues that overlap what Don has already discussed, but regardless here I go.

If the Immortalist proposition holds true, should it not apply to all intelligent beings in this universe? At some point should we expect an intelligent race to come to the realization you have arrived at? If this proposition is universally applicable to truly intelligent life then it probably cannot be arbitrary (or so I would think). Does this then entail a convergence of goals and beliefs?

Excuse the brevity of my post. I can't be bothered to expand on it at the moment after losing my more lengthy post earlier. [wis]

#7 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 04 March 2005 - 01:07 AM

DonSpanton: Is your starting premise FW? Or are you positing free will as a subjectively perceived feature of a cognition?

The purpose of supposing free will is only to provide a context for an axiom. It’s provisional whether there’s free will. But if my generalized representation of actual conditions is accurate, which I believe it is, then there’s free will in the sense represented. I'm not as stringent as some about what qualifies as free will (e.g., full control over all nomological forces and the results thereof). There's no basis for it.

DonSpanton

Nate: It’s axiomatic that intelligence is possible, and wherever it’s possible, by a facet of its description, its imperative is to persist on the basis of self-interested, but not necessarily selfish, choices.

Could you elaborate on this point further? Why is the imperative to persist a necessary correlate of volitional existence?

What’s missing from this diluted proposition is my complete working definition of “intelligence” and a clarification of “persist.” Facets of intelligence must include thinking and acting on behalf of an agent. It’s because of this behalf that, if an agent is intelligent, it must take itself into consideration whenever it deliberates. In turn, this entails that if an agent persists deliberately, rather than quasi-deliberately (i.e., it doesn’t fully articulate to itself what it’s doing) or blindly, it analyzes every real contextual layer within its capacity. This analysis includes accurately abstracting ontologies and conceptually representing the physical universe. If an intelligent agent doesn’t deliberately posit a parent-goal “achieve physical immortality,” it could mean that its analysis lacked the data whose synthesis would converge on both the desirability and feasibility of such a parent-goal.

There’s a distinction between intelligence and volition. Therefore, the imperative doesn’t apply to mere volition.

DonSpanton: Isn't there a difference between being "logical" (ie, intelligent) and being *rational*?

Given the above and what you say about flexibility, being merely logical wouldn’t qualify as intelligence.

(Edit to fix quote.)

Edited by Nate Barna, 04 March 2005 - 03:35 PM.


#8 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 04 March 2005 - 01:16 AM

cosmos: If the Immortalist proposition holds true, should it not apply to all intelligent beings in this universe? At some point should we expect an intelligent race to come to the realization you have arrived at? If this proposition is universally applicable to truly intelligent life then it probably cannot be arbitrary (or so I would think).

In general, I'm supposing that a choice under two different conditions is arbitrary. The choice is to deny a state of being intelligent (under the terms suggested). The two conditions when making this choice are being deliberate and being quasi-deliberate.

cosmos: Does this then entail a convergence of goals and beliefs?

On some goals and beliefs, yes, such as those articulating intelligent deliberations. But when thinking and acting become increasingly automated, instead of converging toward deterministic cognitions, conditions will be such that the highest-level abstracting will be devoted to non-existentially threatening games of an infinite variety.

(Edit: non-existential => non-existentially)

Edited by Nate Barna, 04 March 2005 - 03:36 PM.


#9 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 04 March 2005 - 03:32 PM

Sorry, cosmos. I think I misunderstood you.

Nate

cosmos: If the Immortalist proposition holds true, should it not apply to all intelligent beings in this universe? At some point should we expect an intelligent race to come to the realization you have arrived at? If this proposition is universally applicable to truly intelligent life then it probably cannot be arbitrary (or so I would think).

In general, I'm supposing that a choice under two different conditions is arbitrary. The choice is to deny a state of being intelligent (under the terms suggested). The two conditions when making this choice are being deliberate and being quasi-deliberate.

Yes, I think it would be universally applicable, and I believe it may have been implied, although it could've been too subtle. The proposition also is a step in taking several difficulties of being into account.

One difficulty is Werking’s fourth anthropomorphic conceit in The Posthuman Condition. In so many words, he states that all volitional agents are constrained to make every decision on the basis of gene propagation or happiness. The ultimate implication of this, he seems to indicate, is stasis, since there aren’t intrinsic technological advancements and moral philosophies in gene propagation or maximizing happiness, especially after the point that either of these underlying goals has been automated.

I might suggest, however, that a universal description for intelligence could be concerned with neither. Firstly, intelligence is a thesis unto itself (Corwin said something once like “a goal is a thesis unto itself,” so the phrase is borrowed). Independent of this state, reality doesn’t demand it. The state itself only demands it. If an agent has the opportunity, and knows it does, to be in this state, or if the agent already is, it would be completely arbitrary to deny it, for there would be nothing intelligent about the decision. In contrast, to affirm it would not be arbitrary, for to affirm it would be based on intelligent decision-making. Arbitrariness is not an abstraction that exists independent of minds. It can only gain meaning in the context of a volitional agent that apprehends, at least implicitly, a relationship between being in a state of intelligence and a state of non-intelligence.

Secondly, philosophy has nothing to do with getting richer, healthier or happier, yet it’s absolutely vital to complement the impersonal act of understanding reality – since there’s no justification to understand reality by only acting to understand reality, and there certainly is no independent justification to act in any way based on any of its understanding – with co-creating reality through reasoning – organizing it with abstractions that are distinct from sense experience – to maximize the effectiveness of actions, whatever it means to be effective (i.e., being effective in accordance with thoughts doesn’t necessarily entail getting richer, healthier or happier, but it would be necessary and sufficient to achieve those states). Philosophy is the vehicle for deliberation, whether an agent has access to concepts and not deeper, or has access to and manipulative ability of its elemental states. (This doesn’t mean that it’s value-laden, since deliberation can include the deliberate act to do absolutely nothing based on absolutely zero values. But the key idea is that both thoughts and actions are observed and stipulated with zero bias.) If it isn’t paid much attention, or is ignored altogether, then actions are no more than quasi-deliberate, which is not sufficient to be effective.

(Edit: punctuation)

Edited by Nate Barna, 11 March 2005 - 05:02 PM.


#10

  • Lurker
  • 0

Posted 05 March 2005 - 12:13 PM

Nate, are you familiar with Michio Kaku? He is a theoretical physicist who claims that it is possible to escape this universe, should the need arise; he discusses possible escape routes that are supposedly consistent with what we know in physics today.

Dr. Kaku also speculates as to the purpose of intelligence. Among other things, that purpose may be to persist and propagate, as he states below.

From a Jan. 2005 article written by Dr. Kaku:
Could a hole in space save man from extinction?

...
So, when contemplating the question raised by Huxley in 1863, our true role in the universe might be to spread the precious germ of intelligent life throughout it and, one day, to spread the seed of life by leaving a dying universe for a warmer one.


I read "our true role" as the wider role of intelligent beings, not just humans.

-----

If I were asked whether Michio Kaku is an immortalist, I would be hard-pressed to provide an answer. While he's contemplating measures that would allow intelligent individuals collectively to persist indefinitely, I cannot determine whether he personally shares the goal of physical immortality. As with Dawkins, Dr. Kaku may have resigned himself to a limited lifespan. The baggage of human existence could be the deterring factor preventing the pursuit of continued life, even for philosophers and physicists.

Edited by cosmos, 05 March 2005 - 12:33 PM.


#11

  • Lurker
  • 0

Posted 05 March 2005 - 12:30 PM

Nate, I should add that your last two posts clarified a few issues for me (hopefully I did not misconstrue anything you wrote).

#12 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 05 March 2005 - 05:23 PM

Yes, cosmos, I’m familiar with Michio Kaku. He’s a good fella. Like you, I’m unsure whether he’s resigned himself to a limited lifespan. By all accounts, it appears probable. It seems like on some smartness vectors (henceforth, SVs), though not all, death becomes inversely easier to accept.

At the moment, I’m guessing one may classify individuals who have a tendency to deliberately optimize their betterment, whatever that means to each, into three types. Each type rides on at least two SVs. The first type’s SVs are all death-accepting. Indeed, everyone would probably have fallen into this category 100 years ago, regardless of who they were. The second type’s SVs are both death-accepting and death-denying. I happen to fall into this category, for several reasons, one being that I have a hunch that a synthesis of such SVs will engender death-denying SVs that have greater velocities than would have been possible otherwise. And, of course, the third type’s SVs are all death-denying. In the case of the third type, it’s possible to have ultimately greater-velocity SVs than the second type, although I’m personally unable to bet on it. But it’s a risk-reward situation where a second type’s death-accepting SV could prevail at any moment at the cost of a possible, though not necessarily probable, unprecedented death-denying velocity.

Again, this thought is simply experimental, with the intention to make sense of some popular, smart people’s choice, or anyone else’s even, to accept death in 2005.

#13 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 05 March 2005 - 06:00 PM

Nate

Again, this thought is simply experimental, with the intention to make sense of some popular, smart people’s choice, or anyone else’s even, to accept death in 2005.


But is it just a matter of acceptance, or a denial of the *possibility*? I've met many Immortalists who accept that their death is all but inevitable, while still maintaining that Physical Immortality is a realistic possibility at some point in the future. It seems to me that there is something "lacking" in a perspective that fails to even consider the possibility of an infinite existence.

Also Nate, I think I follow you on the first two types of SVs, but would you mind further elaborating on the third one? Even though I have not had the time to participate much in this discussion, I am following it with great interest...

#14 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 05 March 2005 - 06:02 PM

I've met many Immortalists who accept that their death is all but inevitable, while still maintaining that Physical Immortality is a realistic possibility at some point in the future.


Then again, most of these individuals choose to sign up for cryonics...So I guess they really don't accept death (ie, a finite existence). :)

#15 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 05 March 2005 - 07:09 PM

DonSpanton: But is it just a matter of acceptance, or a denial of the *possibility*? I've met many Immortalists who accept that their death is all but inevitable, while still maintaining that Physical Immortality is a realistic possibility at some point in the future. It seems to me that there is something "lacking" in a perspective that fails to even consider the possibility of an infinite existence.

Don, that’s a good point. I agree with you, if this is in fact your belief also, that acceptance accompanies not only reasoned acceptance but also an SV that falls short of perceiving the possibility.

Now, I currently generalize that people who are interested in their own growth in 2005, yet don’t envision the possibility of an infinite existence, don’t do so, firstly and a little more significantly, because they innately don’t have a great enough tingling ambition in that direction (we’ve been dying and accepting it for the last four-point-something million years) and, secondly and a little less significantly, because they find themselves embedded in a culture that makes them feel at least somewhat guilty for merely desiring to be physically immortal. I say “more”- and “less significantly” because I believe, if that tingling ambition is there, the immediately surrounding culture shouldn’t have that much of an effect on them. “It’s 2005, and, from how I’m situated in the universe, there aren’t very great obstacles inhibiting my ability to participate in a developed society. If I want to physically live forever, I’ll do it at all costs, including at the expense of my reputation inside my more immediate group that I inherited,” sort of thing.

DonSpanton: Also Nate, I think I follow you on the first two types of SVs, but would you mind further elaborating on the third one?

I’m assuming that, by “type,” you mean one of the three classifications of individuals rather than the only two types – death-accepting and death-denying – of SVs I generalized.

The third individual type simply doesn’t ride any death-accepting SV. In other words, each one of its SVs either never challenges its other death-denying SVs, or the SV is a solution trajectory that is specifically aimed at solving the death problem or specifically aimed at solving some other problem that may just so happen to solve a multitude of problems, including death, along the way but has no chance of increasing the overall probability, relative across all their own SVs, of failing to solve the death problem. One may note that, from the preceding statement, we could probably generalize that all individual types have a death-neutral SV. But if they all do, it would be irrelevant when classifying these types.

DonSpanton: Even though I have not had the time to participate much in this discussion I am following it with great interest...

No problem. I appreciate your interest, Don.

(Edit: "...over all SVs..." => "...relative across all their own SVs...")

Edited by Nate Barna, 05 March 2005 - 07:46 PM.




