Paul,
Thanks for your lengthy response!
[quote][quote]If uploads end up being radically reconfigured versions of the original persons .. for all practical purposes.[/quote]
Yes, that’s correct, never meant to imply otherwise.[/quote]
Whoops, I just noticed the (appropriate) quotation marks in this statement: "will they process a perfect copy, or will they modify 'us' for their purposes rather than ours". The identity of the original human being would terminate at that point, as we have agreed.
[quote]That is the type of initial upload I’m talking about, and that is precisely the point I was making. Your human self in this scenario is left behind, unless the new uploaded self finds another way to get you into post-human status without copying, but upgrading using a more advanced methodology (i.e. nanobot brain re-engineering).[/quote]
Why would there be a human self left behind at all? Wouldn't uploadees almost universally want their biological neurons deleted as they are reinstantiated as cybernetic neuron-equivalents? In a Moravec transfer there is no "human self left behind"; the subject moves from meatspace into cyberspace in one fluid movement (a toy sketch of the procedure follows below). And if there are cautious folks trying to avoid destructive uploading, I would expect them to take the incremental cognitive enhancement route rather than uploading, right?
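Just to make the Moravec-transfer picture concrete, here is a toy sketch in Python. Everything in it (the class names, the one-neuron-at-a-time granularity, the threshold response) is my own illustrative assumption, not an actual protocol; the point is only that replacement happens in place, so no second copy and no abandoned original ever exist:

[code]
# Toy sketch of a Moravec transfer: each biological neuron is replaced
# in place by a functionally equivalent cybernetic one. Behavior is
# preserved at every step, and at no point do two complete copies of
# the mind exist, so no "human original" is left behind.

class Neuron:
    def __init__(self, substrate):
        self.substrate = substrate  # "biological" or "cybernetic"

    def fire(self, inputs):
        # The response function is identical on either substrate;
        # that functional equivalence is the whole point of the transfer.
        return sum(inputs) > 0.5


def moravec_transfer(brain):
    for i, neuron in enumerate(brain):
        if neuron.substrate == "biological":
            # The old neuron ceases to exist as its replacement takes over.
            brain[i] = Neuron("cybernetic")
    return brain


brain = [Neuron("biological") for _ in range(1000)]
moravec_transfer(brain)
assert all(n.substrate == "cybernetic" for n in brain)
[/code]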
[quote][quote]Otherwise you're just *copying* the being, creating two different people with the same memories, and each with their own rights. Also, if our uploaded selves chose to upload their human "originals", then wouldn't that mean that the original would still exist, since you imply the original still exists after the first upload? I am slightly confused.[/quote]
You seem to be operating under the assumption that uploads can only occur by copying, rather than by some more advanced methodology, such as upgrading existing hardware via nano re-engineering the brain, as Kurzweil describes.[/quote]
My original quibble here was "why would there be a human original left at all?" Why not just a fluid movement? Yes, incremental enhancement is definitely a possibility.
[quote]Technically speaking, this is true. However, what I’m referring to are the large macro-economic and political forces that will be in play in the years leading up to upload capability. My point, which is backed up by a lot of historical thought, is that technologies capable of creating an upload are also the same technologies enabling complete “genie out of the bottle” annihilation.[/quote]
Certainly; but the technologies allowing the creation of uploads would *also* allow for "benevolence enhancement", breaking humanity's upper bound on kindness, as well as the intelligence enhancement needed to effectively implement genuinely benevolent goals. Whether society continues to become more benevolent and safe in the world of uploads, or rapidly falls into a destructive attractor, may very well depend upon the first being to kickstart the avalanche.
[quote]Ask yourself this – what kind of society would have to exist where only benign nanotech is in play?[/quote]
A society composed of kinder-than-human intelligence as well as human intelligence, in which the society has the technological capability to detect and respond to potential disasters before they happen. (This would probably require near-ubiquitous intelligences operating on nanosecond timescales.) Over time, I would expect the society to settle into a happy equilibrium as far as potential disasters are concerned, just as the vast majority of our internal homeostatic mechanisms operate normally and silently, preserving the basic foundation and form of the human organism. Solely-human societies are just not stable in the long run. (On this I figure we agree.)
[quote]Will it be a totalitarian solution, or some kind of universal transparent society as David Brin is advocating? In either case, I have yet to hear a convincing argument that such technologies could exist for long without some kind of global benignity being pervasive, either forced, coaxed, or freely chosen.[/quote]
I feel that we need to turn our attention to the structure of the minds underlying the civilizations, rather than to political systems, which are special cases of human political organization, whether theoretical as in Brin's scenario or historical as in the case of totalitarianism. (This may just be a difference in our semantics.) For society to survive with advanced technology, it must be composed of a greater proportion of minds with *hardware-level dispositions* toward acting rationally (in the Bayesian sense), cooperatively, compassionately, and so on. For arbitrary levels of technological advancement, Homo sapiens is bound to break down eventually; on this I think we agree. Some combination of the options you suggest is likely to take place.
[quote]I disagree that SIs can be created using simple software tricks as Minsky and Yudkowsky suggest. I will state here unequivocally that SIs will not exist until we are able to match the complexity of the human brain via mapping and equivalent molecular complexity (streamlined or not) as Kurzweil suggests.[/quote]
You seem to be saying that nothing less than a direct hit on the precise neurological pattern corresponding to a certain type of Earth-dwelling, protein-based, evolved, predator-descended hominid will be sufficient to create a living example of general intelligence. In some ways, this seems to me like an alien civilization discovering a functioning PC and saying "nothing less than an atomically precise match will be sufficient to replicate this machine". Evolution didn't know what it was doing; it probably messed up a lot in (accidentally) creating a particular special case of general intelligence. As Nick Bostrom says,
"The number of clock cycles that neuroscientists can expend simulating the processes of a single neuron knows of no limits, but that is because their aim is to model the detailed chemical and electrodynamic processes in the nerve cell rather than to just do the minimal amount of computation necessary to replicate those features of its response function which are relevant for the total performance of the neural net. It is not known how much of the detail that is contingent and inessential and how much needs to be preserved in order for the simulation to replicate the performance of the whole. It seems like a good bet though, at least to the author, that the nodes could be strongly simplified and replaced with simple standardized elements."
Do you disagree with most of the points made in
http://www.nickbostr...telligence.html?
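To make Bostrom's "simple standardized elements" concrete, here is a minimal sketch of what such a node might look like, again in Python. The sigmoid response function and the names are my assumptions, chosen purely for illustration; which neuronal details are actually inessential is, as Bostrom notes, an open empirical question:

[code]
import math

# A strongly simplified stand-in for a neuron: no ion channels, no
# chemistry, just a weighted sum passed through a response function.
# The sigmoid is an illustrative assumption, not a claim about what
# the minimal sufficient node actually is.

def standardized_node(inputs, weights, threshold=0.0):
    activation = sum(w * x for w, x in zip(weights, inputs)) - threshold
    return 1.0 / (1.0 + math.exp(-activation))

# Example: a node with two inputs.
print(standardized_node([1.0, 0.0], [0.8, -0.3], threshold=0.2))
[/code]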
[quote]That means that grey-goo-like nanotech will be around before we can engineer the first SI.[/quote]
Hopefully not, but this may be the case. Although you mention "SI" here, any substantially smarter-than-human or kinder-than-human intelligence could greatly decrease the chances of nano-disaster, as would intelligent policymaking by human beings. (Case in point: the Center for Responsible Nanotechnology.)
[quote]If I understand Eli correctly, he agrees with Marvin Minsky's sentiment that once we figure it out, we will be able to run an SI on an Intel x286!! I think Minsky is a genius, but this statement is absurd.[/quote]
I have never seen or heard Eliezer say this. Do you have a reference? What has he said that gave you the impression that he holds this view? I expect AGI to be technologically more feasible than you do, but not *that* feasible.
[quote][quote]Also, what about the possibility of creating a more compassionate society by upping the incentives to be good and simply disallowing harmful actions using an elegant and safe detect-and-response system?[/quote]
I think it's necessary... utopia or oblivion. There is no third way.[/quote]
Agreed.
[quote]That's a good question, and my speculation is that, assuming we even get to upload capability, we won't have a "malevolent people" problem by the time we reach this stage of technology. Otherwise we will have destroyed ourselves before we got there. How is this possible? My guess is we are going to see radical improvement in mental health because of a much deeper and more thorough understanding of brain chemistry over the next couple of decades. I expect to see more improvements in mental health over the next 20 years than in all of human history combined. I highly recommend everyone read David Pearce's The Hedonistic Imperative for a good introduction and future roadmap on how this is possible, practical, and, as I am arguing, necessary.[/quote]
Will 20 years be enough to eliminate malevolent or self-centered intentions in everyone on Earth? And what about mistakes made through ignorance, like someone who enhances her own intelligence, accidentally wires her motivations so that she desires nothing but cupcakes, and proceeds to further enhance her intelligence, acquire nanotechnology, and turn everyone on Earth into cupcakes? That's just one example; many other things could go wrong in the absence of outright malevolence.
[quote]Do you agree or disagree with Nick Bostrom's simulation argument? Your answer here would in turn determine the best way to answer your question.[/quote]
Agree, of course.
[quote]I completely disagree with this, because they would already have a working model – themselves. So all that wasted computer power would never be expended in the first place.[/quote]
The silent assumption I made in my original statement was that simulators only influence their simulated universes by fine-tuning physical constants and the shape of the tiny dimples on the original Big Bang Particle. Projecting which sets of physical constants and tiny dimples are likely to give rise to successful immortalists is a task about as large as simulating the universe itself. But I acknowledge that this argument is moot if simulators have intervened at some point after the Big Bang to push the odds towards the creation of successful immortalists.
[quote][quote]but they would have needed to have done it unless they possess the capability to modify variables *aside from* the initial conditions; and this does not currently seem to be the case.[/quote]
How would we ever know?[/quote]
Because our current universe appears to follow from the Big Bang. This could simply be a complex illusion, but that would require postulating a little bit of extra information; the simulators are trying to hide from us, but not *that* diligently. (Otherwise we wouldn't even be able to form hypothetical scenarios about them.) Assuming that simulators (if they exist, as it seems they do) have only manipulated physical constants and initial conditions requires postulating no extra information.
[quote][quote]One intelligent race finds it easy to simulate worlds. It simulates billions just for fun, but simulated so many that it can't pay attention to or manage them all. It doesn't care when a few destroy themselves; that's tough luck. Another intelligent race finds it difficult to simulate worlds. It can only simulate a few dozen, although it does pay them close attention.[/quote]
This doesn't make sense to me. An intelligence capable of simulating billions would also be able to monitor ALL of them with total ease. Our current model, of all this computation occurring in a Pentium without its knowledge, is a very poor analogy for how fully integrated intelligence will work in the future. Do you require I expound on this?[/quote]
No; it seems now that my original argument was quite weak to begin with, and it looks even less convincing in retrospect. To be honest, I'm just a beginner in anthropics. I would probably defer to Nick Bostrom if I had to make some massive decision that depended upon sensitive anthropic information.
Wording comment: instead of saying "how intelligence will work in the future", shouldn't you say "how intelligence is likely to work in the class of all worlds technologically capable of simulating isolated subworlds with sentient inhabitants"?
[quote]By benevolent ones, no question.[/quote]
But hey, where's my banana at?
[quote]I think our universe's simplicity is not at odds with a highly streamlined artificial life/emergent complexity simulation. So you do disagree with Nick's and Moravec's simulation argument then?[/quote]
It isn't. My current model includes both the Simulation Argument and the Simplest-of-All-Possible-Worlds Argument. Do we agree? Do you think there are others like us? Shall we start a club...? :)
[quote]Because I’m not talking about post-humans, but human politics leading up to uploads and the singularity. After that, all bets are off of course. :-) [/quote]
Gotcha. Just to let you know, this conversation has been a major pleasure!