

The economics of the singularity - Robin Hanson


10 replies to this topic

#1 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 13 December 2003 - 01:09 AM


Chat Topic: The economics of the singularity: what might happen when?

Assistant professor of economics at George Mason University, Robin Hanson joins ImmInst to discuss how long-term trends in economic growth may reveal a coming Singularity within this century.

Recommended Reading:

1. http://hanson.gmu.edu/longgrow.html
2. http://hanson.gmu.edu/aigrow.pdf

Chat Time: Sunday Jan 4, 2004 @ 8 PM Eastern
Chat Room: http://www.imminst.org/chat
or, Server: irc.lucifer.com - Port: 6667 - #immortal



Robin Hanson - Homepage

#2 advancedatheist

  • Guest
  • 1,419 posts
  • 11
  • Location:Mayer, Arizona

Posted 13 December 2003 - 03:02 AM

According to Hanson's AI paper:

Since at least 1821 economists have recognized that dramatic consequences for wages and
population can result from machines which can directly substitute for human labor


I think this is a bit misleading. The Wealth Revolution began when our ancestors incorporated nonhuman "energy slaves", mainly from fossil fuels, into the productive process, powering machines at levels that totally swamp what mere human labor could accomplish. For example, no amount of human labor can fly a jetliner across the ocean, whereas the proper application of several thousand liters of jet fuel can accomplish the task nicely.

Today developed countries are full of machines, but we are rapidly running out of thermodynamically accessible energy sources to make them do the things we want. Natural gas extraction in North America is falling faster than we can build the infrastructure to replace it with imported cryogenic natural gas. Some energy industry analysts think Saudi Arabia's oil production has already peaked, and we're in the process of discovering that Iraq's oil potential isn't nearly as great as we were led to believe. China, which has been trying to industrialize at an absurd rate, is experiencing shortages of oil and coal this very winter, and many Chinese cities are blacked out a couple of days or more a week. There are empirical reasons, based on declining world per capita energy consumption, to suspect that industrial civilization is facing a discontinuity in about a decade. But this discontinuity won't be the sort of "singularity" foreseen by many Transhumanists. Nobody is going to become "immortal" once the world's power grids, oil-dependent agriculture, and transportation systems permanently fail.

Edited by advancedatheist, 13 December 2003 - 04:42 AM.


#3 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 13 December 2003 - 08:16 PM

Robin Hanson is an extremely smart person, maybe one of the ten (or five) smartest transhumanists in terms of raw brainpower. I think Hanson takes a somewhat anthropocentric view of the Singularity, but I greatly admire all his writings, which delve into very sophisticated topics. One of my favorites is right here:

http://hanson.gmu.edu/matrix.html

Mark: wouldn't we be forced to rely on solar power and nuclear energy, in that case? If we could anticipate the shortages even by a few years, wouldn't that give us sufficient time to transfer our infrastructure over to solar, hydrothermal, and nuclear energy? For example, futurist Marshall T. Savage has proposed the Open Cycle OTEC (Oceanic Thermal Energy Converter), 284 feet in diameter, which generates energy from the temperature delta between high-depth and low-depth water in the world's oceans. Savage has run the numbers, concluding that a typical OTEC module would cost $157 million to build yet produce 59 megawatts of electric power. There exist numerous other plans and proposals for systems that generate power without fossil fuel. The main reason we haven't transferred over yet is that oil is still cheaper, because people are good at exploiting natural resources to the bone. (It's also worth keeping in mind that if nanotech or intelligence enhancement tech were invented before your postulated collapse, these technologies would easily handle it.)
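(For reference, a quick back-of-the-envelope check of the OTEC figures quoted above, assuming Savage's numbers are accurate; this is capital cost only, ignoring operations, maintenance, and capacity factor:)

```python
# Back-of-the-envelope check of the quoted OTEC figures (capital cost only).
capital_cost_usd = 157e6   # quoted construction cost
capacity_watts = 59e6      # quoted output, 59 MW electric

print(f"capital cost per installed watt: ${capital_cost_usd / capacity_watts:.2f}/W")  # ~$2.66/W
```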

(If you want to continue this discussion, you should probably start a new thread.)

#4 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 15 December 2003 - 06:17 AM

BJ, if there is any correlation between my recent economic inquiries and your taking initiative in setting up a chat with Robin Hanson, thank you. If there is not, I thank you anyway. I have not heard of him before. This opens another relevant door for me and I appreciate it.

#5 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 15 December 2003 - 07:10 AM

Heh.. there must be symbiosis at work. Wasn't aware you were thinking about it.. I did see your post at the Brights forum though. Fairly interesting discussion going on over there.

#6 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 19 December 2003 - 10:26 PM

I agree with your analysis, MA. People are rational. When oil becomes scarcer and prices go up, they will just move to alternatives. People are not just going to sit around in the dark and wonder "what happened to all the oil?"

#7 robinhanson

  • Guest
  • 8 posts
  • 0
  • Location:Burke, VA

Posted 05 January 2004 - 12:10 AM

Mark Plus took issue with my saying in a paper of mine: "Since at least 1821 economists have recognized that dramatic consequences for wages and population can result from machines which can directly substitute for human labor."

He writes:

I think this is a bit misleading. The Wealth Revolution began when our ancestors incorporated nonhuman "energy slaves", mainly from fossil fuels, into the productive process, powering machines at levels that totally swamp what mere human labor could accomplish. For example, no amount of human labor can fly a jetliner across the ocean, whereas the proper application of several thousand liters of jet fuel can accomplish the task nicely.


Yes of course we are far more productive now because of machines. But the question is about whether machines substitute for or complement human labor. So far they have complemented overall, but this could change.

#8 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 05 January 2004 - 03:35 AM

CHAT ARCHIVE

* BJKlein Official Chat Start - The economics of the singularity: what might happen when?
<Robin> So, what would you guys most like to talk about?
<Robin> I'm not a very authoritarian conversationalist.
<hkhenson> robin, what is the most interesting recent thing that you know about?
<Mind> Economics of the Singularity
<Eliezer> Incidentally, some intros: Lucifer is David McFadzean, mporter is Mitchell Porter, MichaelA is Michael Anissimov, hkhenson is Keith Henson, Nick Hay is Nick Hay, and I'm no one of any particular importance.

<Robin> Most interesting thing? I guess the evolutionary psychology of laughter.
<hkhenson> ah. you up on Minsky's thoughts on that or have you extended them?
<Mind> How did laughter evolve?
<hkhenson> minsky makes the claim that it is a way to break us out of dangerous loops.

<Robin> The main thing to say about the economics of the singularity is that very simple long term trends suggest a sudden transition to economic doubling times of a few weeks, sometime in the next century.
<Mind> before 2100?
<Robin> Read Minsky long ago, seems like he was pretty far off, compared to the best current thinking.

<hkhenson> have a pointer to current thinking on this subject?
<cyborg01> What's the economic doubling time?
<hkhenson> time it takes for wealth to double
<Lucifer> How long is the economic doubling time today?
<hkhenson> last time I looked, many years ago it was about 20 years
<Robin> More exactly, the time for world product, how much we all produce in a year, to double.
<Robin> Today it is about 15 years.
<hkhenson> ah interesting.

<Eliezer> 15 years? That doesn't sound right. What metrics are being used?
<Jonesey> what's doubling in 15 yrs? not minimum wage?
<Robin> Standard GDP sort of metrics.
<hkhenson> eliezer, too short? too long?
<Jonesey> in real terms min wage is declining
<Eliezer> sounds much too short
<hkhenson> it is a reasonable number eliezer from what I know.
<Robin> The world mostly changes behind the scenes.
<Jonesey> inequality might be doubling every 15 yrs tho. top 1%/bottom 1% income and/or wealth
<Eliezer> maybe China's dragging up the statistics
<hkhenson> problem is that when population grows faster, the income per capita goes down


<Robin> It is about 4% per year I think.
<hkhenson> and population doubling was running about 6%
<Mind> population growth is slowing down
<Lucifer> Are we talking total GDP or per capita?
<Robin> The interesting thing is that world product growth is pretty simply exponential, while things like population grow faster then slower, in much more complex ways.
<hkhenson> total lucifer.
<Robin> Total world product
<Eliezer> I don't think population has doubled in the last twelve years, so it can't be growing at 6%.
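(The arithmetic behind the figures above is ordinary compound growth; a minimal sketch in Python, using only the rates quoted in the chat:)

```python
import math

def doubling_time(annual_rate):
    """Years for a quantity growing at `annual_rate` (e.g. 0.04 = 4%/yr) to double."""
    return math.log(2) / math.log(1 + annual_rate)

def rate_for_doubling(years):
    """Annual growth rate implied by doubling every `years` years."""
    return 2 ** (1 / years) - 1

print(doubling_time(0.04))    # ~17.7 yr at 4%/yr
print(doubling_time(0.06))    # ~11.9 yr at 6%/yr (so 6% growth would double in ~12 yr)
print(rate_for_doubling(15))  # ~0.047, i.e. ~4.7%/yr implied by a 15-year doubling time
```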


<Robin> Right. Population growth rate is less than world product growth rate, which is why per capita is growing.
<hkhenson> depends on where you are eliezer. some places it is higher.
<Mind> Population...from today's knowledge and considering today's trends will level off sometime around 2025
<Eliezer> Robin, did you see Ilkka Tuomi on Moore's Law?
<hkhenson> mind, that means that wealth per capita will turn upward at that point
<Mind> sure...why not
<Eliezer> (Ilkka = ILKKA)


<Robin> Never heard of Tuomi
<Eliezer> http://www.firstmond...ssue7_11/tuomi/
<MichaelA> He's big on arguing against Ray K.
<Eliezer> "The Lives and Death of Moore's Law"
<Eliezer> exponential, yes; steadily exponential, no
<BJKlein> why is economic growth exponential rather than linear?
<Jonesey> where is inflation in all of this? inflation adjusted growth is the right measure to use not nominal
<Mind> Given what I've seen over the last year...it seems Ray K. is pretty much on target


<hkhenson> eventually moore's law runs up against the particular nature of matter.
<Eliezer> BJ, as long as population growth is exponential and per capita is not decreasing planetary GDP will be at least exponential
<Robin> The introduction of computers hasn't yet made a noticeable dent in overall economic growth.
<hkhenson> but before that, seems to be on track
<Eliezer> Jonesey, I can't imagine that economists would use nominal.
<Robin> We are talking real growth here, not nominal
<Mind> Robin...haven't computers helped in productivity growth over the last 5 years
<Robin> You have to distinguish the big picture from the small one.
<hkhenson> robin, that has an interesting side effect. I have argued that wars are caused by falling income per capita.


<Mind> Companies still seem to be shedding IT jobs?
<Mind> and increasing profits
<Mind> due to computerization
<Robin> The big picture is steady exponential growth. Within that steady growth new techs arise, nations rise and fall, lots of things change, but overall pretty steady growth.
<hkhenson> more correctly, such conditions result in high growth rates of xenophobic memes.


<Mind> xenophobic?
<Jonesey> hkh: huh? israel/palestine? india/pakistan? yugoslavia? rwanda? seems like wars are much more likely to be caused by ethnocentrism/nationalism than declining gdp
<Lucifer> Robin, are you saying we haven't seen the effects of computers yet, or they will remain insignificant?
<hkhenson> which in turn give rise to irresistible population pressure to go to war.
<Robin> I'm saying they are the new way growth is expressed, but it does not change the growth rate itself.


<Lucifer> So we're switching s-curves, then?
<Robin> The history of life on Earth so far can be summarized as follows:
<John_McCluskey> I've often wondered if the rise of the internet will cause a shift in the exponential growth constant, due to the much higher "velocity of information" effect on technology development.

<Mind> right...but without computers...the growth rate would have been flatter

<Robin> 1. The big bang creates the universe we know.
<Mind> they are just the newest tool in continuing the upward trend
<Robin> 2. Life arises somewhere, and grows on Earth.
<Robin> 3. ~500Mya, multi-cellular life arises, and brains double in size every ~30Myr
<Robin> 4. ~2Mya, human ancestors discover language, and their number doubles every ~200 Kyr
<Robin> 5. ~5Kya, humans discover farming, and their number doubles every ~1000 years.
<Robin> 6. ~200ya, humans discover industry, and their number doubles every ~15 years.
<Robin> err, let's correct that last one.
<Robin> 6. ~200ya, humans discover industry, and world product doubles every ~15 years.
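(Hanson's long-growth argument extrapolates over these successive growth modes. A rough sketch of the speed-up between modes, using the approximate doubling times Robin lists above; the final entry is his projection, not an established figure:)

```python
# Rough doubling times for successive growth modes, as quoted in the chat above.
modes = [
    ("brains (multicellular life)", 30e6),    # ~30 Myr per doubling
    ("foragers (language)",         200e3),   # ~200 Kyr
    ("farmers (agriculture)",       1e3),     # ~1 Kyr
    ("industry (world product)",    15.0),    # ~15 yr
    ("projected next mode",         2 / 52),  # ~2 weeks, Robin's projection
]

# Each transition sped growth up by a factor of roughly 60-400.
for (prev_name, prev_t), (name, t) in zip(modes, modes[1:]):
    print(f"{prev_name} -> {name}: growth speeds up ~{prev_t / t:,.0f}x")
```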


<hkhenson> john, robert wright makes that case in his book non-zero
<Jonesey> hkh:wars are primarily fought for tribal reasons
<MichaelA> Robin, so does that make the Singularity the literal End of Time, because the whole universe has been leading up to it, or what?
<Robin> Just taking these stats, we might project another big event within the next century, where the new doubling time is a few weeks.
<Eliezer> Michael: No.


<cyborg01> Hi Robin=) I think an ape analogy for the future would be a second industrial revolution -- replacement of all types of work by intelligent machines
<hkhenson> right jonesey. exactly
<Jonesey> nothing to do with gdp for the most part, just my tribe hates ur tribe
<Mind> Now even though our population is not increasing it seems our overall intelligence is still on the growth curve due to the internet and computers...that sound right?
<MichaelA> I'm wondering whether the uploads would live after the Singularity in your model, or before it.


<Eliezer> Michael: There is nothing in the universe that has purpose except people; the rest may have direction but it isn't going *to* any particular place.
<Robin> My best guess about what this next transition would be would be machines that can substitute wholesale for human labor, most likely uploads.
<MichaelA> Eliezer: I already know that, I was just asking in the context of *his* model.
<Jonesey> the american civil war exploded out of a pretty prosperous and expanding economy
<Jonesey> bloodier than all other US wars combined.
<John_McCluskey> Whoa, Eliezer... we know next to nothing about the rest of the galaxy, let alone the rest of the universe.
<Robin> Once uploads show up, they could change the growth rate by a factor of a hundred.


<Eliezer> Robin, I am in severe doubt about the ability of curves derived from old dynamics to predict transitions between dynamics, or even just major substrate transitions in the model. There is no obvious reason why one cause would be correlated with the other.
<Jonesey> aren't there many machines that substitute wholesale for human labour already?


<MichaelA> A hundred? Wow, how can you predict that?
<Eliezer> In other words, even if we grant that the curve is steady... what of it?
<Robin> Eliezer, the curves show the previous transitions, as in how quickly the new growth rate appeared.
<Lucifer> Does anyone know of any efforts to measure overall intelligence?
<hkhenson> eliezer to some extent they are. if they took over, they were correlated
<MichaelA> Lucifer, "IQ testing" is one, "g" is another, "prefrontal cortex size compared to the rest of the brain" is another, etc..
<Eliezer> Robin, I know that the curve shows the previous transitions; what I don't understand is the causal mechanism whereby the interval between previous transitions correlates to the new forces producing the new transition.
<Robin> I have worked out specific math models of how uploads could change the economic growth rate.


<Jonesey> the "flynn effect" sez that avg iq for the species overall keeps going up
<Lucifer> MichaelA, I meant for the planet, not individual
<Eliezer> Lucifer: I can't measure the intelligence of a planet until you tell me what kind of work the planet is trying to perform.
<Mind> Eliezer...by inductive (not bayesian) reasoning...most people predict new transitions fairly well
<MichaelA> "Specific math models" seem useless when the subject you are analyzing is going to make decisions you cannot imagine, <insert cats trying to predict humans analogy>
<cyborg01> Robin: wouldn't it be AI's rather than uploads, at least at first?
<hkhenson> eliezer, converting matter into computronium.
<MichaelA> Lucifer, Flynn effect was discovered through analysis of global average IQs


<Robin> MichaelA - we have to try as best we can - the only alternative is to give up and say "it's a mystery."
<hkhenson> that's the current job of a planet.
<Robin> cyborg01 - my bet is that AI is too hard, uploads will happen first
<MichaelA> Robin, what's your take on John Smart's ideas?
<Eliezer> Henson: But the planet is not, in fact, trying to do that. It is a side effect of other forces at work. Even if you measure the rate of computronium conversion, what you are measuring is not the intelligence of an optimization process.
<Jonesey> "IQ testing" and "g" are still very shaky, e.g. Gould's "Mismeasure of Man" vs "Bell Curve"


<Robin> What does Smart say?
<hkhenson> that's really an interesting prediction robin. you might be right
<Lucifer> Eliezer, assume we are trying to reach the Singularity
<MichaelA> You seem to have a lot of similarities going, except for the uploading thing.
<MichaelA> He also believes that past transitions will serve as extremely useful analogies for predicting future transitions such as the Singularity
<MichaelA> Talks about the Spiral of Progress and such, www.singularitywatch.com is his site


<Robin> I tend to think of these transitions as something that no one person has much influence over.
<MichaelA> Yeah, he thinks that as well
<hkhenson> it is, however, going to be a total bitch if you speed up and nobody else can keep up. gonna be lonely
<Eliezer> Robin: I don't credit the "We have to try as hard as we can to predict it, even if we can't" argument since I started reading Tversky and Kahneman, because they make too good a case for "Sorry, you really CAN'T predict it no matter how much you want to."


<Lucifer> Is it my imagination, or does John Smart add a mystical element to the singularity?
<Eliezer> Lucifer: It's not your imagination.
<Mind> It seems at this point...nothing can stop the further "rise" of intelligence
<MichaelA> hkhenson: unless you engineer out the part of your brain that feels lonely, or make copies, or come up with some other superintelligent solution...
<Eliezer> Hell, practically everyone who still lives in a demon-haunted world adds a mystical element to the Singularity.
<hkhenson> ick michaela


<Eliezer> And most people, even transhumanists, still live in a world with a few demons left in it.
<mporter> my question for robin would be, how does he turn 'mysteries' (i.e. all those transhuman ineffables which people love) into such down-to-earth discussions? is there an art to it? is it a variation of something which economists routinely do for present-day social ineffables?
<Robin> This big transition is probably a good thing overall, but there are lots of negatives.
<MichaelA> Tversky and Kahneman didn't deny the possibility of incrementally better models, however


<hkhenson> mporter, put a dollar value on everything. :-)
<Mind> Eliezer...of course it cannot be predicted with certainty....but inductive reasoning works out a lot of the time
<MichaelA> John Smart does *not* add a mystical value to the Singularity, nope
<Eliezer> Michael: What they said is that people, right now, have a very strong tendency to predict things which experiment shows they just cannot predict, then vastly overestimate their competence to the point of ignoring experimental evidence.
<MichaelA> Not even a smidgin


<Robin> mporter - Economists are known and hated for just that - sucking all that beauty and mystery out of what we most love.
<Lucifer> Does that explain the "dismal science" moniker?
<hkhenson> MichaelA, consider this aspect. A singularity could *simulate* the rapture for all the christians.
<Eliezer> Lucifer: No, that's cause economics often just doesn't work.
<Mind> Moore's law was inductive reasoning...that worked out pretty well....I am sure someone could have told him "hey you have no basis for that prediction"...he still made it


<MichaelA> Yeah, I think that would be one solution to give religious people what they want
<Eliezer> Mind: It didn't work out nearly as well as people think, see the Tuomi link.
<Robin> The "dismal science" label came because economists were anti-slavery, and the pro-slavery folks complained about how they insisted that people were really the same inside.


<MichaelA> A Singularity could also simulate a kabillion other things that have nothing to do with human mythology
<Eliezer> Also, he said "every year" at first, not "every 18 months".
<hkhenson> and micheala, I would bet you long odds they would hate it after a week. :-)
<Mind> close enough
<Eliezer> Robin: Really? Historical fact? Cool!
<Eliezer> Henson: There's a really cruel piece of literature, if you have the time... write one of those hideous popular Rapture novels we now have in the US, only make it realistic.


<hkhenson> LOL!
<Lucifer> Can we expect new technologies to become increasingly disruptive?
<Robin> Lucifer - not in general. In fact, techs were a lot more disruptive about a century ago. We have learned more how to make change happen behind the scenes.
<Robin> That said, what I can see of an upload transition is extremely disruptive.
<Lucifer> Eliezer, the trick would be to write the novel on many levels, so enjoyable by children, people of faith, and all SLs :)


<cyborg01> My guess is that AI's will come first... uploading has its own difficulties...
<MichaelA> Even when you use superhuman intelligence to cushion the impact..?
<MichaelA> Intelligence a trillion times faster than us and smarter than us as we are smarter than fish, or whatever?
<John_McCluskey> I'm reasonably sure that uploads will require some level of AI assistance.


<Robin> The main prediction of the upload transition is that wages fall very rapidly. Humans who count on their ability to earn wages may well starve.
<MichaelA> Uploading requires either general AI or a frustratingly slow enhancement trajectory


<Eliezer> No matter how much computing power you have, you can only use it to answer well-formed, answerable questions.
<Mind> Robin...how will the increasingly rapid growth of technology and intelligence affect the world politically? Will growing interconnectedness lead to more socialism...or will powerful technology allow the individual more power
<cyborg01> Robin: its not true if human employs AI's as a tool...
<hkhenson> that's a really interesting prediction.
<Robin> The uploading vs AI first is an open question - I just gave my bet.
<Robin> I mean to distinguish wages that people make from working, versus the income they get from owning/lending to AIs.
<Eliezer> Actually, Robin, I can't see where you came out on "uploads first" or "AIs first" - where?
<Mind> Uploads


<cyborg01> Practically everyone will be using AI's to work for them.. is my prediction - except prostitution =)
<hkhenson> so far an AI isn't good enough to milk cows.
<Robin> I was an AI researcher for 9 years - it is just a damn hard problem. Uploads seem hard too, but manageable.
<Robin> People who don't own AIs can't put them to work for them.
<Eliezer> origins of "dismal science": http://www.evolvingt...ves/000395.html

<Lucifer> cyborg01, won't human prostitutes be put out of work by sexbots?
<cyborg01> Robin: it's very easy for AI's to replace people in McDonald's etc
<John_McCluskey> Back to economics. Robin... what segment of the economy will grow fastest? Techs as usual? I'm thinking Intel & IBM have extreme long term potential, but wildcards (startups) could take it all too.
<cyborg01> That will happen pretty damn fast I'm afraid....


<Robin> http://www.marginalr...2/laughter.html is a source on laughter

<Mind> Robin...how will the increasingly rapid growth of technology and intelligence affect the world politically? Will growing interconnectedness lead to more socialism...or will powerful technology allow the individual more liberty?
<Eliezer> Robin: Is AI hard, or just anthropic? All the branches of reality where someone solved the AI problem have already been deleted.

<Jace> poor Mind :(
<Robin> The creation of upload hardware and software would be enormous parts of the economy.

<xeiox> I've heard about studies saying that consciousness not only exists in the brain but could be fundamentally ingrained within every part of our physical self. Has anyone else heard about this and know more about it? It seems to me that uploading hasn't even been determined to be feasible.
<xeiox> Yet you all throw it around like it is inevitable

<Robin> Mind - predicting political reactions is hard. The big problem is that uploads can be painted as alien machines who are stealing our jobs.
<Robin> Eliezer - you lost me.
<Robin> xeiox - for economic purposes, it doesn't matter if the uploads are conscious. They just have to act like humans do.
<MichaelA> Universe Swallowing AIs = not so easy to assume everyone understands
<Eliezer> I'm saying that inventing an AI, from the perspective of conscious observers, may "mangle" a branch just as thoroughly as having too little amplitude.
<Jonesey> babies are "uploads" to a large extent


<Eliezer> or, rather, the local sphere of that branch
<Jonesey> unfortunately we upload alot of shit
<John_McCluskey> I think it's Hard Eliezer... If it was easier, the dinosaurs would be running the planet by now...
<Lucifer> for economic purposes, it doesn't matter if the uploads are conscious. <-- same for humans ;)

<cyborg01> I studied uploading for a few years... and it seemed also an insanely hard problem.. the issue is that you cannot legitimize uploads unless they are really human-equivalent
<Eliezer> or, in plainer language, you got like a gazillion Everett branches, and all the branches where someone invented AI are dead

<xeiox> How do we even determine what is a human equivalent and what is not? Have we even accurately defined what it is to be human?
<Robin> for uploading you need three things 1) enough computing power 2) good models of each neuron type 3) scan brain at enough resolution
<Mind> an upload could also be a gradual transition...it doesn't have to be all at once
<Robin> Each of these three techs is on a trend line, and should get there well within a century.
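(Robin's "on a trend line" claim is plain exponential extrapolation: if a capability improves by a fixed factor each year, the year it crosses a required threshold follows directly. A sketch with purely illustrative numbers, none of which come from the chat:)

```python
import math

def years_until(current, required, annual_growth):
    """Years until `current` reaches `required` if it grows by `annual_growth` per year."""
    return math.log(required / current) / math.log(1 + annual_growth)

# Purely hypothetical numbers: a capability a millionfold short of what uploading needs,
# improving ~60%/yr (a Moore's-law-like pace), crosses the threshold in roughly 30 years.
print(years_until(current=1.0, required=1e6, annual_growth=0.60))  # ~29.4
```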


<Eliezer> Robin, unless you have absurdly perfect models, and absurdly good resolution, you are going to need (4) good models of higher-level processes, to make sure that biases introduced at the lower levels are not additive
<xeiox> Robin, that's if you are to say that the entirety of human consciousness is in the brain. I have seen people say otherwise?

<Robin> Eliezer - I don't see how a branch having AI makes it likely that its evolution will be driven by some other branch. That is what mangling is.
<Eliezer> xeiox: they're living in a demon-haunted world
<cyborg01> Robin: uploading also needs complete elucidation of biochemical signalling pathways... =[

<Eliezer> Robin: I was speaking metaphorically. My attempted point is that having a world transformed into paperclips is an anthropic effect just as much as mangling it.
<Jonesey> for uploading u just need an ai that's as smart as a baby
<MichaelA> for AI you need 1) enough computing power, 2) an abstracted version of the critical dynamics of intelligence, which can be simpler than human intelligence, and 3) a suitable training program
<Jonesey> we get thru uploading babies just fine without all that crap
<Robin> Look, let's agree that both AI and uploading are damn hard. But think: 100 more years of tech progress! Isn't that a whole damn lot of progress?
<Mind> yes
<Eliezer> Only if you equate tech progress to progress.
<hkhenson> do you really think it will take 100 years of wall clock time?
<Eliezer> There seems to be a lot of serious evidence building up, not just religious sermons, to the effect that money does not make people happy.
<hkhenson> if true it makes a cryonics contract essential
<xeiox> I think it would be a wise decision to study the implications of certain advancements

<Eliezer> Even improvements in a country's real domestic product don't make people happy.
<Robin> Eliezer - whether an upload transition would make people happier is a whole different question
<Lucifer> Machine powered flight was damn hard. Building working models of a bird is even harder.


<Mind> money does not make people happy....hello...I thought everyone knew that
<John_McCluskey> Yeah, but lack of money can make you seriously unhappy...
<Jonesey> uploading isn't hard.
<Jonesey> we do it all the time.
<hkhenson> eliezer, that's not the point. having a rising amount of wealth per capita keeps the "tribe" from going to war with the next tribe over
<Eliezer> Robin: An upload transition by itself changes nothing. What matters is what happens once cognition and perception are more malleable.


<MichaelA> If the set of AI failures that create massive copies of conscious observers without taking their volition into account (rather than nonsentient paperclips) is larger than the set of worlds where AI isn't created yet, I would expect myself to already be in post-AI land, even if the mangling effect in AI-creation branches is very high...
<Eliezer> John: Yes, the studies agree on that.
<Robin> I'm skeptical that more money makes people much happier. But I do think that having more happy people is a good thing. We could literally have many trillions of happy uploads.
<Jonesey> it takes a while but doesn't require high tech if the upload-ee is smart enough.


<Eliezer> Ani: There is no "mangling effect" as such, it was an unfortunate metaphor and I withdraw it.
<hkhenson> mind, while money does not make people happy, lack of money makes them *unhappy*
<MichaelA> "Anthropic deletion effect" will do fine; the same effect that ensures that observers don't pop up in extremely entropic worlds, then.
<xeiox> It's not necessarily the money itself that makes people happy obviously, its what comes with a large amount of money: A higher standard of living, satisfaction results because personal/general vitality is guaranteed
<Eliezer> Robin: If you multiply the population too fast, you put a severer upper bound on how large individuals can grow, if that number is matter-bounded and not time-delay-bounded.

* Eliezer nods to hkhenson
<BJKlein> at some point, perhaps we can move into the discussion of physical immortality with Robin before the official chat ends
<Mind> The level of money has to be pretty low before "unhappiness" sets in...so low that they begin starving....if they have food and shelter (from my experience) they can achieve a good level of happiness


<Robin> Wealth has grown by huge factors, and since population has not grown by factors as large, per capita wealth has increased. While wealth will grow even faster than before, population may well grow even faster, leading to declining per capita wealth.


<hkhenson> i hope not
<Eliezer> Mind: actually what makes people unhappy is *decreases* in wealth... even if it is a decrease from absurdly high levels
<Mind> why would population grow faster?
<Jonesey> per capita wealth is not useful when more and more wealth is increasingly concentrated into smaller and smaller % of the population.
<hkhenson> that's a path to war, evolved into us in tribal days.
<Jonesey> as robert reich likes to put it, "shaq and I have a per capita height of 6 feet"


<Robin> With uploads, the number of them can grow as fast as factories can crank out new upload brains.
<Eliezer> Robin: Then there are the gloomy papers about the end of class mobility in the US, and accelerating wealth concentration, and declining real income for most of the population...
<Eliezer> Robin: Producing huge numbers of people sounds dystopic to me.
<Robin> Eliezer - those are short term small trends, that usually reverse before long. We are discussing huge changes here.
<hkhenson> eliezer, that's what you want if you are trying to get a population to support a war.


<Lucifer> Jonesey, I don't get your reference
<Eliezer> When you consider how many Everett branches there are, I doubt that the number of people matter much; what matters is the proportion of happy people and how happy they are.
<Robin> My first job as an economist is to predict what will happen, if we do nothing to change it.


<Eliezer> Henson: I don't believe politicians are that competent, that evil, or that stupid.


<hkhenson> robin do you really think it will take a century?
<Mind> I've read the gloom and doom too, Eliezer...but the fact is lower income people have more of the necessities than ever before in U.S. history...everyone has a phone, most have air conditioning, everyone has a tv, everyone has food
<Jonesey> lucifer:you don't understand that mean is very misleading as an indicator of the center if the distribution is very lopsided?
<Jonesey> lucifer:had any statistics?
<Mind> most people have catrs


<Robin> The transition from hunting to farming probably reduced average happiness.
* Rotaerk clears his throat. "I think Bruce wants the discussion to get to immortality..." Steps back into the shadows.
<Mind> cars
<Eliezer> Mind: And the fact is, they aren't happy.
<Jonesey> robin:No wonder it's been so universal?!
<BJKlein> heh Rotaerk.. no rush.. whenever Robin wishes..
<Robin> Knowing that, who would have been in favor of preventing the transition to farming?


<Rotaerk> :)
* Eliezer raises his hand.
<hkhenson> LOL!
<Robin> Keith - I just don't know, but I don't think it will take more than a century.
<Eliezer> If you're going to do something, do it right.
<Jonesey> risk aversion means that predictability increases happiness even if it means a slightly lower mean expectation.
<hkhenson> ok


<Mind> I personally know a fellow who has 6 kids and a wife....lives in a trailer....and works for 10 dollars an hour...and is happy....does he want more, yes, is he unhappy, no
<Jonesey> that's why people pay for insurance of all kinds
<Jonesey> so hunting->farming increased happiness, no more feast or famine
<Rotaerk> Mind: is he happy or content?
<Jonesey> that's what a negatively convex utility function means
<Eliezer> I don't see why a hunter-gatherer civilization can't build printing presses.
<hkhenson> there are good reasons eliezer


<Jonesey> hard to lug press around chasing woolly mammoths
<mporter> smaller population
<Jonesey> dang critters keep stomping the presses
<Eliezer> It's obvious enough why the agriculturalists did it *first*, I don't see why hunter-gatherers would *never* do it.
<Jonesey> movable type?? STOMP
<Mind> Rotaerk...both happy and content, as far as I can tell
<Robin> Honestly, we tend to overestimate our ability to change these huge trends. We may see them before others, but ...


<Jonesey> i just gave u a mammoth reason why eliezer, pay attention
<hkhenson> eliezer, have you read Non-Zero yet?
<Robin> BJKlein - we have actually been talking about immortality.
<Mind> I used to make 5 dollars an hour just 6 years ago....I was barely making it, but still happy....that is my personal experience
<BJKlein> true.. i have some specific questions for you..
<Eliezer> Henson: No, but I'm familiar with a good deal of surrounding literature - evolutionary transitions, cooperation in game theory, psychological altruism, etc.
<BJKlein> do you think death = oblivion?
<Jonesey> of course, bjk


<Jonesey> happiness is not wealth.
<BJKlein> just a nice point to start from..
<Robin> The big thing is that immortality will be available given enough wealth, but my analysis suggests that most creatures in the future won't have that much wealth.
<Mind> Weird Al once quipped "if money can't buy happiness....I guess I'll have to rent it"
<FuturQ> Lack of wealth can sure be sadness
<hkhenson> why?


<Robin> You have a key choice to make about your progeny - do you want lots of progeny, or do you want each of them to be rich?
<BJKlein> Robin, from you.. do you think that Death = Oblivion?
<Robin> BJK - yes, almost by definition.
<BJKlein> good good..
<John_McCluskey> Robin, did you mean the near future? (20 to 50 years), or further out than that?


<BJKlein> thus the need for cryonics
<hkhenson> good point robin, but if wealth grows faster than population, any fixed price can be accommodated by people eventually
<Robin> My predictions are about soon after the next big transition, which I think will be an upload transition.
<hkhenson> hmm
<Robin> Keith - but wealth won't grow faster than population.
<hkhenson> it is now, why not in the future?
<FuturQ> it isn't for everyone
<BJKlein> Robin, do you think physical immortality will *become* the most important goal for all life?
<Robin> Malthus had a theory. It was right about the thousands of years just before his theory. Just his bad luck it was wrong about the next few hundred years, because then human numbers could not grow as fast as the economy did. But with uploads, we're back to Malthus being right.
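(The Malthusian point can be illustrated with two compounding series: if population grows faster than output, per-capita wealth falls even while total wealth explodes. The growth rates below are made up for illustration, not taken from Hanson's models:)

```python
# Illustrative only: rates are invented to show the effect, not Hanson's figures.
output_growth = 0.50       # total world product grows 50%/yr in this toy scenario
population_growth = 0.60   # upload population grows 60%/yr

wealth, population = 1.0, 1.0
for year in range(1, 11):
    wealth *= 1 + output_growth
    population *= 1 + population_growth
    print(f"year {year:2d}: total wealth x{wealth:6.1f}, per-capita wealth x{wealth / population:.2f}")
```

After ten years total wealth is up nearly sixty-fold in this toy run, yet per-capita wealth has roughly halved.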


<hkhenson> bj, it would take quite a shift in evolved priorities
<LazLo> Hi folks, sorry I am late, but could you define wealth please Robin? I am not sure we are all using the same definition
<Eliezer> BJK: No, I think the problem will be solved or proved insoluble by a small number of specialists, and most people, regardless of the resolution, will care about other things.
<Robin> BJK - no, I don't think personal immortality will be the most important goal for individuals.


<Eliezer> Robin, did you see my recent posts (to SL4, or the forwards to >HTech) about darwinian dynamics probably not applying to superintelligence?
<hkhenson> robin, you think uploads are going to be considered people or is there some chance they could just be slaves?
<BJKlein> immortality will never be known for sure.. thus will there not always be room to reduce the risk of death?


<Robin> LazLo - defining wealth in detail would be too hard in this forum. Just means more of what people want.
<Eliezer> My points don't rule out Malthus but they do rule out Malthusian natural selection.


* Eliezer opens his coat. "Psst... anyone want to buy some utility?"
<Robin> Eliezer, I saw and responded today.
<BJKlein> official chat ends here.. robin feel free to stay
<Jonesey> what goal is more important than personal immortality?
<MichaelA> heh
<LazLo> I agree but much of what people want is not measured by capital
<Eliezer> 'sec


<Robin> Keith - politics is harder to predict than the rest.
<Jace> Thank you, Robin.
<hkhenson> jonesey, kids have been more important for a *long* time.
<Lucifer> Eli's thread>> http://forum.javien....gN-VqYp5kVfNORX
<Robin> I'll stay around for a while.
<BJKlein> thanks Robin


<Jonesey> hkh:yes, personal immortality will be sacrificed for the life of a kid. that makes sense. but aside from such brutal choices...?
<Robin> I tend to think that what future people will want will largely be determined by natural selection.
<Jonesey> natural selection is not operating on us very much.
<BJKlein> interesting
<hkhenson> in the long run for sure
<Jonesey> we're preserving all manner of "defective" genes
<hkhenson> genes are falling under control fast
<LazLo> and I think Natural Selection is being overruled by Human Selection, hence the importance of the evaluative process in subjective human terms
<Robin> With uploads the rate at which natural selection can operate would go way up again.


<Jonesey> and that's going to increase sharply going forward
<BJKlein> Robin, do you think you will become physically immortal?
<Jonesey> quite the opposite hkh, crappy genes are doing better and better as medicine improves


<Robin> BJK - I think that is an option, but I don't know if I will choose it.
<BJKlein> so you don't think you have the free will here?
<Jonesey> the avg human now is already pretty flabby and weak and would have a very hard time in a neolithic environment
<Robin> DNA is not the way future genes will be encoded!
<Jonesey> knockneed and clumsy
<Eliezer> Robin: The thread was on SL4, some forwards were to transhumantech, I just passed on the original post to Extropians cuz I thought it was worth crossposting
<BJKlein> sorry.. scratch that last question
<Jonesey> you have to believe our distant ancestors were much more athletic than avg human is now.


<hkhenson> jonesey, bad genes may not be bred out of the race, but if not, it will be because we abandon flesh
<Robin> Eliezer - by email tell me what SL4 is.
<Jonesey> could be hkh in the long run yup
<Guest> interesting, personally we will simply build better bodies and have our minds either pulled out of the old shell or uploaded to a new form
<hkhenson> eliezer, cc me please about sl4

<BJKlein> Robin, why do you think you would *not* choose immortality?
<Robin> I might want to have lots of progeny, rather than ensure that one of them is very rich.
<LazLo> bad genes today could be good genes tomorrow and vice versa; it really depends more on advantageous adaptation to environment keith
<hkhenson> robin how does the wife feel about this plan? :-)
<Eliezer> Robin - the SL4 mailing list, sl4@sl4.org, www.sl4.org, will email.
<Guest> custom bodies have the potential to do a lot to forward progress on other planets (no suits needed), and as far as beauty goes, you can become whatever you may dream


<Robin> There is a good chance of a big first mover effect with uploads - there may be trillions of copies of the first few hundred people to upload, and not much room in the economy for everyone else.
<hkhenson> hmm
<Jonesey> we have the ability to "customize our bodies" now with exercise and diet and few of us exercise that option even though it results in increased happiness
<Jonesey> too much work


<cyborg01> Uploading is difficult because we can't be sure the simulation can capture all biological nuances --- what are the problems pertaining to AI?
<hkhenson> one burger flipper, cloned a zillion times?
<Robin> Keith - by progeny, I don't mean human babies.
<BJKlein> Robin, with your thoughts that Death = Oblivion, how can you entertain the idea of not trying for immortality?

<Robin> BJK - I care about more than just me.
<BJKlein> i see..


<hkhenson> I think laws are going to forbid multiplying persons on earth, possibly in the solar system.
<Guest> custom bodies will give us an avenue to try different things. Dream of flying? Try our new ZX150 bird form
<hkhenson> 20 light years out though . . . .
<BJKlein> so you feel that if you care about yourself too much, others will not make it?
<Rotaerk> What else?


<John_McCluskey> I'm sure the legislatures will allow at least 1 backup.. Fair use, ya know.
<LazLo> since when has any law been written that was not broken?
<hkhenson> john, inactive backups maybe
<Lucifer> I bet Robin also discounts the future like a good economist
<Robin> Politics is hard to analyze. But we are talking huge effects here. If one nation doesn't pursue this, and another does, the first nation falls very far behind very fast.


<John_McCluskey> Of course.. There can be lots of problems trying to integrate multiple active identities.
<Guest> to be frank I would like to see more openings for managed funds for future revival and upgrades
<Robin> Lucifer - I predict that our descendants will not discount the future as much. It turns out that sexual selection induces a discount rate of about one half per generation, while asexual selection has a zero discount rate relative to population size.
<hkhenson> that's an angle I had never considered.
<hkhenson> but it is straightforward math.
<Lucifer> interesting
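(One way to read Robin's remark: under sexual reproduction each descendant carries roughly half your genes, so the evolutionary weight on payoffs n generations out falls as (1/2)^n, while a clonal lineage weights all generations equally relative to population size. A minimal sketch of that reading, which is a paraphrase rather than Robin's own math:)

```python
def lineage_weight(generations, sexual=True):
    """Weight on a payoff n generations out: halves each generation under sexual
    reproduction (relatedness falls by 1/2), stays flat for a clonal lineage."""
    return 0.5 ** generations if sexual else 1.0

for n in range(6):
    print(f"{n} generations out: sexual {lineage_weight(n):.4f}, "
          f"asexual {lineage_weight(n, sexual=False):.1f}")
```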


<Robin> BJK - If I spread my wealth over many progeny, each one is not rich enough to ensure perpetual survival.


<LazLo> We have probably already entered the race for AI on a subtle international basis the way we were once in a race to control fission before WWII
<Rotaerk> Robin: Given that everyone has easy access to immortality, what do you feel about restriction of reproduction?
<Eliezer> Robin: Discount rates may also take into account planning uncertainty, that is, it may compensate for overconfidence on the success of long-term plans, which is known to exist in humans.


<hkhenson> robin, that's a hell of an argument to go interstellar.
<Guest> Indeed , time to get off the rock
<BJKlein> Robin - even with exponential economic gain?
<Robin> Rotaerk - the net externality from having a kid is probably positive, so if anything we want to encourage it now.
<mporter> lazlo: except that the AI race is within countries as well as between them, something not true in the pre-1945 nuclear case
<Eliezer> If my p(x) for the distant future is too high, then u(x)p(x) can be twinked by taking down u(x).


<Robin> Eliezer - yes, if one part of your decision system is biased, you can correct for that via biases elsewhere.
<hkhenson> if it takes 100 years, that's time for 4 generations.
<Eliezer> (can *partially* correct)
<Robin> BJK - yes, if per capita wealth is falling.
<LazLo> actually when looking at Italy and Germany prior to WWII the race was internal as well as external and the US benefited greatly from the mitotic division of scientists caused by the politics.


<Robin> Eliezer - yes partially.
<hkhenson> if per capita wealth is falling, you automatically get wars.
<mporter> i stand corrected
<Robin> Keith - there is some optimal time to colonize space, but it is not now, and probably not for a while. Earth is pretty damn big, after all.
<BJKlein> Robin - If population levels off, would you change your mind and try for immortality?


<Jonesey> discount rates don't factor in volatility and optionality
<hkhenson> depends robin. 10,000 to one reduction in relative cost would do it right away
<Jonesey> net present value is an inadequate valuation methodology.
<hkhenson> and you can get that from a geosync elevator.
<Jonesey> gotta go to real options to get a better handle on inherent optionality
<hkhenson> which in turn we can get from nanotubes.
<Robin> I have an analysis of evolutionary selection of interstellar colonization strategies - the outcome isn't pretty.
<hkhenson> hmm


<hkhenson> fighting?
<LazLo> divergant evolution
<Robin> A fast "forest fire" burning.
<hkhenson> the far edge party is just for fun
<Robin> see: http://hanson.gmu.edu/filluniv.pdf
<hkhenson> like dyson spheres behind the colonization front?
<mporter> you could combine that with the recent guesstimates about the shape and extent of the 'galactic habitable zone'


<hkhenson> which incidentally was the first thing drexler looked for when he started to understand nanotech
<BJKlein> Robin, how many children do you have ;)
<Robin> What happens if you figure out what you want, and then you figure out that evolution selects for creatures who want something else? If so, you have to accept that creatures like you will become a declining fraction of the future
<Robin> I have two sons, age 12, 10.
<BJKlein> wonderful...


<Eliezer> Evolution ain't so formidable. I can take it on.
<LazLo> or alter environmental demands on selection by redefining environment
<xeiox> Evolution has never lost Eliezer
<MichaelA> Never lost = it must win forever?
<Robin> If it is possible to "take over" evolution, then the moment at which that happens is the defining event in all future history.
<MichaelA> Defining moment... *starry eyes*
<Robin> From that moment on, what creatures want is determined by what they wanted at that first "Ur" moment.


<hkhenson> robin, re laughter, this seems to be in line with minsky or not too far off.
<LazLo> evolution doesn't select environmental adaptation; efficiency is the selection coefficient of competition for sustainable resources
<BJKlein> MichaelA.. heh
<cyborg01> Robin: so you're suggesting immortality is selected against in evolution?
<Rotaerk> What exactly is evolution winning?
<hkhenson> my goodness robin has it really been that long?
<Robin> Immortality is probably not selected for, but it's hard to be sure.
<FuturQ> Natural selection is over. We have the keys to life now. From now on we choose our evolution.


<hkhenson> Amber just got out of college at 21.
<LazLo> Human Selection FuturQ
<Rotaerk> There are still selection pressures, I don't see why it isn't occurring still
<FuturQ> Exactly
<MichaelA> I suggest that how our future will unfold will depend on the entity or entities exerting the strongest force of leverage over that "Ur-moment".
<Robin> FutureQ - *How* is natural selection over? Humans can change their DNA, but that just means that genes have moved, not that natural selection is over.
<LazLo> we aren't just changing our own DNA
<LazLo> we are selecting the entire biome based on human demand
<FuturQ> Only marginally and it has been affected artificially by social pressures for 100,000 yrs.


<Rotaerk> Natural selection is still occurring, though it may not be the only changing force around now...
<Robin> Keith - would be nice to see you if you're in the area
<Jonesey> natural selection is weakened a lot, but yup long way from over
<Rotaerk> mmhmm
<Jonesey> our genes contain the seeds of our possible extinction by war
<Jonesey> there is still every possibility of that
<Robin> You guys are confusing natural selection with selection of DNA. DNA is over, ok, but natural selection is alive and well.
<Jonesey> humans are extremely squabbly


<LazLo> but Human Selection is favored the more powerful our species becomes and the longer we now exist. Our species is not just the top of the food chain, we are designing the food chain for better and worse
<Jonesey> what's the diff? natural selection operates at the DNA level.
<hkhenson> yeah, but at the moment I am an exile. you would not want me around your kids in the event scientology paid bounty hunters started shooting at me.
<FuturQ> I define natural selection as natural pair bonding. we don't do that and haven't since agriculture.


<FuturQ> It's going to be even more in our control through genetics
<Robin> "Genes" are *whatever* it is that determines our construction and behavior.
<Jonesey> that's a nutty def of natural selection
<Jonesey> we're not pair bonders, we are sluts
<hkhenson> heh.
<MichaelA> men are
<Jonesey> yeah right


<hkhenson> not exactly the case, but funny
<Robin> Natural selection can occur among computer viruses - even though they have no DNA.
<Jonesey> dna test every baby and you'll find out
<LazLo> but we are also defining mechanisms that are more than genetic and we are determining which species will cohabit the planet with us
<LazLo> hence not "Natural" selection
<FuturQ> But we as Transhumanists propose to no longer be at the fickle whims of nature, and we'll decide what behaviors and body forms we want for ourselves and our children.
<cyborg01> Natural selection probably determines how much fun we'll have....?
<LazLo> not random chance, and not merely adaptation to whatever environment comes along


<LazLo> we are also terraforming our planet and altering environment
<xeiox> Aren't humans in fact a part of nature itself?
<xeiox> Why the separation?
<Eliezer> Robin: It's not enough to determine construction and behavior; they have to be heritable.
<Robin> If some of us are inclined to do things one way, and others to do them another way, and one way leads to more descendants, then selection has force.
<Rotaerk> natural selection is an abstract process implemented by the system of genetics


<xeiox> Could human selection not be a subset of natural selection?
<Eliezer> Robin, natural selection takes limited resources
<Eliezer> AND frequent death to free up resources
<Eliezer> AND multiple phenotypes with heritable characteristics
<Eliezer> AND good fidelity in transmission of heritable characteristics
<Eliezer> AND substantial variation in characteristics
<Eliezer> AND substantial variation in reproductive fitness
<Jonesey> there's nothing abstract about natural selection, tis very concrete.
<Robin> Eliezer, human behavior is heritable many ways, and resources are now limited, and behaviors change often ...
<Eliezer> AND persistent correlation between the variations
<cyborg01> Yeah.. I think eliezer has a point here....
<Eliezer> AND this is iterated for many generations
<Eliezer> THEN you have a noticeable amount of selection pressure
<FuturQ> AND we propose to pick and choose all of the above
<MichaelA> Eliezer is defining it narrowly, which is his choice, which is far from universal


<LazLo> xeiox: Human Selection is more than artificial selection; it is when one species overrules ecological determinism and applies a species-specific demand on all global life-sustaining resources. I call it Human Selection because we are that species
<Eliezer> Robin: But do the other forms of inheritance, besides genes, persist with high fidelity for long enough to be subject to natural selection?
<xeiox> Laz Ahh
<Robin> Corporations now evolve. They copy behavior, especially that which succeeds.
<Robin> For corporations, "genes" sit in their corporate practices handbooks, in customer expectations, and in MBA courses.
<cyborg01> But how do you know what kind of things that you desire will be selected for or against?
<Rotaerk> I mean it is abstract because it does not depend on the system of genetics
<Rotaerk> it can occur on other "platforms"
<Eliezer> Robin: But how faithful is the fidelity of the transmission? How great is the variance in practices? How great is the variance in success? How closely are they correlated? How many generations has this been going on with good fidelity of transmission?


<LazLo> don't you think those are more rightly called what Dawkins defines as 'memes' Robin?
<Robin> "memes" are too loosely defined for me to refer to them and expect you to have much of an idea what I mean.
<Guest> I don't expect any of the archetypes of human thinking to survive the next 100 years. Humans will change as technology changes. Even the idea that we will continue on as humans is sort of silly


<Eliezer> If each pair of parents has an average of 8 offspring, natural selection will apply at most 2 bits of selection pressure per generation - absolute tops, to be distributed over all characteristics being selected on.
<Eliezer> Natural selection, in a way, is very, very weak.
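(Eliezer's "2 bits" figure follows if selection pressure is read as the information in which offspring survive: with 8 offspring per pair and a stable population, only 2 survive on average, and -log2(2/8) = 2 bits. A quick check under that reading, which is an interpretation rather than something stated in the chat:)

```python
import math

def selection_bits(offspring_per_pair, survivors_per_pair=2):
    """Bits of selection per generation, read as -log2(fraction of offspring that
    survive to reproduce), assuming a stable population (2 survivors per pair)."""
    return -math.log2(survivors_per_pair / offspring_per_pair)

print(selection_bits(8))   # 2.0 bits: only 2 of 8 offspring survive
print(selection_bits(16))  # 3.0 bits even at 16 offspring per pair
```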


<Robin> I don't see how you can look at corporate behavior today and not see variation and selection going on. The behaviors that corporations engage in are the ones that they see have succeeded lately in other corporations.
<Eliezer> It has to be iterated over a long, long time before it starts laying down complex functional substrates like ourselves.


<LazLo> there is that problem but essentially it is the overlap of learned behavior that is transmitted from one generation to the next and mutated through "improved" efficiency
<Eliezer> Robin: I don't deny that variation, selection, correlation, and heritability are there. I ask how weak they are, and how weak their products are, and how long it's been going on, and so I ask not whether evolution "exists" in this case, but how quantitatively strong the effects are.


<Eliezer> Especially bearing in mind that humans are also optimizing corporate practices.
<Eliezer> And that corporate practices spread for many other reasons beyond success.
<Robin> Remember that the economy doubles every 15 years? Most of that change is embodied in corporate behavior. Corporate behavior is changing big time.
<Eliezer> I am expecting natural selection in this case to be drowned in other effects.
<mporter> bayes vs darwin


<LazLo> Is corporate selection equivalent then to natural selection in your mind Robin?
<FuturQ> exactly
<Eliezer> If there are a few big successes that everyone else emulates - along with gurus who never succeeded but talk a good game - then that is certainly not natural selection as known in biology.
<Guest> natural selection will continue for those willing to accept technology
<Robin> Corporate selection is an example of natural selection.
<Jonesey> "the economy" doesn't double.


<Eliezer> Mitch: That's what it's all about.
<LazLo> why?
<Jonesey> perhaps the wealth of the wealthiest people doubles
<LazLo> they are human corporations
<Jonesey> but that's not a good measure of what the typical person is experiencing
<Jonesey> and useless for measuring the lives of those at the bottom of the planetary wealth scale


<Eliezer> Evolution of corporate practices is driven by *plausibility* and *fame*, not directly by successful corporations reproducing.
<Eliezer> Success is only one possible factor in plausibility and fame.
<cyborg01> Actually evolution is known to have random components--- that japanese guy's theory.. what's his name..
<Robin> Say we assumed that future behavior will not be determined by natural selection. Instead assume that people like us create machines and organizations that want what those people want.


<Robin> Given this assumption, a huge question is: what do people like us want? I think the answer to that question is rather disappointing.
<cyborg01> "Genetic drift"
<BJKlein> robin.. perhaps physical immortality?
<xeiox> Robin, what do you think people want?
<LazLo> not independent naturally evolving species. Or are you comparing corporate behavior to the hive characteristics of insects? But Robin we are already doing that
<xeiox> And why is that answer so disappointing?
<Eliezer> cyborg: that's part of standard darwinian theory and appears as the residuum error in any model of population genetics


* MichaelA advises that everyone reads Robin's "Was Cypher Right?: Why We Stay in Our Matrix" paper.
<Robin> People have high minded ideas of what they think they want, but in fact what they want is pretty pedestrian, selfish, and well explained by simple evolutionary psychology.
<LazLo> That was why I asked when I first came in if we are all defining wealth by the same criteria
<FuturQ> I think if put to task, asked to think seriously about it, more people than one might think will want things like peace for all and prosperity for all. This to me is not so bad.


<xeiox> Heh that's very interesting
<Robin> FuturQ - that is what they would say, but not what they would do.
<Eliezer> Robin, people only need to want one high-minded, open-ended form of fun for fun to be inexhaustible.
<FuturQ> You've been around too many Enron types Robin :)


<BJKlein> heh
<BJKlein> Eh, you'd think Robin was an economist or something
<Robin> My main point about corporate behavior is that one can make a lot of progress predicting corporate behavior by assuming a simple evolutionary competition.
<Robin> Eliezer - you lost me.
<Eliezer> Robin, about the predictability - is that theory, or has it been demonstrated?
<Robin> I'm just talking about my informal impressions of corporate behavior.
<LazLo> The social darwinism of economics is an old idea and it has some validity but it is not genetics
<FuturQ> Is there a link to Robin's paper?
<mporter> http://hanson.gmu.edu is robin central
<Eliezer> Robin, I'm not in a position to start throwing stones, but I'm going to throw a stone anyway: you may find that there is less predictability in foresight than there seems to be in hindsight.
<LazLo> In the forum post as well
<FuturQ> thanks


<Eliezer> and it's the foresightful probability that determines the success of a Bayesian predictor
<Robin> OK, what follows from saying the future is hard to predict? Shall we just go back to living our lives and ignore the future?
<hkhenson> future did make the point that wealth and much else is directed toward making a person (especially males) attractive for pair bonding
<BJKlein> http://imminst.org/f...=ST&f=63&t=2601
<FuturQ> Hell no!


<Eliezer> I think the future is terribly hard to predict, and yet I have this whole clever plan... it doesn't seem to hamper my will to meddle.
<LazLo> does mutation anticipate environmental shift or is it the result of many mutations, some lucky enough to be in place in time?
<Robin> What's the point of having plans if prediction fails?
<BJKlein> one person can make a difference
<hkhenson> btw, evolution can act with frightening swiftness in some cases.
<Eliezer> I am in favor of calibrating probabilities. If calibrating probabilities means that they're all in the 60% range instead of the exciting 99% range, so be it.

#9 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 05 January 2004 - 03:37 AM

<Eliezer> I am in favor of calibrating probabilities. If calibrating probabilities means that they're all in the 60% range instead of the exciting 99% range, so be it.
<Rotaerk> mutation is random.
<Rotaerk> luck
<hkhenson> one generation may be enough to get rid of thrifty genes.
<LazLo> not all, that is why it is called selection
<Robin> The thing I'd most like to have is a big betting market where we can get numbers for all these probabilities.
<Eliezer> Robin: My plans are not sensitive to a lot of variables you are trying to predict... that is perhaps why I find it much easier to believe that those variables are just not predictable.
<Rotaerk> mutation has nothing to do with natural selection.
<Rotaerk> natural selection occurs AFTER mutation
<Rotaerk> they are 2 separate parts of evolution
<Eliezer> Whether Moore's Law is smooth or choppy makes no difference to what I'll be doing tomorrow.
<LazLo> environment determines success but environment is not a fixed quality
<Jonesey> it could if you are a Moore's law groupie who tries to model your daily actions on the law as you perceive it
<MichaelA> Yeah Robin, big thumbs up on the future idea markets thing. Any progress on that front?
<Rotaerk> well...natural selection also occurs after environmental change too...
<Robin> On laughter, the story is that smiles show others we are comfortable with them, and laughter does so even more.
<BJKlein> Robin, would you say you're a pessimist in comparison to most other futurists?
<LazLo> a mutation like sickle cell is bad in the modern North American environment but good as a defense against the tsetse fly
<Jonesey> laughing at someone shows derision, not comfort
<hkhenson> malaria lazlo
<Robin> Eliezer - as best I understand them, your plans are sensitive to implicit economic assumptions I think are pretty wrong.
<Eliezer> Robin: example?
<mporter> mocking someone generally implies you don't fear them
<Robin> Eliezer - the idea that one small team can suddenly advance far beyond the rest of the world.
<mporter> at least, mocking them in person
<LazLo> natural selection is not progressive, it is simple change that adapts to environment; OK Keith, I stand corrected, but the paradigm is still valid
<hkhenson> one of the most spectacular ones was Xanadu, but in many ways they got it right.
<Robin> BJK - haven't done a survey to know
<Rotaerk> the fact that organism X was both in the environment of north america and had the mutation of sickle cell is luck. the purpose of natural selection is to prune the unlucky ones from the bunch
<Robin> MichaelA - not much progress - mainly needs funding
<Robin> There are a half dozen companies trying internal markets now
<hkhenson> rotaerk, africa not north america.
<MichaelA> You could probably create a ramshackle one with volunteers
<LazLo> the only purpose of natural selection is to improve survivability not to cull specifically
<Rotaerk> oops sorry :)
<Eliezer> Robin: I agree that's an implicit assumption; what I disagree is that it is an economic assumption.
<Rotaerk> didnt notice I typed that
<hkhenson> there is a future market of sorts
<hkhenson> though it is reputation based rather than money
<Robin> Economic research has done much to show just how incremental most technical progress is.
<Eliezer> Specifically, I disagree that the model which determines that question makes predictions about economic progress over the last 100 years - hence, how can looking at economic progress over the past 100 years fit parameters of the model? One does not seem to be Bayesian evidence about the other.
<Robin> On laughter, an interesting question is whether AIs and uploads will use similar mechanisms to show comfort.
<hkhenson> robin, on AIs I suspect that AIs should be highly concerned with improving their status in the eyes of their co-AIs and humans
<hkhenson> same as humans are.
<Eliezer> Robin: All that means is that, afterward, historians of science will look at SIAI and say: "Oh, but Yudkowsky didn't really invent all that much, his basic ideas were all echoed in Shannon and Barsalou and etc. etc. etc. and Yudkowsky just reinvented some parts and put together other parts and..."
<Robin> Eliezer - not sure I follow. You seem to be saying that our experience with technical progress over the last few centuries is irrelevant, because what you have is all new?
<Eliezer> RSI is all new
<Robin> Keith - yes, they should be concerned with status, but will such mechanisms actually work, or will they be too easy to fake to be useful? Humans find it hard to fake a smile, which is why smiles work.
<MichaelA> And apparently impossible to understand too
<Eliezer> And the road to RSI, when historians trace it out, will turn out to be well-precedented in the pieces, with only their assembly being novel, barring one or two special leaps
<hkhenson> real status is conferred by real actions robin.
<Robin> I need to stop now. Any last questions or comments?
<hkhenson> unlike the kind of fake status one gets from a cult
<BJKlein> Thanks Robin.. feel free to come back
<hkhenson> much thanks for coming by Robin.
<Eliezer> it's been surreal
<FuturQ> what market sectors would you invest in in the next 10 ys?
<Robin> This was fun, and more productive than I had estimated. I'd like to do this again sometime.
<hkhenson> I will stop if I can next time I am down there. Get the CIA to give me a lift to and from canada
<LazLo> You have been a scholar and a gentleman, sir; a pleasure to share some time with you, please come back
<Robin> I won't bet against the current experts on market sectors.
<MichaelA> Cult leaders take real action, real action to create cultists ;)
<hkhenson> I wonder what it would take to get solar power from space?
<LazLo> A Sun ;)

#10 yosa

  • Guest
  • 4 posts
  • 0
  • Location:Caracas, Venezuela

Posted 05 July 2004 - 02:16 AM

Wow, great material to review:-)

Immortally yours,

Yosé

#11 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 10 October 2004 - 05:00 AM

Robin Hanson: The Next Really Big Enormous Thing
Assistant Professor of Economics, George Mason University
http://www.futurebri...robinhanson.asp
[16](read his bio)

A postcard summary of life, the universe and everything might go as
follows. The universe appeared and started expanding. Life appeared
somewhere and then on Earth began making larger and smarter animals.
Humans appeared and became smarter and more numerous, by inventing
language, farming, industry, and computers.

The events in this summary are not evenly distributed over the history
of the universe. The first events are relatively evenly distributed:
the universe started fourteen billion years ago, life appeared by four
billion years ago, and on Earth animals started growing larger and
smarter about half a billion years ago. But the other events are very
recent: our species appeared two million years ago, farming started
ten thousand years ago, industry started two hundred years ago, and
computers started a few decades ago.

Do we over-emphasize these recent events relative to their fundamental
importance, because they are about our species and us? Are these
events just arbitrary markers, chosen from thousands in a long history
of relatively continuous change?

I think not, and here is why: most of these events separate a chain of
distinct exponential growth modes. (Exponential growth is where a
quantity doubles after some time duration, then continues to double
again and again after similar durations.) The growth rates of these
modes have varied enormously.
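To make the definition concrete: under continuous compounding (our assumption; the essay does not state a compounding convention), a constant growth rate r implies a doubling time of ln(2)/r, and vice versa. A minimal sketch:

    # Relation between a constant growth rate and its doubling time.
    # Continuous compounding is an assumption, not something the essay states.
    import math

    def doubling_time(rate_per_year: float) -> float:
        return math.log(2) / rate_per_year

    def rate_for_doubling(years: float) -> float:
        return math.log(2) / years

    print(rate_for_doubling(15))  # ~4.6% per year matches the 15-year doubling cited below
    print(doubling_time(0.06))    # ~11.6 years for the 6% growth used in a later example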

The slowest growth mode started first. Our fourteen billion year old
universe is expanding, and that expansion is becoming exponential due
to a mysterious "dark energy." The distance between the galaxies is
predicted to double every ten billion years.

We don't know enough about the history of non-animal life in the
universe to identify its growth rates, but we can see that for the
last half billion years the size of animals on Earth has grown
exponentially. While the size of the typical animal is largely
unchanged, the variation in animal sizes has greatly increased.
Because of this, the mass of the largest animal has doubled about
every seventy million years, and the mass of the largest brain has
doubled about three times every hundred million years. So the largest
brains have doubled about three hundred times faster than the distance
between galaxies.
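The "three hundred times" claim follows from dividing the two doubling times just quoted; a quick check using only those figures:

    # Ratio of the two doubling times quoted in the paragraph above.
    galaxy_doubling_years = 10e9        # galaxy spacing doubles every ~10 billion years
    brain_doubling_years = 100e6 / 3    # largest brains: ~3 doublings per 100 million years

    print(galaxy_doubling_years / brain_doubling_years)  # ~300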

Humans (really "our human-like ancestors") began with some of the
largest brains around, and then tripled their size. Those brains, and
the innovations they embodied, seem to have enabled a huge growth in
the human niche - it supported about ten thousand humans two million
years ago, but about four million humans ten thousand years ago.

While data is scarce, this growth seems exponential, doubling about
every two hundred and twenty five thousand years, or one hundred and
fifty times faster than animal brains grew. (This growth rate for the
human niche is consistent with faster growth for our ancestors -
groups might kill off other groups to take over the niche.)
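The doubling time stated above can be recovered from the two population figures; a quick check, with all numbers taken from the essay itself:

    # Doubling time of the human niche implied by the quoted population figures.
    import math

    pop_early, pop_late = 1e4, 4e6      # ~10,000 humans 2 Mya; ~4 million 10 kya
    span_years = 2_000_000 - 10_000

    doublings = math.log2(pop_late / pop_early)   # ~8.6 doublings
    print(span_years / doublings)                 # ~230,000 years per doubling
    print((100e6 / 3) / 225_000)                  # ~150x faster than brain doublings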

About ten thousand years ago, those four million humans began to
settle and farm, instead of migrating to hunt and gather. The human
population on Earth then began to double about every nine hundred
years, or about two hundred and fifty times faster than hunting humans
doubled.

Since the industrial revolution began a few hundred years ago, the
human population has grown even faster. Before the industrial
revolution total human wealth grew so slowly that population quickly
caught up, keeping wealth per person at a near subsistence level. But
in the last century or so wealth has grown faster than population,
allowing for great increases in wealth per person.

Economists' best estimates of total world product (average wealth per
person times the number of people) show it to have been growing
exponentially over the last century, doubling about every fifteen
years, or about sixty times faster than under farming. And a model of
the whole time series as a transition from a farming exponential mode
to an industry exponential mode suggests that the transition is not
over yet - we are slowly approaching a real industry doubling time of
about six years, or one hundred and fifty times the farming growth
rate.
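Taken together, the quoted doubling times make the speed-up factors easy to verify; a short check using only the essay's numbers:

    # Speed-up of each growth mode over its predecessor, using the
    # doubling times (in years) quoted in the essay.
    brains, hunting, farming = 100e6 / 3, 225_000, 900
    industry_so_far, industry_limit = 15, 6

    print(brains / hunting)            # ~150: hunting vs. animal brains
    print(hunting / farming)           # 250: farming vs. hunting
    print(farming / industry_so_far)   # 60: industry so far vs. farming
    print(farming / industry_limit)    # 150: industry at its model limit vs. farming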

A revised postcard summary of life, the universe, and everything,
therefore, is that an exponentially growing universe gave life to a
sequence of faster and faster exponential growth modes, first among
the largest animal brains, then for the wealth of human hunters, then
farmers, and then industry. It seems that each new growth mode starts
when the previous mode reaches a certain enabling scale. That is,
humans may not grow via culture until animal brains are large enough,
farming may not be feasible until hunters are dense enough, and
industry may not be possible until there are enough farmers.

Notice how many "important events" are left out of this postcard
summary. Language, fire, writing, cities, sailing, printing presses,
steam engines, electricity, assembly lines, radio, and hundreds of
other "key" innovations are not listed separately here. You see, most
big changes are just a part of some growth mode, and do not cause an
increase in the growth rate. While we do not know what exactly has
made growth rates change, we do see that the number of such causes so
far can be counted on the fingers of one hand.

While growth rates have varied widely, growth rate changes have been
remarkably consistent -- each mode grew from one hundred and fifty to
three hundred times faster than its predecessor. Also, the recent
modes have made a similar number of doublings. While the universe has
barely completed one doubling time, and the largest animals grew
through sixteen doublings, hunting grew through nine doublings,
farming grew through seven and a half doublings, and industry has so
far done a bit over nine doublings.

This pattern explains event clustering - transitions between faster
growth modes that double a similar number of times must cluster closer
and closer in time. But looking at this pattern, I cannot help but
wonder: are we in the last mode, or will there be more?

If a new growth transition were to be similar to the last few, in
terms of the number of doublings and the increase in the growth rate,
then the remarkable consistency in the previous transitions allows a
remarkably precise prediction. A new growth mode should arise sometime
within about the next seven industry mode doublings (i.e., the next
seventy years) and give a new wealth doubling time of between seven
and sixteen days. Such a new mode would surely count as "the next
really big enormous thing."
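The seven-to-sixteen-day range can be reproduced by dividing the industry mode's limiting six-year doubling time by the historical 150x-300x speed-up; using the six-year "real" doubling time rather than the observed fifteen years is our reading of the essay, not something it states outright:

    # Implied doubling time of a hypothetical next mode, assuming the
    # historical 150x-300x speed-up applies to a 6-year industry doubling time.
    industry_doubling_days = 6 * 365.25

    for speedup in (150, 300):
        print(industry_doubling_days / speedup)   # ~14.6 and ~7.3 days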

The suggestion that the world economy will soon double every week or
two seems so far from ordinary experience as to be, well, "crazy." Of
course similar predictions made before the previous transitions would
have seemed similarly crazy. Nevertheless, it is hard to take this
seriously without at least some account of how it could be possible.

Now we cannot expect to get a very detailed account. After all, most
economics has been designed to explain the actual social worlds that
we have seen so far, and not all the possible social worlds that might
exist. Even then we are still pretty ignorant about the causes of the
previous transitions. But we do want at least a sketchy account.

It turns out to be hard to create such an account using things like
space colonization or new energy sources, mainly because we now spend
only a small fraction of our budget on things like land and energy.
But we pay seventy percent of world income for human labor, so
anything that can lower this cost can have a huge impact. I am thus
drawn to consider scenarios involving robotics or artificial
intelligence.

While machines have sometimes displaced human workers, they have much
more often helped humans be more productive at tasks that machines
cannot do. Machines have thus on net raised the value, and hence the
cost, of human labor. And because people are essential, the limited
rate of human population growth has limited the economic growth rate.

Once we have machines that can do almost all the tasks that people can
do, however, this picture changes dramatically. Since the number of
machines can grow as fast as the economy needs them, human population
growth no longer limits economic growth. In fact, simple growth models
which assume no other changes can easily allow a new doubling time of
a month, a week, or even less.
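A toy illustration of why the constraint disappears (this is not Hanson's model, which is developed in the aigrow paper linked below; every parameter here is made up purely to show the mechanism): once workers can be manufactured out of current output, reinvesting a fixed share of output into machine workers yields doubling times measured in months rather than decades.

    # Toy model, not Hanson's: output with a fixed human workforce plus
    # machine workers that can be bought out of current output.
    # All parameter values are invented for illustration only.
    A = 1.0            # output per worker per month
    humans = 1_000.0   # human workforce, assumed fixed over this horizon
    machines = 1.0     # initial machine workforce
    savings = 0.2      # share of output reinvested in new machines
    cost = 0.5         # output needed to build one machine worker

    for month in range(24):
        output = A * (humans + machines)
        machines += savings * output / cost   # all savings become machine workers

    print(output / (A * humans))   # over 2,000x the human-only level within two years
    # With these made-up numbers the machine stock grows ~1.4x per month,
    # i.e. output ends up doubling roughly every two months.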

Now admittedly, progress in robotics and artificial intelligence has
been slow over the decades, primarily because it is so hard to write
the software. And at these rates it could be centuries before we have
software that can do almost all tasks that people do. The "upload"
approach, however, of scanning human brains then simulating them in
detail in computers, seems likely to succeed within the next half
century or so.

The transition from farming to industry seems to have been more
gradual than the transition from hunting to farming. Even such a
"gradual" transition, however, would be very dramatic. Assume that a
new transition was as gradual as the one to industry, and that the
world economic growth rate was six percent in both 2039 and 2040, plus
or minus a typical yearly fluctuation of half a percent.

If so, then in 2041, the increase in the growth rate might be the size
of a typical fluctuation, and then in 2042 the growth rate would be a
noticeably different eight percent. Growth would then be 14% in 2043,
50% in 2044, 150% in 2045, and 500% in 2046. Within five years the
change would go from barely noticeable to overwhelming.
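Compounding the illustrative rates above shows how overwhelming five such years would be; a minimal check using the example's own figures (taking 2041 as roughly one fluctuation above the 6% baseline):

    # Compound the illustrative growth rates from the example above.
    rates = [0.065, 0.08, 0.14, 0.50, 1.50, 5.00]   # 2041-2046 in the example

    level = 1.0
    for r in rates:
        level *= 1 + r
    print(level)   # the world economy ends up roughly 30 times its 2040 size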

This is disturbing because human wages should fall quickly with the
falling price of machines. So while humans who owned shares in the
firms that made machines would get very rich, those whose only source
of income was their labor could die of starvation. And if people wait
to see the transition happen before they believe it is real, they
might not have time to arrange for other sources of income.

If we stand back from all the big events and innovations we have seen
in the last century and look at the overall world economic growth
rate, it seems surprisingly steady. All those events and innovations
contribute to growth, but have not much changed the overall growth
rate. From this, one might expect such steady growth to continue for a
long time.

Looking further back in time, however, we see that once in a while
something has changed the growth rate by enormous factors in a
relatively short time. We might do well to not ignore such a speeding
freight train until it actually hits us.

For more information see my papers:

[17]Long-Term Growth As A Sequence of Exponential Modes

[18]Economic Growth Given Machine Intelligence

[19]If Uploads Come First

This essay is original and was specifically prepared for publication
at Future Brief. A brief biography of Dr. Hanson can be found at our
main [20]Commentary page. Other essays written by Dr. Hanson can be
found at his [21]web site. Other websites are welcome to link to this
essay, with proper credit given to Future Brief and Dr. Hanson. This
page will remain posted on the Internet indefinitely at this web
address to provide a stable page for those linking to it.

References

15. http://www.futurebri...RobinHanson.pdf
16. http://www.futurebri...om/robinbio.asp
17. http://hanson.gmu.edu/longgrow.pdf
18. http://hanson.gmu.edu/aigrow.pdf
19. http://hanson.gmu.edu/uploads.html
20. http://www.futurebri.../commentary.asp
21. http://hanson.gmu.edu/vita.html
22. http://www.futurebri...RobinHanson.pdf
23. http://www.futurebrief.com/brief.asp



