Controlling Runaway Nanotech



#1 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 14 April 2003 - 02:20 PM


Controlling Runaway Nanotech
Immortality Institute Online Chat :: Sun. Apr 20th 2003
Location: Cyberspace - http://www.imminst.org/chat

On Apr 20th 2003 at 8:00 PM EST, the Immortality Institute will hold a moderated chat to discuss the recent congressional hearing that addressed public concerns about nanotechnology. Fears of potentially deadly nanotechnology voiced by computer scientist Bill Joy prompted the U.S. House Committee on Science to hold a hearing on April 9, 2003, to "examine the societal implications of nanotechnology and H.R. 766, the Nanotechnology Research and Development Act of 2002."

[Image: Kurzweil, Colvin, Winner, Peterson: beyond gray goo]

Article:
http://www.kurzweila...es/art0558.html

WEBCAST of the hearing:

http://app2.streampi...=20278&e=2406&

#2 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 18 April 2003 - 01:41 PM

Eric Drexler responds to Smalley concerning the feasibility of nanotech. The letter is an amazingly frank account of the potential harm that may come from functional nanotech. Please read:

http://www.imminst.o...&st=0#entry9129

#3 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 18 April 2003 - 01:50 PM

Sir Martin Rees, Britain's astronomer royal, has a new book, Our Final Hour, concerning the potential for misuse of biotech, nanotech, and computer technology. Related article:



Saving the universe by restricting research
Astrophysicist says technology has potential to annihilate

Keay Davidson, Chronicle Science Writer

History's worst technological catastrophes could kill millions or billions of people in this century, and to prevent them, society may need to consider restricting specific types of scientific research, a famed astrophysicist proposes in a new book.

The proposal by Sir Martin Rees, Britain's astronomer royal, is an unusually high-placed challenge to the scientific community's traditional belief in the value of research that is "pure," unrestricted and independent of public oversight.

Because of the growing sophistication and proliferation of biotechnology, computer technology and nanotechnology, civilization could be ravaged or destroyed by irrational or evil amateur scientists who operate alone or in small groups akin to the terrorists of Sept. 11, 2001, Rees warns in the book, "Our Final Hour," just published by Basic Books.

As a result, "I think the odds are no better than 50-50 that our present civilization on Earth will survive to the end of the present century," Rees says.

Doomsday books have appeared for centuries. But Rees' book is unique, and not only because of his fame as a Cambridge University professor who is not prone to making scary public statements. Rees is one of the world's leading authorities on black holes and the origins and evolution of the universe.


WHEN GOOD SCIENCE GOES BAD
Another reason is that the book defies the scientific community's long-standing taboo on suggestions that humanity might be better off by not exploring certain avenues of science.

Only rarely have such suggestions been taken seriously in the scientific community, and never permanently. The most famous instance is the so-called Asilomar agreement of the 1970s, in which molecular biologists meeting at the Asilomar Conference Center in Pacific Grove temporarily agreed to limit their research because of concerns about possible accidents that might damage the environment.

He considers, for example, speculation that scientists might invent micro-robots that could reproduce out of control and devour Earth's surface, or that physicists might accidentally generate black holes or "rips" in the space-time continuum that could destroy Earth.

"Some experiments could conceivably threaten the entire Earth," he writes. "How close to zero should the claimed risk be before such experiments are sanctioned?"


EXPERIMENTS CAN GO AWRY
As a case study of such "extreme risks," Rees cites a controversial project that began in 2000 at Brookhaven National Laboratory on Long Island. Physicists there have used a particle accelerator to try to create a "quark-gluon plasma," a soup of extremely hot, dense subatomic particles that mimic conditions of the "Big Bang" that spawned our cosmos 13.7 billion years ago.

Critics speculated that this high concentration of energy might have one of three undesirable results:

-- It could form a black hole -- an object with such immense gravitational pull that nothing could escape, not even light -- which would "suck in everything around it."

-- The quark particles might form a very compressed object called a strangelet, "far smaller than a single atom," that could "infect" surrounding matter and "transform the entire planet Earth into an inert hyperdense sphere about 100 meters across."

-- Space itself, an invisible froth of subatomic forces and short-lived particles, might undergo a "phase transition" like water molecules that freeze into ice. Such an event could "rip the fabric of space itself. The boundary of the new-style vacuum would spread like an expanding bubble," devouring Earth and, eventually, the entire universe beyond it.

Could such bizarre tragedies really happen? To reassure the residents of Long Island and critics beyond, Brookhaven physicists presented calculations indicating the answer was no. Indeed, independent evidence indicates that similar concentrations of energy occur naturally in the cosmos, because of the interaction of cosmic-ray particles, without tearing the fabric of space.

Although Rees finds such counterarguments "reassuring" and believes a catastrophe is "very, very improbable," he cautions that "we cannot be 100 percent sure what might actually happen."

Which triggers his core question: Even if the odds against such a cosmic disaster are vanishingly small -- one estimate is 1 in 50 million -- are the potential benefits of the experiment worth risking the worst-case outcome, namely the annihilation of Earth and the entire universe?

Speaking of science in general, he says: "No decision to go ahead with an experiment with a conceivable 'Doomsday downside' should be made unless the general public (or a representative group of them) is satisfied that the risk is below what they collectively regard as an acceptable threshold. It isn't good enough to make a slapdash estimate of even the tiniest risk of destroying the world."

Should financial support "be withdrawn from a line of 'pure' research, even if it is undeniably interesting, if there is reason to expect that the outcome will be misused? I think it should, especially since the present (funding) allocation among different sciences is itself the outcome of a complicated 'tension' between extraneous factors."


FEAR OF ROBOTS TAKING OVER
Rees also entertains the creepier risks of nanotechnology, one goal of which is the construction of super-small robots that replicate like viruses. "Nanobots" might have useful purposes -- for example, patrolling the body for cancer cells. But some, including two Bay Area figures, nanotech guru Eric Drexler and Bill Joy, chief scientist at Sun Microsystems, have speculated that they might race out of control, devouring all matter and reducing Earth's surface to a "gray goo."

Rees waffles on the question of whether such a weird threat is possible. "After 2020," he cautions, "nanobots could be a reality; indeed, so many people may try to make nanoreplicators that the chance of one attempt triggering disaster would become substantial. It is easier to conceive of extra threats than of effective antidotes."

Rees admits there are no easy answers to the futuristic crises he depicts. Restrictions on research could backfire, for example: "The same techniques that could lead to voracious 'nanobots' might also be needed to create the nanotech analogue of vaccines that could immunize against them," he writes.

"I wouldn't characterize myself as being unrelievedly gloomy," Rees said in a recent phone interview. "It's just that the more I have followed science and its potential, the more I have been aware of both the exciting hopes and the unintended downsides."

E-mail Keay Davidson at kdavidson@sfchronicle.com
http://sfgate.com/cg...14/MN255128.DTL
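
A back-of-the-envelope reading of Rees's threshold question: the Python sketch below treats the article's 1-in-50-million worst-case estimate as a plain expected-value problem. The world-population figure is an illustrative assumption, not a number from the article, and the calculation deliberately ignores Rees's deeper point that unbounded stakes defeat any finite benefit.

    # Expected-value sketch of the Brookhaven risk debate (illustrative only).
    p_catastrophe = 1 / 50_000_000   # the article's cited worst-case odds
    people_at_risk = 6.3e9           # approx. world population in 2003 (assumed)

    expected_deaths = p_catastrophe * people_at_risk
    print(f"expected deaths: {expected_deaths:.0f}")   # ~126

    # Read this way, a "vanishingly small" chance of killing everyone carries
    # the same expected toll as a disaster that kills ~126 people outright --
    # and if the loss is treated as unbounded (all future life), no finite
    # experimental benefit can outweigh it.

This is the arithmetic behind Rees's insistence that the public, not the experimenters, should set the acceptable-risk threshold.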

#4 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 21 April 2003 - 01:17 PM

CHAT ARCHIVE

<BJKlein> Official Chat Starts Now:
<BJKlein> Topic: Controlling Runaway Nanotech
<OmniDo> The problem has been recognized ever since Drexler and Merkle began to envision the potential gains of the tech.
<OmniDo> There are only 3 possibilities that could produce those results.
<OmniDo> 1) The nanotechnology, specifically the self-replicating assemblers, is not kept under control, and the environment in which they are designed provides more than ample material and energy for them to replicate indefinitely. This is something that the designers would be fools to allow from the beginning.
<OmniDo> 2) The nanotechnology is deliberately designed to multiply out of control, e.g. as a terrorist weapon, and is deliberately released into an environment where its replication can do harm.
<OmniDo> Or 3) The nanotechnology is programmed with sufficient artificial intelligence that they, as a collective, reach the conclusion, either deliberately or by chance, that they should multiply without stopping. Nature exhibits similar characteristics.
<Nus> I remember that the consensus among transhumanists was that intentional nanotech disasters are much more likely/dangerous than unintentional ones; is that still generally agreed on?
<BJKlein> Nus, yes
<OmniDo> Now while the prospect of "controlling" runaway nanotech might seem improbable, it is possible, just as we can control runaway viruses and bacteria, given the right environment and tools.
<MichaelA> It's unlikely that biovorous nanomachines would be released into a biological environment accidentally; an intentional release seems more dangerous because it is more likely
<OmniDo> But Nus is correct in that we shouldn't reach that result to begin with.
<Utnapishtim> I do worry about option no. 2. It seems that we are sharing this world with an abundance of people that don't like life very much. There are factions of humanity that are growing increasingly alienated. What do we do if they decide they want to end the whole show and pull us down into their suicidal death wish?
<OmniDo> <{(MichaelA)}> I agree. There is also negligence that could result in such an "accidental" occurrence.
<Nus> Utnap: we die? :/
<Utnapishtim> Nus: How do we prevent this scenario from unfolding in the first place?
<BJKlein> Utnapishtim, that's the key.. a small group gaining access
<Utnapishtim> It seems to me by far the most immediately dangerous of the three scenarios outlined
<OmniDo> <{(Utnapishtim)}> This is a result of the vast and ever increasing pace of changes to humanity and its lifestyle, thanks to technology and diversity. If we are all to cope, we must all endeavor to achieve a common goal, or to share in a commonality, to prevent alienation and rivalry, which would lead to drastic consequences.
<BJKlein> so this looks like a problem with only one real solution... controlling access
<MichaelA> I'm worried about groups finding security workarounds in environments where nanotech already exists
<Utnapishtim> Well there will ALWAYS be someone who doesn't want to play nicely but wants to destroy instead
<Utnapishtim> mindlessly
<MichaelA> People talk about garage nanotech, but governments will pour trillions into it before anything tangible is developed
<OmniDo> <{(Utnapishtim)}> Indeed, but such people with relevant resources, prestige, or control with which to make a significant impact are few.
<BJKlein> previous models are nuclear and biological weapons..
<OmniDo> <{(MichaelA)}> True, but remember, Edison was a "garage" scientist, and he accomplished a great deal on his own. So true, in the right hands with the right mind(s), private researchers could equal if not surpass the governments.
<Utnapishtim> How long can we keep nuclear and biological weapons out of the hands of nuts? We have done pretty well at this when you consider that nukes have been around for almost 60 years
<Nus> OmniDo, that was a long time ago, when garage science still worked
<Nus> science nowadays takes much, much greater resources
<MichaelA> I think Edison lived in an age where that was far easier, can you name any more recent scientists that achieved substantial progress in areas without the help of collaborators and money?
<OmniDo> <{(Nus)}> Well, we won't debate the success/failure of garage science in the modern world, but that could be a very good topic for another thread.
<MrDebt> nus i agree with you
<MichaelA> I think it has a lot to do with the topic at hand, but we can move on to another subtopic
<MichaelA> Possible solutions
<BJKlein> Nus, it's not quite that simple.. look at the web.. free access to info..
<MichaelA> Has anyone here checked out www.responsiblenanotechnology.org?
<Utnapishtim> I agree with Nus: The prerequisite knowledge base has simply grown too broad
<OmniDo> In the end, it is a question of information, knowledge, and skill(s), more so than it is of resources and funding. Although the materials and energy are required, it doesn't take a $10 million laboratory to invent nanotech..
<Utnapishtim> Once upon a time significant contributions could be made by the exceptionally bright nonspecialist. That time is over
<OmniDo> <{(Utnapishtim)}> I disagree, but again, that is a topic for another thread.
<BJKlein> but the real problem will happen after functional nanotech is created...
<MichaelA> I still think the danger of exploiting dangerous loopholes in a society where nanotech already exists is a greater danger than reckless individuals being the first to invent it
<BJKlein> and it gets out..
<MichaelA> Yeah, what do you do when it is already everywhere?
<urgen> funny how the very society that bears innovation as its standard is now interpreting that very same necessary escalation as a potential threat... does this mean maturity for our young nation?
<MichaelA> There are several options, the semicentralized "nanofactory" approach espoused on crnano.org
<OmniDo> <{(BJKlein)}> If we are to assume that scenario does take place, our scientists will have a very short time to control it, depending on the complexity and advanced nature of whatever "Gets out"
<BJKlein> let's consider this solution...
<MichaelA> Urgen, immune systems giving us safe nanotech requires just as much innovation as inventing nanotech in the first place, if not far more
<BJKlein> in tandem.. create a human control device..
<BJKlein> something that will short stop us from using it to harm each other..
<Utnapishtim> We are staring at the dark side of the democratisation of knowledge and opportunity that has taken place during the last century
<OmniDo> There are only so many ways to "protect" oneself from nanotech, and no doubt, all those will be explored and put into action. Let us just hope they are put into action within sufficient time before it would be "too late"
<MichaelA> The ideal method is bestowing nanotech with the same common sense that humanity has, but that method is a bit more complex and philosophically sticky than "immunity systems" or whatever
<Utnapishtim> BJklein: What would you consider to be a human control device?
<BJKlein> for instance, a truth ring.. a lie detector test..
<Utnapishtim> BJ: Not happening
<BJKlein> or some type of mind control device
<Utnapishtim> Not happening either
<BJKlein> just to stop us from forms of aggression
<BJKlein> we already prescribe medication to do it..
<Utnapishtim> we are culturally incapable of making the leap toward those kind of security measures
<MichaelA> Ah, Bruce is talking about neurologically reengineering humans
<MichaelA> Yes, we are culturally incapable
<MichaelA> And I'm not sure if it would be ethical to make those modifications mandatory
<OmniDo> <{(MichaelA)}> I refer to a counter-technology that is designed to function similar to "bug spray". One example would be the use of fields of a specific design to cause nanomachines to break down with sufficient exposure. A high-radiation field would accomplish this, and that is one method of "protection". However, that in itself presents its own dangers as well.
<BJKlein> ethics vs oblivion ?
<MichaelA> BJ, instead of forcing everyone to undergo surgery, wouldn't it be a better option to have nanotech controlled by sentiences with their own positive, human-compatible senses of morality?
<OmniDo> We need to also remember that Nanomachines are not invincible. They are subject to destruction much the same way as any material object or machine, they merely function on a smaller, faster, and more efficient scale. They are still prone to obliteration, it merely requires more specific and higher energy systems to do so.
<BJKlein> of course.. but what will be first?
<MichaelA> Actually, saying that they would "control" it is misleading; humans are supposed to control it
<BJKlein> nanotech or ai
<MichaelA> I hope AI, but nanotech may actually come first
<MichaelA> It would take a long time to seek out and perform surgery on everyone, keep in mind that there will be ~8 billion people by that time
<Utnapishtim> There is absolutely NO WAY you could enforce stuff like this on the american people
<MichaelA> And I really think that abandoning ethics gets us into trouble
<MichaelA> You're right that humanity only has so much intelligence to deal with these complex situations, though
<OmniDo> <{(Utnapishtim)}> Unless the American people could be given a choice in the option, and also be presented with appealing "gains" as a result of such modification
<BJKlein> not surgery.. but devices that could be mandated, like a watch or bracelet, that detected aggressive behavior and changed behavior....
<Nus> OmniDo, then the ones who will abuse nanotech will choose not to have the neurological modification :/
<OmniDo> If the gains outweigh the losses, it would be decided by majority vote or by deliberate initiative with or without American opinion, such as by the Bush Administration.
<MichaelA> An interim solution might be to make nanotech more like conventional technology, holding back universal nanomachines from the public and making people pay for designs that are certified and safe
<OmniDo> <{(Nus)}> Exactly. It is a double edged sword, unfortunately.
<OmniDo> I still advocate complete separation of scientific research from the state, but such will never happen
<Utnapishtim> BJ: A few major world capitals would have to be destroyed before people would even consider imposing the kind of restrictions on personal freedom you suggest, at least in the western world
<OmniDo> Separation of church and state can be maintained only because the church has no advantage over the government. Hence, the government "allows" it to happen, because it is of no importance.
<MichaelA> Interesting proposition, Omni
<OmniDo> But separation of scientific research and state would present far too much apprehension on the part of the state, due to the advancements that could be made by the research community.
<MichaelA> I agree that it will never happen, but it's an interesting notion nonetheless
<MichaelA> The government would still offer money to the scientists, and would control the community by default
<MichaelA> And the government would still buy its own scientists and control them
<OmniDo> The state POSSESSES NO CONCERN for any possible degree of curiosity or benevolence on the part of the researchers. Regardless of their intentions, it will ALWAYS force control over such endeavors, whether the researchers are aware of its efforts or not.
<urgen> so you guys solved it already? that this is just a political ruse similar to the war on terror, drugs, capital challenge....
<OmniDo> So, alienation or relocation of the potential researchers is the only way to free oneself from the state and its corrupt deliberations.
<MichaelA> A lot of this has to do with the left's propensity to demonize politicians
<MichaelA> Urgen, I agree, this is just a distraction
<MichaelA> Researchers can be corrupt too, the potential to be corrupt is within us all
<OmniDo> <{(MichaelA)}> To assume that the politicians are not as such is to assume that there exists no desire for control within such an organized system on the basis of benevolence. Since benevolence doesn't make money or generate power, then neither would such politicians.
<MichaelA> It's just the overarching system which discourages or encourages corruption more or less
<MichaelA> Anyway, back to nanotech, what's your solution, Omni?
<BJKlein> Scenario: 19 million people are killed by a runaway nanovirus on the island of Sri Lanka... only stopped by water.... after this tragic event.. the UN mandates everyone wear a patch that indicates if they have nano skills or not.. and these people are forced to move to a secure location near the south pole
<OmniDo> I can only offer possible alternatives, but all are risky.
<MichaelA> Do you think that nanotech in general is a dangerous or risky technology?
* MichaelA does, when humans are the only ones making the choices
<OmniDo> <{(BJKlein)}> Well see, either way, people will have to DIE before they understand the value or the "risk". If the government(s) want it to be implemented and the people disagree, then the governments will simply create a scenario where the arguments demonstrate themselves to their favor.
* BJKlein agreed
<Utnapishtim> BJ: Then people barricade themselves in their homes.. You wanna try and patch me mofo? That would be IMPOSSIBLE to pull on any free people who aren't accustomed to state oppression
<BJKlein> unless there's a kick ass movie
<OmniDo> <{(BJKlein)}> Heh
<MichaelA> I don't see how a patch can control the behaviors of people
<BJKlein> Ex: Armageddon
<MichaelA> Why don't you just "patch" the nanotech you distribute? Or are we assuming that determined people will always be able to use nanotech for malevolent purposes, no matter how well it is engineered?
<OmniDo> This much is certain. Once the technology has been PROVEN to be valid, it will arouse mass hysteria and idealism as well as fear from the potential prospects. People everywhere will learn of what nanotech can bring in terms of gain, and in terms of risk/loss. It will become a huge deliberation, far beyond a mere text chat room discussion.
<Utnapishtim> Much of that discussion will be far less rational than the one we are having
<MichaelA> I wonder if politicians chat in chat rooms nowadays
<OmniDo> <{(MichaelA)}> For every problem there is a solution. Every program that has predesigned copyright protection, can be hacked, cracked, and distributed. The only way would be to design the nanotech with an irreversible process by the engineers themselves.
* Nus doubts it
<OmniDo> They will never do that, for that would put even them at risk.
<Utnapishtim> It will use such tiresome frames of reference as nationalism Allah Jesus and 'nature'
<BJKlein> yeh.. probably 3 to 5% do
* Nus remembers images of the Dutch prime minister picking up a computer mouse and using it as a remote control, hee hee
* MichaelA laughs
<MichaelA> maybe in a few years they will, though
<OmniDo> <{(MichaelA)}> but even if the nanotech itself was designed with an unbreakable protection, that does not destroy the original tools used to create that tech. As long as those tools remain, new "versions" can be made without such restrictions.
<MichaelA> I worry that the complexity involved in nanotech will make "irreversible" safeguards impossible, even in principle
<OmniDo> <{(MichaelA)}> I agree.
<MichaelA> Or products could be used on one another to release these safeguards
<BJKlein> MichaelA, this is the reason why I think we have to 'fix' the human..
<MichaelA> I'm okay with humans around, as long as they don't have the power to harm others
<BJKlein> guns don't kill people.. people kill people
<MichaelA> Nanotech is not a cool technology for humans to have
<MichaelA> (Alone)
<MichaelA> We need supervision, IMO
<Utnapishtim> BJ: And who is going to 'fix' the human. Who the hell could enforce something like this on a free people?
<MrDebt> u r wrong imo ma
<BJKlein> you pay taxes no?
<OmniDo> It truly requires an objectively honest, benevolent, and compassionate power, or body of people, to invent and develop this technology without any apprehension on the part of those using it for positive gains. Since that type of ruling administration has never existed, or never been able to prove themselves as such to ALL, it will always remain a concern.
<MichaelA> I wouldn't trust Gandhi with nanotech; he could make a mistake
<MichaelA> OmniDo, you are absolutely right
<MrDebt> ma would you be happier living in the stone age?
<OmniDo> Hence people will ALWAYS refrain from trusting ANYONE with it.
<MichaelA> Only when a power or body like that can prove itself to EVERYONE will we have reached a positive point
<MrDebt> you could say tech is dangerous at any level of it.
<BJKlein> if the Sri Lanka scenario played out.. people would be very interested in a way to control other humans..
<MichaelA> MrDebt, yes, but nanotech is uncharacteristically dangerous
<MrDebt> no
<MichaelA> Humans controlling humans is almost as bad as humans controlling nanotech
<Utnapishtim> The American people are culturally INCAPABLE of accepting the restrictions you suggest, regardless of what occurs in Sri Lanka
<MichaelA> I agree with Uth
<OmniDo> My solution would be to gather, screen, and develop such a working administration for the development of such a technology, and then further separate that group from the governments of the world.
<MichaelA> OmniDo, I completely agree
<OmniDo> It would be extremely difficult, but if it could be accomplished, then humanity would gain from it undoubtedly.
<MichaelA> But I fear that creating such a human organization would be impossible
<Utnapishtim> Omni: I'll second that
<MichaelA> Politically impossible
<OmniDo> <{(MichaelA)}> Politically, yes. Privately, maybe. But I'd rather it be within a private group than within a political group. That's just my opinion.
<MichaelA> Omni, how about building a machine that generates moral statements that all of humanity agrees with?
<OmniDo> <{(MichaelA)}> That itself would be a very difficult endeavor.
<Utnapishtim> all of humanity or every human society?
<urgen> why?
<urgen> I don't think it would be so difficult at all
<MichaelA> I agree, but it would be more ideal than an all-too-human body
<MichaelA> Humanity has a huge bedrock of shared complexity
<OmniDo> The machine still needs to be invented, and who are the inventors? The same questions would apply no matter what the equation.
<BJKlein> but, see what we're doing... we're trying to stay one step ahead of each successively more powerful technology.. and we're adapted to hunt woolly mammoths?
<MichaelA> The variance we see all occurs within a very narrow band
<OmniDo> The only way to reach a solution that is not removed, is to involve EVERYONE. And that, also, is difficult.
<MichaelA> I don't think it's the same question, Omni; the inventors would need to build the machine with nothing but complete programmer-independence in mind
<OmniDo> If we are to be ruled by ourselves, then all must have a say.
<MichaelA> How about we scanned the brains of everybody and composed decisions based on what we anticipate everybody would like the most?
<BJKlein> yikes
<OmniDo> <{(MichaelA)}> But building such a machine like that would still entail a risk: That either the programmers themselves were benevolent, or that the only possible result from the machine would be benevolent. Either way, we are still taking a risk.
<MichaelA> An easier way to do that would be to have a "generic human model" and extrapolate decisions based on that
<MichaelA> Would creating a human, international governing body not entail equal risk?
<OmniDo> <{(MichaelA)}> Yes. There will always be risks as such.
<MichaelA> In both scenarios there are risk
<OmniDo> Unless we have some benevolent alien visitors who decided to expedite our evolution, heh
<OmniDo> That is improbable though.
<Utnapishtim> EVEN IF your machine is making decisions in the best interests of everybody and this was self-evident, people still won't be happy. They will feel threatened and emasculated
<MichaelA> How about we engineer such an alien based on what we know about the cognitive correlates of morality and benevolence?
<MichaelA> The machine would need to be able to take stuff like that into account, Uth
<OmniDo> To return power to the people is to Hope that they all reach some agreement. To remove power from the people is to place them within fear, hatred, and distrust. As I said, a gamble either way.
<MichaelA> Not make decisions for people who feel threatened
<urgen> you come full circle.. supposing the 'runaway' will be seen as a political ruse, but gave it the benefit of the doubt and attempted a sound solution. Exploration of the solution suggests a necessary increase in understanding the principles involved in establishing 'runaway' systems. We build machines all the time that are intended to increase efficiency; indeed the battle of increasing sophistication is what is generating this need
<OmniDo> And due to technology, more and more people have greater and greater access to information and knowledge. Thus, an ever increasing population of semi-educated people are beginning to recognize the possibilities.
<MichaelA> I want power to lie with the wisest, most benevolent entity around, whether that entity is a human or a nonhuman, and humans may have to do for the short term, which is why I take Omni's idea seriously
<urgen> it *has* to be possible
<urgen> it is proving itself both ways
<MichaelA> Urgen, what we want here is not "efficiency", but a complex set of heuristics and safeguards such that no one is capable of using a nanotechnological device to harm others, while other freedoms are maintained
<OmniDo> If such a body of people were to congregate, then they must also provide to the outside public, developments that yield near-universally positive results. If they do not, then people will question the intentions of such a group, and fear them just as much or more than they would fear a terrorist or cult organization.
<urgen> eh, back to semantics :-)
<Utnapishtim> Omni: Would such a body of people require their own nuclear deterrent?
<OmniDo> The question is: Are politicians of today's modern world deserving or capable of being such an administration? They certainly are not the authority of knowledge on the subject.
<MichaelA> Omni, I think a group like that is cognitively impossible, given only human puzzle pieces to build it with
<MichaelA> Raising humans from birth for that purpose would be better than grabbing humans from our present world; but still not good enough
<OmniDo> <{(Utnapishtim)}> If we presume that this body of people could be collected from all the major nations of the world, then they would essentially be benefiting all their originating nations. Since their goal is to work together, then they should be respected.
<MichaelA> Reengineering the brains of humans on a surface level would be better, but still not good enough
<Utnapishtim> Omni: What about the minor marginalised nations?
<OmniDo> <{(Utnapishtim)}> If such a nation had people with whom could represent their population and social structure, AND possess sufficient education to provide useful contribution, then yes.
<Utnapishtim> Omni: What if they can't make a useful contribution but feel emasculated and angry at being excluded?
<OmniDo> Basically, we are looking for transhumanists and immortalists who have benevolence in mind for the human race, who come from all corners of the world, and who express a genuine desire to work together to accomplish the greater "Good".
<MichaelA> It seems that their goals would eventually include modifying themselves on the cognitive level for benevolence, or building something that can
<OmniDo> <{(Utnapishtim)}> Their "exclusion" is not one for maintaining their presupposed "inferiority", as their nations will also benefit from the developments of the researchers.
<OmniDo> To demonstrate benevolence that isn't reserved, ALL must benefit. From the richest nations to the poorest and weakest.
<Utnapishtim> doesn't matter whether they will ultimately benefit. Their pride is hurt. They are feeling sidelined
<OmniDo> Such a demonstration will serve to gain favor in the eyes of the politicians of the world. As long as that is all this body of people endeavor to do, they still leave the "governing" to the politicians.
<MichaelA> And what we're talking about here is a universal solution (or at least help) to ALL problems, not just nanotech
<OmniDo> <{(Utnapishtim)}> You cannot please everyone all the time, immediately. But, you can endeavor to do as best you can. Until greater advances are accomplished, those who are disgruntled will have to remain as such.
<MichaelA> What if something happens in the middle of the night that requires an unbiased, benevolent decision in 2 milliseconds, or a major disaster happens?
<OmniDo> There is little that can be done to stop them, just as there is little that can be done to stop a baby from crying, regardless of how many toys you give it.
<MichaelA> I think that's just refusing to confront the problem, Omni
<OmniDo> <{(MichaelA)}> I highly doubt that would be the case.
<MichaelA> In nanotech, it could be
<OmniDo> <{(MichaelA)}> Not at all, I'm just endeavoring to make the best of my current limited mortal ability.
<MichaelA> Technologies might not care about human-typical timescales
<OmniDo> <{(MichaelA)}> And the latter is also that Technologies might not care about humans at all.
<MichaelA> I'm saying that none of our mortal abilities will ever be enough
<MichaelA> I refuse to believe that technologies in general will not care about humans, because with fine-grained nanotech, I could synthesize a benevolent human from the ground up, and that could be called a "technology", but it could care about humans
<OmniDo> I would refrain from abiding by the counsel of a machine, insofar as we've already deliberated its design limits, and the lack thereof, as both advantages and disadvantages.
<Utnapishtim> I see no ceiling on the capacity of human beings to be petty and egotistical in a manner that is quite clearly against their own self interest.
<MichaelA> Humans are machines, mind you
<OmniDo> <{(MichaelA)}> But who will trust you? And who will trust your "synthetic human"? You have to consider that the people as a whole need to evolve as well, not just place the problem on someone else's shoulders with the hope that your efforts will go ideally. A collaboration is a must.
<MichaelA> We need machines as complex as humans but more benevolent, that treat all humans equally
<MichaelA> Omni, that is true, but as you say, there will always be people who fight for their right to stay human, and I want a safe world even if they choose that route
<OmniDo> <{(MichaelA)}> Then segregation is all you have left.
<MichaelA> If I make a machine that generates statements that everybody agrees with and trusts, then of course everyone would trust it
<OmniDo> Welcome Back Lazarus_Long. How nice of you to join us.
<OmniDo> : )
<Lazarus_Long> Good evening folks
<BJKlein> Welcome Back Lazarus_Long ;)
<Utnapishtim> Michael: I think there is a limit with what you can do with existing language structures
<MichaelA> Omni, natural cultural processes will eventually convince the stay-behinders to evolve, I just don't think it's moral to force them
<OmniDo> <{(MichaelA)}> No they wouldn't, because they didn't create it and had no part in it.
<MichaelA> What if they don't know who created it, and they stop caring because it delivers such a massive benefit to them?
<Lazarus_Long> Hiya Micheal Bj Omnido and Utna, and all the rest I will follow for a moment in ghost mode till I catch up to speed
<MichaelA> What if it's a sentient being like everyone else who takes the advice of others and customizes itself while in their presence?
<OmniDo> <{(MichaelA)}> There are really only 2 options. Either all collaborate together, or one singular group/person makes the endeavor alone. With the latter, all will be suspicious and mistrusting in the beginning. With the former, there is less room for apprehension, as the ideals of all were considered.
<MichaelA> I'm not arguing that it's physically possible, just saying that it would constitute a clean, complete win
<MichaelA> Then I guess we have things like electronic democracy
<OmniDo> <{(MichaelA)}> I recognize your logical idealism with regard to simple and "clean" systems, but as we all know, humans are not as such...yet. :)
<Lazarus_Long> Are you folks still discussing Runnaway Nano?
<urgen> ok, I'll try again.. this sounds like an attempt to discover the magic mitigation, and that everyone agrees on, but how, has not been settled. To quote Gregory Bateson, "Through varying combinations of negative and positive feedback within a system, minds can grind to a halt, be self-regulating, or spin wildly out of control on a runaway course." So the question is: How does feedback happen?
<MichaelA> Isolated, rapid events will require unbiased decisions that humans simply don't have the time to take
<MichaelA> I'm not arguing humans are simple or clean, just that something could be
<MichaelA> No "idealism" is logical, btw
<MichaelA> Laz, we are, we're discussing what it will take to stop it
<MichaelA> We've drifted away from engineering safeguards into talking about "what should be allowed to control it", instead
<OmniDo> <{(Lazarus_Long)}> Yes. And how could it be prevented? However, the discussion leads back to where it inevitably leads; who and how should the ideal be created, what is the "best" choice, etc..
<Lazarus_Long> It sounds like more sociology than tech is at issue right now
<OmniDo> <{(Lazarus_Long)}> Ultimately, that will always be the main concern.
<Lazarus_Long> What if I suggest it can't be stopped, only balanced "ecologically"?
<OmniDo> The how is far more simple. Implementing it with the who is what invites discord.
<urgen> so the discord is when feedback breaks down
<OmniDo> <{(Lazarus_Long)}> Then you would most likely be correct if that circumstance were to come about, given that the sociology problems were resolved before the tech got out of hand.
<Utnapishtim> Most people ULTIMATELY want to feel validated and important. This is an incredibly central human drive. It is this that allowed Napoleon to boast that he could make people give their lives for little pieces of ribbon
<Lazarus_Long> As in all forms of ecology it will provide a means of balancing itself but humanity as we understand it might be extinct by then.
<urgen> and you are saying this is human nature so deal with it??? we're doomed :-)
<OmniDo> <{(urgen)}> Or communication in general, with regard to truth and honesty.
<OmniDo> <{(Lazarus_Long)}> its a possibility, yes.
<urgen> regarding truth and honesty
<urgen> I agree
<Utnapishtim> People care more about their individual importance than about overall effectiveness
<Utnapishtim> so how do you impose any kind of control and still make people feel important
<OmniDo> <{(Utnapishtim)}> And that is part of the problem. Egoism and selfishness, as well as distrust and advantage/gain assessment.
<urgen> got it Utnapishtim
<OmniDo> The only way such a concern could be removed is if all were to be harmless, but all equally wanted for nothing. The utopian dream.
<Lazarus_Long> No I don't say such apocalyptic outcomes are inevitable, just that we are talking about "environmental issues," and as you are noting, people don't perceive the environment much past individual survival mindsets
<OmniDo> <{(Lazarus_Long)}> And people don't, for that matter. "How will this affect my family and my home? What are the advantages for ME and MY family?" etc.. is what people think.
<Lazarus_Long> What control can be imposed once the knowledge is plentiful?
<BJKlein> mini recap: We all pretty much agree nanotech will need some sort of control.. the question is of method: 3 Solutions So Far:
<BJKlein> 1. develop benevolent ai to walk us through
<BJKlein> 2. control the human (guns don't kill, humans kill)
<BJKlein> 3. develop nanotech by specialists in a controlled government or independent environment
<urgen> so the 'machine' requires participation to establish consensus
<Nus> I don't think nano-disaster is an environmental issue (any more than nuclear war is an environmental issue)
<BJKlein> any others?
<OmniDo> <{(BJKlein)}> I think that pretty much sums up our options.
<urgen> ya sounded like mine was #1 as long as that walking is also hand holding
<Lazarus_Long> Are you serious Nus? Or just being sarcastically flippant?
<OmniDo> I opt for #3, independent specialists.
<Lazarus_Long> I am not clear that any control is possible other than open access to the tech and detente between rival groups
<BJKlein> i'm for 1 and 3 but 2 is probably the only one that will work for the long term
<Lazarus_Long> AI and nano are coevolutionary
<OmniDo> <{(Nus)}> To give you an example: A self replicating nanoassembler can reduce the entire earth to "grey goo" within 48 hours, IF there were no opposition that could destroy/control them, and IF those assemblers could function in any environment on this planet.
<Nus> Lazarus_Long: I mean that I don't consider a disaster that wipes out humankind "environmental"
<Lazarus_Long> repression of the masses is not a concern, it won't prevent nanotech development, they aren't the principal groups developing the tech
<Nus> I didn't mean it would have no environmental consequences
<OmniDo> <{(Nus)}> All this from 1 single assembler.
<OmniDo> That far outweighs the prospects of nuclear war.
<Nus> Yes, agreed
<BJKlein> maybe we could have a license for nanotech..
<OmniDo> That is what is feared by the developers/politicians.
<Nus> I think we agree, I just phrased it weirdly
<BJKlein> you can't play around or work on it unless you pass a world body test..
<OmniDo> <{(BJKlein)}> Pointless. People would simply "ignore" the license, as software pirates ignore the copy protection.
<BJKlein> and a psycho test
<Lazarus_Long> repression will yield more weapons applications instead of positive applications that provide developmental integration and safeguards
<Lazarus_Long> The problem is that social repression makes weapons applications more likely.
<MrDebt> omni, how small is a nanoassembler?
<OmniDo> <{(Lazarus_Long)}> Indeed.
<OmniDo> <{(MrDebt)}> The ideal is between 1/50th and 1/80th the size of the average human cell.
<OmniDo> no larger than 1/10th to 1/40th
<MrDebt> Let's see ...
<Lazarus_Long> DNA is a nano assembler
<MrDebt> a fab costs a billion dollars now
<MrDebt> and micromachines would require fabs.
<MrDebt> No?
<Lazarus_Long> Only because of the approach being taken Mr.Debt
<OmniDo> <{(MrDebt)}> Depends on how they are constructed, but yes. Unless a private engineer had the skill to design/construct his/her own lab with less funding.
<BJKlein> what would be one of the first possible disasters from nanotech..
<Lazarus_Long> I can conceive of a small warehouse scale that can operate on hundreds of thousands instead of billions
<OmniDo> <{(BJKlein)}> The Grey Goo scenario is the worst.
<MrDebt> Grey goo is silly.
<Lazarus_Long> only if it isn't the objective
<BJKlein> the worst .. but what will be functionally possible the soonest?
<OmniDo> <{(BJKlein)}> The second would be human transmission, in which "Human goo" is the result, depending on the speed of the spread, etc..
<Lazarus_Long> I can conceive of using grey goo as a WMD counter threat to conventional NBC and doing it on the "cheap"
<OmniDo> But for the most part, their immediate threat assessment is no objectively "worse" than a contagious virus; they merely do so with far more speed and resiliency.
<Lazarus_Long> It directly fits into political Mutually Assured Destruction strategies
<BJKlein> so it seems replication and assemblers is all you need to fuck things up?
<OmniDo> <{(BJKlein)}> Basically, yes
<BJKlein> maybe 4. would be to develop counter replication tech..
<OmniDo> Those are the only "disasters" of sorts. This does not imply though that they cannot be used to deliberately engineer all other possible threats to humanity.
<OmniDo> Far, far better weapons. Far better defenses, etc...
<OmniDo> Anything plausible with material science is possible with nanotech.
<Lazarus_Long> How about just targeting paper products? Or electronic insulation? Or just the hydrocarbons in petroleum products? See what happens?
<OmniDo> As they say, nanotech is both the physicist's and the engineer's dream tool: Either can prove/disprove or create/destroy according to their imagination.
<OmniDo> <{(Lazarus_Long)}> Such as a mutual intent for the first assemblers? Thats a good idea.
<OmniDo> I don't know if the researchers themselves will limit their research to merely those aspects. K. Eric Drexler certainly won't.
<Lazarus_Long> I am saying that DARPA is already moving to take over nano for Security reasons and weapons applications will dominate funding
<OmniDo> <{(Lazarus_Long)}> Yes, but the greatest weapons won't be those that are projectiles from fabricated trigger devices.
<Lazarus_Long> this is the first major conflict of the coming New Age Arms race and this is what should get addressed politically
<Lazarus_Long> No, they will be DNA programmed hunter/killer pseudovirus assassin bots.
<OmniDo> The greatest weapons will be biological. Everything from cellular manipulation to mind control, to total biological control/manipulation.
<OmniDo> <{(Lazarus_Long)}> Exactly.
<BJKlein> all the more reason to transhumanize
<OmniDo> <{(BJKlein)}> All the more reason to "get out of here" and develop it somewhere in private.
<OmniDo> Thats my opinion anyway.
<MichaelA> All the more reason to build a Friendly AI, hehe
<Lazarus_Long> I agree somewhat there, it would be preferable to do nanotech on the moon, or better yet a geostationary or lunar orbital facility with rotational gravity at a significant percent of one gee.
<MichaelA> Is that what you meant, Omni?
<MichaelA> The moon?
<urgen> that's like the alternative roots that are being developed independent of ICANN
<OmniDo> <{(MichaelA_&_Lazarus_Long)}> That's a very good idea. In space, or out of the range of earth, would be the safest in terms of ecology.
<MichaelA> Antarctica could probably work just as well; doubt you could make very effective nanomachine reproduction there
<MichaelA> Unless the nanomachines were continually building their own fusion reactors, or whatever
<OmniDo> There are more than ample materials on the moon, and no one has anything to lose there since nothing there would be threatened.
<MichaelA> Which materials on the moon are ample?
<OmniDo> <{(MichaelA)}> Moon Rock has lots of Hydrogen, oxygen, silicon, and other resources.
<MichaelA> Biological material holds far more energy in the chemical bonds than nonbiomaterial does
<MichaelA> Okay, hydrogen
<Lazarus_Long> I think Antarctica is still too hazardous but I think there is a bottom up approach that also could provide a safety net
<MichaelA> But do you agree that it would take a more sophisticated kind of nanomachinery?
<MichaelA> Biovorous-ness would be easier to engineer
<OmniDo> <{(MichaelA)}> Primarily, effective nanomachines have to be capable of functioning far more efficiently than a cell, which means there are no chemical gains/losses from the production of energy needed to fuel them. Biological materials would be out of the question for the creation of all-environmental assemblers.
<Lazarus_Long> That is the safety net Omnido
<OmniDo> Exactly.
<Lazarus_Long> I think certain species can be genetically modified to do the assembly
<MichaelA> But since biological materials are so energy-rich, don't you think the ideal nanomachinery would make use of them as well as other materials?
<BJKlein> ok new list..
<BJKlein> 1. Benevolent AI to hold our hand
<BJKlein> 2. Transhumanize to avoid biohazards
<BJKlein> 3. Control human instincts with devices (guns don't kill, humans kill)
<BJKlein> 4. develop nanotech by specialists in a controlled government or independent environment (on the moon)
<BJKlein> 5. Develop counter nanotech to quell and control runaway nanotech in tandem
<OmniDo> You can build lots of materials that aren't biological out of biological materials, but building them to function like biology would be a mistake. They wouldn't be nearly as efficient or capable, unless that was their intent from the beginning.
<MichaelA> Wait, are you talking about nanomachines that use only electricity..?
<MichaelA> Right, of course
<MichaelA> But they could eat biology
<MichaelA> Not be it
<MichaelA> Biology is energy-rich whether you are a machine or a cell
<OmniDo> <{(BJKlein)}> Change #5 to: Develop counter measures to quell and control the runaway nanotech.
<MrDebt> biobots are gross
<OmniDo> <{(MichaelA)}> Correct.
<MichaelA> MrDebt, totally, I'm stuck in a big one
<BJKlein> thanks
<Lazarus_Long> Not if the "product" was self replicating, but required seeded conditions.
<MrDebt> they are called gastrobots actually
<MrDebt> I'd rather create a brainless human that is remote controlled
<MichaelA> sounds pretty useless and unaesthetic
<MichaelA> if you don't mind me saying so
<MrDebt> hehe
<OmniDo> At least you could turn it off with the press of a button.
<OmniDo> ; )
<OmniDo> <{(Lazarus_Long)}> Are you referring to a predesigned limit to its replication?
<Lazarus_Long> The "bottom line" for our government and most funding sources is controlling "profitability" for investors not safety per se
<BJKlein> hmm I wonder if we should put #6. do not try to halt nanotech
<OmniDo> If so, that is entirely feasible. The problem lies with its exclusivity, in that the originating "Seeder" could replicate indefinitely unless it was designed with a limit as well.
<urgen> definitely BJKlein, you have to have the "what me worry?" option
<OmniDo> Then you end up with a "culture" of nanomachines that you endeavor to keep alive within limits.
<BJKlein> so in effect we're in a foot race.. trying to stay one step ahead...
<OmniDo> It all comes down to the ultimate starting goal behind nanoresearch. What the first stepping stone should be, and what direction it is intended to traverse from there.
<OmniDo> <{(BJKlein)}> Essentially, yes
<OmniDo> If the goal is to merely create an assembler that can duplicate itself within a given environment, then outside of that environment the threat is nullified.
<urgen> where the race ends is inevitable, you are only trying to make it take longer
<OmniDo> Depending on the researchers, and how "bold" they decide to be with the first developments, will determine the ultimate "risks" involved.
<Lazarus_Long> Not "nullified" but at least reduced
<BJKlein> ok.. timeline..
<BJKlein> when should we expect the first assemblers
<urgen> history shows a pattern of announcements post-discovery
<urgen> surprises
<urgen> it has to be unexpected or it would be known, and if it were known we'd already have it
<urgen> there is already activity in the area
<OmniDo> <{(BJKlein)}> That is entirely speculative at the moment, but as Drexler said in his book: "The pessimist would say 10-20 years. The extreme pessimist would say 25-50 years, and the optimist would ask why it isn't already here."
<BJKlein> an interesting side question: "What happens to the monetary system when everyone is able to satisfy his own basic material needs at very low cost?"
<Lazarus_Long> DNA is a nanoassembler and it works now
<BJKlein> from www.crnano.org
<Lazarus_Long> it is a natural analogue people overlook just like the Sun which is a functioning fusion reactor in front of their eyes.
<OmniDo> <{(BJKlein)}> That is when the society undergoes its greatest duress, due to the final obsolescence of the need for money.
<OmniDo> However, until everyone has their own personal "replicator" and their own acre of land with which to live, money will merely take a different form.
<BJKlein> material bliss.. stagnation.. or burst of creativity and enjoyment..
<Magneto> But will governments allow the technology into the hands of the common citizen??
<OmniDo> It will revert back to material possession and control as money. Just as gold was used as cash for its aesthetic value, rocks, sand, dirt, air, and water will become the new "valued materials".
<Magneto> I can envision them doling it out.
<BJKlein> welcome Magneto
<Magneto> hi
<BJKlein> I doubt that governments will successfully hold such control....
<OmniDo> <{(Magneto)}> That is where the government faces its greatest opposition: from the citizens whom it was designed to govern. They will essentially lose a lot of their control, but if they are wise, they will attempt to ensure some degree of dependence.
<OmniDo> Controlled systems will no doubt be the result
<OmniDo> The government's greatest fear will be some reserved citizen or citizens who attempt to offer non-controlled systems to the whole of society, en masse.
<BJKlein> but, it may be our only chance to avoid oblivion..
<Magneto> The hackers of the future will be seen as serious terrorist threats on a level not seen now.
<OmniDo> Personally, I couldn't care less as long as I am immortal, and I know that my thoughts are my own, not some government-induced program that is dictating what I should think and how I should feel in accordance with their own design.
<OmniDo> If the aforementioned can be granted me and any other human, then I will be content for a great while.
<Lazarus_Long> One example of a dilemma application, is weather modification
<MichaelA> Magneto, agreed, and there will be many who try to hack the internal machinery of nanotech
<OmniDo> <{(Magneto)}> Exactly. The hackers of the future will desire freedom from all control, just as they hack programs and distribute software, so too will they endeavor to free themselves from government regulation.
<MichaelA> Nanotech, its functionality primarily being software-based
<Lazarus_Long> I can conceive of tech applications for simple nanobots that would allow an organized international consortium to control weather and thus food production
<OmniDo> <{(Lazarus_Long)}> There are numerous applications.
<MichaelA> Laz, the folks at the Center for Responsible Nanotechnology are predicting mature nanotech only months after assemblers, due to the bootstrapping dynamic
<Lazarus_Long> It just happens to be one that may be more practical then is understood generally
<MichaelA> I would expect nanomachines synthesizing food from raw materials only a few months after the first assembler, in addition to more "basic" stuff like weather control
<OmniDo> <{(MichaelA)}> I agree. They would first be used for the processing of foodstuffs, and then on to power and energy production, etc...
<Magneto> MichaelA: What is the view on nanotech (compared to extropians) at The Center for Responsible Nanotech?
<MichaelA> www.responsiblenanotech.org
<MichaelA> the folks who run it are transhumanists
<Magneto> thank you
<Lazarus_Long> I can envision methods for both toxic cleanup and low impact mining, as well as waste reduction, before foods
<BJKlein> just wish we had the nanoassembler to finish the paint work
<Magneto> lol
<OmniDo> <{(Lazarus_Long)}> Basically, to be honest, anything plausible is possible. From positive to negative, although the positive seems to have far more appeal in my mind.
<Lazarus_Long> foods will require a LOT of testing for safety AND politics to be generally accepted
<OmniDo> Let us hope that the wonder and awe inspires our greatest leaders and developments into the positive.
<MichaelA> Appeal has nothing to do with likelihood of occurrence, of course
<Lazarus_Long> BJ that may be closer than you think
<OmniDo> <{(MichaelA)}> True.
<OmniDo> <{(MichaelA)}> But thinking pessimistically rarely yields positive results.
<OmniDo> Notice I said "Rarely" not never. :)
<MichaelA> I find that I can delete the self-defeating emotions that sometimes come with maintaining a realist perspective
<Lazarus_Long> I like the nano particle Photocells that are starting to come out and the idea of a nanopaint that changes color at the touch of a switch is only months to a few years away
<MichaelA> Pessimism and optimistic are both non-realism
<MichaelA> optimism*
<Magneto> How about "practical optimism?"
<Magneto> lol
<OmniDo> <{(BJKlein)}> So, before this discussion strays too far off topic, are we just about finished? Or does anyone else have anything specific to add?
<MichaelA> Magneto, you are a true extropian, heh
<Lazarus_Long> Practical optimism is "necessity as the Mother of Invention"
<Magneto> I try.
<BJKlein> seems we're about done with the official chat
<MichaelA> non-official chat begins now?
<OmniDo> <{(BJKlein)}> Keep those points that you so kindly compiled for us, and perhaps we can put them on the forums?
<Magneto> Let the insanity begin!
<MichaelA> I think most people, even me, are secretly practically optimistic
<BJKlein> i'll post them into the chat topic
<OmniDo> They would make great reference. I'm sure you could splice out the better parts of the chat that reflected those points for a thread post.
<BJKlein> I will..
<OmniDo> : )
* OmniDo Applauds BJKlein and his efforts
<BJKlein> i'll see if I can create a mini article as well..
<Lazarus_Long> I think that there is too much confidence in what "controls" mean. I prefer open competition and peer review for nano
<Lazarus_Long> Who will watch the watchers?
<BJKlein> hmmm, OmniDo maybe you could add an intro paragraph concerning your most advocated solution
<Magneto> I think a future topic should be how we should work together to actively create the world we all so desperately want as transhumanists.
<OmniDo> I think of developing nanotech like gambling at the roulette table. To win, you have to wager huge amounts, but to win BIG, you have to risk losing it all.
<OmniDo> <{(BJKlein)}> I'd be happy to. Let me know where I should post it, and I'll compile one for you. :)
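
A quick sanity check of the grey-goo arithmetic discussed above (OmniDo's "48 hours from one assembler" claim and the 1/50th-of-a-cell size estimate), sketched in Python. The assembler mass and the cell diameter below are illustrative assumptions, not numbers from the chat.

    import math

    EARTH_MASS_KG = 5.97e24
    assembler_mass_kg = 1e-18                # assumed mass of one assembler
    cell_diameter_m = 20e-6                  # typical human cell diameter (assumed)
    assembler_size_m = cell_diameter_m / 50  # OmniDo's 1/50th figure

    # Doublings needed for one assembler's descendants to equal Earth's mass,
    # assuming unconstrained exponential replication.
    doublings = math.log2(EARTH_MASS_KG / assembler_mass_kg)   # ~142

    # Replication cycle time required to finish within 48 hours.
    required_cycle_s = 48 * 3600 / doublings
    print(f"assembler size: {assembler_size_m * 1e9:.0f} nm")   # ~400 nm
    print(f"doublings needed: {doublings:.0f}")                 # ~142
    print(f"required cycle: {required_cycle_s / 60:.0f} min")   # ~20 min

Under these assumptions the 48-hour figure implies roughly a 20-minute replication cycle, which is within the range Drexler has discussed; whether real assemblers could sustain that rate outside a resource-rich environment is exactly the "IF" OmniDo attaches to the scenario.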

Edited by OmniDo.

And I corrected my faux pas too.

Edited by Lazarus Long, 24 April 2003 - 12:26 AM.


#5 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 23 April 2003 - 02:22 PM

Summary and Official ImmInst Nanotech Statement Available Here:
http://www.imminst.o...=105&t=1114&st=

#6 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 24 April 2003 - 12:07 AM

OK I guess I should edit the grotesque typos I left in there too ;)



