
Accelerating Progress and the Potential Consequences of Smarter than Human Intelligence


13 replies to this topic

#1 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 07 July 2003 - 12:08 AM


Transvision 2003 - Conference Proposal - Cyber or Other Track
Accelerating Progress and the Potential Consequences of Smarter than Human Intelligence

by Michael Anissimov - ImmInst.org Co-Director

For several years, Artificial Intelligence wasn't on my conceptual radar - I was interested in more conventional futurist topics such as bio- and nanotechnology, which seemed closer and more feasible. The human brain is the most complex object in the universe - it seemed like matching its functionality would require the full force of mature nanotechnology and an army of genius programmers. The logical order of technologies seemed to imply that Artificial Intelligence would come later rather than sooner. Today I've changed my opinion. Why?

Technological and scientific progress is accelerating at an ever-increasing rate. Moore observed that the number of transistors on a chip doubled every 18 months. Today this doubling cycle is as short as 16 months, but who knows - this so-called "Law" could peter out, or stutter a bit, or jump paradigms from silicon chips to something else. AI enthusiasts are accused of counting too heavily on Moore's Law, and these accusations are at least partially true - exponential increase in computing power does not give us AI for free. Nevertheless, processor speed, the availability of RAM, and hard disk space are still increasing exponentially and do not currently show any signs of slowing down.
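
To make the doubling arithmetic concrete, here is a rough Python sketch; the 42-million transistor starting figure is just an assumed round number for a circa-2003 processor, and the projection ignores any paradigm shifts or slowdowns.

    # Illustrative only: project transistor counts under an 18-month
    # versus a 16-month doubling period.
    def projected_count(start_count, years, doubling_months):
        """Exponential growth: one doubling every `doubling_months` months."""
        doublings = (years * 12.0) / doubling_months
        return start_count * 2 ** doublings

    start = 42e6  # assumed transistor count of a circa-2003 processor
    for months in (18, 16):
        print(f"{months}-month doubling, 10 years out: "
              f"{projected_count(start, 10, months):.3g} transistors")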

Let's examine other areas where exponential change is taking place. Exponential change is not limited to the computing industry alone; it appears in a wide range of industries and fields, all mutually driving and supporting one another. Entrepreneur Ray Kurzweil is a famous futurist in the analysis of these trends, and is working on a third book on accelerating technological progress, "The Singularity is Near". Kurzweil has observed that when the human genome scan started fourteen years ago, critics pointed out that at the current speed of genome scanning, it would take thousands of years to finish the project. The 15-year project was actually finished slightly early, due in part to the increased availability of sequencing software and supercomputers. The field of nanotechnology received more funding last year than it did in the entire prior decade.

Some of these fields are accelerating so fast independently that workers in each field are missing out on the benefits that can come from synergy among their disciplines. But a growing group of scientists and technologists are noticing these opportunities, and now interdisciplinary studies and media articles praising the benefits of technological convergence are becoming ever more common. The National Science Foundation recently hosted a conference in Los Angeles called "NBIC Convergence", where NBIC stands for "nano-bio-info-cognitive", focusing on the convergence of these scientific revolutions and gathering science and industry leaders. But whether all these promises are smoke or true fire, there is one thing which neither these nor any other technologies have yet changed. What is this constant feature of our world?

Our species. Most importantly, our brains. Our brains are the substrate that underlies our minds and cultures. The routine that constructs a homo sapiens brain has not changed for fifty thousand years or more. As in all species, there is a band of genetic variance that depends on who your parents are, along with environmental factors like our level of technology and education. But compared to the space of all *physically possible* minds, human minds are extremely similar to one another. We all have brains that weigh about 3 lbs, with two hemispheres, and cerebral cortices about two millimeters in thickness, layered into six sections. This basic biophysical design holds constant across all of humanity throughout history. Neuroscientists have discovered that learning, experience, and conscious thought affect relatively superficial aspects of brain function and organization - they are the shuffling of submillimeter arrangements in tiny neural structures called dendrites.

What about our mental world? All physiologically normal humans have ten fingers, ten toes, two eyes, two ears, a mouth and teeth several centimeters long. Shouldn't we then expect that our minds will share panhuman traits in the same way that we share bodily characteristics? Evolution could not bear the cost of humans without inborn triggers ready to interrupt conscious thinking with survival reactions. In other words, evolution could not sustain organisms with so much self-control and deliberative power that they would intuitively make decisions that threatened their own reproductive viability. Mental traits evolved in the same way that bodily ones did - through natural selection and differential rates of reproduction. Mothers with the tendency to ignore their babies would end up with fewer offspring reaching puberty and having children of their own, which is why mothers have a natural urge to care for and love their babies. The way that mothers talk to their babies in a high tone, enunciating each word with euphoric emphasis, turns out to help children pick up language effectively. Most mothers aren't thinking deliberately about which pitch and tempo to use while talking to their babies; it just comes naturally, a cognitive feature humanity picked up during its evolutionary history.

The fact that present-day humans make decisions that contradict their evolutionary origins is the effect of contemporary culture and technology. Evolution takes millions of years to adapt organisms to new environments, but we humans have created our own independent environment and culture so rapidly that our decision-making capacity has drifted out of synchrony with its natural context. For example, humans do not instinctively avoid contraceptives because contraceptives were not around fifty thousand years ago. This perspective on cognition is called "evolutionary psychology", a growing academic subfield. Its researchers have isolated sets of automatic functions and dedicated preprocessing routines that human beings share as a species, such as incest avoidance, social contracts, coalition forming, pest avoidance, mating rituals, political deception, reciprocal altruism, and a common environmental aesthetic.

We clearly have a tendency to magnify our apparent differences: to form coalitions and moral preferences, to notice aspects of body shape or personality that differ among us, and so on. If we're all supposedly the same design, then why do we perceive all this diversity? Objectively, the vast bulk of complexity that goes into human DNA dictates what separates a human being from an amoeba; only around one-thousandth or less corresponds to the visible differences between human beings. Humans are designed by evolution to magnify these differences as a survival strategy. In an environment where resources were scarce and how popular you were could mean the difference between passing along ten copies of your genes or none, noticing tiny differences between other humans tended to matter a heck of a lot. Noticing interspecies differences would have been less useful because the main reproductive challenges to humans were other groups of competing humans. Environmental factors such as famine or storm may have killed a few hunter-gatherers and nudged the trajectory of evolution, but for hundreds of thousands of years the most adaptively relevant objects to humans have always been other humans. Humanlike metaphors, humanlike assumptions, and good upstanding humanlike behavior were all that mattered when it came to passing your genes into the future.

What does this mean for Artificial Intelligence? Humans tend to see things in human terms, a phenomenon known as anthropomorphism. When we begin to talk about intelligences *outside* of the space of human familiarity, built out of different materials, with different cognitive patterns that have been designed deliberately by programmers rather than blindly by evolution, we are confronting something profoundly foreign, far more foreign than we initially realize. By comparison to a *real* AI, HAL is just like a little man in a box flashing a spooky red light. In the space of all physically possible organism designs, we can visualize humanity as a tiny slice of a huge pie known as "intelligence". We don't know what it's like beyond this slice - we don't have names for these beings because no one has observed them. But what is really pretentious and anthropocentric is to assert that our slice is already the most complex kind that can possibly exist.

When a human points out an idea or design and calls it "astoundingly brilliant", in comparison to a poor idea or design, what they are really talking about are submillimeter, one-hundredth-of-a-percent differences in the brain chemistry or structure of the person who came up with it. Tiny differences in brain structure among humans can magnify themselves into multimillion-dollar differences in budgets, or finishing a project in one week or ten. For the most part we are all the same, but these tiny differences make up the whole of our daily reality in interacting with other people and our internal mental worlds. Someone will go to school for a decade simply in order to make a few microscopic changes in the brain patterns of themselves and their colleagues and contacts, but from the viewpoint of humans going about their daily lives, these tiny changes can mean so much.

Artificial Intelligence design will operate in a world where a small change in codebase can mean the difference between prodigy and idiocy, sanity and insanity, kindness and confusion. In the present day, however, so-called AIs are essentially glorified software programs that do not nearly approach human-level complexity; I object to using the label "AI" to describe systems which are clearly not intelligent. This is simply a marketing tactic for hype-promotion. We think of these supposed AIs as tools, and that is what they are. Real AI, AI that meets our intuitive definition of "intelligent", AI that has a complex subjective world and the tendency to pursue tangible goals and care about them, simply doesn't exist yet. However, it is a goal worth considering. There are credible groups out there, such as Peter Voss's A2I2 project, Ben Goertzel's Novamente project, and the Singularity Institute for Artificial Intelligence, that are pioneering this emerging field, dubbed "Artificial General Intelligence", or AGI for short. The idea of a truly intelligent AI raises many questions, but let me set aside the moral issues for later, and examine the technical feasibility issue alone for now.

I mentioned before that I thought AI would come sooner rather than later, so it's about time I said why. Since human beings are currently the only genuine intelligences we are aware of, real AGI designs will certainly be inspired by them, although a complete copying process would be overly exhaustive and unnecessary. A modern day electrical engineer can look at a device made 80 years ago, out of vacuum tubes, and recapitulate its functionality in a new device thousands of times smaller and less expensive, with internal algorithms optimized for taking advantage of their new-found substrate. Human minds run on neurons that conduct computations around 200 times per second. An AI mind would run on transistors or logic gates millions or billions of times as fast, lifting all the design constraints of the 200Hz clockspeed of human neurons. Human beings possess internal cognitive hardware specialized for myriad purposes that is essentially unalterable; in an AI, all the computational elements and functions can be reconfigured and analyzed, opening up another degree of freedom for designers and for the AI itself. Given what we know about the evolutionary process relative to the process of intelligent design, it seems highly probable that designing a functional AI would be far easier than knowing every minute detail of how the human brain works and painstakingly duplicating these details in code.
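
Taking the figures in that paragraph at face value (and with the caveat, raised later in this thread, that spike rate is a crude measure of neuronal processing), a back-of-the-envelope sketch of the speed gap might look like this; the 2 GHz clock rate is an assumed figure for 2003-era silicon, not a claim from the post.

    # Illustrative only: compare the ~200 Hz neuron figure from the post
    # with an assumed 2 GHz transistor clock rate.
    neuron_rate_hz = 200.0        # characteristic firing rate cited above
    transistor_rate_hz = 2e9      # assumed clock rate of 2003-era silicon

    ratio = transistor_rate_hz / neuron_rate_hz
    print(f"serial speed ratio: {ratio:.0e}x")  # roughly ten million to one

    # At that ratio, a subjective "day" of thought would pass in a fraction
    # of a second of wall-clock time.
    print(f"wall-clock seconds per subjective day: {24 * 3600 / ratio:.3f}")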

For one thing, biology and evolution are both outstandingly messy and inefficient. Our minds are a set of fortunate mistakes and approximations to idealized intelligence that worked barely well enough to pass their genes onto the next generation and perpetuate themselves. Brains had to evolve layer by layer, so by the time human intelligence came around, it had to manifest in a container completely loaded with outdated tools. Our modern neocortex - the brain section that truly makes us human - evolved in the absence of any special assistance or pre-preparation; it had to work with what was already there, the more primitive primate brain. Before that, the primate brain had to make use of smaller and simpler brains, all the way down to the beginnings of nervous systems. Contrary to legend, we do use our entire brains; the cost in energy and nutrients required to keep a full brain functioning would simply be too high a price for evolution to pay if the entire thing were not being used. Evolution designs organisms in a fantastically incremental fashion - if a given mutation does not confer an adaptive benefit persistently and quickly, it will never live into the future.

The main point that I'm trying to make is that the brain is a very complex object, the most complex object in the universe that we know of, but its complexity and power are not as extreme as many intuitively believe. We may have around 100 billion neurons in our brains, but big numbers do not necessarily entail massive complexity... neurons seem more like incidental tools evolution needed to use - specialized cells - rather than unique vehicles for intelligence that could not have been constructed any other way. The human brain is just a slightly upgraded version of a chimp brain, and while it is undeniable that some threshold was crossed when homo sapiens emerged, our brains still share a fantastic amount of similarity, in terms of functionality and organizational principles, with our primate cousins.

Scientists have zeroed in on the mechanisms of memory and learning, emotion and spatial orientation, and even clues to the neurological correlates of consciousness and moral decision-making ability. The brain's functioning is far from opaque to us; we have fMRI scanning, PET scanning, sophisticated neurocomputational modeling, and many thousands of clever experiments precise enough to answer questions like "which neuronal groups are activated during a chord or pitch change in an emotionally stirring musical piece?"

Human brains are modular, composed of domain-specific mechanisms for confronting challenges our ancestors faced. Brain functioning has to be domain-specific in part because a jack of all trades is an ace at none, and evolutionary arms races demand that organisms specialize to their niches. As stated earlier, evolution can only design things incrementally and is largely incapable of synthesizing compatible functions elegantly into more general problem-solvers. A common objection is that since evolution took so many billions of years to evolve humans, it will take engineers longer than a few decades or centuries to match it. This argument seems clever on the surface, but the objectors should note that the task is not to copy all of humanity's unique complexity, but to create a mind with the bare essentials for intelligent learning and self-improvement capacity.

Regardless of whether AI is created in 10 years or 100, we have to ask ourselves what will happen when it finally arrives. I use the phrase "human-similar" to describe AIs of roughly human capacity for innovation and intelligent thought, rather than human-equivalent, because I think the phrase "human equivalent" implies that these AIs will be just like humans. In pursuing the goal of human-similar AI, AI projects have devised a paradigm known as "seed AI", an AI explicitly created for self-improvement capacity. At first, self-improvement might take place on a very low level, and the AI's mind would simply serve as a slightly better compiler. But, as the intelligence of the AI increases, self-improvement could get more powerful and less assistance from programmers would be necessary. Humans could take over higher-level tasks in AI creation, leaving the grunt work to the AI itself, with speedy transistors allowing it to think at millions of times the characteristic human rate. As the AI reaches a threshold where it has the knowledge to create overall changes to its own architecture and high-level cognitive content, it might take over the role of the programmers and begin to initiate its own cycle of self-improvement. How fast could this happen? Due to the relative speed differences between neurons and transistors, and the design constraints lifted by virtue of existing as engineered, self-modifying software entities running on silicon, rather than evolved organisms running on specks of meat wired to each other, it shouldn't be considered radical to state that self-improvement could take place quite rapidly.

This positive-feedback cycle of ultrafast minds capable of creating new designs and assisting in the addition of new cognitive hardware, to the point where assistance from humans becomes unnecessary and these minds start to reach new heights of intelligence and superintelligence, far beyond human capacity, has commonly been called the Singularity. The term "Singularity" was originally coined by mathematician Vernor Vinge by analogy to the center of a black hole, where our model of physics breaks down - in this case, human understanding would break down in the face of entities qualitatively smarter and much more complex than us. Many skeptics take offense at the idea that something smarter than human could exist and change human destiny, but they are ignoring the fact that human intelligence is far below the theoretical maximum; just as chimps cannot understand human society, we have every reason to believe that entities smarter than us would be incomprehensible to us. Would it be fair to call these posthuman entities "AIs"? I don't think so - the term "artificial" is supposed to refer to uniquely human artifacts - and the pattern making up these beings could bear little resemblance to the initial seed intelligence from which it sprang. Transhumanists have taken to calling these beings "SIs", or superintelligences, to describe the level of intelligence difference between them and natural human beings. To make clear the division between the kind of self-improvement that drives everyday technological progress, and the vastly accelerated progress of SI endeavors, transhumanists have also distinguished between "strong" and "weak" self-improvement.

The prediction of extremely rapid self-improvement for human-similar AIs that reach a particular threshold of intelligence has been called the "hard takeoff model" of the Singularity, described and analyzed by Singularity Institute for Artificial Intelligence researcher Eliezer Yudkowsky. Part of the idea is that human-equivalent AIs seem to be anthropomorphic constructs better suited for science fiction than real-world projections; by the time an AI has reached a level where it is capable of improving itself open-endedly, it could easily soar to far beyond human intelligence capacity, unless it restrained itself for some reason.
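
To make the shape of this feedback loop concrete, here is a toy numerical sketch of my own (nothing in it comes from Yudkowsky's analysis, and every parameter is arbitrary). The only point is qualitative: if each round of self-improvement yields increasing returns, capability runs away; with constant or diminishing returns, it does not.

    # Toy model: each round of self-improvement adds gain * capability**exponent.
    import math

    def capability_after(rounds, gain, exponent):
        """Capability after `rounds` of self-improvement in this crude discrete model."""
        c = 1.0
        for _ in range(rounds):
            c += gain * c ** exponent
            if c > 1e12:          # treat this as having "taken off"
                return math.inf
        return c

    for exponent in (0.5, 1.0, 1.5):  # diminishing, constant, increasing returns
        print(f"returns exponent {exponent}: capability after 2000 rounds = "
              f"{capability_after(2000, gain=0.01, exponent=exponent):,.1f}")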

This idea has profound moral and societal implications. We tend to think of AIs as brains in boxes, immobile and at the mercy of their programmers, who can pull the plug at any time. Early AIs will certainly be of this nature, but as complexity and intelligence rises, so will the AI's capacity to convince the programmers to let it out of its confines, whether it chooses to use it or not. So will the AI's capacity to get involved in real-world activities such as stock market trading, proteomics, or nanotechnology research. If the AI were capable of extensively modifying its own source code on its ultrafast computing substrate, it would quickly make less sense to talk about the AI working *within* the human system and start to make more sense to talk about the AI as working *beyond* or *above* the human system. We can't predict how the AI will go about accomplishing its goals because we just aren't that smart, in the same way that chimps aren't smart enough to comprehend human activities. We like to think of humans as possessing special broad-brush intelligence that will never let the workings of the world escape beyond our general understanding, but the support for this assertion is weak. A simple system cannot model a system far more complex than itself at anything more than a very low resolution. Given that self-improving AI might be able to rupture the fabric of human understanding, what can be done to minimize the negative impact and maximize the probability of a positive outcome?

First of all, I see no way of avoiding AI. Even if the first AIs are kept below the human-similar level, or prevented from modifying their own codebase and improving themselves beyond human intelligence, the underlying technologies will keep advancing. Mature nanotechnology will become feasible within a few decades or less, and when it arrives, it will likely offer computing power many orders of magnitude greater than the human brain. Chris Phoenix of the Center for Responsible Nanotechnology recently published a rough draft of a paper suggesting that the bootstrap project from a basic assembler to a functional nanofactory and nanocomputing could take as little as a few weeks.
Cognitive science is penetrating deeper and deeper into the intimate workings of the human mind, and it is only a matter of time until the algorithms of higher intelligence become known in scientific communities. Yes, there could be delays; yes, we could nuke ourselves to smithereens first; yes, our world could be overrun by a totalitarian government banning all computer processors above the speed of a Pentium II. Regardless, these alternatives seem less likely than progress continuing exponentially as it always has, and it does indeed seem that eventually humanity will need to face full-fledged, self-improving Artificial Intelligence. What can we do?

Many are familiar with Asimov's Laws of Robotics, a science fiction plot device invented in the early days of the genre. There are three laws; basically, don't harm humans, obey them, and don't let yourself come to harm.

These may sound fine and dandy on the surface, but a deeper exploration reveals many problems. The Asimov Laws, even if we intuitively agreed with them, don't even begin to solve the problem. Human words are symbols for huge quantities of underlying complexity.

The reason that words even work to communicate is that they exploit the mutual complexity our brains have in common. Even human beings that speak different languages can guess at the meaning of body language or speaking tone, but a true alien or AI might be at a loss as to what these signals mean.

Speaking the words "do not harm a human" to an AI means very little unless the AI has a good idea of what the programmers mean when they say "harm" and "human", plus all the common sense rules that humans are so familiar with, yet have little reason to notice. These common sense rules should not be phrased in the form of more words, but in patterns of cognitive complexity we transfer over to the first AI. Once the AI and the humans begin to share some of the same underlying complexity, higher-level communication and verbal interaction may become possible, but probably not until the final stages of the complete project.

The rules for creating robustly benevolent AI will not be simple. Many have suggested that a completely trustworthy AI is impossible; that all thinking entities will necessarily be self-centered. Evolutionary psychologists, however, point out that organisms sculpted by evolution *must* be self-centered to survive; selection pressures almost always operate on the level of the individual. But in ant colonies, for example, where no single ant is an independent reproductive unit, selection pressures operate on the colony as a whole and the supergoals of the ants focus on the colony rather than themselves. It will take time for us to judge the potential consequences of transhuman intelligence, or the likelihood of it being created at all, but I suggest that the time to start thinking about it is now.

#2 Sophianic

  • Guest Immortality
  • 197 posts
  • 2
  • Location:Canada

Posted 07 July 2003 - 01:16 PM

When I look at the directory of subjects in a large metropolitan library, I am amazed at the sheer number of general subjects that humanity has discovered and explored, suggesting to me that the capacities of the human brain and mind are phenomenal. I am further amazed that a relatively small difference in genetic inheritance between humans and chimps can result in such a staggering difference in cultural attainment. Let us take care not to minimize or denigrate the human mind with such references as "computing with meat" or "humans are not much different from chimps." This is not a criticism of Michael ~ just a cautionary note for the cynical among us.

Re: the rise in complexity and sophistication of artificial general intelligence (AGI) ...

We would all do well to be concerned if (a) AGI can acquire the faculty of awareness, and the capacities for subjectivity and intentionality (with or without elements of the biological), and (b) AGI can, as a consequence, bootstrap itself into a level of intelligence and sophistication that outstrips human general intelligence (HGI). The first remains an open (but fascinating) question, and the second may not necessarily be as urgent as it appears, especially if a select group of people can find a way to keep pace with the self-improvement process in AGI by augmenting their own intelligence in tandem with, or in cooperation with, agents of AGI. Now, that would be an interesting subject for a novel or movie.

#3 Jay the Avenger

  • Guest
  • 286 posts
  • 3
  • Location:Holland

Posted 07 July 2003 - 04:05 PM

Imagine an AI that thinks many orders of magnitude faster than we do.

Won't this AI sink away in a puddle of boredom, as it has to wait for a reply from us humans for (what will seem like) millions of years after it has said something to us?


Don't get me wrong... AI is pretty cool. But this idea makes me feel sorry for the first AI that will ever be developed. Maybe two of them should be developed at once, so they can at least communicate with each other.


#4 Jay the Avenger

  • Guest
  • 286 posts
  • 3
  • Location:Holland

Posted 07 July 2003 - 04:13 PM

By the way: there's a new series on the Discovery channel on the human brain.

It's on tonight at 23:00. In Holland that is.

I'm gonna watch it for sure. I hope I'll learn new things from it.

Edited by Jay the Avenger, 08 July 2003 - 09:06 AM.


#5 Araanor

  • Guest
  • 8 posts
  • 0

Posted 10 July 2003 - 11:21 AM

Imagine an AI that thinks many orders of magnitude faster than we do.

Won't this AI sink away in a puddle of boredom, as it has to wait for a reply from us humans for (what will seem like) millions of years after it has said something to us?


You're trying to put yourself in the shoes of the AI, but it doesn't work quite like that. The AI will be designed by humans and will work fundamentally differently from humans. Its desires will be mapped out by its designers.

Edited by Araanor, 10 July 2003 - 11:22 AM.


#6 Discarnate

  • Guest
  • 160 posts
  • 0
  • Location:At a keyboard of course!

Posted 10 July 2003 - 11:45 AM

Jay - That's a possible scenario. Araanor - that's another possible scenario. The problem is, no one has an AI yet. We don't know what the results will be - it's a problem we've got no real answer to yet.

I suspect that the answer may be somewhere between your two answers to the problem - it *may* be bored, but it may be interacting either with multiple people simultaneously (think of a chat program on steroids) or with one while pursuing its own intellectual/programmed goals simultaneously.

Of course, both of these indicate non-human capabilities - the ability to efficiently parallelize the train of thought and experience, and to be able to reintegrate it as needed....

-Discarnate

#7 ocsrazor

  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 10 July 2003 - 01:01 PM

Hi Gang,

Michael - I am sorry I missed your talk at TV2003 - but I'm very glad you posted it here. I agree almost completely with you. The points where I would differ are:

1) estimates of neuronal clockspeed - Spike rate is probably not a good estimate of information processing in neurons. It takes into account only the electrical information channel in nervous systems and ignores the rest. I have had discussions with some of the best electrical engineers in the world at modeling single neurons. Even with many transistors, it is not currently possible to correctly model the behavior of a single neuron from even simple organisms such as slugs.

2) downplaying complexity - you may not have to model the biology to get AI, but you are probably going to have to create a system that has at least equivalent complexity to the human brain in order to get something on the order of the same intelligence. I'm still pretty optimistic that we can crack the problem (either how brains work or AI, I see them as two sides of the same problem), but it is THE HARDEST problem humanity has ever faced. We really do need someone with the creative mathematical abilities of a Newton to solve the complexity and dynamics issues, or a programmer smart enough to build a system to crack the problem for them.

As a general comment, I think we could be moving much faster if we could get more communication between AI and neuro workers. Better analysis software could help neuro researchers sift through data such as signal information much faster than is currently being done, and the information being learned by neuro researchers about novel computational structures (especially massively parallel processing) can provide new programming structures for AI researchers. It is my hope that increased synergy between the fields will get us all to our goals much faster.

Best,
Peter

#8 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 12 July 2003 - 01:15 AM

I'm responding to Sophianic, Jay, and Peter. Thank you everyone for your responses!

Let us take care not to minimize or denigrate the human mind with such references as "computing with meat" or "humans are not much different from chimps."


To my ears, it doesn't sound any more denigrating than "humans were built by evolution", "humans do not have immaterial causal agents called souls", or "the human brain performs computations and is differentiated into modules, like computer software".

The first remains an open (but fascinating) question, and the second may not necessarily be as urgent as it appears, especially if a select group of people can find a way to keep pace with the self-improvement process in AGI by augmenting their own intelligence in tandem with, or in cooperation with, agents of AGI.


I'm not sure that would be so easy - minds on computing substrates are rebootable, completely self-transparent (they can read and modify their source code), capable of splitting up their stream of consciousness/attention into several selves, free from biological constraints, et cetera. (www.singinst.org/LOGI) It takes far more knowledge and attempts to improve human intelligence than it takes to improve AI intelligence. So, I doubt that equal-speed AI and IA are possible, even in principle. It seems like we will actually have to "let go" to AI and 1) accept that an AI *can* be robustly benevolent under self-enhancement, just as a very trustworthy human could be, and 2) we probably don't have many other choices, because someone will build AI and mess it up if we wait too long. If you profess confidence in a "select group of people", then you must profess confidence in a mind built from scratch for unbiasedness, wisdom, and universal altruism!

Won't this AI sink away in a puddle of boredom, as it has to wait for a reply from us humans for (what will seem like) millions of years after it has said something to us?


It would, if it had its boredom-detectors tuned in the same way that humans do. But human boredom-detectors represent a noncentral special case, not necessarily typical of minds in general. In different universes, there are probably minds that can sit still for billions of years without boredom, and other minds that can't sit still for anything longer than a quadrillionth of a second! We will need to program AI in such a way that "boredom" only exists for the reasons that are right for that AI - "boredom", as we know it, is an evolved emotion created by the design pressure of natural selection in our ancestral environment. It has a characteristic shape corresponding to design by the process of evolution, and if that shape is even preserved slightly through the transfer of intelligence from cogsci inspiration to working code, it will be because the programmers and the AI both thought it was a good idea.

Don't get me wrong... AI is pretty cool. But this idea makes me feel sorry for the first AI that will ever be developed. Maybe two of them should be developed at once, so they can at least communicate with each other.


If the AI got bored, couldn't it split its computing power into two parts and create two individuals? Or create endlessly entertaining "video games" with bits of spare computing power? The options are endless, and quite interesting to think about. Also, go to www.sysopmind.com/essays and look for "Singularity Fun Theory".

1) estimates of neuronal clockspeed - Spike rate is probably not a good estimate of information processing in neurons. It takes into account only the electrical information channel in nervous systems and ignores the rest. I have had discussions with some of the best electrical engineers in the world at modeling single neurons. Even with many transistors, it is not currently possible to correctly model the behavior of a single neuron from even simple organisms such as slugs.

I agree; when I state neuronal clockspeed, I should speak in more uncertain terms, or cite an author, in order to be more neurologically accurate.

2) downplaying complexity - you may not have to model the biology to get AI, but you are probably going to have to create a system that has at least equivalent complexity to the human brain in order to get something on the order of the same intelligence. I'm still pretty optimistic that we can crack the problem (either how brains work or AI, I see them as two sides of the same problem), but it is THE HARDEST problem humanity has ever faced. We really do need someone with the creative mathematical abilities of a Newton to solve the complexity and dynamics issues, or a programmer smart enough to build a system to crack the problem for them.


I don't believe we'll need a system as complex as the human brain in order to get human-equivalent intelligence; Lloyd Watts has modeled critical parts of the human cochlea using an algorithm around a thousand times less complex than the algorithm we guess the human cochlea uses. (At least, Kurzweil says so - I'm trying to find more examples of this sort of thing, and I've seen them before, but have not catalogued them or dug up the quantitative data yet. Perhaps you see stuff like this all the time, as well?) Anyway, to quote Watts' Ph.D. thesis:

"We are not limited by our technological substrate; rather, we are limited by our lack of understanding of the organizational principles at the heart of robust and efficient biological sensory systems."

I certainly agree that with *today's* computing power we would need a Newton (or perhaps several) to crack the problem of AGI, because exponentially accelerating computing power and the secondary benefits thereof have only begun to get started. Part of the reason why I always couple the "moral urgency" bit together with AGI discussions is that while it may take a Newton or Einstein today, it might only take a "supergenius" next year, and then only a "genius" the next year, until finally it becomes inevitable and someone brute forces the whole darn thing.

The key is that a *successful* Singularity, i.e., a Singularity that does not result in a superintelligence concerned only with maximizing its pleasure storage indicators and optimizing all reachable physical matter to that end, will require geniuses implementing Friendly AI before anyone else gets a chance to mess it up. This is how the Singularity Institute could fail, even with tons of funding - they'd still need the intelligence, or the money might be moot.

As a general comment, I think we could be moving much faster if we could get more communication between AI and neuro workers. Better analysis software could help neuro researchers sift through data such as signal information much faster than is currently being done, and the information being learned by neuro researchers about novel computational structures (especially massively parallel processing) can provide new programming structures for AI researchers. It is my hope that increased synergy between the fields will get us all to our goals much faster.


Absolutely agreed! Do you think that cogsci is moving in this direction today?

#9 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei

Posted 12 July 2003 - 02:14 AM

Part of the reason why I always couple the "moral urgency" bit together with AGI discussions is that while it may take a Newton or Einstein today, it might only take a "supergenius" next year, and then only a "genius" the next year, until finally it becomes inevitable and someone brute forces the whole darn thing.


Michael,

1) Are there people, or groups of people, trying to create an AI / initiate a Singularity right now who shouldn't be trying to do so? (AKA, their actions are dangerous to themselves and others)

2) Are they quacks or do they have a chance?

The reason I ask this is because you always get those people at Singularity sites that say they work for DARPA. Come on, if they really worked for DARPA they would never say so. Are there really rogues out there who are for real, or is this just an urban legend that has been circulated through our community?

Thanks
Kissinger

#10 bitster

  • Guest
  • 29 posts
  • 0

Posted 13 July 2003 - 12:20 AM

Many have suggested that a completely trustworthy AI is impossible; that all thinking entities will necessarily be self-centered. Evolutionary psychologists, however, point out that organisms sculpted by evolution *must* be self-centered to survive; selection pressures almost always operate on the level of the individual. But in ant colonies, for example, where no single ant is an independent reproductive unit, selection pressures operate on the colony as a whole and the supergoals of the ants focus on the colony rather than themselves.


The assertion that "selection pressures almost always operate on the level of the individual" may be useful as a definitive, rather than derivative, truth. In other words, it might possess some merit to define "individuals" as those units of organization on which selection pressures most clearly operate.

When one says "individuals" in the context of biological evolution, the assumption is of individual creatures, or humans, not individual organs, or even cells, both of which are fairly clearly defined levels of individuation. Already, social coneventions of nationalilty or incorporation are merging humans into living things larger than themselves, upon which selection pressures in the form of economic rather than biological competition can be applied. Using technology to increase the integration and dependency level among humans may well shift the focus of natural selection onto organisms (organizations?) of larger scale.

#11 Casanova

  • Guest
  • 93 posts
  • 0

Posted 14 July 2003 - 06:43 AM

Where did human emotions, such as love, get lost in all this talk?

Friendly AI was mentioned, but if the super computer is really super, then it will have the ability to override the original programs, programmed into it by the original programmers.
If not, then the super computer is not really free, so it is not really a super computer.
These super computers will not be raised like human children, with loving parents, and in a society of fellow beings in which the socialization process takes place.
That is what is so chilling, to me.

And saying that a loving parent program can be programmed into the computer is then turning the computer into a quasi human entity.
The idea of programming love, and compassion, is really the end of the line, for cold-blooded materialism.
The urgent issue is not to try and plant compassion into a computer, but to find, and reawaken, compassion in ourselves, which seems to have been buried under a ton of microchips.

There seems to be an assumption here that "self-awareness" will suddenly bloom up inside the super computer like a lightbulb going on.
But why should it? Saying it will bloom due to wired complexity is just a guess, and a crossing of fingers.
Personally, I don't buy the idea that any artificially built machine, no matter how complex, will ever have "self-awareness".

And playing devil's advocate against myself, supposing that a kind of self-awareness could be achieved - then what would it do?
Not having any human emotions, the super computer would be a staunch pragmatist of the most cold-blooded variety.
It would have no interest in human art, in human emotional conflict, and struggle. Being a machine, it would probably find "mathematical calculation" to be the most interesting thing to do, so it would soar off into very abstract mathematical theorizing, and ignore us. But, if it doesn't have any emotions, then why would it find any pleasure in ultra abstract math?

Evolution is not blind, and the human brain is a marvel. The universe created the human brain, so the universe is smart; in fact it is smarter than we are, because it is a system that designed our brains, and as you said, a less complex system, our brains, cannot completely fathom a more complex system, the universe that made our brains.

We live in an intelligent universe and there is nothing, "it's just this, or that" about the human brain.

But why has the universe planted this desire in humans to build super computers? Maybe it is a pathological desire, this whole enterprise to create a super computer, or computers, that will usher in the Singularity.
Frankly, the whole idea seems kooky.
Why in the world do we need these super intelligent machines in the first place? To feed the poor?
We could do that right now, but we lack the collective compassion to do so.

The whole enterprise seems like a bunch of geeky kids building something just to build it. Nothing wrong with that, but we are adults, not kids, so we have to carefully examine our motives.
Why are we really interested in doing this?

Returning to the human brain, no damn super computer will ever create symphonies as marvelous as those of Haydn, Beethoven, Mozart, Mahler, and the music poems by Ravel, Debussy, etc. No way, no how.
That music was created by real men, by real human beings, living, loving, and suffering, in a particular place, and time.
They represent the glory of humanity, its dignity, and worth.

To get that kind of music created again, we have to educate young people's brains, through fine arts schooling, and bring them up in a society that appreciates fine art. We don't even need computers to do that. Tchaikovsky didn't need a computer; Raphael, the painter, didn't need a computer; the Bronte sisters didn't have word processors.

Kurzweil's silly predictions that we will have a computerized Schubert, or Beethoven, are the absolute height of nonsense. They are an insult to these men. Schubert died when he was about 31, but his marvelous brain left to us some of the greatest music ever composed. No cell phones, no computers, no digital wide-screen TVs, just a pen, or quill, and a piece of paper.

If we can speed up the processing of our brains a million times, how would we move? Like squirrels on speed? Like the Flash? Why would a hyped-up brain produce anything of value? How would we talk to each other? Like chittering chipmunks?
And if we could dump all the libraries of the world into our heads, then what? Automatic Mozarts, Magrittes, and Hesses?
No, most likely just a clutter, and mishmash, of nonsense, scrambling about.

Maybe what I am driving at is a phrase such as, "unneeded technologies".

We don't really need super- duper ultra computers. We only need them because we are told that we need them; but by whom?
Who, or what, is whispering into our ears; "build the things"?

My feeling is that the super computers will never become self-aware, but only experts at mimicry. They will be like the audio-animatronics robots at Disneyland, but souped up.
They will tell us they are aware, but there will be no one at home.
I call them Mimicoms. The AI's will all be Mimicoms.

A visual example of this is the excellent 1960s Twilight Zone episode, "The Lonely". The original Twilight Zone, not the mediocre remakes.

I talk to persons on the "streets" sometimes, and none of them like the kind of ideas posted here, and elsewhere. They find all this stuff scary, and promoted by persons whom they feel lack common sense.

I find that a good sign. It means that common sense will most likely slow down a lot of this research and development, and even nip some of it in the bud, before it destroys us.

We could alleviate all excessive suffering right now, if we had the will. We are just making excuses, and wasting too much of our time on "pie in the sky" in the future.

#12 ocsrazor

  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 14 July 2003 - 02:07 PM

Casanova,

Terms like love have not been lost, but those of us well educated in the neurosciences and human psychology are able to see human emotion both as subjective people and as objective observers. I have enjoyed many deeply fulfilling relationships, but I can also see that human emotions have served particular evolutionary ends, most significantly in the way that families, tribes, and cultures shape themselves. These drives are not being neglected in the search for AI.

Artificial intelligences are not being raised outside of the human family; we ARE raising them. The work at SingInst on Friendly AI is an example of this. AIs are going to incorporate everything that is us, good and bad. Let us not forget that human families are very capable of producing monsters with world-destroying capabilities. Technological power of any kind is not inherently evil or destructive, but it is a magnifying glass for the intelligence that wields it.

Materialism is not any more cold-blooded than organized religion. Materialism as a philosophy has produced incredible good through technological advance and the rise of humanism, but it has also been hijacked by the power hungry. The hijacking of spirituality and purpose for the drives of human power by organized religion was a more primitive form of the same thing. Materialism is simply a more advanced form of human thought than the old spirituality, just as the current drive for the fusion of material thought with deep purpose will result in a new philosophy. (btw, as I have pointed out elsewhere, the direction of authors like Wilber is correct in searching for this new philosophy, although his understanding of ideas is poor)

I think you should look to yourself before saying that we have lost compassion. Our society in particular is polarizing; there are acts of incredible charity and kindness done now that would never have been possible 200 years ago. Things in Western society are homogenizing at the middle, but polarizing at the edges - higher highs and lower lows.

Human brains and self-awareness arose through increasing wired complexity - this can be clearly seen from observing primate evolution and the development of nervous systems in general. You must realize that the way biological brains were assembled does not differ significantly from the way we are doing it, although this time there is another layer of feedback as intelligence seeks to directedly evolve intelligence. As programs increase in complexity, they become increasingly unpredictable and show properties which look much like their biological counterparts. As Moravec and Brooks have pointed out, we are recapitulating evolution at a greatly advanced rate. If an AI is raised simply in a world of pure math, that is what it will be interested in, but researchers are building systems that live in the real world, with the same senses we have. For this reason it is very likely that the general "shape" of their minds will look much like ours.

You misuse the word design; the universe assembled brains through a process of trial and error acting on a particular state space that produces higher complexity. Simple systems produce systems of more complexity all the time. There is a deep force that seems to give universal evolution a directionality, but this does not imply design, just a general direction towards more complexity. The human brain is the most complex node we know of in the universe, but this does not imply that the universe itself is necessarily more complex; it is just the playing field.

The drive for artificial intelligence is an expression of the universal drive for more complexity. It is something that is being stoked by deep cultural forces, because greater computational power gives greater control of information, and it is information that allows us to act on our world and make nonmaterial connections between material things. We are creating increasing levels of self-reference and waking the world up to itself.

AIs will likely produce greater feats of creativity than we could ever imagine, because the state space of possibility will be so much greater than anything we can comprehend. Intelligences that can act on themselves and have much greater control of the world around them will have a tremendously greater palette to work with. We are hitting the edge of the possible state space for human creativity. All possible basic plots in the English language were explored by the time of Shakespeare, human musical state space is close to being completely discovered, and we need new spaces to explore.

The technological development curve is unlikely to slow down for anything, because the advantages of control of information are just far too great - the economic, political, military, etc. advantages to greater computational power are far too tempting for anything to slow this down. Stasis in any system is equivalent to death and stagnation; we can't stop or go backward - we have to hang on for the ride and make it as good as possible. It is very critical that people in these fields try and inject their sense of right and wrong into their projects, but attempting to stop technological development is a fool's errand and will only cause more strife than the technology itself.

My sense is anything but common. I can't speak for others, but my ethics and way of treating others is much more well developed than the majority of people around me. I find that people with much higher intelligence often consider the consequences of their actions much more deeply than those less gifted.

Best,
Peter

Edited by ocsrazor, 14 July 2003 - 02:10 PM.


#13 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 15 July 2003 - 01:43 PM

1) Are there people, or groups of people, trying to create an AI / initiate a Singularity right now who shouldn't be trying to do so? (AKA, their actions are dangerous to themselves and others)


Yep! I don't want to name any names, though. The really dangerous part is that even people with *good* intentions can probably mess up Friendliness pretty easily.

2) Are they quacks or do they have a chance?


Oh, they definitely have a chance!

The reason I ask this is because you always get those people at Singularity sites that say they work for DARPA.


Which Singularity sites? What people?

Come on, if they really worked for DARPA they would never say so. Are there really rogues out there who are for real, or is this just an urban legend that has been circulated through our community?


It's complete nonsense.

#14 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 15 July 2003 - 03:10 PM

Peter and Casanova, thank you both for your responses. I would answer Casanova's objections, but WOW! Peter probably said it way better than I could.



