AI or IA - it's all about recursion


11 replies to this topic

#1 ocsrazor

  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 04 April 2004 - 04:01 AM


This is a post of desperation. I am becoming incredibly frustrated by very bright people who know how to create wonderful algorithms but don't understand complex biological systems, and, on the flip side, by very bright people who know a great deal about complex biological systems but are mathematically naive. It is my firm opinion that effort should not be spent arguing about whether artificial intelligence or intelligence augmentation will come first, but that a tremendous amount of time should be spent on making sure they appear at roughly the same instant in history. The purpose of this post is to attempt to get a project started that will make sure exactly this happens.

I agree completely with the insight that recursive self improvement in any intelligent system is what will spark the Singularity. To my mind the Singularity is well into its birth pangs, but the time it will take to reach true recursive self improvement of intelligence is highly dependent on human factors.

What I need is a group of talented programmers and neuroscientists who would be willing to cross-pollinate their fields with each other's ideas. This process has begun, but it is still ridiculously slow.

Why neuroscientists need programmers:
Complexity! We are beginning to generate far more data than the human mind can handle. Every single good computational neuroscience paper that comes out is so chock full of information that there is no way we are getting all the relevant information out of it. We need to start looking at all the neuronal and neuronal-network models that are out there and see where appropriate reductions can be made, i.e. start mining all this information for the relevant data. I think it is very likely there is enough data out there to get a pretty strong handle on what is actually going on in neuronal network processing. This just isn't being done right now. We need to get some people who are great at algorithmics to start chugging through this stuff and running large-scale models. There should be a downloadable consciousness@home process, akin to folding@home or seti@home, which would work through this stuff.
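
A minimal sketch of what one "work unit" in such a distributed process might look like, in Python. Everything here is an assumption for illustration only: the parameter names, the toy firing-rate model, and the local sweep standing in for a real project server.

    # Hypothetical sketch of a "consciousness@home"-style work unit: each unit is a
    # parameter set for a small firing-rate network model; the worker simulates it
    # and returns summary statistics that a central project could aggregate.
    import numpy as np

    def run_work_unit(params, steps=500, dt=0.05, seed=0):
        """Simulate a small random firing-rate network and summarize its activity."""
        rng = np.random.default_rng(seed)
        n = params["n_units"]
        # Random sparse coupling; "gain" controls overall excitability.
        w = rng.normal(0.0, params["gain"] / np.sqrt(n), size=(n, n))
        w[rng.random((n, n)) > params["density"]] = 0.0
        x = rng.normal(0.0, 0.1, size=n)          # state variables
        rates = []
        for _ in range(steps):
            r = np.tanh(x)                        # firing-rate nonlinearity
            x += dt * (-x + w @ r)                # leaky rate dynamics
            rates.append(r.mean())
        rates = np.array(rates)
        return {"mean_rate": float(rates.mean()),
                "rate_variance": float(rates.var())}

    if __name__ == "__main__":
        # A project server would normally hand these out; here we just sweep locally.
        work_units = [{"n_units": 200, "density": 0.1, "gain": g}
                      for g in (0.5, 1.0, 1.5, 2.0)]
        for i, wu in enumerate(work_units):
            print(wu["gain"], run_work_unit(wu, seed=i))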

Why programmers need neuroscientists:
Complexity! Evolution has already done what some of you want to do - created a functioning conscious intelligence - we just need to go in and reverse engineer it. The majority of current schemes for generating seed AI or GI ignore the fundamental type of processing going on in brains. I fully agree with many people in the AI/GI community that much of the information coming out of single-cell neuroscience can be ignored, BUT you cannot ignore the network behaviors, and neuroscience is still nowhere close to understanding those - but we are developing the tools now. There are completely novel computational schemes to be discovered here - so start looking at this data. Most of the good work in AI has been done at the high level, attempting to look at cog sci or psychology and piecing together algorithms, BUT these systems are still far below the operating capability of similar biological systems. AI as it exists now has a packing problem: we need to reverse engineer neuronal systems to figure out how to get faster, leaner, more connected algorithms and hardware, so that they can operate on par with and beyond the biology.

The Specific Project I Have in Mind

My laboratory is in the process of generating data from large-scale (20-100,000) networks of cultured cortical neurons. We are able to send and receive electrical signals through an array of 60 electrodes to this network. I am in the process of creating a system which will integrate live, high-sensitivity, high-resolution imaging of this system with the electrical information. In this way I hope to capture as much information as possible about the development and function of this system.
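
To give a concrete sense of the first step in handling this kind of data, here is a rough Python sketch of threshold-based spike detection on a 60-channel recording. The data layout, sampling rate, thresholds, and the injected fake spikes are assumptions for illustration, not our actual acquisition pipeline.

    # A minimal first-pass analysis sketch for multielectrode-array data like the
    # 60-electrode recordings described above. The format is assumed: a plain
    # channels x samples voltage array; a real rig has its own file format.
    import numpy as np

    def detect_spikes(traces, fs, thresh_sd=5.0, refractory_ms=2.0):
        """Threshold-crossing spike detection; one array of spike times per channel."""
        refractory = int(refractory_ms * 1e-3 * fs)
        spike_times = []
        for chan in traces:
            # Robust noise estimate (median absolute deviation), a common choice.
            sigma = np.median(np.abs(chan)) / 0.6745
            crossings = np.flatnonzero(chan < -thresh_sd * sigma)
            kept, last = [], -refractory
            for t in crossings:
                if t - last >= refractory:        # enforce a refractory period
                    kept.append(t)
                    last = t
            spike_times.append(np.array(kept) / fs)  # seconds
        return spike_times

    if __name__ == "__main__":
        fs = 25_000                                   # assumed sampling rate (Hz)
        rng = np.random.default_rng(1)
        traces = rng.normal(0, 10e-6, size=(60, fs))  # 60 channels, 1 s of noise
        traces[7, ::2500] -= 120e-6                   # inject a fake spike train
        spikes = detect_spikes(traces, fs)
        print([len(s) for s in spikes][:10])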

I believe the fundamental cortical unit of information processing in the mammalian brain contains somewhere in the range of 10-100,000 neurons, so our system is ideal for examining its properties. This is a new area of neuroscience; people are just beginning to explore the possibility of studying this many neurons at once. The fundamental processing units of a brain (in my mind, 10-100,000 cells) need to be understood so that we can then attempt to organize them into high-level networks and link up their type of processing with the existing high-level models.
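
As a sketch of the scale involved, a sparse leaky integrate-and-fire model of ten thousand cells can be stepped in a few lines of Python. The parameters below are generic textbook-style values chosen for illustration, not fits to any real culture.

    # A sketch of the kind of reduced model one might eventually fit to such data:
    # a sparse leaky integrate-and-fire network on the order of 10,000 cells.
    import numpy as np
    import scipy.sparse as sp

    def simulate_lif(n=10_000, density=0.001, steps=1000, dt=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        # Sparse random connectivity; roughly n*density inputs per neuron.
        w = sp.random(n, n, density=density, format="csr", random_state=seed) * 0.12
        v = rng.uniform(0.0, 1.0, size=n)      # membrane potentials (dimensionless)
        tau, v_thresh, v_reset = 0.02, 1.0, 0.0
        drive = 1.05                           # constant suprathreshold input
        spike_counts = np.zeros(steps, dtype=int)
        spiking = np.zeros(n, dtype=bool)
        for t in range(steps):
            recurrent = w @ spiking.astype(float)   # input from last step's spikes
            v += dt / tau * (-v + drive) + recurrent
            spiking = v >= v_thresh
            v[spiking] = v_reset
            spike_counts[t] = spiking.sum()
        return spike_counts

    if __name__ == "__main__":
        counts = simulate_lif()
        print("mean cells spiking per 1 ms step:", counts.mean())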

Where I need help is in developing machine intelligence that will examine the electrophysiological and imaging data we are generating for patterns of activity and patterns of physical growth. I am interested in questions such as how the amount of connectivity in the network affects signalling, how much information could possibly be in the signals the network generates, and what the state space of possible signalling behaviors for the network is - in short, a total structure-function model of this network of tens of thousands of neurons. If we possessed a model like this we would be able to create larger-scale models, which could then be used to simulate the formation of large modules in the brain by connecting many of the fundamental model processing units together. This is the beginning of truly modular, scalable, networkable AI. In addition, this model would be used to create the next generation of neural implants, because this information would allow neuroscientists to understand how to accurately stimulate and record from large numbers of neurons in order to get information into and out of a brain.
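
Two of these questions can at least be crudely illustrated in Python: a pairwise-correlation matrix over binned spike trains as a stand-in for functional connectivity, and the entropy of the observed population patterns as a rough measure of how much of the possible signalling state space the network actually visits. The toy Poisson spike trains and the bin size below are assumptions for illustration only.

    # Hedged sketch of two analyses mentioned above, run on toy spike data:
    # pairwise correlation as a crude functional-connectivity estimate, and the
    # entropy (bits) of binned population patterns as a rough state-space measure.
    import numpy as np
    from collections import Counter

    def binned_activity(spike_times, n_channels, duration, bin_size):
        """Binary matrix: channels x time bins, 1 if the channel fired in the bin."""
        n_bins = int(duration / bin_size)
        binary = np.zeros((n_channels, n_bins), dtype=int)
        for ch, times in enumerate(spike_times):
            idx = np.clip((np.asarray(times) / bin_size).astype(int), 0, n_bins - 1)
            binary[ch, idx] = 1
        return binary

    def functional_connectivity(binary):
        """Pearson correlation between channels' binned activity."""
        return np.corrcoef(binary)

    def pattern_entropy(binary):
        """Entropy (bits) of the distribution of population patterns across bins."""
        patterns = Counter(map(tuple, binary.T))
        p = np.array(list(patterns.values()), dtype=float)
        p /= p.sum()
        return float(-(p * np.log2(p)).sum())

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        duration, n_ch = 10.0, 60                      # 10 s, 60 electrodes (assumed)
        toy = [np.sort(rng.uniform(0, duration, rng.poisson(50))) for _ in range(n_ch)]
        b = binned_activity(toy, n_ch, duration, bin_size=0.05)
        print("connectivity matrix shape:", functional_connectivity(b).shape)
        print("pattern entropy (bits):", round(pattern_entropy(b), 2))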

This project itself is recursive - the more we understand about the biology, the better the computation gets, which in turn is able to understand more about the biology. This recursive cycle will carry us forward to the point where the artificial computational system has learned all that is necessary about biological computation to carry forward with its own development.

As a final note, I fully support the efforts of people like Eliezer and Michael to try and incorporate human moral foundations into a seed AI - this is critical work for the eventuality of any superintelligence. I don't want them to drop what they are doing to work on this, but I do think the high level implementation is not going to come until we get our fundamental algorithmics and hardware to be as lean and as dynamic as the biology. All current estimates of human computational ability FAR underestimate the complexity we see in brains - there just isn't anything out there with the same level of connectivity or dynamics as a brain, so we need to figure out how to duplicate this type of system before moving forward.

Best,
Peter

#2 reason

  • Guardian Reason
  • 1,101 posts
  • 249
  • Location:US

Posted 04 April 2004 - 04:46 AM

Based on my following of the relevant branches of science, I don't think IA will beat AI or vice versa by all that much. It seems a little too close to call, not to mention the fact that the two are very much intertwined through the cognitive and neural sciences. The more we understand about the workings of our intelligence, the more likely we are to get both IA and AI as an output.

Reason
Founder, Longevity Meme
reason@longevitymeme.org
http://www.longevitymeme.org


#3 PaulH

  • Guest
  • 116 posts
  • 0
  • Location:Global

Posted 05 April 2004 - 12:17 AM

Peter,

You won't get any argument from me on this. I've been saying essentially the same thing for years.

One of my concerns with most AI, and especially SAI, research is an exclusive reliance on a reductionist view of the brain. This is an oversight that I think is dangerous when coding a moral seed AI, for example. It almost completely ignores the fact that no brain exists in a vacuum, but is part of a massively complex, globe-spanning network... a natural systemic intelligence... the sum total of 4 billion years of evolution - the biosphere/noosphere/societal complex. For example, it's impossible to understand the workings of an ecology if we only examine its individual parts. It's the interaction of those parts that gives rise to an emergent intelligence not found through reductionist means. An ant colony shows a global intelligence not determined by examining any one ant. Therefore, morality seems best understood from a systemic approach, rather than a reductionist approach.

Consciousness As Scientific Tool

Just as we use the telescope or microscope to study the outer universe, we can use consciousness to study the inner universe. I find it odd, then, that people are trying to improve upon an instrument (the mind/brain) without actually using that very instrument to determine how it works. This idea of building our understanding of the brain from the outside - examining its parts, neurons, glial cells and neurotransmitter functioning - without actually using the instrument itself seems disconcertingly short-sighted. To me it requires both modes of examination and study to effectively improve upon it, because examining its parts in a reductionist fashion tells us little of the emergent intelligence we each experience internally. Therefore, a genuine IA or AI research program should include both an objective and a subjective framework. To me this is so obvious that I think it's the main reason so many people overlook it.

In my opinion, the one person who has done the most to map the regions and limits of innerspace in a rigorously scientific fashion is John Lilly, M.D., Ph.D.

It is through this internal form of study that we can determine, discover and access modes of knowledge and understanding of how human minds work that could never be ascertained by reductionist means.

I don't know about anyone else, but I find it a bit dubious that the Singularity Institute is proposing to create an altogether "alien" mind that supersedes the human mind, and that is supposed to have the human mind's best interests at heart, yet has no knowledge of the inner workings of our minds that can only be ascertained by subjective exploration and examination. At least with the IA approach, those of us who are getting intelligence augmentation can in turn work on advancing IA, and our well-being, because we are in the best possible position to understand it, since we are it - not some alien intelligence that germinated from scratch from entirely "alien" principles, derived by AI scientists based solely on a woefully incomplete reductionist model of the mind.

Edited by planetp, 05 April 2004 - 01:24 AM.


#4 ocsrazor

  • Topic Starter
  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 05 April 2004 - 01:18 AM

This is another part of the argument, planetp, and should have its own thread if there isn't one out there already. I completely agree with you that the environment in which an AI develops is critical to its functioning in a way that resembles what we think of as human intelligence. I am a strong advocate of embodiment for AI systems; they need to live and breathe in our world as much as possible if we want them to be like us and have a natural understanding of the "real" universe.

I like to think of myself more as a systems theorist, but it's my strong feeling that we need to get a handle on what a fundamental cortical circuit is and how it operates to move forward with both IA and AI.

That said, I am extremely interested in studying information flows across huge systems like economies and cultures, and I think this is a critical area of study for understanding intelligence as well - I believe this will be a hot area of research for a number of reasons, and you are right to question its inclusion in the hunt for AI/GI. I believe Eli Yudkowsky may be starting this process by trying to incorporate evolutionary psychology and human culture into his thinking, but there is a long way to go yet before pulling in enough information to properly train an AI/GI. It is my opinion that exposing it as much as possible to the real world, in every possible sense, and to as much of human and natural history as possible, would be the best way to evolve it so that it is not "alien".

Best,
Peter

#5 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 05 April 2004 - 02:07 AM

ocsrazor: I am a strong advocate of embodiment for AI systems; they need to live and breathe in our world as much as possible if we want them to be like us and have a natural understanding of the "real" universe.

I have gotten the impression that it is not a good idea to be anthropocentric when attempting to develop transhuman intelligence. Would you say, then, that it's okay for now to be anthropocentric as long as we're still in our early stages of understanding intelligence, uniting thought and being?

#6 ocsrazor

  • Topic Starter
  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 05 April 2004 - 02:25 AM

The problem, Nate, is that we have an N of one. We are the only system we know of with human-level intelligence. I think it is fine to explore the state space of possibility for the shape of that intelligence, but you have to let it absorb all the processing that human and biological and universal evolution has already done. When I say "want them to be like us" I mean it in the sense that Eli Yudkowsky means - that they have a human moral and historical reference frame, but not necessarily our particular form of embodiment or even the particular structure of our minds. Although I think it highly likely that even if you didn't model their systems after biological neural ones, through their own "evolution" they would come to look like our systems (esp. the neocortex), with the exception of the particularly biological legacy systems we have in our brains.

Peter

#7 PaulH

  • Guest
  • 116 posts
  • 0
  • Location:Global

Posted 05 April 2004 - 05:48 AM

I've opened a new thread at Peter's request here:

http://www.imminst.o...t=0

which argues that part of any AI or IA program requires that the subjective understanding of the mind-brain is as important as an objective understanding.

Edited by planetp, 05 April 2004 - 10:25 AM.


#8 PaulH

  • Guest
  • 116 posts
  • 0
  • Location:Global

Posted 05 April 2004 - 11:16 AM

Nate wrote:

I have gotten the impression that it is not a good idea to be anthropocentric when attempting to develop transhuman intelligence.



Nate,

I completely disagree with that impression. Michael and Eli are wrong, wrong, wrong. And I could care less how smart they are at this point. Actually, I do care, as their extreme intelligence combined with their blindsided, very unwise approach makes them very dangerous. Especially after having had detailed discussions with both of them, I think it's an extremely BAD IDEA. The goal should be to create a transhuman intelligence that is anthropomorphized as much as possible, without any of the drawbacks. I just finished writing a detailed piece about why I think this.

I suggest you read Flemming's and my post over at Future Hi to get the other side of this very important argument.

Here is an excerpt from my piece:

With the IA approach we are pursuing intelligence augmentation and self-fulfillment from within our own internally guided framework. We are in the best possible position to understand and direct it based on our own inner knowledge, rather than being forcibly programmed into something else by some alien intelligence that's germinated from scratch based on principles derived by AI scientists using a woefully incomplete reductionist model of the mind.

The remaining question is: why does it have to be so alien? With a more comprehensive understanding of human contelligence (consciousness + intelligence) that comes from deep and prolonged inner exploration and mastery - such as that of the super-benevolent yogis - would come the answers to creating a super-benevolent SAI: what Greg Burch has called an Extrosattva. With the addition of a morality derived from systemic embodied experience, such a being would possess a broad understanding of genuine compassion and benevolence, along with a deep embodied understanding of humanness (necessary to help us), as well as not being prone to logical fallacies. In the light of all this, the idea of creating a completely de-anthropomorphized SAI is completely genocidal and foolhardy, as if we humans don't have anything worth contributing going forward in our evolution. I think the existence of super-benevolent yogis, including the likes of Gandhi or the Dalai Lama, proves otherwise.


Edited by planetp, 05 April 2004 - 11:31 AM.


#9 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 05 April 2004 - 05:52 PM

At http://www.imminst.o...t=0 I have responded to Paul's concerns as best I can for now. I have argued that through the use of external reference semantics, desirability derived from supergoal content uncertainty, probabilistic supergoals, and the constraint of programmer-insensitivity, we can build an AI that turns out altruistic no matter which philosophy of science is right. Part of the argument is that we would expect a competently built "nonanthropomorphic" Friendly AI to independently absorb "anthropomorphic" characteristics that turn out to be required for Friendliness.

On the topic of the technological difficulty of AI and IA, I'd like to take a little bit more time to create a more thorough response. Achieving significant IA is radically harder than duplicating the critical features necessary for general intelligence on a radically more robust, flexible, rapid, reprogrammable substrate where individual thought processes can be tweaked, replayed, paused, overclocked, made redundant, removed, reconnected, or redesigned entirely. This is one of the most important points I've learned in almost three years of extensive Singularity research, and also one of the points that becomes stronger the more I learn. It is a point I will need to carefully present and argue to others, but I'd rather present it as a bunch of strong points all linked together than a few unconvincing weak points I type up over the course of 5-10 minutes.

On the "safety" side, it really is tempting to look at Brain-Computer Interfaces and form the belief that BCI would be reliably safer than straightforward AI, and therefore more desirable. Indeed, a very incremental BCI project might indeed be safer than an AI implementation; it might not - but it would probably take a very long time. It is not the safety of any specific approach we are concerned with here, but the integrated safety of working on any single approach when someone working on a faster approach is likely to succeed before you do. Nanocomputing will be here before human-compatible nanomedicine is, which will throw open the doors to AI while the doors to IA are still steadfastly shut. Part of the reason AI is easier is that there is simply so much more state-space to work with; and it's all flexible state space; there are no innate homeostatic mechanisms that snap things back into place and cause undesirable side effects. But I don't want to say too much too early - please allow me to formulate a more thorough response as time allows.

#10 John Doe

  • Guest
  • 291 posts
  • 0

Posted 05 April 2004 - 10:59 PM

MichaelAnissimov wrote:

On the "safety" side, it really is tempting to look at Brain-Computer Interfaces and form the belief that BCI would be reliably safer than straightforward AI, and therefore more desirable. Indeed, a very incremental BCI project might indeed be safer than an AI implementation; it might not - but it would probably take a very long time. It is not the safety of any specific approach we are concerned with here, but the integrated safety of working on any single approach when someone working on a faster approach is likely to succeed before you do. Nanocomputing will be here before human-compatible nanomedicine is, which will throw open the doors to AI while the doors to IA are still steadfastly shut. Part of the reason AI is easier is that there is simply so much more state-space to work with; and it's all flexible state space; there are no innate homeostatic mechanisms that snap things back into place and cause undesirable side effects. But I don't want to say too much too early - please allow me to formulate a more thorough response as time allows.


I tend to think AI is much safer than IA, and obviously so. The reason is that AI will start from scratch, whereas IA will be built upon mammalian midbrains, which include fundamentally non-altruistic and cruel Darwinian impulses. Friendly AI as described by EY will have no such impulses. The FAI will not even possess a "self". This is one of the beauties of FAI. By programming a superintelligent computer we can accomplish a saintliness that no human -- not even an IA human -- could possess.

#11 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 06 April 2004 - 04:03 AM

Kip, you have a very good argument there. :)

There are even additional potential benefits to Friendly AI that you don't specifically mention here - a cleanly causal goal system rather than one based on strong attractors, for example. Wisdom tournaments too. For humans there's the added benefit that you know all the necessary complexity is in there somewhere - it's just the process of eliminating the non-humane complexity that would worry me.

That's actually why I added "it might not". In this post, for the sake of discussion I was taking the "global" view, where I assume the necessary tools reach many people at once through nanocomputing or sophisticated nanomedicine. What I was actually trying to say here is that a reeeaallly gradual and careful BCI experiment *might* be safer than an average (non-Eliezer) AGI designer slapping Friendliness together. Luckily, in reality, Eliezer exists, and was able to come up with a lot of unusually bright ideas about how to put together a Friendly Species AI. If nanocomputing arrives before SIAI completes its mission, however, we could be in for real trouble.

But you're basically right. According to my current best picture of the Singularity on Earth, AI is faster, cheaper, easier, and safer than the IA approach.


#12 ocsrazor

  • Topic Starter
  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 06 April 2004 - 10:40 PM

Hi Michael and Kip

I want you guys to keep doing what you are doing; there is no question about the value of doing things in software. I strongly disagree with you, though, that doing AI programming alone, without biological experimentation, will be faster than combining both effectively. Abstraction is great; the problem lies in the fact that we just aren't smart enough yet - our programming techniques are not good enough, our hardware construction isn't good enough - to build something on the order of human intelligence. There are many lessons yet to be learned from the biology about how to build evolvable complex systems.

If there is one thing I have learned in my years in science, it is that theoretical and experimental approaches are most effective when combined. Theoreticians tend to wander abstract spaces that may not be important to their goals when they don't look at real-world experiments (which is exactly where I see most AI work right now). Experimentalists who don't look at theory get stuck on fine points of reductionistic nonsense - details which are not important for big-picture synthesis (which is exactly where much of neuroscience is right now).

What I am trying to express is that AI researchers could greatly limit the state space of their search for good algorithms of fundamental, recursively self-improving intelligence by taking a closer look at what is going on in neuroscience. There is still a huge packing problem - a lack of speed, dynamics, and complexity - which I don't see as being anywhere close to solved by artificially constructed intelligence. Nanotech, or even just MEMS, will likely solve the material end of this problem, but that still leaves us with the question of algorithmics - how to build these incredibly complex networks so the dynamics are correct - which we still don't have a clue about. This is the absolute far end of highly complex nonlinear systems problems, and the only system in which it has been solved is the mammalian brain.

I applaud anyone looking at the evolution of cultural systems for generating AI, as this is an absolutely critical level of information required for AI/GI, but there is also much information to be gleaned from the evolution of brains which is being ignored right now and may not be easily abstractable through simulation alone.

Neither system is inherently safer from the point of view of unenhanced humans. To be honest, we are creating systems in which there is no possible way we can fully predict the outcome of our engineering. These superintelligent systems (AI or IA) will be inherently more complex than we are - this is the whole point of the Singularity. We can do our best to limit the dangers as much as possible, but we just can't robustly predict what is going to happen once these systems arise. I find it fascinating to consider what the initial weights in a "selfless" system such as Eli's may result in. How altruistic do you make the AI? How selfless? Is the value of one human life equivalent to that of the whole species? Small differences in initial conditions in a system such as this will make a great deal of difference to the decisions it makes and the system it evolves into. We should be aware from our own history that saints are sometimes very scary people, Kip - it is the absolute idealists who have always done the greatest amount of both good and harm throughout human history.

On a personal note, one of my motivations for working on IA is that I want to make sure we limit our risk of being pushed to the wayside (intentionally or unintentionally) by an AI, by maintaining our ability to understand and process information at a level at least somewhat closer to what a full-on superintelligence will be capable of, so that we can more easily transition to its existence. I want to come along for the ride; I want my consciousness to be part of the big adventure of transhumanism.

In summary,

AI/GI research will move much more quickly by taking its cues from the information processing evolution has already done to create mammalian brains - i.e. don't reinvent the wheel when you can copy existing engineering advances! Extracting information experimentally or theoretically works best when the right balance is struck between the two. We must start to see AI and IA as the same field of research and not separate them so rigidly - this only slows down progress and makes people ignore good work being done outside their primary field.

Safety prediction is an extremely tough game with extremely complex systems. Is the devil you do know easier to deal with than the one you don't? I think this is anybody's guess, but also that these are two sides of the same coin. Abstraction for AI/GI and biological intelligence is going to tell us the same things, but we will still be facing a wall of complexity and dynamics that will be nearly impossible for the unaided human mind to breach.

Best,
Peter



