  LongeCity
              Advocacy & Research for Unlimited Lifespans





Why humans have the best artificial intelligence


11 replies to this topic

#1 scottl

  • Guest
  • 2,177 posts
  • 2

Posted 30 July 2006 - 02:07 AM


http://www.collision...umans_have.html

collision detection

July 27, 2006
Why humans have the best artificial intelligence





Ever heard of Amazon's new service, the Mechanical Turk? The concept is pretty simple: You sign up as a Turk, and go to the site to see what jobs are available. The jobs all consist of some simple task that can be performed at your computer -- such as viewing pictures of shoes and tagging them based on what color they are. You get a few pennies per job, and according to a recent story in Salon, some people make up to $30 a day by clicking away at these nearly-mindless tasks during slow moments at their day job.

What I love about the Mechanical Turk is that it capitalizes on an interesting limitation in artificial intelligence: Computers suck at many tasks that are super-easy for humans. Any idiot can look at a picture and instantly recognize that it's a picture of a pink shoe. Any idiot can listen to a .wav file and realize it's the sound of a dog barking. But computer scientists have spent billions trying to train software to do this, and they've utterly failed.
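To see why this is genuinely hard to hand-code, here is a minimal sketch of the kind of naive detector that fails in practice -- purely hypothetical Python, not any real product's code -- which calls an image "pink" if enough pixels fall inside a hand-tuned color range:

    # Hypothetical sketch: a brittle hand-coded "pink" detector.
    # Real photos defeat rules like this with lighting, shadow, and clutter --
    # exactly the gap the Mechanical Turk fills with human eyes.
    def looks_pink(pixels, threshold=0.3):
        def is_pinkish(r, g, b):
            return r > 180 and b > 140 and g < r - 40  # hand-tuned guesswork
        pinkish = sum(1 for (r, g, b) in pixels if is_pinkish(r, g, b))
        return pinkish / len(pixels) > threshold

    # looks_pink([(230, 105, 180)] * 70 + [(40, 40, 40)] * 30)  ->  True

And this says nothing about whether the pink thing is a shoe at all; recognizing the object is the far harder half of the problem.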

So if you're a company with a big database of pictures that need classifying, why spend tens of thousands on image-recognition software that sucks? Why not just spend a couple grand -- if that -- getting bored cubicle-dwellers and Bangalore teenagers to do the work for you, at 3 cents a picture? As Amazon notes in its FAQ:

For software developers, the Amazon Mechanical Turk web service solves the problem of building applications that until now have not worked well because they lack human intelligence. Humans are much more effective than computers at solving some types of problems, like finding specific objects in pictures, evaluating beauty, or translating text. The Amazon Mechanical Turk web service gives developers a programmable interface to a network of humans to solve these kinds of problems and incorporate this human intelligence into their applications.
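For illustration, posting a task through such a programmable interface might look roughly like the following -- a hypothetical REST-style sketch in Python, where the endpoint and field names are invented stand-ins, not Amazon's actual API:

    import requests  # hypothetical sketch; not Amazon's real client library

    MTURK_URL = "https://mturk.example.com/hits"  # placeholder endpoint

    def post_tagging_hit(image_url):
        # Ask the "network of humans" what color the pictured shoe is.
        hit = {
            "title": "What color is this shoe?",
            "image": image_url,
            "reward_usd": 0.03,  # the "3 cents a picture" rate mentioned above
            "assignments": 3,    # ask three workers, take the majority answer
        }
        return requests.post(MTURK_URL, json=hit).json()["hit_id"]

The point survives even in a toy: to the calling program, the human workers are just another remote service that returns labels.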

Mind you, while the cognitive-science aspects of the Mechanical Turk are incredibly cool, the labor dimensions freak the hell out of high-tech labor unions. "What Amazon is trying to do is create the virtual day laborer hiring hall on the global scale to bid down wage rates to the advantage of the employer," as one WashTech organizer argues. Either way, it's a really odd way to think of human intelligence: just more processing time, a few more cycles in the machine, the global community of freelance workers becoming a massively-parallel computer, floating out there in the aether like the world's hugest graphics card.

I actually wrote a little essay for Wired in 2002 that predicted this, sort of.

(Thanks to Jason Fisher for this one!)
Posted by Clive Thompson at July 27, 2006 09:50 PM

#2 scottl

  • Topic Starter
  • Guest
  • 2,177 posts
  • 2

Posted 30 July 2006 - 02:09 AM

Anyone want to comment on this:

": Computers suck at many tasks that are super-easy for humans. Any idiot can look at picture and instantly recognize that it's a picture of a pink shoe. Any idiot can listen to a .wav file and realize it's the sound of a dog baring. But computer scientists have spent billions trying to train software to do this, and they've utterly failed."


#3 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 30 July 2006 - 02:45 AM

Any man who thinks he knows what is good for others is dangerous


How true.

": Computers suck at many tasks that are super-easy for humans. Any idiot can look at picture and instantly recognize that it's a picture of a pink shoe. Any idiot can listen to a .wav file and realize it's the sound of a dog baring. But computer scientists have spent billions trying to train software to do this, and they've utterly failed."


Stated simply, "artificially intelligent systems" haven't yet achieved parity with human minds in pattern recognition. Well, duh.

Of course it is going to be incredibly difficult to design SAI, but this doesn't in any way conflict with a functionalist position in the philosophy of mind.

If someone has a problem with a functionalist account of consciousness, then what s/he must do is put forward a framework that works as well or better in terms of its explanatory power.

#4 scottl

  • Topic Starter
  • Guest
  • 2,177 posts
  • 2

Posted 30 July 2006 - 03:52 AM

Don,

I had not read anything on or about AI in literally decades. Given all the discussion of the singularity, I had perhaps naively assumed that some of the fundamental problems had been solved (at least to some significant degree). I posted this, and in this forum, because I was curious if we are really as far away (i.e., "utterly failed") as this seems to indicate--I really have no idea, and assumed someone in here would know.

I have been taking the singularity for granted based on glimpses of stuff I've read on the board, although perhaps I need to re-examine how certain I think it is.

As far as the second part of your post, no, I don't subscribe to your views on consciousness, but I have no interest in discussing that, and IMHO that discussion is separate from this one (I'll worry about whether I think the singularity is conscious once it happens).

#5 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 30 July 2006 - 04:01 AM

scottl

I was curious if we are really as far away (i.e., "utterly failed") as this seems to indicate--I really have no idea, and assumed someone in here would know.

I have been taking the singularity for granted based on glimpses of stuff I've read on the board.


GOFAI (good old-fashioned AI) is dead and buried, but there are newer connectionist models, along with a broad range of other designs and approaches, that seem very promising. I am also not an expert on AI, but my personal *intuition* is that highly advanced AI systems are still at least a few decades away. (By highly advanced, I mean human-level.)

My disagreement with Kurzweil's prognostications stems not from the exponential growth concept so much as from the ambiguity of the goal being striven for.


As far as the second part of your post, no, I don't subscribe to your views on consciousness, but I have no interest in discussing that, and IMHO that discussion is separate from this one (I'll worry about whether I think the singularity is conscious once it happens).


Fair enough. As your signature says, we should all believe what works for us, as long as those beliefs do no harm to others.

#6 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 30 July 2006 - 05:10 PM

(Don)
My disagreement with Kurzweil's prognostications stems not from the exponential growth concept so much as from the ambiguity of the goal being striven for.


Concise and precisely correct as usual, Don. This has been my principal concern for a very long time as well. It also goes for the idea of *friendliness*, and even for what people call altruism in evolutionary terms.

The debate over the value of compassion and other sympathetic aspects of intelligence is vitally important, because the very idea of emotions is anathema to many models of intelligence; yet it may well be that emotions are not only exemplary of higher intelligence but essential for it to exist.

This perspective introduces a chaotic element that undermines the certainty of any predictive analysis, and that, more than any other reason (such as values bias), makes the crapshoot of predicting when these advances will occur very suspect.

That said, I do think we are moving in the direction of AGI, and the real debate is over the relative perception of how big the steps being taken are. To many, for a long time, we seemed to be taking giant steps, which implied that by now HAL would be in charge. The truth was not so much that we hit a brick wall in development but that we grossly overestimated the extent of the advancements with respect to the ultimate goal.

This same kind of bias holds for Kurzweil's analysis. The importance of processing speed is real, but it can be overestimated if it is merely one among many important variables and not necessarily the single most important issue.

Software is obviously another, and we have experienced a bottleneck in terms of the complexity of software for some time now. I suspect there are more obstacles yet to be overcome as we learn more about the inherent characteristics of the goal, which of course takes us back to the initial question: what exactly is intelligence?

Edited by Lazarus Long, 31 July 2006 - 10:27 AM.


#7 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 31 July 2006 - 07:22 AM

My disagreement with Kurzweil's prognostications stems not from the exponential growth concept so much as from the ambiguity of the goal being striven for.


As a wise man once said...

The Singularity holds out the possibility of winning the Grand Prize, the true Utopia, the best-of-all-possible-worlds - not just freedom from pain and stress or a sterile round of endless physical pleasures, but the prospect of endless growth for every human being - growth in mind, in intelligence, in strength of personality; life without bound, without end; experiencing everything we’ve dreamed of experiencing, becoming everything we’ve ever dreamed of being; not for a billion years, or ten-to-the-billionth years, but forever… or perhaps embarking together on some still greater adventure of which we cannot even conceive. That’s the Apotheosis.

If any utopia, any destiny, any happy ending is possible for the human species, it lies in the Singularity.

There is no evil I have to accept because “there’s nothing I can do about it”. There is no abused child, no oppressed peasant, no starving beggar, no crack-addicted infant, no cancer patient, literally no one that I cannot look squarely in the eye. I’m working to save everybody, heal the planet, solve all the problems of the world.

...emotions are not only exemplary of higher intelligence, they may be essential for it to exist.


In humans, some of the time, sure. But "emotions" are this particular, highly specific evolved biological thing that is only there because they were a convenient stepping-stone from pure stimulus-response cognition to more complex cognition. In another galaxy or universe, aliens probably get along just fine without it, or have other cognitive features with entirely different properties and different names that they think are essential to intelligence because in their civilization, it seems like it is.

What is "intelligence"? The ability to achieve complex goals in a complex environment, says Goertzel, and that's fine for our purposes. So intelligence involves extracting regularities from sensory data and exploiting those regularities to achieve goal states. You can do this with or without emotion.

Having good social intelligence requires being attuned to emotion, because emotion is so important in the human social world. But to an AI exploring humans for the first time, it just looks like a bunch of complicated regularities in data. So an AI could act socially adept just by modeling emotions, but not actually experiencing them itself. Some of us cringe at this kind of statement, but it's true.

Maybe the definition of intelligence you are using here automatically encompasses special-purpose social intelligence. For this, it may be essential that humans think that an AI is experiencing emotion, even if it is just faking it.

This perspective introduces a chaotic element that undermines the certainty of any predictive analysis, and that, more than any other reason (such as values bias), makes the crapshoot of predicting when these advances will occur very suspect.


A million sci-fi movies and books love to milk this alleged "chaotic element" of intelligence. We'd hate to admit that underlying all the sweat and blood and love and hate is a series of neurons whose ultimate foundation is heartless pattern recognition operating on noisy sensory data.

It seems like we have this swirling tornado of emotion and thought in us constantly, and that this is essential for us to get things done in the world. Not so. In fact, this tornado causes us to perform far more poorly than normative models in thousands of different problem domains. It can be an embarrassment.

If you look at the human species within the larger space of every possible atomic configuration with the same general amount of matter, we're an astonishingly precise, mechanical, well-planned and well-executed organism. That feeling of chaos is chaos within such precise bounds that, if we had a lottery machine that outputted a random matterblob, it would take it forever to get anything close to the specificity of a human being, or any form of computable intelligence.
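The arithmetic behind that intuition is easy to run with toy numbers (assumptions for illustration, not measurements): even an "organism" specified by just 1,000 parts with 20 possible states each is one configuration among 20^1000, while an impossibly fast lottery machine running for the age of the universe makes nowhere near that many draws:

    import math

    parts, states = 1000, 20
    configurations = states ** parts                  # 20**1000, about 10**1301
    draws_per_sec = 1e40                              # absurdly generous machine
    age_of_universe_s = 4.35e17                       # ~13.8 billion years
    total_draws = draws_per_sec * age_of_universe_s   # about 10**58

    print(f"configurations ~ 10^{int(math.log10(configurations))}")  # 10^1301
    print(f"total draws    ~ 10^{math.log10(total_draws):.0f}")      # 10^58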

what exactly is intelligence?


When your AI is smart enough to use protein folding to achieve bootstrapping nanotechnology and rapid infrastructure, it's intelligent. Otherwise it is just a piece of software sitting on a computer, even if it can clean out the stock market.

#8 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 31 July 2006 - 08:00 AM

The truth was not so much Thai we hit a brick wall in development but that we grossly overestimated the extent of their advancements with respect to the ultimate goal.

Did your spell checker do that?

By the way, this comment might seem off topic, but in an ironic way, it's not.

#9 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 31 July 2006 - 08:41 AM

Relevant to this thread:

http://sl4.org/wiki/KnowabilityOfFAI

Addresses the question:

"How can an AI be creative if we know exactly what it will do? Or if we don't know exactly what it will do, how can we know it will be Friendly?"

#10 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 31 July 2006 - 10:25 AM

The spellchecker did do that, Jay, combined with me hitting the wrong selection, no doubt, and the irony is not lost on me either. :)

#11 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 31 July 2006 - 11:14 AM

In humans, some of the time, sure. But "emotions" are this particular, highly specific evolved biological thing that is only there because they were a convenient stepping-stone from pure stimulus-response cognition to more complex cognition. In another galaxy or universe, aliens probably get along just fine without it, or have other cognitive features with entirely different properties and different names that they think are essential to intelligence because in their civilization, it seems like it is.


Are they, Michael? You don't really know that; in fact, you are expressing an emotional bias that is not really supported by objective evidence. You don't understand the role of emotions with respect to intelligence or evolution, and while your assumption might be correct, it is still merely an assumption.

I will not debate aliens or other cultures of LGM with you, as it is irrelevant to the topic. I will accept that we need to address the issue of defining intelligence, and I believe we have addressed this elsewhere at great length, but basically I do not see problem solving as the be-all and end-all of intelligence; I see it as one component of intelligence. I also think we can agree on a sliding scale for intelligence, but more than intelligence we are also describing consciousness, or at least conscious intelligence in the sense of self-awareness, and once this aspect of intelligence is introduced there may well be an associative emotional component that you're dismissing.

I think it is premature to draw your conclusion simply because linguistically we associate feelings with sensation. I do think that emotions are, ironically, related to the ability to assimilate complex and even contradictory sensory data and make survival-based heuristic analyses that require a response before a complete evaluation of the data is possible.
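One way to make that concrete: treat emotion as a fast, always-available heuristic that can answer before the slow, complete analysis finishes. A schematic sketch (invented thresholds, illustrative only):

    def fast_heuristic(stimulus):
        # crude "startle" rule: loud and sudden reads as threat, answer instantly
        return "flee" if stimulus["loudness"] > 0.8 else None

    def full_evaluation(stimulus):
        # slow, complete analysis; in the wild it may arrive too late to matter
        return "ignore" if stimulus["pattern"] == "thunder" else "flee"

    def respond(stimulus, time_available=0.1, evaluation_time=1.0):
        quick = fast_heuristic(stimulus)
        if quick is not None and time_available < evaluation_time:
            return quick  # act on the heuristic before the data are fully evaluated
        return full_evaluation(stimulus)

    print(respond({"loudness": 0.9, "pattern": "thunder"}))  # flee: wrong, but alive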


#12 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 31 July 2006 - 01:37 PM

Again and again we return to the question of how Practical Self-Aware (or Conscious) AI will be assembled, and I retort again that we will likely find our earliest successes by the age-old avenue of building on what we know: reverse-engineering the human brain.

We can argue all about theoretical alternatives, but before those happen, the knowledge being rapidly gathered from techniques like this one makes synthetic cybernetic-cerebral machines a most probable first development.

Even if we do not simply model it directly upon the human brain/mind, these developments will make an end run around the issue by creating a synthesis of human brains and the most advanced computers, along with a large-scale web neural net assembled through shared processing ability and combined with BCI-augmented cybernetics. That synthesis is probable long before a stand-alone quantum computer and its required next-stage tetratic-logic software (or however you want to envision algorithmic logic beyond the limitations of binary reductionism) come jointly into being.

http://www.eurekaler...t-mrw072806.php
MIT researchers watch brain in action
Cambridge, Mass. -- For the first time, scientists have been able to watch neurons within the brain of a living animal change in response to experience.

Thanks to a new imaging system, researchers at MIT's Picower Institute for Learning and Memory have gotten an unprecedented look into how genes shape the brain in response to the environment. Their work is reported in the July 28 issue of Cell.

"This work represents a technological breakthrough," said first author Kuan Hong Wang, a research scientist at the Picower Institute who will launch his own laboratory at the National Institute of Mental Health in the fall. "This is the first study that demonstrates the ability to directly visualize the molecular activity of individual neurons in the brain of live animals at a single-cell resolution, and to observe the changes in the activity in the same neurons in response to the changes of the environment on a daily basis for a week."

This advance, coupled with other brain disease models, could "offer unparalleled advantages in understanding pathological processes in real time, leading to potential new drugs and treatments for a host of neurological diseases and mental disorders," said Nobel laureate Susumu Tonegawa, a co-author of the study.

Tonegawa, director of the Picower Institute and the Picower Professor of Biology and Neuroscience at MIT, Wang and colleagues found that visual experience induces a protein that works as a molecular "filter" to enhance the overall selectivity of the brain's responses to visual stimuli.

The protein, called "Arc," was previously detected in the hippocampus, where it is believed to help store lasting memories by strengthening synapses, the connections between neurons. The Picower Institute's unexpected finding was that Arc also blocks the activity of neurons with low orientation selectivity that are not well "tuned" to vertical and horizontal lines, while keeping neurons with high orientation selectivity.

"Consequently, with the help of Arc, the overall orientation selectivity in the visual cortex is sharpened by visual experience," like a camera that learns to focus better over time, Wang said. What's more, he said, "we suspect that this molecular filtering mechanism may also be applicable to other information processing systems in the brain."

Although baby animals are born with a handful of neurons tuned to respond to edges of light at specific orientations, the ability to detect these orientations improves with experience. The more the animal is exposed to shapes, objects and light, the better it can perceive them.

Plasticity is the amazing ability of a neuron or a synapse to change in response to experience. Changes in synaptic strength require rapid protein synthesis, but at the molecular level, little is known about the factors contributing to experience-dependent changes in orientation selectivity in the visual cortex, Wang said.
(excerpt)
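A toy numerical reading of that filtering idea (made-up numbers, not the paper's model): score each neuron's orientation selectivity as how far its peak response exceeds its mean, then drop the poorly tuned neurons and watch the population average sharpen:

    import numpy as np

    rng = np.random.default_rng(0)
    responses = rng.random((100, 8))  # 100 neurons x 8 stimulus orientations

    def selectivity(resp):
        # crude selectivity index: how much the peak response beats the mean
        return (resp.max(axis=1) - resp.mean(axis=1)) / resp.max(axis=1)

    def arc_like_filter(resp, cutoff=0.4):
        # keep sharply tuned neurons, silence the rest (the Arc role described above)
        return resp[selectivity(resp) > cutoff]

    print(selectivity(responses).mean())                    # population before
    print(selectivity(arc_like_filter(responses)).mean())   # higher afterwards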


The convergence of technologies is rapidly making the avenue of synthetic AI as an extension of human intelligence very likely in a reasonably short term (under 20 years), because it does not require reinventing the wheel. To put it simply: consider human minds the seed AI, and a combination of AGI software and the shared processing ability of the web as the base-stage matrix for fertilization by it. Add the advancements coming down the pike in accelerating computer tech and its requisite software, and these will build on one another synergistically; but I think we are seeing ordinal development based on what we can do, not merely upon what we may want to do.



