  LongeCity
              Advocacy & Research for Unlimited Lifespans





Cyborgs Are Us


9 replies to this topic

#1 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 23 September 2002 - 09:54 PM


I am hoping that as this debate goes forward from a philosophical perspective we do not lose sight of the pragmatic aspects becoming reality with each passing day. The question of whether we should upload a consciousness, like the question of what will happen if we upload a human identity, is about to be overshadowed by the very real questions of how we will upload a human identity and whether we can download enhanced mental capacity.


Overview/Brain Interfaces

Rats and monkeys whose brains have been wired to a computer have successfully controlled levers and robot arms by imagining their own limb either pressing a bar or manipulating a joystick.

These feats have been made possible by advances in microwires that can be implanted in the motor cortex and by the development of algorithms that translate the electrical activity of brain neurons into commands able to control mechanical devices. Human trials of sophisticated brain-machine interfaces are far off, but the technology could eventually help people who have lost an arm to control a robotic replacement with their mind or help patients with a spinal cord injury regain control of a paralyzed limb.



This is an update on the cybernaut monkeys.
http://www.sciam.com/ The Scientific American Homepage

Controlling Robots with the Mind

October 2002 issue
People with nerve or limb injuries may one day be able to command wheelchairs, prosthetics and even paralyzed arms and legs by "thinking them through" the motions
By Miguel A. L. Nicolelis and John K. Chapin

Belle, our tiny owl monkey, was seated in her special chair inside a soundproof chamber at our Duke University laboratory. Her right hand grasped a joystick as she watched a horizontal series of lights on a display panel. She knew that if a light suddenly shone and she moved the joystick left or right to correspond to its position, a dispenser would send a drop of fruit juice into her mouth. She loved to play this game. And she was good at it.

Image: Jim Wallace, Duke University Photography
OWL MONKEY named Belle climbs on a robot arm she was able to control from a distant room purely by imagining her own arm moving through three-dimensional space.

Belle wore a cap glued to her head. Under it were four plastic connectors. The connectors fed arrays of microwires--each wire finer than the finest sewing thread--into different regions of Belle's motor cortex, the brain tissue that plans movements and sends instructions for enacting the plans to nerve cells in the spinal cord. Each of the 100 microwires lay beside a single motor neuron. When a neuron produced an electrical discharge--an "action potential"--the adjacent microwire would capture the current and send it up through a small wiring bundle that ran from Belle's cap to a box of electronics on a table next to the booth. The box, in turn, was linked to two computers, one next door and the other half a country away.

In a crowded room across the hall, members of our research team were getting anxious. After months of hard work, we were about to test the idea that we could reliably translate the raw electrical activity in a living being's brain--Belle's mere thoughts--into signals that could direct the actions of a robot. Unknown to Belle on this spring afternoon in 2000, we had assembled a multijointed robot arm in this room, away from her view, that she would control for the first time. As soon as Belle's brain sensed a lit spot on the panel, electronics in the box running two real-time mathematical models would rapidly analyze the tiny action potentials produced by her brain cells. Our lab computer would convert the electrical patterns into instructions that would direct the robot arm. Six hundred miles north, in Cambridge, Mass., a different computer would produce the same actions in another robot arm, built by Mandayam A. Srinivasan, head of the Laboratory for Human and Machine Haptics (the Touch Lab) at the Massachusetts Institute of Technology. At least, that was the plan.

If we had done everything correctly, the two robot arms would behave as Belle's arm did, at exactly the same time. We would have to translate her neuronal activity into robot commands in just 300 milliseconds--the natural delay between the time Belle's motor cortex planned how she should move her limb and the moment it sent the instructions to her muscles. If the brain of a living creature could accurately control two dissimilar robot arms--despite the signal noise and transmission delays inherent in our lab network and the error-prone Internet--perhaps it could someday control a mechanical device or actual limbs in ways that would be truly helpful to people.

Finally the moment came. We randomly switched on lights in front of Belle, and she immediately moved her joystick back and forth to correspond to them. Our robot arm moved similarly to Belle's real arm. So did Srinivasan's. Belle and the robots moved in synchrony, like dancers choreographed by the electrical impulses sparking in Belle's mind. Amid the loud celebration that erupted in Durham, N.C., and Cambridge, we could not help thinking that this was only the beginning of a promising journey.

In the two years since that day, our labs and several others have advanced neuroscience, computer science, microelectronics and robotics to create ways for rats, monkeys and eventually humans to control mechanical and electronic machines purely by "thinking through," or imagining, the motions. Our immediate goal is to help a person who has been paralyzed by a neurological disorder or spinal cord injury, but whose motor cortex is spared, to operate a wheelchair or a robotic limb. Someday the research could also help such a patient regain control over a natural arm or leg, with the aid of wireless communication between implants in the brain and the limb. And it could lead to devices that restore or augment other motor, sensory or cognitive functions.

The big question is, of course, whether we can make a practical, reliable system. Doctors have no means by which to repair spinal cord breaks or damaged brains. In the distant future, neuroscientists may be able to regenerate injured neurons or program stem cells (those capable of differentiating into various cell types) to take their place. But in the near future, brain-machine interfaces (BMIs), or neuroprostheses, are a more viable option for restoring motor function. Success this summer with macaque monkeys that completed different tasks than those we asked of Belle has gotten us even closer to this goal.

From Theory to Practice
Recent advances in brain-machine interfaces are grounded in part on discoveries made about 20 years ago. In the early 1980s Apostolos P. Georgopoulos of Johns Hopkins University recorded the electrical activity of single motor-cortex neurons in macaque monkeys. He found that the nerve cells typically reacted most strongly when a monkey moved its arm in a certain direction. Yet when the arm moved at an angle away from a cell's preferred direction, the neuron's activity didn't cease; it diminished in proportion to the cosine of that angle. The finding showed that motor neurons were broadly tuned to a range of motion and that the brain most likely relied on the collective activity of dispersed populations of single neurons to generate a motor command.
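As a rough illustration of that tuning idea (not Georgopoulos's actual analysis), the sketch below simulates neurons whose firing falls off with the cosine of the angle between a movement and each cell's preferred direction, then recovers the movement direction from the population as a whole; all rates and counts are made up for the example.

```python
import numpy as np

# Toy model of cosine-tuned motor neurons and population-vector decoding.
# Each simulated neuron fires most for its preferred direction, and its rate
# falls off with the cosine of the angle away from that direction.

rng = np.random.default_rng(0)
n_neurons = 50
preferred = rng.uniform(0, 2 * np.pi, n_neurons)   # preferred direction of each neuron (rad)
baseline, gain = 10.0, 8.0                          # baseline rate and modulation depth (spikes/s)

def firing_rates(movement_angle):
    """Mean firing rate of every neuron for a movement in the given direction."""
    return baseline + gain * np.cos(movement_angle - preferred)

def population_vector(rates):
    """Decode direction as the rate-weighted sum of preferred-direction unit vectors."""
    weights = rates - baseline
    x = np.sum(weights * np.cos(preferred))
    y = np.sum(weights * np.sin(preferred))
    return np.arctan2(y, x)

true_angle = np.deg2rad(120)
noisy_rates = rng.poisson(firing_rates(true_angle))  # add Poisson spiking variability
print(f"decoded direction: {np.rad2deg(population_vector(noisy_rates)):.1f} degrees")
```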

Image: Bryan Christie Design
Sidebar: A Vision of the Future


There were caveats, however. Georgopoulos had recorded the activity of single neurons one at a time and from only one motor area. This approach left unproved the underlying hypothesis that some kind of coding scheme emerges from the simultaneous activity of many neurons distributed across multiple cortical areas. Scientists knew that the frontal and parietal lobes--in the forward and rear parts of the brain, respectively--interacted to plan and generate motor commands. But technological bottlenecks prevented neurophysiologists from making widespread recordings at once. Furthermore, most scientists believed that by cataloguing the properties of neurons one at a time, they could build a comprehensive map of how the brain works--as if charting the properties of individual trees could unveil the ecological structure of an entire forest!

Image: Tim Beddon, SPL/Photo Researchers, Inc.
Sidebar: Stopping Seizures

Fortunately, not everyone agreed. When the two of us met 14 years ago at Hahnemann University, we discussed the challenge of simultaneously recording many single neurons. By 1993 technological breakthroughs we had made allowed us to record 48 neurons spread across five structures that form a rat's sensorimotor system--the brain regions that perceive and use sensory information to direct movements.

Crucial to our success back then--and since--were new electrode arrays containing Teflon-coated stainless-steel microwires that could be implanted in an animal's brain. Neurophysiologists had used standard electrodes that resemble rigid needles to record single neurons. These classic electrodes worked well but only for a few hours, because cellular compounds collected around the electrodes' tips and eventually insulated them from the current. Furthermore, as the subject's brain moved slightly during normal activity, the stiff pins damaged neurons. The microwires we devised in our lab (later produced by NBLabs in Denison, Tex.) had blunter tips, about 50 microns in diameter, and were much more flexible. Cellular substances did not seal off the ends, and the flexibility greatly reduced neuron damage. These properties enabled us to produce recordings for months on end, and having tools for reliable recording allowed us to begin developing systems for translating brain signals into commands that could control a mechanical device.

With electrical engineer Harvey Wiggins, now president of Plexon in Dallas, and with Donald J. Woodward and Samuel A. Deadwyler of Wake Forest University School of Medicine, we devised a small "Harvey box" of custom electronics, like the one next to Belle's booth. It was the first hardware that could properly sample, filter and amplify neural signals from many electrodes. Special software allowed us to discriminate electrical activity from up to four single neurons per microwire by identifying unique features of each cell's electrical discharge.
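The sketch below illustrates the general approach of discriminating single units on one wire (it is not the Plexon software itself): detect threshold-crossing spikes, summarize each waveform with a couple of simple features, and cluster the features into a handful of putative neurons. The signal, thresholds and feature choices are assumptions for the example.

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

# Toy single-wire spike discrimination: detect threshold-crossing waveforms, describe
# each with two crude features, and cluster the features into a few putative units.

def sort_channel(signal, fs=30_000, threshold=4.0, max_units=4):
    noise = np.median(np.abs(signal)) / 0.6745                      # robust noise estimate
    peaks, _ = find_peaks(-signal, height=threshold * noise,
                          distance=int(0.001 * fs))                 # 1 ms minimum spacing
    half = int(0.0005 * fs)                                         # 0.5 ms window either side
    valid = [p for p in peaks if half <= p < len(signal) - half]
    waves = np.array([signal[p - half:p + half] for p in valid])
    if len(waves) == 0:
        return np.array([], dtype=int), np.array([], dtype=int)
    feats = np.column_stack([waves.min(axis=1),                              # trough depth
                             waves.argmax(axis=1) - waves.argmin(axis=1)])   # rough width
    k = min(max_units, len(waves))
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats.astype(float))
    return np.array(valid), labels

# Example on synthetic noise with a few injected "spikes"
rng = np.random.default_rng(0)
trace = rng.normal(0, 1.0, 30_000)
trace[[5_000, 12_000, 20_000]] -= 12.0          # three artificial spike troughs
times, units = sort_channel(trace)
print(len(times), "spikes detected, unit labels:", units)
```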

A Rat's Brain Controls a Lever
In our next experiments at Hahnemann in the mid-1990s, we taught a rat in a cage to control a lever with its mind. First we trained it to press a bar with its forelimb. The bar was electronically connected to a lever outside the cage. When the rat pressed the bar, the outside lever tipped down to a chute and delivered a drop of water it could drink.

We fitted the rat's head with a small version of the brain-machine interface Belle would later use. Every time the rat commanded its forelimb to press the bar, we simultaneously recorded the action potentials produced by 46 neurons. We had programmed resistors in a so-called integrator, which weighted and processed data from the neurons to generate a single analog output that predicted very well the trajectory of the rat's forelimb. We linked this integrator to the robot lever's controller so that it could command the lever.
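A software analogue of that weighted integrator is sketched below, with weights fit by least squares instead of hand-tuned resistors; the simulated firing rates and lever trajectory are stand-ins, not data from the experiment.

```python
import numpy as np

# Software analogue of the analog "integrator": a single weighted sum of per-neuron
# activity whose weights are fit so the output tracks the measured lever position.

rng = np.random.default_rng(1)
n_neurons, n_bins = 46, 2000
rates = rng.poisson(5.0, size=(n_bins, n_neurons)).astype(float)   # binned spike counts
true_weights = rng.normal(0, 0.1, n_neurons)
lever_pos = rates @ true_weights + rng.normal(0, 0.2, n_bins)      # stand-in for the trajectory

# Fit one weight per neuron (plus an offset) on a training segment...
X = np.column_stack([rates, np.ones(n_bins)])
w, *_ = np.linalg.lstsq(X[:1500], lever_pos[:1500], rcond=None)

# ...then the "integrator" output is just the weighted sum applied to new activity.
predicted = X[1500:] @ w
print("correlation with held-out trajectory:",
      round(np.corrcoef(predicted, lever_pos[1500:])[0, 1], 3))
```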

Once the rat had gotten used to pressing the bar for water, we disconnected the bar from the lever. The rat pressed the bar, but the lever remained still. Frustrated, it began to press the bar repeatedly, to no avail. But one time, the lever tipped and delivered the water. The rat didn't know it, but its 46 neurons had expressed the same firing pattern they had in earlier trials when the bar still worked. That pattern prompted the integrator to put the lever in motion.

After several hours the rat realized it no longer needed to press the bar. If it just looked at the bar and imagined its forelimb pressing it, its neurons could still express the firing pattern that our brain-machine interface would interpret as motor commands to move the lever. Over time, four of six rats succeeded in this task. They learned that they had to "think through" the motion of pressing the bar. This is not as mystical as it might sound; right now you can imagine reaching out to grasp an object near you--without doing so. In similar fashion, a person with an injured or severed limb might learn to control a robot arm joined to a shoulder.

A Monkey's Brain Controls a Robot Arm

We were thrilled with our rats' success. It inspired us to move forward, to try to reproduce in a robotic limb the three-dimensional arm movements made by monkeys--animals with brains far more similar to those of humans. As a first step, we had to devise technology for predicting how the monkeys intended to move their natural arms.

At this time, one of us (Nicolelis) moved to Duke and established a neurophysiology laboratory there. Together we built an interface to simultaneously monitor close to 100 neurons, distributed across the frontal and parietal lobes. We proceeded to try it with several owl monkeys. We chose owl monkeys because their motor cortical areas are located on the surface of their smooth brain, a configuration that minimizes the surgical difficulty of implanting microwire arrays. The microwire arrays allowed us to record the action potentials in each creature's brain for several months.

In our first experiments, we required owl monkeys, including Belle, to move a joystick left or right after seeing a light appear on the left or right side of a video screen. We later sat them in a chair facing an opaque barrier. When we lifted the barrier they saw a piece of fruit on a tray. The monkeys had to reach out and grab the fruit, bring it to their mouth and place their hand back down. We measured the position of each monkey's wrist by attaching fiber-optic sensors to it, which defined the wrist's trajectory.

Further analysis revealed that a simple linear summation of the electrical activity of cortical motor neurons predicted very well the position of an animal's hand a few hundred milliseconds ahead of time. This discovery was made by Johan Wessberg of Duke, now at Gothenburg University in Sweden. The main trick was for the computer to continuously combine neuronal activity produced as far back in time as one second to best predict movements in real time.
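One common way to implement that trick is a lagged linear model: the hand position at each moment is predicted from the last second or so of binned activity across all neurons. The sketch below uses ten 100-millisecond bins per neuron and simulated data purely for illustration.

```python
import numpy as np

# Lagged linear decoder: predict hand position from the last ~1 s of binned activity
# (10 bins of 100 ms) across every recorded neuron. Data here are simulated.

def build_lagged_design(rates, n_lags):
    """Stack the current bin and the previous n_lags-1 bins of every neuron into one row."""
    n_bins, _ = rates.shape
    rows = [rates[t - n_lags + 1:t + 1].ravel() for t in range(n_lags - 1, n_bins)]
    return np.array(rows)

rng = np.random.default_rng(2)
n_neurons, n_bins, n_lags = 100, 3000, 10
rates = rng.poisson(4.0, size=(n_bins, n_neurons)).astype(float)
hand_x = np.convolve(rates @ rng.normal(0, 0.05, n_neurons),
                     np.ones(5) / 5, mode="same")        # smoothed stand-in for hand position

X = build_lagged_design(rates, n_lags)
y = hand_x[n_lags - 1:]
split = 2000
w, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ w
print("held-out correlation:", round(np.corrcoef(pred, y[split:])[0, 1], 3))
```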

As our scientific work proceeded, we acquired a more advanced Harvey box from Plexon. Using it and some custom, real-time algorithms, our computer sampled and integrated the action potentials every 50 to 100 milliseconds. Software translated the output into instructions that could direct the actions of a robot arm in three-dimensional space. Only then did we try to use a BMI to control a robotic device. As we watched our multijointed robot arm accurately mimic Belle's arm movements on that inspiring afternoon in 2000, it was difficult not to ponder the implausibility of it all. Only 50 to 100 neurons randomly sampled from tens of millions were doing the needed work.

Later mathematical analyses revealed that the accuracy of the robot movements was roughly proportional to the number of neurons recorded, but this linear relation began to taper off as the number increased. By sampling 100 neurons we could create robot hand trajectories that were about 70 percent similar to those the monkeys produced. Further analysis estimated that to achieve 95 percent accuracy in the prediction of one-dimensional hand movements, as few as 500 to 700 neurons would suffice, depending on which brain regions we sampled. We are now calculating the number of neurons that would be needed for highly accurate three-dimensional movements. We suspect the total will again be in the hundreds, not thousands.
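The kind of analysis behind such estimates can be sketched as a "neuron-dropping" curve: fit the same decoder on random subsets of neurons of increasing size and watch how accuracy grows and then levels off. The version below runs on simulated data and is only meant to show the procedure.

```python
import numpy as np

# Neuron-dropping analysis: fit a linear decoder on random subsets of neurons of
# increasing size and record held-out accuracy. All data are simulated for illustration.

rng = np.random.default_rng(3)
n_neurons, n_bins = 100, 3000
rates = rng.poisson(4.0, size=(n_bins, n_neurons)).astype(float)
hand = rates @ rng.normal(0, 0.05, n_neurons) + rng.normal(0, 1.0, n_bins)

def decoder_accuracy(subset):
    X = np.column_stack([rates[:, subset], np.ones(n_bins)])
    w, *_ = np.linalg.lstsq(X[:2000], hand[:2000], rcond=None)
    return np.corrcoef(X[2000:] @ w, hand[2000:])[0, 1]

for size in (5, 10, 25, 50, 100):
    subset = rng.choice(n_neurons, size=size, replace=False)
    print(f"{size:3d} neurons -> correlation {decoder_accuracy(subset):.2f}")
```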

These results suggest that within each cortical area, the "message" defining a given hand movement is widely disseminated. This decentralization is extremely beneficial to the animal: in case of injury, the animal can fall back on a huge reservoir of redundancy. For us researchers, it means that a BMI neuroprosthesis for severely paralyzed patients may require sampling smaller populations of neurons than was once anticipated.

We continued working with Belle and our other monkeys after Belle's successful experiment. We found that as the animals perfected their tasks, the properties of their neurons changed--over several days or even within a daily two-hour recording session. The contribution of individual neurons varied over time. To cope with this "motor learning," we added a simple routine that enabled our model to reassess periodically the contribution of each neuron. Brain cells that ceased to influence the predictions significantly were dropped from the model, and those that became better predictors were added. In essence, we designed a way to extract from the brain a neural output for hand trajectory. This coding, plus our ability to measure neurons reliably over time, allowed our BMI to represent Belle's intended movements accurately for several months. We could have continued, but we had the data we needed.
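A minimal version of that periodic reassessment might look like the sketch below: every so often, refit the decoder on a recent window of data and keep only the neurons that still contribute meaningfully. The window length, retention fraction and data are illustrative assumptions, not the lab's actual routine.

```python
import numpy as np

# Periodic refit-and-prune: retrain the linear decoder on recent data and drop neurons
# whose contribution to the prediction has become negligible.

def refit_and_prune(rates_window, target_window, keep_fraction=0.8):
    """Refit a linear decoder on a recent window and return (weights, kept neuron indices)."""
    X = np.column_stack([rates_window, np.ones(len(rates_window))])
    w, *_ = np.linalg.lstsq(X, target_window, rcond=None)
    contrib = np.abs(w[:-1]) * rates_window.std(axis=0)    # rough per-neuron contribution
    cutoff = np.quantile(contrib, 1 - keep_fraction)
    kept = np.where(contrib >= cutoff)[0]
    return w, kept

rng = np.random.default_rng(4)
rates = rng.poisson(4.0, size=(1200, 60)).astype(float)
target = rates @ rng.normal(0, 0.05, 60)
weights, kept = refit_and_prune(rates[-600:], target[-600:])
print(f"{len(kept)} of 60 neurons retained after this refit")
```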

It is important to note that the gradual changing of neuronal electrical activity helps to give the brain its plasticity. The number of action potentials a neuron generates before a given movement changes as the animal undergoes more experiences. Yet the dynamic revision of neuronal properties does not represent an impediment for practical BMIs. The beauty of a distributed neural output is that it does not rely on a small group of neurons. If a BMI can maintain viable recordings from hundreds to thousands of single neurons for months to years and utilize models that can learn, it can handle evolving neurons, neuronal death and even degradation in electrode-recording capabilities.


Exploiting Sensory Feedback

Belle proved that a BMI can work for a primate brain. But could we adapt the interface to more complex brains? In May 2001 we began studies with three macaque monkeys at Duke. Their brains contain deep furrows and convolutions that resemble those of the human brain.

We employed the same BMI used for Belle, with one fundamental addition: now the monkeys could exploit visual feedback to judge for themselves how well the BMI could mimic their hand movements. We let the macaques move a joystick in random directions, driving a cursor across a computer screen. Suddenly a round target would appear somewhere on the screen. To receive a sip of fruit juice, the monkey had to position the cursor quickly inside the target--within 0.5 second--by rapidly manipulating the joystick.

The first macaque to master this task was Aurora, an elegant female who clearly enjoyed showing off that she could hit the target more than 90 percent of the time. For a year, our postdoctoral fellows Roy Crist and José Carmena recorded the activity of up to 92 neurons in five frontal and parietal areas of Aurora's cortex.

Once Aurora commanded the game, we started playing a trick on her. In about 30 percent of the trials we disabled the connection between the joystick and the cursor. To move the cursor quickly within the target, Aurora had to rely solely on her brain activity, processed by our BMI. After being puzzled, Aurora gradually altered her strategy. Although she continued to make hand movements, after a few days she learned she could control the cursor 100 percent of the time with her brain alone. In a few trials each day during the ensuing weeks Aurora didn't even bother to move her hand; she moved the cursor by just thinking about the trajectory it should take.
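The trial logic can be pictured with the toy sketch below: the cursor normally follows the joystick, but on roughly 30 percent of trials its position comes from the BMI prediction instead, and the reward depends on reaching the target within half a second. The joystick and decoder readers here are placeholders, not the lab's software.

```python
import random
import time

# Toy trial logic: cursor follows the joystick, except on ~30% of trials, where it
# follows the BMI prediction; juice is delivered only if the target is hit within 0.5 s.

BRAIN_CONTROL_FRACTION = 0.30
TIME_LIMIT_S = 0.5

def run_trial(read_joystick, read_bmi_prediction, target, reached):
    brain_control = random.random() < BRAIN_CONTROL_FRACTION
    start = time.monotonic()
    while time.monotonic() - start < TIME_LIMIT_S:
        cursor = read_bmi_prediction() if brain_control else read_joystick()
        if reached(cursor, target):
            return True, brain_control       # success: deliver fruit juice
    return False, brain_control              # timed out

# Dummy demonstration with fixed placeholder readings
result = run_trial(read_joystick=lambda: (0.0, 0.0),
                   read_bmi_prediction=lambda: (1.0, 1.0),
                   target=(1.0, 1.0),
                   reached=lambda c, t: c == t)
print("target acquired:", result[0], "| brain-control trial:", result[1])
```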

That was not all. Because Aurora could see her performance on the screen, the BMI made better and better predictions even though it was recording the same neurons. Although much more analysis is required to understand this result, one explanation is that the visual feedback helped Aurora to maximize the BMI's reaction to both brain and machine learning. If this proves true, visual or other sensory feedback could allow people to improve the performance of their own BMIs.

We observed another encouraging result. At this writing, it has been a year since we implanted the microwires in Aurora's brain, and we continue to record 60 to 70 neurons daily. This extended success indicates that even in a primate with a convoluted brain, our microwire arrays can provide long-term, high-quality, multichannel signals. Although this sample is down from the original 92 neurons, Aurora's performance with the BMI remains at the highest levels she has achieved.

We will make Aurora's tasks more challenging. In May we began modifying the BMI to give her tactile feedback for new experiments that are now beginning. The BMI will control a nearby robot arm fitted with a gripper that simulates a grasping hand. Force sensors will indicate when the gripper encounters an object and how much force is required to hold it. Tactile feedback--is the object heavy or light, slick or sticky?--will be delivered to a patch on Aurora's skin embedded with small vibrators. Variations in the vibration frequencies should help Aurora figure out how much force the robot arm should apply to, say, pick up a piece of fruit, and to hold it as the robot brings it back to her. This experiment might give us the most concrete evidence yet that a person suffering from severe paralysis could regain basic arm movements through an implant in the brain that communicated over wires, or wirelessly, with signal generators embedded in a limb.
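One plausible way to encode that feedback is a simple mapping from measured grip force to vibration frequency on the skin patch, as sketched below; the force range and frequency band are assumptions, not values from the Duke setup.

```python
# Illustrative mapping from gripper force to vibrator frequency for tactile feedback.
# The force range and frequency band below are assumptions, not measured values.

MIN_FORCE_N, MAX_FORCE_N = 0.0, 5.0        # assumed usable range of the gripper force sensor
MIN_FREQ_HZ, MAX_FREQ_HZ = 20.0, 200.0     # assumed comfortable vibrotactile band

def force_to_vibration_hz(force_newtons: float) -> float:
    """Map measured grip force linearly onto the vibration frequency of the skin patch."""
    clamped = max(MIN_FORCE_N, min(MAX_FORCE_N, force_newtons))
    span = (clamped - MIN_FORCE_N) / (MAX_FORCE_N - MIN_FORCE_N)
    return MIN_FREQ_HZ + span * (MAX_FREQ_HZ - MIN_FREQ_HZ)

for f in (0.0, 1.0, 2.5, 5.0):
    print(f"{f:.1f} N -> {force_to_vibration_hz(f):.0f} Hz")
```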

If visual and tactile sensations mimic the information that usually flows between Aurora's own arm and brain, long-term interaction with a BMI could possibly stimulate her brain to incorporate the robot into its representations of her body--schema known to exist in most brain regions. In other words, Aurora's brain might represent this artificial device as another part of her body. Neuronal tissue in her brain might even dedicate itself to operating the robot arm and interpreting its feedback.

To test whether this hypothesis has merit, we plan to conduct experiments like those done with Aurora, except that an animal's arm will be temporarily anesthetized, thereby removing any natural feedback information. We predict that after a transition period, the primate will be able to interact with the BMI just fine. If the animal's brain does meld the robot arm into its body representations, it is reasonable to expect that a paraplegic's brain would do the same, rededicating neurons that once served a natural limb to the operation of an artificial one.

Each advance shows how plastic the brain is. Yet there will always be limits. It is unlikely, for example, that a stroke victim could gain full control over a robot limb. Stroke damage is usually widespread and involves so much of the brain's white matter--the fibers that allow brain regions to communicate--that the destruction overwhelms the brain's plastic capabilities. This is why stroke victims who lose control of uninjured limbs rarely regain it.


Reality Check

Good news notwithstanding, we researchers must be very cautious about offering false hope to people with serious disabilities. We must still overcome many hurdles before BMIs can be considered safe, reliable and efficient therapeutic options. We have to demonstrate in clinical trials that a proposed BMI will offer much greater well-being while posing no risk of added neurological damage.

Surgical implantation of electrode arrays will always be of medical concern, for instance. Investigators need to evaluate whether highly dense microwire arrays can provide viable recordings without causing tissue damage or infection in humans. Progress toward dense arrays is already under way. Duke electronics technician Gary Lehew has designed ways to increase significantly the number of microwires mounted in an array that is light and easy to implant. We can now implant multiple arrays, each of which has up to 160 microwires and measures five by eight millimeters, smaller than a pinky fingernail. We recently implanted 704 microwires across eight cortical areas in a macaque and recorded 318 neurons simultaneously.

In addition, considerable miniaturization of electronics and batteries must occur. We have begun collaborating with José Carlos Príncipe of the University of Florida to craft implantable microelectronics that will embed in hardware the neuronal pattern recognition we now do with software, thereby eventually freeing the BMI from a computer. These microchips will thus have to send wireless control data to robotic actuators. Working with Patrick D. Wolf's lab at Duke, we have built the first wireless "neurochip" and beta-tested it with Aurora. Seeing streams of neural activity flash on a laptop many meters away from Aurora--broadcast via the first wireless connection between a primate's brain and a computer--was a delight.

More and more scientists are embracing the vision that BMIs can help people in need. In the past year, several traditional neurological laboratories have begun to pursue neuroprosthetic devices. Preliminary results from Arizona State University, Brown University and the California Institute of Technology have recently appeared. Some of the studies provide independent confirmation of the rat and monkey studies we have done. Researchers at Arizona State basically reproduced our 3-D approach in owl monkeys and showed that it can work in rhesus monkeys too. Scientists at Brown enabled a rhesus macaque monkey to move a cursor around a computer screen. Both groups recorded 10 to 20 neurons or so per animal. Their success further demonstrates that this new field is progressing nicely.

The most useful BMIs will exploit hundreds to a few thousand single neurons distributed over multiple motor regions in the frontal and parietal lobes. Those that record only a small number of neurons (say, 30 or fewer) from a single cortical area would never provide clinical help, because they would lack the excess capacity required to adapt to neuronal loss or changes in neuronal responsiveness. The other extreme--recording millions of neurons using large electrodes--would most likely not work either, because it might be too invasive.

Noninvasive methods, though promising for some therapies, will probably be of limited use for controlling prostheses with thoughts. Scalp recording, called electroencephalography (EEG), is a noninvasive technique that can drive a different kind of brain-machine interface, however. Niels Birbaumer of the University of Tübingen in Germany has successfully used EEG recordings and a computer interface to help patients paralyzed by severe neurological disorders learn how to modulate their EEG activity to select letters on a computer screen, so they can write messages. The process is time-consuming but offers the only way for these people to communicate with the world. Yet EEG signals cannot be used directly for limb prostheses, because they depict the average electrical activity of broad populations of neurons; it is difficult to extract from them the fine variations needed to encode precise arm and hand movements.

Despite the remaining hurdles, we have plenty of reasons to be optimistic. Although it may be a decade before we witness the operation of the first human neuroprosthesis, all the amazing possibilities crossed our minds that afternoon in Durham as we watched the activity of Belle's neurons flashing on a computer monitor. We will always remember our sense of awe as we eavesdropped on the processes by which the primate brain generates a thought. Belle's thought to receive her juice was a simple one, but a thought it was, and it commanded the outside world to achieve her very real goal.


--------------------------------------------------------------------------------
Miguel A. L. Nicolelis and John K. Chapin have collaborated for more than a decade. Nicolelis, a native of Brazil, received his M.D. and Ph.D. in neurophysiology from the University of São Paulo. After postdoctoral work at Hahnemann University, he joined Duke University, where he now co-directs the Center for Neuroengineering and is professor of neurobiology, biomedical engineering, and psychological and brain sciences. Chapin received his Ph.D. in neurophysiology from the University of Rochester and has held faculty positions at the University of Texas and the MCP Hahnemann University School of Medicine (now Drexel University College of Medicine). He is currently professor of physiology and pharmacology at the State University of New York Downstate Medical Center.



More to Explore:
Real-Time Prediction of Hand Trajectory by Ensembles of Cortical Neurons in Primates. J. Wessberg, C. R. Stambaugh, J. D. Kralik, P. D. Beck, J. K. Chapin, J. Kim, S. J. Biggs, M. A. Srinivasan and M.A.L. Nicolelis in Nature, Vol. 408, pages 361-365; November 16, 2000

Actions from Thoughts. M.A.L. Nicolelis in Nature, Vol. 409, pages 403-407; January 18, 2001

Advances in Neural Population Coding. Edited by M.A.L. Nicolelis. Progress in Brain Research, Vol. 130. Elsevier, 2001

Neural Prostheses for Restoration of Sensory and Motor Function. Edited by J. K. Chapin and K. A. Moxon. CRC Press, 2001

Real-Time Control of a Robot Arm Using Simultaneously Recorded Neurons in the Motor Cortex. J. K. Chapin, K. A. Moxon, R. S. Markowitz and M.A.L. Nicolelis in Nature Neuroscience, Vol. 2, pages 664-670; July 1999


#2 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,054 posts
  • 2,002
  • Location:Wausau, WI

Posted 19 February 2004 - 12:00 AM

Here is a story about a fellow who has already been a "cyborg." Many of you may have already heard of Kevin Warwick.


From - The Times of India

'In the future, humans will become cyborgs'
PIALI BANERJEE

TIMES NEWS NETWORK | Tuesday, January 20, 2004

MUMBAI: How does it feel to shake hands with an ex-cyborg, a man who has gone through the experience of being part-human part-robot in the past?

Well, the experience is tempered down somewhat when you hear Kevin Warwick joking about his own experiments with artificial intelligence at the University of Reading, UK.

"I've been a cyborg for three months," he says, rolling his eyes and adding a la Arnold Schwarzenegger in 'Terminator', "And I'll be back."

The story so far goes that Kevin Warwick, who is currently visiting Mumbai, shocked the scientific world so much when he inserted a silicon chip in his forearm and connected himself to a computer in 1998, that when he actually connected his nervous system to a robot via the Internet to turn himself into a cyborg four years later, the world simply sat back to watch his experiments.

Was the experiment successful? "Well, it definitely let me realise a few of my dreams of communicating directly without speaking," he replies.

"My nervous system was connected, from where I was in New York , to a robot in UK through the Internet, and I could actually move the robot's hand by moving my hand. The experience worked vice versa too. In the sense, that when the robot gripped any object, I could sense the pressure thousands of miles away."

Mr Warwick roped in his wife, Irena, to test his theory that communication between two people need not be through the old-fashioned way of speech.


"She had electrodes put into her hand, so that every time I moved, the signal from my nervous system reached hers directly," he explains. "It was exciting to feel my nerves go 'ting ting' each time she moved, miles and miles away from me."


Mr Warwick's current project involves developing a robot that has five senses: besides vision and hearing, it has a radar nose, an infra-red-sensitive lip and an ultrasound-sensitive forehead.

"Most intelligent robots now work with two senses, I want to see whether this one can react better to the outside world, using five senses," he says.

As far as human experimentation goes, a patient with multiple sclerosis has volunteered to have his brain connected to a computer, which will help him to do simple motor functions around the house and drive a car.

"I will be experimenting personally with a brain transplant after 10 years," he adds. "Technically I know it's possible to communicate what we're thinking directly into the brain of another person. This implant may help me prove it."

..........



Read the full article here


#3 isaac

  • Guest
  • 1 posts
  • 0

Posted 10 May 2004 - 04:39 AM

http://www.newscient...p?id=ns99992078
http://www.kevinwarwick.org.uk/
http://forms.theregi... cyborg&x=0&y=0
http://www.google.co.....ptain cyborg"
http://www.google.co....."&btnG=Search

I don't know whether or not the claims made about Kevin Warwick are valid. However, I tend to be skeptical of anyone out for media attention. He certainly raises interesting questions - but it seems unlikely that he's doing good science, given his suspect methods.

Isaac Z. Schlueter
http://isaac.beigetower.org

#4 ocsrazor

  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 10 May 2004 - 09:03 PM

Isaac,

The view from inside the field is that Warwick is not a particularly capable scientist. Given the great lengths he went to in implanting himself with devices, one would hope he would at least have done an interesting experiment with them to produce some knowledge about how to create more effective implants. Unfortunately, his work has produced very little, if any, new knowledge.

#5 Lazarus Long

  • Topic Starter
  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 10 May 2004 - 11:07 PM

I began this thread almost two years ago, and a lot has happened since.

Should we merge its contents into one of the other threads that are more comprehensive and current, Peter?

#6

  • Lurker
  • 0

Posted 11 May 2004 - 09:36 AM

I'm not impressed with Warwick's self-experimentation; I think we need far more animal-computer brain links. Of course, I know very little about this field of research, but it only makes sense that we need a good understanding of how chips and neurons communicate in order to facilitate the sharing of higher-level thoughts.

http://news.bbc.co.u...lth/2843099.stm

Did anyone read this article? Most likely, but I just came across it and it sounds quite interesting.

#7 bacopa

  • Validating/Suspended
  • 2,223 posts
  • 159
  • Location:Boston

Posted 11 May 2004 - 03:29 PM

Interesting look into the future of chip-insertion technology; certainly most transhumanists would like to see that technology accelerated much further than where it is now.

Electric stimulation has already been used to help patients with damaged spinal cords walk. "But the walking they do is very, very poor by normal standards," says Donaldson. "And my view is that no foreseeable technology is going to get paraplegics walking any better than has already been done."

I hope he'll one day have to eat these words...

#8 ocsrazor

  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 11 May 2004 - 03:34 PM

Yes Laz, definitely agree we have already covered much of the ground being mentioned here. I will try to reorganize any of the interesting threads in BCI into an archive of good information.

For example, Cosmos, I covered the Berger hippocampus chip in detail last year when the news broke. Look around in my neural interfacing thread for more info. In short, it is a good start: his group is very good at large-scale neuro-modeling, but they are very inexperienced at neural interfacing.

Best,
Peter

#9 Lazarus Long

  • Topic Starter
  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 15 January 2008 - 01:23 PM

Here is a thread that has languished, but the subject has not. Since the last post (a lament about how much had happened in so short a time) even more has happened.

Here is a report that shows cybernetics achieving another milestone. A relatively complete interface between a monkey and a robot. You will also notice that the names of the original researchers are appearing again. Clearly they are making great strides.


Monkey’s Thoughts Propel Robot, a Step That May Help Humans

By SANDRA BLAKESLEE
Published: January 15, 2008

If Idoya could talk, she would have plenty to boast about. On Thursday, the 12-pound, 32-inch monkey made a 200-pound, 5-foot humanoid robot walk on a treadmill using only her brain activity. She was in North Carolina, and the robot was in Japan.

It was the first time that brain signals had been used to make a robot walk, said Dr. Miguel A. L. Nicolelis, a neuroscientist at Duke University whose laboratory designed and carried out the experiment. In 2003, Dr. Nicolelis’s team proved that monkeys could use their thoughts alone to control a robotic arm for reaching and grasping.

These experiments, Dr. Nicolelis said, are the first steps toward a brain machine interface that might permit paralyzed people to walk by directing devices with their thoughts. Electrodes in the person’s brain would send signals to a device worn on the hip, like a cell phone or pager, that would relay those signals to a pair of braces, a kind of external skeleton, worn on the legs.

“When that person thinks about walking,” he said, “walking happens.”

Richard A. Andersen, an expert on such systems at the California Institute of Technology in Pasadena who was not involved in the experiment, said that it was “an important advance to achieve locomotion with a brain machine interface.”

Another expert, Nicho Hatsopoulos, a professor at the University of Chicago, said that the experiment was “an exciting development. And the use of an exoskeleton could be quite fruitful.”

A brain machine interface is any system that allows people or animals to use their brain activity to control an external device. But until ways are found to safely implant electrodes into human brains, most research will remain focused on animals. In preparing for the experiment, Idoya was trained to walk upright on a treadmill. She held onto a bar with her hands and got treats — raisins and Cheerios — as she walked at different speeds, forward and backward, for 15 minutes a day, 3 days a week, for 2 months.

Meanwhile, electrodes implanted in the so-called leg area of Idoya’s brain recorded the activity of 250 to 300 neurons that fired while she walked. Some neurons became active when her ankle, knee and hip joints moved. Others responded when her feet touched the ground. And some fired in anticipation of her movements.

To obtain a detailed model of Idoya’s leg movements, the researchers also painted her ankle, knee and hip joints with fluorescent stage makeup and, using a special high speed camera, captured her movements on video. The video and brain cell activity were then combined and translated into a format that a computer could read. This format is able to predict with 90 percent accuracy all permutations of Idoya’s leg movements three to four seconds before the movement takes place.
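The general recipe here can be sketched as aligning video-derived joint angles with binned spike counts and fitting a model that predicts the angles a few seconds ahead; the bin width, lead time and simulated data below are illustrative only.

```python
import numpy as np

# Sketch of forward prediction: align video-derived joint angles with binned spike counts
# and fit a linear model that predicts the angle a few seconds in advance. Simulated data.

rng = np.random.default_rng(5)
bin_s, lead_s = 0.1, 3.0                    # 100 ms bins, predict 3 s ahead
lead_bins = int(lead_s / bin_s)
n_bins, n_neurons = 6000, 280
spikes = rng.poisson(3.0, size=(n_bins, n_neurons)).astype(float)     # binned spike counts
joint_angle = np.convolve(spikes @ rng.normal(0, 0.03, n_neurons),
                          np.ones(80) / 80, mode="same")              # slow stand-in for an ankle angle

# Predict the joint angle lead_bins into the future from the current bin of activity.
X = np.column_stack([spikes[:-lead_bins], np.ones(n_bins - lead_bins)])
y = joint_angle[lead_bins:]
split = 4000
w, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ w
print("held-out correlation:", round(np.corrcoef(pred, y[split:])[0, 1], 3))
```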

On Thursday, an alert and ready-to-work Idoya stepped onto her treadmill and began walking at a steady pace with electrodes implanted in her brain. Her walking pattern and brain signals were collected, fed into the computer and transmitted over a high-speed Internet link to a robot in Kyoto, Japan. The robot, called CB for Computational Brain, has the same range of motion as a human. It can dance, squat, point and “feel” the ground with sensors embedded in its feet, and it will not fall over when shoved.

Designed by Gordon Cheng and colleagues at the ATR Computational Neuroscience Laboratories in Kyoto, the robot was chosen for the experiment because of its extraordinary ability to mimic human locomotion. As Idoya’s brain signals streamed into CB’s actuators, her job was to make the robot walk steadily via her own brain activity. She could see the back of CB’s legs on an enormous movie screen in front of her treadmill and received treats if she could make the robot’s joints move in synchrony with her own leg movements.

As Idoya walked, CB walked at exactly the same pace. Recordings from Idoya’s brain revealed that her neurons fired each time she took a step and each time the robot took a step. “It’s walking!” Dr. Nicolelis said. “That’s one small step for a robot and one giant leap for a primate.”

The signals from Idoya’s brain sent to the robot, and the video of the robot sent back to Idoya, were relayed in less than a quarter of a second, he said. That was so fast that the robot’s movements meshed with the monkey’s experience. An hour into the experiment, the researchers pulled a trick on Idoya. They stopped her treadmill. Everyone held their breath. What would Idoya do?

“Her eyes remained focused like crazy on CB’s legs,” Dr. Nicolelis said.

She got treats galore. The robot kept walking. And the researchers were jubilant.

When Idoya’s brain signals made the robot walk, some neurons in her brain controlled her own legs, whereas others controlled the robot’s legs. The latter set of neurons had basically become attuned to the robot’s legs after about an hour of practice and visual feedback. Idoya cannot talk but her brain signals revealed that after the treadmill stopped, she was able to make CB walk for three full minutes by attending to its legs and not her own.

Vision is a powerful, dominant signal in the brain, Dr. Nicolelis said. Idoya’s motor cortex, where the electrodes were implanted, had started to absorb the representation of the robot’s legs — as if they belonged to Idoya herself.

In earlier experiments, Dr. Nicolelis found that 20 percent of cells in a monkey’s motor cortex were active only when a robotic arm moved. He said it meant that tools like robotic arms and legs could be assimilated via learning into an animal’s body representation.

In the near future, Idoya and other bipedal monkeys will be getting more feedback from CB in the form of microstimulation to neurons that specialize in the sense of touch related to the legs and feet. When CB’s feet touch the ground, sensors will detect pressure and calculate balance. When that information goes directly into the monkeys’ brains, Dr. Nicolelis said, they will have the strong impression that they can feel CB’s feet hitting the ground.

At that point, the monkeys will be asked to make CB walk across a room by using just their thoughts.

“We have shown that you can take signals across the planet in the same time scale that a biological system works,” Dr. Nicolelis said. “Here the target happens to be a robot. It could be a crane. Or any tool of any size or magnitude. The body does not have a monopoly for enacting the desires of the brain.”

To prove this point, Dr. Nicolelis and his colleague, Dr. Manoel Jacobsen Teixeira, a neurosurgeon at the Sirio-Lebanese Hospital in São Paulo, Brazil, plan to demonstrate by the end of the year that humans can operate an exoskeleton with their thoughts.

It is not uncommon for people to have their arms ripped from their shoulder sockets during a motorcycle or automobile accident, Dr. Nicolelis said. All the nerves are torn, leaving the arm paralyzed but in chronic pain.

Dr. Teixeira is implanting electrodes on the surface of these patients’ brains and stimulating the underlying region where the arm is represented. The pain goes away. By pushing the same electrodes slightly deeper in the brain, Dr. Nicolelis said, it should be possible to record brain activity involved in moving the arm and intending to move the arm. The patients’ paralyzed arms will then be placed into an exoskeleton or shell equipped with motors and sensors.

“They should be able to move the arm with their thoughts,” he said. “This is science fiction coming to life.”


#10 basho

  • Guest
  • 774 posts
  • 1
  • Location:oʎʞoʇ

Posted 15 January 2008 - 02:07 PM

... another milestone. A relatively complete interface between a monkey and a robot. Clearly they are making great strides.



Here's the video. Incredible stuff!





