Brain-machine interfaces charge ahead.

A diagram shows the major components of a brain-machine interface.
When researchers finally got the technology to sequence the human genome in the early 1990s, they immediately opened new discussions with ethicists and social scientists to figure out the moral implications of gaining this unprecedented level of knowledge. Would whole genome information change our idea of who we are as a species? Would it lead to a new eugenics movement, destroy medical privacy, or undermine the sacred legal principle of equal protection? Few people had pondered such questions before the project began.

When researchers finally got the technology to develop practical interfaces between human brains and computers in the late 1990s, they did not need to open new discussions with ethicists and social scientists; they simply joined a long-running debate. Indeed, by the time the first human subject was wired to a digital interface in 1998, academics, science fiction authors, and ordinary citizens had already been pondering the implications for decades.

There is little danger that the Borg, the Matrix, or even Tron will take over the world anytime soon, but in the past few years researchers have gotten much closer to perfecting the central technology of these fantasies. Using multi-electrode arrays, cultured neurons, and increasingly sophisticated computer algorithms, these pioneering neuroscientists are now developing the first generation of medically useful brain-machine interfaces.

Does this void the warranty?
A tiny but densely packed multi-electrode array forms the basis of several brain-machine interfaces now in development, such as this BrainGate array developed by Cyberkinetics. (Source: Cyberkinetics)
Over the years, neuroscientists have built a wide range of connections between biological brains and electronics, ranging from the simple electrodes and noninvasive EEG probes that are staples of brain science to complex, chronically implanted multi-electrode arrays. In 1998, Kevin Warwick, PhD, professor in the school of systems engineering at the University of Reading in Reading, UK, took this work to the next level. He had himself wired.

“[The interface] was wired into the median nerves of my left arm, in a two-hour neurosurgical operation. I had wires coming out my arm which we could then plug into the computer,” says Warwick. Over the next three months, Warwick and his colleagues studied the data coming from his implanted probe, decoded the motor and sensory signals, and eventually used the interface to control a robotic arm and receive new sensory signals.

As anyone who has plugged a new peripheral device into a computer will understand, the hard part of the work was getting the driver software to operate correctly. “The first six weeks we were looking at purely monitoring signals, in my case what I did was move my hand, and we were picking up the neural signals, the brain signals if you like, that did that, and sending them off to the computer,” says Warwick. After extensive analysis, the researchers were eventually able to read Warwick’s intentions and use them to drive a robotic arm.

Once the driver software was working, adding new capabilities was trivial, thanks to the brain’s uncanny capacity to adapt. For example, when the team connected an ultrasonic range finder to Warwick’s interface, he immediately gained the ability to sense objects with sonar. “I had a pretty accurate indication ... of how far objects were away from me,” says Warwick, adding that “In a way it was quite disappointing because my brain ... was quite happy with it, no problem at all, as though it was always there.”

In the hope of improving the algorithms for brain-machine interfaces, Warwick and his colleagues are now taking a “bottom-up” approach as well, wiring much simpler biological brains into computers so they can study the control signals in greater detail. One recent project, for example, produced a robot that roams around the lab under the control of a dish of cultured rat neurons. Sensors on the robot feed data to the neurons, which respond with signals that change the robot’s behavior.

With this small, accessible “brain,” the researchers can probe the neurons’ activities much more thoroughly than they could in intact animal or human systems. “We can actually look at what’s going on under the microscope with the physical body doing certain things, very simple things that we can understand as to what position it’s in and what sensory inputs are coming in, [it’s] as though we’re inside what’s going on,” says Warwick.
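The loop in the culture-driven robot — sensors stimulate the neurons, and the neurons' response changes the robot's behavior — can be sketched as a simple closed loop. The `respond` function below is a stand-in for the dish of cultured neurons; its shape is an assumption made purely for illustration.

```python
# A toy closed loop in the spirit of the culture-driven robot: sensor input
# is converted into a (stand-in) neural response, which steers the robot.
# respond() is a placeholder for the cultured neurons, not a model of them.

def respond(distance_to_wall):
    # Stand-in for the dish of neurons: closer obstacles -> stronger response.
    return max(0.0, 1.0 - distance_to_wall / 100.0)

def steer(response):
    # A strong response turns the robot away; a weak one keeps it rolling.
    return "turn" if response > 0.5 else "forward"

def run(start_cm=90.0):
    """Drive toward the wall until the 'neurons' signal a turn."""
    position = start_cm
    while steer(respond(position)) == "forward":
        position -= 10.0  # advance one step
    return position

print(run())  # the robot stops advancing 40 cm from the wall
```

The point of the real experiment, of course, is that the response function is not written by anyone: it emerges from the living network, and the researchers can watch it form under the microscope.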

Besides advancing basic research on brain-machine interactions, the bionic petri dish could also have practical applications. “The brain clearly can move any piece of technology around,” says Warwick, adding that “one possibility that’s really quite interesting would be in terms of distant planets, that you could send the cultures up on a spaceship and if something happened, then well it doesn’t matter too much.”

Device not recognized
A culture of rat neurons wired with electrodes forms the central processor for an autonomous robot. (Source: Kevin Warwick)
Sending expendable cyborgs to explore other planets could raise some novel ethical questions, but before addressing those, the field still needs to sort out some pesky technical details. One major challenge is that to date, each research team has developed its own custom-built interface algorithms.

While this parallel approach has produced several working prototypes of brain-machine interfaces, it has also rendered them incompatible with each other. That could be particularly troubling in medical applications, where a patient using one type of system might need a new operation to upgrade to a different system.

To address that, Lakshminarayan Srinivasan, PhD, a research fellow in neurosurgery at the Massachusetts General Hospital in Boston, has been working to standardize brain-machine interface algorithms. In a recent paper, he and his colleagues described a generic strategy that should work with a wide range of brains, probes, and devices. “The algorithmic structure provides a conceptual abstraction and mathematical prescription that is fairly flexible,” says Srinivasan. He adds that the system uses an approach similar to object-oriented design, a modular style of software development that already dominates conventional computer programming.

Any truly generic interface will have to be able to interpret at least two radically different types of brain data. Many researchers have developed brain-machine interfaces that read neural signals with chronically embedded multi-electrode arrays, but many others are focusing on systems that use non-invasive EEG-type probes on the outside of the head. The two designs produce very different signals.
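The modular, object-oriented structure Srinivasan describes can be sketched in miniature: a decoder written against a common probe interface works unchanged whether the signal comes from an implanted array or a scalp EEG. The classes, signals, and averaging step below are illustrative assumptions, not Srinivasan's published formulation.

```python
# Sketch of a modular brain-machine interface: one decoder, many probes.
# Each probe exposes the same read() method, so the decoder never needs
# to know which hardware produced the signal. All values are invented.

class ElectrodeArrayProbe:
    """Implanted multi-electrode array: per-channel spike counts."""
    def read(self):
        return [3, 0, 7, 1]  # spikes per channel in the last window

class EEGProbe:
    """Non-invasive scalp EEG: smoother, lower-resolution voltages."""
    def read(self):
        return [0.2, -0.1, 0.4]

def decode(probe):
    # The decoder assumes only read() -> list of numbers, so either probe
    # (or a future design) can be swapped in without touching this code.
    signal = probe.read()
    return sum(signal) / len(signal)

print(decode(ElectrodeArrayProbe()))  # mean spike count
print(decode(EEGProbe()))             # mean voltage
```

Swapping probes without rewriting the decoder is exactly the kind of interchangeability that would spare a patient a second operation when upgrading systems.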

So far, Srinivasan and his colleagues have tested their generic algorithm only with simulated neuronal signals. Srinivasan readily concedes that the system needs further development: “The gold standard comparison between design strategies will be a randomized trial of closed-loop device performance in patients with the specific diseases being addressed. There will be several additional opportunities to comprehensively validate alternate algorithms between simulated data and human trials.”

Getting interfaces working reliably will also require more understanding of the brain’s own control language. “It is still an open question of how the brain, and its constituent neural circuits, encode information and how to optimally read out this encoding. So in that respect there is still a lot of research needed to identify the best decoding strategies and develop general principles for ‘brain reading,’” says Paul Sajda, PhD, director of the laboratory for intelligent imaging and neural computing at Columbia University in New York.

Channel surfing, reloaded
The need for more research has not prevented scientists from achieving a few practical breakthroughs, though. In addition to remotely controlling robot arms and other devices, investigators have harnessed human brainpower to accelerate some computer-based tasks.

At Columbia, Sajda and his colleagues are now using a non-invasive brain-machine interface to search through large databases of images. As a subject watches the images flicker past ten times faster than a conventional movie, a set of EEG probes reads signals from the volunteer’s brain. It’s much faster than searching the images manually, and much more reliable than purely computer-based search algorithms.

“Currently the neural signal we read out is interpreted as a binary signal: ‘yes’ the image is of interest or ‘no’ it is not. The signal to noise ratio ... and the specificity of currently identified signals is so low that it is difficult to obtain more meaningful information from subtle changes in these signals,” says Sajda. Despite these limitations, the system promises to be quite useful. Among other applications, it could be used to screen hours of surveillance video to identify suspicious activity and people quickly.
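The binary read-out Sajda describes amounts to scoring each rapidly presented image and thresholding the score. The sketch below makes that concrete; the scores and threshold are invented for illustration and stand in for the EEG-derived signal.

```python
# Sketch of binary EEG triage: each image in the rapid stream gets one
# EEG-derived interest score, and a threshold splits "yes" from "no".
# Scores and threshold are illustrative assumptions.

def triage(scores, threshold=0.6):
    """Return indices of images flagged as 'of interest'."""
    return [i for i, s in enumerate(scores) if s >= threshold]

# One score per image as the stream flickers past.
scores = [0.1, 0.8, 0.3, 0.9, 0.55]
print(triage(scores))  # -> [1, 3]: only these go on for closer review
```

The payoff is the winnowing: a human reviewer examines only the handful of flagged images rather than the whole stream.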

Sajda’s system also highlights the potential for using non-invasive probes in brain-machine interfaces. Many researchers in the field contend that EEG signals lack the resolution to provide fine-grained control over robotic arms or computer cursors, but Sajda says some recent results suggest otherwise: “I think much more is possible with the non-invasive approaches than what we have seen thus far.”

Regardless of the specific hardware they use, brain-machine interfaces could form the basis of groundbreaking new therapies, especially for patients who are paralyzed or otherwise disabled. Some healthy technophiles might also find the idea of a direct brain interface attractive. “The possibilities of linking into the network, in terms of communication particularly, I think are tremendous,” says Warwick.

It’s certainly something to ponder.

1. Srinivasan, L., Eden, U.T., Mitter, S.K., and Brown, E.N. “General purpose filter design for neural prosthetic devices,” Journal of Neurophysiology, 98:2456-2475.

About the Author
Originally trained as a microbiologist, Alan Dove has been writing about science and its interfaces with industry and government for more than a decade.

This article was published in Bioscience Technology magazine: Vol. 32, No. 10, October, 2008, pp. 1, 12-14.