Moving the Mouse With the Mind
Braingate technology allows patients with tetraplegia to move a computer cursor by thinking about it.
What Is the Goal of the Braingate Trial?
In the ongoing Braingate clinical trials, which began in 2004, we are hoping to find a way to allow people with tetraplegia, whether it results from stroke, injury, or neurodegenerative disease (eg, amyotrophic lateral sclerosis [ALS]), to control an external device simply by thinking about moving their own hand. Ultimately, we would like to develop the ability for intuitive, rapid control of an external device by thought.
What Are Some Results You’ve Seen?
With the device implanted, subjects have been able to control an electronic cursor, a hand prosthesis, and a robotic arm. Recently, someone with a cervical spinal cord injury achieved limited control of their own hand and arm. Instead of the neural signal traveling through the peripheral nervous system to the muscles, it was transmitted from the motor cortex to external amplifiers that processed the signals and then appropriately activated functional electric stimulators implanted in the patient's arm and hand muscles.
How Does the Braingate Device Work?
The core technology uses a microelectrode array, implanted in the motor cortex of a person who is unable to move their limbs, to transmit electric signals from the motor cortex through a cable that exits the skull and feeds into a pedestal attached to the skull's exterior. The pedestal, in turn, is connected to an amplifier that transmits the signal to computer processors that convert and send the signal to an external device, producing an effect in real time. The pedestal is similar in size and shape to the exterior hardware of a cochlear implant.
At the heart of this technology is the ability to record high-resolution neurophysiologic data from the cortex, sometimes from single neurons. With the Utah microarray, it is possible to record from a single neuron or from dozens or hundreds of neurons. This microarray consists of 100 electrodes on a 4 mm × 4 mm platform, providing 96 connections and allowing recording from up to 96 single nerve cells. The electrodes are connected by microwires to the pedestal. The microwires are similar to those used in single-fiber EMG, although what is recorded with the Utah microarray (Figure) are extracellular electric discharges within 50 to 150 μm of a particular nerve cell or an ensemble of neurons. Changes in the firing rate or pattern of firing for those groups of neurons are the message, so to speak, to move a part of the anatomy. By processing those signals and changes in signals from the cells or ensembles of cells and rapidly comparing them with signals known to relate to specific movements, we can replicate intended movements externally.
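The principle of comparing firing-rate changes with signals known to relate to specific movements can be illustrated with a toy population-vector decoder. Everything below (cosine tuning, the preferred directions, the baseline rates) is a synthetic illustration of the idea, not the actual Braingate decoder:

```python
import math

# Toy sketch of direction decoding: each neuron's firing rate varies with
# intended movement direction, and summing each neuron's (hypothetical)
# "preferred direction", weighted by how far its rate rises above baseline,
# recovers the intended direction (a classic population-vector scheme).

# Hypothetical preferred directions (radians) and baseline rates (Hz)
preferred = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
baseline = [10.0, 10.0, 10.0, 10.0]

def firing_rate(pref, intended, base=10.0, depth=8.0):
    """Cosine tuning: rate peaks when intended movement matches preferred direction."""
    return base + depth * math.cos(intended - pref)

def decode(rates):
    """Population vector: baseline-subtracted rates weight each preferred direction."""
    x = sum((r - b) * math.cos(p) for r, b, p in zip(rates, baseline, preferred))
    y = sum((r - b) * math.sin(p) for r, b, p in zip(rates, baseline, preferred))
    return math.atan2(y, x)

intended = math.radians(30)                       # the movement being imagined
rates = [firing_rate(p, intended) for p in preferred]
decoded = decode(rates)
print(math.degrees(decoded))                      # ≈ 30
```

With noiseless cosine tuning the decoded angle matches the intended one exactly; in real recordings the comparison must be done statistically across many noisy neurons.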
Animal studies in nonhuman primates have been used to record signals and correlate them with limb movements, giving us the ability to translate which signals indicate movement. We hope that will allow us to help patients with paralysis use the signals of their brain to operate external devices.
How Do You Know When Braingate Is Working?
In our trials, we can start with the algorithms from animal studies, but we also have to record from each patient to adapt to their unique brain signals. We use a classic measure, the center-out test. In what is essentially a computer game, the patient focuses on the center of a circle, moves the cursor from the center to a target on the edge of the circle that "lights up" at random, and then moves the cursor back to the center.
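The geometry of that task is simple to sketch. The target radius, target count, and acquisition tolerance below are assumptions for illustration, not the trial's protocol parameters:

```python
import math

# Minimal sketch of the center-out task layout described above.
RADIUS = 10.0          # cm, distance from center to each target (assumed)
N_TARGETS = 8          # targets spaced evenly around the circle (assumed)
TOLERANCE = 1.0        # cm, how close the cursor must get to "hit" (assumed)

# Targets evenly spaced on the circle's edge
targets = [(RADIUS * math.cos(2 * math.pi * k / N_TARGETS),
            RADIUS * math.sin(2 * math.pi * k / N_TARGETS))
           for k in range(N_TARGETS)]

def acquired(cursor, target, tol=TOLERANCE):
    """True when the cursor has reached the lit target."""
    return math.hypot(cursor[0] - target[0], cursor[1] - target[1]) <= tol

print(acquired((9.5, 0.2), targets[0]))   # True: within 1 cm of the 0-degree target
```

Scoring a session then reduces to counting how many randomly lit targets the decoded cursor acquires, and how quickly.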
Our decoders and filters are built by asking people to pretend or imagine that they are moving their hand or the mouse. This allows us to correlate the intended movement with the signals received by the amplifier and processed by the computer.
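A hedged sketch of that calibration idea: if a participant imagines moving at known cued velocities while firing rates are recorded, ordinary least squares can recover a rate-to-velocity mapping. The tuning gain, baseline, and noise level below are invented for illustration; this is not Braingate's actual calibration pipeline:

```python
import random

# Hypothetical single-neuron calibration: cue velocities, record noisy rates,
# fit the linear tuning by ordinary least squares, then invert it to decode.
random.seed(0)
TRUE_GAIN, TRUE_BASELINE = 0.4, 12.0   # assumed tuning: rate = 12 + 0.4 * velocity

cued_velocity = list(range(-10, 11))                   # cm/s cues shown on screen
rates = [TRUE_BASELINE + TRUE_GAIN * v + random.gauss(0, 0.3)
         for v in cued_velocity]                       # noisy recorded rates (Hz)

# Ordinary least squares for slope (gain) and intercept (baseline)
n = len(cued_velocity)
mean_v = sum(cued_velocity) / n
mean_r = sum(rates) / n
gain = (sum((v - mean_v) * (r - mean_r) for v, r in zip(cued_velocity, rates))
        / sum((v - mean_v) ** 2 for v in cued_velocity))
fitted_baseline = mean_r - gain * mean_v

# Decode: invert the fitted tuning to estimate intended velocity from a new rate
new_rate = 16.0
decoded_velocity = (new_rate - fitted_baseline) / gain
print(f"fitted gain={gain:.2f}, decoded velocity = {decoded_velocity:.1f} cm/s")
```

Real decoders fit many neurons at once and track two or more velocity dimensions, but the correlate-then-invert structure is the same.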
How Much of the Signal From the Motor Cortex Is Measured?
Even if we use 2 Utah microarrays, which we sometimes do, much of the motor cortex is not being sampled. Considering its 3-dimensional size and shape (many centimeters mediolaterally, more than 1 cm anteroposteriorly) and its sulci, the motor cortex is far larger than what the arrays cover. In short, we are getting a sparse sample of what is available to be recorded. Not surprisingly, this motivates many researchers to work on ever-better microelectrode arrays so that we may eventually be able to record from thousands of neurons.
How Much Data Is Gathered and How Is the Data Used?
Currently we record 30 kilosamples per second per channel (ie, at 30 kHz). Depending on how many bits of data are recorded, that amounts to 15 to 30 gigabytes per hour, or perhaps a half terabyte per day if we record continuously.
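Those figures can be checked with back-of-envelope arithmetic. The channel count and sample width below are assumptions (one 96-channel array, 16-bit samples); the result lands inside the quoted range:

```python
# Back-of-envelope check of the recording data rate described above.
SAMPLES_PER_SEC = 30_000      # 30 kilosamples/s per channel
CHANNELS = 96                 # one Utah array (assumed)
BYTES_PER_SAMPLE = 2          # 16-bit samples (assumed)

bytes_per_sec = SAMPLES_PER_SEC * CHANNELS * BYTES_PER_SAMPLE
gb_per_hour = bytes_per_sec * 3600 / 1e9
tb_per_day = gb_per_hour * 24 / 1000

print(f"{gb_per_hour:.1f} GB/hour")   # ~20.7 GB/hour
print(f"{tb_per_day:.2f} TB/day")     # ~0.50 TB/day
```

Wider samples or a second array pushes the rate toward the upper end of the 15 to 30 GB/hour range.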
More important than the sheer mass of data that can be collected is that, even with data from just a few dozen neurons in the motor cortex, it is possible to decode the intended endpoint velocity of the hand. A few dozen neurons carry enough information to translate thought into movement.
The power of machine learning for data analysis is another important advance. Instead of having to build the decoders by hand, we can allow machine learning to process the signals in an iterative manner that builds ever-better algorithms, which we hope will result in more complex movements of external objects.
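The iterative flavor of that learning can be sketched with an online least-mean-squares (LMS) update, in which decoder weights improve as each new (firing-rate, movement) pair arrives. All numbers here are synthetic and the algorithm is a generic textbook one, not the Braingate system's:

```python
import random

# Online LMS: start with an uninformed decoder and nudge its weights toward
# reducing the prediction error on every new observation.
random.seed(1)
true_w = [0.5, -0.3, 0.8]          # hypothetical tuning weights of 3 neurons
w = [0.0, 0.0, 0.0]                # decoder starts knowing nothing
lr = 0.01                          # learning rate

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

errors = []
for step in range(2000):
    rates = [random.gauss(0, 1) for _ in range(3)]       # binned firing rates
    target = dot(true_w, rates)                          # cued/intended velocity
    pred = dot(w, rates)                                 # decoder's current guess
    err = pred - target
    w = [wi - lr * err * ri for wi, ri in zip(w, rates)] # LMS weight update
    errors.append(err * err)

early = sum(errors[:100]) / 100      # mean squared error, first 100 steps
late = sum(errors[-100:]) / 100      # mean squared error, last 100 steps
print(f"MSE: first 100 steps {early:.3f}, last 100 steps {late:.6f}")
```

The error shrinks toward zero as the loop runs, which is the essence of a decoder that calibrates itself from ongoing use rather than being fixed at design time.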
How Does Your Work Fit Into the Larger Field of Brain-Computer Interfaces?
The field is at least a half century old. As early as 1970, the National Institutes of Health (NIH) had a neural prosthesis program, with the goals of understanding the fundamental neuroscience and developing applications that would allow restoration of limb function for people who had become paralyzed. Since the late 1960s, pioneers in the field, including Karl Frank, Eberhard Fetz, Donald Humphrey, and many others, have researched how the motor cortex is involved in movement. Through this work we have developed the technology to record and process those signals reliably and reproducibly in response to people thinking about moving a limb. Computer technology, computational neuroscience, and the use of artificial intelligence for signal processing have all improved remarkably over the same period. Almost all of that work has been funded by the Department of Veterans Affairs, the Department of Defense, and the NIH neural prosthesis program. None of the work we are doing with Braingate, or that others are doing in the field of neuroengineering, would be possible without those achievements and advances.
What Do You Enjoy About Your Field?
The ability to use the engineers' and neuroscientists' approaches at the same time is unique. We benefit from the engineering focus: how do we use this brain signal to make something happen? At the same time, we benefit from the neuroscience focus: understanding how signals in the brain are encoded, not just for movement but for all that an organism does. Bringing together these 2 worlds is an especially fun aspect of the work, and the more we merge them, the more success we have.
I am also motivated by the people who participate in the trials. Those who join us in the early feasibility studies and pilot trials do so not because they want to benefit themselves, but because they want people to benefit in the future. Their insights about what works and what needs to work better have led to advances. As researchers, we can watch what patients do and whether they succeed, but we can't know whether it was easy or intuitive. We rely on our patients to tell us, and when we ask them, they say, "It feels natural," or "I'm just doing it."
What Does This Mean for Practicing Neurologists?
I'm a neurointensivist and see people in the neuroICU who have lost their ability to move or speak. What I want to be able to tell them is that they will be able to move and speak again. This is a lofty goal, and yet the field is telling us that we can reliably implant electrodes and translate brain signals to movement. I would like to tell anyone diagnosed with ALS that they won't lose their ability to speak. We are not there yet, but I believe we can get there. I would like neurologists and therapists to know about this research and be able to share that hope. Ultimately, we'd like a brain-computer interface for paralysis to be as natural for physicians to recommend to their patients as a deep brain stimulator or cardiac pacemaker.
For People Who Want to Learn More, What Reading Would You Recommend?
Guger C, Allison B, Lebedev M, eds. Brain-Computer Interface Research: A State-of-the-Art Summary 6. Cham, Switzerland: Springer International Publishing AG; 2017.
Hochberg LR, Serruya MD, Friehs GM, et al. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature. 2006;442(7099):164-171.
Jarosiewicz B, Sarma AA, Saab J, et al. Retrospectively supervised click decoder calibration for self-calibrating point-and-click brain-computer interfaces. J Physiol Paris. 2016;110(4 Pt A):382-391.
Krucoff MO, Rahimpour S, Slutzky MW, et al. Enhancing nervous system recovery through neurobiologics, neural interface training, and neurorehabilitation. Front Neurosci. 2016;10:584.
Rao RPN. Brain-Computer Interfacing: An Introduction. New York, NY: Cambridge University Press; 2013.
Wolpaw J, Wolpaw EW. Brain-Computer Interfaces: Principles and Practice. New York, NY: Oxford University Press; 2012.
Leigh R. Hochberg, MD, PhD
Center for Neurotechnology and Neurorecovery
Vascular and Critical Care Neurology
Neurocritical Care and Stroke Services
Massachusetts General Hospital
Senior Lecturer, Harvard Medical School
Professor of Engineering
Veterans Affairs RR&D Center for Neurorestoration and Neurotechnology
Veterans Affairs Medical Center