A brain cap that translates thoughts into motion sounds like the stuff of science fiction, but researchers at the University of Maryland are turning the inconceivable into reality. Designed by José ‘Pepe’ L. Contreras-Vidal, an associate professor of kinesiology, the noninvasive, sensor-lined cap could soon harness brain waves to control computers, prosthetic limbs, motorized wheelchairs, and even digital avatars. At present, however, Contreras-Vidal and his team are focusing on helping paralyzed or disabled people extend their range of motion. “We are doing something that few previously thought was possible,” says Contreras-Vidal.
The “brain cap” fits snugly over the head and uses electroencephalography (better known as EEG) to read brain waves. Those readings are analyzed while a person walks on a treadmill or performs some other activity to determine how the brain fires during specific movements. Over the past 18 months, Contreras-Vidal and company have published three papers, including one in the Journal of Neurophysiology, demonstrating how EEG brain signals can reconstruct the complex movements of the ankle, knee, and hip joints of a person in motion.
Unlike other brain-computer interface technologies in development, many of which require electrodes to be implanted directly in the brain, Contreras-Vidal’s cap is noninvasive and requires little training to use. The researchers have also received a $1.2 million National Science Foundation grant to work with their counterparts from Rice University, the University of Michigan, and Drexel University on a thought-controlled prosthetic arm for amputees that allows them to feel whatever it touches—just like a regular one would.
“There’s nothing fictional about this,” says Rice University co-principal investigator Marcia O’Malley, an associate professor of mechanical engineering. “The investigators on this grant have already demonstrated that much of this is possible. What remains is to bring all of it—noninvasive neural decoding, direct brain control, and [touch] sensory feedback—together into one device.”