To someone who’s lost an arm or leg, any prosthetic device that affords them some semblance of normalcy is a godsend. To the people who build prosthetic devices, though, simple devices capable of only one or two movements are not good enough. They want to build prostheses that can mimic human limbs in every conceivable movement, and they are aggressively pursuing better signal processing to achieve that goal.
A typical prosthetic capable of responding to nerve impulses contains electrical components that are connected to the patient’s nervous system through a series of electrodes attached at the end of the affected limb. Prosthetic manufacturers already have the ability to measure certain kinds of brain signals and translate them into limited movements.
Unfortunately, the human central nervous system is capable of sending an overwhelming amount of information that electronic devices have trouble processing. That’s why today’s prostheses are usually only capable of one or two movements. That is what researchers are working to change.
Applying Advanced Signal Processing
So what’s going on in the field of prosthetic design? A great article published by Wired in mid-October 2018 gives us a glimpse. Scientists in Baltimore have developed a device combining advanced signal processing and deep learning that could eventually be used to create the most lifelike prosthetic devices ever.
The signal processing comes into play as the engine that gets a prosthetic device to respond. Rock West Solutions, a California company that specializes in signal processing technologies for the medical sector, explains that making advanced prosthetic devices work as intended requires sorting out all the signals coming from the patient’s brain. This is no easy task.
Advanced signal processing is necessary to determine which signals, out of the central nervous system’s full stream, pertain to the affected limb and the movement the patient is trying to accomplish. Wired contributor Eric Niiler likens it to trying to identify individual instruments while listening to a full orchestra.
The human ear can certainly tell when an orchestra is loud. But only people with finely tuned ears can pick out individual instruments. Those with the talent are rare indeed. The task for prosthetic designers is similar.
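To make the orchestra analogy concrete, here is a minimal sketch of one classic way to pull a single "instrument" out of a mixture: filtering by frequency band. The frequencies, sample rate, and the idea that the intended movement lives in one band are all invented for this demo; real nerve-signal separation is far messier than this.

```python
import numpy as np

# Illustrative sketch: isolating one "instrument" from a mixture,
# analogous to pulling movement-related activity out of the full
# signal stream. All frequencies below are made up for the demo.
fs = 1000                          # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)      # one second of samples

# A mixture of three components; suppose only the 40 Hz one
# corresponds to the movement the patient intends.
mixture = (np.sin(2 * np.pi * 10 * t)
           + np.sin(2 * np.pi * 40 * t)
           + np.sin(2 * np.pi * 120 * t))

def bandpass(signal, fs, lo, hi):
    """Zero out frequency bins outside [lo, hi] Hz via the FFT."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spectrum, n=len(signal))

# Keep only the 30-50 Hz band; the other "instruments" drop out.
target = bandpass(mixture, fs, 30, 50)

# Compare the recovered band against the pure 40 Hz component.
reference = np.sin(2 * np.pi * 40 * t)
print(round(float(np.corrcoef(target, reference)[0, 1]), 3))
```

A simple band filter like this only works when the signal of interest occupies its own frequency range; overlapping sources need heavier machinery, which is exactly why the researchers turn to learning-based methods next.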
Applying Deep-Learning Technologies
Once the Baltimore researchers got a handle on signal processing, they looked to introduce deep learning to the equation. It wasn’t enough just to identify the signals in question. Rather, they also had to take those signals and translate them into workable commands for a prosthetic device. That’s where deep learning comes into play.
Right now, it’s simply not possible for electronic devices to understand what the human brain is thinking. However, devices can be trained to respond to certain patterns. This enables the researchers to use deep learning to train prosthetic devices to respond in whatever way users want them to.
To accomplish the desired training, researchers came up with a tablet app that acts as an interpreter of sorts. Patients connected to the app attempt to get their prostheses to move by thinking about the movement in question. The app then translates brain signals associated with those thoughts into signals that move the prosthetic accordingly. By being trained to recognize brain signals, the prosthetic can artificially ‘learn’ how to respond to what the patient is thinking.
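The training loop described above can be sketched in miniature. This is not the researchers' actual system: the movement labels, the two-dimensional "signal features," and the nearest-centroid rule are all stand-ins chosen to show the core idea of learning a mapping from signal patterns to prosthetic commands.

```python
import numpy as np

# Illustrative sketch of pattern training: synthetic feature vectors
# stand in for processed nerve-signal windows; labels stand in for
# intended movements. Classes and features are invented for the demo.
rng = np.random.default_rng(0)

MOVES = ["open_hand", "close_hand", "rotate_wrist"]
centers = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

# Simulated training windows: noisy samples around each class center,
# as if the patient repeatedly imagined each movement during setup.
X = np.vstack([c + 0.1 * rng.standard_normal((50, 2)) for c in centers])
y = np.repeat(np.arange(3), 50)

# "Training": store the mean feature vector (centroid) per movement.
centroids = np.array([X[y == k].mean(axis=0) for k in range(3)])

def predict(window):
    """Map a new signal window to the nearest learned movement."""
    dists = np.linalg.norm(centroids - window, axis=1)
    return MOVES[int(np.argmin(dists))]

# A new window near the "close_hand" pattern maps to that command.
print(predict(np.array([0.05, 0.95])))
```

A real system would replace the centroid rule with a deep network and the toy features with high-dimensional recordings, but the shape of the problem is the same: collect labeled examples while the patient thinks about a movement, then map new signals to the closest learned pattern.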
This sounds like a lot of science fiction, but it’s all real. Scientists are making tremendous strides toward creating genuinely lifelike prostheses powered by advanced signal processing and deep learning. We truly could be on the verge of a new kind of prosthetic that blurs the line between human and machine. Are we ready?