Deep-learning algorithm controls paralysed arms
US researchers have developed a deep-learning algorithm that can analyse the brain activity of patients with tetraplegia (paralysis of all four limbs). Described in the journal Nature Medicine, the algorithm has been used to deliver electrical stimulation to a patient's forearm muscles, restoring functional movement to the previously paralysed limb.
The brain–computer interface neurotechnology, called NeuroLife, began life at the Battelle Memorial Institute in 2014, when surgeons at the Ohio State University Wexner Medical Center implanted a microelectrode array into the brain of a young man left tetraplegic by a diving accident. The chip recorded neural signals from his brain and sent them to a computer, where NeuroLife's machine-learning algorithms decoded the participant's intended movement from the neural data. Once an intent to move was decoded, the system drove a multi-electrode sleeve on his arm, which stimulated the appropriate muscles to evoke the desired movement.
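As a rough illustration of this sense, decode and stimulate loop, the sketch below shows how such a pipeline might be wired together in Python. Every name here (DummyRecorder, decode_intent, StimSleeve), along with the channel counts and electrode patterns, is a hypothetical stand-in; the actual NeuroLife software and hardware interfaces are not public.

```python
# A minimal sketch of the sense -> decode -> stimulate loop described above.
# All names and numbers are illustrative, not the real NeuroLife system.
import numpy as np

LABELS = ["rest", "hand_open", "hand_close", "wrist_flex"]

class DummyRecorder:
    """Stand-in for the implanted microelectrode array (96 channels assumed)."""
    def read_window(self) -> np.ndarray:
        # One 100 ms window of activity: (time_bins, channels)
        return np.random.rand(10, 96)

class StimSleeve:
    """Hypothetical driver for the multi-electrode forearm sleeve."""
    PATTERNS = {"hand_open": [1, 4, 7], "hand_close": [2, 5], "wrist_flex": [3, 6]}

    def stimulate(self, movement: str) -> None:
        electrodes = self.PATTERNS.get(movement, [])
        if electrodes:
            print(f"stimulating electrodes {electrodes} for '{movement}'")

def decode_intent(window: np.ndarray) -> str:
    """Placeholder decoder; in the real system a trained model does this step."""
    return LABELS[int(np.argmax(window.mean(axis=0))) % len(LABELS)]

def control_loop(recorder, sleeve, n_steps=50):
    for _ in range(n_steps):
        window = recorder.read_window()    # sample neural activity
        movement = decode_intent(window)   # infer the intended movement
        if movement != "rest":
            sleeve.stimulate(movement)     # evoke it in the paralysed arm

control_loop(DummyRecorder(), StimSleeve())
```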
The current NeuroLife system operates in a controlled laboratory setting, and Battelle is now developing the hardware and software advances needed for it to work in a home environment. These include making the system smaller and more rugged while maintaining high accuracy, rapid response times, multiple functions and short set-up times.
The neural decoding component of the NeuroLife system (the algorithm that translates patterns of brain activity into intended user actions) currently falls short of several of these requirements because it must be recalibrated at every session, a considerable time commitment for both the user and the technical team. Now, Battelle researchers have introduced a deep neural network decoding framework designed to increase the system's usability and ease the technology's transition from the lab to the home.
Over a two-year period, Michael Schwemmer and colleagues recorded the cortical activity of a patient with tetraplegia as he performed 'imagined' arm and hand movements. Brain activity was captured via the implanted microelectrode array, whose electrodes sample neuronal activity directly with high spatial and temporal resolution. Using this large dataset, the team applied deep learning to develop a brain–computer interface decoder that delivers accurate, rapid and sustained performance, and that can learn new functions with little need for retraining.
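The paper's code is not reproduced here, but a decoder of this general kind can be sketched as a small recurrent network that maps a window of multichannel neural features to a movement class. Everything below (the use of an LSTM, layer sizes, feature dimensions, training setup) is an assumption for illustration, not the authors' exact architecture.

```python
# Illustrative deep-learning decoder: a recurrent network mapping windows of
# multichannel neural features to intended-movement classes. Architecture
# details are assumptions, not the model described in the paper.
import torch
import torch.nn as nn

class NeuralDecoder(nn.Module):
    def __init__(self, n_channels=96, n_classes=5, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time_bins, n_channels)
        _, (h, _) = self.lstm(x)           # final hidden state summarises the window
        return self.head(h[-1])            # logits over intended movements

def train(decoder, loader, epochs=10, lr=1e-3):
    """Fit the decoder to a large archive of labelled imagined-movement windows."""
    opt = torch.optim.Adam(decoder.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for features, labels in loader:    # features: (batch, time_bins, channels)
            opt.zero_grad()
            loss = loss_fn(decoder(features), labels)
            loss.backward()
            opt.step()
```

Training on recordings pooled across many sessions, rather than on a single day's calibration data, is one plausible reason such a decoder could generalise across days without frequent recalibration.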
The researchers showed that the new decoding method is highly accurate, sustains that performance for more than a year without daily recalibration, responds faster than a current state-of-the-art method that needs more set-up time, and can increase the number of available functions. They also demonstrated that the study participant could use the decoder to control electrical stimulation of his paralysed forearm, enabling him to accurately manipulate three common everyday objects.
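Adding functions with little retraining is, in deep-learning terms, a transfer-learning property. Continuing the hypothetical decoder sketch above, one plausible (assumed) way to add a movement class is to freeze the recurrent backbone and retrain only a widened output layer:

```python
# Hypothetical transfer-learning step for the NeuralDecoder sketched above:
# keep the learned recurrent features and retrain only the output layer,
# so a new movement class can be learned from a small amount of new data.
import torch
import torch.nn as nn

def add_movement_class(decoder) -> None:
    old = decoder.head
    # Widen the classification head by one class, reusing the old weights.
    new_head = nn.Linear(old.in_features, old.out_features + 1)
    with torch.no_grad():
        new_head.weight[: old.out_features] = old.weight
        new_head.bias[: old.out_features] = old.bias
    decoder.head = new_head
    # Freeze the recurrent backbone; only the head is fine-tuned afterwards
    # (pass only parameters with requires_grad=True to the optimiser).
    for p in decoder.lstm.parameters():
        p.requires_grad = False
```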
“Our paper shows that neural decoders can be designed to help meet these potential end-user performance expectations and advance the clinical translation of the technology,” said Schwemmer.