Machine learning aids neurosurgeon during brain surgery

MathWorks Australia

Thursday, 08 June, 2023



Stephane Marouani, Country Manager ANZ at MathWorks, explains how Parkinson’s surgery can be improved with AI and signal processing.

Parkinson’s disease is a chronic and degenerative disorder characterised by irregular electrical signals in the brain’s motor system. The earliest signs of Parkinson’s include tremors, difficulty walking, rigidity, loss of balance and impaired coordination, with the severity of symptoms growing as the disease progresses. Some patients also develop cognitive and mood problems, including dementia and depression.

More than 10 million people worldwide are living with the disease, according to the Parkinson’s Foundation. The risk grows as people age, meaning as the average human life span increases, Parkinson’s will affect more of the population.

While there is no cure, new treatments reduce the symptoms, enabling patients to live fuller lives. One of the most effective tools for treating symptoms is deep brain stimulation (DBS), a procedure approved for Parkinson’s in the United States since 1997. The surgery implants a stimulus electrode in the brain that then delivers electrical pulses to disrupt the hyperactivity that causes the disease’s motor symptoms.

The neurosurgeon’s key task is to place the stimulus electrode in the subthalamic nucleus (STN), a structure smaller than an almond that lies deep within the brain. Both the size and position of the STN complicate the surgery. The electrodes require precise placement; a misplaced electrode could adversely affect other parts of the brain and put the patient at risk of additional surgery.

The subthalamic nucleus, smaller than an almond, is located deep within the brain.

While the neurosurgeon reviews MRI and CT scans of the patient’s brain to ascertain the location of the STN, the surgeon lacks direct visibility of the site during the surgery. The pre-surgery scans and the electrode readings during the procedure guide the surgeon to the optimal location to place the electrodes, a painstaking process that can take hours in the operating room.

Deciphering electrode readings during surgery is complex, so Dr Konrad Ciecierski, an assistant professor of bioinformatics and machine recognition at NASK National Research Institute in Poland, created software to assist neurosurgeons as they operate. The software pre-processes the electrode recordings and runs a machine learning classifier that helps to pinpoint the location of the STN. It reduces the surgery from the typical three to four hours to 20 minutes, a relief for patients, who remain awake under local anaesthesia during the procedure.

“Many patients are anxious,” Ciecierski said. “So, it is important to make the surgery as fast and precise as possible.”

AI guides the hands of a neurosurgeon

Before the surgery, the patient undergoes CT and MRI scans, which are fused to create a detailed 3D image of the brain. This image is the first step in locating the STN, whose exact position varies from patient to patient. It also helps to safely plot the path of the microelectrodes so that they do not go near arteries and other delicate areas.
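The article does not describe how the fusion is performed, but a minimal sketch of multimodal image registration in MATLAB, using the Image Processing Toolbox, might look like the following. The file names, the rigid transform and the optimiser settings are illustrative assumptions, not details of the clinical workflow.

```matlab
% Illustrative sketch only: rigidly align a CT volume to an MRI volume so the
% two modalities can be viewed as one fused 3D image. File names are placeholders.
mri = single(niftiread('patient_mri.nii'));   % fixed (reference) volume
ct  = single(niftiread('patient_ct.nii'));    % moving volume to be aligned

% Multimodal configuration: mutual-information metric with an evolutionary
% optimiser, suited to images from different scanners.
[optimizer, metric] = imregconfig('multimodal');
optimizer.MaximumIterations = 300;

% Rigid registration (rotation plus translation) of the CT onto the MRI frame.
ctAligned = imregister(ct, mri, 'rigid', optimizer, metric);

% Fused view of one axial slice for visual inspection.
sliceIdx = round(size(mri, 3) / 2);
imshowpair(mri(:, :, sliceIdx), ctAligned(:, :, sliceIdx), 'falsecolor');
```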

The patient remains awake during the procedure, enabling the surgeon to monitor brain function while requesting specific tasks — such as touching fingers together or making a fist — to test if the electrodes are in the optimal location. A local anaesthetic numbs the patient’s scalp before the surgeon drills two small holes in the skull for the microelectrodes. The brain, which lacks pain receptors, does not require anaesthesia.

The time-consuming part of the surgery is placing the stimulus electrode, which relies on interpreting live neural signals from a set of recording electrodes used to guide the placement. These recordings capture neurons firing, which the electrode registers as voltage spikes. The surgeon places three to five microelectrodes above the STN and gradually moves them deeper through it.

Recording of brain activity typically starts about 10 mm above the STN, in the white matter, a relatively quiet part of the brain that makes up the first 4 or 5 mm of the descent. Once the electrodes reach the deeper region where the STN is located, brain activity picks up. The hyperactive state of the STN that causes Parkinson’s symptoms is often visible in the electrodes’ readings.

Unfortunately, the recordings are not always clear. Sometimes, the hyperactivity does not reach a level that makes the location of the STN obvious.

“A neurological minefield surrounds the subthalamic nucleus,” Ciecierski said. “If you put the electrode in the wrong spot, it can, for example, severely alter the patient’s emotions.”

This is the problem that Ciecierski’s tool seeks to solve. The surgery remains similar but starts approximately 1 cm above where the preoperative imaging shows the STN. The surgeon moves the microelectrodes 1 mm at a time, taking a 10-second recording from each electrode. The surgeon repeats this action until the electrodes pass through the expected location of the STN.
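To give a sense of how such depth-stepped recordings can be summarised, the sketch below computes a simple activity measure for one electrode at each 1 mm step. The variables recordings and depths, and the use of RMS amplitude as the measure, are assumptions made for illustration rather than details of Ciecierski’s software.

```matlab
% Illustrative sketch: summarise one electrode's 10-second recording at each
% 1 mm depth step so the rise in activity around the STN stands out.
% 'recordings' (a cell array of signals) and 'depths' (mm, relative to the
% expected top of the STN) are assumed inputs.
nSteps = numel(recordings);
rmsAmp = zeros(nSteps, 1);

for k = 1:nSteps
    rmsAmp(k) = rms(recordings{k});     % background activity at this depth
end

plot(depths, rmsAmp, '-o');
xlabel('Electrode depth relative to expected STN entry (mm)');
ylabel('RMS amplitude');
title('Activity typically rises once the electrode enters the STN');
```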

Placement of the electrodes within the patient’s brain. Yellow lines denote the edges of the MRI planes. Image credit: NASK.

Ciecierski uses MATLAB to interpret the data in the recordings. Specifically, the algorithm relies on MATLAB signal processing for operations including wavelet transformations, power spectral analysis, removal of high frequencies, removal of spikes, spike grouping based on the neuronal cells that originated them and removal of artefacts. Ciecierski’s computer processes each electrode’s recording in parallel.
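The article names the operations but not their implementation. The outline below strings a few of them together with standard Signal Processing and Wavelet Toolbox functions: a lowpass filter, wavelet denoising, a Welch power spectrum and threshold-based spike detection, run per electrode in a parfor loop. The sampling rate, cutoffs and feature choices are assumptions, not Ciecierski’s actual pipeline.

```matlab
% Illustrative outline of a per-electrode preprocessing chain. 'raw' is assumed
% to be an nSamples-by-nElectrodes matrix of recordings taken at one depth.
fs = 24000;                                   % assumed sampling rate (Hz)
nElectrodes = size(raw, 2);
features = cell(nElectrodes, 1);

parfor e = 1:nElectrodes                      % each electrode processed in parallel
    x = double(raw(:, e));

    % Remove frequencies above the band of interest (assumed 3 kHz cutoff).
    [b, a] = butter(4, 3000 / (fs/2), 'low');
    x = filtfilt(b, a, x);

    % Wavelet denoising to suppress broadband noise.
    x = wdenoise(x);

    % Power spectral density estimate (Welch's method).
    [pxx, f] = pwelch(x, hamming(1024), 512, 1024, fs);

    % Detect spikes with a robust, median-based threshold.
    thr = 4 * median(abs(x)) / 0.6745;
    [~, locs] = findpeaks(abs(x), 'MinPeakHeight', thr);

    % Collect simple summary features for the classifier.
    features{e} = [rms(x), numel(locs) / (numel(x) / fs), sum(pxx(f > 500))];
end
```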

To pick up on the spikes, the recordings are amplified, which puts the signal at risk of being contaminated by artefacts. Filtering is essential because the recording electrodes can pick up activity outside the brain — such as a surgeon’s speech, a patient’s heartbeat and even the hum of the power grid — which can distort the readings. Digital filtering is applied to the data, which is then fed to a machine learning classifier that estimates the likelihood of the spikes emanating from the STN.
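As an illustration of the kind of digital filtering described here, the sketch below notches out mains hum and high-pass filters slow, non-neural components before handing summary features to a previously trained classifier. The 50 Hz mains frequency, the 300 Hz cutoff, the feature set and the trained model mdl are all assumptions made for the example.

```matlab
% Illustrative sketch: suppress non-neural interference in one recording 'x',
% then score it with a previously trained classifier 'mdl' (both assumed).
fs = 24000;                                     % assumed sampling rate (Hz)

% Narrow bandstop around 50 Hz to remove power-line hum.
d = designfilt('bandstopiir', 'FilterOrder', 2, ...
    'HalfPowerFrequency1', 49, 'HalfPowerFrequency2', 51, ...
    'DesignMethod', 'butter', 'SampleRate', fs);
xClean = filtfilt(d, x);

% High-pass filter to remove slow, non-neural components such as movement
% and heartbeat (assumed 300 Hz cutoff, keeping the spiking band).
[b, a] = butter(4, 300 / (fs/2), 'high');
xClean = filtfilt(b, a, xClean);

% Summary features and the classifier's estimate that the recording is
% from the STN (assumes the second class corresponds to 'STN').
featureRow = [rms(xClean), bandpower(xClean, fs, [500 3000])];
[~, score] = predict(mdl, featureRow);
fprintf('Estimated probability of STN: %.2f\n', score(2));
```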

Top: An original signal as registered by an electrode. Bottom: The above signal after artefact removal. Image credit: NASK.

Ciecierski runs the program on his computer, set up less than three metres from the patient, a situation that clearly reminds him of what is at stake. Fortunately, the signal processing and classifying take only about two minutes.

According to Alex Tarchini, a customer success engineer at MathWorks, working with 3D imaging, signal processing and machine learning while in an operating theatre takes a lot of patience and skill.

“He’s bringing together different disciplines,” Tarchini said. “The algorithms come from many different engineering fields and are guiding the hands of a surgeon.”

Interpreting results

The machine learning classifier correctly indicates whether a recording comes from the STN 97% of the time, based on ground truth datasets. Rather than relying solely on the classifier’s numeric output, the MATLAB software displays diagrams of the processed electrode readings to the surgeon, who can also zoom in and examine time and amplitude information from the raw recordings. Occasionally the classifier misclassifies a recording, a mistake an experienced surgeon can catch by reviewing the diagram.
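A figure like that 97% comes from comparing the classifier’s output with labelled recordings. A minimal sketch of that kind of after-the-fact validation, assuming vectors yTrue and yPred of ground-truth and predicted labels with the STN class listed first:

```matlab
% Illustrative validation sketch. 'yTrue' and 'yPred' are assumed categorical
% vectors of ground-truth and predicted labels ('STN' vs 'notSTN').
C = confusionmat(yTrue, yPred);            % rows: true class, columns: predicted

accuracy    = sum(diag(C)) / sum(C(:));
sensitivity = C(1, 1) / sum(C(1, :));      % assumes row 1 is the STN class
specificity = C(2, 2) / sum(C(2, :));

fprintf('Accuracy %.1f%%, sensitivity %.1f%%, specificity %.1f%%\n', ...
    100 * accuracy, 100 * sensitivity, 100 * specificity);

confusionchart(yTrue, yPred);              % visual summary for later review
```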

After-the-fact validation of classifier performance using various metrics. Image credit: NASK.

“They say that medicine is an art,” Ciecierski said. “Sometimes it is very precise, but there is still an art to interpreting the results.”

Once the stimulus electrode is in place, the surgeon sends a current through it to see how it affects the patient; many immediately have more control over their movements and speed than they have had in years.

“For the patient, it can feel like magic,” he said. “Symptoms that were terrible for years suddenly disappear.”

In a second stage of the surgery, performed under general anaesthesia, a neurostimulator is placed under the skin near the collarbone and connected via a wire to the electrode, delivering the precise amount of current each patient needs.

At first, Dr Tomasz Mandat, the surgeon who collaborated with Ciecierski, wasn’t sure if running software would shorten the operating time. But after a few attempts, it was clear that not only did the software decrease the time needed for surgery, but it also provided new confidence around the electrode placement.

“Using Konrad’s software during surgery positively impacts our efficacy,” Mandat said.

The future of deep brain stimulation

After deep brain stimulation, patients reduce doses of dopamine replacement medications by 50% on average and experience a 30–60% improvement in motor score evaluations, according to recent studies.

With thousands of deep brain stimulation surgeries performed every year, Ciecierski’s software could improve the surgical experience for many. So far, he has assisted with over a hundred surgeries in his native Poland since the first trial in 2014.

Deep brain stimulation, however, is not for every Parkinson’s patient. Each country has different rules for who qualifies for the surgery, and it’s typically considered after other methods such as pharmaceuticals have failed. In the United States, which currently requires patients to have had Parkinson’s for at least four years before deep brain stimulation surgery, researchers are beginning to consider using it as the first line of defence. If this approach works, deep brain stimulation would be open to more patients trying to stop Parkinson’s progression in its tracks.

Doctors use deep brain stimulation surgery to treat other conditions such as Tourette’s syndrome, Huntington’s disease, dystonia and chronic pain. Ciecierski would like to see more researchers creating software to support further innovation in neurosurgery. He believes medicine will advance with aid from applied mathematics and computer science.

“People in medicine are often afraid of computers,” he said. “Many people with computer science degrees are afraid of even going near operating theatres. We need to bridge the gap between computer science and medicine.”

Top image credit: iStock.com/ipopba

