A team of researchers from Duke University has created a machine learning model that enhances the ability of medical professionals to interpret electroencephalography (EEG) charts for patients in intensive care. Because EEG readings are the only way to detect when unconscious patients are at risk of a seizure or seizure-like event, the tool has the potential to save numerous lives each year.
The results were published online on May 23 in the New England Journal of Medicine AI.
EEGs use sensors on the scalp to measure the brain’s electrical activity, which appears as distinctive patterns of lines. During a seizure, these patterns change dramatically, making the event easy to identify.
According to Dr. Brandon Westover, seizure-like events are more challenging to identify and categorize than seizures. These events exist on a continuum of brain activity, and because they can also cause harm, they require treatment. Yet even highly trained neurologists may have difficulty recognizing and confidently categorizing the EEG patterns they produce, which makes it important for medical facilities to have the necessary expertise on hand.
For doctors, making these determinations correctly can be critical to patients’ health outcomes.
To create a tool to assist in making these determinations, the doctors sought the expertise of Cynthia Rudin, the Earl D. McLean, Jr. Professor of Computer Science and Electrical and Computer Engineering at Duke. Rudin and her team specialize in interpretable machine learning. Unlike traditional machine learning models, which often operate as opaque “black boxes” that are difficult for humans to comprehend, interpretable models must explain their reasoning in a way humans can follow.
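As a concrete illustration of that distinction, here is a minimal sketch in Python of what “explaining its reasoning” can mean, assuming a toy linear scoring model over named EEG features; the feature names and weights are invented for illustration and do not come from the study.

```python
# A minimal sketch of interpretability, assuming a toy linear scoring model.
# These feature names and weights are illustrative placeholders, not values
# from the study.
features = {"spike_rate_hz": 2.5, "rhythmicity": 0.80, "sharpness": 0.60}
weights = {"spike_rate_hz": 0.9, "rhythmicity": 1.4, "sharpness": 1.1}

# The prediction is a transparent sum, so each term is itself the explanation.
contributions = {name: weights[name] * value for name, value in features.items()}
score = sum(contributions.values())

print(f"risk score = {score:.2f}")
for name, c in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: +{c:.2f}")
```

A black box could emit the same score, but without named terms that a clinician can inspect or dispute.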
The research group began by collecting EEG samples from over 2,700 patients and having more than 120 experts mark the important features in the graphs, categorizing each as a seizure, one of four types of seizure-like events, or ‘other.’ Each type of event appears in the EEG charts as specific shapes or patterns in the lines. But the charts are not always consistent: inaccurate data or the merging of different signals can produce confusing charts with no clear signal to identify.
“There is a definitive truth, but it’s challenging to interpret,” explained Stark Guo, a Ph.D. student in Rudin’s lab. “The inherent uncertainty in many of these charts required us to train the model to make decisions within a continuous spectrum.”

That continuum, rather than a set of distinct categories, is how the algorithm presents seizure-like events. Its visualization resembles a multicolored starfish, with each arm representing a different type of seizure-like event detected in the EEG. A chart’s position along an arm indicates the algorithm’s confidence in its decision: closer to the tip of the arm means greater certainty, closer to the central body means less. Beyond this visual classification, the algorithm highlights the brainwave patterns it used to make its determination and offers three examples of professionally diagnosed charts that it considers similar.

According to Alina Barnett, a postdoctoral research associate in the Rudin lab, this lets medical professionals quickly review the important sections of a chart and either confirm that the relevant patterns are present or conclude that the algorithm is off the mark. Even those with limited experience reading EEGs can make more informed decisions.
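The workflow described above resembles a nearest-neighbor, case-based design: score a chart against a library of expert-labeled examples, report confidence as a position on the continuum, and show the most similar cases as evidence. The sketch below illustrates that pattern in generic Python; the random embeddings, the class names (“SLE-1” through “SLE-4” stand in for the four unnamed seizure-like event types), and the distance-weighted scoring are all assumptions made for illustration, not the paper’s actual method.

```python
import numpy as np

# Hypothetical stand-in for the expert-labeled library. In the study, more
# than 120 experts labeled EEG samples from over 2,700 patients; here we
# fabricate random embeddings purely to illustrate the mechanics.
rng = np.random.default_rng(seed=0)
CLASSES = ["seizure", "SLE-1", "SLE-2", "SLE-3", "SLE-4", "other"]
ref_embeddings = rng.normal(size=(600, 16))            # labeled chart embeddings
ref_labels = rng.integers(0, len(CLASSES), size=600)   # their expert labels

def classify_with_evidence(chart_embedding, k=3):
    """Return a class label, a confidence, and the k most similar
    expert-labeled charts that serve as human-checkable evidence."""
    dists = np.linalg.norm(ref_embeddings - chart_embedding, axis=1)
    nearest = np.argsort(dists)[:k]

    # Soft class scores via inverse-distance weighting over the neighbors.
    scores = np.zeros(len(CLASSES))
    for idx in nearest:
        scores[ref_labels[idx]] += 1.0 / (dists[idx] + 1e-9)
    scores /= scores.sum()

    # Confidence plays the role of position along a starfish arm:
    # close to 1.0 -> near the arm's tip (clear-cut case),
    # close to 1/len(CLASSES) -> near the central body (ambiguous case).
    best = int(scores.argmax())
    return CLASSES[best], float(scores[best]), nearest.tolist()

label, confidence, evidence = classify_with_evidence(rng.normal(size=16))
print(f"{label} (confidence {confidence:.2f}); similar labeled charts: {evidence}")
```

The design choice worth noting is that the evidence is not bolted on afterward: the same neighbors that determine the score are the examples shown to the clinician.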
To test the algorithm, the researchers had eight medical professionals categorize 100 EEG samples into the six categories, once with the algorithm’s assistance and once without. With the algorithm, participants’ accuracy improved significantly, rising from 47% to 71%. Their performance also surpassed that of participants who used a similar “black box” algorithm in a previous study.
“People often believe that black box machine learning models are more precise, but for many important applications, such as this one, that is not the case,” Rudin stated. “It is much easier to troubleshoot models when they are interpretable. In this instance, the interpretable model was actually more accurate. Furthermore, it offers a comprehensive view of the types of anomalous electrical signals that occur in the brain, which is extremely beneficial for the care of critically ill patients.”

This study received funding from the National Science Foundation (IIS-2147061, HRD-2222336, IIS-2130250, 2014431), the National Institutes of Health (R01NS102190, R01NS102574, R01NS107291, RF1AG064312, RF1NS120947, R01AG073410, R01HL161253, K23NS124656, P20GM130447) and the DHHS LB606 Nebraska Stem Cell Grant.