
Revolutionary AI Technique for Medical Imaging: Capturing Uncertainty in Images

Tyche is a machine-learning framework designed to produce multiple plausible segmentations when identifying potential disease in medical images. It can capture the uncertainty in an image, which could help prevent clinicians from overlooking information that is important for making diagnoses.

In the field of biomedicine, segmentation entails annotating pixels from a significant structure in a medical image, such as an organ or cell. Artificial intelligence models can assist clinicians by highlighting pixels that might indicate signs of a particular disease or anomaly.

Nevertheless, these models typically offer only one solution, while the problem of medical image segmentation is complex and subjective. Five different human annotators may offer five different segmentations, potentially disagreeing on the boundaries of a nodule in a lung CT image.

“Having multiple options can be beneficial for decision-making. Simply recognizing the uncertainty in a medical image can impact decision-making, so it’s crucial to consider this uncertainty,” explains Marianne Rakic, a PhD candidate in computer science at MIT.

Rakic is the primary author of a paper in collaboration with colleagues from MIT, the Broad Institute of MIT and Harvard, and Massachusetts General Hospital.

Researchers have developed a new AI tool called Tyche, which can identify uncertainty in medical images. Tyche generates multiple plausible segmentations of a medical image, each highlighting slightly different areas. Users can choose the most suitable segmentation for their needs and specify how many options they want to see.

One of the key features of Tyche is its ability to handle new segmentation tasks without requiring retraining. Unlike other systems, Tyche does not need a large amount of training data or extensive machine-learning expertise to adapt to a new task.

This makes Tyche a versatile and efficient tool for medical image analysis. The researchers believe the system would be more user-friendly for clinicians and biomedical researchers than other methods. It can be used right away for various tasks, such as identifying lesions in a lung X-ray or pointing out anomalies in a brain MRI.

In the end, this system could enhance diagnoses and contribute to biomedical research by highlighting important information that may be overlooked by other AI tools.

“Ambiguity has not been extensively studied. If your model completely misses a nodule that three experts agree is present and two experts say is not, that is probably something you should pay attention to,” says senior author Adrian Dalca, an assistant professor at Harvard Medical School and MGH and a research scientist in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

Other contributors to the paper include Hallee Wong, a graduate student in electrical engineering and computer science; Jose Javier Gonzalez Ortiz PhD ’23; Beth Cimini, associate director for bioimage analysis at the Broad Institute; and John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering. Rakic will present Tyche at the IEEE Conference on Computer Vision and Pattern Recognition, where it has been selected as a highlight.

Addressing Ambiguity

AI systems utilized for medical image segmentation typically rely on neural networks. These networks are modeled after the human brain and consist of interconnected layers of nodes, or neurons, that analyze data.

Following discussions with collaborators at the Broad Institute and MGH who utilize these systems, the researchers identified two significant issues that hinder their effectiveness. The models are unable to account for uncertainty and require retraining for even a slightly different segmentation task.

While some methods attempt to address one of these shortcomings, addressing both simultaneously has proven difficult, according to Rakic. She explains that accounting for ambiguity often requires a very complex model, whereas the goal of their method is to remain user-friendly, with a relatively small model that can make predictions quickly.

The researchers built Tyche by adjusting a simple neural network architecture. To use Tyche, a user provides a few examples of the segmentation task, such as several images of lesions in a heart MRI that have been segmented by different human experts, so the model can learn to complete the task and capture its uncertainty.

The researchers found that only 16 example images, known as a “context set,” are sufficient for the model to make accurate predictions, and there is no limit to the number of examples that can be used. The context set allows Tyche to handle new tasks without the need for retraining.
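For readers who want a concrete picture of what a context set might look like in code, the sketch below assembles 16 annotated example pairs and one new target scan as tensors. The shapes and the commented-out tyche_like_model call are illustrative assumptions, not the authors’ released code or API.

```python
import torch

# A "context set": K annotated examples of the task (scan + expert mask).
# The article reports that as few as 16 examples can be enough; the shapes
# here (single-channel 128x128 slices) are illustrative, not the paper's.
K, H, W = 16, 128, 128
ctx_images = torch.rand(K, 1, H, W)                 # example scans
ctx_masks = (torch.rand(K, 1, H, W) > 0.5).float()  # matching expert masks

# A new scan from the same task that needs segmenting.
target = torch.rand(1, 1, H, W)

# Hypothetical interface: a Tyche-like model would take the target plus the
# context set and return several plausible segmentations, with no retraining.
# candidates = tyche_like_model(target, ctx_images, ctx_masks, num_candidates=5)
# candidates.shape -> (5, 1, H, W)
```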

To account for uncertainty, the researchers made adjustments to the neural network, causing it to generate multiple predictions based on a single medical image input and the context set. They also modified the network’s layers to allow the candidate segmentations produced at each step to communicate with each other and with the examples.

This ensures that the candidate segmentations are slightly different but still solve the task, similar to rolling dice, where different outcomes are possible.
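One generic way to get this dice-like behavior is to feed the network a fresh noise sample for each candidate. The toy PyTorch module below illustrates only that idea; it is not Tyche’s architecture, and it omits the interaction between candidates and the context set described above.

```python
import torch
import torch.nn as nn

class ToyMultiCandidateSegmenter(nn.Module):
    """Toy stand-in (not Tyche's architecture) showing one way a network can
    emit several differing candidate masks: inject fresh noise per candidate."""

    def __init__(self, channels=16):
        super().__init__()
        # Two input channels: the image itself plus a noise channel.
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, image, num_candidates=4):
        candidates = []
        for _ in range(num_candidates):
            # A different noise sample each time is the "roll of the dice"
            # that makes candidates differ while the image stays the same.
            noise = torch.randn_like(image)
            logits = self.net(torch.cat([image, noise], dim=1))
            candidates.append(torch.sigmoid(logits))
        return torch.stack(candidates, dim=0)  # (num_candidates, B, 1, H, W)

image = torch.rand(1, 1, 128, 128)
masks = ToyMultiCandidateSegmenter()(image, num_candidates=5)
print(masks.shape)  # torch.Size([5, 1, 1, 128, 128])
```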

The training process has been modified to maximize the quality of the best prediction while providing multiple medical image segmentations for the user to choose from.
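In generic terms, such an objective can be written as a best-of-K loss: score every candidate against the expert annotation and keep only the best one. The sketch below expresses this kind of loss with binary cross-entropy; it is a plausible reading of the idea, not the paper’s exact formulation.

```python
import torch
import torch.nn.functional as F

def best_candidate_loss(candidates, target):
    """Best-of-K objective sketch: score every candidate mask against the
    expert annotation and keep only the smallest loss, so at least one
    candidate is pushed toward high quality while the others remain free
    to cover alternative plausible answers.
    candidates: (K, 1, H, W) probabilities; target: (1, H, W) binary mask."""
    losses = torch.stack([F.binary_cross_entropy(c, target) for c in candidates])
    return losses.min()

candidates = torch.rand(5, 1, 64, 64)           # five predicted masks
target = (torch.rand(1, 64, 64) > 0.5).float()  # one expert annotation
print(best_candidate_loss(candidates, target))
```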

The researchers also developed a version of Tyche that can be used with an existing, pretrained model for medical image segmentation. In this case, Tyche enables the model to output multiple candidates by making slight transformations to its input images.
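Roughly, this variant acts as a wrapper around a frozen segmenter: perturb the input slightly, run the model, and collect the differing outputs as candidates. The sketch below uses small additive noise as the perturbation purely for illustration, since the article does not specify which transformations Tyche applies; pretrained_model is a placeholder for any single-output segmentation network.

```python
import torch
import torch.nn as nn

def candidates_from_pretrained(pretrained_model, image, num_candidates=5, noise_scale=0.05):
    """Sketch: get several candidate segmentations out of a single frozen,
    pretrained segmenter by feeding it slightly transformed copies of the
    input. The transformation here is small additive noise, chosen only
    for illustration."""
    candidates = []
    with torch.no_grad():
        for _ in range(num_candidates):
            perturbed = image + noise_scale * torch.randn_like(image)
            candidates.append(pretrained_model(perturbed))
    return torch.stack(candidates, dim=0)  # (num_candidates, B, 1, H, W)

# Placeholder standing in for any existing pretrained segmentation network.
placeholder_net = nn.Conv2d(1, 1, kernel_size=3, padding=1)
pretrained_model = lambda x: torch.sigmoid(placeholder_net(x))

image = torch.rand(1, 1, 128, 128)
print(candidates_from_pretrained(pretrained_model, image).shape)  # torch.Size([5, 1, 1, 128, 128])
```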

Better, faster predictions

When the researchers tested Tyche with datasets of annotated medical images, they found that its predictions captured the diversity of human annotators, and that its best predictions were better than any from the baseline models. Tyche also performed faster than most models.

“Outputting multiple candidates and ensuring they are different from one another really gives you an edge,” Rakic states.

The study also found that Tyche could outperform more intricate models that have been trained using a large, specialized dataset.

As for future work, they aim to experiment with a more adaptable context set, potentially incorporating text or various types of images. Additionally, they are interested in exploring methods to enhance Tyche’s worst predictions and improve the system so it can suggest the best segmentation candidates.

This research is partially funded by the National Institutes of Health, the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, and Quanta Computer.
