Researchers have created a new technique that uses an image-generating AI model to produce realistic images of individual cells, which then serve as "synthetic data" for training an AI model to carry out single-cell segmentation more accurately. Observing individual cells through microscopes can reveal important cell biological processes that often play a role in human disease, but distinguishing single cells from each other and from their background, a task known as "single-cell segmentation," is very time-consuming, making it an ideal job for AI assistance.

In a paper published in the journal iScience, researchers at UC Santa Cruz describe a solution to the shortage of annotated data for AI training sets: a method that uses a small amount of human-annotated data to generate realistic microscopy images of single cells, which can then be used as synthetic data to train segmentation models, eliminating much of the laborious manual annotation work. The project was led by Assistant Professor of Biomolecular Engineering Ali Shariati and his graduate student Abolfazl Zargari. The model, called cGAN-Seg, is freely available on GitHub.
“Our model produces images that can be used for training segmentation models,” Shariati explained. “It’s like doing microscopy without a microscope because we can generate images that closely resemble real cell images in terms of their morphological details. The great thing is that these generated images are already annotated and labeled. They bear a striking resemblance to real images, allowing for easy adaptation and use.”
"Using AI allows us to create new situations that our model hasn't been exposed to during the training process," Shariati added.
Observing individual cells through a microscope has the potential to enhance our understanding of cell behavior and dynamics over time, improve disease detection, and aid in the development of new medications. Subcellular characteristics such as texture can also provide valuable insights, for example whether a cell is cancerous.
However, manually identifying and outlining cell boundaries is highly challenging, particularly in tissue samples with numerous cells in a single image; segmenting them by hand can take researchers several days. Moreover, a segmentation model trained on only one cell type might not work well with other types of cells, and the results may not generalize to other experiments.
Because acquiring and annotating large numbers of images is time-consuming and often requires specialized knowledge, Zargari and his collaborators wanted to develop a way to train an accurate deep learning model using fewer images.
To do this, the researchers designed a deep learning framework to learn from a small number of images and generalize to new, unseen images. To train and test the framework, they used a data set comprising only 100 microscopy images. The results showed that their method achieved 95% accuracy in identifying and segmenting cells without the need for a large initial data set.
A model trained only on high-quality images can struggle to segment low-quality cell images, and it is rare to find such a good dataset in the field of microscopy.
To solve this problem, the researchers developed a generative AI model that turns a limited set of annotated, labeled cell images into many more, adding more complex and varied subcellular features and structures to create a diverse set of "synthetic" images. Importantly, the model can produce annotated images with a high density of cells, which are particularly challenging to annotate manually and are especially relevant for studying tissues. It can also render various cell types under different imaging techniques, such as fluorescence or histological staining, which helps address the problem of segmenting low-quality cell images.
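The generative approach described above relies on the "cycle generative adversarial network" architecture, whose defining ingredient is a cycle-consistency constraint: translating an annotation into a synthetic image and back should recover the original annotation. The toy sketch below illustrates cycle consistency with plain linear maps standing in for the paper's neural networks; the names `G`, `F`, and `cycle_consistency_loss` are illustrative and not taken from the cGAN-Seg code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two translators in a cycle GAN:
# G maps an annotation mask to a synthetic cell image, and F maps an
# image back to a mask. The real model uses deep networks; here each
# "network" is just a fixed linear map so the example stays runnable.
A = rng.normal(size=(16, 16))   # mask -> image weights
B = np.linalg.pinv(A)           # image -> mask weights (a near-inverse)

def G(mask):
    return mask @ A

def F(image):
    return image @ B

def cycle_consistency_loss(mask):
    """Mean absolute error between a mask and its round trip F(G(mask))."""
    return np.mean(np.abs(F(G(mask)) - mask))

mask = (rng.random((8, 16)) > 0.5).astype(float)  # toy binary annotation
loss = cycle_consistency_loss(mask)
print(f"cycle-consistency loss: {loss:.2e}")
```

In a real cycle GAN, G and F are convolutional networks trained jointly with adversarial losses, and the cycle-consistency term is what keeps each synthetic image faithful to the annotation it was generated from, so the pair can be used directly as labeled training data.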
Zargari, who led the development of the generative model, used a popular AI algorithm known as a "cycle generative adversarial network" to produce lifelike images. The generative model is enhanced with "augmentation functions" and a "style injecting network," which help the generator produce a diverse range of high-quality synthetic images depicting different possible appearances of the cells. As far as the researchers know, this is the first time style-injecting techniques have been applied in this scenario.

The generator's varied synthetic images are then used to train a model to accurately perform cell segmentation on new, real images captured during experiments. "By utilizing a limited data set, we can effectively train a generative model," Zargari said. "This generative model allows us to generate a wider and more diverse range of annotated, synthetic images. With these synthetic images, we can train a robust segmentation model — that's the main concept."

The researchers compared their model's results using synthetic training data against traditional techniques, and they hope to move past basic segmentation into more advanced analysis of cellular behavior and function. This could lead to a better understanding of diseases at the cellular level and to the development of more targeted treatments. The team is continuing to test and refine the model, and is using synthetic images that can be transformed into time-lapse videos to predict the future behavior of cells: studying which factors affect early cell development and making predictions about growth, migration, differentiation, or division.

The study was conducted by Abolfazl Zargari, Benjamin R. Topacio, Najmeh Mashhadi, and S. Ali Shariati. The article, "…mentation with Limited Training Datasets using Cycle Generative Adversarial Networks," was published in iScience in 2024 (DOI: 10.1016/j.isci.2024.109740).
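To make the article's central workflow concrete, the sketch below mimics it under heavy simplification: a stand-in "generator" renders noisy images from known masks, the simplest possible segmenter is fitted on those synthetic image/mask pairs, and its quality is scored on a fresh pair with intersection-over-union. Everything here (the `synth_pair` helper, the threshold-based segmenter) is hypothetical illustration, not the authors' cGAN-Seg pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def synth_pair(shape=(64, 64)):
    """Hypothetical synthetic-data generator: a labeled mask plus a
    noisy 'microscopy' image rendered from it (a stand-in for a GAN)."""
    mask = np.zeros(shape)
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    for _ in range(5):  # draw a few bright disk-shaped "cells"
        cy, cx = rng.integers(8, shape[0] - 8, size=2)
        mask[(yy - cy) ** 2 + (xx - cx) ** 2 < 36] = 1.0
    image = mask * 0.8 + rng.normal(0, 0.2, shape)  # signal plus noise
    return image, mask

# "Train" a trivial segmenter: pick the intensity threshold that best
# separates cell from background pixels across the synthetic pairs.
train = [synth_pair() for _ in range(20)]
thresholds = np.linspace(0.0, 1.0, 101)
best_t = max(thresholds,
             key=lambda t: sum(np.mean((img > t) == (m > 0))
                               for img, m in train))

# Evaluate on a fresh synthetic pair with intersection-over-union.
img, m = synth_pair()
pred = img > best_t
iou = (np.logical_and(pred, m > 0).sum()
       / np.logical_or(pred, m > 0).sum())
print(f"threshold={best_t:.2f}  IoU={iou:.3f}")
```

The real pipeline replaces the threshold with a trained deep segmentation network and the disk renderer with the cycle GAN generator, but the training logic is the same: because every synthetic image is born from a known mask, annotated training pairs come for free.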