The intricate details of samples viewed under different types of light microscopes are brought to light through advanced computational image processing. While progress has been made in this field, there is still room for improvement in areas such as contrast and resolution. A novel computational model created by researchers from the Center for Advanced Systems Understanding (CASUS) at Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and the Max Delbrück Center for Molecular Medicine is faster than conventional models while achieving comparable or even better image quality. The model, termed Multi-Stage Residual-BCR Net (m-rBCR), was designed specifically for microscopy images. It was first presented at the European Conference on Computer Vision (ECCV), a leading event in computer vision and machine learning, and the accompanying peer-reviewed paper is now available.
The new model offers a fresh take on the image processing technique known as deconvolution. This computationally demanding method enhances the contrast and resolution of digital images captured by various types of optical microscopes, including widefield, confocal, and transmission microscopes. Deconvolution primarily aims to reduce blur, a specific kind of image degradation introduced by the microscope's optics. There are two main strategies: explicit deconvolution and deep learning-based deconvolution.
Explicit deconvolution methods revolve around the point spread function (PSF), which describes how an infinitesimal point source of light from a sample is broadened into a three-dimensional diffraction pattern by the optical system. Consequently, a recorded two-dimensional image contains light from out-of-focus elements that contribute to the blur. By understanding the PSF of a given microscopy system, one can mathematically correct the blur to produce a much clearer image than the raw captured version.
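As a minimal illustration of the idea (not the method described in the paper), explicit deconvolution with a known PSF can be sketched as a Wiener filter in NumPy. The Gaussian PSF, the toy two-point image, and the `balance` regularization value below are all invented for demonstration:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, balance=1e-3):
    """Explicit deconvolution with a known PSF via a Wiener filter.

    `balance` regularizes the inversion so that noise is not amplified
    at frequencies where the PSF response is nearly zero.
    """
    # In the frequency domain, blurring (convolution) becomes a
    # pointwise multiplication, so it can be approximately inverted.
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + balance)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + balance)
    return np.real(np.fft.ifft2(F_hat))

# Toy example: two point sources blurred by a Gaussian PSF.
size = 64
y, x = np.mgrid[:size, :size]
psf = np.exp(-((x - size // 2) ** 2 + (y - size // 2) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

sharp = np.zeros((size, size))
sharp[20, 20] = 1.0
sharp[40, 45] = 1.0

# Simulate the microscope: convolve the sharp scene with the PSF.
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```

The restored image concentrates the light from each blurred blob back toward its source point, which is exactly the blur correction the PSF makes possible. When the assumed PSF deviates from the true one, this correction degrades, which motivates the blind-deconvolution problem discussed next.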
“A major issue with PSF-based deconvolution techniques is that the PSF for many microscopy systems is often unavailable or inaccurate,” states Dr. Artur Yakimovich, head of a CASUS Young Investigator Group and lead author of the ECCV paper. “For many years, researchers have been focusing on blind deconvolution, which estimates the PSF from images or sets of images. However, blind deconvolution remains a complex challenge, and progress has been limited.”
The Yakimovich team has previously shown that treating image restoration as an "inverse problem" is effective in microscopy. Inverse problems involve inferring the underlying causes of observed phenomena from measurements. Solving them typically requires a substantial amount of data and deep learning algorithms. Like explicit deconvolution methods, this approach yields higher-resolution, higher-quality images. For the methodology presented at ECCV, the researchers used a physics-informed neural network, the Multi-Stage Residual-BCR Net (m-rBCR).
A Different Use of Deep Learning
Generally speaking, image processing can be approached in two ways: through the conventional spatial representation or through the frequency representation (which requires a transformation from the spatial form). In the latter, every image is treated as a superposition of waves. Both formats have their merits; some processing tasks are easier in one than in the other. Most deep learning architectures operate in the spatial domain, which suits conventional photographs. Microscopy images, however, are a special case, as they consist primarily of monochromatic data; fluorescence microscopy, for instance, captures specific light signals against a dark background. For this reason, m-rBCR uses the frequency representation as its foundation.
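The two representations can be illustrated with a toy fluorescence-like image. This sketch (invented for demonstration, not taken from the paper) shows that the frequency representation is lossless and that some operations, such as low-pass filtering, reduce to simply zeroing coefficients there:

```python
import numpy as np

# A toy fluorescence-like image: a dark background with a few bright spots.
img = np.zeros((64, 64))
img[16, 16] = img[32, 48] = img[50, 10] = 1.0

# Frequency representation: the same image expressed as a sum of 2-D waves.
spectrum = np.fft.fft2(img)

# The transform is lossless: inverting it recovers the spatial image.
roundtrip = np.real(np.fft.ifft2(spectrum))

# Some tasks are simpler in one domain than the other: a low-pass filter
# is just zeroing out the high-frequency coefficients.
lowpass = spectrum.copy()
lowpass[8:-8, :] = 0   # drop high vertical frequencies
lowpass[:, 8:-8] = 0   # drop high horizontal frequencies
smoothed = np.real(np.fft.ifft2(lowpass))
```

The smoothed result spreads each bright point into a soft blob, the frequency-domain analogue of blurring in the spatial domain.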
“Employing the frequency domain in such instances aids in generating optically meaningful data representations—a concept that allows m-rBCR to effectively tackle the deconvolution challenge using remarkably fewer parameters than other current deep learning architectures,” explains Rui Li, the first author and presenter at ECCV. Li refined the neural network architecture of a model called BCR-Net, which drew inspiration from a frequency-representation-based signal compression method introduced in the 1990s by Gregory Beylkin, Ronald Coifman, and Vladimir Rokhlin (hence the ‘BCR’ in the name).
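The underlying intuition, that a frequency representation can describe a signal with few numbers, can be sketched in a crude way by keeping only the largest-magnitude Fourier coefficients of an image. This is a simplified stand-in for the BCR idea, not the actual compression scheme or the m-rBCR architecture; the function name and test image are invented:

```python
import numpy as np

def compress_in_frequency(img, keep_fraction=0.05):
    """Approximate an image using only its largest-magnitude Fourier
    coefficients -- a toy illustration of frequency-based compression."""
    spectrum = np.fft.fft2(img)
    k = max(1, int(keep_fraction * spectrum.size))
    # Threshold at the k-th largest magnitude; zero everything below it.
    threshold = np.sort(np.abs(spectrum).ravel())[-k]
    compressed = np.where(np.abs(spectrum) >= threshold, spectrum, 0.0)
    return np.real(np.fft.ifft2(compressed))

# A smooth test image (a single Gaussian blob) is dominated by a handful
# of low-frequency waves, so 5% of the coefficients reconstruct it well.
y, x = np.mgrid[:64, :64]
blob = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 6.0 ** 2))
approx = compress_in_frequency(blob, keep_fraction=0.05)
rel_error = np.linalg.norm(approx - blob) / np.linalg.norm(blob)
```

A compact description with few coefficients is the same economy that lets a frequency-domain network get by with far fewer learned parameters.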
The team tested the m-rBCR model on four datasets: two simulated and two real microscopy image datasets. The results show excellent performance with substantially fewer training parameters and shorter runtimes than contemporary deep learning models, while also outperforming explicit deconvolution techniques.
A Model Specifically Designed for Microscopy
“This new architecture utilizes an underappreciated method for learning representations that goes beyond standard convolutional neural network techniques,” sums up co-author Prof. Misha Kudryashev, head of the “In situ Structural Biology” group at the Max-Delbrück-Centrum für Molekulare Medizin in Berlin. “Our model significantly reduces potentially redundant parameters without sacrificing performance. The model is explicitly tailored for microscopy images and, with its streamlined design, it poses a challenge to the ongoing trend of increasingly larger models that demand more computing resources.”
Recently, the Yakimovich group introduced an image quality enhancement model utilizing generative artificial intelligence. This Conditional Variational Diffusion Model achieves state-of-the-art outcomes, exceeding the results of the m-rBCR model discussed here. “Nonetheless, this approach requires substantial training data and computing power, including adequate graphics processing units, which are currently in high demand,” Yakimovich notes. “In contrast, the lightweight m-rBCR model has no such limitations and still produces excellent results. I’m therefore optimistic about its potential impact within the imaging community. To support this, we’ve begun improving its user-friendliness.”
The Yakimovich group, focused on “Machine Learning for Infection and Disease,” aims to understand the intricate network of molecular interactions triggered by pathogen infection. Harnessing the capabilities of machine learning is central to this endeavor. The group's interests include enhancing image resolution, reconstructing images in 3D, automating disease diagnoses, and assessing image reconstruction quality.