Recent research has provided a potential explanation for how the brain learns to recognize both color and black-and-white images. The study suggests that at an early stage in development, the brain adapts to distinguish objects based on luminance when the retina is not yet able to process color information.
Even though the human visual system has advanced mechanisms for processing color, the brain can still effortlessly recognize objects in black-and-white images. A new study from MIT suggests a possible explanation for how the brain becomes skilled at identifying both color and color-degraded images.
Drawing on experimental data and computational modeling, the researchers found evidence that the ability to recognize objects by their luminance, rather than their color, may develop early in life. In infancy, when the eyes supply only limited color information, the brain learns to distinguish objects by their intensity of light. As the retina and cortex mature and become better at processing color, the brain incorporates color information but retains its earlier-acquired ability to recognize images without relying heavily on color cues.
These findings are in line with the notion that the brain’s ability to recognize objects is not solely dependent on color, but also depends on other visual cues.
The research suggests that degraded visual and auditory input in the early stages of development can actually have a positive impact on perceptual systems. Pawan Sinha, a professor at MIT, emphasizes the importance of initial limitations on the richness of information that the neonatal system is exposed to, not only for color vision and visual acuity, but also for audition.
The study’s results also offer an explanation for why children who are blind from birth but undergo surgery to remove congenital cataracts later in life struggle to identify objects in black-and-white images. These children, who are introduced to vibrant colors as soon as they regain their vision, may become overly dependent on color, making them less adaptable to changes in, or the absence of, color information.
Lead authors of the study, MIT postdocs Marin Vogelsang and Lukas Vogelsang, along with Project Prakash research scientist Priti Gupta, published their findings today in Science. Sidney Diamond, a retired neurologist who is now a research affiliate at MIT, also contributed to the study, as did other members of the Project Prakash team.

The researchers’ interest in how early exposure to color affects later object recognition stemmed from an observation made during a study of children who had their vision restored after being born with congenital cataracts. In 2005, Sinha launched Project Prakash, an initiative in India aimed at identifying and treating children with reversible vision loss. Many of these children are blind because of dense bilateral cataracts, a common condition in India, which has a large population of blind children, with estimates ranging from 200,000 to 700,000.

Project Prakash provides treatment to these children and also invites them to take part in studies of their visual development. These studies have contributed to the understanding of how the brain reorganizes after the restoration of sight, how the brain perceives brightness, and other vision-related phenomena. In one study, Sinha and his colleagues assessed the children’s ability to recognize objects using both color and black-and-white stimuli.
The study found that for children with normal vision, converting color images to grayscale did not impact their ability to recognize the object. However, children who had cataract removal surgery showed a significant decrease in performance when presented with black-and-white images.
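Grayscale conversion of the kind used for such stimuli can be illustrated with a standard luminance formula. The sketch below is only illustrative: the study’s exact conversion method is not described here, and the ITU-R BT.601 weights are a common default rather than the researchers’ stated choice.

```python
# Illustrative only: one standard way to reduce a color pixel to the
# single luminance value that "black-and-white" stimuli preserve.
# The BT.601 weights reflect the eye's greater sensitivity to green.

def rgb_to_luminance(r, g, b):
    """Weighted sum of R, G, B using ITU-R BT.601 coefficients."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Two pixels of equal channel intensity but different hue map to very
# different luminance values, so shape information carried by brightness
# survives the conversion while hue does not.
bright_green = rgb_to_luminance(0, 255, 0)   # ~149.7
dark_blue = rgb_to_luminance(0, 0, 255)      # ~29.1
```

Because the weights sum to 1, a neutral white pixel (255, 255, 255) keeps its full intensity of 255 after conversion.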
These results led the researchers to suggest that the visual input children receive early in life may shape their ability to cope with changes in color and to identify objects in black-and-white images. In newborns with normal sight, retinal cone cells are not fully developed at birth, so infants initially see the world largely in grayscale. During the first years of life, visual acuity and color vision are poor, but both improve as the cone system matures.

Because the immature visual system receives limited color information, the researchers hypothesized that the developing brain must become adept at recognizing images with reduced color cues. They further proposed that children born with cataracts that are removed later in life, whose retinas are already mature when sight is restored, may come to rely too heavily on color cues when identifying objects, and this is what their experiments went on to demonstrate.

To investigate the impact of color vision on learning and development, the researchers modeled vision with a standard convolutional neural network, AlexNet, trained to recognize objects. To mimic the developmental progression of chromatic enrichment in babies’ eyesight, one training regimen began with grayscale images and later introduced color images. Another regimen, serving as a proxy for the Prakash children, who can process full color information as soon as their cataracts are removed, used only color images.

The researchers found that the developmentally trained model could effectively identify objects in both color and grayscale images and could withstand color manipulations. The Prakash-proxy model, trained only on color images, performed poorly when faced with grayscale or hue-manipulated images.
This suggests that when a model is not initially trained with color-degraded images, it becomes highly proficient with colored images but struggles with everything else. “Many models struggle to generalize, possibly due to their reliance on specific color cues,” explained Lukas Vogelsang.
The developmentally inspired model’s success at generalizing is not due simply to its having been trained on both color and grayscale images; the sequence of those images also plays a significant role. Another object-recognition model that was trained first on color images and only then on grayscale images did not perform as well at identifying black-and-white objects.
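The training regimens compared above, grayscale-then-color versus color-only, amount to different schedules of input transforms. The sketch below is a hypothetical simplification, not the authors’ code: the network (AlexNet in the study), dataset, and training loop are omitted, and the image format (a list of (R, G, B) pixels) and phase length are illustrative assumptions.

```python
# Hypothetical sketch of the input schedules compared in the study.
# Only the transform applied to each image at a given epoch is modeled.

def to_grayscale(image):
    """Replace each (R, G, B) pixel with its luminance, replicated
    across all three channels so the input shape is unchanged."""
    return [
        (y, y, y)
        for y in (0.299 * r + 0.587 * g + 0.114 * b for r, g, b in image)
    ]

def developmental_transform(epoch, grayscale_epochs=10):
    """Developmental curriculum: luminance-only input early on, full
    color afterward (grayscale_epochs is an assumed value)."""
    if epoch < grayscale_epochs:
        return to_grayscale
    return lambda image: image  # pass color through unchanged

def color_only_transform(epoch):
    """Proxy for the Prakash case: full color from the very start."""
    return lambda image: image
```

The reversed regimen, color first and grayscale later, would simply flip the condition in `developmental_transform`; in the study, that order generalized worse, which is why the sequence itself, not just the mixture of inputs, matters.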
Sinha emphasizes, “It’s not just the steps of the developmental choreography that are important, but also the sequence in which they occur.”
The Benefits of Limited Sensory Input
Upon examining the inner workings of the models, the researchers found that those that started with grayscale input learned to depend on brightness to recognize objects. Even once they began receiving color input, they did not change their approach much, as they had already mastered an effective strategy. Models that began with color images did alter their approach when grayscale images were introduced, but could not adjust enough to match the accuracy of the models that were given grayscale images first.
A comparable occurrence may take place in the human brain, which has the potential to adapt to limited sensory input.
The human brain is more adaptable in the early stages of life and can readily learn to recognize objects based on their brightness alone. The lack of color information early in development may actually be advantageous for the developing brain, as it learns to identify objects from limited information. “As a newborn, the normally sighted child is deprived, in a certain sense, of color vision. And that turns out to be an advantage,” Diamond says.

Researchers in Sinha’s lab have also found that limitations in early sensory input can benefit other aspects of vision, as well as the auditory system. In 2022, they used computational models to demonstrate that early exposure to limited sensory information can be beneficial: exposure to the low-frequency sounds babies hear in the womb enhances performance on auditory tasks that involve analyzing sounds over longer periods of time, such as identifying emotions. The researchers now aim to investigate whether this effect also extends to other aspects of development, such as language acquisition.
Funding for the research was provided by the National Eye Institute of NIH and the Intelligence Advanced Research Projects Activity.