The genome can only hold a tiny portion of the information necessary for managing complex behaviors. So, how does a newborn sea turtle instinctively know to move towards the moonlight? Neuroscientists from Cold Spring Harbor Laboratory propose a possible explanation for this long-standing mystery. Their insights might pave the way for more advanced, quicker forms of artificial intelligence.
From the moment of birth, we are all prepped for action. Many animals are capable of remarkable feats right after they enter the world: spiders weave intricate webs, and whales swim immediately. But where do these innate skills originate? The brain is clearly central, as it houses the vast network of neural connections required to manage complex behaviors. Yet the genome can encode only a small fraction of that information. This conundrum has puzzled researchers for decades. Now, Professors Anthony Zador and Alexei Koulakov from Cold Spring Harbor Laboratory (CSHL) are proposing a potential answer using artificial intelligence.
When Zador first encountered this challenge, he reframed it in a novel way. “What if the limited capacity of the genome is actually what makes us intelligent?” he speculates. “What if this limitation is beneficial instead of detrimental?” In other words, perhaps our ability to think and learn quickly stems from having to adapt to the constraints of our genome. It is a bold, ambitious theory; demonstrating it is another matter, since billions of years of evolution cannot be replayed in a lab. This is where the genomic bottleneck algorithm comes in.
In the realm of AI, generations evolve not over decades but at the click of a button. Zador, Koulakov, and CSHL postdoctoral researchers Divyansha Lachi and Sergey Shuvaev set out to create a computer algorithm that folds large amounts of data into a compact form, much as our genome might condense the information needed to wire up functional brain circuits. They then compared networks produced by this algorithm against AI networks trained over multiple rounds. Strikingly, the compressed, untrained networks performed tasks such as image recognition nearly as well as state-of-the-art AI. They even held their own in classic video games like Space Invaders, as if they had an inherent understanding of how to play.
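To make the bottleneck idea concrete, here is a minimal sketch in Python. This is not the authors' actual algorithm: the neuron "identity" vectors, the bilinear decoding rule, and all sizes are illustrative assumptions. The point is only that a small set of parameters (a stand-in "genome") can deterministically expand into a much larger connection matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target layer: 64 -> 32 connections, i.e. 2048 weights to specify.
n_in, n_out, d = 64, 32, 8  # d: length of each neuron's compact identity code

# Each neuron gets a short identity vector (loosely, a gene-expression code).
pre_ids = rng.normal(size=(n_in, d))
post_ids = rng.normal(size=(n_out, d))

# The "genome": a tiny bilinear rule mapping a (pre, post) identity pair
# to one synaptic weight. Only d*d parameters instead of n_in*n_out.
G = rng.normal(size=(d, d))

def decode_weights(pre, post, genome):
    """Expand the full weight matrix from the compact genome."""
    return pre @ genome @ post.T  # shape (n_in, n_out)

W = decode_weights(pre_ids, post_ids, G)

full = n_in * n_out
compact = pre_ids.size + post_ids.size + G.size
print(W.shape, f"compression ~{full / compact:.1f}x")
```

Even this toy version stores fewer numbers than the weight matrix it generates; the real algorithm pushes the same idea much further.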
So, does this suggest that AI might soon mimic our innate abilities? “We’re not quite there yet,” Koulakov clarifies. “The brain’s cortical architecture can hold around 280 terabytes of information, the equivalent of 32 years of high-definition video. Our genomes, by contrast, can accommodate only about one hour. A compression factor of about 400,000 therefore remains out of reach.”
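The quoted figures can be sanity-checked with simple arithmetic. The roughly 700 MB estimate for the genome's information content is an assumption of this sketch, not a number from the article:

```python
# Back-of-envelope check of the quoted figures.
cortex_bytes = 280e12   # ~280 TB of cortical wiring information
genome_bytes = 700e6    # assumed: rough information content of a human genome

ratio = cortex_bytes / genome_bytes
print(f"compression factor ~{ratio:,.0f}")

# If 280 TB corresponds to 32 years of HD video, the genome's share is:
hours_in_32_years = 32 * 365.25 * 24
genome_hours = hours_in_32_years / ratio
print(f"genome equivalent ~{genome_hours:.1f} h of video")
```

Under that assumption, the ratio comes out to 400,000 and the genome's share of the video to about 0.7 hours, matching the "about one hour" in the quote.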
Nonetheless, the algorithm achieves levels of data compression previously unseen in artificial intelligence, a property that could have significant applications in technology. Shuvaev, the study’s lead author, points out: “For instance, if you wanted to run a large language model on a smartphone, one way to use this algorithm would be to unfold the model layer by layer on the hardware.”
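Shuvaev's layer-by-layer idea can be sketched as follows. The rank-1 factorization used here as the "decompressor" is purely illustrative, a stand-in for whatever decoding the real algorithm performs; the point is the memory pattern, in which only one layer's full weights exist at a time:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical compressed model: each layer is stored only as two small
# factor vectors (u, v) instead of a full m-by-n weight matrix.
layer_shapes = [(128, 64), (64, 32), (32, 10)]
factors = [(rng.normal(size=m), rng.normal(size=n)) for (m, n) in layer_shapes]

def expand_layer(i):
    """Decode layer i's full weight matrix from its compact factors."""
    u, v = factors[i]
    return np.outer(u, v)  # rank-1 stand-in for a real decompression step

def forward(x):
    # Unfold one layer at a time: only the current layer's full weights
    # ever occupy memory, so the resident footprint stays small.
    for i in range(len(layer_shapes)):
        W = expand_layer(i)
        x = np.maximum(x @ W, 0.0)  # ReLU
        del W  # the expanded weights can be discarded immediately
    return x

out = forward(rng.normal(size=(1, 128)))

stored = sum(u.size + v.size for u, v in factors)
full = sum(m * n for m, n in layer_shapes)
print(out.shape, f"stored {stored} numbers vs {full} full weights")
```

Here the compact factors total a few hundred numbers while the expanded weights would total over ten thousand, which is the kind of trade-off that would let a larger model fit on constrained hardware.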
Such advancements could result in more sophisticated AI with quicker processing times. It’s remarkable to think that it has taken 3.5 billion years of evolution to reach this point.