Revolutionary Brain Decoder Offers New Hope for Aphasia Communication

A new AI-powered tool converts a person’s thoughts into continuous written text without requiring the person to understand spoken language. The result suggests that, with further development, brain-computer interfaces could improve communication for people with aphasia.

Aphasia, a brain condition that affects approximately one million individuals in the U.S., makes it difficult for people to articulate their thoughts verbally and to understand spoken words.

Researchers at The University of Texas at Austin have created an AI-based tool that can translate a person’s thoughts into continuous text without requiring the person to comprehend spoken words. Remarkably, the tool can be trained on an individual’s unique patterns of brain activity in about an hour. It builds on the team’s earlier work developing a brain decoder that required many hours of training while a subject listened to audio stories. The advance suggests that, with further refinement, brain-computer interfaces could help people with aphasia communicate.

“Accessing semantic representations through both language and visual cues opens exciting possibilities for neurotechnology, especially for individuals who find it challenging to produce and understand language,” stated Jerry Tang, a postdoctoral researcher at UT in Alex Huth’s lab and the lead author of a study published in Current Biology. “This allows us to develop language-based brain-computer interfaces without needing any language comprehension.”

In earlier work, the team trained a brain decoder that used a transformer model, similar to the one behind ChatGPT, to convert a person’s brain activity into continuous text. The semantic decoder can generate text whether a person is listening to a story, imagining telling one, or watching a silent video that tells one. But the approach had limitations. Training the decoder required participants to lie still in an fMRI scanner for about 16 hours while listening to podcasts, an impractical demand for most people and likely an impossible one for anyone who has trouble understanding spoken language. And the original decoder worked only for the individuals it was trained on.
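
The pipeline can be pictured concretely: a fitted encoding model predicts the fMRI response a candidate phrase should evoke, and the decoder keeps whichever candidate’s prediction best matches the actual scan. The sketch below is a minimal illustration of that scoring idea under toy assumptions; the embedding function, the linear encoding model, and all names are hypothetical stand-ins, not the team’s code.

```python
# Minimal, illustrative sketch of decoding by candidate scoring.
# All names and the toy embedding are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)

N_VOXELS = 50    # toy fMRI dimensionality
EMBED_DIM = 16   # toy semantic-feature dimensionality

# A (pre-fitted) linear encoding model: semantic features -> voxel responses.
W = rng.normal(size=(N_VOXELS, EMBED_DIM))

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a transformer's semantic embedding of text."""
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).normal(size=EMBED_DIM)

def predicted_activity(text: str) -> np.ndarray:
    """Encoding model: predict the brain response a phrase should evoke."""
    return W @ embed(text)

def decode_step(observed: np.ndarray, candidates: list[str]) -> str:
    """Keep the candidate whose predicted response best matches the scan."""
    errors = [np.linalg.norm(observed - predicted_activity(c)) for c in candidates]
    return candidates[int(np.argmin(errors))]

# One decoding step: the scan is the true phrase's predicted response plus noise.
candidates = ["the dog ran home", "stock prices fell", "she opened the door"]
observed = predicted_activity("the dog ran home") + 0.1 * rng.normal(size=N_VOXELS)
print(decode_step(observed, candidates))  # -> "the dog ran home"
```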

With the latest advance, the team developed a way to adapt the existing brain decoder to new users with only about an hour of training in an fMRI scanner, during which the new user watches short silent videos, such as Pixar shorts. A converter algorithm learns to map the new person’s brain activity onto that of a reference subject whose data was originally used to train the decoder, achieving similar decoding results with the new subject far faster.
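
The converter step resembles a standard functional-alignment problem: given two subjects’ responses to the same silent videos, learn a mapping from the new subject’s voxel space into the reference subject’s, then run the already-trained decoder on the converted activity. Below is a minimal sketch assuming a linear map fit with ridge regression; the dimensions, regularization strength, and variable names are illustrative, not taken from the study. One appeal of a simple linear converter is that it can be fit from a small amount of paired data, consistent with the hour-scale training described above.

```python
# Minimal, illustrative sketch of the cross-subject "converter" idea.
# Assumes a linear map fit with ridge regression on responses to shared videos;
# dimensions and names are hypothetical, not the study's.
import numpy as np

rng = np.random.default_rng(1)

T = 200                  # timepoints of shared stimulus (silent videos)
V_NEW, V_REF = 40, 50    # voxel counts for the new and reference subjects

# Simulated responses of both subjects to the SAME silent videos.
X_new = rng.normal(size=(T, V_NEW))                          # new subject
A_true = rng.normal(size=(V_NEW, V_REF))                     # hidden true map
Y_ref = X_new @ A_true + 0.1 * rng.normal(size=(T, V_REF))   # reference subject

# Fit the converter: ridge regression from new-subject space to reference space.
lam = 1.0
A = np.linalg.solve(X_new.T @ X_new + lam * np.eye(V_NEW), X_new.T @ Y_ref)

# At decode time, convert the new subject's activity into the reference space
# and feed it to the decoder that was already trained on the reference subject.
x_t = rng.normal(size=V_NEW)   # one new-subject scan volume
x_converted = x_t @ A          # now shaped like reference-subject activity
print(x_converted.shape)       # (50,)
```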

Huth said the research also reveals something important about how the brain works: our thoughts extend beyond language.

“This highlights a significant connection between the brain activities triggered when you listen to a story being told and those when you watch a video narrating a tale,” said Huth, an associate professor of computer science and neuroscience and the lead researcher. “Our brains perceive both storytelling forms similarly. It also indicates that what we’re decoding isn’t just language, but rather representations that transcend language, independent of the type of input.”

Like the original brain decoder, the improved version works only with cooperative participants who willingly take part in training. If a subject resists during training, for example by thinking about something unrelated, the results are unusable, which limits the potential for misuse.

Although the participants in the new study had no neurological impairments, the researchers ran analyses that simulated the kinds of brain lesions seen in people with aphasia and found that the decoder could still transcribe the story a person experienced into continuous text. This suggests the method could eventually work for people with aphasia.
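
A simulated-lesion analysis can be as simple as removing the signal from voxels assigned to a damaged region and decoding from what survives. The sketch below illustrates only that masking idea; the choice of region and the zeroing scheme are assumptions for illustration, not the study’s protocol.

```python
# Minimal, illustrative sketch of a simulated-lesion check.
# The region choice and zeroing scheme are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(2)

N_VOXELS = 50
activity = rng.normal(size=N_VOXELS)   # one recorded scan volume

# Suppose voxels 0-14 belong to a language region damaged in aphasia.
lesion_mask = np.zeros(N_VOXELS, dtype=bool)
lesion_mask[:15] = True

# Simulate the lesion by deleting that region's signal...
lesioned = activity.copy()
lesioned[lesion_mask] = 0.0

# ...then decode using only the surviving voxels. A decoder that relies on
# distributed, language-independent representations can still succeed.
surviving = lesioned[~lesion_mask]
print(f"{surviving.size} of {N_VOXELS} voxels remain for decoding")
```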

They are now collaborating with Maya Henry, an associate professor at UT’s Dell Medical School and Moody College of Communication who studies aphasia, to test how well the improved brain decoder works for people with the condition.

“It has been enjoyable and fulfilling to consider how to create the most effective interface and streamline the model-training process for the participants,” Tang said. “I am truly eager to continue investigating how our decoder can provide assistance to people.”

This research received support from the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health, the Whitehall Foundation, the Alfred P. Sloan Foundation, and the Burroughs Wellcome Fund.