Unleashing the Power of the ‘Gerbil Brain’: How it Can Enhance Machine Listening

Macquarie University researchers have disproved a 75-year-old theory about how humans perceive the direction of sounds. This breakthrough could lead to the development of more flexible and efficient hearing devices, including hearing aids and smartphones. The theory, developed in the 1940s, set out to explain how humans determine the source of a sound: the ability to locate sound sources relies on detecting differences of just a few tens of millionths of a second in the time it takes for the sound to reach each ear.
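To give a feel for that scale, here is a back-of-the-envelope sketch of the interaural time difference using the classic Woodworth spherical-head approximation. The head radius, speed of sound, and example angle are illustrative assumptions, not figures from the study:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air (approximate)
HEAD_RADIUS = 0.0875    # m, a typical adult value (assumption)

def interaural_time_difference(azimuth_deg: float) -> float:
    """Approximate ITD via the Woodworth spherical-head model.

    azimuth_deg: source angle from straight ahead, in degrees.
    Returns the arrival-time difference between the two ears, in seconds.
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source just 10 degrees off-centre yields an ITD of roughly
# 89 microseconds -- "a few tens of millionths of a second".
print(f"{interaural_time_difference(10.0) * 1e6:.0f} microseconds")
```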

This theory suggests that specialized detectors are responsible for determining the direction of a sound, with specific neurons representing the location in space.

These assumptions have been influential in guiding research and shaping the development of audio technologies for many years.
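In computational terms, that 1940s picture is usually described as a bank of coincidence detectors, one per candidate internal delay, with the most strongly activated detector labelling the source's position. A minimal sketch of that idea, with toy signals and parameters of my own choosing rather than anything from the paper:

```python
import numpy as np

FS = 48_000  # sample rate in Hz (assumption)

def coincidence_detector_bank(left: np.ndarray, right: np.ndarray,
                              max_lag: int = 40) -> int:
    """One detector per candidate delay: the 'labelled line' picture.

    Each detector advances the right-ear signal by its own internal
    delay and multiplies-and-sums it against the left-ear signal; the
    detector with the strongest output 'represents' the source location.
    Returns the winning lag in samples.
    """
    lags = list(range(-max_lag, max_lag + 1))
    outputs = [float(np.dot(left, np.roll(right, -lag))) for lag in lags]
    return lags[int(np.argmax(outputs))]

# Toy stimulus: the same noise reaches the right ear 5 samples
# (~104 microseconds at 48 kHz) later than the left.
rng = np.random.default_rng(0)
sig = rng.standard_normal(FS // 10)
print(coincidence_detector_bank(sig, np.roll(sig, 5)))  # -> 5
```

Note the cost this scheme implies: a dedicated detector for every resolvable direction, which is exactly the density the new work calls into question.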

However, a recent research paper published in Current Biology by hearing researchers from Macquarie University has challenged this account. The traditional belief that spatial hearing depends on a dedicated network in one specific area of the brain has been debunked.

Macquarie University Distinguished Professor of Hearing David McAlpine has spent 25 years demonstrating that animals use a much sparser neural network for spatial hearing, with neurons on both sides of the brain sharing the function, but demonstrating the same in humans has been challenging. By combining a specialized hearing test, advanced brain imaging, and comparisons between the brains of humans and those of other mammals such as rhesus monkeys, McAlpine and his team have now shown for the first time that human spatial hearing involves this more widespread, sparser neural network.

Humans also utilize these simpler networks.
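A toy illustration of the sparser alternative: rather than one labelled detector per location, just two broadly tuned populations, one per hemisphere, whose relative activity orders sources from left to right. The sigmoid tuning curves below are invented for the sketch, not fitted from the paper:

```python
import numpy as np

def hemispheric_code(azimuth_deg: float) -> float:
    """Sparse two-channel code for direction.

    Each hemisphere's population is modelled as a single broad sigmoid
    tuned to the opposite side of space (an illustrative choice).  The
    difference between the two channel outputs already orders sources
    left-to-right -- no dedicated neuron per location is needed.
    """
    x = np.radians(azimuth_deg)
    left_hemisphere = 1.0 / (1.0 + np.exp(-4.0 * x))   # prefers sources on the right
    right_hemisphere = 1.0 / (1.0 + np.exp(4.0 * x))   # prefers sources on the left
    return float(left_hemisphere - right_hemisphere)

# The readout rises monotonically with azimuth: two channels suffice.
for azimuth in (-60, -20, 0, 20, 60):
    print(f"{azimuth:+4d} deg -> {hemispheric_code(azimuth):+.3f}")
```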

“We tend to believe that our brains are much more advanced than those of other animals in every way, but that is simply arrogance,” remarked Professor McAlpine.

“We have demonstrated that gerbils are similar to guinea pigs, guinea pigs are similar to rhesus monkeys, and rhesus monkeys are similar to humans in this aspect.

“A sparse, energy-efficient form of neural circuitry carries out this function, similar to our gerbil brain.”

The research team also confirmed that the same neural network segregates speech from background sounds, a discovery that holds significance for the development of hearing devices and machine listening.

The challenge faced by all types of machine hearing is known as the ‘cocktail party problem’: the difficulty of hearing in noisy environments. It makes it hard for people with hearing devices to pick out one voice in a crowded space, and for our smart devices to understand us when we speak to them.

Professor McAlpine and his team’s findings suggest that, instead of the large language models (LLMs) currently in favour, a simpler approach should be taken. LLMs are effective at predicting the next word in a sentence, but they try to do too much.

“Too much,” he says.

“The important thing here is being able to locate the source of a sound, and to do that, we don’t need a ‘deep mind’ language brain. Other animals can do it, and they don’t have language.

“When we are listening, our brains don’t keep tracking sound the whole time, which is what the large language processors are trying to do.

“Instead, we, and other animals, use our ‘shallow brain’ to pick out very small snippets of sound, including speech, and use these snippets to tag the location and maybe even the identity of the source.

“We don’t have to reconstruct a high-fidelity signal to do this, but instead understand how our brain represents that signal neurally, well before it reaches a language centre in the cortex.

“This indicates that a machine does not need to be trained for language like a human brain in order to listen effectively.

“We only need that gerbil brain.”
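One engineering reading of that ‘gerbil brain’ recipe for the cocktail-party problem is to let the spatial cue itself do the separating, with no language model anywhere in the loop. Below is a minimal two-microphone delay-and-sum sketch; the sample rate, microphone spacing, and steering geometry are assumptions for illustration, not the team’s method:

```python
import numpy as np

FS = 16_000             # sample rate in Hz (assumption)
MIC_SPACING = 0.17      # metres between microphones, roughly ear-to-ear (assumption)
SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(mic_left: np.ndarray, mic_right: np.ndarray,
                  steer_deg: float) -> np.ndarray:
    """Steer a two-microphone array toward one direction and average.

    Sound arriving from the steered direction adds coherently, while
    sound from elsewhere (the rest of the 'party') partially cancels:
    a purely spatial filter, with no model of language involved.

    Convention: positive steer_deg means the source is to the right,
    so the right microphone leads and must be delayed to line up.
    """
    delay_s = MIC_SPACING * np.sin(np.radians(steer_deg)) / SPEED_OF_SOUND
    delay_samples = int(round(delay_s * FS))
    aligned_right = np.roll(mic_right, delay_samples)  # circular shift; fine for a sketch
    return 0.5 * (mic_left + aligned_right)

# Usage (with real two-channel audio): steer 30 degrees to the right.
# enhanced = delay_and_sum(left_channel, right_channel, steer_deg=30.0)
```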

The next goal for the team is to determine the minimum amount of information that can be conveyed in a sound while still achieving maximum spatial listening.
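The ‘snippets’ idea above maps naturally onto frame-by-frame processing, and it hints at how little information might suffice: a single interaural lag per short snippet. A rough sketch under that interpretation; the 10 ms frame length and the cross-correlation estimator are my assumptions, not the team’s algorithm:

```python
import numpy as np

FS = 48_000   # sample rate in Hz (assumption)
FRAME = 480   # 10 ms snippets (assumption)

def tag_snippet_itds(left: np.ndarray, right: np.ndarray,
                     max_lag: int = 40) -> list[int]:
    """Tag each short snippet with the interaural lag (in samples) that
    best aligns the two ears -- a per-snippet location label, with no
    attempt to reconstruct a high-fidelity signal."""
    tags = []
    for start in range(0, len(left) - FRAME + 1, FRAME):
        l = left[start:start + FRAME]
        r = right[start:start + FRAME]
        corr = np.correlate(l, r, mode="full")  # correlation at every relative lag
        centre = len(r) - 1                     # index corresponding to zero lag
        window = corr[centre - max_lag: centre + max_lag + 1]
        tags.append(int(np.argmax(window)) - max_lag)
    return tags

# Toy check: noise delayed by 5 samples at the right ear tags every
# snippet with the same lag (sign follows np.correlate's convention).
rng = np.random.default_rng(1)
noise = rng.standard_normal(FS // 4)
print(tag_snippet_itds(noise, np.roll(noise, 5))[:4])  # -> [-5, -5, -5, -5]
```

If a handful of lag tags per second is enough to track where a voice is coming from, the amount of signal that actually needs to be conveyed could be very small indeed.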