
Revolutionary Training Method for Ultra-Efficient AI Solutions

AI technologies, such as ChatGPT, are built on artificial neural networks that somewhat replicate the function of nerve cells in the human brain. These networks are trained using enormous datasets on powerful computers, consuming a tremendous amount of energy. One promising alternative to reduce this energy consumption could be the use of spiking neurons, which are significantly more energy-efficient. Traditionally, methods for training spiking neurons faced notable limitations. However, a recent study from the University of Bonn has introduced a potential new approach that may lead to energy-efficient AI techniques. The findings have been published in Physical Review Letters.

The human brain is an extraordinary organ. It weighs less than a laptop and uses about as much energy as three LED bulbs, yet it can compose music, formulate complex theories such as quantum mechanics, and contemplate profound questions.

Despite their remarkable capabilities, AI applications such as ChatGPT consume substantial amounts of energy when tackling complex tasks. Like the human brain, they run on a neural network comprising billions of “nerve cells” that exchange information. Traditional artificial neurons, however, are continuously active, like an electric fence whose current never switches off.

“Biological neurons operate differently,” says Professor Raoul-Martin Memmesheimer from the Institute of Genetics at the University of Bonn. “They communicate through brief voltage impulses known as action potentials, or spikes.” Because such spikes occur only sparsely, spiking networks require considerably less energy. Developing artificial neural networks that emulate this spiking behavior is therefore a major focus of AI research.

Spiking networks – efficient yet challenging to train

For neural networks to perform specific tasks, they must undergo training. For instance, if you want an AI to differentiate between a chair and a table, you present it with pictures of furniture to see if it can identify them correctly. Depending on its performance, connections within the neural network are adjusted—some strengthened, others weakened—resulting in improved accuracy over successive training sessions.
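The adjust-and-retry idea described above can be sketched with a toy one-weight “network” (an illustration of the general principle only; the networks in the study are far larger and the learning rule here is a deliberately minimal stand-in):

```python
def train_step(weight, x, target, lr=0.1):
    prediction = weight * x            # toy "network" with a single connection
    error = prediction - target       # positive if the output was too high
    return weight - lr * error * x    # strengthen or weaken the connection

w = 0.0
for _ in range(100):                  # repeated training sessions
    w = train_step(w, x=1.0, target=0.7)
print(round(w, 3))                    # the weight settles near the correct value
```

Each pass nudges the connection in the direction that reduces the error, which is exactly the “some strengthened, others weakened” adjustment the article describes.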

After each training session, adjustments are made to determine which neurons influence others and by how much. “In traditional neural networks, the output signals change gradually,” explains Memmesheimer, also a member of the Life and Health Transdisciplinary Research Area. “For example, a signal might decrease from 0.9 to 0.8. However, in spiking neurons, it’s different: a spike is either present or it’s absent. You can’t have a partial spike.”
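The difference Memmesheimer describes can be made concrete with a toy sketch (my own illustration, not the neuron models used in the study): a conventional artificial neuron outputs a continuously varying value, while a spiking neuron's output is all-or-nothing.

```python
import math

def continuous_neuron(inputs, weights):
    # Conventional artificial neuron: a weighted sum passed through a smooth
    # activation, so the output can shift gradually (e.g. from 0.9 to 0.8).
    z = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

def spiking_neuron(inputs, weights, threshold=1.0):
    # Spiking neuron (simplified): it either fires a spike (1) or stays
    # silent (0) -- there is no such thing as a partial spike.
    z = sum(x * w for x, w in zip(inputs, weights))
    return 1 if z >= threshold else 0

print(continuous_neuron([0.5, 0.2], [1.0, 0.5]))  # a value strictly between 0 and 1
print(spiking_neuron([0.5, 0.2], [1.0, 0.5]))     # exactly 0 or exactly 1
```

Gradient-based training relies on nudging the first kind of output by tiny amounts; the second kind offers no such smooth handle, which is the core difficulty the article goes on to discuss.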

In a typical neural network, each connection has, in effect, a dial that allows the output signal of a neuron to be adjusted slightly. These dials are tuned until the network reliably distinguishes chairs from tables. Spiking networks, by contrast, offer no way to tweak signal strengths gradually. “This makes it challenging to fine-tune the connection weights,” emphasizes Dr. Christian Klos, a colleague of Memmesheimer and the study’s first author.

It was previously assumed that standard training techniques (so-called “gradient descent learning”) could not readily be applied to spiking networks. The recent study indicates otherwise. “We discovered that in certain basic neuron models, spikes cannot simply appear or vanish abruptly. They can only shift earlier or later in time,” explains Klos. And the timing of these spikes can be adjusted continuously via the strength of the connections.
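One way to picture this finding is with a deliberately simplified, non-leaky integrate-and-fire toy neuron (my own construction, not the models analysed in the paper): as the input weight changes, the spike does not pop in or out of existence abruptly; its timing moves smoothly.

```python
def spike_time(weight, threshold=1.0, dt=0.001, t_max=10.0):
    """Time at which a simple integrate-and-fire neuron spikes, or None."""
    v, t = 0.0, 0.0
    while t < t_max:
        v += weight * dt        # integrate a constant input current
        t += dt
        if v >= threshold:
            return t            # the single spike occurs here
    return None                 # input too weak: the neuron never spikes

print(spike_time(1.0))   # spikes near t = 1.0
print(spike_time(1.25))  # a stronger weight shifts the spike earlier, near t = 0.8
```

Because the spike time varies continuously with the weight, a gradient with respect to the weight exists, and gradient-descent-style training becomes possible.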

Adjusting connection strengths in spiking networks

The different timing patterns of spikes impact how the targeted neurons react. Essentially, the more “simultaneously” a biological or spiking neuron receives inputs from multiple other neurons, the higher the likelihood of it generating its own spike. Therefore, the influence of one neuron on another can be modified through both the strength of the connections and the timing of the spikes. “We can apply the same effective training method to both types of spiking neural networks we’ve examined,” Klos adds.
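The coincidence effect described above can be illustrated with a leaky toy neuron (again my own sketch, with made-up parameter values): inputs that arrive close together in time sum up before the membrane potential leaks away, so near-simultaneous spikes are far more likely to trigger an output spike.

```python
import math

def fires(input_times, weight=0.6, threshold=1.0, decay=5.0, dt=0.001, t_max=1.0):
    """Leaky integrate-and-fire sketch: does the neuron emit an output spike?"""
    v, t = 0.0, 0.0
    while t < t_max:
        v *= math.exp(-decay * dt)      # membrane potential leaks away over time
        for s in input_times:
            if abs(s - t) < dt / 2:     # an input spike arrives at this moment
                v += weight
        if v >= threshold:
            return True                 # inputs summed up enough: output spike
        t += dt
    return False                        # threshold never reached: no spike

print(fires([0.10, 0.11]))  # near-simultaneous inputs -> True (fires)
print(fires([0.10, 0.60]))  # the same inputs spread in time -> False (silent)
```

Shifting input spike times therefore changes a neuron's influence on its targets, which is why spike timing offers a second, continuous handle for training alongside connection strength.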

The researchers have already demonstrated that their method is practical, successfully teaching a spiking neural network to accurately differentiate between handwritten digits. Going forward, their next challenge is to train the network to understand speech, as noted by Memmesheimer: “While it’s uncertain how our method will be utilized in training spiking networks in the future, we are optimistic about its potential since it is precise and closely resembles the effective method for training conventional neural networks.”

Funding:

The study received funding from the Federal Ministry of Education and Research (BMBF) through the Bernstein Award 2014.