Engineers Spotlight Enhanced Efficiency in Optical Neural Networks

EPFL researchers have introduced a programmable framework that addresses a critical computational challenge for optics-based artificial intelligence systems. In a series of image classification tests, they used scattered light from a low-power laser to carry out precise and scalable computations while consuming only a fraction of the energy required by electronic methods.

As digital artificial intelligence systems grow in complexity and influence, the energy needed to train and run them also increases, along with their carbon emissions. Recent studies suggest that if the current pace of AI server production continues, the servers' annual energy usage could surpass that of a small nation by 2027. Deep neural networks, loosely modeled on the human brain, are particularly energy-intensive because of the vast number of connections between their many layers of neuron-like processors.

To address this escalating energy demand, researchers have intensified their efforts to create optical computing systems, which have been explored since the 1980s. These systems rely on photons for data processing, and while light has the potential to perform computations more quickly and efficiently than electrons, a pivotal obstacle has thus far prevented optical systems from outpacing traditional electronic ones.

“To categorize data within a neural network, each node or ‘neuron’ must decide to activate or not based on weighted input data. This decision results in a nonlinear transformation of the data, meaning the output is not simply proportional to the input,” explains Christophe Moser, the head of the Laboratory of Applied Photonics Devices in EPFL’s School of Engineering.
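
As a rough illustration of this "activate or not" step (not taken from the paper), the following NumPy sketch shows how a single digital neuron combines weighted inputs and then applies a nonlinear activation, here a ReLU chosen purely for illustration, so that the output is no longer proportional to the input. The weights, bias, and input values are arbitrary.

```python
import numpy as np

def neuron(x, w, b):
    """One artificial neuron: a weighted sum of inputs followed by a
    nonlinear activation (ReLU here). The activation is the step that
    makes the output not simply proportional to the input."""
    z = np.dot(w, x) + b          # linear, weighted combination of the inputs
    return np.maximum(0.0, z)     # nonlinear "activate or not" decision

w = np.array([0.8, 0.4, -0.6])
b = 0.0
x_pos = np.array([0.5, 1.0, 0.2])   # gives a positive weighted sum
x_neg = -x_pos                      # the mirrored input

# A purely linear layer would just flip the sign of the output;
# ReLU instead clips the negative case to zero, breaking proportionality.
print(neuron(x_pos, w, b))   # ~0.68
print(neuron(x_neg, w, b))   # 0.0
```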

Moser indicates that while digital neural networks can handle nonlinear transformations effortlessly using transistors, optical systems traditionally required powerful lasers to accomplish this step. Collaborating with students Mustafa Yildirim, Niyazi Ulas Dinc, and Ilker Oguz, as well as Optics Laboratory head Demetri Psaltis, Moser developed an energy-efficient method for conducting these nonlinear computations using light. Their novel approach encodes data, such as image pixels, into the spatial modulation of a low-power laser beam, which reflects back on itself multiple times, resulting in a nonlinear multiplication of the pixels.
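
As a purely numerical toy model (not the EPFL optical setup), the sketch below mimics the idea described above: normalized pixel values modulate a uniform "beam" once, and a second pass through the same data multiplies each value by itself, yielding the element-wise square of the pixels, which is a nonlinear operation. The pixel values and the `encode` helper are illustrative assumptions.

```python
import numpy as np

def encode(field, pixels):
    """One pass of data encoding in this toy model: the (normalized)
    pixel values modulate the field element-wise."""
    return field * pixels

# Normalized pixel values standing in for an image
pixels = np.array([0.2, 0.5, 0.9, 1.0])

field = np.ones_like(pixels)   # uniform low-power "beam"
field = encode(field, pixels)  # first pass: field is proportional to x
field = encode(field, pixels)  # second pass: field is proportional to x**2

print(field)                   # [0.04 0.25 0.81 1.  ] == pixels**2
```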

“Our experiments in image classification across three datasets demonstrated that our technique is scalable and can be up to 1,000 times more energy-efficient than leading deep digital networks. This suggests it could serve as a viable platform for optical neural networks,” says Psaltis.

The research, funded by a Sinergia grant from the Swiss National Science Foundation, was recently published in Nature Photonics.

A straightforward structural solution

In nature, photons generally do not interact directly as charged electrons do. Hence, to achieve nonlinear transformations in optical systems, scientists have found it necessary to induce indirect interactions among photons—for instance, by using sufficiently intense light to alter the optical properties of the material it passes through.

The researchers tackled the need for a high-power laser with a clever and simple solution: they spatially encoded the pixels of an image onto the surface of a low-power laser beam. Encoding this information twice, with the beam's path adjusted between passes, effectively squares the pixel values. Since squaring is a nonlinear transformation, this step supplies the nonlinearity essential to neural network computations while using far less energy. The encoding can be repeated multiple times, strengthening the nonlinearity of the transformation and improving calculation accuracy.
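
Extending the toy model shown earlier, repeating the encoding step k times raises the pixel values to the k-th power, a progressively stronger nonlinearity. This is only an illustrative sketch of the principle, not a simulation of the actual optical system.

```python
import numpy as np

def repeated_encoding(pixels, k):
    """Toy model: apply the element-wise encoding k times, giving pixels**k."""
    field = np.ones_like(pixels)
    for _ in range(k):
        field = field * pixels   # each pass multiplies in the data once more
    return field

pixels = np.array([0.2, 0.5, 0.9, 1.0])
print(repeated_encoding(pixels, 2))  # squared pixels (the basic scheme)
print(repeated_encoding(pixels, 3))  # cubed pixels (a stronger nonlinearity)
```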

“We estimate that our system can carry out optical multiplications using eight orders of magnitude less energy than what an electronic system would require,” notes Psaltis.

Both Moser and Psaltis stress that the scalability of their low-energy technique is a significant advantage, since the ultimate aim is to use hybrid electronic-optical systems to reduce the energy consumption of digital neural networks. However, further engineering research is needed to achieve this scale-up. Because optical and electronic systems use different hardware, for example, the researchers are already developing a compiler to convert digital data into a format that optical systems can process.
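
The article does not describe how such a compiler would work. As a purely hypothetical illustration of the kind of conversion step involved, the sketch below maps 8-bit digital pixel values onto the discrete modulation levels of a spatial light modulator. The function name, the 256-level assumption, and the normalization scheme are all assumptions made for this example, not details of the EPFL system.

```python
import numpy as np

def to_modulator_pattern(image, levels=256):
    """Hypothetical pre-processing step: normalize 8-bit pixel values and
    quantize them to the discrete levels of a spatial light modulator.
    Illustrative only; the actual EPFL compiler is not described in the article."""
    img = np.asarray(image, dtype=np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)  # normalize to [0, 1]
    return np.round(img * (levels - 1)).astype(np.uint16)      # quantize to device levels

# Example: quantize a random 4x4 "image" for a 256-level modulator
rng = np.random.default_rng(0)
print(to_modulator_pattern(rng.integers(0, 256, size=(4, 4))))
```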