Harnessing the Power of Neural Networks for Edge IoT Devices

Researchers have introduced a novel training method for binarized neural networks (BNNs) that employs ternary gradients, aiming to tackle the computational hurdles faced by IoT edge devices. They designed a computing-in-memory (CiM) architecture based on magnetic RAM, leading to significant reductions in both circuit size and power usage. Their system matched the accuracy of conventional BNNs while training faster, making it a viable option for efficient AI on resource-constrained devices such as those found in IoT applications.

Artificial intelligence (AI) and the Internet of Things (IoT) are undoubtedly two technological areas that have surged forward rapidly in the last ten years. AI systems have proven to be powerful tools in areas like data analysis, image recognition, and natural language processing, benefitting both academic and industrial fields. Concurrently, advancements in electronics have allowed for a significant reduction in the size of devices that can connect to the Internet. Both engineers and researchers envision a future where IoT devices are widespread, forming the backbone of a highly interconnected environment.

Nonetheless, integrating AI into IoT edge devices presents a major challenge. Artificial neural networks (ANNs), crucial to AI, typically demand extensive computational resources. In contrast, IoT edge devices are compact, with restricted power, processing capability, and circuit space. Designing ANNs that can be trained, deployed, and run efficiently on edge devices therefore remains a considerable obstacle.

In light of this challenge, Professor Takayuki Kawahara and Yuya Fujiwara from the Tokyo University of Science have been searching for effective solutions. Their recent study, published in IEEE Access on October 08, 2024, unveils an innovative training algorithm for a class of ANNs known as binarized neural networks (BNNs), along with a novel implementation of this algorithm in a computing-in-memory (CiM) architecture tailored for IoT devices.

“BNNs utilize weights and activation values that are restricted to -1 and +1. This enables a drastic reduction in the computing resources needed by the network, limiting the smallest information unit to just one bit,” Kawahara explains. “However, during the learning process, while weights and activation values can remain in a single bit for inference, the weights and gradients are represented by real numbers, and many calculations during learning involve real numbers as well. Consequently, this has posed a challenge in enabling learning capabilities for BNNs on edge devices.”
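The arithmetic savings Kawahara describes come from the fact that a dot product of {-1, +1} vectors reduces to an XNOR followed by a popcount. A minimal NumPy sketch of this idea (the function names are illustrative, not from the paper):

```python
import numpy as np

def binarize(x):
    # Sign binarization to {-1, +1}, the standard BNN scheme
    # (zero is mapped to +1 by convention).
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_popcount_dot(w, x):
    # For {-1, +1} vectors, encode +1 as True and -1 as False.
    # XNOR counts the positions where the signs agree, and the
    # dot product equals 2 * agreements - n.
    agreements = np.count_nonzero((w > 0) == (x > 0))
    return 2 * agreements - len(w)

w = binarize(np.array([0.5, -1.2, 3.0, -0.1]))
x = binarize(np.array([-0.5, -2.0, 1.0, 0.2]))
assert xnor_popcount_dot(w, x) == int(np.dot(w, x))  # bit ops match the real dot product
```

Because only one bit per weight and activation is needed, the multiply-accumulate hardware of a conventional network collapses into simple logic gates.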

To address this challenge, the team created a new training algorithm called ternarized gradient BNN (TGBNN), which incorporates three key innovations. First, it uses ternary gradients during training while maintaining binary weights and activations. Second, enhancements were made to the Straight Through Estimator (STE) to improve gradient backpropagation control. Lastly, a probabilistic method was used to update parameters based on the behavior of MRAM cells.
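The first two ideas can be sketched roughly as follows; the article does not spell out the paper's exact thresholding rule or STE variant, so the constants below are illustrative assumptions:

```python
import numpy as np

def ternarize_grad(g):
    # Quantize real-valued gradients to {-1, 0, +1}. The threshold
    # rule here (a fraction of the mean gradient magnitude) is an
    # assumption for illustration, not necessarily the paper's choice.
    threshold = 0.7 * np.mean(np.abs(g))
    t = np.zeros_like(g, dtype=np.int8)
    t[g > threshold] = 1
    t[g < -threshold] = -1
    return t

def ste_backward(grad_out, x, clip=1.0):
    # Straight-Through Estimator: sign() has zero derivative almost
    # everywhere, so the gradient is passed straight through, zeroed
    # where the pre-activation leaves the [-clip, clip] range.
    return grad_out * (np.abs(x) <= clip)
```

In a full TGBNN training loop, ternary gradients like these would drive the probabilistic, MRAM-cell-based parameter updates the article mentions.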

The researchers then implemented the TGBNN algorithm into a CiM architecture—a modern design approach that executes calculations directly within memory, minimizing the need for a dedicated processor and conserving circuit space and power. To achieve this, they created a completely new XNOR logic gate as the foundational element for a Magnetic Random Access Memory (MRAM) array, utilizing a magnetic tunnel junction to store data in its magnetic state.
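Behaviorally, such an XNOR-based array computes a binary matrix-vector product in place: each MRAM cell stores one weight bit, the binary input is applied across the array, and per-cell XNOR results are accumulated along each row. A software model of that behavior (circuit-level details such as the magnetic tunnel junction are abstracted away) might look like:

```python
import numpy as np

def cim_matvec(W_bits, x_bits):
    # W_bits: (rows, n) array of stored weight bits; x_bits: (n,)
    # input bits, with bit 1 encoding +1 and bit 0 encoding -1.
    # Per-cell XNOR marks sign agreement; accumulating along each
    # row and rescaling recovers the +/-1 dot product, mimicking
    # in-memory product-of-sum computation.
    agree = (W_bits == x_bits[None, :])          # per-cell XNOR
    return 2 * agree.sum(axis=1) - W_bits.shape[1]

rng = np.random.default_rng(1)
W = rng.integers(0, 2, size=(3, 8))
x = rng.integers(0, 2, size=8)
# Matches the ordinary dot product of the corresponding {-1, +1} values
assert np.array_equal(cim_matvec(W, x), (2 * W - 1) @ (2 * x - 1))
```

The point of the hardware version is that this entire computation happens inside the memory array itself, so the weights never travel to a separate processor.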

To alter the stored value of an individual MRAM cell, the team implemented two distinct techniques. The first was spin-orbit torque, which arises from injecting an electron spin current into a material. The second was voltage-controlled magnetic anisotropy, which adjusts the energy barrier between different magnetic states in a material. Thanks to these innovations, the product-of-sum calculation circuit was reduced to half the size of conventional units.

The performance of their MRAM-based CiM system for BNNs was evaluated using the MNIST handwritten digit dataset, a standard benchmark for image-recognition models. “Our ternarized gradient BNN recorded an accuracy exceeding 88% using Error-Correcting Output Codes (ECOC)-based learning, achieving results comparable to standard BNNs of the same architecture while converging faster during training,” noted Kawahara. “We believe our design will facilitate efficient BNNs on edge devices, allowing them to learn and adapt effectively.”
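In ECOC-based learning, each class is assigned a binary codeword; the network predicts a bit vector, and classification picks the class whose codeword is nearest in Hamming distance, so a few flipped output bits can still be corrected. A toy decoder illustrates the idea (the codebook below is made up for illustration and is not the paper's code matrix):

```python
import numpy as np

def ecoc_decode(output_bits, codebook):
    # Predict the class whose codeword has the smallest Hamming
    # distance to the network's binary output vector.
    dists = np.count_nonzero(codebook != output_bits[None, :], axis=1)
    return int(np.argmin(dists))

# Toy 4-class codebook with 6-bit codewords (illustrative only).
codebook = np.array([
    [0, 0, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1, 0],
    [1, 0, 0, 1, 0, 1],
])
noisy = np.array([1, 1, 0, 0, 0, 0])  # class 1's codeword with one bit flipped
assert ecoc_decode(noisy, codebook) == 1  # still decoded correctly
```

This redundancy is a natural fit for binary networks, whose outputs are already bit vectors.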

This advancement could open doors to powerful IoT devices that harness AI more extensively. This has significant ramifications across numerous fields experiencing rapid development. For instance, wearable health monitoring tools could become smaller, more efficient, and dependable without needing constant cloud connectivity. Similarly, smart homes could perform more intricate tasks and respond more adeptly. In all these potential applications, the proposed design could also diminish energy consumption, thereby aiding sustainability efforts.