Enhancing Robots’ Vision with Radio Waves: A Game-Changer in Robotics

Researchers have introduced PanoRadar, an innovative tool that enhances robots' vision by letting them build detailed, 3D representations of their surroundings from simple radio waves.

A major hurdle in developing reliable perception systems for robots is operating in adverse conditions: traditional vision sensors, such as cameras and LiDAR (Light Detection and Ranging), struggle in heavy smoke and fog.

Nature has shown that vision need not rely on light alone. Many creatures sense their environments without visible light: bats navigate using sound echoes, while sharks detect the electric fields produced by their prey.

Radio waves, whose wavelengths are far longer than those of visible light, penetrate smoke and fog more effectively and can even pass through certain materials, capabilities that surpass human vision. However, robots have typically depended on a limited range of sensors: cameras and LiDAR, which provide clear images but fall short in difficult conditions, or conventional radar, which can see through obstacles but only produces low-resolution images.

In response to this limitation, a group of researchers from the University of Pennsylvania School of Engineering and Applied Science (Penn Engineering) has created PanoRadar, a specialized tool that equips robots with enhanced vision capabilities by converting straightforward radio waves into intricate, 3D images of their surroundings.

“We started with the question of whether we could merge the strengths of both types of sensors,” explains Mingmin Zhao, Assistant Professor in Computer and Information Science. “We aimed to combine the reliability of radio signals, which work well in fog and challenging conditions, with the high resolution provided by visual sensors.”

In a paper set to be presented at the 2024 International Conference on Mobile Computing and Networking (MobiCom), Zhao and his team from the Wireless, Audio, Vision, and Electronics for Sensing (WAVES) Lab alongside the Penn Research In Embedded Computing and Integrated Systems Engineering (PRECISE) Center—including doctoral student Haowen Lai, recent master’s graduate Gaoxiang Luo, and undergraduate assistant Yifei (Freddy) Liu—reveal how PanoRadar utilizes radio waves and artificial intelligence (AI) to allow robots to navigate even the most arduous environments, such as dense smoke-filled buildings or foggy roads.

PanoRadar operates similarly to a lighthouse, sweeping its beam around to scan the entire area. The system consists of a rotating vertical array of antennas that examines its surroundings. As the antennas rotate, they emit radio waves and listen for their reflections from the environment, akin to how a lighthouse’s beam identifies ships and coastal structures.
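
To make the geometry concrete, the sketch below shows how such a rotating vertical array could turn its measurements into 3D points: at each rotation angle, every antenna's echo delay yields a range, and the resulting (azimuth, elevation, range) triple maps to a point in space. The array size, rotation step, and function names are illustrative assumptions, not details from the paper.

```python
import numpy as np

def ranges_to_points(azimuth_rad, elevations_rad, ranges_m):
    """Map one rotation step's echoes to 3D points (spherical to Cartesian)."""
    x = ranges_m * np.cos(elevations_rad) * np.cos(azimuth_rad)
    y = ranges_m * np.cos(elevations_rad) * np.sin(azimuth_rad)
    z = ranges_m * np.sin(elevations_rad)
    return np.stack([x, y, z], axis=-1)

# Sweep a full rotation, lighthouse-style: one vertical column of measurements
# per azimuth step, accumulated into a point cloud.
elevations = np.radians(np.linspace(-20, 20, 8))     # 8-antenna vertical array (assumed)
cloud = []
for azimuth in np.radians(np.arange(0, 360, 1.0)):   # 1-degree rotation steps (assumed)
    ranges = np.full(elevations.shape, 3.0)           # placeholder echoes, all at 3 m
    cloud.append(ranges_to_points(azimuth, elevations, ranges))
cloud = np.concatenate(cloud, axis=0)
print(cloud.shape)                                    # (2880, 3) points per full sweep
```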

Thanks to advanced AI techniques, PanoRadar goes beyond basic scanning. Unlike a lighthouse, which simply illuminates areas as it moves, PanoRadar intelligently combines the data gathered from every rotation angle to improve imaging quality. Although the sensor itself costs a fraction of what typical LiDAR systems do, this rotating approach generates a dense array of virtual measurement points and achieves an imaging resolution comparable to LiDAR. “The critical advancement lies in how we process the radio wave data,” Zhao adds. “Our signal processing and machine learning methods can extract detailed 3D information from the surroundings.”
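
One way to picture those virtual measurement points is classic delay-and-sum beamforming: as the antenna rotates, it occupies many positions, and coherently summing the echoes recorded at those positions behaves like a much larger antenna, sharpening angular resolution. The snippet below is a minimal simulated illustration of that principle; the carrier frequency, aperture size, and processing here are assumptions, not PanoRadar's actual pipeline.

```python
import numpy as np

C = 3e8                      # speed of light (m/s)
FREQ = 77e9                  # assumed mmWave carrier, not from the paper
LAM = C / FREQ

def virtual_positions(radius_m, n_steps):
    """Positions the antenna occupies over one rotation (the 'virtual array')."""
    az = np.linspace(0, 2 * np.pi, n_steps, endpoint=False)
    return np.stack([radius_m * np.cos(az), radius_m * np.sin(az)], axis=-1)

def beamform(signals, positions, look_angle_rad):
    """Coherently sum one narrowband snapshot toward a look direction
    (textbook delay-and-sum, a stand-in for the paper's processing)."""
    direction = np.array([np.cos(look_angle_rad), np.sin(look_angle_rad)])
    phase = 2 * np.pi / LAM * positions @ direction
    return np.abs(np.sum(signals * np.exp(-1j * phase)))

pos = virtual_positions(radius_m=0.05, n_steps=360)
# Simulate a single reflector at 40 degrees and scan the beamformer across angles.
target = np.radians(40)
sig = np.exp(1j * 2 * np.pi / LAM * pos @ np.array([np.cos(target), np.sin(target)]))
scan = [beamform(sig, pos, a) for a in np.radians(np.arange(0, 180, 1.0))]
print(int(np.argmax(scan)))  # peaks at 40: the synthetic aperture resolves the angle
```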

A major obstacle for Zhao’s team was to devise algorithms that maintain high-resolution imaging while the robot is in motion. “To reach LiDAR-level resolution with radio signals, we needed to combine data from various positions with remarkable accuracy,” notes Lai, the lead author. “This becomes particularly demanding while the robot moves, as even minor motion inaccuracies can greatly affect the image quality.”
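
The sensitivity Lai describes comes from the wavelength: a few millimetres of unaccounted-for motion is already a large fraction of a radio wavelength, so the echoes no longer add up coherently. The toy example below, which assumes a 77 GHz carrier purely for illustration, shows how a motion estimate can be used to re-align echo phases before they are combined, and how even sub-millimetre errors in that estimate degrade the result.

```python
import numpy as np

C, FREQ = 3e8, 77e9                      # assumed mmWave carrier, not from the paper
LAM = C / FREQ                           # wavelength of roughly 3.9 mm

def motion_compensate(echoes, est_range_change_m):
    """Undo the phase shift caused by sensor motion before coherent summation.
    A two-way path change of d shifts an echo's phase by 4*pi*d/lambda."""
    return echoes * np.exp(-1j * 4 * np.pi * est_range_change_m / LAM)

true_motion = np.linspace(0, 0.05, 64)                # robot drifts 5 cm across 64 pulses
echoes = np.exp(1j * 4 * np.pi * true_motion / LAM)   # ideal returns from one static reflector

print(abs(echoes.sum()))                                    # small: motion smears the coherent sum
print(abs(motion_compensate(echoes, true_motion).sum()))    # ~64: an exact estimate restores coherence

rng = np.random.default_rng(0)
noisy = true_motion + rng.normal(0.0, 5e-4, true_motion.shape)  # 0.5 mm jitter in the estimate
print(abs(motion_compensate(echoes, noisy).sum()))          # well below 64: tiny errors still hurt
```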

Another challenge was programming the system to comprehend what it perceives. “Indoor settings exhibit regular patterns and shapes,” says Luo. “We capitalized on these patterns to train our AI to interpret the radar inputs, similar to how humans learn to understand their visual environment.” Throughout the training phase, the machine learning model utilized LiDAR data to validate its interpretations, enabling continual self-improvement.
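
In other words, LiDAR acts as the ground truth that supervises the radar-interpreting model during training. The PyTorch sketch below shows what such a supervised setup could look like in its simplest form; the network architecture, input shapes, and loss function are placeholders, not the team's actual model.

```python
import torch
import torch.nn as nn

# Minimal sketch of LiDAR-supervised training: a network maps a radar heatmap
# to a dense depth image, and LiDAR depth serves as the training target.
class RadarToDepth(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),          # predicted depth per pixel
        )

    def forward(self, radar):
        return self.net(radar)

model = RadarToDepth()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

for step in range(100):
    radar = torch.randn(8, 1, 64, 512)        # placeholder radar heatmaps (elevation x azimuth)
    lidar_depth = torch.rand(8, 1, 64, 512)   # placeholder LiDAR ground-truth depth
    pred = model(radar)
    loss = loss_fn(pred, lidar_depth)         # the model corrects itself against LiDAR
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```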

“Our field tests in different buildings indicated that radio sensing excels where traditional sensors falter,” comments Liu. “The system maintains accurate tracking through smoke and can even create maps of areas containing glass walls.” That is because radio waves pass through airborne particles, allowing the system to detect features, such as glass surfaces, that LiDAR cannot capture. PanoRadar’s impressive resolution also enables precise recognition of people, an essential capability for applications such as autonomous driving and rescue operations in dangerous scenarios.

In the future, the team aims to investigate how PanoRadar can be integrated with other sensing technologies, including cameras and LiDAR, to form stronger, multi-modal perception systems for robots. They are also extending their testing to various robotic platforms and self-driving vehicles. “For critical missions, having multiple sensing methods is vital,” Zhao emphasizes. “Every sensor has its pros and cons, and by combining them wisely, we can develop robots that are better prepared to face real-world challenges.”

This research was carried out at the University of Pennsylvania School of Engineering and Applied Science, with support from a faculty startup fund.