Engineers Pioneering Safety for Multirobot Systems

Engineers have created a training approach for multiagent systems, such as large fleets of drones, that ensures they can operate safely in crowded environments.

Drone shows have become a popular spectacle, with swarms of airborne drones flying coordinated paths to form elaborate light displays and striking visual patterns. When everything functions correctly, these shows are truly impressive. However, incidents involving malfunctioning drones, which have occurred recently in places like Florida and New York, pose significant risks to spectators nearby.

These drone show incidents emphasize the difficulties in ensuring safety within “multiagent systems,” a term that refers to coordinated groups of agents like robots, drones, and self-driving vehicles.

A team of engineers from MIT has developed a training technique for these multiagent systems that ensures their safe functionality in densely populated settings. Their research revealed that by training a small group of agents, the safety protocols they learn can easily be applied to a greater number of agents, thereby securing the entire system’s safety.

During real-life tests, the MIT team trained a limited number of small drones to perform various tasks safely, such as changing positions mid-air and landing precisely on designated moving vehicles. They demonstrated through simulations that the trained behavior of a few drones could effectively be scaled up to thousands, allowing a large system of agents to safely perform the same actions.

“This could set a standard for any situation requiring multiple agents, like warehouse automation, search-and-rescue missions, and autonomous vehicles,” explained Chuchu Fan, an associate professor in aeronautics and astronautics at MIT. “This method acts as a protective shield, guiding each agent to fulfill its task while informing them on how to stay safe.”

Fan and her team shared their findings in a research paper published in the journal IEEE Transactions on Robotics. The paper includes contributions from MIT graduate students Songyuan Zhang and Oswin So, along with former postdoc Kunal Garg, who is now an assistant professor at Arizona State University.

Mall margins

When engineers aim for safety in multiagent systems, they commonly assess the potential movements of each agent regarding every other agent involved. This detailed path planning is often a slow and resource-intensive process, and even then, it doesn’t ensure safety.

“In a drone show, each drone follows a prescribed trajectory—defined waypoints and timings—essentially proceeding blindly along the plan,” noted Zhang, the study’s primary author. “Since they’re only aware of their required location and timing, they struggle to adapt to unexpected changes.”

The MIT team sought to establish a training method for a small group of agents, enabling them to maneuver safely in a manner that could efficiently extend to larger groups. Instead of dictating individual paths, their method allows agents to continually assess their safety boundaries, thus permitting various routes to complete their tasks as long as they remain within these established safety margins.

The approach resembles how humans intuitively navigate crowded spaces.

“Imagine you are in a busy shopping mall,” So explained. “You focus on individuals within a close radius, like 5 meters around you, to navigate without colliding with anyone. Our approach adopts a similar localized strategy.”

Safety barrier

In this study, the researchers present GCBF+, which stands for “Graph Control Barrier Function.” A barrier function is a mathematical tool from robotics that defines a safety boundary beyond which an agent risks becoming unsafe. For any given agent, that safety zone can change in real time as it moves among other agents that are themselves in motion.
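
To make the idea concrete, here is a minimal sketch of a hand-written pairwise-distance barrier function and the standard condition a controller must satisfy to keep it nonnegative. The quadratic form of h, the linear alpha * h term, and all names are illustrative assumptions; GCBF+ itself learns the barrier with a neural network defined over a graph of neighboring agents.

```python
import numpy as np

def barrier(p_i, p_j, safe_radius=0.5):
    """Pairwise-distance barrier: positive when agents i and j keep a safe gap."""
    return float(np.dot(p_i - p_j, p_i - p_j)) - safe_radius**2

def barrier_condition_ok(p_i, v_i, p_j, v_j, safe_radius=0.5, alpha=1.0):
    """Check the standard control barrier function condition  dh/dt + alpha * h >= 0.

    For h = ||p_i - p_j||^2 - r^2, the time derivative is
    dh/dt = 2 (p_i - p_j) . (v_i - v_j).
    """
    h = barrier(p_i, p_j, safe_radius)
    h_dot = 2.0 * float(np.dot(p_i - p_j, v_i - v_j))
    return h_dot + alpha * h >= 0.0

# Two drones closing head-on: the condition fails well before they actually touch.
p_i, v_i = np.array([0.0, 0.0]), np.array([1.0, 0.0])
p_j, v_j = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
print(barrier_condition_ok(p_i, v_i, p_j, v_j))  # False
```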

Typically, when designing safety measures for an agent within a multiagent framework, developers consider all potential interactions with every other agent. The MIT team’s technique, however, focuses on calculating safety margins for just a few agents, achieving sufficient accuracy to represent the behavior of a far greater number of agents within the system.

The method determines an agent’s barrier function by first evaluating its “sensing radius,” or the area it can observe with its onboard sensors. As in the shopping mall analogy, the researchers assume that an agent only needs to be aware of the other agents within its sensing radius to avoid collisions.
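
The sensing-radius idea boils down to filtering the full set of agents down to a small local neighborhood before evaluating the barrier. The sketch below assumes global positions are available purely for illustration; a real agent would build this set from its own sensor readings, and the function name is ours.

```python
import numpy as np

def local_neighbors(positions, agent_idx, sensing_radius):
    """Indices of the agents inside agent_idx's sensing radius.

    positions is an (N, d) array; only these nearby agents feed the agent's
    barrier function, which is what lets the trained behavior scale with N.
    """
    dists = np.linalg.norm(positions - positions[agent_idx], axis=1)
    mask = dists < sensing_radius
    mask[agent_idx] = False  # an agent is not its own neighbor
    return np.nonzero(mask)[0]

positions = np.array([[0.0, 0.0], [0.4, 0.1], [3.0, 3.0], [0.2, -0.3]])
print(local_neighbors(positions, agent_idx=0, sensing_radius=1.0))  # [1 3]
```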

Using computer simulations of each agent’s unique mechanical capabilities, the team devises a set of instructions for how an agent and a few similar counterparts should navigate their environment. By running multiple trajectory simulations, they can observe potential collisions or interactions.
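A toy version of that data-collection step might look like the rollout loop below, which assumes simple straight-line (single-integrator) dynamics and flags any state where two agents come too close; the actual work simulates each platform’s real dynamics.

```python
import numpy as np

def rollout(positions, velocities, steps=100, dt=0.05, collision_dist=0.3):
    """Simulate straight-line motion and record every state in which some pair
    of agents comes closer than collision_dist. States flagged this way are
    the kind of examples a learned barrier function must classify as unsafe."""
    unsafe_states = []
    pos = positions.astype(float).copy()
    for _ in range(steps):
        pos = pos + dt * velocities  # single-integrator step (assumed dynamics)
        gaps = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
        np.fill_diagonal(gaps, np.inf)
        if gaps.min() < collision_dist:
            unsafe_states.append(pos.copy())
    return unsafe_states

# Two agents on a head-on course produce a handful of unsafe samples mid-rollout.
pos0 = np.array([[0.0, 0.0], [2.0, 0.0]])
vel = np.array([[1.0, 0.0], [-1.0, 0.0]])
print(len(rollout(pos0, vel)))  # a few steps near the crossing point are unsafe
```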

“Once we obtain these trajectories, we can formulate laws aimed at minimizing safety violations within the current setup,” Zhang explained. “We then enhance the control system to promote safer operations.”
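The “laws aimed at minimizing safety violations” can be read as penalty terms in a training objective. The hinge-style losses below are only a schematic of that idea: the actual GCBF+ objective is defined over a graph neural network and includes additional terms, so the names, margins, and weighting here are placeholders.

```python
import numpy as np

def violation_loss(h_safe, h_unsafe, cbf_condition, margin=0.1):
    """Hinge-style penalties that push a candidate barrier function to be
    positive on safe states, negative on unsafe (colliding) states, and to
    satisfy dh/dt + alpha*h >= 0 along the simulated trajectories."""
    loss_safe = np.maximum(0.0, margin - h_safe).mean()         # want h > 0 on safe states
    loss_unsafe = np.maximum(0.0, margin + h_unsafe).mean()     # want h < 0 on unsafe states
    loss_cond = np.maximum(0.0, margin - cbf_condition).mean()  # want dh/dt + alpha*h >= 0
    return loss_safe + loss_unsafe + loss_cond

# Values of h and of dh/dt + alpha*h evaluated on samples from the rollouts above.
print(violation_loss(h_safe=np.array([0.8, 0.2]),
                     h_unsafe=np.array([-0.5, 0.1]),
                     cbf_condition=np.array([0.3, -0.2])))  # 0.25
```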

The resulting control system can then be deployed on real agents, which continually map their safety zones based on the agents around them and navigate within those zones to complete their assignments.

“Our control mechanism is reactive,” Fan stated. “We don’t predefine a path. Instead, it continuously gathers information about each agent’s position and speed, and those of the other drones around it. This data allows it to formulate real-time plans that adapt dynamically to stay safe.”
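
As a rough illustration of what a reactive, per-step controller does, the sketch below replaces the learned GCBF+ controller with a hand-coded rule: head toward the goal, but add a repulsive correction for every neighbor inside the sensing radius and cap the speed. All names and constants are assumptions made for illustration.

```python
import numpy as np

def safe_velocity(pos, goal, neighbor_positions, sensing_radius=1.0, max_speed=1.0):
    """One reactive control step: steer toward the goal, but add a repulsive
    correction for each neighbor inside the sensing radius so the agent stays
    within its safety margin. A hand-coded stand-in for the learned controller."""
    v = goal - pos
    for q in neighbor_positions:
        offset = pos - q
        d = np.linalg.norm(offset)
        if 1e-6 < d < sensing_radius:
            # Push away from the neighbor, more strongly the closer it is.
            v = v + (offset / d) * (sensing_radius - d) / sensing_radius
    speed = np.linalg.norm(v)
    if speed > max_speed:
        v = v * (max_speed / speed)
    return v

# A neighbor sits just off the straight line to the goal; the command deflects away from it.
pos, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
neighbors = [np.array([0.6, 0.1])]
print(safe_velocity(pos, goal, neighbors))
```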

The team demonstrated GCBF+ using a set of eight Crazyflies—compact, lightweight quadrotor drones tasked with flying and swapping positions mid-air. If the drones aimed for the most direct route, collisions would surely occur. However, after undergoing training with this method, they successfully altered their paths in real time to navigate around one another while respecting their safety zones.

In another demonstration, the drones had to fly around and land on specific Turtlebots—wheeled robots that continuously moved in loops. The Crazyflies managed to avoid collisions while landing on the moving Turtlebots.

“With our approach, we only need to input the drones’ destinations instead of an entire collision-free trajectory; the drones can autonomously decide how to reach their targets safely,” Fan remarked, suggesting that this method could be applied across diverse multiagent systems to assure safety, including in drone shows, warehouse management, autonomous driving, and delivery drones.

This research was partially funded by the U.S. National Science Foundation, MIT Lincoln Laboratory as part of the Safety in Aerobatic Flight Regimes (SAFR) initiative, and the Defense Science and Technology Agency of Singapore.