
AI Surgical Assistant Matches Human Skill After Learning from Procedure Videos

For the first time, a robot trained by watching videos of experienced surgeons has performed surgical procedures as skillfully as human doctors, researchers report.

Using imitation learning to train surgical robots eliminates the need to program each individual movement required for a procedure, bringing robotic surgery a step closer to full autonomy, in which robots could perform complex operations without human assistance.

“It’s truly remarkable to have this model where we simply provide camera input, and it can anticipate the necessary robotic movements for surgery,” stated senior author Axel Krieger. “We think this represents a significant advancement toward a new era in medical robotics.”

The research, led by scientists at Johns Hopkins University, is being presented this week at the Conference on Robot Learning in Munich, an important gathering for robotics and machine learning.

The team, which included researchers from Stanford University, employed imitation learning to teach the da Vinci Surgical System robot how to carry out essential surgical tasks: maneuvering a needle, lifting tissue, and suturing. The model combined imitation learning with a machine learning framework similar to that utilized by ChatGPT. However, while ChatGPT processes language, this model communicates “robot” using kinematics, which mathematically represents the angles of robotic movements.
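
The study's code is not reproduced here, but the idea of pairing a ChatGPT-style sequence model with kinematic outputs can be sketched roughly as follows. This is a minimal, illustrative sketch in PyTorch; the class names, network sizes, and seven-dimensional action layout are assumptions for the example, not the team's actual implementation.

```python
import torch
import torch.nn as nn

class SurgicalImitationPolicy(nn.Module):
    """Toy image-to-kinematics policy: camera frame in, chunk of actions out."""

    def __init__(self, action_dim=7, d_model=256, n_heads=8, n_layers=4, chunk=16):
        super().__init__()
        # Small CNN standing in for whatever visual backbone the real system uses.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        # Transformer decoder predicts a short chunk of future actions,
        # analogous to a language model predicting the next few tokens.
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.queries = nn.Parameter(torch.randn(chunk, d_model))
        # One action vector per step, e.g. end-effector pose change plus gripper angle.
        self.head = nn.Linear(d_model, action_dim)

    def forward(self, images):
        # images: (batch, 3, H, W) wrist-camera frames
        memory = self.encoder(images).unsqueeze(1)              # (B, 1, d_model)
        queries = self.queries.expand(images.size(0), -1, -1)   # (B, chunk, d_model)
        features = self.decoder(queries, memory)
        return self.head(features)                              # (B, chunk, action_dim)

# Behavioral cloning step: regress predicted actions onto the logged kinematics.
policy = SurgicalImitationPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
frames = torch.randn(8, 3, 128, 128)   # stand-in for video frames
actions = torch.randn(8, 16, 7)        # stand-in for recorded robot kinematics
loss = nn.functional.mse_loss(policy(frames), actions)
loss.backward()
optimizer.step()
```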

The researchers trained their model on hundreds of videos recorded by wrist cameras mounted on the arms of da Vinci robots during surgical procedures. These recordings, captured by surgeons around the world, are used for post-operative review and then archived. With nearly 7,000 da Vinci robots in use worldwide and more than 50,000 surgeons trained on the system, there is a vast reservoir of data for robots to “learn” from.
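
As a hedged illustration of how such recordings might become training data, the sketch below pairs each stored camera frame with the short sequence of kinematic readings that follows it. The Demonstration structure and field names are assumptions made for the example, not a description of the da Vinci logging format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Demonstration:
    frames: np.ndarray      # (T, H, W, 3) wrist-camera video frames
    kinematics: np.ndarray  # (T, action_dim) logged joint/pose readings

def to_training_pairs(demo: Demonstration, chunk: int = 16):
    """Pair each frame with the chunk of kinematic actions that follows it."""
    pairs = []
    for t in range(len(demo.frames) - chunk):
        pairs.append((demo.frames[t], demo.kinematics[t + 1 : t + 1 + chunk]))
    return pairs
```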

Though widely used, the da Vinci system is often criticized for being imprecise. The researchers, however, found a way to turn that flawed input into an advantage: the key was training the model to output relative movements rather than absolute ones, which are less reliable.
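
One way to picture the relative-motion idea, under the assumption that actions are expressed as end-effector positions: with absolute commands, any mismatch between the robot's internal model and reality appears directly at the tool tip, while a relative command only asks for a small step from wherever the tool actually is. The function names below are illustrative, not part of any da Vinci interface.

```python
import numpy as np

def absolute_command(predicted_target: np.ndarray) -> np.ndarray:
    # Absolute control: a calibration offset in the robot's kinematics shows up
    # one-for-one as an error at the commanded tool-tip position.
    return predicted_target

def relative_command(current_position: np.ndarray, predicted_delta: np.ndarray) -> np.ndarray:
    # Relative control: the policy only says "move this small amount from here",
    # so a roughly constant calibration offset largely cancels out.
    return current_position + predicted_delta
```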

“All we need is the image input, and this AI system determines the correct action,” explained lead author Ji Woong “Brian” Kim. “We’ve found that even with just a few hundred demonstrations, the model can grasp the procedure and adapt to new scenarios it hasn’t seen before.”

The robot was trained on three key tasks: needle manipulation, tissue lifting, and suturing. In each task, it performed as skillfully as human surgeons.

“What’s impressive is how well the model can pick up skills that we didn’t specifically teach it,” Krieger remarked. “For instance, if it accidentally drops the needle, it knows to automatically pick it up and proceed, which isn’t something I programmed it to do.”

The researchers say the model could be used to quickly train a robot for virtually any surgical procedure. The team is now using imitation learning to teach a robot not just small surgical tasks but a complete operation.

Previously, programming a robot to handle even a simple part of a surgery meant hand-coding every step, a process that could take years; modeling a single type of suturing alone could take a decade, according to Krieger.

“This was very restrictive,” noted Krieger. “The innovation here is that we now only need to gather imitation learning data from various procedures, allowing us to train a robot in just a few days. This accelerates our pursuit of autonomy while minimizing medical errors and enhancing surgical precision.”

Contributors from Johns Hopkins include PhD student Samuel Schmidgall, Associate Research Engineer Anton Deguet, and Associate Professor of Mechanical Engineering Marin Kobilarov. The Stanford University team member is PhD student Tony Z. Zhao.