Cornell researchers have created a robotic feeding system that employs computer vision, machine learning, and multimodal sensing to safely feed individuals with severe mobility impairments, such as those with spinal cord injuries, cerebral palsy, and multiple sclerosis.
“Feeding individuals with disabilities can be challenging, as many are unable to lean forward and need to have food placed directly into their mouths,” explained Tapomayukh “Tapo” Bhattacharjee, an assistant professor of computer science at the Cornell Ann S. Bowers College of Computing and Information Science and the lead developer of the system. “This challenge becomes even more difficult when dealing with individuals who have additional complex medical conditions.”
The research paper, titled “Feel the Bite: Robot-Assisted Inside-Mouth Bite Transfer using Robust Mouth Perception and Physical Interaction-Aware Control,” was presented at the Human-Robot Interaction conference, held March 11-14 in Boulder, Colorado. It was recognized with a Best Paper Honorable Mention, and a demo of the team’s broader robotic feeding system won a Best Demo Award.
Bhattacharjee and his EmPRISE Lab are leaders in assistive robotics and have dedicated years to teaching machines how humans feed themselves. Teaching a machine to identify food items on a plate, pick them up, and transfer them into the mouth of a care recipient is a complex challenge.
“The last 5 centimeters, from the utensil to inside the mouth, is extremely challenging,” Bhattacharjee stated.
Feeding care recipients safely presents a host of challenges: some have very small mouths, less than 2 centimeters wide, while others experience muscle spasms that can occur unexpectedly, even with a utensil already in the mouth, Bhattacharjee said. Some can bite food only in specific areas of the mouth, which they indicate by pushing the utensil with their tongue, he added.
“Current technology only takes a person’s face into account once and assumes they will stay still, which is often not the case and can be very restrictive for those receiving care,” said Rajat Kumar Jenamani, the paper’s lead author and a computer science doctoral student.
To meet these challenges, the researchers equipped the robot with two key features: real-time mouth tracking that adjusts to users’ movements, and a dynamic response mechanism that senses the nature of physical interactions as they happen and reacts accordingly. This lets the system distinguish among sudden spasms, intentional bites, and a user’s attempts to manipulate the utensil inside their mouth.
In a user study, the robotic system fed 13 individuals with diverse medical conditions across three locations: the EmPRISE Lab on the Cornell Ithaca campus, a medical center in New York City, and a care recipient’s home in Connecticut. Users of the robot found it safe and comfortable, the researchers reported.
“This is one of the most comprehensive real-world evaluations of any autonomous robot-assisted feeding system with end users,” Bhattacharjee said.
The team’s robot is a multi-jointed arm that holds a custom-built utensil at its end, one capable of sensing the forces applied to it. The mouth-tracking method, trained on thousands of images spanning various participants’ head poses and facial expressions, combines data from two cameras positioned above and below the utensil; this placement, the researchers said, allows accurate detection of the mouth and overcomes any visual obstruction caused by the utensil itself. The physical interaction-aware response mechanism uses both visual and force sensing to perceive how users are interacting with the robot, Jenamani said.
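The paper details the team’s actual perception and control methods; purely as an illustration of the idea described above, the sketch below shows one way force readings from an instrumented utensil could be fused with a vision-based mouth-tracking signal to tell an intentional bite apart from a sudden spasm or a tongue nudge. Every name, threshold, and rule here is a simplified assumption made for the sketch, not the EmPRISE Lab’s implementation.

```python
# Illustrative sketch only -- not the EmPRISE Lab's actual code.
# Shows one simple way a feeding robot could combine force sensing on the
# utensil with real-time mouth tracking to classify physical interactions.

from dataclasses import dataclass
from enum import Enum, auto


class Interaction(Enum):
    NONE = auto()
    INTENTIONAL_BITE = auto()   # steady force while the mouth closes on the food
    TONGUE_GUIDANCE = auto()    # lateral force nudging the utensil sideways
    SPASM = auto()              # sudden, large force spike


@dataclass
class SensorFrame:
    force_n: tuple[float, float, float]  # utensil force (x, y, z) in newtons (hypothetical axes)
    mouth_open_cm: float                 # mouth aperture estimated by the vision tracker
    force_rate_n_s: float                # rate of change of force magnitude


def classify_interaction(frame: SensorFrame) -> Interaction:
    """Classify the current physical interaction from one fused sensor frame.

    Thresholds are made-up placeholders; a real system would tune or learn
    them per user and reason over a time window, not a single frame.
    """
    fx, fy, fz = frame.force_n
    magnitude = (fx**2 + fy**2 + fz**2) ** 0.5

    if magnitude < 0.2:                          # essentially no contact
        return Interaction.NONE
    if frame.force_rate_n_s > 15.0:              # abrupt spike -> likely involuntary spasm
        return Interaction.SPASM
    if abs(fy) > abs(fz) and abs(fy) > 0.5:      # sideways push -> tongue steering the utensil
        return Interaction.TONGUE_GUIDANCE
    if frame.mouth_open_cm < 1.0 and fz > 0.5:   # mouth closing with steady force -> bite
        return Interaction.INTENTIONAL_BITE
    return Interaction.NONE


def react(interaction: Interaction) -> str:
    """Map each detected interaction to a compliant robot response."""
    return {
        Interaction.SPASM: "hold position and stay compliant until forces subside",
        Interaction.TONGUE_GUIDANCE: "yield and follow the utensil toward the indicated side",
        Interaction.INTENTIONAL_BITE: "keep the utensil steady, then retract once the bite is taken",
        Interaction.NONE: "continue tracking the mouth",
    }[interaction]


if __name__ == "__main__":
    frame = SensorFrame(force_n=(0.1, 0.9, 0.2), mouth_open_cm=2.5, force_rate_n_s=3.0)
    label = classify_interaction(frame)
    print(label, "->", react(label))
```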
“We are giving individuals the ability to control a 20-pound robot using only their tongue,” he explained.
He pointed to the user studies as the most rewarding part of the project, emphasizing the significant emotional impact of the robot on care recipients and their caregivers. In one session, the parents of a daughter with schizencephaly, a rare birth defect that left her quadriplegic, watched her feed herself successfully using the system.
“It was a moment of real emotion; her father raised his cap in celebration, and her mother was almost in tears,” Jenamani said.
While further work is needed to explore the system’s long-term usability, the researchers said, its promising results highlight the potential to improve care recipients’ independence and quality of life.
“It’s amazing and very fulfilling,” Bhattacharjee said.
Paper co-authors include Daniel Stabile, M.S. ’23; Ziang Liu, a doctoral student in the field of computer science; Abrar Anwar of the University of Southern California; and Katherine Dimitropoulou of Columbia University. The National Science Foundation provided the primary funding for this research.