Even the most lifelike androids can evoke discomfort when their facial expressions do not match a coherent underlying emotional state. A conventional approach known as the “patchwork method” has long been used to animate facial movements, but it has practical limitations. A research team has introduced a technology that uses “waveform movements” to generate intricate expressions in real time, without awkward transitions. Because the system mirrors the robot’s internal state, it can improve emotional interaction between robots and people, which may make androids seem more humanlike.
Researchers from Osaka University have created a technology that enables androids to express their emotional states, like “excited” or “sleepy,” by generating facial movements as superimposed diminishing waves.
Even if an android looks so lifelike that it could be mistaken for a human in a photograph, watching it move in person can be disconcerting. It can smile, frown, and produce other recognizable expressions, but it is hard to infer a consistent emotional state behind those expressions, leaving observers uncertain about what it is really feeling and creating a sense of unease.
Traditionally, a “patchwork method” has been employed for robots that can move many facial components, allowing them to display facial expressions over extended periods. This approach requires preparing multiple predefined action scenarios and switching between them as needed while keeping the transitions from looking unnatural.
In practice, this poses several difficulties: intricate action scenarios must be authored in advance, noticeably awkward movements during transitions must be minimized, and the actions must be fine-tuned to subtly control the emotions being conveyed.
In this research, lead author Hisashi Ishihara and his team developed a dynamic facial expression synthesis technique based on “waveform movements.” The method represents the individual gestures that make up facial expressions, such as “breathing,” “blinking,” and “yawning,” as separate waves. These waves are propagated to the corresponding facial regions and superimposed in real time to produce complex facial movements. This eliminates the need to prepare complex action data in advance and avoids noticeable transitions between movements.
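To make the superposition idea concrete, here is a minimal Python sketch. The gesture names come from the article, but the actuator names (“jaw,” “eyelid”), the waveform shapes, and all parameter values are illustrative assumptions, not the team’s actual implementation.

```python
import math

def breathing(t):
    # Slow sinusoid: a gentle, periodic rise and fall (assumed shape).
    return {"jaw": 0.2 * math.sin(2 * math.pi * 0.25 * t)}

def blinking(t):
    # Brief decaying pulse on the eyelid once per ~4-second cycle (assumed shape).
    phase = t % 4.0
    return {"eyelid": math.exp(-((phase - 0.2) ** 2) / 0.005)}

def synthesize(gestures, t):
    """Superimpose all active gesture waves, per actuator, at time t."""
    command = {}
    for gesture in gestures:
        for actuator, value in gesture(t).items():
            command[actuator] = command.get(actuator, 0.0) + value
    return command

# One control-loop tick at t = 1.0 s: each gesture contributes its wave,
# and the per-actuator sums form the commanded facial pose.
print(synthesize([breathing, blinking], 1.0))
```

Because each control tick simply sums whichever waves are active, no scripted transition between whole-face scenarios is ever needed.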
Additionally, “waveform modulation” adjusts the individual waveforms according to the robot’s internal state, so shifts in mood are reflected immediately in its facial movements.
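Continuing the sketch above, waveform modulation might look like the following, where a hypothetical arousal parameter rescales a wave’s frequency and amplitude. The specific mapping is an assumption for illustration, not the paper’s parameterization.

```python
import math

def modulated_breathing(t, arousal):
    """Breathing wave whose frequency and amplitude scale with arousal.

    'arousal' in [0, 1] is a hypothetical internal-state parameter;
    the scaling factors below are illustrative assumptions.
    """
    freq = 0.25 * (1.0 + arousal)        # Hz: faster breathing when excited
    amp = 0.2 * (1.0 + 0.5 * arousal)    # deeper breathing when excited
    return {"jaw": amp * math.sin(2 * math.pi * freq * t)}

# The same instant rendered in two moods:
print(modulated_breathing(1.0, 0.1))  # "sleepy": slow, shallow wave
print(modulated_breathing(1.0, 0.9))  # "excited": faster, deeper wave
```

Because the modulation acts on the waves themselves rather than on scripted scenarios, a change in internal state, say from “sleepy” to “excited,” can propagate to the face on the very next control tick.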
“Progressing this research on dynamic facial expression synthesis will allow robots capable of elaborate facial movements to display more lively expressions and convey mood changes that respond to their surroundings, including interactions with humans,” states senior author Koichi Osuka. “This could significantly enhance emotional exchanges between humans and robots.”
Ishihara adds, “Instead of simply creating surface-level movements, further advancements in a system that conveys internal emotions through every aspect of an android’s actions could lead to the emergence of androids perceived as having genuine feelings.”
With the ability to adaptively express emotions, this technology is anticipated to greatly improve the effectiveness of communication robots, enabling them to share information with humans in a more natural and relatable way.