Researchers investigated how infants develop purposeful behavior by attaching a colorful mobile to each infant’s foot and tracking their movements with a Vicon 3D motion capture system. The study tested whether artificial intelligence (AI) could detect subtle changes in infants’ movement patterns. AI methods, particularly the deep learning model 2D-CapsNet, reliably distinguished the phases of the experiment, and foot movements changed more markedly across phases than those of any other body part. Tracking how AI classification accuracy fluctuates for each infant gives researchers a new way to identify when and how infants begin to engage with their environment.
Recent advances in computing and artificial intelligence, together with insights into infant learning, suggest that machine learning and deep learning techniques could help researchers study how infants progress from random exploratory movements to intentional actions. Most existing studies have concentrated on infants’ spontaneous movements, distinguishing between fidgety and non-fidgety behaviors.
Although early movements may appear disorganized, meaningful patterns emerge as infants interact with their environment. What remains poorly understood, however, is how infants deliberately engage with their surroundings and what principles govern their goal-directed actions.
In a baby-mobile experiment, a paradigm used in developmental studies since the late 1960s, researchers from Florida Atlantic University and their collaborators examined how infants begin to act with intention. In this experiment, a colorful mobile is gently tethered to an infant’s foot: when the baby kicks, the mobile moves, linking the infant’s actions to what it sees. This setup helps researchers understand how infants control their movements and discover their capacity to affect their environment.
In this recent study, the researchers evaluated whether AI tools could detect intricate shifts in infant movement patterns. Tracking infants’ movements with a Vicon 3D motion capture system allowed the movements to be classified into different types, from spontaneous actions to responses evoked by the mobile’s motion. By applying a range of AI techniques, the researchers examined which methods best captured the subtleties of infant behavior across conditions and how movements changed over time.
The study’s results, published in Scientific Reports, highlight the usefulness of AI for understanding early infant development and interaction. Machine learning and deep learning approaches accurately classified five-second segments of 3D infant movements according to different phases of the experiment. Among these methodologies, the deep learning model 2D-CapsNet achieved the best performance. Notably, foot movements exhibited the highest accuracy rates across all methods, indicating that these movements changed the most significantly compared to other body parts throughout the experimental phases.
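To make that pipeline concrete, the following minimal Python sketch shows how five-second windows of 3D marker trajectories might be segmented and classified by experimental phase. Everything here is an assumption for illustration: the data are synthetic, the 100 Hz sampling rate is assumed, and a random forest stands in for the study’s 2D-CapsNet.

```python
# Minimal sketch: segment a 3D marker trajectory into five-second windows
# and classify each window by experimental phase. Synthetic data; a random
# forest stands in for the study's 2D-CapsNet. Sampling rate is assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

RATE_HZ = 100                 # assumed Vicon sampling rate
WINDOW = 5 * RATE_HZ          # five-second segments, as in the study

rng = np.random.default_rng(0)
# Phases: 0 = spontaneous baseline, 1 = connected to the mobile, 2 = disconnected.
X, y = [], []
for phase in range(3):
    for _ in range(40):  # 40 synthetic windows per phase
        traj = rng.normal(scale=1.0 + 0.5 * phase, size=(WINDOW, 3))  # (x, y, z)
        speed = np.linalg.norm(np.diff(traj, axis=0), axis=1).mean()  # mean frame-to-frame speed
        spread = np.ptp(traj, axis=0)                                 # movement range per axis
        X.append(np.concatenate([[speed], spread]))
        y.append(phase)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f"phase-classification accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```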
“This is an important finding because the AI systems had no prior knowledge about the experiment or which body part was linked to the mobile. This demonstrates that the feet—acting as end effectors—are the areas most affected by interaction with the mobile,” explained Scott Kelso, Ph.D., co-author and Glenwood and Martha Creech Eminent Scholar in Science at the Center for Complex Systems and Brain Sciences at FAU’s Charles E. Schmidt College of Science. “In essence, the way infants connect with their environment has the greatest impact at the contact points with the world. In this case, it was through their ‘feet first.’”
The 2D-CapsNet model recorded an accuracy rate of 86% when analyzing foot movements and effectively captured the intricate relationships between different body parts during motion. Across all tested methods, foot movements consistently demonstrated the highest accuracy, roughly 20% better than movements of the hands, knees, or the entire body.
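In evaluation terms, that comparison comes down to training and scoring a separate classifier on each marker group and ranking the groups by held-out accuracy. The sketch below shows the shape of such a loop; the marker groups and their per-phase “signal strengths” are illustrative assumptions, not the study’s data.

```python
# Sketch of a per-body-part comparison: one classifier per marker group,
# ranked by cross-validated accuracy. The `signal` values only control how
# separable the synthetic features are; they are not measured quantities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_windows, n_features = 240, 4
phases = rng.integers(0, 3, size=n_windows)  # phase label per window

marker_groups = {"feet": 1.5, "hands": 0.5, "knees": 0.5, "whole body": 0.7}

for part, signal in marker_groups.items():
    # Features whose phase dependence scales with `signal` (assumption).
    X = rng.normal(size=(n_windows, n_features)) + signal * phases[:, None]
    acc = cross_val_score(RandomForestClassifier(random_state=0), X, phases, cv=5).mean()
    print(f"{part:>10}: mean CV accuracy = {acc:.2f}")
```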
“We discovered that infants explored more after being disconnected from the mobile compared to before they had the chance to control it. It appears that losing the ability to manipulate the mobile heightened their desire to engage with the world in search of a way to reconnect,” noted Aliza Sloan, Ph.D., co-author and postdoctoral research scientist at FAU’s Center for Complex Systems and Brain Sciences. “Nevertheless, some infants displayed movement patterns during this disconnected phase that hinted at their earlier interactions with the mobile. This implies that only certain infants comprehended their relationship with the mobile sufficiently to sustain those movement patterns, anticipating that they would still yield a reaction from the mobile even post-disconnection.”
The researchers suggest that if an infant’s movements can still be classified with high accuracy during the disconnected phase, the infant may have learned something from its prior interactions with the mobile. Different types of movement, however, may imply different things about what was learned.
“It is crucial to recognize that studying infants is more complex than studying adults because infants cannot communicate verbally,” said Nancy Aaron Jones, Ph.D., co-author, professor in FAU’s Department of Psychology, director of the FAU WAVES Lab, and a member of the Center for Complex Systems and Brain Sciences at the Charles E. Schmidt College of Science. “Adults can follow instructions and explain their actions; infants cannot. That’s where AI can play a pivotal role. AI helps researchers analyze subtle alterations in infant movements, and even their stillness, shedding light on how they think and learn before they can speak. Their movements also aid us in understanding the significant individual differences that arise as infants grow.”
By observing changes in AI classification accuracy for each infant, researchers acquire a novel method to discern when and how infants begin to engage with the world.
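Concretely, such an analysis reduces to grouping the model’s window-level predictions by infant and computing an accuracy score for each one. A minimal sketch, with placeholder labels and predictions standing in for real experimental output:

```python
# Sketch: per-infant classification accuracy as a lens on individual
# differences. Labels and predictions are simulated placeholders.
import numpy as np

rng = np.random.default_rng(2)
infant_ids = np.repeat(np.arange(1, 6), 30)          # 5 infants, 30 windows each
y_true = rng.integers(0, 3, size=infant_ids.size)    # true phase per window
correct = rng.random(infant_ids.size) < 0.8          # simulate ~80% agreement
y_pred = np.where(correct, y_true, (y_true + 1) % 3) # wrong phase otherwise

for infant in np.unique(infant_ids):
    mask = infant_ids == infant
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"infant {infant}: accuracy = {acc:.2f}")
```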
“While previous AI approaches primarily focused on categorizing spontaneous movements linked to clinical outcomes, integrating theory-based experiments with AI can enhance our assessments of infant behavior relative to their specific contexts,” said Kelso. “This advancement can improve risk identification, diagnosis, and treatment of disorders.”
The co-authors of this study include Massoud Khodadadzadeh, Ph.D., previously at Ulster University in Derry, Northern Ireland, and now at the University of Bedfordshire, United Kingdom; and Damien Coyle, Ph.D., at the University of Bath, United Kingdom.
This research received support from Tier 2 High Performance Computing resources from the Northern Ireland High-Performance Computing facility, funded by the UK Engineering and Physical Sciences Research Council; a UK Research and Innovation Turing AI Fellowship (2021-2025), funded by the Engineering and Physical Sciences Research Council; a Vice Chancellor’s Research Scholarship; the Institute for Research in Applicable Computing at the University of Bedfordshire; the FAU Foundation (Eminent Scholar in Science); and the United States National Institutes of Health.