Exploring the Effects of Artificial Intelligence on Youth Mental Well-Being

Experts are calling for a clear framework for AI research amid the rapid uptake of artificial intelligence among children and teenagers who access the internet and social media on digital devices.

A newly published peer-reviewed article by researchers at the Oxford Internet Institute (OII), University of Oxford, underscores the need for a well-defined framework for AI research, given the fast-growing use of artificial intelligence by young people engaging with the internet and social media on their devices.

The paper’s recommendations stem from a thorough review of gaps in existing research on how digital technologies affect young people’s mental well-being, alongside an in-depth examination of the challenges that contribute to those gaps.

Titled “From Social Media to Artificial Intelligence: Enhancing Research on Digital Harms in Youth” and published on January 21 in The Lancet Child & Adolescent Health, the paper advocates a “critical reevaluation” of how we study the influence of internet technologies on young people’s mental health, and suggests ways future AI research can avoid the mistakes made in social media studies. Current issues include conflicting results and a notable absence of long-term, causal research.

The analysis and recommendations offered by the researchers from Oxford are organized into four main sections:

  • A concise review of recent studies regarding the effects of technology on the mental health of children and teenagers, spotlighting significant evidence limitations.
  • An examination of the difficulties in designing and interpreting research that they believe contribute to these limitations.
  • Suggestions for enhancing research methodologies to tackle these challenges, especially regarding their application to AI and children’s welfare.
  • Specific actions for fostering cooperation among researchers, policymakers, tech companies, families, and young individuals.

“Research addressing the effects of AI, along with data for policymakers and guidance for caregivers, needs to learn from the challenges experienced in social media research,” remarked Dr. Karen Mansfield, postdoctoral researcher at the OII and the lead author of the article. “Young individuals are already exploring new methods of engaging with AI, and without a robust framework for collaboration among various parties, evidence-based policies regarding AI will fall behind, similar to the situation seen with social media.”

The article points out that social media’s influence is often treated as a single causal factor, overlooking the many ways social media is used and the contextual factors that shape both technology use and mental health. If this approach is not reconsidered, upcoming research on AI may fall victim to a new wave of media anxiety, as occurred with social media. Other hurdles include measures of social media use that quickly become outdated and data that often overlook the most vulnerable young people.

The authors argue that successful research on AI should pose questions that do not presuppose AI is a problem, use designs capable of supporting causal inference, and prioritize the most relevant exposures and outcomes.

The article concludes that as young people engage with AI in new ways, research and evidence-based policy will struggle to keep pace. However, if methods for studying AI’s impact on youth build on the lessons learned from past research gaps, we can better manage how AI is integrated into online platforms and how it is used.

“We advocate for a collaborative evidence-based framework that will hold major tech companies accountable in a proactive, gradual, and informative manner,” stated Professor Andrew Przybylski, OII Professor of Human Behaviour and Technology and a contributing author to the paper. “If we neglect to build on previous experiences, we may find ourselves in the same position in another decade, feeling powerless regarding the role of AI much like we currently do with social media and smartphones. We must take proactive measures now to ensure that AI is safe and advantageous for children and adolescents.”