Researchers have developed a new structured description of internal world models that enables collaboration across disciplines. Internal world models help in making predictions about new situations based on past experience and in navigating unfamiliar environments. The formalized perspective makes it possible to compare world models in humans, animals, and AI, and to pinpoint where current AI systems fall short.
A team of researchers led by Professor Ilka Diester, an expert in Optophysiology and spokesperson for the BrainLinks-BrainTools research center at the University of Freiburg, has developed a structured description of internal world models, published in the journal Neuron. The formalized approach improves our understanding of how internal world models arise and function, and allows world models in humans, animals, and artificial intelligence (AI) to be compared systematically. This comparison shows where AI still falls short of human intelligence and points to possible directions for future improvement. Eleven researchers from four faculties at the University of Freiburg contributed to the interdisciplinary publication.
Internal world models: Predictions based on experience
Humans and animals derive general principles from their daily experiences and construct internal models that help them navigate novel contexts. These models enable them to make predictions in unfamiliar situations and to respond appropriately. Familiarity with other cities that share common features, such as a city center, pedestrian zones, and public transport, for example, helps a person orient themselves in a new city. Likewise, past social experiences guide appropriate behavior during a dinner at a restaurant.
Clarifying world models through a new structured description
In their recent study, the researchers describe internal world models across species in terms of three interconnected abstract spaces: the task space, the neural space, and the conceptual space. The task space encompasses an individual's experiences, while the neural space comprises the various brain states, measurable at levels ranging from molecules to neuronal activity. In AI, the neural space corresponds to the activity of the nodes in an artificial neural network. The conceptual space links the states of the task space and the neural space, reflecting the overall state of an individual as it integrates internal processes and external influences. The dynamic transitions between these states make individual internal world models scientifically tangible.
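To make the three-space description more concrete, the following is a minimal, hypothetical Python sketch of linked state spaces. The class names, the concatenation rule linking the spaces, and the transition function are illustrative assumptions, not the formalism from the Neuron paper.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the three interconnected spaces described above,
# reduced to toy vectors. None of these names or rules come from the
# paper; they only illustrate the idea of linked state spaces.

@dataclass
class WorldModelState:
    task: tuple     # task space: the current experience or situation
    neural: tuple   # neural space: brain states, or node activity in an ANN
    concept: tuple = field(init=False)  # conceptual space: links the two

    def __post_init__(self):
        # Toy linking rule: the conceptual state integrates external (task)
        # and internal (neural) influences, here simply by concatenation.
        self.concept = self.task + self.neural

def transition(new_task: tuple, new_neural: tuple) -> WorldModelState:
    """A dynamic state transition: a new experience together with a new
    brain state yields a new conceptual state."""
    return WorldModelState(task=new_task, neural=new_neural)

# Usage: an individual moves from a familiar to a novel situation.
s0 = WorldModelState(task=(0.0, 1.0), neural=(0.3, 0.7))
s1 = transition(new_task=(1.0, 1.0), new_neural=(0.4, 0.6))
print(s0.concept, "->", s1.concept)
```

The point of the sketch is only the structure: a conceptual state is derived jointly from a task state and a neural state, and each new experience triggers a transition to a new conceptual state.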
Addressing deficiencies in internal world models
With this structured perspective, scientists can now analyze internal world models across disciplines and study how they develop and evolve. Insights from research on humans and animals are intended to help improve AI. Current AI systems often cannot verify the plausibility of their own predictions. Widely used language models such as ChatGPT, for example, operate mainly as pattern recognizers and lack robust planning abilities. Effective planning, however, is essential for evaluating and adjusting strategies in unfamiliar situations and for preventing potential harm. The researchers also suspect that deficits in internal world models may contribute to mental disorders such as depression or schizophrenia. A deeper understanding of world models could therefore enable a more targeted use of medication and therapy.
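As a rough illustration of the capability described as missing, here is a hypothetical Python sketch of one-step model-based planning with a plausibility check: candidate actions are evaluated against an internal world model, and actions whose predicted outcomes are implausible are discarded. The predict and plausible functions and their rules are invented for illustration and do not come from the paper.

```python
# Hypothetical sketch of planning with an internal world model.
# All names, bounds, and rules here are illustrative assumptions.

def predict(state: float, action: float) -> float:
    """Toy internal world model: predicts the next state."""
    return state + action

def plausible(state: float) -> bool:
    """Toy plausibility check: reject predictions outside known bounds."""
    return -10.0 <= state <= 10.0

def plan(state: float, candidate_actions: list[float], goal: float) -> float:
    """Evaluate each candidate action against the world model, discard
    actions with implausible predicted outcomes, and pick the action
    whose outcome lands closest to the goal."""
    best_action, best_dist = None, float("inf")
    for action in candidate_actions:
        outcome = predict(state, action)
        if not plausible(outcome):
            continue  # adjust the strategy: skip implausible outcomes
        dist = abs(goal - outcome)
        if dist < best_dist:
            best_action, best_dist = action, dist
    return best_action

# Usage: the implausible action (50.0) is filtered out before selection.
print(plan(state=0.0, candidate_actions=[-2.0, 3.0, 50.0], goal=4.0))  # -> 3.0
```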