Is it possible for a computer to learn a language the way a child does? A recent study published in the prominent journal Computational Linguistics by professors Katrien Beuls from the University of Namur and Paul Van Eecke from the AI Lab at Vrije Universiteit Brussel provides new insights into this question. The researchers call for a significant change in how artificial intelligence learns and processes language.
“Children acquire their native language by interacting with the people around them. Through play and experimentation with language, they strive to understand the intentions of those they converse with. This interactive process, where language is learned through meaningful engagement, is fundamental to human language acquisition,” explains Katrien Beuls.
“In contrast, today’s large language models (LLMs), like ChatGPT, learn language differently,” adds Paul Van Eecke. “They analyze vast amounts of text to identify common word pairings, allowing them to produce text that often resembles that of human writers. While this makes them highly effective for various text-related tasks such as summarizing, translating, or answering questions, these models also face significant limitations. They can generate inaccuracies, exhibit biases, struggle with human-like reasoning, and require substantial data and energy for development and operation.”
The researchers suggest an alternative approach where artificial agents learn language the way humans do—through engaging in meaningful communicative interactions in their environment. Their experiments show how these agents develop language structures that are closely linked to their physical surroundings and sensory experiences. This leads to language models that:
- Are less vulnerable to generating inaccuracies and biases, as their understanding of language is based on real-world interactions.
- Utilize data and energy more effectively, resulting in a reduced ecological impact.
- Have a deeper connection to meaning and intention, which helps them understand language and context in a manner similar to humans.
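The kind of communicative interaction the researchers describe is often studied through "language game" experiments, in which a population of agents converges on a shared vocabulary purely through repeated local interactions. As an illustrative sketch only (not the study's actual implementation; all names and parameters here are invented for the example), a minimal naming game shows the basic dynamic: a speaker names an object, the hearer adopts or aligns, and over many interactions the population settles on one word per object.

```python
import random

random.seed(0)

# Toy setup: a small world of objects and a population of agents.
# These numbers are arbitrary, chosen only to make the example run quickly.
OBJECTS = ["obj_a", "obj_b", "obj_c"]
N_AGENTS = 10
N_ROUNDS = 3000

def new_word():
    """Invent a random two-syllable word."""
    return "".join(random.choice("bdgklmnprst") + random.choice("aeiou")
                   for _ in range(2))

# Each agent's lexicon: object -> list of candidate words it has heard or coined.
agents = [{obj: [] for obj in OBJECTS} for _ in range(N_AGENTS)]

def interact():
    """One naming-game round; returns True on communicative success."""
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    topic = random.choice(OBJECTS)
    if not agents[speaker][topic]:
        # Speaker has no word for this object yet: coin one.
        agents[speaker][topic].append(new_word())
    word = random.choice(agents[speaker][topic])
    if word in agents[hearer][topic]:
        # Success: both agents discard competing words (alignment).
        agents[speaker][topic] = [word]
        agents[hearer][topic] = [word]
        return True
    # Failure: the hearer adopts the unknown word as a candidate.
    agents[hearer][topic].append(word)
    return False

successes = [interact() for _ in range(N_ROUNDS)]
early = sum(successes[:300]) / 300
late = sum(successes[-300:]) / 300
print(f"success rate: first 300 rounds = {early:.2f}, last 300 rounds = {late:.2f}")
```

In a typical run, communicative success starts low (agents coin and spread competing words) and rises toward 1.0 as the alignment step prunes competitors, leaving one shared word per object. This grounding of words in joint interaction, rather than in text statistics, is the core contrast the researchers draw with today's LLMs.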
“Incorporating communicative and context-based interactions into AI models is an essential advancement in creating the next generation of language models. This research provides a promising direction towards language technologies that better reflect how humans comprehend and utilize language,” the researchers conclude.