Large language models (LLMs) have evolved rapidly in recent years and are becoming an integral part of daily life through applications such as ChatGPT. A recent article published in Nature Human Behaviour examines the opportunities and challenges that LLMs pose for our capacity for collective thinking, decision-making, and problem-solving. An interdisciplinary team of 28 researchers, led by experts from Copenhagen Business School and the Max Planck Institute for Human Development in Berlin, offers recommendations to help researchers and policymakers ensure that LLMs are designed to augment, rather than undermine, human collective intelligence.
What do you do when you are unsure what “LLM” means? You might quickly look it up online or ask a colleague. In everyday life, we constantly draw on collective intelligence: the shared knowledge of groups. By pooling individual expertise, collective intelligence enables groups to achieve results that surpass what any single person, even a specialist, could accomplish alone. It is pivotal to the success of groups of all kinds, from small work teams to large online communities like Wikipedia, and to society as a whole.
LLMs are artificial intelligence systems that use large datasets and deep learning to analyze and generate text. The article examines how LLMs can strengthen collective intelligence and explores their potential effects on teams and society. “As LLMs increasingly shape how we access information and make decisions, it is essential to find a balance between leveraging their advantages and protecting against their risks. Our article outlines how LLMs can enhance human collective intelligence while also discussing possible negative outcomes,” explains Ralph Hertwig, a co-author and Director at the Max Planck Institute for Human Development, Berlin.
The researchers highlight several benefits of LLMs, such as improving accessibility in group discussions. They can help eliminate language barriers through translation services and writing support, enabling individuals from diverse backgrounds to engage equally in conversations. Additionally, LLMs can speed up idea generation, aid in forming opinions, summarize varying viewpoints, and assist in reaching consensus.
However, LLMs also carry considerable risks. For instance, they may diminish people’s willingness to contribute to knowledge-sharing platforms like Wikipedia and Stack Overflow, and a growing reliance on proprietary models could threaten the openness and diversity of our knowledge resources. Another concern is the risk of false consensus and pluralistic ignorance, in which people mistakenly come to believe that a majority accepts a given view or norm. “Since LLMs learn from data available online, there’s a risk that minority perspectives may be overlooked, leading to a skewed sense of agreement and sidelining certain viewpoints,” highlights Jason Burton, the lead author and assistant professor at Copenhagen Business School and associate research scientist at MPIB.
“The significance of this article lies in demonstrating the need for proactive thinking about how LLMs affect the online information landscape and, consequently, our collective intelligence—both positively and negatively,” summarizes co-author Joshua Becker, an assistant professor at University College London. The authors advocate for increased transparency in LLM development, urging the disclosure of training data sources and suggesting that LLM developers undergo external evaluations. This approach would help clarify the development processes of LLMs and minimize potentially harmful consequences.
Moreover, the article provides concise information boxes on topics related to LLMs, including the role of collective intelligence in training these models, and discusses how to ensure diverse representation during LLM development. Two research-focused information boxes illustrate how LLMs can be used to simulate human collective intelligence and raise open research questions, such as how to prevent knowledge homogenization and how to allocate credit and responsibility when collective outcomes are co-created with LLMs.
Key Points:
- LLMs are transforming how people search for, utilize, and communicate information, which can influence the collective intelligence of teams and society as a whole.
- While LLMs offer new opportunities for enhancing collective intelligence, they also present risks that could threaten the diversity of the information landscape.
- To ensure that LLMs strengthen rather than diminish collective intelligence, the models’ technical details must be made transparent and monitoring mechanisms must be put in place.
Participating Institutions:
- Department of Digitalization, Copenhagen Business School, Frederiksberg, DK
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, DE
- Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, DE
- Humboldt-Universität zu Berlin, Department of Psychology, Berlin, DE
- Center for Cognitive and Decision Sciences, University of Basel, Basel, CH
- Google DeepMind, London, UK
- UCL School of Management, London, UK
- Centre for Collective Intelligence Design, Nesta, London, UK
- Bonn-Aachen International Center for Information Technology, University of Bonn, Bonn, DE
- Lamarr Institute for Machine Learning and Artificial Intelligence, Bonn, DE
- Collective Intelligence Project, San Francisco, CA, USA
- Center for Information Technology Policy, Princeton University, Princeton, NJ, USA
- Department of Computer Science, Princeton University, Princeton, NJ, USA
- School of Sociology, University College Dublin, Dublin, IE
- Geary Institute for Public Policy, University College Dublin, Dublin, IE
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Science of Intelligence Excellence Cluster, Technische Universität Berlin, Berlin, DE
- School of Information and Communication, Insight SFI Research Centre for Data Analytics, University College Dublin, Dublin, IE
- Oxford Internet Institute, Oxford University, Oxford, UK
- Deliberative Democracy Lab, Stanford University, Stanford, CA, USA
- Tepper School of Business, Carnegie Mellon University, Pittsburgh, PA, USA