A new essay delves into the conditions necessary for consciousness to exist, highlighting a key difference that sets computers apart.
Would it be advantageous for artificial intelligence to possess consciousness? Dr. Wanja Wiese from the Institute of Philosophy II at Ruhr University Bochum, Germany, argues that it would not, for several reasons. In an essay, he examines the prerequisites for consciousness and compares human brains with computers, pointing out significant differences in brain organization, memory, and computing units. He argues that the causal structure in particular may be a factor relevant to consciousness. The essay was published on June 26, 2024, in the journal Philosophical Studies.
Two different approaches
When contemplating the potential consciousness of artificial systems, two main approaches arise. One asks how likely it is that current AI systems are conscious and what would need to be added for consciousness to become possible. The other asks which types of AI systems are unlikely to be conscious and how to rule out the possibility that certain systems could become conscious.
Wanja Wiese follows the second approach in his research. He aims to minimize the risk of unintentionally creating artificial consciousness, given the uncertainty surrounding the moral permissibility of creating it. Additionally, this approach aims to prevent deception by AI systems that only appear conscious. This is crucial as many individuals tend to attribute consciousness to chatbots, despite expert consensus that current AI systems lack consciousness.
The free energy principle
In his essay, Wiese asks how to determine if essential conditions for consciousness exist that conventional computers do not fulfill. A common denominator among all conscious beings is being alive. While being alive seems overly restrictive as a requirement for consciousness, some argue that certain conditions necessary for life may also be crucial for consciousness.
Wiese references British neuroscientist Karl Friston’s free energy principle. This principle suggests that the processes maintaining a self-organizing system, like a living organism, involve a form of information processing. While these processes regulate vital parameters in humans, the same information processing could be replicated in a computer. However, the computer would simulate rather than regulate these processes.
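In formal terms (not spelled out in the article), the free energy principle holds that a self-organizing system behaves as if it minimizes variational free energy. For observations $o$ and an approximate posterior $q(s)$ over hidden states $s$, this quantity is standardly written as:

```latex
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o)
```

Minimizing $F$ simultaneously makes $q(s)$ a good approximation of the true posterior and bounds the surprisal $-\ln p(o)$ of the system's observations. This is the "form of information processing" the article refers to: it can be specified abstractly, and hence also run on a computer, independently of the physiology it describes.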
Most differences are not relevant to consciousness
Wiese suggests that consciousness may follow a similar pattern. If consciousness contributes to an organism's survival, then the physiological processes sustaining the organism must leave a trace in the organism's information processing. Wiese terms this trace the "computational correlate of consciousness," and it can theoretically be implemented in a computer. However, additional conditions may be necessary for a computer not merely to simulate conscious experiences but to replicate them.
Wiese therefore discusses the differences between how conscious beings and computers implement the computational correlate of consciousness. He argues that many of these differences are irrelevant to consciousness; the energy efficiency of the brain, for instance, is unlikely to be a prerequisite for consciousness.
However, a notable difference lies in the causal structure of computers and brains. A conventional computer loads data from memory, processes it, and stores the result back, step by step; in the brain, by contrast, there is no such separation, and its areas are densely interconnected. Wiese suggests that this divergence in causal connectivity could be crucial to understanding consciousness.
Wiese emphasizes that the free energy principle makes it possible to describe characteristics of conscious beings in a way that could, in principle, be realized in artificial systems, yet is absent in large classes of them, such as computer simulations. Consciousness prerequisites can thus be captured more precisely for artificial systems.