New Research Debunks AI’s Existential Threat to Humanity

Large Language Models (LLMs) remain entirely manageable through human prompts and do not exhibit 'emergent abilities', meaning they cannot independently draw new conclusions or insights. Simply increasing the size of these models does not grant them new reasoning skills, implying they will not develop dangerous capabilities and do not represent an existential risk. A recent study clarifies LLMs' abilities and limitations, highlighting the need for well-structured prompts to achieve the best performance.

According to new research from the University of Bath and the Technical University of Darmstadt in Germany, ChatGPT and other large language models (LLMs) cannot independently learn or acquire new skills, indicating they do not pose an existential threat to humanity.

The research, published today in the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) – the leading global conference in natural language processing – reveals that while LLMs can superficially follow instructions and demonstrate proficiency in language, they require explicit instruction to develop new skills. They are therefore inherently manageable, predictable, and safe.

The research team concluded that LLMs, even when trained on ever larger datasets, can continue to be deployed without safety concerns. However, the technology can still be misused.

As these models evolve, they are expected to generate more complex language and improve at following detailed prompts, yet they are unlikely to acquire advanced reasoning capabilities.

“The dominant narrative suggesting that this type of AI poses a threat to humanity hinders the wider adoption and development of these technologies and distracts from genuine issues that deserve our attention,” stated Dr. Harish Tayyar Madabushi, a computer scientist from the University of Bath and co-author of the new study on the 'emergent abilities' of LLMs.

The research team, led by Professor Iryna Gurevych from the Technical University of Darmstadt in Germany, conducted experiments to evaluate LLMs’ capabilities in tasks they had never encountered before – referred to as emergent abilities.

For instance, LLMs can respond to inquiries about social scenarios without having been explicitly trained to do so. While earlier studies indicated that this might stem from models ‘knowing’ about social contexts, the researchers demonstrated that it actually derives from the LLMs’ established capability to perform tasks based on examples provided, a process referred to as 'in-context learning' (ICL).
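
To make in-context learning concrete, here is a minimal sketch in Python. The sentiment-labelling task and the example wording are illustrative assumptions, not taken from the study; the point is simply that the model imitates worked examples placed in its prompt rather than acquiring a new skill.

```python
# Minimal sketch of in-context learning (ICL): no model weights are updated;
# the model simply follows the pattern established by examples in the prompt.

def build_icl_prompt(examples, query):
    """Prepend labelled examples to a new query so the model can imitate them."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

few_shot_examples = [
    ("The staff were friendly and the room was spotless.", "positive"),
    ("The food was cold and the service was slow.", "negative"),
]

prompt = build_icl_prompt(few_shot_examples, "Great view, but the wifi kept dropping.")
print(prompt)  # This text would be sent to an LLM; the examples alone steer its answer.
```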

Through numerous experiments, the team illustrated that the combination of LLMs’ instruction-following ability (ICL), memory, and language skills accounts for both their strengths and weaknesses.

Dr. Tayyar Madabushi said: “There has been a concern that as models grow in size, they might solve unforeseen problems, thereby raising fears that these larger models could develop dangerous skills like reasoning and planning.

“This has fueled extensive discussion – for example, at last year’s AI Safety Summit at Bletchley Park, where we were invited to provide input – but our study demonstrates that fears of models engaging in completely unexpected, innovative, or potentially harmful behavior are unfounded.

“Worries about the existential threats posed by LLMs are not limited to novices; even some of the leading AI researchers globally have voiced these concerns.”

However, Dr. Tayyar Madabushi argues that such fears are unfounded, since the team's experiments clearly demonstrated the absence of emergent complex reasoning capabilities in LLMs.

“While it’s crucial to tackle the existing risks of AI misuse, like generating fake news and increased fraud risk, acting on perceived existential threats would be premature,” he remarked.

“What this signifies for users is that expecting LLMs to comprehend and accomplish complex tasks that necessitate intricate reasoning without explicit guidance is likely to lead to errors. Instead, users will benefit from clearly articulating their requirements and providing examples whenever feasible, especially for anything beyond simple tasks.”
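
As a rough illustration of that advice, the sketch below contrasts a vague request with one that states the requirement explicitly and supplies an example. Both prompts are hypothetical and meant only to show the pattern the researchers recommend.

```python
# Hypothetical prompts illustrating the difference between an underspecified
# request and one with explicit instructions plus a worked example.

vague_prompt = "Summarise this customer email."

explicit_prompt = (
    "Summarise the customer email below in exactly two sentences.\n"
    "Sentence 1: the customer's main complaint. Sentence 2: the action they request.\n\n"
    "Example:\n"
    "Email: 'My order #123 arrived broken. Please send a replacement.'\n"
    "Summary: The customer's order arrived damaged. They ask for a replacement to be sent.\n\n"
    "Email: {email_text}\n"
    "Summary:"
)

# The second prompt spells out format and intent and gives the model a pattern to
# imitate, which is the mechanism the study finds LLMs actually rely on.
print(explicit_prompt.format(email_text="The invoice total is wrong; please correct it."))
```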

Professor Gurevych added: “… our findings do not imply that AI poses no threat at all. Rather, we demonstrate that the supposed emergence of complex reasoning skills tied to specific threats is not backed by evidence, and we can effectively manage the learning processes of LLMs. Future studies should thus concentrate on other risks posed by these models, such as their potential misuse for generating false information.”