The Futility and Risks of AI: A Call to Halt Its Use

AI presents problems; seeking more applications for it isn’t the answer.

Last year, I began drafting a column about generative artificial intelligence. I interviewed several individuals and pieced together a few hundred words, but I was quickly overtaken by the whirlwind of a busy election year. Despite my intention to complete that AI column, it lingered in my thoughts without fruition.

The crux of the issue: I had concerns about the future implications of AI. However, while I was distracted, that future arrived sooner than expected.

And AI triumphed.

I mistakenly thought other reasonable adults would share my skepticism towards AI. Its flaws are well recognized—after all, haven’t we all seen “The Terminator”?

Yet when a friend shared their experiences with AI on social media, the responses were overwhelmingly positive: using it to organize notes, brainstorm ideas, and draft press releases.

During a visit from a group of college students to the Free Press, I was taken aback to learn that one of their classes mandates the use of AI—this prestigious institution offers its own “safe” AI that ensures student data isn’t used to train larger models.

A nauseating commercial for a major tech company’s phone showcases solitary individuals chatting with an AI, as if it were a long-lost companion.

When an acquaintance mentioned seeing students at a café directly copy-pasting content generated by AI into their homework, the discussion that followed centered on the ethical implications of AI use—not on how to curb what many would deem dishonest conduct.

Now, platforms like Facebook and Instagram have replaced their search bars with AI-enhanced search functions, leaving me baffled. Perhaps I’m just old-fashioned, but it all seems excessive.

I may have misjudged my fellow humans, but my stance on AI remains unchanged. We are diving headfirst into unreliable, unvetted technology that carries severe environmental costs and a host of ethical dilemmas. This is ill-advised, and, since I enjoy expressing my opinion, I truly believe we need to stop.

ChatGPT and Gemini: Essentially Predictive Text Systems

The journey of artificial intelligence research began post-World War II, largely as a response to behaviorism, which reduced human beings to mere collections of behaviors, explained Robin Zebrowski, a cognitive science professor at Beloit College and a recognized authority in AI cognition. She has authored papers such as “Carving Up Participation: Sense-Making and Sociomorphing for Artificial Minds” and “The AI Wars, 1950–2000, and Their Consequences.”

In a 1950 paper titled “Computing Machinery and Intelligence,” British mathematician Alan Turing—who famously cracked the Enigma code during World War II and faced legal persecution for his sexuality—posed a critical question: Can a machine think in a manner similar enough to humans to be indistinguishable from them?

Researchers of that era became captivated by consciousness, wondering whether it was feasible to design a machine capable of passing what is now known as the Turing Test.

“I’m intrigued by the experimental nature of it, but there’s also an element of ‘Why can’t we achieve this? Let’s just try!’ which raises moral questions,” Zebrowski noted. “Back then, there wasn’t much risk in experimenting with these systems, as they were not particularly effective and didn’t emulate human functions.”

(“Consciousness is an enigma,” she remarked. “We have no clear understanding of how it arises. We’re not putting that into a machine any time soon.”)

AI still cannot replicate what we do, but it looks closer to doing so all the time.

Contemporary generative AI models like ChatGPT and Gemini are not the result of a groundbreaking technological breakthrough—they are fundamentally predictive text machines. The neural network technology behind them has been around since the 1980s, according to Zebrowski. However, the significant shift is that our data has become a valuable resource.

“We now have the capability to aggregate everyone’s data and put it to work,” Zebrowski explained. “What has changed is the large-scale availability of data.”
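
If “predictive text machine” sounds abstract, here’s a deliberately crude illustration, a toy sketch of my own in Python, nothing like ChatGPT’s actual internals: count which word tends to follow which in a scrap of text, then always guess the most common successor. Scale the text up to much of the internet and the guessing up to a neural network, and you have the basic idea.

```python
from collections import Counter, defaultdict

# A toy "predictive text machine": record which word follows which in a
# sample text, then always guess the most frequent successor. Real large
# language models are vastly more sophisticated, but the core task is the
# same: predict the next token from the ones that came before it.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate a short continuation, one predicted word at a time.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # prints "the cat sat on the cat" (ties favor the word seen first)
```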

AI Lacks a Sense of Truth

With increased data availability, text prediction systems like Gemini and ChatGPT can generate an almost limitless array of outputs, even if some of that data leads to errors. The results these systems produce are shaped by training data that may include misinformation or biased content, according to Zebrowski. “Today’s AI systems are trained on the less reliable parts of the internet.”

For instance, in 2022, CNET began using AI to generate articles, only to halt the experiment months later after finding that half of the AI-generated stories contained errors, as reported by The Verge.

Just this month, multiple news organizations had to amend or retract stories that erroneously claimed President George H.W. Bush pardoned his son, Neil, based on research allegedly conducted using generative AI.

These examples represent just a glimpse of AI’s more notorious failures. Now, envision these inaccuracies replicated at scale across countless meeting notes, panel questions and academic papers.

Zebrowski stated, “No one should be surprised by these outcomes.”

“The system’s design ensures that it will continually produce inaccuracies, because AI does not comprehend what truth means.”

Is Any Use of AI Ethical?

As for “Terminator”-style AI, Zebrowski affirmed that such scenarios remain unlikely. The real risks lie in how AI is used today and in how we engage with it.

“Presently, AI is largely overhyped. Yet it is inflicting significant harm—without providing substantial benefits,” she commented.

But this is typical of human behavior toward technology: rush ahead without caution. It’s an approach we’re adopting with AI as we utilize it for various tasks—ranging from panel discussions and meeting notes to academic research, medical diagnostics, homework, data analysis, and even to replace human workers—all done without adequate oversight or ethical standards.

The prevailing belief appears to be that the cat is out of the bag, and any attempts to recapture it are in vain. Instead, it is argued that we must learn how to manage this situation or risk being overwhelmed.

The college professor who visited the Free Press mentioned she is educating her students on the ethical use of AI. However, this raises an important question: Is it conceivable for AI to be employed ethically?

This is a matter that major tech companies may be reluctant to address.

Many ethicists have raised alarms about AI, though it seems those warnings have been largely ignored.

Google dismissed its leading AI ethicist in 2020. Microsoft disbanded its AI ethics team in 2023. And OpenAI dissolved its own ethics team just last year.

In contrast to the European Union’s proactive stance, the U.S. government has not established any notable guidelines or regulations for AI development and application. Elon Musk, who has substantial investments in AI proliferation, also holds significant sway in the incoming presidential administration.

AI ethicists have outlined clear guidelines, Zebrowski noted, advocating for government oversight of these systems.

“But that’s not very likely to occur, primarily because the advisers the government is consulting are the CEOs and individuals who gain from AI advancements, which could result in unfavorable outcomes,” she stated. “Many people from marginalized groups have expressed clear concerns about the risks and harms associated with AI, yet they don’t have a platform to voice their opinions to the president. Therefore, the necessary regulations are unlikely to materialize, particularly in the U.S., where most of these companies are based.

“I think it’s going to worsen significantly before it starts to improve, if it even improves at all.”

Concerns from OpenAI’s CEO, Sam Altman

Another critical aspect of AI is that it is a product that major tech companies have heavily invested in, even if its actual capabilities may not align with their claims.

“You hear OpenAI’s CEO, Sam Altman, express genuine concerns during government hearings about the potential implications of these systems, as if he weren’t the one who released them to the public,” said Zebrowski. “It frustrates me because the public is led to believe, through exaggerated claims, that AI will achieve super-intelligence, outsmarting and outperforming humans, even though the companies profiting from that hype are the ones creating the models.

“The hype has financial motivations. Consequently, the public remains unaware of how these systems operate and the appropriate contexts for their use.”

This leads to overly optimistic commercials showcasing AI making thoughtful gift suggestions or sharing fascinating stories about nature. Tech companies aim to convince us that AI is as indispensable as smartphones, social media, and even Bluetooth-enabled appliances that notify us when we’re low on milk.

Tech firms have a notable history of aggressive market promotion.

Willy Staley of The New York Times Magazine recently highlighted this tech marketing strategy in an article about Netflix, illustrating how the service overcame years of debt to become an integral home fixture: “If you assessed Netflix by media standards, its future seemed uncertain, but it operated under tech-sector guidelines – spending massive amounts to attract customers, altering their behaviors, and outpacing competitors until an entire industry was revolutionized.”

Key Risks Associated with AI

Before Google dismissed ethicist Timnit Gebru, she and several fellow researchers drafted a document detailing the main dangers associated with AI. Among the concerns: the complexity of AI learning models, and the certainty that they will absorb misinformation. AI systems rely on datasets mostly created by the people with the most access to technology, biasing them toward wealthier, tech-savvy populations and nations. (Gebru had previously flagged problems with facial recognition software’s accuracy for women and people of color.)

Gebru and her colleagues also emphasized AI’s environmental effects: Training these learning models consumes substantial energy, producing significant emissions whose burden low-income communities will likely bear while more affluent organizations and individuals reap the benefits.

For example, a data center near Memphis that supports Musk’s xAI employs numerous portable methane gas generators, set up without permits, which locals claim release a continuous plume of hazy smoke.

The historically Black community is already familiar with environmental injustices, and AI is presenting a new challenge: One researcher estimates a single query to a chatbot uses enough energy to power a light bulb for 20 minutes.

ChatGPT Is Mimicking Me – and Likely You, as Well

Last year, I asked ChatGPT to generate a column “in the style of Michigan journalist Nancy Kaffer,” and it somewhat succeeded.

“In the heart of the Great Lakes State, where opportunities should flow freely like our clear waters, lies a significant gap of inequality that jeopardizes our societal existence. The frontline in this fight is our education system, which is designed to uplift, empower, and prepare our future leaders.”

The text continues in this manner – “as we stand at this pivotal moment of progress, we must confront this issue directly” – “together, let us pave the way towards a genuinely equitable education system.”

I do focus on equity in education, but this is quite poor writing: superficial and cliché-ridden, resembling the tone of an overly enthusiastic high school sophomore.

When I made the same request last month, the outcome was remarkably different.

“Michigan’s infrastructure is in a silent crisis,” wrote ChatGPT. “It’s not a glamorous topic that grabs headlines or trends on social media, but it significantly affects the daily lives of many. The fissures in our roads, the peeling paint on bridges, the strained water systems – these are indicators of a broader systemic issue that needs urgent attention, not just for convenience, but for future economic stability. … Policymakers are increasingly advocating for bold investments in infrastructure to stop deferring necessary repairs. Yet we continue to be caught in a struggle between immediate political gains and the crucial long-term expenditures. Repairing Michigan’s infrastructure transcends merely allocating funds for roadwork – it’s about ensuring the state can compete, prosper, and adequately support its citizens in the future. For us living here, the frustration is evident.”

I can still recognize it’s not my voice. Yet, it contains phrases and concepts I might utilize, even if they’re pieced together without much depth or artistry.

ChatGPT is learning to replicate me, and its improvements raise some concerns about what it might produce in a year.

What Lies Ahead for AI and Us?

I’m jesting about “The Terminator,” mostly.

I’m a fan of technology: I was one of the first to embrace the iPhone, and I often remind my Gen Z son that the modern Xboxes and PlayStations owe their existence to my 1987 purchase of the original Nintendo with my babysitting earnings.

But in the realm of AI and machine learning, there’s also the cautionary tale of “The Matrix,” alongside narratives like “Blade Runner,” “Rossum’s Universal Robots” and “The Veldt,” not to mention Isaac Asimov’s well-known Three Laws of Robotics. And I can confidently say that on this point, the fiction is consistent: It does not bode well. Sure, it’s fiction, but look at the preceding century: We can only attain what we can dream, and science fiction helps us envision possible futures.

Or futures we would very much like to avoid.

The earlier metaphor, the cat that can’t be put back in the bag, is persuasive. But consider a knife instead. One might invest considerable effort in making a knife non-threatening, wrapping it in foil or duct tape or dulling the edge – or, simpler yet, just put it down.

Historically, humanity has rarely chosen to abandon a technology voluntarily. The closest we have come may be the atomic bomb.

“We utilized it, most people regretted it, and that led to decades of efforts to manage and mitigate the consequences of that technology,” Zebrowski stated. “There was a universal agreement of sorts, ‘Alright, let’s steer clear of this,’ even as many continued to pursue it covertly. It’s impractical to ask the world to cease pursuing this technology. However, I believe nuclear technology serves as the closest parallel. We collectively recognized, or at least most nations did, that this is not a path we should tread.”

Numerous ethicists have proposed policy guidelines for AI usage. For lawmakers seeking to bring some order to this challenge, such frameworks are readily available. (If you’re affiliated with a university, chances are you have an ethicist or two on your team.)

Yet, as I reflect on the intrinsic bias and error-proneness of AI learning sets, the potential for massive job displacement, and the prospect of a generation that never develops critical thinking skills because AI does everything for it, I return to the metaphor about the knife – and one additional thought: You can’t contribute to the solution if you’re part of the problem.

Using AI isn’t mandatory. And if you do choose to use it, perhaps it’s time to rethink that choice.

Nancy Kaffer is the editorial page editor of the Detroit Free Press, where this column was first published.