
Scholarship Dreams Shattered: The Mental Toll of an AI Accusation


She lost her scholarship after being accused of cheating with AI — and it affected her mental health


Marley Stevens, a student at the University of North Georgia, was in her car when she received an email alert: Her professor had given her a zero on her paper, claiming she cheated by using artificial intelligence.


The issue stemmed from her use of Grammarly, a spell-checking tool that leverages generative AI, to polish her writing. Although the tool is recommended on UNG's own website, Stevens was placed on academic probation after a six-month misconduct investigation. The zero on her paper dragged down her GPA and cost her a scholarship.

Already dealing with anxiety and a chronic heart condition, Stevens experienced a significant decline in her mental health throughout this period.

“I couldn’t sleep or concentrate on anything,” Stevens remarked. “I felt powerless.”

Stevens is one of a growing number of students who say they were wrongly accused of cheating with AI. Schools have been scrambling to handle a flood of AI-assisted submissions since the November 2022 launch of ChatGPT, which generates text that mimics human writing. But the detection software many institutions have adopted in response can produce false accusations, triggering misconduct proceedings that take a heavy toll on students' mental health.


‘I didn’t know how to prove I was innocent’

Lucie Vágnerová, an education consultant based in New York, has handled more than 100 AI-related cheating accusations since November 2023 and says a growing share of her clients report being falsely accused.

The accusations raise serious fears: students worry about losing academic scholarships, and international students about losing their visas. Investigations commonly drag on for weeks or even months. In extreme cases, students have received plagiarism allegations after graduating, adding enormous stress just as they start new jobs.

“Anxiety is the most frequently mentioned feeling among students facing academic misconduct allegations,” Vágnerová said. “They tell me they aren’t eating, they aren’t sleeping, and they’re overcome with guilt.”


In 2023, several seniors at Texas A&M University–Commerce were temporarily denied their diplomas after an instructor accused an entire animal science class of using ChatGPT. The professor's evidence: the students' submissions had been pasted into ChatGPT, which was then asked whether it had generated the text. Experts say ChatGPT cannot reliably identify AI-generated writing.

For many students, hiring legal help to fight an accusation isn't practical. When Liberty University senior Maggie Seabolt was told last spring that her paper had been flagged as 35% AI-generated, she was bewildered: she had written the paper in one sitting in Microsoft Word. As a first-generation college student, she didn't know where to turn for help.

“It was incredibly stressful to be accused of something I didn’t do,” Seabolt said. “I felt very isolated without knowing how to defend myself.”

Though her professor did not formally charge her with academic dishonesty, she still received a 20% deduction on her paper grade.


Liberty University does not prohibit tools like ChatGPT and Grammarly, but it advises students against relying on AI to substantially rephrase text or generate new content, and recommends disabling generative AI features whenever possible.

Turnitin, a popular AI detection service, has been shown to produce more false positives when it flags less than 20% of a document as AI-generated. The company says its models should never be the sole basis for disciplinary action against students.

“Our guidance emphasizes that knowing a student and their unique writing style is irreplaceable. Educators should maintain open communication with students and exercise their judgment regarding concerns about false positives,” a Turnitin representative informed YSL News.

The University of North Georgia declined to discuss the particulars of Stevens' case, citing privacy laws, but said expectations around AI use can vary significantly from classroom to classroom. The school also pointed to its academic integrity policies, which include specific guidelines on AI and plagiarism.


A Grammarly representative confirmed that the company donated $4,000 to a GoFundMe page Stevens created and invited her to speak on AI innovation and academic integrity at a conference held by Educause, a nonprofit focused on information technology in higher education. In October, Grammarly launched a feature called Authorship that tracks how a document was written, aimed at helping students rebut false positives.

The problem with relying solely on AI detection tools to catch cheating

Generative AI is a type of artificial intelligence that produces humanlike text, images, code, music, and video. After OpenAI released ChatGPT, many professors turned to plagiarism detection software like Turnitin to police academic integrity. These programs include an AI writing indicator that highlights passages that may have been created or altered with AI assistance.

However, experts caution that these detection tools can misclassify genuine writing as AI-generated. A 2024 University of Pennsylvania study found that AI detectors are easily fooled by variations in spelling, symbols, and formatting, and advised against using them for punitive measures. A 2023 Stanford University study found that ChatGPT detectors are biased against non-native English speakers. OpenAI shut down its own AI detection tool because of its poor accuracy.


Casey Fiesler, an associate professor at the University of Colorado Boulder who researches technology ethics, said that basing academic integrity decisions solely on AI detection systems is irresponsible given the tools' inherent biases.


“The likelihood of a false positive is simply too significant,” Fiesler explained. “Defending oneself against a flawed algorithm is challenging.”

Another concern is the disparity between the rapid evolution of AI and the slower pace at which educational institutions develop policies, leading to inconsistent practices across schools and even within departments.

Almost half of the respondents in a 2024 EDUCAUSE AI Landscape study, which included leaders and staff from higher education, disagreed or strongly disagreed that their institutions had adequate guidelines for AI usage. Only 8% believed their cybersecurity and privacy strategies were sufficient to manage risks related to AI.

“AI guidelines need to strike a balance between being standardized enough for students to comprehend expectations across their courses and flexible enough to allow for differences in discipline and faculty judgment,” commented Jenay Robert, a senior researcher at Educause.


Kathryn Conrad, an English professor at the University of Kansas, said it is important for educators to understand that AI detection works differently from plagiarism detection. AI detectors look for statistical patterns in text, such as “burstiness” and “perplexity,” while plagiarism detectors compare student work against a database and content on the web, Conrad said. Turnitin offers both services, which can create confusion.


In her proposal for an AI Bill of Rights for Education, she suggests that instructors should clearly define AI policies in their syllabi to prevent misunderstandings.

“If students are told they cannot use generative AI for their assignments but are allowed to utilize it for brainstorming, then misconstruing a Turnitin report as evidence of cheating is an issue of misunderstanding both the tool and the detection processes,” Conrad explained.

What actions can students take if accused of using AI to cheat?

The first step in avoiding a false AI accusation is to thoroughly understand the AI policy in each course.


If misconduct is alleged, a record of the student's work can be crucial to a defense. Vágnerová recommends writing in programs that preserve previous drafts, such as Google Docs and Microsoft Word. Starting assignments early and using resources like writing centers and office hours can also help demonstrate a student's originality if a false allegation arises. Students can also take screenshots of their research history as evidence of their process.

Richard Asselta, a student defense attorney, says it is important for students to stay calm and talk to a trusted friend or adult before responding to an allegation of AI use.

“One common mistake is responding impulsively without careful consideration, which can lead to misunderstandings,” Asselta advised.

In addition to providing evidence for their claims, Asselta suggests that students should approach the issue logically, listen to their professors’ concerns, and accurately follow the academic misconduct procedures laid out by their institution.


Rachel Hale reports on youth mental health, supported by a grant from Pivotal Ventures. Pivotal Ventures does not influence editorial decisions. Reach her on X @rachelleighhale.