She lost her scholarship over an AI allegation — and it impacted her mental health

University of North Georgia student Marley Stevens was sitting in her car when she got the email notification: Her professor had given her a zero on a paper and accused her of using artificial intelligence to cheat.

Her offense? Using Grammarly, an AI-powered spell-check plug-in, to proofread a paper. Despite the tool being listed as a recommended resource on UNG’s site, Stevens was put on academic probation after a misconduct and appeals process that lasted six months. The zero on the paper dragged down her GPA, and she lost her scholarship as a result.

She was already taking Lexapro for diagnosed anxiety and struggling with a chronic heart condition before the ordeal. During the dispute and in the months after, her mental health plummeted.

“I couldn't sleep or focus on anything,” Stevens says. “I felt helpless.”

Stevens is among a growing number of students who say they were unjustly accused of using AI to cheat. Schools have scrambled for ways to address an onslaught of AI-doctored work since the November 2022 launch of ChatGPT, an artificial intelligence chatbot that generates human-like text. But detection software carries a risk of false positives, and the misconduct processes that follow can take a toll on students’ mental wellbeing.

University of North Georgia student Marley Stevens battled an allegation that she used AI to cheat.

'I had no idea how to even prove my innocence'

New York-based education consultant Lucie Vágnerová has worked on more than 100 AI-related student misconduct cases since November 2023 and says the number of clients coming to her about false positives is on the rise.

With a false allegation come concerns about forfeiting merit and academic scholarships and, for international students, losing visas. It’s not unusual for cases to drag on for weeks or months. In extreme situations, students have received letters accusing them of plagiarism after graduation, which is “incredibly stressful” for graduates starting a job.

More: Professors are using ChatGPT detector tools to accuse students of cheating. But what if the software is wrong?

“Anxiety is maybe the most mentioned word that I hear from students going through academic misconduct,” Vágnerová says. “They're telling me they're not eating, they're not sleeping, they're feeling guilty.”

In 2023, multiple seniors at Texas A&M University–Commerce were temporarily denied diplomas after an instructor accused his entire animal science class of using ChatGPT. The professor came to his conclusion by running the students’ work through ChatGPT and asking the site to determine if the software had produced the writing. Experts say ChatGPT cannot be trusted to detect AI-generated writing.

Liberty University student Maggie Seabolt's grade was lowered after what she says was a false AI allegation.

For many students, hiring outside help to fight allegations isn’t feasible. When Liberty University senior Maggie Seabolt was notified last spring that 35% of her paper had been flagged as AI-generated, she was confused: she had typed the paper in one sitting in Microsoft Word. As a first-generation college student, she wasn’t sure where to turn for guidance.

“To see that I was being accused of using AI when I knew in my heart I didn’t, it was really, really stressful, because I had no idea how to even prove my innocence,” Seabolt says. “I definitely felt very alone.”

Her professor didn’t report her for academic dishonesty but docked her grade on the paper by 20%.

Liberty University does not ban AI-powered writing aids like ChatGPT and Grammarly, but it clarifies that students shouldn’t accept AI-generated modifications that involve extensive paraphrasing or produce new writing. The university recommends turning off generative AI text features when possible.

Turnitin, a popular AI detection tool, produces false positives at a higher rate when less than 20% of a document is flagged as AI-generated. The company states that its detection model shouldn’t be used as the sole basis for action against a student.

“Our guidance is that there is no substitute for knowing a student and their writing style. When concerns about false positives arise, educators should continue to engage in an open and honest dialogue with students, relying on their experience and judgment,” a Turnitin representative told USA TODAY.

UNG declined to discuss the specifics of Stevens’ case, citing student privacy laws, but said AI use varies from classroom to classroom and shared its academic integrity policy, which includes guidance on artificial intelligence and plagiarism.

A Grammarly representative confirmed that the company donated $4,000 to a GoFundMe Stevens set up and invited her to speak at a session on AI innovation and academic integrity at a conference held by EDUCAUSE, a nonprofit focused on information technology in higher education. In October, Grammarly launched Authorship, a feature that documents how a piece of writing was composed so students can rebut false positives from AI detection tools.

The problem with using AI detectors as the sole indicator of cheating

Generative AI is a type of artificial intelligence that creates human-like text, images, code, music and video. When artificial intelligence research company OpenAI released ChatGPT, more professors turned to plagiarism detection software like Turnitin to verify academic integrity. The software includes an AI writing indicator score that highlights text that could have been written or modified with AI tools.

But experts say detection software can misidentify writing as AI-generated. A 2024 University of Pennsylvania study found that AI detectors were easily fooled by variations in spelling, symbol usage and spacing; the study recommended against using detectors in a disciplinary context. A 2023 Stanford University study found that ChatGPT detectors are biased against non-native English speakers. ChatGPT maker OpenAI disabled its own AI detection tool because of its low accuracy rate.

More: 'This shouldn’t be a surprise': The education community shares mixed reactions to ChatGPT

University of Colorado Boulder Associate Professor Casey Fiesler, who researches technology ethics and internet policy, says making academic integrity decisions based purely on AI detectors is irresponsible given their systematic biases.

“The risk of a false positive is too high,” Fiesler says. “It's hard to defend yourself against a flawed algorithm.”

Part of the problem is the lag between how quickly AI has developed and how slowly universities have responded with policies, which means there isn’t standardization across schools, or even within a single department.

Nearly half of the higher education leaders, faculty and staff who responded to the 2024 EDUCAUSE AI Landscape study disagreed or strongly disagreed that their institution has appropriate guidelines in place for AI use. Only 8% of respondents said their cybersecurity and privacy policies are adequate to address AI-related risks.

“AI policies must strike a balance between two often competing needs: standardized enough so that students understand what is expected across their courses, and customizable enough so that there is room for disciplinary differences and faculty autonomy,” says report author Jenay Robert, a senior researcher at EDUCAUSE.

University of Kansas English Professor Kathryn Conrad says it’s important for educators to understand that AI-detection software functions differently from plagiarism-detection tools. Rather than comparing student work to material in a database or on the web, AI detectors look for statistical patterns such as "perplexity" (how predictable the text is to a language model) and "burstiness" (how much that predictability varies from sentence to sentence), according to Conrad. Turnitin provides both services, which can muddy those waters.
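
For readers curious what those terms mean, here is a minimal sketch of the two signals. It illustrates the statistical idea only, not Turnitin's proprietary detector, and it assumes the openly available GPT-2 model and the Hugging Face transformers library, neither of which the experts quoted here endorse by name. Lower perplexity means a language model finds the text more predictable; low burstiness means every sentence is about equally predictable, a pattern detectors associate with machine-generated prose.

```python
# A minimal illustration of "perplexity" and "burstiness," assuming the
# Hugging Face `transformers` library and the small GPT-2 model. This is
# a sketch of the statistical idea, not Turnitin's actual detector.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How predictable `text` is to GPT-2; lower = more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return math.exp(loss.item())

def burstiness(sentences: list[str]) -> float:
    """Spread (standard deviation) of sentence-level perplexities.
    Human prose tends to swing between plain and surprising sentences."""
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5

essay = [
    "The committee will meet on Tuesday to review the proposal.",
    "Honestly, nobody expected the llama to win the spelling bee.",
]
print([round(perplexity(s)) for s in essay])  # per-sentence predictability
print(round(burstiness(essay)))               # variation across sentences
```

Real detectors are trained classifiers rather than two raw statistics, but the intuition is the same, and it underscores Conrad's point: a low-perplexity score says only that the text is predictable, not that a student copied it from anywhere.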

In her Blueprint for an AI Bill of Rights for Education, Conrad recommends that teachers explicitly outline AI guidelines in their courses to avoid confusion.

“If you've told students that they can't use generative AI for their papers but have given the OK for them to use it for brainstorming, and then Turnitin suggests that a student used generative AI for a paper, accusing that student of cheating is a misunderstanding of how both the tool and the detector work,” Conrad says.

What can students accused of using AI to cheat do?

The first step in preventing a false AI accusation is knowing each course’s policy on AI tools.

In the case of a misconduct allegation, a paper trail can help students document their writing process. Vágnerová recommends using writing software that saves drafts and version history, like Google Docs or Microsoft Word. Working on assignments early and using resources like office hours and university writing services can also help students demonstrate the originality of their work if they are falsely accused. Students can proactively take screenshots of the internet search history that shows the steps of their research.

Student defense lawyer Richard Asselta says it’s important for students to stay calm and talk to a trusted adult or peer before responding to any accusations of using AI.

“One of the first mistakes I see is students responding without really thinking it through and sometimes they can say things that get misconstrued,” Asselta says.

In addition to providing a paper trail, Asselta says students should take a logical approach, hear out their professors’ concerns and follow their school’s academic misconduct process correctly.

Rachel Hale’s role covering youth mental health at USA TODAY is funded by a grant from Pivotal Ventures. Pivotal Ventures does not provide editorial input. Reach her at rhale@usatoday.com and @rachelleighhale on X.
