
‘Distrust, Detection & Discipline’: New data reveals teachers’ ChatGPT crackdown

[Photo: a teenage student high-fives a humanoid robot. Stock-Asso/Shutterstock]

New survey data puts hard numbers behind the steep rise of ChatGPT and other generative AI chatbots in America’s classrooms — and reveals a big spike in student discipline as a result.

As artificial intelligence tools become more common in schools, most teachers say their districts have adopted guidance and training for both educators and students, according to a new, nationally representative survey by the nonprofit Center for Democracy and Technology. What this guidance lacks, however, are clear instructions on how teachers should respond if they suspect a student used generative AI to cheat.

[Chart: report statistics (bar chart). Source: Center for Democracy and Technology]

“Though there has been positive movement, schools are still grappling with how to effectively implement generative AI in the classroom — making this a critical moment for school officials to put appropriate guardrails in place to ensure that irresponsible use of this technology by teachers and students does not become entrenched,” report co-authors Maddy Dwyer and Elizabeth Laird write.

Among the middle and high school teachers who responded to the online survey, which was conducted in November and December, 60% said their schools permit the use of generative AI for schoolwork — double the number who said the same just five months earlier, in a September 2023 survey by the Center for Democracy and Technology. And while a resounding 80% of educators said they have received formal training about the tools, including on how to incorporate generative AI into assignments, just 28% said they’ve received instruction on how to respond if they suspect a student has used ChatGPT to cheat.

[Chart: report statistics (two circle charts). Source: Center for Democracy and Technology]

That doesn’t mean, however, that students aren’t getting into trouble. Among survey respondents, 64% said they were aware of students who were disciplined or faced some form of consequences — including not receiving credit for an assignment — for using generative AI on a school assignment. That represents a 16 percentage-point increase from August.

[Graphic: report statistics (four percentage figures). Source: Center for Democracy and Technology]

Fighting fire with fire, a growing share of teachers say they rely on digital detection tools to sniff out students who may have used generative AI to plagiarize. Sixty-eight percent of teachers — and 76% of licensed special education teachers — said they turn to generative AI content detection tools to determine whether students’ work is actually their own.

The findings carry significant equity concerns for students with disabilities, researchers concluded, especially in the face of research suggesting that such detection tools are ineffective.

***

Mark Keierleber is an investigative reporter at The 74. Previously, Mark was a reporter at the Student Press Law Center, a Washington, D.C.-based legal assistance agency, where he reported on government transparency and First Amendment issues relevant to students and educators.

The Center for Democracy and Technology is a nonprofit think tank focused on digital rights and expression.

This story first appeared at The 74, a nonprofit news site covering education. Sign up for free newsletters from The 74.
