The Ultimate Guide on AI Cheating and Detection | 5 Actionable Tips + Best AI Detectors Ranked

7 min read · June 24, 2024
✨ Summary: The only piece of content you need to understand how to handle AI-generated text and AI cheating in student assignments. Learn about top AI detection tools and ethical AI use in education.

Hello, fellow educators!

If you’re grappling with the challenges of AI cheating and detection in the era of ChatGPT, you’ve come to the right place. As a seasoned teacher with a decade of experience, I’ve partnered with CoGrader to dive deep into this hot topic and provide you with practical insights and strategies.

In this blog post, we’ll explore the spectrum of AI use in student work, discuss the purpose of writing assignments, and share tips for detecting AI-generated text.

We’ll also compare three popular AI detection tools to help you make informed decisions.

So, let’s get started!

The Tom Sawyer Test: Understanding the Spectrum of AI Use

Remember the story of Tom Sawyer and the fence? Just like Tom tricked his friend into painting the fence for him, students might be tempted to use ChatGPT to complete their assignments. But the key question is: what’s the purpose of the assignment?

If the goal is simply to have an essay completed, then using AI might seem like a valid solution. However, as educators, we know that the true purpose of writing assignments is to help students develop and showcase their skills. This is where the spectrum of AI use comes into play.

On one end of the spectrum, we have essays that are entirely AI-generated, which is clearly unethical. On the other end, we have students who write independently, without any AI assistance. But there’s a lot of middle ground, such as:

  • AI generating multiple drafts, and the student choosing the best parts
  • AI guiding the student through the writing process as a coach
  • The student writing until they’re stuck, then asking AI for help
  • The student writing all the content, but using AI for feedback and improvement

As educators, it’s our responsibility to teach students how to use AI ethically and effectively, recognizing the different levels of AI involvement in their work.

Here’s a graphic from Matt Miller of Ditch That Textbook that helps with this process:

AI Writing Spectrum Chart (levels of using AI to write, from more to less AI usage)

I have created a different version of this chart with student-centric language, to use in the classroom:

Student Friendly AI Writing Spectrum Graph (From more AI to less AI)

Click here to edit it on Canva as a template.

Clarity on Writing Assignments: Communicating Purpose and Expectations

To help students navigate the ethical use of AI, we need to be crystal clear about the intent and purpose behind each writing assignment. This is where learning objectives come in handy – not just for administrative purposes, but to articulate what we’re doing and why we’re doing it.

By clearly communicating the purpose of the task and the skills we want students to develop, we can guide them towards appropriate levels of AI use and help them make informed decisions about the ethics of their choices.

Detecting AI-Generated Text: Signs to Look For

While AI detectors can be helpful, they’re not foolproof. As teachers, we can also rely on our own instincts and observations to identify potential AI-generated text.

Some red flags to watch out for include:

  • Language that seems too sophisticated or polished compared to the student’s usual writing style.

This one is pretty straightforward and doesn’t need much detail. If a student turns in work that doesn’t sound like them at all, that’s a sign AI may have been involved in writing the essay.

  • Essays that are generic, vague, or lack specifics.

A lot of my students sounded great in their writing but never really said anything in their work.

Usually what happened is that they didn’t have any original ideas, didn’t really understand the assignment, and didn’t give ChatGPT enough information to write a specific answer.

So when it came time to write, the AI had nothing concrete to work with.

They then turned in essays that were generic and vague, lacked specific examples to support whatever claims they were making, or opened with a thesis that was never actually addressed in the rest of the writing.

  • Odd or irrelevant tangents that an AI might produce based on misunderstanding context

As teachers, we love having our own special, unique way of teaching certain things.

If that way of teaching a subject doesn’t exist online, the AI is not going to know what to do with it.

For example, if I were teaching Mark Twain and discussed tangents in the context of our class material, an AI might mistakenly conclude that we were in a calculus class.

It might then start explaining derivatives and limits, nonsense that is irrelevant to our literature discussion and has nothing to do with the tangential journeys of Mark Twain’s writing.

That’s because, just like Twain said, “Truth is stranger than fiction, because fiction is obliged to stick to possibilities; Truth isn’t.”

AI is obliged to stick to possibilities too. The truth is too random for AI to predict, so it’ll just give you whatever it thinks you want. It’s trying to make you happy.

  • Inconsistencies in tone, formatting, or quality from one paragraph to the next

We all know what it looks like when a student has copied and pasted from Wikipedia because they leave the hyperlinks in.

Well… They do the same thing with AI. 😂

In my classes, students usually forgot to even change the highlighting, the font, and so on. I would ask them whether they had switched their default font to Roboto, because that is the font the pasted text came up as in Google Docs.

When they forget to clean all of that up, random font or font-size changes mid-essay are kind of a dead giveaway.
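
If a student submits (or you export) the essay as a .docx file, you can even check for this programmatically. Here is a minimal sketch, assuming the python-docx package is installed; the file name is just an illustration, and an explicit font override is a hint worth a conversation, not proof of anything.

```python
# Minimal sketch: flag explicit font/size overrides in a .docx,
# which often betray text pasted in from somewhere else.
# Assumes the python-docx package; the file name is hypothetical.
from docx import Document

def report_font_changes(path):
    doc = Document(path)
    seen = set()
    for i, para in enumerate(doc.paragraphs, start=1):
        for run in para.runs:
            # font.name / font.size are None when inherited from the document
            # style, so only explicit (often pasted-in) overrides show up here.
            name = run.font.name
            size = run.font.size.pt if run.font.size is not None else None
            if (name, size) != (None, None) and (name, size) not in seen:
                seen.add((name, size))
                print(f"Paragraph {i}: font={name}, size={size}")

if __name__ == "__main__":
    report_font_changes("student_essay.docx")  # hypothetical file name
```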

  • Large copy-and-paste sections

If your students write in Google Docs, you can use the Version History tool to see whether a student pasted the whole thing in all at once.

Here’s a video of a teacher explaining how to do that.
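
If checking Version History document by document gets tedious, here is a minimal programmatic sketch, assuming you have Google Drive API access (for example, a service account that can read the student files); the credentials path and file ID are placeholders. It only lists coarse revision snapshots, not every keystroke, but a document whose very first revision already contains the finished essay is the same red flag the Version History trick looks for.

```python
# Minimal sketch: list a Google Doc's revision timestamps via the Drive API.
# Assumes google-api-python-client and a service account with read access;
# the credentials path and file ID below are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "credentials.json", scopes=SCOPES)  # placeholder path
drive = build("drive", "v3", credentials=creds)

revisions = drive.revisions().list(
    fileId="YOUR_FILE_ID",  # placeholder file ID
    fields="revisions(id,modifiedTime,lastModifyingUser/displayName)",
).execute()

# A single revision jumping from an empty doc to a full essay suggests a paste.
for rev in revisions.get("revisions", []):
    user = rev.get("lastModifyingUser", {}).get("displayName", "unknown")
    print(rev["modifiedTime"], user)
```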

By familiarizing ourselves with these telltale signs, we can better identify when students might be relying too heavily on AI and initiate constructive conversations about ethical use.

AI Detectors: Comparing the Top Tools

Now, let’s take a closer look at three popular AI detection tools: GPTZero, Originality.ai, and CoGrader.

I ran a series of experiments to test their accuracy and reliability, using five different iterations of the same essay:

  1. 100% AI-generated essay from GPT-4o
  2. 100% AI-generated essay from Claude 3
  3. 100% human-written essay from a student ethically opposed to using AI
  4. AI-assisted essay where AI provided finishing touches and revisions to the human-written essay
  5. Essay put through a “humanizer” tool called Stealth Writer that attempts to disguise AI writing
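
If you want to repeat this kind of comparison yourself, here is a rough sketch of how it could be scripted. The endpoint URL, auth header, and response field below are placeholders, not any specific vendor’s real API; GPTZero, Originality.ai, and CoGrader each have their own interfaces, so check their current documentation. It also assumes each detector returns an AI-likelihood score between 0 and 1.

```python
# Rough sketch: run the same five essay files through a detector's HTTP API.
# The URL, auth header, and "ai_probability" field are placeholders only.
import requests

ESSAYS = {
    "gpt-4o": "essay_gpt4o.txt",
    "claude": "essay_claude.txt",
    "human": "essay_human.txt",
    "mixed": "essay_mixed.txt",
    "humanized": "essay_humanized.txt",
}  # file names are illustrative

def score_essay(text: str) -> float:
    resp = requests.post(
        "https://example-detector.com/api/score",   # placeholder endpoint
        headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder auth
        json={"document": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["ai_probability"]  # placeholder field name

for label, path in ESSAYS.items():
    with open(path, encoding="utf-8") as f:
        print(f"{label}: {score_essay(f.read()):.0%} AI")
```

Running the same five files through each tool this way makes it easy to redo the comparison below whenever the detectors update their models.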

The results were quite interesting:

Test                 CoGrader    Originality.ai    GPTZero
ChatGPT-4o           100%        88%               100%
Claude 3             100%        54%               100%
Human-written        12%         8%                1%
AI-human mixed       100%        100%              100%
“Humanizer” tool     18%         3%                3%

All three tools were fairly accurate in identifying the GPT-4o, Claude 3, and human-written essays.

Notably, all three tools struggled to detect the essay that had been put through the “humanizer” tool, with confidence levels ranging from 3% to 18%.

While these tools can be helpful in identifying potential AI use, it’s important to remember that they’re not 100% reliable and can produce both false positives and false negatives.

As educators, we should use them as a starting point for further investigation and conversation, rather than relying on them as the sole determinant of academic integrity.

Embracing the Future of Education: AI as a Tool, Not a Threat

As AI continues to advance and become more accessible, it’s crucial that we as educators adapt and evolve our teaching practices to incorporate these tools in a way that benefits our students. Rather than viewing AI as a threat to academic integrity, we can choose to see it as an opportunity to teach students valuable skills in critical thinking, ethical decision-making, and responsible use of technology.

By guiding our students through the spectrum of AI use, clearly communicating the purpose of our assignments, and staying vigilant for signs of unethical AI reliance, we can help them navigate this brave new world of education with confidence and integrity. Remember, our role as teachers is not just to impart knowledge, but to equip our students with the skills and values they need to succeed in an ever-changing world.

By embracing AI as a tool for learning and growth, we can help them rise to the challenges of the future and become the ethical, innovative leaders of tomorrow. So, let’s continue this conversation and work together to find the best ways to integrate AI into our classrooms. Share your experiences, insights, and strategies in the comments below, and let’s learn from each other as we navigate this exciting new frontier of education.

Happy teaching!

Andrew Gitner

Your CoTeacher - EdTech Specialist & High School ELA Teacher

Andrew Gitner is an EdTech Specialist and ELA teacher with a decade of experience in the classroom. He is passionate about using AI to empower teachers and students.