AI Detection Isn’t Your Friend — And Here’s Why

AI detection tools are a controversial subject, but one thing's for sure: they're not made with student welfare in mind. Here's why AI detection can do real harm.

John Angelo Yap

Updated August 6, 2024

A deceptive robot, generated with Midjourney

Reading Time: 7 minutes

LLMs are everywhere, so it stands to reason that AI detection tools are everywhere too. Schools and universities are rushing to adopt them. But there's a problem: they're not as accurate as we think.

False positives are running rampant in universities, and they're starting to claim victims. Innocent students are being accused of cheating. Their work is flagged as AI-generated when it's not.

Imagine pouring hours into an essay, only to be told it's not yours. Your grade, reputation, and academic future are at risk. All because of an imperfect algorithm.

These tools aren't just unreliable. They're potentially harmful. They create an atmosphere of distrust and fear. Students second-guess their writing, afraid to sound "too good." Is this the learning environment we want to foster? Here’s why AI detection isn’t your friend.

Why Is AI Detection Flawed?

AI detection tools have gained prominence as LLMs become more sophisticated at generating human-like text. However, these detection methods are inherently flawed and unreliable for so, so many reasons. And no, we're not the only ones saying that: OpenAI shut down its own AI text classifier for exactly this reason, citing its low rate of accuracy.

AI detection visualized, generated with Midjourney

So, what’s happening behind the scenes?

Like I said, LLMs are evolving rapidly, which makes it difficult for detection tools to keep pace. GPT-4, Claude, Gemini: as these models continue to improve, detection algorithms quickly become outdated. This cat-and-mouse game means that even the most advanced detectors may fail to identify AI-generated content accurately.

And then, we also have to ask ourselves: is there any distinguishable difference left between human and AI writing? Modern language models can produce nuanced, context-aware (well, barely, but still) writing that mimics human thought processes and writing styles. 

But one of the biggest reasons is that AI detection tools typically rely on statistical patterns and linguistic markers that can be manipulated. As users become more aware of these detection methods, they can intentionally alter AI-generated text to evade detection, which makes these platforms less effective.
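
To make that concrete, here's a toy sketch in Python of the kind of statistical signal a detector might lean on: "burstiness," or how much sentence lengths vary. To be clear, this is purely illustrative. It is not how GPTZero, Turnitin, or any real detector works, and the function and sample texts below are invented for the demo.

    import re
    import statistics

    def burstiness(text: str) -> float:
        """Toy heuristic: how much sentence lengths (in words) vary.

        Purely illustrative. Real detectors combine many signals; this
        mimics the oft-cited claim that AI text tends toward uniform
        sentence lengths.
        """
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        if len(lengths) < 2:
            return 0.0
        return statistics.pstdev(lengths)  # spread of sentence lengths

    # Hypothetical sample texts, made up for this demonstration.
    uniform = ("The model writes a sentence. The model writes another one. "
               "The model keeps the same rhythm.")
    varied = ("Humans ramble. Then, out of nowhere, they write one long, "
              "winding sentence that piles clause on clause. Then stop.")

    print(f"uniform text: {burstiness(uniform):.2f}")  # low spread
    print(f"varied text:  {burstiness(varied):.2f}")   # higher spread

Notice how a light rewrite is enough to move the number. Real detectors combine many such signals, but each one is just a statistic, and statistics can be nudged. That's exactly what bypassers exploit.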

The issue is that AI detection assumes the difference between human and AI-generated content is an either-or question. The reality is far more complex. Many modern writing processes involve both humans and AI tools, creating a spectrum of authorship that defies simple classification.

And friends, that’s just the tip of the iceberg.

How AI Detection Puts Students in Danger

It's one thing to hear about this in theory, but actually reading about the situations students are forced into when they're accused of using AI is heartbreaking. Here are some of the most noteworthy stories I've found:

Student receiving a failing grade, generated with Midjourney

UC Davis Students

William Quarterman's world was turned upside down in an instant. A stellar student with an unblemished record suddenly faced a failing grade. The reason? His history exam was flagged as AI-generated by GPTZero. Worse still, he was reported for academic dishonesty.

The twist? Quarterman never used ChatGPT. This false accusation took a heavy toll. His academic performance suffered. His mental health declined. Fortunately, justice prevailed. The university dismissed the case, admitting they lacked evidence.

But the story didn't stop there: false positives in AI detection struck again, this time closer to home. Louise Stivers, another UC Davis student, faced similar accusations after Turnitin flagged her case brief as AI-generated. Help came from an unexpected source: William Quarterman. Along with his father, Quarterman advised Stivers, and their combined efforts successfully overturned her academic sanction.

University of Melbourne Masters Student

A University of Melbourne student faced a false accusation of using AI in her assignment. The student, identified by the pseudonym Rachel, received an email alleging AI use in her work. Her anxiety spiked. She denied the allegation and provided evidence. Despite this, a hearing was scheduled a month away, and the wait caused Rachel significant stress.

The university states that additional evidence is required before making misconduct allegations. However, Rachel's case initially relied solely on Turnitin's detection. Two days before the hearing, Rachel presented her browser history and assignment drafts. The matter was then dropped.

Turnitin's representative acknowledged the weakness in using detection as the sole basis for allegations. He suggested that assessment design should allow students to show their work process. This approach could prevent similar situations in the future.


Is There Any Hope For Better AI Detection?

I want to say yes, I really do. But given our current understanding of language models and the lack of governance around them, I don't think there's any way for AI detectors to be fully accurate.

The challenge lies in the ever-evolving nature of LLMs. Like I said, it's a constant game of cat and mouse: a model improves, detection catches up, the model improves again a few months later, and the cycle goes on.

There's also the overwhelming volume of online content, much of which ends up in LLM training data. The resources required to accurately scan and analyze every piece of digital content are immense, and the potential for errors remains high.

This is the sad reality:

While there's undoubtedly ongoing research in this field, the road ahead isn’t without challenges. The hope for truly reliable AI detection may be just that — a hope, rather than a realistic expectation in the near future. 

So, What Can We Do?

To protect yourself from false positives, here’s what you need to do:

Collect As Much Evidence As You Can

When you're in front of a school board, the best line of defense is overwhelming evidence. Here's where you can source it:

  • Browser History. Shows that you actually did the research.
  • Document History. A minute-by-minute record of your paper’s progress.
  • Library Records. Establishes that you went to the library to complete your paper and used books as sources.
  • Handwritten Outlines. Demonstrates that the ideas in your paper are your own.

Educate Your Peers About AI Detection

False positive AI detection is a growing concern. Even if companies claim that only one in a hundred students is affected, that's still one student who'll face a lifetime of scrutiny over something they didn't do, including unfair penalties and a damaged reputation. Educating peers helps combat these errors.

Spread awareness. Understanding the limitations of AI detectors protects genuine work. It promotes fair assessment of content. Informed peers can challenge false positives effectively. This safeguards academic and professional integrity in this GPT-dominated world.

Consider Using Paraphrasing Tools

And for your own protection, I also highly suggest using paraphrasing tools for your work. But not just any paraphraser; you need something special:

AI bypassers.

These paraphrasing tools focus on creating text that passes common AI detectors. Now, I'm not going to sugarcoat it: there's definitely room for academic abuse with this technology. But I'm recommending it because I also see a lot of potential for good. Anyone can use it, whether or not they use AI.

My recommendation? Undetectable AI.

This is one of the most effective AI bypassers on the market today, and I should know: we've written a lot of articles about Undetectable AI in the past. One thing that stands out is that it can evade any of the most popular detectors in use today. For more information, you can read our complete review of Undetectable AI.

The Bottom Line

AI detection isn’t your friend — it’s a band-aid solution because we can’t come up with anything better yet.

LLM abuse in the academic space is definitely a real threat, but the permanent solution shouldn't involve innocent students being unfairly sanctioned and treated as guilty until proven innocent. Both scenarios are frightening.

As for students, the bitter pill to swallow is that we have to keep adapting to these changing times until something better comes along. For now, keep your evidence ready at all times and consider using AI bypassers. They could save your academic career.

Teachers also have a lot to say about AI detection, so hear from them in this article. Good luck!

Want to Learn Even More?

If you enjoyed this article, subscribe to our free newsletter where we share tips & tricks on how to use tech & AI to grow and optimize your business, career, and life.


Written by John Angelo Yap

Hi, I'm Angelo. I'm currently an undergraduate student studying Software Engineering. Now, you might be wondering: what is a computer science student doing writing for Gold Penguin? I took up computer science because it was practical and because I was good at it. But if I had the chance, I'd be writing for a career, building worlds and adjectivizing nouns for no reason other than that they sound good. And that's why I'm here.
