
AI Writing In Schools Is Totally Out Of Control – And Students Are Assumed Guilty Until Proven Innocent

Recent concerns highlight the potential inaccuracies and unfairness of Turnitin's AI detection system, with innocent students facing undue accusations based on ambiguous detection percentages. How reliable are these tools, really? We urgently need to reform how academic systems treat AI detection.
A cute robot in a courtroom, made with Midjourney
September 28, 2023 6:15 am

ChatGPT has undoubtedly ushered in a new era of convenience, efficiency, and advancement. In the academic world, AI writing checkers like Turnitin have popped up under the guise of upholding academic integrity by detecting unauthorized use of AI or plagiarized content.

While the intention behind this makes sense on paper, concerns recently emerged that question the fairness and accuracy of these very tools.

We spoke to a graduate student who was accused of using AI just this week. Her story sheds light on the flaws in the system, revealing the unintended consequences for innocent students. Without any real evidence against them, students are assumed guilty and have to fight to prove their innocence.

A Student's Nightmare

A graduate student recently reached out to share her horrible experience with the fallibility of Turnitin's AI detection system. Just a few days after the semester started, a paper she wrote was flagged with a 36% AI writing score (according to the report Turnitin provided the professor).

The result of this? An unanticipated and distressing meeting with her professor to defend her original work.

During this meeting, she learned that her use of Grammarly—a popular grammar and spell-check tool—and even the mere act of using synonyms, could have triggered the AI suspicion.

This is deeply concerning, especially when students have been encouraged to use tools like Grammarly throughout their academic careers. Universities even offer writing centers to aid students in improving their grammar and writing skills (which include the use of Grammarly).

The usage of a thesaurus is also a long-standing practice in academic writing. So, where does one draw the line between utilizing available resources and being suspected of AI-driven writing? It just doesn't really make sense anymore.

The Vague Approach to Accusations

To rub salt in the wound, the student's initial communication regarding the suspected AI use was painfully vague. She received an email with a generic request to discuss her classwork. The lack of clarity and specificity left her unprepared and anxious. It was only during the meeting that she learned about the AI detection percentage threshold and its implications.

After a thorough analysis and running the student's topic through several AI generators, the professor concluded that the student hadn't used AI assistance.

However, the student was informed that a note would be added to her academic file indicating a meeting about academic dishonesty, a label that can have profound implications on a student's reputation and academic journey.

All over a mere 36% AI score. I don't think professors actually understand what these scores mean. These are AI writing predictors, not detectors, and they should be labeled as such. How does it even make sense to accuse a student of cheating when, by the tool's own estimate, there's less than a 50% chance she did anything wrong?

The Future of AI Detection: More Harm than Good?

The story illuminates a pressing issue in academia that popped up in under a year.

On one hand, there's a genuine need to maintain academic integrity and ensure originality in students' works. On the other hand, the existing tools and methods for AI detection seem to have glaring gaps and inconsistencies.

The student's ordeal underscores a broader systemic issue, one I've been warning about for months; we're only now seeing the impact because the school year just started.

If a tool like Grammarly, which has been promoted for years, can lead to AI suspicion, then the system's criteria need recalibration.

Students should be able to use legitimate tools to improve their writing without the constant fear of being falsely accused. How does fixing grammar trigger an AI detector? What if a student manually reviewed every suggestion?

Setting an arbitrary threshold for AI suspicion (like 20%) can be problematic.

Every student's writing is unique, and the diverse usage of language, grammar tools, and resources can unintentionally lead to detection percentages that trigger unnecessary suspicion. And that's not okay.
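To see why a fixed flagging threshold is so problematic, here's a rough back-of-the-envelope sketch in Python. All of the rates below are hypothetical, chosen purely for illustration and not published Turnitin figures, but the math holds for any detector: at classroom scale, even a small false-positive rate accuses many innocent students, and Bayes' rule shows that a flag alone is far from proof of cheating.

```python
# Illustrative only: the prevalence, sensitivity, and false-positive
# rates below are assumptions, not real Turnitin statistics.

def expected_false_flags(num_honest_students, false_positive_rate):
    """Expected number of honest students wrongly flagged per assignment."""
    return num_honest_students * false_positive_rate

def prob_guilty_given_flag(prevalence, sensitivity, false_positive_rate):
    """Bayes' rule: P(actually used AI | detector flagged the paper)."""
    true_pos = prevalence * sensitivity                  # AI users who get caught
    false_pos = (1 - prevalence) * false_positive_rate   # honest students flagged
    return true_pos / (true_pos + false_pos)

# Hypothetical scenario: 1,000 students, 5% actually use AI, and the
# detector catches 90% of them while wrongly flagging 2% of honest work.
honest = 1000 * 0.95
print(expected_false_flags(honest, 0.02))            # 19.0 innocent students flagged
print(prob_guilty_given_flag(0.05, 0.90, 0.02))      # ~0.70: a flag is far from proof
```

Even under these generous assumptions, roughly 19 innocent students get flagged on every assignment, and a flagged paper still has about a 30% chance of being entirely the student's own work. Treating a flag as a verdict, rather than a starting point for a conversation, guarantees false accusations.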

This gets even harder to quantify when you take into account factors like native language, cultural background, and personal experience. Some students might use phrases or structures similar to what they've read or heard before, not because they're copying, but because that's how they've learned and understood the language.

The Call for Transparency and Reform

The future of education depends heavily on technological advancements and the integrity of its tools. For Turnitin and other similar platforms, it's crucial to revisit and refine their AI detection algorithms, ensuring fewer false positives.

Academic institutions need to be transparent with students about the tools, methods, and criteria they use to detect AI assistance. A clear and specific communication framework is required to prevent undue stress and anxiety for students, especially now that these systems are already being used against them.

The graduate student's story is not an isolated incident. Many others face similar challenges. As educators and technologists, it's our responsibility to ensure that the tools we deploy enhance the academic experience, rather than hinder it.

In the interim, it's commendable that students like her are taking matters into their own hands, such as screen recording their writing processes. However, the fact that such measures are even considered necessary is a testament to the pressing need for systemic change. Guilty until proven innocent is the new mantra.

Written by Justin Gluska
Justin is the founder of Gold Penguin, a business technology blog that helps people start, grow, and scale their business using AI. The world is changing and he believes it's best to make use of the new technology that is starting to change the world. If it can help you make more money or save you time, he'll write about it!