Turnitin’s New Guidelines for Handling AI False Positives & Innocent Students

Turnitin has released insightful guidelines to help educators handle cases where AI tools mistakenly flag students' work as AI-generated. The guide emphasizes the importance of relationships with students and the crucial role of human judgment in academic integrity procedures. Let's talk about it.

Justin Gluska

Updated September 14, 2023

a robot professor writing on a chalkboard in front of a big lecture as 4k digital art

Reading Time: 6 minutes

In an era where AI is becoming an integral part of the academic landscape, detecting plagiarism and ensuring academic integrity are top priorities.

However, recent incidents have shed light on the darker side of AI’s role in academia...

Students like Louise Stivers and William Quarterman from UC Davis, as well as several at Texas A&M University, were falsely accused of using AI to write their papers, with detrimental consequences for their academic careers.

These cases raised questions about the accuracy and reliability of AI detection tools, such as Turnitin and GPTZero.

In response to the growing concern, Turnitin has now released a set of guidelines titled “AI conversations: Handling false positives for educators”. These guidelines aim to help educators approach false positive cases in a more informed and human-centric manner.

Delving into the Guidelines

The guidelines released by Turnitin focus on preparing educators to handle situations where AI detection tools incorrectly flag human-written text as AI-generated content, known as ‘false positives’.

They place particular emphasis on the relationship between instructor and student, and on how a strong relationship helps both sides navigate the difficult conversations that a false positive detection can trigger.

Before and During the Assignment

  • Setting Clear Expectations: Educators should clearly articulate what is and isn't permissible for a particular assignment, and if students are allowed to use AI writing tools, they should be instructed to cite them.
  • Collecting a Diagnostic Writing Sample: Turnitin suggests collecting a baseline writing sample from students. This sample can be compared with future submissions to assess authenticity.
  • Employing AI Misuse Tools: Educators should use resources such as Turnitin's AI misuse rubric to structure writing assignments in ways that minimize the risk of misconduct.
  • Planning for AI Misuse: Educators should decide in advance what process they will follow if AI misuse is suspected, including setting expectations with students and preparing challenge, reflection, and process questions to work through with the student.

After the Assignment is Submitted

  • Understanding Turnitin’s AI Detector: Educators should understand how Turnitin’s AI detector works and use it wisely. It’s important to remember that the tool is not a replacement for human judgment and should be used as one piece of the puzzle.
  • Revisiting the Assignment and Process: Turnitin suggests that educators review their assignments and processes to make sure proper safeguards are in place against AI misuse.
  • Relying on Relationships with Students: A respectful dialogue with the student should always be a part of the process. Educators are encouraged to use their relationship with the student as a filter for evaluating the situation.
  • Comparing the Writing to the Diagnostic Sample: Educators should compare the flagged content with the diagnostic sample to see whether the style, complexity, and sophistication match (a rough sketch of this kind of comparison follows this list).
  • Adopting an Attitude of Positive Intent: If after review, the evidence is not clear, educators should give the benefit of the doubt to the student.
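
To make that comparison step concrete, here is a minimal Python sketch of the kind of side-by-side style check an educator might run informally. Everything in it (the feature names, the helper functions, the sample texts) is an illustrative assumption rather than Turnitin's method or a detector of any kind; it only surfaces a few numbers for a human to weigh alongside their own reading of the work.

```python
# Toy illustration only: put a flagged submission next to a student's
# diagnostic writing sample using a few crude stylometric features.
# This is NOT Turnitin's method and NOT a detector; the numbers are meant
# to be read by a person exercising their own judgment.
import re
import statistics


def style_features(text: str) -> dict:
    """Compute a few rough style features for a piece of writing."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentence_lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences] or [0]
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "vocabulary_richness": len(set(words)) / max(len(words), 1),  # type-token ratio
        "sentence_length_spread": statistics.pstdev(sentence_lengths),
    }


def compare(diagnostic_sample: str, flagged_submission: str) -> None:
    """Print baseline vs. flagged feature values side by side."""
    baseline = style_features(diagnostic_sample)
    flagged = style_features(flagged_submission)
    for name in baseline:
        print(f"{name:24s}  baseline={baseline[name]:6.2f}  flagged={flagged[name]:6.2f}")


if __name__ == "__main__":
    compare(
        "I think the experiment worked. We saw bubbles almost right away, which surprised me.",
        "The reaction proceeded rapidly, producing effervescence indicative of vigorous gas evolution.",
    )
```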

A Step in the Right Direction

The move by Turnitin reflects a growing awareness of the limitations and potential harm of relying solely on AI detection tools. By releasing these guidelines, Turnitin acknowledges the need for a balanced approach that combines technology with human judgment.

As students like Stivers and Quarterman can attest, being falsely accused of using AI to cheat can have severe consequences for both mental health and academic careers. Turnitin's guidelines represent a much-needed step towards ensuring that such incidents are handled more empathetically and accurately in the future.

Ensuring academic integrity is paramount, but it should not come at the cost of jeopardizing innocent students’ futures. The Turnitin guidelines seem to be a step towards a more informed, compassionate approach to the issue.

Looking Forward: Embracing AI While Safeguarding Student Trust

In the evolving landscape of academia, AI tools will undoubtedly continue to play a significant role. However, incidents like those involving Stivers and Quarterman have highlighted the need for caution, transparency, and human involvement.

Turnitin’s guidelines are not just a set of instructions but a reflection of a changing mindset. Educators and institutions need to be proactive in understanding the technologies they employ and the consequences those technologies may have for students' lives. Many have already expressed concern about the lack of transparency in AI detection models.

Moreover, as Turnitin’s guidelines suggest, building strong relationships with students is crucial.

When students feel that they are part of a community that values their integrity and trusts their word, they are more likely to engage in honest academic practices.

Institutions should also be prepared to adapt and change their approaches as AI technologies evolve. Open channels of communication with students about the tools being used and the steps being taken to ensure fairness can foster an environment of trust and cooperation.

Developers of AI detection tools should also keep working to reduce the rate of false positives and improve the accuracy of their systems. AI writing detection is not provable; it is only probabilistic. You can't conclusively declare a piece of writing 100% AI-generated based on patterns alone, and with no official text watermark to check for, definitive proof is mathematically impossible.
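
To see why, consider a deliberately crude Python sketch: a scorer that turns a single text statistic into a "likelihood" between 0 and 1. The feature and the arithmetic are invented for this illustration and have nothing to do with Turnitin's model; the point is only that pattern-based detection produces a score, never proof.

```python
# Deliberately crude sketch of why AI detection is probabilistic, not proof.
# The single feature and the arithmetic are invented for illustration; real
# detectors use trained language models, and even those only output a likelihood.
import re
import statistics


def ai_likelihood(text: str) -> float:
    """Return a made-up score in [0, 1]; higher means 'looks more machine-like'."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.5  # not enough evidence to lean either way
    # Very uniform sentence lengths ("low burstiness") is one pattern often
    # associated with machine text, but plenty of humans write this way too,
    # which is exactly how false positives happen.
    burstiness = statistics.pstdev(lengths) / max(statistics.mean(lengths), 1)
    return max(0.0, min(1.0, 1.0 - burstiness))


print(ai_likelihood("Short sentence. Then a much longer, rambling, very human-sounding sentence follows it here."))
print(ai_likelihood("Each sentence has five words. Every sentence has five words. All sentences use five words."))
```

Run on those two samples, the deliberately uniform text scores far higher than the uneven one, even though both could easily have been written by a person. That gap between pattern and proof is exactly the space the guidelines ask educators to fill with their own judgment.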

Those behind the scenes of all the AI buzz should also actively engage with the educational community to understand the real-world impact of their tools and how they can better serve the needs of educators and students. The storm has only just begun...

The release of Turnitin's guidelines is an acknowledgment of the importance of human judgment in maintaining academic integrity in the age of AI.

By striking a balance between utilizing AI tools and relying on the human element, we can create an educational environment that not only upholds integrity but also protects the well-being and futures of students.

As educators, institutions, and AI developers navigate this complex terrain, the ultimate goal should remain clear – fostering an educational community that embraces innovation while safeguarding the trust and aspirations of its students.

Will paper-based essays soon come back? Only time will tell!

Want to Learn Even More?

If you enjoyed this article, subscribe to our free newsletter where we share tips & tricks on how to use tech & AI to grow and optimize your business, career, and life.


Written by Justin Gluska

Justin is the founder of Gold Penguin, a business technology blog that helps people start, grow, and scale their business using AI. The world is changing and he believes it's best to make use of the new technology that is starting to change the world. If it can help you make more money or save you time, he'll write about it!
