Everything You Need to Know About TurnItIn's AI Detector For The Coming School Year
Turnitin has published detailed FAQs explaining how its relatively new built-in AI detector works for teachers and students. Here are the most important takeaways.
Justin Gluska
Updated September 14, 2023
Turnitin, one of the leading academic plagiarism detection tools, recently added artificial intelligence writing detection capabilities to its platform. With the start of the 2023/24 school year, this is going to be a major factor in grading, especially at universities.
To help you understand it all, we went through Turnitin's publicly available documentation and put together a guide to everything about its AI detection: how it functions, its key features, and how it all works.
AI Writing Detection Within Similarity Reports
Turnitin added an AI writing indicator to its Similarity Reports that shows the estimated percentage of a submitted document that may be AI-generated. Only instructors and administrators can view the indicator, which Turnitin says is meant to inform decisions, not dictate grading.
Submissions Analyzed Sentence-by-Sentence
When a paper is submitted, it is broken into segments that capture each sentence in context. Turnitin's AI model analyzes the sentences and predicts whether they are human or AI-written, providing an overall AI percentage for the document.
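The segment-and-aggregate approach described above can be sketched in a few lines. This is only an illustration, not Turnitin's actual pipeline: the `classify_sentence` callback stands in for their proprietary model, and the toy word-count heuristic used in the demo is purely an assumption for demonstration.

```python
import re

def ai_percentage(document: str, classify_sentence) -> float:
    """Estimate the share of sentences a (hypothetical) model flags as AI-written.

    `classify_sentence` is a stand-in for a real detection model: it takes one
    sentence and returns True if the sentence looks AI-generated.
    """
    # Naive sentence segmentation: split after ., !, or ? followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", document.strip()) if s]
    if not sentences:
        return 0.0
    flagged = sum(1 for s in sentences if classify_sentence(s))
    return 100.0 * flagged / len(sentences)

demo = ("Short sentence. This is a much longer sentence that a toy heuristic "
        "might flag as suspiciously uniform and machine-like in style.")

# Toy stand-in classifier: flag any sentence longer than 12 words.
print(ai_percentage(demo, lambda s: len(s.split()) > 12))  # → 50.0
```

The real system presumably segments with overlapping context rather than bare sentence splits, but the aggregation idea (flagged sentences over total sentences) is the same.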
Current Models Detected
The current model detects text generated by GPT-3, GPT-3.5, ChatGPT, and often GPT-4 (ChatGPT Plus).
Model Trained to Detect AI Text Patterns
Turnitin's model uses parameters that target the consistent, highly probable word sequences characteristic of AI models, as opposed to more varied human writing patterns. In other words: robotic-sounding sentences.
Diverse Training Data
The model is trained on a sample dataset of both real and AI-generated academic writing from different subject areas, geographies, and under-represented groups. This is likely intended to counteract the tendency of many AI detection tools to falsely flag students who are not native English speakers.
Retroactive Detection Via Resubmission
Past assignments can be checked for AI writing if resubmitted to Turnitin after the capability's launch in April 2023.
English Language Only For Now
The initial AI detection model only supports English language submissions. Non-English submissions will not be processed.
Trial Accounts Available
Institutions can request test accounts to evaluate the AI detection capabilities before fully rolling them out across schools.
Ability to Suppress Indicator and Report
Admins can enable or disable the AI writing feature, suppressing the indicator and report as needed. If an institution opts out of the feature, its instructors simply won't see it.
AI Detection Integrated Into Workflow
The AI detection seamlessly integrates into existing Similarity Report workflows. It does not impact how users interact with the reports. Teachers can easily view it as another parameter when grading papers.
Available Via LMS Integrations
The AI indicator and report are accessible through LMS integrations with platforms like Moodle, Blackboard, and Canvas.
Limitations With MS Teams Integration
The AI detection is not available via the MS Teams Assignment Similarity integration. Instructors will need to request full reports manually. We're not sure if this will be changed in the future.
Different From Authorship Detection
The AI percentage focuses on detecting machine-generated text, while Turnitin's separate Authorship tool uses metadata and language analysis to detect impersonation.
We must note that while Turnitin's AI detection model appears quite powerful, it isn't perfect; the company claims a false positive rate of under 1%.
Turnitin says it is committed to maintaining that rate and will continue optimizing its model, focusing on academic integrity while safeguarding student interests. Even so, some students have already been falsely flagged, with serious unintended consequences.
The model's accuracy improves with larger text samples. Turnitin acknowledges the technology may miss around 15% of AI-written text in a document, a trade-off it accepts in order to keep the false positive rate low and treat all work fairly.
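To see why even a "low" false positive rate matters at scale, here's some back-of-the-envelope arithmetic. The rates are Turnitin's public claims; the submission counts are made-up assumptions for illustration only.

```python
# Illustrative arithmetic on Turnitin's published rates.
human_papers = 10_000       # hypothetical human-written submissions in a term
ai_papers = 1_000           # hypothetical AI-written submissions in the pool
false_positive_rate = 0.01  # Turnitin's claimed ceiling (<1%)
miss_rate = 0.15            # share of AI text Turnitin says it may miss

wrongly_flagged = human_papers * false_positive_rate
missed = ai_papers * miss_rate
print(wrongly_flagged, missed)  # → 100.0 150.0
```

In other words, at a large university a sub-1% false positive rate can still mean a hundred honest students flagged per term, which is why Turnitin stresses that the indicator should inform, not dictate, grading decisions.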
Ultimately, Turnitin's goal is to provide comprehensive data for educators to guide decisions grounded in fairness and academic integrity. As AI technology evolves, so too will Turnitin's AI detection capabilities to ensure continued accuracy, fairness, and effectiveness.
Do AI Detectors Even Work?
It's a tricky situation.
A study from Stanford researchers raised serious questions about whether current AI writing detectors actually work as claimed.
The analysis found that while detectors performed well in evaluating essays written by native English speakers, they incorrectly classified over 60% of essays by non-native English learners as AI-generated.
Alarmingly, nearly all non-native speaker essays were flagged by at least one detector.
This unreliability stems from the detectors scoring text on lexical and syntactic complexity, measures on which non-native speakers naturally trail native writers. With international students at risk of being unfairly accused of cheating, this highlights the concerning bias and lack of objectivity in current AI detectors.
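The complexity bias can be illustrated with a crude sketch. Real detectors score text with large language models, but the principle is similar: common, predictable wording looks "less surprising" and is therefore more likely to be flagged as AI. Everything below (the unigram model, the tiny corpus, the scoring formula) is an illustrative assumption, not how any actual detector is built.

```python
import math
from collections import Counter

def pseudo_perplexity(text: str, corpus_counts: Counter, total: int) -> float:
    """Crude unigram 'perplexity': how surprising each word is under corpus
    frequencies. Lower scores mean more predictable, 'AI-like' wording."""
    words = text.lower().split()
    log_prob = 0.0
    for w in words:
        # Add-one smoothing so unseen words don't zero out the probability.
        p = (corpus_counts[w] + 1) / (total + len(corpus_counts) + 1)
        log_prob += math.log(p)
    return math.exp(-log_prob / max(len(words), 1))

corpus = Counter("the student wrote the essay about the topic for the class".split())
total = sum(corpus.values())

plain = "the student wrote the essay"                 # simple, high-frequency wording
ornate = "serendipitous phrasing eludes rigid heuristics"  # rarer vocabulary

# Plain wording scores lower (more predictable) than ornate wording.
print(pseudo_perplexity(plain, corpus, total) < pseudo_perplexity(ornate, corpus, total))  # → True
```

A writer with a smaller working vocabulary, such as a non-native speaker, naturally produces more of the low-scoring text on the left, which is exactly the pattern these detectors associate with AI.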
The study also found the detectors are easily fooled through simple prompt engineering.
OpenAI, the creators of ChatGPT, concur that existing AI detectors do not reliably detect AI-generated content. In OpenAI's own experiments, the tools incorrectly labeled human-written texts, like Shakespeare's works, as AI-generated.
The detectors seem prone to disproportionately flagging non-native English speakers and formulaic writing styles, and students can easily bypass them with minor edits. OpenAI even shut down its own AI classifier in July 2023 because of its low accuracy.
Given how easily gamed and unreliable these AI detectors are, especially for non-native speakers, both the Stanford researchers and OpenAI caution against utilizing them to identify AI writing in academic settings currently.
More rigorous evaluation and refinement is needed before putting faith in these technologies, where the stakes are high and students' interests are at risk.
While addressing potential AI cheating is important, OpenAI emphasizes that fairness, objectivity, and accountability should be prioritized over deploying unreliable detectors. It didn't mention anything about Turnitin specifically, though.
OpenAI points to techniques some educators have found useful instead, like having students share ChatGPT conversations to demonstrate critical thinking and information literacy skills in a transparent manner.
The Path Forward for This Academic Year
Addressing the rise of AI generative models in academia is a complex issue with no perfect solution yet.
However, both the research community and AI developers like OpenAI agree that maintaining ethical standards and protecting students, especially non-native speakers, should be the foremost priority. Will that happen? We'll have to see.
Businesses generally care more about profits than about the impact of what they do, and we'll have to see whether Turnitin's guidance is a PR play or whether it is genuinely careful about flagging students.
Relying on still-unreliable detectors is likely not the answer at this time. Rather, a measured approach focused on transparency, accountability and pedagogical outcomes may be preferable as policies continue evolving.
If you've been falsely accused of using ChatGPT, you can at least use that documentation to support your case.
It's gonna be a rough year for students, parents, and teachers, that's for sure.
Want to Learn Even More?
If you enjoyed this article, subscribe to our free newsletter where we share tips & tricks on how to use tech & AI to grow and optimize your business, career, and life.