Artificial intelligence is increasingly making its way into our daily lives, and the world of academia isn't immune. With tools like TurnItIn's new AI detection software alongside AI-generated writing tools like ChatGPT, teachers and students alike are finding themselves navigating uncharted territory.
Educators are growing more concerned about students potentially cheating on assignments using advanced technologies. In response, they're looking to solutions like Turnitin's new AI writing detection tool to combat this issue.
But while AI detection tools might seem promising at first glance, they come with their own set of problems.
Over the next few minutes, we'll dive into the complex relationship between AI-driven cheating, AI writing detection technology, and their impact on both educators and innocent students who might get caught in the crossfire of false positives.
TurnItIn's AI Detection Software: A Double-Edged Sword
The new software is set to be activated for over 10,700 educational institutions over the next few weeks, offering a solution to counter AI-assisted cheating, but at what cost?
The accuracy of the AI detectors has been called into question, with Turnitin's software giving false positives and not being completely reliable in identifying mixed AI and human writing sources.
High school senior Lucy Goetz was surprised when the detector flagged her essay on socialism as likely AI-generated. Educators face a dilemma when it comes to AI in education: ignoring AI could lead to rampant cheating, but treating AI solely as a threat is shortsighted.
Turnitin's software is not foolproof, and AI technology is constantly evolving, making it difficult to keep up. Yet it seems professors often compare AI detection to tools like plagiarism checkers.
But AI detection is NOT conclusive; it works on predictions. I don't see how it's fair to accuse students of something when you don't have definitive proof.
Regardless, some experts believe that the introduction of AI detectors will only result in an arms race between cheaters and detectors, each forever chasing the other without ever fully catching up. But what can be done about it?
Take this recent example: William Quarterman, a college senior at UC Davis, was accused of cheating on his history exam after his professor ran his answers through an AI detection tool, GPTZero, which claimed they were AI-generated.
Quarterman denied the allegations and experienced panic attacks as he had to face the university's honor court. He was eventually cleared of the accusation after providing evidence that he didn't use AI.
As higher education institutions struggle to address the increasing use of AI by students for assignments and exams, the reliability of AI-driven detection software is being questioned.
Richard Culatta, CEO of the International Society for Technology in Education, suggests that universities should not elevate these cases to disciplinary action but instead implement guidelines and ask students to show their work before making accusations. He emphasizes that educators should learn to work with AI rather than banning or fearing it, as it rapidly evolves.
The benefits of using AI detection tools are certainly appealing for educators looking to maintain academic integrity. These systems can assign "generated by AI" scores and perform sentence-by-sentence analysis on student assignments, providing insights that may not be apparent through human review alone. But how accurate would this even be?
In theory, this should deter students from relying on AI-generated writing in their coursework and encourage them to develop their own thoughts and arguments. But students obviously don't always do the most ethical thing.
The rate of false positives from AI detection tools is not zero, and the creators of these tools, including OpenAI, Turnitin, Content at Scale, and GPTZero, warn educators about possible inaccuracies. As a result, education technology experts recommend that schools embrace AI and develop policies around its use, including citing AI when appropriate, making assessments more rigorous, and determining the right questions to ask when a student is suspected of cheating.
However, there is a flip side to these advantages, as the potential dangers of relying too heavily on AI detection become increasingly evident:
As seen in Goetz's case mentioned in the Washington Post article, detectors can sometimes get it wrong - with potentially disastrous consequences for innocent students. When Turnitin flagged a portion of her original essay as likely being generated by ChatGPT, it raised serious concerns about false accusations based on imperfect technology.
Detectors Introduced Without Widespread Vetting
The rapid implementation of Turnitin’s AI detection software across educational institutions raises questions about whether sufficient testing has been conducted to ensure its accuracy and fairness. For instance, the Washington Post columnist found several California high schoolers' papers falsely identified as fabricated by Turnitin's new detector.
This is huge. It's as if TurnItIn just hopped on the wave of AI detection and released something that could damage millions of students. Do we really know the extent of their vetting process?
Rapidly Evolving AI Technology Outpacing Detection Tools
As discussed in the USA Today article, creators behind detection tools like GPTZero acknowledge their systems' fallibility due to constant advancements in AI-generated content techniques. Additionally, newer versions of popular writing bots such as ChatGPT (e.g., GPT-4) or Google's Bard may already surpass the detection capabilities of current tools.
One teacher, who wishes to remain unnamed, expressed concern about the current accuracy of TurnItIn's AI detection tool:
"Turnitin is claiming 98 % accuracy. But in previous 3 days of my testing, I feel it is more like 60%. One of my papers (published in 2020) was flagged as AI written. Similarly, some prompts generated content which was flagged as human written. I am just appalled that some professors keep thinking AI detection is similar to plagiarism detection even though it is not."
The Consequences of Incorrectly Flagged Students
When students are falsely flagged by AI detection tools, the consequences can ripple beyond mere academic repercussions. Innocent students caught in this crossfire may face a range of adverse effects that could impact their education and overall well-being.
The impact on student-teacher relationships
When students like Goetz are wrongly accused of cheating, the trust they have built with their teachers may be significantly damaged. Teachers play a pivotal role in fostering an environment conducive to learning, and mutual trust is a fundamental aspect of that relationship.
A false accusation can create a sense of doubt in the teacher's mind, potentially leading them to scrutinize future assignments from the affected student more critically than others.
Students who experience unjust accusations may feel alienated or unfairly targeted by their instructors, which can result in disengagement from class activities or reluctance to seek help when needed.
These strained relationships could extend beyond individual students and teachers, impacting classroom dynamics as other students become aware of potential inaccuracies in AI detection tools. In some cases, such awareness may also lead to doubts about fair grading practices or increased skepticism regarding educators' reliance on AI-based solutions.
It's crucial for institutions implementing Turnitin's AI-integrated platform and similar tools to recognize the importance of fostering strong student-teacher bonds and take active steps toward ensuring accurate evaluations while minimizing potential harm arising from incorrect flagging.
Unfair punishments and damaged academic reputations
When students like Quarterman are falsely accused of cheating due to the inaccuracies of AI detection software, they may face a range of unfair consequences that could negatively impact their educational journey. These include failing grades on assignments or exams, disciplinary actions such as academic probation, suspension, or even expulsion in severe cases.
Such unjust punishments can tarnish a student's academic record and jeopardize future opportunities. Poor grades resulting from incorrect cheating accusations might hinder students' chances of securing scholarships or being accepted into competitive college programs.
A history marked by disciplinary actions due to alleged academic dishonesty could lead potential employers or graduate schools to question the applicant's integrity during evaluations.
In addition to these tangible effects, an undeserved blemish on one's academic reputation may have long-lasting repercussions on self-esteem and confidence in their abilities.
Students who experience false allegations might become more hesitant to take risks in their studies out of fear that they may again be flagged erroneously.
To prevent these detrimental outcomes for innocent students caught in the crossfire of flawed AI detection tools, it is essential that educational institutions consider implementing robust review processes alongside AI detection solutions. By combining human expertise with AI advancements, educators can help minimize wrongful accusations while still upholding academic integrity standards.
If an assignment comes back as 20% AI-generated, that doesn't mean 20% of it was written with AI. It means the detector estimates a 20% chance that the document you're looking at was written with AI. The percentages can be deceptive, so it's very important to learn how AI detection works.
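To make that distinction concrete, here's a minimal Python sketch. The 50-sentence essay length and the batch of 1,000 essays are illustrative assumptions, not Turnitin data:

```python
score = 0.20  # a "20% AI-generated" result from a detector

# Misreading: treating the score as a proportion of the text.
essay_sentences = 50                  # hypothetical essay length
misread = score * essay_sentences     # "10 sentences were AI-written"

# Probability reading: the score applies to the whole document.
# Across 1,000 fully original essays that each scored 20%, the detector
# is effectively saying it "expects" 200 to be AI-written, even though
# none of them may contain any AI text at all.
batch = 1_000
expected_ai_essays = score * batch

print(misread)             # 10.0 sentences under the proportion reading
print(expected_ai_essays)  # 200.0 essays under the probability reading
```

The same number tells two very different stories, which is exactly why teachers need to know which one the report is giving them.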
The emotional toll on students wrongly accused of cheating
Beyond the practical consequences, being falsely accused of cheating can have a profound impact on a student's emotional well-being. Wrongful accusations can trigger anxiety, stress, and self-doubt among affected students. In Quarterman's case, he experienced "full-blown panic attacks" during his ordeal with the university's honor court after being wrongfully accused.
Such emotionally distressing experiences can be detrimental to a student's mental health and contribute to their overall academic struggles.
The emotional burden placed upon innocent students caught in false positives emphasizes the importance of ensuring that AI detection tools are used responsibly and ethically within educational settings.
Institutions should consider implementing comprehensive support systems for both educators and students when utilizing AI detection software. These support systems might include clear guidelines for using such technology alongside human judgment, training programs for educators on interpreting results accurately while considering context, and providing resources for students who may be facing false accusations so they can effectively advocate for themselves during any investigative process.
By tying together all of these approaches, educators can help minimize wrongful accusations while still upholding academic integrity standards. This holistic approach not only preserves the fairness and trust integral to academia but also ensures that innocent students' emotional well-being remains a priority throughout their educational journey.
Testing TurnItIn's AI Detection Software: A Mixed Bag of Results
As educational institutions rely more heavily on AI detection tools like Turnitin's software, it becomes increasingly vital to evaluate their efficacy and limitations through real-world testing. This analysis can provide valuable insights into the practical performance of such systems, shedding light on potential shortcomings and areas for improvement.
In one particular case highlighted by the Washington Post columnist, five high school students volunteered to help test Turnitin’s AI detector by creating 16 samples of essays. These essays comprised a mix of original student work, AI-fabricated content from ChatGPT, and pieces featuring mixed-source writing from both human and AI sources.
By running these samples through Turnitin’s system, the columnist could see how the software judges a small sample of student work.
The test results revealed certain limitations in Turnitin's ability to accurately identify AI-generated writing:
- Accurate identification of only six out of 16 samples - In this experiment, Turnitin correctly identified less than half (37.5%) of the total submissions.
- Partial credit for seven samples with mixed accuracy - While the system was directionally correct in some instances, it failed to fully identify or distinguish between human-written sentences and those generated by ChatGPT.
- Failure on three samples, including a false positive - Perhaps most concerning was the instance where the system flagged Goetz’s original essay as being partly generated by ChatGPT when it was entirely her own work.
TurnItIn Claims 98% Detection Accuracy and Less than 1% False Positives
Despite these notable discrepancies found during testing, TurnItIn claims that its detector is 98 percent accurate overall based on its internal assessments. The company also states that situations like Goetz's case—false positives—occur less than 1 percent of the time. I'm not quite sure how you can claim almost 100% detection accuracy on something that isn't actually provable.
Given the practical implications for students and educators, even a small percentage of false positives could have severe consequences. 1% of a million students getting falsely flagged is still massive, especially at the university level.
Turnitin's software must be monitored to ensure it provides accurate and fair evaluations while minimizing potential harm to innocent students caught in its net. The feature still needs more tweaking, and TurnItIn needs to clearly educate teachers and professors on what the percentages and detection rates actually mean.
How Reliable is TurnItIn AI Detection?
The detection tool is fairly reliable. This doesn't mean accurate, it just means you'll get around the same score when testing very similar content.
How Accurate is TurnItIn AI Detection?
It may be reliable (you'll get the same score testing similar content), but you cannot claim it is accurate. You cannot prove anything is written with ChatGPT.
You are really just looking at words on a screen.
You may get some insight, especially if sentences match patterns easily replicated by AI/ChatGPT, but if TurnItIn accuses students of using AI and those students actually fail classes because of it, there will undoubtedly be a ton of lawsuits popping up over the next year.
They claim they might flag a human-written document as AI-written for one out of every 100 fully human-written documents. That's not good enough, in my opinion.
A large college lecture hall has at least 300 students. You're telling me 3 of them are going to get written up for using ChatGPT just because a detector told the professor so?
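To put those rates in perspective, here's a back-of-the-envelope Python sketch. It uses TurnItIn's own claimed figures (98% detection, 1% false positives); the assumption that 10% of submissions actually use AI is purely hypothetical, chosen for illustration:

```python
detection_rate = 0.98       # claimed P(flagged | AI-written)
false_positive_rate = 0.01  # claimed P(flagged | fully human)
ai_share = 0.10             # hypothetical share of AI-written submissions

# In a 300-seat lecture where nobody cheats, expected wrongful flags:
print(300 * false_positive_rate)  # 3.0 students

# Scaled to a million honest students:
print(1_000_000 * false_positive_rate)  # 10000.0 students

# Among all flagged papers (given the 10% assumption above), what
# fraction are actually innocent?
flagged_ai = ai_share * detection_rate                 # 0.098
flagged_human = (1 - ai_share) * false_positive_rate   # 0.009
innocent_share = flagged_human / (flagged_ai + flagged_human)
print(round(innocent_share, 3))  # 0.084
```

Even with generous assumptions, roughly one in twelve accusations would land on a student who wrote every word themselves.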
Does TurnItIn Detect AI Writing?
The short answer is yes, although not all teachers are actually going to use the feature. The detection tool provides reports to educators & integrates directly with popular learning management solutions like Canvas, Blackboard, and Moodle.
Teachers are able to check for AI writing via TurnItIn's Similarity report (the same report they use to determine whether a student is plagiarizing).
The Dilemma Facing Educators
As AI-generated cheating becomes a rising concern in academic settings, educators find themselves grappling with the challenge of maintaining integrity without relying solely on potentially faulty AI detection tools.
Teachers must strike a delicate balance between discouraging dishonest practices and ensuring fairness to all students. They need to find ways to assess student work accurately while minimizing the risk of false positives that could damage innocent students' academic records or emotional well-being.
It can be helpful to draw parallels between the adoption of AI writing technologies and the widespread use of calculators within academia. Both tools serve as valuable aids for learning when used responsibly, but they also present opportunities for misuse or overreliance by students seeking shortcuts instead of truly understanding concepts. As such, finding ways to integrate these powerful tools into education ethically and responsibly is an ongoing debate among educators.
The reluctance of some institutions to display AI writing scores
Recognizing potential pitfalls in using Turnitin's technology, 2 percent of its customers have chosen not to include the “Generated by ChatGPT” score alongside other feedback elements in their report summaries.
A significant majority of UK universities, according to UCISA, are cautious about embracing new technology without adequate understanding or safeguards in place, leading them to avoid displaying potentially misleading information derived from AI detections when assessing student work.
This dilemma highlights that while leveraging technology advancements can benefit educators, finding a balance that ensures accuracy and fairness remains critical for protecting innocent students who might be caught up in flawed systems.
Possible Solutions and the Future of AI in Education
There is no easy solution here. In the next few months (or a year at the latest), we're going to hit the point where it's pretty much impossible to distinguish between human- and AI-written content.
Through partnerships between detection-tool vendors and schools, both parties must work together to refine detection tools, minimize false positives, and develop solutions that integrate seamlessly with academic environments. I don't think it's going to be easy, but it's a necessary ethical step.
Schools could consider revising assignment formats, promoting a culture of academic integrity, or implementing honor codes as strategies for discouraging dishonest behavior without overreliance on AI detection software.
Educators should also recognize the potential benefits AI tools offer in enhancing teaching and learning experiences while addressing the challenges they bring concerning academic integrity. It seems like many are stuck in their ways.
I think we're very close to an educational firefight. If this trend continues and detection is promised but inadequately deployed, there will be tons of lawsuits across the world from parents unhappy with unfair punishments.
Of course, some students who abuse AI to submit their schoolwork will get caught, but punishing even a single student who didn't cheat is absolutely terrible.
This isn't plagiarism, we don't really have proof. Patterns and assumptions aren't going to be enough to face the wave of what's about to come. It's nearing the end of the spring term now but when fall gets here, we're in for a wild ride...