Did ChatGPT Write This Paper? Here are 9 Tips To Help You Figure It Out
While you can't detect AI writing the way you can detect plagiarism, there are a few telltale signs that can give you some insight into whether ChatGPT was used to write a paper.

Justin Gluska
Updated October 23, 2023

The rise of AI writing tools like ChatGPT has made it increasingly difficult to figure out whether something was written by a human or generated with artificial intelligence.
While advanced AI can produce high-quality writing that reads naturally, there are definitely signs that hint to something being written by a robot.
We can make educated guesses about the likelihood that an article or paper was AI-generated, but there's no way to prove it conclusively. Detection is contested territory, but there are a few ways to at least get some insight into the origins of a paper, essay, or article you're looking at.
Look for Overuse of Transitional Phrases
One common marker of AI writing is the overuse of transitional phrases like "firstly", "secondly", "furthermore," "additionally," and "consequently."
The AI relies on these bridges between ideas to create logical flow, but it often overdoes it, inserting transitions where a human writer would let ideas stand alone. This is especially noticeable in casual internet writing, where human authors rarely bother with formal transitions.
If you notice an article constantly using these types of words to link concepts, it may indicate it came from ChatGPT.
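The transition-counting idea above can be sketched in a few lines of Python. The phrase list and the per-100-words normalization here are illustrative choices, not an established detection standard:

```python
import re

# Hypothetical phrase list -- extend it with whatever transitions you see overused.
TRANSITIONS = [
    "firstly", "secondly", "furthermore", "additionally",
    "consequently", "moreover", "in conclusion",
]

def transition_density(text: str) -> float:
    """Return transitional-phrase hits per 100 words (rough heuristic)."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    lowered = text.lower()
    hits = sum(lowered.count(phrase) for phrase in TRANSITIONS)
    return 100.0 * hits / len(words)

sample = ("Firstly, the data is cleaned. Secondly, it is analyzed. "
          "Furthermore, the results are summarized. Additionally, "
          "conclusions are drawn. Consequently, a report is produced.")
print(round(transition_density(sample), 1))
```

A high density on its own proves nothing, but comparing scores across several pieces by the same author can make an outlier stand out.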
Watch for Fancy Yet Misused Words
AI programs have big vocabularies but sometimes use elaborate words incorrectly or in unnatural ways.
For example, AI-generated text may say something "utilizes" or "implements" a concept when a human would simply say "uses."
Look for misused fancy terms that seem oddly out of place or don't fit the context. Vocabulary that doesn't match the intended audience or reader is a strong indicator.
Try AI Detection Tools like CopyLeaks & Originality
Specialized tools such as CopyLeaks and Originality allow you to paste in text samples for analysis of how likely they are to be AI-generated based on linguistic patterns.
While not 100% accurate, they provide helpful second opinions beyond your own evaluation. Cross-check tricky articles with leading detection tools for more clues.
Check for Lots of Short, Choppy Sentences
Unlike human writers, AI systems have a harder time constructing longer, complex sentences with detailed explanations and analysis.
AI text often relies on short, choppy fragments rather than fluid sentences with depth. If passages read more like bullet points strung together than like developed analysis, it may point to AI.
You should still check multiple pieces of writing from the same author if you want a more reliable judgment, though. A single article or essay isn't enough.
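One way to make the "choppy sentences" check concrete is to measure average sentence length and how much it varies. This is a rough sketch under my own assumptions; the split on punctuation is naive and the numbers are only a signal, not a verdict:

```python
import re
import statistics

def sentence_stats(text: str) -> tuple[float, float]:
    """Return (mean sentence length in words, population std dev)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if not lengths:
        return 0.0, 0.0
    return float(statistics.mean(lengths)), statistics.pstdev(lengths)

choppy = "AI is useful. It writes fast. It saves time. It helps people."
mean, spread = sentence_stats(choppy)
```

Human writing tends to show a higher spread: a mix of long, winding sentences and short punchy ones, rather than a uniform drumbeat of four-word statements.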
Look for Repeated Phrases and Keywords
Since AI programs lean heavily on discovering patterns in the data they're trained on, they often repeat the same words and phrases over and over.
This habit comes from the AI trying to sound fluent by pulling from its training data, but it results in unnatural repetition you're unlikely to see in human writing.
AI models can also "hallucinate" and start discussing unrelated things. If you find a factually incorrect or incoherent statistic, somebody probably ran a query through ChatGPT and expected perfection without proofreading anything.
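The repetition check above can be sketched as a simple n-gram count. This is an illustrative heuristic, not a validated detector; three-word phrases and the count threshold are arbitrary choices:

```python
from collections import Counter

def repeated_trigrams(text: str) -> list[tuple[str, int]]:
    """Return three-word phrases that appear more than once, with counts."""
    words = text.lower().split()
    trigrams = [" ".join(words[i:i + 3]) for i in range(len(words) - 2)]
    counts = Counter(trigrams)
    return [(t, c) for t, c in counts.most_common() if c > 1]

sample = ("it is important to note that quality matters. "
          "it is important to note that speed matters.")
```

In a short passage, even a handful of repeated trigrams like "it is important" is the kind of pattern worth a second look.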
Look Out For "I'm Sorry But As a Large Language Model"
"I'm sorry, but as a large language model..." is a phrase that commonly slips through when someone tried to use AI to write about something it didn't know about (or simply refused to write about).
It has appeared in the courtroom and in academic papers where people try to get away with using boilerplate ChatGPT output (you ask it for something and paste the response directly into whatever you're trying to complete) without tweaking anything.
This text commonly shows up and is an obvious (and humorous) way to immediately prove something was written with ChatGPT.
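Scanning for leftover chatbot boilerplate is trivial to automate. The phrase list below is illustrative; real leftovers vary in wording, so treat this as a starting point rather than a complete list:

```python
# Hypothetical list of telltale refusal/boilerplate phrases.
BOILERPLATE = [
    "as a large language model",
    "as an ai language model",
    "i cannot fulfill that request",
]

def find_boilerplate(text: str) -> list[str]:
    """Return every known boilerplate phrase found in the text."""
    lowered = text.lower()
    return [p for p in BOILERPLATE if p in lowered]

essay = "I'm sorry, but as a large language model I cannot provide..."
```

Unlike the statistical heuristics, a hit here is close to a smoking gun: the phrase was pasted straight from a chatbot.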
Check Accuracy of Numbers and Data
Because AI relies on patterns in data rather than real understanding, it often makes mistaken assumptions or uses numbers inaccurately.
If you notice inaccurate statistics or contradictions between facts and figures, it may reveal an AI source that doesn't comprehend the data. Also, don't trust anything you read unless it links back to a primary source. Articles and papers that are just listicles without any sources are often just ChatGPT spewing out BS.
Look for Credibility of Sources and Citations
As mentioned above, verifying sources helps reveal whether human due diligence or AI automation was behind the text.
AI text may cite suspicious sources or include no citations at all beyond vague references.
Take a closer look at where facts and quotes came from to help determine if a real person compiled research or if it was probably just ChatGPT.
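A first-pass version of the source check can be automated: just count whether the text links out to anything at all. This is a weak signal by my own assumption; a well-sourced print article may have zero URLs, but a long listicle with none deserves extra scrutiny:

```python
import re

def count_links(text: str) -> int:
    """Count http/https URLs in the text (rough citation proxy)."""
    return len(re.findall(r"https?://\S+", text))

article = "According to https://example.com/study, usage grew 40%."
```

From there, the real work is manual: follow each link and confirm it actually supports the claim it's attached to.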
Look for Contextual Cues and Disclaimers
Responsible AI users disclose when content was generated by bots rather than humans. That doesn't happen often, but when it does, it's an easy way to be sure.
Look for any contextual clues within the text itself denoting it was written by AI. Students may include disclaimers on AI-assisted essays while marketers may fess up to using tools like Jasper.
Final Thoughts
While we can't declare something was AI-generated with full certainty, using these tips can give you some pretty good insight.
I honestly use Originality a lot, with multiple samples from the same writer. I've noticed it does well when I test my copywriters. It does over-predict, though, so I take it with a grain of salt.
Humanity's future may depend on our ability to separate AI-generated disinformation from the truth. It's going to be hard doing this in schools, companies, and especially the internet. But there's no stopping the AI wave. We just have to get used to the cards we've been dealt.
Want to Learn Even More?
If you enjoyed this article, subscribe to our free newsletter where we share tips & tricks on how to use tech & AI to grow and optimize your business, career, and life.