
ChatGPT in Schools Is the Perfect Wake-Up Call Our Education System Needed

The rise of AI-generated content in schools sets off alarm bells, and some see a crisis. But it truly offers a pivotal opportunity - a wake-up call pushing academic leaders to revitalize an education system overdue for an upgrade focused on critical thinking, ethics, and actually preparing students for the real world.
Updated January 16, 2024
A student sitting in a classroom surrounded by cheating robots, created with Midjourney

The recent proliferation of AI-generated content in schools has been sounding alarm bells across academia. But rather than a crisis to be warded off, it presents a pivotal opportunity - a much-needed wake-up call for an education system desperately in need of transformation.

For decades, the core tenets of teaching and evaluation have remained stagnant even as the world evolved rapidly around them. Technology is ingrained in our personal and professional lives more than ever before, and education hasn't fully adapted to it yet.

Students memorize content and regurgitate it on cue, only to forget it soon after. Assessments test how much information students can recall, not how well they can apply it. The system incentivizes finding shortcuts that subvert learning. Cheating has become rampant.

Learning, to many students, is a byproduct that might happen after memorization. In many ways, the education model has been sleeping through a revolution, stubbornly clinging to traditions in desperate need of revamp.

AI-based tools like ChatGPT, with their ability to generate entire essays in the blink of an eye, reflect how vastly disconnected the system has become from present-day realities.

And what's the appropriate school response? Teachers have turned to faulty AI detection tools, banned ChatGPT, and a whole bunch of other nonsense. Closing your eyes to the fact that ChatGPT exists and is not going anywhere will be a net-negative for the education system and output of students across the world.

Rather than solely relying on AI detection tools to catch and punish students, schools need to reflect on why students are using AI tools in the first place.

The American education system focuses heavily on rote memorization, repeating back content, and getting high grades. This incentivizes students to find shortcuts to earn rewards like good grades rather than truly gain knowledge and skills (this generalization sets aside technical and vocational schools).

Schools should recognize how these flawed priorities and incentives drive students towards unethical behavior like using AI for schoolwork.

Before vilifying technology, educators are rapidly approaching a point where they must re-evaluate systems that place too much emphasis on mindless content drills, test scores, competition for academic superiority, and other practices that undermine real learning. These issues motivate students to cheat and use AI disingenuously.

It's not just teachers - it's the entire education system and the way it's set up. Rather than doubling down on punitive AI detection tools, schools must acknowledge the message behind this phenomenon. The intense pressure to achieve high scores and repeat course content back verbatim creates incentives for unethical shortcuts. Schools should reassess academic reward systems centered narrowly on grades and rote learning, as these can promote cheating instead of actual understanding.

The solution lies not in vilifying technology but in reforming priorities. Assessments should gauge not what students can parrot but what they can create, analyze, improve, and lead.

That's exactly what the real world is like. Recall is only the first step; application is what actually makes you successful at anything. And technology is used throughout the entire process.

Curriculums ought to teach transferable skills over transient facts - critical thinking, communication, and creativity. The most successful people and companies in the world are not great at just one thing; they draw on many skills they've compounded and pieced together.

Teachers need upskilling to design assignments that compel students to learn actively, not chase grades passively. Review processes must uphold accountability while minimizing unfair allegations arising from imperfect detection tools.

Most importantly, we need to reshape how we link talent to achievement. Schools currently reward students just for outscoring their peers instead of truly improving themselves. Simply failing those who misuse new tools won't fix things; we need to guide them in the right direction.

Standardized tests still have some value to measure a baseline level of learning. However, they should not be the ultimate verdict on a student's intelligence or potential. That kind of standardized test-centric system is outdated and has not lived up to its promises.

Rather than clamping down, schools must leverage this inflection point to create systems centered on values, conscience, and 21st-century dexterity.

The education model requires no protection from progress but rather awakening to its reality. AI-based academic misdemeanors present the jolt it needs.

The time for sweeping transformation is now. Mark my words: if this isn't widely adopted and understood by most of academia in the next two years, recovery will take exponentially longer. Archaic institutions will fail, and new ones will pop up. Ignorant complacency breeds destruction. It's not a question of if, but when. And it will come.

AI Detection Is Reliably Inaccurate

The driving force behind this pivot must be acknowledging concerns around AI detection tools' reliability, as evidenced by user testimonials and a growing body of research.

While AI detection can certainly help predict whether AI wrote something, many of these tools acknowledge their own pitfalls and even advise against relying on them in educational settings.

Services like GPTZero and Turnitin caution that their algorithms cannot provide definitive proof of AI writing - only predictions based on pattern analysis. Some estimate 1-5% false positive rates, meaning for every 100 essays analyzed, 1-5 students could face unjust cheating allegations. That is crazy.
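To put those rates in perspective, here's a minimal back-of-the-envelope sketch. The school size and function name are hypothetical illustrations, not any vendor's official figures; it simply shows how even a small false positive rate compounds across a student body when essays are flagged independently.

```python
# Illustrative only: hypothetical numbers, not any detector vendor's
# official figures. Shows how a small false positive rate (FPR)
# scales across many essays checked independently.
def expected_false_positives(num_essays: int, fpr: float) -> float:
    """Expected number of honest essays wrongly flagged as AI-written."""
    return num_essays * fpr

# A school running 2,000 honest essays through a detector:
for fpr in (0.01, 0.05):  # the 1-5% range cited above
    flagged = expected_false_positives(2_000, fpr)
    print(f"FPR {fpr:.0%}: ~{flagged:.0f} students wrongly accused")
```

At a 1% false positive rate that's roughly 20 wrongly accused students per 2,000 essays; at 5%, roughly 100. Small-sounding percentages become large absolute numbers at institutional scale.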

Such punitive consequences on the basis of imperfect technology with acknowledged limitations can irrevocably damage innocent students’ reputations, emotional well-being, and future prospects.

Further, Turnitin concedes up to a 15% missed detection rate to minimize false positives. This means for a paper flagged as 50% AI-written, the actual figure could be as high as 65%. The inability to precisely distinguish AI vs human text even within flagged documents further reiterates the technology’s immaturity.

With error bands in both directions, integrity officers must interpret AI probabilities cautiously so that students don't face penalties over imprecise percent guesses. The lack of transparency around factors weighted in detection methodology also impedes contextual human judgment.

Rather than a solution, Turnitin's tool still requires vigilant human review. Yes, let's make things harder on teachers across the world by introducing a technology that isn't even fully understood by the company that built it.

In a recent interview we had with an unnamed university student, he pushed back on his school's reliance on detectors: “An AI detector is just using heuristics to guess. There's no certainty in anything it is saying, and so there's no PROOF that anyone is cheating at all.”

This student raises an excellent point: the unreliability of AI detection tools may push students to extreme lengths to prove their innocence. His idea of recording video of himself while working shows how questionable use of technology can lead to more technology being used defensively.

Instead of preventing cheating, the unreliable AI tools may start an "arms race." Students keep finding more technology to prove they didn't cheat. Schools keep trying to upgrade detection tools. This drains time and money away from actual education, which is the main point of going to school.

Schools need to acknowledge that it is unfair to put the burden of proving their innocence solely on individual students when using inaccurate predictive systems. A core principle of ethical leadership is that accusations against someone must be backed up by evidence that goes beyond reasonable doubt.

Basing judgments on probability scores from predictive systems that are constantly changing and lack transparency goes against this principle of ethical governance. It harms the student-teacher relationship when teachers grow suspicious of students because of a technology's uncertain guesses.

Before schools mandate detectors that bring more problems than solutions, they must re-evaluate priorities. As the student notes, "Be it generated by AI or by human hands, a paper full of fabrications should fail for that reason alone.” Rather than fearing progress, the answer may be returning to time-tested tenets of quality education - engagement, analysis, and integrity.

Yet the burden of proof still falls unfairly on students, who face failing grades or suspension over technology-aided guesses. A parent, Sarah, describes the agony of her daughter being unjustly accused over AI detection mismatches. Another parent notes these tools cause “a LOT of anxiety” even for honest students fearful of false flags jeopardizing their academic futures.

No matter how advanced, technology cannot replicate the human judgment required in nuanced academic situations. Students should not have their futures determined by algorithms that still need fairness tweaks and whose verdicts can't even be proven in the first place.

Please note that I said determined, not predicted - these tools do help predict; they just can't prove anything.

The Road Ahead

ChatGPT's advent may feel disruptive today, but it's merely a sign of innovations to come. If systems fail to evolve now, they only risk irrelevance tomorrow when new technologies emerge.

Rather than reactive tech crackdowns, schools must re-envision curriculums and assessments focused on real-world skills over rote content regurgitation. Simultaneously, reliance on opaque, unreliable AI detectors risks damaging students through inaccurate cheating allegations.

Instead of blocking progress, schools actually have an exciting opportunity here. They can use this challenge to upgrade their whole approach - refreshing their priorities, reworking their rewards systems, and kickstarting some much-needed improvements across education.

As schools race to implement fancy AI cheating detectors, they seem oblivious to something very basic - the tools just aren't ready yet, and they probably won't ever be. Even the companies making them admit they regularly mess up.

How many caring parents would accept even a tiny chance of their child being unfairly treated?

Schools must wake up and start thinking about common sense and ethics instead of avoiding new technology like it's the plague. Reliable or not, punishing students when you don't have solid proof goes against the basic rules of fairness. It will ruin teacher-student trust, destroy institutions' reputations, and open them to nasty lawsuits.

There are smarter ways forward. Schools should have open talks on using AI responsibly instead of jumping to cheat charges and rethink how they currently examine students. They can train teachers to create assignments that make cheating tougher and good learning habits easier. Bring back experience over experimentation.

The way out of this mess is not betting on some new version of unreliable AI detectors but getting back to timeless human values – critical thinking, compassion, and wisdom.

If schools remember ethics and humanity, they’ll find solutions with heart, not just technology. But first, they must make things right for innocent students needlessly put through the wringer by an experiment gone totally wrong.

I do have faith in education, though; I don't want to end this on a negative note.

I retain an enduring optimism in the promise of education if only we collectively acknowledge that the processes informing it no longer align with modern realities. The world has changed immensely, while dominant academic models have grown stagnant, still centered on outdated notions of teaching and evaluation. When systems fail to evolve alongside rapid shifts in society and technology, disruptive innovations inevitably arrive to fill the gaps and spark a much-needed reassessment of existing practices.

I have faith that through openness, imagination, and unity of purpose, we can reshape education for the future while upholding the humanistic values and love of knowledge that drew many educators into this profession.

This is not a time for fear or division but for embracing change, coming together, and rising to meet the promise of progress. The rest of the decade will be very interesting.

Written by Justin Gluska
Justin is the founder of Gold Penguin, a business technology blog that helps people start, grow, and scale their business using AI. The world is changing and he believes it's best to make use of the new technology that is starting to change the world. If it can help you make more money or save you time, he'll write about it!