The jury is still out on the long-term effects of LLMs in education, for two reasons. One, it's still a new technology. Two, there haven't been enough studies on the subject. As a society, we are in desperate need of smart people investigating AI in the academic setting. We need something tangible, something that can be visualized.
Which is why we're shining the spotlight today on ESCP Business School and its impact paper, "AI and the future of academic writing: Insights from the ESCP Business School Prompt-o-thon workshop in Berlin."
In this article, we’ll discuss the people behind the paper, their findings, what we can learn from it, and our short interview with the researchers themselves.
The People Behind The Paper
Gaining insight into the effects of AI in an academic setting would be incomplete without a collaborative and diverse approach among academic experts. The following are the brilliant people from ESCP Berlin responsible for spearheading the Prompt-o-thon Workshop and authoring this paper:
- Markus Bick: Full Professor of Information & Operations Management, and Chair of Business Information Systems
- Jessica Breaugh: Senior Lecturer in Management
- Chuanwen Dong: Assistant Professor of Information & Operations Management
- Gonçalo Pina: Associate Professor in Law Economics & Humanities
- Carolin Waldner: Assistant Professor for Sustainability Management
AI and Academic Writing: What We Can Learn From the ESCP Berlin Prompt-o-thon
To get the full picture of the effect of LLMs in academia, a Prompt-o-thon Workshop was held at ESCP's Berlin campus on March 7th, 2023. The goal of the event was simple: explore how well AI tools can write academic papers, particularly theses.
Each of the 20 participants was given 45 minutes to ideate and draft a thesis paper. They were also given a short introduction to the AI tools they could use to expedite their writing and research process.
Afterwards, the participants were split into three teams and tasked with writing a thesis on the topic "The impact of feedback on worker performance." From there, students applied the AI tools to a wide range of tasks, including outlining, literature review, identifying research gaps, and Python programming for automation.
Evaluation of the Created Theses
The resulting theses were graded using ESCP's standard evaluation criteria for academic papers. The results were as follows:
- 5 out of 20 for content.
- 15 out of 20 for formatting.
Overall, the papers averaged 8 out of 20, short of the required passing score of 10 out of 20.
However, the teachers did commend the conceptualization and volume each thesis exhibited, considering that they were created in only 45 minutes. They further noted that each of the theses had "interesting research questions, identified valid research methods, and presented what looked like a comprehensive list of literature."
One noteworthy caveat, however, is that the language was more colloquial than professional. The depth of knowledge also left much to be desired: some citations were missing from the final reference list, and others turned out to be fabricated, a telltale sign of AI hallucination.
What Students Think of LLMs
Students were also asked to share their thoughts on using LLMs for academic papers, split across two questions: why use them, and how?
For the former, the students gave the following reasons to consider using LLMs for thesis papers:
- Inspiration: Gathering ideas and insights on a specific subject to deepen their comprehension of the objectives and potential research gaps they could address.
- Literature Review: AI can help consolidate and summarize past studies by reviewing authors' procedures and results. Tools like Elicit can locate relevant articles, saving time in literature review and streamlining the process.
- Structure: LLMs can help create outlines and structure content in a way that makes sense.
- Data Analysis: AI can also be leveraged in data analysis by automating repetitive tasks, uncovering patterns in large datasets, and providing advanced statistical insights.
- Non-Crucial Tasks: LLMs can help students with tasks like summarizing their work for an abstract, refining academic writing style, and enhancing language use.
As for how to use them, students highlighted AI's usefulness as an aid for brainstorming, navigating research, and organizing academic writing. However, they noted that AI should be treated as a tool, not a substitute. Responsible AI use starts with setting clear goals, cross-checking results for reliability, and steering clear of overdependence to preserve critical-analysis skills.
The Prompt-o-thon Workshop yielded many insights for ESCP professors, which can be summarized in the following five takeaways:
- Fully accept the implementation of LLMs in academia.
- Let students explore the implications of using LLMs for their academic papers.
- Restructure traditional assignments in a more collaborative way between students and supervisors.
- Encourage discourse on the ethical issues and potential risks of LLMs.
- Create a set of guidelines on how to use LLMs for yourself and others.
A Short Interview with ESCP Business School Professors
We also had a chance to catch up with the researchers behind the paper and ask more about their findings. Here are their answers to the questions we posed, along with our overall thoughts on the subject:
Question #1: How have LLMs such as ChatGPT changed academic work?
"It has changed academic work quite a bit. Of course, different people will have had different experiences, and some will not have used it at all. In the simplest applications, LLMs are basically free assistants that can help with a number of tasks related to writing. In more sophisticated uses, they can write computer code, browse the internet for information, and clean up messy raw data. Academic work by students or professors always used outside help, be it from colleagues or research assistants. LLMs, when used properly, can replace some of this work with very high quality, almost free, and without complaining or loss of quality when the work gets very repetitive.
As teachers, we definitely have to stop and think about the kinds of assignments we create, how we assign them. In fact, what actually constitutes academic work? Many of the things we used in the past can now be automated. Therefore, we need to re-think oral examinations to test for learning and understanding, or make in-class simulations, cases, and projects that can't be completed using an LLM. For take-home writing assignments, much more emphasis goes towards creating assignments that require critical thinking rather than reproduction of knowledge, perhaps using LLMs, but where some over-relying on LLMs will not get a great grade."
Question #2: Is it realistic to ban LLMs?
"No, it is not realistic, nor should we. It's a tool that students need to learn to use responsibly. In fact, it is our job as educators to teach students how to use them effectively. For example, create assignments where students are actually asked to use an LLM and use it as a learning opportunity to point out their weaknesses. Or showcase how professors themselves use them in their workflow.
At ESCP Business School, our goal is to educate young students to be future top managers. LLMs will likely become important co-pilots for managers in the near future. Therefore, it is crucial for young students to be exposed to and learn about LLMs on campus instead of being prohibited from using them."
Question #3: How can faculty staff manage their use without banning them?
"As we discussed in our Impact Paper, 'AI and the Future of Academic Writing: Insights from the ESCP Business School Prompt-o-thon Workshop in Berlin,' this should be a collaborative process with the students. Frankly, we still did not know enough about this technology, which was moving at a very fast pace, so at this stage, it made sense to experiment with different options. As part of our work, we developed a code of conduct for the use of LLMs but encouraged each professor to adapt this code of conduct according to their preferences and objectives. At the minimum, it seemed to be essential to talk about LLMs in class, their limitations, and why they could or should be used. Other universities have asked students to declare if they have used them and to what purpose.
We also suggest that scholars in various fields should collaborate. You see, LLMs are becoming more and more general, which means one large model can now work on things from different fields. So for us, scholars from different fields should also team up to make better use of them. You see, our impact paper was written by five professors, each from a different field. We believe this collaboration maximizes our interactions with LLMs."
Question #4: How can we ensure that students are still learning?
“We still have many tools to do this, through class discussions, in-class assignments, and oral or in-person examinations.
Well, perhaps LLMs can also support teachers in this. We can use them to draft exercises or even evaluation reports. Of course, those preliminary drafts should be carefully controlled and revised by the teachers.”
Question #5: How is academic work likely to evolve given the popularity of AI?
“In many cases, academics, like students, use LLMs as part of their writing and research processes. It actually speeds up many administrative tasks (like writing letters, emails, etc), freeing time for more value-added intellectual work. It leads to more consistent course planning or brings new ideas to already existing courses. It also leads to greater inclusivity in academia by reducing language barriers and improving communication. We start seeing the first research papers that make use of LLMs directly in their research method, and this is an exciting field of research. However, we have also seen some papers that were clearly written with the help of LLMs and of very low quality. The increased pressure on researchers to publish may have some unintended consequences, and it is crucial to develop best practices around the use of this new tool.”
It's evident that AI, particularly in the form of LLMs like ChatGPT, is disrupting education at an unprecedented pace. This is apparent in various aspects of academic work, from simplifying writing tasks to automating parts of research and data analysis.
There's a pressing need for educators to reconsider traditional assignments and assessments, emphasizing critical thinking over repetitive knowledge reproduction.
As the professors of ESCP Business School argue, banning LLMs is unrealistic and counterproductive; instead, we must work toward integrating them responsibly into education. They advocate a collaborative approach that treats the responsible use of LLMs in assignments as a learning opportunity.
Keep this in mind: teaching responsible use is better than prohibition.
While AI's integration into academia brings innovation, continued vigilance is warranted, especially as innocent students keep getting falsely flagged for using AI. The interview also emphasizes the need for ongoing collaboration among scholars from various fields, ensuring that LLMs are put to their best use while avoiding potential pitfalls.
The Bottom Line
If I could summarize my thoughts in one sentence, it’s this:
The future of AI in education hinges on responsibility.
The insights from the workshop and interview highlight both the promise and peril of integrating AI into academia. There is tremendous potential for tools like ChatGPT to enhance learning, research, and knowledge creation. However, this also comes with risks like overreliance, diminished critical thinking, and ethical issues around academic integrity.
As such, the future of AI in education ultimately depends on how responsibly it is deployed by both students and institutions. Rather than reactive policies like banning, a collaborative approach focused on developing best practices and teaching responsible use is needed. One where AI is leveraged as an aide rather than a crutch.
The academics from ESCP Business School Berlin provide a blueprint that emphasizes discourse around limitations and appropriate applications. By following their recommendations, we can work toward realizing the upside of AI in education while guarding against its downsides.