If they weren’t senior managers of educational institutions, watching them panic over #ChatGPT and similar AI chatbots would be funny. But as they, and others, are likely to distil their fears into solidified academic rules, “best practices”, “evidence-based” ordained innovations and mandatory “continuing professional development”, there is actually nothing to laugh about. As in so many situations before, however, we have nothing to fear but fear itself.

Introduction

As a teacher and examiner of students, I am not afraid of ChatGPT and similar AI chatbots. Unlike some [1], I do not believe there is any reason to fear the end of exam integrity, online or otherwise. In their current form, ChatGPT and similar AI bots display only moderate levels of success on assessments [2]. Some might argue this is ‘just a matter of time’, as we should expect the AI only to get better. The question, of course, should be: better at what? One key issue in deciding whether we should fear (or regulate against) AI is precisely the question of what such AI is dangerously good, and getting better, at. Possibly it is simply good (and getting better) at helping students learn, and possibly assessment could avoid the things an AI bot is good at [3]. It is entirely possible that AI could help students make their texts more concise and readable [4], as they may already be doing with predictive text and apps like Grammarly. Risky as this approach might sound, there are some clues [5] that this is a path where AI eventually runs into limitations and would need competent decision-making by the student if the text they are supposed to write needs to be sufficiently nuanced on a sufficiently complex topic.

An alternative route to AI-avoiding assessment forms is simply either to take the writing out of the equation (e.g., presentations with Q&A) or to have it done under controlled conditions [6]. Quite a few worry about the ability of such AI systems to mimic human work, and thus about the increased difficulty of distinguishing an AI-produced assessment response from a human one [7]. One might consider using AI to detect … AI. An interesting issue surrounding the use of AI as an educational tool is its possible corrupting influence [8]: not so much corrupting exam integrity as corrupting the user through the haphazard and inconsistent morality such systems can be prone to. In fact, if academics are expected to worry about AI chat [9], shouldn’t we then also worry about people using AI to write about AI? One of the most interesting things about the storm-in-a-teacup fuss that has emerged in the past months concerning this topic is how formulaic and predictable most of the responses have been. It is almost as if humans, like AI, follow patterns and formulas in composing their narratives [10].

The Economics of it all

Before returning to the question of how teachers and students could respond to the rise of AI in the world, including the world of education, I think it is useful to consider the incentives that the different stakeholders in education and society have.

Students have an incentive to out-perform AI

Students take on an education not to become a machine but to be better than one. It is perhaps tragic, but humans are unlikely ever to beat machines at being machines, and perhaps it is time we stopped trying. Too much education is rooted in a Fordist past where it is all about routine skills to be applied repetitively to routine problems: precisely the things at which machines excel.

Students now will be the future’s professional users of AI, so instead of seeking to regulate it away from them we should be embracing it. But students also have an incentive not to end up as the worker replaced, or the entrepreneur out-competed, by AI. They have a genuine stake in finding out how to be better than AI.

Teachers have an interest in their students out-performing AI

If, after a long term of hard work, all I had to show for my efforts as a teacher were students barely able to come up with answers of ChatGPT quality, irrespective of how high that quality might be, then it would be fair to say I had failed at my mission. After all, that chat AI didn’t sit through a single one of my classes, didn’t hear any of my advice, possibly had no access to my lecture notes, and never asked me a question that I could have turned into an interesting aside or a valuable but unanticipated tangent in the area of my expertise. If my classes are so relevant, informative and effective that an AI can do just as well as my students, or perhaps even better, then surely either I am the wrong teacher, or I am teaching the wrong things. The latter, of course, implies the former.

I have a genuine interest, for my students’ sake but also my own, in seeing that students who put some effort into my course come out more competent than a procrastinating AI bot that skipped all my classes.

A shared goal

As a result, both the student and the teacher will need to

  • explore the Chat AI’s weaknesses, and
  • learn to exploit the Chat AI’s strengths.

As AI has the nasty habit of learning continuously, perhaps teachers and students should too, and realise again that the real goal of education is to become a constant learner, not an automaton trained on a sufficiently large set of formulaic practice problems.

The biggest danger of mediocre senior management regulating its way into the usage of AI in education is that it lacks the vision to see the common, shared goals of students and teachers, and will only obsess over opportunities to ‘game the system’. Even without AI involved, I find this hostile attitude amongst educational managers already a serious roadblock to delivering quality education.

Way forward

I am not afraid of AI chatbots, and I am not worried that their use by students might corrupt my ability to properly assess them. If anything, it is merely a reminder that when I am assessing students on skills and knowledge a chat AI is better at generating, I am failing in my duty as a teacher. So instead I embrace it and organise its use within my courses.

It is not so much assessment methods that need to change; it is assessment aims. Part of the reason this AI invasion elicits such panicky reactions amongst some educators and their managers is that it puts the axe to trends in education that weren’t good trends to start with, but that were oh so convenient for a particular caste of accounting-minded educational leaders whose realities consist of spreadsheets filled with poor-quality data. Their lack of vision, their lack of understanding of the realities of both skill and knowledge, as well as of teaching and learning, is what triggers the fear.

Chat AI and other AI products will indeed, over time, root out the inefficient and wasteful focus in much of education on routine tasks, routine knowledge and repetitive reproduction. That is not to say that training and practising routine skills and routine knowledge have no place in learning; they do. But in many places they should have no place in assessment. Assessment, like much of learning, should focus on things that require and build on routine skills and knowledge, but that challenge students and teachers to go beyond them. Sure, a surgeon should excel at routine surgical skills, but in reality they should have the capacity to go beyond those to be truly good at their craft and intellectual endeavour … and to stay out of the hands of AI redundancy. So should economists, physicists, and … well … humans.

Conclusion

In a world where standardized curricula bring positively sanctioned boredom into educational institutions, while ‘content-creators’ on social media claim the creative and innovative domain, it is time educational institutions reclaim creativity, originality and diversity of learning aims.

For humans to be better than AI at the things humans are better at is for humans to be better humans, with Human Intelligence, and not poorly trained AI proxies. I would say: bring on the competition between HI and AI. Students and teachers should eagerly enter the fray, together. In this one, they are absolutely on the same side.

In the current term, my students and I will be scrutinizing chat AIs and their responses to exam questions. We will critically mark those responses and recognize where their weaknesses and strengths lie, and we will learn better which questions to ask that bring out the things in which humans excel, and what to use AI for. It is a friendly competition between complements. We humans have been in this game for about two million years; we have become so good at it that it threatens our planet’s survival. Maybe AI will help us get even better, but in a better and healthier way too. I will report on my students’ experiences in a post sometime later this year.

Sources
  • [1] Susnjak T. ChatGPT: The End of Online Exam Integrity? arXiv preprint arXiv:2212.09292. 2022 Dec 19.
  • [2] Gilson A, Safranek C, Huang T, Socrates V, Chi L, Taylor RA, Chartash D. How Well Does ChatGPT Do When Taking the Medical Licensing Exams? The Implications of Large Language Models for Medical Education and Knowledge Assessment. medRxiv. 2022:2022-12.
  • [3] Zhai X. ChatGPT user experience: Implications for education. Available at SSRN 4312418. 2022 Dec 27.
  • [4] Jeblick K, Schachtner B, Dexl J, Mittermeier A, Stüber AT, Topalis J, Weber T, Wesp P, Sabel B, Ricke J, Ingrisch M. ChatGPT Makes Medicine Easy to Swallow: An Exploratory Case Study on Simplified Radiology Reports. arXiv preprint arXiv:2212.14882. 2022 Dec 30.
  • [5] Gao CA, Howard FM, Markov NS, Dyer EC, Ramesh S, Luo Y, Pearson AT. Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers. bioRxiv. 2022:2022-12.
  • [6] King MR, chatGPT. A Conversation on Artificial Intelligence, Chatbots, and Plagiarism in Higher Education. Cellular and Molecular Bioengineering. 2023 Jan 2:1-2.
  • [7] Guo B, Zhang X, Wang Z, Jiang M, Nie J, Ding Y, Yue J, Wu Y. How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection. arXiv preprint arXiv:2301.07597. 2023 Jan 18.
  • [8] Krügel S, Ostermaier A, Uhl M. The moral authority of ChatGPT. arXiv preprint arXiv:2301.07098. 2023 Jan 13.
  • [9] Stokel-Walker C. AI bot ChatGPT writes smart essays - should academics worry? Nature. 2022 Dec 9.
  • [10] Benzon WL. A Note about Story Grammars in ChatGPT. Available at SSRN 4324840. 2023.
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
