26 January 2023

Cheating after ChatGPT – will AI destroy academic integrity?

By Frances An

‘I looked through the report that you were concerned might have been written by a contract cheating service,’ the professor said, peering through the lower half of his bifocals at a yellow A4 envelope with ‘CONFIDENTIAL’ scrawled on it. 

‘What did you think?’

I was working as a casual tutor in the Psychology department at the University of Western Australia and suspected a student had bought their assignment from an essay mill. I was supposed to boost my students’ confidence in their ability to complete the assignment themselves. I made sure students felt comfortable enough to ask content-related questions by email or in class, and to inform me or the unit coordinator if they were struggling with deadlines.

If the professor’s verdict was ‘guilty’, I would feel I had failed in my role as a tutor.

But the consequences for the student would be much worse. Most obviously, they would be expelled and have their academic record stained forever. There have even been news articles about academic misconduct accusations triggering student suicides. Another, less obvious consequence of using a contract cheating service is being blackmailed by the same people you bought the essay off. 

‘It’s not going to get a pass anyway,’ the coordinator said, taking out my student’s assignment, covered in red pen indicating the suspicious sections. Had we pursued it as a plagiarism or cheating case, it would have meant multiple rounds of interrogation and paperwork. In the end, we just failed the assignment and didn’t pursue it as a case of misconduct.

This is just one example of how cheating can slip through the net. The techniques universities rely on to enforce academic integrity are often time-consuming and fruitless. Software like Turnitin can help in matching text between students’ assignments and external sources, but collating hard evidence of academic misconduct demands an individual tutor’s attentiveness, experience and judgement. Even if a university suspects a case of academic misconduct, the burden on both students and overworked staff of following up in a publish-or-perish culture can drive institutions to turn a blind eye. 

Academic misconduct can lead to graduates entering the workforce without the knowledge required for their job. In the long term, this can lead to the devaluation of degrees and damage the reputation of universities. Dr Guy Curtis, a psychology lecturer and researcher in academic integrity at the University of Western Australia, estimated that about 10% of students submit ghost-written assignments and that 95% of these go undetected. An article last year in the Irish Mirror suggested that ‘at least 1,500 students at Irish universities have been reported for exam cheating [and/or] plagiarism… over the past three years.’

Essay mills are one thing, but the combination of Covid and now ChatGPT has made academic cheating harder to detect than ever. The move to online learning and teaching during the pandemic meant instructors were less able to build rapport with students and determine whether an assignment’s quality was generally aligned with their observations of an individual student’s understanding. Meanwhile, ChatGPT’s capacity to formulate comprehensive written responses to questions about history, politics and even its own usage in academia has led to fears about job redundancies and the devaluation of essays as an assessment tool.

The answer to upholding academic integrity in the face of new technologies is to design assessment formats that reward both analytical thinking and originality on the part of students. As English teacher David James has argued on these pages, ChatGPT’s arrival has alerted us to the inadequacy of current assessment criteria. Canonical writers such as George Eliot and Henry James have been replaced with more ‘accessible’ and ‘diverse’ writers. Assessment objectives often reward writing that follows a prescribed structure and set of beliefs. ‘That’s not the software’s fault,’ writes James, ‘but the result of an assessment system that has become increasingly formulaic and predictable.’

ChatGPT should encourage educators to devise assessment formats that are hard to outsource to competent but nonspecific programs and/or third parties. In the humanities, that might involve demanding students directly quote from and synthesise original ideas from the relevant texts. Encouraging profound engagement with texts enriches learning while also undermining ChatGPT’s effectiveness, since ChatGPT-generated essays are prone to including false quotations and irrelevant references, and to failing to account for a target audience.

Academics Dr Michael Baird and Dr Joseph Clare considered techniques that could block cheating opportunities for assignments in a business course. They built on crime prevention research, which shows that even highly motivated offenders will refrain from deviant behaviour if the perceived costs and efforts outweigh the benefits. For example, while most assessments can be falsified, Curtis suggests that the planning required to cheat on an in-person exam might be so bothersome that students find it more worthwhile simply to spend their time studying for the exam.

We cannot rewind ChatGPT, nor should we try. As John Ashmore noted on CapX recently, the software could accelerate the completion of time-consuming tasks so humans can focus on conceptually complex aspects of their work. Rather than heralding the end of the essay, ChatGPT should instead spur us to think more creatively.

Indeed, a recent CapX piece on the merits of capitalism, written by ChatGPT, reads like a coherent but rather dull page from Wikipedia, another platform that academics have blacklisted as a frequently referenced but error-prone source. It certainly doesn’t look like the bots are going to be winning any Booker Prizes any time soon.

So rather than engage in doom and gloom about tech putting academics out of a job or rendering assessments obsolete, let’s enjoy the conveniences of Wikipedia and ChatGPT while knowing that human writers can do better.


Frances An is a Mannkal Scholar and an intern at the Centre for Policy Studies.