AI & Ethics: ChatGPT in an Essay Writing Context

By Max Belyantsev


With ChatGPT becoming prominent in the global technology realm, many teachers and professors now write, “You are not allowed to use ChatGPT for this assignment!” in their course syllabi. To most teachers, “ChatGPT” is immediately synonymous with cheating or plagiarism—and they might be right, at least in part.

In an academic environment, if we are tasked with writing an essay, using ChatGPT to fabricate it entirely and then passing it off as our own seems overwhelmingly wrong. Most of us might agree that a blatant disregard for the assignment or its underlying tasks would clearly convey the student’s intention to fool their grader.

But there are plenty of well-intentioned students out there who might want to use ChatGPT for arguably legitimate reasons. The question is: where do we draw the line?

If you’re stuck on how to start your essay about Hamlet, can you ask ChatGPT to generate a few possible introduction paragraphs or list a few of the play’s main themes? On the surface, there doesn’t seem to be anything wrong with this. If you are allowed to use the internet to research your topic, then we might be able to excuse using ChatGPT by equating it to a glorified web search. After all, it generates its responses from patterns in the enormous body of internet text it was trained on, compiling details into a short (or long) blurb.

Some might say that using ChatGPT to generate essay content of any kind—even inspiration that you might go on to use—could be grounds for academic dishonesty. But what about other aspects of the writing process? What about grammar questions, for example? What if I asked ChatGPT to rewrite one sentence in five different ways and used the best bits from each version to create the ultimate sentence? Would that be considered cheating, if the meat of the sentence is still my own?

In this context, ChatGPT’s role changes: what was once a cynical tool of deceit is now, in essence, a thesaurus with some pretty cool moves (like addressing you directly!). Would this count as A.I. “assistance”? Should we equate the milder term “assistance” with cheating and plagiarism?

Even from a practical perspective, ChatGPT gets things wrong all the time. It might not have up-to-date information on current events, and you’re never really sure where it’s drawing its responses from. Could it be feeding us information from Wikipedia or from crowdsourced platforms like Reddit and X? There is a clear issue with this idea: by using ChatGPT, we inherently risk relying on sources that are unreliable or flatly incorrect. What’s worse, ChatGPT can make something up entirely—a failure often called “hallucination”—and we might never know for sure. At best, what we get is often just satisfactory.

In the end, it might be best for students to err on the side of caution for the time being. ChatGPT’s popularity comes from its convenience, not necessarily from its reliability or accuracy. Nonetheless, it’s important for us to consider the ever-changing role that technology plays in how we acquire certain kinds of knowledge and skills. With a cautious, head-on approach, ChatGPT can be introduced into the classroom in a way that creates a well-adapted student body—one that understands the costs of plagiarism and cheating, both to the quality of their work and to their own personal development.
