AI Ethics: Is it Moral to use AI for School Assignments?

By Nick Chong

“To study and not think is a waste.” – Confucius

Who hasn’t used ChatGPT? It’s quite the godsend, especially for students; I, for one, have taken advantage of it multiple times to help me write essays, respond to emails, or finish that one assignment I’d procrastinated on until 11:59 pm (we’ve all done it at some point). There are endless possibilities for the applications of AI, but where do we draw the line?

Our discussion of AI use on school assignments is limited to LLMs/generative AI models such as OpenAI’s GPT-4 or Google’s Bard, which represent the most accessible and among the most powerful examples of AI as of publication. Such models easily process large amounts of data, analyze it six ways from Sunday, and generate content at a staggering scale. This brings us to our first question: Does using AI contribute anything that we, as humans, do not?

First, let us compare the AI revolution to the digital revolution. Just as the internet drastically lowered the cost of transmitting information, AI will lower the cost of cognition. Generative artificial intelligence can be applied wherever we apply our thinking: to school assignments, to college applications, to planning a summer vacation, and so on. In essence, we are looking at an unprecedented increase in the speed, efficiency, accuracy, and optimization of cognitive processes. It is not so much what AI can contribute, but the speed and volume at which it can contribute it.

Speed and volume notwithstanding, generative AI’s ability to simulate and produce content is unmatched by any one human being. Neural networks are the foundation of generative AI, identifying the patterns and structures within existing data to generate new and original content. (Arguing over whether AI actually understands what it produces leads perfectly into the Chinese room argument, a thought experiment about whether AI can have a “mind”, “understanding”, or “consciousness”. But that’s for another day.)

Coupled with a high degree of accuracy and precision (hallucinations notwithstanding), especially in areas such as data analysis, summarization, language translation, and the handling of mundane, repetitive tasks, the potential for generative AI to contribute content is extraordinarily high. So much so that cognitive processes are no longer even necessary for users of generative AI.

It is the combination of these factors that makes generative AI so appealing to students. Its tantalizing offer of speed, efficiency, and accuracy saves precious time and energy. But is efficiency worth neglecting one’s cognition? This leads directly into our next question: Should we be working “smarter, not harder”?

“Working smarter, not harder” is a commendable maxim and a goal to strive for, though students often misuse it to justify laziness or minimal effort. Proper time management, for one, is one of the most essential aspects of working “smarter, not harder”: streamlining productivity, prioritizing tasks, and allocating time appropriately. But “working smarter, not harder” is not a green light to avoid hard work altogether. Rather, it is about finding ways to work more efficiently and effectively, leveraging resources, and striking a balance between “smart” work, “hard” work, and personal well-being.

Undoubtedly, generative AI accelerates efficiency and productivity. But when does “working smarter, not harder” start to blur the line between using generative AI as a supporting tool and relying on it brainlessly? I, for one, argue for the occasional use of ChatGPT as a tool to supplement one’s learning. It is an incredibly powerful summarizer and can easily personalize the experience simply through prompts written to suit your needs. Take this example prompt given to ChatGPT: “What was the Treaty of Guadalupe Hidalgo? Summarize in 50 words.”

And it returns: “The Treaty of Guadalupe Hidalgo was a peace agreement signed in 1848 between the United States and Mexico, ending the Mexican-American War. Mexico ceded a large portion of its territory to the U.S., and the Rio Grande became the official border. Mexico received compensation, and the rights of Mexican citizens were protected.”

Used correctly, generative AI represents a turning point in academics: ruthless efficiency paired with a genuine desire to learn can help propel a student leagues ahead in their studies. A student struggling to understand content in, say, their US History class could use generative AI to help with their studying and note-taking, resulting in more in-depth understanding and higher grades. Using AI to check grammar and spelling or provide suggestions for improvement can be considered legitimate if the student is actively engaged in the writing process and takes ownership of their work. If it is used as a tool, it supports the human mind to a limited degree.

But the reverse is also true: using AI solely to generate complete assignments, without understanding or incorporating one’s own thoughts and ideas, is unethical. It undermines the institution of learning as a whole: cognition and critical thinking are discarded in favor of mindless automation that contributes nothing to the student. Heavy reliance on AI tools hinders learning itself, reducing the individual’s effort to think critically, analyze, and express their own thoughts. AI becomes unethical when it is abused, replacing the human mind instead of supporting it.

But can utilizing generative AI even be considered a cognitive process? There is a clear difference between using and abusing generative AI. When used correctly, generative AI merely supplements the mental action of acquiring knowledge and understanding: it still involves thought and time.

This is where we can draw a line between ethical and unethical use of AI for school: use and abuse. Does it facilitate one’s learning process? Or does it completely bypass it? That is a question all of us must ask ourselves, whenever we are faced with the temptation of using generative AI.
