Semester 2 2025 – DUE 17 September 2025; 100% correct solutions and explanations.
Does the Use of Large Language Models Constitute Plagiarism?
Introduction
The emergence of large language models (LLMs) such as ChatGPT
has radically altered the way knowledge is produced, accessed, and
presented in academic contexts. These models are trained on vast
amounts of data and use advanced algorithms to produce responses
that often appear thoughtful and human-like. Yet, as Naithani (2025)
observes, their outputs are not the result of independent intellectual
labour but of systematic computational processing. The advent of
LLMs raises pressing questions about authorship, originality, and
plagiarism in higher education. This essay argues that the
unacknowledged use of LLMs in academic writing does constitute
plagiarism. The argument rests on three interrelated points: first, that
presenting AI-generated content as one’s own work misrepresents
authorship; second, that reliance on LLMs undermines academic
integrity and devalues genuine learning; and third, that substituting
AI-generated text for student work conceals the absence of critical
engagement and intellectual effort. The essay also considers
counterarguments that attempt to separate AI use from plagiarism,
before rebutting them. Finally, it proposes three strategies
universities can implement to preserve the credibility of
qualifications in an age of generative artificial intelligence.
LLMs, Authorship, and the Question of Attribution
Plagiarism is broadly defined as presenting the words or ideas of
another as one’s own without proper acknowledgement. Although
LLMs are not human authors and therefore cannot themselves hold