ENG2612
UNIVERSITY EXAMINATIONS
OCT/NOV EXAMINATION 2025
ENG2612
Applied English Language for Foundation and Intermediate
Phase Home Language
First examiner: Prof. K. Sevnarayan
Second examiner: Ms. Z. Suliman
100 Marks
48 Hours
This paper consists of TEN pages.
INSTRUCTIONS:
THIS EXAMINATION CONSISTS OF TWO QUESTIONS. CHOOSE ONE OF THE TWO
QUESTIONS AND WRITE THE NUMBER OF THE QUESTION YOU HAVE SELECTED
IN THE TEMPLATE PROVIDED.
Complete the plagiarism declaration and the AI self-disclosure in the template
provided.
For this examination, you are required to write an essay of approximately 1000 words on the
essay topic of your choice.
You are allowed to access your prescribed works and the study material. You are, however,
not allowed to copy verbatim from your study material; you should write your answers in
your own words. In cases of cheating or plagiarism, no marks will be allocated.
PLEASE DO NOT CITE ANY EXTERNAL SOURCE IN YOUR EXAMINATION RESPONSE.
The exam can be downloaded from 08:00 on 1 October 2025 and must be uploaded
BEFORE 08:00 on 3 October 2025.
Please make sure that you upload your answer template onto the ENG2612 2025
Module Site under Assessment 04.
Your answer template must be uploaded as a PDF electronic document and not as a
scanned image.
NO LATE SUBMISSIONS WILL BE ACCEPTED.
QUESTION ONE:
Read EXTRACT A below and answer the question that follows.
EXTRACT A
TURNING OFF AI DETECTION SOFTWARE IS THE RIGHT CALL FOR
SA UNIVERSITIES
By Sioux McKenna and Neil Kramm
25 Jul 2025
Universities across South Africa are abandoning problematic artificial intelligence
detection tools that have created a climate of suspicion. The recently
announced University of Cape Town decision to disable Turnitin’s AI detection feature is
to be welcomed – and other universities would do well to follow suit. This move signals a
growing recognition that AI detection software does more harm than good. The problems
with Turnitin’s AI detector extend far beyond technical glitches. The software’s notorious
tendency towards false positives has created an atmosphere where students live in
constant fear of being wrongly accused of academic dishonesty. Unlike
their American counterparts, South African students rarely pursue legal action against
universities, but this should not be mistaken for acceptance of unfair treatment.
A system built on flawed logic
As Rebecca Davis has pointed out in Daily Maverick: detection tools fail. The fundamental
issue lies in how these detection systems operate. Turnitin’s AI detector doesn’t identify
digital fingerprints that definitively prove AI use. Instead, it searches for stylistic patterns
associated with AI-generated text. The software might flag work as likely to be AI-
generated simply because the student used em-dashes or terms such as “delve into” or
“crucial” – a writing preference that has nothing to do with artificial intelligence. This
approach has led to deeply troubling situations. Students report receiving accusatory
emails from professors suggesting significant portions of their original work were AI-
generated. One student described receiving such an email indicating that Turnitin had
flagged 30% of her text as likely to be AI-generated, followed by demands for proof of
originality: multiple drafts, version history from Google Docs, or reports from other AI
detection services like GPTZero. Other academics have endorsed the use of services like
Grammarly Authorship or Turnitin Clarity for students to prove their work is their own. The
burden of proof has been reversed: students are guilty until proven innocent, a principle
that would be considered unjust in any legal system and is pedagogically abhorrent in an
educational context. The psychological impact cannot be overstated; students describe
feeling anxious about every assignment, second-guessing their natural writing styles, and
living under a cloud of suspicion despite having done nothing wrong.
The absurdity exposed
The unreliability of these systems becomes comically apparent when examined closely.
The student mentioned above paid $19 to access GPTZero, another AI detection service,
hoping to clear her name. The results were revealing: the programs flagged different
portions of her original work as AI-generated, with only partial overlap between their
accusations. Even more telling, both systems flagged the professor’s own assignment
questions as AI-generated, though the Turnitin software flagged Question 2 while
GPTZero flagged Question 4. Did the professor use ChatGPT to write one of the
questions, both, or neither? The software provides no answers. This inconsistency
exposes the arbitrary nature of AI detection. If two leading systems cannot agree on what
constitutes AI-generated text, and both flag the professor’s own questions as suspicious,
how can any institution justify using these tools to make academic integrity decisions?
Gaming the system
While South African universities have been fortunate to avoid the litigation that has
plagued American institutions, the experiences across the Atlantic serve as a stark
warning. A number of US universities have abandoned Turnitin after facing lawsuits from
students falsely accused of using AI. Turnitin’s terms and conditions conveniently absolve
the company of responsibility for these false accusations, leaving universities to face the
legal and reputational consequences alone. The contrast with Turnitin’s similarity
detection tool is important. While that feature has its own problems, primarily academics
assuming that the percentage similarity is an indicator of the amount of plagiarism, at least
it provides transparent, visible comparisons that students can review and make sense of.
The AI detection feature operates as a black box, producing reports visible only to faculty
members, creating an inherently opaque system.
Undermining educational relationships
Perhaps most damaging is how AI detection transforms the fundamental relationship
between students and their educators.