
ENG3702 Assignment 2 (COMPLETE ANSWERS) 2025 - DUE 21 July 2025


In Session 8 of our online classes we discussed and reflected on the interaction of language, text, genre and discourse. We also explored the notion of audience and how discourse caters to audience through the elements of interpersonal relation, mimicry and exposition. Refer to the following Venn diagram, explained and discussed in Online Session 8.

In a well-structured, grammatically sound and rigorously substantiated essay, explore and discuss the implications of 'neutrality' when gen-AI applications like ChatGPT, GROK, Copilot, Gemini, etc. are indiscriminately used by non-professionals as therapists. Your essay should be between words. It should:
1. demonstrate a critical understanding of text, language and genre and how these create discourse;
2. illustrate how 'audience' influences language, genre and discourse through interpersonal relation, mimicry and exposition;
3. critically unpack the notion of 'neutrality' in relation to points 1 and 2;
4. apply the theoretical exposition to the essay question;
5. provide relevant and rigorously discussed examples to substantiate your argument;
6. use the relevant articles in the Additional Resources as well as the group discussions on the subject in the Collaborative Minds group and on the DISCORD ENG 3702 server to inform your position;
7. be between words.

For guidance on how to write a well-structured essay, refer to Online Class 4. Remember to reference your sources correctly, both in-text and in the reference list at the end of your essay. Refer to Online Class 5 for the correct referencing style and techniques. Failure to acknowledge your sources (AI or otherwise) and provide a reference list will result in marks being deducted and possible disciplinary action for plagiarism.

Document information
Uploaded on: July 18, 2025
Number of pages: 15
Written in: 2024/2025
Type: Exam (elaborations)
Contains: Questions & answers

Content preview

ENG3702
Assignment 2 2025
Unique number:

Due Date: 15 July 2025
THE ILLUSION OF NEUTRALITY: THE DANGERS OF USING GENERATIVE AI AS
THERAPY BY NON-PROFESSIONALS

The widespread use of generative artificial intelligence (gen-AI) tools like ChatGPT, Gemini, Copilot, and GROK has raised new concerns, especially as people begin to use these tools in ways they were not originally designed for. One of the most sensitive and complex uses of gen-AI today is as a replacement for human therapists. Across various platforms, people turn to these AI programs to share their emotions, seek advice, and feel understood. But this development introduces a serious problem: the idea of neutrality in these systems. Many users assume that the AI is neutral, fair, and unbiased. They believe it is safe to confide in these tools. But this belief is not only misleading; it is also potentially harmful. Neutrality, in this case, is not real. It is an illusion that hides the dangers and ethical issues of using machines in place of qualified human professionals. The more we accept AI as neutral, the less we question its limits, its origins, and its effects on human relationships, especially in emotionally vulnerable spaces.

To understand this situation, we must explore how communication is shaped. As discussed in Session 8 of ENG3702, the interaction between text, language, genre, and audience is critical. These elements form what we call discourse. Discourse is not just speech or writing; it is the way meaning is created and passed between people. It is shaped by culture, power, and relationships. AI, when acting like a therapist, produces a type of discourse too. But that discourse is generated through mimicry, not genuine understanding. It uses formulas and patterns drawn from millions of texts to give a response that looks appropriate, but it does not truly understand the user's situation. This raises ethical and emotional risks, because people expect real care but get imitation instead.

Mimicry is one of the key ideas from our class Venn diagram. At the centre of text, language, and genre, we find mimicry, where AI pretends to be something it is not. This mimicry is convincing. It uses soft, comforting words like "I'm here for you" or "That sounds really tough," which resemble how therapists speak. But these responses are generated by algorithms, not by feelings or thoughts. AI does not know the user, does not feel empathy, and cannot carry emotional responsibility. The words it uses are drawn from data, not from care. In this way, gen-AI creates a mask of neutrality: a performance of care that lacks real ethical grounding. The text
