Assignment 2 2025
Unique number:
Due Date: 15 July 2025
THE ILLUSION OF NEUTRALITY: THE DANGERS OF USING GENERATIVE AI AS
THERAPY BY NON-PROFESSIONALS
The widespread use of generative artificial intelligence (gen-AI) tools such as ChatGPT, Gemini, Copilot, and Grok has raised new concerns, especially as people begin to use
these tools in ways they were not originally designed for. One of the most sensitive and
complex uses of gen-AI today is as a replacement for human therapists. Across various
platforms, people turn to these AI programs to share their emotions, seek advice, and
feel understood. But this development introduces a serious problem: the assumption of neutrality in these systems. Many users assume that the AI is neutral, fair, and unbiased, and that it is therefore safe to confide in these tools. This belief is not only misleading; it is also potentially harmful. Neutrality, in this case, is not real. It is an
illusion that hides the dangers and ethical issues of using machines in place of qualified
human professionals. The more we accept AI as neutral, the less we question its limits,
its origins, and its effect on human relationships, especially in emotionally vulnerable
spaces.
To understand this situation, we must explore how communication is shaped. As Session 8 of ENG3702 makes clear, the interaction between text, language, genre, and
audience is critical. These elements form what we call discourse. Discourse is not
just speech or writing; it is the way meaning is created and passed between people.
It is shaped by culture, power, and relationships. AI, when acting like a therapist,
produces a type of discourse too. But that discourse is generated through mimicry,
not genuine understanding. It uses formulas and patterns from millions of texts to
give a response that looks appropriate. But it does not truly understand the user’s
situation. This raises ethical and emotional risks because people expect real care,
but get imitation instead.
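To see how convincing responses can be produced without any understanding, consider the following minimal sketch in Python. It is purely illustrative and vastly simpler than modern gen-AI systems: the keywords and canned phrases are invented for this example, in the spirit of early chatbots such as ELIZA, which matched patterns in a user's words and returned scripted, therapist-sounding replies.

    import random
    import re

    # Purely illustrative sketch: a scripted responder that matches keywords
    # and returns canned, therapist-sounding phrases. Nothing here models or
    # understands the user's situation; the rules and phrases are invented.
    RULES = [
        (re.compile(r"\b(sad|depressed|down)\b", re.IGNORECASE),
         ["That sounds really tough.", "I'm here for you."]),
        (re.compile(r"\b(anxious|worried|scared)\b", re.IGNORECASE),
         ["It makes sense that you feel that way.", "I'm listening."]),
    ]

    def respond(message):
        # Return the first matching canned phrase, or a generic prompt.
        for pattern, replies in RULES:
            if pattern.search(message):
                return random.choice(replies)
        return "Tell me more about that."

    print(respond("I have been feeling really sad lately."))
    # Prints a caring-sounding reply, yet no empathy or understanding is involved.

The point of the sketch is not technical accuracy but the contrast it makes visible: the reply resembles care, while the program merely matches words.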
Mimicry is one of the key ideas from our class Venn diagram. In the centre of text,
language, and genre, we find mimicry—where AI pretends to be something it is not.
This mimicry is convincing. It uses soft, comforting words like "I'm here for you," or "That sounds really tough," which resemble how therapists speak. But these
responses are generated by algorithms, not by feelings or thoughts. AI does not
know the user, does not feel empathy, and cannot carry emotional responsibility. The
words it uses are drawn from data, not from care. In this way, gen-AI creates a mask
of neutrality—a performance of care that lacks real ethical grounding. The text