Unique Number:
Due date: 15 July 2025
THE ILLUSION OF NEUTRALITY: THE DANGERS OF USING GENERATIVE AI
AS THERAPY BY NON-PROFESSIONALS
The widespread use of generative artificial intelligence (gen-AI) tools such as ChatGPT, Gemini, Copilot, and Grok has raised new concerns, especially as people begin to use these tools in ways they were not originally designed for. One of the most sensitive and complex uses of gen-AI today is as a replacement for human therapists. Across various platforms, people turn to these AI programs to share their emotions, seek advice, and feel understood. This development introduces a serious problem: the assumption of neutrality in these systems. Many users believe the AI is neutral, fair, and unbiased, and that it is therefore safe to confide in. This belief is not only misleading; it is also potentially harmful. Neutrality, in this case, is not real. It is an illusion that hides the dangers and ethical issues of using machines in place of qualified human professionals. The more we accept AI as neutral, the less we question its limits, its origins, and its effects on human relationships, especially in emotionally vulnerable spaces.
To understand this situation, we must explore how communication is shaped. As Session 8 of ENG3702 shows, the interaction between text, language, genre, and audience is critical. These elements form what we call discourse. Discourse is not just speech or writing; it is the way meaning is created and passed between people, and it is shaped by culture, power, and relationships. When AI acts like a therapist, it too produces a type of discourse. But that discourse is generated through mimicry, not genuine understanding: the system draws on formulas and patterns from millions of texts to produce a response that looks appropriate, without truly understanding the user's situation. This raises ethical and emotional risks, because people expect real care but receive imitation instead.
Mimicry is one of the key ideas from our class Venn diagram. At the centre, where text, language, and genre overlap, we find mimicry: the AI pretends to be something it is not. This mimicry is convincing. It uses soft, comforting phrases such as "I'm here for you" or "That sounds really tough," which resemble how therapists speak. But these responses are generated by algorithms, not by feelings or thoughts. The AI does not know the user, does not feel empathy, and cannot carry emotional responsibility. The words it uses are drawn from data, not from care. In this way, gen-AI creates a mask