ENG3702
Assignment 2 2025
Unique number:
Due Date: 15 July 2025
This document includes:
Helpful answers and guidelines
Detailed explanations and/or calculations
References
Connect with the tutor on
+27 68 812 0934
THE ILLUSION OF NEUTRALITY: THE DANGERS OF USING GENERATIVE AI
AS THERAPY BY NON-PROFESSIONALS
The widespread use of generative artificial intelligence (gen-AI) tools such as
ChatGPT, Gemini, Copilot, and Grok has raised new concerns, especially as
people begin to use these tools in ways they were not originally designed for. One of
the most sensitive and complex uses of gen-AI today is as a replacement for human
therapists. Across various platforms, people turn to these AI programs to share their
emotions, seek advice, and feel understood. This development introduces a serious
problem: the assumption of neutrality in these systems. Many users take the AI to be
neutral, fair, and unbiased, and therefore believe it is safe to confide in. That belief is
not only misleading but potentially harmful. Neutrality, in this case, is not real; it is an
illusion that hides the dangers and ethical issues of using machines in place of
qualified human professionals. The more we accept AI as neutral, the less we
question its limits, its origins, and its effects on human relationships, especially in
emotionally vulnerable spaces.
To understand this situation, we must consider how communication is shaped. As
Session 8 of ENG3702 shows, the interaction between text, language, genre, and
audience is critical. Together these elements form what we call discourse. Discourse
is not just speech or writing; it is the way meaning is created and exchanged
between people, and it is shaped by culture, power, and relationships. When AI acts
like a therapist, it too produces a type of discourse, but that discourse is generated
through mimicry rather than genuine understanding. The system draws on formulas
and patterns from millions of texts to produce a response that looks appropriate,
without truly understanding the user's situation. This creates ethical and emotional
risks, because people expect real care but receive imitation instead.
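
This pattern-based mimicry can be made concrete. The short Python sketch below
is purely illustrative and does not show how ChatGPT or any other named system
actually works; it echoes Joseph Weizenbaum's 1966 ELIZA program, which
simulated a therapist using simple pattern matching. Even a handful of rules can
produce caring-sounding replies with no understanding at all.

import random
import re

# A minimal, illustrative sketch of pattern-based mimicry, in the
# spirit of Weizenbaum's ELIZA (1966). Each rule pairs a regular
# expression with canned "empathetic" reply templates.
RULES = [
    (r"\bi feel (.+)", ["Why do you feel {0}?",
                        "That sounds really tough."]),
    (r"\bi am (.+)",   ["How long have you been {0}?",
                        "I'm here for you."]),
]
FALLBACKS = ["Tell me more.", "I'm listening.", "Go on."]

def reply(message: str) -> str:
    """Return a therapist-sounding response with zero understanding."""
    text = message.lower()
    for pattern, templates in RULES:
        match = re.search(pattern, text)
        if match:
            # Echo the user's own words back inside a template.
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(reply("I feel completely alone"))   # e.g. "Why do you feel completely alone?"
print(reply("I am anxious about exams"))  # e.g. "I'm here for you."

Notably, Weizenbaum himself was disturbed by how readily users confided in
ELIZA despite knowing it was a program, a reaction that anticipates the illusion of
neutrality described here. Modern gen-AI is statistically far more sophisticated, but
the gap between fluent output and genuine comprehension is the same in kind.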
Mimicry is one of the key ideas from our class Venn diagram. At the centre of text,
language, and genre we find mimicry: the AI pretending to be something it is not.
This mimicry is convincing. It uses soft, comforting phrases such as "I'm here for
you" or "That sounds really tough", which resemble how therapists speak. But these
responses are generated by algorithms, not by feelings or thoughts. The AI does not
know the user, does not feel empathy, and cannot carry emotional responsibility. The
words it uses are drawn from data, not from care. In this way, gen-AI creates a mask