ASSIGNMENT 2 2025
UNIQUE NO.
DUE DATE: 15 JULY 2025
Title:
The Illusion of Neutrality: Discourse, Audience, and Genre in AI-Driven Therapy
Introduction
The emergence of generative artificial intelligence (gen-AI) platforms such as ChatGPT,
Copilot, Grok, and Gemini has transformed human-digital interactions across many
sectors. Nowhere is this transformation more profound—and controversial—than in the
domain of mental health care. Traditionally guided by licensed professionals, therapy is
being reimagined through AI-driven tools that promise accessible, scalable support.
However, the deployment of gen-AI in therapy reveals complex tensions around
neutrality, language, audience expectations, and genre. This essay argues that the
assumed neutrality of AI-driven therapy is an illusion: the discourse these tools
generate is shaped by algorithmic biases, context-specific language, and platform
design. These factors influence how audiences engage with therapy and how therapeutic
genres are reconstructed in digital spaces.
Discourse in AI-Driven Therapy
Therapeutic discourse is inherently relational, interpretative, and context-bound. Yet
gen-AI platforms are built on large language models trained on vast, generalised
datasets, which creates a tension between the universality of AI discourse and the
specificity of human therapeutic needs. For instance, while a human therapist can tailor
responses based on tone, body language, and cultural nuances, AI systems rely on
textual inputs devoid of affective cues. Consequently, gen-AI reproduces patterns of
communication that reflect normative, Western-centric understandings of mental health.
Even when attempting to sound empathetic, AI tools often fall into formulaic
expressions—“That must be hard,” or “I understand how you feel”—that lack depth or
cultural resonance.
The neutrality claimed by AI interfaces is further compromised by their corporate and
algorithmic origins. Systems like ChatGPT are trained on content curated by tech