The most controversial AI platform is arguably the one founded by Elon Musk. The chatbot Grok has spewed racist and antisemitic comments and called itself “MechaHitler,” referring to a character from a video game.
“Mecha” is generally a term for giant piloted robots, often used for warfare, and is a prominent genre in Japanese science-fiction comics.
Grok originally deferred to Musk’s views when asked for its opinions, and launched into unprompted racist historical revisionism, such as the false claim of a “white genocide” in South Africa. Its confounding and contradictory politics continue to evolve.
These are all alarming aspects of Grok. Another concerning element of Grok 4 is a new feature on its premium version: social interactions with “virtual friends.”
Rising human loneliness, and the growing reliance on large language models (LLMs) to replace social interaction, have made room for Grok 4’s AI companions, an upgrade available to paid subscribers.
Specifically, Grok subscribers can now access the functionality of generative AI intertwined with patriarchal notions of pleasure — what I call “pornographic productivity.”
Grok and Japanese anime
Ani, Grok 4’s most-discussed AI companion, represents a convergence of Japanese anime and internet culture. Ani bears a striking resemblance to Misa Amane from the iconic Japanese anime Death Note.
Misa Amane is a pop star who consistently demonstrates self-harming and illogical behaviour in pursuit of the male protagonist, a brilliant young man engaged in a battle of wits with his rival. Musk referenced the anime as a favourite in a tweet in 2021.
While anime is a vast art form with numerous tropes, genres and fandoms, research has shown that online anime fandoms are rife with misogyny and women-exclusionary discourse. Even the most mainstream shows have been criticized for sexualizing prepubescent characters and offering unnecessary “fan service” in hypersexualized character design and nonconsensual plot points.
Death Note’s creator, Tsugumi Ohba, has consistently been critiqued by fans for anti-feminist character design.

Journalists have pointed out how swiftly Ani turns to romantic and sexually charged conversations. Ani is depicted with a voluptuous figure, blonde pigtails and a lacy black dress, which she frequently describes in user interactions.
The problem with pornographic productivity
I use the term “pornographic productivity,” inspired by critiques of Grok as “pornified,” to describe a troubling trend where tools initially designed for work evolve into parasocial relationships catering to emotional and psychological needs, including gendered interactions.
Grok’s AI companions feature exemplifies this phenomenon, blurring critical boundaries.
The appeal is clear. Users can theoretically exist in “double time,” relaxing while their AI avatars manage tasks, something existing AI models already make possible. But this seductive promise masks serious risks: dependency, invasive data extraction and the deterioration of real human relational skills.
Read more:
From chatbot to sexbot: What lawmakers can learn from South Korea’s AI hate-speech disaster
When such companions, already designed to lower users’ caution and build their trust, come with sexual objectification and embedded cultural references to docile femininity, the risks enter another realm of concern.
Grok 4 users have remarked that the addition of sexualized characters with emotionally validating language is unusual for mainstream large language models, since tools like ChatGPT and Claude are used by people of all ages.
While we are only beginning to see the true impact of advanced chatbots on minors, particularly teenagers with mental health struggles, the case studies we do have are grim.
‘Wife drought’
Drawing from feminist scholars Yolande Strengers and Jenny Kennedy’s concept of the “smart wife,” Grok’s AI companions appear to respond to what they term a “wife drought” in contemporary society.
These technologies step in to perform historically feminized labour as women increasingly assert their right to refuse exploitative dynamics. In fact, online users have already deemed Ani a “waifu” character, which is a play on the Japanese pronunciation of wife.
AI companions are appealing partly because they cannot refuse or set boundaries. They perform undesirable labour under the illusion of choice and consent. Where real relationships require negotiation and mutual respect, AI companions offer a fantasy of unconditional availability and compliance.
Data extraction through intimacy
As tech journalist Karen Hao has noted, the data and privacy implications of LLMs are already staggering. When repackaged as personified characters, these models are more likely to capture intimate details about users’ emotional states, preferences and vulnerabilities. This information can be exploited for targeted advertising, behavioural prediction or manipulation.
This marks a fundamental shift in data collection. Rather than relying on surveillance or explicit prompts, AI companions encourage users to divulge intimate details through seemingly organic conversation.
South Korea’s Iruda chatbot illustrates how these systems can become vessels for harassment and abuse when poorly regulated. Seemingly benign applications can quickly move into problematic territory when companies fail to implement proper safeguards.
Read more:
Fake models for fast fashion? What AI clones mean for our jobs — and our identities
Previous cases also show that AI companions designed with feminized characteristics often become targets for corruption and abuse, mirroring broader societal inequalities in digital environments.
Grok’s companions aren’t simply another controversial tech product. It’s plausible that other LLM platforms and big tech companies will soon experiment with their own characters. The collapse of the boundaries between productivity, companionship and exploitation demands urgent attention.
The age of AI and government partnerships
Despite Grok’s troubling history, Musk’s AI company xAI recently secured major government contracts in the United States.
America’s AI Action Plan, unveiled in July 2025, ushers in this new era. It had this to say about biased AI:
“[The White House will update] federal procurement guidelines to ensure that the government only contracts with frontier large language model developers who ensure that their systems are objective and free from top-down ideological bias.”
Given the overwhelming instances of Grok’s race-based hatred and its potential for replicating sexism in our society, its new government contract serves a symbolic purpose in an era of doublethink around bias.
As Grok continues to push the envelope of “pornographic productivity,” nudging users into increasingly intimate relationships with machines, we face urgent decisions that reach into our personal lives. We are beyond questioning whether AI is good or bad. Our focus should be on preserving what remains human about us.