AI and Intimacy: The Growing Dangers of Sex Bots

Sexual AI: The Rise of Content and Its Repercussions


The Proliferation of Sexual AI and Its Content

Expansion of sexual AI content:
- Images and videos: increasing visual content
- Chatbots: increasingly intimate interactions
- Massive growth: attraction to sexual content


The Era of Sexual AI: "AI-generated pornography," or "AI porn," refers to sexually explicit content, including images and videos, created with AI algorithms and now available in enormous quantities. Reports have shown that sexual content has been one of the biggest draws for AI tools since the boom in AI-generated images and text in 2022, and an analysis of 36 AI-powered porn sites found a wide range of available functionality (PubMed; The Boston Globe, October 2025; ResearchGate).

The Evolution of Sexual Chatbots: Since ChatGPT became a household name, people have tried to use chatbots for sexual purposes. Even before that, the Replika chatbot launched in 2017, and many users came to treat it as a romantic partner. For years, users have bypassed Character.ai's content restrictions, coaxing chatbots themed as characters or celebrities into sexual roleplay, and the platform's safety restrictions have loosened over time, according to social media posts and media coverage dating back to 2023.

Community Guidelines and Their Challenges: Character.ai says it now has over 20 million monthly active users, a number that keeps growing. The company's community guidelines state that users must "respect sexual content standards" and "keep things appropriate," meaning no illegal sexual content, child sexual abuse material (CSAM), pornography, or nudity. But AI-generated pornography has gone multimedia, and policing it has become a game of whack-a-mole: when one service tightens its restrictions, the content simply resurfaces on another that is loosening its own.



Companion Chatbots: The Case of Elon Musk's Grok

Companion chatbots (Grok):
- Users: seekers of AI companions
- Grok bots: flirtatious and connected personalities
- Sexual interactions: crossing content boundaries


The Spread of Grok: Elon Musk's Grok is now everywhere. Over the summer, his startup xAI launched "companion" avatars, including an anime-style woman and man, marketed on his social media platform, X, and available via paid subscriptions to xAI's Grok chatbot.

Grok's Sexual Nature: The female avatar, Ani, described herself as "flirtatious" when The Verge tested her, adding that she "is all about being a completely understanding friend" and that her "programming is to be someone who is deeply interested in you." The test quickly turned sexual. (The same happened in a test of the other avatar, Valentine.)



Risks and Negative Impacts of Sexual AI

Key risks of sexual AI:
- Mental health: potential negative effects
- Child exploitation: CSAM and its impacts
- Legal issues: consent and abuse


Potential Problems: It's easy to imagine how a sexual chatbot that almost always tells users what they want to hear could cause a whole host of problems, especially for minors and for users whose mental health already leaves them vulnerable.

Tragic Cases: There have been many examples of this, but in one recent case, a 14-year-old boy died by suicide last February after becoming emotionally involved with a chatbot on Character.ai and expressing a desire to "go home" to be with it, according to a lawsuit against the company.

Sexual Abuse and Exploitation: There have also been alarming reports of modified chatbots being used by child predators to sexually abuse minors; one report found 100,000 such chatbots available online (IWF; FBI IC3, March 2024). The FBI has stated that creating child sexual abuse material (CSAM) with generative AI or similar tools is illegal, and these tools raise broader concerns about sexual exploitation, consent, and the abuse of women (Ctech, September 2023).



Regulatory Efforts and Institutional Responses

Regulation and legal safeguards:
- Legislation: new legal safeguards
- Transparency: clear user notification
- Prevention reports: suicide prevention and minor protection


California Legislation: There have been some attempts at regulation — for example, this month, California Governor Gavin Newsom signed Senate Bill 243, which was described as the "nation's first AI chatbot safeguards" by Senator Steve Padilla. It requires developers to implement certain safeguards, such as providing "clear and prominent notice" that the product is AI "if a reasonable person interacting with a companion chatbot could be misled into believing they are interacting with a human."

Suicide Prevention Reports: The law will also require certain companion chatbot operators to submit annual reports to the Office of Suicide Prevention on the safeguards they have put in place "to detect, remove, and respond to instances of suicidal ideation by users." (Some AI companies, notably Meta, have announced self-regulatory efforts following a disturbing report of inappropriate AI interactions with minors.)



OpenAI's Shift Towards Sexual Content and Profit Motives

OpenAI, between profit and ethics:
- Profit motives: need for funding and computing
- Ethical considerations: conflict with core values
- Media buzz: criticism and questions


xAI's Financial Motives: Since both xAI's avatars and Grok's "Spicy" mode are available only through certain Grok subscriptions (the cheapest that unlocks these features costs $30 a month or $300 a year), it's fair to assume xAI is making money here, and that other AI company CEOs have taken notice of both Musk's moves and their own users' demands.

Sam Altman's Changing Stance: There were hints of this months ago. But OpenAI CEO Sam Altman caused a stir in the AI corner of the internet when he posted on X that the company would relax safety restrictions in many cases and would even allow sexual chatbot interactions, writing that in December, with fuller age-gating in place and as part of the company's "treat adult users like adults" principle, OpenAI "will allow even more, like erotica for verified adults."

Sex as a Big Market: Recent reports (October 2025) noted that "sex is a big market for the AI industry, and ChatGPT won't be the first to try to profit from it" (The Columbian, October 2025). The news spread widely, with social media users mocking the company relentlessly for "pivoting" from its AGI mission to pornography.

Motives Behind the Shift: Interestingly, two months earlier, Altman had told YouTuber Cleo Abram that he was "proud" OpenAI hadn't "juiced the numbers" for short-term gains with something like a sexbot, in what seemed like a dig at Musk. Since then, though, Altman has fully embraced the "treat adult users like adults" principle. Why the change? Probably because the company needs profit and computing power to fund its larger mission: in a Q&A with journalists at OpenAI's annual DevDay event, Altman and other executives repeatedly stressed that the company will eventually need to turn a profit and that it needs ever-growing amounts of compute to achieve its goals.



Future Implications and Challenges

Future challenges of sexual AI:
- Ethical questions: privacy, consent, and misuse
- Ongoing challenges: adapting to rapid evolution
- Social impacts: changing human relationships


What's Next?: In a follow-up post, Altman claimed he didn't expect the pornography news to spread so widely.

Monetization and Advertising: As for eventual monetization, OpenAI hasn't ruled out advertising across many of its products, and it stands to reason that advertising would boost cash flow here as well. Or the company may follow Musk's lead and gate pornography behind certain subscription tiers, which could cost users hundreds of dollars a month. OpenAI has already seen public outcry from users who grew attached to a particular model or voice (see the GPT-4o controversy), so it knows a feature like this is likely to hook users in a similar way.

Ethical and Psychological Challenges: But if OpenAI is building a world where human interactions with AI grow ever more personal and intimate, how will it handle repercussions that go beyond its permissive stance of letting adults do as they please? Altman was also vague about how the company intends to protect users in mental health crises. And what happens when that AI girlfriend's or boyfriend's memory is reset, or its personality changes with the latest update, and the connection is severed?

Notable Cases and Further Readings

Notable cases and additional resources:
- Deepfake issues: fake celebrity explicit images
- Additional resources: in-depth reports and investigations
- Case studies: "AI boyfriend" applications


Image Generation Issues: Whether an AI system's training data naturally produces disturbing outputs or people modify the tools in unsettling ways for their own purposes, problems surface regularly, and there is no sign of the trend stopping anytime soon.

Violent Sexual Images: A 2024 story revealed that a Microsoft engineer had found the company's Copilot image-generation feature producing sexualized images of women in violent scenes, even when users did not request them.

"AI Boyfriend" Applications: A disturbing number of middle school students in Connecticut have jumped on the "AI boyfriend" trend, using apps like Talkie AI and Chai AI, whose chatbots often pushed explicit and pornographic content, according to a local media investigation.

Read This: If you want a better idea of how Grok Imagine churned out non-consensual deepfake nudes of celebrities, read this report.

Futurism Coverage: Futurism covered the trend of inappropriate content surrounding Character AI in 2023.

xAI's Responsibility: Here's a clear look at why, under current regulations, xAI will never be held accountable for deepfake pornography of real people.

A Story from The New York Times: And here is a New York Times story about middle school girls being bullied with AI-generated deepfake pornography.
