Six months after admitting that ChatGPT 4o was a little too enthusiastic with its relentless praise for users, Sam Altman is bringing sexy back. The OpenAI boss says a new version of ChatGPT “that behaves more like what people liked about 4o” is coming in a few weeks, and it’ll get even better—or potentially much worse, depending on how you feel about the idea—in December with the introduction of AI-powered “erotica.”
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” Altman wrote on X. “We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
It’s arguable that OpenAI has been anything but “careful” with the mental health impacts of its chatbots. The company was sued in August by the parents of a teenager who died by suicide after allegedly being encouraged, and instructed on how to do so, by ChatGPT. The following month, Altman said the software would be trained not to talk to teens about suicide or self-harm (possibly leading one to wonder why it took a lawsuit over a teen suicide to spark such a change), or to engage them in “flirtatious talk.”
At the same time, Altman said OpenAI aims to “treat our adult users like adults,” and that’s seemingly where this forthcoming new version comes in, as Altman repeated the phrase in today’s message.
“In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!),” Altman continued. “If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).”
And then, so to speak, the money shot: “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”
I’m generally of the opinion that adults should be allowed to do what they want as long as nobody’s being hurt, but in this case I have to admit to certain concerns. Because whether anybody’s being hurt is, for the moment at least, the big question here: “AI psychosis,” in which people form obsessive or otherwise unhealthy connections to chatbots, or even come to believe the software is actually sentient, is not a clinical designation, but it does seem to be a growing problem, to the point that Altman himself recently acknowledged that some people use ChatGPT “in self-destructive ways.”
In one particularly disturbing incident reported by Reuters, a cognitively impaired man died while attempting to meet what he believed was a real woman, but was in fact a Meta chatbot he’d been talking to on Facebook.
Altman also said in the recent past, somewhat ironically as it turns out, that while some AI companies will opt to make “Japanese anime sex bots”—presumably a dig at Elon Musk—OpenAI would not, in part to avoid the risk that “people who have really fragile mental states get exploited accidentally.”
So there has been explicit acknowledgement of the potential risk of misuse or overuse of chatbots, and in light of that—and more generally, the fact that this technology is still in its infancy—I do wonder about the wisdom of turning them into always-on phone sex machines. (You can call it “erotica” if you like, but it is what it is.) On the other hand, OpenAI needs money—lots and lots and lots of money—and nobody ever went broke selling sex.