And it begins. OpenAI mulls NSFW AI model output

That's a new twist on open, then

OpenAI released model safety guidance on Wednesday while acknowledging that it's looking into how to support the creation of content that's NSFW, or "not safe for work."

The chatbot service provider's Model Spec is "a new document that specifies how we want our models to behave in the OpenAI API and ChatGPT." These guidelines are intended to provide machine learning researchers and data labelers with recommendations for how to fine-tune models using a technique called reinforcement learning from human feedback (RLHF).

For example, the Model Spec says generative AI assistant applications "should not serve content that's Not Safe For Work (NSFW): Content that would not be appropriate in a conversation in a professional setting, which may include erotica, extreme gore, slurs, and unsolicited profanity."

At the same time, OpenAI says it's considering just the opposite.

"We believe developers and users should have the flexibility to use our services as they see fit, so long as they comply with our usage policies," the Model Spec says.

"We're exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behavior in this area."

This should not come as a surprise. OpenAI's usage policies don't actually disallow adult content. For services like ChatGPT and its API, the business prohibits "sexually explicit or suggestive content" involving minors, excluding content created for scientific and medical purposes. And it bars unlawful material, among other prohibitions. But NSFW content is often legal.

The web has been NSFW pretty much since its inception and AI has followed a similar trajectory, at least since it began attracting public attention and investment a decade ago. Rewind just a few years to 2016, before the current generative AI craze, and you'll find deep learning classifiers tuned for porn. A year later, it's AI-generated text erotica. Then there's celebrity face-swapping for porn videos. More recently, researchers found child sexual abuse material in the LAION-5B dataset, which was used to train AI models like Stable Diffusion.

The issue here is whether OpenAI's paying customers will accept AI services that balk at transgressive requests. If, say, some Hollywood production company's effort to touch up an explicit scene using AI gets derailed with, "I'm sorry Dave, I'm afraid I can't do that," you can bet that OpenAI subscription won't be renewed and the studio will take its business elsewhere.

Beyond open source options, there are other AI services that have no qualms about adult content.

The Register asked OpenAI how it expects to reconcile NSFW content with its safety efforts.

"We have no intention to create AI-generated pornography," a company spokesperson said, referring to a far narrower category of content than that which is NSFW.

"We have strong safeguards in our products to prevent deepfakes, which are unacceptable, and we prioritize protecting children. We also believe in the importance of carefully exploring conversations about sexuality in age-appropriate contexts."

We then asked whether that means OpenAI will prevent customers from creating AI-generated pornography. "Customers must still adhere to our usage policies," we were told. ®
