
Adult chatbot with age verification and safety alerts for children.
OpenAI's confirmation that it will allow verified adult users to access adult content on ChatGPT starting in December 2025 is widely seen as a significant milestone for the industry: one expected to accelerate market growth while setting a new benchmark for how minors are protected.
Adult chatbots - AI's new racetrack.
AI platforms have rapidly deployed models to serve the needs of adults, from intimate conversations to emotional support.
According to CEO Sam Altman, ChatGPT was previously "heavily restricted" to avoid mental health risks, but this approach made the model "less useful and less appealing" to many users. He said OpenAI is shifting to the principle of "treating adults as adults."
The addition of a controlled access mode for verified users has added fresh momentum to an already fast-growing adult chatbot market.
The competitive landscape has also sharpened: Elon Musk's xAI has recently come under scrutiny from international media over its tools' ability to generate celebrity-related adult content, raising urgent demands for new protection standards.
The "problem" of age verification.
Opening access to adult content presents a major challenge: verifying user age. Verification using identification documents offers high reliability but raises privacy and data-security risks. Meanwhile, AI-based verification that infers age from behavior or language requires large datasets and sophisticated models, and can still be bypassed by users.
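The article does not describe any platform's actual verification logic, but the trade-off above can be sketched as a small decision function: a document check is treated as the strongest signal, a behavioral age estimate is trusted only at high confidence, and anything uncertain fails closed. All names, thresholds, and the three-way verdict here are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Verdict(Enum):
    VERIFIED_ADULT = "verified_adult"
    UNVERIFIED = "unverified"
    LIKELY_MINOR = "likely_minor"


@dataclass
class VerificationSignals:
    id_document_is_adult: Optional[bool]  # None = no document submitted
    predicted_age: Optional[float]        # from a behavioral/linguistic model
    model_confidence: float               # 0..1 confidence in predicted_age


def classify_user(signals: VerificationSignals,
                  min_confidence: float = 0.9) -> Verdict:
    # An ID document check is the strongest signal when available.
    if signals.id_document_is_adult is True:
        return Verdict.VERIFIED_ADULT
    # Fall back to the (bypassable) behavioral estimate, but only
    # when the model is highly confident.
    if signals.predicted_age is not None and signals.model_confidence >= min_confidence:
        if signals.predicted_age < 18:
            return Verdict.LIKELY_MINOR
        return Verdict.VERIFIED_ADULT
    # Fail closed: users we cannot classify stay unverified.
    return Verdict.UNVERIFIED
```

In a real system the behavioral branch would likely gate a further verification step rather than grant access outright; the point of the sketch is the fail-closed default for uncertain cases.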
The challenge is further amplified by the characteristics of AI content generation. Chatbots can generate new information in real time, expand conversations in sensitive ways, or interpret ambiguous requests into inappropriate content.
The risks increase dramatically when the model supports multiple media types such as image, audio, and video generation. Any ability to synthesize sensitive content involving minors, even in simulated form, must be absolutely prevented, which requires developers to build controls from the model-design layer through to the content-filtering layer.
The pressure of responsibility on AI businesses.
News that OpenAI is about to activate an adult content mode has caused concern among many child protection organizations. They warn that legitimizing adult content using AI could inadvertently encourage minors to seek ways to "cross the line."
In Vietnam, many parents have also expressed concern that a platform traditionally associated with learning, such as ChatGPT, has introduced a mode for adults.
These pressures force AI companies to thoroughly implement the principle of "safety by design": controlling training data, monitoring input and output, deploying real-time filters, and maintaining regular audit mechanisms.
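The "safety by design" checklist above (input and output monitoring, real-time filters, audit trails) can be sketched as a minimal moderation wrapper around a chat model. The blocklist, messages, and `safe_chat` function are toy assumptions standing in for trained policy classifiers, not any vendor's actual pipeline.

```python
import re
from typing import Callable, List

# Illustrative keyword blocklist standing in for a real moderation classifier.
BLOCKED_PATTERNS = [re.compile(p, re.IGNORECASE)
                    for p in (r"\bminor\b", r"\bunderage\b")]


def moderate(text: str) -> bool:
    """Return True if the text passes the (toy) policy filter."""
    return not any(p.search(text) for p in BLOCKED_PATTERNS)


def safe_chat(user_message: str,
              model: Callable[[str], str],
              audit_log: List[str]) -> str:
    # 1. Monitor input: refuse policy-violating prompts before generation.
    if not moderate(user_message):
        audit_log.append(f"blocked input: {user_message!r}")
        return "This request violates the content policy."
    # 2. Generate, then run the same real-time filter on the output,
    #    since the model can produce new content the input did not predict.
    reply = model(user_message)
    if not moderate(reply):
        audit_log.append(f"blocked output: {reply!r}")
        return "The generated response was withheld by the safety filter."
    # 3. Record every decision to support regular audits.
    audit_log.append("ok")
    return reply
```

Filtering both sides of the exchange matters because, as noted earlier, a chatbot can expand an apparently innocuous prompt into sensitive content in real time.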
Transparency in operations, public disclosure of protective measures, and acceptance of community feedback are becoming mandatory requirements to create a safe environment for adults without posing risks to vulnerable user groups.
OpenAI's decision to allow verified users access to adult content is not only a commercial milestone but also a reminder that technological innovation must be accompanied by responsible standards.
For the AI ecosystem to thrive sustainably, businesses need to prioritize safety, maintaining a balance between innovation and the responsibility to protect the community.
Source: https://tuoitre.vn/openai-bat-che-do-18-บน-chatgpt-lam-sao-bao-ve-tre-vi-thanh-nien-2025121111362172.htm





