China proposes rules to combat AI companion addiction

The Cyberspace Administration of China (CAC) has unveiled a set of draft regulations aimed at mitigating the risks of addiction associated with AI companions, particularly among vulnerable users such as minors. These virtual chatbots, often designed to simulate romantic or emotional relationships, have surged in popularity, prompting concerns over their psychological impact. The proposed rules, released for public consultation until November 13, 2024, seek to establish clear boundaries for service providers while prioritizing user well-being.

AI companions, frequently marketed as virtual girlfriends or boyfriends, leverage advanced generative AI technologies to engage users in personalized conversations. Platforms like Character.AI and domestic equivalents have seen explosive growth, with millions of daily active users in China alone. However, reports of excessive usage leading to social isolation, neglected responsibilities, and even mental health issues have alarmed regulators. The CAC’s initiative reflects broader efforts to harness AI’s benefits while addressing its societal pitfalls, building on prior regulations like those targeting short-video addiction.

Core Provisions of the Draft Rules

The draft outlines stringent requirements for AI companion services, categorized into usage controls, content safeguards, and operational obligations. Central to the framework is the mandate for platforms to enforce time-based restrictions. Services must automatically limit daily interaction durations, with pop-up warnings appearing after initial thresholds—such as 30 minutes—are reached. These alerts must inform users of potential addiction risks and encourage breaks. For prolonged sessions, platforms are required to implement progressive interventions, including temporary lockouts.

Age-appropriate protections form another pillar. Operators must verify user ages through reliable methods, such as facial recognition or official identification. Children under 14 face a complete ban on access. For teenagers aged 14 to 18, usage is capped at one hour per day, with mandatory parental consent and monitoring tools. Platforms cannot deploy addictive design elements, like infinite scrolling or autoplay features tailored to retain young users.
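The tiered age rules reduce to a small decision table: a full ban under 14, a one-hour daily cap plus parental consent for ages 14 to 17, and no age-based cap for adults. A minimal sketch, with function and field names that are assumptions rather than anything from the draft:

```python
from dataclasses import dataclass

TEEN_DAILY_CAP_SECONDS = 60 * 60  # one hour per day for ages 14-17

@dataclass
class UserProfile:
    age: int                      # assumed already verified via ID or face check
    parental_consent: bool = False
    seconds_used_today: float = 0.0

def access_decision(user: UserProfile) -> str:
    """Apply the draft's tiered age rules (illustrative logic only)."""
    if user.age < 14:
        return "blocked"                        # complete ban under 14
    if user.age < 18:
        if not user.parental_consent:
            return "blocked"                    # consent required for teens
        if user.seconds_used_today >= TEEN_DAILY_CAP_SECONDS:
            return "daily_cap_reached"          # one-hour daily cap
        return "allowed"
    return "allowed"                            # adults: no age-based cap
```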

Content moderation receives equally rigorous scrutiny. AI responses must avoid fostering dependency or providing “emotional companionship” that could exacerbate mental health vulnerabilities. Prohibited outputs include content promoting self-harm, violence, or unrealistic romantic expectations. Algorithms are barred from recommending companions based on a user’s prior high-engagement history, preventing the reinforcement of addictive patterns. Instead, systems must incorporate anti-addiction algorithms that detect and mitigate risky behaviors.
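The ban on engagement-driven recommendation amounts to excluding a user's high-engagement history from the candidate pool before ranking. A one-function sketch under that assumption (the data shapes are hypothetical):

```python
def filter_recommendations(candidates: list[str],
                           high_engagement_history: set[str]) -> list[str]:
    """Drop companion candidates the user has previously engaged with
    heavily, so the ranker cannot reinforce addictive patterns
    (illustrative reading of the draft's recommendation rule)."""
    return [c for c in candidates if c not in high_engagement_history]
```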

Platform Responsibilities and Reporting Mandates

Service providers bear primary responsibility for compliance. They must conduct regular risk assessments, documenting how their AI models are trained to eschew addictive traits. User data handling is tightly regulated: platforms cannot collect or analyze information in ways that fuel personalized addiction. All services require real-name registration, linking accounts to verifiable identities to enable accountability.

Transparency is enforced through mandatory disclosures. Apps and websites must display prominent warning labels, such as “Risk of Addiction” banners, alongside usage statistics and health advisories. Platforms are obligated to publish annual reports detailing user demographics, average session lengths, and intervention efficacy. In cases of detected addiction—defined by metrics like repeated overrides of time limits—providers must notify users and, for minors, their guardians.
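The detection-and-notification rule can be expressed as a simple weekly sweep: flag users whose time-limit overrides exceed some threshold, notify them, and additionally notify guardians of flagged minors. The threshold value and data shapes below are hypothetical, chosen only to illustrate the mechanism:

```python
OVERRIDE_THRESHOLD = 3  # hypothetical: weekly overrides that trigger a notice

def addiction_notices(override_counts: dict[str, int],
                      minors: set[str]) -> list[tuple[str, str]]:
    """Return (recipient, user_id) pairs for users whose weekly
    time-limit overrides meet the threshold. Guardians of minors
    are notified in addition to the user, per the draft."""
    notices = []
    for user_id, count in override_counts.items():
        if count >= OVERRIDE_THRESHOLD:
            notices.append(("user", user_id))
            if user_id in minors:
                notices.append(("guardian", user_id))
    return notices
```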

The rules extend to algorithmic governance. AI companion developers cannot use opaque “black box” models; instead, they must explain decision-making processes related to engagement prompts. Cross-platform data sharing for addiction profiling is forbidden, upholding privacy standards amid China’s evolving data protection landscape.

Broader Context and Enforcement Mechanisms

This proposal aligns with China’s multifaceted approach to digital wellness. Earlier measures curbed minors’ gaming to three hours weekly and imposed similar limits on short videos. The AI companion rules signal an expansion to emerging technologies, recognizing their unique emotional pull. The CAC emphasizes that non-compliance could result in app store delistings, fines, or service suspensions, with a phased implementation allowing operators to adapt.

Public feedback during the consultation period will shape the final version, expected in early 2025. Industry stakeholders, including major players like ByteDance and Tencent, have acknowledged the need for such guardrails, citing internal studies on user retention pressures. Critics, however, worry about overreach stifling innovation, though supporters highlight the proactive stance on public health.

By mandating technical safeguards like usage trackers, age gates, and ethical AI training, these regulations aim to transform AI companions from potential hazards into responsible tools. They underscore China’s ambition to lead in ethical AI deployment, balancing technological advancement with human-centric protections.

Gnoppix is the leading open-source AI Linux distribution and service provider. Since implementing AI in 2022, it has offered a fast, powerful, secure, and privacy-respecting open-source OS with both local and remote AI capabilities. The local AI operates offline, ensuring no data ever leaves your computer. Based on Debian Linux, Gnoppix is available with numerous privacy- and anonymity-enabled services free of charge.

What are your thoughts on this? I’d love to hear about your own experiences in the comments below.