Why chatbots are starting to check your age

In recent months, leading AI chatbots have introduced age verification prompts, marking a significant shift in how these tools interact with users. Platforms such as OpenAI’s ChatGPT and Anthropic’s Claude now routinely ask new users to confirm they meet a minimum age, typically 13 or 18, before granting full access. The change stems from mounting regulatory pressure and a growing recognition of the risks generative AI systems pose to minors.

The push for age checks gained urgency following high-profile incidents and legislative action. In the United States, proposed bills such as the Kids Online Safety Act (KOSA), along with updates to the Children’s Online Privacy Protection Act (COPPA), push platforms to implement safeguards against harmful content for children. Europe’s Digital Services Act similarly requires age assurance measures for high-risk services. AI companies, facing potential fines and lawsuits, have responded proactively. OpenAI, for instance, updated its policies in late 2025 to bar under-13 users entirely and limit 13- to 17-year-olds to monitored interactions.

How Age Verification Works in Chatbots

The implementation varies but relies primarily on self-attestation. Upon first visit, users encounter a pop-up requiring them to select their birth year or affirm their age group. ChatGPT displays a message stating, “To help keep everyone safe, please confirm you’re 13 or older.” Users who decline lose access to core features such as image generation and custom GPTs. Anthropic’s Claude employs a similar gate, emphasizing parental consent for younger teens.
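To make the flow concrete, here is a minimal sketch of what a self-attestation gate reduces to on the server side. Everything in it, from the tier names to the threshold constants, is an illustrative assumption rather than any platform’s actual code:

```python
# Minimal self-attestation gate: map a claimed birth year to an access tier.
# Tier names and the 13/18 thresholds are illustrative assumptions.
from datetime import date

MIN_AGE = 13  # a common floor; the exact value varies by platform and jurisdiction

def access_tier(claimed_birth_year: int, today: date | None = None) -> str:
    """Return an access tier from a self-reported birth year."""
    today = today or date.today()
    age = today.year - claimed_birth_year  # coarse: ignores month and day
    if age < MIN_AGE:
        return "blocked"      # under 13: no access
    if age < 18:
        return "restricted"   # teen: monitored, feature-limited access
    return "full"
```

The weakness is visible in the signature: the gate trusts whatever birth year the user claims, which is why some platforms pair it with the stricter checks described next.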

Some platforms integrate third-party services for stricter checks. Character.AI, after scrutiny from regulators, now mandates government ID uploads for users under 18, partnering with Yoti for biometric age estimation. This involves facial scans analyzed by a machine-learning age-estimation model, with claimed accuracy above 95 percent at distinguishing adults from children. However, most chatbots stick to lighter-touch methods to minimize friction, balancing safety with user retention.
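A third-party integration might look something like the sketch below. The endpoint, payload, and response fields are invented for illustration; real providers such as Yoti publish their own SDKs and schemas:

```python
# Hypothetical call to a facial age-estimation service.
# The URL, fields, and response shape are placeholders, not a real API.
import requests

ESTIMATION_URL = "https://age-api.example.com/v1/estimate"

def looks_adult(selfie: bytes, min_confidence: float = 0.95) -> bool:
    """Gate on the service's estimated age and its own confidence score."""
    resp = requests.post(ESTIMATION_URL, files={"image": selfie}, timeout=10)
    resp.raise_for_status()
    result = resp.json()  # e.g. {"estimated_age": 24.3, "confidence": 0.97}
    return result["estimated_age"] >= 18 and result["confidence"] >= min_confidence
```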

Behind the scenes, age data influences content moderation. Models are fine-tuned to detect and restrict sensitive topics for younger accounts, such as discussions of violence, self-harm, or explicit material. OpenAI’s system flags and soft-blocks responses, while Anthropic uses constitutional AI principles to enforce age-appropriate guardrails.
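In practice, that can be as simple as an extra moderation step conditioned on the account’s tier. The sketch below assumes an upstream safety classifier has already tagged the reply with flagged categories; the category names and the policy itself are invented:

```python
# Age-conditioned soft-blocking of model replies (policy values invented).
# `flags` is assumed to come from an upstream safety classifier.
ALWAYS_BLOCKED = {"sexual_minors"}
TEEN_BLOCKED = {"violence", "self_harm", "explicit"}

def moderate(reply: str, flags: set[str], tier: str) -> str:
    """Replace a reply when its flags exceed the account tier's policy."""
    if flags & ALWAYS_BLOCKED:
        return "This content isn't available."
    if tier == "restricted" and flags & TEEN_BLOCKED:
        return "I can't go into that topic. If you're struggling, please talk to a trusted adult."
    return reply
```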

The Rationale: Protecting Vulnerable Users

Proponents argue these measures are essential. Generative AI can produce convincing misinformation, cyberbullying fodder, or grooming material, all amplified by chatbots’ conversational intimacy. A 2025 Stanford study found that 40 percent of teens interacting with unmonitored AIs encountered harmful suggestions, from homework cheating aids to risky behavioral advice. Regulators cite cases like a 13-year-old’s exposure to suicide ideation prompts on early chatbot versions.

Industry leaders echo this. OpenAI CEO Sam Altman stated in a company blog that “age verification is a foundational step toward responsible AI deployment.” Anthropic’s safety researchers highlight how minors lack the critical-thinking skills to navigate AI hallucinations or manipulative outputs.

Challenges and Criticisms

Despite good intentions, age gates face substantial hurdles. Self-reported ages are notoriously unreliable; studies show up to 30 percent of minors lie about their age online. Privacy advocates, including the Electronic Frontier Foundation, warn that even anonymized data collection risks breaches or profiling. Biometric options raise consent issues, especially for children.

User experience suffers too. Legitimate adults report frustration with repeated prompts or access denials due to glitches. In regions with poor internet or ID access, such as parts of Africa and Asia, these barriers exacerbate digital divides. Critics like Evan Greer of Fight for the Future argue that “age verification distracts from root problems like over-censorship and corporate opacity.”

Effectiveness remains unproven. A recent MIT Technology Review analysis of 500 chatbot sessions revealed that savvy teens bypass gates via VPNs, secondary accounts, or coached responses from forums. Platforms acknowledge this, with OpenAI investing in behavioral signals like typing patterns to infer age indirectly.
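A toy version of that idea is sketched below. The signals and weights are invented for illustration; a production system would rely on a trained model over far richer features, not hand-picked heuristics:

```python
# Toy behavioral age-inference heuristic -- signals and weights are invented.
def minor_likelihood(avg_keystroke_ms: float, emoji_rate: float, slang_hits: int) -> float:
    """Combine crude behavioral signals into a 0-1 score; above some tuned
    threshold, the account could be routed to an explicit age re-check."""
    score = 0.0
    score += 0.4 if avg_keystroke_ms < 120 else 0.0   # fast, bursty typing
    score += 0.3 if emoji_rate > 2.0 else 0.0          # emoji per message
    score += 0.3 if slang_hits > 5 else 0.0            # matches against a teen-slang lexicon
    return score
```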

Broader Implications for AI Accessibility

This trend signals a pivot from open-access AI to tiered systems. Future iterations may incorporate device-level checks via app stores or parental controls, akin to gaming platforms. Apple’s Screen Time and Google’s Family Link already provide app-level parental controls, paving the way for ecosystem-wide enforcement.

Experts predict escalation. Daphne Keller of Stanford’s Cyberlaw Clinic foresees “AI passports” tying verified ages to profiles across services. Meanwhile, open-weight models like Llama ship with no gate at all, potentially fragmenting the landscape into regulated commercial bots and an unregulated wild west.

For developers, compliance demands new tooling. Age-aware prompting layers and open-source safety toolkits allow fine-grained, tier-specific restrictions, as in the sketch below. Yet the arms race continues: as verification improves, so do circumvention tactics.
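One common pattern is to prepend tier-specific guardrails to the system prompt before every request. The sketch below is a generic illustration, not any framework’s actual API; the guardrail wording and tier names are our assumptions:

```python
# Age-aware prompting: prepend tier-specific guardrails to the system prompt.
# Guardrail text and tier names are illustrative assumptions.
GUARDRAILS = {
    "restricted": (
        "The user is a minor. Refuse explicit content, avoid detailed "
        "discussion of self-harm, and point to trusted adults or hotlines "
        "when the conversation turns to personal crises."
    ),
    "full": "Apply standard content policy.",
}

def build_system_prompt(base_prompt: str, tier: str) -> str:
    """Compose the final system prompt, defaulting to the safest tier."""
    guardrail = GUARDRAILS.get(tier, GUARDRAILS["restricted"])
    return f"{guardrail}\n\n{base_prompt}"
```

Note the design choice of defaulting unknown tiers to the restricted guardrail: when age is uncertain, the safer policy wins.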

Navigating the Trade-offs

Age checks embody the tension between innovation and safety in AI. They offer a pragmatic first line of defense but fall short as a panacea. True protection requires multifaceted approaches: better model training, transparent reporting, and global standards. As chatbots embed deeper into education, therapy, and daily life, getting age right is non-negotiable, even if perfection eludes us.

What are your thoughts on this? I’d love to hear about your own experiences in the comments below.