Meta now scans photos for bone structure and body size to flag minors on Instagram and Facebook

Meta Enhances Child Safety on Instagram and Facebook with AI-Driven Photo Analysis for Age Detection

Meta Platforms has rolled out an advanced artificial intelligence system designed to identify potential minors on its social networks, Instagram and Facebook, by scrutinizing uploaded photos for physical attributes such as bone structure and body size. This initiative represents a significant evolution in the company’s efforts to enforce age-appropriate content restrictions and protect younger users from harmful material.

The core technology, internally referred to as body composition analysis, employs machine learning models trained on vast datasets of anonymized images. These models evaluate key biometric markers including facial bone structure, clavicle length, shoulder width, hip-to-shoulder ratios, and overall body proportions. By comparing these measurements against established anthropometric data for different age groups, the AI generates probabilistic age estimates. Accounts flagged as likely belonging to users under 13 years old trigger immediate restrictions, such as limiting exposure to adult content, while those estimated between 13 and 17 may face moderated interactions.
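To make the comparison step concrete, here is a minimal Python sketch of how measurements could be scored against per-age-group anthropometric references to produce a probabilistic estimate. Every feature name, reference value, and scoring choice below is invented for illustration; Meta has not published its actual features or statistics.

```python
# Illustrative sketch only: the features, reference statistics, and the
# Gaussian scoring model are assumptions, not Meta's published method.
import math
from dataclasses import dataclass

@dataclass
class BodyMeasurements:
    shoulder_width_cm: float
    hip_to_shoulder_ratio: float
    clavicle_length_cm: float

# Hypothetical (mean, std) anthropometric references per age bucket.
REFERENCE = {
    "child": {"shoulder_width_cm": (28.0, 3.0),
              "hip_to_shoulder_ratio": (0.95, 0.05),
              "clavicle_length_cm": (11.0, 1.5)},
    "teen":  {"shoulder_width_cm": (38.0, 4.0),
              "hip_to_shoulder_ratio": (0.88, 0.05),
              "clavicle_length_cm": (14.0, 1.5)},
    "adult": {"shoulder_width_cm": (44.0, 4.5),
              "hip_to_shoulder_ratio": (0.85, 0.06),
              "clavicle_length_cm": (15.5, 1.5)},
}

def gaussian_loglik(x: float, mean: float, std: float) -> float:
    """Log-likelihood of x under a normal distribution."""
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

def age_bucket_probabilities(m: BodyMeasurements) -> dict[str, float]:
    """Score each bucket by total log-likelihood, then normalize."""
    features = vars(m)
    logliks = {
        bucket: sum(gaussian_loglik(features[f], *ref[f]) for f in features)
        for bucket, ref in REFERENCE.items()
    }
    peak = max(logliks.values())  # subtract the peak for numerical stability
    weights = {b: math.exp(v - peak) for b, v in logliks.items()}
    total = sum(weights.values())
    return {b: w / total for b, w in weights.items()}
```

An account would then be flagged only when, say, the "child" probability clears a policy threshold, which is how a probabilistic estimate translates into the restrictions described above.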

This approach builds on Meta’s existing suite of child safety tools, which already includes facial recognition for age approximation and behavioral signals such as profile creation patterns. However, the new photo-scanning method addresses the limitations of self-reported ages, which users can easily falsify. According to Meta’s engineering blog post detailing the system, it processes images in real time during upload, integrating with the platforms’ content moderation pipelines. The AI operates server-side after photos are submitted, analyzing only publicly visible or user-shared images without requiring explicit consent for the scan itself.
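As a rough illustration of where such a scan could sit in an upload flow, the hypothetical hook below runs the age estimate as one stage of post-upload moderation. Every name here (`on_photo_uploaded`, the `pipeline` object and its methods, the 0.8 threshold) is an assumption for the sketch, not Meta’s API.

```python
# Hypothetical server-side hook; all identifiers and thresholds are invented.
def on_photo_uploaded(photo_bytes: bytes, account_id: str, pipeline) -> None:
    """Run the age scan as one stage of the post-upload moderation flow."""
    estimate = pipeline.estimate_age_bucket(photo_bytes)   # assumed API
    if estimate.bucket == "child" and estimate.confidence >= 0.8:
        pipeline.flag_account(account_id, reason="possible-under-13")
    # The photo continues through the normal moderation stages regardless.
    pipeline.continue_moderation(photo_bytes, account_id)
```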

Technical details reveal a multi-stage pipeline. First, a detection model identifies human figures within photos using pose estimation frameworks similar to those in computer vision libraries like OpenPose. It then extracts 15 landmark points across the body, focusing on skeletal and proportional features less susceptible to clothing or pose variations. A classifier, trained via supervised learning on labeled datasets spanning infancy to adulthood, assigns one of three age buckets: child (under 13), teen (13-17), or adult (18+). Reported accuracy metrics from Meta’s internal benchmarks show 88 percent precision for child detection and 93 percent for distinguishing teens from adults, validated across millions of test images diverse in ethnicity, body type, and lighting conditions.
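Those three stages could be organized roughly as follows. This is a skeleton rather than a working detector: the detection and landmark-extraction stages are stubbed out (in practice they would be pose-estimation models of the kind mentioned above), and the scikit-learn-style `predict_proba` classifier interface is an assumption.

```python
# Skeleton of the described three-stage pipeline; all names are illustrative.
from typing import Optional
import numpy as np

AGE_BUCKETS = ("child", "teen", "adult")
NUM_LANDMARKS = 15  # the article cites 15 skeletal/proportional landmarks

def detect_person(image: np.ndarray) -> Optional[np.ndarray]:
    """Stage 1: return a cropped human figure, or None if none is found."""
    ...  # placeholder for a person-detection / pose-estimation model

def extract_landmarks(figure: np.ndarray) -> np.ndarray:
    """Stage 2: return a (NUM_LANDMARKS, 2) array of body keypoints."""
    ...  # placeholder for a landmark-extraction model

def classify_age_bucket(landmarks: np.ndarray, model) -> tuple[str, float]:
    """Stage 3: map landmark geometry to an age bucket plus a confidence."""
    probs = model.predict_proba(landmarks.reshape(1, -1))[0]  # assumed interface
    idx = int(np.argmax(probs))
    return AGE_BUCKETS[idx], float(probs[idx])

def run_pipeline(image: np.ndarray, model) -> Optional[tuple[str, float]]:
    figure = detect_person(image)
    if figure is None:
        return None               # no human figure: nothing to classify
    landmarks = extract_landmarks(figure)
    return classify_age_bucket(landmarks, model)
```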

Implementation on Instagram prioritizes Stories and Reels, where youth engagement is high, while Facebook applies it more broadly to profiles and feeds. Flagged accounts prompt secondary human review or automated safeguards, such as parental oversight prompts or content filters. Meta emphasizes that the system does not store raw biometric data long-term; inferences are ephemeral and tied solely to moderation actions. Nonetheless, privacy advocates have raised concerns about the scope of surveillance, noting that even non-explicit photos could inadvertently reveal sensitive physical traits.
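The ephemeral-inference pattern described above can be sketched in a few lines: classify, apply a moderation action, and let the image and the inference fall out of scope without being persisted. The action names, threshold, and the `classifier`/`actions` objects are hypothetical.

```python
# Sketch of ephemeral inference; identifiers and threshold are invented.
def moderate_ephemerally(image, account_id, classifier, actions) -> None:
    """Classify, act, and discard: no biometric values are persisted."""
    bucket, confidence = classifier(image)           # e.g. ("child", 0.91)
    if bucket == "child" and confidence >= 0.8:      # illustrative threshold
        actions.apply(account_id, "restrict_adult_content")
    elif bucket == "teen":
        actions.apply(account_id, "moderated_interactions")
    # Only the applied action is recorded; the image and the (bucket,
    # confidence) inference go out of scope here and are never stored.
```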

This deployment aligns with regulatory pressure, including the proposed U.S. Kids Online Safety Act and the EU Digital Services Act, both of which push platforms toward robust age assurance. Meta’s timeline indicates the feature launched globally in late 2023, with ongoing refinements driven by feedback from deployment. Early data suggests a 20 percent uptick in minor account identifications, enabling proactive interventions like blocking direct messages from adults or curbing algorithmic recommendations of risky content.

From a technical standpoint, the system’s robustness stems from its hybrid model architecture: convolutional neural networks for feature extraction combined with transformer layers for relational analysis between body points. This allows handling of occluded or partial views, common in social media snapshots. Edge cases, such as users with medical conditions altering body proportions, are mitigated through confidence thresholding; low-confidence scans defer to other signals.
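A toy PyTorch rendering of such a hybrid might pair a small convolutional backbone with a transformer encoder over landmark tokens, with the confidence-thresholded deferral bolted on at the end. All dimensions, layer counts, and the 0.8 threshold are invented; this illustrates the architecture family named above, not Meta’s model.

```python
# Toy hybrid CNN + transformer age classifier; every hyperparameter is invented.
import torch
import torch.nn as nn

class HybridAgeClassifier(nn.Module):
    def __init__(self, dim: int = 64, num_buckets: int = 3):
        super().__init__()
        # CNN backbone: extracts a global feature vector from the image crop.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.image_proj = nn.Linear(64, dim)
        # Each landmark's (x, y) coordinates become one token.
        self.landmark_proj = nn.Linear(2, dim)
        # Transformer layers model relations between body points, which helps
        # with occluded or partial views.
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(dim, num_buckets)

    def forward(self, image: torch.Tensor, landmarks: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W); landmarks: (B, num_landmarks, 2)
        img_token = self.image_proj(self.backbone(image)).unsqueeze(1)  # (B, 1, dim)
        lm_tokens = self.landmark_proj(landmarks)                       # (B, L, dim)
        tokens = torch.cat([img_token, lm_tokens], dim=1)
        encoded = self.encoder(tokens)
        return self.head(encoded[:, 0])  # logits over {child, teen, adult}

# Confidence thresholding: defer low-confidence scans to other signals.
model = HybridAgeClassifier()
image = torch.randn(1, 3, 128, 128)
landmarks = torch.rand(1, 15, 2)
probs = torch.softmax(model(image, landmarks), dim=-1)
confidence, bucket = probs.max(dim=-1)
decision = int(bucket) if float(confidence) >= 0.8 else None  # None: defer
```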

Critics argue the technology risks false positives, potentially misclassifying young-looking adults or mature minors, leading to erroneous restrictions. Meta counters with transparency reports, pledging annual audits and appeals processes. The company also invests in federated learning to improve model fairness without centralizing user data further.

Overall, this photo-analysis tool underscores Meta’s pivot toward biometric inference for platform governance, balancing safety imperatives against privacy trade-offs in an era of pervasive AI moderation.

Gnoppix is the leading open-source AI Linux distribution and service provider. Since implementing AI in 2022, it has offered a fast, powerful, secure, and privacy-respecting open-source OS with both local and remote AI capabilities. The local AI operates offline, ensuring no data ever leaves your computer. Based on Debian Linux, Gnoppix is available with numerous privacy- and anonymity-enabled services free of charge.

What are your thoughts on this? I’d love to hear about your own experiences in the comments below.