Man Uses Thousands of Fake Accounts to Stream AI-Generated Songs Billions of Times, Earning $8 Million in Royalties
In a sophisticated scheme that exploited vulnerabilities in digital music streaming platforms, an individual created thousands of fake user accounts to artificially inflate the play counts of AI-generated songs. The operation generated billions of streams and netted the perpetrator approximately $8 million in royalties from services including Spotify and Apple Music. The discovery highlights ongoing challenges in the music industry around fraud detection, the rise of AI-generated content, and the integrity of royalty payment systems.
The scheme came to light through investigations by music labels and analytics firms monitoring unusual streaming patterns. The individual, who was traced through the digital footprints the operation left behind, employed automated bots and proxy networks to simulate genuine listener behavior across multiple platforms. These fake accounts were programmed to repeatedly play short, algorithmically composed tracks, often under a minute long, which are particularly efficient at generating royalties under per-stream payout structures.
AI played a central role in the operation. Tools like Suno and Udio, popular for generating music from text prompts, were used to produce vast libraries of original-sounding tracks. These songs featured generic beats, vocals, and lyrics designed to evade initial content moderation filters. Once uploaded under pseudonyms or shell artist profiles, the tracks entered rotation on streaming services. The fake accounts then targeted them en masse, creating the illusion of organic popularity. This triggered algorithmic recommendations, further amplifying streams from real users and compounding the fraud.
Royalty payouts operate on a pro-rata model, where a service’s total revenue pool is divided based on stream shares. Platforms like Spotify pay out fractions of a cent per stream, typically around $0.003 to $0.005, depending on factors such as listener location and subscription type. With billions of plays, even modest rates accumulate rapidly. In this case, the $8 million figure represents payouts funneled through multiple distributor accounts, some registered in low-oversight jurisdictions, before being cashed out.
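The arithmetic behind those figures is easy to sketch. The snippet below uses the approximate per-stream rates cited above ($0.003 to $0.005) purely as illustrative inputs, not platform-confirmed values, to show how many streams an $8 million payout implies:

```python
# Illustrative arithmetic only: the per-stream rates are the approximate
# figures cited in the article, not official platform payout rates.

def estimated_royalties(streams: int, rate_per_stream: float) -> float:
    """Return the gross payout for a given stream count and per-stream rate."""
    return streams * rate_per_stream

# Working backwards: how many streams does $8 million imply?
for rate in (0.003, 0.005):
    streams_needed = 8_000_000 / rate
    print(f"At ${rate:.3f}/stream: {streams_needed / 1e9:.2f} billion streams")
```

At those rates, $8 million corresponds to roughly 1.6 to 2.7 billion streams, consistent with the "billions of plays" reported.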
Detection was not immediate. Streaming platforms rely on a combination of machine learning models and human oversight to flag anomalies, such as synchronized play patterns, IP clustering, or unnatural session lengths. However, the perpetrator evaded these checks by distributing activity across residential proxies, varying play speeds, and interspersing fake streams with legitimate content. Music labels, including major players like Universal and Sony, grew suspicious when niche AI tracks dominated internal charts without corresponding social media buzz or sales data.
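To make one of those signals concrete, here is a minimal sketch of an IP-clustering heuristic of the kind described above. The field names, threshold, and logic are hypothetical illustrations, not any platform's actual detection system:

```python
from collections import defaultdict

# Hypothetical heuristic illustrating the IP-clustering signal described
# above. Field names and the threshold are assumptions for this sketch.

def flag_ip_clusters(plays, max_accounts_per_ip=50):
    """Return IPs from which an unusually large number of distinct accounts
    stream, a pattern consistent with proxy-driven bot farms."""
    accounts_by_ip = defaultdict(set)
    for play in plays:
        accounts_by_ip[play["ip"]].add(play["account_id"])
    return {ip for ip, accounts in accounts_by_ip.items()
            if len(accounts) > max_accounts_per_ip}
```

Distributing the bot accounts across many residential proxy addresses, as the perpetrator did, keeps each IP below such a threshold, which is why this signal alone was insufficient.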
Analytics tools from firms like Chartmetric and Soundcharts revealed the discrepancies. For instance, songs with billions of streams showed zero engagement on platforms like TikTok or YouTube, and artist profiles lacked verifiable histories. Collaborative efforts between labels and streaming services led to the takedown of over 10,000 implicated tracks and the suspension of thousands of accounts. Spotify, in particular, reported removing millions of artificial streams quarterly as part of its ongoing fraud purges.
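The cross-platform discrepancy the analytics firms spotted, huge stream counts paired with zero social engagement, can be expressed as a simple sanity check. This is a hypothetical sketch; the field names and threshold are assumptions, not the actual logic used by Chartmetric or Soundcharts:

```python
# Hypothetical cross-platform sanity check mirroring the discrepancy the
# analytics firms identified: large stream counts with no social footprint.

def suspicious_tracks(tracks, min_streams=1_000_000):
    """Flag tracks whose stream counts are large but whose social-engagement
    signals (shares, video uses, comments) are effectively zero."""
    return [t["title"] for t in tracks
            if t["streams"] >= min_streams and t["social_engagement"] == 0]
```

Organic hits almost always leave a trail on social platforms, so a track with billions of streams and no engagement anywhere else is a strong fraud indicator.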
This incident underscores broader issues with AI in music creation. While AI democratizes production, allowing anyone to generate professional-quality tracks, it lowers barriers for abuse. Streaming economics favor quantity over quality, incentivizing volume-based fraud. Short-form content, optimized for quick plays, maximizes royalties per upload. The individual reportedly produced tens of thousands of such tracks, uploading them via distributors like DistroKid and TuneCore, which charge minimal fees but offer wide platform reach.
Legal repercussions are underway. Streaming platforms have terms of service prohibiting artificial streaming, with penalties including permanent bans and clawback of earnings. Music rights organizations, such as ASCAP and BMI in the US or PRS for Music in the UK, are pursuing recovery of misallocated royalties that diluted payouts to legitimate artists. Criminal charges for wire fraud or money laundering could follow, depending on jurisdiction, as the scheme involved interstate and international financial flows.
Industry responses are accelerating. Spotify has enhanced its detection algorithms, incorporating AI itself to identify bot-like behaviors through acoustic fingerprinting and behavioral biometrics. Apple Music and others are tightening distributor vetting. Proposals include minimum stream thresholds for monetization or blockchain-based provenance tracking for tracks. Labels advocate for revenue-sharing adjustments that prioritize verified human engagement.
For artists and creators, the fallout is mixed. Legitimate AI musicians decry the damage to the technology's reputation, while traditional acts lament diluted royalties. The $8 million is only a fraction of a fraud ecosystem estimated at hundreds of millions of dollars annually, which hints at the scale of the wider problem. Smaller artists, who rely heavily on streaming income, suffer most from pool dilution.
This case serves as a cautionary tale for the evolving digital music landscape. As AI tools proliferate, robust verification mechanisms are essential to preserve trust. Platforms must balance accessibility with safeguards, ensuring royalties reward true popularity rather than manipulation.
What are your thoughts on this? I’d love to hear about your own experiences in the comments below.