Deezer cracks down: AI-generated music streams demonetised and detection tech offered to the industry.
Streaming platforms are in a tough spot. Deezer says most AI uploads are fraudulent and is pulling their revenue. The Parisian service estimates AI tracks now make up 39% of daily uploads, with some 60,000 synthetic tracks arriving each day. That scale forced action. Deezer will demonetise detected fraudulent plays and remove fully AI-generated music from algorithmic recommendations. It’s also licensing the detection tool more broadly, signalling a new industry playbook. Read more about shifting revenue debates in this earlier piece on AI music licensing and industry splits.
I grew up between opera houses and coding benches — singing La Bohème on stage, then tinkering with microcontrollers in Silicon Valley. Spotting fake music feels personal. Once I watched a synthetic chorus wipe out weeks of careful playlist work. I laugh now, but protecting artists and real listeners is why I keep chasing better detection tools. Also, as someone who recorded with Madonna, I can confirm: some vocals should come with a credit and a human being attached.
AI-generated music
Deezer’s recent move is built on hard metrics. The platform reports that AI tracks make up roughly 39% of daily uploads (about 60,000 synthetic tracks per day), while AI music accounts for only 2% of total streams. Yet Deezer found up to 85% of those AI-driven streams were fraudulent in 2025, compared with an 8% fraud rate across the wider catalogue. The company now excludes detected fraudulent streams from royalty payments and removes fully AI-generated tracks from editorial playlists and algorithmic recommendations. The full report was covered by Resident Advisor.
What Deezer unearthed
The numbers are blunt. Roughly 60,000 synthetic tracks per day means catalogues balloon with low-cost uploads. AI-generated music represented 39% of uploads but just 2% of streams, indicating most synthetic content fails to attract organic listeners. The startling stat is fraud: up to 85% of AI-related streams flagged as fraudulent in 2025, versus an 8% fraud average elsewhere on Deezer. That gap drove the demonetisation policy.
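A back-of-the-envelope calculation shows why that gap matters more than the raw 2% figure suggests. This sketch uses only the percentages reported above; the arithmetic is illustrative, not from Deezer's report:

```python
# Reported Deezer figures (2025)
ai_share_of_streams = 0.02   # AI music: ~2% of total streams
ai_fraud_rate = 0.85         # up to 85% of AI-related streams flagged fraudulent
catalogue_fraud_rate = 0.08  # ~8% fraud rate across the wider catalogue

# Implied share of ALL streams that are fraudulent AI plays
fraudulent_ai_share = ai_share_of_streams * ai_fraud_rate
print(f"Fraudulent AI plays: {fraudulent_ai_share:.1%} of all streams")

# The disparity that drove the policy: how much likelier an
# AI-related stream is to be fraudulent than a catalogue stream
print(f"Relative fraud risk: {ai_fraud_rate / catalogue_fraud_rate:.1f}x")
```

Even though AI music is a sliver of listening, its streams are over ten times more likely to be fraudulent, which is why Deezer targeted the plays rather than the uploads.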
How detection and takedowns work
Deezer built an internal AI-detection tool in early 2025. It tags suspected synthetic tracks, strips them from algorithmic feeds, and excludes fraudulent plays from payouts. The tool was tested with groups such as French collecting society Sacem and is already used by Billboard to identify AI tracks on charts. CEO Alexis Lanternier framed the policy as protecting transparency and artist revenue: making it harder for fraudsters to game the system.
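Deezer hasn't published implementation details, but the three-step policy described above (tag suspected tracks, pull them from algorithmic feeds, exclude flagged plays from payouts) maps onto a simple rule set. A minimal sketch, assuming a hypothetical AI-origin classifier score and per-track fraud-flag counts; all names and thresholds here are illustrative, not Deezer's actual system:

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    ai_score: float       # 0..1 output of a hypothetical AI-origin classifier
    stream_count: int
    flagged_streams: int  # plays a fraud detector marked as artificial

# Illustrative cutoff for "fully AI-generated"; not a published Deezer value
FULLY_AI_THRESHOLD = 0.9

def apply_policy(track: Track) -> dict:
    """Tag suspected synthetic tracks, drop them from algorithmic
    recommendations, and exclude flagged plays from royalty payouts."""
    fully_ai = track.ai_score >= FULLY_AI_THRESHOLD
    return {
        "tagged_synthetic": fully_ai,
        "in_recommendations": not fully_ai,  # removed from algorithmic feeds
        "payable_streams": track.stream_count - track.flagged_streams,
    }

result = apply_policy(
    Track("t1", ai_score=0.95, stream_count=1000, flagged_streams=850)
)
```

The key design point is that the two penalties are independent: a track can keep its verified plays monetised even while being excluded from recommendations, which is how legitimate AI-assisted work can still earn royalties.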
Licensing the tech and industry ripple effects
Crucially, Deezer is licensing its detection technology to other companies. That move turns an internal compliance system into a potential industry standard. If larger platforms adopt similar detection, it could curb fraudulent streaming networks and re-balance payouts toward verified creators. But it also raises questions about false positives, artist recourse, and the criteria for what counts as “fully AI-generated.”
What comes next
Expect more platforms to build or buy detection tech, and for rights bodies to demand transparent tagging. For artists and labels, the signal is clear: attribution and provenance matter. For listeners, platforms will offer clearer choices between synthetic and human-made music. The broader debate about AI-generated music — from creativity to commerce — has entered a regulatory and marketplace sprint.
AI-generated music Business Idea
Product: Launch a SaaS platform called “ClearStream” that bundles Deezer-style AI detection with provenance metadata and an artist verification layer. The service integrates audio fingerprinting, AI-origin scoring, and a blockchain-backed provenance ledger to record upload origin, creation method, and rights holder claims.
Target Market: Streaming platforms, DSPs, collecting societies, indie distributor services, and labels needing fraud mitigation and audit trails.
Revenue Model: Tiered subscriptions (platforms pay per million streams scanned), licensing fees for enterprise integrations, and a transactional fee for provenance notarisation.
Why Now: With 60,000 synthetic tracks added daily and fraud rates as high as 85% of AI streams on some services, demand for robust detection and provenance is immediate. Licensing trends — illustrated by Deezer opening its tech — lower go-to-market barriers and create partnership pathways. Investors gain a defensible moat through proprietary models, datasets, and enterprise contracts with rights bodies and DSPs looking to protect royalties and platform trust.
A Better Balance for Music
Deezer’s pivot shows one path forward: detection plus accountability. Technology can protect artists while allowing ethical AI creativity to flourish. The challenge is designing fair systems that reduce fraud without silencing legitimate experimentation. Which safeguards would you want on your favourite platform — stricter detection, transparent labels, or artist opt-ins? Tell me which matters most to you.
FAQ
Will Deezer stop paying royalties on all AI-generated tracks?
Deezer excludes detected fraudulent streams from royalty payments and removes fully AI-generated tracks from recommendations. In 2025 it flagged up to 85% of AI-related streams as fraudulent; legitimate AI-assisted works can still be monetised if verified.
How many AI tracks are uploaded to Deezer daily?
Deezer reports about 60,000 synthetic tracks delivered each day and estimates AI-created tracks comprise roughly 39% of daily uploads, though they account for about 2% of total streams.
Is Deezer selling its detection tech to others?
Yes. Deezer is licensing the detection tool to industry partners after testing with organisations like Sacem. The tool launched internally in early 2025 and is already used by outlets such as Billboard.