Music has always been shaped by technology. From multitrack tape and synthesizers to digital audio workstations and Auto-Tune, every generation of artists and producers has used new tools to push sound and storytelling forward.
However, recent advances in generative AI have arrived quickly, and at times the pace has felt unsettling, especially for creatives. At its best, AI is unlocking incredible new ways for artists to create music and for listeners to discover it. At its worst, AI can be used by bad actors and content farms to confuse or deceive listeners, push “slop” into the ecosystem, and interfere with authentic artists working to build their careers. That kind of harmful AI content degrades the listening experience and often attempts to divert royalties to bad actors.
The future of the music industry is being written, and we believe that aggressively protecting against the worst uses of generative AI is essential to unlocking its potential for artists and producers.
We envision a future where artists and producers are in control of how or if they incorporate AI into their creative processes. As always, we leave those creative decisions to artists themselves while continuing our work to protect them against spam, impersonation, and deception, and providing listeners with greater transparency about the music they hear.
This journey isn’t new to us. We’ve invested massively in fighting spam over the past decade. In fact, in the past 12 months alone, a period marked by the explosion of generative AI tools, we’ve removed over 75 million spammy tracks from Spotify.
AI technology is evolving fast, and we’ll continue to roll out new policies frequently. Here is where we are focusing our policy work today:
- Improved enforcement of impersonation violations
- A new spam filtering system
- AI disclosures for music with industry-standard credits
Stronger impersonation rules
The issue: We’ve always had a policy against deceptive content. But AI tools have made generating vocal deepfakes of your favorite artists easier than ever before.
What we’re announcing: We’ve introduced a new impersonation policy that clarifies how we handle claims about AI voice clones (and other forms of unauthorized vocal impersonation), giving artists stronger protections and clearer recourse. Vocal impersonation is only allowed in music on Spotify when the impersonated artist has authorized the usage.
We’re also ramping up our investments to protect against another impersonation tactic—where uploaders fraudulently deliver music (AI-generated or otherwise) to another artist’s profile across streaming services. We’re testing new prevention tactics with leading artist distributors to equip them to better stop these attacks at the source. On our end, we’ll also be investing more resources into our content mismatch process, reducing the wait time for review, and enabling artists to report “mismatch” even in the pre-release state.
Why it matters: Unauthorized use of AI to clone an artist’s voice exploits their identity, undermines their artistry, and threatens the fundamental integrity of their work. Some artists may choose to license their voices to AI projects—and that’s their choice to make. Our job is to do what we can to ensure that the choice stays in their hands.
Music spam filter
The issue: Total music payouts on Spotify have grown from $1B in 2014 to $10B in 2024. But big payouts entice bad actors. Spam tactics such as mass uploads, duplicates, SEO hacks, artificially short track abuse, and other forms of slop have become easier to exploit as AI tools make it simpler for anyone to generate large volumes of music.
What we’re announcing: This fall, we’ll roll out a new music spam filter—a system that will identify uploaders and tracks engaging in these tactics, tag them, and stop recommending them. We want to be careful not to penalize the wrong uploaders, so we’ll roll the system out conservatively over the coming months and continue adding new signals as new schemes emerge.
Why it matters: Left unchecked, these behaviors dilute the royalty pool and siphon attention from artists playing by the rules. Our new music spam filter will protect against this behavior and help prevent spammers from generating royalties that could otherwise be distributed to professional artists and songwriters.
AI disclosures for music with industry-standard credits
The issue: Many listeners want more information about what they’re listening to and the role of AI technology in the music they stream. And for artists who are responsibly using AI tools in their creative processes, streaming services currently offer no way to share if and how they’re using AI. We know the use of AI tools is increasingly a spectrum, not a binary: artists and producers may choose to use AI for some parts of their productions and not others. The industry needs a nuanced approach to AI transparency, not a forced choice to classify every song as either “AI” or “not AI.”
What we’re announcing: We’re helping to develop, and will support, a new industry standard for AI disclosures in music credits, created through DDEX. As this information is submitted through labels, distributors, and music partners, we’ll begin displaying it across the app. The standard gives artists and rights holders a way to clearly indicate where and how AI played a role in the creation of a track—whether that’s AI-generated vocals, instrumentation, or post-production. This change is about strengthening trust across the platform. It’s not about punishing artists who use AI responsibly or down-ranking tracks for disclosing information about how they were made.
This is an effort that will require broad industry alignment, and we’re proud to be working on this standard alongside a wide range of industry partners, including Amuse, AudioSalad, Believe, CD Baby, DistroKid, Downtown Artist & Label Services, EMPIRE, Encoding Management Service – EMS GmbH, FUGA, IDOL, Kontor New Media, Labelcamp, NueMeta, Revelator, SonoSuite, Soundrop, and Supply Chain.
Why it matters: By supporting an industry standard and helping to drive its wide adoption, we can ensure listeners see the same information, no matter which service they’re listening on. And ultimately, that preserves trust across the entire music ecosystem, as listeners can understand what’s behind the music they stream. We see this as an important first step that will undoubtedly continue to evolve.
While AI is changing how some music is made, our priorities are constant. We’re investing in tools to protect artist identity, enhance the platform, and provide listeners with more transparency. We support artists’ freedom to use AI creatively while actively combating its misuse by content farms and bad actors. Spotify does not create or own music; this is a platform for licensed music where royalties are paid based on listener engagement, and all music is treated equally, regardless of the tools used to make it.
These updates are the latest in a series of changes we’re making to support a more trustworthy music ecosystem for artists, for rightsholders, and for listeners. We’ll keep them coming as the tech evolves, so stay tuned.