Spotify is introducing a series of policy changes to manage AI-generated content on its platform. The updates include a new system for labeling AI music, a spam filter to catch fraudulent uploads, and a clearer ban on unauthorized voice clones. Sarah Perez reports for TechCrunch.
For labeling, the company will adopt the upcoming DDEX industry standard. This system allows music partners to provide detailed credits specifying where AI was used, such as for vocals, instrumentation, or post-production. According to Sam Duboff, Spotify’s Global Head of Marketing and Policy, this will enable more nuanced disclosures than a simple “AI” or “not AI” classification.
Spotify also clarified its rules on AI impersonation. The company explicitly stated that unauthorized AI voice clones, deepfakes, and other forms of vocal impersonation are not allowed and will be removed from the service.
To address the increase in spam created with AI tools, a new filter will be rolled out this fall. It is designed to identify tracks that rely on fraudulent tactics, such as mass uploads or search manipulation, and stop recommending them. The company will also work to prevent music from being uploaded to the wrong artist’s profile.
Spotify emphasized that it still supports the responsible use of AI in music creation. Charlie Hellman, the company’s Global Head of Music, said the goal is to stop bad actors who game the system, not to punish artists using AI to be more creative.