A group calling itself “Sora PR Puppets” temporarily leaked access to OpenAI’s unreleased Sora video generation tool through an interface hosted on Hugging Face. The group published authentication tokens that let users generate 10-second videos at 1080p resolution before access was revoked. The protesters claim OpenAI is pressuring early testers to present positive narratives while failing to compensate artists fairly for their testing work. OpenAI responded that participation in the alpha program is voluntary, with no obligations beyond responsible use. The leaked version appeared to be a faster “turbo” variant of Sora, which has faced technical challenges, including consistency issues and long processing times, since its February 2024 debut.
The incident highlights broader tensions in AI testing practices, where companies typically maintain tight control over early access through NDAs and approval requirements. Red teaming is a systematic practice in which a group of experts (the “red team”) takes an adversarial position and deliberately probes a system for flaws, vulnerabilities, and potential misuse. While red teaming has become an industry standard, adopted even by government agencies, critics argue that this tightly controlled approach limits independent research and transparency.
The protesting artists represent a minority, roughly 20 of the hundreds of testers in the program, and created their unauthorized access point without actually compromising OpenAI’s code or proprietary information. Meanwhile, some testers, like musician André Allen Anjos, defended the program, saying most participants were excited to be involved and that the team was “doing it right.”
Sources: TechCrunch, Washington Post