Scale AI faced significant spam and security problems while training Google’s Gemini chatbot between March 2023 and April 2024, according to internal documents obtained by Inc. magazine. The issues plagued the company’s “Bulba Experts” program, which was designed to use qualified specialists to improve Google’s AI system.
The documents reveal that unqualified contractors flooded the platform and submitted poor-quality work described as “gibberish” or content generated by ChatGPT. Many of these contributors lacked the advanced degrees and English proficiency required for the specialized tasks. Despite producing substandard work, many spammers were still paid because Scale AI struggled to identify and remove them all.
Former contractors told Inc. that the company had inadequate security measures and background checks. Contributors from developing countries used VPNs to evade detection, and some even sold their accounts to others. The word “spam” appears across 83 pages of the internal documents, highlighting the scope of the problem.
The revelations come after Meta invested $14 billion in Scale AI earlier this month, an ownership stake that prompted Google to end its relationship with the company. Scale AI spokesperson Joe Osborne disputed the report’s accuracy, saying the company had safeguards in place to detect spam before work was sent to customers. Google did not respond to requests for comment.