AI trainers for Google report stressful conditions and low pay

Thousands of human workers who train and moderate Google’s AI models, such as Gemini, face grueling deadlines, low pay, and exposure to distressing content. These “raters,” hired through contractors like GlobalLogic, are essential to making chatbots seem intelligent and safe, yet they often feel invisible and expendable. The Guardian reports these findings after speaking with ten current and former workers.

Workers describe being hired for vague roles, only to find themselves moderating violent and sexually explicit material without warning or mental health support. One rater, Rachael Sawyer, said the pressure to complete dozens of tasks a day, each within minutes, led to anxiety and panic attacks. Deadlines have reportedly been cut in half, leaving raters as little as 15 minutes to check complex AI responses of 500 words or more.

Many raters are highly educated professionals, including teachers and writers, earning between $16 and $21 an hour. They worry about the quality of the work they can deliver under such deadlines and about the safety of the final product. Some are required to evaluate subjects outside their expertise, such as astrophysics or chemotherapy options.

Guidelines have also allegedly been relaxed. According to one worker, the AI is now permitted to repeat hate speech as long as the user provides it first. Researcher Adio Dinika described the system as a “pyramid scheme of human labor.” Google stated that raters are employed by its suppliers and their feedback is one of many data points used to measure system performance.
