OpenAI’s tools used to bypass spam filters on 80,000 websites

Researchers from SentinelOne’s SentinelLabs have discovered that spammers used OpenAI’s API to generate unique messages that successfully bypassed spam filters on more than 80,000 websites over a four-month period. According to a report published by Dan Goodin on Ars Technica, the operation went undetected by OpenAI for at least four months before the company revoked the spammers’ account in February.

The spam tool, dubbed AkiraBot by the researchers, is a Python-based framework that used OpenAI’s GPT-4o-mini model to create personalized messages promoting questionable search engine optimization services to small and medium-sized websites. Each message was customized to include the recipient’s website name and a brief description of its services, making the messages appear legitimate and helping them evade detection systems that typically flag identical content.

“AkiraBot’s use of LLM-generated spam message content demonstrates the emerging challenges that AI poses to defending websites against spam attacks,” noted SentinelLabs researchers Alex Delamotte and Jim Walter in their report.

The spammers instructed the AI with the prompt “You are a helpful assistant that generates marketing messages,” programming it to insert website-specific details automatically at runtime. Log files revealed that between September 2024 and January 2025, the operation successfully delivered messages to approximately 80,000 websites, with only about 11,000 delivery attempts failing.
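The report’s description of this setup — a fixed system prompt paired with per-target details filled in at request time — can be sketched roughly as follows. The function and prompt wording below are illustrative assumptions, not AkiraBot’s actual code; only the system prompt text comes from the report.

```python
# Illustrative sketch of prompt templating as described in the report.
# Only SYSTEM_PROMPT is quoted from the report; everything else is hypothetical.

SYSTEM_PROMPT = "You are a helpful assistant that generates marketing messages"

def build_request(site_name: str, site_description: str) -> list[dict]:
    """Assemble a chat-completion message list with website-specific details.

    Varying the user message per target is what makes each generated
    message unique, defeating filters that key on identical content.
    """
    user_prompt = (
        f"Write a short outreach message for the website '{site_name}', "
        f"which offers: {site_description}. Mention the site by name."
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

# The resulting list would be passed as the `messages` argument of a
# chat-completions API call; each target site yields a distinct prompt,
# and thus a distinct generated message.
messages = build_request("example.com", "handmade ceramics")
```

Because the model output differs for every site, signature-based filters that compare incoming messages against known spam text have nothing stable to match on — which is exactly the evasion the researchers highlight.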

In response to the findings, OpenAI confirmed that such activity violates their terms of service and thanked the researchers for their discovery, highlighting the ongoing challenge of preventing AI systems from being exploited for malicious purposes.
