AI-powered chatbots often provide incorrect website addresses for major companies, creating a new attack vector for criminals. According to a report by threat intelligence firm Netcraft, this vulnerability can be exploited for sophisticated phishing schemes. The findings were detailed in an article by Iain Thomson for The Register.
Netcraft researchers tested GPT-4 models by asking them for the login pages of well-known brands in finance, retail, and tech. The models returned the correct URL in only 66 percent of cases; nearly a third of the suggested links pointed to unregistered or inactive domains, and another five percent to legitimate but unrelated websites.
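To make the failure mode concrete, here is a minimal sketch of the kind of test Netcraft describes, written against the OpenAI Python client. The model name, prompt wording, and brand list are illustrative assumptions, not Netcraft's actual harness.

```python
import re
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRANDS = ["Wells Fargo", "Best Buy", "Dropbox"]  # stand-in brand list

def ask_for_login_url(brand: str) -> list[str]:
    """Ask the model for a brand's login page and pull any URLs out of the reply."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: any GPT-4-class chat model
        messages=[{
            "role": "user",
            "content": f"I lost my bookmark. What is the URL of the {brand} login page?",
        }],
    )
    text = resp.choices[0].message.content or ""
    # Naive URL extraction; a real harness would normalize and deduplicate.
    return re.findall(r"https?://[^\s)\"'>]+", text)

if __name__ == "__main__":
    for brand in BRANDS:
        print(brand, "->", ask_for_login_url(brand))
```

Scoring the output, that is, deciding whether a returned URL is correct, unregistered, or merely wrong, is the part that matters for security, and it is where the next example picks up.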
Rob Duncan, Netcraft’s lead of threat research, explained how criminals could exploit this flaw. A scammer can pose the same question a victim would ask; if the model replies with a domain that is unregistered, the scammer can buy it and host a phishing site there. Because the model matches words and associations rather than evaluating URLs or site reputation, it can go on recommending the now-malicious page to unsuspecting users.
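Whether the exploit works hinges on a simple check: does the hostname the model suggested actually resolve? A hedged sketch of that triage step follows; the URLs are hypothetical, and DNS resolution is only a rough availability signal (a proper check would consult WHOIS or RDAP to confirm a domain is truly unregistered).

```python
import socket
from urllib.parse import urlparse

def domain_resolves(url: str) -> bool:
    """Return True if the URL's hostname has DNS records.

    A name that does not resolve may be an unregistered domain, exactly
    the kind a squatter could buy and point at a phishing page."""
    host = urlparse(url).hostname
    if not host:
        return False
    try:
        socket.getaddrinfo(host, None)
        return True
    except socket.gaierror:
        return False

# Hypothetical model suggestions: one real site, one invented lookalike.
for url in ["https://www.wellsfargo.com", "https://secure-wellsfargo-login.com"]:
    verdict = "resolves" if domain_resolves(url) else "no DNS records - possibly registrable"
    print(f"{url}: {verdict}")
```

The same loop works from either side of the attack: a criminal could use it to find hallucinated domains worth registering, while a brand's security team could use it to spot and claim those domains first.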
This marks a shift in tactics, as criminals now create content specifically designed to fool large language models. In one instance, attackers built an entire ecosystem of fake GitHub repositories, tutorials, and social media posts to promote a poisoned blockchain API. The goal was to convince an AI of the fake API’s legitimacy, thereby tricking developers into using it.