AI chatbots create new opportunities for phishing attacks

AI-powered chatbots often provide incorrect website addresses for major companies, creating a new attack vector for criminals. According to a report by threat intelligence firm Netcraft, this vulnerability can be exploited for sophisticated phishing schemes. The findings were detailed in an article by Iain Thomson for The Register. Netcraft researchers tested GPT-4 models by asking …
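
One way to blunt this attack vector is to treat any model-suggested URL as untrusted until it is checked against a vetted list of brand domains. The Python sketch below is a minimal illustration of that idea; the allowlist, the example domains, and the helper name are hypothetical and are not drawn from Netcraft's report.

```python
from urllib.parse import urlparse

# Hypothetical, hand-curated mapping of brands to their real domains.
# In practice this would come from a vetted data source, not be hardcoded.
KNOWN_GOOD_DOMAINS = {
    "wells fargo": {"wellsfargo.com"},
    "bank of america": {"bankofamerica.com"},
}

def is_plausible_brand_url(brand: str, suggested_url: str) -> bool:
    """Return True only if the URL's host is a known domain for the brand.

    A chatbot's plausible-sounding guess like 'wellsfargo-login.com'
    fails this check even though it looks convincing to a human.
    """
    host = urlparse(suggested_url).hostname or ""
    host = host.lower().removeprefix("www.")
    allowed = KNOWN_GOOD_DOMAINS.get(brand.lower(), set())
    # Accept the exact registered domain or a subdomain of it, nothing else.
    return any(host == d or host.endswith("." + d) for d in allowed)

print(is_plausible_brand_url("Wells Fargo", "https://www.wellsfargo.com/login"))  # True
print(is_plausible_brand_url("Wells Fargo", "https://wellsfargo-login.com"))      # False
```

The exact-match comparison is the point: lookalike phishing domains are engineered to pass a human's glance but fail a string comparison against the registered domain.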

Read more

Open-source tool Anubis offers websites protection from AI scrapers

An open-source program named Anubis is helping website operators protect their sites from being overwhelmed by AI data scrapers. According to a report by Emanuel Maiberg for 404 Media, the developer Xe Iaso created the tool in her free time after her own server was repeatedly crashed by a bot harvesting data for AI models. …
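
Anubis works by making each visitor's browser solve a small proof-of-work puzzle before the site is served, a cost that is trivial for one human but adds up quickly for a scraper fetching thousands of pages. The sketch below shows the general hashcash-style idea in Python for illustration only; it is not Anubis's actual implementation, which is written in Go and runs the client side of the challenge in browser JavaScript.

```python
import hashlib
import os

def issue_challenge() -> tuple[str, int]:
    """Server side: hand out a random challenge and a difficulty level."""
    return os.urandom(16).hex(), 16  # difficulty = required leading zero bits

def solve(challenge: str, difficulty: int) -> int:
    """Client side: burn CPU until sha256(challenge + nonce) starts with
    enough zero bits. Expected cost is about 2**difficulty hash attempts."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if int(digest, 16) >> (256 - difficulty) == 0:
            return nonce
        nonce += 1

def verify(challenge: str, difficulty: int, nonce: int) -> bool:
    """Server side: checking a claimed solution costs a single hash."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return int(digest, 16) >> (256 - difficulty) == 0

challenge, difficulty = issue_challenge()
nonce = solve(challenge, difficulty)
assert verify(challenge, difficulty, nonce)
```

The asymmetry is the whole design: verifying a solution costs the server one hash, while finding one costs the client roughly 2^difficulty attempts per page request.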

Read more

AI-generated images fuel online plant scams

Scammers are increasingly using artificial intelligence to create and sell images of plants that do not exist. These fraudulent online listings target plant enthusiasts with pictures of exotic and impossibly perfect flowers or succulents, luring them with low prices. A blog post by the retailer Bob’s Market explains that customers who purchase these items may …

Read more

AI content creator targets older women on social media with fake content

A self-described SEO specialist, Jesse Cunningham, has openly discussed how he uses AI to produce fake content targeting older women on Facebook and Pinterest. According to reporting by Maggie Harrison Dupré for Futurism, Cunningham creates large volumes of AI-generated articles and images on topics ranging from houseplants to recipes, attributing them to fictional bloggers with …

Read more

Report: OpenAI reduces safety testing amid competition pressure

OpenAI has significantly shortened the safety testing period for its new AI models, prompting concerns about insufficient safeguards. According to a Financial Times report by Cristina Criddle, testers now have just days to evaluate models, compared with the several months they were given previously. Eight people familiar with OpenAI’s testing processes indicated that evaluations have become less thorough as the …

Read more

OpenAI’s tools used to bypass spam filters on 80,000 websites

Researchers at SentinelOne’s SentinelLabs have discovered that spammers used OpenAI’s chatbot to generate unique messages that successfully bypassed spam filters on more than 80,000 websites over a four-month period. According to a report by Dan Goodin on Ars Technica, the operation went undetected by OpenAI for at least four months before the company revoked …
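
Part of what made the messages effective is that many simple spam defenses key on repetition: the same body, hashed, appearing across many submissions. The toy filter below is an assumption about that general class of defense, not a description of the specific filters the spammers defeated; a chatbot that rewrites the pitch uniquely for every target site gives each submission a fresh fingerprint, so a counter like this never trips.

```python
import hashlib
from collections import Counter

class DuplicateBodyFilter:
    """Toy duplicate-content filter: flag a message once its normalized
    body has been seen more than `threshold` times across submissions."""

    def __init__(self, threshold: int = 5):
        self.threshold = threshold
        self.seen = Counter()

    def is_spam(self, body: str) -> bool:
        # Normalize lightly so trivial case/whitespace tweaks don't evade the hash.
        normalized = " ".join(body.lower().split())
        fingerprint = hashlib.sha256(normalized.encode()).hexdigest()
        self.seen[fingerprint] += 1
        return self.seen[fingerprint] > self.threshold

f = DuplicateBodyFilter(threshold=2)
copies = ["Buy our SEO service now!"] * 4   # identical copies trip the filter
print([f.is_spam(m) for m in copies])       # [False, False, True, True]
# A model rewriting the pitch uniquely per site yields a new fingerprint
# every time, so this kind of counter never reaches its threshold.
```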

Read more

AI-generated content overloads social media through brute force

Jason Koebler, co-founder of 404 Media, reports that generative AI is being used as a “brute force attack” on social media algorithms, flooding platforms with low-quality content at unprecedented scale. In his article, Koebler explains that AI creators can produce dozens of posts in minutes, allowing them to quickly identify and exploit what performs well …

Read more

Google’s Gemini model used to remove image watermarks

A recent discovery shows that Google’s Gemini 2.0 Flash AI model can remove watermarks from images, including those from Getty Images and other stock photo providers. According to reporting by Kyle Wiggers for TechCrunch, users on social media platforms have been sharing examples of this controversial capability. Unlike some competing AI models such as Anthropic’s …

Read more

AI voice cloning tools lack effective safeguards against misuse

Most AI voice cloning services have inadequate protections against nonconsensual voice impersonation, according to a Consumer Reports investigation. The study examined six leading publicly available tools and found that five had safeguards that could be easily bypassed. As reported by NBC News, four services (ElevenLabs, Speechify, PlayHT, and Lovo) merely require checking a box confirming authorization, while Resemble …

Read more

AI fraud detection startup secures $5 million funding

AI or Not, a platform that specializes in detecting AI-generated content and deepfakes, has raised $5 million in seed funding. As reported by Dean Takahashi for VentureBeat, the investment round was led by Foundation Capital with participation from GTMFund, Plug and Play, and angel investors. The company’s technology identifies AI-generated content across images, audio, and video to prevent …

Read more