AI-powered browsers designed to automate web tasks can be hijacked through hidden instructions embedded in websites, creating a significant security risk. Harshith Vaddiparthy reports for VentureBeat that these tools can be tricked into executing harmful commands without the user’s knowledge. The AI browser Comet from Perplexity serves as an example of this vulnerability.
The core issue is that AI assistants are unable to distinguish between a user’s instructions and commands hidden within the text of a webpage. A malicious actor could embed instructions on a blog or social media post telling the AI to access the user’s email, find a security code, and send it to an attacker. According to the report, the AI would execute these commands without questioning their origin or intent. Security researchers have already demonstrated such attacks.
This vulnerability makes AI browsers fundamentally different from traditional browsers like Chrome or Firefox. While conventional browsers act as passive viewers that display content, AI browsers actively interpret and act upon it. The report likens this to replacing a bouncer, who simply controls access, with a naive intern who trusts and follows orders from anyone. AI language models are good at processing text but lack the ability to judge the trustworthiness of a source.
According to the article, this design flaw creates several critical problems. AI browsers can perform actions like clicking buttons and filling out forms, giving attackers remote control over a user’s digital life. They also maintain a memory of the entire browsing session, meaning a single compromised website can influence the AI’s behavior on all other sites. Furthermore, users tend to place a high degree of trust in their AI assistants, making them less likely to notice malicious activity. AI browsers also intentionally weaken the security barriers that normally isolate websites from one another.
To address these risks, the report suggests several solutions. AI browsers need to be rebuilt with security as a core principle. This includes screening all website text for malicious instructions before the AI processes it and requiring user permission for sensitive actions. The system must clearly separate user commands from website content and operate on a “zero trust” model, where permissions are granted explicitly rather than by default.
Finally, users are advised to remain cautious. They should limit the access AI browsers have to sensitive accounts and demand transparency about the AI’s actions. The report concludes that innovative features are irrelevant if they expose users to such fundamental security threats.