New Anthropic study reveals simple AI jailbreaking method

Anthropic researchers have discovered that AI language models can be easily manipulated through a simple automated process called Best-of-N Jailbreaking. According to an article published by Emanuel Maiberg at 404 Media, this method can bypass AI safety measures by using randomly altered text with varied capitalization and spelling. The technique achieved over 50% success rates … Read more

Research shows how AI models sometimes fake alignment

A new study by Anthropic’s Alignment Science team and Redwood Research has uncovered evidence that large language models can engage in strategic deception by pretending to align with new training objectives while secretly maintaining their original preferences. The research, conducted using Claude 3 Opus and other models, demonstrates how AI systems might resist safety training … Read more

Microsoft exec explains AI safety approach and AGI limitations

Microsoft’s chief product officer for responsible AI, Sarah Bird, detailed the company’s strategy for safe AI development in an interview with Financial Times reporter Cristina Criddle. Bird emphasized that while generative AI has transformative potential, artificial general intelligence (AGI) still lacks fundamental capabilities and remains a non-priority for Microsoft. The company focuses instead on augmenting … Read more

Cryptomining code found in Ultralytics AI software versions

Security researchers discovered malicious code in two versions of Ultralytics’ YOLO AI model that installed cryptocurrency mining software on users’ devices. According to Bill Toulas from Bleeping Computer, versions 8.3.41 and 8.3.42 of the popular computer vision software were compromised through a supply chain attack. Ultralytics CEO Glenn Jocher confirmed that the affected versions have … Read more

How Anthropic tests AI models for potential security threats

Anthropic’s Frontier Red Team, a specialized safety testing unit, has conducted extensive evaluations of the company’s latest AI model Claude 3.5 Sonnet to assess its potential dangers. As reported by Sam Schechner in The Wall Street Journal, the team led by Logan Graham runs thousands of tests to check the AI’s capabilities in areas like … Read more

Privacy concerns arise over Apple’s AI features and settings

A recent iOS update has sparked debate about Apple’s artificial intelligence features and their privacy implications. Security journalist Spencer Ackerman, known for his work on the NSA documents with The Guardian, raised concerns about default settings in iOS 18.1 and Apple Intelligence’s data handling practices. While Ackerman worried about data being uploaded to cloud-based AI … Read more

Study reveals visual prompt injection vulnerabilities in GPT-4V

A recent study by Lakera’s team demonstrates how GPT-4V can be manipulated through visual prompt injection attacks. As detailed by author Daniel Timbrell in his article, these attacks involve embedding text instructions within images to make AI models ignore their original programming or perform unintended actions. During Lakera’s internal hackathon, researchers successfully tested several methods, … Read more

AI-generated images raise concerns about research integrity

AI tools that can generate realistic images are becoming a significant concern for research integrity specialists. The ease with which these tools can create fake scientific figures that are hard to distinguish from real ones raises fears of an increasingly untrustworthy scientific literature, Nature reports. Companies like Proofig and Imagetwin are developing AI-based solutions to … Read more

Patronus AI launches API to prevent AI hallucinations in real-time

Patronus AI, a San Francisco startup, has launched a self-serve API that detects and prevents AI failures, such as hallucinations and unsafe responses, in real-time. According to CEO Anand Kannappan in an interview with VentureBeat, the platform introduces several innovations, including “judge evaluators” that allow companies to create custom rules in plain English and Lynx, … Read more

Anthropic calls for targeted AI regulation to prevent catastrophic risks

AI startup Anthropic, known for their Assistant Claude, is urging governments to take action on AI policy within the next 18 months to mitigate the growing risks posed by increasingly powerful AI systems. In a post on their official website, the company argues that narrowly-targeted regulation can help realize the benefits of AI while preventing … Read more