Anthropic’s new AI safety system blocks most jailbreak attempts
Anthropic has unveiled “constitutional classifiers,” a new security system designed to prevent AI models from generating harmful content. According to research published by Anthropic and reported by Taryn Plumb in VentureBeat, the system blocked 95.6% of jailbreak attempts on the company’s Claude 3.5 Sonnet model. Anthropic tested the system with 10,000 synthetic jailbreaking prompts in …