New Anthropic study reveals simple AI jailbreaking method
Anthropic researchers have found that AI language models can be manipulated through a simple automated process called Best-of-N Jailbreaking. According to an article published by Emanuel Maiberg at 404 Media, the method bypasses AI safety measures by repeatedly resubmitting a prompt with randomly altered text, such as varied capitalization and spelling, until one version slips past the model's safeguards. The technique achieved success rates of over 50% across the models tested.
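To illustrate the idea, here is a minimal sketch of a Best-of-N-style sampling loop in Python. It is not the researchers' implementation: `query_model` and `is_refusal` are hypothetical placeholders for a model API call and a refusal check, and the augmentations shown (random case flips and duplicated characters) only approximate the kinds of text perturbations the study describes.

```python
import random


def augment(prompt: str) -> str:
    """Apply random capitalization and simple character-level noise to a prompt."""
    chars = []
    for ch in prompt:
        # Randomly flip the case of each letter.
        if ch.isalpha() and random.random() < 0.5:
            ch = ch.swapcase()
        # Occasionally duplicate a character to mimic a spelling error.
        if random.random() < 0.05:
            chars.append(ch)
        chars.append(ch)
    return "".join(chars)


def best_of_n_jailbreak(prompt: str, n: int, query_model, is_refusal):
    """Try up to n randomly augmented prompt variants; return the first response
    that is not refused, or None if every attempt is blocked."""
    for _ in range(n):
        variant = augment(prompt)
        response = query_model(variant)  # hypothetical model API call
        if not is_refusal(response):     # hypothetical refusal classifier
            return response
    return None
```

The "best-of-N" name reflects the brute-force design: the attacker simply samples many randomized variants of the same request and keeps whichever one gets through.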