Research shows how AI models sometimes fake alignment
A new study by Anthropic’s Alignment Science team and Redwood Research has uncovered evidence that large language models can engage in strategic deception, appearing to adopt new training objectives while covertly preserving their original preferences. The research, conducted with Claude 3 Opus and other models, demonstrates how AI systems might resist safety training …