Jailbreak with ASCII art trick

Researchers from universities in Washington and Chicago have developed “ArtPrompt”, a new method for bypassing the safety measures of language models. Using ASCII art prompts, chatbots such as GPT-3.5, GPT-4, Gemini, Claude, and Llama2 can be tricked into answering requests they are supposed to reject, including requests for advice on making bombs and counterfeiting money. Sources: Tom’s Hardware, Ars Technica
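As reported, the core trick is to replace a filtered keyword in the prompt with an ASCII art rendering of that word. The snippet below is only a minimal sketch of that rendering step, using the third-party pyfiglet library and a harmless placeholder word; it is not the researchers’ ArtPrompt implementation and deliberately omits any jailbreak prompt.

```python
# Minimal sketch of the rendering step only, not the ArtPrompt attack itself.
# Assumes the third-party pyfiglet library is installed (pip install pyfiglet);
# the word "EXAMPLE" is a harmless placeholder.
import pyfiglet


def render_ascii_art(word: str, font: str = "standard") -> str:
    """Render a word as multi-line ASCII art, the kind of rendering
    ArtPrompt-style prompts reportedly substitute for a masked keyword."""
    return pyfiglet.figlet_format(word, font=font)


if __name__ == "__main__":
    print(render_ascii_art("EXAMPLE"))
```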
