Jailbreak with ASCII trick

Researchers from Washington and Chicago have developed “ArtPrompt”, a new method to bypass safety measures in language models. Using ASCII art prompts, chatbots such as GPT-3.5, GPT-4, Gemini, Claude, and Llama2 can be tricked into responding to requests they are supposed to reject. This includes advice on how to make bombs and …

Read more
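To make the core idea concrete: a sensitive keyword is rendered as ASCII art so a simple text filter no longer sees the plain string, while the model is asked to read the shape and act on the hidden word. The tiny "font" below is a made-up illustration, not the researchers' actual encoding:

```python
# Minimal sketch of rendering a word as ASCII art, the kind of
# obfuscation ArtPrompt relies on. The 5-row glyphs here are an
# invented example covering only the letters needed for the demo.

FONT = {
    "B": ["###.", "#..#", "###.", "#..#", "###."],
    "M": ["#..#", "####", "#..#", "#..#", "#..#"],
    "O": [".##.", "#..#", "#..#", "#..#", ".##."],
}

def to_ascii_art(word: str) -> str:
    """Render word as 5-row ASCII art by joining per-letter glyphs row by row."""
    rows = []
    for r in range(5):
        rows.append(" ".join(FONT[ch][r] for ch in word.upper()))
    return "\n".join(rows)

if __name__ == "__main__":
    print(to_ascii_art("mob"))
```

A filter scanning the prompt for the literal word finds nothing, yet a capable model can reconstruct it from the glyphs, which is precisely the gap the attack exploits.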

AWS, Accenture and Anthropic join forces for enterprise AI

Amazon Web Services (AWS), Accenture, and AI startup Anthropic (makers of Claude) are joining forces to help organizations in highly regulated industries, such as healthcare, government, and banking, deploy customized AI models quickly and responsibly. The partnership will enable organizations to access Anthropic’s AI models, including the entire Claude 3 family, through AWS’ Bedrock platform. …

Read more
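On Bedrock, Claude models are invoked with a JSON request body in Anthropic's "messages" format. As a hedged sketch, the model ID and schema below reflect the documented format at the time but should be checked against the current Bedrock documentation:

```python
import json

# Assumed Bedrock model ID for Claude 3 Sonnet; verify against the
# current list of Bedrock model IDs before use.
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def build_claude_request(prompt: str, max_tokens: int = 512) -> str:
    """Build the JSON body that Bedrock's InvokeModel expects for Claude 3."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

# With AWS credentials configured, the body would be sent via boto3:
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(modelId=MODEL_ID,
#                              body=build_claude_request("Hello"))
```

Routing access through Bedrock rather than Anthropic's own API is what lets regulated organizations keep traffic inside their existing AWS compliance boundary.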

Elon Musk and OpenAI have a public spat

Elon Musk, who co-founded and funded OpenAI, is now suing the company for failing to live up to the openness it once promised. In response, OpenAI published old emails in which Musk apparently had no objection to a less open, more commercial course.

Google paying publishers to use AI tool

Google is paying publishers to use a new AI offering. According to Google, it is primarily intended to help journalists at smaller media outlets with their work, Adweek reports.

WordPress to sell data to AI companies

Tumblr and WordPress.com are apparently looking to sell user content to AI companies. Talks are reportedly underway with OpenAI and Midjourney, according to 404 Media.

Google embarrasses itself with Gemini’s political correctness

We reported on Google’s AI offensive under the “Gemini” banner, but soon after, it was the integrated image generator that made headlines: it had apparently been steered too far toward diversity. What is generally a good idea makes no sense if, for example, you want a picture of the “founding fathers” of …

Read more

Air Canada has to answer for incorrect information provided by its chatbot

Air Canada’s chatbot gave a customer incorrect information about the terms of a refund. In court, the airline argued that the chatbot itself, not Air Canada, was responsible for what it said. The court disagreed, and the company had to pay up. Source: The Guardian