OpenAI’s GPT store is full of spam
The startup apparently doesn't spend much time policing the individual chatbots in its store, as a TechCrunch investigation shows.
Researchers from the University of Washington and the University of Chicago have developed "ArtPrompt", a new method for bypassing safety measures in language models. Using ASCII art prompts, chatbots such as GPT-3.5, GPT-4, Gemini, Claude, and Llama 2 can be tricked into responding to requests they are supposed to reject. This includes advice on how to make bombs and …
Amazon Web Services (AWS), Accenture, and AI startup Anthropic (makers of Claude) are joining forces to help organizations in highly regulated industries, such as healthcare, government, and banking, deploy customized AI models quickly and responsibly. The partnership will enable organizations to access Anthropic’s AI models, including the entire Claude 3 family, through AWS’ Bedrock platform. …
Elon Musk co-founded and funded OpenAI. He is now suing the company for failing to live up to the openness it once promised. In response, OpenAI published old emails in which Musk apparently raised no objection to that course.
Google is paying publishers to use a new AI offering. According to Google, it is primarily intended to help journalists at smaller media outlets with their work, AdWeek reports.
Tumblr and WordPress.com are apparently looking to sell user content to AI companies. Talks are reportedly underway with OpenAI and Midjourney, according to 404 Media.
We reported on Google's AI offensive under the "Gemini" banner, but soon after, it was the integrated image generator that made the headlines: it had apparently been tuned too heavily toward diversity. What is generally a good idea makes little sense if, for example, you ask for a picture of the "founding fathers" of …
Air Canada’s chatbot gave a customer incorrect information about the terms of a refund. In court, the airline argued that the chatbot itself was responsible for what it said, not Air Canada. The court disagreed, and the company had to pay up. Source: The Guardian