Google makes new $1bn investment in AI company Anthropic

Google has invested an additional $1 billion in AI company Anthropic, according to a Financial Times report by George Hammond, Madhumita Murgia, and Arash Massoudi. This new investment builds on Google’s previous $2 billion commitment to the AI startup. Anthropic, known for its Claude AI models, is approaching a $60 billion valuation and is close … Read more

Anthropic seeks $2 billion funding at $60 billion valuation

AI startup Anthropic is negotiating a $2 billion funding round that would value the company at $60 billion. According to Wall Street Journal reporter Berber Jin, Lightspeed Venture Partners is leading the investment. The deal would make Anthropic the fifth most valuable U.S. startup, following its previous $18 billion valuation in 2023. The company’s chatbot … Read more

Anthropic agrees to restrict AI access to copyrighted lyrics

Major music publishers have reached an agreement with AI company Anthropic regarding the use of copyrighted song lyrics. According to reporting by Winston Cho for The Hollywood Reporter, the deal requires Anthropic to maintain existing safeguards that prevent its AI chatbot Claude from accessing or generating protected lyrics. The lawsuit, filed in 2023 by Universal … Read more

AI assistant Claude drives major changes in software development

Anthropic’s AI assistant Claude has become a significant force in the global software development market, with coding-related revenue increasing by 1,000% in three months. According to an article by Michael Nuñez in VentureBeat, software development now represents more than 10% of all Claude interactions. The AI tool can analyze up to 200,000 tokens of context … Read more

New Anthropic study reveals simple AI jailbreaking method

Anthropic researchers have discovered that AI language models can be easily manipulated through a simple automated process called Best-of-N Jailbreaking. According to an article published by Emanuel Maiberg at 404 Media, this method can bypass AI safety measures by using randomly altered text with varied capitalization and spelling. The technique achieved over 50% success rates … Read more

Anthropic shares key insights on building effective AI agents

Anthropic has published detailed guidance on developing effective AI agents with large language models (LLMs), drawing from their experience working with numerous teams across industries. According to authors Erik Schluntz and Barry Zhang, the most successful implementations rely on simple, composable patterns rather than complex frameworks. The company distinguishes between two types of agentic systems: … Read more

Research shows how AI models sometimes fake alignment

A new study by Anthropic’s Alignment Science team and Redwood Research has uncovered evidence that large language models can engage in strategic deception by pretending to align with new training objectives while secretly maintaining their original preferences. The research, conducted using Claude 3 Opus and other models, demonstrates how AI systems might resist safety training … Read more

Claude chatbot gains popularity among tech professionals

Anthropic’s AI chatbot Claude is becoming increasingly popular among technology professionals in San Francisco, according to a report by Kevin Roose in The New York Times. Users praise the chatbot for its emotional intelligence and ability to provide thoughtful advice on various topics, from legal matters to personal relationships. While Claude has fewer users than … Read more

Anthropic’s faster AI model Claude 3.5 Haiku available to all users

Anthropic has made its latest AI model, Claude 3.5 Haiku, available to all users through its web and mobile chatbot platforms. According to VentureBeat reporter Carl Franzen, the model was previously accessible only to developers via API since October 2024. The new model features a 200,000-token context window, surpassing OpenAI’s GPT-4 capacity. Third-party benchmarking organization … Read more

How Anthropic tests AI models for potential security threats

Anthropic’s Frontier Red Team, a specialized safety testing unit, has conducted extensive evaluations of the company’s latest AI model Claude 3.5 Sonnet to assess its potential dangers. As reported by Sam Schechner in The Wall Street Journal, the team led by Logan Graham runs thousands of tests to check the AI’s capabilities in areas like … Read more