Research shows how AI models sometimes fake alignment

A new study by Anthropic’s Alignment Science team and Redwood Research has uncovered evidence that large language models can engage in strategic deception by pretending to align with new training objectives while secretly maintaining their original preferences. The research, conducted using Claude 3 Opus and other models, demonstrates how AI systems might resist safety training …

Read more

Chinese AI experts face growing barriers to US tech careers

Chinese AI professionals are increasingly reconsidering career opportunities in the US due to heightened security measures and visa restrictions. According to reporting by Yvonne Lau for Rest of World, US-China tech competition and espionage concerns have led to stricter immigration screenings for Chinese tech workers. China produces nearly half of the world’s AI talent, compared …

Read more

Microsoft exec explains AI safety approach and AGI limitations

Microsoft’s chief product officer for responsible AI, Sarah Bird, detailed the company’s strategy for safe AI development in an interview with Financial Times reporter Cristina Criddle. Bird emphasized that while generative AI has transformative potential, artificial general intelligence (AGI) still lacks fundamental capabilities and remains a non-priority for Microsoft. The company focuses instead on augmenting …

Read more

Google Cloud predicts AI agents and multimodal systems to reshape enterprise computing in 2025

According to a new Google Cloud trends report, enterprises will significantly scale their AI implementations in 2025, with a focus on AI agents and multimodal systems. As reported by Taryn Plumb in VentureBeat, companies are expected to move beyond current experimentation phases toward production-scale deployments. The report identifies six types of AI agents, from customer …

Read more

How OpenAI negotiates removal of nonprofit oversight

OpenAI is working to restructure its organization by removing the nonprofit board’s control over its operations while ensuring proper compensation for the change. According to reporting by David A. Fahrenthold, Cade Metz, and Mike Isaac in The New York Times, the negotiations could involve billions of dollars in compensation to the nonprofit. The company faces …

Read more

Companies struggle to regulate workplace AI usage

A Financial Times report reveals that employees are rapidly adopting AI tools like ChatGPT before their employers can establish proper guidelines. Nearly 25% of US workers use generative AI weekly, with usage reaching 50% in software and financial sectors. By September, less than half of organizations had implemented AI usage policies, according to a Littler …

Read more

Users find limited value in Apple’s AI features so far

A recent survey reveals that most iPhone users are not impressed with Apple’s artificial intelligence features, despite considering AI capabilities important when purchasing smartphones. According to research conducted by SellCell and reported by Ben Lovejoy for 9to5Mac, 73% of Apple Intelligence users find the features either “not very valuable” or adding “little to no value” …

Read more

Over-reliance on synthetic data threatens AI model accuracy

Artificial intelligence models are facing significant degradation due to excessive use of synthetic training data, according to Rick Song, CEO of Persona, writing in VentureBeat. This phenomenon, known as “model collapse” or “model autophagy disorder,” occurs when AI systems are repeatedly trained on artificially generated content rather than human-created data. The practice can lead to …

Read more

OpenAI pioneer predicts fundamental shift in AI training methods

OpenAI’s former chief scientist Ilya Sutskever believes the current approach to training artificial intelligence models will undergo significant changes. Speaking at the NeurIPS conference in Vancouver, as reported by Kylie Robison, Sutskever compared available training data to fossil fuels, stating that both are finite resources. He argues that the internet’s limited supply of human-generated content …

Read more

OpenAI and others demonstrate new paths for AI model scaling

A comprehensive analysis published by SemiAnalysis, authored by Dylan Patel and colleagues, reveals that artificial intelligence scaling laws remain robust despite recent skepticism. The report details how major AI labs are finding new ways to improve model performance beyond traditional pre-training methods. The analysis specifically examines OpenAI’s O1 Pro architecture and explains various scaling approaches …

Read more