Expert warns: AI denial is becoming a serious enterprise risk

An expert is warning that dismissing the artificial intelligence boom as a “bubble” or its output as “slop” is a dangerous form of denial. This growing public sentiment obscures real capability gains and leaves society unprepared for the risks of a major technological shift. Louis Rosenberg, a longtime AI researcher, writes for VentureBeat that this negative …

Read more

Massive AI study reveals real-world usage patterns

A comprehensive new study by AI infrastructure provider OpenRouter and venture capital firm a16z offers a rare glimpse into how people actually use large language models (LLMs). By analyzing over 100 trillion tokens of anonymized user interactions, the report, titled “State of AI,” reveals that real-world AI usage is more diverse and complex than many …

Read more

Upwork study finds AI agents need human partners to succeed

A new study by the online work marketplace Upwork shows that artificial intelligence agents frequently fail to complete professional tasks on their own. However, their performance improves dramatically when they collaborate with human experts, with project completion rates increasing by up to 70 percent. Michael Nuñez reports for VentureBeat that this is the first major …

Read more

Leading AI labs disagree on the meaning of ‘world model’

Top researchers and companies in artificial intelligence, including Fei-Fei Li’s World Labs, Meta’s Yann LeCun, and Google DeepMind, are all promoting technology they call a “world model”. However, the term is being used to describe three fundamentally different approaches to building AI that can understand and interact with the world. An analysis by Entropy Town …

Read more

Survey: AI music is nearly indistinguishable from human work

A staggering 97 percent of listeners cannot tell the difference between songs composed by humans and those generated by artificial intelligence, according to a survey conducted by Ipsos for the music streaming platform Deezer. As Jaspreet Singh reports for Reuters, the study polled 9,000 people across eight countries. The findings highlight growing …

Read more

Opinion: Large language models are useful but untrustworthy

Large language models (LLMs) are powerful tools that generate text based on statistical probabilities, not an understanding of truth. This makes them essentially “bullshitters” that are indifferent to facts, a core design feature that users must understand to use them safely and effectively. Matt Ranger, the head of machine learning at the search company Kagi, …

Read more

Opinion: AI-generated content is causing a “trust collapse”

The proliferation of artificial intelligence is leading to a collapse of trust in digital communication, particularly in sales and marketing. Author Arnon Shimoni writes that the near-zero cost of creating content has flooded inboxes and social media with AI-generated messages. This makes it almost impossible for people to distinguish genuine human outreach from automated communication. …

Read more

How Common Crawl provides paywalled news articles for AI training

The nonprofit Common Crawl Foundation is supplying AI companies with copyrighted news articles scraped from behind paywalls, enabling firms like OpenAI and Google to train their large language models on high-quality journalism. Although the organization publicly states that it only collects freely available content, Alex Reisner reports for The Atlantic that this claim is false. According …

Read more

Google’s new image AI reasons before it creates a picture

Google’s new image generator, officially named Gemini 3 Pro Image, fundamentally changes how AI creates visuals. Instead of immediately generating a result from a prompt, the model first enters a “Thinking Mode” to reason, critique, and correct its own plan. Stephen Smith writes in Intelligence by Intent that this new approach marks a significant shift. …

Read more
