Silicon Valley faces growing disconnect between AI builders and skeptical public

Silicon Valley’s AI enthusiasts are frustrated with public skepticism, but they may be missing the point. While industry insiders celebrate what they see as near-miraculous advances, many ordinary people view AI progress with anxiety or indifference. Sharon Goldman reports for Fortune that the disconnect stems from fundamentally different perspectives. What AI builders frame as thrilling …

Read more

39C3 Talk: How Wikipedia battles AI-generated articles

Mathias Schindler, a longtime Wikipedia contributor and co-founder of Wikimedia Germany, reports on a troubling discovery at the 39C3 conference in Hamburg. While developing a tool to check ISBN checksums in German Wikipedia, he uncovered a significant problem: articles containing completely fabricated literature references generated by large language models. The issue emerged when Schindler found …

Read more
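The consistency check at the heart of Schindler's discovery is the public ISBN-13 check-digit rule. As a minimal illustrative sketch (not the actual Wikipedia tool; the function name is my own), assuming the standard alternating 1/3 weighting:

```python
# Illustrative sketch: validating an ISBN-13 checksum, the kind of
# mechanical consistency check that can surface fabricated references.
def isbn13_is_valid(isbn: str) -> bool:
    digits = [c for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    # ISBN-13 rule: digits are weighted alternately 1 and 3, and the
    # weighted sum must be divisible by 10.
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(digits))
    return total % 10 == 0

print(isbn13_is_valid("978-3-16-148410-0"))  # well-formed example ISBN
print(isbn13_is_valid("978-3-16-148410-1"))  # wrong check digit
```

An invented reference will often fail even this cheap test, which is what makes a checksum scan a useful first filter for LLM-fabricated citations.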

How AI companies are teaching language models to admit their mistakes

Two major tech companies are tackling one of artificial intelligence’s most persistent problems: getting AI systems to stop making things up or hiding their mistakes. OpenAI and Amazon have each developed distinct approaches to make large language models more honest and reliable. OpenAI’s truth serum: OpenAI researchers introduced a technique called “confessions” that functions like …

Read more

Oscar winners and A-list talent unite against tech companies dictating Hollywood’s AI future

Hollywood professionals have formed the Creators Coalition on AI (CCAI), bringing together more than 500 actors, filmmakers, writers, and below-the-line talent to establish ethical guidelines for artificial intelligence use in entertainment. The initiative represents a response to what many see as unchecked AI adoption in the industry. The coalition’s 18 founding members include Oscar winners …

Read more

RSL 1.0 becomes official standard for AI content licensing

A new open standard aims to give publishers control over how artificial intelligence companies use their content. Really Simple Licensing 1.0 allows websites to set machine-readable licensing and compensation rules for AI systems. The RSL Collective developed the standard with backing from major internet companies. More than 1,500 organizations now support it, including The Associated …

Read more

Expert warns: AI denial is becoming a serious enterprise risk

An expert is warning that dismissing artificial intelligence progress as a “bubble” or its output as “slop” is a dangerous form of denial. This growing public sentiment obscures real capability gains and leaves society unprepared for the risks of a major technological shift. Louis Rosenberg, a longtime AI researcher, writes for VentureBeat that this negative …

Read more

Opinion: Large language models are useful but untrustworthy

Large language models (LLMs) are powerful tools that generate text based on statistical probabilities, not an understanding of truth. This makes them essentially “bullshitters” that are indifferent to facts, a core design feature that users must understand to use them safely and effectively. Matt Ranger, the head of machine learning at the search company Kagi, …

Read more

Opinion: AI-generated content is causing a “trust collapse”

The proliferation of artificial intelligence is leading to a collapse of trust in digital communication, particularly in sales and marketing. Author Arnon Shimoni writes that the near-zero cost of creating content has flooded inboxes and social media with AI-generated messages. This makes it almost impossible for people to distinguish genuine human outreach from automated communication. …

Read more

Kagi Search introduces community tool to fight AI-generated ‘slop’

The search engine Kagi has launched a feature called SlopStop to combat low-quality, AI-generated content. The system allows users to flag and help downrank what the company calls “AI slop” in web, image, and video search results. Kagi reports on its company blog that it defines AI slop as deceptive or low-value content created to …

Read more

How Common Crawl provides paywalled news articles for AI training

The nonprofit Common Crawl Foundation is supplying AI companies with copyrighted news articles scraped from behind paywalls, enabling firms like OpenAI and Google to train their large language models on high-quality journalism. The organization publicly states that it only collects freely available content. Alex Reisner reports for The Atlantic that this claim is false. According …

Read more