Study: Open weights is not the same as open source

Many AI models that power chatbots are advertised as “open source,” yet their code and training data are not fully released. A new study shows that many large companies describe their models as “open weights,” meaning that researchers can use them but have no access to the underlying training data and cannot make fundamental changes to …

Read more

Even advanced AI still struggles as an agent

A new benchmark test from Sierra shows that even advanced language models such as GPT-4o still struggle with more complicated tasks in everyday scenarios, achieving a success rate of less than 50 percent. The test, called TAU-bench, is designed to help developers evaluate the performance of AI agents in realistic situations, taking into account factors …

Read more

PSA: AI detectors are “neither accurate nor reliable”

Many services claim to recognize AI-generated text with “99% accuracy” – without providing any proof. At the same time, other services claim to rewrite AI-generated text so that no detector can recognize it. Both claims cannot be true at the same time. Granted: I can …

Read more

New sources of better AI training data

Large language models (LLMs) are no longer trained solely on data from the Internet; that vast data pool has reached its limits. To advance LLMs, companies like OpenAI are turning to new types of data: targeted annotation and filtering improve the quality …

Read more

The inglorious story of an AI-powered news portal

BNN Breaking, a news site with millions of readers, an international team of journalists, and a partnership with Microsoft, turned out to be a source of errors and misinformation. Former employees say the site relied heavily on AI-generated content, which was often published without sufficient verification. This led to complaints from people who were misidentified …

Read more

Behind the scenes at Anthropic (Claude): safety as a priority

In an in-depth article, Time Magazine looks at AI company Anthropic and its efforts to make safety a top priority. Co-founder and CEO Dario Amodei made a conscious decision not to release the chatbot Claude early in order to avoid potential risks. Anthropic’s mission is to empirically determine what risks actually exist by building and researching powerful …

Read more

OpenAI insiders warn of dangerous corporate culture

In an open letter, current and former OpenAI employees warn of a “reckless” race for supremacy in artificial intelligence. They call for sweeping changes in the AI industry, including more transparency and better protection for whistleblowers. The signatories criticize a culture of secrecy and profit at any cost at OpenAI. The company …

Read more

Researchers work on better local AI

Researchers are making great strides in developing 1-bit LLMs that achieve performance similar to their full-precision counterparts while using significantly less memory and power. Because such models need less processing power and energy, they could open the door to more complex AI applications on everyday devices such as smartphones.
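The basic idea – replacing full-precision weights with values from {-1, 0, +1} plus a scale factor – can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration loosely inspired by ternary approaches such as BitNet b1.58, not the researchers’ actual method; the per-tensor scaling rule and the ideal-packing size estimate are assumptions for demonstration only.

```python
# Illustrative sketch: ternary ("1.58-bit") weight quantization of one toy layer.
# Assumptions: a simple per-tensor scale (mean absolute value) and ideal bit packing.
import numpy as np

def ternarize(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Quantize a float weight matrix to {-1, 0, +1} plus one scale factor."""
    scale = np.mean(np.abs(weights)) + 1e-8          # per-tensor scale (assumption)
    q = np.clip(np.round(weights / scale), -1, 1)    # ternary values
    return q.astype(np.int8), float(scale)

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct an approximate float matrix from the ternary weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4096, 4096)).astype(np.float32)  # one toy weight matrix

q, s = ternarize(w)
w_hat = dequantize(q, s)

fp32_bytes = w.nbytes            # 32 bits per weight
ternary_bits = w.size * 1.58     # ~log2(3) bits per weight with ideal packing
print(f"fp32 size:      {fp32_bytes / 2**20:.1f} MiB")
print(f"ternary size:   {ternary_bits / 8 / 2**20:.1f} MiB (ideal packing)")
print(f"mean abs error: {np.mean(np.abs(w - w_hat)):.4f}")
```

The memory saving comes purely from storing roughly 1.58 bits per weight instead of 32; real systems also exploit the fact that multiplications by -1, 0, and +1 reduce to additions and skips.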

Two-thirds of companies use generative AI regularly

A new survey from McKinsey shows that 65% of companies are already using generative AI on a regular basis, and the majority expect the technology to lead to significant changes in their industries. However, 44% of respondents have also experienced negative consequences from using generative AI, such as inaccurate results or cybersecurity issues, which is …

Read more

AI fear in 1927

In his 1927 film “Metropolis,” Fritz Lang showed an artificial intelligence that frightened people. In the film, which depicts a future with stark class divisions, a robot known as the “Maschinenmensch” causes unrest. The robot, which is initially used as a worker, later takes the form of a young woman named Maria and provokes an …

Read more