Why ChatGPT & Co. sometimes fail spectacularly at certain tasks

In a previous Smart Content Report, I featured a funny illustrated guide generated by ChatGPT’s Dall-E. I find such “failures” interesting because they can reveal fundamental problems: for example, we are still a long way from an AI that actually understands the world around it (a “General World Model”). At the moment, these tools …

Read more

AI vs copyright

This article by Tim O’Reilly discusses the complex copyright issues surrounding the training and use of AI. He argues that instead of litigation, a solution must be found that benefits both AI developers and creators. O’Reilly suggests that AI companies should respect copyrights, provide attribution, and pay for results rather than training.

Understanding the AI hype cycle

In his article for VentureBeat, Samir Kumar, co-founder of Touring Capital, analyzes the current AI hype cycle. He cautions against jumping to conclusions and reminds us of previous technology waves, such as the smartphone revolution. Kumar emphasizes that the first innovators are often not the long-term winners. He advises founders and investors to pay particular …

Read more

AI is a tool, not a replacement for critical thinking

The article “Turning the Tables on AI” proposes an innovative approach to artificial intelligence. Instead of using AI as a substitute for your own thinking, the author argues for using it as a tool to promote critical thinking. He provides practical tips on how to use ChatGPT as an idea generator, question poser, and editor …

Read more

AI integrations fail to generate sales

Despite numerous AI integrations into applications such as Salesforce or Adobe Photoshop, these features have apparently not yet generated significant revenue, Bloomberg reports. Many companies are still unsure about appropriate pricing models. Meanwhile, hardware and cloud vendors are benefiting far more from the AI boom.

Study: Open weights is not the same as open source

Many AI models that power chatbots are advertised as “open source” but do not fully release their code and training data. A new study shows that many large companies instead offer “open weights”: researchers can use the models, but have no access to the underlying data and can’t make fundamental changes to …

Read more

Even advanced AI still struggles as an agent

A new benchmark test from Sierra shows that even advanced language models such as GPT-4o still struggle with more complicated tasks in everyday scenarios, achieving a success rate of less than 50 percent. The test, called TAU-bench, is designed to help developers evaluate the performance of AI agents in realistic situations, taking into account factors …

Read more

PSA: AI detectors are “neither accurate nor reliable”

Many services claim to detect AI-generated text with “99% accuracy” – without providing any proof. At the same time, other services claim to rewrite AI text so that no detector can recognize it. Both claims cannot be true at once. Granted: I can …

Read more

New sources of better AI training data

Large Language Models (LLMs) are no longer trained solely on data scraped from the Internet: that once-vast pool has reached its limits. To advance LLMs, companies like OpenAI are turning to new types of data: targeted annotation and filtering improve the quality …

Read more

The inglorious story of an AI-powered news portal

BNN Breaking, a news site with millions of readers, an international team of journalists, and a partnership with Microsoft, turned out to be a source of errors and misinformation. Former employees say the site relied heavily on AI-generated content, which was often published without sufficient verification. This led to complaints from people who were misidentified …

Read more
