Test shows compliance problems of leading AI models

A new tool for checking compliance with the EU AI Act has revealed weaknesses in leading AI models. As Martin Coulter reports for Reuters, some models from major tech companies fall short in areas such as cybersecurity resilience and discriminatory output. The “Large Language Model Checker” developed by LatticeFlow AI evaluates AI models across dozens …

Read more

Former CISO of Palantir joins OpenAI

Dane Stuckey, former CISO of Palantir, is joining OpenAI as its new CISO. According to Kyle Wiggers from TechCrunch, he will work alongside OpenAI’s head of security, Matt Knight. Stuckey announced the move on X/Twitter on Tuesday evening. He emphasized the importance of security for OpenAI’s mission. Stuckey started at Palantir in 2014 in information …

Read more

Anthropic updates AI safety policy

Anthropic has updated its AI safety policy to prevent misuse, reports VentureBeat author Michael Nuñez. The new “Capability Thresholds” define benchmarks for risky capabilities of AI models, such as in the area of bioweapons or autonomous AI research. If a model reaches such a threshold, additional safeguards are triggered. The revised policy also sets out …

Read more

California Governor vetoes AI safety bill

California Governor Gavin Newsom has vetoed a controversial AI safety bill. The Democrat justified his decision by stating that the bill only considered the largest and most expensive AI models, without taking into account their use in high-risk situations. Newsom emphasized that smaller models could also make critical decisions, while larger models are often used …

Read more

Russia, Iran and China use AI for disinformation in US election campaign

Russia, Iran and China are increasingly relying on AI to influence US voters ahead of the presidential election in November, US intelligence officials say, as Joseph Menn reports in the Washington Post. According to the report, Russia in particular is focusing on discrediting Democratic candidate Kamala Harris. Moscow is …

Read more

Cloudflare aims to protect websites from AI bots

Cloudflare is introducing new tools to protect websites from AI bots and control their scraping. The company is offering monitoring and selective blocking of AI data scraping bots to all customers, including its 33 million free users. CEO Matthew Prince says these measures are designed to give website owners more control over how and when …

Read more

Zurich researchers use AI to beat Google’s CAPTCHAs

Researchers at ETH Zurich have cracked Google’s reCAPTCHAv2 system. Using advanced machine-learning methods, they solved 100% of the CAPTCHAs designed to distinguish humans from bots. Although the process still requires some human intervention, fully automated CAPTCHA circumvention may soon become a reality. In response, companies like Google are developing more complex systems. However, this …

Read more

OpenAI’s internal communication breached

A hacker broke into OpenAI’s internal communications systems last year and stole information about the development of the company’s AI technologies. Employees criticized OpenAI for a lack of security measures and feared the technology could threaten U.S. national security in the future.

OpenAI’s ChatGPT app for Macs stored chats unencrypted

OpenAI’s ChatGPT application for macOS had a security flaw: all chats were stored unencrypted on the computer. Anyone with access to the machine, including malicious apps, could have read them.

Behind the scenes at Anthropic (Claude): safety as a priority

In an in-depth article, Time Magazine looks at AI company Anthropic and its efforts to make safety a top priority. Co-founder and CEO Dario Amodei made a conscious decision not to release the chatbot Claude early in order to avoid potential risks. Anthropic’s mission is to determine empirically what risks actually exist by building and researching powerful …

Read more