Endor Labs scores open source AI models

Endor Labs has launched a new platform to score over 900,000 open-source AI models available on Hugging Face, focusing on security, activity, quality, and popularity. This initiative aims to address concerns regarding the trustworthiness and security of AI models, which often have complex dependencies and vulnerabilities, reports VentureBeat. Developers can query the platform about model …

Read more

Galileo evaluates AI models for business use

Galileo, an AI startup led by Vikram Chatterji, has raised $45 million in a Series B funding round, totaling $68 million since its inception three years ago. The company focuses on evaluating AI models to ensure they function effectively and do not generate incorrect information or leak sensitive data, reports Forbes. Its product suite includes …

Read more

Test shows compliance problems of leading AI models

A new tool for checking compliance with the EU AI Act has revealed weaknesses in leading AI models. As Martin Coulter reports for Reuters, some models from major tech companies are performing poorly in areas such as cybersecurity and discriminatory output. The “Large Language Model Checker” developed by LatticeFlow AI evaluates AI models across dozens …

Read more

Former CISO of Palantir joins OpenAI

Dane Stuckey, former CISO of Palantir, is joining OpenAI as its new CISO. According to Kyle Wiggers from TechCrunch, he will work alongside OpenAI’s head of security, Matt Knight. Stuckey announced the move on X/Twitter on Tuesday evening. He emphasized the importance of security for OpenAI’s mission. Stuckey started at Palantir in 2014 in information …

Read more

Anthropic updates AI safety policy

Anthropic has updated its AI safety policy to prevent misuse, reports VentureBeat author Michael Nuñez. The new “Capability Thresholds” define benchmarks for risky capabilities of AI models, such as in the area of bioweapons or autonomous AI research. If a model reaches such a threshold, additional safeguards are triggered. The revised policy also sets out …

Read more

California Governor vetoes AI safety bill

California Governor Gavin Newsom has vetoed a controversial AI safety bill. The Democrat justified his decision by stating that the bill only considered the largest and most expensive AI models, without taking into account their use in high-risk situations. Newsom emphasized that smaller models could also make critical decisions, while larger models are often used …

Read more

Russia, Iran and China use AI for disinformation in US election campaign

Russia, Iran and China are increasingly relying on AI to influence US voters ahead of the presidential election in November, US intelligence officials say, as Joseph Menn reports in the Washington Post. Russia in particular is focusing on discrediting Democratic candidate Kamala Harris. Moscow is …

Read more

Cloudflare aims to protect websites from AI bots

Cloudflare is introducing new tools to protect websites from AI bots and control their scraping. The company is offering monitoring and selective blocking of AI data scraping bots to all customers, including its 33 million free users. CEO Matthew Prince says these measures are designed to give website owners more control over how and when …

Read more

Zurich researchers use AI to beat Google’s CAPTCHAs

Researchers at ETH Zurich have completely cracked Google’s reCAPTCHAv2 system. Using advanced machine learning methods, they solved 100% of the CAPTCHAs designed to distinguish humans from bots. Although the process still requires human intervention, fully automated CAPTCHA circumvention may soon become a reality. In response, companies like Google are developing more complex systems. However, this …

Read more

OpenAI’s internal communication breached

A hacker broke into OpenAI’s internal communications systems last year and stole information about the development of the company’s AI technologies. Employees criticized OpenAI for a lack of security measures and feared the technology could threaten U.S. national security in the future.
