Russia, Iran and China use AI for disinformation in US election campaign

Russia, Iran and China are increasingly relying on AI to influence US voters ahead of the presidential election in November, US intelligence officials say, according to a report by Joseph Menn in the Washington Post. Russia in particular is focusing on discrediting the Democratic candidate, Kamala Harris. Moscow is … Read more

Cloudflare aims to protect websites from AI bots

Cloudflare is introducing new tools to protect websites from AI bots and to control how those bots scrape data. The company is offering monitoring and selective blocking of AI data-scraping bots to all customers, including its 33 million free users. CEO Matthew Prince says these measures are designed to give website owners more control over how and when … Read more
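Cloudflare enforces its blocking at the network edge, but the same intent can be declared in a site's robots.txt file, which well-behaved AI crawlers are expected to honor. A minimal sketch follows; the user-agent strings are the ones these crawlers have publicly documented, but names change, so verify them against each vendor's current documentation before relying on this:

```
# Ask known AI training crawlers not to scrape this site.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# All other crawlers may index the site normally.
User-agent: *
Allow: /
```

Note that robots.txt is purely advisory; edge-level blocking of the kind Cloudflare offers is what actually enforces it against crawlers that ignore the file.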

Zurich researchers use AI to beat Google’s CAPTCHAs

Researchers at ETH Zurich have completely cracked Google’s reCAPTCHAv2 system. Using advanced machine learning methods, they solved 100% of the CAPTCHAs designed to distinguish humans from bots. Although the process still requires human intervention, fully automated CAPTCHA circumvention may soon become a reality. In response, companies like Google are developing more complex systems. However, this … Read more

OpenAI’s internal communication breached

A hacker broke into OpenAI’s internal communications systems last year and stole information about the development of the company’s AI technologies. Employees criticized OpenAI for lax security measures and worried that the technology could one day threaten US national security.

OpenAI’s ChatGPT app for Macs stored chats unencrypted

OpenAI’s ChatGPT application for macOS had a security flaw: all chats were stored unencrypted on the computer, where anyone with access to the machine, as well as malicious apps, could have read them.

Behind the scenes at Anthropic (Claude): security as a priority

In an in-depth article, Time Magazine looks at AI company Anthropic and its efforts to make security a top priority. Co-founder and CEO Dario Amodei made a conscious decision not to release the chatbot Claude early to avoid potential risks. Anthropic’s mission is to empirically determine what risks actually exist by building and researching powerful … Read more

OpenAI insiders warn of dangerous corporate culture

In an open letter, current and former OpenAI employees warn of a “reckless” development in the race for supremacy in artificial intelligence. They call for sweeping changes in the AI industry, including more transparency and better protection for whistleblowers. The signatories criticize a culture of secrecy and profit at any cost at OpenAI. The company … Read more

California plans strict safety rules for AI

California wants to implement strict safety rules for artificial intelligence, including a “kill switch” and reporting requirements for developers. Critics warn of barriers to innovation, excessive bureaucracy, and negative impacts on open source models that could weaken the state’s technology sector.

Inspect helps to assess AI safety

The UK’s AI Safety Institute releases Inspect, an open source toolset designed to simplify the safety assessment of AI models. Inspect can be used to test the capabilities of AI models, such as core knowledge and reasoning.

New guide for secure AI systems

The NSA, in collaboration with international partners, is releasing a guide to best practices for the secure deployment and operation of AI systems. The Cybersecurity Information Sheet is aimed primarily at operators of national security systems and companies in the defense industry, but is also relevant to other organizations. Source: Hacker News