Researcher demonstrates AI model extraction from mobile apps

Security researcher Altay Akkus has revealed techniques for extracting artificial intelligence models from mobile applications, specifically demonstrating the process using Microsoft’s Seeing AI and Adobe Scan apps. The research, published in a technical blog post, shows how AI models intended for on-device use can be accessed and extracted using various tools and methods. Using Frida, …
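The full write-up relies on runtime instrumentation with Frida, but a common first step is purely static: an Android APK is just a ZIP archive, so bundled on-device models can often be found by listing entries with typical model-file extensions. A minimal sketch of that step (the function name and extension list are my own, not from the research):

```python
import zipfile

# Common file extensions for on-device ML models (illustrative, not exhaustive).
MODEL_EXTENSIONS = (".tflite", ".onnx", ".mlmodel", ".pt", ".bin")

def find_model_files(apk_path):
    """List archive entries in an APK that look like bundled ML models.

    An APK is a ZIP file, so no Android tooling is needed for this step.
    """
    with zipfile.ZipFile(apk_path) as apk:
        return [name for name in apk.namelist()
                if name.lower().endswith(MODEL_EXTENSIONS)]
```

Models that are downloaded or decrypted at runtime will not show up this way, which is where dynamic tools like Frida come in.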

Read more

6 interesting talks about AI from 38C3

The 38th Chaos Communication Congress (38C3) in Hamburg, Germany, was the latest installment of the annual four-day conference on technology, society and utopia organised by the Chaos Computer Club (CCC) and volunteers. From the long list of talks, I chose six that I found especially relevant for readers of Smart Content Report and for myself. I used …

Read more

Meta introduces new AI reasoning method “Coconut”

Meta AI researchers have developed a new method called Coconut (Chain of Continuous Thought) that allows large language models to reason in continuous latent space rather than only through words. The research presents an alternative to traditional Chain-of-Thought (CoT) reasoning methods. The new approach enables AI models to process information in a more abstract way, …
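The structural difference can be caricatured in a few lines of Python. This is a toy sketch only, with made-up dynamics and stand-in functions, not Meta's implementation: standard Chain-of-Thought collapses the hidden state to a discrete token at every step and re-embeds it, while Coconut-style reasoning feeds the hidden state straight back in.

```python
def step(hidden):
    # Stand-in for one transformer forward pass (hypothetical toy dynamics).
    return [0.5 * h + 0.1 for h in hidden]

def embed(token_id, dim=4):
    # Stand-in for a token-embedding lookup.
    return [float(token_id)] * dim

def decode(hidden):
    # Stand-in for projecting to the vocabulary and taking the argmax.
    return round(sum(hidden))

def chain_of_thought(token_id, steps):
    """Each step decodes the hidden state to a token, then re-embeds it."""
    for _ in range(steps):
        hidden = step(embed(token_id))
        token_id = decode(hidden)
    return token_id

def continuous_thought(token_id, steps):
    """Coconut-style: the hidden state is reused directly between steps;
    decoding to a token happens only at the very end."""
    hidden = embed(token_id)
    for _ in range(steps):
        hidden = step(hidden)
    return decode(hidden)
```

The point of the sketch is only the control flow: the latent variant never forces intermediate thoughts through the discrete-token bottleneck.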

Read more

New AI evaluation tests emerge as models surpass existing benchmarks

Leading AI research organizations are developing more challenging evaluation methods as current AI models consistently achieve top scores on traditional tests. According to Tharin Pillay’s article in Time Magazine, conventional benchmarks like SATs and bar exams no longer effectively measure AI capabilities. New evaluation frameworks include FrontierMath, developed by Epoch AI in collaboration with prominent …

Read more

AI hallucinations advance scientific discoveries

Scientists are successfully using AI hallucinations as a tool for breakthrough research, reports William J. Broad in The New York Times. These computer-generated imaginings are helping researchers develop new proteins, design drugs, and advance medical treatments. Nobel Prize winner David Baker used AI hallucinations to create millions of new proteins not found in nature, leading …

Read more

Major AI breakthroughs have transformed technology landscape

A significant wave of artificial intelligence advancements has emerged in the past month, marking a transformative period in AI development. According to technology researcher Ethan Mollick’s detailed analysis, multiple breakthrough technologies have fundamentally changed AI capabilities and accessibility. The number of high-performance AI models has increased dramatically, with six to ten GPT-4 class models now …

Read more

Why AI models face limits with long texts

Large language models are hitting significant computational barriers when processing extensive texts, according to a detailed analysis by Timothy B. Lee published in Ars Technica. The fundamental issue lies in how these models process information: computational costs increase quadratically with input size. Current leading models like GPT-4o can handle about 200 pages of text, while …
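The quadratic blow-up is easy to see in numbers: self-attention compares every token with every other token, so the score matrix has n × n entries, and doubling the input quadruples the work. A two-line illustration:

```python
def attention_scores(n_tokens):
    """Number of pairwise token comparisons in one self-attention layer."""
    return n_tokens * n_tokens

# Doubling the context length quadruples the comparison count.
for n in (1_000, 2_000, 4_000):
    print(f"{n:>5} tokens -> {attention_scores(n):>10,} comparisons")
```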

Read more

Small language models achieve breakthrough with new scaling technique

Researchers at Hugging Face have demonstrated that small language models can outperform their larger counterparts using advanced test-time scaling methods. As reported by Ben Dickson for VentureBeat, a Llama 3 model with just 3 billion parameters matched the performance of its 70-billion-parameter version on complex mathematical tasks. The breakthrough relies on scaling “test-time compute,” which …
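One simple form of test-time compute scaling is best-of-N sampling: draw several candidate answers from the small model and keep the one a verifier scores highest. The sketch below uses toy stand-ins for both the generator and the verifier; the Hugging Face work combines such sampling with more elaborate search strategies.

```python
import random

def generate(rng):
    # Stand-in for sampling one candidate answer from a small model.
    return rng.randint(0, 100)

def verify(answer, target=42):
    # Stand-in for a reward/verifier model; higher is better.
    return -abs(answer - target)

def best_of_n(n, seed=0):
    """Spend more inference-time compute (larger n) to pick a better answer."""
    rng = random.Random(seed)
    candidates = [generate(rng) for _ in range(n)]
    return max(candidates, key=verify)
```

Because the best of 64 samples can never score worse under the verifier than the first sample alone, extra test-time compute monotonically helps in this setup.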

Read more

New Anthropic study reveals simple AI jailbreaking method

Anthropic researchers have discovered that AI language models can be easily manipulated through a simple automated process called Best-of-N Jailbreaking. According to an article published by Emanuel Maiberg at 404 Media, this method can bypass AI safety measures by using randomly altered text with varied capitalization and spelling. The technique achieved over 50% success rates …
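The augmentation at the heart of the attack is almost trivially simple. The sketch below shows only the random-capitalization variant (the published method also shuffles and swaps characters, and simply retries with fresh randomness until a variant slips through); the function name and probability are my own.

```python
import random

def perturb(prompt, flip_prob=0.6, rng=None):
    """Randomly flip the case of letters, leaving the text readable."""
    rng = rng or random.Random()
    return "".join(
        ch.swapcase() if ch.isalpha() and rng.random() < flip_prob else ch
        for ch in prompt
    )
```

Each call produces a different surface form of the same prompt, which is what makes brute-force sampling against a safety filter effective.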

Read more

Apple and Nvidia collaborate to accelerate LLM processing

Apple and Nvidia have announced the integration of Apple’s ReDrafter technology into Nvidia’s TensorRT-LLM framework, enabling faster processing of large language models (LLMs) on Nvidia GPUs. ReDrafter, an open-source speculative decoding approach developed by Apple, uses recurrent neural networks to predict future tokens during text generation, combined with beam search and tree attention algorithms. The …
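ReDrafter builds on the generic speculative-decoding loop: a cheap draft model proposes several tokens, the large model verifies them, and the longest agreeing prefix is accepted in one go. The sketch below shows only that generic accept/verify loop with toy deterministic stand-ins for both models, not ReDrafter's recurrent drafter, beam search, or tree attention.

```python
def draft_model(context, k=4):
    # Cheap drafter: guesses the next k tokens (toy rule: count upward).
    return [context[-1] + i + 1 for i in range(k)]

def target_model(context):
    # Expensive model: the "true" next token (toy rule: +1, resets after 5).
    nxt = context[-1] + 1
    return 0 if nxt > 5 else nxt

def speculative_step(context, k=4):
    """Accept the longest drafted prefix the target model agrees with."""
    drafted = draft_model(context, k)
    accepted = []
    for tok in drafted:
        if target_model(context + accepted) == tok:
            accepted.append(tok)
        else:
            break
    # Always emit at least the target model's own next token,
    # so every step makes progress even when the draft is rejected.
    if len(accepted) < k:
        accepted.append(target_model(context + accepted))
    return accepted
```

When drafter and target agree, k tokens are produced for roughly the cost of one large-model verification pass, which is where the speed-up comes from.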

Read more