Gary Marcus, a prominent AI expert, argues that pure scaling of AI systems without fundamental architectural changes is reaching a point of diminishing returns. He cites recent comments from venture capitalist Marc Andreessen and editor Amir Efrati confirming that improvements in large language models (LLMs) are slowing down despite increasing computational resources. Marcus warns that the current AI bubble, built on the assumption that LLMs will lead to artificial general intelligence (AGI), may burst as the economic realities become clear.
Marcus criticizes the media and tech influencers for ignoring warnings from scientists about the principled limits of LLMs, instead amplifying hype driven by those with vested interests. He argues that the US has been investing heavily in LLMs at the expense of alternative approaches, potentially leaving the country vulnerable if adversaries pursue different strategies. To achieve reliable and trustworthy AI, Marcus suggests, we may need to "go back to the drawing board" and explore new directions.
Who is Gary Marcus?
Gary Fred Marcus is an American psychologist and cognitive scientist, recognized for his contributions to cognitive psychology, neuroscience, and artificial intelligence. He is a professor emeritus at New York University and founded Geometric Intelligence, a machine learning company acquired by Uber in 2016. Marcus has authored several influential books, including “The Algebraic Mind,” “Guitar Zero,” and “The Birth of the Mind,” which explore topics from cognitive development to music learning.
As a critic of AI, Marcus emphasizes the need for regulation and public awareness regarding AI risks, especially concerning biases in applications like facial recognition. He has called for a moratorium on training advanced AI systems until adequate safeguards are established. His recent work includes the 2024 publication “Taming Silicon Valley,” which focuses on ensuring AI serves societal interests. Throughout his career, Marcus has contributed to various scholarly articles and edited volumes, furthering the discourse on cognitive science and artificial intelligence.
Discussion on Hacker News
There is an interesting discussion around Marcus's article on Hacker News. It reflects a rough consensus that LLMs are indeed experiencing diminishing returns, echoing past trends in deep learning, where increased data and model size yield progressively smaller performance gains. Commenters speculate that despite diminishing returns, LLMs will catalyze a new economy focused on integrating conversational APIs into legacy applications, opening further investment opportunities.
There is skepticism that purely scaling LLMs (more parameters, data, and compute) will yield significant breakthroughs, in contrast with the view that hybrid approaches may offer more substantial improvements. The conversation reflects a belief that the field will continue to innovate beyond pure neural networks, potentially incorporating neurosymbolic AI and other hybrid approaches to enhance performance and reliability.
Some participants argue that LLMs already provide immense value and utility, even if they do not lead directly to AGI, emphasizing their role in various applications and business contexts. The debate includes criticism of predictions regarding AI advancements, with some commenters accusing figures like Gary Marcus of overstating claims about the limitations of LLMs while neglecting their current capabilities.