The artificial intelligence industry is grappling with potential limits to scaling large language models (LLMs), according to an analysis by Gary Grossman, EVP of technology practice at Edelman. While recent reports suggest that building ever-larger AI models such as GPT-5 may yield diminishing returns, industry leaders including OpenAI's Sam Altman and former Google CEO Eric Schmidt maintain that scaling barriers are not insurmountable.
The article outlines several alternative approaches to improving AI performance beyond traditional scaling, including multimodal models, agent technologies, and hybrid architectures that combine symbolic reasoning with neural networks. Recent studies have found that current LLMs already outperform human experts on certain narrow tasks, such as medical diagnosis and financial analysis, suggesting that existing models hold significant untapped potential even without further scaling.