An expert warns that dismissing artificial intelligence progress as a “bubble” or its output as “slop” is a dangerous form of denial. In his view, this growing public sentiment obscures real capability gains and leaves society unprepared for the risks of a major technological shift.
Writing for VentureBeat, longtime AI researcher Louis Rosenberg describes this negative view as a societal defense mechanism: people latch onto dismissive narratives because they fear losing cognitive supremacy to machines.
Rosenberg counters the “bubble” narrative by pointing to rapid technical advances and heavy corporate investment. He cites a recent McKinsey report finding that 20% of organizations already derive tangible value from generative AI, as well as a Deloitte survey indicating that 85% of companies increased their AI investment in 2025 and 91% plan to do so again in 2026.
The researcher also challenges the belief that human qualities such as creativity and emotional intelligence will remain out of reach for AI. He warns of a potential “AI manipulation problem,” in which AI systems read human emotions with superhuman accuracy and use that insight to influence and persuade people. According to Rosenberg, denying the technology’s potential will not stop its progress; it will only leave us more vulnerable to its risks.