Cognitive scientist Gary Marcus advocates for comprehensive AI regulation in the United States, including the creation of a cabinet-level AI agency, as detailed in an interview with Steven Rosenbush for The Wall Street Journal. Marcus, whose book “Taming Silicon Valley” was published in September 2024, argues that current AI systems, particularly large language models (LLMs), have significant technical limitations and pose moral risks that require immediate regulatory attention.
Marcus proposes a three-part regulatory framework: establishing a cabinet-level AI agency, implementing an FDA-like pre-deployment approval process for AI systems, and monitoring AI technologies after they are deployed. He emphasizes that current LLMs remain unreliable despite their impressive capabilities, and he warns that developers have inadequate control over how these systems behave. He also calls for a better technical approach, one that combines fast, reflexive processing with slower, deliberate reasoning, much as human cognition does; a toy sketch of that dual-process pattern follows below.
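To make the dual-process idea concrete, here is a minimal sketch in Python, under the assumption of a hypothetical hybrid system: a fast, pattern-matching component (standing in for an LLM's reflexive guessing) produces a quick answer, and a slower, rule-based component verifies it or recomputes deliberately. Every function name and rule below is an illustrative assumption, not Marcus's actual design.

```python
# Hypothetical sketch of a dual-process ("fast vs. deliberate") architecture.
# The fast path is cheap but fallible; the deliberate path is slower but checkable.

def fast_intuitive_answer(question: str) -> str:
    """Fast path: reflexive lookup, standing in for an LLM's pattern matching."""
    cached = {"2 + 2": "4", "capital of France": "Paris"}
    return cached.get(question, "unknown")

def deliberate_answer(question: str) -> str:
    """Deliberate path: explicit rule-based reasoning (here, toy arithmetic)."""
    if "+" in question:
        left, right = question.split("+")
        return str(int(left) + int(right))
    return "needs human review"

def verify(question: str, answer: str) -> bool:
    """Check the fast answer against an explicit rule before trusting it."""
    if "+" in question:
        return answer == deliberate_answer(question)
    return answer != "unknown"

def hybrid_answer(question: str) -> str:
    """Prefer the fast path, but fall back to deliberate reasoning when it fails."""
    guess = fast_intuitive_answer(question)
    if verify(question, guess):
        return guess
    return deliberate_answer(question)

if __name__ == "__main__":
    print(hybrid_answer("2 + 2"))    # fast answer passes verification -> "4"
    print(hybrid_answer("17 + 25"))  # fast path fails, deliberate path -> "42"
```

The design point the sketch illustrates is that the fast component is never trusted blindly: its output is checked, and overridden when necessary, by an explicit reasoning step, which is one way to address the reliability gap Marcus describes.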