Opinion: AI should be viewed as normal technology, not superintelligence

In a comprehensive essay titled “AI as Normal Technology,” researchers Arvind Narayanan and Sayash Kapoor from Princeton University argue that artificial intelligence should be understood as a normal technology rather than a potential superintelligence. Published by the Knight First Amendment Institute at Columbia University, the paper presents an alternative vision for understanding AI’s trajectory and impacts on society. The authors contend that while AI will transform society, it will do so gradually over decades, similar to previous general-purpose technologies like electricity and the internet.

Narayanan and Kapoor challenge both utopian and dystopian visions of AI that treat it as a highly autonomous, potentially superintelligent entity. Instead, they view AI as a tool that humans can and should remain in control of. They argue that controlling AI does not require drastic policy interventions or technical breakthroughs, and that viewing AI as humanlike intelligence is neither accurate nor useful for understanding its societal impacts.

The essay distinguishes among AI methods, applications, and adoption, arguing that each advances on a different timescale. While technical progress in AI methods has been rapid, the diffusion of AI applications through society happens much more slowly, particularly in high-consequence domains. The authors point to evidence that AI adoption in safety-critical areas is significantly slower than popular accounts suggest.

“AI diffusion is inherently limited by the speed at which not only individuals, but also organizations and institutions, can adapt to technology,” the researchers write. They draw parallels to past general-purpose technologies such as electrification, whose productivity benefits took decades to fully materialize.

The paper also challenges the concept of superintelligence, arguing that it relies on flawed conceptual understandings of intelligence and power. The authors maintain that humans have always used technology to increase their capabilities, and that modern humans with technology are already “superintelligent” compared to pre-technological humans.

Regarding AI risks, Narayanan and Kapoor analyze concerns about accidents, arms races, misuse, and misalignment. They argue that viewing AI as normal technology leads to fundamentally different conclusions about risk mitigation compared to viewing AI as humanlike. For instance, they suggest that the primary defenses against AI misuse should focus on downstream applications rather than model alignment.

For policy, the authors advocate a resilience-based approach: acting now to strengthen society’s ability to handle unexpected developments as they arise. They argue against nonproliferation policies that seek to limit access to powerful AI, suggesting these are infeasible to enforce and could create dangerous single points of failure.

The essay concludes that the benefits of AI will not materialize automatically and will require thoughtful policy interventions. The authors recommend that governments promote AI diffusion by investing in complementary areas such as AI literacy, workforce training, digitization, and open data.

By articulating this “normal technology” worldview, Narayanan and Kapoor aim to provide an alternative framework for understanding AI that can enable greater mutual understanding in AI discourse, even among those with differing opinions about AI progress, risks, and appropriate policies.
