
In chapter 5 we turn to the questions of existential risk that have gripped the public discourse on AI. The fear is that once future AI systems are sufficiently advanced, they will be too hard to control. Our key perspective is that these fears rest on a binary notion of AI, one in which a system crosses some critical threshold of autonomy or superhuman intelligence. But this idea is contradicted by the history of AI. The technology has gradually been increasing in flexibility and capability, a progression we explain by introducing the concept of a “ladder of generality.” Current technology is already on the seventh rung of this ladder, each rung more general and powerful than the ones below it.
We have every reason to think that this pattern of step-by-step progress will continue. This means we don’t have to speculate about the future; we can instead learn from history. And this grounded analysis shows that claims of out-of-control AI rest on a series of flawed premises. Of course, we must take the risks of powerful AI seriously. But we’ll show that we already have the means to address them calmly and collectively.