Humans like to think of themselves as special. But science has a way of puncturing their illusions.
Astronomy has demoted Earth from the centre of the universe to just one planet among billions.
Darwin's theory of evolution has shown that, rather than being made in the image of some divine benefactor, humans are just another twig on the tree of life.
Those keen to preserve the idea that humans are special can still point to intelligence.
Crows may dabble with simple tools and elephants may be able to cope with rudimentary arithmetic. But humans are the only animals with the sort of general braininess needed to build aeroplanes, write poetry or contemplate the Goldbach conjecture.
They may not stay that way. Astronomers are beginning to catalogue some of those other planets. One or more may turn out to have intelligent inhabitants. Or humans may create intelligence in their own labs. That is the eventual goal of research into artificial intelligence (AI) — and the possible consequences are the subject of a new book by Nick Bostrom, a philosopher from the University of Oxford.
Writing about artificial intelligence is difficult. The first trick is simply passing the laugh test. For the past half-century experts have been predicting that intelligent machines, like fusion power, are just 20 years away. Mr Bostrom points out that there has, in fact, been plenty of progress, though it is mostly confined to narrow, well-defined tasks such as speech recognition or the playing of games like chess.
Mr Bostrom is, sensibly, not interested in trying to predict exactly when such successes will translate into a machine that is generally intelligent — able to compete with, or surpass, humans in all mental tasks, from composing sonatas to planning a war. But, fantastical as it seems, nothing in science seems to forbid the creation of such a machine.
"We know that blind evolutionary processes can produce human-level general intelligence, since they have already done so at least once," he writes. In other words, unless you believe that there is something magical (as opposed to merely fiendishly complicated) about how the brain works, the existence of humans is proof, in principle at least, that intelligent machines can be built.
Having taken the possibility of AI as a given, Mr Bostrom spends most of his book examining the implications of building it. He is best known for his work on existential risks — asteroid strikes, nuclear war, genetically engineered plagues and the like — so it is perhaps not surprising that he concludes that, although super-intelligent machines could offer many benefits, building them would be risky.
Some people worry that such machines would compete with humans for jobs. And pulp science fiction is full of examples of intelligent machines deciding that humans are an impediment to their goals and so must be wiped out.
But Mr Bostrom worries about a more fundamental problem. Once intelligence is sufficiently well understood for a clever machine to be built, that machine may prove able to design a better version of itself. The cleverer it becomes, the quicker it would be able to design further upgrades. That could lead to an "intelligence explosion", in which a machine arrives at a state where it is as far beyond humans as humans are beyond ants.
For some, that is an attractive prospect, as such godlike machines would be much better able than humans to run human affairs. But Mr Bostrom is not among them. The thought processes of such a machine, he argues, would be as alien to humans as human thought processes are to cockroaches. It is far from obvious that such a machine would have humanity's best interests at heart — or, indeed, that it would care about humans at all.
It may seem an esoteric, even slightly crazy, subject. And much of the book's language is technical and abstract (readers must contend with ideas such as "goal-content integrity" and "indirect normativity"). Because nobody knows how such an AI might be built, Mr Bostrom is forced to spend much of the book discussing speculation built on plausible conjecture. He is honest enough to confront the problem head-on, admitting at the start that "many of the points made in this book are probably wrong."
But the book is nonetheless valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect of actually doing so seems remote. Trying to do some of that thinking in advance can only be a good thing.
This post reviews "Superintelligence: Paths, Dangers, Strategies", by Nick Bostrom. Oxford University Press; 352 pages. To be published in America in September.