Will machines take over the world?
They might, but evolutionary science suggests that any takeover would not unfold the way pop-sci writers imagine.
A recent paper by DeSilva et al. (1) points out that human brain size, after a lengthy and dramatic period of increase, has decreased over the past 3000 years or so. Their proposed explanations are that cognition has become distributed (so the demands on each individual are lower), and that some aspects of cognition have been externalized in material forms such as writing. Brains have therefore shrunk somewhat as the demands on them have decreased.
This would have been no surprise to Socrates; Plato, writing in the 4th century BCE, has him foretell the dire effects of the invention of writing:
“this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them”
(From Plato's dialogue Phaedrus 14, 274c-275b)
The idea that our material culture can lead to biological changes is not new; an entire field of study, gene-culture co-evolution, is devoted to it. Clothing and footwear, for example, have already reshaped our bodies, and both have evolved in tandem with them.
If we are becoming so dependent on our material culture, might it one day “take over the world”? The basic idea, originated by science fiction writers and seized upon by pundits, is that we will make intelligent machines that will one day realize they don’t need us. In the sci-fi stories, intelligent machines start to evolve independently, and then all hell breaks loose. Committees have been formed to look into this. Serious people (Stephen Hawking), as well as some less serious ones (Elon Musk), have warned us about the coming “AI Apocalypse”.
Do these ideas make sense, from an evolutionary standpoint?
Material culture has been affecting our brains for several thousand years, just as Socrates predicted. Writing isn’t all bad (note-taking can improve retention), but we now rely on recorded data for everything. I couldn’t imagine writing the paper I’m working on now without the 20 pages of notes and references in the steadily growing Word document on my screen. And we are raising a new generation of drivers who can barely read a map, because they have no need to. Machines are already taking over some cognitive functions, and they will continue to do so.
There are a couple of things wrong with the “Terminator AI” scenario, however. The first is the assumption that machines need to evolve intelligence similar to (or superior to) our own to enslave us. There are plenty of examples of co-evolution in biology, ranging from mutual interdependence to competitive host-parasite and host-disease relationships. None of them require intelligent thought. In fact, small and dumb organisms (like gut flora) seem most adept at commensal and symbiotic relationships.
The second point is that models from biology point to ecosystems, not futuristic battlefields.
In fact, our co-dependence on machines (or enslavement by them, if you like) has already happened, with barely a murmur of complaint beyond some mild grumbling about how everyone walks around staring at a screen.
Evolution is good at finding and promoting efficient solutions: those that reproduce reliably using minimal resources. The parts of an ecosystem tend to play the roles they are good at, the ones that enable them to survive. Gears and processors are good at some tasks, particularly specialized, high-energy ones. Biological organisms are good at others, particularly low-energy, flexible ones involving tinkering and maintenance. So, in broad terms, the current relationship between people and machines, and the current division of labor, seems likely to continue. Machines will perform specific tasks for us, and our role will be to order new ones and drive the old ones down to the recycling center. The trend towards externalizing computation-heavy tasks such as translation, 3D visualization, arithmetic, navigation, task scheduling and technical design will continue, with correspondingly less demand for those things from human minds. Meanwhile, weird technologies like fidget spinners, Tamagotchi and Pokemon cards will continue to pop into existence, infect everyone and then disappear.
The slightly concerning part, highlighted by the DeSilva et al. paper, is that evolutionary history suggests intelligence and conscious reflection of the human variety are an unusual and expensive luxury. Social creatures such as ants build complex structures and make intricate decisions without them. They were clearly useful in getting us to a certain point. But with machines now doing much of the heavy lifting (“external cognition”), some of our impressive internal cognitive functions, perhaps human intelligence considered as a whole, might be becoming a “nice to have”. “Nice to haves” are treated poorly by evolution.
Those reading this blog will be thinking “wait a minute, I spent a decade in higher education honing my advanced thinking skills”. Well, you did, but what exactly were you honing? The skills you possess, and the variety of tasks you can tackle, have become ever narrower in relation to the world you inhabit. Those skills used to be essential for survival; now they are more like a fancy hat with a feather in it, sported by a narrow elite. Even your own mother doesn’t understand what it is you do.
The largest part of the contemporary world has become a mystery to the largest part of its people. There isn’t a single person alive who understands every aspect of a smartphone. But we replicate them in their billions, and we rely on them, and they rely on us.
There are some odd aspects to this paper, particularly the curve fitted to the data in Fig 1. But the basic point is well taken. The example of Homo floresiensis also demonstrates that brain size, like investments, “can go down as well as up”.
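For readers curious what a fit like that involves: the paper’s claim amounts to locating a “change point” where a roughly flat trend in brain volume turns into a decline. Here is a minimal sketch of that idea, fitting two least-squares line segments to entirely synthetic, made-up numbers (not the paper’s data, and not their exact statistical method):

```python
# Illustrative sketch only: a two-segment ("change point") least-squares fit,
# the general kind of curve fitting applied to brain-volume-versus-time data.
# All numbers below are synthetic and invented; they are NOT DeSilva et al.'s data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic timeline in years before present (negative = past), with a
# synthetic "brain volume" that is flat until -3000 and then declines.
t = np.linspace(-10_000, 0, 200)
true_cp = -3_000
vol = 1450.0 - np.where(t > true_cp, (t - true_cp) * (50 / 3000), 0.0)
vol += rng.normal(0.0, 5.0, size=t.size)

def sse_for_changepoint(cp):
    """Total squared error of two independent regression lines split at cp."""
    total = 0.0
    for mask in (t <= cp, t > cp):
        if mask.sum() < 2:          # refuse degenerate segments
            return np.inf
        A = np.column_stack([t[mask], np.ones(mask.sum())])
        coef, *_ = np.linalg.lstsq(A, vol[mask], rcond=None)
        total += float(np.sum((vol[mask] - A @ coef) ** 2))
    return total

# Grid search for the change point that best explains the data.
candidates = np.linspace(-8_000, -500, 151)
best_cp = min(candidates, key=sse_for_changepoint)
print(f"estimated change point: {best_cp:.0f} years before present")
```

The oddity being criticized is exactly here: the estimated change point depends heavily on which data you feed in, how you weight sparse recent samples, and what curve family you allow, so the “3000 years” figure should be read with those caveats in mind.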