Stories about artificial intelligence are everywhere. From Westworld to Blade Runner 2049, our screens are awash with fictional depictions of the machine brain. In the real world, Facebook recently felt the need to deny news stories alleging that the company had been forced to shut down its artificial intelligence (AI) research programme after two “smartbots” began developing their own language. This came not long after a public spat between Facebook founder Mark Zuckerberg and Tesla and SpaceX CEO Elon Musk about the dangers of AI.

The philosopher Hubert Dreyfus began writing about the limits of artificial intelligence in 1965, and in his book What Computers Can’t Do he argued that AI would never be able to compete with human intelligence. This was because, he said, computers are incapable of recognising the context of information – something human intelligence does intuitively. Today we take a different view of what synthetic forms of intelligence are capable of, and this is increasingly reflected in the culture.

Speculations on machine intelligence are nothing new. In the 17th century, in the sceptical reasoning that led to “cogito ergo sum”, Descartes had posited a powerful deceiver able to control human perceptions of reality – and elsewhere he asked whether an automaton could ever pass for a human being. But it wasn’t until the 20th century that such imaginings really took root, unsurprisingly coinciding with the rise of science fiction as a significant genre. Isaac Asimov’s short story collection I, Robot, whose stories first appeared in the 1940s, set a nervously hopeful tone. Asimov’s affable conceit, his “Three Laws of Robotics,” designed to protect human safety, ultimately only emphasised the worries that spawned them:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey orders given by human beings, except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

Soon the anxieties began to evolve. In Frank Herbert’s Dune, published in 1965, artificial intelligence has been outlawed before the novel begins, surviving only as backstory. It is as if an unnameable fear had permeated the author’s mind so deeply that it could not safely be present within his pages. The threat is banished to the past, where a humanity-wide insurgency has ensured that no return is possible. But visionaries of fictional possibilities were drawn to explore those unnameable terrors, and so we began to imagine what artificial intelligence would look like bereft of human governance. Writers started to visualise the possible forms of AI. In Iain M Banks’s science fiction worlds, for instance, advanced sentient machine intelligences, known as “Minds,” control giant spaceships and entire synthetic universes.

Visualising the enemy

But it has been in the world of cinema and TV that AI anxiety has found a place to fully flourish, putting vivid images of speculative futures in front of us. One of the great representations of this nervousness was HAL 9000, the thinking computer of Stanley Kubrick’s 2001: A Space Odyssey, released in 1968. Kubrick’s decision to give HAL the eerily calm voice of actor Douglas Rain produced some of the most memorable moments in cinematic history. When the crew aboard a spacecraft realise their central computer has less than honourable intentions towards them, one of them begins to fight back. HAL’s querying “Just what do you think you’re doing, Dave?” seemed to summarise future-fear in one neat sentence.

Inexorably, we had reached a point where we had to deal with the potential consequences of machine rebellion. In 1982, the original Blade Runner took the idea of artificial sentient beings and made them, to all intents and purposes, human – going so far as to call them “replicants.” In fact, the central premise was that telling the difference between humanity and its stronger, potentially more dangerous doppelgänger was so difficult that a special assessment had to be used: the “Voight-Kampff test.” Used as slaves, a group of replicants rebel and become a dangerous force that must be hunted down. Viewers are led to question whether Harrison Ford’s character is also a replicant, and whether his love for a replicant is real. Dismissed by critics at the time as style over content, Ridley Scott’s adaptation of Philip K Dick’s novel Do Androids Dream of Electric Sheep? is nowadays regarded as a postmodern masterpiece, filled with profound existential inquiry.

Later in the 1980s, James Cameron examined the potential consequences of global domination by artificial intelligence. In his Terminator franchise, the US military develops a “neural net” AI programme known as “Skynet” to control its operations. What could possibly go wrong? The moment Skynet attains consciousness is the axis on which the plot lines of the entire series turn, leading to the time-travelling cyborg killing machine famously played by Arnold Schwarzenegger. A 2004 reboot of 1978’s Battlestar Galactica continued the AI/human war trope, along with its corollary, the fear of subjugation. In this version the last vestiges of humanity are hunted throughout the universe by the show’s equivalent of replicants, known as “Cylons.” Humankind has become prey to a seemingly superior order of being.

But as well as exploring concerns about power, Battlestar Galactica began to probe what it might mean to be a synthetic, sentient being, and whether such beings could have rights that can be violated. This notion has more recently been examined in the television shows Humans (Channel 4, UK) and Westworld (HBO). Both feature relationships, sometimes sexual, between people and commercially purchasable androids; both also depict the emerging self-awareness of artificial minds, and the questions that arise from it. But these issues – interesting as they are – are mere sideshows next to the primary existential dread that is the catalyst behind all this cultural focus. As Dolores, an awakening android in Westworld, explains to a human visitor: “This world doesn’t belong to you or the people who came before. It belongs to someone who has yet to come.”

About the author

RICHARD JOHN DAVIS is a writer and editor focusing on literature, cinema, television, art and other cultural concerns. He has worked for The Guardian and the Sunday Times, and as a lecturer at Birkbeck College.