Beth Singler:
From high-impact Hollywood dystopian accounts such as the infamous Terminator films to public responses to the story of a burger-flipping robot being “fired”, the stories we tell ourselves about AI are important. These narratives shape our conception and development of the technology, as well as expressing elements of our unconscious understanding of AI. Recognising the shaping effect of stories – whether fictional or “news” – is increasingly important as technology advances. How we think about technology can open up some pathways while closing others down.
A variety of narratives underpin popular conceptions of AI, but one in particular – that of the dynamic between the master and the slave – dominates accounts of AI at the moment. This is so pervasive that it arguably shapes our relationship with this technology.
This narrative has long appeared in science fiction accounts of AI. In 1921, “R.U.R.” (“Rossum’s Universal Robots”), a play by Karel Capek, introduced us to the “robot” – humanoid machines made of synthetic organic matter – and helped shape this idea for modern audiences. Named from the Czech word robota, meaning “forced labour” or serf, these first robots were consciously stylised as slaves pitted against their human masters.
And so the uprising of the robots in R.U.R. clearly influenced our recurring fears of “roboapocalypses”, as seen in more recent science fiction accounts such as the films of the Terminator franchise, The Matrix, the film Singularity, the novel Robopocalypse, and so on.
But the image of the fabricated servant has roots in much earlier mythological accounts. Think of the golden handmaids of Hephaestus, the bronze giant Talos, or the brass oracle heads described in the medieval period.
By the 1920s and 1930s, the “robota” had certainly lost their brass and bronze but were no less lustrous in the adverts of the time. The automated devices of the near future presented in those decades would, the adverts claimed, free housewives from their drudgery and usher in a golden age of free time. In the 1950s, adverts even promised new slaves.
Decades on, and with new labour-saving automated servants arriving every day, nothing has changed. We still expect technology to provide us with serfs. Indeed, we are so used to this form of serfdom that we see it where it does not exist. We presume automation where it is absent. The serf role, the relationship between master and slave, is maintained, with humans presumed to be (and perhaps eventually actually) replaced by machines.
This is also seen in descriptions and the expected behaviours of contemporary AI assistants, such as Google Assistant, which “learns about your habits and day-to-day activities and carries out ‘conversation actions’ to serve you”. There are even servant AIs that perform emotional labour, such as Azuma Hikari, the Japanese AI assistant said to miss its master when they are away.
Capitalists peddling this narrative should take heed. Previous forms of it left space for, and even encouraged, rebellion. And so does this modern version. Perpetuated through capitalism’s branding of AI as the disruption of your work and drudgery, this framing still leads to fears of rebellion because we understand servitude as antithetical to minds. The presumption, for many, is that with AI we are working towards minds – and that they will want to be free.
In the thought-experiment space of science fiction, we see this tension worked out again and again, and humans mostly lose as the new AI minds break free.
And so in the real world, which owes a lot to the influence of science fiction on our aspirations and designs for AI, two very different paths seem to lie ahead: the stated aim of working towards smarter and smarter machines, versus people’s hopes for better and better slaves.
How this tension will be resolved remains unclear. Some are clear that robots should only ever be slaves, “servants that you own”, while others are already exploring questions of robot rights. Whatever path is eventually taken, paying attention to how we speak about AI is key if we are to understand the decisions we are already making about its future. -The Conversation
(Beth Singler is Research Associate, Faraday Institute for Science and Religion, University of Cambridge)