Some authors spend their careers skipping from one theme to another. Others, like Richard Evans, concentrate on one. His first novel, Machine Nation, was published in 2002 and imagined a near future in which the philosophical questions surrounding robotic life are beginning to press. In subsequent robot books — Robophobia and Exilium — he has continued to investigate these issues within a thriller framework.
Richard is unusual in that he is particularly dedicated to realism in his books. Not only the realism of the computational difficulty of reaching for a cup and grasping it, but also the difficulties that we would face in a society that includes robots. How distant is personhood, with all its concomitant rights, from a notion of robotic existence? Does human-like behaviour validate physicalism in social sciences such as psychology?
His latest book, Exilium, features characters from the first two and extends the themes of alienation and life as property. It is a cracking read, published by Figo Books.
Richard was kind enough to answer some of my questions about the book.
First off, one of the most striking things about your book is its cover, which depicts an android with his skull missing, revealing an electronic brain. Can you tell us a little about how this cover came about?
Ben Campbell, the cover artist, had worked on the Robophobia cover so it was natural to go back to him for this one as well. This concept was actually one that we discussed but didn’t use for Robophobia, thinking it was too strong, but this time something more striking works well with the content of the book.
Can you tell us how this book, and your earlier books, came to be in print?
Thanks largely to the Arts Council. Between 2003 and 2006, Arts Council England were good enough to fund me to go to MIT in Boston to research Robophobia; they then paid for a lot of the marketing of that book and gave me a ‘writing time’ grant for Exilium. A bit unconventional, and not the easy money it might sound, because funding bids are quite a detailed piece of work. It’s a worthwhile route for writers to follow though — at the very least the process helps you really think about why you’re doing what you’re doing.
How would you describe Exilium?
A novel about machine empathy and the scope of android consciousness.
Exilium is certainly an effective thriller, in the sense that the pace is rapid. At another level, however, you do spend some time discussing and implying interesting philosophical ideas. Do you see a tension between these components?
There is a tension between plot and philosophy but I hope I’ve been able to play one off against the other. The themes in the book are very important to me personally and whereas with Robophobia I felt the drama perhaps took centre stage, this time I wanted the philosophical elements to be more dominant, though played out through the scenarios that the characters find themselves in. I worked with an editor throughout the writing of the book and this allowed not only critical feedback but also the opportunity to really think about how the book was structured and what I wanted it to be.
Exilium follows your first book, Machine Nation, and its follow-up, Robophobia. To what extent do readers new to your work need to read those earlier books in order for the plot of Exilium to make sense?
Hopefully not at all. The three books are quite different in tone and writing style, so the characters change and to me, this is quite reflective of life and how individuals change as you know them. You can dip in and out of the characters’ lives in each of the books — they are not episodic and I don’t necessarily like the idea of a trilogy with cliffhangers. Each story is self-contained and though Alex and Kim remain in all three books, each finds them at a different point, with different priorities and relationships.
The backstory of Exilium — i.e. our immediate future — is quite elaborate. Did you concentrate on explicitly articulating a ‘future earth’ in which to place your narrative, or did you make it up as you went along? 🙂
Loads of research, particularly with regard to the climate change scenarios. When I started the book in 2006, scientists were saying that the North Pole might be ice-free in summer in around 50 years’ time. By the time I finished the book, they were saying the North Pole would be ice-free in 10 years’ time! The other big part of the story is the Isolation Zone around New York, which is a direct mirror of the Exclusion Zone around Chernobyl — it was the 20th anniversary of that disaster in 2006, and the imagery — a place where human beings were forbidden to go — was quite compelling.
You slip between third- and first-person viewpoints in Exilium. I’m particularly interested in the viewpoint you take when the character is robotic. When a robot experiences pain or love, your writing style suggests that these are experienced in a way that is almost precisely analogous to humans. Is there such an overlap between your robots and us humans, or is this simply the most effective way of writing these passages?
It’s a bit of both I think. Kismet, one of the robots I went to see at MIT, had ‘drives’ for human companionship and play — one effect of this was that if it saw a human face it would call that person to it, and the robot would enter homeostasis if a person was around. So this to me was an example of the robot having a basic level of emotion, perhaps akin to what a baby experiences when its mother or father is near. The baby doesn’t have any conscious understanding of relationships or language but it knows that, usually, mum and dad being nearby is a good thing.
The other aspect to switching between first and third person is to heighten the impression that what a robot experiences, it will experience in real time, without the narrative that we humans sometimes have accompanying our actions.
Some of the robots in this story are made by a company called BioMimetica. Where do you stand on the notion that these robots are not alive in the sense of containing the ‘magic spark’ that humans often think they themselves are ignited by?
I think that we should have a broader understanding of what we consider ‘alive’. In Japan, there is more familiarity with the notion of animism, the idea that life is present in all things, animate or inanimate. I personally don’t see why a robot isn’t alive. I went to see a female android in Osaka last year and when I got to the lab, she was off. A flick of a switch and she was as animate as the next person. So is the android dead when it’s off, alive when it’s on? Or do machines exist in some third state? The ‘magic spark’ idea has a lot to do with religion and the concept of the soul — a deeply held but subjective concept — and if we take god out of the equation, then it’s possible to say that robots have many of the characteristics we think of as pertaining to life. They are going beyond mere function towards having purpose, behaviours, physiology and basic emotions. Indeed, one current area of research suggests that robots will perform better if they have emotions — i.e. so that they will be more committed to their tasks…
Throughout this book, and the previous one, Robophobia, I got a strong sense of an allegory between the human-robot relationship and historical (alas, even contemporary) human-slave relationships. If this correspondence is true, how far can the analogy be stretched?
The word robot is from the Czech word robota and means ‘slave’ or ‘forced labour’. I think, given how dominant social groups have exploited those who are different throughout history, there is a great danger that, should androids become widespread, we will treat them as a slave class, for work, sex and war. As an aside, one of the reasons the stories are set in Boston is that city’s historical role in ending the slave trade.
One of the interesting drives behind two of the protagonists (Kim and Alex) is that they have been designed to love one another; their creator inserted mechanisms that permit them to feel relief, pleasure and companionship in the presence of each other. It might be argued that humans find themselves in a somewhat similar position, where bonds like love could be based on patterns that exist in our heads because our genes are advantaged by their existence. If so, does this reduce what we mean by love? Can the robot equivalent be called love, even if it is evidenced only by ‘love-like’ behaviour?
That is a cracking question. Maybe it’s love if the person / robot feeling it thinks it’s love. It’s very interesting to consider why we find particular people attractive, especially when we ‘know’ they are no good for us, or that the relationship is destructive in some way. I have a robot dog that recognises my face and calls my name if it hasn’t seen me for a while. Is that need? Is it missing me? Or is it merely following a subroutine that tells it to do x if y happens? This could be applied to some human relationships too. I think biochemistry and subconscious behaviour patterns have a big say in how we relate to each other, and perhaps sometimes we mistake these physiological and psychological drives for love, or loving relationships. This is not to say I don’t think love exists, just that it’s another word in need of further definition.