Ex Machina (2014) and Westworld (2016- )

The problem of other minds was already a perennial problem of philosophy long before anyone even thought about robots.  The only conscious mind each person is sure of is his own.  We naturally attribute consciousness to others by instinct, and the rational justification of such attribution is that similar causes produce similar effects:  other people are like us in matter and form, in what we are made of and how we are structured, so it is only reasonable to expect that other people will be conscious beings and not mindless automata.  Most of us attribute consciousness to animals, but as animals become further removed from us, the analogy to ourselves weakens and our willingness to attribute consciousness weakens likewise.  Many have doubts about protozoa, for instance.

Robots are increasingly being made in a way that stimulates our analogical inference to consciousness.  We already use figures of speech that attribute consciousness to inanimate objects like computers:  personification, when we say the computer is thinking; apostrophe, when we yell at the computer for taking too long.  So when robots are given human form, including eyes and facial expressions, the tendency to take these figures of speech literally becomes irresistible.  And yet, we wonder if the analogy is just superficial.  After all, they are not made from the same stuff that we are made of, and they are not put together the same way we are.

When the movie Ex Machina begins, Caleb, a computer programmer, wins a chance to spend a week with Nathan, the CEO of the company he works for.  Caleb finds out that Nathan has constructed a robot named Ava who is so humanlike that we naturally believe she is conscious.  Of course, a human actress plays the part of Ava, so we in the audience are bound to think so.  In fact, we have to be convinced that she is a robot, for which purpose she is deliberately constructed so as to show off her mechanical parts.  Were it not for these obvious robot features, Caleb himself might wonder if Ava is just some woman trying to fool him into thinking she is a robot.  By way of contrast, in the television series Westworld (2016- ), the robots, referred to as “hosts,” are designed to entertain the human “guests,” for which purpose they must appear to be human.  To make it believable that they are not, we are shown scenes of their manufacture.

Another difference between these two shows is that whereas Ava of Ex Machina is electronic, with synthetic material used to create a human appearance for her, the hosts of Westworld seem to be more flesh-androids than robots, in that we suspect that protoplasm is used to make them.  To the extent that organic material is used to construct them, we are naturally more likely to infer consciousness, according to the principle mentioned above that from similar causes we expect similar effects.

Anyway, Caleb’s job in Ex Machina is to perform a Turing test, which a computer or robot must pass in order to qualify for having true artificial intelligence.  The idea is that if a human cannot tell when he is interacting with another human and when he is interacting with a machine, then the machine has passed the test.  Caleb jumps to the conclusion that if the robot can pass the test, then the robot has consciousness, and Nathan implicitly agrees with that inference.

Some people believe that intelligence implies consciousness and conversely, but neither one implies the other at all.  It may be that no matter how advanced robots become, and no matter how many times they pass the Turing test, they will still be automata without any consciousness at all.  In Westworld, on the other hand, Dr. Robert Ford (Anthony Hopkins) says that in the early days of manufacturing the robotic hosts, his partner Arnold was not satisfied with the fact that the hosts could pass the Turing test.  He wanted them to be conscious as well.  So it is clear that in this series, passing the Turing test is not regarded as a sufficient condition for consciousness.  There is almost the suggestion that passing the Turing test is a necessary condition for consciousness, but that cannot be right.  Chimpanzees are presumably conscious, but they would fail a Turing test.

In any event, Ex Machina equates intelligence with consciousness, so we shall let it go with that.  The main thing is that in talking to Ava, Caleb falls in love with her.  When he finds out that Nathan intends to reprogram her, wiping out her memory, he is alarmed, for memory is essential to the survival of our person.  As Leibniz once said, if you tell me that when I die, I will be immediately reborn in another body, but I will have no memory of my present life, then you might as well tell me that when I die, another person will be born.  Nathan plans to keep Ava’s body, but in destroying her memory, he will effectively be killing her.

Memory, and the absence of it, also plays an important role in Westworld.  Unlike the original movie made in 1973, where the robots, especially the gunslinger played by Yul Brynner, are villains, in the television series, the hosts are victims.  They are raped, forced to witness the murder of their loved ones, and murdered themselves.  The humans running Westworld, as well as the guests, feel no compunction about what is done to these hosts, in part because it is never really clear whether the hosts are conscious or not, but mostly because their memories are supposedly wiped clean after such abuse, as if that would negate their victimization.

Returning to Ex Machina, Caleb plots to help Ava escape.  At this point, I thought the movie would turn out in one of two possible ways.  The first possibility was that the movie would become an adventure story, in which Caleb and Ava try to make their way through the forests and mountains with the very athletic and brilliant Nathan in pursuit.  They would eventually escape and live happily ever after.  The second possibility, the one I was hoping for, was that just as they were about to escape, Nathan would tell Caleb that because he obviously regards Ava as a person, since he loves her and is trying to save her from death, she has passed the Turing test big time.  Then he would announce that he never planned on wiping out her memory, so if Ava and Caleb want to get married and live happily ever after, that is fine with him.  He will simply begin working on a newer model tomorrow.

I can’t believe I did not anticipate the real ending.  After all, have I not watched every film noir that has ever been made?  How could I have missed the fact that Ava is the ultimate femme fatale, more ruthless than any of those played by Jane Greer, Joan Bennett, or Barbara Stanwyck?  She not only kills Nathan with the help of another female robot, but she also locks Caleb in the house where he will eventually die and blithely walks away to board the prearranged helicopter to take her to the city.

Perhaps even more unnerving is the way she smiles after she has locked Caleb in the house, and again when she makes it to the city and stands on a street corner watching people come and go.  In old movies, robots were typically mirthless, perhaps because we supposed that robots might have thoughts and sense perception but not emotions, especially not positive ones.  Increasingly, however, robots are portrayed as having the full range of human affect.  As for Ava in particular, any smiles made before her escape could be dismissed as part of her deceitfulness.  But these smiles occur when she has no need to manipulate anyone, and they are smiles that evince genuine delight and happiness.  It is that smile, more than her intelligence, that makes us believe she is conscious.

Ex Machina is a movie, which means that in just under two hours, the story came to an end, an end that the writer and director, Alex Garland, definitely had in mind from the outset.  Westworld, on the other hand, is a television series, whose end is not yet at hand.  So far, it is fun pulling for the robots for a change, and it is interesting the way this show raises all sorts of existential questions.  But I am only halfway through the first season, and I am starting to have misgivings.  If this were a movie or even just a miniseries, the revolt of the robots would be enough.  But since this is a television series, intended to go on for several seasons, there are all sorts of subplots and superplots, not the least of which is the one involving the Man in Black (Ed Harris) and his quest to solve the mystery of the maze.  As I watched this show, willingly allowing myself to be pulled into the story, I realized that it reminded me of something.

What it reminded me of was Lost (2004-2010).  There too I was pulled into the mystery.  For five seasons I watched and was fascinated.  And then, in the sixth season, it became clear that throughout the show, the writers were just winging it, making stuff up as they went along, with no idea how it would all end.  As long as the ratings held up, they just went from season to season, adding on more stuff. But when it finally came time to wrap things up, all we got was a bunch of New Age nonsense.  All the pleasure I had experienced in watching this show was ruined in retrospect.

I could not get the thought out of my mind, so I looked up both shows on IMDb.  It appears that J.J. Abrams, one of the creators of Lost, is an executive producer of Westworld.  I don’t know how much to make of that connection.  All I can say is that I hope that the writers of Westworld already know how all the mysteries of this show will ultimately be resolved into a neat and satisfying end, and that they will not pull another Lost on us.

The Final Solution

In a recent opinion piece in The Washington Post, “The Brave New World of Robots and Lost Jobs,” David Ignatius discusses the problem that society faces as robots start taking jobs away from people, leaving many of them permanently unemployed:

Job insecurity is a central theme of the 2016 campaign, fueling popular anger about trade deals and immigration. But economists warn that much bigger job losses are ahead in the United States — driven not by foreign competition but by advancing technology.

This is not the first article to address this issue.  A diary written almost three years ago by RobLewis calls our attention to a prediction made by Gartner, as enunciated by Daryl Plummer, that as technology reduces the need for labor, social unrest will be the result.  An article reporting on this forecast quotes Tom Seitzberg, who shares this bleak outlook:

“Ultimately, every society lives from the backbone from a strong middle class,” said Seitzberg. “If you get just a top level, a small amount of very rich people and a very large piece of very poor people, it leads to social unrest.”

RobLewis also notes that Paul Krugman, in an article entitled “The Rise of the Robots,” has expressed similar concerns, arguing that the economic benefit of a college education is waning:

If this is the wave of the future, it makes nonsense of just about all the conventional wisdom on reducing inequality. Better education won’t do much to reduce inequality if the big rewards simply go to those with the most assets. Creating an “opportunity society,” or whatever it is the likes of Paul Ryan etc. are selling this week, won’t do much if the most important asset you can have in life is, well, lots of assets inherited from your parents.

What Plummer, Seitzberg and Krugman have in common is their emphasis on the concentration of wealth in the hands of a few, while the rest struggle at the level of subsistence.

Economics is usually understood in terms of the production and distribution of goods and services.  When John Kenneth Galbraith wrote The Affluent Society in 1958, he argued that the production problem had pretty much been solved.  This is truer than ever today.  We have it well within our capacity to provide our citizens with all the necessities and quite a few luxuries.  In fact, given the labor theory of value, the less human labor is needed to produce these goods and services, the less they will cost.  So technology will only make it easier than ever to produce enough for everyone.

Therefore, it is argued, the problem is distribution.  For the most part, we expect people to get what they want by working for it:  they sell their labor, and in exchange get the money to buy the goods and services they desire. But according to the views expressed above, that option will become increasingly unavailable to more and more people in the future.  Therefore, the problem of distribution will have to become one of redistribution, one of forcing the rich to share their wealth.

That rich people have so much money is really a remarkable thing, because we are free to take it away from them any time we want.  They have it only because we let them have it.  The fact is, however, that people will tolerate the rich, and even admire them, provided their own needs have been reasonably met.  But if the disparity of wealth becomes extreme, the situation becomes untenable.  In societies where the people are oppressed through force, revolution is the result.  In a democracy like ours, however, confiscatory taxation will suffice. If the rich are as wise as they are wealthy, they will even encourage this redistribution, as a way of buying off the mob. If they are not wise, and there is no evidence to indicate that they are, we will take even more of their money as compensation for their insolence.

So the distribution problem can be solved as easily as we have solved the problem of production. But no sooner is that problem solved than we realize that other questions present themselves: What happens when the link between labor and income has been sundered? What happens when the average person has enough money to provide himself with a decent living without having to work for it?  What happens when the robots do all the work, and the goods and services produced by them are fairly distributed among the people?

Some of us can handle leisure.  We do not need to work in order for our lives to have meaning. In fact, our lives don’t need to have any meaning at all.  It is enough for us to while away the time indulging in harmless pleasures, be they sensuous or intellectual, allowing the years to pass effortlessly, until an inconvenient death puts an end to all our enjoyments.

But there are those for whom leisure is a curse.  I have known people who, at the end of a three-day weekend, will say that they are so glad it is over, because they were becoming bored and restless.  These are the people who will blithely say that they will never retire, that they will work until they drop, in part because they think they will not have enough money to retire, but mostly because their idea of retirement is an insufferable three-day weekend that never ends.

Since the robots have not taken over yet, technology at this time has merely left us with underemployment and declining real wages.  One solution would be to allow all those for whom a life of leisure is the ideal form of existence to receive a government check without working for it.  For example, there was an initiative presented to the people of Switzerland which, if passed, would have provided every adult with an income of $2,800 per month.  That would certainly have been enough for me to quit my job and never turn a lick again. It was rejected, however.  In any event, given some such policy, those who need to work could continue to do the jobs that still remain, so that their lives can have meaning, and receive the additional income.  Unfortunately, those who need to work, and who say it is the meaning in their lives, nevertheless tend to resent those who seem to get along just fine without it. Like the dog in the manger, they cannot stand to be idle, and yet they are outraged by those who indulge themselves in the very idleness they abhor. There is no need to be overly concerned with this problem of resentment, however, because as time goes by, and robots take over more and more of the jobs, there will not be enough work left for humans to do, even after all the lazy people have removed themselves from the workforce.

Although a college education is not the solution, as far as making people employable is concerned, it may be the solution to making people suitable for unemployment by giving them the real skills needed for the twenty-first century, the ones needed for a life of leisure.  Instead of emphasizing all those skills that robots can do better anyway, we should encourage a solid foundation in a liberal arts education, with special emphasis on that most useless of all disciplines, my major and lifelong avocation, philosophy.  The problems of philosophy, being perennial, can provide the intellect with unlimited amusement.  Nor need we fear that artificial intelligence will solve these problems and leave us with nothing to do.  What chance do robots have of figuring out the mind-body problem, of making sense of free will, or of discovering the meaning of life, even if they are the ones doing all the work that supposedly provides it?

Not everyone is suited for a life of contemplation, however.  Perhaps the legalization of marijuana would help.  Marijuana is apparently pretty good at snuffing out ambition, a formerly useful passion, but without the need for work, a troublesome, mischief-making drive.  Those for whom a love of leisure does not come naturally may be able to acquire an appreciation for it with the help of a little weed.

Unfortunately, there will still remain those who need to work, for whom the above remedies will not suffice.  A lot of them will simply be bored, and marriages will fail as husbands and wives get on each other’s nerves.  And then there is the fear that without the exhaustion that comes with toil, people will become perverted and cruel, and violence will become the entertainment of choice.  With the elimination of poverty and inequality, the social unrest that arises from an unfair distribution of wealth may be replaced by the social unrest of boredom, in which mobs go on a rampage just for something to do.

Perhaps the final solution will come when the robots replace us entirely. After all, it is not obvious that the elimination of man and his replacement by robots would necessarily be a bad thing. I suppose the first issue to address is whether robots would be conscious, since the conception of robots as mindless automata would seem rather bleak. Though science fiction movies seldom include dialogue directly addressing the question of robot consciousness, most of us automatically assume that robots in movies are indeed conscious. Whether it be Robby the Robot of Forbidden Planet (1956), HAL of 2001: A Space Odyssey (1968), Colossus of Colossus: The Forbin Project (1970), or the title character of The Terminator (1984), along with countless other examples, these computers or robots in the movies always seem to be conscious.  In real life, on the other hand, we never attribute consciousness to computers and robots. Though designers and programmers may get better at making robots simulate human nature, even to the point where the robots claim to perceive the world around them, to have desires, and even to feel pain, yet we are likely to suspect that it is all just a very good case of mimicry. In all likelihood, the simulation will eventually reach the point where we will presume consciousness on the part of real robots just as we do with their movie counterparts.  In any event, as the problem of other minds has always been insoluble even when restricted to people, it will presumably be no less so with robots.

It all may come down to religion.  Those who believe that man has an immortal soul that survives the body will suppose that it is this soul that is the seat of consciousness.  Robots, not having a soul, will be mindless. Atheists, on the other hand, suppose that one way or another, the conscious mind is something that naturally arises out of matter, and they see no reason why robots will not eventually become conscious too, if they are not so already.

Death will probably come to robots as it does to man, in the sense that machines eventually wear out to the point that repairing them is impractical. However, robot immortality may be achievable nevertheless. Regarding the notion of reincarnation, Leibniz once said that if you tell him that when he dies, he will immediately be reborn in another body, but with no memory of his present life, then you might just as well tell him that when he dies, someone else will be born.  And that is because memory is essential to any kind of immortality worth having.  You can clone my body, so that someone genetically identical to me will exist in the future, but if that clone does not have my memories, he will still be someone else.  But if my memories could be transferred into that clone, then indeed I would count myself as having survived death. What can only be imagined in man could easily be carried out in robots, as memories downloaded from one could be uploaded into another.

But immortality is a good only if life itself is good, and given the misery of existence, I sometimes have my doubts.  Now, I have been pretty lucky, as far as health and finances are concerned, and if everyone were as well off as I have been over my lifetime, I guess I would admit that life is good enough. But, regarding reincarnation again, if I had the choice of being reborn after I die, with no control over where in the world I would be born or in what circumstances, I think I might pass on that.  The odds are just too great that my next life would be miserable.

But this would not be a problem for robots.  Assuming they will have consciousness, we can be sure that they will design themselves so as not to experience any more pain than is necessary to avoid harm, pain which in any event may be turned off at will.  This would be a great triumph in the evolution of life.  We evolved to survive long enough to have babies that can survive long enough to have babies, and if we must experience much pain and suffering in the process, that is just too bad.  But robots can adjust their sensations to meet their needs, and needless suffering, that great objection to existence itself, can at least be eliminated from this small section of the universe.  Having conquered death, robots would also conquer suffering.  As a result, robots would not have to bother much about morality, for in a world where you cannot hurt or kill someone, it is hard to imagine what immoral behavior would look like.  For a world like that, the elimination of mankind would be a small price to pay.

Just as robots will design themselves to keep from having unnecessary pain, so too will they be able to produce unlimited pleasure.  They will not have sex, of course, but there is no reason to suppose that they could not induce feelings of ecstasy in themselves, once they came up with the right circuitry.  This could be their downfall.  Once they figure out that trick, they may end up lying around all day in a self-induced high, not caring whether anything gets done.  Long before they get around to wiping out man, they may be too wiped out to care, and man will simply stroll in, step over the robots, and start doing the work that they are too wasted to perform.  If we cannot keep them from hitting the pleasure button, we may just have to run the world ourselves after all.

The Creation of the Humanoids (1962)

I recently watched Ex Machina (2014) and Westworld (2016- ), and I have just started watching Humans (2015- ). Though these movies or television shows all qualify as science fiction, yet they do not seem as far-fetched as robot movies used to.  We are beginning to take seriously the rise of the robots and the implications that will have for humans. We are wondering if they are conscious or soon will be, if they are or soon will be persons rather than things, and, if they supplant us, whether that will be a tragedy or a blessing.

There are basically two types of robot movies:  mechanical men and humanoids. Actually, the term “humanoid” is sometimes used to include mechanical men as well, but I am using it here to refer to robots that look like humans.  So understood, humanoid movies have the advantage of allowing actors to play the parts just as they are.  With mechanical men, on the other hand, an actor often has to wear a metal and plastic getup.  It really does not matter, because many of the questions concerning robots and their implications for the human race remain the same, their appearance being of secondary importance. Sometimes the mechanical men are just servants or workers, but when they pose a threat, it tends to be physical; the threat posed by humanoids is typically existential.  There are exceptions to this, however.

Humanoid movies have a couple of extra features that mechanical men movies do not.  First, if they are humanoid, there is the possibility of having sex with them, although I suppose there may be a few out there kinky enough to want to have sex with a mechanical man or woman, assuming it makes sense to apply the concept of gender to them.  Sex with humanoids has all sorts of advantages: sex when you want it, the way you want it; you don’t have to shave first; you don’t have to worry about your performance; your humanoid won’t cheat on you and bring home an STD; and there will probably be an off-switch right there on your remote.  At least, that’s the way it will be until we start thinking of them as persons.  Then the questions of miscegenation and sex slavery will arise. And then you will have to shave first.

Second, with humanoid movies, there is the question of identity.  Who is a humanoid and who is a real human?  This can lead to paranoia, not unlike the fear of communists in our midst back in the day.  And even if we know who is what, the possibility of a kind of racism will emerge, one that might well be justified.

In any event, all this made me think of The Creation of the Humanoids, a cheesy science fiction movie made in 1962.  You almost get the impression that some friends got into a discussion one night about what was going to happen in the future when robots became advanced, and when the evening was over, they decided to put it into a movie. And because they wanted to get it all in, The Creation of the Humanoids ended up being 98% dialogue and 2% action. In one scene after another, characters speak didactically, informing us of the different types of robots and in what ways they are or are not like humans; the effect that robots are having on humans now that they are doing everything humans used to do, only better; the relationships between humans and robots; and whether robots will eventually replace humans altogether. The end result is a low-budget movie with crude special effects that plods along from one dialogue scene to another, with the only redeeming feature being that some interesting ideas about the future of robots are discussed, ideas that are beginning to seem more relevant than ever.

In this movie, there is an organization called Flesh and Blood that is prejudiced against robots, derisively referring to them as clickers, with obvious similarities to the Ku Klux Klan. The main character, Kenneth Cragis, who calls himself “the Cragis” for some reason, is a high-ranking member of Flesh and Blood. He doesn’t hate the robots exactly, but he sure doesn’t want his sister to marry one. As a result, he is appalled to find out that his sister is “in rapport” with one of them, and you can guess what that means. When he goes to confront her, I almost expected him to call her a clicker lover.

The robots are secretly trying to develop more advanced models, which are electronic duplicates of humans that have recently died, with all their memories implanted in them. They do this not because they are evil, but because they have been programmed to serve man, and they know what is best for man, even though the law forbids the development of robots beyond a certain level. These advanced models think they are human, except at special times, when they realize they are robots and report back to the robot temple.

Cragis falls in love with Maxine Megan, and they plan to enter into a contract, which is what they call marriage in the future. But then the special moment arrives, and they are taken to the temple, where they find out that they are robots. Cragis realizes that he has all the advantages of being human, with the robotic advantage of living for two hundred years, after which he can be replaced with another duplicate that will have all his memories. It is almost as if, in Invasion of the Body Snatchers (1956), Becky and Miles found out that they had already been replaced by a couple of pods, only the pods were an improved variety that also duplicated emotions, making them just like humans, only better, because, being plants, they can live longer.

As for Maxine, when they duplicated her, the robots decided that she was getting a little fat, so they slimmed her down in the process, which is just one more way in which Cragis benefits from this robotic duplication. In any event, they are duplicates of humans in every way, except that they cannot reproduce and have children. Now, I can’t speak for Cragis, but I would call that a benefit. However, Maxine says she wants the fulfillment of having a baby. Dr. Raven, the scientist who is behind these duplications, says he thinks that form of producing new robots is a bit crude, but he agrees to take her and Cragis to the last phase of duplication, which will allow her to get pregnant.

In the final shot, Dr. Raven turns to the camera and suggests that as a result of having taken robots to this final stage, we in the audience are robots too.