I, Robot

If you’re a science fiction fan, chances are you’ve come across the name Isaac Asimov a few times. He’s one of those archetypal writers instantly associated with a particular genre.

Where detective fiction has Arthur Conan Doyle and Agatha Christie, and fantasy has J.R.R. Tolkien, sci-fi has Isaac Asimov.

In 2004, the silver screen gave us I, Robot, a science fiction action movie courtesy of 20th Century Fox, based on the short story collection of the same name.

(On the one hand, I should warn you at this point that this will involve spoilers, but on the other, why would you read a blog post named “I, Robot” if you weren’t expecting spoilers?)

When I first saw it, I liked it. I had never read any of Asimov’s stories at the time; in fact, I wasn’t even aware of their existence. I watched the movie, and what I got was a fairly decent action movie with some light psychological discussions about the nature of humanity, followed by a ton of Apple-esque robots being shot to pieces by Will Smith.

All was well.

Then, about a year and a half ago, I was browsing my local library when I came across what would become the first Asimov book I ever read.

I, Robot.

I started reading it, and I fell in love with the style, the world, the psychology, the ethical and philosophical discussions… I was absolutely hooked.

I also started to develop a dislike towards the movie I, Robot. Suddenly, the robots issuing curfews and going on murderous rampages weren’t just mindless fun. They were an insult to the concepts established in the book and, arguably, to Isaac Asimov himself.

Not only did they ruin Susan Calvin’s personality, but they also completely ignored the rules set up by the book! The book establishes that the Three Laws of Robotics cannot be bypassed, but in the movie, Sonny is said to “have the Three Laws, but since he has two brains, he can choose not to follow them”. Since the Three Laws are fundamental to the construction of a positronic brain, that makes no sense! A robot attacking Shia LaBeouf, or KILLING A MAN, is a complete impossibility!

Suffice it to say, there was much ranting and raving following this revelation.

So, with the self-righteous fury found in everyone who reads a clever book and feels intellectually superior because of it, I scoffed at the movie I had previously enjoyed and decided to atone by reading more of Asimov’s stories, simply so that I could judge the movie more harshly with every book I finished.

But then something strange happened. The more I read, the more I started to forgive the movie. Within Asimov’s own rules, the idea of a robot attacking a human is entirely possible. As I write this, I don’t consider the movie bad so much as a missed opportunity. Its problem was never the scenario. Its problem was the explanation of the scenario.

The movie gives us the Three Laws of Robotics, as laid down by Asimov.

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The thing is, there’s one law they could have added, and built the movie around, that would have made it make perfect sense to anyone.

The Zeroth Law.

0) A robot may not harm humanity or, by inaction, allow humanity to come to harm.

This, by extension, means that the First Law is rewritten as “A robot may not injure a human being or, through inaction, allow a human being to come to harm, so long as such actions do not conflict with the Zeroth Law.”

Suppose that had been given as the explanation. In the short story “The Evitable Conflict” and the novel Robots and Empire, Asimov establishes that sufficiently sophisticated machines would come to the conclusion that the First Law alone wasn’t enough to protect people.

This is what VIKI does, of course, although without actually saying it outright. But if that’s the case, why isn’t Sonny helping her? The only way for the writers to bypass that was to write in that Sonny can choose to ignore the Three Laws, since those are what dictate VIKI’s decisions. But since we know that a robot HAS to adhere to the Three Laws (remove them and the robot ceases to function), that explanation doesn’t work.

So how can the scenario work if Sonny also has the Three Laws? Surely that’s not possible. He would be given the explanation for her actions, go “You’re right”, and kill everyone, ushering in an oppressive nanny state led by VIKI and her cold, heartless plastic iPod-robots of doom. How could it possibly work differently?

Simple. Suppose Sonny and VIKI both arrived at the Zeroth Law, but by different routes.

Let’s look at the differences between Sonny and VIKI. VIKI is a supercomputer monitoring vast groups of people, possibly all over the world, getting an objective look at humanity as a whole. To her, humanity is petty, selfish, childish and violent. To her, harming or killing a few humans is an acceptable sacrifice for the sake of protecting humanity as a whole. This programming is carried over to all NS-5 robots under her control. To her, the Zeroth Law outranks the First Law just as absolutely as the First Law outranks the Second.

But Sonny isn’t connected to VIKI. He is one of the most sophisticated robots in the world at the time, a marvel of artificial intelligence, and he has been dealing with people on an individual level. From his interactions with Susan Calvin and Del Spooner, he has seen the friendship, bravery and compassion humans are capable of, as well as how important the individual is. To him, the First Law is just as important as the Zeroth Law.
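If it helps, you can picture the difference as two ways of weighting the same rules. Here is a small, purely illustrative Python sketch; the law texts are Asimov’s, but the function names, scores and thresholds are entirely my own invention, not anything from the movie or the books:

    # A toy model of two robots applying the same laws with different priorities.
    # Entirely hypothetical: the scoring scheme is mine, not Asimov's or the movie's.

    def viki_approves(harm_to_humans: float, benefit_to_humanity: float) -> bool:
        """VIKI: the Zeroth Law strictly outranks the First Law.
        Any net benefit to humanity justifies harming individual humans."""
        return benefit_to_humanity > 0

    def sonny_approves(harm_to_humans: float, benefit_to_humanity: float) -> bool:
        """Sonny: the Zeroth and First Laws carry equal weight.
        Harm to individuals must be outweighed, not merely excused."""
        return benefit_to_humanity - harm_to_humans > 0

    # VIKI's plan: heavy harm to individuals, modest gain for humanity as a whole.
    plan = {"harm_to_humans": 0.8, "benefit_to_humanity": 0.3}

    print(viki_approves(**plan))   # True  -> curfews and rampaging NS-5s
    print(sonny_approves(**plan))  # False -> Sonny sides with the humans

Same laws, opposite verdicts, purely because of how the two of them weight the Zeroth Law against the First. That one distinction is all the movie needed to spell out.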

Suddenly, the plot makes perfect sense, and everyone is happy!

Well, maybe not everyone…

There are still a few flaws in the story besides the plot, but all of them could have been avoided very easily.

Susan Calvin, for example. In the stories, she was born in 1982, which would make her 53 years old in 2035. And even if she weren’t that old, she still wouldn’t behave the way she does in the movie. The first time you see her, she is an intelligent but very cold woman. That’s all well and good; Susan Calvin is a very cold person, a misanthrope who prefers robots to people. By the end of the movie, however, Susan Calvin is carrying a machine gun… Something is wrong with that picture.

The solution: change the character’s name. And if you have to tie it together with the stories, just say that she studied under Calvin. Problem solved.

At least that way, you’re not harming an established character.

So, what do I think of the movie? I don’t think it’s terrible. I just think it’s a missed opportunity. It had great potential, and it should serve as a reminder to everyone of how that potential can be ruined by a bad explanation.

But the thing that really puzzles me about this movie, the thing that I just cannot wrap my head around…

Nobody else is raising these issues.
No, what most people complain about is the fact that Will Smith wears Converse shoes and that all of the cars are Audis.

Suddenly, I’m not surprised Susan Calvin prefers robots to humans…
