Are We What We Do?

Check In: Yesterday, I angsted over trying to write about the election and got overwhelmed with outrage. Last night, Stephen Colbert explained what I was trying to say (in tonight's Wørd!), and did it way better than I could have. Thanks, Mr. Colbert!

Spoiler Warning: I discuss minor spoilers for the 2014 film RoboCop while also discussing story elements that didn't occur in the movie at all.

I seem to have the same reaction to RoboCop 2014 that I did to Total Recall 2012: they could have done so much… and didn't.

Officer Murphy (Joel Kinnaman) is in control of his actions in combat situations. But when they put him in a combat training gauntlet, he is considerably slower than the EM-208 (a man-sized combat android) that runs entirely on threat-response software. OmniCorp Executive Officer Raymond Sellars (Michael Keaton) decides this is the standard by which Murphy's performance will be measured, and that Murphy's humanity is getting in the way.

So brain surgeon / neurologist / good-guy scientist Dr. Dennett Norton (Gary Oldman) rewires Murphy's brain so that the combat software takes over in hostile situations while Officer Murphy believes he is making the decisions. From Murphy's perspective, he is choosing to do what, in reality, the software does for him.

RoboCop 2014 totally should have been about that. With this surgical operation, a Dickian identity crisis is foreshadowed in classic Chekhovian form. But then the movie doesn't address it at all. Officer Alex Murphy has RoboCop adventures, but none of them confront Dr. Norton's work-around. The brain-bypass matters to the story only because the people of the US don't like the idea of being policed by autonomous machines, and Murphy is presumably still human enough.

But what if agency, not identity, were the pivotal point of the story? What would that look like?

Officer Murphy is re-introduced to Detroit by OmniCorp, and so begins the clean-up-the-town montage. This is when the combat software starts making choices that Officer Alex Murphy would not. At first, Alex notices it as a peculiarity: You know, maybe I was a little hasty in shooting that crazy transient. Or: The assault victim was distraught and holding, not brandishing, a gun. Why did I decide she was a threat?

A false-positive threat ID is lampshaded at the beginning of RoboCop 2014. An ED-209 stationed in (squalid) Iran slays an aggressive teen attempting to assault the robot (ineffectively) with a kitchen knife. The ED-209's threat-assessment software tags the youth as armed and hostile, ergo a valid target for its twin .50 caliber Cobra Cannons. Likewise, Murphy's threat-identification software is still too simple. It produces false negatives: some threats it doesn't detect until too late, costing lives. And it produces false positives: non-threats and uncertain threats get identified as threatening, at which point the system goes into combat mode prematurely and acts to neutralize.
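To make the two failure modes concrete, here's a toy sketch in Python, entirely my own invention and nothing from the film, of how a one-rule threat classifier fails in both directions:

    from dataclasses import dataclass

    @dataclass
    class Subject:
        has_weapon: bool     # a weapon is visibly present
        brandishing: bool    # the weapon is raised at someone
        is_hostile: bool     # ground truth the software can't see

    def naive_threat_id(subject: Subject) -> bool:
        """One rule: anyone holding a weapon is a valid target."""
        return subject.has_weapon

    # False positive: a distraught victim holding (not brandishing) a gun.
    victim = Subject(has_weapon=True, brandishing=False, is_hostile=False)
    assert naive_threat_id(victim) and not victim.is_hostile

    # False negative: an unarmed but genuinely hostile subject,
    # invisible to the rule until it's too late.
    attacker = Subject(has_weapon=False, brandishing=False, is_hostile=True)
    assert not naive_threat_id(attacker) and attacker.is_hostile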

Eventually, Murphy encounters a child in the park playing with a (real, but unloaded) gun. It's obviously a moment to negotiate with the tyke to relinquish his plaything, but as the kid brandishes it at Murphy and pew! pew!, Murphy whips out his taser and zaps the kid. Down the lad goes. An ambulance is summoned, but the boy gets unlucky and fails to recover. Why the heck did I do that? Why, indeed, Alex Murphy. Why, indeed.

Internal Affairs chastises and fines Officer Murphy, but it's decided that it was an honest mistake. No human officer could have chanced that the gun was unloaded, or that the boy didn't intend to use it. Just do better next time, Murphy.

Alex goes to Dr. Norton: Something's not right. I'm getting sloppy. Maybe Norton even comes clean and explains to Murphy that the software (now getting fixed) shot the kid. It was a software error, Alex. You didn't pull the trigger. The machine did. Still, Murphy feels super guilty, because guilt doesn't always listen to reason, and damn it, that boy did not need to die.

This is where the machines-bad, humans-good message (implied but never actually addressed throughout RoboCop 2014) could get some traction. Threat-response software can be expanded to include nuance (e.g. proportional response based on a subject's weapon and target of aggression). But in the spirit of Gödel's incompleteness theorems, any sufficiently complex or open logical system has blind spots: there will always be nuances the decision tree doesn't address. All we can hope for is a decision tree whose failures are rare. Preferably rarer than human bad decisions.
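Here's what that looks like in miniature, again a sketch of my own devising rather than anything canonical: each branch we add handles one more nuance, and the cases nobody anticipated fall through to a default that may be exactly wrong.

    def respond(weapon: str, target: str) -> str:
        """A decision tree with some proportional-response nuance
        bolted on. Purely illustrative; not from the film."""
        if weapon == "none":
            return "negotiate"
        if weapon == "knife" and target == "armored robot":
            return "disarm"    # the ED-209 scene, handled sanely this time
        if weapon == "gun" and target == "civilian":
            return "lethal force"
        return "taser"         # catch-all for everything unanticipated

    print(respond("knife", "armored robot"))    # disarm
    # A dead-man switch inverts the usual logic: neutralizing the subject
    # *detonates* the bomb. No branch knows that, so the catch-all fires.
    print(respond("dead-man switch", "crowd"))  # taser -- catastrophically wrong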

This would lead perfectly into our finale: a target who's an obvious threat, but clutching a dead-man switch. And man, this bomb is really big. Alex realizes he needs to negotiate with the guy, or distract him so someone can get close enough to neutralize the switch. But his threat-response software just wants to kill the target. Robo battles internal demons and external demons simultaneously.

That isn’t how RoboCop 2014 went, but it would have been so much better if it did.

Edits: Due to human error.
