The Clean Code Blog

by Robert C. Martin (Uncle Bob)

Drive me to Toronto, Hal.

24 July 2017

I keep reading articles in the news about the imminent inevitability of driverless cars. The financial news is all abuzz over the idea. So are the techies. The predictions are that truck drivers, cab drivers, and Uberists will all be out of work in the next five years.

I have one word for all these predictions: Toronto!

Do you remember when IBM Watson played, and won, Jeopardy? This was back in February of 2011. The achievement was dramatic. Watson so vastly outperformed the human competitors that it was really kind of sad. Afterwards, Ken Jennings, one of Watson’s opponents who had previously won 74 games in a row, acquiesced by welcoming “our new computer overlords.”

But as remarkable as the IBM achievement was, it wasn’t without its embarrassments. And the embarrassments were telling. Very telling. The mistakes that Watson made were not the kind of mistakes that a human would make. Indeed, the mistakes were eyebrow-raising – and given the implications perhaps even hair-raising.

One such event occurred when the contestants were asked to name a US city that had one airport named after a WW2 hero, and another airport named after a WW2 battle. Think about this for a second. Walk through the top three cities in the US. New York? No, JFK, LGA, and Newark don’t fit. LA? No, LAX, John Wayne, and Ontario don’t fit. Chicago? Aha! O’Hare (for WW2 ace Butch O’Hare) and Midway (for the Battle of Midway)! That’s the one. So what US city did Watson choose?

Toronto?????

Now, of course, there was a reason that Watson chose Toronto over Chicago. Watson was, after all, a computer; and computers always have absolutely discrete and unambiguous reasons for what they do. A series of if statements comparing weighted values through a complex tree of associations yielded a final, definite result. So there was certainly a reason. A good reason.
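To make the point concrete, here is a toy sketch of the kind of weighted-evidence scoring described above. This is not Watson’s actual algorithm; the candidate names, evidence features, and weights are all invented for illustration. The point is only that such a scorer always produces a perfectly definite reason for its answer, even when the answer is absurd.

```python
def score(candidate, evidence, weights):
    """Sum the weighted evidence features that fire for a candidate."""
    return sum(weights[feature] * value
               for feature, value in evidence[candidate].items())

# Hypothetical evidence strengths per candidate (0.0 .. 1.0), invented
# for this sketch.
evidence = {
    "Chicago": {"airport_hero": 0.9, "airport_battle": 0.9, "corpus_frequency": 0.3},
    "Toronto": {"airport_hero": 0.2, "airport_battle": 0.1, "corpus_frequency": 0.9},
}

# If the weights over-reward a spurious feature (say, how often the
# city's name co-occurs with the clue's words in the training corpus),
# the "reason" for the final answer is absolutely discrete and
# unambiguous -- and absolutely wrong.
weights = {"airport_hero": 0.3, "airport_battle": 0.3, "corpus_frequency": 1.0}

best = max(evidence, key=lambda c: score(c, evidence, weights))
print(best)  # with these invented weights: Toronto
```

Chicago scores 0.84 here; Toronto scores 0.99. The machine can cite its reason to four decimal places, and the reason is still supremely stupid.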

I don’t know what the details of that reason were. I’d like to know because I think the answer would be interesting from a technical point of view. On the other hand I don’t much care what the reasons were; because whatever the reasons, the answer that Watson gave was supremely stupid.

No human of moderate intelligence and education would have made that mistake. No such human could understand another such human making that mistake. Indeed, any human who insisted on that answer, as Watson surely would have, might very well be deemed legally insane.

So here’s the dilemma: Watson outperformed the human Jeopardy contestants by a significant margin. The claim has been made that driverless cars will outperform humans by a significant margin too. Driverless cars will decrease the accident and fatality rates. Driverless cars will make the roads safer. Those are the claims.

But the inevitable tragedy will eventually occur. We can imagine it. Perhaps, one day, a driverless car will run down a two-year-old who strayed into the street.

The car will have had a good reason to kill that child. The car, after all, is a computer; and a computer always has an absolutely discrete and unambiguous reason for what it does. And, believe me, everyone is going to want to know the reason that the little child had to die.

Imagine the courtroom scene. The distraught parents, the angry press, the subdued lawyers representing the company that made the car. The car’s computer is on the stand. It is about to answer the question. There’s a hush as everyone leans in. The prosecutor pointedly asks: “Why did that child have to die?” And the computer, parsing through all the data in its memory, running through a chain of if statements comparing all the carefully weighted values, finally, and definitively answers: “The reason was: Toronto.”