Relax: Automation isn’t coming for your job
By Justin Tenuto
For the past few years, the drumbeat of think pieces about automation taking your job–yes, your job–has gotten both louder and more incessant. Smart people like the folks at Oxford Martin and Gartner forecast more and more jobs being gobbled up by our mechanical overlords, and President Obama made a passing reference to the threat in his otherwise upbeat final State of the Union address. But anxiety about technological unemployment has been around forever; you can actually go back to the ancient Greeks here, or to the English Luddites, who smashed the weaving machines they felt were destroying the textile industry (fun fact: a related, likely apocryphal story has workers jamming machinery with their wooden shoes, called “sabots,” which is one popular origin story for the word “sabotage”). The point is, societies have always weathered technological unemployment. But even as automation continues–and make no mistake, it absolutely will–don’t buy into the Chicken Littles who say your job–yes, your job–is next. Automation isn’t coming to take your job. Rather, it’s just coming to take the most boring parts of it.
How can we be so sure? Because technology has been doing this forever. Driving once meant climbing onto a steam-powered tricycle before it meant using a hand-crank to start your engine. But as time wore on, drivers ceded different parts of vehicle control to technology. Some of these changes are so subtle you may not even think about them–take automatic windows and transmissions, for example–while some recent advancements feel even more like relinquishing control–cars that can parallel park for you. These are all improvements that required different technologies and, of course, with Moore’s Law still marching on unabated, you can bet the driving experience will become more automated, not less.
Which brings us to the logical conclusion of self-driving cars. At what point do we all just surrender to the car itself? And does that mean that truck drivers, cabbies, delivery men, mail carriers, and all the other jobs that require a human behind the wheel suddenly just wink out of existence?
Not really. First, you have to consider some of the particulars of those jobs. How does the self-driving mail truck deliver your mail? Does it have a robot driver who gets out and physically brings your overnighted package to your door? Can it make the judgment call about whether to leave it on your stoop? Can the truck protect itself from thieves while it’s delivering your mail, or can you just follow it around and snatch everyone’s Amazon shipments?
You get the idea.
But let’s get back to the real question of self-driving cars becoming an actual reality. Plenty of smart companies are investing heavily in the idea, but creating a fully automated self-driving car is still a ways off. The reason, in the end, boils down to data. Specifically: training data. A car needs no training data to maintain cruise control or roll your windows down for you, but it needs scads of it to parallel park. It needs to understand how close it is to the cars in front of and behind it, the right trajectory for backing toward the curb, and on and on. Machines “learn” things because they can find patterns in massive amounts of training data–in this case, parallel parking scenarios. A self-parking car has been trained with those scenarios. But again, even if you happened to fail your driving exam because you flattened an orange cone when you were sixteen, parking is far, far more rudimentary than driving. Driving requires you to understand and quickly judge a mammoth set of inputs, from knowing the best route to gauging weather conditions to judging whether the debris on the road is a flattened cardboard box or a wayward box-spring mattress. The list goes on.
Now, most trips in your car aren’t that difficult. You pull out, you head to the store, you come back home. A machine can analyze and learn from hundreds of thousands of these trips and make solid judgments with high confidence about how to navigate them. That’s because there is (or at least will be) plenty of training data for the simple errand to your local grocer. Again, the more training data a machine has been fed, the better able it is to make good decisions. But what about the edge cases on your drive? What if there’s that box-spring we mentioned above? How about a detour? Is the thing rolling across the street an aluminum can? A soccer ball? A ten-year-old chasing a soccer ball? Those are fairly novel scenarios. They simply don’t happen that often. And machines aren’t “smart” in new situations. They don’t have confidence because they haven’t been trained on anything like them. And if the machine doesn’t recognize a new input, it can’t make a safe and sound decision. In that case, the decision is left up to the driver.
This is called human-in-the-loop machine learning, and it’s something we’ve gone on about from time to time. Continuing our use case here, it’s certainly conceivable we’ll see a car that can make intelligent, autonomous decisions a large majority of the time in the not-too-distant future. But at what percentage would you be okay relinquishing control–entirely relinquishing control–to the machine itself? 80% would be catastrophic. 95% means five trips out of a hundred could be dicey. Even 99% leaves too much to chance. Accidents happen to good drivers because something strange and unusual happens: the pizza guy running a red light, a pedestrian darting to catch a bus, an icy road where someone fishtails. And those are the exact types of scenarios a machine is unlikely to have seen. Meaning those are the exact types of scenarios a machine won’t have data from which to make smart choices. Meaning you’d better be able to take the wheel and steer.
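To make that concrete, here’s a minimal sketch of the human-in-the-loop pattern. Every name and number below is invented for illustration, not pulled from any real driving system; the point is the decision shape: the machine acts on its own only when its confidence clears a high bar, and everything else gets handed to a person.

```python
# A minimal human-in-the-loop sketch. All names and numbers here are
# hypothetical; the point is the decision pattern, not the model.

CONFIDENCE_THRESHOLD = 0.99  # how sure the machine must be to act alone

def classify_obstacle(sensor_input):
    """Stand-in for a trained perception model: returns (label, confidence)."""
    if sensor_input == "flattened cardboard box":
        return "harmless debris", 0.997   # seen thousands of times in training
    return "unknown object", 0.62         # novel input, so low confidence

def decide(sensor_input):
    label, confidence = classify_obstacle(sensor_input)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"machine handles it: {label}"
    # An edge case the model hasn't been trained on: hand over control.
    return "low confidence: human takes the wheel"

print(decide("flattened cardboard box"))  # machine handles it: harmless debris
print(decide("box-spring mattress"))      # low confidence: human takes the wheel
```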
This is what a lot of forecasters miss when they trumpet the new age of automated everything. Yes, automation is coming, and it’s happening faster and faster. Yes, some of that automation will have ramifications for real jobs held by real people. This has always been the case. But even with something like automated sports writing, all the machine is doing is analyzing rafts of gamers (a.k.a. stories written about the actual game) and spitting out clichés. It’s not pressing the coach who called a terrible play for answers or getting clarity about a scuffle between two marquee point guards. That’s because it simply can’t.
And that’s just a simple example. Because for as smart as machines will get–and make no mistake, whupping Ken Jennings at Jeopardy! was just the beginning–there are things they simply can’t do (like ask probing questions about a game they just watched). They can’t yet react to the exact sorts of situations humans are innately good at: split-second decisions based on a lifetime of experience, decisions that sometimes happen without us even knowing we’ve made them. They can’t yet deal with novel scenarios, situations where the training data itself is scant or nonexistent. If a car hasn’t seen enough data to distinguish a cardboard box from a person and sees both as simply an object, would you cede control to it?
That’s to say nothing, of course, of jobs where a human touch is not just important but a veritable prerequisite. Yes, machines will eventually be able to diagnose diseases based on pre-existing data (X symptoms = Y disease), but do people want machines to deliver the news that they have an incurable disease? Extrapolate that to anything from a state senator to a preschool teacher to the person you call when a mysterious charge appears on your cell phone bill.
Now, we’re getting to the heart of the thing: machines taking control of the parts of our jobs that, frankly, aren’t much fun. Even something as simple as this blog post has a sort of ersatz machine editor–there’s a squiggly red line underneath a word I misspelled up above, for example–but the machine isn’t going to say “hey, this is too long, wrap it up.” That’s a human judgment. But it is taking a fairly mundane and annoying part of the job (i.e. spell-checking) and doing it for me.
That’s where automation and machine learning tools will increasingly be leveraged in the near future: not pushing careers into obsolescence but giving all of us more time to work on the parts of our jobs where humans excel. Machines will handle the more annoying bits, like spell-checking and coming up with a smart route from point A to point B, while people handle the more dynamic tasks, like writing or actually driving.
Now, this is where things get extra interesting: the judgments that machines make best are, as we mentioned, more objective and more commonplace. But once a model is trained and a machine is “smart”, the best way to improve that model is by training it with harder judgments.
Let us explain. Take a fairly typical machine learning practice like sentiment analysis. Even an out-of-the-box sentiment solution will hit 65% or 70% accuracy. That’s because phrases like “I hate this commercial” or “This coffee is sooooooo good!” are really quite easy for a sentiment model to understand. But what about something like “I should hate this commercial. I really should” or “I want to drink this coffee soooooo bad!”? Trickier, right? A person can parse those phrases instantaneously. We don’t need to look at bigrams or thousands of similar judgments: we know because we’ve been dissecting language our whole lives. But if you give a machine learning model stuck at 65% these exact kinds of human judgments, the machine will start to see patterns in them. It will learn that “should hate” is different from “hate” and that “want” in conjunction with “bad” isn’t always negative.
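Here’s a rough sketch of why those human labels help. The handful of training examples below are invented, and a real model would need thousands of them, but the mechanics are the same: once bigrams are features, “should hate” carries a different signal than “hate” alone.

```python
# Toy sentiment sketch using scikit-learn. The training examples are
# invented for illustration; a real model would be trained on far more
# human judgments.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I hate this commercial",
    "I hate this coffee",
    "this commercial is terrible",
    "this coffee is so good",
    "I love this commercial",
    "I should hate this commercial, I really should",  # human-labeled positive
    "I want to drink this coffee so bad",              # human-labeled positive
]
labels = ["neg", "neg", "neg", "pos", "pos", "pos", "pos"]

# ngram_range=(1, 2) adds bigrams, so "should hate" and "so bad" become
# features of their own rather than just "hate" and "bad".
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# With the bigram evidence, the model has a shot at getting this right.
print(model.predict(["I should hate this ad, I really should"]))
```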
This is the same for any machine learning system–the same systems that hysterical pundits forecast will take all our jobs. They won’t, by the way. But they will start handling that 65%. Humans will handle the other 35%. And, over time, by feeding those more difficult judgments back into automation systems, yes, machines will encroach a little further into the workforce and into each of our jobs. But is that really a bad thing? When machines can confidently help doctors make intelligent diagnoses, simply because they can make connections across millions of rare-disease research papers, that’s something to be celebrated. It doesn’t mean doctors will suddenly cease to exist. Rather, it means they can spend more time with their patients or more time doing their own research.
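Put together, that feedback cycle looks something like this. It’s a schematic toy–a lookup table stands in for the model and a stub stands in for the human annotator–but the shape is the point: the machine handles what it’s confident about, humans label the rest, and those hard cases become tomorrow’s training data.

```python
# Schematic human-in-the-loop feedback cycle. Everything here is a toy
# stand-in; the point is the shape of the loop, not the model.

class ToyModel:
    def __init__(self):
        # Pretend this phrase was in the original training set.
        self.known = {"I hate this": ("neg", 0.95)}

    def predict(self, text):
        return self.known.get(text, ("unknown", 0.50))

    def retrain(self, labeled_examples):
        for text, label in labeled_examples:
            self.known[text] = (label, 0.95)  # now a high-confidence case

def ask_a_human(text):
    # Stand-in for a real human annotation step.
    return "pos" if "should hate" in text else "neg"

model = ToyModel()
stream = ["I hate this", "I should hate this, I really should"]

new_training_data = []
for text in stream:
    label, confidence = model.predict(text)
    if confidence >= 0.9:
        print(f"machine handled: {text!r} -> {label}")   # the easy 65%
    else:
        human_label = ask_a_human(text)                  # the hard 35%
        print(f"human handled:   {text!r} -> {human_label}")
        new_training_data.append((text, human_label))

model.retrain(new_training_data)
# Next time around, the once-novel case is handled automatically.
print(model.predict("I should hate this, I really should"))
```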
In other words, automation doesn’t mean mass technological unemployment. It means mass employment augmentation. Job requirements will change, but they’ll change to maximize the skills that people, not machines, are good at. They will require us to learn how to work with machines, not fight them for our jobs. And that’s actually a really good thing.