Computers have made some startling advances lately in things like voice and object recognition. Put them together, and we are beginning to see signs that our glorified typewriters-cum-filing cabinets are turning into the robots and computers of classical science fiction. Which brings us to the obvious question: what will they do to the labor market?
A recent paper tries to answer a more limited question: what percentage of jobs are susceptible to computerization? If that number is small, then there is nothing to worry about. If that number is large, then we have to hope that new human-only jobs will emerge that most humans can do ... or watch income inequality expand to unseen levels.
To make this analysis useful, you need to do three things. First, you need to estimate the limit capability of computers: that is, what they will be able to do once Moore’s Law comes to an end. (For the nerds: Koomey’s Law is just as important in this context. After all, a robot that consumed $500 per hour worth of electricity would not be very valuable.)
Second, you need to identify the current jobs that these hypothetical future computers will be able to replace.
Third, you need a methodology that provides you with an upper bound. Why an upper bound? Because the second thing above pushes your estimates upwards. After all, jobs change ... but any reasonable identification strategy will have to identify computer-susceptible jobs as they are and not as they could be. That already implies significant upwards bias, so in order to get a useful estimate one should run with it. That won’t help much if the percentage of computerizable jobs that you come up with is 98%, but it will help if it is, say, 60% or 40% or 25% ... it will provide a useful ceiling.
OK, so how does the paper stack up?
In terms of the raw theory, not too bad. They take a well-worn Cobb-Douglas production function and replace labor with three labor inputs: perception and manipulation tasks, creative intelligence tasks, and social intelligence tasks.
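To make that concrete, here is a toy sketch of a Cobb-Douglas function with three labor inputs. The exponent values and variable names are mine for illustration, not the paper's estimates:

```python
def output(K, L_pm, L_ci, L_si, A=1.0, alpha=0.3, betas=(0.3, 0.2, 0.2)):
    """Toy Cobb-Douglas production function with capital K and three
    labor inputs: perception/manipulation (L_pm), creative intelligence
    (L_ci), and social intelligence (L_si). Exponents are illustrative
    placeholders; they sum to 1 for constant returns to scale."""
    b1, b2, b3 = betas
    assert abs(alpha + b1 + b2 + b3 - 1.0) < 1e-9
    return A * K**alpha * L_pm**b1 * L_ci**b2 * L_si**b3

# Constant returns to scale: doubling every input doubles output.
print(output(2, 2, 2, 2) / output(1, 1, 1, 1))  # -> 2.0
```

In this framing, computerization amounts to computer capital substituting for the perception-and-manipulation input while the creative and social inputs remain human-only, at least for now.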
In terms of its time horizon, it goes out about two decades, to 2035. “The main challenges to robotic computerisation, perception and manipulation, thus largely remain and are unlikely to be fully resolved in the next decade or two.” (Page 25.) This makes me vaguely unhappy: the big challenges are what happens after that, at the limit. After all, my boy will be my age in 2055 ... and if I live as long as my father did, I will also still be around.
In terms of data ... ugh. They used BLS data to group 702 jobs in terms of the three labor inputs. They then validated that model by using the BLS data to ... predict their own subjective evaluations.
The result is that restaurant cooks and models wind up in the same easily-computerizable category as radio operators and cashiers. Down on the hard-to-computerize side, we have logisticians ranked with the guys who work in boiler rooms. (The latter is probably not what you think.) Both are hard to computerize ... but the latter is much harder. (I should add that I have worked in both fields, albeit very briefly and in one courtesy of the Army.)
Is there a non-intuitive way to test the plausibility of the rankings? Carlos Yu and I think we might have one. The less-computerizable jobs should also be the jobs where employees have more leverage over employers. Technically, you could use the negative log of computerizability as a measure of bargaining power regarding future earnings. In Carlos’ words: “That works fine for data entry and telemarketers, who have -log(.99) ~ zero bargaining power, but given the weight of teaching, counseling, and nursing professions on the low-computerizability end of the scale, it suggests other factors are in play.”
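The arithmetic behind that quip can be checked in a couple of lines. This is our hypothetical metric, not anything from the paper itself:

```python
import math

def bargaining_power(p_computerizable):
    """Toy measure of worker leverage: the negative log of the
    estimated probability that a job is computerizable.
    (A hypothetical metric from this post, not from the paper.)"""
    return -math.log(p_computerizable)

# Telemarketers sit near 0.99 probability of computerization in the
# paper's rankings, so their leverage under this metric is ~0.01.
print(bargaining_power(0.99))
# A job near 0.01 probability scores ~4.6, far more leverage.
print(bargaining_power(0.01))
```

The point of the metric is that it stretches out the low-probability end of the scale, which is exactly where the interesting jobs (teaching, counseling, nursing) cluster, and where factors other than computerizability seem to drive bargaining power.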

In short, an intriguing paper, with what looks like the right methodology ... but not one that gives a useful roadmap to 2035, let alone 2055 or 2100.