I’ll never apologize for tasteless puns.
But for the record, even I thought this pun was a bad one.
It’s a mix of Michael Huemer’s The Problem of Political Authority (a wonderful book I’ve reviewed on C4SS here and here) and the subject of today’s article: economist David Autor. More specifically, Autor’s recent thoughts from a TED talk he did on automation and whether humans are going to be replaced or not.
Before I continue, it’s worth noting I’ve talked about Autor and his definitions of work here.
One of the best parts about this TED talk is that my response to it is (ironically) slightly automated thanks to the past insights of John Danaher who, besides being one of my favorite anti-work writers, has two articles (here and here) that respond to many of Autor’s claims within this presentation. For the record, I mostly agree with Danaher that while Autor makes some solid points, none of them seem particularly compelling enough to rule out technological automation.
To be clear, Danaher was writing a few years ago in both posts, so none of his points will map exactly onto Autor’s most recent ones, but I think they’ll still help us make better sense of whether Autor is right. This article is also further automated in that TED is kind enough to release their videos as transcripts, which means I can much more easily and accurately quote Autor!
Isn’t automation lovely?
Autor begins his presentation by making an observation I love to cite myself:
Here’s a startling fact: in the 45 years since the introduction of the automated teller machine, those vending machines that dispense cash, the number of human bank tellers employed in the United States has roughly doubled, from about a quarter of a million to a half a million. A quarter of a million in 1970 to about a half a million today, with 100,000 added since the year 2000.
These facts, revealed in a recent book by Boston University economist James Bessen, raise an intriguing question: what are all those tellers doing, and why hasn’t automation eliminated their employment by now?
The reason why I use this point so frequently in my discussions with other folks about automation is its counterintuitive nature. I’m a sucker for the counterintuitive but true statements in the world and when you combine that with a subject that’s relevant to one of my passions in this world, you’ve got me.
That being said, it’s not a particularly strong point by itself. Perhaps if we had many, many examples like it we could say that automation isn’t as big of a threat, but one example from one industry isn’t sufficient to do that. Indeed, the notable example of agriculture that Autor himself cites seems to cut against him, not for him.
Side note: Though Autor doesn’t name this phenomenon, it’s called the complementarity effect.
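The mechanics behind Bessen’s teller numbers can be seen in a bit of toy arithmetic. The branch counts and staffing figures below are illustrative placeholders of mine, not Bessen’s actual data, but they show the shape of the effect: automation cuts tellers per branch, yet cheaper branches mean banks open more branches, so total teller employment can still rise.

```python
# Toy arithmetic behind the teller story: fewer tellers *per branch*,
# but more branches overall. All numbers are made up for illustration.

def total_tellers(branches, tellers_per_branch):
    return branches * tellers_per_branch

before = total_tellers(branches=1000, tellers_per_branch=20)  # pre-ATM
after = total_tellers(branches=1800, tellers_per_branch=13)   # post-ATM
print(before, after)  # 20000 23400: fewer per branch, more overall
```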
And it’s an effect that, like any other effect, has its limits as Danaher has argued:
The complementarity effect is, no doubt, real. But its ability to sustain demand for human labour in the medium-to-long term seems to depend on one crucial assumption: that technology will remain a narrow, domain-specific phenomenon. That there will always be this complementary space for human workers.
But what if we can create general artificial intelligence? What if robot workers are not limited to routine, narrowly-defined tasks? In that case, they could fill the complementary roles too, thereby negating the increased demand for human workers.
Much of Autor’s presentation rests on the idea that AI will always be specialized in a way similar to the ATMs. But he makes no argument (at least here) that this must necessarily be the case. On Autor’s view, what stops a general AI from emerging at some point in the future and reducing the ability of human laborers to benefit from the complementarity effect?
Autor’s argument stems from what he calls the “Polanyi Paradox,” named after Michael Polanyi.
Danaher defines it as:
Polanyi’s Paradox: We can know more than we can tell, i.e. many of the tasks we perform rely on tacit, intuitive knowledge that is difficult to codify and automate.
As Danaher explains, this is more of a side-constraint than a paradox, but even so we can recognize that however good robots are at specialized tasks, they still might struggle with tacit and intuitive knowledge, given the difference between the way a human brain works and the way AI is currently programmed.
Are there ways around it? Danaher lists environmental control and machine learning:
Environmental Control: You control and manipulate the environment in such a way that it is easier for machines to perform the task.
Machine Learning: You try to get the machine to mimic expert human judgment (which often relies on tacit knowledge and heuristics). You do this by using bottom-up machine-learning techniques instead of top-down programming.
We could also use this definition of machine learning from Cathy Reisenwitz:
Machine learning allows computers to get better at doing things over time.
It works like this: the computer does a thing. Let’s say it suggests a restaurant for you to try tonight. Maybe you make a reservation. In that case, if the computer had a back and an arm it would pat itself on the back. But it doesn’t so the machine does nothing. But maybe you click “next.” In that case, the computer makes a small adjustment to its recommendation engine. With enough recommendations and enough “book” and “nexts,” the computer gets better and better at recommending restaurants.
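Reisenwitz’s feedback loop can be sketched in a few lines of code. This is a minimal illustration of the “book”/“next” adjustment she describes, not any real recommendation engine; the restaurant names and learning rate are made up for the example.

```python
class RestaurantRecommender:
    """Minimal sketch of the feedback loop Reisenwitz describes:
    nudge a restaurant's score up on a booking, down on a "next".
    Names and learning rate are illustrative assumptions."""

    def __init__(self, restaurants, learning_rate=0.1):
        # Start every restaurant at a neutral score.
        self.scores = {name: 0.5 for name in restaurants}
        self.lr = learning_rate

    def suggest(self):
        # Recommend whichever restaurant currently scores highest.
        return max(self.scores, key=self.scores.get)

    def feedback(self, restaurant, booked):
        # Small adjustment toward 1.0 on "book", toward 0.0 on "next".
        target = 1.0 if booked else 0.0
        self.scores[restaurant] += self.lr * (target - self.scores[restaurant])

rec = RestaurantRecommender(["Thai Palace", "Burger Spot", "Noodle Bar"])
rec.feedback("Burger Spot", booked=True)   # you made a reservation
rec.feedback("Thai Palace", booked=False)  # you clicked "next"
print(rec.suggest())  # Burger Spot
```

With enough “book”s and “next”s, the scores drift toward what you actually choose, which is all the quoted passage is claiming.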
If you’re curious what ideas I have about Reisenwitz’s article, you can click here.
But Autor argues (quoted by Danaher) there are some limits to each of these methods:
My general observation is that the tools [i.e. machine learning algorithms] are inconsistent: uncannily accurate at times; typically only so-so; and occasionally unfathomable… IBM’s Watson computer famously triumphed in the trivia game of Jeopardy against champion human opponents. Yet Watson also produced a spectacularly incorrect answer during its winning match. Under the category of US Cities, the question was, “Its largest airport was named for a World War II hero; its second largest, for a World War II battle.” Watson’s proposed answer was Toronto, a city in Canada.
Even leading-edge accomplishments in this domain can appear somewhat underwhelming…
Since the underlying technologies — the software, hardware and training data — are all improving rapidly (Andreopoulos and Tsotsos 2013), one should view these examples as prototypes rather than as mature products. Some researchers expect that as computing power rises and training databases grow, the brute force machine learning approach will approach or exceed human capabilities.
Others suspect that machine learning will only ever get it right on average, while missing many of the most important and informative exceptions… Machine-learning algorithms may have fundamental problems with reasoning about “purposiveness” and intended uses, even given an arbitrarily large training database… (Grabner, Gall and Van Gool 2011). One is reminded of Carl Sagan’s (1980, p 218) remark, “If you wish to make an apple pie from scratch, you must first invent the universe.”
I don’t think Autor’s observations are particularly wrong, but they do seem to lack imagination. As Danaher himself remarks, Autor doesn’t seem fully convinced of his own arguments either. He seems wary of technology outpacing his expectations while denying that this will actually happen. But as he notes in his presentation, it’s impossible to know how humans will be working in the next hundred years, and I’d argue the same could be true of the next 20.
Autor makes remarks in his presentation that are overly dismissive towards folks (like myself and Danaher) who don’t think the future is ironclad one way or the other. We both see the potential leaning more towards technological automation, but it isn’t on the basis of the arrogance that Autor claims. My own predictions come from my interpretation of history, my judgment of other folks’ competing interpretations, the world around me, and so on.
I’m not convinced that Autor and I see the world very differently in terms of making future predictions. And while I have no doubt that some academics and tech-geeks have acted like they’re some sort of crypto-guru for the coming technological revolution, that doesn’t mean this disposition is essential to the position itself.
For what it’s worth, I agree with Autor that the future doesn’t solely hinge on our imagination, but imagination is still an important component of any prediction model. We need it to some extent to see where things are going, and if we decide, from the many variables we’ve encountered, that one outcome seems the most logical, why not believe it?
Believing in a particular outcome doesn’t require thinking that no one will be able to outpace our expectations and imaginations, it just means we’re doing the best we think we can, with the information we’ve got.
That being said, Danaher has a few responses to the above points by Autor:
I agree that predictions about future technologies should be grounded in empirical realities, but there are always dangers when it comes to drawing inferences from those realities to the future.
The simplest one — and one that many futurists will be inclined to push — is that Autor’s arguments may come from a failure to understand the exponential advances in technology.
Autor is unimpressed by what he sees, but what he sees are advances from the relatively linear portion of an exponential growth curve. Once we get into the exponential takeoff phase, things will be radically different. Part of the problem here also has to do with how he emphasises and interprets recent developments in technology. When I look at Kiva robots, or the self-driving car or IBM’s Watson, I’m pretty impressed.
I think it is amazing that technology can do these things, particularly given that people used to say such things were impossible for machines in the not-too-distant past. With that in mind, I think it would be foolish to make claims about future limitations based on current ones. Obviously, Autor doesn’t quite see it that way. Where I might argue that his view is based on a faulty inductive inference, he might argue (I’m putting words in his mouth, perhaps unfairly) that mine is unempirical, overly-optimistic and faith-based. If it all boils down to interpretation and best-guess inferences, who is to say who’s right?
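Danaher’s “linear portion of an exponential growth curve” point is easy to see with a little arithmetic: in its early steps, compounding growth is nearly indistinguishable from a straight line. The 7% rate below is an arbitrary illustrative choice, not a claim about actual rates of technological change.

```python
# Early on, a 7% compounding curve looks almost like a straight line
# growing at the same rate; the divergence only shows up later.
for step in range(6):
    compounding = 1.07 ** step
    straight_line = 1 + 0.07 * step
    print(step, round(compounding, 3), round(straight_line, 3))

# After 5 steps the two differ by only a few percent. After 50 steps the
# compounding curve is roughly 29x its start; the line reaches just 4.5x.
print(round(1.07 ** 50, 1), 1 + 0.07 * 50)
```

Someone judging the curve from inside its first few steps would see nothing dramatic, which is exactly the interpretive problem Danaher raises about Autor.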
There’s all of this debate about “this time’s different” and whether that’s actually true or not.
And while it’s trivially true, as Autor says during his presentation, that “every time is different,” this doesn’t strike me as a compelling response. It isn’t just that this time is different but that it’s radically different, and we have a history, more and more, of living in exponentially different times over shorter and shorter spans of time.
It’s true, as Autor says, that farmers couldn’t have anticipated app development and other such advancements. But Autor’s point about imagination not determining history cuts both ways. If imagination doesn’t lock down history, then a lack of imagination won’t either, so this point seems moot.
I can’t say in all of the particular ways that this time is different from the first industrial revolution, I’m not a historian and am not trained in the particulars of these historical events. But I can say that the experience of my life in the past 25 years has been that of rocketing technological growth that doesn’t seem to be slowing down. Maybe that’s not evidence enough for the technological utopia I see possible, but it’s certainly a solid historical trend that’s empirically observable by many.
In less than 20 years smartphones have become widely accessible to millions of people. Heck, as poor as I am, I have an iPhone. It’s hilariously out of date (2010) but it’s a smartphone, and it gives me access to things my old phones never could and that I couldn’t have anticipated even 5-10 years ago.
Again, I’m not saying any of this is the evidence that technological unemployment is going to happen.
Part of why it might still not happen is that capitalism has ingenious ways of incorporating the advances of technology within civilization and repurposing them for its own ends. And that’s something, as an anarchist, that I lament. But it’s also not impossible to treat capitalism, the state, intellectual property and the many other barriers to human progress as damage and route around them, so we can better ensure a more just and technologically advanced society.
Danaher imagines certain solutions to Autor’s problems via Humans Need Not Apply:
This brings me to my third point, which is that there may be some reason to doubt Autor’s interpretation if it is based (implicitly or otherwise) on faulty assumptions about machine replacement. And I think it is.
Autor seems to assume that if machines are not as flexible and adaptable as we are, they won’t fully replace us. In short, that if they are not like us, we will maintain some advantage over them. I think this ignores the advantages of non-human-likeness in robot/machine design.
This is something that Jerry Kaplan discusses quite nicely in his recent book Humans need not apply.
Kaplan makes the point that you need four things to accomplish any task: (i) sensory data; (ii) energy; (iii) reasoning ability and (iv) actuating power. In human beings, all four of these things have been integrated into one biological unit (the brain-body complex). In robots, these things can be distributed across large environments: teams of smart devices can provide the sensory data; reasoning and energy can be centralised in server farms or on the ‘cloud’; and signals can be sent out to teams of actuating devices.
Kaplan gives the example of a robot painter. You could imagine a robot painter as a single humanoid object, climbing ladders and applying paint with a brush; or, more likely, you could imagine it as a swarm of drones, applying paint through a spray-on nozzle, controlled by some centralised or distributed AI programme.
The entire distributed system may look nothing like a human worker; but it still replaces what the human used to do. The point here is that when you look at the Kiva robots, you may be unimpressed because they don’t look or act like human workers, but they may be merely one component in a larger robotic system that does have the effect of replacing human workers.
You draw a faulty inference about technological limitations by assuming the technology will be human-like.
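Kaplan’s four ingredients, distributed rather than bundled into one body, can be sketched in code. Everything below (the drone/controller split, the class and method names) is my own toy illustration of the idea, not anything from Kaplan’s book.

```python
from dataclasses import dataclass

# A toy sketch of Kaplan's point: the ingredients of a task (sensing,
# reasoning, actuating) split across many devices instead of one
# humanoid body. All names here are illustrative inventions.

@dataclass
class Drone:
    """A small device that both senses and actuates at one spot."""
    position: int

    def sense(self, wall):
        return wall[self.position]       # sensory data, gathered locally

    def actuate(self, wall):
        wall[self.position] = "painted"  # actuating power, applied locally

class CloudController:
    """Centralised reasoning: decides which patches still need paint."""
    def plan(self, readings):
        return [pos for pos, state in readings.items() if state == "bare"]

def paint_wall(wall, drones):
    controller = CloudController()
    # Pool the drones' local readings, reason centrally, act locally.
    readings = {d.position: d.sense(wall) for d in drones}
    for pos in controller.plan(readings):
        next(d for d in drones if d.position == pos).actuate(wall)
    return wall

wall = paint_wall(["bare", "painted", "bare"], [Drone(0), Drone(1), Drone(2)])
print(wall)  # ['painted', 'painted', 'painted']
```

No single component looks anything like a human painter, yet the system as a whole does the painter’s job, which is exactly Danaher’s point about Kiva robots.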
Think of Star Trek and its popularizing of the hive-mind and you’ve got something similar to what Kaplan is talking about here, or at least it reminds me of that sort of amalgamation of robots. Relatedly, the more I write about this and automation generally, the more I think that imagination is a vital part of this conversation.
Without the imagination necessary to make the radical demands we should of systems of power, and to then follow up on those demands ourselves, I think our society will be the worse for it.
I won’t keep going through Danaher’s responses to Autor (though for those curious, he also replies to Autor’s comments about the polarizing effects of automation), but those were some of the most pressing ones, and they respond to much of Autor’s presentation. Autor also makes some points about the “high school movement” that I find rather off-putting as someone who opposes compulsory schooling and the public education system more generally.
But that’s slightly beside the point within the confines of this article, so I’ll hold off.
I’ll conclude by remarking that this larger conversation is far from over, and I believe that while many of Danaher’s arguments rely on speculation, they aren’t speculations without merit. I believe time is on the side of those who see a technological utopia coming in the future.
Whether that future is sooner or later, is another question.
If you enjoyed this article you can donate to my Patreon!