Comments on the Hanson-Caplan Debate on Robots

This wasn’t for this debate but…come on, this is too dorky to not share!

I’m no stranger to the libertarian world, having participated in it (to one extent or another) for the better part of seven years. As such, I was interested to see one of the few modern anarcho-capitalist thinkers I find worth paying attention to, Bryan Caplan, in a debate with a thinker I’m not very familiar with, Robin Hanson, on the subject of whether robots will eventually dominate the economy.

The debate was very enjoyable, though I found both participants unpersuasive, each for their own reasons. I took quite a few notes on the debate, but as it turns out Caplan has put up the thoughts he presents in the debate (in PDF form!) here. So that’ll be very helpful for my upcoming comments.

Briefly: though I thought the debate was entertaining, I didn’t find much in the way of elucidation. Will robots dominate the economy? What does it even mean to dominate? Will we try to become robots ourselves? What would that even mean for our former bodies? Would we be dead? What happens if an affluent few get control over the economy and impose a tyrannical robotic economy? Would capitalists only want docile workers?

These are a lot of the questions that are tackled (or at least asked) during this debate. None of them are conclusively answered, but perhaps that’s no surprise given this is a debate about the future. To some extent, then, it’s probably inevitable that the argument will be mostly hypothetical and inconclusive.

My trouble with this debate was that, despite my agreeing with Hanson more, he didn’t have many convincing counter-arguments to Caplan most of the time. And that’s even when I thought Caplan’s remarks (particularly his comparison of domesticated animals to domesticated robots) weren’t as top-notch as I’d expect from him.

Overall, Caplan generally had the better arguments, and it seemed to me that he was much more in control of the debate. But most of the arguments he was clearly in the right about, or at least presented better, often came off as small nitpicks. This is probably partly because Caplan and Hanson both teach economics at George Mason University (GMU) and as such likely agree on much between themselves.

On the other hand, I agreed with Hanson a lot of the time but didn’t feel like he drove his points home. He even conceded ground that I really don’t think he needed to concede. One egregious example comes later in the debate, where he remarks that it’s at least possible that a Terminator situation could occur and robots may rule over us. But even if that happens, he says, it would still have been worth it for the progress society got from robots while humans were still around.

This was a hilarious but also very bizarre thing to concede, and the ante was only upped when Hanson added that the robots might only kill some of us, not all of us. He quickly clarified (multiple times) that this wasn’t what he was recommending or wanted to happen, but it seemed foolish to him to deny that it could happen.

But okay, that’s some very general and likely unhelpful remarks about the debate.

Let’s get to the debate itself.

Hanson Intro

The debate went as such: Hanson spoke for five minutes, then Caplan spoke for five minutes. Then they had a free-for-all discussion amongst themselves (with a moderator) for 30 minutes. This was the heart of the debate, but also notable was the Q&A section, which ran for 30 minutes, followed by closing remarks in a similar format to the introductions.

Hanson’s intro repeats the resolution:

Robots will eventually dominate the world and eliminate human abilities to earn wages.

From there, Hanson gives his definition of domination, which, to him, is analogous to how the skylines of most cities are dominated by tall buildings. Machines would dominate the economy in the same way that cities and their skylines are filled with skyscrapers and the like. Thus machines wouldn’t be omnipresent, but there’d certainly be many of them relative to whatever human worker population is left over.

Hanson still thinks that humans could earn a living, but only enough to subsist, and it would have to be done in very particular ways. He gives the examples of owning the robots, owning the factories which produce robots, sharing arrangements (he’s even open to a basic income guarantee if that’ll work), insurance and more.

Hanson’s case for answering the resolution in the affirmative relies on the fact that the historical trend is robots taking jobs from humans, not the other way around.

Now, strictly speaking, that hasn’t always been the case. Consider the ATM, for example:

ATMs were widely introduced to American banking in the 1970s, with the total number increasing from 100,000 to 400,000 in the period from 1995 to 2010 alone. ATMs substitute for human bank tellers in many routine cash-handling tasks. But this has not led to a decrease in bank teller employment. On the contrary, the total number of (human) bank tellers increased from 500,000 to 550,000 between 1980 and 2010.

But even given this, there are limits to the complementary effect (robots helping humans get more jobs instead of replacing them), which depends on elasticities with regards to labor supply, income, and output. The link I just cited from blogger John Danaher goes into more detail about that, so you can read more there.
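To make that concrete, here’s a toy model of my own (nothing from the debate or from Danaher’s post; every number and function name here is invented for illustration). It shows how the same automation shock can either raise or lower human employment, depending on how elastic demand for the output is:

```python
# Toy model (all numbers invented): does automation raise or cut human employment?
# Automation halves the human labor needed per unit of output. If the full cost
# saving passes through to price, total employment rises only when demand for
# the output is elastic enough that extra sales outweigh the labor savings.

def human_employment(labor_per_unit, price, demand_elasticity,
                     base_price=1.0, base_demand=100.0):
    """Constant-elasticity demand: quantity = base_demand * (price/base_price)^(-e)."""
    quantity = base_demand * (price / base_price) ** (-demand_elasticity)
    return labor_per_unit * quantity

before = human_employment(labor_per_unit=1.0, price=1.0, demand_elasticity=1.5)

# After automation: half the labor per unit, and price falls by half to match.
after_elastic = human_employment(0.5, 0.5, demand_elasticity=1.5)
after_inelastic = human_employment(0.5, 0.5, demand_elasticity=0.5)

print(f"before automation: {before:.0f}")            # 100
print(f"elastic demand:    {after_elastic:.0f}")     # ~141 -- the ATM/teller pattern
print(f"inelastic demand:  {after_inelastic:.0f}")   # ~71  -- jobs disappear
```

In this toy setup, employment rises exactly when the demand elasticity exceeds 1. Real labor markets are messier, of course, which is precisely why the complementary effect has limits.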

So despite my counter-point, I think Hanson is generally right that this has been the historical trend. And I think that, if Caplan had pressed on the ATM point, Hanson likely would have conceded it to one degree or another.

Lastly, Hanson wants us to imagine (as I said before) that folks would subsist on stock in popular robot-making companies. So perhaps (I’m extrapolating here from Hanson’s original point) they’d get some percentage of the profits per year and make passive income that way. Another concept was real estate (though I don’t think he was specific about how this would work), and he reassured us it’d be centuries before we really had to worry about any of this.

That final point (about how long it’d take) is a reassurance, not only giving our brains more room to comprehend the basic concept but also telling people who depend on wages to get by (for example: me) that they should start learning some marketable skills to pass on, so they don’t get left behind by our new robotic companions…or is that overlords?

Just to be clear, as an anarchist, I don’t support corporations existing. That said, I’m completely fine with stock markets and stocks still existing if that’s how folks wanna make their passive income. I actually think this is a rather interesting and perhaps even viable way to make your money. I’m not sure how practical it’ll actually be and I’d of course like the stock options to be the choice of the workers within their collectives and cooperatives, but still. I don’t see any of these ideas as inherently incompatible with anti-capitalist economies.

Perhaps that’s naive of me, but that’s my intuition anyhow.

All of that being said, I obviously disagree with Hanson (and Caplan for that matter) on their support of capitalism. And all I mean by that is a system dominated by capital and those who own it. That seems to be the future that Hanson favors, if not Caplan as well, just in different ways. Hanson seems to favor capitalistic patterns of ownership revolving around who owns the most robots and robot making factories, etc. Caplan’s model is slightly more subtle and involves the general power differentials of employers and employees in a given economy.

In either case, I would oppose this particular version of a robotic economy. I’d much rather that we have a transhumanist society where people can become robots themselves, rather than relying on these external beings we create and program to do our work for us like slaves. Especially if we’re keeping things like hierarchy, bosses and corporations around, I’d rather have a much more decentralized and horizontal way to distribute robotic power.

Let’s move on to Caplan’s intro.

Caplan Intro

Caplan takes the impressive step of disagreeing with both clauses of the resolution (taken from the PDF):

Our most probable future:

  • Robots will “dominate” nothing.
  • Many humans will no longer WANT to earn wages, but those who choose to work will earn more than today.

Caplan opens his claim with this (unconvincing) argument:

Imagine the dawn of the domestication of animals. Suppose someone said, “Domesticated animals will eventually dominate the world and eliminate human abilities to earn wages.”

Since we can improve and rapidly breed animals, eventually they’ll be smart and numerous enough to take over. Domestication will eventually yield animals so skilled humans will be unable to compete with them in the labor market.

Here, Caplan is attempting a counter-argument by analogy, but I think the analogy is rather weak:

  • The limits of animals are (generally) well-known and scientifically studied
  • The limits of animals are less than our own
  • Creation of physical objects will always be slower than creation of non-physical (i.e. digital) ones

Animals have very well-known limitations built into their physical brains, but computers have no such thing. We could analogously say computers have processors and RAM and memory, sure. But such components are always being improved upon, and far faster than animals’ brains have been.

Caplan also claims that robots, like animals, would be (mainly) bred for docility, but it’s not difficult to imagine that robots could overcome such programming much more easily than animals could. You know why it’s easy to imagine?

Because there are literally millions of sci-fi stories about it happening!

Seriously, it’s not as if a computer executing its original programming in unplanned or unforeseen ways has never happened, or never would. Perhaps Caplan could slightly moderate his claim so that he’s only saying that, while that might happen, it still wouldn’t harm a human. And perhaps that’s a little more reasonable.

But then we also have movies/books like I, Robot, where robots try to “save” humans by killing them. There are also times when these robots kill one human to save the one they think is more likely to survive, etc. And there are current examples of (relatively) much simpler AIs that devise and execute strategies to accomplish their goals in ways their makers never intended. Why couldn’t this happen in more dangerous ways?

So I don’t find the comparison very plausible.

To his credit, Caplan anticipates some of my claims:

But aren’t robots different from domesticated animals in fundamental ways?

The robots we can imagine are, because imagination is infinite.

The robots we’ve seen aren’t.

So far, we have:

Robots that are awesome at a narrow range of tasks, like computation, pure math, web search, and assembly line production.

Robots that are getting okay at a broader range of tasks, like voice recognition.

I don’t disagree that those are many of the robots we’ve seen, but even looking at the latest issue of Popular Science I found examples of robots that can do everything from driving a car, to playing video games, to assisting in warfare, to using algorithms to give out results about things as trivial as selfies. All of these are specialized tasks, no doubt, but in many of these cases the robots are improving on their own programming in ways their makers don’t anticipate.

It is in this way that I think the improvements we make to robots will also let them make themselves more efficient, which will give us something like Hanson’s predicted future. I think the specialized robots will continue to succeed and improve at their specialized tasks to the point that human workers are needed less and less. And the ones that are simply OK at broad ranges of tasks will keep improving on their software by listening to our feedback and programming.

Eventually, it seems like robots will learn their own programming (look into “deep learning”) and use that to outpace our expectations and, someday, ourselves. I think this argument is completely compatible with Hanson’s idea that this may take centuries to happen, but it seems likely to me in the long term, if not the short term.
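For a sense of what “learning from feedback” means mechanically, here’s a minimal sketch of my own (not anything from the debate; the whole setup is a toy): a model adjusts its own parameters to shrink an error signal, with no human rewriting the code. Deep learning is this same loop with vastly more parameters:

```python
# Minimal sketch of learning from feedback: fitting y = 2x + 1 by gradient descent.
# The "program" here is just two numbers (w, b) that rewrite themselves in
# response to an error signal; no human edits them once the loop starts.
data = [(x, 2 * x + 1) for x in range(10)]  # toy ground truth the model must discover

w, b = 0.0, 0.0  # the model's initial (and wrong) "program"
lr = 0.01        # learning rate: how hard each piece of feedback pushes

for epoch in range(1000):
    for x, y_true in data:
        y_pred = w * x + b
        error = y_pred - y_true
        # Nudge each parameter in the direction that reduces the error.
        w -= lr * error * x
        b -= lr * error

print(f"learned w={w:.2f}, b={b:.2f}")  # converges toward w=2.00, b=1.00
```

None of this makes a Terminator, obviously, but it’s the kernel of why “the makers programmed it, so they control it” is weaker than it sounds: the interesting behavior comes from the feedback, not from hand-written rules.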

Lastly, Caplan says that “‘X hasn’t happened’ is still the best argument available.”

I don’t think this says much. After all, Caplan is an anarcho-capitalist and I’m sure people could say his particular model of it hasn’t happened yet and therefore it won’t happen. Caplan would likely reject this argument given he still believes in anarcho-capitalism by either appealing to similar historical societies or general historical trends, etc.

I think Hanson does similar things here, with how robots have repeatedly and increasingly replaced humans and with the historical situations where farmers had to worry about the industrial revolution.

The Free For All

Anything We Can Do, Robots Can Do Better

Hanson asks a rather pointed question: what exactly is it that Caplan thinks robots will never be able to do?

Caplan’s answer is the usual one: the things we happen to excel at more than other beings! Things like art, intellectual discourse, articulation of abstract concepts, etc. Except there has been plenty of original art by AI (at least in part), and while intellectual discourse or articulation hasn’t been matched yet, computers have outmaneuvered us in both chess and Jeopardy!, to name two big examples. So it’s clear that robots can learn and compute in hugely complex ways.

The moderator interjects at one point claiming that human bias is important too.

To give an example, even in Mass Effect 2 where robots have gained prominence in the galaxy, there are still kiosks that have a non-AI attendant. Why? Because individuals like to see that there’s a physical presence there as well. And even if we stipulated the situation so that it’s something more like cyborgs or emulations of humans (as Hanson says) we still might have (perhaps unfair) biases towards having humans as waiters and cashiers, etc.

But Hanson shrugs at this response. After all, humans still largely won’t be in control of whatever is going on in the given store, nor will they be needed for much of the labor. The jobs that really require this sort of human presence will be few and far between (Caplan cites sports and the service industries as well as retail), and even in those situations one or two humans usually suffice. Not nearly as many humans would still be employed.

Which means that Hanson’s argument still holds.

Caplan tries to turn the question on its head and say the question isn’t whether humans will replace robots or robots replace humans, but whether they’re complementary. This (at least unintentionally, if nothing else) brings us back to the earlier discussion of the complementary effect.

And of course they are, to an extent.

Caplan gives the example of airplanes giving pilots and stewardesses their jobs, but unfortunately Hanson misses (what I think is) the obvious response: what if the planes become automated? Then it seems much less obvious what these specific jobs would be for. Maybe you’d still need a pilot just in case something happened, but pilots would be in far less demand.

The Great Domination Issue

Caplan (once again) returns to the issue of “domination” and keeps picking at what Hanson means by it. I understand that the philosophy of words is important, but the number of times Caplan brings this up, and the way he does it (self-aware enough to know that it’s something of a pet peeve, perhaps), doesn’t really strengthen his argument.

Regardless, it’s still an interesting discussion about domination and power, with Caplan framing it in more libertarian terms (i.e. someone having power over someone else) and Hanson meaning it more generally (i.e. something having more power relative to everything else). One of the examples brought up in the debate is tractors and their role in agriculture, historically and currently, which Hanson sees as “dominating”; I would agree with that.

Caplan thinks this is a “bizarre use of the English language” but I think that’s only true if we keep a narrow perspective on what “domination” means.

In his own blog post, Caplan continues this point:

The main surprise for me: To my eyes, Robin initially (and uncharacteristically) ran away from his thesis by embracing a very weak sense of the word “dominate.”  Here are Merriam-Webster’s definitions:

  • to have control of or power over (someone or something)

  • to be the most important part of (something)

  • to be much more powerful or successful than others in a game, competition, etc.

Robin appealed to something like definition #2.  When challenged, he bit two bullets.  First, he said that tractors already “dominate” in agriculture.  Second, he denied that Mark Zuckerberg “dominates” Facebook.  This is especially odd because, at least in his Age of Em, robots dominate by all three definitions.  Indeed, as he eventually admitted in the debate, Robin thinks there’s a 30% chance the ems exterminate mankind within a year of their creation, in line with my argument here.  Now that’s domination in its most horrifying form.

It seems clear to me that Hanson’s definition is actually a mix of definitions #2 and #3, and I don’t think it’s obvious that machines currently dominate in those sorts of ways. Machines and robots are certainly important parts of the economy, but if we’re going by Hanson’s “task basis” (whereby who dominates a given economy depends on who controls the most tasks), then humans easily dominate the current economy.

On the other hand, if robots are doing everything in the given economy, aren’t they still not dominating in the most relevant (read: libertarian) sense? Well, first, Caplan gives no real argument for why this is the most relevant and pressing definition, other than that most people would agree with him. And, I mean, it is a definition of domination, and a totally valid one that I’ve used myself, but it isn’t the only one, and I don’t see how using it weakens Hanson’s argument in any case.

Second, I agree with Hanson that we have no good reason to presume that all of the designers will want the robots to be as docile as possible. Hanson is correct that they’d most likely want workers with the best combination of skills and virtues, instead of workers who simply do whatever they’re told. And many researchers, I think, would want to design robots as modern and efficient as possible, likely in pursuit of making them human-like.

But aren’t robots serving us if they’re doing all of the work for us and according to our whims?

This is another point Caplan makes contra Hanson, and it’s a lot more interesting than many of his other points. I think in some sense it’d be true to say humans would still dominate the economy. But, as Hanson argues, this presumes that the robots wouldn’t be emulations of human beings, complex and with personalities of their own. In that case, they’d do these things because they wanted to, for themselves. I’ve never personally found the idea of a robot slave economy particularly moral or nice to think about, so I’m down with the emulation concept Hanson is fond of.

The last point I’ll mention from Caplan is that he asks (rather pointedly himself): does Mark Zuckerberg not control Facebook?

Hanson argues that he doesn’t, which seems implausible on its face, but under Hanson’s own definition it makes sense. Zuckerberg isn’t dominating the everyday decisions of the company because (as Hanson points out) he can’t keep up with the day-to-day decisions made by the workers. While it’s true that Zuckerberg could in theory reverse any decision the workers make, what matters is the actual power folks exercise, not simply their potential for the use of power.

For example, we could imagine a firm that teaches people how to be clowns. The lower-rung workers have very particular concepts and ideas about what it means to be a clown. They constantly bring them to their boss, and the boss never says no, because they’re mostly in it to make money and don’t care what happens so long as the money comes in. Would it be fair to say that the boss controls the firm in this case? I’m not saying I have a definite answer here, but I think there are multiple definitions of “dominate” that are relevant to this (admittedly odd) situation and others.

None of this is to say that being the boss or having this potential of power is a meaningless thing. But I think Caplan overplays the libertarian concept of domination to his own detriment. For more discussion about the concept of power from an anarchist point of view, I recommend this essay by William Gillis.

The Issue of Docility

Caplan repeatedly asserts that people would go for the most docile workers (i.e. robot workers), but I don’t see why this is true. Having the most docile workers isn’t a clear indicator of success; you often want feedback on your work to make sure it’s up to par with the competition, even if you’re a capitalist! And of course, as an anti-capitalist, I get Caplan’s point here and (ironically) agree with it to a certain extent, but for different reasons.

I think Hanson’s model of the future rests much too heavily on the domination of capital over labor in as many facets as possible. This is perhaps another discussion for another day (one more focused on political economy), but briefly, I want to note that my main disagreement with Hanson is how central he wants capitalists to be in an economy.

The moderator points out that perhaps a company could find a “sweet spot” of human traits and then simply use robots with those traits as much as possible. Hanson responds that a lot of what robots keep is likely to be what humans have now, because those parts of ourselves have stayed with us precisely because they’re adaptable and useful traits to have.

I don’t know that I find that an especially convincing response, since I don’t think it really speaks to the sort of power Hanson is thinking about giving the bosses in this society. But again, that’s a larger point and I’ve dwelt enough on this portion of the debate; let’s move on.

Q&A

There’s some discussion before the Q&A about population and economic growth, the effects this system might have on folks who are dependent on wages (Hanson admits it could be bad for them) as well as a return to the discussion on domination. But I’ll skip all of that (though it’s worth checking out for sure) and head straight to the Q&A.

That said, the Q&A is…largely disappointing. More than a few of the questions are very quickly answered and agreed on by both Caplan and Hanson, other questions are irrelevant, and the ones that are relevant and interesting are usually answered quickly as well. This isn’t completely Caplan and Hanson’s fault, given they were under time constraints, but it was irritating.

There’s an interesting moment where someone asks about the UBI and Caplan criticizes it as one of the stupidest ideas for redistributing wealth he’s ever heard of. He thinks we should be giving money to those who specifically need it the most, not to everyone. “Why should Bill Gates get more money?” Caplan wonders. It’s a perfectly legitimate question as far as I can tell. Why should the rich get money as well? Do they really need it?

Hanson is more easy-going, into whatever works, and thinks the “sharing arrangements” he mentioned earlier could plausibly include something like a basic income.

The discussions about whether the emulations deserve rights were all too brief but interesting to hear. Basically, Hanson thinks the time when they can articulate and demand rights for themselves is likely when we should start seriously thinking about it. Caplan thinks it’s much more about when those in power want it to happen. He also makes the rather bold claim that this is how things have historically gotten better; specifically, he claims this is how slavery and imperialism ended, etc.

Perhaps this is tangential, but I think it’s a bit of both. Slavery ended both because the slaves demanded better conditions and organized amongst themselves, and because white folks decided the Civil War had done enough damage to their country.

Getting back to the emulations: Caplan doesn’t think they’re “real” people, and that’s a whole ‘nother kettle of fish involving theory of mind and what it means to be conscious and…yeah, I’m not touching that.

It’s also worth mentioning that Caplan (humorously) doesn’t think there’s any way to prevent (what I’ll call) The Affluent Few from developing in Hanson’s ideal society. I am also unsure that this could be prevented…in a capitalist society, but again that’s a bigger discussion than I want to get into. Suffice it to say, I think Caplan is right, but for different reasons.

Hanson thinks there are reasons why robots wouldn’t just carry on without us or (worse still) kill us, because of their sense of connection to us as their makers. It’s also worth adding to Hanson’s points that once transhumanism happens, it’s likely this problem would be solved by us merging our consciousness with the robots…so that’s that, I suppose.

But more generally Hanson can’t reasonably assure us that we won’t all die from the robots.

He gives us a 70% chance we’ll survive the first year, what a guy!

But, similar to the case of retirees, Hanson thinks there are incentives for robots not to kill us: there’ll be enough of us that our disappearance would make a difference to the robots and the economy, it could make certain institutions we both share unstable, and so on. But Caplan doesn’t find that persuasive, given that humans kill many beings in virtual reality as long as they’re not considered human, and he thinks the same may apply to robots.

Let’s hope Caplan is wrong.

Hanson’s Closing Remarks

Hanson’s main point (or at least the most important point I took from this) is the following:

There’s nothing special about humans that robots cannot replicate and do better.

This is a point I wish he had argued much more in the debate, but sadly he doesn’t.

He compares where humans will be in the distant future to subsistence farming today, i.e. humans in the workforce will be on the margins but not necessarily completely gone. Humorously, Caplan may have the more straightforwardly attractive vision, where work is largely voluntary and the folks who go in earn more than they do now. He even makes a small comment during the debate about how most people these days dislike their jobs anyhow.

It’s funny to me that the person denying the resolution has the more attractive vision of what anti-work may look like.

Caplan’s Closing Remarks

One of Caplan’s main points is that computers are designed to be docile, but I think that gives too little credit to machines and their history of surprising their makers. It also gives too much credit to the makers, who are not infallible and likely can’t predict what all of the programming and wiring will do in the end.

The other part of Caplan’s point here is that perhaps Hanson is right that robots will dominate most fields, but they’ll still mostly be focused on specializations and such. In that case humans could still operate in the arts and the more abstract and intellectual sides of life. And I agree that’s a possibility, and one that’s been attractive to many other anti-work advocates I know of. But as I said earlier, there are reasons to think this may not happen.

Lastly, Caplan ends with (honestly) a sick burn / backhanded compliment towards Hanson.

He says that Hanson is a genius and given the lack of folks studying the future, a pioneer.

But we shouldn’t expect a pioneer to get everything right.

My Closing Remarks

So the debate ended in a tie, with the “affirmative” and “negative” votes increasing by the same amount and the “undecided” vote advancing comparably as well.

Given my position on this debate (Caplan being generally better presented but Hanson being ultimately closer to the truth of the matter), I think this was a fair outcome.


If you enjoyed this article, please consider donating to my Patreon!

Even $5 really helps me out!

25 days until Trump is inaugurated, be prepared.
