There’s a lot to like about Rushkoff for someone who, like me, is critical and skeptical of work in general. I even hosted one of his articles about jobs and whether they’re obsolete a few years back. In addition, there are a few books, essays, and so on that I’m sure I’d enjoy reading and critiquing.
All of this is to say I’m not a big hater of him or his work.
But his recent video at Big Think (BT), while well communicated and interesting, still managed to frustrate me quite a bit.
Let’s go to the transcript, helpfully provided by BT’s website:
What people have to remember is that the object of industrialism wasn’t to make more stuff better and faster, it was to disconnect labor from the value they created. So if I have a shoe factory, I don’t want to hire expensive shoemakers and cobblers in my business, I want to go to the Home Depot parking lot, find a bunch of undocumented aliens and pay them two cents an hour.
Let’s pause here so I can comment on the fact that some folks are still using the word “alien” in place of human.
They aren’t from another planet; they’re from another country. I’m sure Rushkoff doesn’t have anything personally against immigrants, but this sort of “othering” language toward people not born on the same plot of soil as you is part of a long history of dehumanizing individuals so we can justify all sorts of invasions into their lives.
Moreover, this sort of language is a heavy part of what Bryan Caplan terms “anti-foreign bias,” which is quite simply the “tendency to underestimate the benefit of interacting with foreigners”. I’m not accusing Rushkoff of being particularly committed to this bias in any serious way, but I think it’s worth reflecting on the casual commitments we make as well.
But okay, I just wanted to get that out of the way because that bugged me a lot.
So I’m going to teach them something that’s going to take me 15 minutes to teach them how to nail one nail into the shoe and then pass it onto the next guy. The person who understands how this all works is actually my enemy.
This is certainly true. Bosses use the “division of labor” in a rather unfortunate way that makes workers perform repetitive tasks (even ones they like) for long stretches of time and in much the same manner throughout. This process tends to deaden the joy and pleasure that folks once associated with work they loved doing.
So you fast forward to today when we implement digital technologies we try to do them in ways that get rid of people. We don’t want employees. If you need human beings, well then how are you going to scale up? It has to be able to be an algorithm. The easy way to think about it is most people’s first interaction with a computer was probably a telephone answering system.
And sure, I understand a company has a human receptionist that sitting there. She’s got a salary. She’s got benefits. She’s got a health plan. Get rid of her; put in a computer so people who call your company are going to have to take a little bit more time to get through all those menus, you’re going to save a lot of money and it makes you look kind of high tech.
This set of passages feeds into a larger issue with Rushkoff’s ideas: he’s conflating the problems of capitalism with the problems of the tools it distorts for its own use. Technology, algorithms, robots, and anything else we do to make automation a bigger part of the workforce will doubtless put someone out of work.
But Rushkoff ignores the framework from which this automation comes: capitalism.
And I’m not sure if y’all have noticed, but capitalism and secure social safety nets tend not to mix very well.
As an anarchist, I don’t mean “secure social safety nets” in the sense of a welfare state or even a universal basic income (UBI), as superior an alternative as that may be to what we have now. I’m talking more about solidarity networks, mutual aid associations, and other ways of cooperatively structuring community support for each other.
Even if Rushkoff were right in isolating technology from the systemic constraints it comes from, I don’t think his assertions are as simple as he thinks.
For instance, one of my favorite bloggers on work and technological unemployment (though I admit I am no expert on the latter), John Danaher, wrote about something called the complementarity effect:
i.e. the ways in which technology can complement and actually increase the demand for human labour.
This definition derives from David Autor, who has an article called ‘Why Are There Still So Many Jobs? The History and Future of Workplace Automation’, an article (and author) we may get into another time. But briefly put, there are many cases where technologies can actually make people more necessary.
So firing the receptionist doesn’t become such a clear choice. If people like hearing a human voice, or like seeing a person when they come in (if the business operates like that), then the business’s sales may reflect that. On the other hand, the subsidies a business can get from the government may minimize those effects.
But in any case, the receptionist may be needed to do things the robots cannot. Or, if the robots are advanced enough to do everything she does, there’s always the task of keeping them updated. There are also other parts of the company that may now have more workers dedicated to them than was possible before.
Automation may in these cases actually free up resources so that human workers can focus more on the jobs and sections of the given business that really need the work and that robots can’t do – yet.
On top of all of that, there’s simply not a lot of empirical data backing up Rushkoff’s scenario here, as Autor explains in what Danaher terms Autor’s challenge:
‘Given that these technologies demonstrably succeed in their labor saving objective and, moreover, that we invent many more labor-saving technologies all the time, should we not be somewhat surprised that technological change hasn’t already wiped out employment for the vast majority of workers? Why doesn’t automation necessarily reduce aggregate employment, even as it demonstrably reduces labor requirements per unit of output produced?’
Danaher goes on to explain that technology is complex and jobs have many variable inputs and outputs that will always (at least for the foreseeable future) require tending to in various ways. He points out that while most jobs are very complex in these ways, technologies are, at present, narrow and specific to certain tasks and the needs of companies.
So you may see some companies let their receptionists go, but you may also see receptionist numbers rise slightly, much as bank teller numbers did with the rise of ATMs, since someone still has to maintain the machines and be there in case they fail. After all, we’re still in the very early stages of automating much of the US economy. There are many kinks to be worked out in the coming years.
All of this is to say that Rushkoff’s alarmist picture of technology and how it relates to us as humans isn’t actually backed up by anything other than his vague sense that “technology” is a problem and it’s something we have to watch out for.
Now, I don’t doubt that there are things to consider when it comes to technology, and that we should not take its tremendous power to shift and shape entire economies for granted. But we should also not take for granted whose hands it is in; doing so results in the sort of uneven thinking we find with Rushkoff here.
But while you save money everybody who calls the company now spends more time going through those menus. You’ve actually created more work rather than less. You’ve externalized the cost of your human receptionist onto everybody else. So then what do they do? Well now they all have to get computer operators because they have to externalize the cost to everybody else.
So we all end up now spending more time and energy going through those menus than we did when we hired somebody, but because we’re so biased against hiring, because a company’s stock price will go up, if it can show that it’s hired less people we end up perpetuating that system.
Rushkoff doesn’t say the word “capitalism” here, but it feels like he should. Why would companies worry about stock prices if it weren’t for a stock market (which the classical liberal thinker Ludwig von Mises considered the defining trait of capitalism)?
It’s also not clear to me that you will always save money. Some services are handled better by robots; others aren’t. Some call centers use audio input/output, others are text-based, and some companies use both. But in many cases (anecdotally speaking, I admit) it seems people typically prefer speaking to other people, because people can pick up on the implications and subtleties of their needs.
Now, I don’t want to press too hard against Rushkoff’s claims here.
I don’t think he’s completely wrong that current hiring practices are misplaced, or that companies should really rethink their priorities when they take actions with regard to their employees. But again, these are structural issues of capitalism and not necessarily issues of technology itself. My main problem with Rushkoff’s argument(s) is that he doesn’t make this distinction clear enough. For all I know he agrees, but if so he doesn’t make that clear.
And I get it: I watch a lot of Big Think in a given month, and I understand that their process tends toward the simpler and catchier (someone in the comments section snarkily asked whether Rushkoff knew his video would be processed by a word algorithm for maximum title efficiency), so I get that Rushkoff can’t do everything in ten minutes or less.
Still, the way he frames this issue, emphasizing the role of technology instead of capitalism, gnawed at me constantly, especially since I expected Rushkoff to know a bit better than that, given my pleasant experiences with his work.
So when we implement digital technologies, in order to get people out of the way, in order to get them out of the company we end up really killing the only expertise we have. If you’re using algorithms and big data to figure out your next product line rather than designers, what’s your competitive advantage?
The other company is using that same data and probably hiring the same big data analytics company to figure out the future trend. So now you’ve been turned into a commodity. No, you’ve got to reverse in a digital age. What you want is the most qualified people you can find so that your business actually can differentiate itself from all of the other automated algorithmic nonsensical platforms out of there.
I don’t think Rushkoff is completely wrong here; he’s overstating and oversimplifying, but his central point isn’t wrong.
It’s just that “big data”, “killing expertise”, “commodification” and all of the rest don’t happen in vacuums.
I’ve been itching to do this all article, so let me get it out of my system and quote Kevin Carson’s excellent piece, “Capitalism, Not Technological Unemployment, Is the Problem”:
The problem arises, not from the increased efficiency, but from the larger structure of power relations in which the increase in efficiency takes place. When artificial land titles, monopolies, cartels and “intellectual property” are used by corporations to enclose increased productivity as a source of rents, instead of letting them be socialized by free competition and diffusion of technique, we no longer internalize the fruits of technological advance in the form of lower prices and leisure. We get technological unemployment.
But technological unemployment and the rich getting richer are symptoms, not of the progress itself, but of the capitalistic framework of state-enforced artificial property rights and privilege within which it takes place.
The economic ruling classes act through their state to intervene in the economy, to erect toll-gates and impede free market competition, so we have to work harder and longer than necessary in order to feed them in addition to ourselves.
So let’s not get rid of the technology.
Let’s get rid of the capitalists and their state that rob us of its full fruits.
My claim isn’t that Rushkoff is against technology but that he’s unfairly criticizing a tool and not the user.
And okay, the rest of this transcript is where I get the title of this article from:
What consumers have to understand is that there’s a value proposition with everything that they use. They have to be able, and currently they can’t, they have to be able to ask themselves is this platform creating value for me or am I creating value for it? Or is there an exchange that I’m aware of and I’m okay with?
Do I want to run my social life on Facebook? Is this an exchange that I like? Do I like defining myself in that way? Do I like these radio buttons? Do I want to present myself to the world through this platform and am I okay with everything they know about to me? I don’t know.
Am I okay with me getting my news and information through a newsfeed that’s algorithmically optimized to make me click on things, to narrow and figure out who I am? Am I okay living on a platform that’s using past data about me to advertise and market a future to me that I haven’t yet decided to go live?
There’s a lovely picture of Grandpa Abe Simpson from The Simpsons at the top that I think is humorous when discussing people who are clearly out of their element. It’s the picture of Grandpa yelling at a cloud, which is supposed to signal someone who is not of sound mind and is, in general, a bit out of the loop.
The latter association is what I am gearing towards.
To be frank, does Rushkoff think that most people are stupid? I can’t really account for all of these questions otherwise. Does he honestly think that most people don’t know that value is attached to most things, and that they haven’t asked themselves whether they’re okay with X or Y before doing it? It’s certainly possible that many folks don’t think about these things in quite the way Rushkoff would like them to, but why is that an issue?
Rushkoff is treating Facebook as if it’s somehow silently exploiting people and their information. But most people I know, read, or see are quite aware that Facebook is an information aggregator: a social media platform that also makes a lot of its money by selling advertisers access to that information…like most sites these days.
At this point it’s ludicrous to ask all of these questions as if the people who (at the very least) would be watching this video wouldn’t already know that Facebook and many other sites aren’t exactly the most trustworthy platforms if you care about your privacy or whatever else Rushkoff wants to ask us leading questions about.
That’s the real frustration: an argument that sounds like it’s there but isn’t ever explicitly made.
I get where Rushkoff is coming from, and many of his concerns are valid, but asking all of these questions (or asking other people to ask them) just seems ridiculous in 2016. Maybe five years ago, when less information about how Facebook operates was available, he could have asked these questions more seriously.
But in The Year of Our Lord 2K16, this just comes off as really smarmy and patronizing to me.
And I know, aesthetics aren’t everything, and the important part is that Rushkoff has some valid concerns…but they’re about half a decade old at this point. Why does he think, or seemingly act like, these questions are something no one is considering these days? People have been writing about Facebook and social media and their faults for years. Whole websites, alternatives to social media, and god knows what else have been tried in the last 10 years or so.
My point is that Rushkoff isn’t making a “cutting edge” argument here and he needs to stop acting otherwise.
If they know there is a 70 or 80 percent chance that I might go on a diet in a month, what are they going to do? They start filling my newsfeed with hey you’re looking kind of fat. Something is wrong with you. And they’re going to try to steer me to be more consistent with my profile, to make me a more predictable and cooperative consumer. I guess that’s okay as long as maybe it was a diet and so they’re going to encourage me to go on it, but what about the other 20 percent of people?
What about what I might have done instead? What about that unpredictability that would’ve made me different from the next guy and let me innovate something; let me have a new idea; let me have a more interesting personal anomalous weird life. Well, maybe I’m okay to surrender that. Maybe I want to be more like the rest of my statistical profile. But at least I should know this. At least I should know that my Google search results are different from yours. Why? Because Google wants me to do something. Google wants me to be a certain way. Google wants to help me be the real me. But how do they know what the real me is? What’s the algorithm they’re using and to what end?
So here we have, at the end of it, some sort of indictment of capitalism…through Google?
Man, I’m not going to beat a digital horse.
Abolish capitalism and the state or else y’all are gonna have a bad time.
If you enjoyed this article consider donating to my Patreon!