
The Virtual Jewel Box

The podcast of the Tanner Humanities Center

Enshittification

with Cory Doctorow and Matthew Potolsky

In this episode, Matt Potolsky (Professor of English) talks with writer and activist Cory Doctorow about digital privacy, platform decay, and the politics of monopoly.

Drawing on his recent book, Enshittification: Why Everything Suddenly Got Worse and What to Do About It, Doctorow argues that the erosion of privacy is inseparable from the rise of unchecked commercial surveillance, and that many people care deeply about privacy without recognizing it as such.

They also discuss

  • the three-stage collapse of digital platforms
  • Robert Bork and the Chicago School’s influence on antitrust law
  • the IBM antitrust case
  • Yanis Varoufakis’s theory of techno-feudalism
  • algorithmic wage discrimination
  • effective altruism and longtermism
  • AI as a fantasy of boss-without-workers
  • the surprising global resurgence of anti-monopoly politics as a source of hope.

Cory Doctorow is a journalist, blogger, and the author of numerous works of fiction and nonfiction. He is a longtime contributor to the Electronic Frontier Foundation and blogs at pluralistic.net.

Episode edited by Ethan Rauschkolb. Named after our seminar room, The Virtual Jewel Box hosts conversations at the Obert C. and Grace A. Tanner Humanities Center at the University of Utah. Views expressed on The Virtual Jewel Box do not represent the official views of the Center or University.

  • This transcript is automatically generated and may contain errors. 

    Matthew Potolsky: Welcome to the Virtual Jewel Box podcast of the Tanner Humanities Center.

    My name is Matt Potolsky, professor of English at the University of Utah, and I’m here with Cory Doctorow, who is an influential tech blogger, journalist, prolific sci-fi and young adult novelist, and an activist who has been outspoken in defense of digital privacy, in the spirit of what he calls the good old internet, against the predations of big tech monopolies.

    He has worked with the Electronic Frontier Foundation and has served as a visiting professor at schools like MIT and Cornell. Some of his best-known and award-winning books include the young adult novel Little Brother; the sci-fi novel Down and Out in the Magic Kingdom; and nonfiction works including Information Doesn’t Want to Be Free, The Internet Con, and, most recently, Enshittification: Why Everything Suddenly Got Worse and What to Do About It. He also blogs regularly on the site Pluralistic. Born in Toronto, he now lives in Southern California and London. He spoke last night at the University of Utah at the invitation of the Tanner Humanities Center, and I’m very pleased to welcome him here.

    Cory Doctorow: Well, it’s a pleasure to be here. Thank you very much.

    Matthew Potolsky: I want to start with a retrospective question. Almost exactly ten years ago, in the wake of the Edward Snowden revelations about spying by the NSA, I invited you to the University of Utah to talk about privacy. Since then, claims for the value of privacy as a social good and a goal for activists have largely disappeared or gone quiet, and relatively few people talk about it with the passion they once did. So my first question is: is my impression correct? And if so, what happened over the last decade?

    Cory Doctorow: I don’t think your impression is correct. First of all, we can look at what people do. Fifty-one percent of all web users have installed an ad blocker in their browser, which is a privacy tool, because the ads are using surveillance to target you. Doc Searls calls it the largest consumer boycott in human history.

    Now, admittedly, no one’s ever done that for an app, but that’s because it’s illegal to reverse engineer apps, thanks to an old copyright law from 1998. But we can look at what people do on mobile platforms when they do get a choice. So in, I think it was 2022, Apple updated its operating system for its mobile devices, iOS, and gave users a single tick box. If they ticked it, they got to opt out of commercial surveillance by apps, and 96 percent of iOS users affirmatively took the step to tick that box.

    I would say the other 4 percent were either drunk or Facebook employees, or drunk Facebook employees, which makes sense, because if I had to work for Facebook, I would be drunk all the time. But it really tells you, right, when 96 percent of people click on something, take the affirmative step to click on something to get more privacy, they care about it.

    I’ve been giving a lot of thought to the failure of privacy, and my feeling is the reason our privacy has failed is because we just didn’t update our privacy laws. Our last consumer privacy law in this country, federally, was 1988’s Video Privacy Protection Act. It’s a law that makes it illegal for video store clerks to disclose your VHS rentals. All other forms of commercial surveillance are still lawful, and the fact that they’re still lawful means that we get horrendous abuses of our privacy.

    To the extent that people care about those, they often don’t characterize their concern as a privacy concern, but as a concern about something that is downstream from privacy. I don’t agree with all of the privacy concerns. I agree with some of them, but I think that they make for a really interesting potential coalition if we’re going to move the ball on privacy.

    So there are people out there who think the reason Grampy is a QAnon is because Facebook targeted him. They think the reason that their teenager is anorexic is because Instagram targeted them. They think the reason that millennials in their life quote Osama bin Laden is because TikTok targeted them. There are people who are worried about deepfake porn. There are people who worry about reverse warrants that are being used to round up ICE protestors or January 6 insurrectionists. There are people who are worried about racial minorities being discriminated against in hiring and housing and lending. All of those things are downstream of the fact that we have a totally unregulated commercial data brokerage industry.

    What we see when we give people a choice is that they take it. The fact that they don’t know those are all privacy concerns represents, I think, a failure among privacy activists to activate people on those questions, but it also points to a path forward. If we are going to get people excited about privacy, we just need to make this connection: a strong privacy law won’t necessarily solve all those problems, but it will make it a lot harder for people to harm you in those specific ways.

    Today, where you see gross privacy invasions being done commercially and being connected to state activity, you do see a lot of real concern. People are really upset about Discord doing age verification. They’re concerned, and this is a very Utah issue, because you have state age verification. People are very concerned that these large commercial enterprises that have unaccountable and opaque relationships to the government are gathering sensitive personal information about them that could be mobilized against them, and that as a condition of doing normal things, they’re having to yield extremely abnormal amounts of personal information and subject themselves to objectively extraordinary amounts of surveillance.

    You see it in the uprisings about Flock cameras in cities. You see it in all kinds of ways. I think that because we don’t call all these things privacy concerns, it’s easy to think people don’t care about privacy.

    I’ll finish by saying my friend James Boyle, who’s a law professor at Duke, talks about how before the term ecology entered our lexicon, people didn’t know that they cared about the same thing. You care about owls, I care about the ozone layer. Your charismatic nocturnal avians are not in any obvious way connected to my concerns about the gaseous composition of the upper atmosphere. But let the term ecology emerge as a kind of umbrella, and we can see that we’re on the same side. I think there are a lot of people who are worried about privacy who just don’t know that it’s a privacy concern that they’re worried about.

    Matthew Potolsky: I think that’s actually a very good way of putting it. I was a little shocked because, what was it recently, Palantir was revealed to be collecting data from various government agencies that were quite explicitly not supposed to share it, and putting it together into precisely the kind of dossier about people that we were so upset about with the NSA back in the Snowden era, and it just didn’t seem to raise the kind of concern that I would’ve seen ten years ago.

    So I think you’re right that maybe we need to bring the word privacy back in to try and connect the dots for people.

    Cory Doctorow: It may be because there’s a political valence now to whether you care about Palantir as a political question. Although I’ll say, in the Snowden era, there was a ton of LDS concern about NSA spying, even though politically Snowden was connected more to a kind of concern about Bush-era War on Terror–style measures and Total Information Awareness. Even though Mormons tend to swing Republican, there’s this kind of lasting traumatic memory about the role that surveillance plays in state suppression of disfavored minorities.

    It’s funny, I mentioned that the last time we got a consumer privacy law was to ban disclosure of your VHS rentals, and there’s a reason for that. Ronald Reagan nominated a judge to the Supreme Court, a virulent racist and all-around just bad guy named Robert Bork. His confirmation hearings went so badly that to this day we describe things that go disastrously wrong as “borked.” One of the things that came out during Bork’s confirmation hearings was his video rentals.

    It’s funny because probably the best thing you could say about Robert Bork was that he had pretty good taste in movies. But Congress took a look at this disclosure and they said, okay, we are entering a political era in which the things that I get from behind the beaded curtain at the corner video store are going to show up in oppo research when I’m standing for office. So they beat all land speed records to prohibit disclosure of VHS rental records.

    Now, last year there was a bill that cleared the Senate that was a consumer privacy law for federal lawmakers, and it cleared ninety-nine to one. The only senator who objected was Ron Wyden. Ron Wyden said this should be a privacy law for everyone, and if we can’t make it a privacy law for everyone, at the very least, we should make it a privacy law for state lawmakers too, because just a couple of months ago a bunch of Minnesota lawmakers were stalked using commercial surveillance data and murdered. So we need to protect those people too. And his colleagues in the Senate decided that they weren’t interested in protecting anyone else.

    I think this goes a long way to explaining why we have this vacuum in federal privacy legislation: lawmakers have figured out a way to protect their privacy without protecting privacy per se. In New Jersey, there is a law that protects the privacy—it’s a very strong consumer privacy law—for judges and cops and no one else. Interestingly, there’s someone who’s ex–Consumer Financial Protection Bureau, one of their great technology strategists, who’s leveraging that to shut down data brokers nationwide, because it’s very hard for them to figure out if they’ve got a cop from New Jersey in their database, which exposes them to so much liability that they kind of have to go out of business, and they’re doing some pretty cool impact litigation.

    But I wonder if, given the degree—and I know polarization’s kind of a thought-stopping cliché—but given the degree of naked partisan favoritism in law enforcement and in federal enforcement of regulation, whether people who are aligned with MAGA, like a lot of LDS, are saying in this moment, when Palantir’s data mining to find people to shoot in the face or shoot in the back or deport to a Salvadoran death camp, that it’s just not a thing you have to worry about because it is just going to happen to someone else.

    Back maybe in the Bush years, there was a sense that if there was a legal precedent that presidential administrations could expand surveillance radically, that would someday boomerang on people who are aligned with the current president. But now maybe there’s this sense that we’ve left the rule of law behind, and that in an era of the rule of man instead of the rule of law, so long as you are well aligned with the kind of gangsters that are running the state, you don’t have to worry if they arrogate powers to themselves that they could wield against their enemies, because you are their ally and there’s probably not going to be another election. I wonder if that’s what’s dampening the response from right-wing or Republican-aligned blocs that have been historically vocal about privacy.

    Matthew Potolsky: Yeah. I mean, ICE agents are allowed to cover their faces, whereas not even local police or state police are allowed to do that.

    Cory Doctorow: People with chronic illnesses worried about getting a disease, yeah.

    Matthew Potolsky: Exactly. Yeah. I wonder if we can kind of carry this over to the concept of enshittification. So I’ll ask you first to define it briefly, and then my sort of larger question, which I was thinking about as I was reading the book, is: what role does the sort of shifting language around privacy, the sort of acceptance of surveillance, at least as a background activity for our sort of condition, for our participation in a digital economy or in digital life—how might that sort of figure in enshittifying the internet? So begin with—

    Cory Doctorow: Just to stay canonical, let me correct you. Canonically, it’s enshittification.

    Matthew Potolsky: Enshittification.

    Cory Doctorow: Thank you. I can decline it like a Latin student.

    So, enshittification. I should say, I work for the Electronic Frontier Foundation. I’m going into my twenty-fifth year with them. Getting people to care about tech policy questions is hard, because tech policy questions are, in their native form, abstract, technical, and a long way off. For extremely good reasons, people care about concrete things that are right in front of them. The problem is that by the time these questions are right in front of you and concrete, it’s too late. Someone’s used a database to round you up and stick you in a Salvadoran death camp. So you kind of want people to care about it before it’s too late.

    There’s a connection there maybe to climate activism, where when we’re talking about parts per million of greenhouse gases in the atmosphere in the 1980s and 1990s, that’s a very abstract question. But if you don’t deal with it, then the question you’re dealing with is: what do you do about zoonotic plagues, hundreds of millions of climate refugees, and wildfires? Those are much more difficult questions to deal with once they’re immediate and in your face.

    As with climate activism, a lot of the job of technology activism is finding ways to raise the salience of these issues, to make them more immediate and less abstract. I’m a science fiction writer, so sometimes I write science fiction about this stuff, and sometimes I come up with framing devices or metaphors or parables, little ways of making things seem more immediate. And sometimes I coin dirty words. It turns out that giving people a minor license to vulgarity is a really winning combination here. But not merely for the vulgarity. I think that the fact that this is connected to a pretty far-reaching wraparound analysis that has a lot of explanatory power and also some prescriptions about what we should do has made this very successful ideologically.

    So enshittification is partly a set of observations about how platforms decay. It’s partly a theory about why they’re decaying now. And it’s partly a set of ideas about how to reverse their decay and protect from it in the future.

    The decay of platforms has a characteristic three-stage process. In stage one, the platform is good to its end users: it brings them in and finds a way to lock them in. In stage two, knowing that the users are locked in and would struggle to exit, the platform makes things materially worse for them in order to improve the value it can deliver to business customers, who also pile in and also get locked in. In stage three, value is extracted from all parties, not just end users.

    I’m not a believer that if you’re not paying for the product, you’re the product. I’m a believer in: if they can get away with treating you as the product, you’re the product. So value is extracted from business customers, from end users. The goal is to find an equilibrium where you have a kind of homeopathic residue of value left on the platform that’s sufficient to keep the end users locked in and the businesses locked to those end users, and all the value is harvested for the executives and the shareholders. That’s when the platform becomes a pile of shit.

    Which raises this question: why is this happening now? It’s not like we invented greed in the 2010s. Clearly some of these platforms are being run by the same people who ran them when they were much better. Mark Zuckerberg was always a creep. You just need to read Sarah Wynn-Williams’s book from last year, Careless People, to learn that not only did he start Facebook to non-consensually rate the fuckability of Harvard undergraduates, but he also cheats at Settlers of Catan. He’s just a jerk. But Facebook was good, and now it’s bad. It’s not just Facebook. It’s all of them that have gone through this characteristic three-stage pattern of collapse.

    My thesis is that the reason for this is that firms in their ideal state would like to charge infinity for their outputs and pay zero for their inputs. While this does admittedly describe the academic publishing industry, every other sector has to accept some limits on what they can charge and how much they can extract. Those limits come from things like competition, the fear that someone will leave for a better offer, whether that’s a worker or a customer. They come from regulation, the fear that whatever benefits you get from abusing people will be swamped by the cost you’ll pay when the state visits its wrath upon you. They come from fear about empowered workforces. Tech workers were very empowered. A lot of them really cared about users, but they didn’t have unions. Their power came from scarcity.

    And the fourth one in tech is worrying about a special kind of new market entry, new kinds of competitors, not existing ones, who derive their market power from something called interoperability, just making one thing work with another.

    There’s only one kind of computer we know how to make. It’s something that computer scientists call the Turing-complete universal von Neumann machine, which means a computing engine that’s capable of running every valid program, which is a way of saying that every enshittifying program has an anti-enshittification program waiting to be created. If someone installs a ten-foot pile of shit in a program that you rely on, a programmer can costlessly, infinitely, all around the world, deliver an anti-enshittification program, like an eleven-foot ladder made of code that goes right over that ten-foot pile of shit.

    So firms have to worry. If you make the ink too expensive, then someone will write a program that disables the ink-checking routine and start selling ink to your customers, and maybe they buy their next printer from that company too. This enforces a form of discipline as well.

    As to what happened to those forces of discipline, we just dismantled them. It’s not that people chose unwisely in how they shopped. It’s not that we forgot that if you’re not paying for the product, you’re the product, and we’re lazy and we’re bad. Consumers are not the arbiters of policy outcomes. Shopping your way out of monopolies is like recycling your way out of the climate emergency. It’s not merely that these ketamine-addled Zucker-Muskian failures who run our digital world are bad people, because they were bad people before. It’s that our policymakers, in living memory, took decisions that had the foreseeable and foreseen outcome of making the most profitable and successful strategies for business the ones that are most extractive and platform-destroying. They created the enshittogenic policy environment.

    Matthew Potolsky: You mentioned Bork earlier. In your book you describe how Bork’s theories about monopoly became very important to the formation of a lot of these platforms.

    Cory Doctorow: Very much so. Bork was the figurehead of a movement within the Chicago School of economics, the neoliberal economists, as they’re called sometimes, who held that monopolies are evidence of efficiency, not market abuse. That where you see a monopoly in the wild, your first impression should be: here is a company that is so good, everyone wants to buy its products—not here is a company that has rigged the market so that people can’t shop elsewhere. And if you think that, then wouldn’t it be perverse to punish that company for being so pleasing to so many people that everyone voluntarily buys their products?

    So they counseled that we should not enforce anti-monopoly laws. Actually, they went further than that. Bork, in addition to being a bad economist, was a bad historian, and he claimed that the statutory language of antitrust had been misinterpreted: that what Congress had always intended in passing these antitrust laws was not to police monopolies per se, but only to police monopolies that abused their market power, and that Congress wasn’t concerned with the formation of monopolies, just the abuse of them. This is just frankly untrue.

    There’s no honest reading that supports this. You would have to be a kind of QAnon gnostic to read Senator John Sherman, brother of William Tecumseh Sherman and author of the 1890 Sherman Act, the very first antitrust law, who on the Senate floor said, “If we will not tolerate a king, we should not have an autocrat of trade,” and come away with that interpretation. It wasn’t about whether the autocrat of trade wielded their power wisely. It was about whether they existed at all.

    This kind of paranoid conspiratorial belief that the law said something other than what it said found very fertile soil. In the same way that Upton Sinclair says it’s impossible to explain something to a man when his salary depends on him not understanding it, it’s impossible to explain something to a billionaire when their billions depend on them not understanding it. So a lot of people found this to be a very palatable message, in the same way that there’s a certain kind of very rich self-proclaimed Christian who finds theological interpretations of the message of Christ being “make as much money as you can and screw everyone else” to be a very convenient one. You can kind of understand why if you’re very wealthy all this stuff about rich men and camels and needles turns out to be just unimportant, the footnotes to the scriptures.

    The outcome here is that we have an environment that encourages monopoly formation, that encourages market-power abuse, that stripped away protection from workers, that made it impossible to regulate these firms. They became so large they were, in fact, more powerful than their regulators.

    IBM was a good example of this. For a long time, IBM had official forbearance from antitrust enforcement, even when we were enforcing antitrust law very vigorously against other firms. IBM was considered structurally important to the operation of the American government, particularly to the War Department, the Pentagon. So wherever there was a juncture where someone should have done something about IBM, they just didn’t.

    By 1970, it was really obvious that American technological progress was being held back by IBM, that they were effectively acting as a market regulator to prevent the emergence of products that competed with theirs, and that as a result other powers, Japan in particular, constituted a real threat to American tech dominance around the world and therefore a geostrategic problem. So the U.S. government finally, under Nixon, decided that they were going to sue IBM for antitrust violations, and there began a twelve-year journey that came to be called antitrust’s Vietnam.

    Every year for twelve consecutive years, IBM spent more on outside counsel to fight the Department of Justice Antitrust Division than the entire DOJ Antitrust Division spent on all of its lawyers fighting all of the cases in the country. They outspent the U.S. government for twelve consecutive years, ran out the clock, waited till Reagan was elected, and then Reagan let them off the hook.

    This is a real example of why you need to intercede before monopolies gain a foothold, because once they’re there, it’s too late—or, if not too late, at least heroically difficult to dislodge them. The time to deal with a monopolist is before they have a monopoly, not after.

    So here we are. We’re in this moment where all of these foreseeable outcomes—regulatory capture, the destruction of labor protection, the wielding of tech policy by tech incumbents to prevent new tech companies from emerging the same way IBM did in the sixties and seventies—that’s all upon us. As a result, the internet has devolved into what Tom Eastman calls “five giant websites filled with screenshots of text from the other four.” They spy on us from asshole to appetite. They abuse us. They degrade their platforms. They find all kinds of ways of harming us, many of which are intimately related to privacy.

    There are lots of ways you can just degrade a platform willy-nilly. You can just make it worse for everyone. Think about Sonos deciding that they wanted to compete with Beats and with the AirPods. Sonos makes these smart speakers. They said, we’re going to bring out smart headphones, and we’re going to make them work with our smart speakers. We’re going to do that as a rush job, as a matter of priority, just like with the space shuttle Columbia, where there are people there saying this is not ready to launch, and then there are people in charge saying, well, it would be reputationally damaging for us not to launch now, and then the whole thing blows up. They launched the new Sonos update, and they broke every speaker they’d ever sold.

    Matthew Potolsky: I was there.

    Cory Doctorow: And there was no way for them to fix them. I have some Sonos speakers. They still don’t work. It’s been like three years, and thousands of dollars of electronics in my home do not work.

    That wasn’t targeted. They just did it to everyone who had a Sonos speaker. But then there are tons of ways where these firms that want to extract are like, okay, we know that our median customer will tolerate a certain amount of abuse, but there are customers who are sort of six-sigma customers who are so locked in and so dependent on our platform that we can abuse them in highly personalized ways without risking their departure. You can only do that if you know which customers are which. So there is a kind of baseline of enshittification that cannot be exceeded without surveillance.

    I’ll give you an example. Nurses are preferentially hired by American hospitals as contractors. That lets them do union avoidance. It used to be that contract nursing was handled through local staffing agencies, brick-and-mortar firms, and there would be a few in every town. Now they’ve all just been demolished by four national apps. Each of these apps markets itself as a kind of Uber for nursing. Before a nurse is offered a shift with one of these apps, the app looks up the nurse’s credit card debt and it makes a calculation, based on how much credit card debt the nurse is carrying and how delinquent that debt is, about the lowest wage that nurse will be willing to accept, and then offers a lower wage based on that calculation.

    We’ve all seen wage erosion in many fields, but this is what Veena Dubal calls algorithmic wage discrimination. It allows you to do a personalized form of wage erosion, where you locate individual workers within the labor pool and do targeted wage erosion against them specifically and individually. That is something that, on the one hand, you need computers to do. There are black-hearted coal bosses in Tennessee Ernie Ford songs who would’ve happily done this, but they couldn’t afford the army of Pinkertons to follow around coal workers or the army of guys in green eyeshades to mark up the ledgers. But you also need the surveillance data. Unless you know about the credit card debt, it doesn’t matter that you can change the wage, because you don’t have the information about the worker. So you can see that there’s this nexus between enshittification and the failure of privacy law.

    Matthew Potolsky: Yeah, exactly. It occurs to me that, again, without all the data enabled by weak privacy protections, you can’t do any of that kind of, as you called it, twiddling, I think.

    Cory Doctorow: Twiddling, yeah. Changing, turning the knobs to change it. But this isn’t just twiddling. This is per-user twiddling.

    Matthew Potolsky: Per-user, yeah.

    Cory Doctorow: This is happening increasingly here in the physical world. The prevalence of electronic shelf tags means that prices can change from moment to moment. Norway really pioneered this, and Norwegian grocery stores are changing prices two to three thousand times a day. There’s a great story in—I forget what it’s called, the Las Vegas daily newspaper, the big one—that was done by a bunch of their interns, where they figured out that the sundry stores in the casinos have stopped pricing their goods. They reprice the goods based on the hour of day.

    Matthew Potolsky: Oh, wow.

    Cory Doctorow: So Red Bull costs more at one in the morning than it does at three in the afternoon.

    Matthew Potolsky: I was struck by an analogy you made in your book between enshittification and feudalism, the relationship between feudal lords and serfs. In the last year or so, we’ve seen a bubbling up of reactionary neo-feudalist philosophers like Curtis Yarvin and Nick Land. So I’m curious if you would expand on that analogy, and whether it’s just an analogy or whether it’s a dream.

    Cory Doctorow: Maybe it’s both. It’s funny. My dear friend Ada Palmer, a Renaissance scholar, has a new book out called Inventing the Renaissance, which is a historiography of the Renaissance. It sort of explores the work that the idea of a Renaissance has done for different people at different times.

    Feudalism is one of those things. In the same way that there wasn’t a Dark Ages, but talking about a Dark Ages is very beneficial to certain people at certain times, describing things as feudal or not feudal, or figuring out what you call feudal, or what you make the salient fact or the dispositive fact of feudalism, it does different work for different people.

    So one definition of feudalism comes from the security researcher Bruce Schneier, who says that in techno-feudalism you have firms that are very powerful and that offer to use that power to defend their users, provided that their users surrender their autonomy to them. You can think of this as Apple’s pitch or Google’s pitch or Microsoft’s pitch. I am a ferocious feudal lord, and I have a keep, and outside of my keep are bandits who want to steal your identity and hit your computer with ransomware. Move inside my castle and I will hire mercenaries to stand on the battlements and kill anyone who tries to come over them to harm you.

    Which works great until the feudal lord decides to make a meal of you, and then the walled garden becomes a prison and the mercenaries turn towards you, and anything they want to do to harm you becomes fair game.

    I mentioned that Apple rolled out a tick box that 96 percent of users ticked to block all spying. As Apple was rolling that out, they also rolled out surveillance on their own platform. Apple does the spying that Facebook can’t do anymore against Apple users in order to power Apple’s own ad-targeting network. It’s literally the same data being collected for the same purpose, except there’s no way to turn it off, because all the things that let Apple block Facebook from spying on you stop someone else from making software for your iPhone that stops Apple from spying on you. It’s literally a felony to jailbreak an iPhone so that you can install software on it to stop Apple from spying on you. So this is feudalism in the style of Bruce Schneier.

    But there’s an economist, Yanis Varoufakis, who has a different definition of techno-feudalism. For Varoufakis, as an economist, the most salient fact of feudalism is how value was created and where it landed in the economy. For him, the most important distinction is the one between what economists call profits and what economists call rents.

    We’re accustomed to using these terms colloquially, and the colloquial senses of rent and profit actually map pretty well onto the economic ones. Rent is something that you get from owning something that someone else needs to do something, and profits are what you get from doing something.

    If you’re a coffee-shop owner renting your premises, and one day someone opens up a better coffee shop across the street, your best baristas defect, your best customers defect, and you go out of business. For your landlord, that’s not bad news. They now own an asset on a street with the hottest coffee shop in the city on it. They can rent out that empty storefront for even more money. But for you as the entrepreneur who’s reliant on profits, this is bad news.

    The moral philosophers of capitalism, its early progenitors—Adam Smith, David Ricardo—they hated rents. They thought that rents were the biggest barrier we had to profits. They said rents are what feudal lords get, and feudal lords are fat and happy and they don’t have to invest in making things better, because they get rent every year no matter whether things are better or worse. The peasants owe them the rents whether they’ve made a breakthrough in plows or seeds or precision agriculture or new ways to raise cattle or whatever. Whereas entrepreneurs have to find new ways to make more out of what they have, to find ways to become more efficient and more productive. It is through entrepreneurship and it is through the profit motive, which includes the fear of competition, that we are able to derive these great productivity gains that make the Industrial Revolution so exciting to live through.

    If you read Marx and Engels, if you read The Communist Manifesto, chapter one is just them geeking out about how cool it is that the profit motive has made people super productive and has increased the kind of general welfare and the material wealth of everyday people, because there’s more stuff that’s cheaper and better.

    You can think about whether a society is capitalist or feudal not in terms of whether it has rents or profits, because clearly the fact that there’s a coffee-shop owner and that they’re paying rent does not mean that they live in a feudal society, nor does the fact that there’s a peddler who goes from one lord’s land to the next selling treasures of the Far East mean that you’re living in a capitalist society just because there’s someone who’s doing capitalism at the periphery. Really, the thing that determines whether you live in a capitalist or feudalist society is what happens when rents and profits come into conflict and how that conflict cashes out.

    Feudal lords wanted to have peasants bound to the land and land that was used for subsistence agriculture and the production of surpluses by those peasants. Whereas capitalists who were building the dark Satanic mills of the Industrial Revolution in Manchester, they wanted to kick the peasants off the land so they would become a precaritized proletariat who would have to come work in the factories, and they also wanted to turn the land over to sheep grazing so they could have the key input to the mills, which was wool. So you have this fight between profits and rents, and the capitalist won and the lords lost.

    Today you have a new era in our market where rentiers, the people who own things, are consistently winning the fights with the people who do things. Think about Amazon. Amazon mostly makes money from rents. Mostly they own the storefront and they own their cloud servers. It’s tempting to look at Amazon and see a flea market with a million stallholders all selling their wares and eking out their livings in a market. But really every one of those stalls is controlled by one guy, by Jeff Bezos, who levies a 50 to 60 percent tax on every dollar they make. That’s the junk-fee load for a seller on Amazon these days. It was 45 to 51 percent when I wrote the book; it’s now 50 to 60 percent. He also decides what they can sell and where it gets sold and whether a customer ever sees it and what it costs. That is much more about extracting rent than it is about making and selling things.

    Apple makes and sells a phone, and they make a lot of money from that, but their single largest line of business is the hundred billion dollars a year that they get from charging thirty cents on every dollar that someone spends inside an app. That’s a pure rent. They’re not improving the marketplace by draining thirty cents out of every dollar. Lots of businesses that can’t exist at a 30 percent rake for transaction processing just don’t get to sell on the platform. That’s the consequence of having a feudal economy. You end up with what amounts to regulators, except they’re not publicly accountable democratic regulators. They’re commercial regulators who structure whole markets based on their own whims and their parochial needs. In the same way that feudal lords were interested in just enough progress to keep them fat and happy, but not enough to make the world a better place, these people are interested in the kinds of progress that benefit them and not the kind of progress that might challenge their authority.

    Matthew Potolsky: So what do you make of this fantasy that I keep reading about, the sort of obsession that tech workers and tech owners now have to make as much money as possible before AI turns everybody else into serfs? Whether this is true or not, I’m interested in the fantasy that we are going back to some kind of feudal social arrangement and not just a sort of feudal economy.

    Cory Doctorow: I think that it is somehow connected to what I call billionaire solipsism. Every billionaire is a policy failure. It’s a cliché, but it’s true. The only way to become a billionaire is to really inflict a lot of misery on a lot of people. The only way to inflict that much misery on that many people and still look yourself in the mirror is to convince yourself that somehow the pain that those people suffer through just doesn’t matter. That in some sense they’re not real people.

    Elon Musk calls the people who disagree with him NPCs, non-player characters. These are the crude software robots in video games that say, “Welcome, traveler, to my tavern. If you are interested in a quest, I suggest you go over there.” They’re not real people. They’re just bots. That’s what Elon Musk calls the people who disagree with him.

    This idea of making as much money as possible actually got its kind of moral justification in this decade through the effective altruists, who claimed that they just wanted to do the most good possible. But that became this idea called earn to give, which is to say that you can wash clean all the sins of the worst jobs in the world. You can go to work just sort of murdering millions of people, provided you make a lot of money and save billions of people with the money that you make, and that the ledger will balance in your favor.

    Then it turned out that the effective altruists were also what they called long-termist, which sounds great. If you’re worried about companies chasing the next quarter at the expense of the health of the business, a long-termist sounds great. Except that’s not the kind of long-termism they were interested in. They said that the thing that you had to realize is that in ten thousand years we would have 10 to the 53 artificial humans, and that if you could increase the joy of 10 to the 53 artificial humans by one infinitesimal iota, that in aggregate that would be worth more than all the joy you could bring to every human being that was alive today.

    Again, this is a solipsistic belief. Real people don’t matter as much as imaginary people in the future. There’s some connection there maybe to a wider school of right-wing thought where you sometimes see people who claim to be pro-child only caring about imaginary children, unborn children, children who might be in the basement of a pizza parlor that doesn’t exist, and so on. But when it comes to actual children who are in immigration detention or who are starving because of cuts to social programs or cuts to overseas aid, those children actually don’t matter. I only care about imaginary children. That’s maybe connected to the solipsism as well, this failure to believe that other people exist.

    I think AI is connected to it too, because—

    Matthew Potolsky: I was going to ask you to sort of—

    Cory Doctorow: Yeah.

    Matthew Potolsky: Connect it with imaginary people and imaginary artificial intelligence.

    Cory Doctorow: The crisis of being a billionaire is, if you think that you are the only real person, or maybe part of a small cohort of real people, you run up against this kind of unfortunate reality, which is that you don’t know how to do anything. If you’re the boss and you don’t show up for work, the job just keeps ticking over. The hotel continues to rent out rooms, or the factory keeps making widgets, or the restaurant keeps serving meals. But if none of the staff turn up, you’re out of business.

    There is this sense in which bosses want to insist that they’re in the driver’s seat, but secretly suspect that they’re in the back seat with a Fisher-Price steering wheel. AI is the fantasy that you can get rid of all the people and you can kind of wire the Fisher-Price steering wheel directly into the drivetrain of the car. So the chain goes: boss vision, AI execution, with no people in the middle who know how to do things, who have these ego-shattering confrontations with you in which they point out that you don’t know how to do anything and they do.

    When I was walking the picket line with the screenwriters—because I live in Burbank, which is home to three of the big studios, Universal, Warner, and Disney—when I was walking the picket line with the screenwriters, one of them told me, you prompt an LLM the same way you give stupid notes to a writers’ room: “Make me E.T., but make it about a dog and put a love interest in there and stick a car chase in the third act.” And that is Air Bud.

    For the most part, if you give that note to a writers’ room, they kind of roll their eyes and they call you an idiot and they call you a suit and they tell you to go play with your spreadsheets, and they remind you that they are the ones who know how to write screenplays and you are someone who knows how to make Excel formulas. That is really tough. The fact that if you give that note to an LLM, it will shit out an unreproducible script is less important than the fact that it won’t remind you that you don’t know how to do anything.

    So we’re getting back to this solipsism, this boss-vision output, no humans required. There’s even this funny thing where they’re like, well, maybe we should have universal basic income. Sam Altman has this idea that we all stick our heads in an orb and have our retina scan so that universal basic income can be apportioned to us. But when you hear him drill into it, he’s like, it’s not just that I want to have universal basic income. The reason we’re going to need to have universal basic income is that I’m creating God, and God will provide all of your services, and I will collect rent on those services. So universal basic income in that vision is more like getting a charter-school voucher. You have a government that issues a voucher to you, and you can only get value out of that voucher by giving it to a rich guy who’s kind of a dilettante, who’s got ideas about how you should live your life. You can’t just turn that into a thing you want. You have to turn it into a thing someone who’s richer and better than you wants, which starts to feel very feudal in the sense of these people are your social betters. They were ordained by God or providence to be running the economy. But it also feels very solipsistic.

    I write science fiction novels. I sometimes write fantasy, and obviously those two fields are joined at the hip. One thing that’s remarkable about fantasy novels is that they almost universally get the ratio of lords to vassals wrong. There just aren’t enough serfs in the median fantasy novel. But there are a couple of fantasy writers who are Marxists, and the thing that distinguishes their fantasy novels is that they really have a lot of serfs in them. So Steven Brust and China Miéville, these are writers who really understand how many serfs you need to keep the lords afloat.

    Matthew Potolsky: You ended your talk last night with a very strong note of optimism, which I feel like has been in very, very short supply lately, especially with all the doom-saying around AI and what it’s going to do to jobs or what it’s doing to the environment. So I wonder if you could recapitulate that a little bit. Why do you feel optimistic about things?

    Cory Doctorow: Let me make a small pedantic correction, because I’m not a believer in optimism. I think optimism and pessimism both are forms of fatalism. They’re this belief that the human doesn’t matter, that the great forces of history are making things worse or better, or maybe the iron laws of economics or something.

    I think hope is the belief that if you materially alter your circumstances so that you can ascend a gradient towards a world that you’d like to live in, that even though you can’t see your way from A to Z, as you attain a higher and higher vantage point, more terrain is revealed to you. The fact that you can’t see your way all the way to the top doesn’t mean that that route doesn’t exist. It just means that you’re going to have to do it in a stepwise fashion, taking steps towards the world you want to live in. So long as there’s a step that materially improves your circumstances in any way, you can’t know a priori whether or not that’s the step that’s going to reveal something very powerful that takes you a long way up that gradient.

    I have a lot of hope because I can see a lot of steps we can take. In particular, I have a lot of hope because I’m really excited about the rise and rise of anti-monopoly ideology around the world, because anti-monopoly ideology is anti-oligarch ideology. Monopoly and oligarchy are inseparable from one another.

    If you ask a political scientist, they will tell you that the greatest predictor of the outcome of a policy is whether billionaires like it. There are these big empirical studies of thousands of policy outcomes that find that things that billionaires as a class don’t want never happen, and things that billionaires as a class want always happen, and that the policy preferences of ordinary people, to quote one important Princeton study, “have no measurable impact on the outcomes of policies.” That’s a pretty disheartening idea.

    And yet, since the late 2010s, and especially in the last few years, we have seen everywhere we look, everywhere in the world, an incredible growth in anti-monopoly policy, action, and enforcement. Everywhere we look. It’s part and parcel of some other forces that we can see. More Americans want to be in a trade union now than at any time in generations, and Americans view trade unions more favorably than at any time in generations. You see all this stuff that is really antithetical to the interest of rich people that is just gaining ground in this incredible way. There is really no explanation for it. It is as though the law of political gravity was repealed. The skies are filled with flying political pigs. Political water is flowing uphill. No one notices.

    I think it’s an incredibly salient phenomenon. We often talk about political will as an innate invisible force. I think it is invisible, but I think it’s invisible like the wind is invisible. The wind is invisible until something gets in its way, and then you see the force of the wind when something gets blown away. What we’ve seen over and over again is that when any politician of any political stripe, including reactionaries on the right, unfurls even the most modest sail with the word antitrust or anti-billionairism on it, they escape these doldrums in which policy has been becalmed since the Reagan years, and they are jetted across the surface of the water like a catamaran.

    That’s true whether we’re talking about Zohran Mamdani or, to a great extent, Donald Trump, who was propelled to office on a mixture, let us not forget, not just of grievance, but also of a promise to deal with elites. Now, the fact that he’s a member of the elites and that he kowtows to them, that’s depressing.

    Matthew Potolsky: And there’s multiple elites.

    Cory Doctorow: And there are multiple elites, and it’d be great if people were a little more discerning in the way that they took that. But boy, it sure tells you something about how people feel about the idea of elites, how people feel about concentrated power.

    I think the facts on the ground have changed. We know what happened if you were Bernie Sanders ten years ago this year and you went on the campaign trail to stamp out billionaires and to enact policies that are anti-elite: you got your ass kicked. I don’t think anyone thinks it’s automatic that someone who said what Bernie said ten years ago is going to get their ass kicked in 2026 or 2028, or in other places in the world.

    The leading politician in the UK right now is Zack Polanski, a Green politician who came out of nowhere and is campaigning on basically Zohran Mamdani’s platform, in a country that since 2010 has elected either Conservatives or a Labour Party that out-Conservatives the Conservatives, and that since the 1990s elected Labour politicians like Tony Blair, of whom Margaret Thatcher said, “He is my proudest accomplishment.” So this is a total reversal of British politics. It’s an incredible moment. It’s happening everywhere we look.

    I’m not saying it’s foreordained, but I’m saying that if you’re asking yourself, here we are at the bottom of Mount Hope trying to ascend that gradient, what paths are available to us, and how far can we see the traversal that gets us to a higher elevation? There is a path right now that gets us to a much higher elevation than any that we could have seen, and it is much more clearly demarcated than any path we’ve had for generations.

    If that doesn’t excite you as someone who wants to live in a better world, then I think you’re unrealistic. You’re expecting optimism. You’re expecting the great forces of history to propel you up that gradient as opposed to your own hard work. We’ve got a lot of hard work ahead of us, and we’ve had that hard work for a long time, but that hard work has got a direction we can go in that was invisible to us, that was almost unimaginable to us just a few years ago. I think that’s a reason to be excited and very hopeful.

    Matthew Potolsky: That’s great. Thank you. That’s a good place to end. I want to thank Cory for agreeing to come in and talk. It’s a really fascinating discussion. You’ve been listening to the Virtual Jewel Box. Our theme music is by Jelly Roll Morton. It’s “The Perfect Rag.” I’m Matt Potolsky from the English Department at the University of Utah. Again, thank you, Cory.

    Cory Doctorow: Thank you, Matt. I had a great time.