“Our inventions are wont to be pretty toys, which distract our attention from serious things. They are but improved means to an unimproved end, an end which it was already but too easy to arrive at.” – Henry David Thoreau, Walden (1854)
The internet, we were told, would liberate us. It would democratize information access; it would eliminate borders; it would free us from our corporeal existences. In 1993, Wired magazine promised that the digital revolution would bring about “social changes so profound their only parallel is probably the discovery of fire.” In 2000, Bill Clinton proclaimed that China’s efforts to crack down on the internet were “like trying to nail Jell-O to the wall.” In 2011, many speculated that the Nobel Peace Prize would be awarded to Twitter for catalyzing the Arab Spring. The internet was a place of limitless potential.
All the while, the World Wide Web grew and would continue to grow: 130 pages in 1993; 17 million pages in 2000; 700 million pages in 2011. And with the exponential growth emerged the problem of search and discovery. How do we find web pages in an exponentially expanding digital haystack? Like most problems under technocratic capitalism, a solution was found in monetization, this time in the form of the commercial search engine.
As early as 1998, the troubling effects of monetizing digital information access were clear. Some proto-search engines sold “preferred listings” at the top of search results; others served advertisements. No matter the path to commercialization, the end result was to distort what people saw and steer people’s clicks. “The Anatomy of a Search Engine,” one of the most highly cited computer science papers of all time, addressed this troubling trend head-on:
Currently, the predominant business model for commercial search engines is advertising. The goals of the advertising business model do not always correspond to providing quality search to users… we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers…. we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.
The authors of the 1998 paper were remarkably prescient. They would also turn out to be remarkably hypocritical. Their names were Sergey Brin and Larry Page, the founders of Google, and their paper introduced their search engine to the academic world.
Like the World Wide Web, Google grew and would continue to grow, evolving from a Stanford dissertation project to one of the most powerful companies in the world. Abandoning their 1998 stance, Brin and Page would cave under investor pressure to monetize their search engine via advertising and double down by serving targeted ads based on search queries. With every Google search, Google’s automated auction system would dynamically execute a bidding war among advertisers to serve ads, which generated revenue each time a person clicked on them. The resonances with quantitative finance on Wall Street were by design. Google’s chief economist, Hal Varian, intended to turn advertising into a commodity to be traded on its own codified market, complete with high-frequency trading. Unlike other commodities, however, the market was—and is to this day—entirely unregulated, with Google representing parties on both sides of the transaction. As articulated by Nicholas Carr in the Los Angeles Review of Books, this auction system “gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content… And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits.”
In 2004, Google’s IPO prospectus contained the now-famous motto Don’t Be Evil: “We believe strongly that in the long term, we will be better served – as shareholders and in all other ways – by a company that does good things for the world even if we forgo some short term gains,” Brin and Page wrote. According to former Google executives Eric Schmidt and Jonathan Rosenberg in their 2014 bestseller How Google Works, the mantra was coined during “a meeting in which [employees] were debating the merits of a change to the advertising system, one that had the potential to be quite lucrative for the company. One of the engineering leads pounded the table and said, ‘We can’t do that; it would be evil.’” And yet, the very existence of the advertising system whose proposed change inspired the motto was an abdication of Brin’s and Page’s supposed principles. Like all adages from Silicon Valley, Don’t Be Evil was thus an empty rhetorical obfuscation from its inception.
In the intervening years, the successes of Google and its parent company, Alphabet, would be punctuated not only by rolling out products and acquiring companies but also by being evil, perpetrating increasingly inventive ethical transgressions in service of gathering our behavioral data and training predictive models. To better serve targeted ads, Google scanned our email correspondence in Gmail. To covertly steal our unencrypted home WiFi router data, it added functionality to its Street View cars that drove through residential streets across the world. To harvest training data for its machine learning algorithms for free, it made us identify stop signs and traffic lights in reCAPTCHAs when visiting web pages. To improve the machine learning algorithms themselves, it plundered our universities of professors and students alike. To mine our health, it acquired Fitbit and partnered with hospitals to ingest our medical records (in the case of the University of Chicago Medical Center, Google “allegedly” obtained hundreds of thousands of medical records in clear violation of informed consent and HIPAA). To invade our personal spaces, it gave us Google Homes along with Spotify subscriptions and free donuts. To keep us watching YouTube videos, it created a recommendation algorithm that has radicalized and scarred us and our children alike. To occupy our metropolitan spaces, it acquired Sidewalk Labs and promised us the “smart city.”
Today, Google and Alphabet are everywhere, and so is their data collection: Google search (our search queries), YouTube and Chromecast (our video preferences), Android (our mobile behavior and app usage), DeepMind and Google Health (our health data), Google Maps (our geolocation data), Google Chrome (our browsing histories), Google Home and Google Nest (our home lives and voice commands), Google Drive (our files), Google Docs (our collaborations), Gmail (our emails), Google Books (our collective cultural heritage), Google Photos (our personal photos), Google News (our political leanings), Google Hangouts, Google Meet, and Google Duo (our social connections), Google Colab (our code), Google Analytics (our website metrics), YouTube Music (our music preferences), Google Calendar (our plans), Google Workspace Enterprise (our professional lives), Google Classroom (our education). Though I often feel guilty about my inability to dissociate myself from the Google ecosystem, the reality is that as a participating member of society, I simply cannot escape it. There is no denying the utility of Google’s products and services, but to call this utility “convenience” is to outright deny our autonomy and consent. The age-old tradeoff between privacy and convenience thus breaks down completely.
What is surveillance capitalism? In her influential 2019 book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Shoshana Zuboff defines it as the claiming of “human experience as free raw material for translation into behavioral data”; by way of machine learning algorithms, this “behavioral surplus,” or “data exhaust,” is then converted into “prediction products,” which are traded on the “behavioral futures markets.” Phrased differently, our behavioral data are used to train predictive models that can guess things about us, which can then be monetized in many different forms, from steering our clicks or purchases to selling our data and predictions to others. The better and more plentiful the data, the better the downstream machine learning predictions, and the larger the profits. In her book, Zuboff aptly describes how Google’s monetized surveillance tactics have matured into a profit structure amenable to tech and tech-adjacent companies more generally. The list has grown to include Facebook, Amazon, Microsoft, Snapchat, Netflix, Spotify, LinkedIn, IBM, Palantir, Verizon, rideshare companies, automotive companies, e-commerce sites, health companies, dating apps, porn sites, banks, insurance companies, and consultancy firms. The scale of these companies’ investments in machine learning research is remarkably difficult to comprehend, as evidenced by the many research labs that power surveillance capitalism: Google Brain, DeepMind, Facebook AI Research, Microsoft Research, IBM Watson, Uber AI Labs, Walmart Labs, and Salesforce Research to name a few. And yet, the scale of data collection in our daily lives is even more difficult to comprehend.
The sartorial concerns of surveillance capitalism are the Apple Watch and the Google Glass. The domestic infiltrations are the Amazon Echo and the smart fridge. With better data collection, monitoring, and tracking, we are promised insights into ourselves: our sleep habits, our sex lives, our time management. Just as capitalism promises the domination of nature, surveillance capitalism promises the domination of human nature via our behavior. The totality of surveillance has become a spectacle to champion: the surveillance capitalists preach individual emancipation via datafication and computation. We are told that we will unlock our true selves with enough data and enough computing power. It is Silicon Valley’s perversion of the Socratic method: the unoptimized life is not worth living. This fetish of optimization pervades tech culture. Consider the obsession with machine learning (itself a form of statistical optimization) and time management (namely, a preoccupation with calendars and agile workflows), as well as the modern technocrat’s brand of philosophy (utilitarianism, “rationalist thought,” futurism, and the “quantified self” movement) and philanthropy (effective altruism). But below the surface, this proselytizing cult of optimization is nothing more than capitalism as religion, marked by an all-consuming belief in maximizing profits via datafication. Through this lens, the intentions behind spreading the gospel of personal optimization in product launches and quarterly earnings reports become clear. Even though the technocrats will have us believe that it is their worldview, it is in reality an economic theory. Its distilled calculus is as simple to grasp as supply and demand: more data on us equals more money for them.
The acts of surveillance capitalism perpetrated under this extractive economic theory are remarkably egregious and yet so commonplace that we have become entirely desensitized. Consider, for example, the infamous Cambridge Analytica scandal that influenced elections across the world; Facebook’s social contagion experiment in which researchers modified 689,003 users’ news feeds in order to study whether users’ emotions could be manipulated; Pokémon Go’s gamification of behavioral modification to direct players to sponsored locations such as Starbucks franchises; health insurance companies that vacuum up data on our clothing sizes; Roomba robots making maps of our homes to be sold to other companies; companies that sell predictive algorithms for legal recidivism and facial recognition for law enforcement; targeted advertisements so granular that pregnant women who miscarry are haunted by baby clothing ads; and Facebook’s rollout of “free” mobile internet in India best characterized as neo-colonialism due to the predatory data harvesting (“Anti-colonialism has been economically catastrophic for the Indian people for decades. Why stop now?” Facebook board member Marc Andreessen tweeted in 2016). Surveillance capitalism-funded academic research papers such as “Multi-person Localization via RF Body Reflections” and “Capturing the Human Figure through a Wall” paint a future of Wi-Fi routers that will capture all our movements. All the while, our personal lives have been monitored and aggregated into lists and spreadsheets to be sold by data brokers: $79 per directory of 1,000 rape victims or genetic disease sufferers.
While the amateur scammers have been trying to snooker us with pop-up ads celebrating us all as the millionth visitor, the true scammers have instead been hiding in plain sight all along, mining our behavior with each click, keystroke, and voice command with promises of videos of cute dogs and better directions and healthier lifestyles, commodifying us through comprehensive surveillance. The surveillance capitalists are wolves in technology’s clothing.
“The Egyptian who conceived and built the pyramids thousands of years ago was really just a successful manager,” Eric Schmidt and Jonathan Rosenberg tell us in How Google Works. “The Internet Century brims with pyramids yet unbuilt…. And this time, with no slave labor.” Last year, approximately 83 percent of Alphabet’s $161 billion in revenue was derived from advertising. It has become abundantly clear that the pyramids of the Internet Century have already been built. They have been financed with blood money, generated by exploiting and forcibly harvesting the very thing that surveillance capitalism has promised to liberate: ourselves.
Who, precisely, are our surveillance capitalists? My guess is that when the history books are written, the remembered faces of surveillance capitalism will be the oligarchs of Silicon Valley—the same faces who occupy so much of our attention today. But obscured behind them will be the massive systemic complicity and banal personal motives that have fueled it all. For each Sergey Brin and Larry Page, there are a thousand Google employees who had each accumulated a net worth of $5 million or more within a few years after Google’s IPO. For each Mark Zuckerberg, Jeff Bezos, and Peter Thiel, there are countless engineers, data scientists, machine learning researchers, product managers, IT specialists, operations researchers, advertising executives, corporate lawyers, in-house economists, UX designers, tech consultants, and university professors with surveillance capitalism research appointments who profit off of surveillance capitalism’s monetization of us all. Some turn a blind eye to their own complicity. Others hide behind their inconsistent utilitarianism (it was Facebook’s News Feed creator Andrew Bosworth, after all, who said, “We connect more people…. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools…. The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good.”). Some pay lip service to the ills of technocratic capitalism. Others legitimize and rationalize corporate transgressions through big tech AI ethics groups. The common denominator is that they all cash their paychecks. “It is difficult to get a man to understand something when his salary depends upon his not understanding it,” Upton Sinclair told us.
The decision by Microsoft employees to protest $8 million in ICE contracts is commendable. So, too, is Google employees’ successful campaign against the company’s $10 billion contract with the Pentagon to provide AI services. These successful demonstrations reveal the collective power held by these many thousands in their capacity to say “no.” And thus, along with our tech gospels Sergey and Mark and Jeff and Bill, they, too, are responsible for what they have not taken an emphatic enough stand against: the extractive mining of our bodies; the exploitation of laborers who power content moderation, product shipments, and data labeling; the unprecedented hoarding of wealth among the technocratic elite; the co-optation of university engineering schools into recruitment centers; the monetary AI arms race that is erasing academic research in favor of big tech AI labs; the conversion of the internet into one large A/B test; the dark design patterns that force us to opt in to data collection; the euphemistic reduction of people to “users” who are addicted to the dopamine hits of likes and shares; the pollution of local elections such as those in California and my home city of Seattle with floods of tech money; the intimate relationship between surveillance capitalism and the carceral state; the proliferation of NDAs with non-disparagement clauses to silence those who sour. These unspoken thousands, too, are our surveillance capitalists, and these, too, are their wrongs.
● “Be socially beneficial.” – Google’s AI Principles
● “People should be accountable for AI systems.” – Microsoft AI Principles
● “Our researchers focus on the projects that we believe will have the most positive impact on people and society.” – Facebook AI Research Values
● “Underpinning our corporate responsibility standards and practices is our dedication to respect human rights.” – IBM Human Rights Principles
● “Our employees constantly think about how to invent for good.” – About Amazon: Innovative Culture
● “With good data and the right technology, people and institutions today can still solve hard problems and change the world for the better.” – Palantir: Why We’re Here
● “Ask the right questions.” – Spotify: 3 Principles for Designing ML-Powered Products
● “To help create positive, enduring change in the world.” – McKinsey & Company: Our Purpose
● “We treat others as we expect to be treated ourselves.” – Enron’s Principles of Human Rights
● “We commit to being a good corporate citizen in all the places we operate worldwide. We will maintain high ethical standards, obey all applicable laws, rules, and regulations, and respect local and national cultures.” – ExxonMobil Corporation’s Guiding Principles
● “Honesty, respect and fairness are the core values that embody our commitment to society…. We care about their dignity and human rights.” – Philip Morris International
A central argument of Zuboff in The Age of Surveillance Capitalism is that today’s surveillance capitalism evades granular comparison with the capitalism of yore. In Zuboff’s eyes, the resonances with Fordism and the industrial capitalism of the turn of the 20th century only go so far, restricting our ability to critique a phenomenon entirely new. Zuboff calls this the “syndrome of the horseless carriage, where we attach our new sense of peril to old, familiar facts, unaware that the conclusions to which they lead us are necessarily incorrect.” It is a pervasive view throughout tech criticism: if the technology is novel, so too must be the threat.
To view the surveillance capitalists as disruptive savants of the free market is to give them too much credit and to deny just how much they have benefited from the government, neoliberalism, and sheer circumstance. Much of the hardware, software, infrastructure, and algorithms that power surveillance capitalism—including the internet, the very substrate of surveillance capitalism—was funded by the military-industrial complex and taxpayer money. Moreover, surveillance capitalism’s longitudinal profits off of 9/11 are second only to the defense industry’s: the attacks were a decisive inflection point in the growth of surveillance and devaluation of privacy, causing the FTC to abandon potential legislation for data protection. It is hard to imagine Google wielding so much power if not for the Obama administration’s infatuation with the company (the administration met with Google employees 427 times from 2009 to 2016). The tax breaks that the titans of surveillance capitalism have reaped are just as significant. Google & Co. owe their success to a rich genealogy of interventions and events that had nothing to do with their self-proclaimed ingenuity.
And yet, it is a fallacy to believe that surveillance capitalism is anything new to begin with, as Zuboff and others have suggested. As pointed out by Brett Christophers in Jacobin, “it is hard to avoid the thought that [behavioral modification] aims have always been integral to capitalism in general and advertising in particular.” The datafication of behavior for advertising purposes was only a matter of time. As early as the 1960s, the Simulmatics Corporation was selling advertising agencies datafied behavioral science approaches and had visions of a “mass culture model,” which, in the words of Jill Lepore, would be used to “collect consumer data from companies across all media—publishing houses, record labels, magazine publishers, television networks, and moviemakers—in order to devise a model that could be used to direct advertising and sales by way of a meta-media-and-data corporation that sounds rather a lot like Amazon.” By 1991, Equifax had compiled granular personal information including purchase habits on 120 million Americans into a digital marketing database to be sold as a CD-ROM. By 1998—the year of Sergey Brin and Larry Page’s paper—a vocabulary for describing surveillance capitalism was already prevalent among technologists. David Shenk popularized the term data smog (“the noxious muck and druck of the information”), which lives on as Zuboff’s “data exhaust,” and the term dataveillance (“the massive collection and distillation of consumer data into a hyper-sophisticated brand of marketing analysis”) was already ten years old. The writing on the wall was there for decades: surveillance capitalism was the natural continuation of industrial capitalism.
In fact, the ills of industrial capitalism can tell us a lot about surveillance capitalism. For example, the resonances with climate change are uncanny. Consider not only the widespread corporate transgressions but also the incessant and sustained public gaslighting: the lies of recycling’s efficacy and wholesale denial of scientific evidence are not unlike the promises made by Facebook executives about the platform connecting us all despite refusing to let their own children use it (former executive Chamath Palihapitiya said his kids “aren’t allowed to use that shit”). Consider further the systemic complicity for personal gain at the expense of others, as well as the failure to translate widespread frustration into the enforcement of meaningful corporate accountability. These longitudinal similarities reveal what we are truly up against.
And thus, the gravest fallacy among critics is to overemphasize the “surveillance” in “surveillance capitalism.” To see the surveillance capitalists as unprecedented in their threat is to fall prey to their endless claims that what they offer is new and emancipatory (“X will free your time!” “Y is the next frontier of [insert here]!”). And so the two decades of debates around surveillance capitalism have been polluted by an amnesic cycle of re-inventing critiques with every technological development. Questions of novelty prevail. Is data the new oil? Are we the consumers, products and/or raw supply? Does surveillance capitalism exploit more than just our labor in the Marxist sense? But the reality is that such theoretical debates ultimately elide the bitingly corrosive effects of capitalism. What truly matters is the wholesale exploitation itself.
Let me be clear: this exploitation has been exceedingly well-documented. But too often, the allure of technological novelty in this exploitation occupies the foreground. Like parents with a newborn, the media inundates us with surveillance capitalism’s “firsts”: the first presidential campaign influenced by algorithmic radicalization, the first person killed by a self-driving car, the first legal recidivism machine learning model to be deployed in court, or the first case of being wrongfully accused by an algorithm. Zuboff et al. cite these “firsts” as evidence of the uniquely perverse downstream effects of malfeasance under surveillance capitalism. Computer scientists in the field of fairness, accountability, and transparency invoke them as examples of why we must de-bias algorithms and training datasets. Policy experts use them to argue why machine learning has no place in our judicial systems. Privacy advocates will point to them when telling you to restrict the number of devices you use. But all of these technologically-informed viewpoints obfuscate the heart of the problem: the “capitalism” in “surveillance capitalism.”
Just as climate change is not solved by minor refinements on the status quo, surveillance capitalism is not addressed by cosmetic, technologically-informed fixes. Uber’s killer self-driving car is not solved by delaying deployment in favor of the more “reliable” gig economy or salivating about AI trolley problems (“In an unavoidable collision, who does the AI choose to save?” goes the well-worn AI ethics thought experiment). Rather, it is solved by divesting from exploitative rideshare companies and investing in public transit. The bias in algorithmic credit determinations is not mitigated by adopting the European Union’s GDPR legislation guaranteeing a right to an explanation for an algorithm’s decision. It is improved by lessening the burden of credit scores themselves via heavy taxation and wealth redistribution. The inequity of the police wrongfully accusing people with machine learning algorithms is not addressed by auditing or removing the algorithm. It is solved by defunding the police. The predatory data vacuuming by health insurance companies is not ameliorated by legislating data protections. It is addressed by providing universal healthcare. Google’s most toxic abuses—YouTube’s recommendation algorithm comes to mind—are not solved by an antitrust lawsuit. They are mitigated only by re-evaluating the profit motives that drive Google to begin with.
The systemic injustices of surveillance capitalism are a call to action.
Perhaps the greatest trap set by the surveillance capitalists has been to convince us that this was all inevitable—that the internet had to turn out this way. They have gone to great lengths to sell a binary narrative reminiscent of Cold War-era rhetoric: we must submit to the Silicon Valley way or the authoritarian highway, with surveillance as the necessary future in either case. In 2009, then-CEO of Google Eric Schmidt proclaimed, “Would you prefer to have the government running innovative companies or would you rather have the private sector running them? There are models and there are countries where in fact the government does try to do that, and I think the American model works better.” The lasting legacy of the PR surrounding Google’s withdrawal from China has been to reify this grossly reductive narrative. It has proven to be remarkably successful, leaving little room for us to speculatively mine the gap between where we are and where we could be. Peppered throughout many critiques of surveillance capitalism are references to authoritarian surveillance as a foil. Comparisons to “Big Brother” are ubiquitous. Zuboff goes so far as to devote multiple pages of The Age of Surveillance Capitalism to a table comparing “Big Brother” totalitarianism and “Big Other” instrumentarianism (Zuboff’s term for the power wielded by the surveillance capitalists) along axes such as “totalistic vision,” “means of power,” and “ideological style.” Some have rightly pointed out how these elaborate comparisons fall prey to the discourse of inevitability and crowd out our ability to re-imagine our technology. I’d like to posit that the discourse of inevitability is even more limiting: it crowds out the view of a brighter society. Just as we cannot speak of the ills of surveillance capitalism without capitalism, we cannot re-envision what the internet could be without reimagining the contours of society itself and actualizing these changes.
If there is one thing to remember about the surveillance capitalists, it is not that they are unicorns or visionaries or moonshot thinkers. It is not that they have innovated disruptively. And it is not that they pose an unprecedented threat. Rather, we must remember above all else how profoundly unimaginative and uninspired they are and will continue to be. With their unoriginal exploitation, garden-variety profit maximization, and derivative, centuries-old, empty promises of a liberated technological future that obscure their intentions, they have recycled and appropriated the well-worn tropes of unchecked capitalism.
True creativity and originality lie in our collective ability to re-imagine the unjust system itself and realize a more equitable society. It is our only path to closing the gap between what technology is and what it could be. It is the inspired path.