Silicon Valley and the English Language
This piece was first written as a ten-minute talk delivered at the University of São Paulo in February 2020 as part of the Affecting Technologies, Machining Intelligences gathering, and revised for publication in Pittsburgh, PA, U.S. in July 2020. It is an informal essay reflecting on the author’s experience as a technologist based in the U.S.
“We build human-centered tools that scale”
This simple sentence, which wouldn’t be out of place overheard at a U.S. tech company or university, indexes a whole worldview of Silicon Valley. If you understand why someone would talk like this, then you will understand how they think.
U.S. computing culture changed fast from 2000 to 2020, and critique is still racing to catch up. To me, U.S. computing culture right now seems to be a half-digested mix of old countercultures, now absorbed into a dominant culture, and new ones that seek to overthrow it. I define a counterculture as a group of people who share values that oppose those of a dominant, mainstream culture, and who therefore define themselves in opposition to it, both by critiquing it and by building new things to counter it, often through a lens of redressing power. Its members often share a lived experience, and they distill common metaphors to reflect that experience in a shared language.
“We build human-centered tools that scale”
In just six words, you can hear the echoes of old computing countercultures, the ringing tones of the new dominant culture, and, just barely, the whisper of new critical computing countercultures rising over it. Because the Silicon Valley worldview is enormously influential, and these simple words have a long history, I want to give you a brief historical introduction to them.
This language reflects a certain worldview that is deeply linked with the problems in what Silicon Valley produces. But the Valley mindset is so common that, if you have been raised in it, it might lock you into thinking a certain way. So I find it very valuable to consider the critiques of this sentence from a body of people who share different values—academics, artists, and activists—who have thought hard about the problems with such a sentence, or who use the same language from a different lived experience. To me, their critiques form a new U.S. computing counterculture, one that is still young and very critical because it is reacting to the rise of platform and information companies (and the accompanying entrenchments of power) over the last twenty years. What might come next? I identify a way of thinking that infuses a constructionist, worldbuilding spirit into this recent base of critique—a new culture of computing that I call “supercritical.”
Let’s start with a tour of Silicon Valley computing culture.
“We build human-centered tools that scale”
First, Silicon Valley technologists really do sound like this. People are always talking about “building” things: software systems, new hardware, apps, products, tools. For example, a recent essay by a prominent venture capitalist argues that the bipartisan solution to almost every social ill is to “build” (Andreessen, 2020). Mark Zuckerberg often describes Facebook as a “tool,” e.g. in his 2018 testimony before Congress (Weigel, 2018). Major Valley design firms like IDEO and design institutions like the Stanford d.school tout their “human-centered” design approach prominently on their webpages. Sam Altman, the head of Y Combinator (a major venture capital firm in Silicon Valley), exhorts readers to “scale” themselves and their businesses (Altman, 2019); Zuckerberg is known for his constant use of scale-oriented words like “more” and “grow” and for his references to the orders of magnitude of revenue and users that Facebook engages with (millions, billions, etc.) (Grosser, 2019). Taken at face value, the phrase “human-centered tool that scales” describes a part of Silicon Valley that holds a high valuation, both in the stock market and in the popular imagination: user-facing software systems like Facebook, Twitter, and macOS.
In this language, you can hear the echoes of the old computing counterculture from the 1970s as it rose to prominence and power circa 2010, and the voice of another, more recent, counterculture coming through. For example, “tool” talk can be traced back to the 1970s, to Stewart Brand’s Whole Earth Catalog. The word, back then, was used by a circle of hackers to signal their belief that the personal computer (whose userbase was then limited to a set of super-elite U.S. hobbyists) could be an instrument of personal empowerment, that this “tool” could work social change through play (Turner, 2008; Felsenstein, 2013). “Human” talk, too, has its own fringe flavor. For example, CHI (the Computer-Human Interaction conference) is the flagship conference for research in HCI (Human-Computer Interaction). The field of HCI itself, with its focus on building personal, interactive, and applied systems, used to be a radical fringe compared to the field of AI, which was more focused on military use, theory, and automation (Grudin, 2009). Now human-computer interaction is widely accepted as a discipline in computer science departments (Grudin, 2009). To talk about the “human” in the context of computing was to align yourself with the hippies and rebels of the 1970s. Now that personal computers and the Internet have evolved from luminous possibility to dissonant reality, you’ll see different uses of “human” talk to signal an opposition to the rise of platform companies; for example, the word “humane” in the name of the Center for Humane Technology (founded by a disaffected ex-Google design ethicist), which describes the problem it fights as “human downgrading” (Thompson, 2019). (This use is complicated, of course, by the fact that the platform companies themselves invoke the figure of the “human” when it suits them, e.g. recent interest in “human-in-the-loop machine learning,” as one Amazon Web Services video demonstrates.)
Another phenomenon to note is the use of technical and business metaphors to understand and control a social context. “Scale” is such a lens on the world. The question that engineers ask themselves all the time, and are trained to ask from the beginning of an engineering education, is “How does it scale?”, because developing code means working with material at scales beyond what one person can perceive: a computer that can do something a million times a second, processing information gathered from millions of people. So “scale” is an idea that you can find in actual lines of code. When tech people talk about “scaling,” they always mean “scaling up,” never “scaling down,” and the implicit question is “What happens when you add more zeros to that number?”, no matter whether the number describes a quantity in a technical context (e.g. rows in a database) or a social one (e.g. number of users). “Scale” also has the flavor of the business phrase “economies of scale” (Tsing, 2012): scaling up is better because it means more profit. Finally, I think, but can’t prove, that “scale” may have emerged from a 1970s U.S. utopian wonder at the possibility of a global network: “What if the personal computer could scale to cover the globe? What might be possible if we could easily, without putting in additional effort, connect billions of people?”
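To make that reflex concrete, here is a minimal, hypothetical sketch (the data, function names, and sizes are my own, invented for illustration rather than drawn from any real codebase) of how “How does it scale?” shows up in actual lines of code: one way of finding a user does more work for every zero you add to the user count, while the other barely notices.

```python
import random
import time

def find_user_linear(users, target_id):
    """Scan every record: the work grows in proportion to the number of users."""
    for user in users:
        if user["id"] == target_id:
            return user
    return None

def find_user_indexed(index, target_id):
    """Consult a precomputed index: the work stays roughly constant as users grow."""
    return index.get(target_id)

# "What happens when you add more zeros to that number?"
for n in (1_000, 100_000, 1_000_000):
    users = [{"id": i, "name": f"user{i}"} for i in range(n)]
    index = {user["id"]: user for user in users}
    target_id = random.randrange(n)

    start = time.perf_counter()
    find_user_linear(users, target_id)
    linear_seconds = time.perf_counter() - start

    start = time.perf_counter()
    find_user_indexed(index, target_id)
    indexed_seconds = time.perf_counter() - start

    print(f"n={n:>9,}  linear scan: {linear_seconds:.6f}s  indexed lookup: {indexed_seconds:.6f}s")
```

An engineering education rewards the second version, the one whose cost barely changes as the zeros pile up, whether the number being counted is rows in a database or people.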
In sum, if you hear someone say a sentence like “We build human-centered tools that scale,” you can make a pretty informed guess at their socialization: they were part of a mainstream U.S. technologist community from about 1970 to 2020, they received a U.S. engineering education that emphasizes technical prowess, and they value unrestrained growth.
Now let’s talk about how this language can be deconstructed.
“We build human-centered tools that scale”
As I summarized earlier, I believe this language reflects the capture of 1970s U.S. computing counterculture into the private U.S. tech monopolies of the last twenty years. My informal impression (as someone who has worked in tech since 2013) is that the rise of these monopolies reflects several technical trends circa 2000: lots of people getting personal computing devices that can connect to a global Internet, combined with the realization—first at Google in 2002 (Zuboff, 2020)—that companies could collect mass amounts of user data, plus the unexpected success of machine learning techniques in finding patterns in this data. (I’m only going to consider the private sector in this discussion of trends.)
My experience is that popular critique (that is, technocritique that is loud, sustained, and prescient enough to leave its specialized academic context and enter circles of mainstream tech, arts, and policy) has just caught up with the technical leaps and bounds of 2000-2020. One type of recent popular U.S. technocritique seems to come from a more diverse set of scholars, including Ruha Benjamin, Simone Browne, Shoshana Zuboff, Virginia Eubanks, Sasha Costanza-Chock, Cathy O’Neil, and Safiya Umoja Noble, who are reacting to the rise of U.S. platform companies, data collection, and deployment of algorithmic decision systems.
I’ll summarize some of their critiques of this kind of Silicon Valley language and culture. First, let’s return to the sentence itself: “We build human-centered tools that scale.” To situate this sentence, I would ask: How does this language look through the lenses of power, capital, history, and identity? What is the speaker’s reference point? What, concretely, is the speaker’s subject? How is the speaker situated in time and place? What are the unsaid, shared assumptions that the speaker is making? Let’s go word by word.
Engineers tend to focus on building for its own sake, because that is what they have been trained to do, and that is the way they (we) see the world. I believe U.S. engineering programs rarely teach students to consider the social consequences of building, which results in companies like Clearview AI, a facial recognition startup that recently faced a wave of popular and legal backlash (Alba, 2020). New critique, then, hits back by proposing to unbuild, e.g. through bans and refusal. For example, several nonprofits and academics have called for blanket bans on the use of facial recognition by law enforcement, a technology that one historian dubs the “plutonium of AI” (Stark, 2019) and that has already been banned in several U.S. cities (Jarmanning, 2020). Other scholars point to a “second wave of algorithmic accountability” focused on banning or severely curtailing existing AI systems (Pasquale, 2019).
“Human” talk is easy to see through. Applying the lens of capital shows that phrases like “human-centered design” elide the distinctions between the communities that are overserved, underserved, and doing the serving. For example, as Ruha Benjamin writes, “In ‘human-centered design’. . . as we think about coded inequity and discriminatory design, it is vital to question which humans are prioritized in the process. Practically speaking, the human paying the designer’s bill is the one most likely prioritized” (Benjamin, 2019). Thinking clearly about such universalist language means revising it to reflect a concrete reality. Given the reality of digital gig work, in which the workers are usually people of color outside the United States, such as the 36% of Amazon Mechanical Turk workers who were based in India as of 2010 (Ross et al., 2010), what effect does it have on the speaker and listener to replace a phrase like “human-in-the-loop” with a phrase like “Deepa Patel-in-the-loop”?
Moreover, appeals to “humanity” can be understood by analogy to political language, which is already widely lampooned for its emptiness (Orwell, 1946). Such appeals carry no more weight than the old rhetorical cry of “Think of the children!” Who would dare argue against “humanity,” “the human,” and the “humane”? The feel-good language of the “human-centered” system uses small moments of individual comfort—minor feelings of affective satisfaction, like the “oh, ah” pop-click of a slot machine—to wallpaper over the sometimes extractive, coercive workings of a technical system. For example, one tenet of human-centered design is to “delight” the user, not to trouble them (Fessenden, 2017).
As for “tool” talk: if you apply a feminist lens to the lineage of this word, particularly the history of the Whole Earth Catalog that massively influenced the 1970s U.S. computing counterculture later absorbed into “establishment tech” (Wiener, 2018), you could sum it up as “boys and their toys.” Taking as its cover a view of Earth from space, the catalog adopts the omnipotent “god view” so common in male-dominated technological societies (Haraway, 1988). Stewart Brand, the catalog’s creator, underlines this point on its inner cover with the inscription “We are as gods, and might as well get good at it.” Lee Felsenstein (a member of Brand’s inner circle and one of the earliest personal computer developers) explicitly frames tools as toys, saying “If work is to become play, then tools must become toys” (Felsenstein, 2013).
If we accept the metaphor of the tool, we must also consider different subjective understandings of the word “tool” to arrive at a fuller understanding. Tools carry the histories of their builders and users—the histories that shape futures. As a simple example, consider a hammer made with an extra-large handle, so that only people with large hands can hold it. Which groups of people will be better at wielding this hammer? Which would be worse at it? Simone Browne writes of the racialized history of a tool as simple as a candle: the Lantern Laws of eighteenth-century New York decreed that Black, mixed-race, and Indigenous enslaved people had to illuminate themselves at night, and that if they did not, they would be beaten (Browne, 2015). This tool of self-surveillance doesn’t require computers or big data; it is just a candle. Ruha Benjamin cites the French West Indian philosopher Frantz Fanon, who in 1952 recognized how a tool might be used for oppression: “I, the [hu]man of color, want only this: That the tool never possess the [hu]man” (Benjamin, 2019). Audre Lorde, an African-American writer and organizer, writes that “The master’s tools will never dismantle the master’s house” (Lorde, 1996). These writers use the word “tool” to reflect another body of lived experiences, that of oppression and injustice, one that differs sharply from the desire of those influential early U.S. hackers to use tools for playful self-empowerment.
The problem with “scale” (as it is used in the sentence) is that it is, again, a lens that structures the way the speaker sees the world. Scaling up values unrestrained growth (often for the sake of profit) over community. What scales? Families don’t scale. (How well would your family picnic go with 10,000x more parents or kids?) But cafeterias, call centers, and jails all scale, because they isolate workers from each other. In fact, the first scalable construct was the European sugarcane plantation (Tsing, 2012). The idea of scale itself, originating in “economies of scale,” is built on breaking relationships to create isolated units, historically by oppressing certain groups, especially Black people. Diverse, transformative relationships don’t scale.
“We build human-centered tools that scale”
This sentence doesn’t do well under critique. In many contexts, the speaker means no harm, but at its worst, this sentence flattens power dynamics and history and different identities into a vacuous sameness. It floats in an unsituated ether of information, devoid of time, place, and relationships. It speaks from a very narrow set of lived experiences. In fact, a “human-centered tool that scales” could easily describe an electronic slot machine or a jail anklet, both of which have been metaphors applied to technology that Silicon Valley develops. These metaphors serve as a shorthand for the reality of pervasive, targeted surveillance that such technology enables (Crawford & Joler, 2018). Things are bad. Now what? Critique, it seems, does not say.
For a new computing counterculture to emerge, I think it needs to build on top of critique. Since a counterculture needs to speak from a point of view different from the dominant culture’s, it would need to develop language that is situated in different communities’ lived experiences and their different values. Its members might create new language, or use old language in new ways, to express their existing values more clearly. That would lay the foundation for them to express what they believe to be possible and valuable in the future. If this language both heeds the “deconstructionist” language of critique and takes a “constructionist” approach, creating new frames and primitives and futures, I would call it “supercritical.”
Where critique tears down ideas and then falls into quiescence (Latour, 2004), a language and culture of “supercritique” would be like a particle fired into an atomic pile of supercritical mass: it would generate more ideas than it destroys. But transformative relationships are as important as growth: like a dynamical system that reaches a critical point and then displays unpredictable, emergent behavior, a language of “supercritique” would also emphasize “critical connections, not critical mass” (brown, 2017). The idea of “supercritical computing” also continues a conversation about “critical technical practice” that other U.S. technologists have started, particularly Philip Agre, D. Fox Harrell, and Phoebe Sengers (Agre, 1997). (Their thinking largely predated the frame shifts circa 2010; in Agre’s time, for example, “good old fashioned AI” was still cutting-edge computer science.)
I think one promising direction for “supercritical computing” is to create a culture that values diverse, transformative relationships. For example, following adrienne maree brown’s lead, speaking not about building “tools” but about maintaining “ecosystems” would emphasize an awareness of a delicate balance of relationships in a system that needs to be continually cared for. To speak of “scale” imposes a lens of growth and interchangeability on the world; to speak of what’s “nonscalable” emphasizes the social relationships that scale breaks, as well as the social relations between scalable parts and nonscalable parts. Anna Tsing coined the word “nonsoel” to express this idea: just as a pixel in a digital image remains “uniform, separate, and autonomous,” so a “nonsoel” is an element of the social landscape removed from formative social relationships, a “nonsocial landscape element” (Tsing, 2012). What kind of computing culture might arise if, from the beginning of a computing education, students were taught to recognize and discuss nonsoels in their environments, to work on problems involving both scalability theory (e.g. standard asymptotic analysis of algorithms) and nonscalability theory?
I think countercultures find their success in centering what is impossible or invisible in the current dominant culture. Fifty years ago, widespread personal computers and a global information network seemed quite impossible, a possibility invisible until some U.S. hackers started to invent the language of “tools” and “scale” to make it tangible. Let’s try shifting the frame again. For example, a new practice of computing might not involve computing as we know it. The computers we have may be wrong because they work on information in a way that rewards constantly gathering new information; the computing expertise of the dominant Silicon Valley culture may be blinkered by the standard U.S. engineering education and its economic incentives. What are we missing, as practitioners, because we limit ourselves to the material and techniques we inherited? Finally, I don’t think there will ever be just one counterculture. I think there are (and ought to be) many of them.
What futures might come to be? I admire the work of community organizers who are already putting together supercritical language by and for their communities. For example, the Detroit Digital Justice Coalition’s Principles center the values of “access,” “participation,” “common ownership,” and “healthy communities” (Detroit Digital Justice Coalition, 2019). The members of this coalition wrote these principles by listening: by interviewing community members who were already using technology for organizing or grassroots economic development. Reading these principles helps me imagine a future where technologists and community members work together to co-design technologies and policies that serve real community needs, creating infrastructure that is governed by the community it serves. In fifty years, I hope this kind of counterculture has gone mainstream.
REFERENCES
AGRE, P. E. (1997). Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI. Retrieved August 14, 2020, from https://pages.gseis.ucla.edu/faculty/agre/critical.html
ALBA, D. (2020, May 28). A.C.L.U. Accuses Clearview AI of Privacy ‘Nightmare Scenario.’ Retrieved August 14, 2020, from https://www.nytimes.com/2020/05/28/technology/clearview-ai-privacy-lawsuit.html
ALTMAN, S. (2019, January 24). How to Be Successful. Retrieved August 14, 2020, from https://blog.samaltman.com/how-to-be-successful
ANDREESSEN, M. (2020, April 18). It’s Time to Build. Retrieved August 14, 2020, from https://a16z.com/2020/04/18/its-time-to-build/
BENJAMIN, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Cambridge, UK, MA: Polity.
BROWN, A. M. (2017). Emergent strategy: Shaping change, changing worlds. Chico, CA: AK Press.
BROWNE, S. (2015). Dark matters: on the surveillance of Blackness. Durham, NC: Duke University Press.
CRAWFORD, K., & JOLER, V. (2018, September 7). Anatomy of an AI System. Retrieved August 14, 2020, from https://anatomyof.ai/
DETROIT DIGITAL JUSTICE COALITION. (2019, February 25). Principles to Guide Our Work. Retrieved August 14, 2020, from https://www.alliedmedia.org/ddjc/principles
FELSENSTEIN, L. (2013, January). Introduction. Retrieved August 14, 2020, from http://www.leefelsenstein.com/
FESSENDEN, T. (2017, March 5). A Theory of User Delight: Why Usability Is the Foundation for Delightful Experiences. Retrieved August 14, 2020, from https://www.nngroup.com/articles/theory-user-delight/
GROSSER, B. (2019, May 2). Order of Magnitude. Retrieved August 14, 2020, from https://bengrosser.com/projects/order-of-magnitude/
GRUDIN, J. (2009). AI and HCI: Two Fields Divided by a Common Focus. AI Magazine, 30(4), 48. doi:10.1609/aimag.v30i4.2271
HARAWAY, D. (1988). Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective. Feminist Studies, 14(3), 575. doi:10.2307/3178066
JARMANNING, A. (2020, June 24). Boston Bans Use Of Facial Recognition Technology. It’s The 2nd-Largest City To Do So. Retrieved August 14, 2020, from https://www.wbur.org/news/2020/06/23/boston-facial-recognition-ban
LATOUR, B. (2004). Why Has Critique Run out of Steam? From Matters of Fact to Matters of Concern. Critical Inquiry, 30(2), 225. doi:10.2307/1344358
LORDE, A. (1996). Sister outsider: Essays and speeches. Trumansburg, NY: Crossing Press.
ORWELL, G. (1946, April). Politics and the English Language. Retrieved August 14, 2020, from https://www.orwell.ru/library/essays/politics/english/e_polit
PASQUALE, F. (2019, November 25). The Second Wave of Algorithmic Accountability. Retrieved August 14, 2020, from https://lpeproject.org/blog/the-second-wave-of-algorithmic-accountability/
Practical Human-in-the-Loop Machine Learning—Amazon Web Services. (2018, May 24). Retrieved August 14, 2020, from https://www.youtube.com/watch?v=rJw7u8qyDf4
ROSS, J., IRANI, L., SILBERMAN, M. S., ZALDIVAR, A., & TOMLINSON, B. (2010). Who Are the Crowdworkers? Proceedings of the 28th International Conference on Human Factors in Computing Systems, Extended Abstracts (CHI EA ’10). doi:10.1145/1753846.1753873
STARK, L. (2019). Facial Recognition is the Plutonium of AI. XRDS: Crossroads, The ACM Magazine for Students, 25(3), 50-55. doi:10.1145/3313129
THOMPSON, N. (2019, April 23). Tristan Harris: Tech Is ‘Downgrading Humans.’ It’s Time to Fight Back. Retrieved August 14, 2020, from https://www.wired.com/story/tristan-harris-tech-is-downgrading-humans-time-to-fight-back/
TSING, A. L. (2012). On Nonscalability: The Living World Is Not Amenable to Precision-Nested Scales. Common Knowledge, 18(3), 505-524. doi:10.1215/0961754x-1630424
TURNER, F. (2008). From counterculture to cyberculture: Stewart Brand, the Whole Earth network, and the rise of digital utopianism. Chicago, IL: The University of Chicago Press.
WEIGEL, M. (2018, April 12). Silicon Valley’s Sixty-Year Love Affair with the Word “Tool”. Retrieved August 14, 2020, from https://www.newyorker.com/tech/annals-of-technology/silicon-valleys-sixty-year-love-affair-with-the-word-tool
What is Human-Centered Design? (n.d.). Retrieved August 14, 2020, from https://www.designkit.org/human-centered-design
WHEARLEY, N. (2017, February 18). History & Approach. Retrieved August 14, 2020, from https://dschool.stanford.edu/fellows-in-residence/project-fellowship-history-approach
WIENER, A. (2018, November 16). The Complicated Legacy of Stewart Brand’s “Whole Earth Catalog.” Retrieved August 14, 2020, from https://www.newyorker.com/news/letter-from-silicon-valley/the-complicated-legacy-of-stewart-brands-whole-earth-catalog
ZUBOFF, S. (2020). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York, NY: PublicAffairs.