Digital complicity: The intent and embrace of problematic tech

Joseph Reagle

2018

STATUS: Rough Draft; for early Feedback Only

This draft has not yet received any feedback. It is rough and will benefit from suggestions about thesis, themes, sources, and coherence, in addition to sentence level issues and typos. I especially welcome feedback on high level issues:

Thesis: Understanding the relevance of complicity to those who create or embrace problematic digital technology is difficult but important.

Abstract: Facebook’s propagation of fake news and bio-hackers’ “chipping” themselves are examples of possible complicity in the creation and embrace of problematic technology. I extend Lepora and Goodin’s (2013) framework for complicity to address this concern. To address intent, I adapt Robert Merton’s (1936) classic essay “The Unanticipated Consequences of Purposive Social Action.” On the embrace of problematic technology, I make use of Margaret Olivia Little’s (1998) notion of cultural complicity. When digital complicity is likely, I conclude with how people have opposed, limited, or (at least) disclaimed the harmful uses of the technology they create or embrace.


Introduction

To be complicit is to be an accessory to harm. In 2017, globalism, sexist culture, and Trumpian nepotism prompted Dictionary.com to name complicit its word of the year.

Complicity is also a concern in the digital realm. Is Facebook complicit in spreading the fake news that roiled the 2016 election? Are bio-hackers who inject themselves with implants for the sake of convenience complicit in the more oppressive tracking which will follow? Are those who develop facial recognition technology and databases complicit when these systems are used by oppressive regimes?

These questions are about problematic technology: tech-related artifacts, ideology, and techniques that have potential to be broadly harmful. I frame these questions relative to Chiara Lepora and Robert Goodin’s (2013) framework for assessing complicit blameworthiness. From this, I distill two attributes most salient to the digital realm: creators’ intent toward and enthusiasts’ embrace of problematic technology. Facebook didn’t intend to distribute propaganda. Bio-hackers give little thought to how their experiments presage dystopic scenarios.

Although intention and eager adoption are important concerns, they are not well understood in the digital context. To understand the role of intent, I extend Robert Merton’s (1936) classic essay “The Unanticipated Consequences of Purposive Social Action.” On the embrace of problematic technology, I make use of Margaret Olivia Little’s (1998) notion of cultural complicity.

I do not presume my assessments of digital complicity are conclusive. Whether, for example, bio-hackers are complicit in a dystopian future is a defeasible judgment. Rather, I intend to illustrate the concerns and concepts that we must engage when considering such questions. And when digital complicity is likely, I conclude with how people have opposed, limited, or (at least) disclaimed the harmful uses of technology they create or embrace.

A framework for complicity

Who, beyond the thief, is obliged to compensate the victim of that theft? Thomas Aquinas (1274/1917) responded that we must consider those who contribute “by command, by counsel, by consent, by flattery, by receiving, by participation, by silence, by not preventing, [and] by not denouncing” (II-II, Q. 62, Art. 7). Aquinas’s discussion, though brief, touched on the extent of the accomplice’s contribution to the harm, their obligations, and the difference between action and inaction. A frail person who does not intervene in a mugging is different from an official who condones the graft he is obliged to prevent.

Since Aquinas, moral and legal philosophers have continued to discuss complicity, offering different terms, definitions, rubrics, value judgments, and case studies. John Gardner (2006) focused on causality and degree of difference-making in his analysis, as did Christopher Kutz (2000). H. D. Lewis (1948) maintained an individualistic, skeptical, and minimalist approach to collective culpability; Karl Jaspers (2000) did the opposite. Gregory Mellema (2016), in Complicity and Moral Accountability, summarized this literature well, in addition to offering his own theoretical distinctions between enabling versus facilitating harm and shared versus collective responsibility.

For the sake of concision, I won’t attempt a gloss of the literature. Instead, I make use of Lepora and Goodin’s (2013) exhaustive framework. Though it appears complicated—and my summary should be used as a reference rather than something that must be comprehended before continuing—it is the most comprehensive and cohesive framing of a difficult topic.

Lepora and Goodin’s work was initially motivated by the quandaries faced by aid workers. Is an NGO that imports food complicit in supporting the warlord who purloins much of it? This real-world case is supplemented by hypothetical scenarios, including a bank robbery and an assassination.

In their analysis, Lepora and Goodin distinguished between three groups of agents: (a) co-principals of a wrongdoing, (b) contributors who are complicit in it, and (c) non-contributors who have no causal relation to the harm. Whether agents are blameworthy (they are not, for instance, if coerced) or excusable (if participating in a lesser evil) are additional and independent distinctions.

Figure 1: Lepora and Goodin’s (2013) types of agents.

Participating Co-principals (constitutive):
  • full joint (identical plan & action)
  • co-operation (shared plan & different actions)
  • conspiracy (shared plan)
  • collusion (plan in secret)

Complicit Contributors (causally necessary):
  • complicity simpliciter
  • complicity by collaboration
  • complicity by connivance
  • complicity by condoning
  • complicity by consorting
  • complicity by contiguity

Blameless Non-Contributors (no causal connection):
  • connivance
  • condoning
  • consorting
  • contiguity

Co-principals are active participants in the planning and execution of a wrongdoing; their actions constitute the harm. In most cases, the co-principals are co-operators: they each take the plan as their own and partake in its actions, even if in different but interdependent ways. Members of a bank robbery gang are co-operating co-principals, including those holding the guns, the lookout, and the getaway driver.

Contributors, on the other hand, are causally necessary to the harm but not constitutive; their complicity “necessarily involves committing an act that potentially contributes to the wrongdoing of others in some causal way” (Lepora & Goodin, 2013, p. 6). They contribute to but do not join in the wrongdoing. This includes the generic complicity simpliciter (without qualification), such as the bank teller who knowingly leaves the bank vault open. There are also more specific (qualified) types of complicit contribution, including complicity by collaboration (going along with a plan), by connivance (tacitly assenting), by condoning (granting forgiveness), by consorting (close social distance), and by contiguity (close physical distance). The first of this class, the collaborator, goes along with a plan that is not their own. A teller who is forced to open that vault at gunpoint is a complicit collaborator—though morally exonerable in the larger framework because their action was coerced.

Of course, a given scenario can have multiple harms in which an agent plays multiple roles. Someone who backed out of a heist after helping plan it is a co-principal in the conspiracy and a complicit contributor in its enactment but not a co-principal in the robbery itself.

Connivance, condoning, consorting, and contiguity can also be the non-causal acts of non-contributors. Non-contributory connivance is tacitly assenting to a harm, like not reporting a crime. However, if criminals know they won’t be reported and consequently commit another crime, this connivance becomes complicit connivance. The same holds true for condoning, consorting, and contiguity. These terms describe people associated with wrongdoers; when their association becomes causal, encouraging harm, they move from this third group of non-contributors to the second group of complicit contributors.

Within these roles we can see various “dimensions of difference” (Lepora & Goodin, 2013, ch. 4). Centrality speaks to the extent of contribution: how much and how essential. Lepora and Goodin use counterfactual thinking about alternative worlds for this. An essential contribution is necessary to the harm in “every suitably nearby possible world.” For example, someone who smuggles a sniper rifle past security to an assassin is essential to the murder. A potentially essential contribution is a “necessary condition of the wrong occurring, along some (but not all) possible paths by which the wrong might occur” (p. 62). The sniper who successfully assassinated the target was essential; the back-up assassin was potentially so because it is conceivable that some path would have led to the primary assassin failing and the back-up assassin succeeding.

Proximity speaks to the closeness to the harm in the causal chain; the last contribution to a wrongdoing has a greater weight than an earlier one. Firing a rifle in an assassination is more proximate to the harm than procuring the rifle. The reversibility of the contribution is a factor, as is its temporality (e.g., condoning happens after the primary wrongdoing). There is also a person’s mental stance toward planning the harm. The person might be the plan-maker or a plan-taker. If the latter, there is the degree of shared purpose and responsiveness to the plan, from eagerly adopting it as their own, to otherwise accepting it, to merely complying.

These dimensions are inputs to factors within Lepora and Goodin’s framework for assessment. Complicit blameworthiness is a function of the badness, responsibility, contribution, and shared purpose factors: CB = (RF*BF*CF) + (RF*SP). An implication of this equation is that even though bank tellers are complicit when coerced, they are not blameworthy; if RF=0, so is CB. Another implication is that you do not need a shared purpose (SP) with the wrongdoer to be blameworthy. Even when SP=0, a contributor might have non-zero factors of responsibility (RF), badness (BF), and contribution (CF), resulting in a positive CB.

Figure 2: Lepora and Goodin’s (2013) dimensions of complicit blameworthiness
CB = (RF*BF*CF) + (RF*SP).
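To make the arithmetic concrete, here is a minimal Python sketch of the formula. It is my own illustration rather than Lepora and Goodin’s notation, and it assumes each factor is scaled between 0 and 1.

    def complicit_blameworthiness(rf, bf, cf, sp):
        """Lepora and Goodin's formula: CB = (RF*BF*CF) + (RF*SP).
        Assumes each factor is scaled between 0 and 1 (an assumption of this sketch)."""
        return (rf * bf * cf) + (rf * sp)

    # A coerced bank teller: responsibility (RF) is zero, so blameworthiness is zero,
    # however bad the robbery (BF) or essential the contribution (CF).
    print(complicit_blameworthiness(rf=0.0, bf=1.0, cf=1.0, sp=0.0))  # 0.0

    # A knowing, voluntary contributor with no shared purpose (SP=0) can still be blameworthy.
    print(complicit_blameworthiness(rf=1.0, bf=0.8, cf=0.5, sp=0.0))  # 0.4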

This equation also implies four categories of secondary agents: (1) those who are not complicit because they have neither knowledge nor contribution, (2) those who are complicit but without blame because of coercion, (3) those who are complicit and somewhat to blame, and (4) those who bear maximal blame.

Figure 3: Lepora and Goodin’s (2013) blameworthiness.

  1. Secondary agents are not complicit in the principal wrongdoing if they had no knowledge of their contribution or of its wrongness, or if they did not contribute.
Kc=0 or Kw=0 or CF=0
  2. Knowing and contributing agents are complicit but bear no blame for contributing to the principal wrongdoing if their contribution was not voluntary.
Kc=1 and Kw=1 and CF>0 but V=0
  3. Agents are complicit and bear more or less blame if they knew but only partially contributed, consented, or shared the purpose.
Kc=1 and Kw=1 but (0<CF<1 or 0<V<1 or 0<SP<max)
  4. Agents are complicit and bear maximal blame if they knew, volunteered, made an essential contribution, and shared the purpose of the wrongdoing.
Kc=1 and Kw=1 and V=1 and CF=1 and SP=max
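The figure’s four categories can likewise be sketched as a simple classifier. Again, this is only an illustration: the variable names follow the figure, and the assumption that shared purpose maxes out at 1.0 is mine.

    def classify_secondary_agent(kc, kw, v, cf, sp, sp_max=1.0):
        """Sort a secondary agent into the four categories of Figure 3.
        kc: knowledge of contribution, kw: knowledge of wrongness,
        v: voluntariness, cf: contribution factor, sp: shared purpose."""
        if kc == 0 or kw == 0 or cf == 0:
            return "not complicit"                  # category 1
        if v == 0:
            return "complicit but blameless"        # category 2: involuntary
        if v == 1 and cf == 1 and sp == sp_max:
            return "complicit, maximal blame"       # category 4
        return "complicit, more or less blame"      # category 3

    print(classify_secondary_agent(kc=1, kw=1, v=0, cf=1, sp=0))      # coerced teller: blameless
    print(classify_secondary_agent(kc=1, kw=1, v=1, cf=0.5, sp=0.2))  # partial blame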

Finally, Lepora and Goodin, true to their concern about humanitarian efforts, acknowledged that blameworthy complicity in one harm (provisioning a warlord) can be the lesser evil of another harm (letting others starve). Nonetheless, we should still recognize the lesser evil as an evil: “We think that is a better way of explicating the morality of the situation than to deny that you are doing anything wrong at all by contributing to wrongdoing, on the grounds that your own intentions are pure” (Lepora & Goodin, 2013, p. 96).

Facebook’s intent

Whereas complicit was Dictionary.com’s word of 2017, fake news took the honor at the Collins Dictionary. Existing concerns about social media’s effects on users’ identities, behavior, and relationships were joined by the harm of misinformation. In 2017, two former Facebook executives confessed to feeling partly responsible.

During an on-stage interview, Sean Parker, Facebook’s former president, confessed that he was presently “something of a conscientious objector” to social media. He was now like those who, in Facebook’s early days, told him they valued their real-life presence and interactions too much to waste time on social media. The younger Parker would glibly respond: “We’ll get you eventually.”

I don’t know if I really understood the consequences of what I was saying, because [of] the unintended consequences of a network when it grows to a billion or two billion people and … it literally changes your relationship with society, with each other …

To consume as much of its users’ time and attention as possible, Parker claimed, Facebook created “a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.” Even if they did not fully appreciate the consequences of what they were doing, they knew what they were doing: “The inventors, creators—it’s me, it’s Mark [Zuckerberg], it’s Kevin Systrom on Instagram, it’s all of these people—understood this consciously. And we did it anyway” (Parker quoted in Allen, 2018). In the civic realm, Parker once believed the “Internet could be our salvation,” but, given recent events, he now criticizes such naiveté and advocates for increased civic participation (Walters, 2017).

Similarly, during a public interview at the Stanford Graduate School of Business, Chamath Palihapitiya, former vice-president of user growth at Facebook, said he felt “tremendous guilt” about his role at Facebook. The platform’s exploitation of “short-term dopamine driven feedback loops” has led to a proliferation of incivility and misinformation.

Even though we feigned this whole line of “there probably aren’t any really bad unintended consequences,” I think in the back deep recesses of our minds we kind of knew something bad could happen. But I think the way we defined it was not like this. It [is] literally at a point now where I think we have created tools that are ripping apart the social fabric of how society works. (Palihapitiya, 2017, min. 21:38)

In these excerpts we see two harms: (1) stoking users’ obsessive engagement in pursuit of growth and advertising revenue, and (2) distorting civic culture via polarization and propaganda. Facebook is a participating co-principal in the first harm. My present concern, though, is whether Facebook is a complicit contributor in the second harm of spreading misinformation.

The spread of fake news was a significant harm. Even Mark Zuckerberg agrees fake news is bad and concedes that the platform did not do enough to prevent its spread. However, though Facebook enabled the harm, it did not share the purpose of the propagandists nor did it have a role in their planning. The damage that was done was irreversible, though the platform is now attempting reform. In Lepora and Goodin’s terms, the badness factor was high, the contribution factor was less so. What, then, of responsibility? This is a central question of digital complicity.

In Lepora and Goodin’s framework, responsibility is contingent on voluntariness and knowledge of wrongness and contribution. Facebook wasn’t coerced, and Palihapitiya’s and Parker’s comments reveal that even if initially unaware of the problem, they suspected there would be unintended consequences. Palihapitiya knew “something bad could happen,” but suppressed the idea; when it was considered, the imagined extent of the problem “was not like this.” The extent of their knowledge of wrongness was naive and their knowledge of contribution was suppressed.

It is not uncommon for technologies to have unintended consequences (Tenner, 1996). Robert Merton’s (1936, p. 901) germinal—but imprecise—essay on “The Unanticipated Consequences of Purposive Social Action” defines unanticipated consequences and their causes. Though Merton and others have conflated unanticipated and unintended consequences, it is a distinction we should maintain (de Zwart, 2015). Anticipation is the extent to which a consequence is foreseen. According to Merton, some consequences are unknowable, because of the “interplay of forces and circumstances which are so complex and numerous that prediction of them is quite beyond our reach,” and others go unforeseen because of ignorance, for which “knowledge could conceivably be obtained” but is not (Merton, 1936, p. 899). Other uses might be anticipated but still be unintended, meaning they are foreseen as possible but not aligned with the creator’s intent. For example, makers of a hammer might not anticipate its claw end being used as a bottle opener. Even so, they might anticipate its use as a weapon, though this is contrary to their intention.

In the case of Facebook’s spread of misinformation, its former executives claimed both unanticipated and unintended consequences. Early on, Facebook didn’t anticipate becoming a platform for propaganda. And even when it appeared to be happening, it was never their intention. It was not as if propaganda was inconceivable (unknowable), only that they were largely ignorant of the possibility.

Merton specifies five causes of unanticipated consequences, four of which are relevant to this case. First, as stated, Facebook was initially ignorant of the harm and of its responsibility in spreading misinformation. Second, this ignorance was compounded by error: misinformation had not been a problem in the past, so Facebook assumed it would not be a problem in the future, even as the platform gained millions and then billions of users. Third, Facebook had an “imperious immediacy of interest” (Merton, 1936, p. 901) in user and advertising growth; little else was given much attention. Fourth, like many social media platforms, Facebook held an early commitment to free speech, a basic value that hindered it from censoring political-seeming speech.

But to give explanations for Facebook’s ignorance is not to excuse it. Indeed, ignorance can be negligent and even purposeful. Legal theorists typically enumerate a handful of elements necessary for negligence, including duty, breach of duty, a causal relationship, and consequent damage (Owen, 2007). Therefore, Lepora and Goodin’s concern with knowledge in assessing responsibility must also include the moral character of any ignorance. If Facebook had a duty to be a responsible media platform, to be knowledgeable about its harms and its contribution to them, it failed, and that failure causally contributed to the spread of propaganda.

Figure 4: An extension of Merton’s (1936) unanticipated consequences.

Based on Palihapitiya’s comments, Facebook was complicit in spreading propaganda. In Lepora and Goodin’s terms, what happened was bad and Facebook causally contributed to the harm. Individuals at Facebook have some stake in this collective culpability. And some, including Palihapitiya, are personally complicit if they were at least potentially essential; that is, there is a possible history in which their behavior contributed to the harm.

The key question is about responsibility. Facebook’s spreading of propaganda was a primary, negative, and substantial consequence of creating a commenting platform. In terms of responsibility, the company allowed this consequence to manifest out of initial ignorance, even if it was never its intention. Even so, this ignorance may have been negligent, and neither negligent nor purposeful ignorance should be absolving of responsibility.

Nothing was coerced, so there is no exoneration of blameworthiness. A possible “lesser evil” exoneration would be for Facebook to claim “if not us, someone else.” In a competitive market, such a conceit must be common: “It’s better that we succeed and do what we can, perhaps in the future, than let the less scrupulous win.” However, such a claim is morally distasteful when winning is also so obviously self-interested. It is easy for technologists to fret over their consciences once they are millionaires. This also presumes a false dichotomy between growth at all costs and failure. Something might have been done; some foresight might have been spent. In any case, this argument for exoneration would be tenuous and, even if it excuses a lesser evil, it is an evil nonetheless.

Life hackers’ embrace

In 2017, the Swedish innovation incubator Epicenter held a well-publicized party at which employees could opt to be injected with an identity implant. After being “chipped,” they would be able to unlock doors and purchase items from vending machines with a wave of a hand. One of the volunteers remarked that she opted for the implant because she was prone to losing her keys and that she wanted to be “part of the future” (Sandra Haglof quoted in Salles, 2017).

Epicenter describes itself as a space for “hackers and technology enthusiasts” to meet and collaborate—and it calls out Tim Ferriss, “life hacker and NYTimes best-selling author,” as a member (Epicenter, 2018). Hackers are known for their facility with systems. Although systems behave in typical ways, they can also be optimized or contravened with hacking, and the practice is not limited to computers. Hacking now encompasses many domains of life and business. We already saw Sean Parker speak of exploiting users’ psychology while at Facebook—an approach now known as growth hacking. Similarly, Epicenter’s life hackers believe that by chipping themselves they improve their lives and have an (augmented) hand in shaping the future.

Yet, a future where such augmentation is commonplace is not necessarily a uniformly good one. This technology is problematic, with potential to be broadly harmful—beyond any personal risk. This makes assessment of digital complicity difficult because it often entails the prospective evaluation of consequences, both good and bad. For example, bio-hacking can be both useful and a cause for concern. A chip in the hand can be convenient, but is it appropriate for children (for their own safety) or those under house arrest (for their own convenience)? Similarly, beginning in 2013, the Consciousness Hacking movement coalesced around the idea of monitoring and manipulating the mind, often by way of devices. Brain-sensing headbands can help enthusiasts learn to meditate by indicating when they are focused or distracted. In China, similar technology is being used to monitor the mental state and emotions of employees (Chen, 2018). The Chinese company puts a positive spin on its product, suggesting worker productivity and happiness might be optimized by increasing breaks when fatigue sets in, but exploitative uses are just as likely.

It’s not difficult to imagine life hacking techniques and technologies implicated in dystopic scenarios. To what extent are those who embrace life hacking, then, complicit? In Foucauldian terms, by embracing technologies of the self, life hackers further technologies of power.

Foucault defined technologies of the self as those that permit individuals to perform “operations on their own bodies and souls, thoughts, conduct, and way of being, so as to transform themselves in order to attain a certain state of happiness, purity, wisdom, perfection, or immortality.” Technologies of power, on the other hand, “determine the conduct of individuals and submit them to certain ends or domination, an objectivizing of the subject” (Foucault, 1982/1997, p. 225). For example, in China, a high school is using facial recognition to monitor students’ alertness, and the government is using biometrics to track ethnic minorities, a capability central to its nascent social credit system (Ma, 2018; Wu, 2018). Whereas technologies of the self are performed by the self for the self’s benefit, technologies of power see the self dominated and objectivized by another. (Foucault additionally used the term biopower to speak of controlling the bodies of populations.)

When considering technologies of power, we can further distinguish between what I will call hard and soft power. Hard power determines the conduct of the individual; soft power shapes their conduct through norms. Although I claim there is movement between technologies of self and power (hard and soft), the difference between the latter two is that of, for example, imprisoning a class of people versus endorsing the norm that they ought not leave home without a chaperone—related to Foucault’s (1977) notion of discipline.

Figure 5: Foucauldian (1977) technologies and complicity.

The complicity concern, again, is that enthusiasts who embrace problematic technology further dystopic ends. Their voluntary embrace of technology eases its (1) coercive imposition (hard power) and (2) normative pressure (soft power).

The first concern of hard power is one of a slippery slope. When enthusiasts voluntarily adopt implants, they contribute to the technology’s development. If they purchase it, they create market demand. When they adopt it from their employer, they test and apply the technology for the employer’s benefit. While this is voluntary, it primes the technology for less voluntary use. Consider the example of behavioral modification. In 2014, life hacker Maneesh Sethi launched the Pavlok wristband, which vibrates or zaps its users to discourage bad habits—such as biting their nails or wasting time on Facebook (Sethi, 2014). This is a technology of the self, invented and embraced by life hackers. Four years later, Amazon patented something similar to alert (vibrate) its warehouse employees when they reach for the wrong item (Ong, 2018). This is a technology of power. Such monitoring and modification might one day happen at Amazon, and, as noted, is already happening in China. The slope between the hacker who happily adopts, the worker who reluctantly accedes, and the person who has little choice might be greased by hackers’ embrace.

In Lepora and Goodin’s terms, the badness factor of this scenario is massive but uncertain: it is contingent on the likelihood of the dystopic scenario. The essentiality of ordinary enthusiasts to the harmful norm is low: it’s hard to imagine a history in which such an individual’s embrace makes a difference—though it is easy to see the effect of life hacking gurus like Tim Ferriss. The responsibility of the enthusiast adopting the technology is also uncertain. Wearing a Pavlok or getting chipped is voluntary, and enthusiasts might have little knowledge of the wrongness of the dystopic possibilities or of their contribution to them. The hacker’s contribution, as an adopter of problematic technology, is minimal, as they are peripheral to the harm. Their shared purpose with oppressive wrongdoers is likely minimal and possibly zero. In most cases, those who embrace problematic technology have less culpability than those who create it. Only when harm is probable is their blameworthiness like that of consumers complicit in labor exploitation (Brock, 2016; Lawford-Smith, 2017).

On the second concern of soft power, a number of critics have noted how individual efforts toward self-improvement amplify the social impetus for others to do the same. Maximizing productivity, for example, can backfire on the individual and further exploitative workplaces (Gregg, 2015; Moore & Robinson, 2015; Penny, 2016). Writing about academics hoping to boost their productivity, Matt Thomas argues that by embracing the ideology, apps, and techniques of the blog ProfHacker, “one essentially becomes complicit in one’s own obsolescence” and that these techniques ignore and amplify “the various structural pressures that make them seem like the way out” (Thomas, 2015, p. 182).

Margaret Olivia Little (1998) refers to the culpability of participating in the exercise of soft power as cultural complicity, which is “when one endorses, promotes, or unduly benefits from norms and practices that are morally suspect.” Little’s concern was how cosmetic surgery can reinforce the norms of whiteness and Barbie Doll femininity. She used crass complicity to characterize those who bolster and personally benefit from harmful norms, such as an exploitative plastic surgeon. I use the term banal complicity to characterize those who add to “the increased pressure others may in fact feel as a result of having surgically ‘improved’ appearance” (p. 173). In the case of cosmetic surgery we can easily see how technologies of the self overlap with disciplinary technologies of power.

In Lepora and Goodin’s terms, the badness factor of problematic norms is significant and certain. With respect to responsibility, cosmetic patients act voluntarily but may be ignorant of the wrongness of their contribution to harmful norms; their centrality and essentiality are very low. Their contribution is minimal, as is their shared purpose in fostering cosmetic insecurity in others. Surgeons, on the other hand, have significant centrality, responsibility, and contribution. This is why Little labels those who bolster and benefit from harmful norms as crassly complicit.

Enthusiasts’ embrace of life hacking can be parsed in a similar way. The productivity hacker, for example, also has minimal responsibility. Such enthusiasts act voluntarily and give little thought to the larger system motivating their efforts. Their centrality and contribution are minimal, as is any shared purpose toward a more hectic and precarious workplace. In this case, there is no crassly complicit surgeon to blame. There are productivity and self-help gurus who are crassly complicit, but their centrality and proximity are more removed than a surgeon’s. Additionally, this scenario is complicated by the fact that the badness factor is equivocal. Improving one’s productivity can be taken to excess and contribute to inimical norms, but not necessarily so. The problem is one of context: what motivates and who benefits from increased productivity? Workers who are motivated by self-efficacy and who benefit from their efforts are different from those driven by precarity for others’ gain.

Responses which limit complicity

Even if the complicity around problematic technology is not clear cut, the potential magnitude of such technology’s harm is massive. Letting a child play with a loaded gun, for example, has a significant chance of resulting in a definite harm that affects the child and dozens of others. It is a probable tragedy at the micro scale. Facial recognition technology has good applications, but it can be abused with society-wide effects. It is a possible nightmare at the macro scale.

Some technologists wish to limit their complicity in these harms. The question, then, is how? Given the spate of news related to digital complicity, I discern a few strategies.

The first category of response is to refuse to contribute to technology which is knowingly being used in problematic applications. Early in 2018, a handful of influential Google engineers refused to build a security system that would allow the company to pursue more military projects. (The Pentagon requires such systems of its contractors for sensitive projects.) A few months later, over four thousand Google employees petitioned their company to cancel an existing contract with the Pentagon. The controversial project developed object recognition technology that would be used by drones—though, purportedly, not for their weapons. Some Googlers even resigned in protest (Bergen, 2018; Conger, 2018; Wakabayashi & Shane, 2018). Similarly, in response to a change in U.S. border policy which increased the separation of migrant families, over three hundred Microsoft employees called for their company to cancel any contracts related to U.S. Immigration and Customs Enforcement (ICE) (Lecher, 2018).

Beyond these specific cases, such distancing has also been generalized. Over twenty-eight hundred technologists have pledged to “never again” build databases that allow governments to “target individuals based on race, religion or national origin.” The “never again” references IBM’s role in providing tabulating equipment used in genocide by the Nazi regime (Lien & Etehad, 2016; “Never Again,” 2017). This pledge also entails minimizing the collection of related data (even if not originally intended toward oppressive ends), destroying existing “high-risk data sets and backups,” and resigning if need be. Even more generally, some programmers have developed a Hippocratic-like oath for their profession by which individuals pledge to only undertake “honest and moral work” and to “consider the possible consequences of my code and actions” (Johnstone, 2016).

Another category of response is to clarify one’s intentions about technology and its applications. Twenty years ago, when I worked at the World Wide Web Consortium, I co-authored a document entitled “Statement on the Intent and Use of PICS: Using PICS Well” (Reagle & Weitzner, 1998). The Platform for Internet Content Selection (PICS) was a content labeling system intended as an alternative to government censorship. Sites could label their content (choosing a rating system similar to that used for video games) and parents could configure the family computer as they saw fit. However, some critics feared that deploying a voluntary system could have unintentional dystopic ends (Lessig, 1997). This statement made it clear that the designers intended the technology to be used at content creators’ and consumers’ discretion with full transparency about its use. (PICS ended up having little effect beyond Reno v. American Civil Liberties Union; a concurring opinion cited it as evidence that technical options were available and preferable to government censorship (Brekke, 1996).)

Attempts at clarification also followed the efforts of Google and Microsoft employees. Both companies spoke to what type of work they would undertake. Google announced it would not renew its drone imaging contract with the Pentagon. Additionally, its CEO, Sundar Pichai, publicly wrote that Google will not apply its artificial intelligence (AI) work where it is likely to cause harm, be used within weapons, violate international surveillance norms, or otherwise violate international law and human rights (Pichai, 2018). Microsoft CEO Satya Nadella’s internal memo was less decisive, clarifying that the company’s contracts with ICE were related to legacy office applications and not “any projects related to separating children from their families at the border” (Nadella quoted in Warren, 2018). As journalists and a subsequent petition noted, this did not address earlier boasts by Microsoft that it was supporting ICE’s efforts in voice and facial recognition (“An open letter to Microsoft drop your $19.4 million ice tech contract,” 2018).

With respect to concerns about those who embrace problematic technology, there aren’t many strong examples of distancing from complicity—which is not surprising given it is a more diffuse concern. An obvious strategy is to no longer embrace a technology once you determine it is harmful. Consumer boycotts are commonplace, but none of the scenarios I’ve discussed prompted such action, though some outside of Microsoft said they would withdraw from Microsoft-related events in solidarity with the ICE protest (Thorp, 2018). Beyond the economic effect of such distancing, which I suspect is minimal, protests have a more important role: by making a concern a topic of public discussion, protesters remove the potential for others to claim they were ignorant of their responsibility.

With respect to the embrace of norms among life hackers, it’s not difficult to find disenchanted life hackers who distance themselves from, or even renounce, their earlier embrace, as did productivity hacker Merlin Mann and digital minimalist Everett Bogue (Reagle, 2019, pp. TODO; Thomas, 2015, p. 74). More generally, the slow movement—applied to food, work, and even professing (Berg & Seeber, 2017)—can be seen as a pushback against norms of efficiency that overlap with the optimizing inclination of life hackers.

Conclusion

An assessment of digital complicity requires a synthesis of existing frameworks and concepts. I’ve extended Lepora and Goodin’s framework so as to address the creation and embrace of problematic technology. Although this adds clarity in understanding digital scenarios, assessment is not straightforward.

First, the assessment of badness is complicated in the digital realm. This technology has uncertain and equivocal consequences: both good and bad, contingent, unanticipated, and unintended.

Second, Lepora and Goodin’s notion of responsibility is dependent on voluntariness. Yet, the voluntary embrace of technology for innocuous ends might ease its harmful imposition on others. Responsibility also requires knowledge of contribution and harm. Because of equivocal badness and the distance between those who create technology and its harmful applications, this knowledge is often attenuated. Indeed, purposeful or negligent ignorance is all too easy—and ought not be exonerating.

Finally, Lepora and Goodin do not include harmful cultural norms within their scenarios, like those undergirding some cosmetic surgery and life hacking. Here, banal and crass cultural complicity become new facets in the assessment of shared purpose and extent of contribution.

As problematic technologies—especially AI, bio-hacking, and surveillance—continue their rapid advances, concern about digital complicity will become ever more pertinent, as will the need for a coherent framework for understanding, discussing, and limiting it.

References

Allen, M. (2018, February 4). Sean Parker unloads on Facebook: “God only knows what it’s doing to our children’s brains”. Axios. Retrieved from https://www.axios.com/sean-parker-unloads-on-facebook-god-only-knows-what-its-doing-to-our-childrens-brains-1513306792-f855e7b4-4e99-4d60-8d51-2775559c2671.html

An open letter to Microsoft drop your $19.4 million ice tech contract. (2018, January 22). Retrieved June 22, 2018, from https://actionnetwork.org/petitions/an-open-letter-to-microsoft-drop-your-194-million-ice-tech-contract

Aquinas, T. (1917). Summa Theologiae/Second part of the second Part/Question 62. Retrieved January 22, 2018, from https://en.wikisource.org/wiki/Summa_Theologiae/Second_Part_of_the_Second_Part/Question_62#Art._7_-_Whether_restitution_is_binding_on_those_who_have_not_taken (Original work published 1274)

Berg, M., & Seeber, B. (2017, April 16). The slow professor. Retrieved June 19, 2018, from

Bergen, M. (2018, June 21). Google engineers refused to build security tool to win military contracts. Retrieved June 25, 2018, from https://www.bloomberg.com/news/articles/2018-06-21/google-engineers-refused-to-build-security-tool-to-win-military-contracts

Brekke, D. (1996, February 8). CDA struck down. Wired. Retrieved from https://www.wired.com/1997/06/cda-struck-down/

Brock, G. (2016). Consumer complicity and labor exploitation. Croatian Journal of Philosophy, 16(1), 113–125.

Chen, S. (2018, June 13). “Forget the Facebook leak”: China is mining data directly from workers’ brains on an industrial scale. South China Morning Post. Retrieved from http://www.scmp.com/news/china/society/article/2143899/forget-facebook-leak-china-mining-data-directly-workers-brains

Conger, K. (2018, May 14). Google employees resign in protest against Pentagon contract. Gizmodo. Retrieved from https://gizmodo.com/google-employees-resign-in-protest-against-pentagon-con-1825729300

de Zwart, F. (2015). Unintended but not unanticipated consequences. Theory and Society, 44(3), 283–297. Retrieved from https://link.springer.com/content/pdf/10.1007%2Fs11186-015-9247-6.pdf

Epicenter. (2018, May 17). Epicenter. Retrieved May 17, 2018, from https://epicenterstockholm.com/

Foucault, M. (1977). Discipline and punish: The birth of the prison (1975). (A. Sheridan, Ed. & Trans.). New York: Pantheon Books.

Foucault, M. (1997). Technologies of the self. (P. Rabinow, Ed., R. Hurley, Trans.), Ethics: Subjectivity and Truth. New York: New Press. (Original work published 1982)

Gardner, J. (2006). Complicity and causality. Criminal Law and Philosophy, 1(2), 127–141. Retrieved from http://dx.doi.org/10.1007/s11572-006-9018-6

Gregg, M. (2015). Getting things done: Productivity, self-management, and the order of things. In Networked affect (pp. 187–202). Cambridge, MA: MIT Press.

Jaspers, K. (2000). The question of German guilt. (E. Ashton, Trans.). New York: Fordham University.

Johnstone, N. (2016, December 2). An oath for programmers, comparable to the Hippocratic oath. Retrieved February 26, 2018, from https://github.com/Widdershin/programmers-oath

Kutz, C. (2000). Complicity: Ethics and law for a collective age. Cambridge: Cambridge University.

Lawford-Smith, H. (2017). Does purchasing make consumers complicit in global labour injustice? Res Publica. https://doi.org/10.1007/s11158-017-9355-4

Lecher, C. (2018, June 21). The employee letter denouncing Microsoft’s ICE contract now has over 300 signatures. The Verge. Retrieved from https://www.theverge.com/2018/6/21/17488328/microsoft-ice-employees-signatures-protest

Lepora, C., & Goodin, R. E. (2013). On complicity and compromise. Oxford: Oxford University Press.

Lessig, L. (1997). Tyranny in the infrastructure. Wired, 5(07). Retrieved from http://www.wired.com/wired/archive/5.07/cyber_rights_pr.html

Lewis, H. D. (1948). Collective responsibility. Philosophy, 23(84), 3–18.

Lien, T., & Etehad, M. (2016, December 14). Tech workers pledge to never build a database of Muslims. LA Times. Retrieved from http://www.latimes.com/business/technology/la-fi-tn-tech-oppose-muslim-database-20161214-story.html

Little, M. O. (1998). Cosmetic surgery, suspect norms, and the ethics of complicity. In E. Parens (Ed.), Enhancing human traits: Ethical and social implications (pp. 162–177). Washington, DC: Georgetown University Press.

Ma, A. (2018, April 29). How China is watching its citizens in a modern surveillance state. Business Insider. Retrieved from http://www.businessinsider.com/how-china-is-watching-its-citizens-in-a-modern-surveillance-state-2018-4

Mellema, G. (2016). Complicity and moral accountability. Indiana: University of Notre Dame.

Merton, R. K. (1936). The unanticipated consequences of purposive social action. American Sociological Review, 1(6), 894–904. Retrieved from https://pdfs.semanticscholar.org/dc9f/6f377a93108e8ad73be1e9c0111428a5a8b9.pdf

Moore, P., & Robinson, A. (2015). The quantified self: What counts in the neoliberal workplace. New Media & Society. https://doi.org/10.1177/1461444815604328

Never Again. (2017). Retrieved January 11, 2018, from http://neveragain.tech/

Ong, T. (2018, February 1). Amazon patents wristbands that track warehouse employees’ hands in real time. The Verge. Retrieved from https://www.theverge.com/2018/2/1/16958918/amazon-patents-trackable-wristband-warehouse-employees

Owen, D. G. (2007, Summer). The five elements of negligence. Retrieved May 17, 2018, from https://scholarlycommons.law.hofstra.edu/hlr/vol35/iss4/1/

Palihapitiya, C. (2017, November 13). Chamath Palihapitiya, founder and CEO of Social Capital, on money as an instrument of change. Retrieved February 5, 2018, from https://www.youtube.com/watch?v=PMotykw0SIk&feature=youtu.be&t=21m21s

Penny, L. (2016, July 8). Life-hacks of the poor and aimless. The Baffler. Retrieved from http://thebaffler.com/blog/laurie-penny-self-care

Pichai, S. (2018, June 7). Our principles. Retrieved June 7, 2018, from https://blog.google/topics/ai/ai-principles/

Reagle, J. (2019). Hacking Life: Systematized living and its discontents. Cambridge, MA: MIT Press.

Reagle, J., & Weitzner, D. (1998). Statement on the intent and use of PICS: Using PICS well (Note). W3C. Retrieved from https://www.w3.org/TR/NOTE-PICS-Statement

Salles, A. (2017, April 3). Ignoring privacy worries, firm implants microchips in employees. Retrieved February 1, 2018, from https://www.carbonated.tv/news/despite-privacy-concerns-swedish-employees-ok-use-of-microchips

Sethi, M. (2014, November 30). Pavlok breaks bad habits. Retrieved July 11, 2016, from https://www.indiegogo.com/projects/pavlok-breaks-bad-habits#/

Tenner, E. (1996). Why things bite back: Technology and the revenge of unintended consequences. New York: Knopf.

Thomas, M. (2015). Life hacking: A critical history, 2004 – 2014 (PhD thesis). University of Iowa.

Thorp, J. (2018, June 19). I have withdrawn from this event in protest of MSFT’s complicity with ICE. Retrieved June 20, 2018, from https://twitter.com/blprnt/status/1009105549219237888?s=19

Wakabayashi, D., & Shane, S. (2018, June 1). Google will not renew pentagon contract that upset employees. The New York Times. Retrieved from https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html

Walters, J. (2017, September 20). Sean Parker: The internet is not the answer for those seeking change. The Guardian. Retrieved from https://www.theguardian.com/technology/2017/sep/20/sean-parker-the-internet-is-not-the-answer-for-those-seeking-change

Warren, T. (2018, June 20). Microsoft CEO plays down ice contract in internal memo to employees. The Verge. Retrieved from https://www.theverge.com/2018/6/20/17482500/microsoft-ceo-satya-nadella-ice-contract-memo

Wu, A. (2018, May 16). High school in China installs facial recognition cameras to monitor students’ attentiveness. The Epoch Times. Retrieved from https://www.theepochtimes.com/high-school-in-china-installs-facial-recognition-cameras-to-monitor-students-attentiveness_2526662.html