Digital complicity: The intent and embrace of problematic tech




Facebook executives didn’t intend to create a platform for Russian propaganda; they were preoccupied with growth. Bio-hackers who “chip” themselves give little thought to dystopic monitoring; they simply seek convenience. Is this exonerating? Not necessarily. Growth- and bio-hackers can be complicit in a harm, even if not principals, and I apply Lepora and Goodin’s (2013) framework to these high-tech cases. However, assessing digital complicity is difficult: consequences can be contingent, even if dire, and responsibility can seem tenuous. Creators’ intent and users’ embrace of problematic technology require additional consideration. On the former, I adapt Merton’s (1936) classic essay on unanticipated consequences. On the latter, when technologies of the self become technologies of power (Foucault, 1977), I make use of Little’s (1998) notion of cultural complicity. When digital complicity is likely, I conclude with how people have opposed, limited, or (at least) disclaimed the harmful uses of technology they create or embrace.

Keywords: complicity, ethics, technology, design


To be complicit is to be an accessory to harm. In 2017, globalism, sexist culture, and Trumpian nepotism prompted Dictionary.com to recognize complicit as its word of the year.

Complicity is also a concern in the digital realm. Is Facebook complicit in spreading the fake news that roiled the 2016 election? Are bio-hackers who inject themselves with implants for the sake of convenience complicit in the more oppressive tracking which will follow? Are those who develop facial recognition technology and databases complicit when these systems are used by oppressive regimes?

These questions are about problematic technology: tech-related artifacts, ideology, and techniques that have potential to be broadly harmful. I frame these questions relative to Chiara Lepora and Robert Goodin’s (2013) framework for assessing complicit blameworthiness. From this, I distill two attributes most salient to agents’ responsibility in the digital realm: creators’ intent toward and enthusiasts’ embrace of problematic technology. Facebook didn’t set out with the intention to distribute Russian propaganda. Bio-hackers give little thought to how their experiments presage dystopic scenarios.

Although intention and eager adoption are important concerns, they are not well understood in the digital context. To understand the role of intent, I extend Robert Merton’s (1936) classic essay “The Unanticipated Consequences of Purposive Social Action.” On the embrace of problematic technology, when technologies of the self become technologies of power (Foucault, 1977), I make use of Margaret Olivia Little’s (1998) notion of cultural complicity.

I selected my cases, in part, based on what is in the news; they also have analytic and pragmatic merits. Consideration of Facebook’s complicity allows us to focus on intent and to distinguish between individual and collective culpability (i.e., Facebook’s complicity as a corporation versus that of an employee). Bio-hackers’ possible complicity allows us to see the importance of market demand and cultural norms in the proliferation of problematic technology. Finally, I use human tracking technology, including facial recognition and databases of people, as an example for understanding the overall framework in the next section.

I do not presume my assessments of digital complicity are conclusive. Whether, for example, bio-hackers are complicit in a dystopian future is defeasible. Rather, I intend to illustrate the concerns and concepts that we must engage when considering such questions.

A framework for complicity

Who, beyond the thief, is obliged to compensate the victim of that theft? Aristotle (1917) responded that we must consider those who contribute “by command, by counsel, by consent, by flattery, by receiving, by participation, by silence, by not preventing, [and] by not denouncing” (II.II.7.6). Aristotle’s discussion, though brief, touched on the extent of the accomplice’s contribution to the harm, their obligations, and the difference between action and inaction. A frail person who does not intervene in a mugging is different than an official who condones the graft she is obliged to prevent.

Since Aristotle, moral and legal philosophers have continued to discuss complicity, offering different terms, definitions, rubrics, value judgments, and case studies. John Gardner (2006) focused on causality and degree of difference-making in his analysis, as did Christopher Kutz (2000). H. D. Lewis (1948) maintained an individualistic, skeptical, and minimalist approach to collective culpability; Karl Jaspers (2000) did the opposite. Gregory Mellema (2016), in Complicity and Moral Accountability, summarized this literature well, in addition to offering his own theoretical distinctions between enabling versus facilitating harm and shared versus collective responsibility.

For the sake of concision, I won’t attempt a gloss of the literature. Instead, I make use of Lepora and Goodin’s (2013) exhaustive framework. Though it initially appears complicated—and the following summary should be used as a reference rather than something that must be comprehended before continuing—it is the most comprehensive and coherent framing of a difficult topic.

Lepora and Goodin’s work was initially motivated by the quandaries faced by aid workers. Is an NGO that imports food complicit in supporting the warlord who purloins much of it? This real-world case is supplemented by hypothetical scenarios, including a bank robbery and an assassination. I complement these scenarios with the example of the development of human tracking technology, which can be used by oppressive regimes.

In their analysis, Lepora and Goodin distinguished between four groups of agents: (a) shady non-contributors who have no causal relation to the harm, (b) non-blameworthy but complicit contributors to the harm, (c) blameworthy complicit contributors to the harm, of varying degrees, and (d) co-principals of the harm. Whether agents are exonerated (as when coerced) or excused (as when participating in a lesser evil) are additional and independent distinctions.

Figure 1: Lepora and Goodin’s (2013) types of agents.

  • Shady Non-Contributors (not causal): connivance; condoning; consorting; contiguity.
  • Non-Blameworthy Complicit Contributors (causal but not responsible: not voluntary or knowledgeable of harm/role) and Blameworthy Complicit Contributors (causal & responsible): connivance; condoning; consorting; contiguity; collaboration (follows plan); complicity simpliciter.
  • Participating Co-principals: full joint (identical plan & action); co-operation (shared plan & different actions); conspiracy (shared plan); collusion (plan in secret).

Even non-contributors, with no causal connection to a harm, can have “shady” (my term) relationships to the wrongdoer. Such shady non-complicit relationships are connivance (tacitly assenting), condoning (granting forgiveness), consorting (close social distance), and contiguity (close physical distance). Even if unsavory, these non-contributors have no causal contribution to the harm.

Contributors, on the other hand, causally contribute to the harm but do not constitute it; their complicity “necessarily involves committing an act that potentially contributes to the wrongdoing of others in some causal way” (Lepora and Goodin, 2013: 6). The researcher who develops a better facial recognition algorithm is a contributor to downstream harm. She did not join in the harm, and she may not even be morally blameworthy—which I will return to shortly—but she did contribute to the harm.

The six types of complicit behavior include the four shady behaviors, already discussed, when they contribute to harm. If criminals know they won’t be reported and consequently commit another crime, the non-reporter’s connivance becomes complicit connivance. The same holds true for condoning, consorting, and contiguity. These terms describe agents associated with wrongdoers; when their behavior becomes causal, encouraging harm, they move from being shady non-contributors to complicit contributors.

The other two types of complicit contribution are collaboration and complicity simpliciter. In collaboration the agent goes along with a wrongful plan of the wrongdoer. A teller who is forced to open the vault at gunpoint is a complicit collaborator—though morally exonerable in the larger framework because his action was coerced. An engineer who is threatened with termination if she does not help build an oppressive system is a complicit collaborator. Complicity simpliciter is a catch-all term used to cover those cases not already addressed. Absent the specificity of the other types of complicity, the agent simply has to have known (or should have known) they were contributing to a harm.

Finally, co-principals are active participants in the planning and execution of a wrong-doing; their actions constitute the harm. In most cases, the co-principals are co-operators: they each take the plan as their own and partake in its actions, even if in different but interdependent ways. Members of a bank robbery gang are co-operating co-principals, including those holding the guns, the lookout, and the getaway driver. Similarly, consultants who customize and install a biometric database system so that the innocent can be tracked by an oppressive regime are co-operating co-principals in this harm.

Of course, a given scenario can have multiple harms in which an agent plays multiple roles. Someone who backed out of a heist after helping plan it is a co-principal in the conspiracy and a complicit contributor in its enactment but not a co-principal in the robbery itself.

Within these roles we can see various “dimensions of difference” (Lepora and Goodin, 2013, ch. 4). Centrality speaks to the extent of contribution: how much and how essential. For this, Lepora and Goodin use “counter-factual difference making,” which entails thinking about alternative worlds. An essential contribution is necessary to the harm in “every suitably nearby possible world.” For example, someone who smuggles a sniper rifle past security to an assassin is essential to the murder. A potentially essential contribution is a “necessary condition of the wrong occurring, along some (but not all) possible paths by which the wrong might occur” (p. 62). The sniper who successfully assassinated the target was essential; the back-up assassin was potentially so because it is conceivable that some course of events would have led to the primary assassin failing and the back-up assassin succeeding.

Essentiality is a means by which Lepora and Goodin address individual and collective culpability. (This is a complex issue but not especially novel to digital cases.) A classic example is the culpability of firing squad members when some of its members shoot bullets and others shoot blanks. There are two ways of approaching this scenario. First, we can take the firing squad as a “consolidated wrongdoing” in which the actions of agents together constitute the harm. Each soldier, whether they shot a bullet or not, has jointly participated in the collective harm of a firing squad. Second, we can perform an individual assessment of essentiality in which we ask if there is a possible universe in which an individual’s actions made a difference. Because each soldier understood that his or her gun might have had a bullet, each one is potentially essential and exposed to complicity.

Proximity speaks to the closeness to the harm in the causal chain; the last contribution to a wrongdoing has a greater weight than an earlier one. Firing a rifle in an assassination is more proximate to the harm than the person who procured it. Installing a tracking system for an oppressive regime is more proximate than someone who designed it. The reversibility of the contribution is a factor, as is its temporality (e.g., condoning happens after the primary wrongdoing). There is also a person’s mental stance toward planning the harm. The person might be the plan-maker or a plan-taker. If the latter, there is the degree of shared purpose and responsiveness to the plan, from eagerly adopting it as their own, to otherwise accepting it, to merely complying.

These dimensions are inputs to factors within Lepora and Goodin’s framework for assessment, which they express as equations. Complicit blameworthiness is a function of the badness, responsibility, contribution, and shared purpose factors: CB = (RF*BF*CF) + (RF*SP). An implication of this equation is that even though bank tellers are complicit—they contributed—when coerced, they are not blameworthy; if RF=0, so is CB. Similarly, a researcher who has no reason to know of the harmful consequences or use of their algorithm is complicit but not blameworthy. Another implication is that you do not need a shared purpose (SP) with the wrongdoer to be blameworthy. Even when SP=0, a contributor might have non-zero factors of responsibility (RF), badness (BF), and contribution (CF), resulting in a positive CB.

Figure 2: Lepora and Goodin’s (2013) dimensions of complicit blameworthiness
CB = (RF*BF*CF) + (RF*SP).

Complicit blameworthiness, then, is a spectrum, from those who are complicit but bear zero blame (because of ignorance or coercion) to those who bear some degree of blame (from minimal to maximal).
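The arithmetic of the rubric can be sketched as follows. The function simply encodes Lepora and Goodin’s equation; the 0-to-1 scale and the specific factor values are my illustrative assumptions, not theirs:

```python
def complicit_blameworthiness(rf, bf, cf, sp):
    """Lepora and Goodin's (2013) rubric: CB = (RF*BF*CF) + (RF*SP).

    rf: responsibility factor (0 when the agent is coerced or blamelessly ignorant)
    bf: badness factor of the principal wrongdoing
    cf: contribution factor (extent of causal contribution)
    sp: shared purpose with the wrongdoer

    All factors here are scored on an assumed 0-to-1 scale for illustration.
    """
    return (rf * bf * cf) + (rf * sp)

# The coerced bank teller: a real contribution (cf > 0) but rf = 0,
# so blameworthiness collapses to zero.
print(complicit_blameworthiness(rf=0.0, bf=0.9, cf=0.8, sp=0.0))  # 0.0

# A knowing, voluntary contributor with no shared purpose (sp = 0)
# still accrues positive blameworthiness.
print(complicit_blameworthiness(rf=0.5, bf=0.9, cf=0.8, sp=0.0))
```

The additive second term captures why shared purpose alone (absent any causal contribution, CF=0) can still yield blame, while the multiplicative first term zeroes out whenever responsibility does.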

Finally, Lepora and Goodin, true to their concern about humanitarian efforts, acknowledged that blameworthy complicity in one harm (provisioning a warlord) can be the lesser evil of another harm (letting others starve). Nonetheless, we should still recognize the lesser evil as an evil: “We think that is a better way of explicating the morality of the situation than to deny that you are doing anything wrong at all by contributing to wrongdoing, on the grounds that your own intentions are pure” (Lepora and Goodin, 2013: 96).

Facebook’s intent

Whereas complicit was Dictionary.com’s word of 2017, fake news took the honor at the Collins dictionary. Existing concerns about social media’s effects on users’ identities, behavior, and relationships were joined by the harm of misinformation. In 2017, two former Facebook executives confessed to feeling partly responsible.

During an onstage interview, Sean Parker, Facebook’s former president, confessed that he was presently “something of a conscientious objector” to social media. He was now like those who, in Facebook’s early days, told him they valued their real-life presence and interactions too much to waste time on social media. The younger Parker would glibly respond: “We’ll get you eventually.”

I don’t know if I really understood the consequences of what I was saying, because [of] the unintended consequences of a network when it grows to a billion or two billion people and … it literally changes your relationship with society, with each other…. (Parker quoted in Allen, 2018)

Parker claimed that, in order to consume as much of its users’ time and attention as possible, Facebook created “a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.” Even if they did not fully appreciate the consequences of what they were doing, they knew what they were doing: “The inventors, creators—it’s me, it’s Mark [Zuckerberg], it’s Kevin Systrom on Instagram, it’s all of these people—understood this consciously. And we did it anyway” (Parker quoted in Allen, 2018). In the civic realm, Parker once believed the “Internet could be our salvation,” but, given recent events, he now criticizes such naiveté and advocates for increased civic participation (Walters, 2017).

Similarly, during a public interview at Stanford School of Business, Chamath Palihapitiya, former vice-president of user growth at Facebook, said he felt “tremendous guilt” about his role at Facebook. The platform’s exploitation of “short-term dopamine driven feedback loops” has led to a proliferation of incivility and misinformation.

Even though we feigned this whole line of “there probably aren’t any really bad unintended consequences,” I think in the back deep recesses of our minds we kind of knew something bad could happen. But I think the way we defined it was not like this. It’s literally at a point now where I think we have created tools that are ripping apart the social fabric of how society works. (Palihapitiya, 2017, min. 21:38)

In these excerpts we see two harms: (1) stoking users’ obsessive engagement in pursuit of growth and advertising revenue, and (2) distorting civic culture via polarization and propaganda. Facebook is a participating co-principal in the first harm. My present concern, though, is whether Facebook is a complicit contributor in the second harm of spreading misinformation.

Let us take the spread of fake news as a significant harm—even if a minority might disagree sincerely or to rationalize their past choices or current position. Even Mark Zuckerberg agrees fake news is bad and concedes that the platform did not do enough to prevent its spread. Although Facebook enabled the harm, however, it did not share the purpose of the propagandists nor did it have a role in their planning. The damage that was done was irreversible, though the platform is now attempting reform. In Lepora and Goodin’s terms, the badness factor was high, the contribution factor was less so. What, then, of responsibility? This is a central question of digital complicity.

In Lepora and Goodin’s framework, responsibility is contingent on voluntariness and knowledge of wrongness and contribution. Facebook wasn’t coerced, and Palihapitiya and Parker’s comments reveal that even if initially unaware of the problem, they suspected there would be unintended consequences. Palihapitiya knew “something bad could happen,” but suppressed the idea; when it was considered, the imagined extent of the problem “was not like this.” The extent of their knowledge of wrongness was naive and their knowledge of contribution was suppressed.

It is not uncommon for there to be unintended consequences, especially when it comes to technology (Tenner, 1996). Anticipation is the extent to which a consequence is foreseen. According to Merton, consequences go unanticipated both because some are unknowable, given the “interplay of forces and circumstances which are so complex and numerous that prediction of them is quite beyond our reach,” and because of ignorance, for which “knowledge could conceivably be obtained” but is not (Merton, 1936: 899). (Lepora and Goodin (2013: 95) use the related terms of unavoidable ignorance and culpable ignorance.) Other uses might be anticipated but still be unintended, meaning they are foreseen as possible but not aligned with the creator’s intent. For example, makers of a hammer might not anticipate the claw end of the hammer serving as a bottle opener. Even so, they might anticipate its use as a weapon, though this is contrary to their intention.

In the case of Facebook’s spread of misinformation, its former executives claimed both unanticipated and unintended consequences. Early on, Facebook didn’t anticipate becoming a platform for propaganda. And even when it appeared to be happening, it was never their intention. It was not as if propaganda was inconceivable (unknowable), only that they were largely ignorant to the possibility.

Merton specifies five causes of unanticipated consequences, four of which are relevant to this case. First, as stated, Facebook was initially ignorant of the harm and of its responsibility in spreading misinformation. Second, this ignorance was a consequence of error: it had not been a problem in the past, so they assumed it would not be a problem in the future even as Facebook gained millions and then billions of users. Third, Facebook had an “imperious immediacy of interest” (Merton, 1936: 901) in user and advertising growth; little else was given much attention. Fourth, given many social media platforms’ initial commitments to free speech, this basic value hindered Facebook from censoring political-seeming speech.

But to give explanations for Facebook’s ignorance is not to excuse it. Indeed, ignorance can be negligent and even contrived. Legal theorists typically enumerate a handful of elements necessary for negligence, including duty, breach of duty, a causal relationship, and consequent damage (Owen, 2007). Therefore, Lepora and Goodin’s concern with knowledge in assessing responsibility must also include the moral character of any ignorance. If Facebook had a duty to be a responsible media platform (Gillespie, 2018: 206–211), to be knowledgeable about its harms and their contribution, they failed, which causally contributed to the spread of propaganda.

Figure 4: An extension of Merton’s (1936) unanticipated consequences.

Based on Palihapitiya’s comments, Facebook was complicit in spreading propaganda. In Lepora and Goodin’s terms, what happened was bad and Facebook causally contributed to the harm.

The key question is about responsibility. Facebook’s spreading of propaganda was a primary, negative, and substantial consequence of creating a commenting platform. In terms of responsibility, they were initially ignorant in allowing this consequence to manifest, even if it was not their intention. Even so, this ignorance may have been negligent, and neither negligent nor contrived ignorance should be absolving of responsibility. Nothing was coerced, so there is no exoneration of blameworthiness.

Individuals at Facebook have some stake in this collective culpability as well. I’m not inclined to argue all Facebook employees are necessarily party to a collective harm, like the members of a firing squad. Rather, more fine-grained assessments are appropriate and dependent on variations in responsibility and contribution. The most difficult assessment is whether any given employee, Palihapitiya for example, made a difference. Such an individual can be determined as individually complicit if he was potentially essential to the harm; that is, there is a possible sequence of events in which his behavior contributed to the harm.

Finally, a possible “lesser evil” exoneration would be for Facebook to claim “if not us, someone else.” In a competitive market, such a conceit must be common: “It’s better that we succeed and do what we can, perhaps in the future, than let the less scrupulous win.” However, such a claim is morally distasteful when winning is also so obviously self-interested. It is easy for technologists to fret over their consciences once they are millionaires. This also presumes a false dichotomy between growth at all costs and failure. Something might have been done; some foresight might have been spent. In any case, this argument for exoneration would be tenuous and, even if it excuses a lesser evil, it is an evil nonetheless.

Life hackers’ embrace

In 2017 the Swedish innovation incubator Epicenter held a well-publicized party at which employees could opt for an injection of an identity implant. After being “chipped,” they would be able to unlock doors and purchase items from vending machines with a wave of a hand. One of the volunteers remarked that she opted for the implant because she was prone to losing her keys and that she wanted to be “part of the future” (Sandra Haglof quoted in Salles, 2017).

Epicenter describes itself as a space for “hackers and technology enthusiasts” to meet and collaborate—and it calls out Tim Ferriss, “life hacker and NYTimes best-selling author,” as a member (Epicenter, 2018). Hackers are known for their facility with systems. Although systems behave in typical ways, they can also be optimized or contravened with hacking, and the practice is not limited to computers (Author, 2000). Hacking now encompasses many domains of life and business. We already saw Sean Parker speak of exploiting users’ psychology while at Facebook—an approach now known as growth hacking. Similarly, Epicenter’s life hackers believe that by chipping themselves they improve their lives and have an (augmented) hand in shaping the future.

Yet, a future where such augmentation is commonplace is not necessarily a uniformly good one. This technology is problematic, with potential to be broadly harmful—beyond any personal risk. This makes assessment of digital complicity difficult because it often entails the prospective evaluation of consequences, both good and bad. For example, bio-hacking can be useful and a cause of concern. Is chipping appropriate for children (even if for their own safety) or those under house arrest (even if for their own convenience)? Similarly, beginning in 2013, the Consciousness Hacking movement coalesced around the idea of monitoring and manipulating the mind, often by way of devices. Brain-sensing headbands can help enthusiasts learn to meditate by indicating when they are focused or distracted. In China, similar technology is being used to monitor the mental state and emotions of employees (Chen, 2018). The Chinese company puts a positive spin on their product, suggesting worker productivity and happiness might be optimized by increasing breaks when fatigue sets in, but exploitative uses are as likely.

It’s not difficult to imagine life hacking techniques and technologies implicated in dystopic scenarios. To what extent are those who embrace life hacking, then, complicit? The question can also be asked of companies, including Epicenter, though I wish to focus on individual hackers. In Foucauldian terms, by embracing technologies of the self, life hackers further technologies of power.

Foucault defined technologies of the self as those that permit individuals to perform “operations on their own bodies and souls, thoughts, conduct, and way of being, so as to transform themselves in order to attain a certain state of happiness, purity, wisdom, perfection, or immortality.” Technologies of power, on the other hand, “determine the conduct of individuals and submit them to certain ends or domination, an objectivizing of the subject” (Foucault, 1997: 225). For example, in China, a high school is using facial recognition to monitor students’ alertness, and the government is using biometrics to track ethnic minorities, a capability central to its nascent social credit system (Ma, 2018; Wu, 2018). Whereas technologies of the self are performed by the self for the self’s benefit, technologies of power see the self dominated and objectivized by another. (Foucault additionally used the term biopower to speak of controlling the bodies of populations.)

When considering technologies of power, we can further distinguish between what I will call hard and soft power. Hard power determines the conduct of the individual; soft power shapes their conduct through norms—related to Foucault’s (1977) notion of discipline. For example, imprisoning a class of people in a cage is a technology of hard power; endorsing the norm that they ought not leave home without a chaperon is a technology of soft power.

Figure 5: Foucauldian (1977) technologies and complicity.

The complicity concern, again, is that an enthusiast who, at the individual level, embraces problematic technology furthers dystopic ends. Their voluntary embrace of technology eases its (1) coercive imposition (hard power) and (2) normative pressure (soft power).

The first concern of hard power is one of a slippery slope. When enthusiasts voluntarily adopt implants, they contribute to the technology’s development. If they purchase it, they create market demand. When they adopt it from their employer, they test and apply the technology for the employer’s benefit. While this is voluntary, it primes the technology for less voluntary use. Consider the example of behavioral modification. In 2014, life hacker Maneesh Sethi launched the Pavlok wristband, which vibrates or zaps its users to discourage bad habits—such as biting their nails or wasting time on Facebook (Sethi, 2014). This is a technology of the self, invented and embraced by life hackers. Four years later, Amazon patented something similar to alert (vibrate) its warehouse employees when they reach for the wrong item (Ong, 2018). This is a technology of power. Such monitoring and modification might one day happen at Amazon, and, as noted, is already happening in China. The slope between the hacker who happily adopts, the worker who reluctantly accedes, and the person who has little choice might be greased by hackers’ embrace.

In Lepora and Goodin’s terms, the badness factor of this scenario is massive but uncertain: it is contingent on the likelihood of the dystopic scenario. The essentiality of ordinary enthusiasts to the harmful norm is low: it’s hard to imagine a history in which such an individual’s embrace makes a difference—though it is easier to imagine this of companies and life hacking gurus. The responsibility of the enthusiast adopting the technology is also uncertain. Wearing a Pavlok or getting chipped is voluntary, and the enthusiasts might have little knowledge of their contribution to and the wrongness of the dystopic possibilities. The hacker’s contribution, as an adopter of problematic technology, is minimal as they are peripheral to the harm. Their shared purpose with oppressive wrongdoers is likely minimal and possibly zero. In most cases, those who embrace problematic technology have less culpability than those who create it. Only when harm is probable would their blameworthiness be like that of consumers complicit in labor exploitation (Brock, 2016; Lawford-Smith, 2017).

On the second concern of soft power, a number of critics have noted how individual efforts toward self-improvement amplify the social impetus for others to do the same. Maximizing productivity, for example, can backfire on the individual and further exploitative workplaces (Gregg, 2015; Moore and Robinson, 2015; Penny, 2016). Writing of academics hoping to boost their productivity, Matt Thomas argues that by embracing the ideology, apps, and techniques of the blog ProfHacker, “one essentially becomes complicit in one’s own obsolescence,” and that these techniques ignore and amplify “the various structural pressures that make them seem like the way out” (Thomas, 2015: 182).

Margaret Olivia Little (1998) refers to the culpability of participating in the exercise of soft power as cultural complicity, which is “when one endorses, promotes, or unduly benefits from norms and practices that are morally suspect.” Little’s concern was how cosmetic surgery can reinforce the norms of whiteness and Barbie Doll femininity. She used crass complicity to characterize those who bolster and personally benefit from harmful norms, such as an exploitative plastic surgeon. I use the term banal complicity to characterize those who add to “the increased pressure others may in fact feel as a result of having surgically ‘improved’ appearance” (p. 173). In the case of cosmetic surgery we can easily see how technologies of the self overlap with disciplinary technologies of power.

In Lepora and Goodin’s terms, the badness factor of problematic norms is significant and certain. With respect to responsibility, cosmetic patients act voluntarily, but may be ignorant of any wrongness of their contribution to harmful norms; their centrality and essentiality is very low. Their contribution is minimal, as is their shared purpose in fostering cosmetic insecurity in others. Surgeons, on the other hand, have significant centrality, responsibility, and contribution. This is why Little labels those who bolster and benefit from harmful norms as crassly complicit.

Enthusiasts’ embrace of life hacking can be parsed in a similar way. The productivity hacker, for example, also has minimal responsibility. Such enthusiasts are acting voluntarily and give little thought to the larger system motivating their efforts. Their centrality and contribution are minimal, as is any shared purpose toward a more hectic and precarious workplace. In this case, there is no crassly complicit surgeon to blame. There are companies and self-help gurus who are crassly complicit, but their centrality and proximity are more removed than those of a surgeon. Additionally, this scenario is complicated by the fact that the badness factor is equivocal. Improving one’s productivity can be taken to excess and contribute to inimical norms, but not necessarily so. The problem is one of context: what motivates and who benefits from increased productivity? Workers who are motivated by self-efficacy and who benefit from their efforts are different from those driven by precarity for others’ gain.

Responses which limit complicity

Even if the complicity around problematic technology is equivocal, the potential magnitude of such technology’s harm is massive. Letting a child play with a loaded gun, for example, has a significant chance of resulting in a definite harm that affects the child and dozens of others. It is a probable tragedy at the micro scale. Facial recognition technology has good applications, but it can be abused with society-wide effects. It is a possible nightmare at the macro scale.

Some technologists wish to limit their complicity in these harms. The question, then, is how? Given the spate of news related to digital complicity, I discern three strategies: distancing, disclaiming, and opposition.

The first category of response is to distance oneself from technology that is knowingly being used in problematic applications. Early in 2018, a handful of influential Google engineers refused to build a security system that would have allowed their company to bid on additional military projects. (The Pentagon requires its contractors to have such systems for its most sensitive projects.) A few months later, over four thousand Google employees petitioned their company to cancel an existing contract with the Pentagon. This latter project developed object recognition technology for use by drones (though, purportedly, not for their weapons). Some Googlers even resigned in protest (Bergen, 2018; Conger, 2018; Wakabayashi and Shane, 2018). Similarly, in response to a change in U.S. border policy that increased the separation of migrant families, over five hundred Microsoft employees called for their company to cancel any contracts related to U.S. Immigration and Customs Enforcement (ICE) (Frenkel, 2018).

Beyond these specific cases, such distancing has also been generalized. Over twenty-eight hundred technologists have pledged to “never again” build databases that allow governments to “target individuals based on race, religion or national origin.” The “never again” references IBM’s role in providing tabulating equipment used in genocide by the Nazi regime (Again, 2017; Lien and Etehad, 2016). The pledge also entails minimizing the collection of related data (even if not originally intended toward oppressive ends), destroying existing “high-risk data sets and backups,” and resigning if need be. Even more generally, some programmers have developed a Hippocratic-like oath for their profession by which individuals pledge to undertake only “honest and moral work” and to “consider the possible consequences of my code and actions” (Johnstone, 2016).

Another category of response is to disclaim the application of technology counter to one’s intentions. Twenty years ago, when I worked at the World Wide Web Consortium, I co-authored a document entitled “Statement on the Intent and Use of PICS: Using PICS Well” (Author, 2000). The Platform for Internet Content Selection (PICS) was a content labeling system intended as an alternative to government censorship. Sites could label their content (choosing a rating system similar to that used for video games) and parents could configure the family computer as they saw fit. However, some critics feared that deploying even a voluntary system could have unintended dystopic ends (Lessig, 1997). The statement made clear that the designers intended the technology to be used at content creators’ and consumers’ discretion, with full transparency about its use. (PICS ended up having little effect beyond Reno v. American Civil Liberties Union; a concurring opinion cited it as evidence that technical options were available and preferable to government censorship (Brekke, 1996).)

Attempts at clarification also followed the efforts of Google and Microsoft employees. Both companies spoke to what type of work they would undertake. Google announced it would not renew its drone imaging contract with the Pentagon. Additionally, its CEO, Sundar Pichai, publicly wrote that Google will not apply its artificial intelligence (AI) work where it is likely to cause harm, be used within weapons, violate international surveillance norms, or otherwise violate international law and human rights (Pichai, 2018). Satya Nadella’s internal memo at Microsoft was less decisive, clarifying that the company’s contracts with ICE were related to legacy office applications and not “any projects related to separating children from their families at the border” (Nadella quoted in Warren, 2018). As journalists and a subsequent petition noted, this did not address earlier boasts by Microsoft that it was supporting ICE’s efforts in voice and facial recognition (Action Network, 2018).

With respect to concerns about those who embrace problematic technology, there aren’t many strong examples of distancing from complicity, which is not surprising given that it is a more diffuse concern. An obvious action is to no longer embrace a technology once you discover its harmful potential. Consumer boycotts are commonplace, but none of the scenarios I’ve discussed prompted such action, though some outside of Microsoft said they would withdraw from Microsoft-related events in solidarity with the ICE protest (Thorp, 2018). Beyond the economic effect of such distancing, which I suspect is minimal, protests have a more important role: by making a concern a topic of public discussion, protesters remove the potential for others to claim they were ignorant of their responsibility.

With respect to the embrace of norms among life hackers, it’s not difficult to find disenchanted life hackers who disclaim and distance themselves from their earlier embrace, as did productivity hacker Merlin Mann and digital minimalist Everett Bogue (Author, 2000, pp. TODO; Thomas, 2015: 74). More generally, the slow movement, applied to food, work, and even professing (Berg and Seeber, 2017), can be seen as a pushback against the norms of efficiency that overlap with the optimizing inclination of life hackers.

Finally, there are tactics of opposition, which go beyond limiting one’s complicity in a harm to actively countering it. Diaspora and Mastodon, for example, were designed as Facebook alternatives; they are decentralized, opposed to ads, and give users greater control over their data and experience. Similarly, technologists can oppose digital harms by creating systems that interfere with problematic technologies, such as those that counter surveillance (Brunton and Nissenbaum, 2015). And some bio-hackers might even claim their “hacktivist” experiments are, in fact, a type of resistance: by experimenting with problematic technologies, they better illuminate their promise and perils.


An assessment of digital complicity requires a synthesis of existing frameworks and concepts. I’ve extended Lepora and Goodin’s framework so as to address the creation and embrace of problematic technology. Although this adds clarity in understanding digital scenarios, assessment is not straightforward.

First, the assessment of badness is complicated in the digital realm. This technology has uncertain and equivocal consequences: both good and bad, contingent, unanticipated, and unintended.

Second, Lepora and Goodin’s notion of responsibility is dependent on voluntariness. Yet the voluntary embrace of technology for innocuous ends might ease its harmful imposition on others. The line between technologies of the self and technologies of power blurs. Responsibility also requires knowledge of contribution and harm. Because badness is equivocal and because those who create technology are often distant from its harmful applications, this knowledge is frequently attenuated. Indeed, contrived or negligent ignorance is all too easy, and it ought not be exonerating.

Finally, Lepora and Goodin do not include harmful cultural norms within their scenarios, like those undergirding some cosmetic surgery and life hacking. Here, banal and crass cultural complicity become new facets in the assessment of shared purpose and extent of contribution.

As problematic technologies—especially AI, bio-hacking, and surveillance—continue their rapid advances, concern about digital complicity will become ever more pertinent, as will the need for a coherent framework for understanding, discussing, and limiting it.


Action Network (2018) An open letter to Microsoft: Drop your $19.4 million ICE tech contract. Available at: (accessed 22 June 2018).

Again N (2017) Never Again. Available at: (accessed 11 January 2018).

Allen M (2018) Sean Parker unloads on Facebook: ‘God only knows what it’s doing to our children’s brains’. Axios, 4 February. Available at: (accessed 9 November 2017).

Aquinas T (1917) Summa Theologica, Second Part of the Second Part, Question 62. New York: Benziger Brothers Printers to the Holy Apostolic See. Available at: (accessed 22 January 2018).

Author A (2000) Sources blinded for peer review.

Berg M and Seeber B (2017) The Slow Professor: Challenging the Culture of Speed in the Academy. Toronto: University of Toronto Press.

Bergen M (2018) Google engineers refused to build security tool to win military contracts. Available at: (accessed 25 June 2018).

Brekke D (1996) CDA struck down. Wired, 8 February. Available at: (accessed 21 June 2018).

Brock G (2016) Consumer complicity and labor exploitation. Croatian Journal of Philosophy 16(1): 113–125.

Brunton F and Nissenbaum H (2015) Obfuscation: A User’s Guide for Privacy and Protest. Cambridge: MIT Press.

Chen S (2018) ‘Forget the Facebook leak’: China is mining data directly from workers’ brains on an industrial scale. South China Morning Post, 13 June. Available at: (accessed 13 June 2018).

Conger K (2018) Google employees resign in protest against Pentagon contract. Gizmodo, 14 May. Available at: (accessed 14 May 2018).

Epicenter (2018) Epicenter. Available at: (accessed 17 May 2018).

Foucault M (1977) Discipline and Punish: The Birth of the Prison (1975). Sheridan A (trans.). New York: Pantheon Books.

Foucault M (1997) Technologies of the self. In: Rabinow P (ed.) Ethics: Subjectivity and Truth. New York: New Press.

Frenkel S (2018) Microsoft employees question C.E.O. over company’s contract with ICE. The New York Times, 26 July. Available at: (accessed 16 August 2018).

Gardner J (2006) Complicity and causality. Criminal Law and Philosophy 1(2): 127–141. Available at: (accessed 10 January 2018).

Gillespie T (2018) Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven: Yale University Press.

Gregg M (2015) Getting things done: Productivity, self-management, and the order of things. In: Hillis K, Paasonen S and Petit M (eds) Networked Affect. Cambridge, MA: MIT Press, pp. 187–202.

Jaspers K (2000) The Question of German Guilt. New York: Fordham University Press.

Johnstone N (2016) An oath for programmers, comparable to the Hippocratic oath. Available at: (accessed 26 February 2018).

Kutz C (2000) Complicity: Ethics and Law for a Collective Age. Cambridge: Cambridge University Press.

Lawford-Smith H (2017) Does purchasing make consumers complicit in global labour injustice? Res Publica. Springer Nature. DOI: 10.1007/s11158-017-9355-4.

Lepora C and Goodin RE (2013) On Complicity and Compromise. Oxford: Oxford University Press.

Lessig L (1997) Tyranny in the infrastructure. Wired 5(07). Available at: (accessed 15 September 2005).

Lewis HD (1948) Collective responsibility. Philosophy 23(84): 3–18.

Lien T and Etehad M (2016) Tech workers pledge to never build a database of Muslims. LA Times, 14 December. Available at: (accessed 29 June 2017).

Little MO (1998) Cosmetic surgery, suspect norms, and the ethics of complicity. In: Parens E (ed.) Enhancing Human Traits: Ethical and Social Implications. Washington, DC: Georgetown University Press.

Ma A (2018) How China is watching its citizens in a modern surveillance state. Business Insider, 29 April. Available at: (accessed 7 June 2018).

Mellema G (2016) Complicity and Moral Accountability. Notre Dame, IN: University of Notre Dame Press.

Merton RK (1936) The unanticipated consequences of purposive social action. American Sociological Review 1(6): 894–904. Available at: (accessed 16 May 2018).

Moore P and Robinson A (2015) The quantified self: What counts in the neoliberal workplace. New Media & Society. SAGE Publications. DOI: 10.1177/1461444815604328.

Ong T (2018) Amazon patents wristbands that track warehouse employees’ hands in real time. The Verge, 1 February. Available at: (accessed 9 March 2018).

Owen DG (2007) The five elements of negligence. Hofstra Law Review 35(4): 1671–1686. Available at: (accessed 17 May 2018).

Palihapitiya C (2017) Chamath Palihapitiya, founder and CEO of Social Capital, on money as an instrument of change. Available at: (accessed 5 February 2018).

Penny L (2016) Life-hacks of the poor and aimless. The Baffler, 8 July. Available at: (accessed 9 July 2016).

Pichai S (2018) Our principles. In: AI at Google. Available at: (accessed 7 June 2018).

Salles A (2017) Ignoring privacy worries, firm implants microchips in employees. Available at: (accessed 1 February 2018).

Sethi M (2014) Pavlok breaks bad habits. Available at: (accessed 11 July 2016).

Tenner E (1996) Why Things Bite Back: Technology and the Revenge of Unintended Consequences. New York: Knopf.

Thomas M (2015) Life hacking: A critical history, 2004–2014. PhD thesis. University of Iowa. Available at: (accessed 31 August 2015).

Thorp J (2018) I have withdrawn from this event in protest of MSFT’s complicity with ICE. Available at: (accessed 20 June 2018).

Wakabayashi D and Shane S (2018) Google will not renew pentagon contract that upset employees. The New York Times, 1 June. Available at: (accessed 4 June 2018).

Walters J (2017) Sean Parker: The internet is not the answer for those seeking change. The Guardian, 20 September. Available at: (accessed 8 June 2018).

Warren T (2018) Microsoft CEO plays down ICE contract in internal memo to employees. The Verge, 20 June. Available at: (accessed 20 June 2018).

Wu A (2018) High school in China installs facial recognition cameras to monitor students’ attentiveness. The Epoch Times, 16 May. Available at: (accessed 31 May 2018).