Morality and the Dilemma

The challenge at the heart of collective action is how cooperative behavior emerges when there are apparent reasons for it not to. This is famously demonstrated by the Prisoner’s Dilemma, in which two co-suspects each have compelling cause to defect (turn informer) against the other, yet the consequence of both following this strategy is worse than if both had cooperated and remained silent (Axelrod 1984). That is, if your partner remains silent, you will get six months in jail if you are also silent, but you go free by defecting and saddling your partner with a ten-year sentence. If your partner informs on you and you do the same, you each receive five years; if you alone remain silent, you are the sucker and get ten. Defecting is the dominant strategy regardless of your partner’s choice: going free is preferable to six months; five years is preferable to ten. So both players defect, receive five-year sentences, and wish they had remained silent and gotten off with six months. The dilemma is that the individual’s dominant strategy produces a mutually suboptimal result; in this case, fear of the worst-case scenario inhibits beneficial collective action. Understanding the distance between the lack of cooperation implied by the dominant strategy and the mutual benefits of cooperation has been a central concern of social science since Garrett Hardin’s (1968) article “The Tragedy of the Commons.” In this scenario, the dominant strategy of a herder is to put as many animals as possible on common land, even though if everyone were to do the same the land would soon be overgrazed. A few years before, in 1965, Mancur Olson (1971) had published a book in which he characterized this type of problem as “The Logic of Collective Action.”
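
To make the dominant-strategy arithmetic concrete, here is a minimal sketch in Python; the sentence values follow the numbers above, while the payoff table and function are my own illustration rather than part of Axelrod’s analysis.

```python
# Sentences (in years of jail, lower is better) keyed by
# (my choice, partner's choice), following the numbers above.
SENTENCES = {
    ("silent", "silent"): 0.5,  # six months each
    ("silent", "defect"): 10,   # I am the sucker
    ("defect", "silent"): 0,    # I go free
    ("defect", "defect"): 5,    # five years each
}

def best_response(partner):
    """Return the choice that minimizes my sentence, given my partner's choice."""
    return min(("silent", "defect"), key=lambda mine: SENTENCES[(mine, partner)])

# Defection dominates no matter what the partner does...
assert best_response("silent") == "defect"  # 0 years beats 6 months
assert best_response("defect") == "defect"  # 5 years beats 10
# ...yet mutual defection (5 years each) is worse than mutual silence (6 months).
```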

Olson, considering production rather than consumption, asks who would contribute to a common public good when they might just as easily defect and “free ride”? Yet, again, should everyone follow this reasoning, no public goods will be produced. Olson provides an extensive taxonomy of group characteristics that affect this logic, including a group’s size and interdependence, the market’s demand elasticity, the balance of costs and benefits, and a group’s ability to exclude or penalize those who fail to contribute. (Ultimately, “trust” becomes a central element in such group dynamics and might arise in the context of time and reputation, institutional controls, or group norms.)

Around the same time, Robert Trivers (1971) characterized a related problem in animal behavior. In his article “The Evolution of Reciprocal Altruism,” he defined an “altruistic situation” as one in which “one individual can dispense a benefit to a second greater than the cost of the act to himself” (Trivers 1971) and modeled the conditions under which altruistic behaviors were likely to emerge. (Like Olson’s, these conditions relate to the character and extent of social interaction.) Of course, as noted by Frans de Waal (2008), “a return-benefits calculation typically remains beyond the animal’s cognitive horizon,” and altruism itself is likely the result of a more proximate evolved behavior: empathy. (This link between empathy and altruism is hypothesized, outside of the evolutionary context, by Daniel Batson (1991).)

Recently, these two threads of political economy and evolution have been combined in the work of Elinor Ostrom. In “Governing the Commons” she makes a slight digression from a macro-political perspective to note that “communities of individuals have relied on institutions resembling neither the state nor the market to govern some resource systems with reasonable degrees of success over long periods of time” (Ostrom 1990). By studying such institutions she concludes that the dilemma of “common pool resources” might be addressed by eight institutional design principles: clearly defined boundaries, congruence between appropriation/provision rules and local conditions, collective-choice arrangements, monitoring, graduated sanctions, conflict-resolution mechanisms, state recognition of groups’ right to self-organize, and the nesting of enterprises in larger systems.

More recently, Ostrom makes greater use of the evolutionary approach to focus on the emergence of norms (Ostrom 2000). She takes issue with Olson’s (1971) earlier claim that unless the group is small, or there is a way to force individuals to act in their common interest, “rational self-interested individuals will not act to achieve their common or group interests.” She characterizes this as Olson’s “zero contribution thesis” and notes that it contradicts everyday experience; the problem of free riding exists, but community governance regimes do emerge and persist (Ostrom 2000). While it might be “irrational” from the egoist perspective, a significant proportion of people will act cooperatively (e.g., 40-60% of participants will initially contribute to the public good in a finite-round game). This cooperation is affected by factors such as expectations about others and the framing and number of interactions between peers. And, in keeping with Olson, people will expend resources to punish those who make below-average contributions. Hence Ostrom characterizes norms as those values (e.g., reciprocity, fairness, and trustworthiness) that affect the preference for cooperation. If there is a sufficient proportion of “norm-using” players (i.e., conditional cooperators and willing punishers), this “creates an opening for collective action” (Ostrom 2000). This is especially so if there is good information about the trustworthiness of one’s peers. If cooperation has been successfully established, new members will likely be appropriately acculturated. Hence, collective action and its supportive social norms can emerge in an evolutionary context: the gap of the cooperative dilemma can be bridged. Indeed, Ostrom recommends her eight institutional design principles to further such outcomes.
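
To make the dynamic of conditional cooperation concrete, here is a toy simulation in Python; it is my own illustration, not Ostrom’s model, and the population size, thresholds, and initial 50% contribution rate are all assumptions. Each player contributes only if the previous round’s contribution rate met their personal threshold.

```python
import random

# Toy model of conditional cooperation in a repeated public-goods game.
# All parameters are illustrative assumptions, not drawn from Ostrom (2000).
random.seed(1)
N, ROUNDS = 20, 10
# Each player's personal threshold: they contribute only if last round's
# contribution rate was at least this high. Low thresholds approximate
# unconditional cooperators; high thresholds approximate near-defectors.
thresholds = [random.uniform(0, 1) for _ in range(N)]
rate = 0.5  # roughly the 40-60% initial contribution noted above

for t in range(ROUNDS):
    contributors = sum(1 for th in thresholds if rate >= th)
    rate = contributors / N
    print(f"round {t}: {rate:.0%} contribute")
```

Depending on the draw of thresholds, contribution either unravels toward zero or settles at a stable core of conditional cooperators; punishment and good information about peers, in Ostrom’s account, widen that opening for collective action.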

Recently, a number of scholars have applied this literature on collective action to Wikipedia. Johnson (2007) uses Ostrom to characterize vandalism and point-of-view (POV) pushing as collective action problems. Viégas, Wattenberg, and McKeon (2007) argue that Wikipedia’s Featured Article process reflects Ostrom’s first four principles of locality, collective choice (participation), monitoring (accountability), and conflict resolution. Andrea Forte and Amy Bruckman (2008) use all eight of Ostrom’s design principles to evaluate Wikipedia governance and its Biography of Living Persons policy; they argue that Wikipedia has decentralized policy creation, interpretation (via its Arbitration Committee), and enforcement (via administrators), but conclude that the biggest shortfall relative to Ostrom’s principles is the uneven enforcement of policy.

However, these works tend to remain at the institutional level, focusing on community mechanisms for content and membership policy. (Two exceptions are a quantitative analysis of patterns in Wikipedians’ references to policies and guidelines on discussion pages (Beschastnikh, Kriplean, and McDonald 2008) and a characterization of the types of “utterances” used on discussion pages (Goldspink 2009).) If, following Ostrom, we can think of norms as those values (e.g., reciprocity, fairness, and trustworthiness) that affect the preference for cooperation, can we find and characterize such norms in Wikipedia culture? I believe we can, and this is the focus of my work on Wikipedia.

Might we even characterize prosocial norms as a form of morality, in the sense employed by Bowles and Gintis (1998)? Indeed, although it precedes these theorists of collective action by almost two centuries, Kant’s (2005) categorical imperative is a moral response to the collective action dilemma: “I ought never to act in such a way that I couldn’t also will that the maxim on which I act should be a universal law.” Coincidentally, the lesser-known subtitle of Hardin’s famous “Tragedy of the Commons” article is “the population problem has no technical solution; it requires a fundamental extension in morality.” Therefore, I do not think it is a stretch to conclude that Wikipedia collaboration is as much a “moral” problem as a technical one.


Ported/Archived Responses

Joseph Reagle on 2009-12-21

Like you, I do consider these game-theoretic and Ostrom-related investigations of interest and value, particularly with respect to the emergence of norms. I also agree with you that it is problematic to view Wikipedia only in terms of “production” from an economic point of view. Indeed, if I were to simplify Shirky’s “Here Comes Everybody,” it would be that Wikipedia is not so much a form of labor as an expenditure of cognitive surplus in the pursuit of fun and satisfaction.

Sage Ross on 2009-12-21

Thanks for pointing this post out to me.  I don’t know how it slipped under my radar, but you do a great job with it.  I didn’t realize there was so much out there already using Ostrom’s work to understand Wikipedia.

As you can probably tell from my posts, I think these sorts of economic approaches to understanding Wikipedia are interesting and valuable.  But, especially after some recent conversations with Wikipedians on my blog and elsewhere, I find them unsatisfying compared to other approaches.  Maybe the biggest shortcoming in applying Ostrom’s ideas, and others in this vein, is the first step: equating Wikipedia with a common pool of resources in the traditional sense.  Part of the reason this is problematic is that it assumes that contributing to the ‘common pool of resources’ (Wikipedia) is a net cost to contributors (i.e., that it really is a prisoner’s dilemma).

But if contributors are motivated primarily by some combination of fun and audience-seeking, then things start to look very different.  Those who merely read without contributing are no longer free riders; instead, by providing an audience, they actually make editing more attractive.  It sounds like Johnson (2007) may work along these lines, taking negative contribution, rather than simply using the common pool without contributing, as the main problem.  I’ll have to read all the papers you link when I get a chance.

Richard James on 2009-06-08

Hey Joseph,

Before I say anything else, I’d like to inquire where I might find more of your work on the topic of Wikipedia and the emergence of collaborative culture.  Based upon the overlap between the literature review of this post and my own, I have a feeling that we are dealing with much the same theoretical issues (only in slightly different settings).  So I would be eager to see what else you’ve put together.  That being said, here are some of my thoughts:

Much in line with the conclusion of this post, your quote today in the New York Times [http://www.nytimes.com/2009/06/08/technology/internet/08link.html?ref=business] mentions the presence of implicit rules or norms governing individual behavior in Wikipedia.  However, this quote (and blog entry) seems oddly juxtaposed with the central message of the article: that Wikipedia has governance.  With ArbCom recently making a number of rulings regarding some hotly contested pages on Wikipedia, with enforceable bans on specific IP addresses, the article seems to indicate that Wikipedia is neither a pristine example of the emergence of cooperative behavior from pre-formalized norms nor an environment where cooperative behavior can be maintained completely unmanaged by coercive entities.  At least not on the fringes of that environment at the moment.

Although institutions and norms can clearly co-exist, this post seems to want to differentiate between the two in order to emphasize the importance of norms, and it seems to do so by conflating ‘technical’ solutions with institutions and ‘moral’ solutions with norms.  However, I’m not sure that the terminology used by Hardin and other social scientists studying population growth, fishery management, and other commons issues forty years ago can be used as interchangeably with current nomenclature as this post seems to take for granted.

Unless I’m mistaken, for Hardin, technical solutions relate to efforts to maximize crop yields, develop reliable contraceptives, and other scientific endeavors focused on providing solutions that facilitate, rather than constrain, market behavior.  Since tragedies of the commons have no technical solutions, Hardin concludes that some ‘moral’ effort to create mutual coercion, mutually agreed upon by the majority, is necessary to create sustainable exploitation of natural resources and prevent resource extinction, human misery, and the collapse of environments around the world.  In this sense, ‘morality’ refers to a condition of governance, regulation, or management of common resources which need not be institutionalized formally, but certainly requires some coercive force to prevent defection.  This coercive force may be internalized as Kant’s categorical imperative or operate socially through a system of shame, ostracism, or approbation; but, since each of these strategic responses is vulnerable to short-term self-serving behavior, this coercive force tends toward institutionalization over time in order to be comprehensive.

That norms and institutions share coercive similarities seems especially important once we begin to look at the emergent behavior of commons systems over time.  As this particular article illustrates, with Wikipedia’s expansion and complexity, informal norms have become insufficient for dealing with some of the disputes, vandalism, and predatory behavior that increasingly threaten the commons.  In their place, formal institutions have been created with greater transparency, democratic qualities, and at least four of the eight Ostrom design principles, institutions which have the capacity to solve intractable problems through more coercive measures.

Since the issue in question [Scientology] resides on the margins of Wikipedia, what is amazing is the relative resilience to invasive behavior of the pages with the most traffic.  There certainly exists a norm within the community of readers/editors to self-police and maintain the standards of information through self-sacrifice, despite the capacity for anyone to alter a page.  However, all editing behavior is ultimately constrained by protocols, filters, and restoration points, and official sanctions exist to circumscribe the excessively unruly behavior of participants or to prevent the editing of contested pages.  So it is fair to say Wikipedia is far from being only informally policed.

However, the fair, or unfair, distinction which is drawn between norms and institutions seems only to skirt the central thrust of Ostrom’s argument: that values arise from the structure of the system.  It is when mechanisms of monitoring are available, due to, say, limited group size, that trustworthiness becomes possible; likewise, sustained interaction makes the value of reciprocity possible.  In discussing the future of Wikipedia ourselves, we are in some sense debating the way that it should be structured so as to maintain the commons for everyone.  And where the intrinsic structure of user interaction is insufficient to establish the values of cooperation, or, in this particular case, to outweigh the ideological commitments to political, religious, or national affiliation, the recourse is to form a coercive apparatus that can.

I am suggesting that institutions evolve as the complexity of the system evolves and participants with a preference for cooperative behavior are forced to deal with behavior which they cannot otherwise control.  To prevent predatory behavior, cooperators create formal procedures meant to limit its effects.  However, this transition from norm-driven to institutionally constrained interaction is nothing more than a (usually ad hoc) attempt to restructure the environment to restore the evolutionary advantage of cooperative behavior, much the same way that regimes of ‘morality’ attempt to structure behavior.

Institutions restructure behavior in the environment to induce cooperation and limit the effects of predatory participants, but, in so doing, they also eliminate many of the conditions which made ‘morality’-based informal inducements to cooperation function properly.  The apparatuses associated with constructing and maintaining institutions quickly reorient the nature of association between participants in the system, making it prone to future invasion by a different kind of self-serving behavior.  It is no longer the poor fisherman who represents a threat to the commons, but the official who administers its oversight.  In institutionalized cooperation, it is the regulatory agencies that become the largest threat to the commons, through co-optation by the industries that they regulate.

This is why Hardin returns time and time again to the dilemma: “Quis custodiet?”  Without any sufficient ‘technical’ solution, I might add.  And why, ultimately, norms and institutions cannot be disentangled when analyzing the emergence (and dissolution) of cooperative behavior.
