The untested prejudice

Sanger's latest article, Who Says We Know: On the New Politics of Knowledge, is an argument that meritocracy, including the authority accorded to credentialed experts, is preferable to "epistemic egalitarianism." He writes that Wikipedia defenders argue for epistemic egalitarianism, or "dabblerism," on the basis of pragmatics (i.e., experts don't have the time or interest, or the crowd can be wise) and fairness. However, Sanger objects to these claims as either inaccurate (i.e., experts can be included, or the crowd is often dumb) or inferior to a genuine meritocracy. He notes that Wikipedia, in a conflicted and twisted way, also "likes" meritocracy. But its meritocracy is based not on what you know but on how much time you spend on the project. And academically credentialed expertise is accepted as a form of merit, but only with respect to citation. This leads him to ask, "If Wikipedians actually believe that the credibility of articles is improved by citing things written by experts, will it not improve them even more if people like the experts cited are given a modest role in the project?" Sanger concludes that Wikipedia has an "untested" prejudice against experts.

The essay has a number of interesting points. For example, Sanger claims Wikipedia's parity with Encyclopedia Britannica in the famous Nature study was not because of many Wikipedians' backgrounds in technology and science, but because the science domain is epistemologically more "objective" and consequently an easier topic on which to collaborate. This makes me think of The Story of Webster's Third: Philip Gove's Controversial Dictionary and Its Critics, in which the Third's editor, Philip Gove, campaigned for a new standard of objectivity, much to the chagrin of the editor preparing the wine guidelines (Morton 1994:92). I am no wine expert, but I wonder how Wikipedia's wine articles fare.

In any case, it is the claim that Wikipedia has an untested prejudice that I find most perplexing. If Wikipedia has a prejudice, it is towards "dabblerism" because of the often-cited failure of the Nupedia "test" -- which perhaps unfairly taints "expertism" -- and the success of Wikipedia. Sanger will keep making his expertise argument, and Wikipedians will keep making their crowd arguments -- and despite all the words being spent, neither party precludes the participation of crowds, in Sanger's case, nor of experts, in Wikipedia's case. Ultimately, this argument will be won by the test of "running code" -- in this case a widely used reference work. Will Sanger's new project provide enough added value to sustain itself? Or will Wikipedia develop means of rating the quality of articles or contributors? Time, not arguments, will ultimately tell.


Ported/Archived Responses

Joseph Reagle on 2007-04-26

Thank you all for these comments. I made some responses off-blog, but what I will share here is the presumption of variance between the disciplines. First, I suppose we should step back and ask if it is indeed more difficult to collaborate in the humanities than in the sciences. I wonder if there's any empirical work on this. I've also been doing a lot of reading on authorship and plagiarism and expect that there may be differences in the import of autonomous and original authorship across the disciplines. So instead of the background of the contributors (the common explanation), or the epistemic character of a knowledge domain (Sanger's theory), there might also be cultural factors determining how peer collaboration is rewarded. For example, in the foreword to Perspectives on Plagiarism and Intellectual Property in a Postmodern World, Andrea Lunsford comments on "how both our tenure reviews were affected (to say the least) by our continued collaborations; how we had difficulty getting funding for collaborative research." I expect this would be quite different in the scientific domains, particularly because some of the hottest problems right now require so much capital investment, and the resulting papers will often have dozens of authors.

Said Kassem Hamideh on 2007-04-25

I am fed up with this very scientistic approach that Sanger uses to evaluate the quality of knowledge. What is it good for anyway, besides cross-examining that which is empirical and barren? Who wrote the most accurate article on the advent of the garden plow, he wonders. I'm wondering, which of these accuracy hounds is considering the vastly critical area of knowledge deemed "socially-contingent" when they ponder over the value that Wikipedia has offered us?

Sage on 2007-04-26

Joseph, I think you hit on something important by bringing up the funding of collaborative research.  The sciences themselves are so much more collaborative than the humanities (in practice, if not inherently).  The question is, what would humanities collaboration look like?  One answer is, something like Wikipedia.  Even if collaboration is not overwhelmingly apparent within a single article, there is a less-remarked-upon level of collaboration between articles and among groups of articles, to make them jibe with each other and speak to each other.  I think this happens quite a lot on Wikipedia.

Said Kassem Hamideh on 2007-04-30

I actually read the article this time. He does hedge against most of my criticisms. His essay then has a much smaller point. He really is only arguing for an elevated role for experts. I do object to some of his language though. What he calls Wikipedia's "knowledge egalitarianism" supposedly rejects the value experts bring to the table. Is this really so?
What about the mechanism that allows everyone to vote for the best ideas? Wouldn't that provide a natural advantage for experts, who have better command of and access to the most persuasive arguments? Certainly Sanger couldn't be saying that non-experts can't be entrusted with, at the very least, voting on what is a compelling vs. an uncompelling case.

Also, I disagree that experts aren't already involved in Wikipedia projects. I bet there is evidence of expertise everywhere.

Biella Coleman on 2007-04-25

I am intrigued, Joe, by "This makes me think of The Story of Webster's Third: Philip Gove's Controversial Dictionary and Its Critics, in which the Third's editor, Philip Gove, campaigned for a new standard of objectivity, much to the chagrin of the editor preparing the wine guidelines (Morton 1994:92)."

What was the new standard?

Thanks for this. I agree: time will reveal in ways that arguments can't, but for us researchers, thankfully, there are arguments to unpack :-)

Sage on 2007-04-26

Thanks for this post; I couldn't bring myself to slog through Sanger's essay, so I'm glad for the rundown.

One thing to consider with the quality difference between science and humanities content on Wikipedia is that the ratio of humanists to humanities topics is much lower than the ratio of scientists to science topics.  In my experience, non-expert Wikipedians are capable of doing a great job in the humanities, but there are often fewer editors per topic.  Also, I suspect that humanities articles being tougher to write and to collaborate on has less to do with objectivity than with the kinds of work involved: translating the intentionally concise and formalized knowledge of science versus distilling a sprawling body of literature that explores many different aspects of a topic into a focused article.  There may be strong consensus among humanists about a topic, but it will still be harder to write and to collaborate on a Wikipedia article than for a typical science topic.
