Polarizing comments

While catching up on podcasts this weekend I was surprised that news of Popular Science "Shutting Off Our Comments" had traveled so far. On September 24, PopSci explained they were disabling comments because trolls and spambots were overwhelming their efforts at "fostering lively, intellectual debate" (Labarre2013wws). They justified this decision by pointing to a New York Times op-ed by Dominique Brossard and Dietram Scheufele, who wrote about a study of theirs. The story was mentioned on nearly everything I listened to, including Tekzilla and the Skeptics' Guide to the Universe (SGU). I was most disappointed with SGU, since one of its themes is how research is often distorted in the media. I thought it would be useful to trace the hyperbolic mutation of this idea.

In "The 'Nasty Effect': Online Incivility and Risk Perceptions of Emerging Technologies," the authors test whether exposure to uncivil comments (including name-calling) below an article on nanotechnology affected readers' views (AndersonEtal2013neo). Readers were asked to rate their perceived level of risk of nanotechnology on a five-point scale from "1: Benefits far outweigh the risks" to "5: Risks far outweigh the benefits." The authors tested a number of hypotheses, the first of which was that uncivil comments below a newspaper blog post about nanotech correlated with an increase in risk perception. They were unable to support this hypothesis: "Our findings did not demonstrate a significant direct relationship between exposure to incivility and risk perceptions" (AndersonEtal2013neo, p. 8). They did find some relationship between incivility and risk perception in two specific populations: those with an existing opinion on nanotech and those with a higher level of reported religiosity. If you look at Figure 1 you do see polarization. Among those who previously had low support, their sense of risk increased from 3.25 to 3.5 when exposed to uncivil comments. Conversely, among those who supported nanotech, their sense of risk decreased from 3.0 to 2.85 when exposed to uncivil comments. That's about a 3-5% shift in risk perception within the five-point scale among proponents and opponents. Similarly, in Figure 2, those with high religiosity went from a civil 3.14 to an uncivil 3.28 sense of risk, a 3% shift within the five-point scale. These are notable differences in the aggregate, but any individual would be hard-pressed to explain such a difference. That is, if a 3 means the risk is balanced with the benefit, what does a shift from 3.25 to 3.5 mean?

When the authors tested how much everything they measured (e.g., incivility, demographics, media use, and science support) contributed to the variation in risk perception, their model explained 17% of that variation. Hence, this is an interesting finding, with a notable (but not earth-shattering) effect size, among two specific populations, on the specific topic of nanotech.
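The percentage shifts above are simple arithmetic: the absolute change between the civil and uncivil conditions, divided by the five points of the risk scale (the convention used in this post). A quick sketch, using the group means as read from the study's Figures 1 and 2:

```python
# Back-of-the-envelope check of the shifts discussed above.
# Values are the group means read from Figures 1 and 2 of the study;
# "percent of scale" = absolute shift / 5 points, as computed in this post.

figures = {
    "low prior support (Fig. 1)":  (3.25, 3.50),
    "high prior support (Fig. 1)": (3.00, 2.85),
    "high religiosity (Fig. 2)":   (3.14, 3.28),
}

for group, (civil, uncivil) in figures.items():
    shift = abs(uncivil - civil)
    print(f"{group}: {shift:.2f} points, ~{shift / 5:.0%} of the scale")
```

That yields roughly 5%, 3%, and 3%, which is where the "3-5% shift" figure comes from.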

In February of this year, two of the authors published a report of this then-forthcoming work in Science, in which they write:

Disturbingly, readers' interpretations of potential risks associated with technology described in the news article differed significantly depending only on the tone of the manipulated reader comments posted with the story. Exposure to uncivil comments (which included name-calling and other non-content-specific expressions of incivility) polarized the views among proponents and opponents of the technology with respect to its potential risks.... In other words, just the tone of the comments following balanced science stories in Web 2.0 environments can significantly alter how audiences think about the technology itself. (BrossardScheufele2013snm, p. 41)

Here, they correctly talk about polarization among proponents and opponents of a specific technology, but they describe the effect as "significant" and speak of audiences generally.

A month later, the same two authors published an op-ed in the New York Times in which they write:

Comments from some readers, our research shows, can significantly distort what other readers think was reported in the first place.... In the civil group, those who initially did or did not support the technology -- whom we identified with preliminary survey questions -- continued to feel the same way after reading the comments. Those exposed to rude comments, however, ended up with a much more polarized understanding of the risks connected with the technology. (BrossardScheufele2013tss)

Now we are reading "significantly distort" and "much more polarized"; this for a roughly 5% polarization among opponents and proponents within the five-point scale, a difference most individuals would be hard-pressed to notice or explain.

This then manifests in hyperbolic media reports. On this weekend's podcasts, Tekzilla reports "Comments kill the Internet," and on the SGU Rebecca Watson says the study that inspired PopSci's decision "shows pretty convincingly that people are grossly affected by the comments they read on science news articles...." This story has legs because its narrow, circumscribed findings speak to a larger phenomenon with which we all have experience: Internet comments often suck. Yet recall that the authors of the study concluded, "Our findings did not demonstrate a significant direct relationship between exposure to incivility and risk perceptions."
