Article ratings and expectations

October 1st, 2010

I am working late and procrastinating, so a quick note on the recent Wikipedia article feedback pilot:

It appears that registered users are “tougher” graders than anonymous users. The gap is widest for “well sourced” (mean 3.7 for anons vs. 2.8 for registered users) and “complete” (3.5 vs. 2.7); interestingly, the means for “neutral” are almost identical.

Anecdotally, this fits well with what I’ve noticed about external feedback in the past: when someone writes in, it’s usually to report that “X is wrong” rather than that “the article on Y is atrocious”. Once X is fixed, people seem quite happy, even if the article itself is still a mess of cleanup tags and ugly layout.

Presumably, casual readers have low expectations of Wikipedia’s average quality: they accept bad (or terse) articles as par for the course and are pleasantly surprised by decent ones. Editors, meanwhile, are more familiar with the better articles and apply more aspirational standards; to an editor, a merely “tolerable” article is a deficient one.

On the matter of sourcing, I’d take a wild guess that if we went down to the article level, we’d see a lot of this driven by whether articles have footnotes. Readers wanting a general overview may well be happy with general references or further-reading-style external links; editors are more focused on the text, and more likely to prioritise specific footnoting of individual points.
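For what it’s worth, checking that guess would be straightforward if the per-rating data had an article-level footnote flag joined on. A minimal sketch of the comparison, where ratings.csv, has_footnotes and well_sourced are all made-up names rather than anything the pilot actually publishes:

```python
import pandas as pd

# Hypothetical input: one row per rating, with a boolean has_footnotes
# flag for the rated article joined on from the article text.
ratings = pd.read_csv("ratings.csv")

# Compare mean "well sourced" scores for articles with and without
# footnotes, plus the number of ratings behind each mean.
print(ratings.groupby("has_footnotes")["well_sourced"].agg(["mean", "count"]))
```

If the guess is right, the with-footnotes group should show a noticeably higher mean among anon raters in particular.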

The discrepancy in perceptions of completeness may work the same way: if you expect a terse, cruddy article, then 5k of competently written text seems relatively comprehensive. If you expect a detailed article with layout and images, the same 5k of text is a bit of a damp squib.

A difference in expectations is probably driven partly by involvement – if you’re an editor, you’re more likely to expect good things and to see room for improvement everywhere – but also partly by experience and estimation of quality. Which prompts the thought: do readers and editors read “different Wikipedias”? Do involved editors spend more time, on average, looking at or working with higher-quality text than casual readers do? It’s an interesting question, but I’m not immediately sure how to quantify it. Perhaps the ratio of raw pageviews to edits on an article, or of article pageviews to talk-page views?
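If anyone felt like playing with this, a back-of-the-envelope version of those two ratios might look like the sketch below. The articles.csv input and every column name are invented for illustration; the real numbers would have to come from the pageview and edit logs:

```python
import pandas as pd

# Hypothetical input: per-article totals for pageviews, edits,
# and talk-page views over the same period.
articles = pd.read_csv("articles.csv")

# Readers per edit: high values flag pages read far more than worked on.
# clip(lower=1) guards against dividing by zero on never-edited pages.
articles["views_per_edit"] = articles["pageviews"] / articles["edits"].clip(lower=1)

# Article views per talk-page view: a crude proxy for editor attention.
articles["views_per_talk_view"] = (
    articles["pageviews"] / articles["talk_pageviews"].clip(lower=1)
)

# The most "reader-heavy" articles, by the first ratio.
print(articles.sort_values("views_per_edit", ascending=False).head())
```

Articles at the top of that list would be the ones read almost exclusively by people who never touch them, which is roughly the “different Wikipedias” population I have in mind.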
