Quality versus age of Wikipedia’s Featured Articles

There’s been a brief flurry of interest on Wikipedia in this article, published last week:

Evaluating quality control of Wikipedia’s feature articles – David Lindsey.

…Out of the Wikipedia articles assessed, only 12 of 22 were found to pass Wikipedia’s own featured article criteria, indicating that Wikipedia’s process is ineffective. This finding suggests both that Wikipedia must take steps to improve its featured article process and that scholars interested in studying Wikipedia should be careful not to naively believe its assertions of quality.

A recurrent objection to this has been that Lindsey didn’t take account of the age of articles – partly because article quality can degrade over time (for an article that starts at a high standard, the average later contribution is likely to be of lower quality than the text already there), and partly because the relative stringency of what constitutes “featured” has changed over time.

The interesting thing is, this partly holds and partly doesn’t. The article helpfully “scored” the 22 articles reviewed on a somewhat arbitrary ten-point scale; the average was seven, which I’ve taken as the cut-off point for acceptability. If we graph quality against time – time being defined as the last time an article passed through the “featuring” process, either for the first time or as a review – then we get an interesting graph:

Here, I’ve divided them into two groups; blue dots are those with a rating greater than 7, and thus acceptable; red dots are those with a rating lower than 7, and so insufficient. It’s very apparent that these two cluster separately; if an article is good enough, then there is no relation between the current status and the time since it was featured. If, however, it is not good enough, then there is a very clear linear relationship between quality and time. The trendlines aren’t really needed to point this out, but I’ve included them anyway; note that they share a fairly similar origin point.
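For anyone who wants to reproduce a plot along these lines, here’s a minimal sketch, assuming the 22 scores and featuring dates have been pulled into a small CSV; the file and column names are invented for illustration.

    # Minimal sketch of the quality-vs-age scatter plot described above.
    # Assumes a hypothetical CSV with columns "score" (the 0-10 rating)
    # and "days_since_featured" (days since the article last passed FA/FAR).
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("fa_scores.csv")            # hypothetical file
    good = df[df["score"] >= 7]                  # the "acceptable" cluster
    poor = df[df["score"] < 7]                   # the apparently decaying cluster

    plt.scatter(good["days_since_featured"], good["score"], c="blue", label="score >= 7")
    plt.scatter(poor["days_since_featured"], poor["score"], c="red", label="score < 7")

    # Least-squares trendline for each group.
    for subset, colour in ((good, "blue"), (poor, "red")):
        slope, intercept = np.polyfit(subset["days_since_featured"], subset["score"], 1)
        xs = np.linspace(0, subset["days_since_featured"].max(), 50)
        plt.plot(xs, slope * xs + intercept, color=colour, linestyle="--")

    plt.xlabel("Days since last featured or reviewed")
    plt.ylabel("Score (out of 10)")
    plt.legend()
    plt.show()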

Two hypotheses could explain this. Firstly, the quality when first featured varies sharply over time, but most older articles have since been brought up to “modern standards”. Secondly, the quality when first featured is broadly consistent over time, and most articles remain at that level, but some decay, and that decay is time-linked.

I am inclined towards the second. If it was the first, we would expect to see some older articles which were “partially saved” – say, one passed when the average scoring was three, and then “caught up” when the average scoring was five. This would skew the linearity of the red group, and make it more erratic – but, no, no sign of that. We also see that the low-quality group has no members older than about three years (1100 days); this is consistent with a sweeper review process which steadily goes through old articles looking for bad ones, and weeding out or improving the worst.

(The moral of the story? Always graph things. It is amazing what you spot by putting things on a graph.)

So what would this hypothesis tell us? Assuming our 22 are a reasonable sample – which can be disputed, but let’s grant it – the data is entirely consistent with all of them being of approximately the same quality when they first became featured; so we can forget about it being a flaw in the review process – it’s likely to be a flaw in the maintenance process.

Taking our dataset, the population of featured articles falls into two classes.

  • Type A – quality is consistent over time, even up to four years (!), and they comply with the standards we aim for when they’re first passed.
  • Type B – quality decays steadily with time, leaving the article well below FA status before even a year has passed.

For some reason, we are doing a bad job of maintaining the quality of about a third of our featured articles; why, and what distinguishes Type B from Type A? My first guess was user activity, but no – of the seven Type B articles, only one was nominated by a user who has since effectively retired from the project.

Could it be contentiousness? Perhaps. I can see why Belarus and Alzheimer’s Disease may be contentious and fought-over articles – but why Tōru Takemitsu, a well-regarded Japanese composer? We have a decent-quality article on global warming, and you don’t get more contentious than that.

It could be timeliness – an article on a changing topic can be up-to-date in 2006 and horribly dated in 2009 – which would explain the problem with Alzheimer’s, but it doesn’t explain why some low-quality articles are on relatively timeless topics – Takemitsu or the California Gold Rush – and some high-quality ones are on up-to-date material such as climate change or the Indian economy.

There must be something linking this set, but I have to admit I don’t know what it is.

We would be well-served, I think, to take this article as having pointed up a serious problem of decay, and start looking at how we can address that, and how we can help maintain the quality of all these articles. Whilst the process for actually identifying a featured article at a specific point in time seems vindicated – I am actually surprised we’re not seeing more evidence of lower standards in the past – we’re definitely doing our readers a disservice if the articles rapidly drop below the standards we advertise them as holding.

Demographics in Wikipedia

There’s a lengthy internal debate going on in Wikipedia at the moment (see here, if you really want to look inside the sausage factory) about how best to deal with the perennial issue of biographies of living people, of which there are about 400,000.

As an incidental detail to this, people have been examining the issue from all sorts of angles. One particularly striking graph that’s been floating around shows the number of biographical articles whose subjects were born, or died, in each year of the past century:


(Graph by User:Carcharoth)
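(For the curious: counts like these can be pulled together from the per-year “YYYY births” and “YYYY deaths” categories via the MediaWiki API’s categoryinfo property. The sketch below is my guess at the method, not the script actually used for this graph.)

    # Rough sketch: count articles in each "YYYY births" / "YYYY deaths"
    # category via the MediaWiki API. An assumed reconstruction, not the
    # script used to produce the graph above.
    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def category_size(title):
        """Number of pages in a category such as 'Category:1950 births'."""
        params = {
            "action": "query",
            "prop": "categoryinfo",
            "titles": title,
            "format": "json",
        }
        data = requests.get(API, params=params).json()
        for page in data.get("query", {}).get("pages", {}).values():
            return page.get("categoryinfo", {}).get("pages", 0)
        return 0

    births = {year: category_size(f"Category:{year} births") for year in range(1900, 2010)}
    deaths = {year: category_size(f"Category:{year} deaths") for year in range(1900, 2010)}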

As the notes point out, we can see some interesting effects here. The first, and most obvious, is “recentism”; people who are alive and active in the present era are more likely to have articles written about them, so you get many more very recent deaths than (say) deaths from forty years ago. Likewise, there is a spike around the late 1970s / early 1980s in the births of people who are just coming to public attention – in other words, people in their late twenties or early thirties are more likely to have articles written about them.

If we look back with a longer-term perspective, we can see that the effects of what Wikipedia editors have chosen to write about diminish, and the effects of demographics become more obvious. There are, for example, suggestions of prominent blips in the deathrate during the First and Second World Wars, and what may be the post-war baby boom showing up in the late 1940s.

So, we can distinguish two effects; underlying demographics, and what people choose to write about.

(In case anyone is wondering: people younger than 25 drop off dramatically. The very youngest are less than a year old, and are invariably articles about a) heirs to a throne; b) notorious child-murder cases; c) particularly well-reported conjoined twins or other multiple births. By about the age of five you start getting a fair leavening of child actors and the odd prodigy.)

Someone then came up with this graph, which is the same dataset drawn from the French Wikipedia:


(Graph by User:Pymouss)

At a glance, they look quite similar, which tells us that the overall dynamic guiding article-writing is broadly the same in both cases. That may not sound like much of a finding, but different language editions can vary quite dramatically in things like standards for what constitutes a reasonable topic, so it is worth noting. French has a more pronounced set of spikes in WWI, WWII, and the post-war baby boom, though, as well as a very distinctive lowering of the birthrate during WWI. These are really quite interesting, especially the latter, because they suggest we’re seeing a different underlying dynamic. And the most likely underlying dynamic is, of course, that Francophones tend to prefer writing about Francophones, and Anglophones tend to prefer writing about Anglophones…

So, how does this compare in other languages? I took these two datasets, and then added Czech (which someone helpfully collected), German and Spanish. (The latter two mean we have four of the five biggest languages represented. I’d have liked to include Polish, but the data was not so easily accessible.) I then normalised it, so each year was a percentage of the average for that language for that century, and graphed them against each other:

What can we see from these? Overall, every project has basically the same approach to inclusion: a steady ramp-up over time, a noticeable spike in people who died during WWII or in the past two decades, and a particular interest in people who are about thirty and in the public eye. There is one important exception to this last case – German, which has a flat birthrate from about 1940 onwards and apparently no significant recentism in this regard. The same is true of Czech to a limited degree. (Anecdotally, I believe the same may be true of Japanese, but I haven’t managed to gather the data yet.)
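As an aside, the normalisation step mentioned above is trivial to reproduce; here’s a sketch, assuming the raw per-year counts for the five languages have already been collected into a single table (the file and column layout are invented).

    # Sketch of the normalisation: each year's count expressed as a
    # percentage of that language's average over the century, so the five
    # projects can be plotted on a common scale. Hypothetical file layout:
    # one row per year, one column per language.
    import pandas as pd
    import matplotlib.pyplot as plt

    counts = pd.read_csv("births_by_year.csv", index_col="year")

    # Divide each column by its own mean; 100 = that language's century average.
    normalised = counts / counts.mean() * 100

    normalised.plot()
    plt.ylabel("Articles per year (% of language's century average)")
    plt.show()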

The WWII death spike is remarkably prominent in German and Czech, moderately prominent in French, and apparent but less obvious in English and Spanish. This could be differential interest in military history, where biographies tend to have deaths clustered in wartime, but it also seems rational to assume this reflects something of the underlying language-biased data. More Central Europeans died in WWII than Western Europeans; proportionally fewer died in the Anglosphere because English-speaking civilian populations escaped the worst of it, and the Spanish-speaking world was mostly uninvolved. The deaths in WWI are a lot more tightly clustered, and it’s hard to determine anything for sure here.

The other obvious spike in deaths is very easy to understand on either interpretation of the cause: it’s in 1936, in Spanish, coinciding with the outbreak of the Civil War. Lots of people to write articles about there, and people less likely to be noted outside Spain itself.

I mentioned above that (older) birthrates are more likely to represent an underlying demographic reality than deathrates are; localised spikes in deaths could be produced by a set of editors who choose to write on specific themes. You’d only get a birthdate spike, it seems, if someone were explicitly choosing to write about people born in a specific period, and it’s hard to imagine that happening for historical subjects. Historically linked people are grouped by when they’re prominent and active, which happens at a variable point in their lives, so someone writing about a specific group of people is likely to “smear” their birthdates out into a wide distribution.

So, let’s look at the historic births graph and see if anything shows up there. German and French show very clear drops in the birth rate between 1914 and about 1920, rounded U-shaped falls. German appears to run consistently higher than the other projects in birthrate through the 1930s and 1940s, though as the data is normalised against each language’s own average this may be misleadingly inflated: German doesn’t have the post-1970 bulge most languages do, which drags its average down and lifts everything else relative to it. The very sharp drop in births in 1945 is definitely not an artefact, though; you can see it to a lesser degree in the other languages, except English, where it’s hardly outside normal variance.

So, there does seem to be a real effect here; both these phenomena seem predictable as real demographic events, and the difference between the languages is interpretable as different populations suffering different effects in these periods and being represented to different degrees in the selection of people by various projects.

The next step would be, I suppose, to compare those figures to known birth and death rates, both globally and regionally, over the period; this would let us estimate the various degrees of “parochialism” involved in the various projects’ coverage of people, as well as the varying degrees of “recentness” which we’ve seen already. Any predictions?
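A very crude first cut at that comparison might look like the sketch below; the file and column names are placeholders, since the real work would be finding comparable global or regional birth and death series.

    # Crude sketch of the proposed comparison: divide each project's
    # normalised per-year share by the equivalent share from a real
    # demographic series, giving an "over-representation" factor per year.
    # File and column names are placeholders.
    import pandas as pd

    wiki = pd.read_csv("wiki_births_normalised.csv", index_col="year")        # % of century average, per language
    actual = pd.read_csv("reference_births.csv", index_col="year")["births"]  # real births per year
    actual_norm = actual / actual.mean() * 100

    over_representation = wiki.div(actual_norm, axis=0)

    # e.g. how strongly each project reflects the sharp 1945 dip
    print(over_representation.loc[1945])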