Gender and BLPs on Wikipedia, redux

Back in 2019 I generated some data on how the English Wikipedia’s biographies of living people (BLPs) broke down by gender, and how that intersected with creation and deletion rates. The headline figures were that:

  • There was a significant difference between the gender split for all biographies, at 17.8% women, and for biographies of living people (BLPs), at 22.7%.
  • In 2009, around 20% of existing BLPs were on women. As time went on, the share of BLPs on women increased slowly, by perhaps a quarter of a percentage point per year.
  • In 2009, around 20% of newly created BLPs were on women. In about 2012, this kicked up a gear, rising above the long-term average – first to about 25%, then peaking around 33% before falling back a little.
  • BLP articles on women were more likely to be nominated for deletion until about 2017, when the effect disappeared.

One thing that was raised during the subsequent discussion was that a lot of the skew by gender was potentially linked to subject area – I was able to identify that for athletes (defined broadly, all sports players) the articles were much more likely to be about men. I didn’t investigate this much further at the time. Last week, I was reminded about this, and I’ve been looking at the numbers again. It brought up two interesting divergences.

Please don’t make me read all this

Okay – but you’ll miss the graphs. In summary:

English Wikipedia has more women in recent cohorts (about 25% of living people born since the seventies), and far more men among athletes. Since athletes make up a staggeringly high share of the articles on younger subjects, the gender split among non-athletes is much more balanced – a little under a third female overall, but breaking 50% among the younger cohorts.

Still with me? Let’s start. Sorry about the spoilers.

Time and tide

The first phenomenon is very straightforward: while the overall percentage across all people is around 25% women, how that is distributed over time varies. In general, there is a steady rise until about the 1970s; for those born from the 1970s onwards, the generation who are currently in their active working lives, the level is relatively stable at around 25% women.

The exception is those born in the 1920s (where it sits at 26%) – presumably because, at these ages, female life expectancy is significantly higher than male, and so the proportion of women begins to rise.

One surprising outcome, however, is that the share of living people with no recorded age (green) is much more female than the average. This is a large cohort – there are in fact slightly more women in it than in any individual decade. I believe that it skews young – in other words, were this information known, it would increase the share of women in recent decades – but it is hard to find a way to confirm this. This issue is discussed in more detail below.

(Those born in the 2010s/20s and in the 1900s/10s are omitted – the four groups have a total of 175 articles, while the cohorts shown range from 5,000 to 170,000 – but the levels are around 50%. This is likely due to life expectancy in the oldest cohorts, and the fact that the people in the youngest cohorts are mostly notable at this point as being “the child of someone famous” – which you would broadly expect to be independent of gender.)

The percentages shown here are of the total male + female articles, but it is also possible to calculate the share of people who have a recorded gender that is not male/female. These show a very striking rise over time, though it should be cautioned that the absolute numbers are small – the largest single cohort is the 1980s with 345 people out of 170,000.

Sports by the numbers

The original question was what effect athlete articles have on the overall totals. It turns out… to be very striking.

A few things are immediately apparent. The first is that the share of athletes is very substantial – they account for only around a quarter of people born in the 1950s, but 85-90% of people born in the 1990s/2000s.

The second is that those athletes are overwhelmingly men – among the 1950s cohort, only about 10% of those athletes are female, and even by recent years it is only around 20%. This means that if we look purely at the non-athlete articles, the gender split becomes a lot more balanced.

Across all articles, it is around 32% female. But among living non-athletes, born since 1990, the gender balance is over 50% female.

This is a really amazing figure. I don’t think I ever particularly expected to see a gender analysis on Wikipedia that would break 50%. Granted, the absolute numbers involved are low – as is apparent from the previous graph, “non-athletes born in the 1990s” is around 22,000 people, and “born in the 2000s” is as low as 2,500 – but it’s a pretty solid trend and the total numbers for the earlier decades are definitely large enough for it to be no anomaly.

(Eagle-eyed readers will note that these do not quite align with the numbers in the original linked discussion – those were a couple of points lower in recent decades. I have not quite worked out why, but I think this was an error in the earlier queries; possibly it was counting redirects?)

One last detail to note: the “date missing” cohort comes out at over 90% non-athletes. Presumably this is because an athlete’s exact age is usually significant – it is tied to, for example, when they started playing professionally – and so is easily publicly available.

Methodology: the thousand word footnote

Feel free to let your eyes glaze over now.

These numbers were constructed mostly using the petscan tool, and leveraging data from both English Wikipedia and Wikidata. From Wikipedia, we have a robust categorisation system for year/decade of birth, and for whether someone is a living person. From Wikidata, we have fairly comprehensive gender data, which Wikipedia doesn’t know about. (It also has dates of birth, but it is more efficient to use WP categories here). So it is straightforward to produce intersection queries like “all living people marked as 1920s births and marked as female” (report). Note that this is crunching a lot of data – don’t be surprised if queries take a minute or two to run or occasionally time out.

To my surprise, the report for “living people known to be female” initially produced a reliable figure, but the one for “living people known to be male” produced an undercount. (I could validate this by checking against some small categories where I could run a report listing the gender of every item.) The root cause seemed to be a timeout in the Wikidata query – I was originally looking for { ?item wdt:P31 wd:Q5 . ?item wdt:P21 wd:Q6581097 } – items known to be human with gender male. Tweaking this to be simply { ?item wdt:P21 wd:Q6581097 } – items with gender male – produced a reliable figure. Similarly, we had the same issue when trying to get a total for all items with any reported gender – simply { ?item wdt:P21 ?val } works.

Percentages are calculated as percentage of the number of articles identified as (male + female), rather than of all BLPs with a recorded gender value or simply of all BLPs. There are good arguments for either of the first two, but the former is simpler (some of my “any recorded gender value” queries timed out) and also consistent with the 2019 analysis.

A thornier problem comes from the sports element. There are a number of potential ways we could determine “sportiness”. The easiest option would be to use Wikidata occupation and look for something that indicates their occupation is some form of athlete, or that indicates a sport being played. The problem is that this is too all-encompassing, and would give us people who played sports but for whom it is not their main claim to fame. An alternative is to use the Wikipedia article categorisation hierarchy, but this is very complex and deep, making the queries very difficult to work with. The category hierarchy includes a number of surprise crosslinks and loops, meaning that deep queries tend to get very confusing results, or just time out.

The approach I eventually went with was to use Wikipedia’s infoboxes – the little standardised box on the top right of a page. There are a wide range of distinct infobox templates tailored to specific fields; each article usually only displays one, but can embed elements of others to bring in secondary data. If we look for articles using one of the 77(!) distinct sports infoboxes (report), we can conclude they probably had a significant sporting career. An article that does not contain one can be inferred to not have a sporting background.

But then we need to consider people with significant sports and non-sports careers. For example, the biographies of both Seb Coe and Tanni Grey-Thompson use the “infobox officeholder” to reflect their careers in Parliament being more recent, but it is set up to embed a sports infobox towards the end. This would entail them being counted as athletes by our infobox method. This is probably correct for those two, but there are no doubt people out there where we would draw the line differently. (To stay in the UK political theme: how about Henry McLeish? His athletic career on its own would probably just qualify for a Wikipedia biography, but it is, perhaps, a bit of a footnote compared to being First Minister…)

So, here is another complication. How reliable is our assumption that athletes have a sports infobox, and that non-athletes don’t? If it’s broadly true, great, our numbers hold up. If it’s not – and particularly if it fails in some systematic way – there might be a more complex skew. I believe that for modern athletes it’s reasonably safe to assume that infoboxes are nearly ubiquitous; there are groups of articles where they’re less common, but this isn’t one of them. However, I can’t say for sure; it’s not an area I’ve worked intensively in.

Finally, we have the issue of dates. We’ve based the calculation on Wikipedia categories. Wikipedia birth/death categories are pretty reliably used where that data is known. However, about 150k (14%) of our BLP articles are marked “year of birth unknown”, and these are disproportionately female (35.4%).

What effect do these factors have?

Counting the stats as percentage of M+F rather than percentage of all people with recorded gender could be argued either way, but the numbers involved are quite low and do not change the overall pattern of the results.

The infobox question is more complicated. It may mean we are not picking up all athletes, because some do not have infoboxes. On the other hand, it may mean we are being more expansive in counting people as athletes, because they have a “secondary” infobox along the lines of Coe & Grey-Thompson above. The problem there is defining where we draw the line, and what level of “other significance” stops someone being counted. That feels like a very subjective threshold, and hard to test for automatically. It is certainly a more conservative test than a Wikidata-based one, at least.

And for dates, hmm. We know that the articles that do not report an age are disproportionately female (35% vs the BLP average of 25%), but also that they are even more disproportionately “not athletes” (7% athletes vs the BLP average of 43%). There are also a lot of articles that don’t report an age; around 14% of all BLPs.

This one probably introduces the biggest question mark here. Depending on how that 14% break down, it could change the totals for the year-by-year cohorts; but there’s not really much we can do at the moment to work that out.

Anecdotally, I suspect that they are more likely to skew younger rather than being evenly distributed over time, but there is very little to go on here. However, I feel it is unlikely they would be distributed in such a way as to counteract the overall conclusions – this would require, for example, the female ones being predominantly shifted into older groups and the male ones into younger groups. It’s possible, but I don’t see an obvious mechanism to cause that.

[Edit 4/8/23 – tweaked to confirm that these are English Wikipedia only figures, after a reminder from Hilda. I would be very interested in seeing similar data for other projects, but the methodology might be tricky to translate – eg French and German do not have an equivalent category for indexing living people, and different projects may have quite different approaches for applying infoboxes.]

Gender and deletion on Wikipedia

So, a really interesting question cropped up this weekend:

I’m trying to find out how many biographies of living persons exist on the English Wikipedia, and what kind of data we have on them. In particular, I’m looking for the gender breakdown. I’d also like to know when they were created; average length; and whether they’ve been nominated for deletion.

This is, of course, something that’s being discussed a lot right now; there is a lot of emerging push-back against the excellent work being done to try and add more notable women to Wikipedia, and one particular deletion debate got a lot of attention in the past few weeks, so it’s on everyone’s mind. And, instinctively, it seems plausible that there is a bias in the relative frequency of nomination for deletion – can we find if it’s there?

My initial assumption was, huh, I don’t think we can do that with Wikidata. Then I went off and thought about it for a bit more, and realised we could get most of the way there with some inferences. Here’s the results, and how I got there. Thanks to Sarah for prompting the research!

(If you want to get the tl;dr summary – yes, there is some kind of difference in the way older male vs female articles have been involved with the deletion process, but exactly what that indicates is not obvious without data I can’t get at. The difference seems to have mostly disappeared for articles created in the last couple of years.)

Statistics on the gender breakdown of BLPs

As of a snapshot of yesterday morning, 5 May 2019, the English Wikipedia had 906,720 articles identified as biographies of living people (BLPs for short). Of those, 697,402 were identified as male by Wikidata, 205,117 as female, 2464 had some other value for gender, 1220 didn’t have any value for gender (usually articles on groups of people, plus some not yet updated), and 517 simply didn’t have a connected Wikidata item (yet). Of those with known gender, it breaks down as 77.06% male, 22.67% female, and 0.27% some other value. (Because of the limits of the query, I didn’t try and break down those in any more detail.)
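As a quick sanity check, these percentages can be recomputed directly from the raw counts above (the arithmetic below is just an illustration; the counts are from the 5 May 2019 snapshot):

```shell
# Raw counts from the 5 May 2019 snapshot.
male=697402; female=205117; other=2464
known=$((male + female + other))

# Female share of all BLPs with a known gender value:
awk -v f="$female" -v t="$known" 'BEGIN { printf "of known gender: %.2f%%\n", 100 * f / t }'
# Female share of male + female only:
awk -v f="$female" -v m="$male" 'BEGIN { printf "of male+female:  %.2f%%\n", 100 * f / (f + m) }'
```

The first line reproduces the 22.67% figure quoted above; the second gives 22.73%, the male+female basis used for the percentage graphs later on.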

This is, as noted, only articles about living people; across all 1,626,232 biographies in the English Wikipedia with a gender known to Wikidata, it’s about 17.83% female, 82.13% male, and 0.05% some other value. I’ll be sticking to data on living people throughout this post, but it’s interesting to compare the historic information.

So, how has that changed over time?

BLPs by gender and date of creation

This graph shows all existing BLPs, broken down by gender and (approximately) when they were created. As can be seen, and as might be expected, the gap has closed a bit over time.

Percentage of BLPs which are female over time

Looking at the ratio over time (expressed here as %age of total male+female), the relative share of female BLPs was ~20% in 2009. In late 2012, the rate of creation of female BLPs kicked up a gear, and from then on it’s been noticeably above the long-term average (almost hitting 33% in late 2017, but dropping back since then). This has driven the overall share steadily and continually upwards, now at 22.7% (as noted above).

Now the second question, do the article lengths differ by gender? Indeed they do, by a small amount.

BLPs by current article size and date of creation

Female BLPs created at any time since 2009 are slightly longer on average than male ones of similar age, with only a couple of brief exceptions; the gap may be widening over the past year, but it’s perhaps too soon to say for sure. The average difference is about 500 bytes, or a little under 10% of mean article size – not dramatic, but probably not trivial either. (Pre-2009 articles, not shown here, are about even on average.)

Note that this is raw bytesize – actual prose size will be smaller, particularly if an article is well-referenced; a single well-structured reference can be a few hundred characters. It’s also the current article size, not size at creation, hence why older articles tend to be longer – they’ve had more time to grow. It’s interesting to note that once they’re more than about five years old they seem to plateau in average length.

Finally, the third question – have they been nominated for deletion? This was really interesting.

Percentage of BLPs which have previously been to AFD, by date of creation and gender

So, first of all, some caveats. This only identifies articles which go through the structured “articles for deletion” (AFD) process – nomination, discussion, decision to keep or delete. (There are three deletion processes on Wikipedia; the other two are more lightweight and do not show up in an easily traceable form). It also cannot specifically identify if that exact page was nominated for deletion, only that “an article with exactly the same page name has been nominated in the past” – but the odds are good they’re the same if there’s a match. It will miss out any where the article was renamed after the deletion discussion, and, most critically, it will only see articles that survived deletion. If they were deleted, I won’t be able to see them in this analysis, so there’s an obvious survivorship bias limiting what conclusions we can draw.

Having said all that…

Female BLPs created 2009-16 appear noticeably more likely than male BLPs of equivalent age to have been through a deletion discussion at some point in their lives (and, presumably, all have been kept). Since 2016, this has changed and the two groups are about even.

Alongside this, there is a corresponding drop-off in the number of articles created since 2016 which have associated deletion discussions. My tentative hypothesis is that articles created in the last few years are generally less likely to be nominated for deletion, perhaps because the growing use of things like the draft namespace (and associated reviews) means that articles are more robust when first published. Conversely, though, it’s possible that nominations continue at the same rate, but the deletion process is just more rigorous now and a higher proportion of those which are nominated get deleted (and so disappear from our data). We can’t tell.

(One possible explanation that we can tentatively dismiss is age – an article can be nominated at any point in its lifespan so you would tend to expect a slowly increasing share over time, but I would expect the majority of deletion nominations come in the first weeks and then it’s pretty much evenly distributed after that. As such, the drop-off seems far too rapid to be explained by just article age.)

What we don’t know is what the overall nomination for deletion rate, including deleted articles, looks like. From our data, it could be that pre-2016 male and female articles are nominated at equal rates but more male articles are deleted; or it could be that pre-2016 male and female articles are equally likely to get deleted, but the female articles are nominated more frequently than they should be. Either of these would cause the imbalance. I think this is very much the missing piece of data and I’d love to see any suggestions for how we can work it out – perhaps something like trying to estimate gender from the names of deleted articles?

Update: Magnus has run some numbers on deleted pages, doing exactly this – inferring gender from pagenames. Of those which were probably a person, ~2/3 had an inferred gender, and 23% of those were female. This is a remarkably similar figure to the analysis here (~23% of current BLPs female; ~26% of all BLPs which have survived a deletion debate female)

So in conclusion

  • We know the gender breakdown: skewed male, but growing slowly more balanced over time, and better for living people than historical ones.
  • We know the article lengths; slightly longer for women than men for recent articles, about equal for those created a long time ago.
  • We know that there is something different about the way male and female biographies created before ~2017 experience the deletion process, but we don’t have clear data to indicate exactly what is going on, and there are multiple potential explanations.
  • We also know that deletion activity seems to be more balanced for articles in both groups created from ~2017 onwards, and that these also have a lower frequency of involvement with the deletion process than might have been expected. It is not clear what the mechanism is here, or if the two factors are directly linked.

How can you extract this data? (Yes, this is very dull)

The first problem was generating the lists of articles and their metadata. The English Wikipedia category system lets us identify “living people”, but not gender; Wikidata lets us identify gender (property P21), but not reliably “living people”. However, we can creatively use the petscan tool to get the intersection of a SPARQL gender query + the category. Instructing it to explicitly use Wikipedia (“enwiki” in other sources > manual list) and to give output as a TSV – then waiting for about fifteen minutes – leaves you with a nice clean data dump. Thanks, Magnus!

(It’s worth noting that you can get this data with any characteristic indexed by Wikidata, or any characteristic identifiable through the Wikipedia category schema, but you will need to run a new query for each aspect you want to analyse – the exported data just has article metadata, none of the Wikidata/category information)

The exported files contain three things that are very useful to us: article title, pageid, and length. I normalised the files like so:

grep '[0-9]' enwiki_blp_women_from_list.tsv | cut -f 2,3,5 > women-noheader.tsv

This drops the header line (it’s the only one with no numeric characters) and extracts only the three values we care about (and conveniently saves about 20MB).
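To see what this does, here it is run on a fabricated two-line sample – note that the column layout (title in field 2, pageid in field 3, length in field 5) is my reading of the cut command above, not something verified against a real export:

```shell
# Fabricated petscan-style export: a header row plus one data row.
printf 'number\ttitle\tpageid\tnamespace\tlength\n1\tAlice_Example\t42000000\t0\t3500\n' > sample.tsv

# Keep only lines containing a digit (drops the header), then fields 2, 3 and 5.
grep '[0-9]' sample.tsv | cut -f 2,3,5
```

This leaves a single tab-separated line of title, pageid and length.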

This gives us two of the things we want (age and size) but not deletion data. For that, we fall back on inference. Any article that is put through the AFD process gets a new subpage created at “Wikipedia:Articles for deletion/PAGENAME”. It is reasonable to infer that if an article has a corresponding AFD subpage, it’s probably about that specific article. This is not always true, of course – names get recycled, pages get moved – but it’s a reasonable working hypothesis and hopefully the errors are evenly distributed over time. I’ve racked my brains to see if I could anticipate a noticeable difference here by gender, as this could really complicate the results, but provisionally I think we’re okay to go with it.

To find out if those subpages exist, we turn to the enwiki dumps. Specifically, we want “enwiki-latest-all-titles.gz” – which, as it suggests, is a simple file listing all page titles on the wiki. Extracted, it comes to about 1GB. From this, we can extract all the AFD subpages, as so:

grep "Articles_for_deletion/" enwiki-latest-all-titles | cut -f 2 | sort | uniq | cut -f 2 -d / | sort | uniq > afds

This extracts all the AFD subpages, removes any duplicates (since eg talkpages are listed here as well), and sorts the list alphabetically. There are about 424,000 of them.
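The same pipeline run on a fabricated three-line slice of the dump (titles invented) shows the deduplication at work:

```shell
# Fabricated slice of the all-titles dump: namespace number, a tab, then title.
# The duplicated row stands in for e.g. the talk-page copy of a subpage.
printf '4\tArticles_for_deletion/Alice_Example\n5\tArticles_for_deletion/Alice_Example\n4\tArticles_for_deletion/Bob_Sample\n' > sample-titles

# Same pipeline as above: keep AFD subpages, dedupe, strip the prefix, dedupe again.
grep "Articles_for_deletion/" sample-titles | cut -f 2 | sort | uniq | cut -f 2 -d / | sort | uniq
```

This prints just Alice_Example and Bob_Sample, one per line.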

Going back to our original list of articles, we want to bin them by age. To a first approximation, pageid is sequential with age – it’s assigned when the page is first created. There are some big caveats here; for example, a page being created as a redirect and later expanded will have the ID of its initial creation. Pages being deleted and recreated may get a new ID, pages which are merged may end up with either of the original IDs, and some complicated page moves may end up with the original IDs being lost. But, for the majority of pages, it’ll work out okay.

To correlate pageID to age, I did a bit of speculative guessing to find an item created on 1 January and 1 July every year back to 2009 (eg pageid 43190000 was created at 11am on 1 July 2014). I could then use these to extract the articles corresponding to each period as so:

...
awk -F '\t' '$2 >= 41516000 && $2 < 43190000' < men-noheader.tsv > bins/2014-1-M
awk -F '\t' '$2 >= 43190000 && $2 < 44909000' < men-noheader.tsv > bins/2014-2-M
...

This finds all items with a pageid (in column #2 of the file) between the specified values, and copies them into the relevant bin. Run once for men and once for women.
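As a self-contained check of the filter, here it is with two invented rows (the pageids and lengths are made up; the bin boundaries are the 2014-1 ones from above):

```shell
# Two fabricated rows: title, pageid, current length.
printf 'Alice_Example\t42000000\t3500\nBob_Sample\t45000000\t2100\n' > men-noheader.tsv

mkdir -p bins
awk -F '\t' '$2 >= 41516000 && $2 < 43190000' < men-noheader.tsv > bins/2014-1-M
cat bins/2014-1-M
```

Only the first row falls inside the 2014-1 window, so the bin ends up containing just the Alice_Example line.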

Then we can run a short report, along these lines (the original had loops in it):

  cut -f 1 bins/2014-1-M | sort > temp-M
  echo -e 2014-1-M"\tM\t"`cat bins/2014-1-M | wc -l`"\t"`awk '{ total += $3; count++ } END { print total/count }' bins/2014-1-M`"\t"`comm -1 -2 temp-M afds | wc -l` >> report.tsv

This adds a line to the file report.tsv with (in order) the name of the bin, the gender, the number of entries in it, the mean value of the length column, and a count of the number which also match names in the afds file. (The use of the temp-M file is to deal with the fact that the comm tool needs properly sorted input.)
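Putting it together, here is one iteration of the report step on fabricated data – one bin of two articles, one of which matches a name in the AFD list:

```shell
mkdir -p bins
# Fabricated bin: two articles with invented pageids and lengths.
printf 'Alice_Example\t41520000\t4000\nBob_Sample\t41530000\t6000\n' > bins/2014-1-M
# Fabricated AFD title list (already sorted, as comm requires).
printf 'Alice_Example\nSomeone_Else\n' > afds

cut -f 1 bins/2014-1-M | sort > temp-M
count=$(wc -l < bins/2014-1-M)
meanlen=$(awk -F '\t' '{ total += $3; n++ } END { print total / n }' bins/2014-1-M)
afd=$(comm -1 -2 temp-M afds | wc -l)
printf '2014-1-M\tM\t%s\t%s\t%s\n' "$count" "$meanlen" "$afd" >> report.tsv
cat report.tsv
```

The resulting line reports two articles, a mean length of 5000 bytes, and one AFD match.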

After that, generating the data is lovely and straightforward – drop the report into a spreadsheet and play around with it.

George Ernest Spero, the vanishing MP

As part of the ongoing Wikidata MPs project, I’ve come across a number of oddities – MPs who may or may not have been the same person, people who essentially disappear after they leave office, and so on. Tracking these down can turn into quite a complex investigation.

One such was George Ernest Spero, Liberal MP for Stoke Newington 1923-24, then Labour MP for Fulham West 1929-30. His career was cut short by his resignation in April 1930; shortly afterwards, he was declared bankrupt. Spero had already left the country for America, and nothing more was heard of him. The main ambiguity was when he died – various sources claimed either 1960 or 1976, but without it being clear which was more reliable, or any real details on what happened to him after 1930. In correspondence with Stephen Lees, who has been working on an incredibly useful comprehensive record of MPs’ death-dates, I did some work on this last year and eventually confirmed the 1960 date; I’ve just rediscovered my notes, and since it was an interesting little mystery, I thought I’d post them.

George Spero, MP and businessman

So, let’s begin with what we know about him up to the point at which he vanished.

George Ernest Spero was born in 1894. He began training at the Royal Dental Hospital in 1912, and served in the RNVR as a surgeon during the First World War. He had two brothers who also went into medicine; Samuel was a dentist in London (and apparently also went bankrupt, in 1933), while Leopold was a surgeon or physician (trained at St. Mary’s, RNVR towards the end of WWI, still in practice in the 1940s). All of this was reasonably straightforward to trace, although oddly George’s RNVR service records seem to be missing from the National Archives.

After the war, he married Rina Ansley (née Rina Ansbacher, born 14 March 1902) in 1922; her father was a wealthy German-born stockbroker, resident in Park Lane, who had naturalised in 1918. They had two daughters, Rachel Anne (b. 1923) and Betty Sheila (b. 1928). After his marriage, Spero went into politics in Leicester, where he seems to have been living, and stood for Parliament in the 1922 general election. The Nottingham Journal described him as for “the cause of free, unfettered Liberalism … Democratic in conviction, he stands for the abolition of class differences and for the co-operation of capital and labour.” However, while this was well-tailored to appeal to the generally left-wing voters of Leicester West, and his war record was well-regarded, the moderate vote was split between the Liberal and National Liberal candidates, with Labour taking the seat.

The Conservative government held another election in 1923, aiming to strengthen a small majority (does this sound familiar?), and Spero – now back in London – contested Stoke Newington, then a safe Conservative seat, again as a left Liberal. With support from Labour, who did not contest the seat, Spero ran a successful campaign and unseated the sitting MP. He voted in support of the minority Labour government on a number of occasions, and was one of the small number of Liberal rebels who supported them in the final no-confidence vote. However, this was not enough to prevent Labour fielding a candidate against him in 1924; the Conservative candidate took 57% of the vote, with the rest split evenly between Labour and Liberal.

Spero drifted from the Liberals into the Labour Party, probably a more natural home for his politics, joining it in 1925. By the time of the next general election, in May 1929, he had become the party’s candidate for Fulham West, winning it from the Conservatives with 45% of the vote.

He was a moderately active Government backbencher for the next few months, including being sent as a visitor to Canada during the recess in September 1929, travelling with his wife. While overseas, she caused some minor amusement to the British papers after reporting the loss of a £6,000 pearl necklace – they were delighted to report this alongside “socialist MP”. He was last recorded voting in Hansard in December, and did not appear in 1930. In February and March he was paired for votes, with a newspaper report in early March stating that around the start of the year he had been advised to take a rest to avoid a complete nervous breakdown, and that he had gone to the South of France but “hopes to return to Parliament before the month is out”. However, on 9th April he formally took the Chiltern Hundreds (it is interesting that a newspaper report suggested his local party would choose whether to accept the resignation).

However, things were moving quickly elsewhere. A case was brought against him in the High Court for £10,000, arising from his sale of a radio company in 1928-29. During the court hearing, at the end of May, it was discovered that a personal cheque for £4000 given by Spero to guarantee the company’s debts had been presented to his bank in October 1929, but was not honoured. He had at this point claimed to be suing the company for £20,000, buying six months legal delay, sold his furniture, and – apparently – left the country for America. Bankruptcy proceedings followed later that year (where he was again stated to be in America) and, unsurprisingly, his creditors seem to have received very little.

At this point, the British trail and the historic record draw to a gentle close. But what happened to him?

The National Portrait Gallery gave his death as 1960, while an entry in The Palgrave Dictionary of Anglo-Jewish History reported that they had traced his death to 1976 in Belgrade, Yugoslavia (where, as a citizen, it was registered with the US embassy). Unfortunately, it did not go into any detail about how they worked this out, and this just heightened the mystery – if it was true, how had a disgraced ex-MP ended up in Yugoslavia on a US passport three decades later? And, conversely, who was it had died in 1960?

George Spears, immigrant and doctor

We know that Spero went to America in 1929-30; that much seemed to be a matter of common agreement. Conveniently, the American census was carried out in April 1930, and the papers are available. On 18 April, he was living with his family in Riverside Drive, upper Manhattan; all the names and ages line up, and Spero is given as a medical doctor, actively working. Clearly they were reasonably well off, as they had a live-in maid, and it seems to be quite a nice area.

In 1937, he petitioned for American citizenship in California, noting that he had lived there since March 1933. As part of the process, he formally notified that he intended to change his name to George Ernest Spears. (He also gave his birthdate as 2 March 1894, of which more later.)

While we can be reasonably confident these are the same man due to the names and dates of the family, the match is very neatly confirmed by the fact that the citizenship papers have a photograph, which can be compared to an older newspaper one. There is fifteen years’ difference, but we can see the similarities between the prospective MP of 27 and the older man of 43.

George Spears, with the same family, then reappears in the 1940 census, back in Riverside Drive. He is now apparently practising as an optician, and doing well – income upwards of $6,000. Finally, we find a draft record for him living in Huntington, Long Island at some point in 1942. Note his signature here, which is visibly the same hand as in 1937, except “E. Spears” not “Ernest Spero”.

It is possible he reverted to his old name for a while – there are occasional appearances of a Dr. George Spero, optometrist, in the New York phone books between the 1940s and late 1950s. Not enough detail to be sure either way, though.

So at this point, we can trace Spero/Spears continuously from 1930 to 1942. And then nothing, until on 7 January 1960, George E. Spears, born 2 March 1894, died in California. Some time later, in June 1976, George Spero, born 11 April 1894, died in Belgrade, Yugoslavia, apparently a US citizen. Which one was our man?

The former seemed more likely, but can we prove it? The death details come from an index, which gives a mother’s maiden name of “Robinson” – unfortunately the full certificate isn’t there and I did not feel up to trying to track down a paper Californian record to see what else it said.

If we return to the UK, we can find George Spero in the 1901 census in Dover, with his parents Isidore Sol [Solomon], a ‘dental mechanic’, and Rachel, maiden name unknown. The family later moved to London, the parents naturalised, Isidore died in 1925 – and probate goes to “George Ernest Spero, physician”, which seems to confirm that this is definitely the right family and not a different George Spero. The 1901 census notes that two of the older children were born in Dublin, so we can trace them in the Irish records. Here we have an “Israel S Spero” marrying Rachel Robinson in 1884, and a subsequent child born to Solomon Israel Spero and Rachel Spero nee Robinson. There are a few other Speros or Spiros appearing in Dublin, but none married around the right time, and none with such similar names. If Israel Solomon Spero is the same as Isidore Solomon Spero, this all ties up very neatly.

It leaves open the mystery, however, of who died in Yugoslavia. It seems likely this was a completely different man (who had not changed his name), but I have completely failed to trace anything about him. A pity – it would have been nice to definitively close off that line of enquiry.

History of Parliament and Wikidata – the first round complete

Back in January, I wrote up some things I was aiming to do this year, including:

Firstly, I’d like to clear off the History of Parliament work on Wikidata. I haven’t really written this up yet (maybe that’s step 1.1) but, in short, I’m trying to get every MP in the History of Parliament database listed and crossreferenced in Wikidata. At the moment, we have around 5200 of them listed, out of a total of 22200 – so we’re getting there. (Raw data here.) Finding the next couple of thousand who’re listed, and mass-creating the others, is definitely an achievable task.

Well, seven months later, here’s where it stands:

  • 9,372 of a total 21,400 History of Parliament entries (43.8%) have been matched to records for people in Wikidata.
  • These 9,372 entries represent 7,257 people – 80 have entries in three HoP volumes, and 1,964 in two volumes. (This suggests that, when complete, we will have about 16,500 people for those initial 21,400 entries – so maybe we’re actually over half-way there).
  • These are crossreferenced to a lot of other identifiers. 1,937 of our 7,257 people (26.7%) are in the Oxford Dictionary of National Biography, 1,088 (15%) are in the National Portrait Gallery database, and 2,256 (31.1%) are linked to their speeches in the digital edition of Hansard. There is a report generated each night crosslinking various interesting identifiers.
  • Every MP in the 1820-32 volume (1,367 of them) is now linked and identified, and the 1790-1820 volume is now around 85% complete. (This explains the high showing for Hansard, which covers 1805 onwards.)
  • The metadata for these is still limited – a lot more importing work to do – but in some cases pretty decent; 94% of the 1820-32 entries have a date of death, for example.
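The “over half-way” claim above is a simple extrapolation: if the entry-to-person duplication rate seen in the matched sample holds across the whole dataset, the expected number of distinct people falls out directly. A quick sketch of the arithmetic:

```python
# Project the total number of distinct people from the matched sample,
# assuming the duplication rate (people per entry) among the 9,372
# matched entries holds for the full 21,400 entries.
matched_entries = 9_372
matched_people = 7_257      # 80 appear in three HoP volumes, 1,964 in two
total_entries = 21_400

people_per_entry = matched_people / matched_entries   # ≈ 0.774
projected_people = total_entries * people_per_entry

print(f"{projected_people:,.0f}")   # in the region of 16,500
```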

Of course, there’s a lot more still to do – more metadata to add, more linkages to make, and so on. It still does not have any reasonable data linking MPs to constituencies, which is a major gap (but perhaps one that can be filled semi-automatically using the HoP/Hansard links and a clever script).

But as a proof of concept, I’m very happy with it. Here’s some queries playing with the (1820-32) data:

  • There are 990 MPs with an article about them in at least one language/WM project. Strikingly, ten of these don’t have an English Wikipedia article (yet). The most heavily written-about MP is – to my surprise – David Ricardo, with articles in 67 Wikipedias. (The next three are Peel, Palmerston, and Edward Bulwer-Lytton).
  • 303 of the 1,367 MPs (22.1%) have a recorded link to at least one other person in Wikidata by a close family relationship (parent, child, spouse, sibling) – there are 803 links, to 547 unique people – 108 of whom are also in the 1820-32 MPs list, and 439 of whom are from elsewhere in Wikidata. (I expect this number to rise dramatically as more metadata goes in).
  • The longest-surviving pre-Reform MP (of the 94% indexed by deathdate, anyway) was John Savile, later Earl of Mexborough, who made it to August 1899…
  • Of the 360 with a place of education listed, the most common is Eton (104), closely followed by Christ Church, Oxford (97) – there is, of course, substantial overlap between them. It’s impressive to see just how far we’ve come. No-one would ever expect to see anything like that for Parliament today, would they?
  • Of the 1,185 who’ve had a first name indexed by Wikidata so far, the most popular is John (14.4%), then William (11.5%), Charles (7.5%), George (7.4%), and Henry (7.2%):

  • A map of the (currently) 154 MPs whose place of death has been imported:

All these are of course provisional, but it makes me feel I’m definitely on the right track!


So, you may be asking, what can I do to help? Why, thank you, that’s very kind…

  • First of all, this is the master list, updated every night, of as-yet-unmatched HoP entries. Grab one, load it up, search Wikidata for a match, and add it (property P1614). Bang, one more down, and we’re 0.01% closer to completion…
  • It’s not there? (About half to two thirds probably won’t be). You can create an item manually, or you can set it aside to create a batch of them later. I wrote a fairly basic bash script to take a spreadsheet of HoP identifiers and basic metadata and prepare it for bulk-item-creation on Wikidata.
  • Or you could help sanitise some of the metadata – here’s some interesting edge cases:
    • This list is ~680 items whose subjects probably have a death date (the HoP slug ends in a number), but who don’t currently have one in Wikidata.
    • This list is ~540 people who are titled “Honourable” – and so are almost certainly the sons of noblemen, themselves likely to be in Wikidata – but who don’t have a link to their father. This list is the same, but for “Lord”, and this list has all the apparently fatherless men who were the 2nd through 9th holders of a title…
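For the batch-creation route, the preparation script mentioned above boils down to turning each spreadsheet row into QuickStatements “CREATE” commands. Here’s a minimal sketch in Python (rather than the original bash); the column layout and the example slug are hypothetical, but P1614 is the History of Parliament ID property used above:

```python
# Turn one row of HoP metadata into QuickStatements v1 commands that
# create a new item, label it, and attach the HoP identifier (P1614).
def hop_to_quickstatements(name: str, description: str, hop_id: str) -> str:
    lines = [
        "CREATE",                        # make a new, blank item
        f'LAST\tLen\t"{name}"',          # English label
        f'LAST\tDen\t"{description}"',   # English description
        "LAST\tP31\tQ5",                 # instance of: human
        f'LAST\tP1614\t"{hop_id}"',      # History of Parliament ID
    ]
    return "\n".join(lines)

print(hop_to_quickstatements(
    "John Smith", "British Member of Parliament", "example-slug-1790"))
```

Run over a whole spreadsheet, that gives you a block of commands ready to paste into the bulk-creation tool.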

Most popular videos on Wikipedia, 2015

For many years, one of the big outstanding questions with Wikipedia was the usage data for images. We had reasonably good data for article pageviews, but not for the usage of images – we had to come up with proxies like the number of times a page containing that image was loaded. This was good enough as far as it went, but didn’t (for example) count the usage of any files hotlinked elsewhere.

In 2015, we finally got the media-pageviews database up and running, which means we now have a year’s worth of data to look at. In December, someone produced an aggregated dataset of the year to date, covering video & audio files.

This lists some 540,000 files, viewed an aggregated total of 2,869 million times over about 340 days – equivalent to 3,080 million over a year. This covers use on Wikipedia, on other Wikimedia projects, and hotlinked by the web at large. (Note that while we’re historically mostly concerned with Wikipedia pageviews, almost all of these videos will be hosted on Commons.) The top thirty:

14436640 President Obama on Death of Osama bin Laden.ogv
10882048 Bombers of WW1.ogg
10675610 20090124 WeeklyAddress.ogv
10214121 Tanks of WWI.ogg
9922971 Robert J Flaherty – 1922 – Nanook Of The North (Nanuk El Esquimal).ogv
9272975 President Obama Makes a Statement on Iraq – 080714.ogg
7889086 Eurofighter 9803.ogg
7445910 SFP 186 – Flug ueber Berlin.ogv
7127611 Ward Cunningham, Inventor of the Wiki.webm
6870839 A11v 1092338.ogg
6865024 Ich bin ein Berliner Speech (June 26, 1963) John Fitzgerald Kennedy trimmed.theora.ogv
6759350 Editing Hoxne Hoard at the British Museum.ogv
6248188 Dubai’s Rapid Growth.ogv
6212227 Wikipedia Edit 2014.webm
6131081 Newman Laugh-O-Gram (1921).webm
6100278 Kennedy inauguration footage.ogg
5951903 Hiroshima Aftermath 1946 USAF Film.ogg
5902851 Wikimania – the Wikimentary.webm
5692587 Salt March.ogg
5679203 CITIZENFOUR (2014) trailer.webm
5534983 Reagan Space Shuttle Challenger Speech.ogv
5446316 Medical aspect, Hiroshima, Japan, 1946-03-23, 342-USAF-11034.ogv
5434404 Physical damage, blast effect, Hiroshima, 1946-03-13 ~ 1946-04-08, 342-USAF-11071.ogv
5232118 A Day with Thomas Edison (1922).webm
5168431 1965-02-08 Showdown in Vietnam.ogv
5090636 Moon transit of sun large.ogg
4996850 President Kennedy speech on the space effort at Rice University, September 12, 1962.ogg
4983430 Burj Dubai Evolution.ogv
4981183 Message to Scientology.ogv

(Full data is here; note that it’s a 17 MB TSV file)

It’s an interesting mix – and every one of the top 30 is a video, not an audio file. I’m not sure there’s a definite theme there – though “public domain history” does well – but it’d reward further investigation…
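For what it’s worth, the annualised figure quoted above is a straight pro-rata scaling of the 340-day total:

```python
# Scale ~340 days of viewing data up to a full year, in millions of views.
views_in_sample = 2_869     # million views over roughly 340 days
days_in_sample = 340

views_per_year = views_in_sample * 365 / days_in_sample
print(round(views_per_year))   # → 3080, i.e. ~3,080 million per year
```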

Taking pictures with flying government lasers

Well, sort of.

A few weeks ago, the Environment Agency released the first tranche of their LIDAR survey data. This covers (most of) England, at resolutions varying from 2m down to 25cm, gathered by airborne laser survey.

It’s great fun. After a bit of back-and-forth (and hastily figuring out how to use QGIS), here’s two rendered images I made of Durham, one with buildings and one without, now on Commons:

The first is shown with buildings, the second without. Both are at 1m resolution, the best currently available for the area. Note in particular the very striking embankment and cutting for the railway viaduct (top left). These look like they could be very useful things to produce for Commons, especially since it’s – effectively – very recent, openly licensed, aerial imagery…

1. Selecting a suitable area

Generating these was, on the whole, fairly easy. First, install QGIS (simplicity itself on a linux machine, probably not too much hassle elsewhere). Then, go to the main data page and find the area you’re interested in. It’s arranged on an Ordnance Survey grid – click anywhere on the map to select a grid square. Major grid squares (Durham is NZ24) are 10km by 10km, and all data will be downloaded in a zip file containing tiles for that particular region.

Let’s say we want to try Cambridge. The TL45 square neatly cuts off North Cambridge but most of the city is there. If we look at the bottom part of the screen, it offers “Digital Terrain Model” at 2m and 1m resolution, and “Digital Surface Model” likewise. The DTM is the version just showing the terrain (no buildings, trees, etc) while the DSM has all the surface features included. Let’s try the DSM, as Cambridge is not exactly mountainous. The “on/off” slider will show exactly what the DSM covers in this area, though in Cambridge it’s more or less “everything”.

While this is downloading, let’s pick our target area. Zooming in a little further will show thinner blue lines and occasional superimposed blue digits; these define the smaller squares, 1 km by 1 km. For those who don’t remember learning to read OS maps, the number on the left and the number on the bottom, taken together, define the square. So the sector containing all the colleges along the river (a dense clump of black-outlined buildings) is TL4458.

2. Rendering a single tile

Now your zip file has downloaded, drop all the files into a directory somewhere. Note that they’re all named something like tl4356_DSM_1m.asc. Unsurprisingly, this means the 1m DSM data for square TL4356.
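If you end up scripting over a lot of tiles, the naming scheme is regular enough to pick apart mechanically. A small sketch, assuming the filename pattern described above:

```python
# Pull the OS grid square, model type, and resolution out of a tile
# filename like "tl4356_DSM_1m.asc".
def parse_tile_name(filename: str) -> dict:
    stem = filename.rsplit(".", 1)[0]        # strip the .asc extension
    square, model, resolution = stem.split("_")
    return {
        "square": square.upper(),   # OS grid square, e.g. TL4356
        "model": model,             # DSM (surface) or DTM (terrain only)
        "resolution": resolution,   # e.g. 1m or 2m
    }

print(parse_tile_name("tl4356_DSM_1m.asc"))
# {'square': 'TL4356', 'model': 'DSM', 'resolution': '1m'}
```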

Fire up QGIS, go to Layer > Add raster layer, and select your tile – in this case, TL4458. You’ll get a crude-looking monochrome image, immediately recognisable by a broken white line running down the middle. This is the Cam. If you’re seeing this, great, everything’s working so far. (This step is very helpful for checking you are looking at the right area.)

Now, let’s make the image. Project > New to blank everything (no need to save). Then Raster > Analysis > DEM (terrain models). In the first box, select your chosen input file. In the next box, the output filename – with a .tif suffix. (Caution, linux users: make sure to enter or select a path here, otherwise it seems to default to home). Leave everything else as default – all unticked and mode: hillshade. Click OK, and a few seconds later it’ll give a completed message; cancel out of the dialogue box at this point. It’ll be displaying something like this:

Congratulations! Your first LIDAR rendering. You can quit out of QGIS (you can close without saving, your converted file is saved already) and open this up as a normal TIFF file now; it’ll be about 1MB and cover an area 1km by 1km. If you look closely, you can see some surprisingly subtle details despite the low resolution – the low walls outside Kings College, for example, or cars on the Queen’s Road – Madingley Road roundabout by the top left.

3. Rendering several tiles

Rendering multiple squares is a little trickier. Let’s try doing Barton, which conveniently fits into two squares – TL4055 and TL4155. Open QGIS up, and render TL4055 as above, through Raster > Analysis > DEM (terrain models). Then, with the dialogue window still open, select TL4155 (and a new output filename) and run it again. Do this for as many files as you need.

After all the tiles are prepared, clear the screen by starting a new project (again, no need to save) and go to Raster > Miscellaneous > Merge. In “Input files”, select the two exports you’ve just done. In “Output file”, pick a suitable filename (again ending in .tif). Hit OK, let it process, then close the dialog. You can again close QGIS without saving, as the export’s complete.

The rendering system embeds coordinates in the files, which means that when they’re assembled and merged they’ll automatically slot together in the correct position and orientation – no need to manually tile them. The result should look like this:

The odd black bit in the top right is the edge of the flight track – there’s not quite comprehensive coverage. This is a mainly agricultural area, and you can see field markings – some quite detailed, and a few bits on the bottom of the right-hand tile that might be traces of old buildings.

So… go forth! Make LIDAR images! See what you can spot…

4. Command-line rendering in bulk

Richard Symonds (who started me down this rabbit-hole) points out this very useful post, which explains how to do the rendering and merging via the command line. Let’s try the entire Durham area; 88 files in NZ24, all dumped into a single directory –

for i in *.asc ; do gdaldem hillshade -compute_edges "$i" "$i.tif" ; done

gdal_merge.py -o NZ24-area.tif *.tif

rm *.asc.tif

In order, that a) runs the hillshade program on each individual source file ; b) assembles them into a single giant image file; c) removes the intermediate images (optional, but may as well tidy up). The -compute_edges flag helpfully removes the thin black lines between sectors – I should have turned it on in the earlier sections!

Wikidata and identifiers – part 2, the matching process

Yesterday, I wrote about the work we’re doing matching identifiers into Wikidata. Today, the tools we use for it!

Mix-and-match

The main tool we’re using is a beautiful thing Magnus developed called mix-and-match. It imports all the identifiers with some core metadata – for the ODNB, for example, this was names and dates and the brief descriptive text – and sorts them into five groups:

  • Manually matched – these matches have been confirmed by a person (or imported from data already in Wikidata);
  • Automatic – the system has guessed these are probably the same people but wants human confirmation;
  • Unmatched – we have no idea who these identifiers match to;
  • No Wikidata – we know there is currently no Wikidata match;
  • N/A – this identifier shouldn’t match to a Wikidata entity (for example, it’s a placeholder, a subject Wikidata will never cover, or a cross-reference with its own entry).

The goal is to work through everything and move as much as possible to “manually matched”. Anything in this group can then be migrated over to Wikidata with a couple of clicks. Here’s the ODNB as it stands today:

(Want to see what’s happening with the data? The recent changes link will show you the last fifty edits to all the lists.)

So, how do we do this? Firstly, you’ll need a Wikipedia account, and to log in to our “WiDaR” authentication tool. Follow the link on the top of the mix-and-match page (or, indeed, this one), sign in with your Wikipedia account if requested, and you’ll be authorised.

On to the matching itself. There are two methods – manually, or in a semi-automated “game mode”.

How to match – manually

The first approach works line-by-line. Clicking on one of the entries – here, unmatched ODNB – brings up the first fifty entries in that set. Each one has options on the left-hand side – to search Wikidata or English Wikipedia, either by the internal search or Google. On the right-hand side, there are three options – “set Q”, to provide it with a Wikidata ID (these are all of the form Q followed by a number, and so we often call them “Q numbers”); “No WD”, to list it as not on Wikidata; and “N/A”, to record that it’s not appropriate for Wikidata matching.

If you’ve found a match on Wikidata, the ID number should be clearly displayed at the top of that page. Click “set Q” and paste it in. If you’ve found a match via Wikipedia, you can click the “Wikidata” link in the left-hand sidebar to take you to the corresponding Wikidata page, and get the ID from there.

After a moment, it’ll display a very rough-and-ready precis of what’s on Wikidata next to that line –

– which makes it easy to spot if you’ve accidentally pasted in the wrong code! Here, we’ve identified one person (with rather limited information currently in Wikidata – just gender and deathdate), and marked another as definitely not found.

If you’re using the automatically matched list, you’ll see something like this:

– it’s already got the data from the possible matches but wants you to confirm. Clicking on the Q-number will take you to the provisional Wikidata match, and from there you can get to relevant Wikipedia articles if you need further confirmation.

How to match – game mode

We’ve also set up a “game mode”. This is suitable when we expect a high number of the unmatched entries to be connectable to Wikipedia articles; it gives you a random entry from the unmatched list, along with a handful of possible results from a Wikipedia search, and asks you to choose the correct one if it’s there. You can get to it by clicking the [G] next to the unmatched entries.

Here’s an example, using the OpenPlaques database.

In this one, it was pretty clear that their Roy Castle is the same as the first person listed here (remember him?), so we click the blue Q-number; it’s marked as matched, and the game generates a new entry. Alternatively, we could look him up elsewhere and paste the Q-number or Wikipedia URL in, then click the “set Q” button. If our subject’s not here – click “skip” and move on to the next one.

Finishing up

When you’ve finished matching, go back to the main screen and click the [Y] at the end of the list. This allows you to synchronise the work you’ve done with Wikidata – it will make the edits to Wikidata under your account. (There is also an option to import existing matches from Wikidata, but at the moment the mix-and-match database is a bit out of synch and this is best avoided…) There’s no need to do this if you’re feeling overly cautious, though – we’ll synchronise them soon enough. The same page will also report any cases where two distinct Wikidata entries have been matched to the same identifier, which (usually) shouldn’t happen.

If you want a simple export of the matched data, you can click the [D] link for a TSV file (Q-number, identifier, identifier URL & name if relevant), and some stats on how many matches to individual wikis are available with [S].

Brute force

Finally, if you have a lot of matched data, and you are confident it’s accurate without needing human confirmation, then you can adopt the brute-force method – QuickStatements. This is the tool used for pushing data from mix-and-match to Wikidata, and can be used for any data import. Instructions are on that page – but if you’re going to use it, test it with a few individual items first to make sure it’s doing what you think, and please don’t be shy to ask for help…
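For a flavour of what QuickStatements takes as input: each line is one statement – item, property, value – separated by tabs, with string values quoted. For instance, a line along these lines would add a VIAF identifier (P214) to an item (the values here are purely illustrative):

```
Q42	P214	"113230702"
```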

So, we’ve covered a) what we’re doing; and b) how we get the information into Wikidata. Next instalment, how to actually use these identifiers for your own purposes…

Wikidata identifiers and the ODNB – where next?

Wikidata, for those of you unfamiliar with it, is the backend we are developing for Wikipedia. At its simplest, it’s a spine linking together the same concept in different languages – so we can tell that a coronation in English matches Tacqoyma in Azeri or Коронація in Ukrainian, or thirty-five other languages in between. This all gets bundled up into a single data entry – the enigmatically named Q209715 – which then gets other properties attached. In this case, a coronation is a kind of (or subclass of, for you semanticians) “ceremony” (Q2627975), and is linked to a few external thesauruses. The system is fully multilingual, so we can express “coronation – subclass of – ceremony” in English as easily as “kroning – undergruppe af – ceremoni” in Danish.

So far, so good.

There has been a great deal of work around Wikipedia in recent years in connecting our rich-text articles to static authority control records – confirming that our George Washington is the same as the one the Library of Congress knows about. During 2012-13, these were ingested from Wikipedia into Wikidata, and as of a year ago we had identified around 420,000 Wikidata entities with authority control identifiers. Most of these were from VIAF, but around half had an identifier from the German GND database, another half from ISNI, and a little over a third had LCCN identifiers. Many had all four (and more). We now support matching to a large number of library catalogue identifiers, but – speaking as a librarian – I’m aware this isn’t very exciting to anyone who doesn’t spend much of their time cataloguing…

So, the next phase was to move beyond simply “authority” identifiers and move to ones that actually provide content. The main project that I’ve been working on (along with Charles Matthews and Magnus Manske, with the help of Jo Payne at OUP) is matching Wikidata to the Oxford Dictionary of National Biography – Wikipedia authors tend to hold the ODNB in high regard, and many of our articles already use it as a reference work. We’re currently about three-quarters of the way through, having identified around 40,000 ODNB entries that have been clearly matched to a Wikidata entity, and the rest should be finished some time in 2015. (You can see the tool here, and how to use that will be a post for another day.) After that, I’ve been working on a project to make links between Wikidata and the History of Parliament (with the assistance of Matthew Kilburn and Paul Seaward) – looking forward to being able to announce some results from this soon.

What does this mean? Well, for a first step, it means we can start making better links to a valuable resource on a more organised basis – for example, Robin Owain and I recently deployed an experimental tool on the Welsh Wikipedia that will generate ODNB links at the end of any article on a relevant subject (see, eg, Dylan Thomas). It means we can start making the Wikisource edition of the (original) Dictionary of National Biography more visible. It means we can quickly generate worklists – you want suitable articles to work on? Well, we have all these interesting and undeniably notable biographies not yet covered in English (or Welsh, or German, or…)

For the ODNB, it opens up the potential for linking to other interesting datasets (and that without having to pass through Wikidata – all this can be exported). At the moment, we can identify matches to twelve thousand ISNIs, twenty thousand VIAF identifiers, and – unexpectedly – a thousand entries in IMDb. (Ten of them are entries for “characters”, which opens up a marvellous conceptual can of worms, but let’s leave that aside…).

And for third parties? Well, this is where it gets interesting. If you have ODNB links in your dataset, we can generate Wikipedia entries (probably less valuable, but in oh so many languages). We can generate images for you – Wikidata knows about openly licensed portraits for 214,000 people. Or we can crosswalk to whatever other project we support – YourPaintings links, perhaps? We can match a thousand of those. It can go backwards – we can take your existing VIAF links and give you ODNB entries. (Cataloguers, take note.)
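Mechanically, the crosswalks described here are nothing more than joins on the shared Wikidata item. A toy sketch, with invented identifiers:

```python
# Two identifier mappings keyed by Wikidata item (values invented);
# the ODNB→VIAF crosswalk is simply the join on the shared item.
odnb_by_item = {"Q1001": "101/1", "Q1002": "101/2", "Q1003": "101/3"}
viaf_by_item = {"Q1001": "500123", "Q1003": "500456"}

crosswalk = {
    odnb: viaf_by_item[item]
    for item, odnb in odnb_by_item.items()
    if item in viaf_by_item     # keep only people with both identifiers
}
print(crosswalk)   # {'101/1': '500123', '101/3': '500456'}
```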

And, best of all, we can ingest that data – and once it’s in Wikidata, the next third party to come along can make the links directly to you, and every new dataset makes the existing ones more valuable. Right now, we have a lot of authority control data, but we’re lighter on serious content links. If you have a useful online project with permanent identifiers, and you’d like to start matching those up to Wikidata, please do get in touch – this is really exciting work and we’d love to work with anyone wanting to help take it forward.

Update: Here’s part 2: on how to use the mix-and-match tool.

Laws on Wikidata

So, I had the day off, and decided to fiddle a little with Wikidata. After some experimenting, it now knows about:

  • 1516 Acts of the Parliament of the United Kingdom (1801-present)
  • 194 Acts of the Parliament of Great Britain (1707-1800)
  • 329 Acts of the Parliament of England (to 1707)
  • 20 Acts of the Parliament of Scotland (to 1707)
  • 19 Acts of the Parliament of Ireland (to 1800)

(Acts of the modern devolved parliaments for NI, Scotland, and Wales will follow.)

Each has a specific “instance of” property – Q18009569, for example, is “act of the Parliament of Scotland” – and is set up as a subclass of the general “act of parliament”. At the moment, there are detailed subclasses for the UK and Canada (which has a separate class for each province’s legislation) but nowhere else. Yet…

These numbers are slightly fuzzy – it’s mainly based on Wikipedia articles and so there are a small handful of cases where the entry represents a particular clause (eg Q7444697, s.4 and s.10 of the Human Rights Act 1998), or cases where multiple statutes are treated in the same article (eg Q1133144, the Corn Laws), but these are relatively rare and, mostly, it’s a good direct correspondence. (I’ve been fairly careful to keep out oddities, but of course, some will creep in…)

So where next? At the moment, these almost all reflect Wikipedia articles. Only 34 have a link to (English) Wikisource, though I’d guess there’s about 200-250 statutes currently on there. Matching those up will definitely be valuable; for legislation currently in force and on the Statute Law Database, it would be good to be able to crosslink to there as well.

Mechanical Curator on Commons

The internet has been enthralled by the British Library’s recent release of the Mechanical Curator collection: a million public-domain images extracted from digitised books, put online for people to identify and discover. The real delight is that we don’t know what’s in there – the images have been extracted and sorted by a computer, and human eyes may never have looked at them since they were scanned.

Image taken from page 171 of '[Seonee, or, camp life on the Satpura Range ... Illustrated by the author, etc.]'

I wasn’t directly involved with this – it was released after I left – but it was organised by former colleagues of mine, and I’ve worked on some other projects with the underlying Microsoft Books collection. It’s a great project, and all the more so for being a relatively incidental one. I’m really, really delighted to see it out there, and to see the outpouring of interest and support for it.

One of the questions that’s been asked is: why put them on Flickr and not Commons? The BL has done quite a bit of work with Wikimedia, and has used it as the primary way of distributing material in the past – see the Picturing Canada project – and so it might seem a natural home for a large release of public domain material.

The immediate answer is that Commons is a repository for, essentially, discoverable images. It’s structured with a discovery mechanism built around knowing that you need a picture of X, and finding it by search or by category browsing, which makes metadata essential. It’s not designed for serendipitous browsing, and not able to cope easily with large amounts of unsorted and unidentified material. (I think I can imagine the response were the community to discover 5% of the content of Commons was made up of undiscoverable, unlabelled content…) We have started looking at bringing it across, but on a small scale.

Putting a dump on archive.org has much the same problem – a lack of functional discoverability. There’s no way to casually browse material here, and it relies very much on metadata to make it accessible. If the metadata doesn’t exist, it’s useless.

And so: flickr. Flickr, unlike the repositories, is designed for casual discoverability, for browsing screenfuls of images, and for users to easily tag and annotate them – things that the others don’t easily offer. It’s by far the best environment of the three for engagement and discoverability, even if probably less useful for long-term storage.

This brings the question: should Commons be able to handle this use case? There’s a lot of work being done just now on the future of multimedia: will Commons in 2018 be able to handle the sort of large-scale donation that it would choke on in 2013? Should we be working to support discovery and description of unknown material, or should we be focusing on content which already has good metadata?