Archive for November, 2014

Wikidata and identifiers – part 2, the matching process

Thursday, November 27th, 2014

Yesterday, I wrote about the work we’re doing matching identifiers into Wikidata. Today, the tools we use for it!

Mix-and-match

The main tool we’re using is a beautiful thing Magnus developed called mix-and-match. It imports all the identifiers with some core metadata – for the ODNB, for example, this was names and dates and the brief descriptive text – and sorts them into five groups (sketched in code just after the list):

  • Manually matched – these matches have been confirmed by a person (or imported from data already in Wikidata);
  • Automatic – the system has guessed these are probably the same people but wants human confirmation;
  • Unmatched – we have no idea who these identifiers match to;
  • No Wikidata – we know there is currently no Wikidata match;
  • N/A – this identifier shouldn’t match to a Wikidata entity (for example, it’s a placeholder, a subject Wikidata will never cover, or a cross-reference with its own entry).
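Purely as an illustration (this isn’t how mix-and-match is actually built), the five states boil down to something like this in code; the example entry at the end is entirely invented:

    from enum import Enum

    class MatchState(Enum):
        MANUAL = "manually matched"    # confirmed by a person, or imported from Wikidata
        AUTOMATIC = "automatic"        # guessed by the system, awaiting human confirmation
        UNMATCHED = "unmatched"        # no candidate found yet
        NO_WIKIDATA = "no wikidata"    # checked, but nothing on Wikidata at present
        NOT_APPLICABLE = "n/a"         # should never be matched (placeholder, cross-reference, etc.)

    # Each catalogue entry carries its identifier, some metadata, a state and,
    # once matched, a Q-number. All values here are made up for illustration.
    entry = {"identifier": "12345", "name": "Jane Example", "state": MatchState.UNMATCHED, "qid": None}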

The goal is to work through everything and move as much as possible to “manually matched”. Anything in this group can then be migrated over to Wikidata with a couple of clicks. Here’s the ODNB as it stands today:

(Want to see what’s happening with the data? The recent changes link will show you the last fifty edits to all the lists.)

So, how do we do this? Firstly, you’ll need a Wikipedia account, and to log in to our “WiDaR” authentication tool. Follow the link at the top of the mix-and-match page (or, indeed, this one), sign in with your Wikipedia account if requested, and you’ll be authorised.

On to the matching itself. There are two ways to do it – manually, or in a semi-automated “game mode”.

How to match – manually

The first approach works line-by-line. Clicking on one of the entries – here, unmatched ODNB – brings up the first fifty entries in that set. Each one has options on the left-hand side – to search Wikidata or English Wikipedia, either by the internal search or Google. On the right-hand side, there are three options – “set Q”, to provide it with a Wikidata ID (these are all of the form Q followed by a number, so we often call them “Q numbers”); “No WD”, to list it as not on Wikidata; and “N/A”, to record that it’s not appropriate for Wikidata matching.

If you’ve found a match on Wikidata, the ID number should be clearly displayed at the top of that page. Click “set Q” and paste it in. If you’ve found a match via Wikipedia, you can click the “Wikidata” link in the left-hand sidebar to take you to the corresponding Wikidata page, and get the ID from there.
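As an aside, if you’d rather script that last lookup than click through, the MediaWiki API exposes the linked Wikidata item as a page property. A minimal sketch in Python (the requests library and the article title are just examples):

    import requests

    def q_number_for_article(title, api="https://en.wikipedia.org/w/api.php"):
        """Return the Wikidata Q-number linked to a Wikipedia article, or None."""
        resp = requests.get(api, params={
            "action": "query",
            "prop": "pageprops",
            "ppprop": "wikibase_item",
            "titles": title,
            "format": "json",
        })
        for page in resp.json()["query"]["pages"].values():
            return page.get("pageprops", {}).get("wikibase_item")

    print(q_number_for_article("Dylan Thomas"))  # prints that article's Q-number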

After a moment, it’ll display a very rough-and-ready precis of what’s on Wikidata next to that line –

– which makes it easy to spot if you’ve accidentally pasted in the wrong code! Here, we’ve identified one person (with rather limited information currently in Wikidata – just gender and date of death), and marked another as definitely not found.

If you’re using the automatically matched list, you’ll see something like this:

– it’s already got the data from the possible matches but wants you to confirm. Clicking on the Q-number will take you to the provisional Wikidata match, and from there you can get to relevant Wikipedia articles if you need further confirmation.

How to match – game mode

We’ve also set up a “game mode”. This is suitable when we expect a high number of the unmatched entries to be connectable to Wikipedia articles; it gives you a random entry from the unmatched list, along with a handful of possible results from a Wikipedia search, and asks you to choose the correct one if it’s there. You can get to it by clicking [G] next to the unmatched entries.
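Under the bonnet this is essentially a Wikipedia search on the entry’s name. A rough approximation (not the game’s actual code) might look like:

    import requests

    def wikipedia_candidates(name, limit=5, api="https://en.wikipedia.org/w/api.php"):
        """Return a handful of possible Wikipedia matches for a catalogue entry's name."""
        resp = requests.get(api, params={
            "action": "query",
            "list": "search",
            "srsearch": name,
            "srlimit": limit,
            "format": "json",
        })
        return [hit["title"] for hit in resp.json()["query"]["search"]]

    print(wikipedia_candidates("Roy Castle"))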

Here’s an example, using the OpenPlaques database.

In this one, it was pretty clear that their Roy Castle is the same as the first person listed here (remember him?), so we click the blue Q-number; it’s marked as matched, and the game generates a new entry. Alternatively, we could look him up elsewhere and paste the Q-number or Wikipedia URL in, then click the “set Q” button. If our subject’s not here – click “skip” and move on to the next one.

Finishing up

When you’ve finished matching, go back to the main screen and click the [Y] at the end of the list. This allows you to synchronise the work you’ve done with Wikidata – it will make the edits to Wikidata under your account. (There is also an option to import existing matches from Wikidata, but at the moment the mix-and-match database is a bit out of synch and this is best avoided…) There’s no need to do this if you’re feeling overly cautious, though – we’ll synchronise them soon enough. The same page will also report any cases where two distinct Wikidata entries have been matched to the same identifier, which (usually) shouldn’t happen.

If you want a simple export of the matched data, you can click the [D] link for a TSV file (Q-number, identifier, identifier URL & name if relevant), and some stats on how many matches to individual wikis are available with [S].
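If you do grab the TSV, it’s straightforward to work with. Here’s a sketch, assuming a made-up filename and the column order described above (check your actual download, which may differ slightly); it also repeats the “one identifier, two Wikidata items” check mentioned earlier:

    import csv
    from collections import defaultdict

    # Columns assumed from the description above: Q-number, identifier, URL, name.
    matches = defaultdict(list)
    with open("odnb_matches.tsv", newline="", encoding="utf-8") as f:   # hypothetical filename
        for row in csv.reader(f, delimiter="\t"):
            qid, identifier = row[0], row[1]
            matches[identifier].append(qid)

    # Flag any identifier matched to more than one Wikidata item.
    for identifier, qids in matches.items():
        if len(qids) > 1:
            print(identifier, "is matched to", ", ".join(qids))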

Brute force

Finally, if you have a lot of matched data, and you are confident it’s accurate without needing human confirmation, then you can adopt the brute-force method – QuickStatements. This is the tool used for pushing data from mix-and-match to Wikidata, and can be used for any data import. Instructions are on that page – but if you’re going to use it, test it with a few individual items first to make sure it’s doing what you think, and please don’t be shy to ask for help…
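For the curious, QuickStatements (in its original form) takes tab-separated commands – item, property, value – one statement per line. The sketch below turns a list of confirmed matches into commands of that shape; the property number (P1415, which I believe is the ODNB ID property), the filenames and the exact syntax are all things to verify against the QuickStatements instructions before running anything for real:

    # A sketch only: writes QuickStatements-style lines (item <tab> property <tab> "value").
    # Verify the property number and the command syntax before importing anything.
    ODNB_PROPERTY = "P1415"  # believed to be the ODNB ID property - check on Wikidata first

    confirmed = {
        "Q12345": "67890",   # hypothetical Q-number -> ODNB identifier pairs
    }

    with open("quickstatements_commands.txt", "w", encoding="utf-8") as out:
        for qid, odnb_id in confirmed.items():
            out.write(f'{qid}\t{ODNB_PROPERTY}\t"{odnb_id}"\n')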

So, we’ve covered a) what we’re doing; and b) how we get the information into Wikidata. Next instalment, how to actually use these identifiers for your own purposes…

Wikidata identifiers and the ODNB – where next?

Wednesday, November 26th, 2014

Wikidata, for those of you unfamiliar with it, is the backend we are developing for Wikipedia. At its simplest, it’s a spine linking together the same concept in different languages – so we can tell that a coronation in English matches Tacqoyma in Azeri or Коронація in Ukrainian, or thirty-five other languages besides. This all gets bundled up into a single data entry – the enigmatically named Q209715 – which then gets other properties attached. In this case, a coronation is a kind of (or subclass of, for you semanticians) “ceremony” (Q2627975), and is linked to a few external thesauruses. The system is fully multilingual, so we can express “coronation – subclass of – ceremony” in English as easily as “kroning – undergruppe af – ceremoni” in Danish.
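To make that concrete, here’s roughly how you could pull the labels and the “subclass of” claim (property P279) for that item straight from the Wikidata API – a sketch for the curious, not something you need in order to follow the rest of this post:

    import requests

    resp = requests.get("https://www.wikidata.org/w/api.php", params={
        "action": "wbgetentities",
        "ids": "Q209715",               # "coronation"
        "props": "labels|claims",
        "languages": "en|az|uk|da",     # just a few of the available languages
        "format": "json",
    })
    entity = resp.json()["entities"]["Q209715"]

    # One concept, many labels.
    for lang, label in entity["labels"].items():
        print(lang, label["value"])

    # P279 = "subclass of"; each claim points at another item (Q2627975, "ceremony").
    for claim in entity["claims"].get("P279", []):
        print("subclass of:", claim["mainsnak"]["datavalue"]["value"]["id"])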

So far, so good.

There has been a great deal of work around Wikipedia in recent years in connecting our rich-text articles to static authority control records – confirming that our George Washington is the same as the one the Library of Congress knows about. During 2012–13, these were ingested from Wikipedia into Wikidata, and as of a year ago we had identified around 420,000 Wikidata entities with authority control identifiers. Most of these were from VIAF, but around half had an identifier from the German GND database, another half had one from ISNI, and a little over a third had LCCN identifiers. Many had all four (and more). We now support matching to a large number of library catalogue identifiers, but – speaking as a librarian – I’m aware this isn’t very exciting to anyone who doesn’t spend much of their time cataloguing…

So, the next phase was to move beyond simply “authority” identifiers and on to ones that actually provide content. The main project that I’ve been working on (along with Charles Matthews and Magnus Manske, with the help of Jo Payne at OUP) is matching Wikidata to the Oxford Dictionary of National Biography – Wikipedia authors tend to hold the ODNB in high regard, and many of our articles already use it as a reference work. We’re currently about three-quarters of the way through, having identified around 40,000 ODNB entries that have been clearly matched to a Wikidata entity, and the rest should be finished some time in 2015. (You can see the tool here, and how to use it will be a post for another day.) After that, I’ve been working on a project to make links between Wikidata and the History of Parliament (with the assistance of Matthew Kilburn and Paul Seaward) – looking forward to being able to announce some results from this soon.

What does this mean? Well, for a first step, it means we can start making better links to a valuable resource on a more organised basis – for example, Robin Owain and I recently deployed an experimental tool on the Welsh Wikipedia that will generate ODNB links at the end of any article on a relevant subject (see, eg, Dylan Thomas). It means we can start making the Wikisource edition of the (original) Dictionary of National Biography more visible. It means we can quickly generate worklists – you want suitable articles to work on? Well, we have all these interesting and undeniably notable biographies not yet covered in English (or Welsh, or German, or…)
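The Welsh Wikipedia tool itself is Robin’s and Magnus’s work, but the underlying idea is simple enough to sketch: article → Wikidata item → ODNB identifier → link. In the sketch below the ODNB property number (P1415) and the URL pattern are assumptions to double-check, not a description of the deployed code:

    import requests

    ODNB_PROPERTY = "P1415"  # assumed to be the ODNB ID property

    def odnb_link(title, api="https://cy.wikipedia.org/w/api.php"):
        """Return an ODNB link for a Wikipedia article, if its Wikidata item has an ODNB ID."""
        # 1. Article -> Wikidata item
        pages = requests.get(api, params={
            "action": "query", "prop": "pageprops", "ppprop": "wikibase_item",
            "titles": title, "format": "json",
        }).json()["query"]["pages"]
        qid = next(iter(pages.values())).get("pageprops", {}).get("wikibase_item")
        if not qid:
            return None
        # 2. Wikidata item -> ODNB identifier
        entity = requests.get("https://www.wikidata.org/w/api.php", params={
            "action": "wbgetentities", "ids": qid, "props": "claims", "format": "json",
        }).json()["entities"][qid]
        claims = entity.get("claims", {}).get(ODNB_PROPERTY, [])
        if not claims:
            return None
        # 3. Identifier -> link (URL pattern assumed)
        return "http://www.oxforddnb.com/view/article/" + claims[0]["mainsnak"]["datavalue"]["value"]

    print(odnb_link("Dylan Thomas"))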

For the ODNB, it opens up the potential for linking to other interesting datasets (and that without having to pass through Wikidata – all this can be exported). At the moment, we can identify matches to twelve thousand ISNIs, twenty thousand VIAF identifiers, and – unexpectedly – a thousand entries in IMDb. (Ten of them are entries for “characters”, which opens up a marvellous conceptual can of worms, but let’s leave that aside…)
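The crosswalk itself is then just a join over two exported mappings. A toy sketch with invented filenames, assuming two-column TSVs of identifier-to-Q-number pairs:

    import csv

    def load_mapping(path):
        """Load a two-column TSV of (external identifier, Q-number) pairs into a dict."""
        with open(path, newline="", encoding="utf-8") as f:
            return dict(csv.reader(f, delimiter="\t"))

    odnb_to_q = load_mapping("odnb_to_wikidata.tsv")   # hypothetical export
    isni_to_q = load_mapping("isni_to_wikidata.tsv")   # hypothetical export

    # Invert one side, then walk the other: ODNB id -> Q-number -> ISNI.
    q_to_isni = {q: isni for isni, q in isni_to_q.items()}
    odnb_to_isni = {odnb: q_to_isni[q] for odnb, q in odnb_to_q.items() if q in q_to_isni}
    print(len(odnb_to_isni), "ODNB entries with an ISNI")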

And for third parties? Well, this is where it gets interesting. If you have ODNB links in your dataset, we can generate Wikipedia entries (probably less valuable, but in oh so many languages). We can generate images for you – Wikidata knows about openly licensed portraits for 214,000 people. Or we can crosswalk to whatever other project we support – YourPaintings links, perhaps? We can match a thousand of those. It can go backwards – we can take your existing VIAF links and give you ODNB entries. (Cataloguers, take note.)

And, best of all, we can ingest that data – and once it’s in Wikidata, the next third party to come along can make the links directly to you, and every new dataset makes the existing ones more valuable. Right now, we have a lot of authority control data, but we’re lighter on serious content links. If you have a useful online project with permanent identifiers, and you’d like to start matching those up to Wikidata, please do get in touch – this is really exciting work and we’d love to work with anyone wanting to help take it forward.

Update: here’s part 2, on how to use the mix-and-match tool.