Posts Tagged ‘wikipedia’

Taking pictures with flying government lasers

Friday, October 2nd, 2015

Well, sort of.

A few weeks ago, the Environment Agency released the first tranche of their LIDAR survey data. This covers (most of) England, at resolutions varying from 2m down to 25cm, collected by airborne LIDAR survey.

It’s great fun. After a bit of back-and-forth (and hastily figuring out how to use QGIS), here are two rendered images I made of Durham, one with buildings and one without, now on Commons:

The first is shown with buildings, the second without. Both are at 1m resolution, the best currently available for the area. Note in particular the very striking embankment and cutting for the railway viaduct (top left). These look like they could be very useful things to produce for Commons, especially since it’s – effectively – very recent, openly licensed, aerial imagery…

1. Selecting a suitable area

Generating these was, on the whole, fairly easy. First, install QGIS (simplicity itself on a Linux machine, probably not too much hassle elsewhere). Then, go to the main data page and find the area you’re interested in. It’s arranged on an Ordnance Survey grid – click anywhere on the map to select a grid square. Major grid squares (Durham is NZ24) are 10km by 10km, and all data will be downloaded in a zip file containing tiles for that particular region.

Let’s say we want to try Cambridge. The TL45 square neatly cuts off North Cambridge but most of the city is there. If we look at the bottom part of the screen, it offers “Digital Terrain Model” at 2m and 1m resolution, and “Digital Surface Model” likewise. The DTM is the version just showing the terrain (no buildings, trees, etc) while the DSM has all the surface features included. Let’s try the DSM, as Cambridge is not exactly mountainous. The “on/off” slider will show exactly what the DSM covers in this area, though in Cambridge it’s more or less “everything”.

While this is downloading, let’s pick our target area. Zooming in a little further will show thinner blue lines and occasional superimposed blue digits; these define the smaller squares, 1 km by 1 km. For those who don’t remember learning to read OS maps, the number on the left and the number on the bottom, taken together, define the square. So the sector containing all the colleges along the river (a dense clump of black-outlined buildings) is TL4458.

2. Rendering a single tile

Now your zip file has downloaded, drop all the files into a directory somewhere. Note that they’re all named something like tl4356_DSM_1m.asc. Unsurprisingly, this means the 1m DSM data for square TL4356.

Fire up QGIS, go to Layer > Add raster layer, and select your tile – in this case, TL4458. You’ll get a crude-looking monochrome image, immediately recognisable by a broken white line running down the middle. This is the Cam. If you’re seeing this, great, everything’s working so far. (This step is a useful check that you’re looking at the right area.)

Now, let’s make the image. Project > New to blank everything (no need to save). Then Raster > Analysis > DEM (terrain models). In the first box, select your chosen input file. In the next box, the output filename – with a .tif suffix. (Caution, Linux users: make sure to enter or select a path here, otherwise it seems to default to your home directory.) Leave everything else as default – all unticked and mode: hillshade. Click OK, and a few seconds later it’ll give a completed message; cancel out of the dialogue box at this point. It’ll be displaying something like this:

Congratulations! Your first LIDAR rendering. You can quit QGIS without saving – your converted file is already saved to disk – and open it up as a normal TIFF file; it’ll be about 1MB and cover an area 1km by 1km. If you look closely, you can see some surprisingly subtle details despite the low resolution – the low walls outside King’s College, for example, or cars on the Queen’s Road – Madingley Road roundabout by the top left.
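If you’d rather skip the GUI, the same hillshade can be produced straight from the command line with GDAL (QGIS’s raster tools are wrappers around it) – a sketch, assuming the gdaldem utility is installed and the tile file is named as in your download:

gdaldem hillshade -compute_edges tl4458_DSM_1m.asc TL4458-hillshade.tif

There’s more on this approach in section 4 below.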

3. Rendering several tiles

Rendering multiple squares is a little trickier. Let’s try doing Barton, which conveniently fits into two squares – TL4055 and TL4155. Open QGIS up, and render TL4055 as above, through Raster > Analysis > DEM (terrain models). Then, with the dialogue window still open, select TL4155 (and a new output filename) and run it again. Do this for as many files as you need.

After all the tiles are prepared, clear the screen by starting a new project (again, no need to save) and go to Raster > Miscellaneous > Merge. In “Input files”, select the two exports you’ve just done. In “Output file”, pick a suitable filename (again ending in .tif). Hit OK, let it process, then close the dialog. You can again close QGIS without saving, as the export’s complete.
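For reference, the command-line equivalent of this merge step – a sketch, assuming GDAL’s gdal_merge.py script is available and the two hillshades were saved under these (hypothetical) filenames:

gdal_merge.py -o barton.tif TL4055-hillshade.tif TL4155-hillshade.tif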

The rendering system embeds coordinates in the files, which means that when they’re assembled and merged they’ll automatically slot together in the correct position and orientation – no need to manually tile them. The result should look like this:

The odd black bit in the top right is the edge of the flight track – there’s not quite comprehensive coverage. This is a mainly agricultural area, and you can see field markings – some quite detailed, and a few bits on the bottom of the right-hand tile that might be traces of old buildings.

So… go forth! Make LIDAR images! See what you can spot…

4. Command-line rendering in bulk

Richard Symonds (who started me down this rabbit-hole) points out this very useful post, which explains how to do the rendering and merging via the command line. Let’s try the entire Durham area; 88 files in NZ24, all dumped into a single directory –

for i in `ls *.asc` ; do gdaldem hillshade -compute_edges $i $i.tif ; done

gdal_merge.py -o NZ24-area.tif *.tif

rm *.asc.tif

In order, these a) run the hillshade program on each individual source file; b) assemble the results into a single giant image file; and c) remove the intermediate images (optional, but may as well tidy up). The -compute_edges flag helpfully removes the thin black lines between sectors – I should have turned it on in the earlier sections!

Wikidata and identifiers – part 2, the matching process

Thursday, November 27th, 2014

Yesterday, I wrote about the work we’re doing matching identifiers into Wikidata. Today, the tools we use for it!


The main tool we’re using is a beautiful thing Magnus developed called mix-and-match. It imports all the identifiers with some core metadata – for the ODNB, for example, this was names and dates and the brief descriptive text – and sorts them into five groups:

  • Manually matched – these matches have been confirmed by a person (or imported from data already in Wikidata);

  • Automatic – the system has guessed these are probably the same people but wants human confirmation;
  • Unmatched – we have no idea who these identifiers match to;
  • No Wikidata – we know there is currently no Wikidata match;
  • N/A – this identifier shouldn’t match to a Wikidata entity (for example, it’s a placeholder, a subject Wikidata will never cover, or a cross-reference with its own entry).

The goal is to work through everything and move as much as possible to “manually matched”. Anything in this group can then be migrated over to Wikidata with a couple of clicks. Here’s the ODNB as it stands today:

(Want to see what’s happening with the data? The recent changes link will show you the last fifty edits to all the lists.)

So, how do we do this? Firstly, you’ll need a Wikipedia account, and to log in to our “WiDaR” authentication tool. Follow the link on the top of the mix-and-match page (or, indeed, this one), sign in with your Wikipedia account if requested, and you’ll be authorised.

On to the matching itself. There are two methods – working through the list manually, or using a semi-automated “game mode”.

How to match – manually

The first approach works line-by-line. Clicking on one of the entries – here, unmatched ODNB – brings up the first fifty entries in that set. Each one has options on the left-hand side – to search Wikidata or English Wikipedia, either by the internal search or Google. On the right-hand side, there are three options – “set Q”, to provide it with a Wikidata ID (these are all of the form Q—–, and so we often call them “Q numbers”); “No WD”, to list it as not on Wikidata; “N/A”, to record that it’s not appropriate for Wikidata matching.

If you’ve found a match on Wikidata, the ID number should be clearly displayed at the top of that page. Click “set Q” and paste it in. If you’ve found a match via Wikipedia, you can click the “Wikidata” link in the left-hand sidebar to take you to the corresponding Wikidata page, and get the ID from there.

After a moment, it’ll display a very rough-and-ready precis of what’s on Wikidata next to that line –

– which makes it easy to spot if you’ve accidentally pasted in the wrong code! Here, we’ve identified one person (with rather limited information currently in Wikidata – just gender and date of death) and marked another as definitely not found.

If you’re using the automatically matched list, you’ll see something like this:

– it’s already got the data from the possible matches but wants you to confirm. Clicking on the Q-number will take you to the provisional Wikidata match, and from there you can get to relevant Wikipedia articles if you need further confirmation.
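Incidentally, if you’d like to double-check a Q-number outside the tool, the Wikidata API will return its label and description – a quick sketch, using Q42 (Douglas Adams) as a stand-in:

curl 'https://www.wikidata.org/w/api.php?action=wbgetentities&ids=Q42&props=labels|descriptions&languages=en&format=json'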

How to match – game mode

We’ve also set up a “game mode”. This is suitable when we expect a high number of the unmatched entries to be connectable to Wikipedia articles; it gives you a random entry from the unmatched list, along with a handful of possible results from a Wikipedia search, and asks you to choose the correct one if it’s there. You can get to it by clicking the [G] next to the unmatched entries.

Here’s an example, using the OpenPlaques database.

In this one, it was pretty clear that their Roy Castle is the same as the first person listed here (remember him?), so we click the blue Q-number; it’s marked as matched, and the game generates a new entry. Alternatively, we could look him up elsewhere and paste the Q-number or Wikipedia URL in, then click the “set Q” button. If our subject’s not here – click “skip” and move on to the next one.

Finishing up

When you’ve finished matching, go back to the main screen and click the [Y] at the end of the list. This allows you to synchronise the work you’ve done with Wikidata – it will make the edits to Wikidata under your account. (There is also an option to import existing matches from Wikidata, but at the moment the mix-and-match database is a bit out of synch and this is best avoided…) There’s no need to do this if you’re feeling overly cautious, though – we’ll synchronise them soon enough. The same page will also report any cases where two distinct Wikidata entries have been matched to the same identifier, which (usually) shouldn’t happen.

If you want a simple export of the matched data, you can click the [D] link for a TSV file (Q-number, identifier, identifier URL & name if relevant), and some stats on how many matches to individual wikis are available with [S].
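Once you have the TSV, ordinary command-line tools are enough for a quick look – a sketch, assuming you’ve saved it as odnb.tsv and the columns are in the order listed above:

wc -l odnb.tsv   # how many matched entries so far
cut -f1,2 odnb.tsv | head   # the first few Q-number / identifier pairs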

Brute force

Finally, if you have a lot of matched data, and you are confident it’s accurate without needing human confirmation, then you can adopt the brute-force method – QuickStatements. This is the tool used for pushing data from mix-and-match to Wikidata, and can be used for any data import. Instructions are on that page – but if you’re going to use it, test it with a few individual items first to make sure it’s doing what you think, and please don’t be shy to ask for help…
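For a flavour of what QuickStatements expects: in its original form it takes tab-separated lines of item, property and value. The sketch below would add a VIAF identifier (property P214) to Douglas Adams (Q42) – the identifier value is a made-up placeholder, not his real VIAF number, so treat this purely as a syntax example and test on a single item first:

Q42	P214	"12345678"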

So, we’ve covered a) what we’re doing; and b) how we get the information into Wikidata. Next instalment, how to actually use these identifiers for your own purposes…

Wikidata identifiers and the ODNB – where next?

Wednesday, November 26th, 2014

Wikidata, for those of you unfamiliar with it, is the backend we are developing for Wikipedia. At its simplest, it’s a spine linking together the same concept in different languages – so we can tell that a coronation in English matches Tacqoyma in Azeri or Коронація in Ukrainian, or thirty-five other languages besides. This all gets bundled up into a single data entry – the enigmatically named Q209715 – which then gets other properties attached. In this case, a coronation is a kind of (or subclass of, for you semanticians) “ceremony” (Q2627975), and is linked to a few external thesauruses. The system is fully multilingual, so we can express “coronation – subclass of – ceremony” in English as easily as “kroning – undergruppe af – ceremoni” in Danish.
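You can see this structure directly through the Wikidata API – here’s a sketch pulling the “subclass of” (P279) claim from the coronation item mentioned above:

curl 'https://www.wikidata.org/w/api.php?action=wbgetclaims&entity=Q209715&property=P279&format=json'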

So far, so good.

There has been a great deal of work around Wikipedia in recent years in connecting our rich-text articles to static authority control records – confirming that our George Washington is the same as the one the Library of Congress knows about. During 2012-13, these were ingested from Wikipedia into Wikidata, and as of a year ago we had identified around 420,000 Wikidata entities with authority control identifiers. Most of these were from VIAF, but around half also had an identifier from the German GND database, another half had one from ISNI, and a little over a third had LCCN identifiers. Many had all four (and more). We now support matching to a large number of library catalogue identifiers, but – speaking as a librarian – I’m aware this isn’t very exciting to anyone who doesn’t spend much of their time cataloguing…

So, the next phase was to move beyond simply “authority” identifiers to ones that actually provide content. The main project that I’ve been working on (along with Charles Matthews and Magnus Manske, with the help of Jo Payne at OUP) is matching Wikidata to the Oxford Dictionary of National Biography – Wikipedia authors tend to hold the ODNB in high regard, and many of our articles already use it as a reference work. We’re currently about three-quarters of the way through, having identified around 40,000 ODNB entries that have been clearly matched to a Wikidata entity, and the rest should be finished some time in 2015. (You can see the tool here, and how to use that will be a post for another day.) After that, I’ve been working on a project to make links between Wikidata and the History of Parliament (with the assistance of Matthew Kilburn and Paul Seaward) – I’m looking forward to being able to announce some results from this soon.

What does this mean? Well, for a first step, it means we can start making better links to a valuable resource on a more organised basis – for example, Robin Owain and I recently deployed an experimental tool on the Welsh Wikipedia that will generate ODNB links at the end of any article on a relevant subject (see, eg, Dylan Thomas). It means we can start making the Wikisource edition of the (original) Dictionary of National Biography more visible. It means we can quickly generate worklists – you want suitable articles to work on? Well, we have all these interesting and undeniably notable biographies not yet covered in English (or Welsh, or German, or…)

For the ODNB, it opens up the potential for linking to other interesting datasets (and that without having to pass through Wikidata – all this can be exported). At the moment, we can identify matches to twelve thousand ISNIs, twenty thousand VIAF identifiers, and – unexpectedly – a thousand entries in IMDb. (Ten of them are entries for “characters”, which opens up a marvellous conceptual can of worms, but let’s leave that aside…).

And for third parties? Well, this is where it gets interesting. If you have ODNB links in your dataset, we can generate Wikipedia entries (probably less valuable, but in oh so many languages). We can generate images for you – Wikidata knows about openly licensed portraits for 214,000 people. Or we can crosswalk to whatever other project we support – YourPaintings links, perhaps? We can match a thousand of those. It can go backwards – we can take your existing VIAF links and give you ODNB entries. (Cataloguers, take note.)
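As a sketch of that last crosswalk, using today’s tooling: Wikidata’s search understands a haswbstatement: keyword, so given a VIAF number you can find the matching item and then read its ODNB identifier off the entry. The VIAF value below is a placeholder:

curl 'https://www.wikidata.org/w/api.php?action=query&list=search&srsearch=haswbstatement:P214=12345678&format=json'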

And, best of all, we can ingest that data – and once it’s in Wikidata, the next third party to come along can make the links directly to you, and every new dataset makes the existing ones more valuable. Right now, we have a lot of authority control data, but we’re lighter on serious content links. If you have a useful online project with permanent identifiers, and you’d like to start matching those up to Wikidata, please do get in touch – this is really exciting work and we’d love to work with anyone wanting to help take it forward.

Update: Here’s part 2: on how to use the mix-and-match tool.

Laws on Wikidata

Tuesday, September 9th, 2014

So, I had the day off, and decided to fiddle a little with Wikidata. After some experimenting, it now knows about:

  • 1516 Acts of the Parliament of the United Kingdom (1801-present)
  • 194 Acts of the Parliament of Great Britain (1707-1800)
  • 329 Acts of the Parliament of England (to 1707)
  • 20 Acts of the Parliament of Scotland (to 1707)
  • 19 Acts of the Parliament of Ireland (to 1800)

(Acts of the modern devolved parliaments for NI, Scotland, and Wales will follow.)

Each has a specific “instance of” property – Q18009569, for example, is “act of the Parliament of Scotland” – and is set up as a subclass of the general “act of parliament”. At the moment, there are detailed subclasses for the UK and Canada (which has a separate class for each province’s legislation) but nowhere else. Yet…
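To see how one of these is wired up, you can pull the class item as raw JSON: individual acts carry an “instance of” (P31) statement pointing at a class like this one, and the class itself has a “subclass of” (P279) statement pointing at the general “act of parliament”. A sketch for the Scottish class:

curl https://www.wikidata.org/wiki/Special:EntityData/Q18009569.json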

These numbers are slightly fuzzy – it’s mainly based on Wikipedia articles and so there are a small handful of cases where the entry represents a particular clause (eg Q7444697, s.4 and s.10 of the Human Rights Act 1998), or cases where multiple statutes are treated in the same article (eg Q1133144, the Corn Laws), but these are relatively rare and, mostly, it’s a good direct correspondence. (I’ve been fairly careful to keep out oddities, but of course, some will creep in…)

So where next? At the moment, these almost all reflect Wikipedia articles. Only 34 have a link to (English) Wikisource, though I’d guess there are about 200-250 statutes currently on there. Matching those up will definitely be valuable; for legislation currently in force and on the Statute Law Database, it would be good to be able to crosslink to there as well.

Mechanical Curator on Commons

Sunday, December 15th, 2013

The internet has been enthralled by the British Library’s recent release of the Mechanical Curator collection: a million public-domain images extracted from digitised books, put online for people to identify and discover. The real delight is that we don’t know what’s in there – the images have been extracted and sorted by a computer, and human eyes may never have looked at them since they were scanned.

Image taken from page 171 of '[Seonee, or, camp life on the Satpura Range ... Illustrated by the author, etc.]'

I wasn’t directly involved with this – it was released after I left – but it was organised by former colleagues of mine, and I’ve worked on some other projects with the underlying Microsoft Books collection. It’s a great project, and all the more so for being a relatively incidental one. I’m really, really delighted to see it out there, and to see the outpouring of interest and support for it.

One of the questions that’s been asked is: why put them on Flickr and not Commons? The BL has done quite a bit of work with Wikimedia, and has used it as the primary way of distributing material in the past – see the Picturing Canada project – and so it might seem a natural home for a large release of public domain material.

The immediate answer is that Commons is a repository for, essentially, discoverable images. It’s structured with a discovery mechanism built around knowing that you need a picture of X, and finding it by search or by category browsing, which makes metadata essential. It’s not designed for serendipitous browsing, and not able to cope easily with large amounts of unsorted and unidentified material. (I think I can imagine the response were the community to discover that 5% of Commons was made up of undiscoverable, unlabelled material…) We have started looking at bringing it across, but on a small scale.

Simply putting up a bulk data dump has much the same problem – a lack of functional discoverability. There’s no way to casually browse the material, and it relies very much on metadata to make it accessible. If the metadata doesn’t exist, it’s useless.

And so: Flickr. Flickr, unlike the repositories, is designed for casual discoverability, for browsing screenfuls of images, and for users to easily tag and annotate them – things that the others don’t easily offer. It’s by far the best environment of the three for engagement and discoverability, even if probably less useful for long-term storage.

This brings the question: should Commons be able to handle this use case? There’s a lot of work being done just now on the future of multimedia: will Commons in 2018 be able to handle the sort of large-scale donation that it would choke on in 2013? Should we be working to support discovery and description of unknown material, or should we be focusing on content which already has good metadata?

Not all encyclopedias are created equal

Saturday, August 3rd, 2013

Wikipedia has some way to go before it can comprehensively replace the great Britannica in all its many roles. From Shackleton’s South, a passage in which he and his crew are stranded on a drifting ice-floe in the Weddell Sea, November 1915:

In addition to the daily hunt for food, our time was passed in reading the few books that we had managed to save from the ship. The greatest treasure in the library was a portion of the “Encyclopaedia Britannica.” This was being continually used to settle the inevitable arguments that would arise. The sailors were discovered one day engaged in a very heated discussion on the subject of Money and Exchange. They finally came to the conclusion that the Encyclopaedia, since it did not coincide with their views, must be wrong.

“For descriptions of every American town that ever has been, is, or ever will be, and for full and complete biographies of every American statesman since the time of George Washington and long before, the Encyclopaedia would be hard to beat. Owing to our shortage of matches we have been driven to use it for purposes other than the purely literary ones though; and one genius having discovered that the paper, used for its pages had been impregnated with saltpetre, we can now thoroughly recommend it as a very efficient pipe-lighter.”

We also possessed a few books on Antarctic exploration, a copy of Browning and one of “The Ancient Mariner.” On reading the latter, we sympathized with him and wondered what he had done with the albatross; it would have made a very welcome addition to our larder.

Young Cree man, 1902

Wednesday, June 26th, 2013

Most of the Picturing Canada images are of historic rather than aesthetic value, but here’s a really standout portrait I spotted today:

Cree Indian (HS85-10-13885)

A young Cree man, name unrecorded; probably taken in Alberta or Saskatchewan, 1902. A little fragment of history.

Carolyn Mayben Flowers: the Lady Prospector of Porcupine

Sunday, June 9th, 2013

Working my way through some of the Canadian Collection on Commons this morning, I discovered a rather eye-catching picture:

Porcupine's lady prospector (HS85-10-24373)

“Porcupine’s Lady Prospector”, photographed at the Porcupine Gold Rush in the summer of 1911. Two things immediately strike the viewer: the first is that the woman in the photograph is dressed decorously by the standards of Edwardian Canada, with a white blouse and a long dark skirt, despite the searing heat of that summer – Porcupine would later be devastated by wildfire – and the second is that she has a revolver slung casually on one hip.

There has to be a story here.

It turns out to be quite quick to put a name to her; the Timmins Daily Press captions a copy of the picture as Carolyn Mayben Flowers, and the Timmins Museum tells us she was still around in 1915, giving piano lessons. I haven’t been able to trace her after that, or indeed before. There is a “Cathaline Flowers” in Gowganda (aged 26, married, with a six-year-old daughter), but Gowganda is a long way from Timmins, and she doesn’t list herself as American…

Wikipedians in Residence: a recap

Wednesday, April 24th, 2013

To my great surprise, I got named in a BBC story today. The article is about the upcoming Wikipedian in Residence at the National Library of Scotland; it’s really pleasing that as my own work at the British Library is coming to an end, there’ll be someone else taking up the work at an equally interesting organisation.

NLS is just the tip of the iceberg, though. Here is a list of all the current and past Wikimedians in Residence, and below is a list of everyone who is currently looking for a Wikipedian (or Wikimedian) in Residence that I’ve heard about – please let me know if I’ve missed any!

  • The National Library of Scotland (paid)

    Four-month residency working with the National Library of Scotland in Edinburgh to help disseminate the Library’s content to Wikipedia, and work with librarians to help encourage understanding and use of the projects.

  • JISC “Wikimedia Ambassador” residency (paid)

    Nine-month program looking to build skills and expertise engaging with Wikimedia projects among JISC-funded research programs, and to help disseminate knowledge from that research. (In many ways, this fits very neatly with some of the work I was doing for AHRC…).

  • ZDF Television (Germany) (paid)

    Short-term program (until mid-October) to liaise between the organisation and Wikipedia contributors on – I love this – a project to fact-check political claims during the months before the 2013 federal election in September.

  • Smithsonian Institution (paid)

    Internship (with stipend), aiming to build on and sustain the existing partnership programs with the Smithsonian.

  • Swiss Federal Archives (paid)

    Three to six month program with a particular focus on digitising WWI-related photographs.

  • METRO (New York) Open Data Fellowship (paid)

    An interesting two-track program; an eight-week fellowship working as a Wikipedian in Residence for a consortium of cultural institutions, and also as an advisor on open data/licensing/etc. US only, students preferred.

  • Olympia Timberland Library (US) (volunteer)

    The library is looking for a “Wiki-Ninja” (now there’s something to put on a job description) to help build and sustain a local-history editing program among the local community.

And, of course, there are plenty more institutions setting up similar volunteer programs without going through a formal recruitment process – it only tends to be needed when money gets involved. If you’re a Wikipedia volunteer thinking of what you could do with a local institution, now is as good a time as any to approach them…

How many hours?

Tuesday, February 19th, 2013

A newly released (and very interesting) paper – Using Edit Sessions to Measure Participation in Wikipedia – looks at estimating the level of participation in Wikipedia using an estimate of time spent contributing, rather than previous studies based on raw edit numbers, etc.

Their headline figure is an estimate that all of Wikipedia, up to an unspecified date in 2012, represents “a total of 102,673,683 total labor-hours”.

As David White noticed, this is many lifetimes of labour:

Some other ways to visualise these numbers:

  • Three years’ work by a mid-sized university of around 15,000 people (assuming a working day of eight hours and 250 working days in the year)

  • The users of the British Library reading rooms (capacity ~1500) working for thirty-three years.
  • One thousand “productive lives” (days as above, over fifty years, rather than 24/7 cradle to grave)

Or, in a sharp demonstration of the “cognitive surplus” theory:

  • Seven minutes writing time each from the global audience of the 2012 Olympic opening ceremony.

All of Wikipedia, in all its languages, could have been written in the time it took the world to make a cup of tea during the speeches.
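For anyone who wants to check the arithmetic behind those comparisons, here’s a rough sketch (the eight-hour day and 250-day year are as above; the ~900 million figure for the opening-ceremony audience is my own assumption):

echo $(( 15000 * 8 * 250 * 3 ))   # a 15,000-person university for three years: 90,000,000 hours
echo $(( 1500 * 8 * 250 * 33 ))   # 1,500 reading-room seats for thirty-three years: 99,000,000 hours
echo $(( 1000 * 8 * 250 * 50 ))   # a thousand fifty-year working lives: 100,000,000 hours
echo $(( 900000000 * 7 / 60 ))    # seven minutes each from ~900 million viewers: 105,000,000 hours

All four land in the same ballpark as the paper’s 102.7 million hours.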