Open questions about the costs of the scholarly publishing system

April 15th, 2016

Stuart Lawson et al.’s new paper on “Opening the Black Box of Scholarly Communication Funding” is now out – it’s an excellent contribution to the discussion and well worth a read.

From their conclusion:

The current lack of publicly available information concerning financial flows around scholarly communication systems is an obstacle to evidence-based policy-making – leaving researchers, decision-makers and institutions in the dark about the implications of current models and the resources available for experimenting with new ones.

It prompts me to put together a list I’ve been thinking about for a while – what do we still need to know about the scholarly publishing market?

  • What are the actual totals of author/institutional payments to publishers outside of subscriptions and APCs – page charges, colour charges, submission fees, and so on? I have recently estimated that for the UK this is on the order of a few million pounds per year, but that’s very provisional; it doesn’t include things like reprint payments, nor does it delve into differing local practices. All we can say for sure at this stage is “yes, it’s still non-trivial; more work needed”.
  • What are the overall amounts paid by readers to publishers and aggregators for pay-per-view articles? In 2011 I found that (for JSTOR at least) the numbers are vanishingly small. I’ve not seen much other investigation of this, surprisingly – or have I just missed it?
  • Can an overall value be put on the collective “journal support” costs – for example, subsidies from a scholarly society or institution to keep their journal afloat, or grants from funding bodies directly for operating journals? This money fills a gap between subscriptions and publication costs, and is essential to keep many journals operating, but is often skimmed over.
  • How closely do quoted APC prices reflect the actual amounts paid? After currency fluctuation, VAT, and sometimes offset or membership discounting, these can vary widely, which can make it very difficult to anticipate the actual amount which will be invoiced. (A special prize for demonstrating the point goes to the unnamed publisher who invoices in euros for a list price in USD, with annotations showing a GBP tax calculation.) Reporting tends to be based on the actual price paid, which helps, but a lot of policy and theory is based on list-price estimates.
  • How are double-dipping/hybrid offsetting systems working out, now they’ve had a couple of years to bed in? There has been quite a bit of discussion looking at the top-level figures (total subscriptions paid plus total APCs paid) which suggests that the answer is “total amounts paid are still rising”, which is probably correct. However, there’s very little looking in detail at per-journal costs, how the offsets (if any) are calculated, and whether or not the mechanisms used make sense given the relatively low number of hybrid articles in any given journal. Work here could help come up with a standard way of calculating offsets, which could be used in future negotiations. Hybrids won’t be going away any time soon…
  • What contribution to the subscription/publishing-charges market comes from outside academia? We tend to focus on university payments (as these are both substantial and reasonably well-documented) but there are very large markets for subscription academic material in, for example, medicine, scientific industry, and law. These are not well understood.

And, finally, the big one:

  • How much does it cost (indirectly/implicitly) to maintain the current subscription-based system? We have a decent idea of how much the indirect costs of gold/green open access are, thanks to recent work on the ‘total cost of publication’, but no idea of the indirect costs of the status quo. And we really, really need to figure it out.

To illustrate that last point, and why I think it’s important…

A large number of librarians (and others) spend much of their time maintaining access systems, handling subscription payments, negotiating usage agreements, fixing user access problems, and so on. Then the publishers themselves have to pay staff to develop and maintain these systems, handle negotiations, deal with payments, etc. Centralised services like JISC’s collective negotiation mean more labour, and some centralised services like ATHENS can be surprisingly expensive to use.

Let’s make a wild guess that it comes down to one FTE staff member per university (it probably isn’t that much work for Chester, but it’s a lot more for Cambridge, so it might balance out); that’s about 130 in the UK. Ten more for all the non-university institutions. Five more for the central services. Five each at the five biggest publishers and another ten for all the others. Total – for our wild estimate – 180 FTE staff. (While the publisher staff aren’t paid by the universities, they’re ultimately paid out of the cost of subscriptions, and so it’s reasonable to consider them part of the overall system cost.)
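For what it’s worth, the arithmetic behind that wild guess can be laid out explicitly – every input here is a guess from the paragraph above, not a measured figure:

```python
# Back-of-envelope estimate of the FTE staffing cost of maintaining the
# UK subscription-access system. All inputs are wild guesses, not data.
universities     = 130    # ~1 FTE per UK university, averaged out
non_universities = 10     # all non-university subscribing institutions
central_services = 5      # JISC-style collective negotiation, etc.
big_publishers   = 5 * 5  # ~5 FTE at each of the five biggest publishers
other_publishers = 10     # everyone else combined

total_fte = (universities + non_universities + central_services
             + big_publishers + other_publishers)
print(total_fte)  # 180
```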

This number compares interestingly with the 192 FTE that it was estimated would be needed to deal with the administration of making all 140,000 UK research papers gold OA – they’re certainly in the same ballpark, given the wide margins of error. It has substantial implications for any “just switch everything”-type proposals, for obvious reasons, but would also be a very interesting result in and of itself.

Lee of Portrush: an introduction

February 27th, 2016

One of the projects I’ve been meaning to get around to for a while is scanning and dating a boxful of old cabinet photographs and postcards produced by Lee of Portrush in the late nineteenth and early twentieth century.

At least five members and three generations of the Lee family worked as professional photographers in this small Northern Irish town – the last of them was my grandfather, William Lee, who carried the business on into the 1970s. Their later output doesn’t turn up much – I don’t think I’ve run across anything post-1920s – but a steady trickle of their older photographs appears on eBay and on family history sites. They produced a range of monochrome and coloured postcards of Portrush and the surrounding area, did a good trade in portrait photographs, and at one point ended up proprietors of (both temperance and non-temperance) hotels. Briefly, one brother decamped to South Africa (before deciding to come home again) and they proudly announced “Portrush, Coleraine, and Cape Town” – a combination rarely encountered. A more unusual line of work, however, was their studio at the Giant’s Causeway.

The Causeway is the only World Heritage Site in Northern Ireland, and was as popular a tourist attraction then as now. A narrow-gauge electric tramline was built out from Portrush to Bushmills and then the Causeway in the 1880s, bringing in a sharp increase in visitors. And – because the Victorians were more or less the same people as we are now – they decided there was no better way to respond to a wonder of the natural world than to have your photograph taken while standing on it, so that you can show it to all your friends. Granted, you had to pay someone to take the photo, sit still with a rictus grin, then wait for them to faff around with wet plates and developer; not quite an iPhone selfie, but the spirit is the same even if the subjects were wearing crinolines. There is nothing new in this world.

The Lees responded cheerfully to this, and in addition to the profitable postcard trade, made a great deal of money by taking photographs of tourists up from Belfast or Dublin, or even further afield. (They then lost it again over the years; Portrush was not a great place for long-term investment once holidays to the Mediterranean became popular.)

Many of these are sat in shoeboxes; some turn up occasionally on eBay, where I buy them if they’re a few pounds. They’re nice things to have, since so little else of the business survives. One problem is that very few are clearly dated, and as all parts of the family seem to have used “Lees Studio”, or a variant, it’s not easy to put them in order, or to give a historical context. For the people who have these as genealogical artefacts, this is something of a problem – ideally, we’d be able to say that this particular card style was early, 1880-1890, that address was later, etc., to help give some clues as to when a photograph was taken.

Fast forward a few years. Last November, I had an email from John Kavanaugh, who’d found a Lee photograph of his great-great-grandfather (John Kavanagh, 1822-1904), and managed to recreate the scene on a visit to the Causeway:

Family resemblance, 1895-2015
Courtesy John Kavanaugh/Efren Gonzalez

It’s quite striking how similar the two are. The stone the elder John was sat on has now crumbled, fallen, or been moved, but the rock formations behind him are unchanged. The original photo is dated c. 1895, so this covers a hundred and twenty years and five generations.

So, taking this as a good impetus to get around to the problem, I borrowed a scanner yesterday and set to. Fifty-odd photographs later, I’ve updated the collection on flickr, and over the next few posts I’ll try and draw together some notes on how to date them.

Android preinstalls – a ticking timebomb

February 17th, 2016

So, I got a push notification on my phone today from “Peel Smart Remote”. Never heard of it. This turns out to be one of those applications for people who really need to use their phone as a TV remote; a bit pointless, but hey, I’m sure someone thinks it’s a great idea.

I don’t own a TV, so unsurprisingly, I’m not one of those people. The app turned out to be pre-installed on my phone (originally under a different name), and is undeleteable – but I can “disable” it and delete any data it had recorded. (Data they should, of course, not have, but trying to tell American startups about privacy is like trying to explain delayed gratification to a piranha, so let’s not even go there.)

I then went through my phone’s app list looking for other junk like this. Four, all with pre-approved push notifications, all of which are now disabled. (I’m leaving aside the pre-installed ones which I might actually want to use…)

But when I disabled them, I happened to scroll down and look at the permissions. The Peel app, which has been running quietly in the background for about two years, has had an astonishing range of them:

  • read contact data (giving the ability to know personal details of anyone stored as a contact – along with metadata about when and how I contact them)
  • create calendar events and email guests without my awareness
  • read and write anything stored on the SD card
  • full internet access

Let’s not even ask why a TV remote would need the ability to find out who all my contacts are.

The others were not much better. Blurb (a small print-on-demand publishing firm) could read my data and find out who was calling me. Flipboard (a social-media aggregator) could read my data. And “ChatON”, which seems to be some kind of now-defunct messaging service run by Samsung; its app could call people, record audio, take pictures, find my location, read all my data (and my contact data), create accounts, shut down other applications, force the phone to remain active – basically every permission in the book. Again, that’s been burbling away for two years. Always on, starting at launch, and… what?

Now, I’ll be fair here – it’s unlikely that a startup like Peel has a business plan that involves “gather a load of personal data and sell it”. But how could I know for sure? It’s hardly an unknown approach out there. And on reflection, maybe it’s not their business plan we need to worry about.

Let’s imagine a startup made something like ChatON. They get widespread ‘adoption’ (by paying for preinstalls), but ultimately it doesn’t take off. They fail – as ChatON did – but without the ability of a large corporation to write it off as a failure and file it away, the residue of the company and its assets are sold for some trivial sum to whoever turns up.

Their assets include a hundred million always-on apps on phones worldwide, with security permissions to record everything and transmit it, and pre-approved automatic updates.

If you’re not grimacing at that, you haven’t thought about it enough.

This is one thing that Apple have got right – very little preinstalled that isn’t from the manufacturer directly. Maybe I could switch to an iPhone, or maybe it’s time to finally think about Cyanogen.

But that’d only fix it for me. The underlying systemic risk is still there… and one day we’re all going to get burned. Preinstalled third-party apps with broad permissions are a ticking timebomb, and phone manufacturers should think hard about their (legal and reputational) liability.

Shifting of the megajournal market

February 5th, 2016

One of the most striking developments in the last ten years of scholarly publishing – outside, of course, open access itself – has been the rise of the “megajournal”: an online-only journal with a very broad remit, no arbitrary size limits, and a low threshold for inclusion.

For many years, the megajournal was more or less synonymous with PLOS One, which peaked in 2013-14 with around 32,000 papers per year, an unprecedented number. The journal began to falter a little in early 2014, and showed a substantial decline in 2015, dropping to a mere (!) 26,000 papers.

One commentator highlighted a point I found very interesting: while PLOS One was shrinking, other megajournals were taking up the slack. The two highlighted here were Scientific Reports (Nature) and RSC Advances (Royal Society of Chemistry) – no others have grown to quite the same extent.

We’re now a month into 2016, and it looks like this trend has continued – and much more dramatically than I expected. Here are the relative article numbers for the first five weeks of 2016, measured through three different sources: the journals’ own sites, Scopus, and Web of Science.

[Chart: relative megajournal article numbers, first five weeks of 2016]

The journal sites are probably the most accurate measure of what’s been published as of today, and unsurprisingly give the largest number of papers (5,965 in total). Here we see three similar groups – PLOS One 38%, Scientific Reports 31%, and RSC Advances 31%. (The RSC Advances figure has been adjusted to remove about 450 “accepted manuscripts” nominally dated 2016 – while publicly available, these are simply posted earlier in the process than they would be at the other journals, and so including them would give an inflated estimate of the numbers actually being published.)

Scopus and Web of Science return smaller numbers (2766 and 3499 papers respectively) and show quite divergent patterns – PLOS One is on 36% in Scopus and 52% in Web of Science, with Scientific Reports on 42% and 37%, and RSC Advances on 22% and 11%. It’s not much of a surprise that the major databases are relatively slow to update, though it’s interesting to see that they update different journals at different rates. Scopus is the only one of the three sources to suggest that PLOS One is no longer the largest journal – but for how long?
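The shares above are just each journal’s count as a proportion of the source’s total. The per-journal counts below are illustrative back-calculations from the rounded journal-site percentages, not the original raw figures:

```python
# Approximate first-five-weeks-of-2016 paper counts (journal sites).
# Counts are back-calculated from the rounded percentages in the text.
counts = {
    "PLOS One": 2267,            # ~38% of 5965
    "Scientific Reports": 1849,  # ~31%
    "RSC Advances": 1849,        # ~31% (after removing "accepted manuscripts")
}
total = sum(counts.values())
shares = {journal: round(100 * n / total) for journal, n in counts.items()}
print(total, shares)  # 5965 {'PLOS One': 38, 'Scientific Reports': 31, 'RSC Advances': 31}
```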

Whichever source we use, it seems clear that PLOS One is now no longer massively dominant. There’s nothing wrong with that, of course – in many ways, having two or three competing but comparable large megajournals will be a much better situation than simply having one. And I won’t try to speculate on the reasons (changing impact factor? APC cost? Turnaround time? Shifting fashions?).

It will be very interesting to look at these numbers again in two or three months…

Projects and plans

January 15th, 2016

15th January. A bit late for New Year’s resolutions, and I’m never much of a one for them anyway.

Still, it’s a good time to take stock. What am I hoping to achieve this year? I have omitted the personal aims, as they’re not of great interest to anyone who’s not me, but otherwise, hopefully without overcommitting myself…

Professionally

2015 was pretty good. I planned a rather complex library move (twice, after the first time was delayed, which is a good way to learn from your mistakes without having actually had to commit them). Two weeks into the new year and ~400 metres of books shifted, it’s looking like it’s actually working, so let’s call that one a conditional success. First order of business: finish it off. And write up some notes on it so that others may learn to not do as I have done.

Secondly, get something published again. I had my first ‘proper’ academic publication in late 2015, and though it’s on a topic that approximately three people care about, I’m still glad it’s done and out there. (I have something to point at next time I’m glibly assured “oh, that approach never happens any more”. This is a recurrent theme in discussions about scholarly publishing; but I digress.) I would recommend it to any academic librarian as an exercise in understanding what your researchers suffer.

(I have a couple of projects on the boil which I’d like to write up properly, of which more anon.)

Thirdly, finish putting together the papers from the 2014 Polar Libraries Colloquy. Call this a public admission that I’ve been dragging my heels on it.

Lastly, consider Chartership. I’ve avoided this for many years, seeing it as a rather daunting pile of paperwork, but it’s probably a sensible thing to think about.

Projects

Firstly, I’d like to clear off the History of Parliament work on Wikidata. I haven’t really written this up yet (maybe that’s step 1.1) but, in short, I’m trying to get every MP in the History of Parliament database listed and cross-referenced in Wikidata. At the moment, we have around 5,200 of them listed, out of a total of 22,200 – so we’re getting there. (Raw data here.) Finding the next couple of thousand who’re already listed, and mass-creating the others, is definitely an achievable task.

Secondly, and building on this, I did some work in the autumn of 2015 on building a framework for linking EveryPolitician and Wikidata. I need to pick this back up and work out how we can best represent politicians in general – what are the best data structures for things like constituencies, parliamentary terms, parties?

This leads into the third project, which is the general use of Wikidata as a “biographical spine”. Charles Matthews, Magnus Manske, and I have been working on this for a couple of years, and it really is beginning to bear fruit. We’re working to pull together as many large biographical databases as possible, and have them talking to one another through Wikidata, so that we can start bringing data and links from one to the users of another. This certainly won’t be completed in 2016 – but it would be good to write some of it up in a single report so that it’s clear what we’re doing, and hopefully start advertising it to researchers who could benefit.

Fourthly (oh, goodness), the Oxford Dictionary of National Biography. This is a project I embarked on back in 2013; the goal is to get a reliable cross-reference between Wikipedia/Wikidata and the ODNB – now complete, mainly thanks to Charles Matthews – and then to fix all the vague, unhelpful “see DNB” Wikipedia citations into nicely formatted, linkable ones which readers can actually benefit from. This second part is going to take a long time, but I’ve made some rudimentary attempts at auto-predicting the required citations to be fixed by hand, and hopefully we’ll get there in time.

Moving away from Wikidata, early last year I started on what has turned into the Birthdays Project – an attempt to study the way in which people misremember their birthdays when they’re not well documented. This is generally known and the basic result is kind of obvious, but it has only been discussed (very cursorily) in the academic literature before, and I don’t think anyone’s properly attacked it with substantial data, multiple cultural contexts, etc. I wrote up a few notes on this in early 2015 (part 1, part 2), but since then I’ve nailed down some more data, figured out a useful way of visualising it, and so on. No idea if it’s publishable per se, but it would be good to have it written up.

That… looks like a busy year ahead.

Finally, going places and doing things. I have a couple of long-awaited holidays planned, and some people I’m looking forward to seeing on them. I will be going to the Polar Libraries Colloquy in Alaska, but I won’t be going to Wikimania in June – I’ll be elsewhere. I’m sad to miss it this year, as it looks to be an excellent event.

Most popular videos on Wikipedia, 2015

January 14th, 2016

One of the big outstanding questions with Wikipedia for many years was the usage data of images. We had reasonably good data for article pageviews, but not for the usage of images – we had to come up with proxies like the number of times a page containing a given image was loaded. This was good enough as far as it went, but didn’t (for example) count the usage of any files hotlinked elsewhere.

In 2015, we finally got the media-pageviews database up and running, which means we now have a year’s worth of data to look at. In December, someone produced an aggregated dataset of the year to date, covering video & audio files.

This lists some 540,000 files, viewed an aggregated total of 2,869 million times over about 340 days – equivalent to 3,080 million over a year. This covers use on Wikipedia, on other Wikimedia projects, and hotlinked by the web at large. (Note that while we’re historically mostly concerned with Wikipedia pageviews, almost all of these videos will be hosted on Commons.) The top thirty:

14436640 President Obama on Death of Osama bin Laden.ogv
10882048 Bombers of WW1.ogg
10675610 20090124 WeeklyAddress.ogv
10214121 Tanks of WWI.ogg
9922971 Robert J Flaherty – 1922 – Nanook Of The North (Nanuk El Esquimal).ogv
9272975 President Obama Makes a Statement on Iraq – 080714.ogg
7889086 Eurofighter 9803.ogg
7445910 SFP 186 – Flug ueber Berlin.ogv
7127611 Ward Cunningham, Inventor of the Wiki.webm
6870839 A11v 1092338.ogg
6865024 Ich bin ein Berliner Speech (June 26, 1963) John Fitzgerald Kennedy trimmed.theora.ogv
6759350 Editing Hoxne Hoard at the British Museum.ogv
6248188 Dubai’s Rapid Growth.ogv
6212227 Wikipedia Edit 2014.webm
6131081 Newman Laugh-O-Gram (1921).webm
6100278 Kennedy inauguration footage.ogg
5951903 Hiroshima Aftermath 1946 USAF Film.ogg
5902851 Wikimania – the Wikimentary.webm
5692587 Salt March.ogg
5679203 CITIZENFOUR (2014) trailer.webm
5534983 Reagan Space Shuttle Challenger Speech.ogv
5446316 Medical aspect, Hiroshima, Japan, 1946-03-23, 342-USAF-11034.ogv
5434404 Physical damage, blast effect, Hiroshima, 1946-03-13 ~ 1946-04-08, 342-USAF-11071.ogv
5232118 A Day with Thomas Edison (1922).webm
5168431 1965-02-08 Showdown in Vietnam.ogv
5090636 Moon transit of sun large.ogg
4996850 President Kennedy speech on the space effort at Rice University, September 12, 1962.ogg
4983430 Burj Dubai Evolution.ogv
4981183 Message to Scientology.ogv

(Full data is here; note that it’s a 17 MB TSV file)

It’s an interesting mix – and every one of the top 30 is a video, not an audio file. I’m not sure there’s a definite theme there – though “public domain history” does well – but it’d reward further investigation…
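(Incidentally, the annualised figure quoted at the top of this post is simple pro-rating of the ~340 days of data:)

```python
# Pro-rating ~340 days of media-pageview data to a full year
# (figures in millions of views, as quoted in the text).
views_millions = 2869
days_covered = 340
annualised = views_millions * 365 / days_covered
print(round(annualised))  # 3080
```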

Freedom of Information – why universities are, and should remain, subject

December 11th, 2015

There has been an awful lot of discussion prompted by the recent Higher Education green paper (Higher education: teaching excellence, social mobility and student choice). The majority of this has focused on the major reforms it proposes to the structure of HE; I am not particularly qualified to comment on this, but I recommend Martin Eve’s ongoing series of responses for discussion of the proposals.

There is one bit, however, which I do feel qualified to comment on – because it happens to be exactly the topic on which I wrote my MSc thesis, some ten years ago. This is the proposal that universities should quietly be exempted from the Freedom of Information Act:

There are a number of requirements placed on HEFCE-funded providers which do not apply to alternative providers. Many derive from treating HEFCE-funded providers as ‘public bodies’. This is despite the fact that the income of nearly all of these providers is no longer principally from direct grant and tuition fee income is not treated as public funding. Alternative providers are not treated as public bodies. As a result there is an uneven playing field in terms of costs and responsibilities. For example, the cost to providers of being within the scope of the Freedom of Information Act is estimated at around £10m per year.
In principle, we want to see all higher education providers subject to the same requirements, and wherever possible we are seeking to reduce burdens and deregulate. However we may wish to consider some exceptions to this general rule if it were in the interest of students and the wider public.
Question 23: Do you agree with the proposed deregulatory measures? Please give reasons for your answer, including how the proposals would change the burden on providers. Please quantify the benefits and/or costs where possible.

Unsurprisingly, many universities are delighted with this suggestion – as any public body would be, if told that a little administrative tweak could remove their Freedom of Information obligations. However, the core problem is that FOI does not work this way; these “deregulatory measures” would have to involve amending the original Freedom of Information Act, which the proposal doesn’t quite seem to realise. And an incidental example in question 23 of an unrelated consultation – especially when a different consultation on FOI has just closed – is a fairly limited basis for making such a move!

What follows is a longer version of my intended response – I will condense it somewhat before submitting in January – but comments are welcome.

There are four problems with the specific proposal to remove Higher Education institutions from the scope of the Freedom of Information Act – i) the legal framework is complex, and a “public funds” test is not the sole issue involved; ii) in any case, institutions would remain publicly funded after these changes; iii) removing institutions from the scope of the Act would not produce a “level playing field” either in the UK or internationally; and iv) all these aside, including institutions in the scope of FOI brings a net benefit to the country.

  1. Firstly, the system by which Higher Education institutions become subject to the Freedom of Information Act 2000 is complex, and does not work as described by the proposal. There is no general “public bodies” test as such. Instead, under the Act, HE institutions can become conditionally subject through receiving HEFCE funds (schedule 1, para 53(1)(b)); through being designated as eligible to receive such funds (53(1)(d)); or through being an associated body (eg a constituent college) of any such institution (53(1)(e)). There is no test for the amount or proportion of income represented by this funding, so the note in para 17 of the proposal that “…the income of nearly all of these providers is no longer principally from direct grant” is moot.
  2. In addition, however, any institution operating within the Further Education sector is automatically subject to the Act (53(1)(a)) as is any institution operated by a Higher Education Corporation (53(1)(c)). These provisions are not conditional and are not affected by their sources of funding. Were all public funds of all kinds to be withdrawn overnight, the Act as it exists would still leave any HEC explicitly subject to FOI.
  3. This sits strangely alongside the general thrust of this section, which is structured around increasing the powers and capabilities of HECs. Removing the link between FOI and HEFCE would exempt one group (predominantly older and more influential institutions) while leaving the other entirely subject to the Act. For example, the University of Oxford would be exempt, but Oxford Brookes University would not. The alternative would be to remove all HE institutions, including HECs, from the scope of the Act – but this is not a proposal raised in the consultation, which has chosen to focus on the argument that public funds are the main driver for FOI applicability.
  4. This leads into the second point, the definition of “public funds”. If we were to accept the position that “public funds” is the key test to determine FOI applicability, it is clear that there would still be substantial public monies channelled into the higher education system after the effects of the ongoing reforms. Tuition fees, though notionally private payments, are supported by a publicly-organised loan scheme. The public purse will underwrite the loans that are used to fund tuition fees, and make good losses that arise through long-term defaults or writedowns. It is hard to see this as devoid of public involvement.
  5. Meanwhile, the broad outline of public research funding will not substantially change. The government has committed to maintaining the dual support system, and while the review is consulting on how best this can be structured (see eg Questions 24 and 25) it is clear that institutions will continue to receive income in a similar form, from a body which has taken over the existing HEFCE research funding role. This is undeniably public funds, and – importantly – as it currently comes through HEFCE, it would trigger the FOI applicability requirements even were tuition costs to vanish entirely from consideration. Funding from the research councils is also substantial, and again comes from public sources.
  6. There are also other non-trivial (though relatively smaller) sources of public income for HEIs, including grants for providing FE courses, public sector capital spending, income from NHS trusts or local authorities, etc. While perhaps not enough to constitute public funding in and of themselves, they do support the position that, broadly speaking, these institutions remain publicly funded despite the question of tuition fees.
  7. Thirdly, the consultation raised concerns about a “level playing field” among institutions. If HEIs were to be removed wholesale from the 2000 Act, it might or might not materially affect the FOI status of Welsh or Northern Irish universities (who would be covered by a change to the 2000 Act, but have different funding systems), but could not affect the FOI status of Scottish universities (controlled by the Freedom of Information Act (Scotland) 2002) – leading inexorably back to an unequal playing field across the UK.
  8. Internationally, there are similar problems. The position that “public” but not “private” universities should be subject to Freedom of Information regulations is a widely accepted principle across a range of countries, ranging from Bulgaria to New Zealand. In 2005, I carried out a study which identified that in 67 countries with FOI-type legislation, 39 included public universities in the scope of the legislation, 27 were unclear, and only one explicitly excluded them – and this one was planning to extend the scope of the law. In the majority of jurisdictions, private universities were not covered, though some countries extended limited FOI powers to certain aspects of their work. Under any reasonable definition, the existing “public” British universities will remain quasi-public institutions. They will continue to receive public funds through various channels, and to be heavily influenced by government policy. If asked, the architects of these proposed reforms would no doubt – emphatically and repeatedly – state that they do not consider it a privatisation, and the university governing bodies would agree. Given this, withdrawing their FOI compliance requirement would be unusual; it would place them in a different legal position to most of their overseas counterparts.
  9. Finally, applying Freedom of Information laws to universities is, and will remain, a net good. The cost to the sector – ultimately borne by the public purse – is minor in comparison to the benefits from transparency and efficiency that FOI can bring. This is true for universities as much as it is for other sectors.
  10. From a national perspective, these bodies are responsible for spending several billion pounds of public money, and for implementing substantial portions of the government’s policies not just on education, but on issues as varied as social inclusion, visitor visas, and industrial development. All of these are matters of substantial public interest. On an individual basis, these bodies can have remarkably broad powers. They regulate employment, housing, and substantial portions of daily life for hundreds of thousands of people. In areas with a very high student population, they can have an impact on their local communities rivalling that of the council! The benefits from public awareness and oversight of these roles are substantial.
  11. One concern raised by universities is that these requests pose a heavy burden on the sector and are often frivolous. It is worth considering some numbers here. In 2013 (a year with a “huge increase” in FOI requests), surveyed institutions received an average of 184 submissions; across the 160 universities in the country (including Scotland), this would suggest a total of around 30,000 submissions. 93% of these queries were handled in good time. 54.4% were disclosed in full, 24.3% were provided in part, and just 8.5% were fully withheld. Only 6.6% were rejected as the information was not held by the institution, and 0.3% rejected as vexatious. The remainder were withdrawn, still in progress, or of unclear status. 1.1% of rejected or partially fulfilled requests prompted a request for an internal review, and slightly over half of these were upheld. Only 0.1% were referred to an external appeal (the Information Commissioner) and exactly half of these were upheld.
  12. These figures suggest that the universities are dealing with their FOI requirements cleanly, sensibly, and in good order – probably better than many other public bodies, and credit to them for it. It does not bear signs of a looming catastrophe. Institutions are disclosing information they are asked for in more than three quarters of cases, indicating that it is material that can and should be publicly available, but has so far required the use of FOI legislation to obtain it. They are not dealing with a substantial number of frivolous requests (in this sample, an average of just five requests per university per year were declined as vexatious or repeated). And, when their actions are challenged and reviewed, the decisions indicate that institutions are striking a reasonable balance between caution and disclosure, and that the enquiries are often reasonable and justified.
  13. It is certainly the case that implementing FOI can be expensive. However, all good records management practice will cost more money than simply ignoring the problem! It is likely that a substantial proportion of the costs currently considered as “FOI compliance” would be required, in any case, to handle compliance with other legislation – such as the Data Protection Act or the Environmental Information Regulations – or to handle routine internal records management work. The quoted figure of £10m per year in compliance costs should thus be treated with a certain caution – a substantial amount of this money would likely be spent as business as usual without FOI.
  14. FOI has an unusual position here in that it can be dealt with pre-emptively, by transitioning to a policy of routinely publishing information that would in any case be disclosed on request, and by empowering staff to deal with many non-controversial requests for information as “business as usual” rather than referring them for internal FOI review. For example, it is noticeable that the majority of FOI enquiries relate to “student issues and numbers”. A substantial proportion of these relate to admission statistics, and similar topics; this is information that could easily be routinely and uncontroversially published without waiting for a request, reviewing the request, discussing it internally, and then agreeing to publish.
  15. In conclusion, this proposal i) cannot work as planned; ii) is based on a tenuous and restrictive interpretation of what constitutes a public body; iii) if implemented, will affect some institutions substantially more than it does others; and iv) is, in any case, undesirable as a policy, and would be unlikely to lead to significant savings.
  16. Should a “level playing field” be desired, a far more equitable solution would be to consider extending the scope of the Act to encompass the “private” HE institutions, perhaps in a more limited fashion appropriate to their status. The driving factors which make robust freedom of information regulations important for “public” institutions are no less valid for “private” ones; they carry out a similar quasi-public role and, especially from a student perspective, it seems unreasonable for them to have reduced rights simply due to the legal status of their university. Partially extending the legislation to cover private institutions would be unusual, but not unprecedented, by international standards.

Page and colour charges: they’re still a thing

November 24th, 2015 by

So, I have a paper out! Very exciting – this is my first ‘proper’ academic publication (and it came out the day after my birthday, so there’s that, too.)

Gray, Andrew (2015). Considering Non-Open Access Publication Charges in the “Total Cost of Publication”. Publications 2015, 3(4), 248-262; doi:10.3390/publications3040248

Recent research has tried to calculate the “total cost of publication” in the British academic sector, bringing together the costs of journal subscriptions, the article processing charges (APCs) paid to publish open-access content, and the indirect costs of handling open-access mandates. This study adds an estimate for the other publication charges (predominantly page and colour charges) currently paid by research institutions, a significant element which has been neglected by recent studies. When these charges are included in the calculation, the total cost to institutions as of 2013/14 is around 18.5% over and above the cost of journal subscriptions—11% from APCs, 5.5% from indirect costs, and 2% from other publication charges. For the British academic sector as a whole, this represents a total cost of publication around £213 million against a conservatively estimated journal spend of £180 million, with non-APC publication charges representing around £3.6 million. A case study is presented to show that these costs may be unexpectedly high for individual institutions, depending on disciplinary focus. The feasibility of collecting this data on a widespread basis is discussed, along with the possibility of using it to inform future subscription negotiations with publishers.

The problem

So what’s this all about, then?

We (in the UK particularly) have spent a lot of effort trying to reduce the cost of the scholarly publishing system, which is remarkably high; British university libraries collectively spend £180,000,000 per year on subscriptions, comparable to the entire budget of one of the smaller research councils. The major driver here is open access – trying to make research available to read without charges – and so there has been a lot of interest in trying to arrange matters so that the costs of publishing open access don’t rise faster than the corresponding reduction in subscriptions. The general term for this is the “total cost of publication” (TCP) – ie, the costs of all the parts of the system, including both direct spending and indirect management costs (it’s surprising how much it costs to shuffle paperwork).

This is a sensible goal – it keeps the net cost under control – but the focus on OA costs and subscriptions misses out some other contributions to the balance sheet.

Historically, a lot of the cost of scholarly publishing was borne by authors or their institutions through publication charges – page charges, colour charges, submission charges, and a few other oddities. These became less common (for various reasons, and there’s an interesting history to be written) through the 1980s, and – outside of open-access article processing charges – compulsory publication charges are now rare for most journals in most fields. To many researchers (including a lot of those who’ve helped set OA policy), they simply don’t exist as a significant concern.

However, during 2013-14 it became rapidly apparent to me that my institution was spending a lot of money on page charges, which didn’t fit with what was being reported elsewhere, and didn’t fit with the general recommendations from the funding bodies on how to allocate costs. These charges were not being taken into consideration in the various TCP offsetting schemes, with the effect that we were seeing a lot of spending going direct to publishers, but outside the carefully constructed framework for controlling costs.

The study

I dug back through the recent literature on the costs of journal publishing – there had been a flurry of studies in the early 2000s as people began to work out how to handle OA costs – and tried to determine what the levels of other “publication charges” had been just before OA spending took off. It turned out to be tricky to come up with a firm estimate, but my best guess was that non-OA publication charges were around 3-5% of subscription costs in 2004-5, and had dropped since then. By now (ie 2013/14), it’s probably around 2%, assuming a continual gentle decline.

Firstly, this is quite a lot of money. If British universities spend £180,000,000 per year, then 2% is a further £3,600,000 – comparable to forty or fifty PhD studentships. It’s particularly striking when we bear in mind that this is money many institutions may not realise they are spending.
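
These percentages can be sanity-checked with a quick awk one-liner, using the £180m subscription figure as the base and the 11% / 5.5% / 2% splits quoted in the abstract:

```shell
# Publication costs as percentages on top of a £180m subscription base
awk -v s=180 'BEGIN {
  apc = s * 0.11    # article processing charges
  ind = s * 0.055   # indirect costs of handling OA
  oth = s * 0.02    # page/colour and other publication charges
  printf "APCs £%.1fm, indirect £%.1fm, other £%.1fm, grand total £%.1fm\n",
         apc, ind, oth, s + apc + ind + oth
}'
```

which reproduces both the £3.6m for other publication charges and the overall £213m total cost of publication.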

Secondly, it’s clear that the cost is distributed very erratically. My own institution spent the equivalent of 15-18% of its subscription budget on non-OA publication charges, driven mainly by very heavy page charges in certain well-used earth sciences journals. (From another angle, Frank Norman has since reported that his institution, in biomedicine, had non-OA publication charges equal to about 10% of subscriptions, and in the early 2000s it was three times that.) Given the disciplinary concentration, it’s likely that spending in universities is similarly patchy – individual departments may have dramatically higher publication costs than the overall average.

Thirdly, this spending is, currently, invisible to policymakers. Of the 29 institutions who provided article-level spending records for 3,721 papers in 2014, only fifteen individual papers could be identified as having page or colour charges (mostly at Leeds), with another ten mentioned in the general reports. Twenty-five papers is clearly not going to get us anywhere near the overall spending estimates. This data isn’t being collected centrally by RCUK/JISC – who are otherwise doing sterling work on tracking APCs – and it’s not clear if it even gets collected centrally by universities. The majority of non-OA publication charges may just disappear into the morass of “miscellaneous spending” in grant budgets.

Where next?

Firstly, we need to get a good idea of what’s actually being spent. My 2% estimate is a pretty wide one – I wouldn’t be surprised if it was 1% or 3%, or further away. The methodology we used was quite time-consuming – effectively identifying every paper with possible charges and chasing the authors to confirm – but it did work. Perhaps a better method, for larger institutions, would be sampling the departments with probable concentrations of page charges, or it might be that some institutions have robust enough finance systems that a lot of cases can be identified with a bit of research. Perhaps we can even obtain this information direct from publishers. Whatever method is used, the existing RCUK/JISC APC reporting infrastructure offers a good way to report it to a central body for aggregation, deduplication, and republication.

Secondly, we need to account for non-OA publication charges as part of the total cost of publication. They are smaller than APCs, but they are very significant for some institutions. While it may not be appropriate to use the same offsetting schemes, if they’re not brought into the equation there will be a risk that publishers are tempted to increase them dramatically – an extra revenue stream which is not capped and controlled in the way that subscriptions and APCs are. There’s no sign that anyone is doing this now – and most of the major commercial publishers no longer use page charges – but it remains a concern.

Lastly – the “more research is needed” section – there are two big questions still outstanding for the total cost of publication, even with this new element added.

  • What about the indirect costs of subscription publishing? We have a good handle on the indirect costs of running repositories and handling OA payments, but we have no idea what the infrastructure to keep a subscription system working costs us. This might include, for example, things like – the cost of staff time to manage subscriptions; the cost of staff time to run authentication and proxy servers; the cash cost of third-party authentication services like Athens; the cost to the publishers of maintaining security barriers; the cost in wasted researcher time trying to obtain material; &c.
  • If everything is expressed as a proportion of subscription spending, how much is that? My £180,000,000 figure is an inflation-adjusted estimate, based on data from SCONUL in 2010/11. There have been more recent SCONUL surveys, but the results have not been published. A firm understanding of how much we actually spend is vital to actually make sense of these results.

Watching the Antarctic days roll by

November 22nd, 2015 by

[Note: this post embeds some very large gif files. Cancel now if on a slow connection…]

A while ago, I was playing with imagemagick (it’s an amazing tool) and trying to make animated gifs. It worked, sort of. One of the things I’d been meaning to try for a while – but never quite got around to – was animating webcam images. Last week, I finally got around to it.

At work, we have a webcam pointed at the Halley VI Antarctic station. It’s turned on year-round, sending back one picture hourly, fairly reliably. Being on a pole in the middle of Antarctica, it’s also free from the major problem that arises when trying to animate webcams – someone moving them around every now and again.

And the pictures are remarkable. Halley VI is an imposing-looking building at the best of times, but on a dark morning, looming out of a snowstorm, it’s like something from a film.

Twenty days in late November 2014 – note the sun tracking by the top of the image each day.

Ten days at the end of January 2015, with 24-hour daylight and a lot of activity around the station.

One shot each day (at 12.30pm UK time, so about 10am? local solar time), chained over 373 days – so slightly more than a full year. It opens in mid-November 2014, about the time the first aircraft arrive and the summer activities begin, passes through the (very busy) summer season, then quietens down as winter approaches. The nights appear as momentary flashes, then get longer and longer until they’re permanently dark in June/July. Then it slowly returns…

The code for this is pretty simple. Assemble all the files in a single directory – either sourced locally or downloaded with wget/curl – and ensure they’re named in a sequential way. All of these, for example, were of the form halley-2015-01-02-12-30.jpg – the 12.30 shot on January 2nd.
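
If you’re fetching the frames in the first place, a short wget loop will do it. The URL below is a placeholder – I’m assuming an archive that exposes one file per timestamp – so adjust the pattern to match the real source:

```shell
# Fetch the 12.30 frame for each day of January 2015.
# The example.org URL is hypothetical; substitute the real webcam archive.
mkdir -p images
for day in $(seq -w 1 31) ; do
  wget -q -O "images/halley-2015-01-${day}-12-30.jpg" \
    "https://example.org/webcam/halley/halley-2015-01-${day}-12-30.jpg"
done
```

(`seq -w` pads the day numbers with zeroes, so the filenames sort – and therefore animate – in the right order.)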

Make sure to delete any that returned error messages in the download or are below a certain size. I had one or two zero-content frames that made the system hiccup a bit, and find images/*.jpg -size 0 -delete is good for handling these.

Then run:

convert -resize 500x500 images/*.jpg animation.gif

That’s it. The resize is to prevent it getting disgustingly large; adding -layers optimize shaves a little more off the filesize. Even so, though, you’ll find that assembling more than a few hundred frames makes your system quite unhappy (it may lock up) and the resulting gif is far too large to be useful. For the images above, some example filters on the merge:

convert -resize 500x500 images/halley-2015-01-2*.jpg animation.gif

convert -resize 500x500 images/*12-30.jpg animation.gif

– so it only pulled together the frames we were interested in. Of course, you could do a simpler (or more complex) merge by copying the relevant ones to a separate directory and just merging everything there.
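
Chaining the steps together, the whole gif build reduces to a couple of lines – a minimal sketch, assuming the frames are already downloaded and named as above:

```shell
#!/bin/sh
# Drop any zero-byte frames left over from failed downloads,
# then assemble just the 12.30 shots into one resized gif.
find images -name '*.jpg' -size 0 -delete
convert -resize 500x500 images/*12-30.jpg animation.gif
```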

Given the size problems of gifs, making a larger one is probably best left to video. Here’s the entire year, using every frame (23 MB):

A year at Halley VI

Note how short the day/night pulses get towards the ends of the spring/autumn.

For this, you don’t have to resize, and you can produce it at the full size of the webcam images (in this case, 1920×1080):

mencoder mf://images/*.jpg -mf w=1920:h=1080:fps=25:type=jpg -ovc lavc -lavcopts vcodec=mpeg4:mbd=2:trell -oac copy -o halley.avi

The key parts here are the images list (you can filter again as before) and the fps setting; I ran it at various speeds and found 40fps seemed a happy medium – 25fps is just a little jerky. The version above is reduced to 512px wide:

mencoder mf://images/*.jpg -mf w=1920:h=1080:fps=25:type=jpg -vf scale=512:288 -ovc lavc -lavcopts vcodec=mpeg4:mbd=2:trell -oac copy -o halley.avi

Taking pictures with flying government lasers

October 2nd, 2015 by

Well, sort of.

A few weeks ago, the Environment Agency released the first tranche of their LIDAR survey data, collected by airborne survey. This covers (most of) England, at resolutions varying from 2m down to 25cm.

It’s great fun. After a bit of back-and-forth (and hastily figuring out how to use QGIS), here are two rendered images I made of Durham, one with buildings and one without, now on Commons:

The first is shown with buildings, the second without. Both are at 1m resolution, the best currently available for the area. Note in particular the very striking embankment and cutting for the railway viaduct (top left). These look like they could be very useful things to produce for Commons, especially since it’s – effectively – very recent, openly licensed, aerial imagery…

1. Selecting a suitable area

Generating these was, on the whole, fairly easy. First, install QGIS (simplicity itself on a linux machine, probably not too much hassle elsewhere). Then, go to the main data page and find the area you’re interested in. It’s arranged on an Ordnance Survey grid – click anywhere on the map to select a grid square. Major grid squares (Durham is NZ24) are 10km by 10km, and all data will be downloaded in a zip file containing tiles for that particular region.

Let’s say we want to try Cambridge. The TL45 square neatly cuts off North Cambridge but most of the city is there. If we look at the bottom part of the screen, it offers “Digital Terrain Model” at 2m and 1m resolution, and “Digital Surface Model” likewise. The DTM is the version just showing the terrain (no buildings, trees, etc) while the DSM has all the surface features included. Let’s try the DSM, as Cambridge is not exactly mountainous. The “on/off” slider will show exactly what the DSM covers in this area, though in Cambridge it’s more or less “everything”.

While this is downloading, let’s pick our target area. Zooming in a little further will show thinner blue lines and occasional superimposed blue digits; these define the smaller squares, 1 km by 1 km. For those who don’t remember learning to read OS maps, the number on the left and the number on the bottom, taken together, define the square. So the sector containing all the colleges along the river (a dense clump of black-outlined buildings) is TL4458.

2. Rendering a single tile

Now your zip file has downloaded, drop all the files into a directory somewhere. Note that they’re all named something like tl4356_DSM_1m.asc. Unsurprisingly, this means the 1m DSM data for square TL4356.

Fire up QGIS, go to Layer > Add raster layer, and select your tile – in this case, TL4458. You’ll get a crude-looking monochrome image, immediately recognisable by a broken white line running down the middle. This is the Cam. If you’re seeing this, great, everything’s working so far. (This step is a very helpful check that you are looking at the right area.)

Now, let’s make the image. Project > New to blank everything (no need to save). Then Raster > Analysis > DEM (terrain models). In the first box, select your chosen input file. In the next box, the output filename – with a .tif suffix. (Caution, linux users: make sure to enter or select a path here, otherwise it seems to default to home). Leave everything else as default – all unticked and mode: hillshade. Click OK, and a few seconds later it’ll give a completed message; cancel out of the dialogue box at this point. It’ll be displaying something like this:

Congratulations! Your first LIDAR rendering. You can quit out of QGIS (you can close without saving, your converted file is saved already) and open this up as a normal TIFF file now; it’ll be about 1MB and cover an area 1km by 1km. If you look closely, you can see some surprisingly subtle details despite the low resolution – the low walls outside Kings College, for example, or cars on the Queen’s Road – Madingley Road roundabout by the top left.

3. Rendering several tiles

Rendering multiple squares is a little trickier. Let’s try doing Barton, which conveniently fits into two squares – TL4055 and TL4155. Open QGIS up, and render TL4055 as above, through Raster > Analysis > DEM (terrain models). Then, with the dialogue window still open, select TL4155 (and a new output filename) and run it again. Do this for as many files as you need.

After all the tiles are prepared, clear the screen by starting a new project (again, no need to save) and go to Raster > Miscellaneous > Merge. In “Input files”, select the two exports you’ve just done. In “Output file”, pick a suitable filename (again ending in .tif). Hit OK, let it process, then close the dialog. You can again close QGIS without saving, as the export’s complete.

The rendering system embeds coordinates in the files, which means that when they’re assembled and merged they’ll automatically slot together in the correct position and orientation – no need to manually tile them. The result should look like this:

The odd black bit in the top right is the edge of the flight track – there’s not quite comprehensive coverage. This is a mainly agricultural area, and you can see field markings – some quite detailed, and a few bits on the bottom of the right-hand tile that might be traces of old buildings.

So… go forth! Make LIDAR images! See what you can spot…

4. Command-line rendering in bulk

Richard Symonds (who started me down this rabbit-hole) points out this very useful post, which explains how to do the rendering and merging via the command line. Let’s try the entire Durham area; 88 files in NZ24, all dumped into a single directory –

for i in *.asc ; do gdaldem hillshade -compute_edges "$i" "$i.tif" ; done

gdal_merge.py -o NZ24-area.tif *.tif

rm *.asc.tif

In order, that a) runs the hillshade program on each individual source file ; b) assembles them into a single giant image file; c) removes the intermediate images (optional, but may as well tidy up). The -compute_edges flag helpfully removes the thin black lines between sectors – I should have turned it on in the earlier sections!