“Journal spend, use and research outcomes: a UK perspective on value for money” (PDF link), by Ian Rowlands at the UK Serials Group Conference, 31st March 2009. In amongst the inevitable science journals (yawn), his group also made a case study of History ejournals. One interesting factoid…
“86.5 per cent of titles in the arts, humanities and social sciences are now available online”
Only 86.5%?
From the same conference: “Electronic journals, continuing access and long-term preservation: roles, responsibilities and emerging solutions” (PowerPoint link, 2MB). It seems a useful overview of the problems, and of the initiatives (LOCKSS, Portico, etc.) currently underway.
Short-run open access titles in the arts and humanities are especially vulnerable to loss, judging from my experience of finding one too many “404 not found” and domain-squatted pages while building JURN. One solution that springs to mind would be to build into open access journal software an automatic routine to “collect all the articles into a single POD-ready printable 8″ x 10″ PDF and upload it on publication to a print-on-demand book printer” (such as Lulu). National deposit libraries could then acquire a uniform printed (although probably not archival/acid-free) copy for their stacks. And so could anyone else who wanted a printed copy.
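For what it’s worth, the collation half of that idea is only a few lines of scripting. Here’s a rough sketch in Python using the pypdf library, assuming the issue’s articles are already sitting on disk as PDFs; it leaves aside the resizing to an 8″ x 10″ trim and the actual upload to Lulu, since those depend on whatever tools and API the printer offers. The directory and file names are invented for illustration.

```python
# Collate one journal issue's article PDFs into a single file, ready to
# hand to a print-on-demand service. Assumes the articles already exist
# on disk as PDFs; trim-size adjustment and the upload step are omitted.
from pathlib import Path
from pypdf import PdfWriter

def collate_issue(article_dir: str, output_path: str) -> None:
    writer = PdfWriter()
    # Append each article in filename order (e.g. 01-editorial.pdf, 02-article.pdf)
    for pdf in sorted(Path(article_dir).glob("*.pdf")):
        writer.append(str(pdf))
    with open(output_path, "wb") as f:
        writer.write(f)

if __name__ == "__main__":
    collate_issue("issue-12-articles", "issue-12-pod.pdf")  # hypothetical paths
```

A journal platform could run something like this as a hook on “publish issue”, and only the final hand-off to the printer would differ from service to service.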
Another, rather more humorous, idea might be to have a Big Red Button integrated into the journal’s software control panel (especially useful for graduate Cultural Studies ejournals, perhaps), marked:
“We can’t be bothered any more, upload everything to archive.org and then delete the website”
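Joking aside, the “upload everything to archive.org” part really is scriptable. Here’s a rough sketch using the internetarchive Python library; it assumes you have an archive.org account with its S3-style keys already configured, and the item identifier, metadata and directory name below are all invented for the example.

```python
# "Big Red Button": push a journal's article files up to an archive.org item.
# Assumes an archive.org account with S3-style keys configured (e.g. via
# `ia configure`). Identifier, metadata and paths are invented placeholders.
from pathlib import Path
from internetarchive import upload

files = [str(p) for p in Path("journal-articles").glob("*.pdf")]

upload(
    "example-ejournal-backfile",        # hypothetical item identifier
    files=files,
    metadata={
        "mediatype": "texts",
        "title": "Example E-Journal: complete backfile",
        "collection": "opensource",     # community texts collection
    },
)
```

Deleting the website afterwards is, of course, left as an exercise for the despairing editor.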
Of course, a ‘brute force’ approach would be to buy a fat new hard-drive and then run site-ripper software (free tools such as the British Library Web Curator Tool and the independent WinHTTrack spring to mind) on the JURN Directory. But there’s a problem: many independent ejournals keep their article files at a radically different URL from that of the home website. A third of the time you’d end up with a nice snapshot of the website, but no articles. Unless, that is, you could specifically tell the software to download all unique off-site files/pages being directly linked to by the targeted website (and that’s if you’re lucky and the journal doesn’t use scripted “bouncing-bomb” URLs that dynamically bounce into repositories to fetch the PDF). But then, many journal entry-points are just a page on a larger departmental website, so you could end up hauling in terabytes of unwanted material either way.
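That “follow the direct off-site links” behaviour, at least for a single hop, is easy enough to sketch in Python with requests and BeautifulSoup. The sketch below grabs any PDF linked from a journal’s contents page, whichever host the file actually lives on; it makes no attempt at scripted “bouncing-bomb” URLs, and the contents-page URL is a placeholder.

```python
# One-hop grab: fetch a journal contents page and download every PDF it
# links to directly, even when the file sits on a different host.
# A sketch only -- it will not follow scripted or redirecting article URLs.
from pathlib import Path
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

def grab_linked_pdfs(toc_url: str, out_dir: str = "capture") -> None:
    Path(out_dir).mkdir(exist_ok=True)
    page = requests.get(toc_url, timeout=30)
    soup = BeautifulSoup(page.text, "html.parser")
    for a in soup.find_all("a", href=True):
        href = urljoin(toc_url, a["href"])      # resolves relative and off-site links
        if not href.lower().endswith(".pdf"):
            continue
        name = href.rsplit("/", 1)[-1] or "article.pdf"
        resp = requests.get(href, timeout=60)
        if resp.ok:
            (Path(out_dir) / name).write_bytes(resp.content)

if __name__ == "__main__":
    grab_linked_pdfs("https://example.org/journal/issue12/contents.html")  # placeholder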
Or, for a more managed solution, one could pay students £12 an hour to spend an average of 40 minutes per title (across 1,700 titles that’s roughly 1,130 hours, or around £13,600) to go in and hand-archive all the articles and TOCs into named directories on a hard-drive. Even if management bloated the cost, I’d guess an initial archival capture could probably be done for less than £50k? Heck, I’ll do it myself if someone wants to offer me £50k.
Of course, if librarians had made and promoted just one simple little Google-friendly tagging/flagging standard for online open-access journal articles… then none of this would have been needed.
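By way of illustration, the sort of tag meant here would be something a crawler could simply test for in a page’s head: the sketch below checks an article page for Highwire-style citation_* meta tags, of the kind Google Scholar looks for, purely as an example of what such flagging might look like in practice. The article URL is a placeholder.

```python
# Check whether an article page carries Highwire-style citation_* meta tags,
# as an example of the sort of Google-friendly flagging discussed above.
import requests
from bs4 import BeautifulSoup

WANTED = ("citation_title", "citation_author", "citation_pdf_url")

def check_tags(article_url: str) -> dict:
    soup = BeautifulSoup(requests.get(article_url, timeout=30).text, "html.parser")
    found = {}
    for name in WANTED:
        tag = soup.find("meta", attrs={"name": name})
        found[name] = tag["content"] if tag and tag.has_attr("content") else None
    return found

if __name__ == "__main__":
    print(check_tags("https://example.org/journal/article123.html"))  # placeholder URL
```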