DOAJ study in New Library World

I see there’s a new 2016 study of the DOAJ in New Library World (Vol. 117, Nos. 11/12, pp. 746-755). The researchers found that in the DOAJ…

“roughly 20-25% of the [journal homepage] URLs redirected to another URL” but that “only 2.11% of 9,073 journals [proved] to be inaccessible”

… once the redirects were followed.

Two automated tests of all 9,073 titles were run one month apart (using home-brewed Excel wizardry, rather than dedicated link-checking software), pinging each journal’s homepage. The researchers followed this up with a manual check on the URLs of all the still-inaccessible journals.
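For the curious, the general idea behind the study’s Excel approach could be sketched in a few lines of Python. To be clear, this is only my sketch of the method, not the researchers’ actual code; the function names and placeholder URLs are mine, and a real run over 9,073 titles would also want rate-limiting, retries and polite delays.

```python
# Sketch of the study's method: ping each journal homepage, note whether
# it redirected, and flag unreachable titles for a later manual re-check.
import urllib.request
import urllib.error

def check_homepage(url, timeout=10):
    """Return (status_code, final_url); status is None if unreachable.
    urlopen follows HTTP redirects automatically, so geturl() gives the
    final landing URL after any redirect chain."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status, resp.geturl()
    except (urllib.error.URLError, OSError):
        return None, url

def summarize(results):
    """results: {original_url: (status, final_url)}.
    Returns (percent redirected, percent inaccessible).
    Note: a naive final_url != url comparison can miscount trivial
    rewrites (e.g. an added trailing slash) as redirects."""
    total = len(results)
    redirected = sum(1 for url, (status, final) in results.items()
                     if status is not None and final != url)
    dead = sum(1 for status, _ in results.values() if status is None)
    return 100 * redirected / total, 100 * dead / total
```

The second, one-month-later pass would simply re-run `check_homepage` over the URLs that `summarize` counted as dead, before the manual check.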

The research seems to have been quite thorough, although I’d observe that a homepage URL is far less likely to be broken than the deeper direct article URLs on the DOAJ’s table-of-contents pages. Article-page and PDF URLs are easily broken, for instance by a journal moving from WordPress to OJS or vice versa. A similar test might usefully be run on a sample of DOAJ article URLs, although I must say that I haven’t noticed any problem on the DOAJ in that respect.

I see that Bentham Open (aka Bentham Science Publishers, not directly indexed in JURN) provided 67 of the inaccessible titles. For some reason they are still in the DOAJ after the recent purge, but my quick tests on the DOAJ’s Bentham URLs found all those tested to be unresponsive. That was last night, and they were tested again today and again found to be unresponsive. So I’m not too worried about their popping up in JURN results (via the DOAJ indexing) and I presume that the DOAJ will have them out fairly soon for 404-ing.

Perry-Castaneda Map Collection seeks donations

The Perry-Castaneda Library Map Collection, a huge service of hi-res maps, is now inviting small donations. The service is free and open, cleanly organised, and exposed to the public via search engines. Note that there are several forms to wade through to donate, and it looks like it may be ‘credit cards only’. I think they might do better if they also put a simple and swift PayPal button on the front page.


How to get a free and approximate audio transcription via YouTube

Here’s how to get a free and approximate transcription using YouTube’s automated captioning:

Update: this tutorial may no longer be needed. YouTube now provides a ‘Closed Captions’ panel which lets you turn off the time-coding and then copy-paste the text.

1. Use the free Audacity or other desktop audio software to split your .mp3 into segments of less than 15 minutes each (I assume that’s still YouTube’s upload limit; adjust to whatever limit YouTube sets in future).

2. Upload the .mp3s to YouTube as “Public” videos. TunesToTube was a free service that paired an .mp3 with a single picture and uploaded the result to YouTube as a video, but it no longer works. Try Audioship instead; MP3 to Video will also turn an .mp3 into an .mp4 file that can be uploaded to YouTube. Neither is ideal for those with slow uplinks.

Solid desktop software such as Slideshow Studio HD can also quickly create a simple YouTube-friendly video, without you having to load a huge lumbering video editor such as Adobe Premiere Elements. If uploading the .mp4 manually, make sure you tell YouTube that the video is in English, as otherwise it may later get confused and try to use Spanish etc. for the captioning. YouTube also has a 15-minute upload limit, and may still enforce this in some nations.

3. Once uploaded, go to YouTube and find your Channel, click the Settings cog on the uploaded video, and turn on “Automatic Subtitling”. If it won’t let you do this, you may need to go into the Dashboard and find the Subtitles tab.

4. Wait a minute or so for the subtitles to be made. Then go to DownSub.com to download and save the video’s subtitles as an .srt standard subtitles file. The Dashboard in YouTube may also let you download a subtitles file without needing this third-party service.

5. Get the Open Source Subtitle Edit 3.5 desktop software. Load the .srt file. In Subtitle Edit: File -> Export -> Plain Text.

6. Load the resulting text into Word, then edit and correct it. The result is accurate enough for a ‘speech radio’ type podcast, though it has little punctuation and you’ll need to work on it to polish it up.
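If you’d rather skip the desktop software in step 5, the .srt-to-plain-text conversion is simple enough to script yourself. Here’s a minimal sketch, assuming the standard .srt layout (a numeric index line, a timing line, the caption text, then a blank line between cues); Subtitle Edit’s exporter is of course more robust:

```python
# Strip the index and time-code lines from an .srt subtitles file,
# keeping only the caption text, joined into one running line.
def srt_to_text(srt: str) -> str:
    lines = []
    # cues in an .srt file are separated by blank lines
    for block in srt.strip().split("\n\n"):
        rows = block.splitlines()
        # drop the numeric index row and the "-->" timing row
        text_rows = [r for r in rows
                     if not r.strip().isdigit() and "-->" not in r]
        lines.extend(text_rows)
    return " ".join(lines)
```

Run it over the .srt file from step 4 and paste the output straight into Word for the step 6 clean-up.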

You can of course get willing hands around the Web to transcribe, but you have to pay them (it’s surprisingly affordable; try Fiverr) and there’s usually at least a 12-hour turnaround time. The above method can help you meet a much tighter deadline.

African Open Science Platform

The African Open Science Platform has just launched as a pilot…

“The Africa-wide initiative will promote the development and coordination of data policies, data training and data infrastructure. The pilot phase, launched today, is supported by the South African Department of Science and Technology (DST), funded by the National Research Foundation (NRF), directed by CODATA, the Committee on Data of the International Council for Science (ICSU) and implemented by the Academy of Science of South Africa (ASSAf).”

AOSP will…

* coordinate initiatives already underway.
* encourage shared investment in infrastructure.
* circulate good ideas and practice.
* develop the capacities of individuals and institutions.
* promote key applications of relevance to Africa.
* be a conduit to international open data.

Publishing Research Consortium on Early Career Researchers

The Publishing Research Consortium has a new report, “Early Career Researchers: the harbingers of change?”…

“there are no recent investigations into the extent to which their behaviours may prove transformational. This qualitative study of ECRs from seven countries, a first report of a longitudinal study, tracks communication and publication behaviour, and attitudes to peer review, collaboration, sharing, open access, social media and emerging impact mechanisms.”

Mostly that seems to be “change” as in: “We still have a physical library? How quaint and delightfully old-fashioned”.