Anonymous Google CSEs

There’s a newly released Firefox addon, called Google Custom Search 1.1.2 and made by Kai Londenberg. It creates independently-hosted anonymous Google CSEs, which you can manage and refine from your Google search results / browser. Although it uses the Google API, your engine’s data appears to be stored anonymously on a server in Europe…

“A Google Account is not required anymore, Custom Search Engines can be stored anonymously on quicksear.ch”

Basically, using this addon gives you a seamless melding of the normal Google results format with the major configuration possibilities of a CSE. It’s Google’s SearchWiki on steroids, in an exo-skeleton.

But I don’t see any way to back up your CSE’s XML annotations file of URLs, which means it would be rather risky to invest a large amount of time building a subject-specific CSE this way, rather than using Google’s own interface. Perhaps a backup option will appear once the quicksear.ch site goes fully live — the addon and service are currently very new, having seemingly been live since September.

There’s no way to upload a “big list ‘o URLs” in the traditional manner, and have them automatically boosted in the CSE’s search rankings. Your CSE is currently an “add one URL at a time” job, as you surf the search results day in and day out. Which perhaps gives your CSE some interesting anti-spam/anti-SEO features, if your CSE is to be used as a mass collaborative anonymous engine (which it apparently can be — tick “accept volunteer contributions” when creating your CSE). And it doesn’t seem to include Google Books results, even when you tell it to include them and boost their rating by 100%.

You currently lose Google’s new “Options…” sidebar, when searching via your quicksear.ch CSE addon (which appears along with the others, in Firefox’s top-right mini search box).

Just like the official Google CSEs, you get cut-and-paste HTML code, which lets others try out your CSE without needing to log in or install anything. I created a new experimental CSE titled JURN collaborative, with permissions for collaborators, but how collaborators contribute to it is currently a mystery.

    Update: it seems that to collaborate you would have to share your quicksear.ch password with your collaborators.

Auto-detect language and auto-translate – all browsers should do this

This is rather nice, and seems to have been released in the last few days. It’s a new Chinese-language translation add-on for Firefox: the language of the web page is auto-detected and the translation happens seamlessly within the existing page layout. There’s no messing around with tedious right-clicking, highlighting, hovering over buttons, etc. This is one of the first of many such add-ons, I would hope. Future browsers should have this built in, for all the major languages.

The only problem at present is that it’s rather too seamless. Users need a little visual flag to show when it’s been applied to a page. And perhaps a “toggle” button.

Evolving academic publishing

Kyle Grayson summarises his thinking as… “part of an ad hoc working group with colleagues from Newcastle and Durham Universities that has been exploring the future of academic publishing” …

“in the social sciences and humanities, low citation rates and impact factors — even for leading journals — that in part reflect the inability to capture a broad audience within an academic discipline, let alone establish a readership with practitioners and/or the general public” […] “our research findings on the broader trends in media publishing in general, and scholarly publishing in particular, demonstrate that there are problems emerging over the horizon” […] “‘staying the course’ — in terms of content, public interface, and revenue models — will lead to negative outcomes within a decade’s time.”

He suggests certain immediate remedies…

* implementing a dynamic journal website … where content is regularly updated
* audio and video recordings of keynote speeches, lectures, interviews, or discussions
* on-line book reviews […] invite contributions from the wider readership
* blogs run by the editorial team and/or other members at large
* alerting potential users of content [with] updates through social networking tools like email, Twitter, Facebook, and RSS feeds

To which I might add things like…

* a collaborative subject-specific Custom Search Engine
* simple “plain English” summaries of all articles (not the same thing as abstracts)
* a curated “overlay” ejournal, linking to free repository content
* Amazon pages for all monographs
* translate all abstracts into Chinese, Japanese, and Spanish
* a concerted campaign to get backlinks to your website
* consider purchasing a good $50 template for the journal (it’s not just about the frequency of updating, but about how stylish it feels)
* really good photography of the participants

Backlinks are particularly important. Take, for instance, the journal Quaderno, which I found yesterday. It offers six full issues of a free academic journal from a reputable university, on interesting aspects of early American history, in a country that’s teeming with re-enactors and amateur historians. Yet, according to Google, it has not a single inbound link — not even from other academic sites. It’s been online since 2004.

How to extract a CSV list of search-result URLs, along with their anchor titles

In this simple tutorial I’ll show you how to rip a page of search-result links into a .csv file, along with their link titles, using nothing more than Notepad and a simple bit of JavaScript.

(Update: January 2011. This tutorial is superseded by a new and better one.)


1) Have Google run your search in advanced mode, selecting “100 results on a page”. If you prefer Bing, choose Preferences / Results, and select “50 on a page”.

2) Run the search. Once you have your big page o’ results, just leave the page alone and save it locally — doing things like right-clicking on the links will trigger Google’s “URL wrapping” behaviour on the clicked link, which you don’t want. So just save the page (in Firefox: File / Save Page As…), renaming it from search.html to something-more-memorable.html

3) Now open up your saved results page in your favourite web page editor, which will probably add some handy colour-coding to tags so you can see what you’re doing. But you can also just open it up in Notepad, if that’s all you have available. Right click on the file, and “Open with…”.

4) Locate the page header (it’s at the very top of the page, where the other scripts are), make some space in there, and then paste in this JavaScript…

    A hat-tip to richarduie for the original script. I just hacked it a bit, so as to output the results in handy comma-delimited form.
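The script itself isn’t reproduced in this post, but here’s a minimal sketch of what such a header script could look like. Note that the function names, the csv-output element id, and the “row number”,“URL”,“anchor title” column order are my own assumptions, not richarduie’s original code:

```javascript
// Sketch only: collect every link on the saved results page and build
// a comma-delimited list of "index","URL","anchor title" rows.
// Paste this between <script> tags in the page header.

// Pure helper: turn an array of {href, text} pairs into CSV lines.
function linksToCsv(links) {
  var rows = [];
  for (var i = 0; i < links.length; i++) {
    // Double up any embedded quotes so the CSV stays valid.
    var title = String(links[i].text).replace(/"/g, '""');
    rows.push('"' + (i + 1) + '","' + links[i].href + '","' + title + '"');
  }
  return rows.join('\n');
}

// Browser glue (assumed names): read document.links and dump the CSV
// into an element with id "csv-output" somewhere in the page body.
function extractLinks() {
  var links = [];
  for (var i = 0; i < document.links.length; i++) {
    links.push({ href: document.links[i].href,
                 text: document.links[i].textContent || '' });
  }
  document.getElementById('csv-output').textContent = linksToCsv(links);
}
```

Keeping the CSV-building logic in a pure linksToCsv() helper, separate from the DOM walk, makes it easy to test outside the browser.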

5) Now locate the start of the BODY of your web page, and paste in this code after the body tag…
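The body snippet isn’t reproduced in this post either, but it amounts to a button wired to whatever function the header script defines, plus somewhere for the output to land. The extractLinks and csv-output names here are my own placeholders, not the original code:

```html
<!-- Sketch only: "extractLinks" and "csv-output" are placeholder names. -->
<button onclick="extractLinks()">Extract all links and anchor titles as a CSV list</button>
<pre id="csv-output"></pre>
```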

Save and exit.

6) Now load up your modified page in your web browser (I’m using Firefox). You’ll see a new button marked “Extract all links and anchor titles as a CSV list”…

Press it, and you’ll get a comma-delimited list of all the links on the page, alongside all the anchor text (aka “link titles”), in this standard format…

Highlight and copy the whole list, and then paste it into a new Notepad document. Save it as a .csv file rather than a .txt file. You can do this by setting “Save as type” to “All Files” and typing the .csv extension manually when saving from Notepad.

7) Now you have a normal .csv file that will open up in MS Excel, with all the database columns correctly and automatically filled (if you don’t own MS Office, the free OpenOffice Calc should work as an alternative). In Excel, highlight the third column (by clicking so as to highlight its top bar), then choose “Sort and Filter” and then “A-Z”…

You’ll then be asked if you want to “Expand the selection”. Agree to the expansion (important!), and the column with the anchor text in it will be sorted A-Z. Expansion means that all the columns stay in sync when one is re-sorted like this.

Now you can select and delete all the crufty links in the page that came from Google’s “Cached”, “Similar”, “Translate this page” links, etc. These links all share the same anchor text, so the A-Z sort has grouped them together, making them easy to delete in one fell swoop.

8) You’re done, other than spending a few minutes ferreting out some more unwanted results. Feel free to paste in more such results from Bing, de-duplicate, etc.

If you wanted to re-create a web page of links from the data, delete the first column of numbers, and then save. Open up your saved .csv in Notepad. Now you can do some very simple search and replace operations, to change the list back into HTML…
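The actual search-and-replace patterns aren’t shown here, but the same rebuild can be sketched in a few lines of JavaScript, assuming each remaining line has the form "URL","Title" (my guess at the layout once the number column is deleted):

```javascript
// Turn lines of "URL","Title" back into HTML anchor tags.
// Assumes the simple two-column layout described above; titles
// containing embedded quotes would need extra handling.
function csvToHtml(csvText) {
  return csvText
    .split('\n')
    .filter(function (line) { return line.length > 0; })
    .map(function (line) {
      // Strip the outer quotes, then split on the "," separator.
      var parts = line.slice(1, -1).split('","');
      return '<a href="' + parts[0] + '">' + parts[1] + '</a>';
    })
    .join('\n');
}
```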

(Note: you can also use the excellent £20 Sobolsoft Excel Add Data, Text & Characters To All Cells add-in for complex search & replace operations in Excel)


Ideally there would be free Firefox Greasemonkey scripts, simple freeware utilities, etc, that could do all of this automatically. But, believe me, I’ve looked and there aren’t. Shareware Windows URL extractors are ten-a-penny (don’t waste good money on them, use the free URL Extractor), but not one of them also extracts the anchor text and saves the output as .csv.

Yes, I do know there’s the free Firefox addon Outwit Hub, which via its Data / Lists … option can capture URLs and anchors — but it jumbles everything in the link together (anchor text, snippet, Google gunk, etc), and so the link text requires major cleaning and editing for every link. Even with its hit-and-miss home-brew scraping filters, it’s not a reliable solution.