In the news this week, Priscilla Chan and Mark Zuckerberg (Facebook) have purchased the academic search engine Meta, and are set to… “offer Meta’s tools free to all researchers” at some point in the future. Very nice of them.
Currently meta.com’s search is shuttered to the public, but the site is inviting sign-ups. Meta.com is not a name that’s been on the tip of my tongue, or covered here. I don’t recall public access to it ever being available; possibly there was none. Apparently the pre-Zuckerberg Meta was one of a clutch of startups trying to apply AI to a limited set of the academic literature — often in the relatively tame-but-lucrative biomedical field. I had a glancing post here on the apparently-similar Iris AI 2.0 back in November. At the search-tool level, Iris AI seems to offer much the same capabilities as Meta — but via a demo of 30m+ records harvested from repositories by CORE. In contrast the pre-Zuckerberg Meta.com covered PubMed, according to a November 2015 press-release, combining that with metadata input from “dozens of publishers”. Another November 2015 press release rather ambitiously claimed that Meta.com enabled a user to…
“navigate the entirety of scientific information (25 million papers with 4,000 new ones published daily)”.
“Ambitiously” because there’s no way that the “entirety of scientific information” in journal-article form = 25m papers.
After the Zuckerberg-boosted relaunch the stated aim is to expand the functionality via third-party access…
“we will enable developers to build on it or integrate it into third party platforms and services … will embrace the ideas and efforts of researchers in the diverse fields that Meta intersects with – including machine learning, network science, ontologies, science metrics, and data visualization”.
Hopefully that opening up will also include open public access to the juiciest commercial bits of Meta.com, like the ‘early awareness’ Horizon Scanning module. This claimed to be able to descry a predictive map of future research agendas and trends…
“will enable academics and industries to maintain early awareness of emergent scientific and technical advances at a speed, scale and comprehensiveness far beyond human capacity, and years in advance”
Assuming that works as intended (I haven’t encountered any gushing reviews), I’m still not sure I’d want to absolutely rely on a predictive tool that only sees a fraction of the picture — a mere “25 million papers” seems a little lightweight, set against a claim to index “the entirety of scientific information”. On the other hand, if it covers all of the output in one’s tight little niche, and has semantic links out into a spread of related and similarly delimited fields, then it could be quite useful for some people.