Green Open Access comes of age

This morning David Prosser, executive director of Research Libraries UK, tweeted, “So we have @unpaywall, @oaDOI_org, PubMed icons – is the green #OA infrastructure reaching maturity?” (link).

We love this observation, and not just because two of the three projects he mentioned are from us at Impactstory 😀. We love it because we agree: Green OA infrastructure is at a tipping point where two decades of investment, a slew of new tools, and a flurry of new government mandates are about to make Green OA a scholarly publishing game-changer.

A lot of folks have suggested that Sci-Hub is scholarly publishing’s “Napster moment,” where the internet finally disrupts a very resilient, profitable niche market. That’s probably true. But just as the music industry shut down Napster, Elsevier will likely be able to shut down Sci-Hub. They’ve got both the money and the legal (though not moral) high ground, and that’s a tough combo to beat.

But the future is what comes after Napster. It’s in the iTunes and Spotifys of scholarly communication. We’ve built something to help create this future: Unpaywall, a browser extension that instantly finds free, legal Green OA copies of paywalled research papers as you browse, like a master key to the research literature. If you haven’t tried it yet, install Unpaywall for free and give it a try.

Unpaywall has reached 5,000 active users in our first ten days of pre-release.

But Unpaywall is far from the only indication that we’re reaching a Green OA inflection point. Today is a great day to appreciate this, as there’s amazing Green OA news everywhere you look:

  • Unpaywall reached the 5000 Active Users milestone. We’re now delivering tens of thousands of OA articles to users in over 100 countries, and growing fast.
  • PubMed announced Institutional Repository LinkOut, which links every PubMed article to a free Green copy in institutional repositories where available. This is huge, since PubMed is one of the world’s most important portals to the research literature.
  • The Open Access Button announced a new integration with interlibrary loan that will make it even more useful for researchers looking for open content. Along with the interlibrary loan request, they send instructions to authors to help them self-archive closed publications.

Over the next few years, we’re going to see an explosion in the amount of research available openly, as government mandates in the US, UK, Europe, and beyond take force. As that happens, the raw material is there to build completely new ways of searching, sharing, and accessing the research literature.

We think Unpaywall is a really powerful example: When there’s a big Get It Free button next to the Pay Money button on publisher pages, it starts to look like the game is changing. And it is changing. Unpaywall is just the beginning of the amazing open-access future we’re going to see. We can’t wait!

How to smash an interstellar paywall

Last month, hundreds of news outlets covered an amazing story: seven earth-sized planets were discovered, orbiting a nearby star. It was awesome. Less awesome: the paper with the details, published in the journal Nature, was paywalled. People couldn’t read it.

That’s messed up. We’re working to fix it, by releasing our new free Chrome extension Unpaywall. Using Unpaywall, you can get access to the article, and millions like it, instantly and legally. Let’s learn more.

First, is this really a problem? Surely Google can find the article. I mean, there might be aliens out there. We need to read about this. Here we go, let’s Google for “seven terrestrial planets nature article.” Great, there it is, first result. Click, and…

What, thirty-two bucks to read!? Well that’s that, I quit.

Or maybe there are some ways around the paywall? Well, you could know someone with access. My pal Cindy Wu helped her journal club out this way, offering on Twitter to email them a copy of the paper. But you have to follow Cindy on Twitter for that to work.

Or you could know the right places to look for access. Astronomers generally post their papers on a free web server called the ArXiv, and sure enough, if you search there, you’ll find the Nature paper. But you have to know about ArXiv for that to work. And check out those Google search results again: ArXiv doesn’t appear.

Most people don’t know Cindy, or ArXiv. And no one’s paying $32 for an article. So the knowledge in this paper, and thousands of papers like it, is locked away from the taxpayers who funded it. Research becomes the private reserve of those privileged few with the money, experience, or connections to get access.

We’re helping to change that.

Install our new, free Unpaywall Chrome extension and browse to the Nature article. See that little green tab on the right of the page? It means Unpaywall found a free version, the one the authors posted to ArXiv. Click the tab. Read for free. No special knowledge or searches or emails or anything else needed. 

Today you’ll find Unpaywall’s green tab on ten million articles, and that number is growing quickly thanks to the hard work of the open-access movement. Governments in the US, UK, Europe, and beyond are increasingly requiring that taxpayer-funded research be publicly available, and as they do, Unpaywall will get more and more effective.

Eventually, the paywalls will all fall. Till then, we’ll be standing next to ‘em, handing out ladders. Together with millions of principled scientists, libraries, techies, and activists, we’re helping make scholarly knowledge free to all humans. And whoever else is out there 😀 👽.

How big does our text-mining training set need to be?

We got some great feedback from reviewers of our new Sloan grant, including a suggestion that we be more transparent about our process over the course of the grant. We love that idea, and you’re now reading part of our plan for how to do that: we’re going to be blogging a lot more about what we learn as we go.

A big part of the grant is using machine learning to automatically discover mentions of software use in the research literature. It’s going to be a really fun project, because we’ll get to play around with some of the very latest in ML, which is currently The Hotness everywhere you look. And we’re learning a lot as we go.

One of the first questions we’ve tackled (also in response to some good reviewer feedback) is: how big does our training set need to be? The machine learning system needs to be trained to recognize software mentions, and to do that we need to give it a set of annotated papers where we, as humans, have marked what a software mention looks like (and doesn’t look like). That training set is called the gold standard. It’s what the machine learning system learns from. What follows is copied from one of our reviewer responses:

We came up with the number of articles to annotate through a combination of theory, experience, and intuition.  As usual in machine learning tasks, we considered the following aspects of the task at hand:

  • prevalence: the number of software mentions we expect in each article
  • task complexity: how much do software-mention words look like other words we don’t want to detect
  • number of features: how many different clues will we give our algorithm to help it decide whether each word is a software mention (e.g., is it a noun, is it in the Acknowledgements section, is it a mix of uppercase and lowercase, etc.)

None of these aspects are clearly understood for this task at this point (one outcome of the proposed project is that we will understand them better once we are done, for future work), but we do have rough estimates.  Software mention prevalence will be different in each domain, but we expect roughly 3 mentions per paper, very roughly, based on previous work by Howison et al. and others.  Our estimate is that the task is moderately complex, based on the moderate f-measures achieved by Pan et al. and Duck et al. with hand-crafted rules.  Finally, we are planning to give our machine learning algorithm about 100 features (50 automatically discovered/generated by word2vec, plus 50 standard and rule-based features, as we discuss in the full proposal).
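To make the feature side of that concrete, here is a minimal sketch of what per-token features for a software-mention tagger could look like, pairing a handful of rule-based clues with word2vec dimensions. The feature names, the section argument, and the optional word2vec model are our own illustrative assumptions, not code or feature definitions from the project.

```python
# Illustrative sketch only: roughly the kind of per-token features described
# above (rule-based clues plus ~50 word2vec dimensions), in the dict-per-token
# format used by CRF libraries such as sklearn-crfsuite. All feature names,
# the "section" argument, and the word2vec model are hypothetical.

def token_features(tokens, i, section, w2v_model=None):
    word = tokens[i]
    feats = {
        "lower": word.lower(),                                   # the word itself
        "starts_uppercase": word[0].isupper(),                   # e.g. "ImageJ"
        "uppercase_after_first_char": any(c.isupper() for c in word[1:]),
        "has_digit": any(c.isdigit() for c in word),             # e.g. "BradleyTerry2"
        "in_acknowledgements": section == "acknowledgements",
        "prev_word": tokens[i - 1].lower() if i > 0 else "<START>",
        "next_word": tokens[i + 1].lower() if i < len(tokens) - 1 else "<END>",
    }
    if w2v_model is not None and word in w2v_model.wv:
        # add ~50 automatically learned dimensions, one numeric feature each
        for j, value in enumerate(w2v_model.wv[word][:50]):
            feats[f"w2v_{j}"] = float(value)
    return feats

# Example: features for one sentence, ready to hand to a CRF trainer
sentence = ["We", "analyzed", "the", "images", "in", "ImageJ", "."]
features = [token_features(sentence, i, section="methods") for i in range(len(sentence))]
```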

We then used these estimates. As is common in machine learning sample size estimation, we started by applying a rule of thumb for the number of articles we’d have to annotate if we were to use the simplest algorithm, multiple linear regression. A standard rule of thumb (see https://en.wikiversity.org/wiki/Multiple_linear_regression#Sample_size) is that 10-20 datapoints are needed for each feature used by the algorithm, which implies we’d need 100 features * 10 datapoints = 1000 datapoints. At 3 datapoints (software mentions) per article, this rule of thumb suggests we’d need 333 articles per domain.
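Spelled out as arithmetic, that back-of-the-envelope estimate looks like this (same round numbers as in the text above):

```python
# Rule-of-thumb sample-size estimate, using the numbers quoted above.
n_features = 100              # ~50 word2vec features + ~50 rule-based features
datapoints_per_feature = 10   # low end of the 10-20 rule of thumb
mentions_per_article = 3      # rough prevalence estimate per paper

datapoints_needed = n_features * datapoints_per_feature       # 1000 datapoints
articles_per_domain = datapoints_needed / mentions_per_article
print(round(articles_per_domain))                             # ~333 articles
```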

From there we modified our estimate based on our specific machine learning circumstances. Conditional Random Fields (our intended algorithm) are more complex than multiple linear regression, which might suggest we’d need more than 333 articles. On the other hand, our algorithm will also use “negative” datapoints inherent in the article (all the words in the article that are *not* software mentions, annotated implicitly as not software mentions) to help learn what is predictive of being vs. not being a software mention — the inclusion of this kind of data for this task means our estimate of 333 articles is probably conservative and safe.

Based on this, as well as reviewing the literature for others who have done similar work (Pan et al. used a gold standard of 386 papers to learn their rules, Duck et al. used 1479 database and software mentions to train their rule weighting, etc.), we determined that 300-500 articles per domain was appropriate. We also plan to experiment with combining the domains into one general model — in this approach, the domain would be added as an additional feature, which may prove more powerful overall. This would bring the full 1000-1500 annotated articles together into a single gold-standard set.

Finally, before proposing 300-500 articles per domain, we did a gut-check whether the proposed annotation burden was a reasonable amount of work and cost for the value of the task, and we felt it was.

References

Duck, G., Nenadic, G., Filannino, M., Brass, A., Robertson, D. L., & Stevens, R. (2016). A Survey of Bioinformatics Database and Software Usage through Mining the Literature. PLOS ONE, 11(6), e0157989. http://doi.org/10.1371/journal.pone.0157989

Howison, J., & Bullard, J. (2015). Software in the scientific literature: Problems with seeing, finding, and using software mentioned in the biology literature. Journal of the Association for Information Science and Technology (JASIST). Advance online publication, 13 May 2015. http://doi.org/10.1002/asi.23538

Pan, X., Yan, E., Wang, Q., & Hua, W. (2015). Assessing the impact of software on science: A bootstrapped learning of software entities in full-text papers. Journal of Informetrics, 9(4), 860–871. http://doi.org/10.1016/j.joi.2015.07.012

Comparing Sci-Hub and oaDOI

Nature writer Richard Van Noorden recently asked us for our thoughts about Sci-Hub, since in many ways it’s quite similar to our newest project, oaDOI. We love the idea of comparing the two, and thought he had (as usual) good questions. His recent piece on Sci-Hub founder Alexandra Elbakyan quotes some of our responses to him; we’re sharing the rest below:

Like many OA advocates, we see lots to admire in Sci-Hub.

First, of course, Sci-Hub is making actual science available to actual people who otherwise couldn’t read it. Whatever else you can say about it, that is a Good Thing.

Second, Sci-Hub helps illustrate the power of universal OA. Imagine a world where when you wanted to read science, you just…did? Sci-Hub gives us a glimpse of what that will look like, when universal, legal OA becomes a reality. And that glimpse is powerful, a picture that’s worth a thousand words.

Finally, we suspect and hope that Sci-Hub is currently filling toll-access publishers with roaring, existential panic. Because in many cases that’s the only thing that’s going to make them actually do the right thing and move to OA models.

All this said, Sci-Hub is not the future of scholarly communication, and I think you’d be hard-pressed to find anyone who thinks it is. The future is universal open access.

And it’s not going to happen tomorrow. But it is going to happen. And we built oaDOI to be a step along that path. While we don’t have the same coverage as Sci-Hub, we are sustainable and built to grow, along with the growing percentage of articles that have open access versions. And as you point out, we offer a simple, straightforward way to get fulltext.
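To give a concrete sense of that “simple, straightforward way”: the sketch below looks up a free copy of a paper by DOI over the REST API. It uses the present-day Unpaywall v2 endpoint and JSON field names (the successor to the oaDOI API discussed here), so treat the URL, parameters, and fields as assumptions rather than the exact interface of the time.

```python
import requests

# Minimal sketch: ask the API for the best open-access location of a DOI.
# Endpoint and field names are those of today's Unpaywall v2 API (the
# successor to oaDOI) and are assumptions relative to this post.
def free_fulltext_url(doi, email="you@example.org"):
    resp = requests.get(f"https://api.unpaywall.org/v2/{doi}",
                        params={"email": email})
    resp.raise_for_status()
    best = resp.json().get("best_oa_location")
    if best is None:
        return None  # no known open-access copy
    return best.get("url_for_pdf") or best.get("url")

print(free_fulltext_url("10.1038/nature21360"))  # e.g. the TRAPPIST-1 Nature paper
```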

That interface was not exactly inspired by Sci-Hub; it’s more, I think, an example of convergent evolution. The current workflow for getting scholarly articles is, in many cases, absolutely insane. Of course this is the legacy of a publishing system that is built on preventing people from reading scholarship, rather than helping them read it. It doesn’t have to be this hard. Our goal at oaDOI is to make it less miserable to find and read science, and in that we’re quite similar to Sci-Hub. We just think we’re doing it in a way that’s more powerful and sustainable over the long term.

Collaborating on a $635k grant to improve credit for research software

We’re thrilled to announce that Impactstory will be collaborating with James Howison at the University of Texas at Austin on a project to improve research software by helping its creators get proper credit for their work. The project will be funded by a three-year, $635k grant from the Alfred P. Sloan Foundation.

Research software is an essential component of modern science. But the tradition-bound scholarly credit system does not appropriately reward the academic unsung heroes who create research software, putting further development of software-intensive science in jeopardy. Even when software is mentioned, the mentions are often informal, such as URLs in footnotes or just names in the text. Howison, working with doctoral student Julia Bullard, found that 63% of mentions in a random sample of 90 biology articles were informal (Howison and Bullard, 2015).

We’re going to help fix that.

We’ll be working with James and his lab to make a huge database of every research software project used in every paper in the biomedicine, astronomy, and economics literatures. This database will be filled in using a deep learning system that’ll automatically extract both formal and informal mentions of software, after being trained on a large, manually coded gold-standard dataset.
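To give a flavor of what that manually coded gold standard might look like, here is a hypothetical annotated sentence in the common BIO (begin/inside/outside) labeling scheme used for training sequence taggers. The sentence, labels, and label names are illustrative assumptions, not a sample from the project’s dataset.

```python
# Hypothetical gold-standard example using BIO labels: "B-SOFT" marks the
# first token of a software mention, "I-SOFT" a continuation, and "O"
# everything else. Purely illustrative; the project's format may differ.
annotated_sentence = [
    ("Analyses",  "O"),
    ("were",      "O"),
    ("run",       "O"),
    ("in",        "O"),
    ("R",         "B-SOFT"),
    ("using",     "O"),
    ("the",       "O"),
    ("lme4",      "B-SOFT"),
    ("package",   "O"),
    (".",         "O"),
]
```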

We’ll use this database to build and study three cool prototype tools:

  • CiteSuggest will analyze submitted text or code and make recommendations for normalized citations using the software author’s preferred citation,
  • CiteMeAs will help software producers make clear requests for their preferred citations, and
  • Software Impactstory will help software authors demonstrate the scholarly impact of their software in the literature.

We believe these tools will help transform the scholarly reward system into one where software is a first-class research product, and its authors get full academic credit for their work. This in turn will support the software-intensive open science system we need for the future.

The project will build on our experience creating Depsy, a platform to track the scholarly impact of Python and R packages with an emphasis on dependencies, and on James’ extensive experience researching development in open source software and software in science. For lots more detail on the whole thing, check out the submitted proposal (edit Nov 9, 2016:  note this document is not a complete representation of the proposal, since the application and approval process also involved confidential back and forth with reviewers.  The reviewers added great comments and insight that we’re incorporating into the work as we go forward.)

Thank you, Sloan.  Thanks to Program Director Josh Greenberg for his continued advice and encouragement, and to the grant reviewers for well-informed and helpful feedback. And thanks especially to James, who had this idea in the first place, brought us on board, and has been a patient, good-natured, and ingenious collaborator in a lot of hard work already. We can’t wait to get started!

What’s your #OAscore?

We’re all obsessed with self-measurement.

We measure how much we’re Liked online. We measure how many steps we take in a day. And as academics, we measure our success using publication counts, h-indices, and even Impact Factors.

But we’re missing something.

As academics, our fundamental job is not to amass citations, but to increase the collective wisdom of our species. It’s an important job. Maybe even a sacred one. It matters. And it’s one we profoundly fail at when we lock our work behind paywalls.

Given this, there’s a measurement that must outweigh all the others we use (and misuse) as researchers: how much of our work can be read?

This Open Access Week, we’re rolling out this measurement on Impactstory. It’s a simple number: what percentage of your work is free to read online? We’d argue that it’s perhaps the most important number associated with your professional life (unless maybe it’s the percentage of your work published with a robust license that allows reuse beyond reading…we’re calculating that too). We’re calling it your Open Access Score.

We’d like to issue a challenge to every researcher: find out your open access score, do one thing to raise it, and tell someone you did. It takes ten minutes, and it’s a concrete thing you can do to be proud of yourself as a scholar.

Here’s how to do it:

  1. Make an Impactstory profile. You’ll need a Twitter account and nothing more…it’s free, nonprofit, and takes less than five minutes. Plus along the way you’ll learn cool stuff about how often your research has been tweeted, blogged, and discussed online.
  2. Deposit just one of your papers into an Open Access repository. Again: it’s easy. Here are instructions.
  3. Once you’re done, update your Impactstory, and see your improved score.
  4. Tweet it. Let your community know you’ve made the world a richer, more beautiful place because you’ve increased the knowledge available to humanity. Just like that. Let’s spread that idea.

Measurement is controversial. It has pros and cons. But when you’re measuring the right things, it can be incredibly powerful. This OA Week, join us in measuring the right things. Find your #OAscore, make it better, tweet it out. If we’re going to measure steps, let’s make them steps that matter.

 

Crossposted on the Open Access Week blog.

Why researchers are loving the new Impactstory

We put our heart and soul into the new Impactstory and have been on pins and needles to hear what you think. Well, it’s been a week and the verdict is in — we’re hearing that the new version is awesome, fantastic, and truly excellent, a home run and a must-have: an academic profile that’s exciting and relevant.

And so much more. So much more, in fact, that we wanted to take a little break from the frenzied responding, bugfixing, and feature-launching we’ve been doing this week and summarize a bit of what we’ve heard.

What do you like?

A lot of users have appreciated that it now takes seconds and is super easy to set up a profile that’s blazing fast and smooth to use: it’s instant insights about your research.

Unlike speed, beauty is in the eye of the beholder–but our beholders seem to be in delightful agreement that our new look is great, great, great. Whether users are calling it fresh or beautifully crafted, or sleek or smooth or snazzy, everyone seems to agree that the new version looks pretty damn awesome. And we are pretty thrilled to hear that.

They’re enjoying that it’s got some fun 🙂 And we’re not surprised to hear that people like the new price point of Free, which makes it easier to recommend to others.

What’s it good for?

Impactstory helps researchers find impacts of their work beyond just citations. People have found mentions they didn’t know about on Wikipedia, discussion in cool blog posts, and reviews on Faculty of 1000. And not just numbers, but impact across the globe. Not just numbers but connecting with people: for instance user Peter van Heusden tweeted, “Using @Impactstory I discovered someone who is consistently promoting work I’m involved in, but who I had no idea existed!”

All this amounts to more than just a lovely ego boost (although it’s that too!). People are telling us that it’s motivating them to adopt more Open Science practices like uploading research slides to a proper repository, getting an ORCID, adding works to their ORCID profile, and celebrating their non-paper publications.

How are you using it?

People are already sending their Impactstory profiles to their funders, and their funders are loving them. Researchers have added their new profile to their CV, and are planning on using Impactstory data to define innovative ‘pathways to impact’ for UK grants and in tenure and promotion packets.

Folks are including it in workshops. And even better — building things with our open data! Check out the ferret.io plugin, which rolled out Impactstory support this week and is really cool 🙂

What have we been doing?

We’ve made a bunch of changes this week in response to your feedback:

  • imports of all your publications, not just DOIs. Everything on your ORCID profile now displays in your Impactstory profile, and we’re working on getting more openness and altmetrics data
  • Twitter integration
    • connecting Twitter updates your profile pic, so you don’t have to fight with Gravatar
    • you don’t have to enter your email manually–even faster signup
    • we’ll be using your Twitter feed for achievements in the future
  • there’s a new Open Sesame achievement
  • we changed the scores at the top of the profile beside your picture; they are now counts of your achievements
  • the achievements and the import process are better documented
  • we rolled out dozens of smaller features, usability enhancements, and bugfixes.

What’s next?

We’re on our way to the FORCE16 conference this week. We’ll be rolling feedback from the conference, along with your continued feedback, into continued improvements to the app.

And you?  Join in with everyone showing off their profile, spread the word (this is how we will grow), and if you don’t have a profile, get one, and tell us what you think!

Finally, thanks.

We’d like to thank the hundreds of passionate people who have helped us with money and with moral support along the way, from our early days till now. It’s safe to say the new Impactstory is a big hit. It’s our hit, together.

 

The new Impactstory: Better. Freer.

We are releasing a new version of Impactstory!

https://impactstory.org/u/0000-0001-6728-7745


We baked what we’ve learned from hundreds of conversations with researchers into a sleeker, leaner, more useful Impactstory.

Our new Achievements showcase your meaningful accomplishments, not just counts. Our new three-part score helps you track your buzz, engagement, and openness. And our next-generation notification emails now reliably tell you what you want to know, every week.

And of course we’ve got a slew of other new features as well, including Depsy integration, ORCID sync-on-demand, and full support for mobile.

What’s more, we’re simplifying and streamlining everywhere, eliminating little-used features and doubling down on what users have told us they love. Profile creation is now only via ORCID, we only deal in DOIs, and citation metrics are gone. As a result, creating a profile takes just seconds, our support for diverse research products (preprints, datasets, etc) is bulletproof, and metrics are now consistently clear and up-to-date. Along with a complete code rewrite, these changes make Impactstory faster and more reliable than it’s ever been.

Last but not least, not only are we making Impactstory better: we’re making it cheaper. As in, all the way cheaper. Free!

Why? We heard you love the idea, but not the price–largely because your disciplines or departments aren’t quite ready to use altmetrics for evaluation. We can see this is starting to change, and want to help that change happen as quickly as possible. That means letting as many researchers as possible engage with altmetrics, right now. Free helps that happen.

Alternative sustainability models (like freemium features and new grants) will allow us to continue to build and maintain tools like Impactstory and Depsy to help change how researchers think about understanding and measuring the influence of their work.

Sound good? It is. We think you’ll love it. Go make yourself a profile and see what you learn: https://impactstory.org (and if you’re a current Impactstory subscriber, check your email for migration details).

We think this new Impactstory is the best thing we’ve ever done, and it’s a big step towards creating the open science, altmetrics-powered future we believe in. Thanks for building that future with us. We’re looking forward to hearing what you think!

Let’s value the software that powers science: Introducing Depsy

Today we’re proud to officially launch Depsy, an open-source webapp that tracks research software impact.

We made Depsy to solve a problem:  in modern science, research software is often as important as traditional research papers–but it’s not treated that way when it comes to funding and tenure. There, the traditional publish-or-perish, show-me-the-Impact-Factor system still rules.

We need to fix that. We need to provide meaningful incentives for the scientist-developers who make important research software, so that we can keep doing important, software-driven science.

Lots of things have to happen to support this change. Depsy is a shot at making one of those things happen: a system that tracks the impact of software in software-native ways.

That means not just counting up citations to a hastily written paper about the software, but actual mentions of the software itself in the literature. It means looking at how software gets reused by other software, even when it’s not cited at all. And it means understanding the full complexity of software authorship, where one project can involve hundreds of contributors in multiple roles that don’t map to traditional paper authorship.

OK, this sounds great, but how about some specifics? Check out these examples:

  • GDAL is a geoscience library. Depsy finds this cool NASA-funded ice map paper that mentions GDAL without formally citing it. Also check out key author Even Rouault: the project commit history demonstrates he deserves 27% credit for GDAL, even though he’s overlooked in more traditional credit systems.
  • lubridate improves date handling for R. It’s not highly-cited, but we can see it’s making a different kind of impact: it’s got a very high dependency PageRank, because it’s reused by over 1000 different R projects on GitHub and CRAN.
  • BradleyTerry2 implements a probability technique in R. It’s only directly reused by 8 projects—but Depsy shows that one of those projects is itself highly reused, leading to huge indirect impacts. This indirect reuse gives BradleyTerry2 a very high dependency PageRank score, even though its direct reuse is small, and that makes for a better reflection of real-world impact (see the toy sketch after this list).
  • Michael Droettboom makes small (under 20%) contributions to other people’s research software, contributions that are easy to overlook. But the contributions are meaningful, and they’re to high-impact projects, so in Depsy’s transitive credit system he ends up as a highly-ranked contributor. Depsy can help unsung heroes like Michael get rewarded.
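
To illustrate the dependency PageRank idea behind the lubridate and BradleyTerry2 examples, here’s a toy sketch using networkx. The graph and the package names are made up for illustration; Depsy’s real computation runs over the actual dependency networks it indexes.

```python
import networkx as nx

# Toy illustration of dependency PageRank (not Depsy's actual code).
# Edges point from a project to a package it reuses, so packages that are
# widely reused -- or reused by packages that are themselves widely reused --
# end up with higher scores. All project and package names are made up.
G = nx.DiGraph()
G.add_edges_from([
    ("app1", "popular_pkg"),
    ("app2", "popular_pkg"),
    ("app3", "popular_pkg"),
    ("popular_pkg", "quiet_workhorse"),  # indirect reuse, BradleyTerry2-style
    ("app4", "quiet_workhorse"),
])
for pkg, score in sorted(nx.pagerank(G).items(), key=lambda kv: -kv[1]):
    print(f"{pkg:16s} {score:.3f}")
```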
     

Depsy doesn’t do a perfect job of finding citations, tracking dependencies, or crediting authors (see our in-progress paper for more details on limitations). It’s not supposed to. Instead, Depsy is a proof-of-concept to show that we can do them at all. The data and tools are there. We can measure and reward software impact, like we measure and reward the impact of papers.

Embed impact badges in your GitHub README

Given that, it’s not a question of if research software becomes a first-class scientific product, but when and how. Let’s start having the conversations about when and how (here are some great places for that). Let’s improve Depsy, let’s build systems better than Depsy, and let’s (most importantly) start building the cultural and political structures that can use these systems.

For lots more details about Depsy, check out the paper we’re writing (and contribute!), and of course Depsy itself. We’re still in the early stages of this project, and we’re excited to hear your feedback: hit us up on twitter, in the comments below, or in the Hacker News thread about this post.

Depsy is made possible by a grant from the National Science Foundation.

Better than a free Ferrari: Why the coming altmetrics revolution needs librarians

This post was originally published as the foreword to Meaningful Metrics: A 21st Century Librarian’s Guide to Bibliometrics, Altmetrics, and Research Impact [paywall, embargoed for 6mo]. It’s also persistently archived on figshare.

A few days ago, we were speaking with an ecologist from Simon Fraser University here in Vancouver, about an unsolicited job offer he’d recently received. The offer included an astonishing inducement: anyone from his to-be-created lab who could wangle a first or corresponding authorship of a Nature paper would receive a bonus of one hundred thousand dollars.

Are we seriously this obsessed with a single journal? Who does this benefit? (Not to mention, one imagines the unfortunate middle authors of such a paper, trudging to a rainy bus stop as their endian-authoring colleagues roar by in jewel-encrusted Ferraris.)  Although it’s an extreme case, it’s sadly not an isolated one. Across the world, A Certain Kind of administrator is doubling down on 20th-century, journal-centric metrics like the Impact Factor.

That’s particularly bad timing, because our research communication system is just beginning a transition to 21st-century communication tools and norms. We’re increasingly moving beyond the homogeneous, journal-based system that defined 20th century scholarship.

Today’s scholars increasingly disseminate web-native scholarship. For instance, Jason’s 2008 tweet coining the term “altmetrics” is now more cited than some of his peer-reviewed papers. Heather’s openly published datasets have gone on to fuel new articles written by other researchers. And like a growing number of other researchers, we’ve published research code, slides, videos, blog posts, and figures that have been viewed, reused, and built upon by thousands all over the world. Where we do publish traditional journal papers, we increasingly care about broader impacts, like citation in Wikipedia, bookmarking in reference managers, press coverage, blog mentions, and more. You know what’s not capturing any of this? The Impact Factor.

Many researchers and tenure committees are hungry for alternatives, for broader, more diverse, more nuanced metrics. Altmetrics are in high demand; we see examples at Impactstory (our altmetrics-focused non-profit) all the time. Many faculty share how they are including downloads, views, and other alternative metrics in their tenure and promotion dossiers, and how evaluators have enthused over these numbers. There’s tremendous drive from researchers to support us as a nonprofit, from faculty offering to pay hundreds of extra dollars for profiles, to a Senegalese postdoc refusing to accept a fee waiver. Other altmetrics startups like Plum Analytics and Altmetric.com can tell you similar stories.

At higher levels, forward-thinking policy makers and funders are also seeing the value of 21st-century impact metrics, and are keen to realize their full potential. We’ve been asked to present on 21st-century metrics at the NIH, NSF, the White House, and more. It’s not these folks who are driving the Impact Factor obsession; on the contrary, we find that many high-level policy-makers are deeply disappointed with 20th-century metrics as we’ve come to use them. They know there’s a better way.

But many working scholars and university administrators are wary of the growing momentum behind next-generation metrics. Researchers and administrators off the cutting edge are ill-informed, uncertain, afraid. They worry new metrics represent Taylorism, a loss of rigor, a loss of meaning. This is particularly true among the majority of faculty who are less comfortable with online and web-native environments and products. But even researchers who are excited about the emerging future of altmetrics and web-native scholarship have a lot of questions. It’s a new world out there, and one that most researchers are not well trained to negotiate.

We believe librarians are uniquely qualified to help. Academic librarians know the lay of the land, they keep up-to-date with research, and they’re experienced providing leadership to scholars and decision-makers on campus. That’s why we’re excited that Robin and Rachel have put this book together. To be most effective, librarians need to be familiar with the metrics research, which is currently advancing at breakneck speed. And they need to be familiar with the state of practice–not just now, but what’s coming down the pike over the next few years. This book, with its focus on integrating research with practical tips, gives librarians the tools they need.

It’s an intoxicating time to be involved in scholarly communication. We’ve begun to see the profound effect of the Web here, but we’re just at the beginning. Scholarship is on the brink of a Cambrian explosion, a breakneck flourishing of new scholarly products, norms, and audiences. In this new world, research metrics can be adaptive, subtle, multi-dimensional, responsible. We can leave the fatuous, ignorant use of Impact Factors and other misapplied metrics behind us. Forward-thinking librarians have an opportunity to help shape these changes, to take their place at the vanguard of the web-native scholarship revolution. We can make a better scholarship system, together. We think that’s even better than that free Ferrari.