Altmetrics: A “bibliometric nightmare?”

Our growing user base is pretty excited about using altmetrics to tell better stories about their impact, and we’re passionate about helping them do it better. And while we both love discussing the pros and cons of altmetrics, we prefer to err on the side of doing over talking, so we don’t blog about it much.

But we appreciated David Colquhoun’s effort to get a discussion going around his recent blog post, so we’re jotting down a few quick thoughts here in response. It was an interesting read, in part because David may imagine we disagree a lot more than we in fact do.

We agree that bibliometrics is a tricky and complicated topic; folks have been arguing about the applicability and validity of citation mining for many decades now [paywall], in far more detail than either David or we have time to cover completely. But what is certain is that the use of citation-based metrics like the Impact Factor has become deeply pathological.

That’s why we’re excited to be promoting a conversation reexamining metrics of science, a conversation asking if academia as an institution is really measuring what’s meaningful. And of course the answer is: no. Not yet. So, as an institution, we need to (1) stop pretending we are and (2) start finding ways to do better. At its core, this is what altmetrics is all about–not Twitter or any other particular platform. And we’re just getting started.

We couldn’t agree more that post-publication peer-review is the future of scholarly communication. We think altmetrics will be an important part of this future, too. Scientists won’t have time to Read All The Things in the future, any more than they do now. Future altmetrics systems–especially as we begin to track who discusses papers in various environments, and what they’ve said–will help digest, report, flag, and attract expert assessments, making a publish-then-review ecosystem practical. Even today, lists like the Altmetric top 100 can help attract expert review like David’s to the highly shared papers where it’s particularly needed.

We agree that a TL;DR culture does science no favors. That’s why we’re enthusiastic about the potential of social media and open review platforms to help science move beyond the formalized swap meet of journal publishing, on to actual in-depth conversations. It’s why we’re excited about making research conversation, data, analysis, and code first-class scholarly objects that fit into the academic reward system. It’s time to move beyond the TL;DR of the article, and start telling the whole research story.

So we’re happy that David agrees we must “give credit for all forms of research outputs, not only papers.” Although of course, not everyone agrees with David or Jason or Heather. We hear from lots of researchers that they’ve got an uphill battle arguing their datasets, blog posts, code, and other products are really making an impact. And we also hear that Impactstory’s normalized usage, download, and other data helps them make their point, and we’re pretty happy about that. Our data could be a lot more effective here (and stay tuned, we’ve got some features rolling out for this…), but it’s a start. And starts are important.

So are discussions. So thanks, David, for sharing your thoughts on this, and sorry we don’t have time to engage more deeply on it. If you’re ever in Vancouver, drop us a line and we’ll buy you a beer and have a Proper Talk :). And thanks to everyone else in this growing community for keeping great discussions on open science, web-native scholarship, and altmetrics going!

10 thoughts on “Altmetrics: A “bibliometric nightmare?””

  1. Kwner says:

    The “sorry we don’t have time to engage more deeply on it” response ironically seems to be reinforcing Dr Colquhoun’s point about evaluation.

    • jasonpriem says:

      Hm…not sure I understand how? I didn’t understand his point to be “everyone has infinite time to respond to everything people write on the internet.” I think there are lots of ways to add to a conversation; hopefully our modest blog post was one of them, though of course you’re free to find otherwise.

  2. David Colquhoun says:

    Thanks for that. It seems that you have backed off considerably since you said that an advantage of altmetrics was “Speed: months or weeks, not years: faster evaluations for tenure/hiring”. Do I take it that you no longer believe it is useful for making decisions about tenure and hiring? If so, that’s welcome.

    You say here “we’re excited to be promoting a conversation reexamining metrics of science”, but I understand that you are selling altmetrics for money. That seems a bit odd if the conversation has only just started.

    • jasonpriem says:

      Hi David,
      In the talk you link to, I suggest that we’re headed for a more connected, post-publication-review future where most scholarly uses, conversations, and reviews leave online traces. Obviously we’re not there now. But if we do get there, we might have unprecedented access to the aggregated review of the scientific community, in nearly real time. I expect that would be interesting in evaluating work, especially very recent work. I stick by that and say it often, although to be honest it strikes me as such a modest, speculative claim that it’s hard to argue about. But you’re welcome to try if you like 🙂

      “I understand that you are selling altmetrics for money.”

      I’m afraid you’re mistaken here. I (Jason) am not selling altmetrics, and neither is Impactstory (which I’m a cofounder of, along with Heather Piwowar). To be honest, it was a little frustrating to read, since it seems like 1 minute on jasonpriem.org and impactstory.org would’ve made that clear. That said, I reckon I’ve been there before, too…these things happen.

      Thanks for taking the time to think about the directions altmetrics and scholarly communication are headed…I think we need more scientists doing just that.

  3. David Colquhoun says:

    Sorry about the charging thing. I shouldn’t have assumed that you operate in the same way as altmetric.com.

    But I still don’t see, in the light of the examples we give at http://www.dcscience.net/?p=6369, how you can hold out any hope that on-line attention will ever measure quality. It’s almost the opposite in those examples. I haven’t yet heard of any metrics system that even bothers to count whether the comments on a paper are flattering or derogatory. Do you know of one?

    • jasonpriem says:

      No worries 🙂

      Let’s imagine that we knew exactly WHO wrote online comments about each article, and we could sentiment-mine each comment/open review/etc. to get a good gist of its content. If I knew five Nobelists were raving about your paper as part of its post-publication peer review, I could guess it’s got some merit. Or maybe we can find nodes in the network that consistently predict classic papers…even if they’re not prizewinners, their tweets might hold more weight. This is all conceptually possible–and indeed there’s lots of progress toward doing just these kinds of things in the commercial space. You’re right that we are (as is so often the case) rather behind the times on this in scholarly communication, but I think we’re catching up fast–and this will accelerate as post-pub peer review becomes more important.
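
      To make that concrete, here’s a minimal, hypothetical sketch (in Python) of just the aggregation step: each comment gets a sentiment score from whatever model you trust and a weight reflecting who wrote it, and a paper’s “reception” is the weighted average. The field names, weights, and numbers are invented for illustration; this isn’t how Impactstory or any existing service computes anything, and the hard parts (reliable author identity, credible sentiment scoring) are assumed away.

      ```python
      from dataclasses import dataclass
      from typing import List

      # Hypothetical record of one online comment/review about a paper.
      # All field names and values are illustrative, not from any real altmetrics API.
      @dataclass
      class Comment:
          author: str
          text: str
          author_weight: float  # e.g. higher for commenters whose past assessments proved predictive
          sentiment: float      # -1.0 (derogatory) .. +1.0 (flattering), from any sentiment model

      def weighted_reception(comments: List[Comment]) -> float:
          """Weighted-average sentiment across comments, weighting by who wrote them."""
          total_weight = sum(c.author_weight for c in comments)
          if total_weight == 0:
              return 0.0
          return sum(c.sentiment * c.author_weight for c in comments) / total_weight

      # Toy example: two positive expert comments outweigh one mildly negative anonymous one.
      comments = [
          Comment("expert_a", "Elegant method, replicates cleanly.", author_weight=5.0, sentiment=0.9),
          Comment("expert_b", "Important result for the field.", author_weight=4.0, sentiment=0.8),
          Comment("anon_123", "Figures are hard to read.", author_weight=1.0, sentiment=-0.3),
      ]
      print(f"weighted reception: {weighted_reception(comments):+.2f}")
      ```

      In a real system the interesting work would be in the two inputs (knowing who the commenters are, and scoring what they said), not in this trivial average.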

      But along with this, it’s also important to remember that altmetrics can do more than just measure “quality” (which, though it’s an important concept, is necessarily a subjective and nebulous one). It can also find which papers are making a public policy or clinical impact, or which datasets are being heavily downloaded and reused, or which software projects are getting heavily forked/downloaded/saved, and so on. The idea is to find the stories about research impacts that go untold right now. The idea is not to REPLACE reading papers. Seems pretty obvious to me that this will always be important.
