Open Science & Altmetrics Monthly Roundup (April 2014)

Don’t have time to stay on top of the most important Open Science and Altmetrics news? We’ve gathered the very best of the month in this post. Read on!

Funding agencies denying payments to scientists in violation of Open Access mandates

Want to actually get paid from those grants you won? If you haven’t made the publications resulting from your grant-funded research Open Access, you could be in violation of funders’ public access mandates–and may lose funding because of it.

Richard Van Noorden of Nature News reports,

The London-based Wellcome Trust says that it has withheld grant payments on 63 occasions in the past year because papers resulting from the funding were not open access. And the NIH…says that it has delayed some continuing grant awards since July 2013 because of non-compliance with open-access policies, although the agency does not know the exact numbers.

Post-enforcement, compliance rates increased 14% at the Wellcome Trust and 7% at the NIH. However, both are still a long way from full compliance with the mandates.

And that’s not the only shakeup happening in the UK: the higher ed funding bodies warned researchers that any article or conference paper accepted after April 1, 2016 that doesn’t comply with their Open Access policy can’t be used for the UK Research Excellence Framework, which determines how much public research funding universities receive.

That means institutions now have a big incentive to make sure their researchers follow the rules–if researchers are found out of compliance, their institution’s funding will be in jeopardy.

Post-publication peer review getting a lot of comments

Post-publication peer review via social media was the topic of Dr. Zen Faulkes’ “The Vacuum Shouts Back” editorial, published in Neuron earlier this month. In it, he points out:

Postpublication peer review can’t do the entire job of filtering the scientific literature right now; it’s too far from being a standard practice….[it’s] an extraordinarily valuable addition to, not a substitute for, the familiar peer review process that journals use before publication. My model is one of continuous evaluation: “filter, publish, and keep filtering.”

So what does that filtering look like? Comments on journal and funder websites, publisher-hosted social networks, and post-pub peer review websites, to start with. But Faulkes argues that “none of these efforts to formalize and centralize postpublication peer review have come close to the effectiveness of social media.” To learn why, check out his article on Neuron’s website.

New evidence supports Faulkes’ claim that post-publication peer review via social media can be very effective. A study by Paul S. Brookes, published this month in PeerJ, found that post-publication peer review on blogs makes corrections to the literature an astounding eight times more likely than corrections reported to journal editors in the traditional (private) manner.

For more on post-publication peer review, check out this classic Frontiers in Computational Neuroscience special issue, Tim Gowers’ influential blog post, “How might we get to a new model of mathematical publishing?,” or Faculty of 1000 Prime, the highly respected post-pub peer review platform.

Recent altmetrics-related studies of interest

  • Scholarly blog mentions relate to later citations: A recent study published in JASIST (green OA version here) found that mentions of articles on scholarly blogs correlate with later citations.

  • What disciplines have the highest presence of altmetrics? Hint: it’s not the ones you think. Turns out, a higher percentage of humanities and social science articles have altmetrics than of articles in the biomedical and life sciences. The researchers also found that only 7% of all papers indexed in Web of Science had Altmetric.com data.

  • Video abstracts lead to more readers: For articles in the New Journal of Physics, video abstract views correlate with increased article usage counts, according to a study published this month in the Journal of Librarianship and Scholarly Communication.

New data sources available for Impactstory & Altmetric.com

New data sources include the post-publication peer review sites Publons and PubPeer, and the microblogging site Sina Weibo (the “Chinese Twitter”). Since we get data from Altmetric, Impactstory will be reporting this data soon, too!

And another much-requested data source will be opening up in the near future: Zotero. The Sloan Foundation has backed research and development that will eventually help Zotero, the open source reference management software, build “a preliminary public API that returns anonymous readership counts when fed universal identifiers (e.g. ISBN, DOI).” So, some day soon, we’ll be able to report Zotero readership information alongside Mendeley stats in your profile–a feature many of you have been asking us about for a long time.
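To make that concrete, here’s a minimal sketch of how a client might query such an API once it launches. Everything in it is an assumption based on the description above: the endpoint URL, parameter names, and response field are hypothetical, since the API doesn’t exist yet–the only stated contract is “universal identifier in, anonymous readership count out.”

```python
import requests  # pip install requests

# Hypothetical endpoint: Zotero hasn't published a real URL or schema yet.
BASE_URL = "https://api.zotero.org/readership"

def get_readership(identifier: str, id_type: str = "doi") -> int:
    """Fetch an anonymous readership count for a universal identifier.

    Assumed request/response shape, e.g.:
        GET /readership?doi=10.1234/example  ->  {"readers": 42}
    """
    response = requests.get(BASE_URL, params={id_type: identifier}, timeout=10)
    response.raise_for_status()
    return response.json()["readers"]  # "readers" is an assumed field name

if __name__ == "__main__":
    # A DOI is shown here; per the announcement, ISBNs would work the same way.
    print(get_readership("10.1371/journal.pone.0064841"))
```

However the final API shapes up, the key design point is the one described above: feed in a standard identifier, get back aggregate (anonymous) counts that tools like Impactstory can display.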

Altmetric.com offering new badges

Altmetric.com founder Euan Adie announced that, for those who want to de-emphasize numeric scores on content, the famous “donut” badges will now be available sans Altmetric score–a move many in the altmetrics research community heralded as a welcome step away from “one score to rule them all.”

Must-read blog posts about ORCID and megajournals

We’ve been on a tear publishing about innovations in Open Science and altmetrics on the Impactstory blog. Here are two of our most popular posts for the month:

Stay connected

Do you blog on altmetrics or Open Science and want to share your posts with us? Let us know on our Twitter, Google+, Facebook, or LinkedIn pages. We might just feature your work in next month’s roundup!

And if you don’t want to miss next month’s news, remember that you can sign up to get these posts and other Impactstory news delivered straight to your inbox.

14 thoughts on “Open Science & Altmetrics Monthly Roundup (April 2014)”

  1. David Colquhoun says:

    I do wonder why you concatenate “open science and altmetrics”.
    Open science is essential for the future of honest science.
    Altmetrics actively harms science by encouraging gaming, hype and triviality–see http://www.dcscience.net/?p=6369

    I see no relationship between them.

    • Hi David,

      Putting aside our fundamental disagreements about whether altmetrics are useful for traditional research outputs (articles & books), there’s much to be said in favor of altmetrics for other research outputs (software, datasets, etc.). Open Science takes metrics like citations into account, but those who make other research products openly available need metrics that motivate them to practice Open Science by helping them understand how their open outputs are being used by other scientists. If anything, I’d venture that altmetrics for datasets and research software–the less sexy stuff that science is built upon–are more immune to hype than the stories that are told about research (i.e. articles and books). Few people outside a very specific community of interest are usually motivated to share or cite datasets or research software. (I’m generalizing here, but still.)

      • David Colquhoun says:

        Stacy. Did you bother to read the link that I gave? If so, I’d appreciate it if you said what you think is wrong about our arguments. You surely can’t really believe that “datasets and research software–the less sexy stuff….” are ever going to score highly under altmetrics.

        I’ve no objection to people trying to make a living by peddling altmetrics or homeopathy (both have roughly the same amount of evidence in favour of them) as long as they don’t exaggerate. The fact of the matter, as we demonstrated, is that altmetrics are by people who don’t understand science, for people who don’t understand science. They promote trivial (and often wrong) science. They encourage meretricious headline-grabbing science, and penalise deep and thorough science.

        Laurie Taylor said it all with his (barely imaginary) reference to “The British Journal of Half-Baked Neuroscience Findings with Big Popular Impact”: bit.ly/1iNZ0d3

        It’s hard enough to do good science without altmetrics sales people impeding the effort. Please get off our backs.

      • Hi David,

        I read your post and Jason’s reply, which is why I didn’t bother to debate most of your other points about altmetrics as outlined in your post from January–Jason has responded to them in detail in the past, and I agree with his points. On the flip side, I wonder if you read my comment?

        Altmetrics is not about “scoring high”–it’s about uncovering impacts that were previously hidden/unknowable.

      • David Colquhoun says:

        I’m puzzled by your statement that altmetrics is not about scoring high. You mean it’s not about scoring at all? I’m baffled. I can’t recall ever seeing a tweet about the sort of thing that you mention. In any case, I can easily count the number of times something is downloaded from my blogs. And I can easily find how many people have used the software that we provide, free, at OneMol.org.uk. It doesn’t need any commercial intervention to do that.

        The crucial question is what the information is used for. Jason Priem seems to have a rather different view of that depending on whether he’s talking to customers or to scientists. The only reason that I can think of is as a number that you can quote when applying for a job or a grant. If it is to be useful for that purpose, it must measure quality. You haven’t produced the slightest reason to think that it measures the quality of the work. I have produced a bit of evidence that it measures lack of quality.

        You may say that you are looking for evidence. Homeopaths say that too. But the time to start promoting the product is after you’ve got the evidence, not before.

  2. David,

    You imagine we disagree more than we actually do.

    When I say altmetrics is not about scoring high, I mean that getting the numbers is less important than having the context for those numbers, which we provide (via our data providers at Altmetric.com) by linking to the individual tweets, blog posts, and post-publication peer reviews a research output gets. Context is king, and we discourage people from taking altmetrics data at face value–in fact, we usually point out that the real gold is in the qualitative data you get from reading the individual blog posts, etc.

    It’s great that your blog platform and OneMol allow you to measure downloads and pageviews. (Sounds like your interest in these metrics is–dare I say?!–an interest in altmetrics 🙂 ) But not everyone can or wants to track metrics themselves, as you have done.

    For those who want help measuring that impact, we–along with companies like Altmetric.com and Plum Analytics–provide a service that many find useful. But we’re not salespeople–we’re a non-profit built by scientists (and governed by a board that’s mostly scientists) to meet the needs of those who practice Open Science.

    Researchers can use their Impactstory information however they’d like. Some have used it when applying for grants & tenure; others just like knowing more about how others are using their work.

    No one I know claims that altmetrics measure quality (if you do find such claims, I encourage you to provide citations here). They measure flavors of impact (public engagement, interest from other scientists, etc.), just as citations measure not quality but scholarly interest. New evidence of these flavors is being uncovered every day–and unlike homeopathic “evidence”, it’s being published in peer-reviewed journals [1] [2] [3].

    Which brings me to my final point: we like to keep the language on this blog (including comments) civil, so as to foster an environment that feels inclusive and safe for folks to express differing opinions. You should continue to comment and disagree with me/us as much as you want, but please refrain from name-calling.

    [1] http://link.springer.com/journal/11192
    [2] http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0064841
    [3] http://onlinelibrary.wiley.com/doi/10.1002/asi.23101/full

    • David Colquhoun says:

      Actually PubMed (disgracefully) indexes something like 30 quackery journals, and much homeopathy evidence is published in these journals, where it is peer-reviewed by other homeopaths. Similarly, I imagine that most papers in Scientometrics (your first reference) are peer-reviewed by other bibliometricians. All this proves is that any paper, however bad, can be published in a “peer-reviewed” journal. Peer review tells you next to nothing about quality. In order to judge quality, you have to read the paper.

      Your second two references appear to say nothing that justifies the usefulness of altmetrics: if anything they say the opposite.

      As you must know, Jason Priem had a slide show which claimed that an advantage of altmetrics was “Speed: months or weeks, not years: faster evaluations for tenure/hiring”. If that doesn’t amount to a claim that altmetrics measure quality, I don’t know what does. When I challenged him about this absurd claim, he backed down a bit, but what else could he have done?

      I guess anyone who runs a blog is a bit interested in whether people read it, and the downloads etc. are available free, and very simply, via Google Analytics, statcounter.com, etc. But blogs are just a hobby for me. I’m almost retired from science, so I have time to write it. Most active scientists don’t. The serious science doesn’t appear on blogs. I’m quite glad when I get 30,000 page views on a blog about acupuncture, but that’s trivial stuff. Serious science, e.g. highly mathematical stuff like http://www.onemol.org.uk/?page_id=175#hjc90, has been cited only 90 times (and, I’d guess, been read far fewer times), despite providing the basis of most of our (real) work in the last 20 years. That is yet another example that altmetrics favour the popular and trivial and do nothing for serious hard science.

      It’s true that my first comment was quite strongly worded. That’s because it’s something I feel strongly about. I don’t believe it is too strong to say that most metrics corrupt science, and altmetrics is especially bad in that respect. You say “You imagine we disagree more than we actually do”. I don’t believe that’s true. We disagree radically. No working scientist I know thinks altmetrics are anything but silly and trivialising, but they are too busy doing experiments to have the time to argue about it. Luckily, I have the time now to speak for them.

      • You do not find altmetrics useful for judging the quality of articles–we agree there (looking deeper into the social media conversations surrounding an output + reading the output itself is a better judge of quality, IMO). But you’re not addressing the use of altmetrics to understand OTHER impacts.

        Blogs are often more of an outreach vehicle than a place for publishing “hard science”–agreed. But what if you want to understand the effects of your outreach efforts? Altmetrics numbers are a good way to do that. Another example: what if you want to (in your P&T package) show that you’ve had an impact in your field by building software that’s become the de facto standard for data analysis in chemical biology? Altmetrics allow you to do that, if you want to.

        I find frustrating your unwillingness to acknowledge that other researchers could hold a different opinion from yours–that they might find altmetrics useful and want to use them to understand the variety of impacts their work has. You say you speak for the “working scientists” you know, but there’s a world of other researchers out there, and neither you nor I can speak for them.

        As for your arguments against bibliometrics research–which seems to be an axe you’ve been grinding for some time now–you’ve provided no evidence that it’s not a valid science. That being said, if you’d like to debate that, I encourage you to write something up on your own blog and invite comments; the topic is unrelated to the content of the original post we’re currently commenting on.

        On a final, related note, you’re welcome to comment on our blog posts in the future if you have responses that pertain to the issues raised in the posts themselves. As for this debate, we’re going to have to agree to disagree and I’m going to step out of this conversation, since it’s starting to go in circles.

        • David Colquhoun says:

          I agree. We are getting nowhere.

          One final point. You say “you’ve provided no evidence that [bibliometrics] is not a valid science”.

          If you are trying to sell something then it is for you to show it’s useful, not for me to spend time showing it’s useless. The argument that you use is much like that used by homeopaths (again).

          It all started with impact factors, which, at least since Seglen (1997), have been universally condemned as a method of assessing people (except, of course, by the people who sell them). Since then, many other ‘metrics’ have been dreamed up and exploited commercially, but I’m not aware of a single study that shows they are a good way to assess people. It really is no better than snake oil.

        • I have no desire to censor David Colquhoun, but in order to balance-up this comment thread, can I just say: thank you ImpactStory for providing an informative & well-sourced round-up of open science news in April 2014.

          /That is all

    • David Colquhoun says:

      @Stacy

      Thanks for the links to people “who have found altmetrics useful for making their case for tenure and promotion, grants, and awards”.

      You seem to have forgotten the claim that altmetrics are not suitable for judging people for promotion etc. But, more to the point, I notice that none of the references says whether the applications in which altmetric scores were specified were successful or not. And even if some were successful, there is no way to know whether that success was because of the altmetrics, or despite them. As evidence, your links don’t score highly, I fear.

      If someone included that sort of stuff in an application to me, I’d try not to dismiss their application on that account. I’d just ignore it and read their papers in the normal way (and their blog if they had one, though if they were applying for an academic research job, the blog would be pretty peripheral to the decision).

      • Emilio’s and Robert’s applications were successful; I’m not certain about the status of Ahmed’s or Richard’s–you’d have to reach out to them to get an update.

        You’re absolutely right that no one can say altmetrics were the reason they got tenure–just as one can’t claim with certainty that putting a JIF or citation count next to a paper on a CV in a dossier tipped the scales one way or another.

        And yes, you’re also correct that, above all else, reading the papers themselves is the most important thing reviewers can (and should) do when reviewing dossiers, grant applications, etc.
