What’s our impact? (August 2014)

You may have noticed a change in our blog in recent months: we’ve added a number of editorial, how-to, and opinion posts, in addition to “behind the scenes” Impactstory updates.

Posts on our blog and commentary on Twitter serve two purposes for us. First, they promote our nonprofit goals of education and awareness. Second, they serve as “content marketing,” a great way to raise awareness of Impactstory among a broader audience.

We’ve been tracking the efficacy of this new strategy for a while now, and thought we’d begin to share the numbers with you in the spirit of making Impactstory more transparent. After all, if you’re an Impactstory fan, you’re likely interested in metrics of all stripes.

Here are our numbers for August 2014.

Organic site traffic stats

  • Unique visitors to impactstory.org 3,429
  • New users 378
  • Conversion rate 11.3% (% of visitors who signed up for an Impactstory.org account)

Blog stats

  • Unique visitors 4,381
  • Pageviews 6,431
  • Clickthrough rate (% of blog visitors who clicked through to impactstory.org) 1.6%
  • Conversion rate (% of those blog-referred impactstory.org visitors who went on to sign up for an account) 9.8%
  • Percent of August’s new user signups that came from the blog 1.8% (how we compute these rates is sketched below)
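
For readers who want to see exactly how these rates are derived, here is a minimal sketch of the arithmetic. The helper function and the counts in it are hypothetical, not our actual analytics numbers:

def rate(numerator, denominator):
    # Return a percentage rounded to one decimal place
    return round(100.0 * numerator / denominator, 1)

# Hypothetical counts for illustration only
site_visitors = 1000      # unique visitors to impactstory.org
site_signups = 110        # visitors who created an account
blog_visitors = 1200      # unique visitors to the blog
blog_referrals = 20       # blog visitors who clicked through to impactstory.org
blog_signups = 2          # blog-referred visitors who then signed up

print(rate(site_signups, site_visitors))    # site conversion rate, e.g. 11.0
print(rate(blog_referrals, blog_visitors))  # blog clickthrough rate, e.g. 1.7
print(rate(blog_signups, blog_referrals))   # blog conversion rate, e.g. 10.0
print(rate(blog_signups, site_signups))     # blog's share of all new signups, e.g. 1.8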

Overall: Our blog traffic has increased steadily since May, from 3,896 to 6,431 pageviews per month, and unique visitors have grown from 2,311 to 4,381 per month. We published four blog posts in August, two of which could be considered “content marketing”: an interview with Impactstory Advisor Megan O’Donnell, and our monthly Open Science and Altmetrics Roundup.

What about clickthrough and conversion rates? On the one hand, it’d be helpful to compare these rates against industry norms; on the other hand, which “industry norms” would those be? Startup norms? Non-profit norms? Academic norms? In the end, I’ve decided it’s best to just use these numbers as a benchmark and forget about comparisons.

Twitter stats

  • New followers 215
  • Increase in followers over previous month 5.11%
  • Mentions 346 (We’re tracking this to answer the question, “How engaged are our followers?”)
  • Tweet reach 3,543,827 (We’re tracking this–the number of people who potentially saw a tweet mentioning Impactstory or our blog–to understand our brand awareness)
  • Referrals to impactstory.org: 271 users
  • Signups: 32

Overall: Our Twitter follower growth rate actually went down from May, from ~8% new followers per month to ~5%. I did not (and still have not) cross the 5,000 follower threshold, a milestone I had intended to hit around August 20th. That said, engagement was up ~23% from the previous month, a change that reflects conscious effort.

What does it all mean?

Our August numbers were no doubt affected by our subscription announcements and the new Impactstory features. I’m interested to see how these statistics change through September, which has seen an end to the “early adopter” 30-day free trial, and the debut of all the features we deployed during the 5 Meter sprint.

Our blog now receives more unique visitors than our website, so increasing the number of blog-referred signups is a priority.

We could also stand to improve our conversion rate for organic website traffic. Our rate is below average compared to other non-profits, publishing-related organizations, and IT companies.

Looking ahead

Given our findings from this month’s stats, here are our goals for September (already half-over, I know) and October:

  • Website: Jason and Heather will be working in the coming months to improve conversion rates by introducing new features that drive signups and subscriptions.
  • Blog: Increase unique visitors and the conversion rate for new signups–the former to continue building brand awareness by publishing blogposts that resonate with scientists, and the latter for obvious reasons. 🙂 One tactic could be to begin offering at least one content marketing post per week–a challenging task.
  • Twitter: Increase our growth rate for Twitter followers, pass the 5,000 follower mark, and continue to engage with our audience in ways that provide value–whether by sharing Open Science and altmetrics news and research, answering a question they have about Impactstory, or connecting them with other scientists and resources.
  • In general: Listen to (and act upon) feedback we get via social media. Continue to create useful blog content that meets the needs of practicing scientists, and to scour the web for the most interesting and relevant Open Science and Altmetrics news and research to share with our audience.

Questions?

Are there statistics you’re curious about, or do you have questions about our new approach to marketing? I’m happy to answer them in the comments below. Cheers!

Updated Dec. 31 2014 to reflect more accurate calculation for conversion rates from blog traffic.

Impactstory Advisor of the Month: Guillaume Lobet (September 2014)

September’s Impactstory Advisor of the Month is (drumroll please!)
Guillaume Lobet!


Guillaume is a post-doc researcher at the Université de Liège in Belgium, in the Plant Physiology lab of Prof. Claire Perilleux. He’s also a dedicated practitioner of open, web-native science, creating awesome tools ranging from a Plant Image Analysis software finder to an image analysis toolbox for the quantitative analysis of root system architecture. He’s even created an open source webapp that uses Impactstory’s open profile data to automatically create CVs in LaTeX, HTML, and PDF formats. (More on that below.)

I had the pleasure of corresponding with Guillaume this week to talk about his research, what he enjoys about practicing web-native science, and his approach to being an Impactstory Advisor.

Tell us a bit about your current research.

I am a plant physiologist. My current work focuses on how the growth and development of different plant organs (e.g. the root and the shoot) are coordinated, and how modifications in one organ affect the others. The project is fascinating, because so far the majority of plant research has focused on one specific organ or process, and little has been done to try to understand how the different parts communicate.

Why did you initially decide to join Impactstory?

A couple of years ago, I created a website referencing the existing plant image analysis software tools (www.plant-image-analysis.org). I wanted to help users understand how well the tools (or more specifically, the scientific papers describing the tools) have been received by the community. At that time, an article-level Impactstory widget was available, and I chose to use it. It was a great addition to the website!

At the same time, I created an Impactstory profile and I’ve used it since then. (A quick word about the new profiles: they look fantastic!)

Why did you decide to become an Advisor?

Mainly because the ideas promoted by the Impactstory team are in line with my own. Researchers contribute to the scientific community (and even to society in general) through more than just peer-reviewed papers (even though those remain a very important way to disseminate our findings). Web 2.0 brought us a large array of ways to contribute to the scientific debate, and it would be restrictive not to consider those when evaluating one’s work.

How have you been spreading the word about Impactstory?

I started by talking about it with my direct colleagues. Then, I noticed that science valorisation in general was not well known, so I made a presentation about it and shared it on Figshare. To my great surprise, it became my most viewed item (I guess people liked the Lord of the Rings / Impactstory mash-up :)). In addition, I created a small widget to convert any Impactstory online profile into a resume. And of course, I proudly wear my Impactstory t-shirt whenever I go to conferences, which always brings questions such as “I heard of that, what is it exactly?”.

You’re a web-native scientist (as evidenced by your active presence on sites like Figshare, Github, and Mendeley). When did you start practicing web-native science? What do you like about it? Are there drawbacks?

It really started a couple of years ago, by the end of my PhD. At that time, I needed to apply for a new position, so I set up a webpage, Mendeley account, and so on. I quickly found it to be a great way to get in touch with other researchers.

What I like most about web-native science is that boundaries are disappearing! You do not need to meet people in person to build a new project or start a new collaboration. It brings together researchers in the same field who are scattered around the globe into a small digital community where they can easily interact!

As for the drawbacks, I am still looking for them 🙂

Tell us about your “Impact CV” webapp, which converts anyone’s Impactstory profile data into PDF, Markdown, LaTeX, or HTML format. Why’d you create it and how’d you do it?

A few months ago, I needed to update my resume, and my Impactstory profile contained all my research outputs. So I thought it would be nice to be able to reuse this information, not only for me, but for everyone who has an Impactstory profile. Instead of copying and pasting my online profile into my resume, I took advantage of the openness of Impactstory to automatically retrieve the data contained in my profile (everything is stored in a JSON file that is readily available from any profile) and re-use it locally. I wrapped it up in a webpage (http://www.guillaumelobet.be/impact) and voilà!
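
The general recipe is simple: fetch the profile’s JSON, pull out each product, and render it into whatever CV format you like. Here’s a minimal sketch of that idea in Python–not Guillaume’s actual code. The profile URL and the field names (“products”, “title”, “year”) are assumptions for illustration, so check the JSON of a real profile for the actual structure:

import json
import urllib.request

# Hypothetical profile export URL; substitute the JSON link from a real Impactstory profile
PROFILE_JSON_URL = "https://impactstory.org/SomeUser.json"

def fetch_profile(url):
    # Download and parse the profile's JSON export
    with urllib.request.urlopen(url) as response:
        return json.load(response)

def to_markdown(profile):
    # Render a very small Markdown resume from the profile data
    lines = ["# Research outputs", ""]
    for product in profile.get("products", []):
        title = product.get("title", "Untitled")
        year = product.get("year", "n.d.")
        lines.append("- {} ({})".format(title, year))
    return "\n".join(lines)

if __name__ == "__main__":
    print(to_markdown(fetch_profile(PROFILE_JSON_URL)))

From Markdown it’s a short step to HTML or LaTeX, and a tool like pandoc can handle the PDF conversion.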

What’s the best part about your work as a post-doc researcher at the Université de Liège?

Academic freedom is definitely the best part about working in a University. It gives us the latitude to explore unexpected paths. And I work with great people!

Thanks, Guillaume!

As a token of our appreciation for Guillaume’s hard work, we’re sending him an Impactstory t-shirt of his choice from our Zazzle store.

Guillaume is just one part of a growing community of Impactstory Advisors. Want to join the ranks of some of the Web’s most cutting-edge researchers and librarians? Apply to be an Advisor today!

What Jeffrey Beall gets wrong about altmetrics

Not long ago, Jason received an email from an Impactstory user, asking him to respond to the anti-altmetrics claims raised by librarian Jeffrey Beall in a blogpost titled, “Article-Level Metrics: An Ill-Conceived and Meretricious Idea.”

Beall is well-known for his blog, which he uses to expose predatory journals and publishers that abuse Open Access publishing. This has been valuable to the OA community, and we commend Beall’s efforts. But we think his post on altmetrics was not quite so well-grounded.

In the post, Beall claims that altmetrics don’t measure anything of quality. That they don’t measure the impact that matters. That altmetrics can be easily gamed.

He’s not alone in making these criticisms; they’re common. But they’re also ill-informed. So, we thought that we’d make our responses public, because if one person is emailing to ask us about them, others must have questions, too.

Citations and the journal impact factor are a better measure of quality than altmetrics

Actually, citations and impact factors don’t measure quality.

Did I just blow your mind?

What citations actually measure

Although early theorists emphasized citation as a dispassionate connector of ideas, more recent research has repeatedly demonstrated that citation has more complex motivations, including frequent use as a rhetorical tool or as a way to satisfy social obligations (just ask a student who’s failed to cite their advisor). In fact, Simkin and Roychowdhury (2002) estimate that as few as 20% of citers even read the paper they’re citing. That’s before we even start talking about the dramatic disciplinary differences in citation behavior.

When it comes down to it, because we can’t identify citer motivations by looking at a citation count alone (and, to date, efforts to use sentiment analysis to understand citation motivations have failed to be widely adopted), the only bulletproof way to understand the intent behind citations is to read the citing paper.

It’s true that some studies have shown that citations correlate with other measures of scientific quality like awards, grant funding, and peer evaluation. We’re not saying they’re not useful. But citations do not directly measure quality, which is something that some scientists seem to forget.

What journal impact factors actually measure

We were surprised that Beall holds up the journal impact factor as a superior way to understand the quality of individual papers. The journal impact factor has been repeatedly criticized throughout the years, and one issue above all others renders Beall’s argument moot: the impact factor is a journal-level measure of impact, and therefore irrelevant to the measure of article-level impact.

What altmetrics actually measure

The point of altmetrics isn’t to measure quality. It’s to better understand impact: both the quantity of impact and the diverse types of impact.

And when we supplement traditional measures of impact like citations with newer, altmetrics-based measures like post-publication peer review counts, scholarly bookmarks, and so on, we have a better picture of the full extent of impact. Not the only picture. But a better picture.

Altmetrics advocates aim to make everything a number. Only peer review will accurately get at quality.

This criticism is only half-wrong. We agree that informed, impartial expert consensus remains the gold standard for scientific quality. (Though traditional peer-review is certainly far from bullet-proof when it comes to finding this.)

But we take exception to the charge that we’re only interested in quantifying impact. In fact, we think that the compelling thing about altmetrics services is that they bring together important qualitative data (like post-publication peer reviews, mainstream media coverage, who’s bookmarking what on Mendeley, and so on) that can’t be summed up in a number.

The scholarly literature on altmetrics is growing fast, but it’s still early. And altmetrics reporting services can only improve over time, as we discover more and better data and ways to analyze it. Until then, using an altmetrics reporting service like our own (Impactstory), Altmetric.com or PlumX is the best way to discover the qualitative data at the heart of diverse impacts. (More on that below.)

There’s only one type of important impact: scholarly impact. And that’s already quantified in the impact factor and citations.

The idea that “the true impact of science is measured by its influence on subsequent scholarship” would likely be news to patients’ rights advocates, practitioners, educators, and everyone else that isn’t an academic but still uses research findings. And the assertion that laypeople aren’t able to understand scholarship is not only condescending, it’s wrong: cf. Kim Goodsell, Jack Andraka, and others.

Moreover, who are the people and groups that argue in favor of One Impact Above All Others, measured only through the impact factor and citations? Often, it’s the established class of scholars, most of whom have benefited from being good at attaining a very particular type of impact and who have no interest in changing the system to recognize and reward diverse impacts.


Even if we were to agree that scholarly impact were of paramount importance, let’s be real: the impact factor and citations alone aren’t sufficient to measure and understand scholarly impact in the 21st century.

Why? Because science is moving online. Mendeley and CiteULike bookmarks, Google Scholar citations, ResearchGate and Academia.edu pageviews and downloads, dataset citations, and other measures of scholarly attention have the potential to help us define and better understand new flavors of scholarly attention. Citations and impact factors by themselves just don’t cut the mustard.

I heard you can buy tweets. That proves that altmetrics can be gamed very easily.

There’s no denying that “gaming” happens, and it’s not limited to altmetrics. In fact, journals have recently been banned from Thomson Reuters’ Journal Citation Reports due to impact factor manipulation, and papers have been retracted after a “citation ring” was busted. And researchers have proven just how easy it is to game Google Scholar citations.

Most players in the altmetrics world are pretty vigilant about staying one step ahead of the cheaters. (Though, to be clear, there’s not much evidence that scientists are gaming their altmetrics, since altmetrics aren’t yet central to the review and rewards systems in science.) Some good examples are SSRN’s means for finding and banning fraudulent downloaders, PLOS’s “Case Study in Anti-Gaming Mechanisms for Altmetrics,” and Altmetric.com’s thoughts on the complications of rooting out spammers and gamers. And we’re seeing new technology debut monthly that helps us uncover bots on Twitter and Wikipedia, fake reviews and social bookmarking spam.

Crucially, altmetrics reporting services make it easier than ever to sniff out gamed metrics by exposing the underlying data. Now, you can read all the tweets about a paper in one place, for example, or see who’s bookmarking a dataset on Delicious. And by bringing together that data, we help users decide for themselves whether that paper’s altmetrics have been gamed. (Not dissimilar from Beall’s other blog posts, which bring together information on predatory OA publishers in one place for others to easily access and use!)

Altmetrics advocates just want to bring down The Man

We’re not sure what that means. But we sure are interested in bringing down barriers that keep science from being as efficient, productive, and open as it should be. One of those barriers is the current incentive system for science, which is heavily dependent upon proprietary, opaque metrics such as the journal impact factor.

Our true endgame is to make all metrics–including those pushed by The Man–accurate, auditable, and meaningful. As Heather and Jason explain in their “Power of Altmetrics on a CV” article in the ASIS&T Bulletin:

Accurate data is up-to-date, well-described and has been filtered to remove attempts at deceitful gaming. Auditable data implies completely open and transparent calculation formulas for aggregation, navigable links to original sources and access by anyone without a subscription. Meaningful data needs context and reference. Categorizing online activity into an engagement framework helps readers understand the metrics without becoming overwhelmed. Reference is also crucial. How many tweets is a lot? What percentage of papers are cited in Wikipedia? Representing raw counts as statistically rigorous percentiles, ideally localized to domain or type of product, makes it easy to interpret the data responsibly.

That’s why we incorporated as a non-profit: to make sure that our goal of building an Open altmetrics infrastructure–which would help make altmetrics accurate, auditable, and meaningful–isn’t corrupted by commercial interests.

Do you have questions related to Beall’s–or others’–claims about altmetrics? Leave them in the comments below.

Impactstory Advisor of the Month: Megan O’Donnell (August 2014)

We’re pleased to announce the August Impactstory Advisor of the Month, Megan O’Donnell!


As a Scholarly Communication Librarian at Iowa State University, Megan’s a campus expert on altmetrics and Open Science. Since joining the Advisors program, Megan has educated other campus librarians on altmetrics and Impactstory, and is currently hard at work planning an “intro to altmetrics” faculty workshop for the Fall semester.

We recently chatted with Megan about her job as a Scholarly Communication Librarian, how Impactstory benefits her scholarly activities, and how the new Impactstory subscription model has affected her outreach efforts.

Why did you initially decide to join Impactstory?

I’m still a new librarian in many ways. I just passed my one-year anniversary as a full-time librarian this spring, and my coauthors and I are finishing up what will be my first peer-reviewed work. Impactstory appealed to me because it was a way to showcase and track the work I have been doing outside of traditional publications. Without Impactstory I would never have known that one of my slideshows is considered “highly viewed” and continues to be viewed every week.

Why did you decide to become an Advisor?

A coworker suggested it to me. At first I was uncertain and I found myself thinking “But my profile is so empty! I haven’t ‘published’ anything yet! This won’t work.” In the end I decided that it was an important thing to do as a campus advocate for open access and altmetrics. There are many people who will be in the same position as me, wondering if Impactstory is worth it when they have so little to showcase. All I can say is that I can’t wait to fill my Impactstory profile up.

How have you been spreading the word about Impactstory in your first two months as an Advisor?

There’s not a lot of activity on campus during the summer. Most of our students are gone and many researchers and faculty are away on vacation, field work, or attending conferences, so the majority of my time has been spent planning an altmetrics workshop for fall. The one thing I did do this summer was to set up the chair of one of my departments with a profile. Impactstory provided a nice way to start a conversation about faculty and department work that tends to be left out by traditional metrics (such as the materials that her department produces for ISU’s extension program). I don’t think she’s completely convinced about the value of altmetrics, but she was open to creating an account to see what it could do, and now she’s aware that there are other tools and measurements.

Once I got my Advisor package I visited other librarians in my department. We have a mix of faculty and academic professionals but everyone, no matter their rank, wanted one of the “I am more than my H-Index” stickers. I ran out within a week. The slogan speaks to everyone: no one wants to be judged solely on their citation numbers.

How has Impactstory’s new subscription model impacted your work as an Advisor?

A couple of my coworkers asked me about the change since I’m an Advisor. I spent a lot of time thinking about this and how it changed my feelings about Impactstory. After the initial knee-jerk reaction to having something “taken away”, I’ve come to the conclusion that it’s an acceptable change. The Paperpile blog post has already outlined many of the possible benefits, so I won’t repeat them here. The bottom line is I feel that I can recommend Impactstory because there’s nothing else like it.

Tell us about the workshops you’re planning on Impactstory for the Fall semester.

Iowa State University only began having conversations around open access with the launch of our institutional repository, Digital Repository @ Iowa State University, in 2012. While the University Library has been very proactive in helping faculty prepare for promotion and tenure cases, much of that support has revolved around those dreaded numbers: citations, Journal Impact Factor, and the h-index. The workshop I am designing will be an introduction to altmetrics with hands-on activities. It will likely end with all participants creating a trial Impactstory account, so that they get an altmetrics experience tailored just for them.

What’s the best part about your work as a Scholarly Communication Librarian at Iowa State University?

There are huge opportunities on this campus. If you’ve looked at my profile you’ll see that most of my recent work has been on data management planning. That really took off. We got support from the Office of the Vice President of Research, which is also sponsoring a panel discussion planned for Open Access Week, and from other campus units. Everyone is excited about the future of scholarly communications at Iowa State.

What advice would you give other librarians who want to do outreach on altmetrics to their colleagues and faculty?

I think it’s important to frame discussions about altmetrics as part of a larger picture. For example, NSF research grant proposals are judged on something called “broader impacts,” which, in brief, is “the potential of the proposed activity to benefit society and contribute to the achievement of specific, desired societal outcomes” (NSF Proposal Preparation Instructions). Altmetrics could give us some insight into whether a grant has met its broader impact goals. How many views did the grant-funded video receive? Was it picked up by a news outlet? Does anyone listen to the podcast? These types of activities aren’t captured any other way, but they are important. Altmetrics can show the reach of research beyond the academy, which is becoming increasingly important as research dollars are spread thinner and thinner.

Thanks, Megan!

As a token of our appreciation for Megan’s hard work, we’re sending her an Impactstory t-shirt of her choice from our Zazzle store.

Megan is just one part of a growing community of Impactstory Advisors. Want to join the ranks of some of the Web’s most cutting-edge researchers and librarians? Apply to be an Advisor today!

Impactstory Advisor of the Month: Keith Bradnam (July 2014)

Headshot of Keith Bradnam

Meet our Advisor of the Month for July, Keith Bradnam! Keith is an Associate Project Scientist with the Korf Lab at UC Davis and an active science communicator (read his blog, ACGT, and follow him on Twitter at @kbradnam).

Why is Keith our Advisor of the Month? Because he shared his strategies for success as a scientist at a well-attended Impactstory info session he organized at UC Davis earlier this month. Plus, he’s helping us to improve Impactstory every day, submitting bug reports and ideas for new features on our Feedback forum.

We recently emailed Keith to learn more about why he decided to become an Advisor, what made his recent workshop so great, and his thoughts on using blogging to become a more successful scientist.

Why did you initially decide to join Impactstory?

When I first heard about Impactstory, it just seemed like such an incredibly intuitive and useful concept. Publications should not be seen as the only form of scientific ‘output’, and having a simple way to gather together the different aspects of my academic life seemed like such a no-brainer.

In the past, I have worked in positions where I helped develop database resources for other scientists. These types of non-research positions often provide an opportunity for only one formal publication a year (e.g. a paper in the annual Nucleic Acids Research ‘Database’ issue). This is a really poor reflection of the contributions that many bioinformaticians (and web programmers, database administrators, etc.) make to the wider scientific community. In the past we didn’t have tools like GitHub to easily show the world what software we were helping to develop.

Why did you decide to become an Advisor?

Impactstory is a great service and the more people that get to know about it and use it, the better it will become. I want to be part of that process, particularly because I still think that there are many people who are stuck in the mindset that a CV or résumé is the only way to list what you have done in your career.

I’m really hopeful that tools like Impactstory will forever change how people assess the academic achievements of others.

How have you been spreading the word about Impactstory in your first month as an Advisor?

I’ve mainly been passing on useful tweets from the @Impactstory Twitter account and keeping an eye on the Impactstory Feedback Forums where I’ve been adding some suggestions of my own and replying to questions from others. Beyond that, I’ve evangelized about Impactstory to my lab, and I gave a talk on campus to Grad students and Postdocs earlier this month.

How did your workshop go?

Well, perhaps I’m biased 🙂 but I think it was well-received. There was a good mix of Grad students, Postdocs, and some other staff, and I think people were very receptive to hearing about the ways that Impactstory could be beneficial to them. They also asked lots of pertinent questions, which have led to some new feature requests for the Impactstory team to consider. [You can view a video of Keith’s presentation over at his blog.]

You run a great blog about bioinformatics–ACGT. Why do you blog, and would you recommend it to others?

Blogging is such an incredibly easy way to share useful information with your peers. Sometimes that information can be succinct, factual material (these are the steps that I took to install software ‘X’), sometimes it can be opinion or commentary (this is why I think software ‘X’ will change the world), and sometimes it can just be entertainment or fun (how I used software ‘X’ to propose to my wife).

I think we’re currently in a transition period where people no longer see ‘blogging’ as being an overly geeky activity. Instead, I think that many people now appreciate that blogging is just a simple tool for quickly disseminating information.

I particularly recommend blogging to scientists. Having trouble following a scientific protocol and need some help? Blog about it. Think you have made an improvement on an existing protocol? Blog about it. Have some interesting thoughts about a cool paper that you have just read? Blog about it. There are a million and one topics that will never be suitable for a formal peer-reviewed publication, but which would make fantastic ideas for a blog post.

Blogging may be beneficial for your career by increasing your visibility amongst your peers, but more importantly I think it really improves your writing skills and — depending on what you blog about — you are giving something back to the community.

What’s the best part about your current gig as an Associate Project Scientist with the Korf Lab at UC Davis?

I think that most people would agree that if you work on a campus where you get to walk past a herd of cows every day, then that’s pretty hard to beat! However the best part of my job is that I get to spend time mentoring others in the lab (students, not cows), and I like to think that I’m helping them become better scientists, and better communicators of science in particular.

Thanks, Keith!

As a token of our appreciation for Keith’s hard work, we’re sending him an Impactstory t-shirt of his choice from our Zazzle store.

Keith is just one part of a growing community of Impactstory Advisors. Want to join the ranks of some of the Web’s most cutting-edge researchers and librarians? Apply to be an Advisor today!

Open Science & Altmetrics Monthly Roundup (June 2014)

Don’t have time to stay on top of the most important Open Science and Altmetrics news? We’ve gathered the very best of the month in this post. Read on!

UK researchers speak out on assessment metrics

There are few issues more polarizing in academia right now than research assessment metrics. A few months back, the Higher Education Funding Council for England (HEFCE) asked researchers to submit their evidence and views on the issue, and to date many well-reasoned responses have been shared.

Some of the highlights include Ernesto Priego’s thoughtful look at the evidence for and against; this forceful critique of the practice, penned by Sabaratnam and Kirby; a call to accept free market forces “into the internal dynamics of academic knowledge production” by Steve Fuller; and this post by Stephen Curry, who shares his thoughts as a member of the review’s steering group.

Also worth a look is Digital Science’s “Evidence for excellence: has the signal overtaken the substance?”, which studies the unintended effects that past UK assessment initiatives have had on researchers’ publishing habits.

Though the HEFCE’s recommendations will mainly affect UK researchers, the steering group’s findings may set a precedent for academics worldwide.

Altmetrics researchers agree: we know how many, now we need to know why

Researchers gathered in Bloomington, Indiana on June 23 to share cutting-edge bibliometrics and altmetrics research at the ACM WebScience Altmetrics14 workshop.

Some of the highlights include a new study that finds that only 6% of articles that appear in Brazilian journals have 1 or more altmetrics (compared with ~20% of articles published in the “global North”); findings that use of Twitter to share scholarly articles grew by more than 90% from 2012 to 2013; a study that found that most sharing of research articles on Twitter occurs in original tweets, not retweets; and a discovery that more biomedical and “layman” terms appear in the titles of research shared on social media than in titles of highly-cited research articles.

Throughout the day, presenters repeatedly emphasized one point: high-quality qualitative research is now needed to understand what motivates individuals to share, bookmark, recommend, and cite research outputs. In other words, we increasingly know how many altmetrics research outputs tend to accumulate and what those metrics’ correlations are–now we need to know why research is shared on the social Web in the first place, and how those motivations influence various flavors of impact.

Librarians promoting altmetrics like never before

This month’s Impactstory blog post, “4 things every librarian should do with altmetrics,” has generated a lot of buzz and some great feedback from the library community. But it’s just one part of a month filled with librarians doin’ altmetrics!

To start with, College & Research Libraries News named altmetrics a research library trend for 2014, and based on the explosion of librarian-created presentations on altmetrics in the last 30 days alone, we’re inclined to agree! Plus, there were librarians repping altmetrics at AAUP’s Annual Meeting and the American Library Association Annual Meeting (here and here), and the Special Libraries Association Annual Meeting featured our co-founder, Heather Piwowar, in two great sessions and Impactstory board member John Wilbanks as the keynote speaker.

More Open Science & Altmetrics news

Stay connected

We share altmetrics and Open Science news as-it-happens on our Twitter, Google+, Facebook, or LinkedIn pages. And if you don’t want to miss next month’s news roundup, remember that you can sign up to get these posts and other Impactstory news delivered straight to your inbox.

4 things every librarian should do with altmetrics

Researchers are starting to use altmetrics to understand and promote their academic contributions. At the same time, administrators and funders are exploring them to evaluate researchers’ impact.

In light of these changes, how can you, as a librarian, stay relevant by supporting researchers’ fast-changing altmetrics needs?

In this post, we’ll give you four ways to stay relevant: staying up-to-date with the latest altmetrics research, experimenting with altmetrics tools, engaging in early altmetrics education and outreach, and defining what altmetrics mean to you as a librarian.

1. Know the literature

Faculty won’t come to you for help navigating the altmetrics landscape if they can tell you don’t know the area very well, will they?

To get familiar with discussions around altmetrics, start with the recent SPARC report on article-level metrics, this excellent overview that appeared in Serials Review (paywall), and the recent ASIS&T Bulletin special issue on altmetrics.

Then, check out this list of “17 Essential Altmetrics Resources” aimed at librarians, this recent article on collection development and altmetrics from Against the Grain, and presentations from Heather and Stacy on why it’s important for librarians to be involved in altmetrics discussions on their campuses.

There’s also a growing body of peer-reviewed research on altmetrics. One important concept from this literature is the idea of “impact flavors”–a way to understand distinctive patterns in the impacts of scholarly products.

For example, an article featured in mainstream media stories, blogged about, and downloaded by the public has a very different flavor of impact than a dataset heavily saved and discussed by scholars, which is in turn different from software that’s highly cited in research papers. Altmetrics can help researchers, funders, and administrators optimize for the mix of flavors that best fits their particular goals.

There have also been many studies on correlations (or lack thereof) between altmetrics and traditional citations. Some have shown that selected altmetrics sources (Mendeley in particular) are significantly correlated with citations (1, 2, 3), while other sources, like Facebook bookmarks, have only slight correlations with citations. These studies show that different types of altmetrics are capturing different types of impact, beyond just scholarly impact.

Other early touchstones include studies exploring the predictive potential of altmetrics, growing adoption of social media tools that inform altmetrics, and insights from article readership patterns.

But these are far from the only studies to be aware of! Stay abreast of new research by reading through the PLOS Altmetrics Collection, joining the Altmetrics Mendeley group, and following the #altmetrics hashtag on Twitter.

2. Know the tools

There are now several tools that allow scholars to collect and share the broad impact of their research portfolios.

In the same way you’d experiment with new features added to Web of Science, you can play around with altmetrics tools and add them to your bibliographic instruction repertoire (more on that in the following section). Familiarity will enable you to give easy demonstrations, discuss strengths and weaknesses, contribute to product development, and serve as a resource for campus scholars and administration.

Here are some of the most popular altmetrics tools:

Impactstory


If you’re reading this post, chances are that you’re already familiar with Impactstory, a nonprofit Web application supported by the Alfred P. Sloan Foundation and NSF.

If you’re a newcomer, here’s the scoop: scholars create a free Impactstory profile and then upload their articles, datasets, software, and other products using Google Scholar, ORCID, or lists of permanent identifiers like DOIs, PubMed IDs, and so on. Impactstory then gathers and reports altmetrics and traditional citations for each product. As shown above, metrics are displayed as percentiles relative to similar products. Profile data can be exported for further analysis, and users can receive alerts about new impacts.

Impactstory is built on open-source code, offers open data, and is free to use. Our robust community of users helps us think up new features and prioritize development via our Feedback forum; once you’re familiar with our site, we encourage you to sign up and start contributing, too!

PlumX

PlumX is another web application that displays metrics for a wide range of scholarly outputs. The metrics can be viewed and analyzed at any user-defined level, including the researcher, department, institution, journal, grant, and research topic levels. PlumX reports some metrics not offered by other altmetrics services, like WorldCat holdings and downloads and pageviews from some publishers, institutional repositories, and EBSCO databases. PlumX is developed and marketed by Plum Analytics, an EBSCO company.

The service is available via a subscription. Individuals who are curious can experiment with the free demo version.

Altmetric

The third tool that librarians should know about is Altmetric.com. Originally developed to provide altmetrics for publishers, the tool primarily tracks journal articles and ArXiv.org preprints. In recent years, the service has expanded to include a subscription-based institutional edition, aimed at university administrators.

Altmetric.com offers unique features, including the Altmetric score (a single-number summary of the attention an article has received online) and the Altmetric bookmarklet (a browser widget that allows you to look up altmetrics for any journal article or ArXiv.org preprint with a unique identifier). Sources tracked for mentions of articles include social and traditional media outlets from around the world, post-publication peer-review sites, reference managers like Mendeley, and public policy documents.

Librarians can get free access to the Altmetric Explorer and free services for institutional repositories. You can also request trial access to Altmetric for Institutions.

3. Integrate altmetrics into library outreach and education

Librarians are often asked to describe Open Access publishing choices to both faculty and students and teach how to gather evidence of impact for hiring, promotion, and tenure. These opportunities–whether one on one or in group settings like faculty meetings–can allow librarians to introduce altmetrics.

Discussing altmetrics in the context of Open Access publishing helps “sell” the benefits of OA: metrics like the download counts that appear in PLOS journals and institutional repositories highlight the reach of openly available work. They can also demonstrate that “impact” is tied more closely to an individual’s scholarship than to a journal’s impact factor.

Similarly, researchers often use an author’s h-index for hiring, tenure, and promotion, conflating the h-index with the quality of an individual’s work. Librarians are often asked to teach and provide assistance in calculating an h-index within various databases (Web of Science, Scopus, etc.). Integrating altmetrics into these instruction sessions is akin to providing researchers with additional primary resource choices on a research project. Librarians need to make researchers aware of the many tools they can use to evaluate the impact of scholarship, and of the relevant research–including the benefits of and drawbacks to different altmetrics.

So, what does altmetrics outreach look like on the ground? To start, check out these great presentations that librarians around the world have given on the benefits of using altmetrics (and particular altmetrics tools) in research and promotion.

Another great way to stay relevant on this subject is to find and recommend to your grad students and faculty readings on ways they can use altmetrics in their career, like this one from our blog on the benefits of including altmetrics on your CV.

4. Discover the benefits that altmetrics offer librarians

There are reasons to learn about altmetrics beyond serving faculty and students. A major one is that many librarians are scholars themselves, and can use altmetrics to better understand the diverse impact of their articles, presentations, and white papers. Consider putting altmetrics on your own CV, and advocating the use of altmetrics among library faculty who are assembling tenure and promotion packages.

Librarians also produce and support terabytes’ worth of scholarly content that’s intended for others’ use, usually in the form of digital special collections and institutional repository holdings. Altmetrics can help librarians understand the impacts of these non-traditional scholarly outputs, and provide hard evidence of their use beyond ‘hits’ and downloads–evidence that’s especially useful when making arguments for increased budgetary and administrative support.

It’s important that librarians explore the unique ways they can apply altmetrics to their own research and jobs, especially in light of recent initiatives to create recommended practices for the collection and use of altmetrics. What is useful to a computational biologist may not be useful for a librarian (and vice versa). Get to know the research and tools and figure out ways to use them to your own ends.

There’s a lot happening right now in the altmetrics space, and it can sometimes be overwhelming for librarians to keep up with and understand. By following the steps outlined above, you’ll be well positioned to inform and support researchers, administrators, and library decision makers in their use. And in doing so, you’ll be indispensable in this new era of web-native research.

Are you a librarian that’s using altmetrics? Share your experiences in the comments below!

This post has been adapted from the 2013 C&RL News article, “Riding the crest of the altmetrics wave: How librarians can help prepare faculty for the next generation of research impact metrics” by Lapinski, Piwowar, and Priem.

Ten reasons you should put altmetrics on your CV right now

If you don’t include altmetrics on your CV, you’re missing out in a big way.

There are many benefits to scholars and scholarship when altmetrics are embedded in a CV.

Altmetrics can:

  1. provide additional information;
  2. de-emphasize inappropriate metrics;
  3. uncover the impact of just-published work;
  4. legitimize all types of scholarly products;
  5. recognize diverse impact flavors;
  6. reward effective efforts to facilitate reuse;
  7. encourage a focus on public engagement;
  8. facilitate qualitative exploration;
  9. empower publication choice; and
  10. spur innovation in research evaluation.

In this post, we’ll detail why these benefits are important to your career, and also recommend the ways you should–and shouldn’t–include altmetrics in your CV.

1. Altmetrics provide additional information

The most obvious benefit of including altmetrics on a CV is that you’re providing more information than your CV’s readers already have.  Readers can still assess the CV items just as they’ve always done: based on title, journal and author list, and maybe–if they’re motivated–by reading or reviewing the research product itself. Altmetrics have the added benefit of allowing readers to dig into post-publication impact of your work.

2. Altmetrics de-emphasize inappropriate metrics

It’s generally regarded as poor form to evaluate an article based on a journal title or impact factor. Why? Because journal impact factors vary across fields, and an article often receives more or less attention than its journal container suggests.

But what else are readers of a CV to do? Most of us don’t have enough domain expertise to dig into each item and assess its merits based on a careful reading, even if we did have time. We need help, but traditional CVs don’t provide enough information to assess the work on anything but journal title.

Providing article-level citations and altmetrics in a CV gives readers more information, thereby de-emphasizing evaluation based on journal rank.

3. Altmetrics uncover the impact of just-published work

Why not suggest that we include citation counts in CVs, and leave it at that? Why go so far as altmetrics? The reason is that altmetrics have benefits that complement the weaknesses of a citation-based solution.

Timeliness is the most obvious benefit of altmetrics. Citations take years to accrue, which can be a problem for graduate students who are applying for jobs soon after publishing their first papers, and for promotion candidates whose most profound work is published only shortly before review.

Multiple research studies have found that counts of downloads, bookmarks and tweets correlate with citations, yet accrue much more quickly, often in weeks or months rather than years. Using timely metrics allows researchers to showcase the impact of their most recent work.

4. Altmetrics legitimize all types of scholarly products

How can readers of a CV know if your included dataset, software project, or technical report is any good?

You can’t judge its quality and impact based on the reputation of the journal that published it, since datasets and software aren’t published in journals. And even if they were, we wouldn’t want to promote the poor practice of judging the impact of an item by the impact of its container.

How, then, can alternative scholarly products be more than just space-filler on a CV?

The answer is product-level metrics. Like article-level metrics do for journal articles, product-level metrics provide the needed evidence to convince evaluators that a dataset or software package or white paper has made a difference. These types of products often make impacts in ways that aren’t captured by standard attribution mechanisms like citations. Altmetrics are key to communicating the full picture of how a product has influenced a field.

5. Altmetrics recognize diverse impact flavors

The impact of a research paper has a flavor. There are scholarly flavors (a great methods section bookmarked for later reference or controversial claims that change a field), public flavors (“sexy” research that captures the imagination or data from a paper that’s used in the classroom), and flavors that fall into the area in between (research that informs public policy or a paper that’s widely used in clinical practice).

We don’t yet know how many flavors of impact there are, but it would be a safe bet that scholarship and society need them all. The goal isn’t to compare flavors: one flavor isn’t objectively better than another. They each have to be appreciated on their own merits for the needs they meet.

To appreciate the impact flavor of items on a CV, we need to be able to tell the flavors apart. (Citations alone can’t tell us what kind of difference a research paper has made in the world. They are important, but not enough.) This is where altmetrics come in. By analyzing patterns in what people are reading, bookmarking, sharing, discussing, and citing online, we can start to figure out what kind – what flavor – of impact a research output is making.

More research is needed to understand the flavor palette, how to classify impact flavor and what it means. In the meantime, exposing raw information about downloads, shares, bookmarks and the like starts to give a peek into impact flavor beyond just citations.

6. Altmetrics reward efforts to facilitate reuse

Reusing research – for replication, follow-up studies and entirely new purposes – reduces waste and spurs innovation. But it does take a bit of work to make your research reusable, and that work should be recognized using altmetrics.

There are a number of ways authors can make their research easier to reuse. They can make article text available for free with broad reuse rights. They can choose to publish in venues with liberal text-mining policies that invest in disseminating machine-friendly versions of articles and figures.

Authors can write detailed descriptions of their methods, materials, datasets and software and make them openly available for reuse. They can even go further, experimenting with executable papers, versioned papers, open peer review, semantic markup and so on.

When these additional steps result in increased reuse, it will likely be reflected in downloads, bookmarks, discussions and possibly citations. Including altmetrics in CVs will reward investigators who have invested their time to make their research reusable, and will encourage others to do so in the future.

7. Altmetrics can encourage a focus on public engagement

The research community, as well as society as a whole, benefits when research results are discussed outside the Ivory Tower. Engaging the public is essential for future funding, recruitment and accountability.

Today, however, researchers have little incentive to engage in outreach or make their research accessible to the public. By highlighting evidence of public engagement like tweets, blog posts and mainstream media coverage, altmetrics on a CV can reward researchers who choose to invest in public engagement activities.

8. Altmetrics facilitate qualitative exploration

Including altmetrics in a CV isn’t all about the numbers! Just as we hope many people who skim our CVs will stop to read our papers and explore our software packages, so too we can hope that interested parties will click through to explore the details of altmetrics engagement for themselves.

Who is discussing an article? What are they saying? Who has bookmarked a dataset? What are they using it for? As we discuss at the end of this post, including provenance information is crucial for trustworthy altmetrics. It also provides great information that helps CV readers move beyond the numbers and jump into qualitative exploration of impact.

9. Altmetrics empower publication choice

Publishing in a new or innovative journal can be risky. Many authors are hesitant to publish their best work somewhere new or with a relatively low impact factor. Altmetrics can remedy this by highlighting work based on its post-publication impact, rather than the title of the journal it was published in. Authors will be empowered to choose publication venues they feel are most appropriate, leveling the playing field for what might otherwise be considered risky choices.

Successful publishing innovators will also benefit. New journals won’t have to wait two years to get an impact factor before they can compete. Publishing venues that increase access and reuse will be particularly attractive. This change will spur innovation and support the many publishing options that have recently debuted, such as eLife, PeerJ, F1000 Research and others.

10. Altmetrics spur innovation in research evaluation

Finally, including altmetrics on CVs will engage researchers directly in research evaluation. Researchers are evaluated all the time, but often behind closed doors, using data and tools they don’t have access to. Encouraging researchers to tell their own impact stories on their CVs, using broad sources of data, will help spur a much-needed conversation about how research evaluation is done and should be done in the future.

OK, so how can you do it right?

There can be risks to including altmetrics data on a CV, particularly if the data is presented or interpreted without due care or common sense.

Altmetrics data should be presented in a way that is accurate, auditable and meaningful:

  • Accurate data is up-to-date, well-described, and has been filtered to remove attempts at deceitful gaming.
  • Auditable data implies completely open and transparent calculation formulas for aggregation, navigable links to original sources, and access by anyone without a subscription.
  • Meaningful data needs context and reference. Categorizing online activity into an engagement framework helps readers understand the metrics without becoming overwhelmed. Reference is also crucial. How many tweets is a lot? What percentage of papers are cited in Wikipedia? Representing raw counts as statistically rigorous percentiles, localized to domain or type of product, makes it easy to interpret the data responsibly (a percentile calculation is sketched below).
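
To make the percentile idea concrete, here is a minimal sketch; the reference counts in it are invented for illustration, and in practice the reference set would be drawn from products of the same type, age, and discipline:

from bisect import bisect_right

def percentile(count, reference_counts):
    # Percent of reference products whose count is at or below the given count
    ranked = sorted(reference_counts)
    return 100.0 * bisect_right(ranked, count) / len(ranked)

# Invented tweet counts for a reference set of comparable papers
reference = [0, 0, 1, 2, 2, 3, 5, 8, 13, 40]

print(percentile(4, reference))   # 60.0: tweeted as much as or more than 60% of the reference set

Saying “more tweeted than 60% of similar papers” is far easier to interpret responsibly than a raw count of four tweets.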

Assuming these presentation requirements are met, how should the data be interpreted? We strongly recommend that altmetrics be considered not as a replacement for careful expert evaluation but as a supplement. Because they are still in their infancy, we should view altmetrics as way to ground subjective assessment in real data; a way to start conversations, not end them.

Given this approach, at least three varieties of interpretation are appropriate: signaling, highlighting and discovery. A CV with altmetrics clearly signals that a scholar is abreast of innovations in scholarly communication and serious about communicating the impact of scholarship in meaningful ways. Altmetrics can also be used to highlight research products that might otherwise go unnoticed: a highly downloaded dataset or a track record of F1000-reviewed papers suggests work worthy of a second look. Finally, as we described above, auditable altmetrics data can be used by evaluators as a jumping off point for discovery about who is interested in the research, what they are doing with it, and how they are using it.

How to Get Started

How can you add altmetrics to your own CV or, if you are a librarian, empower scholars to add altmetrics to theirs?

Start by experimenting with altmetrics for yourself. Play with the tools, explore and suggest improvements. Librarians can also spread the word on their campuses and beyond through writing, teaching and outreach. Finally, if you’re in a position to hire, promote, or review grant applications, explicitly welcome diverse evidence of impact when you solicit CVs.

What are your thoughts on using altmetrics on a CV? Would you welcome them as a reviewer, or choose to ignore them? Tell us in the comments section below.

This post has been adapted from “The Power of Altmetrics on a CV,” which appeared in the April/May 2013 issue of ASIS&T Bulletin.

Impactstory Advisor of the Month: Jon Tennant (June 2014)

Jon Tennant (blog, Twitter), a PhD candidate studying tetrapod biodiversity and extinction at Imperial College London, was one of the first scientists to join our recently launched Advisor program.

Within minutes of receiving his acceptance into the program, Jon was pounding the virtual pavement to let others know about Impactstory and the benefits it brings to scientists. For this reason–and the fact that Jon has done some cool stuff in addition to his research, like write a children’s book!–Jon’s our first Impactstory Advisor of the Month.

We chatted with Jon to learn more about how he uses Impactstory, what it’s like being an Advisor, and what he’s doing in other areas of his professional life.

Why did you initially decide to create an Impactstory profile?

A couple of years ago, I immersed myself in social media and the whole concept of ‘Web 2.0’. It was clear that the internet was capable of changing many aspects of the way in which we practice, communicate, and assess scientific research. There were so many tools though, and so much diversity, that it was all a bit daunting, especially as someone so junior in their career. Although I guess that’s one of the advantages of being at this stage – I wasn’t tied down to any particular way of ‘doing science’ yet, and was free to experiment.

Having followed the discussions on alternative and article-level metrics, when ImpactStory was released it seemed like a tool that could really make a difference for myself and the broader research community. At the time, it made no sense to me how the outputs of research were assessed – the name or the impact factor of a journal was given far too much meaning, and did nothing to really encapsulate the diversity of ways in which quality or impact, or putative pathways to impact, could be measured. ImpactStory seemed to offer a decent alternative, and hey look – it does! Actually, it’s not an alternative, but a complementary tool for a range of methods of assessing how research is used.

Why did you decide to become an Advisor?

Pretty much for the reasons above! One thing I’m learning as a young scientist is that it’s easy to be part of an echo chamber on social media, advocating altmetrics and all the jazzy new aspects of research, but many scientists aren’t online. Getting those people involved in conversations, and alerting them to cool new tools is made a lot easier as an Advisor.

I reckon this type of community engagement is pretty important, especially in what appears to be such a crucial transitional phase for researchers, including things like open access and data, and the way in which research is assessed (e.g., through the REF here in the UK). ImpactStory obviously has a role in making this much easier for academics.

How have you been spreading the word about Impactstory in your first month as an Advisor?

Mostly sharing stickers! They actually work really well in getting people’s attention. They’re doubly useful when people ask things like “What’s an h-index?”, so you can use them as a basis for further discussion. But yeah, I don’t really go out of my way to preach to people about altmetrics and ImpactStory – academics really don’t like being told what they should be doing, especially at my university. I prefer to kind of hang back, wait for discussions, and mention that things like altmetrics exist, and could be really useful when combined with things like a social media presence, or an ORCID, and that they are one of an integrated set of tools that can be really useful for assessing how your research is being used, as well as a kind of personal tracking device. I’d love to hold an ImpactStory/altmetrics Q and A or workshop at some point in the future.

You just wrote a children’s book about dinosaurs–tell us about it!

Let it be known that you brought this up, not me 😉

So, pretty much just by having a social media presence (mostly through blogging), I was asked to write a kids’ book on dinosaurs! Of course I said yes, and along with a talented artist, we created a book with pop-out dinosaurs that you can reconstruct into your very own little models! You can pre-order it here.* I think it’s out in October in the UK and USA. Is there an ImpactStory bit for that…? [ed: Not yet! Perhaps add it as a feature request on our Feedback forum? :)]

* (I don’t get royalties, so it’s not as bad promoting it…)

What’s the best part about your current gig as a PhD student at Imperial College London?

The freedom. I have an excellent supervisor who is happy to let me blog, tweet, attend science communication conferences and a whole range of activities that are complementary to my PhD, as long as the research gets done. So there’s a real diversity of things to do, and being in London there’s always something science-related going on, and there’s a great community vibe too, with people who work within the broader scope of science always coming together and interacting. Of course, the research itself is amazing – I work with a completely open database called the Palaeobiology Database/Fossilworks, where even the methods are open so anyone can play with science if they wish!

Thanks, Jon!

Jon is just one of a growing community of Impactstory Advisors. Want to join the ranks of some of the Web’s most cutting-edge researchers and librarians? Apply to be an Advisor today!

Open Science & Altmetrics Monthly Roundup (April 2014)

Don’t have time to stay on top of the most important Open Science and Altmetrics news? We’ve gathered the very best of the month in this post. Read on!

Funding agencies denying payments to scientists in violation of Open Access mandates

Want to actually get paid from those grants you won? If you haven’t made publications about your grant-funded research Open Access, it’s possible you could be in violation of funders’ public access mandates–and may lose funding because of it.

Richard Van Noorden of Nature News reports,

The London-based Wellcome Trust says that it has withheld grant payments on 63 occasions in the past year because papers resulting from the funding were not open access. And the NIH…says that it has delayed some continuing grant awards since July 2013 because of non-compliance with open-access policies, although the agency does not know the exact numbers.

Post-enforcement, compliance rates increased 14% at the Wellcome Trust and 7% at the NIH. However, both are still some way from full compliance with the mandates.

And that’s not the only shakeup happening in the UK: the higher ed funding bodies warned researchers that any article or conference paper accepted after April 1, 2016 that doesn’t comply with their Open Access policy can’t be used for the UK Research Excellence Framework, by which universities’ worthiness to receive funding is determined.

That means institutions now have a big incentive to make sure their researchers are following the rules–if their researchers are found out of compliance, the institutions’ funding will be in jeopardy.

Post-publication peer review getting a lot of comments

Post-publication peer review via social media was the topic of Dr. Zen Faulkes’ “The Vacuum Shouts Back” editorial, published in Neuron earlier this month. In it, he points out:

Postpublication peer review can’t do the entire job of filtering the scientific literature right now; it’s too far from being a standard practice….[it’s] an extraordinarily valuable addition to, not a substitute for, the familiar peer review process that journals use before publication. My model is one of continuous evaluation: “filter, publish, and keep filtering.”

So what does that filtering look like? Comments on journal and funder websites, publisher-hosted social networks, and post-pub peer review websites, to start with. But Faulkes argues that “none of these efforts to formalize and centralize postpublication peer review have come close to the effectiveness of social media.” To learn why, check out his article on Neuron’s website.

New evidence supports Faulkes’ claim that post-publication peer review via social media can be very effective. A study by Paul S. Brookes, published this month in PeerJ, found that post-publication peer review on blogs makes corrections to the literature an astounding eight times as likely to happen as corrections reported to journal editors in the traditional (private) manner.

For more on post-publication peer review, check out this classic Frontiers in Computational Neuroscience special issue, Tim Gowers’ influential blog post, “How might we get to a new model of mathematical publishing?,” or Faculty of 1000 Prime, the highly respected post-pub peer review platform.

Recent altmetrics-related studies of interest

  • Scholarly blog mentions relate to later citations: A recent study published in JASIST (green OA version here) found that mentions of articles on scholarly blogs correlate to later citations.

  • What disciplines have the highest presence of altmetrics? Hint: it’s not the ones you think. It turns out that a higher percentage of humanities and social science articles have altmetrics than articles in the biomedical and life sciences. The researchers also found that only 7% of all papers indexed in Web of Science had Altmetric.com data.

  • Video abstracts lead to more readers: For articles in the New Journal of Physics, video abstract views correlate to increased article usage counts, according to a study published this month in the Journal of Librarianship and Scholarly Communication.

New data sources available for Impactstory & Altmetric.com

New data sources include the post-publication peer review sites Publons and PubPeer, and the microblogging site Sina Weibo (the “Chinese Twitter”). Since we get data from Altmetric, that means Impactstory will be reporting this data soon, too!

And another highly requested data source will be opening up in the near future: Zotero. The Sloan Foundation has backed research and development that will eventually help Zotero, the open source reference management software, build “a preliminary public API that returns anonymous readership counts when fed universal identifiers (e.g. ISBN, DOI).” So, some day soon, we’ll be able to report Zotero readership information alongside Mendeley stats in your profile–a feature that many of you have been asking us about for a long time.

Altmetric.com offering new badges

Altmetric.com founder Euan Adie announced that, for those who want to de-emphasize numeric scores on content, the famous “donut” badges will now be available sans Altmetric score–a change heralded by many in the altmetrics research community as a welcome step away from “one score to rule them all.”

Must-read blog posts about ORCID and megajournals

We’ve been on a tear publishing about innovations in Open Science and altmetrics on the Impactstory blog. Here are two of our most popular posts for the month:

Stay connected

Do you blog on altmetrics or Open Science and want to share your posts with us? Let us know on our Twitter, Google+, Facebook, or LinkedIn pages. We might just feature your work in next month’s roundup!

And if you don’t want to miss next month’s news, remember that you can sign up to get these posts and other Impactstory news delivered straight to your inbox.