How do we know Unpaywall won’t be acquired?

Reposted with minor editing from a response Jason gave on the Global Open Access mailing list, July 12 2018.

We’re often asked: How do we know Unpaywall won’t be acquired? What makes Unpaywall (and the company behind it, Impactstory) different from Bepress, SSRN, Mendeley, Publons, Kopernio, etc.?

How can we be sure you won’t be bought by someone whose values don’t align with open science?

There are no credible guarantees I can offer that this won’t happen, nor can any other organization offer them. However, I think stability in the values and governance of Impactstory is a relatively safe bet. Here’s why (note: I’m not a lawyer and the below isn’t legal advice, obvs):

We’re incorporated as a 501(c)3 nonprofit. This was not true of recently-acquired open science platforms like Mendeley, SSRN, and Bepress, which were all for-profits. We think that’s fine…the world needs for-profits. But we sure weren’t surprised when any of them were acquired. These are for-profit companies, which means they are, er:

For: Profit.  

Legally, their purpose is profit. They may benefit the world in many additional ways,  but their officers and board have a fiduciary duty to deliver a return to investors.

Our officers and board, on the other hand, have a legal fiduciary duty to fulfill our nonprofit mission, even where this doesn’t make much money. I think instead of “nonprofit” it should be called for-mission. Mission is the goal. That can be a big difference.  Jefferson Pooley did a great job articulating the value of the nonprofit structure for scholcomm organizations in more detail in a much-discussed LSE Impact post last year.

All that said, I’m not going to sit here and tell you nonprofits can’t be acquired…cos although that may be technically true, nonprofits can still be, in all-but-name, acquired. It’s just less common and harder.

So we also like to emphasize that the source code for the projects we’re doing is open. That means that for any given project, its main asset–the code that makes it work–is available for free to anyone who wants it. This makes us much less of an acquisition target. Why buy the cow when the code is free, as it were.

As a 501(c)3 nonprofit, we have a board of directors that helps keep us accountable and provides leadership to the organization as well. Past board members have included Cameron Neylon and John Wilbanks, and the current board is me, Heather, Ethan White, and Heather Joseph. Heather, Ethan, John, and Cameron have each contributed mightily to the Open cause, in ways that would take me much longer than I have to fully chronicle (and most of you probably know anyway). We’re incredibly proud to have (and have had) them tirelessly working to help Impactstory stay on the right course. We think they are people who can be trusted.

Finally, and y’all can make up your own minds about this, I like to think our team has built up some credibility in the space. Heather and I have both been working entirely on open-source, open science projects for the last ten years, and most of that work’s pretty easy to find if you want to check it out. In that time, it’s safe to assume we’ve turned down some better-paying projects that aligned less closely with the open science mission.

So, being acquired?  Not in our future.  But growth sure is, through grants and partnerships and customer relationships and lots of hard work… all in the service of making scholcomm more open.  Stay tuned 🙂

oaDOI integrated into the SFX link resolver

We’re thrilled to announce that oaDOI is now available for integration with the SFX link resolver. SFX, like other OpenURL link resolvers, makes sure that when library users click a link to a scholarly article, they are directed to a copy the library subscribes to, so they can read it.

But of course, sometimes the library doesn’t subscribe. This is where oaDOI comes to the rescue. We check our database of over 80 million articles to see if there’s a Green Open Access version of that article somewhere. If we find one, the user gets directed there so they can read it. Adding oaDOI to SFX is like adding ten million open-access articles to a library’s holdings, and it results in a lot more happy users and a lot more readers finding full text instead of paywalls. Which is kind of our thing.
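For the technically curious, here’s roughly what that lookup looks like from a client’s point of view. This is a minimal sketch against the oaDOI REST API (it assumes the v2 endpoint, the email parameter, and the best_oa_location response field; the SFX integration itself handles all of this behind the scenes):

```python
import requests

def find_oa_url(doi, email="you@example.edu"):
    """Ask oaDOI whether a free-to-read copy of this DOI is known.

    Minimal sketch: look up the DOI and, if an open-access location
    exists, return its URL so the reader can be sent there instead of
    hitting a paywall.
    """
    resp = requests.get(f"https://api.oadoi.org/v2/{doi}", params={"email": email})
    resp.raise_for_status()
    best = resp.json().get("best_oa_location")  # None if no OA copy is known
    return best.get("url") if best else None

# Hypothetical DOI for illustration; a real call returns either a
# repository/publisher URL or None (in which case the link resolver
# falls back to the library's usual options).
print(find_oa_url("10.1234/example.doi"))
```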

The best part is, it’s super easy to set up, and of course completely free. Since SFX is used today by over 2,000 institutions, we’re really excited about how big a difference this can make.

Edited March 28, 2017: there are now over 600 libraries worldwide using the oaDOI integration, and we’re handling over a million requests for fulltext every day.

 

Impactstory Advisor of the Month: Chris Chan (January 2015)

Photograph of Chris Chan

The first Impactstory Advisor of the Month for 2015 is Chris Chan, Head of Information Services at Hong Kong Baptist University Library.

We interviewed Chris to learn more about his crucial role in implementing ORCID identifiers for HKBU faculty, and also why he’s chosen to be an Impactstory Advisor. Below, he also describes his vision for the role librarians can play in bringing emerging scholarly communication technologies to campus–a vision with which we wholeheartedly agree!

Tell us a bit about your role as the Head of Information Services at the Hong Kong Baptist University Library.

My major responsibilities at HKBU Library include overseeing our instruction and reference services, and advising the senior management team on the future development and direction of these services. I’m fortunate to work with a great team of librarians and paraprofessionals, and never tire of providing information literacy instruction and research help to our students and faculty.

Scholarly communication is a growing part of my duties as well. As part of its strategic plan, the Library is exploring how it can better support the research culture at the University. One initiative that has arisen from this strategic focus is our Research Visibility Project, for which I am the coordinator.

Why did you initially decide to join Impactstory?

Scholarly communication and bibliometrics have been of great interest to me ever since I first encountered them as a newly-minted academic librarian. Furthermore, the strategic direction that the Library is taking has made keeping up to date with the latest developments in this area a must for our librarians.

When I came across Impactstory, I was struck by how useful and relatively straightforward it was (even in that early incarnation) at presenting multiple altmetrics in an attractive, easy-to-understand way. At the time, I had just been discussing with some of our humanities faculty how poorly served they were by traditional citation metrics. I saw immediately in Impactstory one way that this issue could be addressed.

Why did you decide to become an Advisor?

As mentioned above, in the past year or so I have become heavily involved in our scholarly communication efforts. When the call for applications to be an Advisor came out, I saw it as an opportunity to get the inside scoop on one of the tools that I am most enthusiastic about.

What’s your favorite Impactstory feature?

I would have to say that my favourite feature is the ability to add an ORCID iD to the Impactstory profile! More on why that is below.

You’ve been hard at work recently implementing ORCID at HKBU. (I especially like this video tutorial you produced!) How do you envision the library working in the future to support HKBU researchers using ORCID and other scholarly communication technologies?

Academic libraries around the world are re-positioning themselves to ensure that their collections and services remain relevant to their members. The scholarly communication environment is incredibly dynamic, and I think that librarians have an opportunity to provide tremendous value to our institutions by serving as guides to, and organizers of, emerging scholarly communication technologies.

Our ORCID initiative at HKBU is a good example of this. We have focused heavily on communicating the benefits of having an ORCID iD and how, in the long run, it will streamline research workflows and ensure scholars receive proper credit for their work. Another guiding principle has been to make adoption as painless as possible for our faculty. They will be able to create an ORCID iD, connect it with our system, and automatically populate it with their last five years of research output (painstakingly checked for accuracy by our team), all in just a few minutes.

I believe that as information professionals, librarians are well-positioned to take on such roles. Also, in contrast to some of our more traditional responsibilities, these services bring us into close contact with faculty, raising the visibility of librarians on campus. These new relationships could open doors to further collaborations on campus.

Thanks, Chris!

As a token of our appreciation for Chris’s outreach efforts, we’re sending him an Impactstory travel mug from our Zazzle store.

Chris is just one part of a growing community of Impactstory Advisors. Want to join the ranks of some of the Web’s most cutting-edge researchers and librarians? Apply to be an Advisor today!

Our 2014 predictions for altmetrics: what we nailed and what we missed

Back in February, we wagered some bets about how the altmetrics landscape would evolve throughout 2014. As you might expect, we got some right, and some wrong.

Let’s take a look back at how altmetrics as a field fared over the last year, through the framework of our 2014 predictions.

More complex modelling

“We’ll see more network-awareness (who tweeted or cited your paper? how authoritative are they?), more context mining (is your work cited from methods or discussion sections?), more visualization (show me a picture of all my impacts this month), more digestion (are there three or four dimensions that can represent my “scientific personality?”), more composite indices (maybe high Mendeley plus low Facebook is likely to be cited later, but high on both not so much).”

Visualizations were indeed big this year: we debuted our new Maps feature, which tells you where in the world your work has been viewed, bookmarked, or tweeted about. We also added “New this week” indicators to the Metrics page on your profile.

And both PlumX and Altmetric.com introduced new visualizations this year, too: the cool-looking “Plum Print” and the Altmetric bar visualization.

Impactstory also launched a new, network-aware feature that shows you which Twitter users gave you the most exposure when they tweeted your work. And we debuted your profile’s Fans page, which tells you who’s talking about your work and how often, exactly what they’re saying, and how many followers they have.

And a step forward in context mining has come from the recently launched CRediT taxonomy. The taxonomy allows researchers to describe how co-authors on a paper have contributed–whether by creating the study’s methodology, cleaning and maintaining data, or in any of twelve other ways. The taxonomy will soon be piloted by publishers, funders, and other scholarly communication organizations like ORCID.

As for other instances of network-awareness, context mining, digestion, and composite indices? Most of the progress in these areas came from altmetrics researchers. Here are some highlights:

  • This study on ‘semantometrics’ posits that more effective means of determining impact can be found by looking at the full-text of documents, and by measuring the interdisciplinarity of the papers and the articles they cite.

  • A study on the size of research teams since 1900 found that a larger (and more diverse) number of collaborators generally leads to more impactful work (as measured in citations).

  • This preprint determined that around 9% of tweets about arXiv.org publications come from bots, not humans–which may have big implications for how scholars use and interpret altmetrics.

  • A study showed that papers tagged on F1000 as being “good for teaching” tend to have higher instances of Facebook and Twitter metrics–types of metrics long assumed to relate more to “public” impacts.

  • A study published in JASIST (green OA version here) found that mentions of articles on scholarly blogs correlate to later citations.

Growing interest from administrators and funders

“So in 2014, we’ll see several grant, hiring, and T&P guidelines suggest applicants include altmetrics when relevant.”

Several high-profile announcements from funding agencies confirmed that altmetrics was a hot topic in 2014. In June, the Autism Speaks charity announced that they’d begun using PlumX to track the scholarly and social impacts of the studies they fund. And in December, the Wellcome Trust published an article describing how they use altmetrics in a similar manner.

Are funders and institutions explicitly suggesting that researchers include altmetrics in their applications, when relevant? Not as often as we had hoped. But a positive step in this direction has come from the NIH, which released a new biosketch format that asks applicants to list their most important publications or non-publication research outputs. It also prompts scientists to articulate why they consider those outputs to be important.

The NIH has said that by moving to this new biosketch format, it “will help reviewers evaluate you not by where you’ve published or how many times, but instead by what you’ve accomplished.” We applaud this move, and hope that other funders adopt similar policies in 2015.

Empowered scientists

“As scientists use tools like Impactstory to gather, analyze, and share their own stories, comprehensive metrics become a way for them to articulate more textured, honest narratives of impact in decisive, authoritative terms. Altmetrics will give scientists growing opportunities to show they’re more than their h-indices.”

We’re happy to report that this prediction came true. This year, we’ve heard from more scientists and librarians than ever before, all of whom have used altmetrics data in their tenure dossiers, grant applications and reports, and in annual reviews. And in one-on-one conversations, early career researchers are telling us how important altmetrics are for showcasing the impacts of their research when applying for jobs.

We expect that as more scientists become familiar with altmetrics in the coming year, we’ll see even more empowered scientists using their altmetrics to advance their careers.

Openness

“Since metrics are qualitatively more valuable when we verify, share, remix, and build on them, we see continued progress toward making both  traditional and novel metrics more open. But closedness still offers quick monetization, and so we’ll see continued tension here.”

This is one area where we weren’t exactly wrong, but we weren’t 100% correct, either. Everything stayed more or less the same with regard to openness in 2014: Impactstory continued to make our data available via open API, as did Altmetric.com.

We hope that our prediction will come true in 2015, as the increased drive towards open science and open access puts pressure on those metrics providers that haven’t yet “opened up.”

Acquisitions by the old guard

“In 2014 we’ll likely see more high-profile altmetrics acquisitions, as established megacorps attempt to hedge their bets against industry-destabilizing change.”

2014 didn’t see any acquisitions per se, but publishing behemoth Elsevier made three announcements that hint that the company may be positioning itself for such acquisitions soon: a call for altmetrics research proposals, the hiring of prominent bibliometrician (and co-author of the Altmetrics Manifesto) Paul Groth to be the Disruptive Technology Director of Elsevier Labs, and the launch of  Axon, the company’s invitation-only startup network.

Where do you think altmetrics will go in 2015? Leave your predictions in the comments below.

What’s our impact? (November 2014)

As a platform for users interested in data, we want to share some stats about our successes (and challenges) in spreading the word about Impactstory.

Here are our outreach numbers for November 2014.

impactstory.org traffic

  • Visitors: 4,361 total; 2,754 unique
  • New Users: 247
  • Conversion rate: 8.9% (% of visitors who signed up for a trial account)

Blog stats

  • Unique visitors: 9,443 (31% growth from October)
  • Clickthrough rate: 0.75% (% of people who visited Impactstory.org from the blog)
  • Conversion rate: 19.7% (% of visitors to impactstory.org from blog who went on to sign up for a trial Impactstory account)
  • Percent of new user signups: 5.7%

Twitter stats

  • New followers in November: 318
  • Increase in followers from October: 6.7%
  • Mentions: 380 (We’re tracking this to answer the question, “How engaged are our followers?”)
  • Tweet reach: 840,174 (We’re tracking this–the number of people who potentially saw a tweet mentioning Impactstory or our blog–to understand our brand awareness)
  • Clickthroughs: 180
  • Conversions: 5

What does it all mean?

impactstory.org: Overall traffic to the site was down, consistent with patterns of use we’ve seen in years past. (An end-of-the-semester dip in traffic is common for academic sites.) Conversion rates on impactstory.org went slightly down from October. We’re confident that new landing pages and general homepage changes we make in the coming months will improve conversion rates.

Blog: November saw an increase in unique visitors (another month of double-digit growth!), but what does that mean for our organization? Conversion rates actually went down from October, as did the blog’s share of new user signups for Impactstory. This points to a need to share more Impactstory-related content on the blog, and experiment with unobtrusive sidebars, slide-ins, and other ways that can point people to our main website.

That said, blogging doesn’t always result in direct signups, nor is it meant to. The primary aim of blogging is to educate people about open science and altmetrics (as a non-profit, we’re big on advocacy). And it helps familiarize people with our organization, too, which can result in indirect signups (i.e., readers might come back later and sign up for Impactstory).

Twitter: Our Twitter followers and mentions increased from October by about 1.5% and 25%, respectively. We’ll aim to continue that growth throughout December. (After all, we’re active on Twitter for the same reason we blog–as a form of outreach and advocacy.) We also passed an exciting benchmark: 5,000 Twitter followers!

We’ll continue to blog our progress, while also thinking on ways to share this data in a more automated fashion. If you have questions or comments, we welcome them in the comments below.

Updated 12/31/2014 to fix error in reporting conversion rates of impactstory.org visitors from blog.

Impactstory Advisor of the Month: Lorena Barba (December 2014)

A photograph of Impactstory Advisor Lorena Barba

2014’s final Impactstory Advisor of the Month is Lorena Barba. Lorena is an associate professor of mechanical and aerospace engineering at the George Washington University in Washington, DC, and an advocate for open source, open science, and open education initiatives.

We recently interviewed Lorena to learn more about her lab’s Open Science manifesto, her research in computational methods in aeronautics and biophysics, and George Washington University’s first Massive Open Online Course, “Practical Numerical Methods with Python” (aka “Numerical MOOC”).

Tell us a bit about your research.

I have a PhD in Aeronautics from Caltech and I specialized in computational fluid dynamics. From that launching pad, I have veered dangerously into applied mathematics (working on what we call fast algorithms), supercomputing (which gets you into really techy stuff like cache-aware and memory-avoiding computations, high-throughput and many-core computing), and various application cases for computer simulation.

Fluid dynamics and aerodynamics are mature fields and it’s hard to make new contributions that have impact. So I look for new problems where we can use our skills as computational scientists to advance a field. That’s how we got into biophysics: there are models that apply to interactions of proteins that use electrostatic theory and can be solved computationally with methods similar to ones used in aeronautics, believe it or not.

We have been developing models and software to compute electrostatic interactions between bio-molecules, first, and between bio-molecules and nano-surfaces, more recently. Our goal is to contribute simulation power for aiding in the design of efficient biosensors. And going back to my original passion, aerodynamics, we found an area where there is still much to be discovered: the aerodynamics of flying and gliding animals (like flying snakes).

Why did you initially decide to join Impactstory?

For a long time, I’ve been thinking that science and scientists need to take control of their communication channels and use the web deliberately to convey and increase our impact. I have been sharing the research and educational products of my group online for years, and we have a consistent policy with regard to publication that includes, for example, always uploading a preprint to the arXiv repository at the time of submitting a paper for publication. If a journal does not have an arXiv-friendly policy, we don’t submit there and look for another appropriate journal. We have been uploading data sets, figures, posters and other research objects to the figshare repository since its beginning, and I’m also a figshare advisor.

Impactstory became part of my communications and impact arsenal immediately, because it aggregates links, views and mentions of our products. And with the latest profile changes, it also offers an elegant online presence.

Why did you decide to become an Advisor?

So many of my colleagues are apathetic to the dire control they put in the hands of for-profit publishers, and simply accept the status quo. I want to be an agent of change in regards to how we measure and communicate the importance of what we do. Part of it is simply being willing to do it yourself, and show by example how these new tools can work for us.

What’s your favorite Impactstory feature?

The automatic aggregation of research objects using my various online IDs, like ORCID, Google Scholar and GitHub. The map is pretty cool, too!

You’ve done a lot to “open up” education in computational methods to the public, in particular via your Numerical MOOC and Youtube video lectures. What have been your biggest successes and challenges in getting these courses online and accessible to all?

In my opinion, the biggest success is doing these things at the grassroots level, with hardly any funding (I had some seed funding for #numericalmooc, but none of the previous efforts had any) or institutional involvement. When I think of how the university, in each case, has been involved in my open education efforts, the most appropriate way to characterize it is that they have let me do what I wanted to do, staying out of the way. There have not been technologists or instructional designers or any of that involved; I just did it all myself.

The biggest challenge? Resources, I guess—time and money. My scarcest resource is time, and when I work to create open educational resources, I’m stealing time away from research. This gets me disapproving looks, thoughts and comments from my peers. Why am I spending time in open education? “This won’t get you promoted.” SIGH. As for money, I raised some funds for #numericalmooc, but it’s not a lot: merely to cover the course platform and the salary of my teaching assistants. Funding efforts in open education—as an independent educator, rather than a Silicon Valley start-up—is really tough.

Thanks, Lorena!

As a token of our appreciation for Lorena’s outreach efforts, we’re sending her an Impactstory item of her choice from our Zazzle store.

Lorena is just one part of a growing community of Impactstory Advisors. Want to join the ranks of some of the Web’s most cutting-edge researchers and librarians? Apply to be an Advisor today!

Why Nature’s “SciShare” experiment is bad for altmetrics

Early last week, Nature Publishing Group announced that 49 titles on Nature.com will be made free to read for the next year. They’re calling this experiment “SciShare” on social media; we’ll use the term as a shorthand for their initiative throughout this post.

Some have credited Nature for taking an incremental step towards embracing Open Access. Others have criticized the company for diluting true Open Access and encouraging scientists to share DRM-crippled PDFs.

As staunch Open Access advocates ourselves, we agree with our board member John Wilbanks: this ain’t OA. “Open” means open to anyone, including laypeople searching Google, who don’t have access to Nature’s Magic URL. “Open” also means open for all types of reuse, including tools to mine and build next-generation value from the scholarly literature.

But there’s another interesting angle here, beyond the OA issue: this move has real implications for the altmetrics landscape. Since we live and breathe altmetrics here at Impactstory, we thought it’d be a great time to raise some of these issues.

Some smart people have asked, “Is SciShare an attempt by Nature to ‘game’ their altmetrics?” That is, is SciShare an attempt to force readers to view content on Nature.com, thereby increasing total pageview statistics for the company and their authors?

Postdoc Ross Mounce explains:

If [SciShare] converts some dark social sharing of PDFs into public, trackable, traceable sharing of research via non-dark social means (e.g. Twitter, Facebook, Google+ …) this will increase the altmetrics of Nature relative to other journals and that may in-turn be something that benefits Altmetric.com [a company in which Macmillan, Nature’s parent company, is an investor].

No matter Nature’s motivations, SciShare, as it’s implemented now, will have some unexpected negative effects on researchers’ ability to track altmetrics for their work. Below, we describe why, and point to some ways that Nature could improve their SciShare technology to better meet researchers’ needs.

How SciShare works

SciShare is powered by ReadCube, a reference manager and article rental platform that’s funded by Macmillan via their science start-up investment imprint, Digital Science.

Researchers with subscription access to an article on Nature.com copy and paste a special, shortened URL (e.g. http://rdcu.be/bKwJ) into email, Twitter, or anywhere else on the Web.

Readers who click on the link are directed to a version of the article that they can freely read and annotate in their browser, thanks to ReadCube. Readers cannot download, print, or copy from the ReadCube PDF.

The ReadCube-shortened URL resolves to a Nature-branded, hashed URL that looks like this:

[Screenshot: the resolved, Nature.com-hosted hashed URL]

The resolved URL doesn’t include a DOI or other permanent identifier.

In the ReadCube interface, users who click on the “Share” icon see a panel that includes a summary of Altmetric.com powered altmetrics (seen here in the lower left corner of the screen):

[Screenshot: the ReadCube Share panel, with the Altmetric.com summary in the lower left corner]

The ReadCube-based Altmetric.com metrics do not include pageview numbers. And because ReadCube doesn’t work with assistive technology like screen readers, it also can’t capture the small portion of traffic that visually impaired readers might account for.

That said, the potential for tracking new, ReadCube-powered metrics is interesting. ReadCube allows annotations and highlighting of content, and could potentially report both raw numbers and also describe the contents of the annotations themselves.

The number of redirects from the ReadCube-branded, shortened URLs could also be illuminating, especially when reported alongside direct traffic to the Nature.com-hosted version of the article. (Such numbers could provide hard evidence of the proportion of OA vs. toll-access use of Nature journal articles.) And sources of Web traffic give a lot of context to the raw pageview numbers, as we’ve seen from publishers like PeerJ:

[Screenshot: PeerJ’s breakdown of referral sources for an article]

After all, referrals from Reddit usually means something very different than referrals from PubMed.

Digital Science’s Timo Hannay hints that Nature will eventually report download metrics for their authors. There’s no indication as to whether Nature intends to disclose any of the potential altmetrics described above, however.

So, now that we know how SciShare works and the basics of how they’ve integrated altmetrics, let’s talk about the bigger picture. What does SciShare mean for researcher’s altmetrics?

How will SciShare affect researchers’ altmetrics?

Let’s start with the good stuff.

Nature authors will probably reap a big benefit thanks to SciShare: they’ll likely have higher pageview counts for the Nature.com-hosted version of their articles.

Another positive aspect of SciShare is that it provides easy access to Altmetric.com data. That’s a big win in a world where not all researchers are aware of altmetrics. Thanks to ReadCube’s integration of Altmetric.com, now more researchers can find their article’s impact metrics. (We’re also pleased that Altmetric.com will get a boost in visibility. We’re big fans of their platform, as well as customers–Impactstory’s Twitter data comes from Altmetric.com).

SciShare’s also been implemented in such a way that the ReadCube DRM technology doesn’t affect researchers’ ability to bookmark SciShare’d articles on reference managers like Mendeley. Quick tests of the Pocket and Delicious bookmarking services also seem to work well. That means that social bookmarking counts for an author’s work will likely not decline. (I point this out because when I attempted to bookmark a ReadCube.com-hosted article using my Mendeley browser bookmarklet on Thursday, Dec. 4th, I was prevented from doing so, and actually redirected to a ReadCube advertisement. I’m glad to say this no longer seems to be true.)

Those are the good things. But there are also a few issues to be concerned about.

SciShare makes your research metrics harder to track

The premise of SciShare is that you’ll no longer copy and paste an article’s URL when sharing content. Instead, they encourage you to share the ReadCube-shortened URL. That can be a problem.

In general, URLs are difficult to track: they contain weird characters that sometimes break altmetrics aggregators’ search systems, and they go dead often. In fact, there’s no guarantee that these links will be live past the next 12 months, when the SciShare pilot is set to end.

Moreover, neither the ReadCube URL nor the long, hashed, Nature.com-hosted URL that it resolves to contains the article’s DOI. DOIs are one of the main ways that altmetrics tracking services like ours at Impactstory can find mentions of your work online. They’re also preferable to use when sharing links because they’ll always resolve to the right place.

So what SciShare essentially does is introduce two new messy URLs that will be shared online and that have a high likelihood of breaking in the future. That means there’s a bigger potential for messier data to appear in altmetrics reports.
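To make that concrete, here’s a toy sketch of how an aggregator might pull identifiers out of tweets or blog posts. A DOI can be matched with a simple pattern and attributed to an article unambiguously; a ReadCube short link or a hashed Nature.com URL matches nothing and has to be resolved (if it still resolves at all) before anyone knows which article it refers to. The regex and the sample posts are illustrative only, not what any particular aggregator actually uses:

```python
import re

# Simplified DOI pattern; production aggregators use more robust matching.
DOI_PATTERN = re.compile(r'10\.\d{4,9}/[^\s"<>]+')

posts = [
    "Great paper! http://dx.doi.org/10.1234/example.doi",  # hypothetical DOI link
    "Free to read: http://rdcu.be/bKwJ",                   # ReadCube short link
    "http://www.nature.com/articles/someHashedToken",      # stand-in for a hashed, DOI-less URL
]

for text in posts:
    match = DOI_PATTERN.search(text)
    if match:
        print("Attributable to DOI:", match.group())
    else:
        # No DOI: the link has to be followed and scraped, and it may
        # simply break once the pilot ends.
        print("No DOI found in:", text)
```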

SciShare’s metrics aren’t as detailed as they could be

The Altmetric.com-powered altmetrics that ReadCube exposes are fantastic, but they lack two important metrics that other data providers expose: citations and pageviews.

On a standard article page on Nature.com, there’s an Article Metrics tab. The Metrics page includes not only Altmetric.com data but also citation counts from CrossRef, Web of Science, and Scopus, plus pageview counts. And on completely separate systems like Impactstory.org and PlumX, still more citation data is exposed, sourced from Wikipedia and PubMed. (We’d provide pageview data if we could. But that’s currently not possible. More on that in a minute.)

ReadCube’s deployment of Altmetric.com data also decontextualizes articles’ metrics. They have chosen only to show the summary view of the metrics, with a link out to the full Altmetric.com report:

[Screenshot: the summary-only Altmetric.com view shown in ReadCube]

Compare that to what’s available on Nature.com, where the Metrics page showcases the Altmetric.com summary metrics plus Altmetric.com-sourced Context statements (“This article is in the 98th percentile compared to articles published in the same journal”), snippets of news articles and blog posts that mention the article, a graph of the growth in pageviews over time, and a map that points to where your work was shared internationally:

[Screenshot: the full Metrics page on Nature.com, with context statements, mention snippets, a pageview graph, and a map]

More data and more context are very valuable to have when presenting metrics. So, we think this is a missed opportunity for the SciShare pilot.

SciShare isn’t interoperable with all altmetrics systems

Let’s assume that the SciShare experiment results in a boom in traffic to your article on Nature.com. What can you do with those pageview metrics?

Nature.com–like most publishers–doesn’t share its pageview metrics via API. That means you have to manually look up, then copy and paste, those numbers each time you want to record them. Not an insurmountable barrier to data reuse, but still–it’s a pain.

Compare that to PLOS. They freely share article view and download data via API, so you can easily import those numbers to your profile on Impactstory or PlumX, or export them to your lab website, or parse them into your CV, and so on. (Oh, the things you can do with open altmetrics data!)
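As a rough illustration, pulling those numbers programmatically might look something like the sketch below. It assumes the Lagotto-based ALM endpoint and response shape PLOS exposed at the time (alm.plos.org, the v5 API, an `ids` parameter, and a free API key); check the ALM documentation for exact details before relying on it:

```python
import requests

# Assumed endpoint and parameters for PLOS's Lagotto-based ALM API;
# verify against the current documentation.
ALM_ENDPOINT = "http://alm.plos.org/api/v5/articles"

def plos_usage(doi, api_key="YOUR_API_KEY"):
    resp = requests.get(
        ALM_ENDPOINT,
        params={"ids": doi, "api_key": api_key, "info": "summary"},
    )
    resp.raise_for_status()
    article = resp.json()["data"][0]
    # "viewed" is the aggregate usage bucket in the summary output.
    return article.get("viewed")

print(plos_usage("10.1371/journal.pone.0000000"))  # placeholder DOI
```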

You also cannot use the ReadCube or hashed URLs to embed the article full-text into your Impactstory profile or share it on ResearchGate, meaning that it’s as difficult as ever to share the publisher’s version of your paper in an automated fashion. It’s also unclear whether the “personal use” restriction on SciShare links means that researchers will be prohibited from saving links publicly on Delicious, posting them to their websites, and so on.

How to improve SciShare to benefit altmetrics

We want to reiterate that we think that SciShare’s great for our friends at Altmetric.com, due to their integration with ReadCube. And the greater visibility that their integration brings to altmetrics overall is important.

That said, there’s a lot that Nature can do to improve SciShare for altmetrics. The biggest and most obvious idea is to do away with SciShare altogether and simply make their entire catalogue Open Access. But it looks like Nature (discouragingly) is not ready to do this, and we’re realists. So, what can Nature do to improve matters?

  • Open up their pageview metrics via API to make it easier for researchers to reuse their impact metrics however they want
  • Release ReadCube resolution, referral traffic and annotation metrics via API, adding new metrics that can tell us more about how content is being shared and what readers have to say about articles
  • Add more context to the altmetrics data they display, so viewers have a better sense of what the numbers actually mean
  • Do away with hashed URLs and link shorteners, especially the latter, which make it difficult to track all mentions of an article on social media

We’re hopeful that SciShare overall is an incremental step towards full OA for Nature. And we’ll be watching how the SciShare pilot changes over time, especially with respect to altmetrics.

Update: Digital Science reports that the ReadCube implementation has been tested to ensure compatibility with most screen readers.

Is ResearchGate’s new DOI feature a game-changer?

ResearchGate is academia’s most popular social network, with good reason. While some decry the platform for questionable user recruitment tactics, others love to use it to freely share their articles, write post-publication peer reviews, and pose questions to other researchers in their area.

ResearchGate quietly launched a feature recently, one that we think could be a big deal. It may have huge upsides for research–especially for tracking altmetrics for your work–but it also highlights how some of the problems of scholarly communication aren’t easily solved, especially when digital persistence is involved.

The feature in question? ResearchGate is now generating DOIs for content. And that’s started to generate interesting conversations among those in the know.

Here’s why: DOIs are unique, persistent identifiers that publishers and repositories issue for their content, with the understanding that URLs break all the time. A preservation strategy is expected when one starts issuing DOIs, and yet ResearchGate hasn’t announced one, nor has DataCite (which issues ResearchGate’s DOIs).

Some other interesting questions: what happens when users decide to delete content, or leave the site altogether? Will ResearchGate force content to remain online, or allow DOIs to redirect to broken URLs?

And what if a publication already has a DOI? ResearchGate does prompt users to provide a DOI if one is available, but there are no automated checks (as far as we can tell). That may leave room for omission or error. And a DOI that potentially can resolve to more than one place will introduce confusion for those searching for an article.

As a librarian, I’m also curious about the implications for repositories. IRs’ main selling point is digital persistence and preservation. So, if ResearchGate does indeed have a preservation policy in place, repositories may have lost their edge.

We’ll be watching future developments with interest. There’s great potential here, and how ResearchGate grows and matures this feature in the future will likely have an influence on how researchers share their work and, quite possibly, what it means to be a “publisher.”

Tracking the impacts of data – beyond citations

This post was originally published on the e-Science Community Blog, a great resource for data management librarians.

"How to find and use altmetrics for research data" text in front of a beaker filled with green liquid

How can you tell if data has been useful to other researchers?

Tracking how often data has been cited (and by whom) is one way, but data citations only tell part of the story, part of the time. (The part that gets published in academic journals, if and when those data are cited correctly.) What about the impact that data has elsewhere?

We’re now able to mine the Web for evidence of diverse impacts (bookmarks, shares, discussions, citations, and so on) for diverse scholarly outputs, including data sets. And that’s great news, because it means that we now can track who’s reusing our data, and how.

All of this is still fairly new, however, which means that you likely need a primer on data metrics beyond citations. So, here you go.

In this post, I’ll give an overview of the different types of data metrics (including citations and altmetrics), the “flavors” of data impact, and specific examples of data metric indicators.

What do data metrics look like?

There are two main types of data metrics: data citations and altmetrics for data. Each of these types of metrics is important for its own reasons, and each offers a way to understand different dimensions of impact.

Data citations

Much like traditional, publication-based citations, data citations are an attempt to track data’s influence and reuse in scholarly literature.

The reason why we want to track scholarly data influence and reuse? Because “rewards” in academia are traditionally counted in the form of formal citations to works, printed in the reference list of a publication.

Data is often cited in two ways: by citing the data package directly (often by pointing to where the data is hosted in a repository), and by citing a “data paper” that describes the dataset, functioning primarily as detailed metadata, and offering the added benefit of being in a format that’s much more appealing to many publishers.

In the rest of this post, I’m going to mostly focus on metrics other than citations, which are being written about extensively elsewhere. But first, here’s some basic information on data citations that can help you understand how data’s scholarly impacts can be tracked.

How data packages are cited

Much like how citations to publications differ depending on whether you’re using Chicago style or APA style formatting, citations to data tend to differ according to the community of practice and the recommended citation style of the repository that hosts the data. But there is a core set of minimum elements that should be included in a citation. Jon Kratz has compiled these “core elements” (as well as “common elements”) over on the DataPub blog. The core elements include:

  • Creator(s): Essential, of course, to publicly credit the researchers who did the work. One complication here is that datasets can have large (into the hundreds) numbers of authors, in which case an organizational name might be used.

  • Date: The year of publication or, occasionally, when the dataset was finalized.

  • Title: As is the case with articles, the title of a dataset should help the reader decide whether your dataset is potentially of interest. The title might contain the name of the organization responsible, or information such as the date range covered.

  • Publisher: Many standards split the publisher into separate producer and distributor fields. Sometimes the physical location (City, State) of the organization is included.

  • Identifier: A Digital Object Identifier (DOI), Archival Resource Key (ARK), or other unique and unambiguous label for the dataset.

Arguably the most important principle? The use of a persistent identifier like a DOI, ARK, or Handle. Persistent identifiers are important for two reasons: even if the data’s URL changes, others will still be able to access it; and PIDs give citation aggregators like the Data Citation Index and Impactstory.org an easy, unambiguous way to parse out “mentions” in online forums and journals.
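Put together, a citation built from those core elements might look something like this (a made-up example; the exact element order and punctuation vary by repository and citation style):

Smith J, Lee R (2014): Photosynthesis rates in alpine grasses, 2005–2012. Example Data Repository. https://doi.org/10.1234/exampledata.5678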

It’s worth noting, however, that as few as 25% of journal articles formally cite the data they use. (Sad, considering that so many major publishers have signed on to FORCE11’s data citation principles, which include the need to cite data packages in the same manner as publications.) Instead, many scholars reference data packages in their Methods sections, forgoing formal citations and making text mining necessary to retrieve mentions of those data.

How to track citations to data packages

When you want to track citations to your data packages, the best option is the Data Citation Index. The DCI functions similarly to Web of Science. If your institution has a subscription, you can search the Index for citations that occur in the literature that reference data from a number of well-known repositories, including ICPSR, ANDS, and PANGEA.

Here’s how: log in to the DCI, then head to the home screen. In the Search box, type in your name or the dataset’s DOI. Find the dataset in the search results, then click on it to be taken to the item record page. On the item record, find and click the “Create Citation Alert” button on the right-hand side of the page, where you’ll also find a list of articles that reference that dataset. Now you have a list of the articles that reference your data to date, and you’ll also receive automated email alerts whenever someone new references your data.

Another option comes from CrossRef Search. This experimental search tool works for any dataset that has a DataCite DOI and is referenced in the scholarly literature that’s indexed by CrossRef. (DataCite issues DOIs for Figshare, Dryad, and a number of other repositories.) Right now, the search is a very rough one: you’ll need to view the entire list of DOIs, then use your browser’s find function (Ctrl+F or Command+F) to check the list for your specific DOI. It’s not perfect–in fact, sometimes it’s entirely broken–but it does provide a view into your data citations not entirely available elsewhere.

How data papers are cited

Data papers tend to be cited like any other paper: by recording the authors, title, journal of publication, and any other information that’s required by the citation style you’re using. Data papers are also often cited using permanent identifiers like DOIs, which are assigned by publishers.

How to find citations for data papers

To find citations to data papers, search databases like Scopus and Web of Science like you’d search for any traditional publication. Here’s how to track citations in Scopus and Web of Science.

There’s no guarantee that your data paper is included in their databases, though, since data paper journals are still a niche publication type in some fields and thus aren’t tracked by some major databases. You’d be smart to follow up your database search with a Google Scholar search, too.

Altmetrics for data

Citations are good for tracking the impact of your data in the scholarly literature, but what about other types of impact, among other audiences like the public and practitioners?

Altmetrics are indicators of the reuse, discussion, sharing, and other interactions humans can have with a scholarly object. These interactions tend to leave traces on the scholarly web.

Altmetrics are so broadly defined that they include pretty much any type of indicator sourced from a web service. For the purposes of this post, we’ll separate out citations from our definition of altmetrics, but note that many altmetrics aggregators tend to include citation data.

There are two main types of altmetrics for data: repository-sourced metrics (which often measure not only researchers’ impacts, but also repositories’ and curators’ impacts), and social web metrics (which more often measure other scholars’ and the public’s use and other interactions with data).

First, let’s discuss the nuts and bolts of data altmetrics. Then, we’ll talk about services you can use to find altmetrics for data.

Altmetrics for how data is used on the social web

Data packages can be shared, discussed, bookmarked, viewed, and reused using many of the same services that researchers use for journal articles: blogs, Twitter, social bookmarking sites like Mendeley and CiteULike, and so on. There are also a number of services that are specific to data, and these tend to be repositories with altmetric “indicators” particular to that platform.

For an in-depth look into data metrics and altmetrics, I recommend reading Costas et al.’s report, “The Value of Research Data” (2013). Below, I’ve created a basic chart of various altmetrics for data and what they can likely tell us about the use of data.

Quick caveat: there’s been little research done into altmetrics for data. (DataONE, PLOS, and California Digital Library are in fact the first organizations to do major work in this area, and they were recently awarded a grant to do proper research that will likely confirm or negate much of the below list. Keep an eye out for future news from them.) The metrics and their meanings listed below are, at best, estimations based on experience with both research data and altmetrics.

Repository- and publisher-based indicators

Note that some of the repositories below are primarily used for software, but can sometimes be used to host data, as well.

| Web Service | Indicator | What it might tell us | Reported on |
| --- | --- | --- | --- |
| GitHub | Stars | Akin to “favoriting” a tweet or underlining a favorite passage in a book, GitHub stars may indicate that someone who has viewed your dataset wants to remember it for later reference. | GitHub, Impactstory |
| GitHub | Watched repositories | A user is interested enough in your dataset (stored in a “repository” on GitHub) that they want to be informed of any updates. | GitHub, PlumX |
| GitHub | Forks | A user has adapted your code for their own uses, meaning they likely find it useful or interesting. | GitHub, Impactstory, PlumX |
| SourceForge | Ratings & Recommendations | What do others think of your data? And do they like it enough to recommend it to others? | SourceForge, PlumX |
| Dryad, Figshare, and most institutional and subject repositories | Views & Downloads | Is there interest in your work, such that others are searching for and viewing descriptions of it? And are they interested enough to download it for further examination and possible future use? | Dryad, Figshare, and IR platforms; Impactstory (for Dryad & Figshare); PlumX (for Dryad, Figshare, and some IRs) |
| Figshare | Shares | Implicit endorsement. Do others like your data enough to share it with others? | Figshare, Impactstory, PlumX |
| PLOS | Supplemental data views, figure views | Are readers of your article interested in the underlying data? | PLOS, Impactstory, PlumX |
| Bitbucket | Watchers | A user is interested enough in your dataset that they want to be informed of any updates. | Bitbucket |

Social web-based indicators

| Web Service | Indicator | What it might tell us | Reported on |
| --- | --- | --- | --- |
| Twitter | Tweets that include links to your product | Others are discussing your data–maybe for good reasons, maybe for bad ones. (You’ll have to read the tweets to find out.) | PlumX, Altmetric.com, Impactstory |
| Delicious, CiteULike, Mendeley | Bookmarks | Bookmarks may indicate that someone who has viewed your dataset wants to remember it for later reference. Mendeley bookmarks may be an indicator of later citations (similar to articles). | Impactstory, PlumX; Altmetric.com (CiteULike & Mendeley only) |
| Wikipedia | Mentions (sometimes also called “citations”) | Do others think your data is relevant enough to include it in Wikipedia articles? | Impactstory, PlumX |
| ResearchBlogging, Science Seeker | Blog post mentions | Is your data being discussed in your community? | Altmetric.com, PlumX, Impactstory |

How to find altmetrics for data packages and papers

Aside from looking at each platform that offers altmetrics indicators, consider using an aggregator, which will compile them from across the web. Most altmetrics aggregators can track altmetrics for any dataset that’s either got a DOI or is included in a repository that’s connected to the aggregator. Each aggregator tracks slightly different metrics, as we discussed above. For a full list of metrics, visit each aggregator’s site.

Impactstory easily tracks altmetrics for data uploaded to Figshare, GitHub, Dryad, and PLOS journals. Connect your Impactstory account to Figshare and GitHub and it will auto-import the products stored there and find altmetrics for them. To find metrics for Dryad datasets and PLOS supplementary data, provide DOIs when adding products one by one to your profile, and the associated altmetrics will be imported. Here’s an example of what altmetrics for a dataset stored on Dryad look like on Impactstory.

PlumX tracks similar metrics, and offers the added benefit of tracking altmetrics for data stored in institutional repositories as well. If your university subscribes to PlumX, contact the PlumX team about getting your data included in your researcher profile. Here’s what altmetrics for a dataset stored on Figshare look like on PlumX.

Altmetric.com can track metrics for any dataset that has a DOI or Handle. To track metrics for your dataset, you’ll either need an institutional subscription to Altmetric or the Altmetric bookmarklet, which you can use when on the item page for your dataset on a website like Figshare or in your institutional repository. Here’s what altmetrics for a dataset stored on Figshare look like on Altmetric.com.
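If you’d rather script it, Altmetric.com’s public, rate-limited API can be queried by DOI. The sketch below assumes the v1 /doi/ endpoint and a few of its typical response fields; a 404 simply means no attention has been tracked for that DOI yet:

```python
import requests

def altmetric_summary(doi):
    """Fetch Altmetric.com's public attention summary for a DOI.

    Assumes the v1 /doi/ endpoint and its usual field names; treat this
    as a sketch rather than a definitive client.
    """
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")
    if resp.status_code == 404:
        return None  # no online attention recorded for this DOI yet
    resp.raise_for_status()
    data = resp.json()
    return {
        "tweeters": data.get("cited_by_tweeters_count", 0),
        "blogs": data.get("cited_by_feeds_count", 0),
        "score": data.get("score"),
    }

print(altmetric_summary("10.1234/exampledata.5678"))  # hypothetical dataset DOI
```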

Flavors of data impact

While scholarly impact is very important, it’s far from the only type of impact one’s research can have. Both data citations and altmetrics can be useful in illustrating these flavors. Take the following scenarios for example.

Useful for teaching

What if your field notebook data was used to teach undergraduates how to use and maintain their own field notebooks, and to collect data with them? Or if a longitudinal dataset you created was used to help graduate students learn the programming language R? These examples are fairly common in practice, and yet they’re often not counted when considering impacts. Potential impact metrics could include full-text mentions in syllabi, views and downloads in Open Educational Resource repositories, and GitHub forks.

Reuse for new discoveries

Researcher, open data advocate, and Impactstory co-founder Heather Piwowar once noted, “the potential benefits of data sharing are impressive:  less money spent on duplicate data collection, reduced fraud, diverse contributions, better tuned methods, training, and tools, and more efficient and effective research progress.” If those outcomes aren’t indicative of impact, I don’t know what is! Potential impact metrics could include data citations in the scholarly literature, GitHub forks, and blog post and Wikipedia mentions.

Curator-related metrics

Could a view-to-download ratio be an indicator of how well a dataset has been described and how usable a repository’s UI is? Or of the overall appropriateness of the dataset for inclusion in the repository? Weber et al (2013) recently proposed a number of indicators that could get at these and other curatorial impacts upon research data, indicators that are closely related to previously-proposed indicators by Ingwersen and Chavan (2011) at the GBIF repository. Potential impact metrics could include those proposed by Weber et al and Ingwersen & Chavan, as well as a repository-based view-to-download ratio.

Ultimately, more research is needed into altmetrics for datasets before these flavors–and others–are accurately captured.

Now that you know about data metrics, how will you use them?

Some options include: in grant applications, your tenure and promotion dossier, and to demonstrate the impacts of your repository to administrators and funders. I’d love to talk more about this on Twitter or in the comments below.

Recommended reading

  • Piwowar HA, Vision TJ. (2013) Data reuse and the open data citation advantage. PeerJ 1:e175 doi: 10.7717/peerj.175

  • CODATA-ICSTI Task Group. (2013). Out of Cite, Out of Mind: The current state of practice, policy, and technology for the citation of data [report]. doi:10.2481/dsj.OSOM13-043

  • Costas, R., Meijer, I., Zahedi, Z., & Wouters, P. (2013). The Value of research data: Metrics for datasets from a cultural and technical point of view. Copenhagen, Denmark. Knowledge Exchange. www.knowledge-exchange.info/datametrics

The Right Metrics for Generation Open: a guide to getting credit for Open Science

You’re not getting all the credit you should be for your research.

As an early career researcher, you’re likely publishing open access journal articles, sharing your research data and software code on GitHub, posting slides and figures on Slideshare and Figshare, and “opening up” your research in many other ways.

Yet these Open Science products and their impacts (on other scholars, the public, policymakers, and other stakeholders) are rarely mentioned when applying for jobs, tenure and promotion, and grants.

The traditional means of sharing your impact–citation counts–don’t meet the needs of today’s researchers. What you and the rest of Generation Open need is altmetrics.

In this post, I’ll describe what altmetrics are and the types of altmetrics you can expect to receive as someone who practices Open Science. We’ll also cover real life examples of scientists who used altmetrics to get grants and tenure–and how you can do the same.

Altmetrics 101

Altmetrics measure the attention your scholarly work receives online, from a variety of audiences.

As a scientist, you create research data, analyses, research narratives, and scholarly conversations on a daily basis. Altmetrics–measures of use sourced from the social web–can account for the uses of all of these varied output types.

Nearly everything that can be measured online has the potential to be an altmetric indicator. Here are just a few examples of the types of information that can be tracked for research articles alone:

|             | scholarly            | public         |
| ----------- | -------------------- | -------------- |
| recommended | Faculty of 1000      | popular press  |
| cited       | traditional citation | Wikipedia      |
| discussed   | scholarly blogs      | blogs, Twitter |
| saved       | Mendeley, CiteULike  | Delicious      |
| read        | PDF views            | HTML views     |

When you add research software, data, slides, posters, and other scholarly outputs to the equation, the list of metrics you can use to understand the reception to your work grows exponentially.

And altmetrics can also help you understand the interest in your work from those both inside and outside of the Ivory Tower. For example, what are members of the public saying about your climate change research? How has it affected the decisions and debates among policy makers? Has it led to the adoption of new technologies in the private sector?

The days when your research only mattered to other academics are gone. And with them also goes the idea that there’s only one type of impact.

Flavors of impact

There are many flavors of impact that altmetrics can illuminate for you, beyond the traditional scholarly impact that’s measured by citations.

This 2012 study was the first to showcase the concept of flavors of impact via altmetrics. These flavors are found by examining the correlations between different altmetric indicators: how does a Mendeley bookmark correlate with a citation, or with a Facebook share? (And so on.) What can groups of correlations tell us about the uses of scholarship?

Among the flavors the researchers identified were a “popular hit” flavor (where scholarship is highly tweeted and shared on Facebook, but not seen much on scholarly sites like Mendeley or in citations) and an “expert pick” flavor (evidenced by F1000 Prime ratings and later citations, but few social shares or mentions). Lutz Bornmann’s 2014 study built upon that work, documenting that articles that are tagged on F1000 Prime as being “good for teaching” had more shares on Twitter–uncovering possible uses among educational audiences.

The correlation that’s on everyone’s mind? How do social media (and other indicators) correlate with citations? Mendeley bookmarks show the strongest correlations with citations; this points to Mendeley’s potential as a leading indicator (that is, if something is bookmarked on Mendeley today, it’s got a better chance of being cited down the road than something that’s not bookmarked).
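If you’re curious how this kind of flavor-finding works mechanically, here’s a small illustrative sketch (not the method of the studies above): it computes Spearman rank correlations across a handful of hypothetical per-article indicator counts using pandas.

import pandas as pd

# Hypothetical per-article indicator counts; the studies cited above work with
# thousands of real articles pulled from altmetrics aggregators.
articles = pd.DataFrame({
    "citations":          [12, 3, 0, 25, 7, 1],
    "mendeley_bookmarks": [30, 8, 2, 60, 15, 3],
    "tweets":             [5, 40, 90, 10, 2, 55],
    "facebook_shares":    [1, 22, 35, 4, 0, 18],
})

# Spearman (rank) correlation is a common choice for skewed count data like this.
# Clusters of indicators that rise and fall together hint at distinct "flavors".
print(articles.corr(method="spearman").round(2))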

Correlations with citations aren’t the only correlations we should pay attention to, though. They only tell one part of an impact story–an important part, to be sure, but not the only part.

Altmetrics data includes qualitative data, too

Many don’t realize that altmetrics data isn’t only about the numbers. An important function of altmetrics aggregators like Altmetric.com and Impactstory (which we describe in more detail below) is to gather qualitative data from across the web into a single place, making it easy to read exactly what others are saying about your scholarship. Altmetric.com does this by including snippets of the blogs, tweets, and other mentions your work receives online. Impactstory links out to the data providers themselves, allowing you to more easily find and read the full-length mentions from across the web.

Altmetrics for Open Science

Now that you have an understanding of how altmetrics work in general, let’s talk about how they work for you as an Open Scientist. Below, we’ve listed some of the basic metrics you can expect to see on the scholarship that you make Open Access. We’ll discuss how to find these metrics in the next section.

Metrics for all products

Any scholarly object that’s got a URL or another persistent identifier like a DOI–which, if you’re practicing Open Science, should be all of your outputs–can be shared and discussed online.

So, for any of your scholarly outputs that have been discussed online, you can expect to find Twitter mentions, blog posts and blog comments, Facebook and Google+ shares and comments, mainstream media mentions, and Wikipedia mentions.
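If you like to tinker, many of these mention counts can be pulled programmatically. Here’s a small sketch against Altmetric.com’s free public DOI endpoint (rate-limited, and the field names reflect the API response as I understand it at the time of writing); the example DOI is the Piwowar & Vision PeerJ paper mentioned earlier in this post series.

import requests

# Query Altmetric.com's public, rate-limited endpoint for a single DOI.
doi = "10.7717/peerj.175"
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")

if resp.status_code == 200:
    data = resp.json()
    print("tweets:", data.get("cited_by_tweeters_count", 0))
    print("blog posts:", data.get("cited_by_feeds_count", 0))
    print("Facebook posts:", data.get("cited_by_fbwalls_count", 0))
    print("Wikipedia pages:", data.get("cited_by_wikipedia_count", 0))
elif resp.status_code == 404:
    print("Altmetric hasn't recorded any attention for this DOI yet.")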

Open Access Publications

Your open access publications will likely accrue citations just as your publications in subscription journals do, with two key differences: you can track citations to work that isn’t formally published (but has instead been shared on a preprint server like arXiv or a similar repository), and you can track citations that appear in the non-peer-reviewed literature. Citation indices like Scopus and Web of Science can help you track the former; Google Scholar is a good way to find citations in the non-peer-reviewed literature.
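The indices above are easiest to check through their own interfaces, but if you want a quick programmatic look at citation counts, Crossref’s free REST API (a different source than those named above, with its own coverage) reports a rough cited-by count for any DOI it knows about. A minimal sketch:

import requests

# Ask Crossref's REST API for metadata and its "is-referenced-by" count.
doi = "10.7717/peerj.175"  # example DOI: Piwowar & Vision (2013), PeerJ
resp = requests.get(f"https://api.crossref.org/works/{doi}")
resp.raise_for_status()
work = resp.json()["message"]
print("title:", work["title"][0])
print("Crossref citation count:", work.get("is-referenced-by-count", 0))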

Views and downloads can be found on some journal websites, and often on repositories–whether your university’s institutional repository, a subject repository like bioRxiv, or a general-purpose repository like Figshare.


Bookmarks on reference management services like Mendeley and CiteULike can give you a sense of how widely your work is being read, and by what audiences. Mendeley, in particular, offers excellent demographic information for publications bookmarked in the service.

Software & code

Software & code, like other non-paper scholarly products, are often shared on specialized platforms, and the metrics your work receives tend to be specific to the platform where it’s hosted.

SourceForge blazed the trail for software metrics by allowing others to review and rate code–useful, crowd-sourced quality indicators.

On GitHub, you can expect your work to receive forks (which signal adaptations of your code), stars (a bookmark or virtual fistbump that lets others tell you, “I like this”), pull requests (which can get at others’ engagement with your work, as well as the degree to which you tend to collaborate), and downloads (which may signal software installations or code use). One big advantage of using GitHub to share your code is that you can mint DOIs for your repositories (via integrations with services like Zenodo and Figshare), making it much easier to track mentions and shares of your code in the scholarly literature and across general purpose platforms, like those outlined above.
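If you’d like to collect these numbers yourself, GitHub’s public REST API exposes most of them. Here’s a minimal sketch; it uses GitHub’s own demo repository (octocat/Hello-World) as a stand-in for yours, and unauthenticated requests are rate-limited.

import requests

# Fetch repository-level indicators from the GitHub REST API.
owner, repo = "octocat", "Hello-World"
resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}",
                    headers={"Accept": "application/vnd.github+json"})
resp.raise_for_status()
info = resp.json()

print("stars:", info["stargazers_count"])
print("forks:", info["forks_count"])
print("open issues + pull requests:", info["open_issues_count"])

# Release asset download counts live on a separate endpoint.
releases = requests.get(f"https://api.github.com/repos/{owner}/{repo}/releases")
downloads = 0
if releases.ok:
    downloads = sum(asset["download_count"]
                    for rel in releases.json()
                    for asset in rel.get("assets", []))
print("release asset downloads:", downloads)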

Data

Data is often cited in one of two ways: citations to data packages (the dataset itself, stored on a website or in a repository) and citations to data papers (publications that describe the dataset in detail and link out to it). You can often track the former using an altmetrics aggregator (more on that in a moment) or the Data Citation Index, a database similar to Web of Science that indexes mentions of your dataset in the scholarly literature. Citations to data papers can sometimes be found in traditional citation indices like Scopus and Web of Science.

Interest in datasets can also be measured by tracking views and downloads. Often, these metrics are shared on repositories where datasets are stored.

Where data is shared on GitHub, forks and stars (described above) can give an indication of that data’s reuse.

More info on metrics for data can be found on my post for the e-Science Portal Blog, “Tracking the Impacts of Data–Beyond Citations”.

Videos

Many researchers create videos to summarize a study for generalist audiences; other times, videos are themselves a type of data.

YouTube tracks the most varied metrics: views, likes, dislikes, and comments are all reported. On Vimeo and other video sharing sites, likes and views are the most often reported metrics.
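For the curious, YouTube’s Data API (v3) will hand these numbers back programmatically. The sketch below uses placeholder values for the API key and video ID, which you’d replace with your own (a free key comes from the Google Cloud console).

import requests

# Placeholders: swap in your own API key and the ID of one of your videos.
API_KEY = "YOUR_API_KEY"
VIDEO_ID = "YOUR_VIDEO_ID"

resp = requests.get(
    "https://www.googleapis.com/youtube/v3/videos",
    params={"part": "statistics", "id": VIDEO_ID, "key": API_KEY},
)
items = resp.json().get("items", [])
if items:
    stats = items[0]["statistics"]
    print("views:", stats.get("viewCount"))
    print("likes:", stats.get("likeCount"))
    print("comments:", stats.get("commentCount"))
else:
    print("No video found; check the video ID and API key.")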

Slide decks & posters

Slide decks and posters are among the scholarly outputs that get the least amount of love. Once you’ve returned from a conference, you tend to shelve and forget about the poster that you (or your grad students) put hours’ worth of work into–and the same goes for the slide decks you use when presenting.

If you make these “forgotten” products available online, on the other hand, you can expect to see some of the following indicators of interest in your work: views, favorites (sometimes used as a bookmark, other times as a way of saying “job well done!”), downloads, comments, and embeds (which can show you how often–and by whom–your work is being shared and in some cases blogged about).

How to collect your metrics from across the Web

We just covered a heck of a lot of metrics, huh? Luckily, altmetrics aggregators are designed to collect these far-flung data points from across the web and deliver them to you in a single report.

There are three main independent altmetrics aggregators: Impactstory.org, PlumX, and Altmetric.com. Here’s the scoop:

  • Impactstory.org: we’re a non-profit altmetrics service that collects metrics for all scholarly outputs. Impactstory profiles are designed to meet the needs of individual scientists. We regularly introduce new features based on user demand. You can sign up for a 30-day free trial on our website; after that, subscriptions are $10/month or $60/year.

  • PlumX: a commercial service that is designed to meet the needs of administrators and funding agencies. Like Impactstory, PlumX also collects metrics for all scholarly outputs. PlumX boasts the largest data coverage of all altmetrics aggregators.

  • Altmetric.com: a commercial service that collects metrics primarily for publishers and institutions. Altmetric can track any scholarly output with a DOI, PubMed ID, arXiv ID, or Handle, but it handles publications best. Uniquely, it can find mentions of your scholarship in the mainstream media and in policy documents–two notoriously hard-to-mine sources.

Once you’ve collected your metrics from across the web, what do you do with them? We suggest experimenting with using them in your CV, year-end reporting, grant applications, and even tenure & promotion dossiers.

Skeptical? You needn’t be. An increasing number of scientists are using altmetrics for these purposes.

Researchers who have used altmetrics for tenure & grants

Each of the following researchers used altmetrics, alongside traditional metrics like citation counts and journal impact factors, to document the impact of their work.

Tenure: Dr. Steven Roberts, University of Washington

Steven is an Associate Professor in the School of Aquatic & Fishery Sciences at the University of Washington. He decided to use altmetrics data in his tenure dossier to two ends: to showcase his public engagement and to document interest in his work.

To showcase public engagement, Steven included this table in the Education and Outreach section of his dossier, illustrating the effects his various outreach channels (blog, Facebook, Flickr, etc) have had to date:

[Screenshot from Steven’s dossier: a table summarizing the reach of each of his outreach channels]

For evidence of the impact of specific products, he incorporated metrics into his CV like this:

[Screenshots from Steven’s CV: individual products listed alongside their metrics]

Steven’s bid for tenure was successful.

Want to see more? You can download Steven’s full tenure dossier here.

Tenure: Dr. Ahmed Moustafa, American University in Cairo

Ahmed’s an Associate Professor in the Department of Biology at American University in Cairo, Egypt.

He used altmetrics data in his tenure dossier in two interesting ways. First, he included a screenshot of his most important scholarly products, as they appear on his Impactstory profile, to summarize the overall impacts of his work:

[Screenshot: the featured products and metrics badges from Ahmed’s Impactstory profile]

Note the badges that summarize at a glance the relative impacts of his work among both the public and other scholars. Ahmed also includes a link to his full profile, so his reviewers can drill down into the impact details of all his works and review them for themselves.

Ahmed also showcased the impact of a particular software package he created, JAligner, by including a link to a Google Scholar search that showcases all the scholarship that cites his software:

As of August 2013, JAligner has been cited in more than 150 publications, including journal articles, books, and patents, (http://tinyurl.com/jalignercitations) covering a wide range of topics in biomedical and computational research areas and downloaded almost 20,000 times (Figure 6). It is somehow noteworthy that JAligner has claimed its own Wikipedia entry (http://en.wikipedia.org/wiki/JAligner)!

Ahmed received tenure with AUC in 2013.

Grant Reporting: Dr. Holly Bik, University of Birmingham

Holly was awarded a major grant from the Alfred P. Sloan Foundation to develop a bioinformatics data visualization tool called Phinch.

When reporting back to Sloan on the success of her project, she included metrics like the Figshare views that related posters and talks received, GitHub statistics for the Phinch software, and other altmetrics related to the varied outputs the project has created over the last few years.

Holly’s hopeful that these metrics, in addition to the traditional metrics she’s reported to Sloan, will make a strong case for renewal funding, so her team can continue their work on Phinch.

Will altmetrics work for you?

The remarkable thing about each of these researchers is that their circumstances aren’t extraordinary. The organizations they work for and receive funding from are fairly traditional ones. It follows that you, too, may be able to use altmetrics to document the impacts of your Open Science, no matter where you work or are applying for funding. After all, more and more institutions are starting to incorporate recognition of non-traditional scholarship into their tenure & promotion guidelines. You’ll need non-traditional ways like altmetrics to showcase the impacts of that scholarship.