Our 2014 predictions for altmetrics: what we nailed and what we missed

Back in February, we placed some bets on how the altmetrics landscape would evolve over the course of 2014. As you might expect, we got some right and some wrong.

Let’s take a look back at how altmetrics as a field fared over the last year, through the framework of our 2014 predictions.

More complex modelling

“We’ll see more network-awareness (who tweeted or cited your paper? how authoritative are they?), more context mining (is your work cited from methods or discussion sections?), more visualization (show me a picture of all my impacts this month), more digestion (are there three or four dimensions that can represent my “scientific personality?”), more composite indices (maybe high Mendeley plus low Facebook is likely to be cited later, but high on both not so much).”

Visualizations were indeed big this year: we debuted our new Maps feature, which shows you where in the world your work has been viewed, bookmarked, or tweeted about. We also added “New this week” indicators to the Metrics page on your profile.

PlumX and Altmetric.com added new visualizations, too: the cool-looking “Plum Print” and the Altmetric bar visualization, respectively.

Impactstory also launched a new, network-aware feature that shows you which Twitter users gave you the most exposure when they tweeted your work. We also debuted your profile’s Fans page, which tells you who’s talking about your work, how often, exactly what they’re saying, and how many followers they have.

And a step forward in context mining has come from the recently launched CRediT taxonomy, which lets researchers describe how each co-author contributed to a paper, whether by developing the study’s methodology, cleaning and maintaining its data, or in any of twelve other ways. The taxonomy will soon be piloted by publishers, funders, and other scholarly communication organizations like ORCID.
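
To make the idea concrete, here is a minimal sketch, in Python, of how contributor roles might be recorded for a single paper. The role names are drawn from the CRediT taxonomy, but the data structure and the author names are hypothetical illustrations, not an official CRediT schema.

    # A minimal sketch (not an official CRediT schema): recording CRediT
    # contributor roles for one paper in a plain Python dictionary.
    # Author names are hypothetical; role names come from the taxonomy.
    contributions = {
        "Author A": ["Conceptualization", "Methodology", "Visualization"],
        "Author B": ["Data curation", "Software"],
        "Author C": ["Supervision", "Funding acquisition"],
    }

    def contributors_with_role(role):
        """Return every author credited with the given CRediT role."""
        return [name for name, roles in contributions.items() if role in roles]

    print(contributors_with_role("Data curation"))  # ['Author B']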

As for other instances of network-awareness, context mining, digestion, and composite indices? Most of the progress in these areas came from altmetrics researchers. Here are some highlights:

  • This study on ‘semantometrics’ posits that more effective means of determining impact can be found by looking at the full text of documents, and by measuring the interdisciplinarity of papers and the articles they cite.

  • A study on the size of research teams since 1900 found that larger and more diverse groups of collaborators generally produce more impactful work (as measured by citations).

  • This preprint determined that around 9% of tweets about arXiv.org publications come from bots, not humans, which may have big implications for how scholars use and interpret altmetrics.

  • A study showed that papers tagged on F1000 as being “good for teaching” tend to see more Facebook and Twitter activity, metrics long assumed to relate more to “public” impacts.

  • A study published in JASIST (green OA version here) found that mentions of articles on scholarly blogs correlate with later citations.

Growing interest from administrators and funders

“So in 2014, we’ll see several grant, hiring, and T&P guidelines suggest applicants include altmetrics when relevant.”

Several high-profile announcements from funding agencies confirmed that altmetrics was a hot topic in 2014. In June, the Autism Speaks charity announced that they’d begun using PlumX to track the scholarly and social impacts of the studies they fund. And in December, the Wellcome Trust published an article describing how they use altmetrics in a similar manner.

Are funders and institutions explicitly suggesting that researchers include altmetrics in their applications, when relevant? Not as often as we had hoped. But a positive step in this direction came from the NIH, which released a new biosketch format that asks applicants to list their most important publications or non-publication research outputs, and prompts scientists to articulate why they consider those outputs to be important.

The NIH has said that by moving to this new biosketch format, it “will help reviewers evaluate you not by where you’ve published or how many times, but instead by what you’ve accomplished.” We applaud this move, and hope that other funders adopt similar policies in 2015.

Empowered scientists

“As scientists use tools like Impactstory to gather, analyze, and share their own stories, comprehensive metrics become a way for them to articulate more textured, honest narratives of impact in decisive, authoritative terms. Altmetrics will give scientists growing opportunities to show they’re more than their h-indices.”

We’re happy to report that this prediction came true. This year, we’ve heard from more scientists and librarians than ever before, all of whom have used altmetrics data in their tenure dossiers, grant applications and reports, and annual reviews. And in one-on-one conversations, early career researchers are telling us how important altmetrics are for showcasing the impacts of their research when applying for jobs.

We expect that as more scientists become familiar with altmetrics in the coming year, we’ll see even more empowered scientists using their altmetrics to advance their careers.

Openness

“Since metrics are qualitatively more valuable when we verify, share, remix, and build on them, we see continued progress toward making both traditional and novel metrics more open. But closedness still offers quick monetization, and so we’ll see continued tension here.”

This is one area where we weren’t exactly wrong, but we weren’t 100% correct, either. Everything stayed more or less the same with regard to openness in 2014: Impactstory continued to make our data available via an open API, as did Altmetric.com.
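
As one concrete example of what that openness looks like in practice, here is a minimal sketch, in Python (standard library only), of fetching metrics for a single article from Altmetric.com’s free public DOI endpoint. The DOI used is just an illustrative example, the fields printed are a small subset of the response, and the endpoint’s terms and response format may have changed since this was written.

    # A minimal sketch of querying Altmetric.com's free public API for one DOI.
    # The DOI below is an illustrative example; untracked DOIs return HTTP 404.
    import json
    import urllib.error
    import urllib.request

    doi = "10.1038/480426a"  # example DOI; swap in one of your own
    url = "https://api.altmetric.com/v1/doi/" + doi

    try:
        with urllib.request.urlopen(url) as response:
            record = json.loads(response.read().decode("utf-8"))
    except urllib.error.HTTPError:
        record = {}  # the DOI isn't tracked by Altmetric (or the request failed)

    # A few commonly returned fields; availability varies by record.
    print(record.get("title"))
    print(record.get("score"))                    # Altmetric attention score
    print(record.get("cited_by_tweeters_count"))  # number of unique tweeters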

We hope that our prediction will come true in 2015, as the increased drive towards open science and open access puts pressure on those metrics providers that haven’t yet “opened up.”

Acquisitions by the old guard

“In 2014 we’ll likely see more high-profile altmetrics acquisitions, as established megacorps attempt to hedge their bets against industry-destabilizing change.”

2014 didn’t see any acquisitions per se, but publishing behemoth Elsevier made three announcements that hint the company may be positioning itself for such acquisitions soon: a call for altmetrics research proposals, the hiring of prominent bibliometrician (and co-author of the Altmetrics Manifesto) Paul Groth as Disruptive Technology Director of Elsevier Labs, and the launch of Axon, the company’s invitation-only startup network.

Where do you think altmetrics will go in 2015? Leave your predictions in the comments below.

3 thoughts on “Our 2014 predictions for altmetrics: what we nailed and what we missed”

  1. Egon says:

     When you write that papers from larger research teams have higher impact, that is nothing surprising, IMHO. With more authors, there is more chance for self-citation, though Web of Science, for example, doesn’t count all of them as self-citations. Add to that that the number of befriended scientists is also larger, and their citations never count as self-citations. Another effect is that the larger the project, the more impressive and authoritative it seems. The cito:citesAsAuthority citations, in particular, can be expected to be relatively high. A reason why I predict that this year will finally be the year of the Citation Ontology (CiTO) on the desktop.

    But, after a long introduction, my question is: when you state that such papers have a larger impact, is that a normalized/relative claim or an absolute claim?

  2. Heather says:

    Hi Egon! Yeah, I’m not sure. I don’t remember if the number of citations was larger than would have been predicted by division by the number of authors — removing the direct effects of self-citation and networks, as you suggest. Am out of time to reread the article right now to figure out the details of their finding — if you get a chance, I’d be interested to hear!
