What Jeffrey Beall gets wrong about altmetrics

Not long ago, Jason received an email from an Impactstory user, asking him to respond to the anti-altmetrics claims raised by librarian Jeffrey Beall in a blog post titled, “Article-Level Metrics: An Ill-Conceived and Meretricious Idea.”

Beall is well-known for his blog, which he uses to expose predatory journals and publishers that abuse Open Access publishing. This has been valuable to the OA community, and we commend Beall’s efforts. But we think his post on altmetrics was not quite so well-grounded.

In the post, Beall claims that altmetrics don’t measure anything of quality. That they don’t measure the impact that matters. That they can be easily gamed.

He’s not alone in making these criticisms; they’re common. But they’re also ill-informed. So, we thought that we’d make our responses public, because if one person is emailing to ask us about them, others must have questions, too.

Citations and the journal impact factor are a better measure of quality than altmetrics

Actually, citations and impact factors don’t measure quality.

Did I just blow your mind?

What citations actually measure

Although early theorists emphasized citation as a dispassionate connector of ideas, more recent research has repeatedly demonstrated that citation has more complex motivations: it often serves as a rhetorical tool or a way to satisfy social obligations (just ask a student who’s failed to cite their advisor). In fact, Simkin and Roychowdhury (2002) estimate that as few as 20% of citers even read the paper they’re citing. And that’s before we even start talking about the dramatic disciplinary differences in citation behavior.

When it comes down to it, because we can’t identify citer motivations from a citation count alone (and, to date, efforts to use sentiment analysis to understand citation motivations have failed to be widely adopted), the only bulletproof way to understand the intent behind a citation is to read the citing paper.

It’s true that some studies have shown that citations correlate with other measures of scientific quality like awards, grant funding, and peer evaluation. We’re not saying they’re not useful. But citations do not directly measure quality, which is something that some scientists seem to forget.

What journal impact factors actually measure

We were surprised that Beall holds up the journal impact factor as a superior way to understand the quality of individual papers. The journal impact factor has been criticized repeatedly over the years, and one issue above all others renders Beall’s argument moot: the impact factor is a journal-level measure of impact, and therefore irrelevant as a measure of article-level impact.
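To make that concrete: the impact factor is essentially a journal-level average of citation counts, and citation distributions are heavily skewed, so the average is dominated by a few outliers and describes almost no individual article. Here’s a minimal sketch in Python, using invented citation counts rather than real data:

```python
# Toy example: citation counts for the articles a hypothetical journal
# published in a two-year window. The numbers are invented to show the
# typical skew: a few highly cited papers, many rarely cited ones.
from statistics import mean, median

citations = [0, 0, 1, 1, 2, 2, 3, 4, 6, 98]

# The impact factor is this kind of journal-level mean.
print(f"journal-level average: {mean(citations):.1f}")    # 11.7
# What a typical article in the journal actually receives.
print(f"median article:        {median(citations):.1f}")  # 2.0
```

In this toy journal, the JIF-style average is nearly six times what the typical article receives, which is exactly why a journal-level number can’t stand in for article-level impact.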

What altmetrics actually measure

The point of altmetrics isn’t to measure quality. It’s to better understand impact: both the quantity of impact and the diverse types of impact.

And when we supplement traditional measures of impact like citations with newer, altmetrics-based measures like post-publication peer review counts, scholarly bookmarks, and so on, we have a better picture of the full extent of impact. Not the only picture. But a better picture.

Altmetrics advocates aim to make everything a number. Only peer review will accurately get at quality.

This criticism is only half-wrong. We agree that informed, impartial expert consensus remains the gold standard for scientific quality. (Though traditional peer review is certainly far from bulletproof when it comes to reaching that consensus.)

But we take exception to the charge that we’re only interested in quantifying impact. In fact, we think that the compelling thing about altmetrics services is that they bring together important qualitative data (like post-publication peer reviews, mainstream media coverage, who’s bookmarking what on Mendeley, and so on) that can’t be summed up in a number.

The scholarly literature on altmetrics is growing fast, but it’s still early. And altmetrics reporting services can only improve over time, as we discover more and better data and ways to analyze it. Until then, using an altmetrics reporting service like our own (Impactstory), Altmetric.com or PlumX is the best way to discover the qualitative data at the heart of diverse impacts. (More on that below.)

There’s only one type of important impact: scholarly impact. And that’s already quantified in the impact factor and citations.

The idea that “the true impact of science is measured by its influence on subsequent scholarship” would likely be news to patients’ rights advocates, practitioners, educators, and everyone else who isn’t an academic but still uses research findings. And the assertion that laypeople aren’t able to understand scholarship is not only condescending, it’s wrong: cf. Kim Goodsell, Jack Andraka, and others.

Moreover, who are the people and groups that argue in favor of One Impact Above All Others, measured only through the impact factor and citations? Often, it’s the established class of scholars, most of whom have benefited from being good at attaining a very particular type of impact and who have no interest in changing the system to recognize and reward diverse impacts.

Even if we were to agree that scholarly impact is of paramount importance, let’s be real: the impact factor and citations alone aren’t sufficient to measure and understand scholarly impact in the 21st century.

Why? Because science is moving online. Mendeley and CiteULike bookmarks, Google Scholar citations, ResearchGate and Academia.edu pageviews and downloads, dataset citations, and other measures of scholarly attention have the potential to help us define and better understand new flavors of scholarly attention. Citations and impact factors by themselves just don’t cut the mustard.

I heard you can buy tweets. That proves that altmetrics can be gamed very easily.

There’s no denying that “gaming” happens, and it’s not limited to altmetrics. In fact, journals have recently been banned from Thomson Reuters’ Journal Citation Reports for impact factor manipulation, and papers have been retracted after a “citation ring” was busted. And researchers have shown just how easy it is to game Google Scholar citations.

Most players in the altmetrics world are pretty vigilant about staying one step ahead of the cheaters. (Though, to be clear, there’s not much evidence that scientists are gaming their altmetrics, since altmetrics aren’t yet central to the review and reward systems in science.) Some good examples are SSRN’s means of finding and banning fraudulent downloaders, PLOS’s “Case Study in Anti-Gaming Mechanisms for Altmetrics,” and Altmetric.com’s thoughts on the complications of rooting out spammers and gamers. And we’re seeing new technology debut monthly that helps us uncover bots on Twitter and Wikipedia, fake reviews, and social bookmarking spam.
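None of these services publish their exact detection rules, for obvious reasons. But to give a flavor of how such heuristics work, here’s a minimal, entirely hypothetical sketch (our illustration, not PLOS’s or Altmetric.com’s actual algorithm): flag a paper whose mentions arrive in one tight burst from brand-new accounts.

```python
# A hypothetical anti-gaming heuristic, not any real service's algorithm:
# flag a paper whose tweets cluster in a short burst from very new accounts.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Tweet:
    posted_at: datetime
    account_created_at: datetime

def looks_gamed(tweets: list[Tweet],
                burst_window: timedelta = timedelta(minutes=10),
                max_account_age: timedelta = timedelta(days=7),
                new_account_share: float = 0.8) -> bool:
    """Return True if most tweets come from very new accounts in one burst."""
    if len(tweets) < 10:  # too little activity to judge either way
        return False
    times = sorted(t.posted_at for t in tweets)
    bursty = (times[-1] - times[0]) <= burst_window
    new_accounts = sum(
        (t.posted_at - t.account_created_at) <= max_account_age for t in tweets
    )
    return bursty and (new_accounts / len(tweets)) >= new_account_share
```

Real systems combine many signals like these (and keep their thresholds secret), but the principle is the same: gamed attention has statistical fingerprints that organic attention doesn’t.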

Crucially, altmetrics reporting services make it easier than ever to sniff out gamed metrics by exposing the underlying data. Now, you can read all the tweets about a paper in one place, for example, or see who’s bookmarking a dataset on Delicious. And by bringing together that data, we help users decide for themselves whether that paper’s altmetrics have been gamed. (Not dissimilar from Beall’s other blog posts, which bring together information on predatory OA publishers in one place for others to easily access and use!)

Altmetrics advocates just want to bring down The Man

We’re not sure what that means. But we sure are interested in bringing down barriers that keep science from being as efficient, productive, and open as it should be. One of those barriers is the current incentive system for science, which is heavily dependent upon proprietary, opaque metrics such as the journal impact factor.

Our true endgame is to make all metrics–including those pushed by The Man–accurate, auditable, and meaningful. As Heather and Jason explain in their “Power of Altmetrics on a CV” article in the ASIS&T Bulletin:

Accurate data is up-to-date, well-described and has been filtered to remove attempts at deceitful gaming. Auditable data implies completely open and transparent calculation formulas for aggregation, navigable links to original sources and access by anyone without a subscription. Meaningful data needs context and reference. Categorizing online activity into an engagement framework helps readers understand the metrics without becoming overwhelmed. Reference is also crucial. How many tweets is a lot? What percentage of papers are cited in Wikipedia? Representing raw counts as statistically rigorous percentiles, ideally localized to domain or type of product, makes it easy to interpret the data responsibly.
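That last point, representing raw counts as percentiles, is easy to make concrete. Here’s a minimal sketch; the reference sample is invented, standing in for “papers of the same age, domain, and product type”:

```python
# Convert a raw altmetrics count into a percentile against a reference
# sample, e.g. "more tweets than 75% of comparable papers". The reference
# counts below are invented for illustration.
from bisect import bisect_right

reference = sorted([0, 0, 0, 1, 1, 2, 2, 3, 5, 8, 13, 40])

def percentile(count: int, reference: list[int]) -> float:
    """Share of the reference sample with a count at or below this one."""
    return 100.0 * bisect_right(reference, count) / len(reference)

print(percentile(5, reference))   # 75.0: more tweets than 3/4 of peers
print(percentile(40, reference))  # 100.0
```

The arithmetic is trivial; the hard part is assembling a fair reference sample, which is exactly the kind of context an altmetrics service can provide.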

That’s why we incorporated as a non-profit: to make sure that our goal of building an Open altmetrics infrastructure–which would help make altmetrics accurate, auditable, and meaningful–isn’t corrupted by commercial interests.

Do you have questions related to Beall’s–or others’–claims about altmetrics? Leave them in the comments below.

7 thoughts on “What Jeffrey Beall gets wrong about altmetrics”

  1. George Perry says:

    Many serious scientists are questioning the IF and other single-number indices of “quality”, e.g. http://www.ascb.org/dora/news.html. Scientists routinely bring many devices to bear in determining “truth”; scientometrics is no different. “Objective” measurement and scientific publishing are changing at a rapid pace that will benefit all of us. During this process it is critical to be open to additional evidence and analysis.
    George Perry

  2. Everyone is aware of the limitations of the JIF, but in those days tracking citations was plain sailing, much like capturing explicit knowledge in the knowledge-transfer process, and there was no better way to capture or measure the invisible college or implicit knowledge. So scientists stuck with the JIF and other citation-based metrics. Recent advances have opened a way to capture this implicit knowledge. Research impact is multidimensional, and it is obvious that no single metric is sufficient to measure it. Hence, we should welcome altmetrics and work on them to gain a wider view of impact.

  3. Stacy Konkiel says:

    I also wanted to point readers to Altmetric.com’s response to Beall’s article, published last year: http://www.altmetric.com/blog/broaden-your-horizons-impact-doesnt-need-to-be-all-about-citations/.

    In it, Euan gives lots of great examples of the specific things Altmetric.com is doing to address issues of gaming, etc. It also includes this choice quote (first part being a snippet of Beall’s original post):

    >> “Article-level metrics reflect a naïve view of the scholarly publishing world [..] avoid any system that is prone to gaming, corruption, and lack of transparency, such as article-level metrics”

    No, this is absolutely wrong – it’s exactly the other way around. Relying only on the impact factor is what’s naïve. All other issues aside, measures based on citation counts alone plainly fail to take the true impact of scientific work into account. <<

    Which: http://i.imgur.com/0FvJVxD.gif

    🙂

  4. editor says:

    Jeffrey does not dare to add publishers like Hindawi, Elsevier, etc. to his list. He is not even brave enough to look at them, simply because they are subscription-based and seemingly he cannot pay for each paper to read it and find out what the problem is, let alone write a post about them and add them to his list. If he did, he might not see the next sunrise. Or he might even forget his own name in less than 24 hours. He is indeed a timid mafia who approaches only publishers and journals that are run from third-world countries, or run by people from those countries. In many of his posts, authors, publishers, and editors from developing countries have been drastically offended, and their talent, capabilities, and confidence have been belittled by Jeffrey Beall.

    In contrast, the people who put value on his list and try to spread it are from those very countries. They act as if Jeffrey descended upon America as a holy prophet whose mission is to guide people and tell them what they need to know and what they don’t. Poor them; we are sorry for those who obey him blindly. Whoever he is, people in the US, UK, Europe, Asia, and the Middle East believe that Jeffrey is not contributing to scholarly work but ruining the scholarly world. His harsh posts and illogical comments are like an online virus that aims to infect every academician.

    If you are an author, you can judge for yourself whether a journal is valid and worthwhile to publish in. Don’t let people like Jeffrey influence your decision and change your mind, because whatever he does is biased and is, in fact, his “personal opinions” [disclaimer tab]. It is a fact that the publishers and journals are not evaluated by an expert team. How on earth do you expect people to take your blog seriously, Jeffrey? Who are you? Who has given that power to you? Are you linked to any authority or any well-established organization? Although he has developed criteria to assess journals and publishers, we believe his criteria are predatory and vain, as he has robbed the constructs from here and there. Hence, it is strongly suggested to avoid taking Beall’s comments, list, etc. seriously.

    Note: our discussion and argumentation do not target misleading-metrics companies or hijacked journals.

    More info can be found at this link: https://library.ryerson.ca/services/faculty/scholarly-communication/evaluating-open-access-journals/
