Share your articles, slides and more on Impactstory

We said we were going to have big changes live by Sept 15th when early adopters’ free trials expire. Well here’s our first one:  Impactstory’s now a great place to freely share your articles, slides, videos, and more–and get viewership stats to track impacts even better.

Share everything

Before, product pages focused just on the metrics for your research products. Those metrics are still there, but now the focus is on the product itself. Yep, that’s right: people can now view and read your work right on Impactstory. So we’re not just a place to share the impact of your work, we’re also a place to share your actual research.

It’s super easy to upload your preprints to Impactstory (and you should!). But it gets even better–for most OA publications, we automatically embed the PDF for you. It’s handy, and it’s also a great example of the kind of interoperability OA makes possible.

But as y’all know, at Impactstory we’re passionate about supporting scholarly products beyond articles. So we’re also automatically embedding a slew of other tasty product types. GitHub repo? We’ve got your README file embedded. Figshare image? Yup, that’s on your profile now too. You want to view videos from Vimeo and YouTube, and slides from Slideshare, right on your Impactstory page? Done.

Discover how many people are viewing your research

We’re also rolling out viewership stats for your Impactstory product page. So not only do you learn when folks are citing, discussing, and saving your work–you learn when they’re reading it, too. Over time we’ll likely add viewership maps and other ways to dig into this data even more.

Why you should upload your work to Impactstory

Sharing your work directly on Impactstory has lots of advantages. It brings all your product types together in one place, under your brand as a researcher, not under the brand of a journal or institution. It also makes the case for your research’s value better than metrics alone–it helps you tell a fuller impact story.

Uploading your work is also a great, quick way to just get your work out there. In that regard it’s kind of like what Academia.edu and ResearchGate offer–except we don’t make potential readers create an account to access your work. It’s open. We don’t yet have a comprehensive preservation strategy (persistent IDs, CLOCKSS, etc.), but we’ll be listening to see if there’s demand for that.

As you may notice, we are super excited about this feature. We’re going to be working hard to get the word out about it to our users, and we’re counting on all of your help with that. And of course, as always, we’d also love your feedback, particularly on bugs; a feature this big will certainly have a few as users kick the tires.

And now we’re transitioning to working on our next big set of features…can’t wait to launch those over the next two weeks!

4 things every librarian should do with altmetrics

Researchers are starting to use altmetrics to understand and promote their academic contributions. At the same time, administrators and funders are exploring them to evaluate researchers’ impact.

In light of these changes, how can you, as a librarian, stay relevant by supporting researchers’ fast-changing altmetrics needs?

In this post, we’ll give you four ways to stay relevant: staying up-to-date with the latest altmetrics research, experimenting with altmetrics tools, engaging in early altmetrics education and outreach, and defining what altmetrics mean to you as a librarian.

1. Know the literature

Faculty won’t come to you for help navigating the altmetrics landscape if they can tell you don’t know the area very well, will they?

To get familiar with discussions around altmetrics, start with the recent SPARC report on article-level metrics, this excellent overview that appeared in Serials Review (paywall), and the recent ASIS&T Bulletin special issue on altmetrics.

Then, check out this list of “17 Essential Altmetrics Resources” aimed at librarians, this recent article on collection development and altmetrics from Against the Grain, and presentations from Heather and Stacy on why it’s important for librarians to be involved in altmetrics discussions on their campuses.

There’s also a growing body of peer-reviewed research on altmetrics. One important concept from this literature is the idea of “impact flavors”–a way to understand distinctive patterns in the impacts of scholarly products.

For example, an article featured in mainstream media stories, blogged about, and downloaded by the public has a very different flavor of impact than a dataset heavily saved and discussed by scholars, which is in turn different from software that’s highly cited in research papers. Altmetrics can help researchers, funders, and administrators optimize for the mix of flavors that best fits their particular goals.

There have also been many studies of the correlations (or lack thereof) between altmetrics and traditional citations. Some have shown that selected altmetrics sources (Mendeley in particular) are significantly correlated with citations (1, 2, 3), while other sources, like Facebook bookmarks, are only slightly correlated with citations. These studies show that different types of altmetrics capture different types of impact, beyond just scholarly impact.
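
If you want a feel for what these correlation studies actually do, here’s a minimal sketch (in Python, with invented numbers) of the rank-correlation test most of them report:

```python
# A toy version of the analysis behind these studies: do Mendeley reader
# counts rank papers the same way citation counts do? All numbers invented.
from scipy.stats import spearmanr

mendeley_readers = [12, 45, 3, 88, 20, 7, 150, 33]
citations        = [5, 30, 1, 60, 12, 2, 95, 25]

rho, p_value = spearmanr(mendeley_readers, citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A high rho means the two metrics rank papers similarly; a low rho
# suggests they capture different kinds of impact.
```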

Other early touchstones include studies exploring the predictive potential of altmetrics, growing adoption of social media tools that inform altmetrics, and insights from article readership patterns.

But these are far from the only studies to be aware of! Stay abreast of new research by reading the PLOS Altmetrics Collection, joining the Altmetrics Mendeley group, and following the #altmetrics hashtag on Twitter.

2. Know the tools

There are now several tools that allow scholars to collect and share the broad impact of their research portfolios.

In the same way you’d experiment with new features added to Web of Science, you can play around with altmetrics tools and add them to your bibliographic instruction repertoire (more on that in the following section). Familiarity will enable you to do easy demonstrations, discuss strengths and weaknesses, contribute to product development, and serve as a resource for campus scholars and administration.

Here are some of the most popular altmetrics tools:

Impactstory

If you’re reading this post, chances are that you’re already familiar with Impactstory, a nonprofit Web application supported by the Alfred P. Sloan Foundation and NSF.

If you’re a newcomer, here’s the scoop: scholars create a free Impactstory profile and then upload their articles, datasets, software, and other products using Google Scholar, ORCID, or lists of permanent identifiers like DOIs, PubMed IDs, and so on. Impactstory then gathers and reports altmetrics and traditional citations for each product. As shown above, metrics are displayed as percentiles relative to similar products. Profile data can be exported for further analysis, and users can receive alerts about new impacts.
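
Curious what those permanent identifiers make possible? Here’s a hedged sketch (not Impactstory’s actual code) of resolving a single DOI to its metadata using CrossRef’s public REST API:

```python
# Hedged sketch: resolve a DOI to its metadata via CrossRef's public REST
# API. An illustration of what permanent identifiers enable, not
# Impactstory's internal import code.
import requests

doi = "10.1371/journal.pone.0018011"  # an example PLOS ONE DOI
resp = requests.get(f"https://api.crossref.org/works/{doi}")
resp.raise_for_status()

work = resp.json()["message"]
print(work["title"][0])            # article title
print(work["container-title"][0])  # journal name
print("CrossRef citation count:", work.get("is-referenced-by-count"))
```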

Impactstory is built on open-source code, offers open data, and is free to use. Our robust community of users helps us think up new features and prioritize development via our Feedback forum; once you’re familiar with our site, we encourage you to sign up and start contributing, too!

PlumX

PlumX is another web application that displays metrics for a wide range of scholarly outputs. The metrics can be viewed and analyzed at any user-defined level, including the researcher, department, institution, journal, grant, and research topic levels. PlumX reports some metrics not found in other altmetrics services, like WorldCat holdings and downloads and pageviews from some publishers, institutional repositories, and EBSCO databases. PlumX is developed and marketed by Plum Analytics, an EBSCO company.

The service is available via a subscription. Individuals who are curious can experiment with the free demo version.

Altmetric

The third tool that librarians should know about is Altmetric.com. Originally developed to provide altmetrics for publishers, the tool primarily tracks journal articles and ArXiv.org preprints. In recent years, the service has expanded to include a subscription-based institutional edition, aimed at university administrators.

Altmetric.com offers unique features, including the Altmetric score (a single-number summary of the attention an article has received online) and the Altmetric bookmarklet (a browser widget that allows you to look up altmetrics for any journal article or ArXiv.org preprint with a unique identifier). Sources tracked for mentions of articles include social and traditional media outlets from around the world, post-publication peer-review sites, reference managers like Mendeley, and public policy documents.
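
If you prefer querying to clicking, Altmetric also exposes a free public API. Here’s a hedged sketch of a lookup by DOI; the endpoint and field names reflect Altmetric’s public documentation and may change:

```python
# Hedged sketch: fetch Altmetric's attention data for one article by DOI.
# Field names are assumptions based on Altmetric's public API docs.
import requests

doi = "10.1371/journal.pone.0018011"
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")

if resp.status_code == 404:
    print("Altmetric has no attention data for this DOI.")
else:
    resp.raise_for_status()
    data = resp.json()
    print("Altmetric score:", data.get("score"))
    print("Tweeters:", data.get("cited_by_tweeters_count"))
    print("News outlets:", data.get("cited_by_msm_count"))
```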

Librarians can get free access to the Altmetric Explorer and free services for institutional repositories. You can also request trial access to Altmetric for Institutions.

3. Integrate altmetrics into library outreach and education

Librarians are often asked to describe Open Access publishing choices to both faculty and students, and to teach how to gather evidence of impact for hiring, promotion, and tenure. These opportunities–whether one-on-one or in group settings like faculty meetings–allow librarians to introduce altmetrics.

Discussing altmetrics in the context of Open Access publishing helps “sell” the benefits of OA. Altmetrics, like the download counts that appear in PLOS journals and institutional repositories, can highlight the benefits of open access publishing. They can also demonstrate that “impact” is tied more closely to an individual’s scholarship than to a journal’s impact factor.

Similarly, researchers often use an author’s h-index for hiring, tenure, and promotion, conflating the h-index with the quality of an individual’s work. Librarians are often asked to teach and provide assistance in calculating an h-index within various databases (Web of Science, Scopus, etc.). Integrating altmetrics into these instruction sessions is akin to providing researchers with additional primary resource choices on a research project. Librarians need to make researchers aware of the many tools they can use to evaluate the impact of scholarship, and of the relevant research–including the benefits of and drawbacks to different altmetrics.

So, what does altmetrics outreach look like on the ground? To start, check out these great presentations that librarians around the world have given on the benefits of using altmetrics (and particular altmetrics tools) in research and promotion.

Another great way to stay relevant on this subject is to find and recommend readings to your grad students and faculty on ways they can use altmetrics in their careers, like this one from our blog on the benefits of including altmetrics on your CV.

4. Discover the benefits that altmetrics offer librarians

There are reasons to learn about altmetrics beyond serving faculty and students. A major one is that many librarians are scholars themselves, and can use altmetrics to better understand the diverse impact of their articles, presentations, and white papers. Consider putting altmetrics on your own CV, and advocating the use of altmetrics among library faculty who are assembling tenure and promotion packages.

Librarians also produce and support terabytes’ worth of scholarly content that’s intended for others’ use, usually in the form of digital special collections and institutional repository holdings. Altmetrics can help librarians understand the impacts of these non-traditional scholarly outputs, and provide hard evidence of their use beyond ‘hits’ and downloads–evidence that’s especially useful when making arguments for increased budgetary and administrative support.

It’s important that librarians explore the unique ways they can apply altmetrics to their own research and jobs, especially in light of recent initiatives to create recommended practices for the collection and use of altmetrics. What is useful to a computational biologist may not be useful for a librarian (and vice versa). Get to know the research and tools and figure out ways to use them to your own ends.

There’s a lot happening right now in the altmetrics space, and it can sometimes be overwhelming for librarians to keep up with and understand. By following the steps outlined above, you’ll be well positioned to inform and support researchers, administrators, and library decision makers in their use. And in doing so, you’ll be indispensable in this new era of web-native research.

Are you a librarian who’s using altmetrics? Share your experiences in the comments below!

This post has been adapted from the 2013 C&RL News article, “Riding the crest of the altmetrics wave: How librarians can help prepare faculty for the next generation of research impact metrics” by Lapinski, Piwowar, and Priem.

Ten reasons you should put altmetrics on your CV right now

If you don’t include altmetrics on your CV, you’re missing out in a big way.

There are many benefits to scholars and scholarship when altmetrics are embedded in a CV.

Altmetrics can:

  1. provide additional information;
  2. de-emphasize inappropriate metrics;
  3. uncover the impact of just-published work;
  4. legitimize all types of scholarly products;
  5. recognize diverse impact flavors;
  6. reward effective efforts to facilitate reuse;
  7. encourage a focus on public engagement;
  8. facilitate qualitative exploration;
  9. empower publication choice; and
  10. spur innovation in research evaluation.

In this post, we’ll detail why these benefits are important to your career, and also recommend the ways you should–and shouldn’t–include altmetrics in your CV.

1. Altmetrics provide additional information

The most obvious benefit of including altmetrics on a CV is that you’re providing more information than your CV’s readers already have.  Readers can still assess the CV items just as they’ve always done: based on title, journal and author list, and maybe–if they’re motivated–by reading or reviewing the research product itself. Altmetrics have the added benefit of allowing readers to dig into post-publication impact of your work.

2. Altmetrics de-emphasize inappropriate metrics

It’s generally regarded as poor form to evaluate an article based on its journal’s title or impact factor. Why? Because impact factors vary widely across fields, and an individual article often receives more or less attention than its journal container suggests.

But what else are readers of a CV to do? Most of us don’t have enough domain expertise to dig into each item and assess its merits based on a careful reading, even if we did have time. We need help, but traditional CVs don’t provide enough information to assess the work on anything but journal title.

Providing article-level citations and altmetrics in a CV gives readers more information, thereby de-emphasizing evaluation based on journal rank.

3. Altmetrics uncover the impact of just-published work

Why not suggest that we include citation counts in CVs, and leave it at that? Why go so far as altmetrics? The reason is that altmetrics have benefits that complement the weaknesses of a citation-based solution.

Timeliness is the most obvious benefit of altmetrics. Citations take years to accrue, which can be a problem for graduate students who are applying for jobs soon after publishing their first papers, and for promotion candidates whose most profound work is published only shortly before review.

Multiple research studies have found that counts of downloads, bookmarks and tweets correlate with citations, yet accrue much more quickly, often in weeks or months rather than years. Using timely metrics allows researchers to showcase the impact of their most recent work.

4. Altmetrics legitimize all types of scholarly products

How can readers of a CV know if your included dataset, software project, or technical report is any good?

You can’t judge its quality and impact based on the reputation of the journal that published it, since datasets and software aren’t published in journals. And even if they were, we wouldn’t want to promote the poor practice of judging the impact of an item by the impact of its container.

How, then, can alternative scholarly products be more than just space-filler on a CV?

The answer is product-level metrics. Like article-level metrics do for journal articles, product-level metrics provide the needed evidence to convince evaluators that a dataset or software package or white paper has made a difference. These types of products often make impacts in ways that aren’t captured by standard attribution mechanisms like citations. Altmetrics are key to communicating the full picture of how a product has influenced a field.

5. Altmetrics recognize diverse impact flavors

The impact of a research paper has a flavor. There are scholarly flavors (a great methods section bookmarked for later reference, or controversial claims that change a field), public flavors (“sexy” research that captures the imagination, or data from a paper that’s used in the classroom), and flavors that fall into the area in between (research that informs public policy, or a paper that’s widely used in clinical practice).

We don’t yet know how many flavors of impact there are, but it would be a safe bet that scholarship and society need them all. The goal isn’t to compare flavors: one flavor isn’t objectively better than another. They each have to be appreciated on their own merits for the needs they meet.

To appreciate the impact flavor of items on a CV, we need to be able to tell the flavors apart. (Citations alone can’t fully tell us what kind of difference a research paper has made in the world. They are important, but not enough.) This is where altmetrics come in. By analyzing patterns in what people are reading, bookmarking, sharing, discussing and citing online, we can start to figure out what kind–what flavor–of impact a research output is making.

More research is needed to understand the flavor palette, how to classify impact flavor and what it means. In the meantime, exposing raw information about downloads, shares, bookmarks and the like starts to give a peek into impact flavor beyond just citations.

6. Altmetrics reward efforts to facilitate reuse

Reusing research – for replication, follow-up studies and entirely new purposes – reduces waste and spurs innovation. But it does take a bit of work to make your research reusable, and that work should be recognized using altmetrics.

There are a number of ways authors can make their research easier to reuse. They can make article text available for free with broad reuse rights. They can choose to publish in venues with liberal text-mining policies that invest in disseminating machine-friendly versions of articles and figures.

Authors can write detailed descriptions of their methods, materials, datasets and software and make them openly available for reuse. They can even go further, experimenting with executable papers, versioned papers, open peer review, semantic markup and so on.

When these additional steps result in increased reuse, it will likely be reflected in downloads, bookmarks, discussions and possibly citations. Including altmetrics in CVs will reward investigators who have invested their time to make their research reusable, and will encourage others to do so in the future.

7. Altmetrics can encourage a focus on public engagement

The research community, as well as society as a whole, benefits when research results are discussed outside the Ivory Tower. Engaging the public is essential for future funding, recruitment and accountability.

Today, however, researchers have little incentive to engage in outreach or make their research accessible to the public. By highlighting evidence of public engagement like tweets, blog posts and mainstream media coverage, altmetrics on a CV can reward researchers who choose to invest in public engagement activities.

8. Altmetrics facilitate qualitative exploration

Including altmetrics in a CV isn’t all about the numbers! Just as we hope many people who skim our CVs will stop to read our papers and explore our software packages, so too we can hope that interested parties will click through to explore the details of altmetrics engagement for themselves.

Who is discussing an article? What are they saying? Who has bookmarked a dataset? What are they using it for? As we discuss at the end of this post, including provenance information is crucial for trustworthy altmetrics. It also provides great information that helps CV readers move beyond the numbers and jump into qualitative exploration of impact.

9. Altmetrics empower publication choice

Publishing in a new or innovative journal can be risky. Many authors are hesitant to publish their best work somewhere new or with a relatively low impact factor. Altmetrics can remedy this by highlighting work based on its post-publication impact, rather than the title of the journal it was published in. Authors will be empowered to choose publication venues they feel are most appropriate, leveling the playing field for what might otherwise be considered risky choices.

Successful publishing innovators will also benefit. New journals won’t have to wait two years to get an impact factor before they can compete. Publishing venues that increase access and reuse will be particularly attractive. This change will spur innovation and support the many publishing options that have recently debuted, such as eLife, PeerJ, F1000 Research and others.

10. Altmetrics spur innovation in research evaluation

Finally, including altmetrics on CVs will engage researchers directly in research evaluation. Researchers are evaluated all the time, but often behind closed doors, using data and tools they don’t have access to. Encouraging researchers to tell their own impact stories on their CVs, using broad sources of data, will help spur a much-needed conversation about how research evaluation is done and should be done in the future.

OK, so how can you do it right?

There can be risks to including altmetrics data on a CV, particularly if the data is presented or interpreted without due care or common sense.

Altmetrics data should be presented in a way that is accurate, auditable and meaningful:

  • Accurate data is up-to-date, well-described, and has been filtered to remove attempts at deceitful gaming.
  • Auditable data implies completely open and transparent calculation formulas for aggregation, navigable links to original sources, and access by anyone without a subscription.
  • Meaningful data needs context and reference. Categorizing online activity into an engagement framework helps readers understand the metrics without becoming overwhelmed. Reference is also crucial: how many tweets is a lot? What percentage of papers are cited in Wikipedia? Representing raw counts as statistically rigorous percentiles, localized to domain or type of product, makes it easy to interpret the data responsibly (see the sketch after this list).
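
To make the percentile idea concrete, here’s a minimal sketch that turns a raw tweet count into a percentile against a reference sample of comparable papers (the reference counts are invented):

```python
# Minimal sketch: contextualize one paper's tweet count against a reference
# sample of comparable papers (e.g., same year and field). Numbers invented.
def percentile(value, reference_sample):
    """Percent of reference products at or below this value."""
    at_or_below = sum(1 for x in reference_sample if x <= value)
    return 100.0 * at_or_below / len(reference_sample)

reference_tweet_counts = [0, 0, 1, 0, 2, 5, 0, 1, 12, 3, 0, 7, 1, 0, 2]
my_paper_tweets = 5

pct = percentile(my_paper_tweets, reference_tweet_counts)
print(f"Tweeted at least as much as {pct:.0f}% of comparable papers")
```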

Assuming these presentation requirements are met, how should the data be interpreted? We strongly recommend that altmetrics be considered not as a replacement for careful expert evaluation, but as a supplement. Because they are still in their infancy, we should view altmetrics as a way to ground subjective assessment in real data; a way to start conversations, not end them.

Given this approach, at least three varieties of interpretation are appropriate: signaling, highlighting and discovery. A CV with altmetrics clearly signals that a scholar is abreast of innovations in scholarly communication and serious about communicating the impact of scholarship in meaningful ways. Altmetrics can also be used to highlight research products that might otherwise go unnoticed: a highly downloaded dataset or a track record of F1000-reviewed papers suggests work worthy of a second look. Finally, as we described above, auditable altmetrics data can be used by evaluators as a jumping off point for discovery about who is interested in the research, what they are doing with it, and how they are using it.

How to Get Started

How can you add altmetrics to your own CV or, if you are a librarian, empower scholars to add altmetrics to theirs?

Start by experimenting with altmetrics for yourself. Play with the tools, explore and suggest improvements. Librarians can also spread the word on their campuses and beyond through writing, teaching and outreach. Finally, if you’re in a position to hire, promote, or review grant applications, explicitly welcome diverse evidence of impact when you solicit CVs.

What are your thoughts on using altmetrics on a CV? Would you welcome them as a reviewer, or choose to ignore them? Tell us in the comments section below.

This post has been adapted from “The Power of Altmetrics on a CV,” which appeared in the April/May 2013 issue of ASIS&T Bulletin.

Ten things you need to know about ORCID right now

An ORCID identifier for Mike Eisen (aka http://orcid.org/0000-0002-7528-738X)

Have you ever tried to search for an author, only to discover that he shares a name with 113 other researchers? Or realized that Google Scholar stopped tracking citations to your work after you took your spouse’s surname a few years back?

If so, you’ve probably wished for ORCID.

ORCID IDs are permanent identifiers for researchers. Community uptake increased tenfold over the past year, and new institutions, funders, and journals are adopting ORCID daily. It may prove to be one of the most important advances in scholarly communication of the past ten years.

Here are ten things you need to know about ORCID and its importance to you.

1. ORCID protects your unique scholarly identity

There are approximately 200,000 people per unique surname in China. That’s a lot of “J Wang”s–more than 1200 in nanoscience alone! Same for lots of other names: we’re just not as uniquely named as we think.

Not a Wang? You’ll probably still need ORCID if you plan to take your spouse’s family name, or if you ever accidentally omit your middle initial from the byline when submitting a manuscript.

ORCID solves the author name problem by giving individuals a unique, 16-character identifier (15 digits plus a final checksum character) that lasts over time.

The numbers are stored in a central registry, which will power a research infrastructure that ensures that people find the correct “J Wang” and get credit for all their publications.
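
For the technically curious: that final character is a checksum over the first 15 digits, computed with the ISO 7064 MOD 11-2 scheme described in ORCID’s documentation. Here’s a minimal validation sketch:

```python
# Minimal sketch: validate an ORCID iD's final checksum character
# (ISO 7064 MOD 11-2, per ORCID's documentation).
def orcid_check_char(base15: str) -> str:
    """Compute the expected 16th character from the first 15 digits."""
    total = 0
    for digit in base15:
        total = (total + int(digit)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    chars = orcid.replace("-", "")
    return len(chars) == 16 and orcid_check_char(chars[:15]) == chars[15]

print(is_valid_orcid("0000-0002-7528-738X"))  # Mike Eisen's iD above -> True
```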

2. Creating an ORCID identifier takes 30 seconds

Setting up an ORCID record is easier than setting up a Facebook account, and literally only takes 30 seconds.

Plus, if you’ve published before, you likely already have a ResearcherID or Scopus Author ID, or you may have publications indexed in CrossRef–which means that you can easily import information from those systems into your ORCID record, letting those websites do the grunt work for you.

3. ORCID is getting big fast

Growth in ORCID identifiers, from Oct. 2012-Mar. 2014

Even if you haven’t yet encountered ORCID, you likely will soon. The number of ORCID users grew tenfold over 2013, and continues to grow daily. You’ll encounter ORCID identifiers more and more often on journal websites and funding applications–a great reason to better understand ORCID’s purpose and uses.

4. ORCID lasts longer than your email address

Anyone who has ever moved institutions knows the pain of losing touch with colleagues once access to your old university email disappears. ORCID eases that pain by storing your most recent email address. If you choose to share it, your email address can be shared across platforms–meaning you spend less time updating your many profiles.

5. ORCID supports 37 types of “works,” from articles to dance performances

Any type of scholarly output you create, ORCID can handle.

Are you a traditional scientist, who writes only papers and the occasional book chapter? ORCID can track ’em.

Are you instead a cutting-edge computational biologist who releases datasets and figures for your thesis, as they are created? ORCID can track that, too.

Not a scientist at all, but an art professor? You can import your works into ORCID as well, using ISNI2ORCID… you get the idea.

ORCID will even start importing information about your service to your discipline soon!

6. You control who views your ORCID information

Concerned about the privacy implications of ORCID? You’re in luck–ORCID has granular privacy controls.

When setting up your ORCID record, you can select the default privacy settings for all of your content–Open to everyone, Open to trusted parties (web services that you’ve linked to your ORCID record), or Open only to yourself. Once your profile is populated, you can set custom privacy levels for each item, easy as pie.

7. ORCID is glue for all your research services

You can connect your ORCID account with websites including Web of Science, Figshare, and Impactstory, among many others.

Once they’re connected, you can easily push information back and forth between services–meaning that a complete ORCID record will allow you to automatically import the same information to multiple places, rather than having to enter the same information over and over again on different websites.

And new services are connecting to ORCID every day, sharing information across an increasing number of platforms–repositories, funding agencies, and more!

8. Journals, funders & institutions are moving to ORCID

Some of the world’s largest publishers, funders, and institutions have adopted ORCID.

Over 1000 journals, including publications by PLOS, Nature, and Elsevier, are using ORCID as a way to make it easier for authors to manage their information in manuscript submission systems. ORCID can also collect your publications from across these varied services, making it possible to aggregate author-level metrics.

Funding agencies are integrating their systems with ORCID for similar reasons. Funders from the Wellcome Trust to the NIH now request that grantees use ORCIDs to manage information in their systems, and many other funding agencies across the world are following suit.

In 2013, universities accounted for the largest percentage of all new ORCID members. ORCID helps institutions track your work, compile information for university-level reporting (e.g., total funding received by its scholars), and more efficiently manage information on faculty profiles. By eliminating redundancies and automating some reporting functions, ORCID will be especially helpful in reducing the time and money spent on the REF and other assessment activities.

9. When everyone has an ORCID identifier, scholarship gets better

How many hours have you wasted by filling in your address, employment history, collaborator names and affiliations, etc when applying for grants or submitting manuscripts? For many publishers and funders, you can now simply supply your ORCID identifier, saving you precious time to do research.

In addition to increasing efficiency, ORCID can also help connect funding dollars with tangible outputs, track citations beyond journal articles, and help keep author contact information up-to-date.

10. ORCID is open source, open data, and community-driven

ORCID is a community-driven organization. You can help shape its development by adding and voting for ideas on ORCID’s feedback forum.

It’s also Open by design. ORCID is an open source web-app that allows other web-apps to use its open API and mine its open data. (We actually use ORCID’s open API to easily import information into your Impactstory profile.) Openness like ORCID’s supports innovation and transparency, and can keep us from focusing myopically on limited publication types or single indicators of impact.
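
As a small taste of that openness, here’s a hedged sketch of pulling a researcher’s works from ORCID’s public API. The v3.0 endpoint and response shape have changed across API versions, so treat the field paths below as assumptions:

```python
# Hedged sketch: list a researcher's works via ORCID's public API (v3.0).
# Endpoint and JSON structure are assumptions based on ORCID's current docs.
import requests

orcid = "0000-0002-7528-738X"  # Mike Eisen's iD, from the caption above
resp = requests.get(
    f"https://pub.orcid.org/v3.0/{orcid}/works",
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

for group in resp.json()["group"]:
    summary = group["work-summary"][0]  # first version of each grouped work
    print(summary["title"]["title"]["value"])
```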

And there we have it–ten things you now know about ORCID. Reference them and you’ll sound like an expert at your next department meeting (to which you should of course bring your custom ORCID mug). 🙂

Do you use ORCID? Leave your ORCID identifier in the comments, along with your thoughts about the system.

Thanks to ORCID’s Rebecca Bryant for feedback on this post.

The 3 dangers of publishing in “megajournals”–and how you can avoid them

You like the idea of “megajournals”–online-only, open access journals that cover many subjects and publish content based only on whether it is scientifically sound. You get that PLOS ONE, PeerJ and others offer a path to a more efficient, faster, more open scholarly publishing world.

But you’re not publishing there.

Because you’ve heard rumors that they’re not peer reviewed, or that they’re “peer-review lite” journals. You’re concerned they’re journals of last resort, article dumping grounds. You’re worried your co-authors will balk, that your work won’t be read, or that your CV will look bad.

Well, you’re not the only one. And it’s true: although they’ve got great potential for science as a whole, megajournals (which include PLOS ONE as well as BMJ Open, SAGE Open, Scientific Reports, Open Biology, PeerJ, and SpringerPlus) carry some potential career liabilities.

But they don’t have to. With a little savvy, publishing in megajournals can actually boost your career, at the same time as you support a great new trend in science communication. So here are the biggest dangers of megajournal publishing–and the tips that let you not have to worry about them:

1. My co-authors won’t want to publish in megajournals

Sometimes wanting to publish somewhere yourself isn’t enough–you’ve got to convince skeptical co-authors (or advisors!). Luckily, there’s a lot of data about megajournals’ advantages for you to share with the skeptics. And the easiest way to convince a group of scientists of anything is to show them the data.

Megajournals publish prestigious science

Megajournals aren’t for losers: top scientists, including Nobelists, publish there, and they also serve as megajournal editors and advisory board members. So, let your co-authors know: you’ll be in great company if you publish in a megajournal.

Megajournals boost citation and readership impact

Megajournals can get you more readers because they’re Open Access. A 2008 BMJ study showed that “full text downloads were 89% higher, PDF downloads 42% higher, and unique visitors 23% higher for open access articles than for subscription access articles.” These findings have been confirmed for other disciplines, as well. Open Access journals can also get you more citations, as numerous studies have shown.

Megajournals promote real-world use

With more readers come more applications in the real world–another important form of impact. The most famous example is Jack Andraka, a teenager who devised a test for pancreatic cancer using information found in the Open Access medical literature. Every day, articles on diet and public health in Malawi, on more efficient ways to monitor animal species in the face of rapid climate change, and other life-changing applied science are shared in Open Access megajournals.

Megajournals publish fast

If the readership and citation numbers don’t appeal to your co-authors, what about super fast publication times? Megajournals often publish more quickly than other journals. PLOS ONE has a median time-to-publication of around six months; PeerJ’s median time to first decision is 24 days; time to final acceptance is a mere 51 days. Why? Rather than having to prove to your reviewers the significance of your findings, you just have to prove that the underlying science is sound. That leaves you with more time to do other research.

Megajournals save money

Megajournals are also often cheaper to publish in, due to economies of scale. That means that while the Journal of Physical Therapy requires you to pay $4030 for an article, PLOS ONE can get you 29x the article influence for a third of the price. PeerJ claims that its even cheaper pricing–a $299 flat rate for as many articles as you want to publish, ever–has saved academia over $1 million to date.

2. No one in my field will find out about it

You’ve convinced your co-authors–megajournals are faster, cheaper, and publish great research by renowned scientists. Now, how do you get others in your field to read an article in a journal they’ve never heard of?

Getting your colleagues to read your article is as easy as posting it in places where they go to read. You can start before you publish by posting a preprint to Figshare, or to a disciplinary preprint server like ArXiv or PeerJ Preprints, to whet your colleagues’ appetite. Make sure to use good keywords to make it findable–particularly since a growing percentage of articles today are found via Google Scholar and PubMed searches rather than encountered in journals.

Once your paper has been formally published in your megajournal of choice, you can leverage the social media interest you’ve already gained to share the final product. Twitter’s a great way to get attention, especially if you use hashtags your colleagues follow. So is posting to disciplinary listservs. A blog post sharing the “story behind the paper” and summarizing your findings can be powerful, too. Together, these can be all it takes to get your article noticed.

Microbiologist Jonathan Eisen is a great example. He promoted his article upon publication with great success, provoking over 80 tweets and 17 comments on a blog post describing his PLOS ONE paper, “Stalking the Fourth Domain in Metagenomic Data”. The article itself has received ~47,000 views, 300 Mendeley readers, 23 comments, 35 Google Scholar citations, and hundreds of social media mentions to date, thanks in part to Eisen’s savvy self-promotion.

3. My CV will look like I couldn’t publish in “good” journals

It’s a sad fact that reviewers for tenure and promotion often judge the quality of articles by the journal of publication when skimming CVs. Most megajournal titles won’t ring any bells (yet) for those sorts of reviewers.

So, it’s your job to demonstrate the impact of your article. Luckily, that’s easier than you might think. Today, we don’t have to rely on the journal brand name as an impact proxy–we can look at the impact of the article itself, using article-level metrics.

One of the most compelling article-level stats is good ol’-fashioned citations. You can find these via Google Scholar, Scopus, or Web of Science, all of which have their pros and cons. Another great one is article downloads, which many megajournals report: even if your article is too new to be cited yet, you can show it’s making an impact with readers.

To demonstrate broader and more immediate impacts, also highlight your diverse audiences and the ways they engage with your research. Social media platforms leave footprints on the web. These “altmetrics” can be captured and aggregated at the article level:

                 scholarly audience                  public audience
  recommended    Faculty of 1000 recommendations     popular press mentions
  cited          traditional citations               Wikipedia citations
  discussed      scholarly blog coverage             blogs, Twitter mentions
  saved          Mendeley and CiteULike bookmarks    Delicious bookmarks
  read           PDF views                           HTML views

There are many places to collect this information; rounding it all up can be a pain. Luckily, many megajournals will compile these metrics for you: PLOS has developed its own article-level metrics suite (seen below); Nature Scientific Reports and many other publishers use Altmetric.com’s informative article-level metrics reports.

Article-level metrics on Eisen’s 2011 PLOS ONE article

If your megajournal doesn’t offer metrics, or you would like to compile metrics for all your megajournal articles in one place, you can pull everything together with an Impactstory profile instead.

And just like that, you’re turning megajournals into valuable assets for both science and your career: you’ve convinced your co-authors, done some savvy social media promotion to get your discipline’s attention, and turned your megajournal article from a CV liability into a CV victory through the smart use of article-level metrics. Congratulations!

Have you found success by publishing in megajournals? Got other megajournal publishing tips to offer? Share your story in the comments section below!

Four great reasons to stop caring so much about the h-index

You’re surfing the research literature on your lunch break and find an unfamiliar author listed on a great new publication. How do you size them up in a snap?

Google Scholar is an obvious first step. You type their name in, find their profile, and–ah, there it is! Their h-index, right at the top. Now you know their quality as a scholar.

Or do you?

The h-index is an attempt to sum up a scholar in a single number that balances productivity and impact. Anna, our example, has an h-index of 25 because she has 25 papers that have each received at least 25 citations.
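
In code, the definition is tiny. Here’s a minimal sketch:

```python
# The h-index: the largest h such that the author has h papers with at
# least h citations each.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 1]))  # five papers -> h = 4
```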

Today, this number is used for both informal evaluation (like sizing up colleagues) and formal evaluation (like tenure and promotion).

We think that’s a problem.

The h-index is failing on the job, and here’s how:

1. Comparing h-indices is comparing apples and oranges.

Let’s revisit Anna Llobet, our example. Her h-index is 25. Is that good?

Well, “good” depends on several variables. First, what is her field of study? What’s considered “good” in Clinical Medicine (84) is different from what’s considered “good” in Mathematics (19). Some fields simply publish and cite more than others.

Next, how far along is Anna in her career? Junior researchers have an h-index disadvantage. Their h-index can only be as high as the number of papers they have published, even if each paper is highly cited. If she is only 9 years into her career, Anna will not have published as many papers as someone who has been in the field for 35 years.

Furthermore, citations take years to accumulate. The consequence is that the h-index doesn’t have much discriminatory power for young scholars, and can’t be used to compare researchers at different stages of their careers. To compare Anna to a more senior researcher would be like comparing apples and oranges.

Did you know that Anna also has more than one h-index? Her h-index (and yours) depends on which database you are looking at, because citation counts differ from database to database. (Which one should she list on her CV? The highest one, of course. :))

2. The h-index ignores science that isn’t shaped like an article.

What if you work in a field that values patents over publications, like chemistry? Sorry, only articles count toward your h-index. The same goes for software, blog posts, and other types of “non-traditional” scholarly outputs (and even some you’d consider “traditional”: books).

Similarly, the h-index counts only citations to your work that come from journal articles written by other scholars. Your h-index can’t capture whether you’ve had tremendous influence on public policy or on improving global health outcomes. That doesn’t seem smart.

3. A scholar’s impact can’t be summed up with a single number.

We’ve seen from the journal impact factor that single-number impact indicators can encourage lazy evaluation. At the scariest times in your career–when you are going up for tenure or promotion, for instance–do you really want to encourage that? Of course not. You want your evaluators to see all of the ways you’ve made an impact in your field. Your contributions are too many and too varied to be summed up in a single number. Researchers in some fields are rejecting the h-index for this very reason.

So, why judge Anna by her h-index alone?

Questions of completeness aside, the h-index might not measure the right things for your needs. Its particular balance of quantity versus influence can miss the point of what you care about. For some people, that might be a single hit paper, popular with both other scholars and the public. (This article on the “Big Food” industry and its global health effects is a good example.) Others might care more about how their many, rarely cited papers are used by practitioners (like those by CG Bremner, who studied Barrett Syndrome, a lesser-known relative of gastroesophageal reflux disease). When evaluating others, the metrics you’re using should get at the root of what you’re trying to understand about their impact.

4. The h-index is dumb when it comes to authorship.

Some physicists are one of a thousand authors on a single paper. Should their fractional authorship weigh as much as your single-author paper? The h-index doesn’t take that into consideration.

What if you are first author on a paper? (Or last author, if that’s the way you indicate lead authorship in your field.) Shouldn’t citations to that paper weigh more for you than they do for your co-authors, since you had a larger influence on the development of that publication?

The h-index doesn’t account for these nuances.

So, how should we use the h-index?

Many have attempted to fix the h-index’s weaknesses with various computational models that, for example, reward highly-cited papers, correct for career length, rank authors’ papers against other papers published in the same year and source, or count just the average citations of the most high-impact “core” of an author’s work.
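
To make one of those corrections concrete, here’s a sketch of Hirsch’s “m quotient,” which divides the h-index by career length; the numbers are invented for illustration:

```python
# Sketch of the "m quotient": h-index divided by years since first paper,
# one proposed correction for career length. Illustrative numbers only.
def m_quotient(h_index: int, years_since_first_paper: int) -> float:
    return h_index / years_since_first_paper

print(round(m_quotient(25, 9), 2))   # Anna, 9 years in  -> 2.78
print(round(m_quotient(50, 35), 2))  # a 35-year veteran -> 1.43
```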

None of these have been widely adopted, and all of them boil down a scientist’s career to a single number that only measures one type of impact.

What we need is more data.

Altmetrics–new measures of how scholarship is recommended, cited, saved, viewed, and discussed online–are just the solution. Altmetrics measure the influence of all of a researcher’s outputs, not just their papers. A variety of new altmetrics tools can help you get a more complete picture of others’ research impact, beyond the h-index. You can also use these tools to communicate your own, more complete impact story to others.

So what should you do when you run into an h-index? Have fun looking if you are curious, but don’t take the h-index too seriously.

How to be the grad student your advisor brags about

Your advisor is ridiculously busy–so how do you get her to keep track of all the awesome research you are doing? Short answer: do great work that has such high online visibility, she can’t ignore it.

Easy, right?

But if you’re like me, you actually might appreciate a primer on how to maximize and document your research’s impact. Here, I’ve compiled a guide to get you started.

1. Do great work.

To begin with, you need to do work that’s worth bragging about. Self-promotion and great metrics don’t amount to much if your research isn’t sound.

2. Increase your work’s visibility.

Assuming that you’ve got that under control, making your “hidden” work visible is an easy next step. Gather the conference posters, software code, data, and other research products that have been sitting on your hard drive.

Using Figshare, you can upload datasets and make them findable online. You can do the same for your software using GitHub, and for your slide decks using Slideshare.

Want to make your work popular? Consider licensing it openly. Open licenses like CC-BY allow others to reuse your work more easily, advancing science quickly while still giving you credit. Here are some guides to help you license your data, code, and papers.

Making your work openly available has the benefit of allowing others to reuse and repurpose your findings in new and unexpected ways–adding to the number of citations you could potentially receive. These sites can also report metrics that let you see how often your work is viewed, downloaded, and used in other ways. (More about that later.)

3. Raise your own profile by joining the conversation.

Informal exchanges are the heart of scientific communication, but formal “conversations” like written responses to journal articles are also important. Here are three steps to raising your profile.

  1. Engage others in formal forums. You may already participate in conversations in your field at conferences and in the literature. If you do not, you should. Presenting posters, in particular, can be a helpful way to get feedback on your work while at the same time getting to know others in your field in a professional context.

  2. Engage others more and often. Don’t be a wallflower, online or off. Though it can be intimidating to chat up senior researchers in your field–or even other grad students, for that matter–it’s a necessary step in building a community of collaborators. An easy way to start is by joining the Web equivalent of a ‘water cooler’ conversation: Twitter. There are lots of great guides to help you get started (PDF). When you’ve gained some confidence and have long-form insights to add, start a blog to share your thoughts. This post offers great tips on academic blogging for beginners, as does this article.

  3. Engage others in the open. Conversations that happen via email only serve those who are on the email chain. Two great places to have conversations that can benefit anyone who chooses to listen–while also getting you some name recognition–are disciplinary listservs and Twitter. Open engagement also lets others join the debate.

4. Know your impact: track your work’s use online.

Once you’ve made your contributions to your discipline more visible, track the ways that your work is being used and discussed by others online. There are great tools that can help: the Altmetric.com bookmarklet, Academia.edu’s visualization dashboard, Mendeley’s Social Statistics summaries, basic metrics on Figshare, Github, and Slideshare, and Impactstory profiles.

See the buzz around articles with the Altmetric.com bookmarklet

The Altmetric.com bookmarklet can help you understand the reach of a particular article. Where altmetrics aren’t already displayed on a journal’s website, you can use the bookmarklet. Drag and drop the Altmetric bookmarklet (available here) into your browser toolbar, and then click it next time you’re looking at an article on a publisher’s website. You’ll get a summary of any buzz around your article–tweets, blog posts, mentions in the press, even Reddit discussions.

Track international impact with Academia.edu’s download map

One of our favorite altmetrics visualization suites can be found on Academia.edu. In addition to a tidy summary of pageviews and referral sources for documents hosted on their site, they also offer a great map visualization, which can help you easily see the international reach of your work. This tool can be especially helpful for those doing applied, internationally-focused research–for example, Swedish public health researchers studying the spread of disease in Venezuela–who want to understand the consumption of articles, white papers, and policy documents hosted on Academia.edu. One important limitation: it doesn’t cover documents hosted elsewhere on the web.

Understand who’s reading your work with Mendeley Social Statistics

Mendeley’s Social Statistics summaries can also help you understand what type of scholars are reading your research, and where they are located. Are they faculty or graduate students? Do they consider themselves biologists, educators, or social scientists? If you’re writing about quantum mechanics, your advisor will be thrilled to see you have many “Faculty” readers in the field of Physics. Like Academia.edu visualizations, Mendeley’s Social Statistics are only available for content hosted on Mendeley.com.

Go beyond the article: track impact for your data, slides, and code

The services above work well for research articles, but what about your data, slides, and code? Luckily, Figshare, Slideshare, and Github (which we discussed in Step 2) track impact in addition to hosting content.

To track your data’s impact, get to know Figshare’s basic social sharing statistics (Twitter, Google+, and Facebook), which are displayed alongside pageviews and cites.

To understand how others are using your presentations, use Slideshare’s metrics for slide decks. Impact is broken down into three categories: Views, Actions, and Embeds.

For code, leverage Github’s social functionalities. Stars indicate if others have bookmarked your projects, and Forks allow you to see if others are reusing your code.
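
You can also pull those numbers programmatically. Here’s a hedged sketch using GitHub’s public REST API (the repository path is a placeholder; unauthenticated requests are rate-limited):

```python
# Hedged sketch: fetch a public repository's stars and forks from GitHub's
# REST API. Replace the placeholder path with your own user/repo.
import requests

repo_path = "your-username/your-repo"  # placeholder, not a real project
resp = requests.get(f"https://api.github.com/repos/{repo_path}")
resp.raise_for_status()
repo = resp.json()

print("Stars:", repo["stargazers_count"])
print("Forks:", repo["forks_count"])
```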

Put it all together with Impactstory

So, there are many great places to discover your impact. Too many, in fact: it’s tough to visit all these individually, and tough to see and share an overall picture of your impact that way.

An Impactstory profile can help. Impactstory compiles information from across the Web on how often people view, cite, reuse, and share your journal articles, datasets, software code, and other research outputs. Send your advisor a link to your Impactstory profile and include it in your annual review–she’ll be impressed when reminded of all the work you’ve done (that software package she had forgotten about!) and all the attention your work is getting online (who knew your code gets such buzz!).

Congrats! You’re on your way.

You’re an awesome researcher who has lots of online visibility. Citations to your work have increased, now that you have name recognition and your work can more easily be found and reused. You’re tracking your impact regularly, and have a better understanding of your audience to show for it. Most importantly, you’re officially brag-worthy.

Are there tips I didn’t cover here that you’d like to share? Tell us in the comments.