“Share your impact story” contest winner announced!

Last week, we asked you to share how Impactstory has helped your career. Today, we’re announcing the contest winner: Dr. Emilio Bruna!

Dr. Emilio Bruna, our contest winner

Emilio is a Professor in the Department of Wildlife Ecology & Conservation at the University of Florida and an Open Science advocate. Here’s his impact story:

I included Impactstory data in my portfolios for 1) promotion to full professor and 2) selection to UF’s Academy of Distinguished Teaching Scholars, a campus-wide faculty award. Both were successful.

But perhaps more importantly, I included Impactstory in my workshop on scientific publishing for graduate students, where in one of the sessions all the participants set up ORCID IDs, Researcher IDs, and Impactstory Profiles – check it out. Students get it.

Emilio’s story echoes many others we’ve heard since founding Impactstory: you’re using our service to uncover all the ways in which your research makes an impact, and you’re using that data when going up for tenure & promotion, applying for grants and awards, and teaching the next generation of scientists what it means to be an influential scholar.

For having the best story, Emilio wins an Impactstory t-shirt of his choice. Congrats, Emilio!

And thanks to all of our contest participants!

Contest: Share your impact story and win a Free T-shirt!

Two years ago, we asked you to share your pains with us. Your feedback enabled us to build a service that helps researchers learn about and share their own “impact stories” every day.

Since then, we’ve grown exponentially. Now, it’s a good time to hear your success stories.

How have you used your Impactstory data, and to what effect? What has Impactstory helped you discover about the reach of your work? How has Impactstory helped your career? 

Some examples of the stories we’ve heard and would love to hear more of include:

  • I used Impactstory to make my case for tenure–and I got it!

  • Impactstory data helped me figure out which of my research projects has “broader impacts,” and I used that information to get a grant!

  • I put Impactstory data on my CV during a job hunt, and got some compliments–and a job!

Knowing more about how you use Impactstory can help us plan which features to implement, and even help us imagine features we haven’t yet dreamed up!

How to participate

Send an email with your story in a paragraph or two to team@impactstory.org, or post it on our Facebook page.

The author of the best story will receive an Impactstory t-shirt of their choice. And everyone who participates will get their very own stash of Impactstory stickers!

The contest closes next Wednesday, April 23rd, at 12 pm EDT. A winner will be announced here on the Impactstory blog on Thursday, April 24th–stay tuned!

Ten things you need to know about ORCID right now

An ORCID identifier for Mike Eisen (or as we know him, http://orcid.org/0000-0002-7528-738X)

Have you ever tried to search for an author, only to discover that he shares a name with 113 other researchers? Or realized that Google Scholar stopped tracking citations to your work after you took your spouse’s surname a few years back?

If so, you’ve probably wished for ORCID.

ORCID IDs are permanent identifiers for researchers. Community uptake has increased tenfold over the past year, and ORCID continues to be adopted by new institutions, funders, and journals on a daily basis. It may prove to be one of the most important advances in scholarly communication in the past ten years.

Here are ten things you need to know about ORCID and its importance to you.

1. ORCID protects your unique scholarly identity

There are approximately 200,000 people per unique surname in China. That’s a lot of “J Wang”s–more than 1200 in nanoscience alone! Same for lots of other names: we’re just not as uniquely named as we think.

Not a Wang? You’ll probably still need ORCID if you plan to take your spouse’s family name, or if you ever accidentally omit your middle initial from the byline when submitting a manuscript.

ORCID solves the author name problem by giving individuals a unique, 16-digit identifier that persists over time.

The numbers are stored in a central registry, which will power a research infrastructure that ensures that people find the correct “J Wang” and get credit for all their publications.

2. Creating an ORCID identifier takes 30 seconds

Setting up an ORCID record is easier than setting up a Facebook account, and literally only takes 30 seconds.

Plus, if you’ve published before, you likely already have a ResearcherID or Scopus Author ID, or you may have publications indexed in CrossRef–which means that you can easily import information from those systems into your ORCID record, letting those websites do the grunt work for you.

3. ORCID is getting big fast

Growth in ORCID identifiers, from Oct. 2012-Mar. 2014

Even if you haven’t yet encountered ORCID, you likely will soon. The number of ORCID users grew tenfold over 2013, and continues to grow daily. You’ll likely encounter ORCID identifiers more and more often on journal websites and funding applications–a great reason to better understand ORCID’s purpose and uses.

4. ORCID lasts longer than your email address

Anyone who has ever moved institutions knows the pain of losing touch with colleagues once access to your old university email disappears. ORCID eases that pain by storing your most recent email address. If you choose to share it, your email address can be shared across platforms–meaning you spend less time updating your many profiles.

5. ORCID supports 37 types of “works,” from articles to dance performances

Any type of scholarly output you create, ORCID can handle.

Are you a traditional scientist, who writes only papers and the occasional book chapter? ORCID can track ‘em.

Are you instead a cutting-edge computational biologist who releases datasets and figures for your thesis as they are created? ORCID can track that, too.

Not a scientist at all, but an art professor? You can import your works into ORCID as well, using ISNI2ORCID… you get the idea.

ORCID will even start importing information about your service to your discipline soon!

6. You control who views your ORCID information

Concerned about the privacy implications of ORCID? You’re in luck–ORCID has granular privacy controls.

When setting up your ORCID record, you can select the default privacy settings for all of your content–Open to everyone, Open to trusted parties (web services that you’ve linked to your ORCID record), or Open only to yourself. Once your profile is populated, you can set custom privacy levels for each item, easy as pie.

7. ORCID is glue for all your research services

You can connect your ORCID account with websites including Web of Science, Figshare, and Impactstory, among many others.

Once they’re connected, you can easily push information back and forth between services–meaning that a complete ORCID record lets you automatically import the same information in multiple places, rather than entering it over and over again on different websites.

And new services are connecting to ORCID every day, sharing information across an increasing number of platforms–repositories, funding agencies, and more!

8. Journals, funders & institutions are moving to ORCID

Some of the world’s largest publishers, funders, and institutions have adopted ORCID.

Over 1000 journals, including publications by PLOS, Nature, and Elsevier, are using ORCID as a way to make it easier for authors to manage their information in manuscript submission systems. ORCID can also collect your publications from across these varied services, making it possible to aggregate author-level metrics.

Funding agencies are integrating their systems with ORCID for similar reasons. Funders from the Wellcome Trust to the NIH now request that grantees use ORCIDs to manage information in their systems, and many other funding agencies across the world are following suit.

In 2013, universities accounted for the largest percentage of all new ORCID members. ORCID helps institutions track your work, compile information for university-level reporting (e.g., total funding received by its scholars), and more efficiently manage information on faculty profiles. By eliminating redundancies and automating some reporting functions, ORCID will be especially helpful in reducing the time and money spent on REF and other assessment activities.

9. When everyone has an ORCID identifier, scholarship gets better

How many hours have you wasted filling in your address, employment history, collaborator names and affiliations, and so on when applying for grants or submitting manuscripts? For many publishers and funders, you can now simply supply your ORCID identifier, saving you precious time for research.

In addition to increasing efficiency, ORCID can also help connect funding dollars with tangible outputs, track citations beyond journal articles, and help keep author contact information up-to-date.

10. ORCID is open source, open data, and community-driven

ORCID is a community-driven organization. You can help shape its development by adding and voting for ideas on ORCID’s feedback forum.

It’s also Open by design. ORCID is an open source web-app that allows other web-apps to use its open API and mine its open data. (We actually use ORCID’s open API to easily import information into your Impactstory profile.) Openness like ORCID’s supports innovation and transparency, and can keep us from focusing myopically on limited publication types or single indicators of impact.
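As a small illustration of what that open API makes possible, here is a minimal Python sketch that fetches a researcher’s public ORCID record. The endpoint shown is ORCID’s current public v3.0 API; the version string and the response fields are assumptions that have changed over the years, so treat this as a sketch rather than a guaranteed recipe.

```python
# A hedged sketch of reading a public ORCID record over ORCID's
# public API (v3.0 endpoint assumed; no authentication needed).
import json
import urllib.request

PUB_API = "https://pub.orcid.org/v3.0"

def record_url(orcid_id):
    """Build the public-API URL for a researcher's record."""
    return f"{PUB_API}/{orcid_id}/record"

def fetch_public_record(orcid_id):
    """Fetch a researcher's public ORCID record as JSON."""
    req = urllib.request.Request(
        record_url(orcid_id),
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires network access):
#   record = fetch_public_record("0000-0002-7528-738X")
#   print(record["orcid-identifier"]["path"])
```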

And there we have it–ten things you now know about ORCID. Reference them and you’ll sound like an expert at your next department meeting (to which you should of course bring your custom ORCID mug). 🙂

Do you use ORCID? Leave your ORCID identifier in the comments, along with your thoughts about the system.

Thanks to ORCID’s Rebecca Bryant for feedback on this post.

Announcing a better way to measure your value: the Total Impact Score

Measuring the full impact of a scholar’s work is important to us here at Impactstory. No single metric captures all the flavors of your impact–until now.

We’re announcing a thrilling new feature to be rolled out in the next few days: Total Impact Scores.* Now, using one metric to rule them all, you can capture and calculate not only your value as a Scholar, but your worth as a Human Being.

We are increasingly able to track your productivity, effectiveness, and health thanks to the Quantified Self movement. Smart appliances are able to tell us more than ever about your habits in the home.

By forging partnerships with new data providers, we’re able to get a fuller picture of your value on the job and in your private life. To help you make sense of all that data, we’ve summarized your impact in the Total Impact Score.

While the exact algorithms we use to calculate your Total Impact Scores are proprietary, we can share with you some of the data streams that are taken into account when compiling your Total Impact Score:

We have also paid close attention to concerns about the over-dependence upon quantitative measures, and will soon roll out qualitative supplements to the Total Impact Score, including full-text reports on your effectiveness as a parent, spouse, co-worker, and friend–as reported by your loved ones and colleagues.

Stay tuned for future announcements about the Total Impact Score and other innovations in altmetrics!

* Some might recognize the name–Total-Impact is what we called the first iteration of Impactstory. With our single impact metric, the Total Impact Score, you can truly calculate your total impact, beyond the Academy.

Top 5 altmetrics trends to watch in 2014

Last year was an exciting one for altmetrics. But it’s history. We were recently asked: what’s 2014 going to look like? So, without further ado, here are our top 5 trends to watch:

Openness: This is just part of a larger trend toward open science–something altmetrics is increasingly (and aptly) identified with. In 2013, it became clearer than ever before that we’re winning the fight for universal OA. Since metrics are qualitatively more valuable when we can verify, share, remix, and build on them, we see continued progress toward making both traditional and novel metrics more open. But closedness still offers quick monetization, and so we’ll see continued tension here.

Acquisitions by the old guard: Last year saw the big players start to move into the altmetrics space, with EBSCO getting Plum Analytics and Elsevier grabbing Mendeley. In 2014 we’ll likely see more high-profile altmetrics acquisitions, as established megacorps attempt to hedge their bets against industry-destabilizing change. We’re not against this, per se; it’s a sign that altmetrics are quickly coming of age. But we also think it underscores the importance of having a nonprofit, scientist-run altmetrics provider, too.

More complex modelling: Lots of money got invested in altmetrics in 2013. This year it’ll get spent, largely on improving the descriptive power of altmetrics tools. We’ll see more network-awareness (who tweeted or cited your paper? how authoritative are they?), more context mining (is your work cited from methods or discussion sections?), more visualization (show me a picture of all my impacts this month), more digestion (are there three or four dimensions that can represent my “scientific personality?”), more composite indices (maybe high Mendeley plus low Facebook is likely to be cited later, but high on both not so much). The low-hanging altmetrics fruit–things like simply counting tweets–is increasingly plucked. In 2014 we’ll see the beginning of what comes next.

Growing interest from administrators and funders: We gave multiple invited talks at the NSF, NIH, and White House this year to folks highly placed in the research funding ecosystem. These leaders are keenly aware of the shortcomings of traditional impact assessment, and eager to supplement it with new data. Administrators too want to tell more meaningful, textured stories about faculty impact. So in 2014, we’ll see several grant, hiring, and T&P guidelines suggest applicants include altmetrics when relevant.

Empowered scientists: But this interest from the scholarly communications superstructure is tricky. Historically, metrics of scholarly impact have often been wielded as technologies of control: reductionist, Taylorist management tools. There’s been concern that more metrics will only tighten this control. That’s not misplaced. But nor is it the only story: we believe 2014 will also see the emergence of the opposite trend. As scientists use tools like Impactstory to gather, analyze, and share their own stories, comprehensive metrics become a way for them to articulate more textured, honest narratives of impact in decisive, authoritative terms. Altmetrics will give scientists growing opportunities to show they’re more than their h-indices.

And there you have it, our top five altmetrics trends for 2014. Are we missing any? Let us know in the comments!

Altmetrics: A “bibliometric nightmare?”

Our growing user base stays pretty excited about using altmetrics to tell better stories about their impacts, and we’re passionate about helping them do it better. So while we both love discussing altmetrics’ pros and cons, we prefer to err on the side of doing over talking, so we don’t blog about it much.

But we appreciated David Colquhoun’s effort to get a discussion going around his recent blog post, so are jotting down a few quick thoughts here in response. It was an interesting read, in part because David may imagine we disagree a lot more than we in fact do.

We agree that bibliometrics is a tricky and complicated topic; folks have been arguing about the applicability and validity of citation mining for many decades now [paywall], in much more detail than either David or we have time to cover completely. But what’s certain is that use of citation-based metrics like the Impact Factor has become deeply pathological.

That’s why we’re excited to be promoting a conversation reexamining metrics of science, a conversation asking if academia as an institution is really measuring what’s meaningful. And of course the answer is: no. Not yet. So, as an institution, we need to (1) stop pretending we are and (2) start finding ways to do better. At its core, this is what altmetrics is all about–not Twitter or any other particular platform. And we’re just getting started.

We couldn’t agree more that post-publication peer-review is the future of scholarly communication. We think altmetrics will be an important part of this future, too. Scientists won’t have time to Read All The Things in the future, any more than they do now. Future altmetrics systems–especially as we begin to track who discusses papers in various environments, and what they’ve said–will help digest, report, flag, and attract expert assessments, making a publish-then-review ecosystem practical. Even today, lists like the Altmetric top 100 can help attract expert review like David’s to the highly shared papers where it’s particularly needed.

We agree that a TL;DR culture does science no favors. That’s why we’re enthusiastic about the potential of social media and open review platforms to help science move beyond the formalized swap meet of journal publishing, on to actual in-depth conversations. It’s why we’re excited about making research conversation, data, analysis, and code first-class scholarly objects that fit into the academic reward system. It’s time to move beyond the TL;DR of the article, and start telling the whole research story.

So we’re happy that David agrees we must “give credit for all forms of research outputs, not only papers.” Although of course, not everyone agrees with David or Jason or Heather. We hear from lots of researchers that they’ve got an uphill battle arguing their datasets, blog posts, code, and other products are really making an impact. And we also hear that Impactstory’s normalized usage, download, and other data helps them make their point, and we’re pretty happy about that. Our data could be a lot more effective here (and stay tuned, we’ve got some features rolling out for this…), but it’s a start. And starts are important.

So are discussions. So thanks, David, for sharing your thoughts on this, and sorry we don’t have time to engage more deeply on it. If you’re ever in Vancouver, drop us a line and we’ll buy you a beer and have a Proper Talk :). And thanks to everyone else in this growing community for keeping great discussions on open science, web-native scholarship, and altmetrics going!

Uncovering the impact of software

Academics — and others — increasingly write software.  And we increasingly host it on GitHub.  How can we uncover the impact our software has made, learn from it, and communicate this to people who evaluate our work?

GitHub itself gets us off to a great start.  GitHub users can “star” repositories they like, and GitHub displays how many people have forked a given software project — started a new project based on the code.  Both are valuable metrics of interest, and great places to start qualitatively exploring who is interested in the project and what they’ve used it for.

What about impact beyond GitHub?  GitHub repositories are discussed on Twitter and Facebook.  For example, the GitHub link to the popular jquery library has been tweeted 556 times and liked on Facebook 24 times (and received 18k stars and almost 3k forks).

Is that a lot?  Yes!  It is one of the runaway successes on GitHub.
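For readers who want to pull these numbers themselves, here is a minimal sketch against GitHub’s public REST API. The `stargazers_count` and `forks_count` fields come from the v3 API; note that unauthenticated requests are rate-limited, and the counts for any repository will of course have changed since this post was written.

```python
# A minimal sketch of pulling stars and forks from GitHub's
# public REST API (v3; no authentication needed for public repos).
import json
import urllib.request

API_ROOT = "https://api.github.com/repos"

def repo_api_url(owner, repo):
    """Build the API URL for a repository."""
    return f"{API_ROOT}/{owner}/{repo}"

def fetch_repo_stats(owner, repo):
    """Return (stars, forks) for a public GitHub repository."""
    with urllib.request.urlopen(repo_api_url(owner, repo)) as resp:
        data = json.load(resp)
    return data["stargazers_count"], data["forks_count"]

# Example (requires network access):
#   stars, forks = fetch_repo_stats("jquery", "jquery")
#   print(f"jquery/jquery: {stars} stars, {forks} forks")
```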

How much attention does an average GitHub project receive? We want to know, to give reference points for the impact numbers we report.  Archive.org to the rescue! Archive.org posted a list of all GitHub repositories active in December 2012.  We just wanted a random sample of these, so we wrote some quick code to pull random repos from this list, grouped by year the repo was created on GitHub.

Here is our reference set of 100 random GitHub repositories created in 2011.  Based on this, we’ve calculated that receiving 3 stars puts you in the top 20% of all GitHub repos created in 2011, and 7 stars puts you in the top 10%.  Only a few of the 100 repositories were tweeted, so getting a tweet puts you in the top 15% of repositories.
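The percentile calculation behind that reference set is straightforward; here is a sketch (the star counts below are made up for illustration; the real sample came from the Archive.org list of active repositories):

```python
# Sketch of the percentile calculation behind the reference set.
# The star counts here are hypothetical stand-ins for the sample
# of 100 random 2011 repositories.

def percentile_rank(value, reference):
    """Fraction of reference repositories with a count <= value."""
    below_or_equal = sum(1 for r in reference if r <= value)
    return below_or_equal / len(reference)

# A toy reference sample of star counts for 10 repos:
sample_stars = [0, 0, 0, 0, 1, 1, 2, 3, 5, 12]

print(percentile_rank(3, sample_stars))  # 0.8, i.e. top 20%
```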

You can see this reference set in action on this example, rfishbase, a GitHub repository by rOpenSci that provides an R interface to the fishbase.org database:

[Screenshot: Impactstory metrics for the rfishbase repository]

So at this point we’ve got recognition within GitHub and social media mentions, but what about contribution to the academic literature?  Have other people used the software in research?

Software use has been frustratingly hard to track for academic software developers, because there are poor standards and norms for citing software as a standalone product in reference lists, and citation databases rarely index these citations even when they exist. Luckily, publishers and others are beginning to build interfaces that let us query for URLs mentioned within the full text of research papers… all of a sudden, we can discover attribution links to software packages hidden not only in reference lists, but also in methods sections and acknowledgements! For example, the GitHub URL for a crowdsourced repo on an E. coli outbreak has been mentioned in the full text of two PLOS papers, as discovered on ImpactStory:

[Screenshot: full-text mentions of the E. coli outbreak repository, shown on ImpactStory]
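If you want to experiment with this kind of full-text URL search yourself, one openly available interface is Europe PMC’s REST search API. The sketch below is illustrative only: it is not necessarily the service ImpactStory queries, and the quoted-phrase query syntax is an assumption.

```python
# A hedged sketch: searching Europe PMC's open corpus for papers
# that mention a given URL. The REST search endpoint is real, but
# treat the query syntax as an assumption.
import json
import urllib.parse
import urllib.request

SEARCH_ROOT = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def search_url(mentioned_url):
    """Build a search URL for papers mentioning the given string."""
    query = urllib.parse.urlencode({
        "query": f'"{mentioned_url}"',  # quoted phrase search
        "format": "json",
    })
    return f"{SEARCH_ROOT}?{query}"

def papers_mentioning(mentioned_url):
    """Return the hit count for papers whose text mentions the URL."""
    with urllib.request.urlopen(search_url(mentioned_url)) as resp:
        data = json.load(resp)
    return int(data["hitCount"])

# Example (requires network access):
#   n = papers_mentioning("github.com/ehec-outbreak-crowdsourced/BGI-data-analysis")
#   print(n, "papers mention the repository")
```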

There is still a lot of work for us all to do. How can we tell the difference between 10 labmates starring a software repo and 10 unknown admirers? How can we pull in second-order impact, to understand how important the software has been to the research paper, and how impactful the research paper was?

Early days, but we are on the way. Type in your GitHub username and see what we find!

Nature Comment: Altmetrics for Alt-Products

One of our goals at ImpactStory is widespread respect for all kinds of research products.  We therefore celebrate the upcoming NSF Policy change to BioSketch requirements, instructing investigators to list their notable Products rather than their Publications in all grant proposals.  Yay!

This policy change, and the resulting need to gather altmetrics across scholarship, is discussed in a Comment just published in Nature, authored by yours truly:

    Piwowar H. (2013). Value all research products. Nature.

The article will eventually be behind a paywall but is free for a few days, so run over and read it quickly! 🙂

I’ve also written up a few supplementary blog posts to the Comment, on my personal blog:

  • the first draft of the article (quite different, and with some useful details that didn’t make it into the final version)
  • behind-the-scenes look at the editorial and copyright process

And here for convenience is the ImpactStory exemplar mentioned in the article:  a data set on an outbreak of Escherichia coli has received 43 ‘stars’ in the GitHub software repository, 18 tweets and two mentions in peer-reviewed articles (see http://impactstory.org/item/url/https://github.com/ehec-outbreak-crowdsourced/BGI-data-analysis).

A new framework for altmetrics

At total-impact, we love data. So we get a lot of it, and we show a lot of it, like this:


There’s plenty of data here. But we’re missing another thing we love: stories supported by data. The Wall Of Numbers approach tells much, but reveals little.

One way to fix this is to Use Math to condense all of this information into just one, easy-to-understand number. Although this approach has been popular, we think it’s a huge mistake. We are not in the business of assigning relative values to different metrics; the whole point of altmetrics is that depending on the story you’re interested in, they’re all valuable.

So we (and from what they tell us, our users) just want to make those stories more obvious—to connect the metrics with the story they tell. To do that, we suggest categorizing metrics along two axes: engagement type and audience. This gives us a handy little table:

Now we can make way more sense of the metrics we’re seeing. “I’m being discussed by the public” means a lot more than “I seem to have many blogs, some twitter, and a ton of Facebook likes.” We can still show all the data (yay!) in each cell—but we can also present context that gives it meaning.
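To make the idea concrete, here is a small sketch of how raw metrics might be bucketed along those two axes. The particular assignments below are illustrative guesses, not the actual table from this post or the categorization Impactstory ships.

```python
# Illustrative sketch: bucketing raw metrics along two axes,
# engagement type x audience. The specific assignments are
# examples only, not Impactstory's actual categorization.
CATEGORIES = {
    ("discussed", "the public"): ["facebook_likes", "tweets"],
    ("discussed", "scholars"):   ["science_blog_posts"],
    ("saved",     "scholars"):   ["mendeley_readers", "citeulike_bookmarks"],
    ("cited",     "scholars"):   ["citations"],
    ("viewed",    "the public"): ["html_views"],
}

def describe(metrics):
    """Turn raw counts into phrases like 'discussed by the public'."""
    phrases = []
    for (engagement, audience), names in CATEGORIES.items():
        total = sum(metrics.get(name, 0) for name in names)
        if total > 0:
            phrases.append(f"{engagement} by {audience} ({total})")
    return phrases

print(describe({"tweets": 12, "facebook_likes": 3, "citations": 2}))
# -> ['discussed by the public (15)', 'cited by scholars (2)']
```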

Of course, that context is always going to involve an element of subjectivity. I’m sure some people will disagree about elements of this table. We categorized tweets as public, but some tweets are certainly from scholars. Sometimes scholars download HTML, and sometimes the public downloads PDFs.

Those are good points, and there are plenty more. We’re excited to hear them, and we’re excited to modify this based on user feedback. But we’re also excited about the power of this framework to help people understand and engage with metrics. We think it’ll be essential as we grow altmetrics from a source of numbers into a source of data-supported stories that inform real decisions.