Data-driven decisions with Net Promoter Score


Today we’re releasing some changes in the way users sign up for Impactstory profiles, based on research we’ve done to learn more about our users. It’s a great opportunity to share a little about what we learned, and to describe the process we used to do this research–both to add some transparency around our decision making, and to maybe help folks looking to do the same sorts of things. There’s lots to share, so let’s get to it:

Meet the Net Promoter Score

As part of our journey to find product-market fit for the Impactstory webapp, we’ve become big fans of the Net Promoter Score (NPS), an increasingly popular way to assess how much value users are getting from one’s product. It’s appealingly simple: we ask users to rate how likely they’d be to recommend Impactstory to a colleague, on a scale of 0-10, and why. Answers of 9-10 are Promoters, 0-6 are Detractors (7-8 are Passives). You subtract %detractors from %promoters and there’s your score.
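For the curious, here’s what that arithmetic looks like in a few lines of Python (just an illustration, not the code we actually used):

    # Minimal sketch of the NPS calculation described above (illustrative only).
    # Scores are the 0-10 "how likely are you to recommend us" answers.

    def net_promoter_score(scores):
        """Return the NPS for a list of 0-10 survey scores."""
        if not scores:
            raise ValueError("need at least one response")
        promoters = sum(1 for s in scores if s >= 9)   # 9-10
        detractors = sum(1 for s in scores if s <= 6)  # 0-6; 7-8 are passives and only dilute the score
        return round(100 * (promoters - detractors) / len(scores))

    # Example: 4 promoters, 3 passives, 3 detractors out of 10 responses -> NPS of 10
    print(net_promoter_score([10, 9, 9, 10, 8, 7, 8, 4, 6, 3]))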

It’s a useful score. It doesn’t measure how much users like you. It doesn’t measure how much they generally support the idea of what you’re doing. It measures how much you are solving real problems for real users, right now. Solving those problems so well that users will put their own reputation on the line and sing your praises to their friends.

Until we’re doing that, we don’t have product-market fit, we aren’t truly making something people want, and we don’t have a sustainable business. Same as any startup.

As a nonprofit, we’ve got lots of people who support what we’re doing and (correctly!) see that we’re solving a huge problem for academia as a whole. So they’ve got lots of good things to say to us. Which: yay. That’s fuel and we love it. But it can disguise the fact that we may not be solving their personal problems. We need to get at that signal, to help us find that all-important product-market fit.

Getting the data

We used Promoter.io to manage creating, sending, and collecting the email surveys. It just works and it saved us a ton of time. We recommend it. Our response rate was 28%, which we figure is pretty good for an email asking for help from people who don’t know you or owe you anything, without pestering them with any reminders. We sliced and diced users along many dimensions, and they all had about the same response rate, which improves the robustness of the findings. Since we assume users who have no altmetrics will hate the app, we only sent surveys to users with relatively complete profiles (at least three Impactstory badges).

Once we had responses, we followed up using Intercom, an app that nicely integrates most of our customer communication (feedback, support, etc). We got lots more qualitative feedback this way.

Once we had all our data, we exported the results into a spreadsheet and had us some Pivot Table Fun Time. Here’s the raw data in Google Docs (with identifying attributes removed to protect privacy) in case you’d like to dive into the data yourself.

Finally, we exported loads of user data from our Postgres app database hosted on Heroku. All that got added into the spreadsheet and pivot tables as well.
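If spreadsheets aren’t your thing, the same slicing-and-dicing is easy to sketch in pandas. This is purely illustrative (the column names are made up; the linked Google Doc has the real data):

    # Illustrative only: join survey responses with user attributes and compute
    # NPS per subgroup, the pandas version of our Pivot Table Fun Time.
    import pandas as pd

    surveys = pd.read_csv("nps_responses.csv")   # hypothetical: one row per response (user_id, score)
    users = pd.read_csv("user_attributes.csv")   # hypothetical: one row per user (user_id, has_twitter, ...)

    df = surveys.merge(users, on="user_id")
    df["bucket"] = pd.cut(df["score"], bins=[-1, 6, 8, 10],
                          labels=["detractor", "passive", "promoter"])

    def nps(group):
        shares = group["bucket"].value_counts(normalize=True)
        return round(100 * (shares.get("promoter", 0) - shares.get("detractor", 0)))

    # NPS for each subgroup, e.g. users with vs. without a Twitter account
    print(df.groupby("has_twitter").apply(nps))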

Here’s what we found

The overall NPS is 26, which is not amazing. But it is good. And encouragingly, it’s much better than we got when we surveyed users about our old, non-free version in March. Getting better is a good sign. We’ll take it.

Users who have made profiles in both versions (new and old) seem to agree. The overall NPS for these users was 58, which is quite strong. In fact, users of the old version were the group with the highest NPS overall in this survey. Since we made a lot of changes in the new app from the old, this wouldn’t have to have been true. It made us happy.

But we wanted more actionable results. So we sliced and diced everyone into subgroups along several dimensions, looking for features that can predict extra-high NPS in future sign-ups.

We found four of these predictive features. As it happens, each predictor changes the NPS of its group by the same amount: your NPS (on average) goes from 15 (ok) to 35 (good) if you

  1. have a Twitter account,
  2. have more than 20 online mentions of some kind (Tweets, Wikipedia, Pinterest, whatever) pointing to your publications,
  3. have made more than 55% of your publications green or gold open access, or
  4. have been awarded more than 6 Impactstory badges.

Of these, (4) is not super useful since it covaries a lot with numbers of mentions (2) and OA percentage (3); after all, we give out badges for both those things. A bit more surprisingly, users who have Twitter are likely to have more mentions per product, and less likely to have blank profiles, meaning Feature 1 accounts for some of the variance in Feature 2. So simply having a Twitter account is one of our best signals that you’ll love Impactstory.

Surprisingly, having a well-stocked ORCID profile with lots of your works in it doesn’t seem to predict a higher NPS at all. This was unexpected because we figured the kind of scholcomm enthusiasts who keep their ORCID records scrupulously up-to-date would be more likely to dig the kind of thing we’re doing with Impactstory. Plus they’d have an easier and faster time setting up a profile since their data is super easy for us to import. Good to have the data.

About 60% of responses included qualitative feedback. Analysing these, we found four themes:

  • It should include citations. Makes sense users would want this, given that citations are the currency of academia and all. Alas they ain’t gonna get it, not till someone comes out with an open and complete citation database. Our challenge is to help users be less bummed about this, hopefully by positioning Impactstory as a complement to indexes like Google Scholar rather than a competitor.
  • It’s pretty. That’s good to hear, especially since we want folks to share their profiles, make them part of their online identity. That’s way easier if you think it looks sharp.
  • It’s easy. Also great to hear, because the last version was not very easy, mostly as a result of feature bloat. It hurt to lose some features on this version, so it’s good to see the payoff was there.
  • It puts everything all in one place.  Presumably users were going to multiple places to gather all the altmetrics data that Impactstory puts in one spot. 

Here’s what we did

The most powerful takeaway from all this was that users who have Twitter get more out of Impactstory and like it more. And that makes sense…scholars with Twitter are more likely to be into this whole social media thing, and (in our experience talking with lots of researchers) more ready to believe altmetrics could be a useful tool.

So, we’ll redouble our focus on these users.

The way we’re doing that concretely right away is by changing the signup wizard to start with a “signup with Twitter” button. That’s a big deal because it means you’ll need a Twitter account to sign up, and therefore excludes some potential users. That’s a bummer.

But it’s excluding users who, statistically, are among the least likely to love the app. And it’s making it easier to sign up for the users that are loving Impactstory the most, and most keen to recommend us. That means better word of mouth, a better viral coefficient, and a chance to test a promising hypothesis for achieving product-market fit.

We’re also going to be looking at adding more Twitter-specific features like analysing users’ tweeted content and follower lists. More on that later.

To take advantage of our open-access predictor, we’ll be working hard to reach out to the open access community…we’re already having great informal talks with folks at SPARC and with the OA Button, and are reaching out in other ways as well. More on that later, too.

We’re excited about this approach to user-driven development. It’s something we’ve always valued, but often had a tough time implementing because it has seemed a bit daunting. And honestly, it is a bit daunting. It took a ton of time, and it takes a surprising amount of mental energy to be open-minded in a way that makes the feedback actionable. But overall we’re really pleased with the process, and we’re going to be doing it more, along with these kinds of blog posts to improve the transparency of our decision-making. Looking forward to hearing your thoughts!

Now, a better way to find and reward open access

There’s always been a wonderful connection between altmetrics and open science.

Altmetrics have helped to demonstrate the impact of open access publication. And since the beginning, altmetrics have excited and provoked ideas for new, open, and revolutionary science communication systems. In fact, the two communities have overlapped so much that altmetrics has been called a “school” of open science.

We’ve always seen it that way at Impactstory. We’re uninterested in bean-counting. We are interested in setting the stage for a second scientific revolution, one that will happen when two open networks intersect: a network of instantly-available diverse research products and a network of comprehensive, open, distributed significance indicators.

So along with promoting altmetrics, we’ve also been big on incentives for open access. And today we’re excited that we got a lot better at it.

We’re launching a new Open Access badge, backed by a really accurate new system for automatically detecting fulltext for online resources. It finds not just Gold OA, but also self-archived Green OA, hybrid OA, and born-open products like research datasets.

A  lot of other projects have worked on this sticky problem before us, including the Open Article Gauge, OACensus, Dissemin, and the Open Access Button. Admirably, these have all been open-source projects, so we’ve been able to reuse lots of their great ideas.

Then we’ve added oodles of our own ideas and techniques, along with plenty of research and testing. The result? Impactstory is now the best, most accurate way to automatically assess openness of publications. We’re proud of that.

And we know this is just the beginning! Fork our code or send us a pull request if you want to make this even better. Here’s a list of where we check for OA to get you started (with a rough sketch after the list of how the checks chain together):

  • The Directory of Open Access Journals, to see if it’s in their index of OA journals
  • CrossRef’s license metadata field, to see if the publisher has uploaded an open license
  • Our own custom list of DOI prefixes, to see if it’s in a known preprint repo
  • DataCite, to see if it’s an open dataset
  • The wonderful BASE OA search engine, to see if there’s a Green OA copy of the article
  • Repository pages directly, in cases where BASE was unable to determine openness
  • Journal article pages directly, to see if there’s a free PDF link (this is great for detecting hybrid OA)
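To make the flow concrete, here’s a rough sketch of how checks like these can be chained together. Every helper below is a stub standing in for the real lookup; the actual logic lives in our open-source code:

    # Rough sketch of a cascading OA check (illustrative only, not our production code).
    # Each stub stands in for a real lookup against the source named in its comment.

    def in_doaj(product):              return None  # stub: query the DOAJ journal index
    def has_crossref_license(product): return None  # stub: read CrossRef license metadata
    def is_preprint_doi(product):      return None  # stub: match the DOI prefix against a preprint list
    def is_datacite_dataset(product):  return None  # stub: look the DOI up in DataCite
    def found_in_base(product):        return None  # stub: search BASE for a Green OA copy
    def repository_page_open(product): return None  # stub: fetch the repository page directly
    def journal_page_has_pdf(product): return None  # stub: look for a free PDF link (hybrid OA)

    def detect_oa(product):
        """Run the checks in order and return the first piece of evidence found."""
        for check in (in_doaj, has_crossref_license, is_preprint_doi, is_datacite_dataset,
                      found_in_base, repository_page_open, journal_page_has_pdf):
            evidence = check(product)
            if evidence:
                return {"oa": True, "via": check.__name__, "evidence": evidence}
        return {"oa": False}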

What’s it mean for you? Well, Impactstory is now a powerful tool for spreading the word about open access. We’ve found that seeing that openness badge–or OH NOES lack of a badge!–on their new profile is powerful for a researcher who might otherwise not think much about OA.

So, if you care about OA: challenge your colleagues to go make a free profile and see how open they really are. Or you can use our API to learn about the openness of groups of scholars (great for librarians, or for a presentation to your department). Just hit the endpoint http://impactstory.org/u/someones_orcid_id to find out the openness stats for anyone.
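For example, something like this fetches that endpoint so you can see what comes back (we’re not spelling out the response format here; print a snippet and explore):

    # Illustrative only: fetch the profile endpoint for a given ORCID iD and peek at the response.
    import urllib.request

    orcid_id = "0000-0001-6728-7745"  # swap in any ORCID iD
    url = f"http://impactstory.org/u/{orcid_id}"
    with urllib.request.urlopen(url) as resp:
        print(resp.headers.get("Content-Type"))            # see what the server returns
        print(resp.read(500).decode("utf-8", errors="replace"))  # first chunk of the body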

Hit us up with any thoughts or comments, and enjoy!

Why researchers are loving the new Impactstory

We put our heart and soul into the new Impactstory and have been on pins and needles to hear what you think.  Well it’s been a week and the verdict is in — we’re hearing that the new version is awesome, fantastic, and truly excellent, a home run and must-have–an academic profile that’s exciting and relevant.

And so much more. So much more, in fact, that we wanted to take a little break from the frenzied responding, bugfixing, and feature-launching we’ve been doing this week and summarize a bit of what we’ve heard.

What do you like?

A lot of users have appreciated that it now takes seconds and is super easy to set up a profile that’s blazing fast and smooth to use: it’s instant insights about your research.

Unlike speed, beauty is in the eye of the beholder–but our beholders seem delightfully agreed that our new look is great, great, great. Whether users are calling it fresh or beautifully crafted, or sleek or smooth or snazzy, everyone seems to agree: the new version looks pretty damn awesome. And we are pretty thrilled to hear that.

They’re enjoying that it’s got some fun 🙂 And, we’re not surprised to hear that people like the new price point of Free, making it easier to recommend to others.  

What’s it good for?

Impactstory helps researchers find impacts of their work beyond just citations. People have found mentions they didn’t know about on Wikipedia, discussion in cool blog posts, and reviews on Faculty of 1000. And it’s not just numbers: it’s impact across the globe, and it’s connecting with people. For instance, user Peter van Heusden tweeted, “Using @Impactstory I discovered someone who is consistently promoting work I’m involved in, but who I had no idea existed!”

All this amounts to more than just a lovely ego boost (although it’s that too!). People are telling us that it’s motivating them to adopt more Open Science practices like uploading research slides to a proper repository, getting an ORCID, adding works to their ORCID profile, and celebrating their non-paper publications.

How are you using it?

People are already sending their Impactstory profiles to their funders, and their funders are loving them. Researchers have added their new profile to their CV, and are planning to use Impactstory data in innovative ‘pathways to impact’ statements for UK grants and in tenure and promotion packets.

Folks are including it in workshops. And even better — building things with our open data! Check out the ferret.io plugin; it rolled out Impactstory support this week and it’s really cool 🙂

What have we been doing?

We’ve made a bunch of changes this week in response to your feedback:

  • imports all your publications, not just DOIs. Everything on your ORCID profile now displays in your Impactstory profile, and we’re working on getting more openness and altmetrics data
  • Twitter integration
    • connecting Twitter updates your profile pic, so you don’t have to fight with Gravatar
    • you don’t have to enter your email manually–even faster signup
    • we’ll be using your Twitter feed for achievements in the future
  • there’s a new Open Sesame achievement
  • we changed the scores at the top of the profile beside your picture; they are now counts of your achievements
  • the achievements and the import process are better documented
  • we rolled out dozens of smaller features, usability enhancements, and bugfixes.

What’s next?

We’re on our way to the FORCE16 conference this week. We’ll be rolling feedback from the conference, along with your continued feedback, into ongoing improvements to the app.

And you?  Join in with everyone showing off their profile, spread the word (this is how we will grow), and if you don’t have a profile, get one, and tell us what you think!

Finally, thanks.

Finally, we’d like to thank the hundreds of passionate people who have helped us with money and with moral support along the way, from our early days till now. It’s safe to say the new Impactstory is a big hit.  It’s our hit, together.

 

The new Impactstory: Better. Freer.

We are releasing a new version of Impactstory!

https://impactstory.org/u/0000-0001-6728-7745


We baked what we’ve learned from hundreds of conversations with researchers into a sleeker, leaner, more useful Impactstory.

Our new Achievements showcase your meaningful accomplishments, not just counts. Our new three-part score helps you track your buzz, engagement, and openness. And our next-generation notification emails now reliably tell you, every week, what you want to know.

And of course we’ve got a slew of other new features as well, including Depsy integration, ORCID sync-on-demand, and full support for mobile.

What’s more, we’re simplifying and streamlining everywhere, eliminating little-used features and doubling down on what users have told us they love. Profile creation is now only via ORCID, we only deal in DOIs, and citation metrics are gone. As a result, creating a profile takes just seconds, our support for diverse research products (preprints, datasets, etc) is bulletproof, and metrics are now consistently clear and up-to-date. Along with a complete code rewrite, these changes make Impactstory faster and more reliable than it’s ever been.

Last but not least, not only are we making Impactstory better: we’re making it cheaper. As in, all the way cheaper. Free!

Why? We heard you love the idea, but not the price–largely because your disciplines or departments aren’t quite ready to use altmetrics for evaluation. We can see this is starting to change, and want to help that change happen as quickly as possible. That means letting as many researchers as possible engage with altmetrics, right now. Free helps that happen.

Alternative sustainability models (like freemium features and new grants) will allow us to continue to build and maintain tools like Impactstory and Depsy to help change how researchers think about understanding and measuring the influence of their work.

Sound good? It is. We think you’ll love it. Go make yourself a profile and see what you learn: https://impactstory.org (and if you’re a current Impactstory subscriber, check your email for migration details).

We think this new Impactstory is the best thing we’ve ever done, and it’s a big step towards creating the open science, altmetrics-powered future we believe in. Thanks for building that future with us. We’re looking forward to hearing what you think!

Let’s value the software that powers science: Introducing Depsy

Today we’re proud to officially launch Depsy, an open-source webapp that tracks research software impact.

We made Depsy to solve a problem:  in modern science, research software is often as important as traditional research papers–but it’s not treated that way when it comes to funding and tenure. There, the traditional publish-or-perish, show-me-the-Impact-Factor system still rules.

We need to fix that. We need to provide meaningful incentives for the scientist-developers who make important research software, so that we can keep doing important, software-driven science.

Lots of things have to happen to support this change. Depsy is a shot at making one of those things happen: a system that tracks the impact of software in software-native ways.

That means not just counting up citations to a hastily-written paper about the software, but actual mentions of the software itself in the literature. It means looking at how software gets reused by other software, even when it’s not cited at all. And it means understanding the full complexity of software authorship, where one project can involve hundreds of contributors in multiple roles that don’t map to traditional paper authorship.

OK, this sounds great, but how about some specifics? Check out these examples:

  • GDAL is a geoscience library. Depsy finds this cool NASA-funded ice map paper that mentions GDAL without formally citing it. Also check out key author Even Rouault: the project commit history demonstrates he deserves 27% credit for GDAL, even though he’s overlooked in more traditional credit systems.
  • lubridate improves date handling for R. It’s not highly-cited, but we can see it’s making a different kind of impact: it’s got a very high dependency PageRank, because it’s reused by over 1000 different R projects on GitHub and CRAN.
  • BradleyTerry2 implements a probability technique in R. It’s only directly reused by 8 projects—but Depsy shows that one of those projects is itself highly reused, leading to huge indirect impacts. This indirect reuse gives BradleyTerry2 a very high dependency PageRank score, even though its direct reuse is small, and that makes for a better reflection of real-world impact (there’s a toy sketch of this kind of calculation after the list).
  • Michael Droettboom makes small (under 20%) contributions to other people’s research software, contributions that are easy to overlook. But the contributions are meaningful, and they’re to high-impact projects, so in Depsy’s transitive credit system he ends up as a highly-ranked contributor. Depsy can help unsung heroes like Michael get rewarded.
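Here’s a toy sketch of the dependency PageRank idea, using made-up packages and the networkx library (this is not Depsy’s actual pipeline): an edge from A to B means A reuses B, and PageRank then rewards packages that are reused by heavily-reused packages.

    # Toy dependency PageRank sketch (package names and edges are made up).
    import networkx as nx

    g = nx.DiGraph()
    g.add_edges_from([
        ("app1", "libA"), ("app2", "libA"), ("app3", "libA"),  # libA is widely reused...
        ("libA", "libB"),                                      # ...and libA itself reuses libB,
        ("app4", "libC"),                                      # while libC has one ordinary dependent
    ])

    scores = nx.pagerank(g)
    for pkg, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{pkg}: {score:.3f}")

In this example libB is reused directly by only one project, but that project (libA) is itself heavily reused, so libB still scores well ahead of libC. That’s the indirect impact the BradleyTerry2 example is about.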

Depsy doesn’t do a perfect job of finding citations, tracking dependencies, or crediting authors (see our in-progress paper for more details on limitations). It’s not supposed to. Instead, Depsy is a proof-of-concept to show that we can do them at all. The data and tools are there. We can measure and reward software impact, like we measure and reward the impact of papers.

Embed impact badges in your GitHub README

Given that, it’s not a question of if research software becomes a first-class scientific product, but when and how. Let’s start having the conversations about when and how (here are some great places for that). Let’s improve Depsy, let’s build systems better than Depsy, and let’s (most importantly) start building the cultural and political structures that can use these systems.

For lots more details about Depsy, check out the paper we’re writing (and contribute!), and of course Depsy itself. We’re still in the early stages of this project, and we’re excited to hear your feedback: hit us up on twitter, in the comments below, or in the Hacker News thread about this post.

Depsy is made possible by a grant from the National Science Foundation.
Edit (Nov 15, 2015): changed the embed image to match the new badge.

Better than a free Ferrari: Why the coming altmetrics revolution needs librarians

This post was originally published as the foreword to Meaningful Metrics: A 21st Century Librarian’s Guide to Bibliometrics, Altmetrics, and Research Impact [paywall, embargoed for 6mo]. It’s also persistently archived on figshare.

A few days ago, we were speaking with an ecologist from Simon Fraser University here in Vancouver, about an unsolicited job offer he’d recently received. The offer included an astonishing inducement: anyone from his to-be-created lab who could wangle a first or corresponding authorship of a Nature paper would receive a bonus of one hundred thousand dollars.

Are we seriously this obsessed with a single journal? Who does this benefit? (Not to mention, one imagines the unfortunate middle authors of such a paper, trudging to a rainy bus stop as their endian-authoring colleagues roar by in jewel-encrusted Ferraris.)  Although it’s an extreme case, it’s sadly not an isolated one. Across the world, A Certain Kind of administrator is doubling down on 20th-century, journal-centric metrics like the Impact Factor.

That’s particularly bad timing, because our research communication system is just beginning a transition to 21st-century communication tools and norms. We’re increasingly moving beyond the homogeneous, journal-based system that defined 20th century scholarship.

Today’s scholars increasingly disseminate web-native scholarship. For instance, Jason’s 2008 tweet coining the term “altmetrics” is now more cited than some of his peer-reviewed papers. Heather’s openly published datasets have gone on to fuel new articles written by other researchers. And like a growing number of other researchers, we’ve published research code, slides, videos, blog posts, and figures that have been viewed, reused, and built upon by thousands all over the world. Where we do publish traditional journal papers, we increasingly care about broader impacts, like citation in Wikipedia, bookmarking in reference managers, press coverage, blog mentions, and more. You know what’s not capturing any of this? The Impact Factor.

Many researchers and tenure committees are hungry for alternatives, for broader, more diverse, more nuanced metrics. Altmetrics are in high demand; we see examples at Impactstory (our altmetrics-focused non-profit) all the time. Many faculty share how they are including downloads, views, and other alternative metrics in their tenure and promotion dossiers, and how evaluators have enthused over these numbers. There’s tremendous drive from researchers to support us as a nonprofit, from faculty offering to pay hundreds of extra dollars for profiles, to a Senegalese postdoc refusing to accept a fee waiver. Other altmetrics startups like Plum Analytics and Altmetric.com can tell you similar stories.

At higher levels, forward-thinking policy makers and funders are also seeing the value of 21st-century impact metrics, and are keen to realize their full potential. We’ve been asked to present on 21st-century metrics at the NIH, NSF, the White House, and more. It’s not these folks who are driving the Impact Factor obsession; on the contrary, we find that many high-level policy-makers are deeply disappointed with 20th-century metrics as we’ve come to use them. They know there’s a better way.

But many working scholars and university administrators are wary of the growing momentum behind next-generation metrics. Researchers and administrators off the cutting edge are ill-informed, uncertain, afraid. They worry new metrics represent Taylorism, a loss of rigor, a loss of meaning. This is particularly true among the majority of faculty who are less comfortable with online and web-native environments and products. But even researchers who are excited about the emerging future of altmetrics and web-native scholarship have a lot of questions. It’s a new world out there, and one that most researchers are not well trained to negotiate.

We believe librarians are uniquely qualified to help. Academic librarians know the lay of the land, they keep up-to-date with research, and they’re experienced providing leadership to scholars and decision-makers on campus. That’s why we’re excited that Robin and Rachel have put this book together. To be most effective, librarians need to be familiar with the metrics research, which is currently advancing at breakneck speed. And they need to be familiar with the state of practice–not just now, but what’s coming down the pike over the next few years. This book, with its focus on integrating research with practical tips, gives librarians the tools they need.

It’s an intoxicating time to be involved in scholarly communication. We’ve begun to see the profound effect of the Web here, but we’re just at the beginning. Scholarship is on the brink of a Cambrian explosion, a breakneck flourishing of new scholarly products, norms, and audiences. In this new world, research metrics can be adaptive, subtle, multi-dimensional, responsible. We can leave the fatuous, ignorant use of Impact Factors and other misapplied metrics behind us. Forward-thinking librarians have an opportunity to help shape these changes, to take their place at the vanguard of the web-native scholarship revolution. We can make a better scholarship system, together. We think that’s even better than that free Ferrari.

Farewell to Stacy

We’ve made a lot of happy announcements here on our blog, but today we’re making a sad one: Friday was Stacy’s last day at Impactstory. We’re eliminating our Director of Marketing position, because we need to focus significantly less on marketing and significantly more on finding product-market fit. We’re at a point where we must double down on understanding our users’ needs, and building the product it takes to meet them.

Stacy accomplished amazing things at Impactstory.  Here are just a few:

  • Turned our blog into the top source of information on altmetrics for thousands of readers (not just our opinion…we’ve had lots of folks tell us this)
  • Authored a terrific free e-book on how to raise the profile of scholars’ research
  • Created and ran our successful Advisor program, which now includes researchers and librarians from all over the world
  • Quintupled our followers on Twitter

Stacy is amazing. She’s smart, thorough, engaging, and a terrific combination of idealistic and practical. We’re so proud to have worked beside her.

Impactstory is going to move forward. We’re going to keep learning, keep improving, and we’re ultimately going to transform the world of scholarly communication–thanks in part to the great work that Stacy’s done. That’ll happen. But today, we miss our teammate, and our friend.

 

PS If you want to hire someone awesome, drop Stacy a line at stacykonkiel@fastmail.fm. Drop us a line and we’ll tell you in more detail just how awesome she is.

 

Open Science & Altmetrics Monthly Roundup (January 2015)

2015 kicked off with good news about Nature Publishing Group’s increased commitment towards Open Access, the launch of Frontiers’ research impact social network, Loop, and seven more cool developments in the world of Open Science and altmetrics. Read on!

Nature Publishing Group’s OA journals go CC-BY

Twenty Open Access journals published by Nature Publishing Group recently made the move to offering CC-BY by default. Previously, CC-BY-NC was the default license available for most NPG OA journals, and many authors had to pay higher article processing charges to use a CC-BY license. We applaud this move, which was one of many towards Open Access that NPG made in 2014. For more information, read Claire Calder’s recap of her team’s efforts on the Of Schemes and Memes blog.

How to pay for Gold Open Access fees, even if you’re not well-funded

Self-described “scientific have-not” Zen Faulkes recently blogged about the many strategies he uses to pay for the article processing charges (APCs) his Open Access publications incur.  They include: finding OA journals that waive APCs, petitioning his department chair, and sometimes asking co-authors at other institutions to cover the costs. It’s a great read for anyone concerned about making their work Open Access who lacks grant funding to cover the fees. Read the full list on Dr. Zen’s blog.

Elsevier acquires news monitoring service NewsFlo

Elsevier announced their acquisition of NewsFlo this month. The news monitoring service–which mines over 50,000 news outlets for mentions of research articles–will be integrated into reference management and social bookmarking site Mendeley. This partnership will pave the way for new altmetrics reports for articles and other content added to the platform. Currently, Altmetric.com is the only altmetrics aggregator that reports mainstream media mentions. More information on the acquisition can be found on TechCrunch.

Other open science & research metrics news

  • Altmetrics strategy meeting recap available for all to read: In December, altmetrics researchers and organizations from around the world convened at the PLOS headquarters in San Francisco to discuss ways to improve metrics for all. A report of the meeting’s results is now available on Figshare.

  • Frontiers launches new research impact social network, Loop: Loop is designed to bring together download and pageview metrics from a variety of publisher and academic websites into a researcher-centered profile. (Currently, these metrics are only sourced from Nature Publishing Group and Frontiers journals.) Researchers can follow each other’s profiles to get updates on new publications, and authors’ research networks (sourced from article co-author lists) can be easily explored. The free service plans to monetize in the future by possibly selling ads or selling its users’ data to advertisers. You can learn more about the service on the Loop website.

  • The many ways in which researchers use the scientific literature (hint: it ain’t only about citations): Paleontologist Andy Farke shared how he uses articles in his day-to-day work, and (not surprisingly) “citing in his own papers” isn’t high on the list. Instead, he uses articles to inform his teaching, when reviewing manuscripts, to help prepare him for talking to the public and the media about newly published studies, and more. So why then does academia value citations over the other ways we can measure articles’ use? Read Andy’s full list on his blog.

  • Impactstory Advisor of the Month, Chris Chan, on the library’s role in scholcomm innovations: We recently chatted with Chris on his work to bring ORCID to his campus, and what he thinks all librarians should do to foster the adoption of emerging scholarly communication technologies at their universities. Read the full interview on the Impactstory blog.

  • New resources available for librarians interested in altmetrics: We recently published two LibGuides (one for researchers and one for librarians) that can help librarians do altmetrics outreach at their university. We’re also now hosting virtual “office hours”, where librarians can message Stacy (our Director of Marketing & Research who’s also an academic librarian) to chat and ask questions about altmetrics and Impactstory. And for those in search of altmetrics professional development opportunities, Library Juice Academy is hosting an altmetrics & bibliometrics course.

  • 85% of research data is uncited & only 4-9% have altmetrics: a new study digs deep into citations and altmetrics for research data. Read the full study on arXiv.

Want updates like these as-they-happen? Follow us on Twitter! In addition to open science and altmetrics news, you’ll also get news from Impactstory–the only altmetrics non-profit.

Steal these altmetrics LibGuides!

[Screenshot: the “Ultimate Guide to Altmetrics (Librarian Edition)” LibGuide page]

We’re pleased to announce yet another altmetrics resource for librarians: ready-to-reuse altmetrics LibGuides!

As an academic librarian, I know how hard it can be to find and compile timely, trustworthy resources on a topic like altmetrics. That’s why I’ve created two altmetrics LibGuides, now available for reuse under a CC-BY 4.0 license.

These “Ultimate Guides to Altmetrics” can help researchers and librarians better understand the benefits (and limitations) of altmetrics. They include:

  • Examples of ways that researchers have used altmetrics in their CVs and for tenure and grants

  • Up-to-date tutorials on finding citation counts and altmetrics for articles, books, data, software, and more

  • Detailed comparisons of Altmetric.com, Impactstory, and PlumX

  • Curated videos, handouts, presentations, and other librarian-created altmetrics outreach materials

These are among the most up-to-date, comprehensive altmetrics LibGuides currently available. Check them out, I think you’ll agree:

The Ultimate Guide to Altmetrics (Researcher Edition)

The Ultimate Guide to Altmetrics (Librarian Edition)

Reuse these LibGuides at your own library or pass them along to a colleague. And please do let me know if you have questions or suggestions for improvements (team@impactstory.org).

Impactstory Advisor of the Month: Chris Chan (January 2015)

[Photo: Chris Chan]

The first Impactstory Advisor of the Month for 2015 is Chris Chan, Head of Information Services at Hong Kong Baptist University Library.

We interviewed Chris to learn more about his crucial role in implementing ORCID identifiers for HKBU faculty, and also why he’s chosen to be an Impactstory Advisor. Below, he also describes his vision for the role librarians can play in bringing emerging scholarly communication technologies to campus–a vision with which we wholeheartedly agree!

Tell us a bit about your role as the Head of Information Services at the Hong Kong Baptist University Library.

My major responsibilities at HKBU Library include overseeing our instruction and reference services, and advising the senior management team on the future development and direction of these services. I’m fortunate to work with a great team of librarians and paraprofessionals, and never tire of providing information literacy instruction and research help to our students and faculty.

Scholarly communication is a growing part of my duties as well. As part of its strategic plan, the Library is exploring how it can better support the research culture at the University. One initiative that has arisen from this strategic focus is our Research Visibility Project, for which I am the coordinator.

Why did you initially decide to join Impactstory?

Scholarly communication and bibliometrics have been of great interest to me ever since I first encountered them as a newly-minted academic librarian. Furthermore, the strategic direction that the Library is taking has made keeping up to date with the latest developments in this area a must for our librarians.

When I came across Impactstory I was struck by how useful and relatively straightforward (even in that early incarnation) it was for multiple altmetrics to be presented in an attractive and easy to understand way. At the time, I had just been discussing with some of our humanities faculty how poorly served they were by traditional citation metrics. I saw immediately in Impactstory one way that this issue could be addressed.

Why did you decide to become an Advisor?

As mentioned above, in the past year or so I have become heavily involved in our scholarly communication efforts. When the call for applications to be an Advisor came out, I saw it as an opportunity to get the inside scoop on one of the tools that I am most enthusiastic about.

What’s your favorite Impactstory feature?

I would have to say that my favourite feature is the ability to add an ORCID iD to the Impactstory profile! More on why that is below.

You’ve been hard at work recently implementing ORCID at HKBU. (I especially like this video tutorial you produced!) How do you envision the library working in the future to support HKBU researchers using ORCID and other scholarly communication technologies?

Academic libraries around the world are re-positioning themselves to ensure that their collections and services remain relevant to their members. The scholarly communication environment is incredibly dynamic, and I think that librarians have an opportunity to provide tremendous value to our institutions by serving as guides to, and organizers of, emerging scholarly communication technologies.

Our ORCID initiative at HKBU is a good example of this. We have focused heavily on communicating the benefits of having an ORCID iD and how, in the long run, this will streamline research workflows and ensure scholars receive the proper credit for their work. Another guiding principle has been to make adoption as painless as possible for our faculty. They will be able to create an ORCID iD, connect it with our system, and automatically populate it with their latest five years of research output (painstakingly checked for accuracy by our team), all in just a few minutes.

I believe that as information professionals, librarians are well-positioned to take on such roles. Also, in contrast to some of our more traditional responsibilities, these services bring us into close contact with faculty, raising the visibility of librarians on campus. These new relationships could open doors to further collaborations on campus.

Thanks, Chris!

As a token of our appreciation for Chris’s outreach efforts, we’re sending him an Impactstory travel mug from our Zazzle store.

Chris is just one part of a growing community of Impactstory Advisors. Want to join the ranks of some of the Web’s most cutting-edge researchers and librarians? Apply to be an Advisor today!