New OpenAlex API features!

We’ve got a ton of great API improvements to report! If you’re an API user, there’s a good chance there’s something in here you’re gonna love.

Search

You can now search both titles and abstracts. We’ve also implemented stemming, so a search for “frogs” now automatically gets you results mentioning “frog,” too. Thanks to these changes, searches for works now deliver around 10x more results. This can all be accessed using the new search query parameter.
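
For illustration, a works search is just the `search` query parameter on the works endpoint (a minimal sketch; `api.openalex.org` is assumed here as the base URL, so check the docs for the canonical host):

```python
from urllib.parse import urlencode

def works_search_url(query):
    """Build a works search URL; with stemming, "frogs" also matches "frog"."""
    return "https://api.openalex.org/works?" + urlencode({"search": query})

print(works_search_url("frogs"))
```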

New entity filters

We’ve added support for tons of new filters, which are documented here. You can now:

  • get all of a work’s outgoing citations (ie, its references section) with a single query. 
  • search within each work’s raw affiliation data to find an arbitrary string (eg a specific department within an organization)
  • filter on whether or not an entity has a canonical external ID (works: has_doi, authors: has_orcid, etc)
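
As a sketch of how these might be combined in a query (the comma-joined `key:value` filter syntax shown here is an assumption to verify against the filter docs):

```python
def works_filter_url(**filters):
    """Join filters as comma-separated key:value pairs on the works endpoint."""
    filter_expr = ",".join(f"{k}:{v}" for k, v in filters.items())
    return f"https://api.openalex.org/works?filter={filter_expr}"

# e.g. only works that have a canonical external ID (a DOI)
print(works_filter_url(has_doi="true"))
```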

Request multiple records by ID at once

This has been our most-requested feature and we’re super excited to roll it out! By using the new OR operator, you can request up to 50 entities in a single API call. You can use any ID we support–DOI, ISSN, OpenAlex ID, etc.
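
A hypothetical sketch, assuming the OR operator is written as a pipe character between ID values inside a single filter (check the docs for the exact syntax; the DOIs below are only placeholders):

```python
def multi_id_filter_url(ids, key="doi"):
    """Request many records at once by OR-ing up to 50 IDs in one filter."""
    if len(ids) > 50:
        raise ValueError("the API accepts at most 50 IDs per call")
    return f"https://api.openalex.org/works?filter={key}:" + "|".join(ids)

print(multi_id_filter_url(["10.1234/example.one", "10.5678/example.two"]))
```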

Deep paging

Using cursor-based paging, you can now retrieve an unlimited number of results (it used to be capped at the top 10,000). But remember: if you want to download the entire dataset, please use the snapshot, not the API! The snapshot is the exact same data in the exact same format, but much, much faster and cheaper for you and us.
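
The paging loop can be sketched as below; the protocol assumed here (start with `cursor=*`, then follow `meta.next_cursor` until it comes back empty) should be checked against the docs:

```python
def iter_all_results(fetch_page):
    """Yield every result in a listing via cursor paging.

    `fetch_page(cursor)` returns one parsed JSON page: a dict with a
    "results" list and a "meta" dict holding the next cursor.
    """
    cursor = "*"
    while cursor:
        page = fetch_page(cursor)
        yield from page["results"]
        cursor = page["meta"].get("next_cursor")

# A real fetch_page would GET something like
# https://api.openalex.org/works?filter=...&cursor=<cursor>
# and return response.json().
```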

More groups in group_by queries

We now return the top 200 groups (it used to be just the top 50).

New Autocomplete endpoint

Our new autocomplete endpoint makes it dead easy to use our data to power an autocomplete/typeahead widget in your own projects. It works for any of our five entity types (works, authors, venues, institutions, or concepts). If you’ve got users inputting the names of journals, institutions, or other entities, now you can easily let them choose an entity instead of entering free text–and then you can store the ID (ISSN, ROR, whatever) instead of passing strings around everywhere.
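
Wiring up a typeahead widget then reduces to calling one URL per keystroke. A sketch, assuming one autocomplete route per entity type (the exact paths and parameter name are assumptions; see the docs):

```python
from urllib.parse import urlencode

ENTITY_TYPES = {"works", "authors", "venues", "institutions", "concepts"}

def autocomplete_url(entity_type, partial_text):
    """Build an autocomplete query for one of the five entity types."""
    if entity_type not in ENTITY_TYPES:
        raise ValueError(f"unknown entity type: {entity_type}")
    return (f"https://api.openalex.org/autocomplete/{entity_type}?"
            + urlencode({"q": partial_text}))

print(autocomplete_url("venues", "plos"))
```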

Better docs

In addition to documenting the new features above, we’ve also added lots of new documentation for existing features, addressing our most frequent questions and requests.

Thanks to everyone who’s been in touch to ask for new features, report bugs, and tell us where we can improve (also where we’re doing well, we’re ok with that too).
We’ll continue improving the API and the docs. We’re also putting tons of work into improving the underlying dataset’s accuracy and coverage, and we’re happy to report that we’ve improved a lot on what we inherited from MAG, with more improvements to come. We’ve delayed the launch of the full web UI, but expect that in the summer…we are so excited about all the possibilities that’s going to open up.

Green Open Access comes of age


This morning David Prosser, executive director of Research Libraries UK, tweeted, “So we have @unpaywall, @oaDOI_org, PubMed icons – is the green #OA infrastructure reaching maturity?” (link).

We love this observation, and not just because two of the three projects he mentioned are from us at Impactstory 😀. We love it because we agree: Green OA infrastructure is at a tipping point, where two decades of investment, a slew of new tools, and a flurry of new government mandates are about to make Green OA a scholarly publishing game-changer.

A lot of folks have suggested that Sci-Hub is scholarly publishing’s “Napster moment,” where the internet finally disrupts a very resilient, profitable niche market. That’s probably true. But just as the music industry shut down Napster, Elsevier will likely be able to shut down Sci-Hub. They’ve got both the money and the legal (though not moral) high ground, and that’s a tough combo to beat.

But the future is what comes after Napster. It’s in the iTunes and the Spotifys of scholarly communication. We’ve built something to help create this future. It’s Unpaywall, a browser extension that instantly finds free, legal Green OA copies of paywalled research papers as you browse–like a master key to the research literature. If you haven’t tried it yet, install Unpaywall for free and give it a try.

Unpaywall has reached 5,000 active users in our first ten days of pre-release.

But Unpaywall is far from the only indication that we’re reaching a Green OA inflection point. Today is a great day to appreciate this, as there’s amazing Green OA news everywhere you look:

  • Unpaywall reached the 5000 Active Users milestone. We’re now delivering tens of thousands of OA articles to users in over 100 countries, and growing fast.
  • PubMed announced Institutional Repository LinkOut, which links every PubMed article to a free Green copy in institutional repositories where available. This is huge, since PubMed is one of the world’s most important portals to the research literature.
  • The Open Access Button announced a new integration with interlibrary loan that will make it even more useful for researchers looking for open content. Along with the interlibrary loan request, they send instructions to authors to help them self-archive closed publications.

Over the next few years, we’re going to see an explosion in the amount of research available openly, as government mandates in the US, UK, Europe, and beyond take force. As that happens, the raw material is there to build completely new ways of searching, sharing, and accessing the research literature.
We think Unpaywall is a really powerful example: When there’s a big Get It Free button next to the Pay Money button on publisher pages, it starts to look like the game is changing. And it is changing. Unpaywall is just the beginning of the amazing open-access future we’re going to see. We can’t wait!

How to smash an interstellar paywall


Last month, hundreds of news outlets covered an amazing story: seven earth-sized planets were discovered, orbiting a nearby star. It was awesome. Less awesome: the paper with the details, published in the journal Nature, was paywalled. People couldn’t read it.

That’s messed up. We’re working to fix it, by releasing our new free Chrome extension Unpaywall. Using Unpaywall, you can get access to the article, and millions like it, instantly and legally. Let’s learn more.

First, is this really a problem? Surely Google can find the article. I mean, there might be aliens out there. We need to read about this. Here we go, let’s Google for “seven terrestrial planets nature article.” Great, there it is, first result. Click, and…

What, thirty-two bucks to read!? Well that’s that, I quit.

Or maybe there are some ways around the paywall? Well, you can know someone with access. My pal Cindy Wu helped her journal club out this way, offering on Twitter to email them a copy of the paper. But you have to follow Cindy on Twitter for that to work.

Or you could know the right places to look for access. Astronomers generally post their papers on a free web server called arXiv, and sure enough, if you search there, you’ll find the Nature paper. But you have to know about arXiv for that to work. And check out those Google search results again: arXiv doesn’t appear.

Most people don’t know Cindy, or ArXiv. And no one’s paying $32 for an article. So the knowledge in this paper, and thousands of papers like it, is locked away from the taxpayers who funded it. Research becomes the private reserve of those privileged few with the money, experience, or connections to get access.

We’re helping to change that.

Install our new, free Unpaywall Chrome extension and browse to the Nature article. See that little green tab on the right of the page? It means Unpaywall found a free version, the one the authors posted to ArXiv. Click the tab. Read for free. No special knowledge or searches or emails or anything else needed. 

Today you’ll find Unpaywall’s green tab on ten million articles, and that number is growing quickly thanks to the hard work of the open-access movement. Governments in the US, UK, Europe, and beyond are increasingly requiring that taxpayer-funded research be publicly available, and as they do Unpaywall will get more and more effective.

Eventually, the paywalls will all fall. Till then, we’ll be standing next to ‘em, handing out ladders. Together with millions of principled scientists, libraries, techies, and activists, we’re helping make scholarly knowledge free to all humans. And whoever else is out there 😀 👽.

behind the scenes: cleaning dirty data


Dirty Data.  It’s everywhere!  And that’s expected and ok and even frankly good imho — it happens when people are doing complicated things, in the real world, with lots of edge cases, and moving fast.  Perfect is the enemy of good.

Thanks http://www.navigo.com.au/2015/05/cleaning-out-the-closet-how-to-deal-with-dirty-data/ for the image

Alas, it’s definitely behind-the-scenes work to find and fix dirty data problems, which means none of us learn from each other in the process.  So — here’s a quick post about a dirty data issue we recently dealt with 🙂  Hopefully it’ll help you feel camaraderie, and maybe help some people using the BASE data.

We traced some oaDOI bugs to dirty records from PMC in the BASE open access aggregation database.

Most PMC records in BASE are really helpful — they include the title, author, and a link to the full-text resource in PMC.  For example, this record lists valid PMC and PubMed URLs:

and this one lists the PMC and DOI URLs:

The vast majority of PMC records in BASE look like this.  So until last week, to find PMC article links for oaDOI we looked up article titles in BASE and used the URL listed there to point to the free resource.

But!  We learned!  There is sometimes a bug!  This record has a broken PMC URL — it lists http://www.ncbi.nlm.nih.gov/pmc/articles/PMC with no PMC ID in it (see, look at the URL — there’s nothing about it that points to a specific article, right?).  To get the PMC link you’d have to follow the PubMed link and then click through to PMC from there.  (Which does exist — here’s the PMC page we wish the BASE record had pointed to.)

That’s some dirty data.  And it gets worse.  Sometimes there is no PubMed link at all, like this one (a correct PMC link exists):

and sometimes there is no valid URL, so there’s really no way to get there from here:

(Pretty cool that PMC lists this article from 1899, eh?  Edge cases for papers published more than 100 years ago seem fair, I’ve gotta admit 🙂 )

Anyway.  We found this dirty PMC data in BASE is infrequent, but common enough to cause more bugs than we’re comfortable with.  To work around the dirty data we’ve added a step — oaDOI now uses the DOI->PMCID lookup file offered by PMC to find PMC articles we might otherwise miss.  It adds a bit more complexity, but it’s worth it in this case.
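
For the curious, using such a mapping file boils down to loading it into a dict keyed by DOI. A sketch (the "DOI" and "PMCID" column names are assumptions; check the header of the file PMC actually distributes):

```python
import csv
import io

def load_doi_to_pmcid(csv_text):
    """Build a DOI -> PMCID dict, skipping rows with no DOI."""
    mapping = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row.get("DOI"):
            mapping[row["DOI"].lower()] = row["PMCID"]
    return mapping
```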

 

 

So, that’s This Week In Dirty Data from oaDOI!  🙂  Tune in next week for, um, something else 🙂

And don’t forget Open Data Day is Saturday March 4, 2017.   Perfect is the enemy of the good — make it open.

Introducing oaDOI: resolve a DOI straight to OA


Most papers that are free-to-read are available thanks to “green OA” copies posted in institutional or subject repositories.  The fact that these copies are available for free is fantastic, because anyone can read the research. But it does present a major challenge: given the DOI of a paper, how can we find the open version, given there are so many different repositories?

The obvious answer is “Google Scholar” 🙂  And yup, that works great, and given the resources of Google, it will probably always be the most comprehensive solution.  But Google’s interface requires an extra search step, and its data isn’t open for others to build tools on top of.

We made a thing to fix that.  Introducing oaDOI:

We look for open copies of articles using the following data sources:

  • The Directory of Open Access Journals to see if it’s in their index of OA journals.
  • CrossRef’s license metadata field, to see if the publisher has reported an open license.
  • Our own custom list of DOI prefixes, to see if it’s in a known preprint repository.
  • DataCite, to see if it’s an open dataset.
  • The wonderful BASE OA search engine to see if there’s a Green OA copy of the article. BASE indexes 90mil+ open documents in 4000+ repositories by harvesting OAI-PMH metadata.
  • Repository pages directly, in cases where BASE was unable to determine openness.
  • Journal article pages directly, to see if there’s a free PDF link (this is great for detecting hybrid OA).
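
That ordered list amounts to a first-match cascade. A minimal sketch (the function name and callable interface here are ours for illustration, not oaDOI’s actual internals):

```python
def find_oa_url(doi, checks):
    """Return the first open-access URL any data source reports, else None.

    `checks` is an ordered list of callables (doi -> URL or None),
    mirroring the source order above: DOAJ, CrossRef licenses, preprint
    prefixes, DataCite, BASE, then direct page checks as fallbacks.
    """
    for check in checks:
        url = check(doi)
        if url:
            return url
    return None
```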

oaDOI was inspired by the really cool DOAI.  oaDOI is a wrapper around the OA detection used by Impactstory. It’s open source, of course; it can be used as a lookup engine in Zotero, and it has an easy and powerful API that returns license data and other good stuff.

Check it out at oadoi.org, let us know what you think (@oadoi_org), and help us spread the word!

What’s your #OAscore?


We’re all obsessed with self-measurement.

We measure how much we’re Liked online. We measure how many steps we take in a day. And as academics, we measure our success using publication counts, h-indices, and even Impact Factors.

But we’re missing something.

As academics, our fundamental job is not to amass citations, but to increase the collective wisdom of our species. It’s an important job. Maybe even a sacred one. It matters. And it’s one we profoundly fail at when we lock our work behind paywalls.

Given this, there’s a measurement that must outweigh all the others we use (and misuse) as researchers: how much of our work can be read?

This Open Access Week, we’re rolling out this measurement on Impactstory. It’s a simple number: what percentage of your work is free to read online? We’d argue that it’s perhaps the most important number associated with your professional life (unless maybe it’s the percentage of your work published with a robust license that allows reuse beyond reading…we’re calculating that too). We’re calling it your Open Access Score.

We’d like to issue a challenge to every researcher: find out your open access score, do one thing to raise it, and tell someone you did. It takes ten minutes, and it’s a concrete thing you can do to be proud of yourself as a scholar.

Here’s how to do it:

  1. Make an Impactstory profile. You’ll need a Twitter account and nothing more…it’s free, nonprofit, and takes less than five minutes. Plus along the way you’ll learn cool stuff about how often your research has been tweeted, blogged, and discussed online.
  2. Deposit just one of your papers into an Open Access repository. Again: it’s easy. Here are instructions.
  3. Once you’re done, update your Impactstory, and see your improved score.
  4. Tweet it. Let your community know you’ve made the world a richer, more beautiful place, because you’ve increased the knowledge available to humanity. Just like that. Let’s spread that idea.

Measurement is controversial. It has pros and cons. But when you’re measuring the right things, it can be incredibly powerful. This OA Week, join us in measuring the right things. Find your #OAscore, make it better, tweet it out. If we’re going to measure steps, let’s make them steps that matter.

 

Crossposted on the Open Access Week blog.

Now, a better way to find and reward open access


There’s always been a wonderful connection between altmetrics and open science.

Altmetrics have helped to demonstrate the impact of open access publication. And since the beginning, altmetrics have excited and provoked ideas for new, open, and revolutionary science communication systems. In fact, the two communities have overlapped so much that altmetrics has been called a “school” of open science.

We’ve always seen it that way at Impactstory. We’re uninterested in bean-counting. We are interested in setting the stage for a second scientific revolution, one that will happen when two open networks intersect: a network of instantly-available diverse research products and a network of comprehensive, open, distributed significance indicators.

So along with promoting altmetrics, we’ve also been big on incentives for open access. And today we’re excited that we got a lot better at it.

We’re launching a new Open Access badge, backed by a really accurate new system for automatically detecting fulltext for online resources. It finds not just Gold OA, but also self-archived Green OA, hybrid OA, and born-open products like research datasets.

A  lot of other projects have worked on this sticky problem before us, including the Open Article Gauge, OACensus, Dissemin, and the Open Access Button. Admirably, these have all been open-source projects, so we’ve been able to reuse lots of their great ideas.

Then we’ve added oodles of our own ideas and techniques, along with plenty of research and testing. The result? Impactstory is now the best, most accurate way to automatically assess openness of publications. We’re proud of that.

And we know this is just the beginning! Fork our code or send us a pull request if you want to make this even better. Here’s a list of where we check for OA to get you started:

  • The Directory of Open Access Journals to see if it’s in their index of OA journals.
  • CrossRef’s license metadata field, to see if the publisher has uploaded an open license.
  • Our own custom list of DOI prefixes, to see if it’s in a known preprint repository.
  • DataCite, to see if it’s an open dataset.
  • The wonderful BASE OA search engine to see if there’s a Green OA copy of the article.
  • Repository pages directly, in cases where BASE was unable to determine openness.
  • Journal article pages directly, to see if there’s a free PDF link (this is great for detecting hybrid OA).

What’s it mean for you? Well, Impactstory is now a powerful tool for spreading the word about open access. We’ve found that seeing that openness badge–or OH NOES lack of a badge!–on their new profile is powerful for a researcher who might otherwise not think much about OA.

So, if you care about OA: challenge your colleagues to go make a free profile and see how open they really are. Or you can use our API to learn about the openness of groups of scholars (great for librarians, or for a presentation to your department). Just hit the endpoint http://impactstory.org/u/someones_orcid_id to find out the openness stats for anyone.
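
A tiny sketch of calling that endpoint (the ORCID iD below is a hypothetical example, and the response’s field names aren’t documented in this post, so treat them as unknowns):

```python
import json
from urllib.request import urlopen

def openness_profile_url(orcid_id):
    """Endpoint pattern quoted in the post above."""
    return f"http://impactstory.org/u/{orcid_id}"

# Uncomment to fetch live (assumes the endpoint returns JSON):
# profile = json.load(urlopen(openness_profile_url("0000-0002-1825-0097")))
```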

Hit us up with any thoughts or comments, and enjoy!

3 important steps to getting more credit for your peer reviews


A few years back, Scholarly Kitchen editor-in-chief David Crotty informally polled a dozen biologists about the burden of peer review. He found that most review around 3 papers per month. For senior scientists, that number can reach 15 papers per month.

And yet, no matter how much time they spend reviewing, the credit they get is the same, and it looks like this on their CV:

“Service: Reviewer for Physical Review B and PLOS ONE.”

What if your work could be counted as more than just “service”? After all, peer review is dependent upon scientists doing a lot of intellectual heavy lifting for the benefit of their discipline.

And what if you could track the impacts your peer reviews have had on your field? Credit–in the form of citations and altmetrics–could be included in your CV to show the many ways that you’ve contributed intellectually to your discipline.

The good news? You can get credit for your peer reviews. By participating in Open Peer Review and making reviews discoverable and citable, researchers across the world have begun to get the credit they deserve for improving science for the better.

But this practice isn’t yet widespread. So, we’ve compiled a short guide to getting started with getting credit for your peer reviews.

1. Participate in Open Peer Review

Open Peer Review is a radical notion predicated on a simple idea: that by making author and reviewer identities public, more civil and constructive peer reviews will be submitted, and peer reviews can be put into context.

Here’s how it works, more or less: reviewers are assigned to a paper, and they know the author’s identity. They review the paper and sign their name. The reviews are then submitted to the editor and author (who now knows their reviewers’ identities, thanks to the signed reviews). When the paper is published, the signed reviews are published alongside it.

Sounds simple enough, but if you’re reviewing for a traditional journal, this might be a challenge. Open Peer Review is still rarely practiced by most traditional publishers.

For a very long time, publishers favored private, anonymous (‘blinded’) peer review, under the assumption that it would reduce bias and that authors would prefer for criticisms of their work to remain private. Turns out, their assumptions weren’t backed up by evidence.

Blinded peer review is argued to be beneficial for early career researchers, who might find themselves in a position where they’re required to give honest feedback to a scientist who’s influential in their field. Anonymity would protect these ECR-reviewers from their colleagues, who could theoretically retaliate for receiving critical reviews.

Yet many have pointed out that it can be easy for authors to guess the identities of their reviewers (especially in small fields, where everyone tends to know what their colleagues/competitors are working on, or in lax peer review environments, where all one has to do is ask!). And as Mick Watson argues, any retaliation that could theoretically occur would be considered a form of scientific misconduct, on par with plagiarism–and therefore off-limits to scientists with any sense.

In any event, a consequence of this anonymous legacy system is that you, as a reviewer, can’t take credit for your work. Sure, you can say you’re a reviewer for Physical Review B, but you’re unable to point to specific reviews or discuss how your feedback made a difference. (Your peer reviews go into the garbage can of oblivion once the article’s been published, as illustrated below.) That means that others can’t read your reviews to understand your intellectual contributions to your field, which–in the case of some reviews–can be enormous.

Image CC-BY Kriegeskorte N from “Open evaluation: a vision for entirely transparent post-publication peer review and rating for science” Front. Comput. Neurosci., 2012


So, if you want to get credit for your work, you can choose to review for journals that already offer Open Peer Review. A number of forward-thinking journals allow it (BMJ, PeerJ, and F1000 Research, among others).

To find others, use Cofactor’s excellent journal selector tool:

  • Head over to the Cofactor journal selector tool

  • Click “Peer review,”

  • Select “Fully Open,” and

  • Click “Search” to see a full list of Open Peer Review journals

Some stand-alone peer review platforms also allow Open Peer Review. Faculty of 1000 Prime is probably the best-known example. Publons is the largest platform that offers Open Peer Review. Dozens of other platforms offer it, too.

Once your reviews are attributable to you, the next step is making sure others can read them.

2. Make your reviews (and references to them) discoverable

You might think that discoverability goes hand in hand with Open Peer Review, but you’d only be half-right. Thing is: URLs break every day. Persistent access to an article over time, on the other hand, will help ensure that those who seek out your work can find it, years from now.

Persistent access often comes in the form of identifiers like DOIs. Having a DOI associated with your review means that, even if your review’s URL were to change in the future, others can still find your work. That’s because DOIs are set up to resolve to an active URL when other URLs break.

Persistent IDs also have another major benefit: they make it easy to track citations, mentions on scholarly blogs, or new Mendeley readers for your reviews. Tracking citations and altmetrics (social web indicators that tell you when others are sharing, discussing, saving, and reusing your work online) can help you better understand how your work is having an impact, and with whom. It also means you can share those impacts with others when applying for jobs, tenure, grants, and so on.

There are two main ways you can get a DOI for your reviews:

  • Review for a journal like PeerJ or peer review platform like Publons that issues DOIs automatically

  • Archive your review in a repository that issues DOIs, like Figshare

Once you have a DOI, use it! Include it on your CV (more on that below), as a link when sharing your reviews with others, and so on. And encourage others to always link to your review using the DOI resolver link (these are created by putting “http://dx.doi.org/” in front of your DOI; here’s an example of what one looks like: http://dx.doi.org/10.7287/peerj.603v0.1/reviews/2).
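
That construction is easy to script when you’re adding many reviews to a CV at once:

```python
def doi_resolver_link(doi):
    """Prefix the DOI resolver so the link survives publisher URL changes."""
    return "http://dx.doi.org/" + doi

print(doi_resolver_link("10.7287/peerj.603v0.1/reviews/2"))
```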

DOIs and other unique, persistent identifiers help altmetrics aggregators like Impactstory and PlumX pick up mentions of your reviews in the literature and on the social web. And when we’re able to report on your citations and altmetrics, you can start to get credit for them!

3. Help shape a system that values peer review as a scholarly output

Peer review may be viewed primarily as a “service” activity, but things are changing–and you can help change ‘em even more quickly. Here’s how.

As a reviewer, raise awareness by listing and linking to your reviews on your CV, adjacent to any mentions of the journals you review for. By linking to your specific reviews (using the DOI resolver link we talked about above), anyone looking at your CV can easily read the reviews themselves.

You can also illustrate the impacts of Open Peer Review for others by including citations and altmetrics for your reviews on your CV. An easy way to do that is to include on your CV a link to the review on your Impactstory or PlumX profile. You can also include other quantitative measures of your reviews’ quality, like Peerage of Science’s Peerage Essay Quality scores, Publons’ merit scores, or a number of other quantitative indicators of peer-review quality. Just be sure to provide context to any numbers you include.

If you’re a decision-maker, you can “shape the system” by making sure that tenure & promotion and grant award guidelines at your organization acknowledge peer review as a scholarly output. Actively encouraging early career researchers and students in your lab to participate in Open Peer Review can also go a long way. The biggest thing you can do? Educate other decision-makers so they, too, respect peer review as a standalone scholarly output.

Finally, if you’re a publisher or altmetrics aggregator, you can help “shape the system” by building products that accommodate and reward new modes of peer review.

Publishers can partner with standalone peer review platforms to accept their “portable peer reviews” as a substitute for (or addition to) in-house peer reviews.

Altmetrics aggregators can build systems that better track mentions of peer reviews online, or–as we’ve recently done at Impactstory–connect directly with peer review platforms like Publons to import both the reviews and metrics related to the reviews. (See our “PS” below for more info on this new feature!)

How will you take credit for your peer review work?

Do you plan to participate in Open Peer Review and start using persistent identifiers to link to and showcase your contributions to your field? Will you start advocating for peer review as a standalone scholarly product to your colleagues? Or do you disagree with our premise, believing instead that traditional, blinded peer review–and our means of recognizing it as service–are just fine as-is?

We want to hear your thoughts in the comments below!


PS: Impactstory now showcases your open peer reviews!

 

Starting today, there is one more great way to get credit for your peer reviews, in addition to those above: on your Impactstory profile!

We’re partnering with Publons, a startup that aggregates Open and anonymous peer reviews written for PeerJ, GigaScience, Biology Direct, F1000 Research, and many other journals.

Have you written Open reviews in these places?  Want to feature them on your Impactstory profile, complete with viewership stats? Just sign up for a Publons account and then connect it to your Impactstory profile to start showing off your peer reviewing awesomeness :).

Your new Impactstory


Today, it’s yours: the way to showcase your research online.

You’re proud of your research.  You want people to read your papers, download your slide decks, and talk about your datasets.  You want to learn when they do, and you want to make it easy for others to learn about it too, so everyone can understand your impact. We know, because as scientists, that’s how we feel, too.

The new Impactstory design is built around researchers. You and your research are at the center: you decide how you want to tell the story of your research impact.

What does that mean?  Here’s a sampling of what’s new in today’s release:


A streamlined front page showcases Selected Publications and Key Metrics that you select and arrange from your full list of publications.  There’s a spot for a bio so people learn about your research passion and approach.

Reading your research has become an easy and natural part of learning about your work: your publications are directly embedded on the site!  Everyone can read as they browse your profile.  We automatically embed all the free online versions we can find — uploading everything else only takes a few clicks.


None of this is any good if your publication list gets stale, so keeping your publication list current is easier than ever: zoom an email to publications@impactstory.org with a link whenever you publish something new, and poof: it’ll appear in your profile, just like that.

Want to learn things you didn’t know before?  Your papers now include Twitter Impressions — the number of times your publication has been mentioned in someone’s Twitter timeline.  You may be surprised how much exposure your research has had…we’re discovering many articles reaching tens of thousands of potential readers.

We could talk about the dozens of other features in this release. But instead: go check out your new profile. Make it yours.  We’re extending the free trial for all users for two more days — subscribe before your trial expires and it is just $45/year.

As of today, the three of us have taken down our old-fashioned academic websites. Impactstory is our online research home, and we’re glad it’ll be yours too.

 

Sincerely,
Jason, Heather and Stacy

Share your articles, slides and more on Impactstory


We said we were going to have big changes live by Sept 15th, when early adopters’ free trials expire. Well, here’s our first one: Impactstory’s now a great place to freely share your articles, slides, videos, and more–and get viewership stats to track impacts even better.

Share everything


Before, product pages focused just on the metrics for your research products. Those metrics are still there, but now the focus is on the product itself. Yep, that’s right: people can now view and read your work right on Impactstory. So we’re not just a place to share the impact of your work, we’re also a place to share your actual research.

It’s super easy to upload your preprints to Impactstory (and you should!). But it gets even better–for most OA publications, we automatically embed the PDF for you. It’s handy, and it’s also a great example of the kind of interoperability OA makes possible.

But as y’all know, at Impactstory we’re passionate about supporting scholarly products beyond articles. So we’re also automatically embedding a slew of other tasty product types. GitHub repo? We’ve got your README file embedded. Figshare image? Yup, that’s on your profile now too. You want to view videos from Vimeo and YouTube, and slides from Slideshare, right on your Impactstory page? Done.

Discover how many people are viewing your research

We’re also rolling out viewership stats for your Impactstory product page. So not only do you learn when folks are citing, discussing, and saving your work–you learn when they’re reading it, too. Over time we’ll likely add viewership maps and other ways to dig into this data even more.

Why you should upload your work to Impactstory

Sharing your work directly on Impactstory has lots of advantages. It brings all your product types together in one place, under your brand as a researcher, not under the brand of a journal or institution. It also makes the case for your research’s value better than metrics alone–it helps you tell a fuller impact story.

Uploading your work is also a great, quick way to just get your work out there. In that regard it’s kind of like what Academia.edu and ResearchGate offer–except we don’t make potential readers create an account to access your work. It’s open.  We don’t yet have a comprehensive preservation strategy (persistent IDs, CLOCKSS, etc.), but we’ll be listening to see if there’s demand for that.

As you may notice, we are super excited about this feature. We’re going to be working hard to get the word out about it to our users, and we’re counting on all of your help with that. And of course, as always, we’d also love your feedback, particularly on bugs; a feature this big will certainly have a few as users kick the tires.

And now we’re transitioning to working on our next big set of features…can’t wait to launch those over the next two weeks!