Image credits: Nature News
Scientists can now offer “preprints” of their papers to colleagues for review, rating, and comment. These preprints are available online from the bioRxiv database. A new application (app) called “Papr,” which works with this database, can be used to rate preprints and provide comments to the authors. The app can be downloaded to either a phone or a desktop computer.
Papr was created and launched in 2016 by Jeff Leek, Lucy D’Agostino McGowan, and Nick Strayer of the Johns Hopkins Data Science Lab. The objective was to give scientists a convenient, quick way to rate a preprint of a scientific paper. The authors described the app as a “Tinder for preprints,” likening it to the well-known dating app. Papr aims to help scientists sift through the barrage of papers published regularly and surface only those of interest to them by analyzing the ratings from each source. Scientists often use Twitter and other social media to find new studies of interest, and the app can connect scientists with shared interests through social media or other accounts to exchange information and encourage collaboration.
How It Works
Scientific preprints are currently displayed only from bioRxiv; however, the authors might expand the sources if there is enough interest. To use the app, download it to your phone or computer and create a user ID, as with any online application. This ID is stored together with your preprint ratings. No user information, such as email accounts, is publicly listed.
After loading the app and signing in, a preprint will appear. At this point, only abstracts are available, although this might change. You “rate” each abstract by “swiping” across your phone screen (or dragging on your computer screen) into one of the following four categories:
- Exciting and probable.
- Exciting and questionable.
- Boring and probable.
- Boring and questionable.
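The four categories above are really two independent axes: interest (exciting vs. boring) and credibility (probable vs. questionable). The sketch below is a hypothetical illustration of that structure, not the actual Papr code; the function names, the in-memory `ratings` store, and the example preprint ID are all assumptions.

```python
def classify_swipe(interest: str, credibility: str) -> str:
    """Combine the two rating axes into one of the app's four categories."""
    if interest not in ("exciting", "boring"):
        raise ValueError("interest must be 'exciting' or 'boring'")
    if credibility not in ("probable", "questionable"):
        raise ValueError("credibility must be 'probable' or 'questionable'")
    return f"{interest} and {credibility}"

# Hypothetical per-user store: preprint ID -> chosen category.
ratings: dict[str, str] = {}

def rate(preprint_id: str, interest: str, credibility: str) -> None:
    """Record the category produced by a single swipe."""
    ratings[preprint_id] = classify_swipe(interest, credibility)

rate("example-preprint-id", "exciting", "probable")
print(ratings["example-preprint-id"])  # exciting and probable
```

A real implementation would key ratings per user and persist them server-side, which is what makes the downloadable compilation of one's own ratings possible.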
As a check against biased ratings, the app does not reveal the names of the authors or allow searches by author name. Recent updates added new features that allow reviewers to download a compilation of their own ratings; in addition, links to the full text are provided through bioRxiv.
Additional features could be added in the future, such as a listing of the most popular papers or subjects in a “leaderboard”-style presentation. According to Brian Nosek at the Center for Open Science, the app is important because it demonstrates new evaluation methods; others, however, regard it as merely entertainment. Although the information can provide data on measures of manuscript quality, these ratings do not constitute or replace formal peer review. Even Leek has indicated that the app should not be relied on for information about papers beyond overall ratings from colleagues and perhaps some comments.