Who are you?
Research2.0 is an open-access community initiative launched from the USC School of Pharmacy in 2012. We promote reform of academic research: a transition from an introspective, closed community of academics to an outward-facing hub of excellence that connects and empowers a diverse ecosystem of bioscience research. In practice, this means internal reforms of academic administration to create streamlined, science-led support systems that optimize the pace of research, as well as exposing academic researchers to the many new technologies emerging from the communications revolution that can support faster, more rigorous research and data sharing. You can learn more about Research2.0 here.
What is March Metrics?
March Metrics is an open-source competition that we created to promote the use of modern research tools and metrics by academics. First envisioned as a competition within the USC School of Pharmacy, the idea has grown to allow any scientist in the world to compete. Unlike all other current measures of scientific achievement and research impact, everyone in the world starts March Metrics on an equal footing.
How do I win?
Check out the scoresheet, which tells you how to accrue impact points. Find your metrics using free services such as Altmetric, Academia.edu, ResearchGate, etc. Then go to the submission site and enter a URL or DOI for each scoring item, along with your altmetrics.
Check out our posts on figshare during March to see where you stand against the rest of the world’s scientists!
What do I win?
The title of March Metrics World Champion 2013! Unfortunately, we are not able to offer tangible prizes in 2013, as we operate on a budget that makes us jealous of people with shoestrings. As noted below, we will post all the competition data openly online for anyone to see and analyze, as well as the winners and runners-up, so there will be a permanent record of your achievements online.
We may host a Google+ hangout or similar after the competition to promote the winners, Research2.0, and the movement toward Open Science. We are an open-source effort, so we’re open to all ideas and suggestions – please DM @MarchMetrics on Twitter with any ideas for 2013 or the future!
*If you are a grad student or postdoc at the USC School of Pharmacy, there will be prizes that we’ll give out at a ceremony in early April, so you guys have even more incentive to participate!
How will you share data?
We understand that the data produced by this competition are inherently valuable as a research resource for studying many aspects of scientific publishing, data sharing, etc. in the scientific community of 2013. As such, and in keeping with our support of open access, the raw data, perhaps some simple analyses, and the final “official” results will be posted on figshare. We hope that scientists will take advantage of this to a) audit submissions to self-police fairness and b) use the data to conduct research for the betterment of the scientific community around the world.
I’m not comfortable with you sharing my email address with everyone on Earth!
We understand. We need your email address to group your submissions and track multiple entries, but we will scrub all emails from the raw data before we post it online, and we will not share your address with any outside groups.
Why did you choose these metrics and not my favorite “XYZ factor”?
There are many ways to measure research impact, which is why we endeavored to incorporate as many of them into the competition as we could reasonably track. Several metrics, especially on the data-sharing and financial side, had to be dropped late in development because we were not sure we could track and audit submissions for them in 2013. Our hope is that we can build on momentum from this inaugural competition to grow March Metrics to incorporate many more elements in future years, and to reach our goal of using “All The Data” to create a holistic, multifactorial analysis of research impact.
Your scoring system is arbitrary and stupid!
So is your (# of papers) x ( Impact Factor) method.
Of course, any system that seeks to compare different aspects of work is bound to be somewhat arbitrary, as we look to balance many different facets of research productivity. For 2013, we tried to balance the scoring system so that each element had the potential to score comparably, based on some limited dry runs and advice from other scientists. By looking back at the 2013 data, we will surely modify, and hopefully add considerably to, the scoring regime for future years – this is science, so we need real data to analyze before we come to any conclusions about its utility!
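To illustrate the balancing problem, here is a sketch contrasting the naive (# of papers) × (Impact Factor) method with a weighted multifactor score. The categories and weights below are invented for demonstration only; the real March Metrics scoresheet defines its own point values:

```python
def naive_score(num_papers: int, impact_factor: float) -> float:
    """The traditional metric the post pokes fun at."""
    return num_papers * impact_factor

def multifactor_score(metrics: dict) -> float:
    """Weighted sum over several impact dimensions.

    The weights are hypothetical, chosen only so that each category
    can contribute comparably to the total.
    """
    weights = {
        "papers": 10.0,          # points per peer-reviewed paper
        "datasets_shared": 8.0,  # points per openly shared dataset
        "altmetric_score": 0.5,  # points per Altmetric point
        "downloads": 0.01,       # points per download
    }
    return sum(weights[k] * metrics.get(k, 0) for k in weights)

# A hypothetical researcher with modest publication output but
# strong data sharing and online engagement.
researcher = {"papers": 3, "datasets_shared": 2, "altmetric_score": 40, "downloads": 1500}
print(naive_score(3, 5.2))            # papers x impact factor only
print(multifactor_score(researcher))  # weighted sum across categories
```

The point of the multifactor version is that a researcher who shares data and engages openly can score well even without a long publication list — exactly the kind of balance that real data from 2013 would help calibrate.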