Advice to junior faculty who want to get promoted doing Open Science | The OpenScience Project - http://www.openscience.org/blog...
Dec 10, 2011
Is that post about you?
- Anthony Salvagno
Can you include any more info for your dossier? I can give you the analytics for my notebook.
- Anthony Salvagno
After reading this, I'll add a section to my CV: 'peer-reviewed methods', right after 'peer-reviewed original research'. This new section will also have download stats for our software at http://buridan.sourceforge.net (as soon as our paper describing that package is out).
- Björn Brembs
"Use as many metrics to back up your contributions as you can." Anyone have metrics they'd want to help make their open science case that they can't easily get to now?
- Heather Piwowar
@Ant: Yes! Dan was extraordinarily helpful with advice and a letter for me. He's one of many generous, helpful, and successful open scientists out there, and why I know that, regardless of my own tenure decision, open science contributions will be rewarded. Fundamentally, open science makes for much, much higher-impact science, and since most scientists want that, the reward system will come around.
- Steve Koch
@Heather the system you've been working on is great! Every tool that is developed in the coming years needs metrics. As Nielsen points out in his book, many of the most successful endeavors (such as the MathWorks competition and fold.it) provide instant feedback and scoring. Some very important features of those successes are (1) a numerical score, (2) instant scoring, and (3) EASY relative scoring. #3 is an important issue, especially with tenure cases. I was able to provide rudimentary metrics, but I failed at providing context. I'm sure I could have done a better job, but I didn't in the time available and with my skills. So the faculty evaluating me don't have a way to make relative comparisons.

For example, on one of Andy's projects, we opted to publish on Instructables instead of spending pre-publication peer review resources on, say, Review of Scientific Instruments. The Instructable (http://www.instructables.com/id...) got over 10,000 page views, a 4.4/5-star rating, and was nominated for an Instructables award. But the faculty evaluating the dossier have no way of comparing that to other contributions. Of course, I and at least one other tenured person thought it was a success on an absolute scale, but how does it compare to a "regular" publication with 10 citations? That's a very difficult question for faculty who are making a serious decision.

With the service you've created to compile metrics, have you yet developed a way of comparing between different people? Maybe some field-specific, age-adjusted reference points?
- Steve Koch
Steve, very useful, thanks. Nope, total-Impact doesn't have any relative metrics yet; we're trying to figure out the best way to do it...
- Heather Piwowar
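The relative scoring Steve asks for can be made concrete with a small sketch: place one researcher's number (page views, downloads) at a percentile within a reference cohort matched on field and career stage. This is a minimal illustration only; the function, the cohort data, and the approach are invented here and are not part of total-Impact or any other real service.

```python
# Hypothetical sketch of "EASY relative scoring": report a metric as a
# percentile within a field- and career-age-matched cohort, rather than
# as a raw count with no context. All names and numbers are invented.
from bisect import bisect_left

def percentile_rank(value, cohort_values):
    """Percentage of the cohort scoring at or below `value` (0-100)."""
    ordered = sorted(cohort_values)
    position = bisect_left(ordered, value)
    # count ties at `value` as "at or below"
    while position < len(ordered) and ordered[position] == value:
        position += 1
    return 100.0 * position / len(ordered)

# Invented cohort: page-view counts for comparable artifacts by
# researchers in the same field at the same career stage.
cohort = [120, 450, 800, 1500, 2300, 5000, 7500, 9000, 12000, 30000]
print(percentile_rank(10000, cohort))  # -> 80.0
```

A percentile like "80th for the field" gives a tenure committee the relative context that a raw figure like "10,000 page views" lacks; the hard part, as the thread notes, is assembling a fair cohort in the first place.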
The post was indeed about Steve. In my own tenure documents, providing metrics on open source software like OpenMD and Jmol was fairly easy: download statistics and lists of other groups actively using the software for their own research were helpful. I think finding convincing metrics for other scientific contributions is much harder, particularly for Wikipedia contributions, OpenWetware protocols, and blog posts. Page views would probably not carry as much weight as a metric that proves your contribution is used by others. Perhaps a count of the number of dissertations that have entire paragraphs stolen from your Wikipedia article. (That suggestion is only partly tongue-in-cheek.)
- Dan Gezelter