Midterm evaluations go OpenScience!

Kudos to MatRIC for sharing material and experiences from the Centre’s mid-term evaluation

Centers of Excellence are large-scale, long-term projects, and it’s only fair that our funders want to touch base properly (beyond the annual reports) at least once during that period to evaluate whether the funding is well spent and whether we are delivering what we promised.

That said, mid-term evaluations are also stressful. A lot is riding on a successful outcome: the Centre’s existence, funding and continued activity, of course, but also institutional and personal prestige. But these evaluations are also part of the larger picture, the broader academic enterprise. As such they are a deliverable, a skill, and an academic genre to be mastered, alongside teaching and learning, scientific research, writing and communication, grant proposal writing, peer review, et cetera. I could go on. The point here is that as academics we need to master, and therefore somehow need to acquire, a broad set of skills, some of which (like successfully jumping through the hoops of a midterm evaluation) our educations do not really prepare us for.

And since the outcome of the midterm evaluation determines how a not insignificant amount of public funding will be spent, the quality of these evaluations is also really important for the government and for the public. This latter point is actually a bit comforting for the likes of us, as it puts pressure not only on the Centres, but also on the evaluation panels.

The ‘Open Science’ movement is rapidly gaining strength in academia. Many are familiar with the idea and ideal that publicly funded research should be available to all. As the OpenScience movement picks up momentum, many of its proponents are realizing that a truly open science should be open about much more than our results and datasets. Field methods and data handling issues such as data cleaning, outlier removal, and other procedures quickly come to mind, but also details of the pre-publication data analyses, and of course funding sources, authorship contributions, conflicts of interest, and the identity and contributions of peer reviewers.

 


Simon Goodchild. Photo by UiA.

This week, MatRIC director Simon Goodchild took this whole issue a step further, when he published all documents and reports from the entire midterm evaluation process on the MatRIC homepage. All templates, reports, meeting documents, feedback from the committee, replies, et cetera. Wow. This will be really useful for future Centres going through the same process, of course, but it is also a really nice way to approach this important (and yes, stressful) part of academic life in an Open Science way.

So kudos to MatRIC! And of course, we cannot do anything less! On this page you will find the entire, annotated paper trail from bioCEED’s midterm evaluation.

 

Vigdis Vandvik

bioCEED leader
