There is something rotten in the Research Council of Norway

In the last couple of years, there has been a debate, especially in Khrono, concerning how the Research Council of Norway evaluates and funds scientific proposals.

There have been several stories in which submissions to the Research Council of Norway (RCN) were unfavourably assessed but then secured funding from the European Research Council (ERC).

The ERC funds basic science, just like the RCN's FRIPRO, or "Ground-breaking research", programme.

For example, this story describes how two rejections from the Research Council of Norway were followed by a Starting Grant from the ERC.

Or this story, where the RCN assigned very low grades to a proposal that went on to secure an ERC Consolidator Grant.

This story is illuminating because the proposal submitted to the RCN was simply a shortened version of the one submitted to the ERC. In effect, the main underlying idea and approach were identical.

Nevertheless, the average grade in the Research Council of Norway was 4, the lowest grade the PI had ever received!

In contrast, to receive funding from the ERC you need an A grade (the proposal fully meets the ERC's excellence criterion and is recommended for funding if sufficient funds are available) in two successive evaluation steps.

Similar stories abound, and the Research Council of Norway’s explanation is that evaluations are by nature subjective.

This is, of course, true, but it raises the question of whether there are fundamental flaws in the Research Council of Norway's evaluation process.

For example, I now find myself in the same situation: I have submitted more or less the same proposal three times to the RCN and received an average grade of 4 with no funding (see a news story here).

Last year I submitted an expanded version to the ERC and was notified on 9 March that the project was approved for funding (an ERC Consolidator Grant; see here for details about the project).

While the evaluations are subjective, especially concerning scientific excellence, some criteria are less so.

For example, evaluations of the PI and the scientific group are reasonably objective, as they are based on track records documented by publications, previous grants and so on.

Compare, for example, the assessment I received in 2012 with those from my last three submissions:

In 2012 I submitted a proposal in which the 'The project manager and project group' criterion was graded 6 – Excellent. This grade means:

“The project leader and/or research/project group is qualified at a high international level, has contacts within the foremost national and international research environments and will be able to play an important role in ensuring the success of the project.”

Ten years on, with more leadership experience and many more publications, the evaluation has fallen to 4 – Good on all three submissions!

Granted, there are differences in how these grades are assigned: the PI and group are now evaluated under 'Implementation', which also includes 'feasibility'. Previously, 'feasibility' was a separate category (on which, by the way, I also received a 6 in 2012!).

But this only goes to show that the evaluation criteria used by the RCN have changed.

It is a change that apparently opens the door to vaguer, less concrete evaluations: before 2019, every proposal I submitted received an overall score of 6 (or, in one case, 7, which earned me a Young Research Talent grant), whereas since 2019 I have received an overall 4.

Compare this with the assessment from the ERC, where seven different reviewers rated my capabilities as a PI as 'Exceptional', 'Excellent' or 'Very Good'.

If one subscribes to the idea that the ERC's evaluation process is better suited to pinpointing ground-breaking research (whatever that might be), the inescapable conclusion is that the RCN has designed an evaluation system that misses the most innovative projects ("innovative" being a buzzword the RCN is mightily fond of), at least in terms of scientific excellence.

In fact, given the quality of the written evaluations from the RCN, it might be just as well to randomly select people from the Norwegian population and ask them what they think.

That would at least provide an unbiased evaluation process.