The Growth of Performance-Based Funding for Research


Corporations have demonstrated a clear preference either for funding short-term applied research projects or for stepping away from research entirely and simply buying up competitors who bring new products or treatments to market. The billionaire owners (or majority shareholders) of those corporations have the financial freedom to fund their own research centers, and some do so by endowing such centers at major research institutions. For most academic institutions, however, research funding is limited to smaller grant-funded projects, alumni endowments, or a portion of the national research budgets of federal agencies. How those agencies calculate the size of those respective portions has come under increasing scrutiny.


As governments around the world continue to struggle with lower tax revenues and the consequences of profligate spending in their respective pasts, the amount of government funding awarded to scholarly research is being questioned more closely than ever. Justifying funding for ongoing research projects has been relatively easy, provided those projects remain on budget and on target; but each new project must be justified against alternative potential expenditures, which leads us to the problem of metrics and the consistent measurement of research output.


In Europe, the metrics being developed for Performance-Based Research Funding Systems (PRFS) are still very much under debate. Norway, for example, implemented its system back in 2002, developing an assessment matrix built on four key indicators, two measuring output and two measuring input:

• Output
o Publications as indicated by citations (30%)
o Ph.D. graduates from the institution (30%)

• Input
o External funding from the Norwegian Research Council (20%)
o External funding from the European Union (20%)

In ongoing discussions with faculty, the publications indicator has been refined even further, drawing distinctions between books, articles, research papers, conference papers, and contributions to anthologies.
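The weighting scheme above amounts to a simple weighted sum across the four indicators. The sketch below illustrates the arithmetic only; the indicator values and the normalization to national shares are hypothetical assumptions for illustration, not Norway's actual methodology.

```python
# Hypothetical sketch of a PRFS-style weighted scoring formula.
# Indicator values are illustrative shares (0.0-1.0) of the national
# total for each measure; weights follow the 30/30/20/20 split above.

WEIGHTS = {
    "publications": 0.30,   # output: publications as indicated by citations
    "phd_graduates": 0.30,  # output: Ph.D. graduates from the institution
    "nrc_funding": 0.20,    # input: Norwegian Research Council funding
    "eu_funding": 0.20,     # input: European Union funding
}

def prfs_score(indicators: dict) -> float:
    """Weighted sum of an institution's normalized indicator shares."""
    return sum(WEIGHTS[name] * indicators.get(name, 0.0) for name in WEIGHTS)

# A hypothetical institution holding 10% of national publications, 8% of
# Ph.D. graduates, 12% of NRC funding, and 5% of EU funding:
example = {
    "publications": 0.10,
    "phd_graduates": 0.08,
    "nrc_funding": 0.12,
    "eu_funding": 0.05,
}
score = prfs_score(example)
print(round(score, 3))  # prints 0.088
```

Under a scheme like this, each institution's share of the funding pool tracks its weighted share of the national indicators, which is precisely why institutions already scoring highly on every indicator tend to stay on top.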

The most obvious complaint about the matrix is that it clearly favors large, established institutions with senior faculty at the top of their fields. Smaller or newer institutions, with younger staff and less prestigious rankings, see themselves as being punished by an effective grandfathering of the top-tier institutions that already score highly on every ranking indicator.


Evaluation matrices are rarely, if ever, rescinded once they have been put in place. They may be modified over time, but once research funding becomes performance based, such a system is unlikely to be replaced, regardless of the perceived equitability of the process (or lack thereof). In the US, budget cuts resulting from sequestration and direct challenges to research efficacy in the proposed High Quality Research Act have already put researchers on notice that an era of performance-based assessment is upon us. It remains to be seen whether we will be able to incorporate the lessons learned by our European contemporaries.