The concept of research excellence must be broadened

Lotteries for viable funding applications may be one way forward, say Lisette Jong, Thomas Franssen, Stephen Pinfield and James Wilsdon

October 7, 2021

The notion of “excellence” is omnipresent in the modern research ecosystem, but how do we identify this elusive quality? What defines “excellent” work or makes an “excellent” researcher?

Too often, excellence is portrayed as a universal, objective quality that can be consistently measured and neutrally applied, but recent research by the Research on Research Institute’s Transforming Excellence project confirms the growing belief that this idea is nonsense.

In truth, definitions of excellence when applied to research have been thoroughly shaped by sociopolitical and historical trends whose continuing relevance is, at the very least, deserving of examination.

For example, the roots of excellence culture can be traced back to the very first university rankings, created in 1910 by the American psychologist James McKeen Cattell, based on his count of the “eminent scientists” at each university and their reputation among their peers. These efforts were informed by eugenicist anxieties of the era and aimed to arrest a perceived decline in the number of “great men” in society by encouraging pockets of excellence.

From the 1950s, excellence was prioritised as a means to drive productivity and economic growth, picking up speed in the 1980s. The concept of excellence became firmly established in the European Union’s science policy as part of the 2000 Lisbon strategy to improve the bloc’s position in the global knowledge economy. This push resulted in the creation of the European Research Council, a merit-led funding body.

While such pushes for excellence have led to many wins, they have not been an unmitigated success. Moreover, the failure to examine the nature of excellence – combined with narrow and non-inclusive definitions – is having a negative impact on the culture and practices of research. Sadly, it often steers researchers to prioritise instrumental ways of working that score “excellence points”, rather than concentrating on practices that will have lasting impact.

This is perhaps most evident in the unintended consequences of the widespread adoption of publication metrics as a measure of excellence. It has led researchers to “salami slice” results into multiple papers or obsess over publishing in high-impact journals, even when books or monographs may be more appropriate and more valued by the discipline – as in history, for example. It also suppresses innovation, restrains critical and creative thinking and discourages important but non-glamorous work, such as replication studies.

The presentation of excellence measures and criteria as neutral, objective and international also encourages the reproduction and reinforcement of inequities in the global research ecosystem. When countries have vastly differing economies, needs and academic infrastructure, it is ridiculous to believe that there is one international excellence standard against which all research can be measured. Excellence in some countries may mean, for example, demonstrating a direct impact or practical solution to a problem facing the population, as opposed to publishing a high-impact paper.

Five decades ago, the prominent sociologist Robert K. Merton said that “many of us are persuaded that we know what we mean by excellence and would prefer not to be asked to explain. We act as though we believe that close inspection of the idea of excellence will cause it to dissolve into nothing.”

Does this still hold true? Maybe. But while people in the world of research and funding are increasingly pointing out the flaws in the concept of excellence and their discomfort in applying it, few alternatives have emerged.

Our survey reveals three areas where possible solutions are emerging, however.

First, various research funding organisations are applying workarounds to address some of the emerging problems with current excellence frameworks. These include the more responsible use of metrics and a reduced reliance on bibliometrics in decision-making.

Second, while matters of equality, diversity and inclusion have generally been perceived as being in tension with conventional ideas of research excellence, funding organisations are making efforts to bring the two closer together by expanding their understanding of excellence, for instance through the concept of “inclusive excellence”.

A third, more ambitious alternative is to fundamentally change the system of assessing research and allocating funding. As a research community, it is time we started experimenting to see whether we can find better, fairer approaches to the challenge of deciding who and what to fund.

One solution could be to move away from the zero-sum excellence model used by most funding and research evaluation systems, where being the “best of the best” is a moving target, dependent on the number and quality of other applicants, as well as the money available.

Instead, funders could apply the concept of “threshold excellence”, based on meeting a fixed performance target that incorporates a range of inclusive measures. Projects that pass this threshold could then be selected for funding in a variety of ways, perhaps even by entering them into a lottery-style random selection. It’s a radical idea, but it’s one that deserves further empirical and experimental investigation – something colleagues at RoRI are doing through our randomisation project.
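To make the mechanics concrete, here is a minimal sketch in Python of how a threshold-then-lottery allocation might work. The single-number score, the threshold value and the function name are hypothetical illustrations only; they are not a description of RoRI’s randomisation project or of any funder’s actual assessment process, which would rest on richer, multi-criteria peer review.

```python
import random

def select_for_funding(applications, threshold, budget_slots, seed=None):
    """Illustrative threshold-plus-lottery allocation (hypothetical scoring)."""
    rng = random.Random(seed)
    # Step 1: keep every application judged fundable against a fixed bar,
    # rather than ranking applicants against one another.
    eligible = [app for app in applications if app["score"] >= threshold]
    # Step 2: if more proposals clear the bar than the budget can support,
    # draw the awards at random among them.
    if len(eligible) <= budget_slots:
        return eligible
    return rng.sample(eligible, budget_slots)

# Example: five proposals, a fundability threshold of 70, and money for two.
apps = [{"id": f"P{i}", "score": s} for i, s in enumerate([82, 75, 68, 91, 73])]
print([a["id"] for a in select_for_funding(apps, threshold=70, budget_slots=2, seed=1)])
```

In this sketch, every proposal that clears the fixed bar has an equal chance of being funded, which is the point of contrast with ranking-based “best of the best” competitions.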

That may prove a step too far for some funders at the moment, but continuing the status quo has its own drawbacks. And broadening the idea of excellence and testing new ways of allocating research resources doesn’t mean that we won’t fund the best research. In fact, it could make it more likely that we will.

Lisette Jong is a researcher and Thomas Franssen a senior researcher, both at the Centre for Science and Technology Studies, Leiden University. Stephen Pinfield and James Wilsdon, from the University of Sheffield, are associate director and director respectively of the Research on Research Institute.

Reader's comments (1)

I cannot see how lotteries are fair, but anonymisation would be useful so that the quality of the work and ideas can be judged alone. However, at some point there has to be a decision as to whether the work can be completed at the proposed location, so the process cannot be totally anonymous. I am speaking from a STEM point of view, where there are significant equipment and facilities implications that cannot be ignored. Personally, I carry out computational work that is relatively inexpensive, but this is not true of many of my colleagues.