City, University of London has produced a policy statement and associated guidance on responsible research assessment, including the appropriate use of quantitative research metrics.
This policy statement builds on a number of prominent external initiatives in this area, including the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto for Research Metrics and the Metric Tide report.
These initiatives, and the development of institutional policies, are also supported or mandated by research funders in the UK (for example, UKRI and the Wellcome Trust).
Our aim is to balance the benefits and limitations of quantitative indicators such as bibliometrics, to create a framework for responsible research assessment at City, University of London, and to suggest ways in which such indicators can support the ambitious vision for excellence in research and teaching embodied in the University's vision and strategy.
Responsible use of metrics
We recognise that City, University of London is a dynamic and diverse university and that no metric or set of metrics can be applied universally across our institution. Many disciplines or departments do not use research metrics at all, because metrics are not appropriate in the context of their field. City, University of London recognises this and will not seek to impose the use of metrics in these cases.
This Statement is deliberately broad and flexible to take account of the diversity of contexts and is not intended to provide a comprehensive set of rules. To help put this into practice, we will provide an evolving set of guidance material with more detailed discussion and examples of how these principles could be applied. City, University of London is committed to valuing research and researchers based on their own merits, not the merits of metrics.
Furthermore, research 'excellence' and 'quality' are abstract concepts that are difficult to measure directly but are often inferred from metrics. Superficial use of research metrics in research evaluations can therefore be misleading, and inaccurate assessment can become unethical when metrics take precedence over expert judgement, because the complexities and nuances of research or a researcher’s profile cannot be fully quantified.
When used irresponsibly in high-stakes contexts, such as hiring, promotion and funding decisions, metrics can incentivise undesirable behaviours, such as chasing publications in journals with a high Journal Impact Factor (JIF) regardless of whether those are the most appropriate venues for publication, or discouraging open research practices such as posting preprints and sharing data.
Bibliometrics
Bibliometrics is a term describing the quantification of publications and their characteristics. It includes a range of approaches, such as the use of citation data to quantify the influence or impact of scholarly publications and other approaches (known as altmetrics) that capture wider engagement across media, social media and other platforms.
When used in appropriate contexts, bibliometrics can provide valuable insights into aspects of research in some disciplines. However, bibliometrics are sometimes used uncritically, which can be problematic for researchers and research progress when used in inappropriate contexts.
For example, some bibliometrics have been commandeered for purposes beyond their original design: the JIF was developed to indicate the average number of citations to a journal's recent articles (over a defined time period), but it is often used inappropriately as a proxy for the quality of individual articles. It is also important to recognise that some bibliometrics (e.g. the JIF) do not apply to some scholarly outputs (e.g. books and monographs).
Principles
City, University of London is committed to applying the following guiding principles where applicable (e.g. in hiring and promotion decisions):
- Quantitative evaluation should support qualitative, expert assessment.
- Measure performance against the research missions of the institution, group or researcher.
- Keep data collection and analytical processes open, transparent and simple.
- Allow those evaluated to verify data and analysis.
- Account for variation by field in research practices.
- Base assessment of individual researchers on a qualitative judgement of their portfolio.
- Avoid misplaced concreteness and false precision.
- Recognise the systemic effects of assessment and indicators.
- Scrutinise indicators regularly and update them.
- Ensure those who generate and interpret research metrics do so in line with the University’s responsible metrics policy.