09 November 2007

Bibliometrics and research assessment

A study for Universities UK (previously the Committee of Vice Chancellors and Principals - a much better title, which actually told you who was involved!) has come to a rather predictable conclusion:

It seems extremely unlikely that research metrics, which will tend to favour some modes of research more than others (e.g. basic over applied), will prove sufficiently comprehensive and acceptable to support quality assurance benchmarking for all institutions.

However, at least that conclusion has been reached and, rather importantly, the report is mainly concerned with the potential for applying bibliometric measures to the science, technology, engineering and medicine (STEM) fields targeted by the Higher Education Funding Council. The report points to some differences between STEM fields and the social sciences and humanities, but offers no detailed analysis of the problems in these areas, which, of course, are even more difficult to deal with than those in STEM.

Readers outside the UK might be somewhat bemused by this post: the concern arises because the Higher Education Funding Councils have proposed using 'metrics' (i.e., bibliometrics) in the Research Assessment Exercise. This Exercise has taken place every four or five years for the past 20 years and is crucially important to the universities, since it is the basis on which the research element of national funding is distributed.
