21 December 2009
The Research Information Network has just released a very interesting report (prepared by a team from Loughborough University and Manchester Metropolitan University) on the publication and communication behaviour of academic researchers: 'Communicating knowledge: how and why researchers publish and disseminate their findings'. The report is backed up by technical reports on the methods employed.
The message that comes across strongly is that researcher behaviour has been significantly influenced by the Research Assessment Exercise (soon to be the Research Excellence Framework - a rather less immediately meaningful phrase!). In particular, the report demonstrates changes in the choice of research outlets, with a pronounced swing towards journal papers. The researchers themselves comment on the potentially damaging influence of the RAE (mediated by their universities in what is often a confused manner), as they are impelled to submit to higher-ranked journals and to forsake their previous practice of publishing in, for example, conference proceedings or book chapters.
One point noted in the report (although it is not expressed in quite this way!) is the totally stupid practice, on the part of universities and their faculties and departments, of insisting that researchers publish only in a limited list of higher-ranked journals. Sometimes the sources of the ranking are rather curious - does the Financial Times, for example, have the inside track on which journals in business and economics are 'best'? And yet there are business schools that use its ranking as the basis for practice.
The authors do not say it but, reading between the lines, the RAE (and, even more so, the yet-to-happen REF) is past its sell-by date. Massive change in university funding for research has now been accomplished and it is difficult to see what further benefits (i.e., for government policy, not research 'excellence') can be achieved.
18 December 2008
Research Assessment Exercise
Well, the results are finally out for the 2008 exercise - no doubt all those involved will be glad that the waiting is over. However, what the implications may be seems to be open to question. For example, Ian Postlethwaite, pro-vice-chancellor for research at the University of Leicester, suggests that league tables based on the results cannot be a guide to which are the best research institutions because:
"The Higher Education Statistics Agency (Hesa) made a decision in October not to publish data showing what proportion of academics who were eligible to be included in the RAE were actually entered into the exercise by each institution. But this has left a gaping hole in the information needed to produce accurate league tables."
Be that as it may, the league tables are duly being produced. The two UK institutions with which I'm affiliated are ranked joint 14th (the University of Sheffield and the University of Leeds, along with Durham, St. Andrews, Southampton and Bristol). Of the two departments I'm associated with, Information Studies at Sheffield scores 2.850 (the highest score in the Library and Information Management group) and the Business School at Leeds scores the same, although that makes it joint 11th in the Business and Management Studies group. Postlethwaite has a point, however: download the data, create a column to multiply the average point score by the number of staff returned, and you'll get a rather different ranking :-)
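For anyone who wants to try this, a minimal sketch in Python (using pandas) follows; the file name and the column names are my own inventions and would need to be mapped onto however the published results are actually exported:

```python
# A sketch only: "rae2008_results.csv" and the column names below are
# hypothetical stand-ins for the published RAE 2008 data.
import pandas as pd

df = pd.read_csv("rae2008_results.csv")

# 'Research power': the average point score weighted by the volume of
# staff actually returned to the exercise.
df["power"] = df["average_point_score"] * df["staff_fte_returned"]

# Rank by research power rather than by average score alone.
ranking = df.sort_values("power", ascending=False)
print(ranking[["institution", "average_point_score",
               "staff_fte_returned", "power"]].head(20))
```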
There are some oddities in the Library and Information Management category, since only eleven of the twenty-one institutions submitting staff to this panel have what we might call 'traditional' library and information management departments. Some have clearly adopted a 'strategic' approach by submitting information systems departments to this panel - Brunel, Salford and Sheffield Hallam, for example - whereas others submit such units to computer science. Some might call this sharp practice, but then, 'all is fair...' in university financing :-) King's College London submits a research unit on 'digital humanities' where, again, most of the research appears to be in aspects of computer science rather than information management. This kind of manoeuvring clearly disadvantages staff who have both teaching and research responsibilities when they are assessed against staff with only research responsibilities: some weighting ought to be applied, perhaps by treating each research-only staff member as half of a full-time teaching and research staff member.
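To make the suggested weighting concrete, here is a minimal sketch; the one-half weight and the staff figures are, of course, invented for illustration:

```python
# Hypothetical illustration of the weighting suggested above: a member of
# staff with research-only duties counts as half of a full-time teaching
# and research post when the volume of a submission is calculated.
RESEARCH_ONLY_WEIGHT = 0.5  # an assumption, not an official figure

def weighted_volume(teaching_and_research_fte: float,
                    research_only_fte: float) -> float:
    """Submission volume with research-only staff down-weighted."""
    return teaching_and_research_fte + RESEARCH_ONLY_WEIGHT * research_only_fte

# A department returning 10 teaching-and-research staff and 6 research-only
# staff would count as 13 FTE rather than 16.
print(weighted_volume(10, 6))  # 13.0
```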
In other words, everyone is playing games with the RAE, as they have done since the Higher Education Funding Councils adopted the process. What difference it makes to the quality of research is anyone's guess, but some have argued that the intensification of the 'publish or perish' syndrome results in a decline in quality.
It is assumed that the next exercise will adopt, at least in part, a 'metrics-driven' approach, i.e., bibliometrics. What the results may be for some fields can be pursued in a couple of papers in the journal - check the subject index under "Research Assessment Exercise".
01 September 2008
Research assessment and bibliometrics
The Higher Education Funding Councils in the UK have issued an announcement on a pilot exercise (involving twenty-two UK universities) on the use of bibliometrics in the new "Research Excellence Framework", which will take over from the Research Assessment Exercise now underway.
[As an aside, it looks as though the marketing men have infiltrated the HEFC - "Research Assessment Exercise" was obviously far too explicit for them and so it has to be something new that completely hides what is actually going on - just as the "Committee of Vice Chancellors and Principals" became the totally fuzzy "Universities UK"! Makes one wonder about the intelligence of those at the top of the academic tree.]
However, back to the message. The announcement points to another document, Bibliometrics and the Research Excellence Framework. This tells us how the exercise will actually be carried out. Research output data will be collected from the participating institutions (why is this necessary, given that the HEFC already has such data for the current RAE?) and processed by Evidence Ltd., a data processing company based in Leeds.
Both documents express caution in using bibliometric indicators and the point is specifically made that journal impact factors will not be used. The bibliometric indicators for each institution in each field will be 'normalised' by comparison with the "field norm", that is, "the average number of citations for all papers published worldwide in the same field, over the same period". This is where Evidence Ltd. will need to be very careful indeed, since what constitutes the "same field" is open to wide interpretation. It will be especially risky to rely upon the journal groupings used by Web of Knowledge and SCOPUS to define the "field". I referred in my earlier Weblog post to this problem as far as defining the field of "Information Science & Library Science" is concerned, and I have no doubt that similar problems exist in other fields.
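To make the arithmetic of normalisation concrete, a minimal sketch follows; the citation counts and the "field norm" figure are invented, which is precisely the point - everything depends on how the field is drawn:

```python
# A minimal sketch of the normalisation described above. The citation
# counts and the field norm are invented for illustration only.
papers = [
    {"title": "Paper A", "field": "Information Science", "citations": 12},
    {"title": "Paper B", "field": "Information Science", "citations": 3},
]

# Field norm: the average number of citations for all papers published
# worldwide in the same field, over the same period (invented figure).
field_norms = {"Information Science": 6.0}

for paper in papers:
    paper["normalised"] = paper["citations"] / field_norms[paper["field"]]
    print(paper["title"], paper["normalised"])  # 2.0 and 0.5
```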
"Bibliometrics and the Research Excellence Framework" also notes that, because of the difficulty of using bibliometric indicators across all disciplines, "other indicators" will also be used. But we are not told what these "other indicators" might be - perhaps they don't actually know yet? The document also proposes the use of a "citation profile" which will show how the papers produced by a particular institution relate to "worldwide norms", so that papers are labelled, for example, "Below world average" or "Above world average". Quite what this means is difficult to understand - does HEFC seriously believe that this would be anything other than a completely arbitrary measure? Especially in social science fields, which are very much culture-bound, comparison of work done in the UK, with work carried out "worldwide" - which would actually mean (because of the volume of output) "carried out in the USA" - would simply result in nonsense.
Having retired from anything to do with the administration of higher education, I shall look on, fascinated, at what might emerge :-)
09 November 2007
Bibliometrics and research assessment
A study for Universities UK (previously the Committee of Vice Chancellors and Principals - a much better title, which actually told you who was involved!) has come to a rather predictable conclusion:
"It seems extremely unlikely that research metrics, which will tend to favour some modes of research more than others (e.g. basic over applied), will prove sufficiently comprehensive and acceptable to support quality assurance benchmarking for all institutions."
However, at least that conclusion has been reached and, rather importantly, the report is mainly concerned with the potential for applying bibliometric measures to fields in science, technology, engineering and medicine (STEM) (the areas targeted by the Higher Education Funding Council). Some differences between STEM fields and the social sciences and humanities are pointed to, but there is no detailed analysis of the problems in these areas, which, of course, are even more difficult to deal with than those in STEM.
Readers outside the UK might be somewhat bemused by this post: the explanation for the concern over this matter is that the Higher Education Funding Councils have proposed the use of 'metrics' (i.e., bibliometrics) for the Research Assessment Exercise. This Exercise has taken place every four or five years for the past 20 years and is crucially important for the universities, since it is the basis upon which the research element of national funding is distributed.