Showing posts with label bibliometrics. Show all posts

03 April 2011

Impact factors and true 'influence'

I've never been a particular fan of bibliometrics, although many years ago (40, to be exact!) I taught a course in the subject. There are uses for the methods, such as determining which journals to subscribe to when setting up a new library service - although making the rounds of the clients and seeking their advice probably serves as well - but generally it seems that many bibliometric studies are carried out simply as an exercise in the methods, or to refine them, without much being said about their practical applications.
I was pleased, therefore, to come across a paper that appears to have something useful to say about 'impact factors', drawing attention to some of the problems and proposing an alternative method of determining 'impact' or, as I would prefer, 'influence'. 'Impact' is one of those macho, aggressive words, chosen, it seems, to impress, whereas what one is really talking about is the influence of a journal within a scholarly field.

The paper is Integrated Impact Indicators (I3) compared with Impact Factors (IFs): an alternative research design with policy implications, by Loet Leydesdorff and Lutz Bornmann. The authors argue that the journal impact factor is flawed as a consequence of being based on the two-year average of citations, whereas a 'true' indicator of 'impact' would be based on the sum of the citations. This is argued by analogy with physical impact, where the force of a collision is the product of mass and velocity (if I have understood things aright, not being a statistician!). On this basis, the total number of citations needs to be taken into account, rather than the average alone.
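The contrast between averaging and summing can be illustrated with made-up numbers (nothing here comes from the paper's data, and the real I3 calculation is more elaborate than a plain sum, but the average-versus-sum point is the same): a small journal with a few highly cited papers can out-rank a larger journal on the average measure while attracting fewer citations in total.

```python
# Illustrative (invented) citation counts per citable item, two-year window.
# Journal A: few papers, each highly cited; Journal B: many papers, modestly cited.
journal_a = [30, 28, 32]
journal_b = [12, 10, 14, 11, 13, 12, 10, 14, 11, 13]

def impact_factor(citations):
    """Average citations per item: the IF-style, size-independent measure."""
    return sum(citations) / len(citations)

def summed_impact(citations):
    """Total citations: the size-dependent, 'summed impact' measure."""
    return sum(citations)

print(impact_factor(journal_a), summed_impact(journal_a))  # 30.0 90
print(impact_factor(journal_b), summed_impact(journal_b))  # 12.0 120
```

On the average, Journal A wins; on the sum, Journal B does - which is how a ranking can flip when one moves from impact factors to summed citations.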

The authors use the LIS category of Web of Knowledge, showing that on the basis of 'summed impact' JASIST is ranked ahead of MIS Quarterly, rather than behind it. However, this gives rise to another problem: how to categorise the journals in the first place. MIS Quarterly's primary classification in Web of Knowledge is in the information systems category and it is something of a mystery why it appears in the LIS classification at all. Inevitably, then, a core journal like JASIST must appear ahead of one that is misplaced in the classification scheme. This is not to dispute the argument of Leydesdorff and Bornmann; I simply raise the issue.

The LIS category in Web of Knowledge is a complete mess, with some journals having only a secondary home there and others placed there, seemingly, because there was nowhere else to put them - such as The Scientist, which, in any event, is a kind of news magazine rather than a scholarly journal. Other examples of journals in the LIS list that have their primary location somewhere else include International Journal of Computer-Supported Collaborative Learning, Information Systems Research, Journal of Computer-Mediated Communication, Journal of Information Technology, and too many more to list. Anyone wishing to compare rankings (arrived at by any means) would need to clean up the list on the basis of which journals LIS scholars are likely to seek publication in - I suspect that, instead of there being 66 journals to consider, one would probably have something like half that number. Urging LIS researchers to publish in MIS Quarterly or Information Systems Research is a completely pointless exercise, since their papers are likely to be ruled "out of scope". This is of importance when it comes to journal ranking since, using the 5-year impact factor, seven of the top 20 journals are, I would argue, 'out of scope'.

Another point that might be addressed is that of the general versus the specific. We might expect that a specialist journal, catering for a well-developed area of research, will have a higher 'impact' than a more general-purpose journal. Thus, of the top 20 journals ranked by 5-year impact factor, six are what I would call 'niche' journals, such as the International Journal of Geographical Information Science and Scientometrics. If, as is increasingly the case, researchers are not simply urged but required to publish in the top-ranked journals, this leaves (excluding the out-of-scope titles) not 20 but seven 'general purpose' journals in which to seek publication - or, since ARIST is no longer published, six: Information Management; JASIST; Information Processing and Management; Journal of Information Science; Journal of Documentation; and Library and Information Science Research.

If we want to extend that to a "top ten" of general-purpose LIS journals, we would add: International Journal of Information Management; Information Research; Library Quarterly; and Journal of Library and Information Science - which takes us down to number 34 in the WoK rankings.

All of which serves to demonstrate that lists and rankings are treacherous things that are probably best avoided. :-)

01 September 2008

Research assessment and bibliometrics

The Higher Education Funding Councils in the UK have issued an announcement on a pilot exercise (involving twenty-two UK universities) on the use of bibliometrics in the new "Research Excellence Framework", which will take over from the Research Assessment Exercise now underway.

[As an aside, it looks as though the marketing men have infiltrated the HEFC - "Research Assessment Exercise" was obviously far too explicit for them and so it has to be something new that completely hides what is actually going on - just as the "Committee of Vice Chancellors and Principals" became the totally fuzzy "Universities UK"! Makes one wonder about the intelligence of those at the top of the academic tree.]

However, back to the message. The announcement points to another document, Bibliometrics and the Research Excellence Framework. This tells us how the exercise will actually be carried out. Research output data will be collected from the participating institutions (why is this necessary, given that the HEFC already has such data for the current RAE?) and processed by Evidence Ltd., a data processing company based in Leeds.

Both documents express caution about using bibliometric indicators and the point is specifically made that journal impact factors will not be used. The bibliometric indicators for each institution in each field will be 'normalised' by comparison with the "field norm", that is, "the average number of citations for all papers published worldwide in the same field, over the same period". This is where Evidence Ltd. will need to be very careful indeed, since what constitutes the "same field" is open to wide interpretation. It will be especially risky to rely upon the journal groupings used by Web of Knowledge and SCOPUS to define the "field". I referred in my earlier Weblog post to this problem as far as defining the field of "Information Science & Library Science" is concerned, and I have no doubt that similar problems exist in other fields.
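The risk can be sketched with invented numbers (the journal groups, citation counts and field labels below are entirely hypothetical, not taken from any real data): the very same paper looks strong or weak depending on which journals are counted as belonging to its "field", and hence what the field norm comes out at.

```python
# Hypothetical citation counts per paper in two notional fields.
lis_papers = [2, 3, 1, 4, 2]           # a modestly cited field, e.g. LIS
is_papers = [15, 20, 18, 25, 22]       # a heavily cited field lumped in with it

def field_norm(citation_counts):
    """Average citations per paper in the 'field': the normalisation baseline."""
    return sum(citation_counts) / len(citation_counts)

paper_citations = 5  # one paper's citation count, to be normalised

# Normalised against the narrower field, the paper is well above average...
print(round(paper_citations / field_norm(lis_papers), 2))              # 2.08
# ...but with the other journals counted into the "field", it looks weak.
print(round(paper_citations / field_norm(lis_papers + is_papers), 2))  # 0.45
```

A normalised score above 1 reads as "above world average" and below 1 as "below world average", so the verdict on the paper flips entirely with the field definition.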

"Bibliometrics and the Research Excellence Framework" also notes that, because of the difficulty of using bibliometric indicators across all disciplines, "other indicators" will also be used. But we are not told what these "other indicators" might be - perhaps they don't actually know yet? The document also proposes the use of a "citation profile" which will show how the papers produced by a particular institution relate to "worldwide norms", so that papers are labelled, for example, "Below world average" or "Above world average". Quite what this means is difficult to understand - does HEFC seriously believe that this would be anything other than a completely arbitrary measure? Especially in social science fields, which are very much culture-bound, comparison of work done in the UK, with work carried out "worldwide" - which would actually mean (because of the volume of output) "carried out in the USA" - would simply result in nonsense.

Having retired from anything to do with the administration of higher education, I shall look on, fascinated, to see what emerges :-)

09 November 2007

Bibliometrics and research assessment

A study for Universities UK (previously the Committee of Vice Chancellors and Principals - a much better title, which actually told you who was involved!) has come to a rather predictable conclusion:

It seems extremely unlikely that research metrics, which will tend to favour some modes of research more than others (e.g. basic over applied), will prove sufficiently comprehensive and acceptable to support quality assurance benchmarking for all institutions.

However, at least that conclusion has been reached and, rather importantly, the report is mainly concerned with the potential for applying bibliometric measures to fields in science, technology, engineering and medicine (STEM) (the areas targeted by the Higher Education Funding Councils). Some differences between the STEM fields and the social sciences and humanities are pointed out, but there is no detailed analysis of the problems in these areas, which, of course, are even more difficult to deal with than those in STEM.

Readers outside the UK might be somewhat bemused by this post: the explanation for the concern over this matter is that the Higher Education Funding Councils have proposed the use of 'metrics' (i.e., bibliometrics) for the Research Assessment Exercise. This Exercise has taken place every four or five years for the past 20 years and is crucially important for the universities, since it is the basis upon which the research element of national funding is distributed.