Journal ranking and the use of impact factors and the like do seem to raise the blood pressure :-) The Journal of Cell Biology (vol. 179, no. 6, pp. 1091-1092) published an editorial, "Show me the Data", which took Thomson to task over the Journal Citation Reports, claiming that Rockefeller University Press had bought the database and then found that the numbers didn't tally with the published data.
Thomson has now responded with "Thomson Scientific Corrects Inaccuracies in Editorial", giving a closely argued and, to me, quite convincing rebuttal of the charges.
However, I doubt the value of 'impact' measures overall, and particularly doubt their value in measuring the 'quality' of a journal and in guiding promotion and research assessment exercises. This is partly because like is not being compared with like. Take the 'information science and library science' category used by Thomson (which is then used by other manipulators of the data, such as Eigenfactor.org). In the Thomson database, this category has 53 journals listed, but not all are of the same type, and some are not even part of the designated field. Sixteen of the journals are not recognizable as belonging to the field of library and information science; indeed, most of them have their primary listing in other categories. We then have the Annual Review of Information Science and Technology, which is a serial, not a journal, and which, because of its character, is widely cited. There is also Library Journal, which might be called the 'trade journal' of the profession in the USA, and which has relatively few citable papers, mostly of a non-research nature. Next, we have 20 journals that can be described, on one basis or another, as 'niche journals': such publications as Knowledge Organization, which deals with classification; the Law Library Journal; and the Journal of the Medical Library Association. Finally, we have one German-language periodical, which has a low impact factor, but which is a highly-rated journal in its own language group.
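To make the arithmetic explicit, here is a minimal sketch of the tally. The counts are those given above; the grouping labels are my own shorthand, not Thomson's categories:

```python
# A minimal tally of the 'information science and library science' category,
# using the counts from the paragraph above. Labels are my own shorthand.
category_total = 53

excluded = {
    "primary listing in other fields": 16,
    "serial, not a journal (ARIST)": 1,
    "trade journal (Library Journal)": 1,
    "niche journals": 20,
    "German-language journal": 1,
}

core_journals = category_total - sum(excluded.values())
print(core_journals)  # -> 14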
This leaves us with 14 journals that can be said to be the core journals of the field. Any measure that takes no account of the kinds of differences among journals that I have outlined cannot sensibly be used for any comparative purpose, and certainly not for judging the quality of a researcher's work. What would be the point, for example, of criticising an information researcher in the field of law for publishing in the Law Library Journal (impact factor 0.508) instead of in MIS Quarterly (impact factor 4.731)? Comparisons among journals only work if we can assume that any paper is likely to receive the same treatment from each journal; a paper on legal information services operated, say, by voluntary community agencies is hardly likely to be deemed 'in scope' by the editors of MIS Quarterly.
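For anyone unfamiliar with how such figures arise, the standard two-year impact factor is simply the citations received in one year to a journal's output of the previous two years, divided by the citable items published in those two years. A minimal sketch follows; the figures are invented for illustration and are not Thomson's actual counts for either journal:

```python
# Hedged sketch of the standard two-year impact factor calculation.
# All figures below are invented for illustration only.
def two_year_impact_factor(citations: int, citable_items: int) -> float:
    """Citations received in year Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations / citable_items

# A niche journal serving a small citing community scores low even if
# every paper is well used within that community.
print(round(two_year_impact_factor(60, 118), 3))   # ~0.508
print(round(two_year_impact_factor(520, 110), 3))  # ~4.727
```

The point the sketch makes is that the denominator and the size of the citing community drive the number; nothing in the calculation registers whether a journal's audience is a whole discipline or a specialism within it.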
Also in the news is the SCImago journal ranking system, which I've written about recently: Declan Butler writes about it in Nature News.