12 August 2008

Citations and performance evaluation

Gerry McKiernan has drawn attention on the BOAI discussion list to a special issue of Ethics in Science and Environmental Politics, devoted to 'The use and misuse of bibliometric indices in evaluating scholarly performance'. All of the papers are open access. The authors include Philip Campbell, Editor-in-Chief of Nature, on escaping from the Impact Factor, who concludes: 'Although the current system may be effective at measuring merit on national and institutional scales, the most effective and fair analysis of a person's contribution derives from a direct assessment of individual papers, regardless of where they were published'; Peter Lawrence on how measurement harms science; Anne-Wil K. Harzing and Ron van der Wal on Google Scholar as a new source for citation analysis; and Stevan Harnad on validating research performance metrics against peer rankings. There are fourteen papers in all, including the introduction to the issue, and all are worth at least dipping into.
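For readers unfamiliar with the measure Campbell criticises: the journal impact factor for a given year is conventionally calculated as the citations received that year to a journal's items from the two preceding years, divided by the number of citable items published in those two years. A minimal sketch of the arithmetic (the figures are invented purely for illustration):

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Journal impact factor for year Y: citations received in Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Invented figures, for illustration only:
print(round(impact_factor(420, 150), 2))  # 2.8
```

The simplicity of the calculation is part of Campbell's point: a single journal-level average says nothing about the merit of any individual paper in that journal.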

The Harzing and van der Wal paper is likely to be somewhat contentious, since other research into the use of Google Scholar has downplayed its value. I first raised the question of using Google in performance measurement in a message to the JESSE mailing list in 2002. I commented that:
My most cited paper is "On user studies and information needs" (1981) - a Web search (using Google) revealed 118 pages that listed the title. The pages were reading lists, free electronic journals, and documents that would never be covered by SSCI, such as reports from various agencies. SSCI revealed, if I recall aright, 79 citations of the paper. The question is: is the Web revealing impact more effectively than SSCI? Citation in scholarly papers takes a variety of forms and much citation is of a token variety - x is cited because x is always cited. On the other hand citation on reading lists implies some positive recommendation of the text, and mention in policy documents and the like, implies (at least in some cases) that some benefit has been found in the cited document.

This led to an extended and interesting discussion. It was followed by a paper on the subject in JASIST by Vaughan and Shaw (Volume 54, Issue 14, pp. 1313-1322) and by a comparison of Google Scholar and Web of Science by Péter Jacsó in Current Science (Vol. 89, No. 9, pp. 1537-1547); incidentally, Harzing and van der Wal reply to some of Jacsó's criticisms of Scholar.
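Tools built on Google Scholar data, such as Harzing's Publish or Perish program, compute summary statistics of this kind, among them Hirsch's h-index: the largest number h such that h of an author's papers have at least h citations each. A minimal sketch of that calculation (the citation counts are invented purely for illustration):

```python
def h_index(citation_counts):
    """Hirsch's h-index: the largest h such that at least h papers
    have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still has at least `rank` citations
        else:
            break
    return h

# Invented counts, for illustration only:
print(h_index([25, 8, 5, 3, 3, 1]))  # 3
```

Of course, the number coming out of such a calculation is only as good as the citation data going in, which is precisely where Google Scholar's coverage is contested.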

All of this suggests that interest in the application of 'metrics' in performance measurement is growing and, perhaps, that there is also a growing awareness that the impact factor provided by Web of Knowledge is not necessarily the only tool that can be used. We have also seen the emergence of the SCImago analysis of impact, to which I have referred earlier. These developments, as well as providing a new academic industry of papers on Web citation, are likely to bring about a re-assessment of metrics for performance measurement and, perhaps, greater wariness about their use than is currently shown by some administrators.

09 August 2008

Open access and scholarly neglect

Thanks to Peter Suber's OA News for drawing attention to a forthcoming Communications of the ACM paper, 'Open access publishing in science: why it is highly appreciated but rarely used', by Florian Mann and colleagues. And what is holding things back? According to Mann et al. it is the "short-term performance related concerns and the wait and see attitude of the majority of researchers". One can understand the concerns over performance, at least in the UK, because of the impact of the Research Assessment Exercise, but I am less impressed by the wait-and-see attitude. It speaks of a total lack of concern over the wider dissemination of scholarly information that says more about ego than it does about social responsibility.

It is not entirely clear what Mann and co. mean by the Golden route to open access: once again, as so often, there is a suspicion that they mean the use of author charging to subsidise publishers. The position would be much clearer if the notion of the Platinum route were separated from author charging; by Platinum I mean publication in journals that are open access and free of author charges. Only the Platinum route gives truly open access, since it is 'open' at both ends of the process: no author charges and no subscription charges. If all the resources that are currently, in my view, wasted on supporting repositories and author charging were put into the development of Platinum journals, true open access would rapidly become the dominant mode of scholarly publishing. However, that is not likely to happen as long as university administrators remain ignorant of the potential and the scholarly community remains in a wait-and-see posture. If you believe that open access is beneficial to society, why are you publishing in restricted journals? Instead of waiting and seeing, start getting out and doing!

02 August 2008

Another OA categorisation

Peter Suber, whose Open Access Newsletter is a great source for anyone interested in OA, has come up with a new classification of types of open access: he suggests the term 'gratis' where 'price barriers' are removed, and 'libre' for the removal of 'permission barriers'. However, since both words mean "free", I'm not sure that the distinction would be retained in general parlance.

However, I'm not sure the distinction is necessary: the aim seems to be to overcome the problem that so-called "Gold OA" (in terms of journals) can mean both journals that are completely free, like Information Research, and those that use author charging to enable free access. I distinguish between these by reserving "Gold" for the author-charging mode, since it is that mode that has become associated with the notion in the mind of officialdom, and use "Platinum" for what Peter would now call "libre Gold" (or "Gold libre") - I think :-)

The situation is confused by the association between open access publishing and institutional or disciplinary repositories. While the latter provide open access to a proportion of the total literature in any field, they are at present a somewhat disorganised collection of sources, some of which provide good coverage of an institution's output, while others merely skim the surface. A couple of years ago I surveyed the repositories in the UK and found, for example, that although the University of Cambridge had more than 30,000 items in its repository, only 16 were preprints of scientific papers. I regard repositories as an interim solution: the future, inevitably, will be the "Platinum" publishing mode. The economics will drive inexorably towards this mode of scholarly communication - not, perhaps, in what is left of my lifetime, but inevitably.