Wouter on the Web draws attention to the latest webometrics ranking of world universities, rightly noting that "at the moment we have to take these results with a spoon full of salt rather than a pinch".
I have to agree that this measure, whatever it is measuring, is hardly likely to be a measure of academic quality. Can one really believe, for example, that Oxford, Cambridge and Imperial College in the UK are nowhere in the top twenty on the basis of quality? Or that the University of Minnesota ranks 34 places above the California Institute of Technology?
So, what is this webometrics ranking doing? Well, a number of measures are taken to identify the extent of a university's Web presence: the size of its presence in Web pages, the extent to which external sites link to it, the number of so-called 'rich files' (i.e., pdf, ps, doc and ppt files) on the site, and the number of papers and citations in Google Scholar. In other words, it is simply a composite measure of the size of the institution's Web presence.
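A composite measure of this kind can be sketched as follows: rank each indicator separately, then combine the ranks with weights. Everything below is hypothetical — the university names, the figures, and the weights are invented for illustration, not the ranking's actual method or data.

```python
# Hypothetical sketch of a composite web-presence score. Each indicator
# (page count, inbound links, rich files, Google Scholar items) is turned
# into a rank (1 = largest), and the ranks are combined with illustrative
# weights. None of the weights or figures below are the real ones.

# Invented indicator values for three fictional universities.
universities = {
    "Univ A": {"pages": 120_000, "inlinks": 40_000, "rich_files": 9_000,  "scholar": 15_000},
    "Univ B": {"pages": 300_000, "inlinks": 25_000, "rich_files": 4_000,  "scholar": 22_000},
    "Univ C": {"pages": 80_000,  "inlinks": 60_000, "rich_files": 12_000, "scholar": 9_000},
}

# Illustrative weights only; they sum to 1.
weights = {"pages": 0.2, "inlinks": 0.5, "rich_files": 0.15, "scholar": 0.15}

def composite_ranking(data, weights):
    indicators = list(weights)
    # Rank each indicator separately: position 1 goes to the largest value.
    ranks = {u: {} for u in data}
    for ind in indicators:
        ordered = sorted(data, key=lambda u: data[u][ind], reverse=True)
        for pos, u in enumerate(ordered, start=1):
            ranks[u][ind] = pos
    # Weighted sum of ranks: a lower score means a better overall position.
    scores = {u: sum(weights[i] * ranks[u][i] for i in indicators) for u in data}
    return sorted(data, key=lambda u: scores[u])

print(composite_ranking(universities, weights))
# → ['Univ C', 'Univ A', 'Univ B']
```

Note how heavily the outcome depends on the chosen weights: with inbound links at 0.5, the smallest but most-linked-to institution comes out on top.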
The danger, of course, is that as in the case of citation measures, university administrators will see the magic word "ranking" and assume that there is some need to rise up the ranks. Quite the opposite is necessary; they should ignore this kind of thing - quite how anyone can find the time to devote to it, instead of doing something useful, I'm at a loss to understand!
Thank you for your critical comments. The keyword here is feasibility. For a complete and precise description of world universities we would need several dozen variables and a high level of standardization of those variables (what is a professor in different countries?). Moreover, for many of these variables there is no reliable source, even from the university itself. This is the reason other rankings usually focus on the top 200 or top 500 institutions.
When checking ARWU (Shanghai) or THES (Times), some of the variables are not very useful for ranking large sets. Consider that there are only a few universities with 2 or more Nobel Prizes, so ranking on such low numbers is not fair. This can explain the lack of reliability of positions below 200 in the cited rankings.
So, why web ranking? First, the numbers involved are far larger, millions instead of hundreds. Second, it is easier to obtain data for institutions worldwide: Webometrics ranks 16,000 universities, including most of the developing countries not covered by other rankings. Third, the Web reflects (or at least should, in the near future) the full set of activities of an academic institution (teaching, research, transfer, community involvement), not only the number of papers published. There is probably a lot of "noise", but parts of that noise are certainly very important for prospective students (sports!) or other citizens (general knowledge; half of faculty members come from the Humanities and Social Sciences, fields not very prone to publication). Fourth, the Web reaches hundreds of millions of people (paper reaches thousands at best). Electronic publication should be mandatory, or at least an evaluation system should take it very seriously. Webometrics is paving that road.
Regarding methodology, it is possible to use crawlers instead of search engines, but this is unfeasible for such a large task. Moreover, most users use Google and Co. to recover information, so if your information is not in those databases it is invisible; it does not exist. To avoid inconsistencies we choose specific data centers, not the central webservers, collect the figures twice, correct biased results and select the most relevant formats. Of course there are problems with Scholar, but we have contacted them and they will provide new tools in the non-beta version.
Finally, let us not forget the results. Webometrics correlates highly with THES & ARWU (0.6-0.7), and provides useful insights about crazy web policies (Imperial College, Paris & Catalonian universities, Johns Hopkins, ..) or great repository initiatives (Penn State, Linkoping, Complutense).
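The 0.6-0.7 figures quoted above are rank correlations. As a sketch of what that means, Spearman's rho between two rankings can be computed from the squared differences in positions; the rank lists below are invented for illustration, not the actual Webometrics, THES or ARWU data.

```python
# Spearman rank correlation between two rankings of the same institutions.
# rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), assuming no tied ranks.

def spearman_rho(rank_a, rank_b):
    """Both arguments map institution -> rank position; same key sets."""
    assert set(rank_a) == set(rank_b)
    n = len(rank_a)
    d2 = sum((rank_a[u] - rank_b[u]) ** 2 for u in rank_a)
    return 1 - 6 * d2 / (n * (n * n - 1))

# Invented positions of five fictional universities in two rankings.
web_rank = {"U1": 1, "U2": 2, "U3": 3, "U4": 4, "U5": 5}
other_rank = {"U1": 2, "U2": 1, "U3": 4, "U4": 3, "U5": 5}

print(spearman_rho(web_rank, other_rank))  # → 0.8
```

A rho of 1.0 would mean the two rankings agree exactly; values of 0.6-0.7 over thousands of institutions indicate broadly similar orderings with substantial local disagreement.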
Please, take the Web seriously.