a spin-off from the e-journal dedicated to informal publication of ideas and comment on current affairs in the information world — and occasional personal posts.
31 July 2008
Google competitor - really?
A new search engine called "Cuil" - the names get weirder and more unpronounceable! - is getting some publicity at the moment. It's been on the BBC Technology site and now Jack Schofield of the Guardian has an article on the subject, which has generated quite a lot of discussion worth having a look at. I can't say that I'm impressed: I've just tried searching for "information research" and it came up with 'No results were found for: "information research"'. Very strange! When I remove the inverted commas, I get results, but some of them are odd, and I wonder whether Information Research is actually scanned by the service. This feeling is strengthened when I search for titles of papers in the journal and find nothing - so perhaps the vaunted "biggest search engine" isn't really doing a very good job. There are also weird things going on: for example, a photograph is attached to the entry for this Weblog, but it isn't a picture of me! In fact, the same pictures appear elsewhere on the page in relation to completely different topics. I've no clue as to what is going on here, but it doesn't fill me with either confidence or enthusiasm. I'll stick to Google.
29 July 2008
Speed reading
Digg has a notice of a new 'speed reading' site, spreeder.com - paste in a block of text and spreeder will present it a word at a time at a given speed. Personally, I doubt that this will work - scientific evidence on 'speed reading' appears to be, at best, equivocal, and there's a lot to suggest that we recognize blocks of text rather than individual words. Looking at other sites, I took a test and found that I was currently reading at more than 700 words a minute, and I very much doubt that reading faster would do me any good at all. I recall that, years ago, a colleague of mine went on a speed reading course and, on the first test of reading speed and comprehension, was performing better than the targets for the end of the course! One of the contributors to the discussion on Digg suggests a package called EyeQ, which does use blocks of text and appears to work by training the eye to take in larger blocks - after following the trial I found that text presented at 1,400 words a minute was readable, though I doubt I'd want to read at that speed normally! And I certainly don't want to pay $250.00 to learn how to do so!
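The mechanics behind a word-at-a-time presenter like spreeder are simple enough to sketch in a few lines of Python. The function names below are my own, not spreeder's - this is just an illustration of the timing arithmetic: at a target reading speed, each word stays on screen for 60 divided by the words-per-minute rate.

```python
import time

def word_delay(wpm):
    """Seconds each word stays on screen at the given words-per-minute rate."""
    return 60.0 / wpm

def present(text, wpm=700):
    """Show one word at a time, paced to the target reading speed."""
    for word in text.split():
        print(word)
        time.sleep(word_delay(wpm))
```

At 700 words a minute, each word gets under a tenth of a second - which rather makes the point that this is word recognition, not reading.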
Ranking universities
Wouter on the Web draws attention to the latest webometrics ranking of world universities, rightly noting that "at the moment we have to take these results with a spoon full of salt rather than a pinch".
I have to agree that this measure, whatever it is doing, is hardly likely to be a measure of academic quality. Can one really believe, for example, that Oxford, Cambridge and Imperial College in the UK are nowhere in the top twenty on the basis of quality? Or that the University of Minnesota ranks 34 places above the California Institute of Technology?
So, what is this webometrics ranking doing? Well, a number of measures are taken to identify the extent of the Web presence of the University: the size of its presence in Web pages, the extent to which external sites link to it, the number of so-called 'rich files' (i.e., pdf, ps, doc and ppt files) on the site and the number of papers and citations in Google Scholar. In other words it is simply a composite measure of the size of the institution's Web presence.
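To make the point concrete, a composite measure of this kind amounts to little more than a weighted sum of the four indicators. The sketch below is mine, with purely illustrative weights - the post doesn't reproduce the weights webometrics actually uses:

```python
def web_presence_score(pages, inlinks, rich_files, scholar_items,
                       weights=(0.25, 0.5, 0.125, 0.125)):
    """Weighted sum of the four indicators the ranking combines:
    Web pages, inbound links, 'rich files' (pdf, ps, doc, ppt) and
    Google Scholar papers/citations. Weights are illustrative only."""
    indicators = (pages, inlinks, rich_files, scholar_items)
    return sum(w * v for w, v in zip(weights, indicators))
```

Whatever the weights, nothing in such a sum measures the quality of teaching or research - only the sheer bulk of an institution's Web footprint.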
The danger, of course, is that as in the case of citation measures, university administrators will see the magic word "ranking" and assume that there is some need to rise up the ranks. Quite the opposite is necessary; they should ignore this kind of thing - quite how anyone can find the time to devote to it, instead of doing something useful, I'm at a loss to understand!
23 July 2008
Google's "Knol"
Today, Google has announced the public availability of "Knol" - described as a Web authoring system for creating longer length and, by implication, more serious bits of writing than are created on Weblogs.
From Google's announcement:

The key principle behind Knol is authorship. Every knol will have an author (or group of authors) who put their name behind their content. It's their knol, their voice, their opinion. We expect that there will be multiple knols on the same subject, and we think that is good.
With Knol, we are introducing a new method for authors to work together that we call "moderated collaboration." With this feature, any reader can make suggested edits to a knol which the author may then choose to accept, reject, or modify before these contributions become visible to the public. This allows authors to accept suggestions from everyone in the world while remaining in control of their content. After all, their name is associated with it!
The items on the home page seem to show a bias towards medical issues, with articles on Carpal tunnel syndrome, Chronic stomach pain and Thoracic outlet syndrome. However, there are also links to more mundane things, such as how to install a kitchen tap. The general idea seems to be that all is grist to the Knol mill.
Articles are signed and may be edited - but any edits have to be approved by the author(s). The obvious comparison is with Wikipedia and Citizendium - Knol appears to be more like the latter than the former and I imagine we may see the same persons contributing to all three. Of the three, however, Citizendium seems to have the better editorial control - which is why my own developing article on Information Management is there.
21 July 2008
OA Bibliography
Peter Suber's Open Access News points out that Charles Bailey's Open Access Bibliography has now been transferred to the Open Access Directory, which is a wiki-based site. Readers are encouraged to add to the bibliography: this will be essential if its currency is to be maintained, since the work was frozen in 2004.
So, if you know of items that have been published since 2004, do contribute!
20 July 2008
Has the Internet degraded scholarship?
An article in Science, Electronic Publication and the Narrowing of Science and Scholarship, by James A. Evans is causing a certain amount of interest (unfortunately, it's not openly available, so you'll have to check out your institution's subscription to read the July 18th 2008 issue). One of the suggestions made by Evans is this:
I show that as more journal issues came online, the articles referenced tended to be more recent, fewer journals and articles were cited, and more of those citations were to fewer journals and articles. The forced browsing of print archives may have stretched scientists and scholars to anchor findings deeply into past and present scholarship. Searching online is more efficient and following hyperlinks quickly puts researchers in touch with prevailing opinion, but this may accelerate consensus and narrow the range of findings and ideas built upon
Bill Hooker in his blog, Open Reading Frame takes issue with some of what Evans discovers. In particular he notes Evans's statement:
I show that as more journal issues came online, the articles referenced tended to be more recent, fewer journals and articles were cited, and more of those citations were to fewer journals and articles.
and he comments:
OK, suppose you do show that -- it's only a bad thing if you assume that the authors who are citing fewer and more recent articles are somehow ignorant of the earlier work. They're not: as I said, later work builds on earlier. Evans makes no attempt to demonstrate that there is a break in the citation trail -- that these authors who are citing fewer and more recent articles are in any way missing something relevant. Rather, I'd say they're simply citing what they need to get their point across, and leaving readers who want to cast a wider net to do that for themselves (which, of course, they can do much more rapidly and thoroughly now that they can do it online).
Well, I think I am with Evans here - would it were true that authors are not ignorant of earlier work. In my experience as an Editor and a PhD supervisor, I am continually amazed at the extent to which authors and students are unaware of pre-WWW work. It seems that if the work was done before 1995 it is assumed to have no relevance to the present day. In many cases, of course, that will be true and in some cases the research record is a record of building upon earlier work. In the case of many subfields in information science, however, it isn't the case. A great deal of work was done in the 1970s, which is now completely ignored. Researchers rediscover wheels again and again, when a search of the earlier literature would have revealed that what they think of as novel, was novel 50 years ago!
I believe that everything we do needs to be rooted in its historical context; without that, we assume that everything that has gone before has nothing to teach us, whereas the reality is that much has been done that could be of relevance, if only it were known about.
To take just one example, a project at Hamline University in the USA in the 1970s explored how librarians could support teaching. Assistants were appointed to work closely with teachers, sitting in on courses, identifying material that was often of more use to undergraduates than the research papers the teacher was citing, and generally performing the kind of 'information scientist' role that Jason Farradane (another forgotten name?) promoted in industry. The report demonstrated the efficacy of employing librarians in this role but also pointed to the economic costs and, as a result, the initiative was abandoned and the report forgotten. But that report says more about how to engage with teaching and how to support the learner than the vast majority of publications on 'information literacy' do today - sadly, it is never cited :-)
19 July 2008
Google search technology
The Technologies Behind Google Ranking are the subject of an item in the Official Google Blog. The title is a little misleading, since the article tells us what the technology does rather than what it is :-)
16 July 2008
WebCite
Thanks to Jim Till's Weblog for drawing my attention to Gunther Eysenbach's paper at the E-PUB conference on WebCite. Readers of Information Research will probably be aware that we now ask authors to archive any referenced Web documents to WebCite to avoid the common problem of "link rot". Eysenbach's paper is a useful account of WebCite and of its plans for the future.
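The "link rot" that WebCite guards against is easy enough to detect programmatically. The sketch below is mine, not WebCite's - a minimal check, using only the Python standard library, of whether a referenced URL still responds:

```python
from urllib.request import Request, urlopen
from urllib.error import URLError

def is_rotten(url, timeout=10):
    """Return True if the URL looks dead (no response or an HTTP error) -
    the 'link rot' that archiving referenced Web documents guards against."""
    try:
        # A HEAD request fetches headers only, not the whole page.
        urlopen(Request(url, method="HEAD"), timeout=timeout)
        return False
    except URLError:
        return True
```

A check like this only tells you a link is dead, of course; archiving at WebCite before publication is what lets a reader recover the page as it was when cited.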
09 July 2008
"Suing George W. Bush: A bizarre and troubling tale"
Read about what happens when the Bush intelligence system meets the courts.
05 July 2008
A vote for Opera
In Thursday's issue of the Technology supplement to the Guardian newspaper, Andrew Brown promotes Opera as his browser-of-choice over Firefox. Brown likes the fact that:
"It does the two things that I really need in any browser, which are tab management and ad-blocking, very well indeed. It has a crude but effective note facility which can be synchronised across computers. The bookmarks and the history are both indexed and can be searched almost instantaneously"
He also likes the mail client incorporated into Opera and, from his description, if you want to keep your e-mails on your own hard disc, Opera would seem to have things to recommend it.
Before Firefox came along, I often used Opera in preference to IE, and, of course, Opera introduced a number of features (such as tabs) which many now associate with Firefox. However, I'm now permanently hooked to Firefox and, although I have tried out the latest version (Opera 9.5) I think I'm pretty unlikely to go over to it now. But, you never do know...
The IR reader survey
This little widget tells you how many people have responded to the readership survey:
The curious thing is that people appear to drop out without completing the survey. So, as of now, 455 people have answered question 1, but only 428 are shown as having completed, and by the time we get to the final question, only 421 are shown to have answered. If anyone has experienced difficulty in completing the questionnaire, please let me know.
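Out of curiosity, the drop-out can be put in percentage terms; a couple of lines of Python (the function name is mine) does the arithmetic on the figures above:

```python
def dropoff(started, completed):
    """Percentage of respondents who abandoned the survey part-way."""
    return 100.0 * (started - completed) / started

# Figures from the post: 455 answered question 1, 421 the final question.
print(round(dropoff(455, 421), 1))  # → 7.5
```

So about one respondent in thirteen drops out before the final question - not disastrous, but worth understanding.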
Report on open access
Peter Suber's regular report on the state of open access deals this month with "Open access and the last-mile problem for knowledge". The last mile problem is "the one at the end of the process [of research and scholarly communication]: making individualized connections to all the individual users who need to read that research". The point of OA, of course, is that as it grows, the 'last mile problem' reduces, since all one requires for access is an Internet connection. A very interesting and thoughtful piece.