30 December 2007

A new Pew Internet and American Life report

Just published: Information searches that solve problems. How people use the internet, libraries, and government agencies when they need help.

There's a great deal of interesting reading in this report by Leigh Estabrook (Prof. Emerita, Univ. of Illinois), Evans Witt and Lee Rainie. One table records the "Sources for Help in Dealing with a Specific Problem", from which we find that the Internet has the highest proportion of users:

Use the internet - 58%
Ask professional advisors, such as doctors, lawyers or financial experts - 53%
Ask friends and family members - 45%
Use newspapers, magazines and books - 36%
Contact a government office or agency - 34%
Use television and radio - 16%
Go to a public library - 13%
Use another source not mentioned already - 11%

Particularly interesting is that while only 42% did NOT use the Internet for information on specific problems, 87% did NOT use the public library. So, while public libraries may still serve important functions in their communities, it seems that answering specific problems never became firmly enough established as a library function to persist into the age of the Internet. It may have something to do with the fact that on the Internet one can find not only information but also advice from trusted sources (e.g., on health problems), whereas public librarians have always steered away from offering advice or, in most cases, from serving as a venue for advisory services offered by other agencies.

Whatever the ultimate outcome in the future of the public library, it seems that the answering of specific problems is unlikely to be part of that future.

A metasearch engine

Thanks to Research Buzz for drawing my attention to Zuula - a new-ish metasearch engine. I don't use these things much myself, but Zuula might change my mind, so I've added it to my Firefox search engines.

It's interesting to see what comes up from the different engines when searching for information research: for Web searches Zuula uses Google, Yahoo, MSN, Gigablast, Exalead, Alexa, Accoona and Mojeek (yes - I'd never heard of some of those, either!) and I score these on the scale: 3 - the journal link appears as the top item; 2 - ...in the top five; 1 - ...on the first page; 0 - ...not on the first page. On this basis, the scores are:

3: Google, Yahoo, MSN, Gigablast
1: Exalead
0: Alexa, Accoona, Mojeek.

Using the blog search proved interesting: Zuula uses Google, Technorati, IceRocket, Blogpulse, Sphere, Bloglines, Blogdigger, BlogDimension and Topix. In terms of finding items relating to either the journal or the Weblog (in either its old or its new manifestation) on the top page of results, I had expected Technorati to win, but no! Using a simple count of the number of references to the journal or my Weblog, we find:

9: Topix
4: Google
3: BlogDimension
2: Blogdigger
1: Bloglines
0: Technorati, IceRocket, Blogpulse, Sphere

I hadn't heard of Topix, but it is obviously worth a look.

29 December 2007

On the longevity of papers in OA e-journals

As the year's end approaches, I thought I would take a look at my Google Analytics reports to see what is going on. At least one thing seems worth reporting: the most-visited journal paper on the InformationR.net site (which includes many things other than the journal!) was published in Vol. 4 No. 3, February 1999. This was Joyce Kirk's paper on information management, and it has racked up 4,177 page views in the past year. Looking further, I found that nineteen papers from volumes 3 and 4 appeared in the top 100 papers (measured by page views). Currently, Joyce's paper has over 48,000 'hits' and, according to Google Scholar, 21 cites. The other early papers in the top 20 were:

Ranked 5: Student attitudes towards electronic information resources, by Kathryn Ray & Joan Day (Vol 4 No. 2 paper 54 )

Ranked 6: Ethnomethodology and the Study of Online Communities: Exploring the Cyber Streets, by Steven R. Thomsen, Joseph D. Straubhaar, and Drew M. Bolyard (Vol 4 No. 1 paper 50)

Ranked 17: "In the catalogue ye go for men": evaluation criteria for information retrieval systems, by Julian Warner, (Vol. 4 No. 4 paper 62)

Ranked 20: MISCO: a conceptual model for MIS implementation in SMEs, by R. Bali & G. Cockerham (Vol. 4 No. 4 paper 61)

In carrying out this exercise, I discovered that not all of the papers in the journal have Google Analytics code in them, so I'll have to remedy that!
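Incidentally, a quick way to find which pages lack the tracking code is to scan the site's files for the Analytics snippet. Here is a minimal sketch of that idea; the directory path and the marker strings it searches for are assumptions for illustration, not a description of the actual site:

```python
# Rough sketch: list HTML files that do not appear to contain the
# Google Analytics snippet. Path and marker strings are hypothetical.
import os

SITE_ROOT = "/var/www/informationr"    # hypothetical location of the site files
MARKERS = ("urchinTracker", "_uacct")  # strings found in the 2007-era GA snippet

missing = []
for dirpath, _dirnames, filenames in os.walk(SITE_ROOT):
    for name in filenames:
        if not name.endswith((".html", ".htm")):
            continue
        path = os.path.join(dirpath, name)
        with open(path, encoding="utf-8", errors="ignore") as f:
            text = f.read()
        if not any(marker in text for marker in MARKERS):
            missing.append(path)

print(f"{len(missing)} pages appear to lack the Analytics code")
```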

28 December 2007

Portrait view

I treated myself to a new screen for Christmas, a 19" Viewsonic VP930b, which has the big advantage of a 'pivot' mode, allowing me to use it in either landscape or portrait orientation. Most of the time I use it in portrait view, since most of the time I'm composing text, and being able to see an entire A4 page on the screen at 100% magnification is really worth having. For many Web pages, too, the portrait view is superior, since one can scan more of the page at a glance. Some pages (although I suspect they may be a minority these days) assume a landscape view, and there are those sites that present an opening page, and sometimes more, in the form of a small, landscape-oriented box (usually with incredibly small type), for which the landscape orientation is normal. Film viewing will also require the landscape view, but I suspect there will be more and more demand for pivot screens that can be used in either mode.

Problems of privacy and Google Reader?

Like millions more, I guess, I use Google Reader for my RSS feeds and I've always found it perfectly satisfactory. However, hackles are being raised by Google's decision to make the links you have shared with friends or family available to anyone. Now, this doesn't bother me, because I rarely share links - in fact, I think I've only done it once - but others might well be put off.

It doesn't stop there: if you are a GTalk user for chat, then the links will also be shared with anyone with whom you have chatted - again, I don't use GTalk, so it doesn't affect me.

Read Jack Schofield's column for the low-down.

25 December 2007

Merry Christmas

I really should have said this before the previous message, but...

Never mind - better late than never: a very Merry Christmas and Happy New Year to one and all - especially if you are a regular reader of Information Research!

...and don't get me started on the collapse of tradition that results in people saying 'Happy Christmas' :-)

Farewell Browster, hello Cool Iris

I had been using the Browster add-in for Firefox for some time, but was experiencing a problem, so went looking for an update. That's when I discovered that it had died. However, something better turned up, Cool Iris - another link previewer, which works in much the same way as Browster, but which I find to be more user-friendly. With Browster, the preview pane often popped up when you didn't want it to do so, and that rarely happens with Cool Iris. So, if you are looking for something of the kind, Cool Iris will do the job for you.

23 December 2007

The "SCImago Influence Measure"

I mentioned the new SCImago journal ranking site a little while back and thought I would explore it a little further. In doing so, I find that the "Cites per doc" measure, which is given for one-, two-, three- and four-year periods, might be called the 'SCImago Influence Measure' or 'SIM', since it is more or less equivalent to the Web of Knowledge Impact Factor. I prefer 'influence' to 'impact', since the latter is rather macho and percussive, while the former is much more subtle and, I think, more appropriate, since what we are talking about is the influence that a journal has within its field.

The four-year SIM is particularly interesting, I think, since it allows for a much longer period of time within which the documents have a possibility of being cited. Using the SCImago database to download the data also gives the opportunity for producing some interesting comparisons. The graph below shows the four-year SIM for a long-established journal, the Journal of Documentation, compared with three, now established, open access journals - Information Research, the Journal of Digital Information and the Journal of Electronic Publishing. It is striking that on this measure all three OA journals are now approaching the same level of 'influence' as the older journal. JEP has had some problems in maintaining publication, hence the dip in 2006, but with its future now established (I believe), I imagine that the growth in its influence will resume.

21 December 2007

Open access and Esposito - again

Joseph Esposito, whose article in The Scientist raised some OA hackles last month is at it again - and has been roundly answered by Alma Swan. Read them both for a comparison of ill-thought-out comment vs. sound rebuttal.

17 December 2007

Free Rice!

Let me recommend the online vocabulary improvement game, Free Rice. Not just a game, but a way of donating rice to the hungry of the Third World. For every word you get right, 20 grains of rice are donated. I got up to 2,400 grains before I packed it in, so you can tell that it is rather addictive. There are some difficult words in there, but very often you can select the right meaning because all of the other meanings appear to be wrong! What the 'vocabulary level' might be is not explained, but I guess it has something to do with the number of words you get right.
Currently, donations stand at over nine thousand million (or nine billion, in American usage) grains of rice. That sounds a lot, but it would be nice to see it translated into kilos.
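For the curious, the conversion is simple enough, though the grains-per-gram figure in this little sketch is only a commonly quoted approximation that I haven't verified:

```python
# Back-of-the-envelope conversion of donated grains to weight.
# Assumes roughly 50 grains of rice per gram - an approximation only.
grains_donated = 9_000_000_000
grains_per_gram = 50

kilograms = grains_donated / (grains_per_gram * 1000)
print(f"{kilograms:,.0f} kg, or about {kilograms / 1000:,.0f} tonnes")
# -> 180,000 kg, about 180 tonnes
```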

16 December 2007

Get a life at Google

At the Official Google Blog, Aseem Sood, Product Manager, Google Toolbar Team, writes:
I've started to notice something peculiar about the Toolbar team, and that's this: we literally can't seem to stop carrying the Toolbar around with us. When we moved to a new space in our Mountain View campus, we brought along a hallway-sized printout of it. For Halloween, eighteen of us dressed up as the different parts of the Toolbar itself.

Oh, how sad! :-( No pumpkin lantern, no trick or treat, just dressing up as Toolbar elements! I think this is the saddest thing I've read this week.

The state of public libraries

Reading an old issue of The Guardian Review I came across a piece by Alasdair Gray on the writing of his novel Lanark (started 1953, published 1981 - you can't say he rushed it!) - fortunately still available on the Website. In the piece he remarks:
The notion of Lanark and Thaw's stories being parts of the same book came from The English Epic and its Background by EMW Tillyard, published in 1954, discovered in Denniston public library. It astonishes me to think there was a time when the non-fiction shelves of libraries in working-class Glasgow districts had recently published books of advanced criticism!

Ah, yes - I remember those days. Sadly, the British public library has been in decline since Margaret Thatcher's romantic involvement with the market (continued by T. Blair and G. Brown) and the decline of any feeling in government for responsibility for the 'public sphere'. Once upon a time librarians from the Nordic countries used to visit Britain to see examples of the best in public library systems and services - all the traffic would have to be in the other direction today.

Another search engine

Aficionados of search engines might be interested in Carrot. This uses multiple search engines and then clusters the results by Topics, Sources and Sites. This is a demo site, but it seems to have possibilities.

15 December 2007

Another journal ranking measure

Biomedical Digital Libraries has a paper by William Barendse on "The strike rate index: a new index for journal quality based on journal size and the h-index of citations". The strike rate index (SRI) is 10 × log(h)/log(N), where h is the journal's h-index and N is the number of citeable papers published in the period covered.
The author argues:
The strike rate index appears to identify journals that are superior in their field and to allow different fields to be compared without recourse to additional data. A good way to select journals is to rank them within a narrow field on impact factor, then ask how difficult is it to get published in that journal, how respected is the editor and their staff, who else publishes in that journal, and how long does it take to get published. All of that is valid, but once the impact factor is reified into a universal measure of journal ranking, those other aspects are apt to be forgotten. When organizations or governments set universal thresholds based on the impact factor, it can be hard for individual scientists to argue against them. The strike rate index helps to address the gap in knowledge of the meta-data associated with the publishing of science, by looking at the long term record of a journal in publishing highly cited material relative to the number of articles published.
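As a minimal sketch of the formula defined above (the example figures are invented, and since the formula is a ratio of logarithms, the logarithm base makes no difference):

```python
import math

def strike_rate_index(h_index, citeable_papers):
    """Strike rate index: 10 * log(h) / log(N)."""
    return 10 * math.log(h_index) / math.log(citeable_papers)

# Hypothetical journal: h-index of 23 from 500 citeable papers in the period
print(round(strike_rate_index(23, 500), 2))  # -> 5.05
```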

We now have at least four different ranking measures: the Impact Factor, probably the oldest and best known, and often used by journal ranking sites at least as part of the ranking formula; the h-index, which produces oddities when applied to journals because of the age factor; the SCImago Journal Rank, which appears to produce a ranking very close to that produced by the Impact Factor (and which is a little problematical, since it produces ties); and now the Strike Rate Index, which, presumably because it uses the h-index with its age bias, produces a different ranking from the Impact Factor and the SCImago Journal Rank. For example, in the list I posted the other day, Library Quarterly ranks 10th with the IF, =6th with the SJR and 2nd with the SRI. Perhaps even more surprising is that JASIST, which ranks 2nd with both the IF and the SJR, ranks 10th with the SRI.

Take your pick - the assessment of 'quality' is always going to be problematical, and one criterion (possibly the best?) - journals' acceptance/rejection rates - is rarely released by publishers :-)

11 December 2007

SCImago Journal and Country Rank

News of a new journal ranking site from the SCImago research group at the University of Granada. It is described as follows:
The SCImago Journal & Country Rank is a portal that includes the journals and country scientific indicators developed from the information contained in the Scopus® database (Elsevier B.V.). These indicators could be used to assess and analyze scientific domains.

This platform takes its name from the SCImago Journal Rank (SJR) indicator (PDF), developed by SCImago from the widely known algorithm Google PageRank™. This indicator shows the visibility of the journals contained in the Scopus® database from 1996.

A natural question for me, then, is: How does Information Research show up in this new ranking? So, I took the journals that are similar to Information Research, in that they are not 'niche' journals, but publish widely across information science, information management, librarianship, etc., from ISI's Journal Citation Reports and then gathered the data from SCImago. To reduce the effort of creating a table (not as easy in Blogger as it is in Free-Conversant) I have taken the top 10 journals from the list:

Journal              h-index   SJR    cites/doc    JIF
Info & Mgt              29     0.069     3.65      2.119
Journal of ASIST        27     0.068     2.48      1.555
Info Pro & Mgt          27     0.058     2.11      1.546
J of Doc                23     0.058     1.61      1.439
Info Research           12     0.053     1.77      0.870
Lib & Info Sci Res      14     0.053     1.26      1.059
Int J Info Mgt          18     0.051     1.55      0.754
Lib Qly                 14     0.051     1.23      0.528
J Info Sci              17     0.051     1.01      0.852
Lib Trends              14     0.050     0.85      0.545

The use of the h-index is well known in the bibliometrics fraternity, where it is normally used to measure the productivity and impact of an individual scholar. One of its problems, particularly significant in ranking journals, is that the longer the period in which the scholar (or journal) has been active, the more likely it is that the scholar (or journal) will receive a high h-index, so its usefulness here may be limited. However, it is interesting to see that Information Research has an h-index of 12, while older journals have lower measures.

The SJR measure is explained as,
...an indicator that expresses the number of connections that a journal receives through the citation of its documents divided between the total of documents published in the year selected by the publication, weighted according to the amount of incoming and outgoing connections of the sources.

The 'cites/doc' measure is based on the number of citations received in the previous four years and the total number of documents published in 2006.

JIF is the ISI Journal Impact Factor.

10 December 2007

DOAJ the biggest 'big deal'?

Heather Morrison suggests that the Directory of Open Access Journals now offers the biggest 'big deal' with, right now, 2996 journals listed.

But is it so? Many of the journals in DOAJ do not fit the model of the scholarly, peer-reviewed journal: for example, in the Library and Information Science area there are journals that are simply the bulletins of professional associations and it is difficult to discover whether or not the contributions are peer-reviewed.

Also, nothing stays still. I checked the eighty journals in the Library and Information Science area and found that thirteen had published nothing in 2007. Of these, two appeared to be completely dead (although one retained the archive of papers) and four had published nothing since 2005.

However, even if this pattern were repeated in other fields (and I suspect that this field might be more prone than others to the optimistic publishing of new journals) and, say, 15% of the journals were inactive, this would still leave the DOAJ ahead of the field in the total number of journals 'packaged'. If 'quality' (however we measure it) is taken into account, then perhaps another 5% would be suspect, but this would still leave DOAJ with more than 2,300 journals, compared with ScienceDirect's 2,000.
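A quick check of that arithmetic (the 15% and 5% figures are the guesses above, not measured values):

```python
# Rough estimate of the DOAJ journals left after discounting the guessed
# proportions of inactive and 'quality-suspect' titles.
total = 2996                           # journals listed in DOAJ at the time
inactive = round(total * 0.15)         # guessed share of inactive journals
quality_suspect = round(total * 0.05)  # guessed share failing a quality test
remaining = total - inactive - quality_suspect
print(remaining)  # -> 2397, still well ahead of ScienceDirect's roughly 2,000
```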

One of the problems is that we still don't have a citation index that covers all OA journals - should SPARC and DOAJ look at that possibility as a further development of the already excellent service?

Universal Digital Library

Carnegie Mellon and its partner universities around the world are rightly receiving accolades in the press for the fact that the Universal Digital Library has exceeded its target, having digitised 1,500,000 volumes.

However, the UDL is not without its problems. For example, you need to download and install a viewer - either DjVu (of which I'd heard) or a TIFF viewer (of which I hadn't). Still, I downloaded the latter and then found that it appears to call up QuickTime to actually view the pages - all of them being images rather than transcriptions. Needless to say, this is rather slow.

Also, I found that not all of the items are 100% open access. For example, I went looking for something on the history of Alsace (having a friend there who grows some excellent wines :-) and found that only 15% of Townroe's A Wayfarer in Alsace is actually available - and it ends in the middle of a chapter; in fact in the middle of a sentence!

Then there's the problem of blank pages being digitised. I located Hazen's Alsace-Lorraine under German Rule and found that the first seven pages were blank, so I skipped to the end and found blank pages from page 262 to 268. However, I persevered, and found text on page 261, so I skipped back to the start and found the start of the text on page 8. I imagine that others, not as determined as I, might give up!

As for printing, you have a choice - you can print either the whole file or the current page and, because the files are all pictures, you can't select text for quoting. And there's no search function within a file.

Clearly, digital library technology has a long way to go before it becomes user-friendly, and the Universal Digital Library seems to have further to go than many.

07 December 2007

Knowledge management?

I never cease to be amazed (and amused) by the lengths to which folks go to justify the use of the term 'knowledge management'. The latest is on the Science Commons blog, where D. Wentworth seeks to answer the question, What’s “open source knowledge management”?:
'Knowledge management,' or KM, is a term often used by businesses to describe the systems they have for organizing, accessing and using information — everything from the data in personnel files to the number of products on store shelves.

Fair enough - that's what we've been calling 'information management' for about the past 40 years. But no!:
One reason that it’s “knowledge” management rather than “information” management is that the word knowledge connotes use of information, not just its availability. Having the ability to use information is what makes it valuable. One classic example is Wal-Mart, which used real-time data about its inventory to realize tremendous, game-changing efficiency gains and cost-savings.

Now, what is it that 'information' does? Information 'informs'; in other words, the notion of its use is implicit in the definition, and it is curious that no definition of 'km' can do without the notion of information. The only reason for the existence of information is that it should be used - calling information that is used 'knowledge' is simply silly. As Peter Drucker famously said, 'Knowledge exists between two ears, and only between two ears.'

The blogger's ideas also ignore the fact that there are at least two other communities that use the term 'knowledge management': those building 'knowledge-based systems' in the AI fraternity, and those concerned with the more effective management of organizational communications through the creation of 'communities of practice' and similar ideas. When a term has such competing demands from totally different communities of use, it becomes worthless. I suggest that Science Commons should exercise a little 'scientific' common sense and stick to 'information management'. When we look at the Neurocommons site (which is being blessed with this composite term), what do we find?
With this system, scientists will be able to load in lists of genes that come off the lab robots, and get back those lists of genes with relevant information around them based on the public knowledge. They’ll be able to find the papers and pieces of data where that information came from, much faster and more relevant than Google or a full text literature search, because for all the content in our system, we’ve got links back to the underlying sources. And they’ve each got an incentive to put their own papers into the system, or to make their corner of the system more accurate for the better the system models their research, the better results they’ll get.

In other words it's a database, constructed, it seems, using information extraction methods, which will deliver information items to the searcher.
It's a little difficult to understand what is meant by the following:
They’ll be able to find the papers and pieces of data where that information came from, much faster and more relevant than Google or a full text literature search, because for all the content in our system, we’ve got links back to the underlying sources.

What are the URLs for each item retrieved by Google other than 'links back to the underlying sources'? And quite what 'the papers and pieces of data where that information came from' means is anyone's guess. It seems that once anyone gets into the mire of language associated with 'km', the critical faculty disappears altogether and hype prevails.

26 November 2007

A hiatus in the Weblog

Things have been quiet for the past week here, simply because I was in Amsterdam doing a keynote at the First International Conference on Information Management. Not the first, of course, but the first organized by Prof. Rik Maes of the business school at the University of Amsterdam. And a pretty entertaining event it was, too. Apart from doing the keynote, I ran a group discussion on future challenges for IM with Chun Wei Choo - like Rik, a member of the Editorial Board of IR. We came to the conclusion that the main challenge was 'managing complexity': the complexity of systems, of the technology itself, of the proliferation of information resources, and the complexity of information users' approaches to information.

So, that was last week - tomorrow I'm off to another conference in Vilnius, Lithuania: the conference of the Nordic Network for Intercultural Communication, where another Board member, Elena Maceviciute, and I will be presenting the results of a pilot project we ran earlier in the year on research network building by re-located scholars in Sweden. Well - a change is as good as a rest, they say :-)

It's not likely I'll be attending to the Weblog at all in this period, so nothing more for at least a week - unless some truly startling development takes place!

20 November 2007

Firefox 3.0 now in testable beta

Firefox 3.0 is on its way, and the brave (and developers) can now test it. Well, I'm not particularly brave in the use of beta software, but I downloaded it anyway. On the surface, nothing much seems to have changed. However, 'under the hood', as they say, the rewriting of 2,000 lines of code has resulted in a lot of changes - too many to list in this Weblog.

However, one development:

"One click site info: Click the site favicon in the location bar to see who owns the site. Identity verification is prominently displayed and easier to understand. In later versions, Extended Validation SSL certificate information will be displayed."

The comic thing is that, when you click on the Mozilla icon in the address bar, the pop-up says: "Identity unknown. This Website does not supply identity information". In fact, so far, I haven't found a site that does supply such information - perhaps that feature is there just for Web developers to take advantage of in the future.

21 Nov: There's an enthusiastic review at Ars Technica.

The PR benefits of OA

The Public Library of Science Weblog has an item on the publicity surrounding one of its papers on the discovery of Nigersaurus taqueti, a vegetarian dinosaur. At the time the blog entry was written, there had been 583 news reports and 1,855 blog entries on the story.

This, of course, is one of the benefits of open access - if a paper is newsworthy, the fact will be discovered readily by journalists and access will not be a problem. It's a little unlikely that any paper in Information Research will achieve such notoriety, but you never know... :-)

19 November 2007

Amazon and e-books

Cyberspace is abuzz with news of Amazon's e-book reader, Kindle; at the ZDNet site, for example, there's a review with pictures. In the review, Jeff Bezos is quoted as saying:


"This is a 500 year technology and we forget that it’s a technology. As readers we don’t think about this too often", said Bezos. "An interesting question is why are books the last bastion of analog".

The answer: Books disappear when you read them. They fill their role and get out of the way. "What remains is the author’s world", said Bezos, referring to the reader "flow state".

It seems very odd that a bookseller - on the other hand, he's not a real bookseller, is he? - should say that books disappear when you read them. That suggests that he has no idea of what people do with books - they are an instrument of social interaction: we talk about them, we exchange them, we lend them (occasionally) to friends, we pass them on to charity shops, and many of us keep those we treasure to read again and again; even if we pass the physical object on, some of what we read remains in our consciousness.

E-book readers may become a new fashion item, but unless I am very much mistaken, they'll never replace the printed book - the book just has too many 'affordances' that a computer screen lacks - and apart from anything else, if I leave a paperback on the train before having read it, I can pick up another secondhand copy from Amazon for a fiver - if I leave my 'Kindle' behind I'm nearly $400 out of pocket!

18 November 2007

The EU and Open Access

Thanks, as usual, to Peter Suber for drawing attention to the documents and minutes of an EU meeting on open access. It seems that no general point of access to the files exists, since Peter gives links to each, and I have searched the European site without success.

However, the point I want to make (and I begin to seem like a rusty record) relates to the so-called 'green' and 'gold' routes to open access. One of the points arising out of the discussions and reported in the minutes is:


The debate persists on whether to move towards open access through repositories and funding body mandates (“green” open access) or through paid open access models/'reader pays' solutions (“gold” open access). Are there other paths towards open access? Can the two options coexist?

So, once again, we have an official body which, at present, equates the 'gold' route to OA with author charging and wonders whether or not some other method exists! Of course another method exists and it is the only one that maximises the social benefit of open access - it is the 'platinum' route of subsidised, collaborative publication of OA journals - and this comment from the EU demonstrates why this route needs to be separated from the 'gold'.

Of course, it is possible to conceive of other methods. From the point of view of what the technology allows, the notion of the quarterly journal issue with its package of papers is something of an anachronism. It would be perfectly feasible to set up a peer-review process which resulted not in an electronic journal, but in an electronic archive. By this, I do not mean the equivalent of the 'green' route, but a new, peer-reviewed repository, which used, say, RSS to notify interested parties of new items admitted to the repository.
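To make the idea concrete, here is a minimal sketch of what such a notification might look like: a single RSS 2.0 item announcing a newly accepted paper. The repository name, URL and paper details are invented for illustration:

```python
# Sketch: build one RSS 2.0 <item> for a paper newly accepted into the
# hypothetical peer-reviewed repository, for inclusion in its feed.
from xml.sax.saxutils import escape
from email.utils import formatdate

def rss_item(title, url, abstract):
    """Return one <item> element announcing an accepted paper."""
    return (
        "<item>\n"
        f"  <title>{escape(title)}</title>\n"
        f"  <link>{escape(url)}</link>\n"
        f"  <description>{escape(abstract)}</description>\n"
        f"  <pubDate>{formatdate()}</pubDate>\n"
        "</item>"
    )

print(rss_item(
    "A study of reading habits in rural libraries",
    "http://repository.example.org/papers/1234",
    "Peer-reviewed paper accepted into the humanities research repository.",
))
```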

It would be relatively easy to do this for languages with a small number of native speakers, and probably easiest in the humanities and social sciences, where the cultural context is most relevant. So, rather than having, for the sake of argument, the Electronic Journal of Bulgarian Literary Criticism (or whatever that would be in Bulgarian!), one would have the 'Bulgarian Humanities Research Repository' - run by a national research body, or by a consortium of universities - which would include papers not only on literary criticism, but on any other humanities discipline. Humanities scholars of all kinds would have one point to which to submit papers and one point from which to receive them. This idea would also have the secondary benefit of allowing national funding agencies to determine the research performance of departments, through the volume of material submitted and accepted, and also through the possibility of developing a national citation index for the disciplines.

It is, of course, in the publishers' interest to encourage the assumption that 'gold' involves author charging, since if this mode of support spreads, they have income from two directions, instead of being exposed by having only one source - subscriptions. So perhaps the EU would benefit from having looser ties to the industry and exercising a little more imagination about the options.

17 November 2007

Annual Review of Information Science and Technology

Readers of the new ARIST (vol. 42) who come across my chapter, entitled 'Activity theory and information seeking', might be rather puzzled to discover that only one page is devoted to that subject.

The intended title was simply 'Activity theory', since the chapter ranges over the origins of AT and its application in a variety of fields, including (but not exclusively) information science. However, glitches happen, even in the best regulated publishing houses, and at some point after the proof copy was corrected, the title was changed. Apologies from all concerned!

16 November 2007

Repositories and digital libraries

There are a couple of papers of interest in the latest issue of IFLA Journal:

Open Access and Institutional Repositories – a developing country perspective: a case study of India, by S.B. Ghosh and Anup Kumar Das, and

The Joint Czech and Slovak Digital Parliamentary Library, by Eva Malackova and Karel Sosna

14 November 2007

Lawrence Lessig on copyright

That wonderful TED site has just posted a recording of a speech by Lawrence Lessig - it's an excellent presentation. Go take a look.

'Adaptation' and 'derivation' in Creative Commons' licences

SPARC Europe is cooperating with the Directory of Open Access Journals (DOAJ) in the promotion of a 'Seal of approval' for OA journals. There is nothing about this, at the moment, on the SPARC Europe Website, but it seems that the award of the 'Seal' will depend upon the journal using the Creative Commons BY licence and supplying DOAJ with metadata for the papers. The CC-BY licence is suggested as being the most 'open' of the licences, allowing for 'derived works' and, by implication, it would seem, allowing archiving, text-mining, etc.

Originally, Information Research used the CC-BY licence (or its equivalent at the time I adopted the licence), but I found that the situation was rather ambiguous, since the full licence contains no definition of 'derived work'. Rather, it defines 'adaptation' and mentions 'derivative work' only as part of that definition. Nothing in the full licence says anything about archiving, text-mining or any other post-publication use of OA papers. The definition of 'adaptation' suggests why this is the case:

'Adaptation' means a work based upon the Work, or upon the Work and other pre-existing works, such as a translation, adaptation, derivative work, arrangement of music or other alterations of a literary or artistic work, or phonogram or performance and includes cinematographic adaptations or any other form in which the Work may be recast, transformed, or adapted including in any form recognizably derived from the original, except that a work that constitutes a Collection will not be considered an Adaptation for the purpose of this License. For the avoidance of doubt, where the Work is a musical work, performance or phonogram, the synchronization of the Work in timed-relation with a moving image ("synching") will be considered an Adaptation for the purpose of this License.

From this, we understand why the "Creative" Commons is so called - it is a licence for creative products in the sense of literary, musical and artistic works, which may undergo various forms of adaptation: a novel into a radio programme, a work of art into an advertisement, etc. It does not appear to be designed for works of academic scholarship.

The key questions for scholars are: 'What is meant by derivative work?' and 'What kind of derivative work is permitted under this licence?' I cannot imagine many academics being happy with the idea that their work can be 'built upon' other than in the way we normally think of it, i.e., someone taking an author's ideas as expressed in a work, using those ideas and building upon them to produce a novel work of scholarship, with quotation, citation and referencing. Any other form of 'adaptation' that brings about a new product surely entitles the original author to be treated as joint author of the new production.

It should also be noted that the CC-BY licence permits commercial re-use of an author's work, and I would find this completely unacceptable for Information Research, for the simple reason that the journal is genuinely 'open', i.e., not closed at the input end through author charges, and neither I, nor any of the Associate Editors, nor Lund University Libraries (which hosts the journal) receives any financial support for its publication. The notion that any commercial organization could then take the papers and use them for commercial purposes is anathema to me and, I imagine, to the authors whose work would be used in such a way.

My conclusion after exploring this issue is that the present Creative Commons' licences do not properly protect scholarly work, if a BY (or 'attribution') licence is adopted. I turned to Science Commons to see what happens there, but that organization simply uses the CC licences and has not produced a separate licence for scientific works.

At present the only sensible licence for scholarly works is the "Attribution-Non Commercial-No Derivs" (or BY-NC-ND) licence, which allows open access and anyone may:
a. ...Reproduce the Work, to incorporate the Work into one or more Collections, and to Reproduce the Work as incorporated in the Collections; and,
b. ...Distribute and Publicly Perform the Work including as incorporated in Collections.

Perhaps it is time for a new 'Scholarly Commons' licence, which makes clear what a 'derivative work' is and removes the present ambiguity and uncertainty.

13 November 2007

"A First Look at the Google Phone"

A First Look at the Google Phone is the title of an article in the New York Times technology section. It has an interesting couple of videos from Google describing the kind of phone that can be built using the Android platform, using prototypes to demonstrate the capabilities of the system.

The comments are worth reading - partly for their comic character, with Apple adherents whinging about the iPhone being 'ripped off'. What they don't seem to realise is that Google must have been working on Android for hundreds of man-months to get it working and that, rather than a phone, it is a platform for the development of applications for phones. What any Android-based phone will actually look like is going to depend upon the phone manufacturers.

What success Android will have is still an open question - many manufacturers are locked into the Symbian platform and I imagine that these videos are at this moment being carefully watched by Symbian engineers - expect something from that direction in the not too distant future :-)

12 November 2007

Short versus long articles

Jakob Nielsen has an interesting little article on his Alertbox site, which discusses the advantages and disadvantages of short and long articles from a cost-benefit perspective.

Of course, a scholarly journal cannot avoid having long articles and readers come to the site expecting to find them. However, the abstracts are a summary of what is in the paper and my guess is that people read the abstract to decide whether or not the paper is for them - in effect, the abstract is the short article that leads to the long one. So I hope we are getting it right!

10 November 2007

More on Brass and Platinum

No sooner had my last comment on the topic of Green, Gold (aka Brass) and Platinum hit cyberspace than Peter Suber came up with yet another bit of misleading information, this time from Jan Velterop, who, in his own Weblog, notes:

Applied to OA, ‘green’ and ‘gold’ are qualifiers of a different order. ‘Gold’ is straightforward: you pay for the service of being published in a peer-reviewed journal and your article is unambiguously Open Access. ‘Green’, however, is little more than an indulgence allowed by the publisher. This, for most publishers at least, is fine, as long as it doesn’t undermine their capability to make money with the work they do. But a 'green' policy is reversible.

Of course, Velterop is entirely right that the Green route of open archiving is dependent, at present, on the 'indulgence' of the publishers - I have suggested elsewhere that open archiving can only be a temporary approach to open access, since either the publishers may withdraw their permissions, or what I have called the Platinum Route (or, possibly more likely, some alternative process of scholarly communication) will come to dominate.

However, Velterop conveys the same mis-information about the Gold (Brass) route as I drew attention to in that earlier post: the statement that it involves paying the publisher to open up access. This is true for commercial publishers, but not for those journals, like Information Research, that are published freely on the basis of subsidy and collaborative effort.

I can see that I am going to have to keep on plugging away at this distinction for as long as the notion of 'Gold' is used ambiguously for all OA journals, whether they charge authors or not. Let's get into the entirely sensible habit of referring to those that do not as Platinum.

09 November 2007

Firefox 3.0

There's news around about the imminent release of Firefox 3.0 and a nice article about it, with screenshots, on Lifehacker.

Green, Brass and Platinum - three routes to open access

Heather Morrison, in a very useful post re-stating the nature of open access, writes:

There are two basic types of open access:

Open Access Archiving (or the green approach): the author (or someone representing the author) makes a copy of the author's work openly available, separate from the publication process. That is, the article may be published in a traditional subscription-based journal. The version of the article that is self-archived is the author's own copy of the work, reflecting changes from the peer review process (all the work that is provided for free), not the publisher's version.

Open Access Publishing (or the gold approach): the publisher makes the work open access, as part of the process of publication.

However, this is not really the whole story, and it is in danger of perpetuating the myth that the only form of open access publishing is that made available through the commercial publishers, by author charging. This is why I distinguish between open access through author charging, which is what the Gold Route is usually promoted as being (and which all official bodies from the NIH to the UK research councils assume as 'open'), and the Platinum Route of open access publishing, which offers free, open access to the publications with no author charges. In other words, the Platinum Route is open at both ends of the process - submission and access - whereas the Gold Route is seen as open only at the access end.

Harnad has argued that the distinction is unnecessary because at present about half of the Gold Route open access journals do not make author charges. However, if different modes exist we should categorise them clearly and not confuse them: author charging is the publishers' way of maintaining their incomes at the same level as is achieved through subscriptions - rather than being Gold from an open access point of view, we should label it as Brass (Yorkshire dialect for 'money'!), whereas the Platinum Route is the scholar's way of making his/her publications completely open.

We have three ways of achieving open access: archiving, author charging, and completely free - let's make sure the distinction is known and appreciated.

Bibliometrics and research assessment

A study for Universities UK (previously the Committee of Vice Chancellors and Principals - a much better title, which actually told you who was involved!) has come to a rather predictable conclusion:

It seems extremely unlikely that research metrics, which will tend to favour some modes of research more than others (e.g. basic over applied), will prove sufficiently comprehensive and acceptable to support quality assurance benchmarking for all institutions.

However, at least that conclusion has been reached and, rather importantly, the report is mainly concerned with the potential for applying bibliometric measures to fields in science, technology, engineering and medicine (STEM) (the areas targeted by the Higher Education Funding Council). Some differences between STEM fields and the social sciences and humanities are pointed to, but there is no detailed analysis of the problems in these areas, which, of course, are even more difficult to deal with than those in STEM.

Readers outside the UK might be somewhat bemused by this post: the explanation for the concern over this matter is that the Higher Education Funding Councils have proposed the use of 'metrics' (i.e., bibliometrics) for the Research Assessment Exercise. This Exercise has taken place every four or five years for the past 20 years and is crucially important for the universities, since it is the basis upon which the research element of national funding is distributed.

07 November 2007

More on OA

Peter Suber's Open Access News drew my attention to an article in The Scientist, by Joseph Esposito. I'm publishing here the comments I made on The Scientist's site:

Joseph Esposito's article is both thought-provoking and, in parts, a little dangerous. At the outset he notes: "Many continue to argue one side or the other of a binary choice: Either all research publishing should be open access, or only traditional publishing can maintain peer review and editorial integrity." This is a dangerous comment, since he is picking up on the 'big lie', promoted by PRISM, that OA does not involve peer review. This, of course, is nonsense: every genuinely scholarly OA journal that I know of uses peer review as part of the publishing process - it could never achieve any kind of reputation if it didn't do so. Jan Velterop also seeks to perpetuate this association in his comment on the article - yes, developing and maintaining the brand does take time and effort, as he suggests, but that time and effort is invested by the unpaid peer-reviewers and they are just as happy to work unpaid for non-commercial OA journals as for commercial publishers.

Later Esposito appears at times to conflate 'open access' with 'open archives' - confusingly both can be reduced to the same initials - when he writes of authors choosing to make their work available outside of the formal publishing process. This ignores the fact that OA journals are formally published: they have ISSNs, regular publication intervals, they are indexed by the same indexing and abstracting services as the commercial journals.

There is also the association of OA with 'author charging', and what I have called elsewhere the 'Platinum Route' of subsidised, collaborative OA publishing is ignored - and yet it is this mode that is increasingly adopted by newly-published journals. And new journals are not the exceptional case that Esposito suggests: they are appearing almost every day and many of them adopt the Platinum Route. Case studies of such journals have appeared in Information Research, which is also a Platinum Route journal. The 'one click' push that Esposito refers to is not an exceptional situation, but a common one for new open access journals and the notion that this only works at the fringe of scholarly communication is rather silly - scholarly communication consists of a multitude of 'fringes', each of little relevance to the rest of the community: like any other scholar in a specific discipline I have no interest in what is published in physics, chemistry, biology, pharmacology, Near Eastern studies, Scandinavian folklore and most of the rest of scholarship, but what is available to me openly within my own discipline is going to be central.

As another commentator has noted the costs of OA publishing are exaggerated, especially if the Platinum Route is adopted. No money at all flows in the publishing system for many OA journals, which use freely given time. That time is also given to commercial publishers, and if they had to pay true market rates for the time of editors and reviewers, the economics of scholarly publishing might be different. They would be markedly different if publishers had to pay for their raw materials - the papers - the way companies in other industries have to pay.

The suggestion of a novel OA publishing platform chimes with my suggestion that, on the analogy with music tracks and iTunes, "One future model of scholarly communication could see collaborative peer reviewing in disciplines leading to archived papers that are delivered as tracks are today - the individual (who is always going to be more interested in the paper than in the journal as a whole) downloads papers of interest, and universities provide the finance for the open archive rather than subscriptions to the now-defunct journals". I don't see such a model requiring huge additional investment - as the system changes, as it inevitably will, what is saved in subscriptions can be transferred into the development costs of the new platform.

As I note in the same Weblog entry, commercial scholarly publishing is facing the same kind of threat, brought about by technological change, as the music industry and is reacting in much the same way as the music industry has reacted up to now. Neither industry will survive simply by defending the present model - the dissemination of music and the dissemination of scholarly research are changing in analogous ways and the direction of that change is towards openness and new entrepreneurial models. Just as the old computer companies were never the leaders in change in that industry - think of the switch from mainframe to mini-computer to desktop - so it is unlikely that the giants of scholarly publishing will be at the forefront of change in their industry.

05 November 2007

Peak usage day for Information Research

I just took a look at my counter stats and discovered that 17th October 2007 was the busiest day ever for the top page of the journal, with 3,574 hits - the previous peak of 2,915 hits occurred in July 2006.

The counter also tells me that the top page has had almost 212,000 hits so far this year, with an average of 684 page-views a day.

Turning to Google Analytics, this service tells me that the paper with most hits so far this year is Joyce Kirk's 'Information in organisations: directions for information management' from 1999, with 3,435 hits.

Odds and ends

Peter Suber's excellent Open Access News Weblog has been mentioned frequently here, and recently he's had a couple of particularly interesting (to me) posts. One relates to Eric von Hippel's making available a couple of his books, with the agreement of the publishers, as open access e-books. The interesting thing is that sales exceeded expectations in both cases. As von Hippel says, this is counter-intuitive for publishers, but it simply shows that publishers have not thought through the logic. They know that, for example, for every thousand mailings of a publicity shot they're likely to get only a 2% or 3% response - or even less - so they ought to understand that publicity in the form of open access, which reaches millions of people rather than a few thousand, is going to increase sales, even if only one or two percent of the downloaders actually buy the book. I could also see benefits in publishers making books OA when the main sales have been made and the order stream has been reduced to a trickle: this could give a boost to sales well beyond what would otherwise have been anticipated.


The second item is somewhat more esoteric and legalistic. Peter has been engaged in a debate on whether or not real OA includes the right to make 'derivatives' of the work in question - referring to the Creative Commons' licences. There are those who hold that the right to make derivative works is a required characteristic of OA works, and those who argue the opposite. What is not clear to me is what constitutes a 'derivative work': if someone uses my work to create something related, using, for example, a theoretical model and quoting from my work, I don't see that as 'derivative' in any way other than that in which all scientific work is 'derivative', in that it builds on the earlier research. To be truly 'derivative', in my book, means taking my work and re-working it, using the text and the arguments, along with new insights and ideas, to create something closely associated with and 'derived' from my work. In that kind of work - and I know of none - I would be, in effect, a silent collaborator, and I think I would be justified in claiming to be the joint author! So I think the debate may be about two different things: creating a work that simply refers, textually and otherwise, to my own, and creating a composite work, based on my ideas but extending them. I would be perfectly happy with the first form of 'derived' work, but I think that for the second I would deserve a stronger form of acknowledgement than mere citation. Should I, therefore, adopt the 'no derivatives' form of the CC licence?


While pursuing this at the CC site, another question occurred to me. The CC licence has a 'no commercial use' element, which simply means that you cannot use my work for commercial gain. However, if you publish through a toll-access publisher, who is in the business of making a profit, can the publisher profit from the inclusion of my work in yours? I think I shall have to watch this carefully in future, since I get numerous requests to use the diagram of my 1996 'General Model of Information Behaviour' - in the past, I've given permission without question, but now perhaps I should say: fine, if you publish in a true (Platinum Route) OA journal; if not, your publisher will have to pay.

31 October 2007

A new view on heritage

Thanks to Elena Maceviciute for this interesting picture: it shows the headquarters of the Federal Agency of Construction, Housing and Communal Services in Russia. One of its responsibilities is the construction work for preservation projects.

Google's experiments

Google's October newsletter points to new search developments. At Google Experimental you can try out, and give feedback on, a number of experimental features. These include a timeline presentation of results, a map view and the additional information view. There are also some keyboard shortcuts for navigating through search output and a couple of views that provide contextual navigation bars to the left or right of the search output. Of these, the timeline presentation and the keyboard shortcuts seem the most useful to me.

28 October 2007

ISIC-2008, Vilnius, Lithuania

The organizing committee for ISIC-2008 reminds everyone about the important dates of the international conference Information Seeking in Context 2008. The conference will be held in Vilnius on September 17-20, 2008. A doctoral workshop will be held in conjunction with the conference on September 16, 2008.

Conference paper submission deadline: February 1, 2008.

Doctoral workshop paper submission deadline: March 1, 2008.

For more information please visit the Website of the conference.

Contact person for the conference: Dr. Erika Janiuniene

The March 2008 issue of Information Research

Because of my need to make the journal publication year conterminous with the calendar year, there will be quite a long time gap between Volume 12, Number 4 (just published) and the first issue of Volume 13 in March 2008.

Consequently, I have decided that, for this issue, I shall publish the papers (and provide the index entries) and reviews on the site as they are ready and then publish the final paper(s) in March 2008 along with the contents page.

This introduces some oddities in relation to date of publication, since the formal publication date will be March 2008, but the papers will be actually published from, probably, November 2007 onwards. To overcome this, I shall add to the 'How to cite this paper' element on the page the information on when the paper was made available. A fictional example:

Carpenter, C. & Smith, P.A. (2008). "Web users' online information behaviour: marrying HCI and information behaviour" Information Research, 13(1) paper 333. [Available 14 November 2007 at http://InformationR.net/ir/13-1/paper333.html]

20 October 2007

OA and the lobby industry

Heather Morrison has another thoughtful piece on open access in her Weblog, suggesting that the publishers' anti-OA consortium PRISM has imploded.

I'm not too sure about this: PRISM is only the tip of the iceberg in terms of lobbying. We can be sure that the publishing industry is lobbying away vigorously, with people rather than a Website, and it's that personal lobbying that makes the difference, rather than what is on public view. My suggestion is that fellow OA advocates in the USA need to lobby just as vigorously, writing to their senators and congressmen/women and generally countering the misinformation that the lobbyists inevitably purvey. We've seen time and again under this US administration that the truth does not necessarily prevail; the key is how much money the industry is prepared to spend to swing the votes of the legislators, whether it is to damage the Alaskan environment by oil drilling, open the virgin forests of the national parks to the logging industry, or run the worst medical care programme in the Western world for the benefit of the drug companies and the mis-named 'health care industry'.

Constant vigilance and persistence in telling the truth about the warped economics of the existing scholarly communication system is the only weapon we have.

17 October 2007

Lessig moves to tackle corruption

Perhaps most readers of this Weblog are now aware that Lawrence Lessig - the motivating force behind Creative Commons - is shifting his sphere of interest to corruption in American political life. Now there's a target!

To catch up with what's going on, see an interview with him and listen to his lecture at Stanford Law School.

There's an interesting follow-up on the Weblog.

Journal statistics

I'm sometimes asked, presumably by those who need to justify publishing in an electronic-only journal, about the journal's acceptance and rejection rates.

Fortunately, the journal management system we've adopted enables me to provide a more precise answer than previously. Under the system we've handled 88 submissions, of which 68 went on to peer review, and of those:

24 (27% of all submissions) were accepted, with revisions required;
30 (34%) were rejected; and
13 (15%) were required to be re-submitted.
(Presumably one is still in process; a rough sketch of the arithmetic follows at the end of this entry.)

Another useful statistic is that the average time to review has been 37 days - I don't know what the range is, but, certainly, some have taken much longer.

One additional point to bear in mind is that some of the papers that require extensive revision are never seen again.
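For anyone who wants to check the arithmetic, here is a minimal sketch in Python. The counts are those given above; that the percentages were calculated against all 88 submissions, rather than just the 68 peer-reviewed papers, is my assumption.

    # Sketch of the submission statistics quoted above; I assume the percentages
    # were calculated against all 88 submissions, not just the 68 peer-reviewed.
    submissions = 88
    peer_reviewed = 68

    outcomes = {
        "accepted, with revisions required": 24,
        "rejected": 30,
        "re-submission required": 13,
    }

    for outcome, count in outcomes.items():
        print(f"{outcome}: {count} ({count / submissions:.0%})")

    print(f"still in process: {peer_reviewed - sum(outcomes.values())}")

Run as it stands, this reproduces the 27%, 34% and 15% figures and leaves the one submission still in process.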

16 October 2007

Information Research Volume 12 Number 4

The latest issue of Information Research (Vol. 12, No. 4) is now online. The index page for the journal as a whole is not yet online, but will be by about 22.00 GMT this evening.

An extract from the Editorial:

This is a bumper issue of Information Research as it includes a supplement containing the proceedings of the 6th Conference on Conceptions of Library and Information Science (CoLIS 6) as well as the usual clutch of papers and reviews.

I shan't say much about the CoLIS Proceedings, since they have their own introduction, except to thank the Editors, and particularly Nils Pharo, who have spent a great deal of time in getting the papers into publishable form. Without their efforts it would have been completely impossible to publish the proceedings so quickly after the conference.

It has not been possible, however, to put the papers through the usual copy-editing and revision process used by the journal, so readers may find the occasional typographic error or other blemish. It is for this reason that the papers are published as a supplement, with their own numbering series, rather than as papers in the main part of the journal.

Read the rest of the Editorial

15 October 2007

Scholarly communication symposium

Thanks to Peter Suber's Open Access News for alerting me to the symposium on The Future of Scholarly Communication, which is being run, online, by Princeton University's Center for Information Technology Policy.

The starting point for the Symposium is a report from a non-profit organization called Ithaka on University Publishing In A Digital Age. Not a great deal of attention is paid to open access in the report, and when it is mentioned we have the usual, false equation of open access with author charging:

The academic community seems to be looking to open access models as a solution to these challenges. But while open access may well be a sustainable solution in STM disciplines, where federal and private research grants can conceivably be extended to support publication fees, one model will not serve as a panacea.

Why is it that the notion of collaborative, subsidised, open-access publishing continues to escape the attention of bodies like this when there are now so many examples of its effectiveness? It is all the more curious in a report aimed at considering the future of university publishing, when that future could include collaboration across institutions to promote subsidised, genuinely 'open' journals.

In spite of all their work it seems that, in the end, the report's authors are too timid to explore the logical consequences of the technological revolution that has hit scholarly communication: they, like the publishing industry generally, are mired in the present patterns of communication, but those patterns are changing irrevocably and numerous alternative new patterns may evolve as habits change. One possibility lies in an analogy with the music industry, which has similarly been hit by technological change: the unit of interest is now the 'track', not the CD or the 'album', and iTunes and other providers offer a delivery service for tracks. One future model of scholarly communication could see collaborative peer reviewing in disciplines leading to archived papers that are delivered as tracks are today - the individual (who is always going to be more interested in the paper than in the journal as a whole) downloads papers of interest, and universities provide the finance for the open archive rather than subscriptions to the now-defunct journals.

In small, niche areas this could happen quite quickly: for example, if a free, open access journal already exists, which is operating a standard peer-review process, it already has the characteristics of an open archive of papers and no-one ever downloads the entire journal issue. The papers are found, predominantly, by the search engines and the individual paper is downloaded or read - further collaboration among interested universities could see the expansion of the journal until it covers virtually the entire output of the niche area.

Or perhaps it will be all down to authors announcing their papers on their Weblogs and making them available without peer review and letting the scholarly community make up its collective mind about the quality, accuracy, etc. Again, the parallel with the music industry is there: bands are ignoring the record companies and putting their music straight on the Web.

Whatever happens, given the First Law of Forecasting, we can be sure that the future will be nothing like what the Ithaka report suggests, and nothing like what I have suggested :-)

09 October 2007

OA move by a TA publisher

Three friends and colleagues have recently drawn my attention to a letter they have received from Bentham Science Publishers - a relatively small outfit in the STM world, with 79 highly priced titles. Bentham is seeking Editorial Board members for a new 'open access' (i.e., author charges) journal, The Open Information Science Journal. This is part of a move on the publisher's part to create 200 new 'open access' titles across a range of disciplines and to be, in its own words, "...the largest publisher of quality open access journals..." Sure - the world needs another 200 journals desperately, doesn't it?

Curiously, the Open Information Science Journal is not listed at the Bentham Open site, which presumably means that the editorial arrangements have not yet been established.

Bentham may have a hard time trying to implement author charging in the information science field, where there are no precedents, where the research community is relatively small and where the existing quality journals are more than sufficient to satisfy the output of quality papers. The author charges quoted by Bentham are:


  • Letters: The publication fee for each published Letter article submitted is $600.

  • Research Articles: The publication fee for each published Research article is $800.

  • Mini-Review Articles: The publication fee for each published Mini-Review article is $600.

  • Review Articles: The publication fee for each published Review article is $900.


Bentham may also have a hard time getting editors and editorial board members for some of their journals - all three of my colleagues have turned down the invitation to join the Editorial Board.

The interesting aspect of this move is that a publisher has seen a business opportunity in author charging - there is no doubt, judging from my more than 25 years' experience of editing journals, that there are many poor quality papers around with authors desperate to get them published. How likely is it that a publisher, whose business model requires author charges, will resist the temptation to accept what the quality journals consider to be dross?

05 October 2007

More on open access

Thanks to the BOAI Forum and Stevan Harnad for drawing attention to the paper "Copyright and research: an archivangelist's perspective" by A.A. Adams, which refutes another paper by K. Taylor ("Copyright and research: an academic publisher's perspective").

It's a well-argued piece and my only complaint is that, once again, the case for what I have called the Platinum Route of collaboration and subsidy is ignored and 'Gold OA' is associated with author payments.

With today's technology, collaboration in the production of a journal is very straightforward and, rather than subsidising journal publishers by allowing staff time for editorial work and peer reviewing, universities could be subsidising OA journals in the same way. The only office you need is Microsoft Office - and, really, not even that - you can get by with a browser and an HTML editor. There will be questions about how far this model scales and, as far as I am aware, no OA journal published on this basis has yet reached the point at which the question becomes important. There are many niche research areas with relatively low numbers of active researchers who can be provided for under this model, and scalability is only an issue in terms of dealing with submissions. Scalability in use is not an issue, since the technology can cope.

I noted in an earlier post that the JISC in the UK invested well over £300,000 in author payments to publishers, when the same amount of money could have gone to subsidising new OA journals. I wonder if anyone is listening?

02 October 2007

Open Access in non-OA journal

It's always ironic when papers on OA are published in non-OA journals. Such is the case with a couple of papers in the current LIBER Quarterly:

One is "Embedding Open Access into the European Landscape – the Contribution of LIBER" by Paul Ayris:

Abstract. This paper continues an earlier published history of the OAI Workshops, organised under the aegis of the LIBER Access Division, in CERN Geneva. It discusses the OAI5 Workshop, held on 18-20 April 2007, which underlines the emerging importance of Open Access to support information provision and exchange across Europe.

The other is "Public Policy and the Politics of Open Access" by David C. Prosser:

Abstract. In the five years since the launch of the Budapest Open Access Initiative in February 2002, one of the most striking developments in the scholarly communications landscape has been the increasing interest taken in open access at a policy level. Today, open access (in the form of both self-archiving and open access journals) is routinely discussed and debated at an institutional-level, within research-funding bodies, nationally, and internationally. The debate has moved out of the library and publisher communities to take a more central place in discussions on the ‘knowledge economy’, return on investment in research, and the nature of e-science. This paper looks at some of the public policy drivers that are impacting on scholarly communications and describes the major policy initiatives that are supporting a move to open access.

The first of these doesn't look particularly fascinating, but I would like to be able to read the second without having to subscribe; to do that, however, I have to wait six months.

30 September 2007

NOT the British library?

A certain amount of publicity has surrounded the British Library's announcement that it is to make available in digital form some 100,000 old titles, mainly from the 19th century. It is good news that the collection will be open to staff and students of UK universities, but bad news that it will not be open access to all.

The British Library Act of 1972 gives the Board the right to impose charges, with the approval of the relevant Secretary of State, but charging here seems to be entirely against the intent of the major provision of the Act. The Library is defined as consisting of "a comprehensive collection of books, manuscripts, periodicals, films and other recorded matter, whether printed or otherwise", and when reasons of preservation, or otherwise, require the provision of digital versions of documents, those versions surely become part of the 'collections' referred to: they have no special status and, therefore, no special reason for charging.

With the likelihood that the provision of digital resources will become more and more the norm for the British Library, can we expect more charged-for services? The costs of running the Library and the costs of digitisation are derived from tax (with the exception that relationships with the likes of Microsoft may provide additional resources), and we have the usual situation in the UK of the citizen being required to pay twice for anything connected in any way with government information and data. Write to your MP today! The British Library needs more money to preserve its present services, and it is going to need more to enable it to fulfil the role of national library more effectively and more openly.

28 September 2007

Thoughts on the Berlin 5 OA conference

The presentations from the Berlin 5 conference on open access (held not in Berlin but in Padua, Italy) are now online.

I haven't read all of them yet, and probably won't, since, as far as I can tell, they are all PowerPoint presentations without the accompanying paper. As a result, some elements of practically all of the presentations are unintelligible without the context and, in one or two cases, the presentation as a whole seems to bear little resemblance to the title of the paper or to the abstract.

By and large, it looks to have been a pretty humdrum affair, with the same old issues being debated, wheels being reinvented and nothing new emerging.

Repositories and the 'author pays' model seem to be the only models discussed, and mention of the collaborative, no-money-changes-hands model of Information Research (and of other journals covered in our Case Studies series) is non-existent.

Fred Friend of UCL and JISC tells the audience that JISC (the UK's Joint Information Systems Committee of the Higher Education Funding Councils) "is now working with other organizations on models which fund gold OA publication charges as part of the research process and budget", having experimented with spending £384,000 to persuade publishers to adopt author charges and finding that it 'did not scale' - i.e., it would cost too much to continue.

I wonder if JISC has any idea of how many OA journals, operating on a subsidy and collaboration basis, that amount of money could have funded? With a £10,000 start-up subsidy, JISC could have got 38 OA journals under way - or 15 journals could have been given £5,000 a year for five years with the same amount of money (or, rather, a little less). That could have made a very significant impact on the development of open access in the UK and could have persuaded a number of small-circulation scholarly journals to convert to the OA route. As it is, £384,000 has gone into the pockets of shareholders. Great thinking, JISC!
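For what it's worth, here are the rough sums behind those figures - a sketch only, using the subsidy levels suggested above, not anything JISC has published.

    # Rough sums for the alternative uses of JISC's £384,000 (a sketch, not JISC figures).
    budget = 384_000

    start_up_subsidy = 10_000
    print(budget // start_up_subsidy)            # 38 journals given a one-off start-up subsidy

    annual_subsidy, years, journals = 5_000, 5, 15
    print(annual_subsidy * years * journals)     # 375,000 - a little less than the budget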

25 September 2007

Migrating to the e-world

It seems that Harvard's top economists are looking more to electronic dissemination of their work than they are to publication in the top journals. The explanation from Dani Rodrik's Weblog:

Several pieces of evidence bolster the view that one factor contributing to these trends is that the role of journals in disseminating research has been reduced. One is that the citation benefit to publishing in a top general-interest journal now appears to be fairly small for top-department authors. Another is that Harvard authors appear to be quite successful in garnering citations to papers that are not published in top journals. The fact that the publication declines appear to be a top-department phenomenon (as opposed to a prolific-author phenomenon) suggests that a top-department affiliation may be an important determinant of an author’s ability to sidestep the traditional journal system.

Rodrik is Editor of The Review of Economics and Statistics and he notes that his own experience as an editor of a prestigious journal supports this conclusion.

24 September 2007

Popular papers in Information Research

It's been a while since I last checked on the 'hits' on papers published in Information Research, so here's an update. It's a pretty crude measure of popularity, but the best we can do at the moment. Here we have the most hit papers in each of the published volumes of the journal from 1 to 11:


The data reveal that it takes an average of 2,884 hits to generate 1 citation in Google Scholar. I shall have to get round to checking out that number with more of the papers.
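For anyone curious how such a figure might be arrived at, here is a minimal sketch. The per-paper hit and citation counts below are invented placeholders, not the journal's actual data, and the method (total page views divided by total Google Scholar citations) is my assumption about the calculation.

    # Sketch of a hits-per-citation calculation; the figures are placeholders only.
    papers = [
        {"hits": 30_000, "citations": 10},
        {"hits": 12_000, "citations": 5},
        {"hits": 5_000,  "citations": 2},
    ]

    total_hits = sum(p["hits"] for p in papers)
    total_citations = sum(p["citations"] for p in papers)
    print(f"average hits per citation: {total_hits / total_citations:,.0f}")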

23 September 2007

The advice to PRISM

The advice of the lobbyist retained by the publishers has been revealed on the Web (Peter Suber has more of the story). The first paragraph of that advice is very telling:

The Coalition faces the daunting task of trying to win support for an issue in which publishers are not sympathetic - continuing to charge fees for access to scientific journals. It's hard to fight an adversary that manages to be both elusive and in possession of a better message: Free information. There's no magical sound bit that will cure this issue, however, at the present time there is little or no "pushback" from the publishing industry. To inject the industry's position into the debate, we recommend bypassing mass "consumer" audiences in favor of reaching a more elite group of decision makers employing strategies that emphasize "high-concept" rhetoric and in-the-trenches political-style communications.

Mmmm, interesting, eh? There's an even more interesting set of Rhetorical campaign points:


  • Develop simple messages (e.g., Public access equals government censorship; Scientific journals preserve the quality/pedigree of science; government seeking to nationalize science and be a publisher) for use by Coalition members

  • Develop analogies that put the public access issue into a context whereby target audiences will understand its pitfalls and perilous implications not to mention the hypocrisy of science leaders getting salaries and honoraria but declaring the publishing industry's need for capital as being somehow immoral

    • Paint a picture of what the world would look like without peer-reviewed articles.

    • In theory this may provide free taxpayer access to research that they fund, but they will pay eventually with substandard articles and their money being used to develop and maintain an electronic article depot rather than to fund new research.

Enough said, I think. It's beginning to dawn on the PRISM Coalition that they have shot themselves in the foot by adopting some of what was proposed and, clearly, for them to adopt some of the other ideas would be even more disastrous. For example, how much of the industry's profits go to investment in capital developments? Well, these companies' reports are on the Web and Reed Elsevier, for example, report that out of an operating profit of £1,210 million in 2006 (up 9% on 2005), capital expenditure was £196 million, while dividends paid to shareholders amounted to £371 million (up 10% on 2005), with a further £271 million being spent on share repurchases. So, 16% on capital developments and a total of 53% on dividends and share repurchasing. I think we can see where the company's priorities lie.
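A quick check of those percentages against the figures quoted from the annual report:

    # Quick check of the Reed Elsevier percentages quoted above (2006 figures, £ million).
    operating_profit = 1210
    capital_expenditure = 196
    dividends = 371
    share_repurchases = 271

    print(f"capital developments: {capital_expenditure / operating_profit:.0%}")
    print(f"dividends and buybacks: {(dividends + share_repurchases) / operating_profit:.0%}")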

Not that this is a bad thing - companies are in business to pay dividends to shareholders, but I wonder what the profits would be if the publishers had to pay for their raw material and for the peer-review? I suspect that we would see many fewer journals and an even more rapid increase in true OA publishing.

22 September 2007

Good news for Open Access

Good news on the Open Access front. The Canadian Journal of Sociology/Cahiers canadiens de sociologie is moving from toll access (i.e., subscription based) to open access. Kevin Haggerty, the editor of CJS/CCS, writes in his editorial:

The financial implications of this move remain somewhat opaque, and I have agonized over this issue. The situation of independent scholarly publishing in Canada has always been precarious. This is particularly true with the CJS/CCS which does not receive any association funds. Retiring the hard copy version of the journal eliminates subscription revenue, which is one of our major sources of funding. That said, mimicking wider publishing trends, the journal’s subscriptions have been substantially declining at the same time that our electronic readership (through Project MUSE and other venues) has increased dramatically. Moreover, it was always the case that most of our subscription revenues went to cover the costs associated with producing a hard copy volume, such as printing, subscription management and postage.

He goes on to note that CJS/CCS has been subsidised by the Canadian Social Sciences and Humanities Research Council, but he expects to continue to receive these funds, since the SSHRC is an advocate of open access.

Many scholarly journals published by universities and university presses must be in very similar situations - living off subsidy and subscriptions, the latter paying for most of the paper-associated costs (as well as those of maintaining the record of subscribers). With the move to OA, such costs are wiped out in an instant and what is then needed to live on is a very much smaller amount of money. In the case of Information Research it is a zero amount of money, since there is no income and no monetary subsidy. Perhaps with this example, and the example of new scholarly journals taking the free OA track from the beginning, universities will begin to realise the advantages of OA. True OA - not the author-charge model - what I have called the Platinum Route.

This news picked up from Heather Morrison, via Peter Suber

18 September 2007

PRISM - a language change

The publishers' lobby organization has changed the language of its top page - no doubt the result of the wave of opposition it aroused by attempting to mislead researchers, funders and, most importantly, the policy makers.

However, no one should imagine that this means that the organization's ideas have changed, nor its way of putting a spin on just about everything it says. For example, we are directed from the top page to:

"Learn more about government intervention and the risks and unintended consequences of proposed legislation"

and, clicking on the link, we find:


Various initiatives and proposals have been put forth by special interest groups and some legislators that would force private sector publishers to surrender to the federal government all peer-reviewed articles that report on research supported by federal research grants.

Such undue government intervention in scholarly publishing poses inherent risks and problems, including:


  • Threats to the economic viability of journals and the independent system of peer review

  • The potential for introducing selective bias into the scientific record

  • Government data repositories being subject to budget uncertainties

  • Unwarranted increases in government spending to compete with private sector publishing

  • Expropriation of publishers' investments in copyrighted articles

  • Undermining the reasonable protections of copyright holders

Let's look at these in turn. The first links the economic viability of journals with the independent system of peer review, as though, if the former is threatened (as it is), the latter will also be detrimentally affected. However, this is not the case: the system of peer review exists because of the willingness of academics to give their time freely to ensure the integrity of published research work. True, it is not perfect, but it works and it would continue to work in an open access world: there is no reason whatsoever to assume that, if the academic community wished peer review to continue, it would not do so. However, the academic community could do the established commercial publishers considerable damage if they withdrew their voluntary labour. How, then, would the publishers ensure the integrity of the research record? Presumably, if the subscriptions continued to flow, they would be quite happy. Methinks they do protest too much on this point!

The second point on the introduction of 'selective bias' is presumably related to the first: they are suggesting that if peer review did not exist in an open access world, and continued in the commercial publishing world, the quality of what is published would be lowered and 'bias' would result. But this is nonsense: the answer is to repeat the points above. There is no necessary connection between commercial publishing and peer review. Indeed some publishers are quite happy to publish journals with no scholarly review, or with editorial review only - are they leaping to the barricades to prevent the rise of open access? Of course not.

The notion that somehow the existence of commercial publishing is some kind of fall-back system if government-funded data repositories were to be underfunded to the extent of ceasing to exist is also nonsense. Publishers do not maintain alternative data repositories, nor do they contribute to them. Organizations such as the ESRC Data Archive in the UK collect raw data from the researchers who collect it, along with the research instruments, coding manuals, etc. No publisher does any of this work, so to link their publishing activities to the existence of data archives is simply silly.

'Unwarranted increases in government spending' - oh my, that's really a beauty! Here is an industry that obtains its raw material free of charge as a result of government, charitable foundation and industry spending on research, and then benefits from the subscriptions of the institutions that employ those same researchers, complaining that the government might cut their profits by encouraging open access publishing. That's very rich. In effect the publishers are saying: "Look government, you spend all this money to give us raw material from which we can make a profit, so please don't encourage anything that might limit those profits!" And I love the idea of 'private sector publishing'! If only! Let us imagine what 'private sector publishing' would actually involve: first, the publishers would have to pay authors to write for them, as they pay novelists and the authors of travel books, biographies, etc., etc. Then, they would have to pay academics to review the papers they had paid for to determine whether they were appropriate to publish (of course, under this system, they would rapidly forget about peer-review, since it would eat into their profits), and then they would have to market vigorously to persuade institutions to buy their products. And, at the same time they would have to compete with a public sector open access system. Can you guess what would happen? I leave it to your imaginations.

So there's a danger of government expropriating industry's investment in copyrighted articles and, final point, of undermining the rights of copyright holders. Well now, what are we to make of this? First, the industry has invested nothing in the copyrighted articles - the investment has been made by government, etc. They have an investment in the published article, but not in the original copyrighted source. And it is a moot point, I understand from lawyer friends, whether an author can actually sign away his or her copyright. I believe there is no case law in the UK on this point and publishers are unwilling to take a case to court in case they lose. If this is so, then the copyright holder is the author of the text and/or his or her institution, depending upon the practice of the institution, and all that can be granted to the publisher is a licence to publish under negotiated terms. Perhaps those threatened copyright holders (the authors) should bear this in mind and, instead of signing away their rights - which may not actually be lawful - they should negotiate. After all, they are now in a strong position, given the existence of open access, and free, journals in so many fields.

Take all this stuff with a pinch of salt and make sure your representative in Congress or Parliament understands that lobby talk is not necessarily reporting with integrity.