I show that as more journal issues came online, the articles referenced tended to be more recent, fewer journals and articles were cited, and more of those citations were to fewer journals and articles. The forced browsing of print archives may have stretched scientists and scholars to anchor findings deeply into past and present scholarship. Searching online is more efficient and following hyperlinks quickly puts researchers in touch with prevailing opinion, but this may accelerate consensus and narrow the range of findings and ideas built upon.
Bill Hooker, in his blog Open Reading Frame, takes issue with some of what Evans discovers. In particular, he notes Evans's statement:
I show that as more journal issues came online, the articles referenced tended to be more recent, fewer journals and articles were cited, and more of those citations were to fewer journals and articles.
and he comments:
OK, suppose you do show that -- it's only a bad thing if you assume that the authors who are citing fewer and more recent articles are somehow ignorant of the earlier work. They're not: as I said, later work builds on earlier. Evans makes no attempt to demonstrate that there is a break in the citation trail -- that these authors who are citing fewer and more recent articles are in any way missing something relevant. Rather, I'd say they're simply citing what they need to get their point across, and leaving readers who want to cast a wider net to do that for themselves (which, of course, they can do much more rapidly and thoroughly now that they can do it online).
Well, I think I am with Evans here - would it were true that authors are not ignorant of earlier work. In my experience as an Editor and a PhD supervisor, I am continually amazed at the extent to which authors and students are unaware of pre-WWW work. It seems that if the work was done before 1995, it is assumed to have no relevance to the present day. In many cases, of course, that will be true, and in some cases the research record is indeed a record of building upon earlier work. In many subfields of information science, however, it is not. A great deal of work was done in the 1970s that is now completely ignored. Researchers rediscover wheels again and again, when a search of the earlier literature would have revealed that what they think of as novel was novel 50 years ago!
I believe that everything we do needs to be rooted in its historical context; without it, we assume that everything that has gone before has nothing to teach us, whereas the reality is that much has been done that could be of relevance, if only it were known about.
To take just one example, a project at Hamline University in the USA in the 1970s explored how librarians could support teaching. Assistants were appointed to work closely with teachers, sitting in on courses, identifying material that was often of more use to undergraduates than the research papers the teacher was citing, and generally performing the kind of 'information scientist' role that Jason Farradane (another forgotten name?) promoted in industry. The report demonstrated the efficacy of employing librarians in this role, but it also pointed to the economic costs; as a result, the initiative was abandoned and the report forgotten. Yet that report says more about how to engage with teaching and how to support the learner than the vast majority of publications on 'information literacy' do today - sadly, it is never cited :-)