
An interview with Charles Oppenheim

Interview by: Margaret Adolphus

Professor Charles Oppenheim is Professor of Information Science at Loughborough University, where he heads the department.

His research interests include legal issues in information work, knowledge management, citation studies, bibliometrics, national information policy, the electronic information and publishing industries, ethical issues, patents information, issues to do with digital libraries and the Internet, and the economics of electronic publishing and of electronic libraries. He has also consulted widely in these areas. Charles spoke to editor Margaret Adolphus at the Online Information Exhibition in 2007.


You have recently been part of a major research project obtaining an evidence base of data on UK scholarly journals. Can you tell us about this project?

This was the UK Scholarly Journals: 2006 Baseline Report, which was funded by the Research Information Network. Its objective was to identify where there are reliable facts and figures about the UK scholarly publishing business and where there are gaps, and to comment on the reliability of some of the reports and articles. It was a joint project between Loughborough University and Electronic Publishing Services, and I coordinated the Loughborough end.

The outcome – which was not surprising – was that there's an awful lot of speculation around, and a lot of statements are made about the state of the scholarly publishing industry: what's happening to it, how many journals and articles there are, how profitable the journals are, and what the issues are for the organizations involved, but there is very little reliable data. So in the end our recommendation was that a lot more research is needed, and in fact the Research Information Network is now funding more such research.

Can you tell us about some of the other current research you are doing?

I am in charge of a Joint Information Systems Committee (JISC) project called the Registry of Electronic Licences, which is currently [December 2007] six months into a two-year period. A main objective is to make it clear exactly what rights a user has to use the material his or her organization has a licence for – what types of access and use are permitted. At the moment there's no way of doing that: people are aware that something is available – it may be a journal article or an image – and they may know whether they are an authorized user, but they typically don't know for sure what they can do with it.

The plan is to convert the major licences (including Emerald's) into machine-readable format. There will be a traffic light system of red, amber and green: red means you can't do that, green means you can, and amber means you need to talk to the librarian, or "this is a grey area". We have a number of partners, including the independent consultancy Rightscom Ltd and some publishers such as OUP and the Open University. The publishers will supply some of their licences to us; we will convert them and then check that we have interpreted them correctly. We are also talking to various library automation companies to make sure that what we do is compatible with what they are doing now or plan to do in the future.
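By way of illustration only – this is a hypothetical sketch, not the project's actual data model or output format – a licence reduced to machine-readable terms could be queried for a traffic-light answer along the following lines. The usage names, the example licence and the check_usage helper are all invented for the example.

```python
# Hypothetical sketch of a traffic-light licence lookup; not the RELI project's
# actual data model. Usage names and the example licence are invented.
from enum import Enum


class Light(Enum):
    GREEN = "permitted"
    AMBER = "talk to the librarian / grey area"
    RED = "not permitted"


# A licence expressed as a mapping from usage type to a traffic-light value.
example_licence = {
    "view_online": Light.GREEN,
    "print_single_copy": Light.GREEN,
    "deposit_in_repository": Light.AMBER,
    "redistribute_commercially": Light.RED,
}


def check_usage(licence: dict, usage: str) -> Light:
    """Return the traffic-light status for a usage; unknown uses default to amber."""
    return licence.get(usage, Light.AMBER)


if __name__ == "__main__":
    for use in ("view_online", "deposit_in_repository", "text_mining"):
        print(use, "->", check_usage(example_licence, use).value)
```

In a real service the entries would be generated by converting the publishers' licence documents and checked against the original wording, rather than written by hand.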

The main output will be a prototype, a proof of concept, with a relatively small number of publishers’ materials. We have not promised a service as such, but if the community likes what we have done, it would be up to someone to invest the money to convert the prototype into a service. We consider that this service will benefit both the publishers as rights holders, improving clarity of understanding as to what can and cannot be done with particular content, and librarians, supporting compliance without the need for human intervention.

Much research has been done on tools for measuring and ascertaining behaviour of users of electronic products. What for you are some of the most interesting developments?

In terms of tools, I would point to deep log analysis, and in particular some of the research carried out by David Nicholas, director of the School of Library, Archive and Information Studies at University College London. Researchers are collecting literally millions of records which show us for the first time what people are doing: time of day, day of the week, the search terms used, who is doing the searching, patterns of downloading, and so on.

How is all this data analysed?

Well, that's a bit of a nightmare! Part of the problem is there is just so much stuff out there. The good thing about electronic information is that you can collect types of data which you can't collect with print; the bad thing is that you become awash with data.

What do we want the data to tell us?

What we would like the data to tell us is how people choose to search, as opposed to how the producers would like people to search. If it turns out that people use incredibly simple search terms, for example a single keyword, then the question arises: why are we endeavouring to create all these sophisticated advanced search features, Boolean logic and so on? And identifying incorrect search strategies and disastrous searching mistakes might give a clue as to how we should teach people to use search services. So I think these would be the two main benefits: seeing what people do in reality as opposed to what we'd like them to do, and working out how we can address the mistakes (if any) they make.
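As a rough, hypothetical illustration of the kind of question such transaction logs can answer, the sketch below tallies how many logged queries consist of a single keyword. The tab-separated log format, field order and sample records are assumptions made for the example, not the format used in the deep log analysis research.

```python
# Hypothetical sketch: count single-keyword versus multi-keyword queries in a
# search log, assuming records of the form: timestamp<TAB>user_id<TAB>query.
from collections import Counter


def summarize_query_lengths(log_lines):
    """Tally how many keywords each logged query contains."""
    lengths = Counter()
    for line in log_lines:
        parts = line.rstrip("\n").split("\t")
        if len(parts) < 3:
            continue  # skip malformed records
        query = parts[2]
        lengths[len(query.split())] += 1
    return lengths


if __name__ == "__main__":
    sample = [
        "2007-12-04T09:15\tu123\tbibliometrics",
        "2007-12-04T09:16\tu456\topen access business models",
        "2007-12-04T09:17\tu123\tcitation",
    ]
    counts = summarize_query_lengths(sample)
    single, total = counts.get(1, 0), sum(counts.values())
    print(f"{single} of {total} queries used a single keyword")
```

A large share of single-keyword queries would support the point about people searching far more simply than advanced search interfaces assume.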

Do you think that we are moving towards new business models for scholarly publishing? If so, what are they, and how will they affect the various stakeholders, i.e. authors, publishers and librarians?

Well the obvious new business model is open access. Basically the old one is upfront subscription, unlimited use; that has been transferred somewhat into the big deal, with licences for a number of sites under a consortium arrangement, where you still have an upfront subscription and unlimited use. In both, the library pays and the users are the beneficiaries. The concept of open access turns this all on its head and transfers the costs from the library to, in theory, the author (if you are talking about the author-pays model), but I find the term "author pays" misleading because it's very, very rare that the author pays; it's normally the employer or funder who pays, or the author pleads poverty and nobody pays. So the number of times the author actually pays is virtually zero; in fact I'd love to find out if an author has ever actually paid personally! There is no reason why such models can't be profitable: the source of the flow of money changes, but that doesn't necessarily stop profitability.

Presumably what happens in the case of open access is that the publisher still publishes the title, the author pays to be published, but the librarian pays for the cost of the licence?

That could be, and some people have assumed all librarians will love open access because it saves them money. However, librarians have got their eyes wide open: they know they've still got to run a library, advise their patrons as to what the best sources are, and catalogue the journals. So there's very little reduction in costs. In fact, right now we've got a combination of all the different models, and librarians have to deal with subscription journals and open access journals, as well as Google Scholar and so on. Their workload is heavier than ever and they're not saving money at all.

But the other interesting, and potentially more dangerous, model is the one where content is put into institutional repositories and nobody’s paying, or rather the costs are just lost. These models represent potentially genuine competition for publishers. There will, I’m sure, come a time when libraries will start cancelling subscription-based services because they say, "we’ve got more than enough: 90 per cent of the stuff is now available in repositories around the world, therefore we have access to them anyway". That hasn’t happened yet but I have no doubt it will sooner or later.

There are two conflicting views, both of which I can see are intuitively reasonable. The first is that libraries will cancel because they can get content via open access. And the other is open access means lower quality content because there’s no peer review.

I see no evidence for either of these scenarios at the moment, but there could be a future situation where libraries will cancel because material is available on an open access platform, while the quality of material in open access might decline because anybody can become a self publisher.

Is there no sort of peer review in open access?

There can be and there is no reason for it not to be, but the risk is that people might just put potentially poor quality material onto their web pages, which then gets picked up by Google Scholar and read by students and researchers.

On the other hand, there's the pressure exerted by people's heads of departments to publish in the top (and usually American) journals. Might that counteract the trend you describe?

Just like your subscription-based journals where you have the very top ones, the middle ones and the rather poor ones, I think you will get a hierarchy of open access journals, ranging from the really high quality with very strict rules, all the way down to the ones which will take anything. And I think that just as everybody knows the top subscription journals, such as Nature and Proceedings of the National Academy of Sciences, so people will know that the Open Access Journal of Chemistry or whatever is the top journal.

Do you think that the principle of peer review is so established in academia that it won’t go away?

Yes, I think that’s fair comment, although the process might change. You could envisage circumstances where I just throw something up, and wait for people to comment, rather than going through a formal "please sir can I be peer reviewed" process. But certainly the whole concept of peer review won’t go, so I agree that it is embedded.

Let’s turn to the Research Assessment Exercise (RAE). Traditionally, the RAE has not been kind to applied or multidisciplinary research – library and information science is both. How do you feel that the discipline will fare under RAE 2008?

There is an alternative pressure in that our subject area, like all subject areas, is evaluated by a panel from the discipline, who won't want to shoot their own subject in the foot by saying the quality of research is really poor. So they will assess what they are presented with, and even if it isn't terribly good, they won't want to risk saying that the UK can't produce library and information science research. So the RAE panels are under subconscious pressure not to be too hard, despite the fact that, as you rightly said, it is a subject which doesn't really lend itself to the RAE process. In a previous RAE there was a major upset in nursing, when that panel gave a highest score of three out of a possible five to the very best of nursing research. The General Nursing Council was furious, because it was tantamount to doing down the nursing profession, and I think that message was taken on board by everybody. So, if you like, there is a self-correcting mechanism in the process; panels take into account the characteristics of the subject, and some are not as hard as you might think they would be.

What is your view of the new Research Excellence Framework (REF), which is based on bibliometrics?

In general I’m in favour of the decision because I’ve done research which has demonstrated that the previous RAE produced the same ordering of departments as a simple bibliometrics system. Bibliometric systems are much cheaper, easier and less stressful to do than having to submit a programme and be evaluated, etc. What worries me is that the funding councils seem to have adopted a very simplistic approach to bibliometrics. As it happens I am on the advisory board advising the Higher Education Funding Council for England (HEFCE) on its use of bibliometrics and I will advise them that their document is too simplistic.

Although in principle I am in favour of bibliometrics, because it has the potential to be a cheaper and less stressful system, you need to be sensitive to the subtleties of bibliometrics, particularly in subject areas with relatively low citation counts or with an emphasis on conference, book or patent publication rather than journal publication. I think the problem with the REF is that it has been imposed by politicians; it was sprung in a budget statement and the then chancellor, Gordon Brown, announced it without, I suspect, understanding the subtleties.

So having been forced into it, I think perhaps the attitude has been to keep it as simple as possible. But if you make it too simple, you may get some rogue results.
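As a hypothetical illustration of the earlier point that a simple bibliometric measure produced the same ordering of departments as the RAE – not a reproduction of that research – the sketch below computes a Spearman rank correlation between invented RAE-style grades and invented citation scores for a handful of departments.

```python
# Hypothetical sketch: compare an RAE-style ranking with a simple bibliometric
# ranking using Spearman rank correlation. All figures are invented.


def spearman(xs, ys):
    """Spearman rank correlation for two equal-length score lists without ties."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - (6 * d2) / (n * (n ** 2 - 1))


if __name__ == "__main__":
    rae_grades = [5.0, 4.2, 3.8, 3.1, 2.5]               # invented panel grades
    citations_per_staff = [42.0, 30.5, 28.0, 15.2, 9.9]  # invented bibliometric scores
    print("Spearman rho:", round(spearman(rae_grades, citations_per_staff), 3))
```

A correlation close to 1 would mean the two approaches rank the departments in essentially the same order, which is the kind of evidence the argument for a cheaper bibliometric exercise rests on.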

Do you have confidence that you can influence the advisory board?

Not at all, because I imagine the HEFCE has been told there’s only so much money to spend. So I’ve full confidence I won’t have any influence whatsoever.

The Department of Information Science at Loughborough University, which you head, has received top ratings for both research and teaching. What helps you maintain this top position?

I would say it's two things. One is our physical layout, which is pentagonal and on just one floor, containing the whole department – staff offices, PC labs, workgroup areas for students. It's as much to do with psychology: it's an enclosed space, so it's like a return to the womb. Second, we regard ourselves as a warm, cuddly, approachable group of people and we have an open door policy. We try to be helpful to the students and nurture them; we don't regard them as just bums on seats. I think it's that psychological factor that gives us a really good score.

As to our research, it's something to do with being able to work well together, so we get good brainstorming of ideas, and we do attract high quality academics and PhD students. We only accept about one in seven or eight PhD applications, so we pick the best quality students; we have high quality staff and we have lots of research ideas. There is a much more collegial feel in the department than in any other place I have worked, and that obviously helps in both the teaching and the research environment.

Editorial note:

The final report of the Research Information Network can be read at: http://www.rin.ac.uk/files/UK%20Scholarly%20Journals%202006%20Baseline%20Report.pdf

More information about the Registry of Electronic Licences can be found at: http://www.jisc.ac.uk/media/documents/programmes/sharedservices/reli.pdf