
Assessing the value of library services

Why performance measurement?

You are at a meeting of senior university managers. Car parking comes up under "any other business". You twitch (you are a bike user, anyway), wanting to get away. Then the vice chancellor informs the meeting that he wants to enlarge the car park, as many students work and want to be able to park quickly and easily on campus when they attend classes. He proposes to fund this by a major cut in library services; after all, most students go direct to the Internet for information so why do they need the library anyway?

Now fully alert, you mutter something about Google’s inadequacies, wondering how on earth you can stop this mad idea in its tracks. You know that you have statistics about the number of article downloads, reference enquiries, etc., but how can you demonstrate to your vice chancellor, in terms that he will understand (which means numbers), that the library’s value to your institution is indisputable?

Fortunately this case is totally hypothetical, but performance and its measurement are among the main concerns of librarians in both the academic and the public sector. In December 2007, the Association of Research Libraries (ARL) published the results of a survey of its members which achieved a 60 per cent response rate and found that all but one of the responding libraries engaged in assessment activities beyond collecting the obligatory data for ARL (Wright and White, 2007).

The change in the information landscape from physical to virtual, an increased concern for cost accountability in public sector institutions, and a desire to provide what customers want in a changing market – all these are reasons why libraries need to assess the value they provide and their impact on the organizations which they serve.

Why measure performance?

There are a number of drivers behind performance measurement; some of the main ones are outlined below.

Service enhancement

Good librarians realize the importance of understanding their users and their needs – do they want more electronic journals? What sort of space do they require: somewhere quiet to study, or somewhere for group discussions? Are existing services appropriate, and what new services should be offered?

Desire to know about library processes

How efficiently do particular services work? Is a disproportionate amount of time being spent on a particular task? How much are resources being used?

Decision making

Senior library managers use data on library functions and services as a basis for their strategic planning and resource allocation. In a study of the impact of performance measurement on decision making in mid-sized academic libraries in the American mid-West, Dole et al. (2006) found that all senior managers used assessment data for decision making, with data on collection use and user satisfaction deemed the most useful.

Value for money

The ability to demonstrate the value of your service for the organization which you serve, not just in some general philosophical sense ("the library promotes liberal values") but in actual financial terms (the library saves us £x/$x in terms of teacher time), is probably one of the most important reasons for measuring performance. Dole et al. (2006) quote another study’s finding that one of the most important core competences today’s librarian needs is the ability to measure their library’s impact on higher education. This means demonstrating outcomes which only the library could achieve and which serve organizational goals (Poll and Payne, 2006).

Value for money may also relate to a particular project funded by an external body: here it is important to demonstrate that the money has been well spent.

Benchmarking

Local collections of statistics are amalgamated to provide a national collection of information on different institutions, which means that a particular library can compare itself with another of similar size and type. There may be a national requirement to provide statistics.

In the UK, the Society of College, National and University Libraries (SCONUL) has been collecting and publishing statistics from university libraries since 1987. It uses these statistics for policy decisions, while individual libraries can compare themselves with similar institutions. They may use this information to press for more funds, or to plan new services.

There have been a number of national benchmarking projects (Poll, 2008), covering Germany (public and academic libraries), Australian public libraries, Sweden (a quality handbook covering all types of libraries), the UK (HELMS) and the Netherlands (university libraries).

What should be measured?

Performance measurement looks at practically every aspect of the library: the number of issues, reference enquiries, article downloads and visits; how the website is performing; the satisfaction levels of users; and so on. Broadly speaking, measures can be divided into inputs, outputs and outcomes.

Inputs

This generally refers to the cost of employing staff and holding collections. Usage data for the latter – particularly e-resources – will be available (see the information management viewpoint, "Usage").

Outputs

This relates to all services provided by the library: level of satisfaction with staff gauged via surveys, together with statistical data on all activities. For example, hits to the website, speed with which items are returned to shelves and delivered from remote or closed stacks, queuing time at the lending desk, visits to the library and seat occupancy, items available to use, services to external users, and reference enquiries.

These measures are often expressed in economic terms: what is the cost per active user, per visit, per use? How cost-effective is the library when, for example, total operating expenditure is set against actual use, or the number of downloads is compared with the total cost of collections?
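As a concrete illustration of such unit-cost measures, the short sketch below derives cost per active user, per visit and per download from a set of annual figures. All the figures and variable names are hypothetical assumptions for illustration, not data from any study cited here.

```python
# Hypothetical unit-cost calculations from a library's annual figures.
# All numbers are illustrative assumptions, not data from the text.

operating_expenditure = 2_500_000   # total annual operating cost (GBP)
collection_expenditure = 900_000    # annual spend on collections (GBP)

active_users = 18_000               # registered users with at least one transaction
visits = 420_000                    # physical and virtual visits
downloads = 650_000                 # full-text article downloads

cost_per_active_user = operating_expenditure / active_users
cost_per_visit = operating_expenditure / visits
cost_per_download = collection_expenditure / downloads

print(f"Cost per active user: £{cost_per_active_user:,.2f}")
print(f"Cost per visit:       £{cost_per_visit:,.2f}")
print(f"Cost per download:    £{cost_per_download:,.2f}")
```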

The ARL survey found that 80 per cent of respondents assessed areas which directly affected their users, such as the website, the catalogue, the facilities, the reference service, and how the collections are used and managed (Wright and White, 2007).

Outcomes

The savvy librarian looks not only at outputs, important though these are, since outputs alone show the library in isolation. For example, there has been a vast increase in downloads of electronic resources. But how has this affected the behaviour of the user – are students getting higher grades? Are lecturers producing more articles in peer-reviewed journals? The librarian concerned with outcomes will thus be looking at the effect on the user and on the organization, and asking: what is the cost of not providing this particular service?

How should it be measured? Tools and models

There are as many different tools and techniques for measuring performance in libraries as there are functions to measure. The nearest to a standard measurement tool is probably LibQUAL.

  • Such is the variety that the UK’s SCONUL has created a "Performance Portal" with information on the various measurement methods.
  • For North America, the Association of Research Libraries (ARL) devotes a significant part of its website to statistics and measurement.
  • The Council of Australian University Librarians (CAUL) provides a Performance Indicator database. This includes links to survey tools, as well as case studies.

Tools and techniques fall broadly speaking into three categories:

  1. those that emphasize service to the customer, and which take the form of satisfaction surveys;
  2. statistics, usually collected centrally by organizations such as SCONUL or ARL; and
  3. business models which look at the library’s impact on the organization as a whole.

Of course, these methods can be, and frequently are, combined, with satisfaction surveys obtaining statistical measures, and business models drawing on both surveys and statistics.

Customer satisfaction surveys

Wanting to get close to customers, and understand their needs, is one of the strongest motivations for librarians to undertake assessment, and was the main motivation for 91 per cent of the ARL survey respondents (Wright and White, 2007). Many libraries carry out regular surveys of their users.

Newcastle University Library, for example, carries out a survey every four years, which helps it find out not only what people think about current provision, but also about what services the library should improve or develop. Like several other SCONUL members, it has achieved Charter Mark status – the UK government’s award for excellence in customer service.

LibQUAL+

Some libraries develop their own survey instrument, but there are a number of standard ones. Probably the best known of these is LibQUAL+, a web-based survey tool which libraries use to obtain, study and act on users’ perceptions of service quality.

LibQUAL+ was developed by Texas A&M University library, and is an adaptation of the SERVQUAL instrument, which is based on the gap theory of service quality. It is easy to administer, and because it is standardized it supports benchmarking at local, national and international levels. It is widely used in North America and the UK (Creaser et al. (2007) reported that about a third of SCONUL members had run it at some point), as well as in other countries such as Sweden, Japan and Australia.

The instrument includes questions about various aspects of service, demographics and level of library use. There are 22 general questions on service level, with a scale rating according to minimum acceptable, perceived and desired service, grouped according to:

  • Affect of service – the effectiveness of the library staff.
  • Library as place – the physical environment.
  • Information control – how easy is it to find information, including website and access tools.

There is also scope for the library to add its own individual questions, and customize it with the institution’s logo.

Results from the survey are sent to the LibQUAL+ database in the USA for analysis. Average scores are published, which help libraries measure themselves against a standard.

 

Figure 1. Screenshot showing the first page of LibQUAL+
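To make the gap idea concrete, the minimal sketch below shows how gap scores are typically derived from the three ratings collected for each question (minimum acceptable, perceived and desired service). The example ratings are invented, and the calculation is a generic gap-theory computation rather than the official LibQUAL+ analysis.

```python
# Generic gap-score calculation for LibQUAL+-style ratings (1-9 scale).
# The ratings below are invented for illustration.

responses = [
    # (minimum acceptable, perceived, desired)
    (5, 7, 8),
    (6, 6, 9),
    (4, 7, 7),
]

def mean(values):
    return sum(values) / len(values)

minimum = mean([r[0] for r in responses])
perceived = mean([r[1] for r in responses])
desired = mean([r[2] for r in responses])

adequacy_gap = perceived - minimum     # positive: service exceeds the minimum users will accept
superiority_gap = perceived - desired  # usually negative: service falls short of the ideal

print(f"Adequacy gap:    {adequacy_gap:+.2f}")
print(f"Superiority gap: {superiority_gap:+.2f}")
```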

 

Other user surveys include the following:

United Kingdom

  • SCONUL provides its own satisfaction survey, with two variants according to whether the library is an independent unit or has converged with other services, such as IT. Like LibQUAL+, it is intended to be relatively painless and groups questions into five sections: demographics, activity, satisfaction with key services and facilities, importance of key services and facilities, and overall satisfaction levels.
  • The LIBRA package from the research organization Priority Research, which according to a survey conducted by West (2004) was the most popular platform.

USA

  • DigiQUAL, which is based on LibQUAL+, and is designed to assess services provided for the user communities of the National Science, Math, Engineering and Technology Education Digital Library (NSDL) programme.
  • ClimateQUAL™ – Organizational Climate and Diversity Assessment (OCDA), a survey which helps assess diversity and organizational health.
  • MINES for Libraries (Measuring the Impact of Networked Electronic Services) – an online, transaction-based survey that collects information on the use of electronic resources and user demographics.
  • Effective, Sustainable, Practical Assessment – an ARL-sponsored programme which looks at ways of measuring librarians’ contribution to teaching and learning.

Australia and New Zealand

  • Library Client Survey from Insync, formerly the Rodski survey, which helps libraries find out which services users find most important, how they perform in these areas, and where improvements are required. It helps benchmark university libraries in Australia and New Zealand.

Sometimes, when libraries establish what is important to their users, they develop standards based on an optimum level of performance, encapsulated in key performance indicators. The University of Birmingham Information Services department, for example, of which the library is a part, publishes its performance standards at www.isquality.bham.ac.uk/kpi.htm; these cover the availability of IT services, efficient operation of the library, access to e-resources, and meeting postgraduates’ needs for information literacy training.

The National Library of Australia also publishes its standards in the form of a charter – for example to "deliver 90% of items stored onsite within 45 minutes", and "ensure that the Library's main website at www.nla.gov.au is available 99.5% of time (24 hours a day, 7 days a week)".
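A minimal sketch of how such published targets might be monitored against logged service data is shown below. The two targets are taken from the charter quoted above, but the sample data, variable names and checking logic are hypothetical assumptions for illustration.

```python
# Illustrative check of two published service targets against hypothetical logs.
# The targets come from the National Library of Australia charter quoted above;
# the sample data and this checking approach are assumptions for illustration.

retrieval_minutes = [32, 41, 47, 28, 39, 44, 50, 36]  # hypothetical onsite retrieval times
uptime_fraction = 0.9972                               # hypothetical measured website uptime

within_45 = sum(1 for m in retrieval_minutes if m <= 45) / len(retrieval_minutes)

print(f"Items delivered within 45 minutes: {within_45:.0%} (target: 90%)")
print(f"Website availability: {uptime_fraction:.2%} (target: 99.5%)")

print("Retrieval target met" if within_45 >= 0.90 else "Retrieval target missed")
print("Availability target met" if uptime_fraction >= 0.995 else "Availability target missed")
```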

Statistics

Statistics are the "hard facts" in library assessment and are essential for benchmarking. Many statistics are gathered through the surveys described above; however, other records are kept, often by a central body such as SCONUL, ARL or CAUL, on such subjects as resources, usage, income and expenditure, using special statistical questionnaires or other sources such as COUNTER (see the viewpoint on "Usage"). A simple illustration of the kind of peer comparison these centrally held datasets support follows the list below.

  • In the UK, SCONUL has been collecting library statistics since 1987, by means of its statistical questionnaire, with the data processing being carried out by the research and information centre LISU at Loughborough University. The statistics are used as a basis for its own policy decisions and are regularly consulted by library directors. "SCONUL statistics on the web" is a tool to enable members to extract and manipulate data. LISU also prepares, from SCONUL’s statistics, trend reports every ten years. As a supplement to this activity, the Higher Education Library Management Statistics (HELMS) contain data about library management which are extracted partly from SCONUL and partly from national data.
  • CAUL coordinates the collection of statistics for Australian university libraries (http://www.caul.edu.au/stats/), providing "a rich database of statistical measurements" that "can be mined to situate a library’s performance for a range of indicators on ranked lists" (Jordan, 2008). The University of Queensland library, which has a considerable history with assessment, has produced an open source tool LibStats, which "facilitates the collection, recording, analysis and reporting of statistics in an academic library" (Jordan, 2008).
  • In North America, the Association of Research Libraries has collected statistics from its members since 1961-62 on collections, expenditure, staffing and services.
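The sketch below illustrates the kind of ranked-list peer comparison that centrally collected statistics make possible; the libraries, figures and choice of indicator are all invented and do not come from SCONUL, ARL or CAUL data.

```python
# Hypothetical peer comparison built on centrally collected statistics.
# Libraries, figures and the chosen indicator are invented for illustration.

peer_group = {
    "Library A": {"downloads": 820_000, "fte_students": 21_000},
    "Library B": {"downloads": 640_000, "fte_students": 15_500},
    "Our library": {"downloads": 510_000, "fte_students": 14_000},
    "Library C": {"downloads": 450_000, "fte_students": 16_200},
}

# Derive a per-capita indicator so that libraries of different sizes can be compared.
ranked = sorted(
    ((name, data["downloads"] / data["fte_students"]) for name, data in peer_group.items()),
    key=lambda item: item[1],
    reverse=True,
)

for position, (name, per_student) in enumerate(ranked, start=1):
    print(f"{position}. {name}: {per_student:.1f} downloads per FTE student")
```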

Looking at the library’s impact on the organization

More and more libraries want to demonstrate their value to the organization in the form of measurable outcomes. In a survey of SCONUL members, two key themes were:

  1. the desire for evidence that the library was providing value for money and/or making a difference to users, and
  2. linking outcomes to organizational strategic objectives (Creaser et al., 2007).

Below we outline some of the more popular tools and techniques used.

Impact initiative

From 2003 to 2005, the Library and Information Research Group (LIRG), together with SCONUL, conducted an exercise to look at the impact of particular initiatives undertaken by 22 UK libraries, mostly in the area of information literacy or electronic resources. The objective was to look particularly at how users changed, change being defined as:

  • affective (for example, were they more satisfied with services?),
  • behavioural (for example, were they more independent in their use of the library?),
  • knowledge-based (for example, did they gain greater knowledge about electronic resources?), or
  • competence-based (for example, did they improve their search techniques?).

This was a planned programme, with all participants being supported by various means including a series of workshops and visits where necessary. The approach was that of impact evaluation, the theory being that the evaluation benefits the services, while the systematic nature of the initiative enabled information to be gathered across a range of practice.

Assuming the role of practitioner/action researchers, participants were responsible for choice of objectives, impact indicators or success criteria (how they wanted to measure themselves) and data gathering methods, but they were required to provide evidence of impact.

Markless and Streatfield (2008) conclude that supported impact evaluation is a powerful tool to bring about real change. The real benefit of the approach is that it helps libraries articulate their outcomes as benefits to the organization. For example, the growth of the collection of electronic journals is not seen in terms of library holdings but of customer behaviour, so that "our collection has increased by x per cent" becomes "as a result of our new collection of e-resources, staff are able to be more productive in terms of curriculum development and publication output".

The library at the University of Warwick looked at ways of supporting academics in the publication process. It conducted semi-structured interviews to gain a picture of current publishing practice, and eventually decided to promote Ulrich's Serials Analysis System along with the Social Sciences Journal Contents Reports. This intervention was particularly welcomed by the Institute of Education, the only department to score a 4 at the last Research Assessment Exercise, and which felt a strong need to improve its rating with a system of targeted publishing (Bradford, 2005).

At the Open University, information literacy is particularly important because students study by distance learning and nearly all library work is done online. The library has its own information literacy unit, but wanted to develop objective measures. It therefore developed a series of statistically robust questions to help test information literacy skills, pointing the user to sources of further information (Baker and Needham, 2005).

Overall, 17 institutions finished the project and some of their reports can be seen in a special edition of Library and Information Research, at http://www.cilip.org.uk/specialinterestgroups/bysubject/research/publications/journal/archive/lir91/.

Contingent valuation and value-in-use

The above studies are all about measuring outcomes in terms of user behaviour. Other methods seek to place a financial value on the services the library offers.

Contingent valuation is an economic methodology which helps determine the value of a particular good or service to the customer, for example what people are willing to pay. It can also be used to determine the cost of doing without the service, as a way of calculating the return on investment of public funding.

Missingham (2005) reviews a number of significant contingent valuation studies, including studies of several leading public libraries in North America. These studies generate not only soft measures – for example, a particular library service is seen to enhance quality of life and personal fulfilment – but also put an actual figure on value. For example:

  • A study of the British Library found that the Library generates around 4.4 times the level of its public funding.
  • A study of the economic impact of public libraries in South Carolina calculated the value of the loan of books and other media items at $102 million (by comparison with the cost of buying them), while the reference service is worth $26 million in time saved. Overall, the direct economic impact of public libraries in the state was estimated at $222 million, against an actual cost of $77.5 million – a return on investment (ROI) of nearly 350 per cent.
  • In 2004, an ROI study of Florida public libraries looked at benefit-to-cost ratios, calculated (among other things) from the cost of using alternatives. Libraries were seen to play an important role in economic development, for example in providing information for small businesses and helping people develop marketable skills, and the total economic impact on the state economy was estimated at US$6 billion a year.

Contingent valuation compares the perceived value of library services, their cost, and the financial input of the users, to calculate a total return against costs (Missingham, 2005). It can go some way towards establishing a figure for the economic value of library services, but Missingham warns about the lack of comparative studies and pleads for more research in this area (2005, p. 156).
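The arithmetic behind the return-on-investment figures quoted in these studies is straightforward; the sketch below works through a generic benefit-to-cost calculation of that kind. All the figures are invented for illustration and are not taken from the studies reviewed by Missingham (2005).

```python
# Generic benefit-to-cost / ROI calculation of the kind used in contingent
# valuation studies. All figures are invented for illustration.

benefits = {
    "loans (cost of buying equivalent items)": 40_000_000,
    "reference service (user time saved)": 9_000_000,
    "other services": 6_000_000,
}
annual_cost = 20_000_000  # total public funding of the service

total_benefit = sum(benefits.values())
benefit_cost_ratio = total_benefit / annual_cost  # e.g. 2.75 means $2.75 returned per $1 spent
roi_percentage = (total_benefit - annual_cost) / annual_cost * 100

print(f"Total estimated benefit: ${total_benefit:,.0f}")
print(f"Benefit-to-cost ratio:   {benefit_cost_ratio:.2f}")
print(f"Return on investment:    {roi_percentage:.0f}%")
```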

Value-in-use is similar in that it also tries to determine the value of a particular product or service to a customer, using in-depth interviews with recipients. For example, how should one calculate the value of information literacy training?

In 2006-2007, a librarian and faculty member used value-in-use to calculate the value of course-integrated information literacy delivered by reference librarians at Bryant University (Simmel, 2007). They found that the cost in terms of time spent was far outweighed by the benefits of high calibre research instruction, which included improved research productivity with the aid of new techniques and resources, time saved in preparation of research classes, and new directions for course development mapped around specific resources.

Information audits

Value-in-use and contingent valuation both assess value in terms of the individual user; but as an employee or a taxpayer, the user is part of a wider social network, and their gains therefore also benefit the organization or the community. Another technique, the information audit, looks specifically at the organization’s strategy and business processes, and tries to align them to information requirements. Aslib defines an information audit as:

"A systematic evaluation of information use, resources and flows, with a verification by reference to both people and existing documents, in order to establish the extent to which they are contributing to an organisation's objectives."
(Quoted in Henczel, 2006)

The benefits of an information audit are that information is seen directly in relation to its strategic value, hence making it easy to assess the extent to which services and products map on to organizational goals (Henczel, 2006).

Conclusion

There is little doubt that the pressures on public funding and expectations of good customer service will ensure that performance measurement and assessment will remain high on the agenda of most libraries throughout the world. The object is to find data which do not merely assist the performance of a particular library, but which also add to the sum total of knowledge about libraries in a particular region or country by helping establish key performance indicators or standards, which in turn help benchmarking.

One of the problems is that useful indicators can be highly elusive. While four national benchmarking projects generated 55 standards, only 20 of these were used in more than one project (Poll, 2007), so libraries remain highly individualistic. At the end of a long meeting in search of the perfect indicator, one benchmarking group came up with what it considered the ultimate measure:

  • Percentage of users smiling when they leave the library (observed by a camera).

However, the counter-argument was that this indicator would be very susceptible to the outside weather conditions (Poll, 2007).

No doubt the search for reliable performance measures will keep both practitioners and researchers busy for the foreseeable future, while finding a reliable means of assessing value for money will continue to be a fruitful research topic.

References

Baker, K. and Needham, G. (2005), "Open University Library: impact and effectiveness of information literacy interventions", Library and Information Research, Vol. 29 No. 91 [accessed June 1 2008].

Bradford, C. (2005), "University of Warwick: impact of the Library on the research process", Library and Information Research, Vol. 29 No. 91 [accessed June 1 2008].

Creaser, C., Conyers, A. and Lockyer, S. (2007), "VAMP – laying the foundations", Performance Measurement and Metrics, Vol. 8 No. 3, pp. 157-169.

Dole, W.V., Liebst, A. and Hurych, J.M. (2006), "Using performance measurement for decision making in mid-sized academic libraries", Performance Measurement and Metrics, Vol. 7 No. 3, pp. 173-184.

Henczel, S. (2006), "Measuring and evaluating the library's contribution to organisational success", Performance Measurement and Metrics, Vol. 7 No. 1, pp. 7-16.

Jordan, E. (2008), "LibStats: An open source online tool for collecting and reporting on statistics in an academic library", Performance Measurement and Metrics, Vol. 9 No. 1, pp. 18-25.

Markless, S. and Streatfield, D. (2008), "Supported self-evaluation in assessing the impact of HE libraries", Performance Measurement and Metrics, Vol. 9 No. 1, pp. 38-47.

Missingham, R. (2005), "Libraries and economic value: a review of recent studies", Performance Measurement and Metrics, Vol. 6 No. 3, pp. 142-158.

Poll, R. (2007), "Benchmarking with quality indicators: national projects", Performance Measurement and Metrics, Vol. 8 No. 1, pp. 41-53.

Poll, R. (2008), "Ten years after: Measuring Quality revised", Performance Measurement and Metrics, Vol. 9 No. 1, pp. 26-37.

Poll, R. and Payne, P. (2006), "Impact measures for libraries and information services", Library Hi Tech, Vol. 24 No. 4, pp. 547-562.

Simmel, L. (2007), "Building your value story and business case: Observations from a marketing faculty and (former) librarian perspective", C&RL News, Vol. 68 No. 2 [accessed June 1 2008].

West, C. (2004), "A survey of surveys", SCONUL Newsletter, Vol. 31, Spring [accessed May 29 2008].

Wright, S. and White, L. (2007), Library Assessment, SPEC Kit 303, Association of Research Libraries [accessed June 1 2008].