CAUL PERFORMANCE INDICATORS

CAUL Indicators A, B and C

The first three CAUL performance indicators have been published:

A. Library/clientele congruence (i.e. satisfaction) indicator - Professor Brian Cornish and Dr Gary Gorman, School of Information Studies, Charles Sturt University.

B. Document delivery quality indicator - Ms Jan Novak and Ms Margaret Robertson, Library, Queensland University of Technology.

C. Proportion of sought material obtained at time of visit - Mr Colin Taylor and Ms Jan Hiscock, Library, University of South Australia.

Each member of CAUL has received a complimentary copy of each indicator. Of the 100 sets produced, 58 full sets (i.e. all three indicators) had been distributed as at 30 September, plus 6 distributions of one title only. Of these, 43 were free sets (CAUL members, copies to the authors, legal deposit and review copies). The remaining copies were sold for a total of $3995, of which $535 remains with CAVAL for processing costs and the balance, $3420, has been sent to CAUL.

Several members of CAUL have indicated that they are beginning to use the indicators, and some have expressed interest in benchmarking. Experience should provide useful inter-library comparison data and will probably also identify ways in which the indicators could be improved. These can be addressed in a second edition of each.

International Work on Performance Indicators for Academic Libraries

At this stage, I believe the following are the most significant projects outside Australia:

IFLA Guidelines for Performance Measurement in Academic Libraries - This project, based at the University of Münster, has described indicators for market penetration, opening hours, remote use, collection coverage, collection use, catalogue effectiveness, acquisition/processing/ILL speed, availability, reference fill rate and user satisfaction. The same group are now working on cost efficiency indicators.

The Effective Academic Library: A Framework for Evaluating the Performance of UK Academic Libraries emerged from the work of SCONUL and the Follett Report. At this stage it is a framework, with detailed descriptions of the indicators to follow.

The ISO 11620 Standard is concerned with libraries of all types and attempts to provide a standard terminology but is necessarily pitched at a less detailed level than the practical approach taken by IFLA and CAUL.

I have spoken with the leaders of these projects about the possibility of collaboration with CAUL. The first two responded positively but are committed to their own programs so collaboration would depend on a conjunction of interests. A number of projects in Europe are now moving towards the development of decision support systems.

Future Indicators

Although only a few responses were received to my email questionnaire on future indicators, there appears to be general support for the development of another raft of indicators. The responses were mixed, as the discussion below indicates. I have taken members' comments on board but have also considered future work in the wider context of international developments in performance indicators for academic libraries. I have starred those indicators which I recommend we pursue.

GENERAL LIBRARY USE AND FACILITIES

*1. Market penetration

2. Opening hours compared to demand

These would be useful, if crude, indicators of our success in our markets (undergraduate, postgraduate, academic, community, business, ...) but they need a market research context if they are to help us improve our performance. Possible methodologies include surveys and focus group discussions. I asked whether this was obvious and too trivial for CAUL's consideration or whether, in this time of benchmarking, it would be desirable for us to develop a standard methodology.

Sydney commented as follows "Indicator 1: I'm not sure whether university libraries need to measure market penetration as much as, say, public libraries would. The major part of our "market" is pretty much captive (our students/academics) and most if not all will use their university library to some extent. External students may not use their "home" library much if they live far away, but a low score on this is not necessarily a bad thing if the library has made alternative arrangements with other libraries. I'd rate this low priority. Indicator 2: Opening hours "compared to demand" will vary with the demand, which is difficult to measure anyway. (Difference between what they'd like and what they really need, etc., and who decides, etc.) Too hard, and probably not comparable between institutions. Low priority."

Queensland's view was "1. It is probably useful, but I suspect for in-house comparisons over time only. We would also need to develop more clearly our product offerings to determine penetration, e.g. loans, info skills classes, etc. 2. Almost impossible to measure "demand". Occupancy rates useful for in-house analysis."

Charles Sturt responded "Standard methodology would assist. I note the continuing interest in cross-sectoral use studies and am somewhat appalled at the likely expenditure on one in NSW between universities and TAFE. In determining how well we meet our prime market, we will learn about secondary market coverage."

Swinburne felt that market penetration would be useful.

Given the interest of DEET in services to the "target groups" (Aboriginal and Torres Strait Islander, women, remote, disabled, non-English speaking background) and our interest in assessing the extent to which our services are used by particular segments of our clienteles, I tend to agree that market penetration is worth pursuing and that it would be useful to have a standard methodology. I think this would be easy to develop.
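If a standard methodology were developed, its core would presumably be a simple proportion for each client segment: active users in the segment divided by the segment's population. The following minimal sketch (in Python) illustrates the calculation; the segment names and all figures are hypothetical.

    # A minimal sketch of a market penetration calculation.
    # Segment names and all figures are hypothetical.
    segments = {
        # segment: (active library users, total population in segment)
        "undergraduate": (8200, 11000),
        "postgraduate": (1900, 2600),
        "academic staff": (950, 1400),
        "external/remote": (420, 1500),
    }

    for segment, (users, population) in segments.items():
        penetration = users / population  # proportion of the segment using the library
        print(f"{segment}: {penetration:.1%}")

The hard part, of course, is not the arithmetic but agreeing standard definitions of "active user" and of how segment populations are to be counted.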

I, too, can see little value in a standard measure for library hours. Depending on the size of the library, we talk directly to clients or survey them, and look at usage figures, comments in the suggestion boxes, etc. to provide a basis for varying hours.

COLLECTION QUALITY

3. Expert checklists (% of checklists/bibliographies held)

I commented: "Seems to me to be covered by Conspectus procedures if used in that context or trivial if we are just checking off a bibliography. For benchmarking purposes, I would think that we would need to use Conspectus in the absence of any other generally accepted methodology." All respondents agreed.

4. Collection use

I asked: "Seems very useful but would be covered by reports from our circulation systems. Would it be desirable to have a standardised approach developed for benchmarking purposes by a working party drawn from interested institutions?"

Sydney was in favour: "We think this is both important and potentially useful, and we would be willing to join a working party to develop it."

Queensland agreed: "Useful but again need to develop targets. Study at State Library of New South Wales did some work here."

Charles Sturt was less enthusiastic: "The most useful data will be, as you suggest, from circulation statistics. This is less important for CAUL to pursue, although if standardised measures were developed Charles Sturt would like to assess its collection use against other libraries'."

This has been addressed in the IFLA project so I suggest we wait to see if we can use their methodology.
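Should the IFLA methodology prove usable, or a local one be developed, the arithmetic itself is straightforward; the effort goes into agreeing definitions (the reporting period, what counts as the loanable collection, how in-house use is treated). A minimal sketch, with hypothetical figures, of two measures commonly derived from circulation reports:

    # A minimal sketch of two collection use measures derived from
    # circulation system reports. All figures are hypothetical.
    total_loans = 240000      # loans recorded in the reporting period
    collection_size = 600000  # loanable items held
    items_borrowed = 180000   # distinct items borrowed at least once

    turnover_rate = total_loans / collection_size       # loans per item held
    proportion_used = items_borrowed / collection_size  # share of collection used

    print(f"Turnover rate: {turnover_rate:.2f} loans per item")
    print(f"Proportion of collection used: {proportion_used:.1%}")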

CATALOGUE QUALITY

5. Known item search

6. Subject search

I was thinking of ways of assessing the quality of the software (using decision point methodologies or similar) when I commented "Indicators 5 and 6 (catalogue quality) are, I think, potentially very valuable as tools to improve the presentation and accessibility of our various electronic access services. But I am not at all sure how they might be developed - perhaps in similar vein to the software evaluation methodologies? Is this worth pursuing?" However, the IFLA project appears to have focussed on the catalogue as a finding aid (i.e. the combination of data and software).

Sydney: "Catalogue "quality" in terms of known item and subject searches depends on (a) the quality of the data in the catalogue, and (b) the quality of the software used for searching. Most of us get our data from ABN (and through ABN from LC, BL, etc.) but would have some local non-standard records. More and more of us get our software from overseas as part of a library automation package. It might be better to keep an eye on measures developed overseas. Probably a low priority for CAUL."

Queensland: "Several studies have been done on this - useful, but difficult to know what one is really evaluating, e.g. access, software, catalogue quality, customer awareness."

Charles Sturt: "Yes, worth pursuing. The depressing hit rates reported in the literature and confirmed by local survey (admittedly some years ago) make testing this important. We need to test catalogue quality itself AND user knowledge of how to access the data. Extension to electronic access services is vital."

I think this is important from both perspectives and relates to the whole spectrum of catalogues and databases we use and offer to our clients. However, I agree with Sydney that we should monitor developments elsewhere. It would be desirable to see if a member of staff in one of our libraries could write a discussion paper on this question.
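By way of illustration of what such testing involves, the following minimal sketch follows the distinction drawn in the Charles Sturt response between failures of the catalogue itself and failures of user knowledge; the sample design and all figures are hypothetical.

    # A minimal sketch of a known-item search test, separating failures due
    # to the catalogue (data and software) from failures due to user
    # knowledge. The sample design and all figures are hypothetical.
    searched = 400           # known items searched for by sampled users
    found_by_user = 292      # items the users located
    found_by_searcher = 372  # items a skilled searcher could locate

    user_hit_rate = found_by_user / searched
    catalogue_hit_rate = found_by_searcher / searched
    # The gap between the two rates reflects user-knowledge failures; the
    # shortfall of the catalogue rate from 100% reflects data/software failures.
    print(f"User hit rate:      {user_hit_rate:.1%}")
    print(f"Catalogue hit rate: {catalogue_hit_rate:.1%}")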

AVAILABILITY OF ITEMS IN THE COLLECTION

7. Acquisition speed - covered by Indicator B

8. Book processing speed - covered by Indicator B

9. Availability - covered by Indicator C

10. Document delivery - covered by Indicator B

11. ILL speed - covered by Indicator B

REFERENCE SERVICE

*12. Reference fill rate

I noted that this "is of particular interest to us at NTU because of its importance to our best practice program. It is an area in which we need to benchmark. However, there are serious difficulties in developing a reliable and valid methodology, as has been extensively documented in the literature. Again, I think this should be tackled and would be best tackled by a working party drawn from interested institutions. What do you think?"

Sydney and Charles Sturt agreed and Queensland commented: "12. I've done research on this myself. There are plenty of useful models around. We need to have better understanding however of new electronic services in order to measure these."

At the time of writing I had forgotten the work of the CAVAL Reference Interest Group. I have since been in correspondence with Barbara Paton to suggest that their excellent work in this area might be published in the CAUL Indicators series (but with them retaining the copyright). I recommend that we pursue this approach.
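Whatever methodology is adopted, the core calculation is a sampled proportion; the difficulties documented in the literature concern defining and judging a "filled" question, not the arithmetic. A minimal sketch with hypothetical figures:

    # A minimal sketch of a reference fill rate calculation from a sample of
    # monitored questions. All figures are hypothetical; the documented
    # difficulty lies in judging what counts as a correctly filled question.
    import math

    questions_sampled = 250
    questions_filled = 205

    fill_rate = questions_filled / questions_sampled
    # Approximate 95% confidence interval for the sampled proportion.
    se = math.sqrt(fill_rate * (1 - fill_rate) / questions_sampled)
    print(f"Fill rate: {fill_rate:.1%} +/- {1.96 * se:.1%}")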

USER SATISFACTION

13. User satisfaction - covered by Indicator A

ELECTRONIC RESOURCES

I observed "The absence of indicators of quality, availability, etc for our burgeoning electronic resources is now glaringly obvious. Do we need to look at this area? How?"

Sydney agreed that this was important: "Yes, this is a gap, which should be looked at, but we can't suggest how the answer might best be found. We'd be willing to join a working party on it, though."

Charles Sturt suggested that it was premature: "Yes, but I suggest we wait a little longer for initial electronic services to settle in."

Queensland was unsure: "Do not understand what we would be measuring. Focus groups probably more helpful."

On reflection, I think this is really an amalgam of the issues of 'collection' quality and 'catalogue' quality as discussed above. In other words, a combination of: the quality of the electronic resources themselves; the quality of the data describing them; and the quality of the retrieval software used to find them.

The first will be covered through our collection assessment and development methodologies; the second and third need further consideration, as suggested above.

COST EFFICIENCY

In his original report, Colin Taylor also suggested a number of measures of cost efficiency.

The IFLA group are proposing to examine such issues and I am a corresponding member of that group. However, the recent email discussion of budgets has highlighted our need for reliable costing methodologies which will produce genuinely comparable results, and I believe our need is more pressing than the likely response time from the IFLA group. Consequently, I suggest that we establish a working party to define this question further and then to develop, or propose the commissioned development of, one or more standard costing methodologies, in cooperation with the IFLA group if possible.
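To indicate what such a methodology must pin down, the following minimal sketch computes a single unit cost; deciding which cost components are included, and how they are apportioned, is precisely what would have to be standardised for results to be genuinely comparable. All component names and figures are hypothetical.

    # A minimal sketch of a unit cost calculation. Which cost components are
    # included (salaries, overheads, systems, ...) is exactly what a standard
    # costing methodology must specify for inter-library comparability.
    # All component names and figures are hypothetical.
    costs = {
        "salaries": 310000,
        "overheads": 85000,
        "systems": 45000,
    }
    loans = 240000

    cost_per_loan = sum(costs.values()) / loans
    print(f"Cost per loan: ${cost_per_loan:.2f}")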

Recommendations

  1. Commission the development of a market penetration indicator.
  2. Consider IFLA methodology for assessing collection use.
  3. Commission a paper on assessing adequacy of retrieval software and/or data.
  4. Negotiate with CAVAL Reference Interest Group to publish the manual they have developed.
  5. Establish a working party to consider costing methodologies.
Alex Byrne
4 October, 1995
