LibQUAL+(tm) from a Technological Perspective: A Scalable Web-Survey Protocol across Libraries

Jonathan D. Sousa
Technical Applications Development Manager for New Measures Initiative
Association of Research Libraries

Fred M. Heath
Dean and Director of University Libraries
Texas A&M University

LibQUAL+(tm) is a suite of scalable Web-survey tools developed to measure library service quality from the user’s perspective. The project is currently in its fourth year and, with more than 300 libraries participating in North America and Europe, LibQUAL+(tm) has become the largest evaluation project ever developed for libraries. The protocol’s success and the increasing interest of a wide range of libraries have encouraged rapid scaling of the project to handle seven types of institutions, four languages, and 150,000 potential respondents this year. The technological issues involved include the hardware and software demands of a high-availability, high-load, online application; programming that necessitates efficiency, flexibility, and compatibility with virtually every Web browser in use; and data sharing between relational databases and flat-file statistical analysis packages. Additionally, the project is managed through a Web-based interface to which project administrators and institutional liaisons have varying degrees of access. LibQUAL+(tm) is a research partnership between the Association of Research Libraries and Texas A&M University Libraries and is based on an earlier implementation of the SERVQUAL survey at Texas A&M University and on rigorous qualitative and quantitative regrounding and analysis.
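
The flat-file hand-off mentioned above can be illustrated with a short Python sketch: reading survey responses out of a relational database and writing them to a delimited file that a statistical package can load. The database file, table, and column names below are hypothetical, not the actual LibQUAL+(tm) schema.

# Minimal sketch of a relational-to-flat-file export of survey data.
# The database file, table, and column names are hypothetical.
import csv
import sqlite3

conn = sqlite3.connect("survey.db")  # any relational store with a DB-API driver
cursor = conn.execute(
    "SELECT respondent_id, institution, item_code, minimum_score, "
    "desired_score, perceived_score FROM responses"
)

# Write one row per answered item so a flat-file statistical package can read it directly.
with open("responses_flat.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow([col[0] for col in cursor.description])  # header row
    writer.writerows(cursor)

conn.close()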

Web Link:
http://www.libqual.org/

Linking Biomedical Information Resources: Update from The National Library of Medicine

Betsy Humphreys
Associate Director for Library Operations
National Library of Medicine

The National Library of Medicine (NLM) has developed new tools for linking its major genetic data and biomedical literature resources so that users can more efficiently move between sequence databases, bibliographic descriptions, and full text of articles. In December 2002, PubMed Central (PMC), NLM’s digital archive of life sciences journal literature, was integrated into the Entrez retrieval system, which supports access to PubMed, online books, the sequence databases, a taxonomy database, and more. Entrez’s LinkOut feature provides links to relevant Web resources at more than 750 external organizations around the world. In a unique collaboration over the last two years, nearly 500 libraries (academic, research, and health care) from 25 countries have provided external links to PubMed. Because of the way LinkOut works, users searching a centralized database, such as PubMed, can be provided seamless access to articles at their own local libraries.
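
As a small illustration of the kind of linking described above, the Python sketch below asks NCBI’s E-utilities ELink service for the LinkOut URLs attached to a single PubMed record. The PubMed identifier is a placeholder, and the request shown reflects the publicly documented E-utilities interface rather than anything specific to this presentation.

# Hedged sketch: ask NCBI E-utilities ELink for LinkOut provider URLs
# attached to one PubMed record. The PMID below is only a placeholder.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"
params = {
    "dbfrom": "pubmed",   # source database
    "id": "12345678",     # placeholder PubMed identifier
    "cmd": "llinks",      # request LinkOut (external provider) URLs
}

with urlopen(f"{BASE}?{urlencode(params)}") as response:
    print(response.read().decode("utf-8"))  # raw XML listing provider links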

Web Links:
http://www.nlm.nih.gov/pubs/techbull/so02/so02_gene_indexing.html

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PMC

http://www.ncbi.nlm.nih.gov/entrez/linkout/

Linking Courseware to Library Resources Using OpenURL: Experience, Possibilities, and Future Direction

David W. Lewis
Dean of the IUPUI University Library
Indiana University/Purdue University, Indianapolis

Oren Beit-Arie
Vice-President of Research and Managing Director of ISD
Ex Libris (USA), Inc.

Christopher Awre
Programme Manager
Joint Information Systems Committee

This presentation will propose a conceptual framework for thinking about the problem of linking library resources and course management systems, and describe a piece of the framework that will shortly be in place.

David Lewis will present a conceptual framework for structuring this problem. Oren Beit-Arie will discuss the solution being developed by Ex Libris, as part of a general development of OpenURL authoring tools, and how the next version of the OpenURL standard (OpenURL version 1.0) will provide a more generalized framework for addressing these issues. Chris Awre will discuss a current program of work at JISC, called “Linking Digital Libraries with Virtual Learning Environments” (see the URL below), where OpenURL is one of several technologies being investigated to enable linkage between library resources and courseware.
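
As context for the discussion, the short Python sketch below shows how a courseware page might construct an OpenURL 0.1-style link from citation metadata and hand it to an institution’s resolver. The resolver address and citation values are placeholders, not part of any of the systems described in this session.

# A hedged sketch of an OpenURL 0.1-style link. The resolver address and all
# citation values are placeholders; a real courseware page would fill these
# in from its own reading-list metadata.
from urllib.parse import urlencode

resolver = "http://resolver.example.edu/openurl"  # hypothetical institutional link resolver
citation = {
    "genre": "article",
    "atitle": "An example article title",
    "title": "An Example Journal",   # journal title in OpenURL 0.1
    "issn": "0000-0000",             # placeholder ISSN
    "volume": "12",
    "issue": "3",
    "spage": "45",
    "date": "2003",
    "aulast": "Author",
    "sid": "example:courseware",     # identifies the referring system
}

print(f"{resolver}?{urlencode(citation)}")  # embed this link beside the citation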

Web Link:
http://www.jisc.ac.uk/index.cfm?name=programme_divle

PowerPoint Presentations:
by Oren Beit-Arie
by David W. Lewis

Managing Unstructured Data with Latent Semantic Indexing

Maciej Ceglowski
Lead Developer
National Institute for Technology and Liberal Education

Clara Yu
Director
National Institute for Technology and Liberal Education/CET

John L. Cuadrado
Consultant
National Institute for Technology and Liberal Education

Much of the digital content becoming available online lacks meaningful metadata descriptors, yet metadata creation is both time-consuming and expensive. Using latent semantic indexing (LSI) techniques, the National Institute for Technology and Liberal Education (NITLE) has developed a search and archiving tool that is able to make inferences about document similarity from patterns of word use across a collection. These similarity values, in turn, allow the tool to assign documents to categories based on their content. This procedure is language-neutral and fully automatic. While the tool is able to make use of existing metadata, it can also sort and organize raw documents with a high degree of accuracy, across databases, in centralized or distributed mode.
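
The sketch below is a minimal illustration of the LSI idea, not NITLE’s tool: it builds a term-document matrix, reduces it with a truncated singular value decomposition, and compares documents in the resulting concept space. It assumes the scikit-learn Python library is available.

# Minimal latent semantic indexing sketch: term-document matrix, truncated
# SVD, then document-to-document similarity in the reduced "concept" space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "metadata creation for digital library collections",
    "automatic classification of library documents",
    "weather forecast for the coming week",
]

tfidf = TfidfVectorizer().fit_transform(documents)       # term-document matrix
lsi = TruncatedSVD(n_components=2).fit_transform(tfidf)  # latent semantic space

# Documents that share patterns of word use end up close together even if
# they share few literal terms.
print(cosine_similarity(lsi))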

Web Link:
http://www.nitle.org/lsi.php

Handout:
Managing Unstructured Data with Latent Semantic Indexing

METS: A Status Report

Jerome McDonough
Digital Library Development Team Leader
New York University

This briefing will provide an update on the progress of the Metadata Encoding & Transmission Standard (METS) initiative, an effort to define a common format for encoding digital library objects and their metadata. It will include an overview of current software development efforts, and a discussion of METS profiles, which allow organizations to specify restrictions on the METS format for local applications.
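
For orientation, the Python sketch below assembles a skeletal METS document with one descriptive metadata section, one file, and the required structural map. The identifiers and file URL are placeholders, and the skeleton omits most of what a real METS profile would mandate.

# Hedged sketch of a skeletal METS object built with ElementTree.
import xml.etree.ElementTree as ET

METS = "http://www.loc.gov/METS/"
XLINK = "http://www.w3.org/1999/xlink"
ET.register_namespace("mets", METS)
ET.register_namespace("xlink", XLINK)

mets = ET.Element(f"{{{METS}}}mets")

# Descriptive metadata wrapped inline (placeholder Dublin Core-style element).
dmd = ET.SubElement(mets, f"{{{METS}}}dmdSec", ID="DMD1")
wrap = ET.SubElement(dmd, f"{{{METS}}}mdWrap", MDTYPE="DC")
ET.SubElement(wrap, f"{{{METS}}}xmlData").append(ET.Element("title"))

# File inventory: a single page image with a placeholder location.
fsec = ET.SubElement(mets, f"{{{METS}}}fileSec")
fgrp = ET.SubElement(fsec, f"{{{METS}}}fileGrp", USE="master")
f = ET.SubElement(fgrp, f"{{{METS}}}file", ID="FILE1", MIMETYPE="image/tiff")
ET.SubElement(f, f"{{{METS}}}FLocat",
              {f"{{{XLINK}}}href": "http://example.edu/page1.tif", "LOCTYPE": "URL"})

# Structural map tying the descriptive metadata and the file into one object.
smap = ET.SubElement(mets, f"{{{METS}}}structMap")
div = ET.SubElement(smap, f"{{{METS}}}div", TYPE="page", DMDID="DMD1")
ET.SubElement(div, f"{{{METS}}}fptr", FILEID="FILE1")

print(ET.tostring(mets, encoding="unicode"))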

Web Link:
http://www.loc.gov/standards/mets/

PowerPoint Presentation:
METS: A Status Report

The National Digital Information Infrastructure and Preservation Program: Challenges and Solutions

Laura E. Campbell
Associate Librarian for Strategic Initiatives
Library of Congress

The Library of Congress is now poised to enter the next phase of the National Digital Information Infrastructure and Preservation Program (NDIIPP), which was created by federal legislation in December 2000 (PL 106-554). Preserving Our Digital Heritage: Plan for the National Digital Information Infrastructure and Preservation Program, A Collaborative Initiative of the Library of Congress (2002) resulted from almost two years of consultations with a broad range of stakeholder communities. The document was submitted to Congress in the fall of 2002 and accepted in early January 2003. In the coming year, we expect to build out the specifications of a proposed technical architecture, invest in pilot projects and experiments, and contribute to a basic research program together with other federal agencies in an effort led by the National Science Foundation’s program in Digital Government.

This presentation will review progress to date and discuss the shape of the program for the next phase of work.

PowerPoint Presentation:
The National Digital Information Infrastructure and Preservation Program (NDIIPP): Challenges and Solutions

The National STEM Education Digital Library: A Progress Report

Lee Zia
Lead Program Director, NSDL Program
National Science Foundation

This session will provide a progress report on the National Science Foundation’s program to support the development of the National Science, Technology, Engineering, and Mathematics Education Digital Library (NSDL). To date, three sets of grants have been made in three tracks: 1) Collections, 2) Services, and 3) Targeted Research. In addition, a Core Integration activity is developing the “technical and organizational glue” to bind distributed users with distributed collections and services. Members of the Core Integration team will also participate and provide an update on ongoing work on access and authentication and on the NSDL’s sustainability efforts.

New Initiatives for Resource Description and Preservation Metadata

Priscilla Caplan
Assistant Director for Digital Library Services
State University System of Florida

Sally H. McCallum
Chief, Network Development and MARC Standards Office
Library of Congress

Rebecca S. Guenther
Network Development and MARC Standards Office
Library of Congress

A new working group, composed of representatives from across the digital preservation community, is being organized by OCLC and RLG. It is a follow-on to the OCLC/RLG Preservation Metadata Working Group, which developed a metadata framework to support the long-term retention of digital materials. The new working group will address implementation strategies for preservation metadata. The group will use the metadata framework developed by the first working group as a starting point, and extend this work to consider issues such as the development of a core set of implementable preservation metadata elements with associated data dictionary; evaluation of alternative strategies for encoding, storage, management, and exchange of preservation metadata; and the development of pilot projects for testing the group’s recommendations.

MARC, built on a NISO/ISO standard for record structures, has been a sound basis for the development of a very large automated bibliographic infrastructure globally. But the newer XML record structure provides a flexible environment for using and manipulating data and, especially, for linking data. Providing an evolutionary pathway from MARC “classic” to MARC in an XML structure, and then developing new approaches on the XML side, is the topic of this session. A MARC Toolkit is being developed by the Library of Congress (with community collaboration) that contains data transformation components and enables use of Dublin Core, ONIX, and other metadata in the MARC environment. It can help standardize the sometimes chaotic metadata landscape. The purpose and uses of a new simplified MARC companion on the XML side, MODS (Metadata Object Description Schema), will also be described.
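
As a small illustration of the MODS side of this pathway, the Python sketch below expresses a few familiar MARC fields (245 $a title, 100 $a main entry, 260 $c date) as a minimal MODS record. The values are placeholders; production conversion would rely on the Library of Congress transformation tools rather than hand-built records like this.

# Hedged sketch: a minimal MODS record carrying the content of a few
# common MARC fields. All values are placeholders.
import xml.etree.ElementTree as ET

MODS = "http://www.loc.gov/mods/v3"
ET.register_namespace("mods", MODS)

mods = ET.Element(f"{{{MODS}}}mods")

title_info = ET.SubElement(mods, f"{{{MODS}}}titleInfo")          # from MARC 245 $a
ET.SubElement(title_info, f"{{{MODS}}}title").text = "Example title"

name = ET.SubElement(mods, f"{{{MODS}}}name", type="personal")    # from MARC 100 $a
ET.SubElement(name, f"{{{MODS}}}namePart").text = "Author, Example"

origin = ET.SubElement(mods, f"{{{MODS}}}originInfo")             # from MARC 260 $c
ET.SubElement(origin, f"{{{MODS}}}dateIssued").text = "2003"

print(ET.tostring(mods, encoding="unicode"))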

Web Links:
http://www.oclc.org/research/pmwg/

http://www.loc.gov/marcxml

http://www.loc.gov/mods

OAI Metadata Harvesting and Institutional Repositories

Martin Halbert
Director, Library Systems
Emory University

Institutional planning is needed to prepare and coordinate policies associated with institutional repositories in a way that will facilitate discovery services based on metadata harvesting at the campus and national levels. This presentation will describe planning efforts being undertaken at Emory University as well as policy suggestions for consideration by decision makers involved in developing institutional repositories. The presentation will also outline the risks of not coordinating planning on repositories and metadata harvesting.
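
As background for the harvesting model discussed here, the Python sketch below pages through an OAI-PMH ListRecords response, following resumption tokens until the repository reports no more records. The repository base URL is a placeholder, and a real harvester feeding a discovery service would do more than print identifiers.

# Minimal OAI-PMH harvesting sketch: request ListRecords and follow
# resumption tokens until the repository is exhausted.
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

OAI = "http://www.openarchives.org/OAI/2.0/"
base_url = "http://repository.example.edu/oai"  # hypothetical repository endpoint

params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
while True:
    with urlopen(f"{base_url}?{urlencode(params)}") as response:
        tree = ET.fromstring(response.read())

    for record in tree.iter(f"{{{OAI}}}record"):
        identifier = record.find(f"{{{OAI}}}header/{{{OAI}}}identifier")
        print(identifier.text)  # hand the record off to the discovery service here

    token = tree.find(f".//{{{OAI}}}resumptionToken")
    if token is None or not (token.text or "").strip():
        break  # harvest complete
    params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}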

Online Publishing Use and Costs Evaluation Program, 2003

Christine Norman
Research Director, Online Use and Costs Evaluation Project
Columbia University

Kate Wittenberg
Director, Electronic Publishing Initiative at Columbia
Columbia University

David Millman
Columbia University

The Andrew W. Mellon Foundation has awarded the Electronic Publishing Initiative at Columbia (EPIC) a cost and usage evaluation grant aimed at gaining a better understanding of how electronic resources affect scholarly communication. In particular, we are interested in how electronic resources are affecting academic presses, information technology personnel, librarians, faculty, and students.

This session will provide an update on the progress of the evaluation program, with specific focus on our multi-institution study of faculty and their use of electronic resources in research and teaching.

Web Link:
http://www.epic.columbia.edu