CA Digital preservation will begin to come into its own. The past five years were about building access; now standards are coalescing and more attention is being paid to actual preservation strategies. Major legal obstacles include the DMCA, which restricts what institutions can do to preserve digital information. There are also economic challenges, and we do not really know how much digital preservation will cost.
Phrases
<P1> "There will be change, there is no guarantee that you can pick a technology and stay with it for ten years. We have to have an awareness of technological change and what's coming -- we listen to peers and the larger institutions that are taking leading and bleeding edge roles, and we make wise decisions. So in this case it is OK to be trailing edge and choose something that is well-established." (p.3)
SOW
DC OAIS emerged out of an initiative spearheaded by NASA's Consultative Committee for Space Data Systems. It has been shaped and promoted by the RLG and OCLC. Several international projects have played key roles in shaping the OAIS model and adapting it for use in libraries, archives and research repositories. OAIS-modeled repositories include the CEDARS Project, Harvard's Digital Repository, Koninklijke Bibliotheek (KB), the Library of Congress' Archival Information Package for audiovisual materials, MIT's DSpace, OCLC's Digital Archive and TERM: the Texas Email Repository Model.
Type
Journal
Title
Managing the Present: Metadata as Archival Description
Traditional archival description undertaken at the terminal stages of the life cycle has had two deleterious effects on the archival profession. First, it has resulted in enormous and, in some cases, insurmountable processing backlogs. Second, it has limited our ability to capture crucial contextual and structural information throughout the life cycle of record-keeping systems that is essential for fully understanding the fonds in our institutions. This shortcoming has resulted in an inadequate knowledge base for appraisal and access provision. Such complications will only be magnified as distributed computing and complex software applications continue to expand throughout organizations. A metadata strategy for archival description will help mitigate these problems and enhance the organizational profile of archivists, who will come to be seen as valuable organizational knowledge and accountability managers.
Critical Arguments
CA "This essay affirms this call for evaluation and asserts that the archival profession must embrace a metadata systems approach to archival description and management." ... "It is held here that the requirements for records capture and description are the requirements for metadata."
Phrases
<P1> New archival organizational structures must be created to ensure that records can be maintained in a usable form. <warrant> <P2> The recent report of the Society of American Archivists (SAA) Committee on Automated Records and Techniques (CART) on curriculum development has argued that archivists need to "understand the nature and utility of metadata and how to interpret and use metadata for archival purposes." <warrant> <P3> The report advises archivists to acquire knowledge on the meanings of metadata, its structures, standards, and uses for the management of electronic records. Interestingly, the requirements for archival description immediately follow this section and note that archivists need to isolate the descriptive requirements, standards, documentation, and practices needed for managing electronic records. <warrant> <P4> Clearly, archivists need to identify what types of metadata will best suit their descriptive needs, underscoring the need for the profession to develop strategies and tactics to satisfy these requirements within active software environments. <warrant> <P5> Underlying the metadata systems strategy for describing and managing electronic information technologies is the seemingly universal agreement amongst electronic records archivists on the requirement to intervene earlier in the life cycle of electronic information systems. <warrant> <P6> Metadata has loomed over the archival management of electronic records for over five years now and is increasingly being promised as a basic control strategy for managing these records. <warrant> <P7> However, she [Margaret Hedstrom] also warns that as descriptive practices shift from creating descriptive information to capturing description along with the records, archivists may discover that managing the metadata is a much greater challenge than managing the records themselves. <P8> Archivists must seek to influence the creation of record-keeping systems within organizations by connecting the transaction that created the data to the data itself. Such a connection will link informational content, structure, and the context of transactions. Only when these conditions are met will we have records and an appropriate infrastructure for archival description. <warrant> <P9> Charles Dollar has argued that archivists increasingly will have to rely upon and shape the metadata associated with electronic records in order to fully capture provenance information about them. <warrant> <P10> Bearman proposes a metadata systems strategy, which would focus more explicitly on the context out of which records arise, as opposed to concentrating on their content. This axiom is premised on the assumption that "lifecycle records systems control should drive provenance-based description and link to top-down definitions of holdings." <warrant> <P11> Bearman and Margaret Hedstrom have built upon this model and contend that properly specified metadata capture could fully describe systems while they are still active and eliminate the need for post-hoc description. The fundamental change wrought in this approach is the shift from doing things to records (surveying, scheduling, appraising, disposing/accessioning, describing, preserving, and accessing) to providing policy direction for adequate documentation through management of organizational behavior (analyzing organizational functions, defining business transactions, defining record metadata, identifying control tactics, and establishing the record-keeping regime).
Within this model archivists focus on steering how records will be captured (and that they will be captured) and how they will be managed and described within record-keeping systems while they are still actively serving their parent organization. <P12> Through the provision of policy guidance and oversight, organizational record-keeping is managed in order to ensure that the "documentation of organizational missions, functions, and responsibilities ... and reporting relationships within the organization, will be undertaken by the organizations themselves in their administrative control systems." <warrant> <P13> Through a metadata systems approach, archivists can realign themselves strategically as managers of authoritative information about organizational record-keeping systems, providing for the capture of information about each system, its contextual attributes, its users, its hardware configurations, its software configurations, and its data configurations. <warrant> <P14> The University of Pittsburgh's functional requirements for record-keeping provide a framework for such an information management structure. These functional requirements are appropriately viewed as an absolute ideal, requiring testing within live systems and organizations. If properly implemented, however, they can provide a concrete model for metadata capture that can automatically supply many of the types of descriptive information both desired by archivists and required for elucidating the context out of which records arise. <P15> It is possible that satisfying these requirements will contribute to the development of a robust archival description process integrating "preservation of meaning, exercise of control, and provision of access" within "one principal, multipurpose descriptive instrument" hinted at by Luciana Duranti as a possible outcome of the electronic era. <P16> However, since electronic records are logical and not physical entities, there is no physical effort required to access and process them, just mental modelling. <P17> Depending on the type of metadata that is built into and linked to electronic information systems, it is possible that users can identify individual records at the lowest level of granularity and still see the top-level process it is related to. Furthermore, records can be reaggregated based upon user-defined criteria through metadata links that track every instance of their use, their relations to other records, and the actions that led to their creation. <P18> A metadata strategy for archival description will help to mitigate these problems and enhance the organizational profile of archivists, who will come to be seen as valuable organizational knowledge and accountability managers. <warrant>
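The system-level capture described in <P13> can be pictured as a simple registry data structure. The sketch below is purely editorial; the article prescribes no element set, so every field name here is an invented illustration:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RecordkeepingSystemProfile:
    """Illustrative profile of one organizational record-keeping system.

    Field names are hypothetical; the article calls only for capturing
    context, users, and hardware/software/data configurations.
    """
    system_id: str
    business_function: str                      # contextual attribute
    users: List[str] = field(default_factory=list)
    hardware: List[str] = field(default_factory=list)
    software: List[str] = field(default_factory=list)
    data_stores: List[str] = field(default_factory=list)

registry = [
    RecordkeepingSystemProfile(
        system_id="HR-01",
        business_function="personnel management",
        users=["HR staff"],
        hardware=["departmental server"],
        software=["HRIS v4"],
        data_stores=["employee case files"],
    )
]
print(registry[0].business_function)
```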
Conclusions
RQ "First and foremost, the promise of metadata for archival description is contingent upon the creation of electronic record-keeping systems as opposed to a continuation of the data management orientation that seems to dominate most computer applications within organizations." ... "As with so many other aspects of the archival endeavour, these requirements and the larger metadata model for description that they are premised upon necessitate further exploration through basic research."
SOW
DC "In addition to New York State, recognition of the failure of existing software applications to capture a full compliment of metadata required for record-keeping and the need for such records management control has also been acknowledged in Canada, the Netherlands, and the World Bank." ... "In conjunction with experts in electronic records managment, an ongoing research project at the University of Pittsburgh has developed a set of thirteen functional requirements for record-keeping. These requirements provide a concrete metadata tool sought by archivists for managing and describing electronic records and electronic record-keeping systems." ... David A. Wallace is an Assistant Professor at the School of Information, University of Michigan, where he teaches in the areas of archives and records management. He holds a B.A. from Binghamton University, a Masters of Library Science from the University at Albany, and a doctorate from the University of Pittsburgh. Between 1988 and 1992, he served as Records/Systems/Database Manager at the National Security Archive in Washington, D.C., a non-profit research library of declassified U.S. government records. While at the NSA he also served as Technical Editor to their "The Making of U.S. Foreign Policy" series. From 1993-1994, he served as a research assistant to the University of Pittsburgh's project on Functional Requirements for Evidence in Recordkeeping, and as a Contributing Editor to Archives and Museum Informatics: Cultural Heritage Informatics Quarterly. From 1994 to 1996, he served as a staff member to the U.S. Advisory Council on the National Information Infrastructure. In 1997, he completed a dissertation analyzing the White House email "PROFS" case. Since arriving at the School of Information in late 1997, he has served as Co-PI on an NHPRC funded grant assessing strategies for preserving electronic records of collaborative processes, as PI on an NSF Digital Government Program funded planning grant investigating the incorporation of born digital records into a FOIA processing system, co-edited Archives and the Public Good: Accountability and Records in Modern Society (Quorum, 2002), and was awarded ARMA International's Britt Literary Award for an article on email policy. He also serves as a consultant to the South African History Archives Freedom of Information Program and is exploring the development of a massive digital library of declassified imaged/digitized U.S. government documents charting U.S. foreign policy.
Type
Electronic Journal
Title
ARTISTE: An integrated Art Analysis and Navigation Environment
This article describes the objectives of the ARTISTE project ("An Integrated Art Analysis and Navigation Environment"), which aims to build a tool for the intelligent retrieval and indexing of high-resolution images. The ARTISTE project will address professional users in the fine arts as the primary end-user base. These users provide services for the ultimate end-user, the citizen.
Critical Arguments
CA "European museums and galleries are rich in cultural treasures but public access has not reached its full potential. Digital multimedia can address these issues and expand the accessible collections. However, there is a lack of systems and techniques to support both professional and citizen access to these collections."
Phrases
<P1> New technology is now being developed that will transform that situation. A European consortium, partly funded by the EU under the fifth R&D framework, is working to produce a new management system for visual information. <P2> Four major European galleries (The Uffizi in Florence, The National Gallery and the Victoria and Albert Museum in London and the Louvre related restoration centre, Centre de Recherche et de Restauration des Musées de France) are involved in the project. They will be joining forces with NCR, a leading player in database and Data Warehouse technology; Interactive Labs, the new media design and development facility of Italy's leading art publishing group, Giunti; IT Innovation, Web-based system developers; and the Department of Electronics and Computer Science at the University of Southampton. Together they will create web based applications and tools for the automatic indexing and retrieval of high-resolution art images by pictorial content and information. <P3> The areas of innovation in this project are as follows: Using image content analysis to automatically extract metadata based on iconography, painting style, etc.; Use of high quality images (with data from several spectral bands and shadow data) for image content analysis of art; Use of distributed metadata using RDF to build on existing standards; Content-based navigation for art documents separating links from content and applying links according to context at presentation time; Distributed linking and searching across multiple archives allowing ownership of data to be retained; Storage of art images using large (>1TeraByte) multimedia object relational databases. <P4> The ARTISTE approach will use the power of object-related databases and content-retrieval to enable indexing to be made dynamically, by non-experts. <P5> In other words ARTISTE would aim to give searchers tools which hint at links due to say colour or brush-stroke texture rather than saying "this is the automatically classified data". <P6> The ARTISTE project will build on and exploit the indexing scheme proposed by the AQUARELLE consortia. The ARTISTE project solution will have a core component that is compatible with existing standards such as Z39.50. The solution will make use of emerging technical standards XML, RDF and X-Link to extend existing library standards to a more dynamic and flexible metadata system. The ARTISTE project will actively track and make use of existing terminology resources such as the Getty "Art and Architecture Thesaurus" (AAT) and the "Union List of Artist Names" (ULAN). <P7> Metadata will also be stored in a database. This may be stored in the same object-relational database, or in a separate database, according to the incumbent systems at the user partners. <P8> RDF provides for metadata definition through the use of schemas. Schemas define the relevant metadata terms (the namespace) and the associated semantics. Individual RDF queries and statements may use multiple schemas. The system will make use of existing schemas such as the Dublin Core schema and will provide wrappers for existing resources such as the Art and Architecture thesaurus in a RDF schema wrapper. <P9> The Distributed Query and Metadata Layer will also provide facilities to enable queries to be directed towards multiple distributed databases. The end user will be able to seamlessly search the combined art collection.
This layer will adhere to worldwide digital library standards such as Z39.50, augmenting and extending as necessary to allow the richness of metadata enabled by the RDF standard.
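A minimal sketch of the kind of RDF statement <P8> envisages, written with the rdflib Python library against the Dublin Core element namespace. The image URI and the thesaurus identifier are invented placeholders, not ARTISTE data:

```python
from rdflib import Graph, Literal, Namespace, URIRef

DC = Namespace("http://purl.org/dc/elements/1.1/")   # Dublin Core elements
g = Graph()
g.bind("dc", DC)

# Hypothetical image URI in a gallery archive; not from the article.
painting = URIRef("http://example.org/artiste/images/uffizi-0042")

g.add((painting, DC.title, Literal("Annunciation")))
g.add((painting, DC.creator, Literal("Unknown Florentine master")))
# A subject term from a thesaurus such as the Getty AAT could be referenced
# by URI; this identifier is a placeholder, not a real AAT identifier.
g.add((painting, DC.subject, URIRef("http://example.org/aat/placeholder")))

print(g.serialize(format="turtle"))
```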
Conclusions
RQ "In conclusion the Artiste project will result into an interesting and innovative system for the art analysis, indexing storage and navigation. The actual state of the art of content-based retrieval systems will be positively influenced by the development of the Artiste project, which will pursue the following goals: A solution which can be replicated to European galleries, museums, etc.; Deep-content analysis software based on object relational database technology.; Distributed links server software, user interfaces, and content-based navigation software.; A fully integrated prototype analysis environment.; Recommendations for the exploitation of the project solution by European museums and galleries. ; Recommendations for the exploitation of the technology in other sectors.; "Impact on standards" report detailing augmentations of Z39.50 with RDF." ... ""Not much research has been carried out worldwide on new algorithms for style-matching in art. This is probably not a major aim in Artiste but could be a spin-off if the algorithms made for specific author search requirements happen to provide data which can be combined with other data to help classify styles." >
SOW
DC "Four major European galleries (The Uffizi in Florence, The National Gallery and the Victoria and Albert Museum in London and the Louvre related restoration centre, Centre de Recherche et de Restauration des Mus├®es de France) are involved in the project. They will be joining forces with NCR, a leading player in database and Data Warehouse technology; Interactive Labs, the new media design and development facility of Italy's leading art publishing group, Giunti; IT Innovation, Web-based system developers; and the Department of Electronics and Computer Science at the University of Southampton. Together they will create web based applications and tools for the automatic indexing and retrieval of high-resolution art images by pictorial content and information."
Type
Electronic Journal
Title
A Spectrum of Interoperability: The Site for Science Prototype for the NSDL
"Currently, NSF is funding 64 projects, each making its own contribution to the library, with a total annual budget of about $24 million. Many projects are building collections; others are developing services; a few are carrying out targeted research.The NSDL is a broad program to build a digital library for education in science, mathematics, engineering and technology. It is funded by the National Science Foundation (NSF) Division of Undergraduate Education. . . . The Core Integration task is to ensure that the NSDL is a single coherent library, not simply a set of unrelated activities. In summer 2000, the NSF funded six Core Integration demonstration projects, each lasting a year. One of these grants was to Cornell University and our demonstration is known as Site for Science. It is at http://www.siteforscience.org/ [Site for Science]. In late 2001, the NSF consolidated the Core Integration funding into a single grant for the production release of the NSDL. This grant was made to a collaboration of the University Corporation for Atmospheric Research (UCAR), Columbia University and Cornell University. The technical approach being followed is based heavily on our experience with Site for Science. Therefore this article is both a description of the strategy for interoperability that was developed for Site for Science and an introduction to the architecture being used by the NSDL production team."
ISBN
1082-9873
Critical Arguments
CA "[T]his article is both a description of the strategy for interoperability that was developed for the [Cornell University's NSF-funded] Site for Science and an introduction to the architecture being used by the NSDL production team."
Phrases
<P1> The grand vision is that the NSDL become a comprehensive library of every digital resource that could conceivably be of value to any aspect of education in any branch of science and engineering, both defined very broadly. <P2> Interoperability among heterogeneous collections is a central theme of the Core Integration. The potential collections have a wide variety of data types, metadata standards, protocols, authentication schemes, and business models. <P3> The goal of interoperability is to build coherent services for users, from components that are technically different and managed by different organizations. This requires agreements to cooperate at three levels: technical, content and organizational. <P4> Much of the research of the authors of this paper aims at . . . looking for approaches to interoperability that have low cost of adoption, yet provide substantial functionality. One of these approaches is the metadata harvesting protocol of the Open Archives Initiative (OAI) . . . <P5> For Site for Science, we identified three levels of digital library interoperability: Federation; Harvesting; Gathering. In this list, the top level provides the strongest form of interoperability, but places the greatest burden on participants. The bottom level requires essentially no effort by the participants, but provides a poorer level of interoperability. The Site for Science demonstration concentrated on the harvesting and gathering, because other projects were exploring federation. <P6> In an ideal world all the collections and services that the NSDL wishes to encompass would support an agreed set of standard metadata. The real world is less simple. . . . However, the NSDL does have influence. We can attempt to persuade collections to move along the interoperability curve. <warrant> <P7> The Site for Science metadata strategy is based on two principles. The first is that metadata is too expensive for the Core Integration team to create much of it. Hence, the NSDL has to rely on existing metadata or metadata that can be generated automatically. The second is to make use of as much of the metadata available from collections as possible, knowing that it varies greatly from none to extensive. Based on these principles, Site for Science, and subsequently the entire NSDL, developed the following metadata strategy: Support eight standard formats; Collect all existing metadata in these formats; Provide crosswalks to Dublin Core; Assemble all metadata in a central metadata repository; Expose all metadata records in the repository for service providers to harvest; Concentrate limited human effort on collection-level metadata; Use automatic generation to augment item-level metadata. <P8> The strategy developed by Site for Science and now adopted by the NSDL is to accumulate metadata in the native formats provided by the collections . . . If a collection supports the protocols of the Open Archives Initiative, it must be able to supply unqualified Dublin Core (which is required by the OAI) as well as the native metadata format. <P9> From a computing viewpoint, the metadata repository is the key component of the Site for Science system. The repository can be thought of as a modern variant of the traditional library union catalog, a catalog that holds comprehensive catalog records from a group of libraries. . . . Metadata from all the collections is stored in the repository and made available to providers of NSDL service.
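A minimal sketch of the harvesting step described in <P7> and <P8>, using the OAI-PMH ListRecords verb with unqualified Dublin Core (the one format the OAI requires every provider to supply). The endpoint URL is hypothetical and resumption-token paging is omitted for brevity:

```python
import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest_titles(base_url):
    """Yield dc:title values from a single OAI-PMH ListRecords response."""
    url = base_url + "?verb=ListRecords&metadataPrefix=oai_dc"
    with urllib.request.urlopen(url) as resp:
        root = ET.parse(resp).getroot()
    for record in root.iter(OAI + "record"):
        for title in record.iter(DC + "title"):
            yield title.text

# Hypothetical endpoint; substitute a real repository's OAI base URL.
for t in harvest_titles("http://example.org/oai"):
    print(t)
```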
Conclusions
RQ 1 "Can a small team of librarians manage the collection development and metadata strategies for a very large library?" RQ 2 "Can the NSDL actually build services that are significantly more useful than the general web search services?"
The Semantic Web activity is a W3C project whose goal is to enable a 'cooperative' Web where machines and humans can exchange electronic content that has clear-cut, unambiguous meaning. This vision is based on the automated sharing of metadata terms across Web applications. The declaration of schemas in metadata registries advance this vision by providing a common approach for the discovery, understanding, and exchange of semantics. However, many of the issues regarding registries are not clear, and ideas vary regarding their scope and purpose. Additionally, registry issues are often difficult to describe and comprehend without a working example.
ISBN
1082-9873
Critical Arguments
CA "This article will explore the role of metadata registries and will describe three prototypes, written by the Dublin Core Metadata Initiative. The article will outline how the prototypes are being used to demonstrate and evaluate application scope, functional requirements, and technology solutions for metadata registries."
Phrases
<P1> Establishing a common approach for the exchange and re-use of data across the Web would be a major step towards achieving the vision of the Semantic Web. <warrant> <P2> The Semantic Web Activity statement articulates this vision as: 'having data on the Web defined and linked in a way that it can be used for more effective discovery, automation, integration, and reuse across various applications. The Web can reach its full potential if it becomes a place where data can be shared and processed by automated tools as well as by people.' <P3> In parallel with the growth of content on the Web, there have been increases in the amount and variety of metadata to manipulate this content. An inordinate amount of standards-making activity focuses on metadata schemas (also referred to as vocabularies or data element sets), and yet significant differences in schemas remain. <P4> Different domains typically require differentiation in the complexity and semantics of the schemas they use. Indeed, individual implementations often specify local usage, thereby introducing local terms to metadata schemas specified by standards-making bodies. Such differentiation undermines interoperability between systems. <P5> This situation highlights a growing need for access by users to in-depth information about metadata schemas and particular extensions or variations to schemas. Currently, these 'users' are human: people requesting information. <warrant> <P6> It would be helpful to make available easy access to schemas already in use to provide both humans and software with comprehensive, accurate and authoritative information. <warrant> <P7> The W3C Resource Description Framework (RDF) has provided the basis for a common approach to declaring schemas in use. At present the RDF Schema (RDFS) specification offers the basis for a simple declaration of schema. <P8> Even as it stands, an increasing number of initiatives are using RDFS to 'publish' their schemas. <P9> Registries provide 'added value' to users by indexing schemas relevant to a particular 'domain' or 'community of use' and by simplifying the navigation of terms by enabling multiple schemas to be accessed from one view. <warrant> <P10> Additionally, the establishment of registries to index terms actively being used in local implementations facilitates the metadata standards activity by providing implementation experience transferable to the standards-making process. <warrant> <P11> The overriding goal has been the development of a generic registry tool useful for registry applications in general, not just useful for the DCMI. <P12> The formulation of a 'definitive' set of RDF schemas within the DCMI that can serve as the recommended, comprehensive and accurate expression of the DCMI vocabulary has hindered the development of the DCMI registry. To some extent, this has been due to the changing nature of the RDF Schema specification and its W3C candidate recommendation status. However, it should be recognized that the lack of consensus within the DCMI community regarding the RDF schemas has proven to be equally as impeding. <P13> The automated sharing of metadata across applications is an important part of realizing the goal of the Semantic Web. Users and applications need practical solutions for discovering and sharing semantics. Schema registries provide a viable means of achieving this. <warrant>
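A minimal sketch of the schema-declaration idea in <P7>: a toy RDF Schema is parsed with the rdflib Python library and its terms listed, which is roughly what a registry does when it indexes a schema. The ex: namespace and term are invented for illustration:

```python
from rdflib import Graph
from rdflib.namespace import RDF, RDFS

# A toy RDF Schema declaring one metadata term, in the spirit of the
# schemas a registry would index; the names are illustrative only.
schema = """
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/terms/> .

ex:title a rdf:Property ;
    rdfs:label "Title" ;
    rdfs:comment "A name given to the resource." .
"""

g = Graph()
g.parse(data=schema, format="turtle")

# A registry could expose each declared term with its human-readable gloss.
for term in g.subjects(RDF.type, RDF.Property):
    print(term, "|", g.value(term, RDFS.label), "|", g.value(term, RDFS.comment))
```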
Conclusions
RQ "Many of the issues regarding metadata registries are unclear and ideas regarding their scope and purpose vary. Additionally, registry issues are often difficult to describe and comprehend without a working example. The DCMI makes use of rapid prototyping to help solve these problems. Prototyping is a process of quickly developing sample applications that can then be used to demonstrate and evaluate functionality and technology."
SOW
DC "New impetus for the development of registries has come with the development activities surrounding creation of the Semantic Web. The motivation for establishing registries arises from domain and standardization communities, and from the knowledge management community." ... "The original charter for the DCMI Registry Working Group was to establish a metadata registry to support the activity of the DCMI. The aim was to enable the registration, discovery, and navigation of semantics defined by the DCMI, in order to provide an authoritative source of information regarding the DCMI vocabulary. Emphasis was placed on promoting the use of the Dublin Core and supporting the management of change and evolution of the DCMI vocabulary." ... "Discussions within the DCMI Registry Working Group (held primarily on the group's mailing list) have produced draft documents regarding application scope and functionality. These discussions and draft documents have been the basis for the development of registry prototypes and continue to play a central role in the iterative process of prototyping and feedback." ... The overall goal of the DCMI Registry Working Group (WG) is to provide a focus for continued development of the DCMI Metadata Registry. The WG will provide a forum for discussing registry-related activities and facilitating cooperation with the ISO 11179 community, the Semantic Web, and other related initiatives on issues of common interest and relevance.
Type
Electronic Journal
Title
The Warwick Framework: A container architecture for diverse sets of metadata
This paper is an abbreviated version of The Warwick Framework: A Container Architecture for Aggregating Sets of Metadata. It describes a container architecture for aggregating logically, and perhaps physically, distinct packages of metadata. This "Warwick Framework" is the result of the April 1996 Metadata II Workshop in Warwick, U.K.
ISBN
1082-9873
Critical Arguments
CA Describes the Warwick Framework, a proposal for linking together the various metadata schemes that may be attached to a given information object by using a system of "packages" and "containers." "[Warwick Workshop] attendees concluded that ... the route to progress on the metadata issue lay in the formulation of a higher-level context for the Dublin Core. This context should define how the Core can be combined with other sets of metadata in a manner that addresses the individual integrity, distinct audiences, and separate realms of responsibility of these distinct metadata sets. The result of the Warwick Workshop is a container architecture, known as the Warwick Framework. The framework is a mechanism for aggregating logically, and perhaps physically, distinct packages of metadata. This is a modularization of the metadata issue with a number of notable characteristics. It allows the designers of individual metadata sets to focus on their specific requirements, without concerns for generalization to ultimately unbounded scope. It allows the syntax of metadata sets to vary in conformance with semantic requirements, community practices, and functional (processing) requirements for the kind of metadata in question. It separates management of and responsibility for specific metadata sets among their respective "communities of expertise." It promotes interoperability by allowing tools and agents to selectively access and manipulate individual packages and ignore others. It permits access to the different metadata sets that are related to the same object to be separately controlled. It flexibly accommodates future metadata sets by not requiring changes to existing sets or the programs that make use of them."
Phrases
<P1> The range of metadata needed to describe and manage objects is likely to continue to expand as we become more sophisticated in the ways in which we characterize and retrieve objects and also more demanding in our requirements to control the use of networked information objects. The architecture must be sufficiently flexible to incorporate new semantics without requiring a rewrite of existing metadata sets. <warrant> <P2> Each logically distinct metadata set may represent the interests of and domain of expertise of a specific community. <P3> Just as there are disparate sources of metadata, different metadata sets are used by and may be restricted to distinct communities of users and agents. <P4> Strictly partitioning the information universe into data and metadata is misleading. <P5> If we allow for the fact that metadata for an object consists of logically distinct and separately administered components, then we should also provide for the distribution of these components among several servers or repositories. The references to distributed components should be via a reliable persistent name scheme, such as that proposed for Universal Resource Names (URNs) and Handles. <P6> [W]e emphasize that the existence of a reliable URN implementation is necessary to avoid the problems of dangling references that plague the Web. <warrant> <P7> Anyone can, in fact, create descriptive data for a networked resource, without permission or knowledge of the owner or manager of that resource. This metadata is fundamentally different from that metadata that the owner of a resource chooses to link or embed with the resource. We, therefore, informally distinguish between two categories of metadata containers, which both have the same implementation [internally referenced and externally referenced metadata containers].
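A minimal sketch of the container/package architecture, assuming nothing beyond the Python standard library; all type names and identifiers are invented. Internally held packages carry their payload, while externally referenced ones carry only a persistent name, mirroring the two container categories in <P7>:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class MetadataPackage:
    """One logically distinct metadata set, e.g. Dublin Core or rights data."""
    type_name: str      # tells agents how to interpret the payload
    payload: bytes      # syntax may differ per package (cf. P1, P3)

@dataclass
class PackageReference:
    """An externally held package, named by a persistent identifier (P5)."""
    type_name: str
    urn: str            # a URN/Handle rather than a breakable URL

@dataclass
class Container:
    """Aggregates packages; agents read the types they know, skip the rest."""
    object_id: str
    packages: List[Union[MetadataPackage, PackageReference]]

doc = Container(
    object_id="urn:example:doc-1",   # placeholder identifier
    packages=[
        MetadataPackage("dublin-core", b"<dc:title>...</dc:title>"),
        PackageReference("rights", "urn:example:rights-policy-7"),
    ],
)
print([p.type_name for p in doc.packages])
```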
Conclusions
RQ "We run the danger, with the full expressiveness of the Warwick Framework, of creating such complexity that the metadata is effectively useless. Finding the appropriate balance is a central design problem. ... Definers of specific metadata sets should ensure that the set of operations and semantics of those operations will be strictly defined for a package of a given type. We expect that a limited set of metadata types will be widely used and 'understood' by browsers and agents. However, the type system must be extensible, and some method that allows existing clients and agents to process new types must be a part of a full implementation of the Framework. ... There is a need to agree on one or more syntaxes for the various metadata sets. Even in the context of the relatively simple World Wide Web, the Internet is often unbearably slow and unreliable. Connections often fail or time out due to high load, server failure, and the like. In a full implementation of the Warwick Framework, access to a "document" might require negotiation across distributed repositories. The performance of this distributed architecture is difficult to predict and is prone to multiple points of failure. ... It is clear that some protocol work will need to be done to support container and package interchange and retrieval. ... Some examination of the relationship between the Warwick Framework and ongoing work in repository architectures would likely be fruitful.
Type
Report
Title
Mapping of the Encoded Archival Description DTD Element Set to the CIDOC CRM
The CIDOC CRM is the first ontology designed to mediate contents in the area of material cultural heritage and beyond, and has been accepted by ISO TC46 as work item for an international standard. The EAD Document Type Definition (DTD) is a standard for encoding archival finding aids using the Standard Generalized Markup Language (SGML). Archival finding aids are detailed guides to primary source material which provide fuller information than that normally contained within cataloging records. 
Publisher
Institute of Computer Science, Foundation for Research and Technology - Hellas
Publication Location
Heraklion, Crete, Greece
Language
English
Critical Arguments
CA "This report describes the semantic mapping of the current EAD DTD Version 1.0 Element Set to the CIDOC CRM and its latest extension. This work represents a proof of concept for the functionality the CIDOC CRM is designed for." 
Conclusions
RQ "Actually, the CRM seems to do the job quite well ÔÇô problems in the mapping arise more from underspecification in the EAD rather than from too domain-specific notions. "┬á... "To our opinion, the archival community could benefit from the conceptualizations of the CRM to motivate more powerful metadata standards with wide interoperability in the future, to the benefit of museums and other disciplines as well."
SOW
DC "As a potential international standard, the EAD DTD is maintained in the Network Development and MARC Standards Office of the Library of Congress in partnership with the Society of American Archivists." ... "The CIDOC Conceptual Reference Model (see [CRM1999], [Doerr99]), in the following only referred to as ┬½CRM┬╗, is outcome of an effort of the Documentation Standards Group of the CIDOC Committee (see ┬½http:/www.cidoc.icom.org┬╗, ÔÇ£http://cidoc.ics.forth.grÔÇØ) of ICOM, the International Council of Museums beginning in 1996."
CA This is the first of four articles describing Geospatial Standards and the standards bodies working on these standards. This article will discuss what geospatial standards are and why they matter, identify major standards organizations, and list the characteristics of successful geospatial standards.
Conclusions
RQ Which federal and international standards have been agreed upon since this article's publication?
SOW
DC FGDC approved the Content Standard for Digital Geospatial Metadata (FGDC-STD-001-1998) in June 1998. FGDC is a 19-member interagency committee composed of representatives from the Executive Office of the President, Cabinet-level and independent agencies. The FGDC is developing the National Spatial Data Infrastructure (NSDI) in cooperation with organizations from State, local and tribal governments, the academic community, and the private sector. The NSDI encompasses policies, standards, and procedures for organizations to cooperatively produce and share geographic data.
CA "The purpose of this document is: (1) To provide a better understanding of the functionality that the MPEG-21 multimedia framework should be capable of providing; (2) To offer high level descriptions of different MPEG-21 applications against which the formal requirements for MPEG-21 can be checked; (3) To act as a basis for devising Core Experiments which establish proof of concept; (4) To provide a point of reference to support the evaluation of responses submitted against ongoing MPEG-21 Calls for Proposals; (5) To be a 'Public Relations' instrument that can help to explain what MPEG-21 is about."
Conclusions
RQ not applicable
SOW
DC The Moving Picture Experts Group (MPEG) is a working group of ISO/IEC, made up of some 350 members from various industries and universities, in charge of the development of international standards for compression, decompression, processing, and coded representation of moving pictures, audio and their combination. MPEG's official designation is ISO/IEC JTC1/SC29/WG11. So far MPEG has produced the following compression formats and ancillary standards: MPEG-1, the standard for storage and retrieval of moving pictures and audio on storage media (approved Nov. 1992); MPEG-2, the standard for digital television (approved Nov. 1994); MPEG-4, the standard for multimedia applications; MPEG-7, the content representation standard for multimedia information search, filtering, management and processing; and MPEG-21, the multimedia framework.
CA Discussion of the challenges faced by librarians and archivists who must determine which, and how much, of the vast quantity of digitally recorded sound material to preserve. Identifies various types of digital sound formats and the varying standards to which they are created. Specific challenges discussed include copyright issues; technologies and platforms; digitization and preservation; and metadata and other standards.
Conclusions
RQ "Whether between record companies and archives or with others, some type of collaborative approach to audio preservation will be necessary if significant numbers of audio recordings at risk are to be preserved for posterity. ... One particular risk of preservation programs now is redundancy. ... Inadequate cataloging is a serious impediment to preservation efforts. ... It would be useful to archives, and possibly to intellectual property holders as well, if archives could use existing industry data for the bibliographic control of published recordings and detailed listings of the music recorded on each disc or tape. ... Greater collaboration between libraries and the sound recording industry could result in more comprehensive catalogs that document recording sessions with greater specificity. With access to detailed and authoritative information about the universe of published sound recordings, libraries could devote more resources to surveying their unpublished holdings and collaborate on the construction of a preservation registry to help reduce preservation redundancy. ... Many archivists believe that adequate funding for preservation will not be forthcoming unless and until the recordings preserved can be heard more easily by the public. ... If audio recordings that do not have mass appeal are to be preserved, that responsibility will probably fall to libraries and archives. Within a partnership between archives and intellectual property owners, archives might assume responsibility for preserving less commercial music in return for the ability to share files of preserved historical recordings."
Type
Web Page
Title
CDL Digital Object Standard: Metadata, Content and Encoding
This document addresses the standards for digital object collections for the California Digital Library. Adherence to these standards is required for all CDL contributors and may also serve University of California staff as guidelines for digital object creation and presentation. These standards are not intended to address all of the administrative, operational, and technical issues surrounding the creation of digital object collections.
Critical Arguments
CA These standards describe the file formats, storage and access standards for digital objects created by or incorporated into the CDL as part of the permanent collections. They attempt to balance adherence to industry standards, reproduction quality, access, potential longevity and cost.
Conclusions
RQ not applicable
SOW
DC "This is the first version of the CDL Digital Object Standard. This version is based upon the September 1, 1999 version of the CDL's Digital Image Standard, which included recommendations of the Museum Educational Site Licensing Project (MESL), the Library of Congress and the MOA II participants." ... "The Museum Educational Site Licensing Project (MESL) offered a framework for seven collecting institutions, primarily museums, and seven universities to experiment with new ways to distribute visual information--both images and related textual materials. " ... "The Making of America (MoA II) Testbed Project is a Digital Library Federation (DLF) coordinated, multi-phase endeavor to investigate important issues in the creation of an integrated, but distributed, digital library of archival materials (i.e., digitized surrogates of primary source materials found in archives and special collections). The participants include Cornell University, New York Public Library, Pennsylvania State University, Stanford University and UC Berkeley. The Library of Congress white papers and standards are based on the experience gained during the American Memory Pilot Project. The concepts discussed and the principles developed still guide the Library's digital conversion efforts, although they are under revision to accomodate the capabilities of new technologies and new digital formats." ... "The CDL Technical Architecture and Standards Workgroup includes the following members with extensive experience with digital object collection and management: Howard Besser, MESL and MOA II digital imaging testbed projects; Diane Bisom, University of California, Irvine; Bernie Hurley, MOA II, University of California, Berkeley; Greg Janee, Alexandria Digital Library; John Kunze, University of California, San Francisco; Reagan Moore and Chaitanya Baru, San Diego Supercomputer Center, ongoing research with the National Archives and Records Administration on the long term storage and retrieval of digital content; Terry Ryan, University of California, Los Angeles; David Walker, California Digital Library"
This is one of a series of guides produced by the Cedars digital preservation project. This guide concentrates on the technical approaches that Cedars recommends as a result of its experience. The accent is on preservation, without which continued access is not possible. The time scale is at least decades, i.e. well beyond the lifetime of any hardware technology. The overall preservation strategy is to remove the data from its medium of acquisition and to preserve the digital content as a stream of bytes. There is good reason to be confident that data held as a stream of bytes can be preserved indefinitely. Just as there is no access without preservation, preservation with no prospect of future access is a very sterile exercise. As well as preserving the data as a byte-stream, Cedars adds metadata. This includes reference to facilities (called technical metadata in this document) for accessing the intellectual content of the preserved data. This technical metadata will usually include actual software for use in accessing the data. It will be stored as a preserved object in the overall archive store, and will be revised as technology evolves making new methods of access to preserved objects appropriate. There will be big economies of scale, as most, if not all, objects of the same type will share the same technical metadata. Cedars recommends against repeated format conversions, and instead argues for keeping the preserved byte-stream, while tracking evolving technology by maintaining the technical metadata. It is for this reason that Cedars includes only a reference to the technical metadata in the preserved data object. Thus future users of the object will be pointed to information appropriate to their own era, rather than that of the object's preservation. The monitoring and updating of this aspect of the technical metadata is a vital function of the digital library. In practice, Cedars expects that very many preserved digital objects will be in the same format, and will reference the same technical metadata. Access to a preserved object then involves Migration on Request, in that any necessary migration from an obsolete format to an appropriate current day format happens at the point of request. As well as recommending actions to be taken to preserve digital objects, Cedars also recommends the use of a permanent naming scheme, with a strong recommendation that such a scheme should be infinitely extensible.
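A minimal sketch of the byte-stream-plus-reference design and Migration on Request described above, with all names invented for illustration. The point is that the stored bytes never change, while the shared technical metadata is looked up at access time:

```python
import hashlib

ARCHIVE = {}        # object id -> (byte-stream, technical-metadata id)
TECH_METADATA = {   # shared by all preserved objects of the same format
    "format/tiff": {"access_route": "current-day TIFF renderer or converter"},
}

def preserve(object_id, byte_stream, tech_md_id):
    """Store the bytes verbatim plus a *reference* to technical metadata."""
    ARCHIVE[object_id] = (byte_stream, tech_md_id)

def access(object_id):
    """Migration on Request: resolve the format route at access time."""
    data, tech_md_id = ARCHIVE[object_id]
    print("sha256:", hashlib.sha256(data).hexdigest())  # bytes unchanged
    return data, TECH_METADATA[tech_md_id]  # metadata current at request time

preserve("obj-1", b"II*\x00...example bytes...", "format/tiff")
print(access("obj-1")[1])
```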
Critical Arguments
CA "This document is intended to inform technical practitioners in the actual preservation of digital materials, and also to highlight to library management the importance of this work as continuing their traditional scholarship role into the 21st century."
Type
Web Page
Title
National States Geographic Information Council (NSGIC) Metadata Primer -- A "How To" Guide on Metadata Implementation
The primer begins with a discussion of what metadata is and why metadata is important. This is followed by an overview of the Content Standard for Digital Geospatial Metadata (CSDGM) adopted by the Federal Geographic Data Committee (FGDC). Next, the primer focuses on the steps required to begin collecting and using metadata. The fourth section deals with how to select the proper metadata creation tool from the growing number being developed. Section five discusses the mechanics of documenting a data set, including strategies on reviewing the output to make sure it is in a useable form. The primer concludes with a discussion of other assorted metadata issues.
Critical Arguments
CA The Metadata Primer is one phase of a larger metadata research and education project undertaken by the National States Geographic Information Council and funded by the Federal Geographic Data Committee's Competitive Cooperative Agreements Program (CCAP). The primer is designed to provide a practical overview of the issues associated with developing and maintaining metadata for digital spatial data. It is targeted toward an audience of state, local, and tribal government personnel. The document provides a "cook book" approach to the creation of metadata. Because much of the most current information on metadata resides on the Internet, the primer summarizes relevant material available from other World Wide Web (WWW) home pages.
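For orientation, the CSDGM organizes a record into seven numbered top-level sections; the skeleton below is an editorial sketch of that shape, with invented values rather than content drawn from the primer:

```python
# Skeleton of the seven top-level CSDGM sections; values are invented.
record = {
    "Identification_Information": {
        "Citation": {"Title": "County parcel boundaries (example)"},
        "Description": {"Abstract": "Illustrative entry only."},
    },
    "Data_Quality_Information": {},
    "Spatial_Data_Organization_Information": {},
    "Spatial_Reference_Information": {},
    "Entity_and_Attribute_Information": {},
    "Distribution_Information": {},
    "Metadata_Reference_Information": {"Metadata_Date": "19980601"},
}

for section, body in record.items():
    print(section, "->", "populated" if body else "empty")
```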
Conclusions
RQ To what extent could the NSGIC recommendations be used for non-geographic applications?
SOW
DC FGDC approved the Content Standard for Digital Geospatial Metadata (FGDC-STD-001-1998) in June 1998. FGDC is a 19-member interagency committee composed of representatives from the Executive Office of the President, Cabinet-level and independent agencies. The FGDC is developing the National Spatial Data Infrastructure (NSDI) in cooperation with organizations from State, local and tribal governments, the academic community, and the private sector. The NSDI encompasses policies, standards, and procedures for organizations to cooperatively produce and share geographic data.
Type
Web Page
Title
Softening the borderlines of archives through XML - a case study
Archives have always had troubles getting metadata in formats they can process. With XML, these problems are lessening. Many applications today provide the option of exporting data into an application-defined XML format that can easily be post-processed using XSLT, schema mappers, etc., to fit the archives' needs. This paper highlights two practical examples for the use of XML in the Swiss Federal Archives and discusses advantages and disadvantages of XML in these examples. The first use of XML is the import of existing metadata describing debates at the Swiss parliament whereas the second concerns preservation of metadata in the archiving of relational databases. We have found that the use of XML for metadata encoding is beneficial for the archives, especially for its ease of editing, built-in validation and ease of transformation.
Notes
The Swiss Federal Archives defines the norms and basis of records management and advises departments of the Federal Administration on their implementation. http://www.bar.admin.ch/bar/engine/ShowPage?pageName=ueberlieferung_aktenfuehrung.jsp
Critical Arguments
CA "This paper briefly discusses possible uses of XML in an archival context and the policies of the Swiss Federal Archives concerning this use (Section 2), provides a rough overview of the applications we have that use XML (Section 3) and the experiences we made (Section 4)."
Conclusions
RQ "The systems described above are now just being deployed into real world use, so the experiences presented here are drawn from the development process and preliminary testing. No hard facts in testing the sustainability of XML could be gathered, as the test is time itself. This test will be passed when we can still access the data stored today, including all metadata, in ten or twenty years." ... "The main problem area with our applications was the encoding of the XML documents and the non-standard XML document generation of some applications. When dealing with the different encodings (UTF-8, UTF-16, ISO-8859-1, etc) some applications purported a different encoding in the header of the XML document than the true encoding of the document. These errors were quickly identified, as no application was able to read the documents."
SOW
DC The author is currently a private digital archives consultant, but at the time of this article, was a data architect for the Swiss Federal Archives. The content of this article owes much to the work being done by a team of architects and engineers at the Archives, who are working on an e-government project called ARELDA (Archiving of Electronic Data and Records).
Type
Web Page
Title
Recordkeeping Metadata Standard for Commonwealth Agencies
This standard describes the metadata that the National Archives of Australia recommends should be captured in the recordkeeping systems used by Commonwealth government agencies. ... Part One of the standard explains the purpose and importance of standardised recordkeeping metadata and details the scope, intended application and features of the standard. Features include: flexibility of application; repeatability of data elements; extensibility to allow for the management of agency-specific recordkeeping requirements; interoperability across systems environments; compatibility with related metadata standards, including the Australian Government Locator Service (AGLS) standard; and interdependency of metadata at the sub-element level.
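Two of the listed features, repeatability of elements and extensibility for agency-specific needs, can be sketched as follows; the element names are invented for illustration and do not come from the standard:

```python
from collections import defaultdict

class RecordMetadata:
    """Sketch of two features the standard names: repeatable elements and
    agency-specific extensions. Element names here are invented, not the
    standard's actual element set."""

    def __init__(self):
        self._elements = defaultdict(list)   # repeatability: many values

    def add(self, name: str, value: str, scheme: str = "core"):
        # 'scheme' lets an agency extend the core set without breaking it.
        self._elements[(scheme, name)].append(value)

    def values(self, name: str, scheme: str = "core"):
        return self._elements[(scheme, name)]

md = RecordMetadata()
md.add("agent", "Registrar")                          # core element
md.add("agent", "Records Manager")                    # repeated element
md.add("case-number", "2001/42", scheme="agency-x")   # local extension
print(md.values("agent"), md.values("case-number", "agency-x"))
```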
Critical Arguments
CA Compliance with the Recordkeeping Metadata Standard for Commonwealth Agencies will help agencies to identify, authenticate, describe and manage their electronic records in a systematic and consistent way to meet business, accountability and archival requirements. In this respect the metadata is an electronic recordkeeping aid, similar to the descriptive information captured in file registers, file covers, movement cards, indexes and other registry tools used in the paper-based environment to apply intellectual and physical controls to records.
Conclusions
RQ "The National Archives intends to consult with agencies, vendors and other interested parties on the implementation and continuing evolution of the Recordkeeping Metadata Standard for Commonwealth Agencies." ... "The National Archives expects to re-examine and reissue the standard in response to broad agency feedback and relevant advances in theory and methodology." ... "The development of public key technology is one area the National Archives will monitor closely, in consultation with the Office for Government Online, for possible additions to a future version of the standard."
SOW
DC "This standard has been developed in consultation with recordkeeping software vendors endorsed by the Office for Government OnlineÔÇÖs Shared Systems Initiative, as well as selected Commonwealth agencies." ... "The standard has also been developed with reference to other metadata standards emerging in Australia and overseas to ensure compatibility, as far as practicable, between related resource management tools, including: the Dublin Core-derived Australian Government Locator Service (AGLS) metadata standard for discovery and retrieval of government services and information in web-based environments, co-ordinated by the National Archives of Australia; and the non-sector-specific Recordkeeping Metadata Standards for Managing and Accessing Information Resources in Networked Environments Over Time for Government, Social and Cultural Purposes, co-ordinated by Monash University using an Australian Research Council Strategic Partnership with Industry Research and Training (SPIRT) Support Grant."
Joined-up government needs joined-up information systems. The e-Government Metadata Standard (e-GMS) lays down the elements, refinements and encoding schemes to be used by government officers when creating metadata for their information resources or designing search interfaces for information systems. The e-GMS is needed to ensure maximum consistency of metadata across public sector organisations.
Publisher
Office of the e-Envoy, Cabinet Office, UK.
Critical Arguments
CA "The e-GMS is concerned with the particular facets of metadata intended to support resource discovery and records management. The Standard covers the core set of ÔÇÿelementsÔÇÖ that contain data needed for the effective retrieval and management of official information. Each element contains information relating to a particular aspect of the information resource, e.g. 'title' or 'creator'. Further details on the terminology being used in this standard can be found in Dublin Core and Part Two of the e-GIF."
Conclusions
RQ "The e-GMS will need to evolve, to ensure it remains comprehensive and consistent with changes in international standards, and to cater for changes in use and technology. Some of the elements listed here are already marked for further development, needing additional refinements or encoding schemes. To limit disruption and cost to users, all effort will be made to future-proof the e-GMS. In particular we will endeavour: not to remove any elements or refinements; not to rename any elements or refinements; not to add new elements that could contain values contained in the existing elements."
SOW
DC The e-GMS is promulgated by the British government as part of its e-government initiative. It is the technical cornerstone of the e-government policy for joining up the public sector electronically and providing modern, improved public services.
Type
Web Page
Title
Record Keeping Metadata Requirements for the Government of Canada
This document comprises descriptions for metadata elements utilized by the Canadian Government as of January 2001.
Critical Arguments
CA "The Record Keeping Metadata is defined broadly to include the type of information Departments are required to capture to describe the identity, authenticity, content, context, structure and management requirements of records created in the context of a business activity. The Metadata model consists of elements, which are the attributes of a record that are comparable to fields in a database. The model is modular in nature. It permits Departments to use a core set of elements that will meet the minimum requirements for describing and sharing information, while facilitating interoperability between government Departments. It also allows Departments with specialized needs or the need for more detailed descriptions to add new elements and/or sub-elements to the basic metadata in order to satisfy their particular business requirements."
Expanded version of the article "Ensuring the Longevity of Digital Documents" that appeared in the January 1995 edition of Scientific American (Vol. 272, Number 1, pp. 42-7).
Publisher
Council on Library and Information Resources
Critical Arguments
CA "It is widely accepted that information technology is revolutionizing our concepts of documents and records in an upheaval at least as great as the introduction of printing, if not of writing itself. The current generation of digital records therefore has unique historical significance; yet our digital documents are far more fragile than paper. In fact, the record of the entire present period of history is in jeopardy. The content and historical value of many governmental, organizational, legal, financial, and technical records, scientific databases, and personal documents may be irretrievably lost to future generations if we do not take steps to preserve them."
Conclusions
RQ "We must develop evolving standards for encoding explanatory annotations to bootstrap the interpretation of digital documents that are saved in nonstandard forms. We must develop techniques for saving the bit streams of software-dependent documents and their associated systems and application software. We must ensure that the hardware environments necessary to run this software are described in sufficient detail to allow their future emulation. We must save these specifications as digital documents, encoded using the bootstrap standards developed for saving annotations so that they can be read without special software (lest we be recursively forced to emulate one system in order to learn how to emulate another). We must associate contextual information with our digital documents to provide provenance as well as explanatory annotations in a form that can be translated into successive standards so as to remain easily readable. Finally, we must ensure the systematic and continual migration of digital documents onto new media, preserving document and program bit streams verbatim, while translating their contextual information as necessary."
Type
Web Page
Title
Archiving of Electronic Digital Data and Records in the Swiss Federal Archives (ARELDA): e-government project ARELDA - Management Summary
The goal of the ARELDA project is to find long-term solutions for the archiving of digital records in the Swiss Federal Archives. This includes the accession, the long-term storage, preservation of data, description, and access for the users of the Swiss Federal Archives. It is also coordinated with the basic efforts of the Federal Archives to realize a uniform records management solution in the federal administration and therefore to support the pre-archival creation of documents of archival value for the benefit of the administration as well as of the Federal Archives. The project is indispensable for the long-term execution of the Federal Archives Act: older IT systems are being replaced by newer ones, and a complete migration of the data is sometimes not possible or too expensive; small database applications built and maintained by people with no IT background are constantly increasing in number; and more and more administrative bodies are introducing records and document management systems.
Publisher
Swiss Federal Archives
Publication Location
Bern
Critical Arguments
CA "Archiving in general is a necessary prerequisite for the reconstruction of governmental activities as well as for the principle of legal certainty. It enables citizens to understand governmental activities and ensures a democratic control of the federal administration. And finally are archives a prerequisite for the scientific research, especially in the social and historical fields and ensure the preservation of our cultural heritage. It plays a vital role for an ongoing and efficient records management. A necessary prerequisite for the Federal Archives in the era of the information society will be the system ARELDA (Archiving of Electronic Data and Records)."
Conclusions
RQ "Because of the lack of standard solutions and limited or lacking personal resources for an internal development effort, the realisation of ARELDA will have to be outsourced and the cooperation with the IT division and the Federal Office for Information Technology, Systems and Telecommunication must be intensified. The guidelines for the projects are as follows:
SOW
DC ARELDA is one of the five key projects in the Swiss government's e-government strategy.