Artiste is a European project developing a cross-collection search system for art galleries and museums. It combines image-content retrieval with text-based retrieval and uses RDF mappings in order to integrate diverse databases. The test sites of the Louvre, Victoria and Albert Museum, Uffizi Gallery and National Gallery London provide their own database schema for existing metadata, avoiding the need for migration to a common schema. The system will accept a query based on one museum's fields and convert it, through an RDF mapping, into a form suitable for querying the other collections. The nature of some of the image processing algorithms means that the system can be slow for some computations, so the system is session-based to allow the user to return to the results later. The system has been built within a J2EE/EJB framework, using the JBoss Enterprise Application Server.
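As a rough illustration of the query-conversion idea, the sketch below (Python with the rdflib library) maps a field from one museum's schema to another's via an RDF equivalence statement. All namespace URIs and field names are invented for illustration; they are not the project's actual schemas.

from rdflib import Graph, Namespace

# Hypothetical namespaces standing in for two collections' schemas.
LOUVRE = Namespace("http://example.org/louvre/schema#")
VAM = Namespace("http://example.org/vam/schema#")
MAP = Namespace("http://example.org/mapping#")

g = Graph()
# Assert that the Louvre's "auteur" field corresponds to the V&A's "maker".
g.add((LOUVRE.auteur, MAP.equivalentField, VAM.maker))

def translate_field(field, target_ns):
    # Return the target schema's equivalent of `field`, if a mapping exists.
    for _, _, target in g.triples((field, MAP.equivalentField, None)):
        if target.startswith(str(target_ns)):
            return target
    return None

print(translate_field(LOUVRE.auteur, VAM))  # http://example.org/vam/schema#maker

A query expressed against one museum's fields could then be rewritten field by field before being dispatched to the other collections.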
Secondary Title
WWW2002: The Eleventh International World Wide Web Conference
Publisher
International World Wide Web Conference Committee
ISBN
1-880672-20-0
Critical Arguments
CA "A key aim is to make a unified retrieval system which is targeted to usersÔÇÖ real requirements and which is usable with integrated cross-collection searching. Museums and Galleries often have several digital collections ranging from public access images to specialised scientific images used for conservation purposes. Access from one gallery to another was not common in terms of textual data and not done at all in terms of image-based queries. However the value of cross-collection access is recognised as important for example in comparing treatments and conditions of paintings. While ARTISTE is primarily designed for inter-museum searching it could equally be applied to museum intranets. Within a MuseumÔÇÖs intranet there may be systems which are not interlinked due to local management issues."
Conclusions
RQ "The query language for this type of system is not yet standardised but we hope that an emerging standard will provide the session-based connectivity this application seems to require due to the possibility of long query times." ... "In the near future, the project will be introducing controlled vocabulary support for some of the metadata fields. This will not only make retrieval more robust but will also facilitate query expansion. The LouvreÔÇÖs multilingual thesaurus will be used in order to ensure greater interoperability. The system is easily extensible to other multimedia types such as audio and video (eg by adding additional query items such as "dialog" and "video sequence" with appropriate analysers). A follow-up project is scheduled to explore this further. There is some scope for relating our RDF query format to the emerging query standards such as XQuery and we also plan to feed our experience into standards such as the ZNG initiative.
SOW
DC "The Artiste project is a European Commission funded collaboration, investigating the use of integrated content and metadata-based image retrieval across disparate databases in several major art galleries across Europe. Collaborating galleries include the Louvre in Paris, the Victoria and Albert Museum in London, the Uffizi Gallery in Florence and the National Gallery in London." ... "Artiste is funded by the European CommunityÔÇÖs Framework 5 programme. The partners are: NCR, The University of Southampton, IT Innovation, Giunti Multimedia, The Victoria and Albert Museum, The National Gallery, The research laboratory of the museums of France (C2RMF) and the Uffizi Gallery. We would particularly like to thank our collaborators Christian Lahanier, James Stevenson, Marco Cappellini, John Cupitt, Raphaela Rimabosci, Gert Presutti, Warren Stirling, Fabrizio Giorgini and Roberto Vacaro."
This study focuses upon access to authentic electronic records that are no longer required in day-to-day operations and that have been set aside in a recordkeeping system or storage repository for future reference. One school of thought, generally associated with computer information technology specialists, holds that long-term access to electronic records is primarily a technological issue with little attention devoted to authenticity. Another school of thought, associated generally with librarians, archivists, and records managers, contends that long-term access to electronic records is as much an intellectual issue as it is a technological issue. This latter position is clearly evident in several recent research projects and studies about electronic records whose findings illuminate the discussion of long-term access to electronic records. Therefore, a review of eight research projects highlighting findings relevant for long-term access to electronic records begins this chapter. This review is followed by a discussion, from the perspective of archival science, of nine questions that a long-term access strategy must take into account. The nine issues are: What is a document?; What is a record?; What are authentic electronic records?; What does "archiving" mean?; What is an authentic reformatted electronic record?; What is a copy of an authentic electronic record?; What is an authentic converted electronic record?; What is involved in the migration of authentic electronic records?; What is technology obsolescence?
Book Title
Authentic Electronic Records: Strategies for Long-Term Access
Publisher
Cohasset Associates, Inc.
Publication Location
Chicago
ISBN
0970064004
Critical Arguments
CA "Building upon the key concepts and concerns articulated by the studies described above, this report attempts to move the discussion of long-term access to electronic records towarad more clearly identified, generally applicable and redily im(TRUNCATED)
Conclusions
RQ
SOW
DC This book chapter was written by Charles M. Dollar for Cohasset Associates, Inc. Mr. Dollar has "twenty-five years of experience in working with electronic records as a manager at the National Archives and Records Administration, as an archival educator at the University of British Columbia, and a consultant to governments and businesses in North America, Asia, Europe, and the Middle East." Cohasset Associates Inc. is "one of the nation's foremost consulting firms specializing in document-based information management."
Type
Journal
Title
Capturing records' metadata: Unresolved questions and proposals for research
The author reviews a range of the research questions still unanswered by research on the capture of metadata required for recordness. These include how to maintain inviolable linkages between records and their metadata in a variety of architectures, what structure metadata content should take, the semantics of records metadata and that of other electronic sources, how new metadata can be acquired by records over time, maintaining the meaning of contextual metadata over time, the use of metadata in records management and the design of environments in which Business Acceptable Communications -- BAC -- (those with appropriate evidential metadata) can persist.
Critical Arguments
CA "My research consists of model building which enables the construction of theories and parallel implementations based on shared assumptions. Some of these models are now being tested in applications, so this report reflects both what we do not yet know from abstract constructs and questions being generated by field testing. " ... Bearman overviews research questions such as semantics, syntax, structure and persistence of metadata that still need to be addressed.
Phrases
<P1> Records are evidence when they are bound to appropriate metadata about their content, structure and context. <P2> The metadata required for evidence is described in the Reference Model for Business Acceptable Communications (BAC). <P3> Metadata which is required for evidence must continue to be associated with the record to which it relates over time and neither it nor the record content can be alterable. <P4> To date we have only identified three implementations which, logically, could allow metadata to retain this inviolable connection. Metadata can be: kept in a common envelope WITH a record (encapsulated), bound TO a record (by integrity controls within an environment), or LINKED with a record through a technical and/or social process (registration, key deposit, etc.). <P5> Metadata content was defined in order to satisfy a range of functional requirements of records, hence it ought to have a structure which enables it to serve these functions effectively and in concrete network implementations. <warrant> <P6> Clusters of metadata are must operate together. Clusters of metadata are required by different processes which take place at different times, for different software clients, and within a variety of processes. Distinct functions will need access to specified metadata substructures and must be able to act on these appropriately. Structures have been proposed in the Reference Model for Business Acceptable Communications. <P7> Metadata required for recordness must, logically, be standard; that required for administration of recordkeeping systems is extensible and locally variable. <P8> Records metadata must be semantically homogenous but it is probably desirable for it to be syntactically heterogeneous and for a range of protocols to operate against it. Records metadata management system requirements have both an internal and external aspect; internally they satisfy management requirements while externally they satisfy on-going recordness requirements. <P9> The metadata has to come either from a specific user/session or from rules defined to extract data either from a layer in the application or a layer between the application and the recording event. <P10> A representation of the business context must exist from which the record-creating event can obtain metadata values. <P11> Structural metadata must both define the dependent structures and identify them to a records management environment which is ÔÇ£patrollingÔÇØ for dependencies which are becoming risky in the evolving environment in order to identify needs for migration. <P12> BAC conformant environments could reduce overheads and, if standards supported the uniform management of records from the point of issue to the point of receipt. Could redundancy now imposed by both paper and electronic processes be dramatically reduced if records referenced other records? <P13>
Conclusions
RQ "All the proposed methods have some degree of external dependency. What are the implications software dependencies? Encapsulation, integrity controls and technico-social process are all software dependent. Is this avoidable? Can abstract reference models of the metadata captured by these methods serve to make them effectively software independent? " ... "What are the relative overhead costs of maintaining the systems which give adequate societal assurances of records retention following any of these approaches? Are there some strategies that are currently more efficient or effective? What are the organizational requirements for implementing metadata capture systems? In particular, what would the costs of building such systems within a single institution be versus the costs of implementing records metadata adhering communications servers on a universal scale?" ... "Can we model mechanisms to enable an integrated environment of recordkeeping throughout society for all electronically communicated transactions?" ... "Are the BAC structures workable? Complete? Extensible in ways that are known to be required? For example, metadata required for ÔÇ£recordnessÔÇØ is created at the time of the creation of the records but other metadata, as premised by the Warwick Framework, 2 may be created subsequently. Are these packets of metadata orthogonal with respect to recordness? If not, how are conflicts dealt with? " ... "Not all metadata references fixed facts. Thus, for example, we have premised that proper reference to a retention schedule is a citation to an external source rather than a date given within the metadata values of a record. Similar external references are required for administration of shifting access permissions. What role can registries (especially rights clearinghouses) play in a world of electronic records? How well do existing languages for permission management map to the requirements of records administration, privacy and confidentiality protection, security management, records retention and destruction, etc." ... "Not all records will be created with equally perfect metadata. Indeed risk-based decisions taken by organizations in structuring their recordsÔÇÖ capture are likely to result in conscious decisions to exclude certain evidential metadata. What are the implications of incomplete metadata on an individual organization level and on a societal level? Does the absence of data as a result of policy need to be noted? And if so, how?" ... "Since metadata has owners, howdo owners administer recordsÔÇÖ metadata over time? In particular, since records contain records, how are the layers of metadata exposed for management and administrative needs (if internal metadata documenting dependencies can slip through the migration process, we will end up with records that cannot serve as evidence. If protected records within unprotected records are not protected, we will end up with insecure records environments, etc. etc.)." ... "In principle, the BAC could be expressed as Dublin metadata 3 and insofar as it cannot be, the Dublin metadata will be inadequate for evidence. What other syntax could be used? How could these be comparatively tested?" .. "Could Dublin Core metadata, if extended by qualifying schema, serve the requirements of recordness? Records are, after all, documents in the Dublin sense of fixed information objects. What would the knowledge representation look like?" ... 
"Strategies for metadata capture currently locate the source of metadata either in the API layer, or the communications system, using data provided by the application (an analysis supports defining which data and where they can be obtained), from the user interface layer, or from the business rules defined for specified types of communication pathways. Can all the required metadata be obtained by some combination of these sources? In other words, can all the metadata be acquired from sources other than content created by the record-creator for the explicit and sole purpose of documentation (since such data is both suspect in itself and the demand for it is annoying to the end user)? " ... "Does the capture of metadata from the surrounding software layers require the implementation of a business-application specific engine, or can we design generic tools that provide the means by which even legacy computing systems can create evidential records if the communication process captures the interchange arising from a record-event and binds it with appropriate metadata?" ... "What kinds of representations of business processes and structures can best carry contextualizing metadata at this level of granularity and simultaneously serve end user requirements? Are the discovery and documentation representations of provenance going to have to be different? " ... "Can a generic level of representation of context be shared? Do standards such a STEP 4 provide adequate semantic rules to enable some meaningful exchange of business context information? " ... "Using past experiences of expired standards as an indicator, can the defined structural metadata support necessary migrations? Are the formal standards of the source and target environments adequate for actual record migration to occur?" ... "What metadata is required to document a migration itself?" ... "Reduction of redundancy requires record uses to impose post-creation metadata locks on records created with different retention and access controls. To what extent is the Warwick Framework relevant to these packets and can architectures be created to manage these without their costs exceeding the savings?" ... "A number of issues about proper implementation depend on the evolution (currently very rapid) of metadata strategies in the broader Internet community. Issues such as unique identification of records, external references for metadata values, models for metadata syntax, etc. cannot be resolved for records without reference to the ways in which the wider community is addressing them. Studies that are supported for metadata capture methods need to be aware of, and flexible in reference to, such developments."
CA Makes a distinction between archival description of the record at hand and documentation of the context of its creation. Argues the importance of the latter in establishing the evidentiary value of records, and criticizes ISAD(G) for its failure to account for context. "(1) The subject of documentation is, first and foremost, the activity that generated the records, the organizations and individuals who used the records, and the purposes to which the records were put. (2) The content of the documentation must support requirements for the archival management of records, and the representations of data should support life cycle management of records. (3) The requirements of users of archives, especially their personal methods of inquiry, should determine the data values in documentation systems and guide archivists in presenting abstract models of their systems to users." (p. 45-46)
Phrases
<P1> [T]he ICA Principles rationalize existing practice -- which the author believes as a practical matter we cannot afford; which fail to provide direct access for most archives users; and which do not support the day-to-day information requirements of archivists themselves. These alternatives are also advanced because of three, more theoretical, differences with the ICA Principles: (1) In focusing on description rather than documentation, they overlook the most salient characteristic of archival records: their status as evidence. (2) In proposing specific content, they are informed by the bibliographic tradition rather than by concrete analysis of the way in which information is used in archives. (3) In promoting data value standardization without identifying criteria or principles by which to identify appropriate language or structural links between the objects represented by such terms, they fail adequately to recognize that the data representation rules they propose reflect only one particular, and a limiting, implementation. (p. 33-34) <P2> Archives are themselves documentation; hence I speak here of "documenting documentation" as a process the objective of which is to construct a value-added representation of archives, by means of strategic information capture and recording into carefully structured data and information access systems, as a mechanism to satisfy the information needs of users including archivists. Documentation principles lead to methods and practices which involve archivists at the point, and often at the time, of records creation. In contrast, archival description, as described in the ICA Principles[,] is "concerned with the formal process of description after the archival material has been arranged and the units or entities to be described have been determined." (1.7) I believe documentation principles will be more effective, more efficient and provide archivists with a higher stature in their organizations than the post accessioning description principles proposed by the ICA. <warrant> (p. 34) <P3> In the United States, in any case, there is still no truly theoretical formulation of archival description principles that enjoys a widespread adherence, in spite of the acceptance of rules for description in certain concrete application contexts. (p. 37) <P4> [T]he MARC-AMC format and library bibliographic practices did not adequately reflect the importance of information concerning the people, corporate bodies and functions that generated records, and the MARC Authority format did not support appropriate recording of such contexts and relations. <warrant> (p. 37) <P5> The United States National Archives, even though it had contributed to the data dictionary which led to the MARC content designation, all the data which it believed in 1983 that it would want to interchange, rejected the use of MARC two years later because it did not contain elements of information required by NARA for interchange within its own information systems. <warrant> (p. 37) <P6> [A]rchivists failed to understand then, just as the ISAD(G) standard fails to do now, that rules for content and data representation make sense in the context of the purposes of actual exchanges or implementation, not in the abstract, and that different rules or standards for end-products may derive from the same principles. (p. 
38) <P7> After the Committee on Archival Information Exchange of the Society of American Archivists was confronted with proposals to adopt many different vocabularies for a variety of different data elements, a group of archivists who were deeply involved in standards and description efforts within the SAA formed an Ad Hoc Working Group on Standards for Archival Description (WGSAD) to identify what types of standards were needed in order to promote better description practices.  WSAD concluded that existing standards were especially inadequate to guide practice in documenting contexts of creation.  Since then, considerable progress has been made in developing frameworks for documentation, archival information systems architecture and user requirements analysis, which have been identified as the three legs on which the documenting documentation platform rests. <warrant> (p. 38) <P8> Documentation of organizational activity ought to begin long before records are transferred to archives, and may take place even before any records are created -- at the time records are created -- at the time when new functions are assigned to an organization. (p. 39) <P9> It is possible to identify records which will be created and their retention requirements before they are created, because their evidential value and informational content are essentially predetermined. (p. 39) <P10> Archivists can actively intervene through regulation and guidance to ensure that the data content and values depicting activities and functions are represented in such a way that will make them useful for subsequent management and retrieval of the records resulting from these activities. This information, together with systems documentation, defines the immediate information system context out of which the records were generated, in which they are stored, and from which they were retrieved during their active life. (p. 39) <P11> Documentation of the link between data content and the context of creation and use of the records is essential if records (archives or manuscripts) are to have value as evidence. (p. 39) <P12> [C]ontextual documentation capabilities can be dramatically improved by having records managers actively intervene in systems design and implementation.  The benefits of proactive documentation of the context of records creation, however, are not limited to electronic records; the National Archives of Canada has recently revised its methods of scheduling to ensure that such information about important records systems and contexts of records creation will be documented earlier. <warrant> (p. 39) <P13> Documentation of functions and of information systems can be conducted using information created by the organization in the course of its own activity, and can be used to ensure the transfer of records to archives and/or their destruction at appropriate times. It ensures that data about records which were destroyed as well as those which were preserved will be kept, and it takes advantage of the greater knowledge of records and the purposes and methods of day-to-day activity that exist closer to the events. (p. 40) <P14> The facts of processing, exhibiting, citing, publishing and otherwise managing records becomes significant for their meaning as records, which is not true of library materials. (p. 
41) <P15> [C]ontent and data representation requirements ought to be derived from analysis of the uses to which such systems must be put, and should satisfy the day to day information requirements of archivists who are the primary users of archives, and of researchers using archives for primary evidential purposes. (p. 41) <P16> The ICA Commission proposes a principle by which archivists would select data content for archival descriptions, which is that "the structure and content of representations of archival material should facilitate information retrieval." (5.1) Unfortunately, it does not help us to understand how the Commission selected the twenty-five elements of information identified as its standard, or how we could apply the principle to the selection of additional data content. It does, however, serve as a prelude to the question of which principles should guide archivists in choosing data values in their representations. (p. 42) <P17> Libraries have found that subject access based on titles, tables of contents, abstracts, indexes and similar formal subject analysis by-products of publishing can support most bibliographic research, but the perspectives brought to materials by archival researchers are both more varied and likely to differ from those of the records creators. (p. 43) <P18> The user should not only be able to employ a terminology and a perspective which are natural, but also should be able to enter the system with a knowledge of the world being documented, without knowing about the world of documentation. (p. 44) <P19> Users need to be able to enter the system through the historical context of activity, construct relations in that context, and then seek avenues down into the documentation. This frees them from trying to imagine what records might have survived -- documentation assists the user to establish the non-existence of records as well as their existence -- or to fathom how archivists might have described records which did survive. (p. 44) <P20> When they departed from the practices of Brooks and Schellenberg in order to develop means for the construction of union catalogues of archival holdings, American archivists were not defining new principles, but inventing a simple experiment. After several years of experience with the new system, serious criticisms of it were being leveled by the very people who had first devised it. (p. 45)
Conclusions
RQ "In short, documentation of the three aspects of records creation contexts (activities, organizations and their functions, and information systems), together with representation of their relations, is essential to the concept of archives as evidence and is therefore a fundamental theoretical principle for documenting documentation. Documentation is a process that captures information about an activity which is relevant to locating evidence of that activity, and captures information about records that are useful to their ongoing management by the archival repository. The primary source of information is the functions and information systems giving rise to the records, and the principal activity of the archivist is the manipulation of data for reference files that create richly-linked structures among attributes of the records-generating context, and which point to the underlying evidence or record." (p. 46)
Type
Journal
Title
Building record-keeping systems: Archivists are not alone on the wild frontier
CA The digital environment offers archivists a host of new tools that can be adapted and used for recordkeeping. However, archivists must choose their tools judiciously, considering the long-term implications of their use as well as research and development. Ultimately, they must pick tools and strategies that dovetail with their institutions' specific needs while working to produce reliable and authentic records.
Phrases
<P1> Evidence from this review of emerging methods for secure and authentic electronic communications shows that the division of responsibility, accountability, and jurisdiction over recordkeeping is becoming more complex than a clear line between the records creator and the records preserver. (p.66) <P2> Storage of records in encrypted form is another area of concern because encryption adds additional levels of systems dependency on access to keys, proprietary encryption algorithims, hardware, and software. (p.62) <P3> It is important for archivists and records managers to understand parallel developments, because some new strategies and methods may support recordkeeping, while others may impede the achievement of archival objectives. (p.45) <P4> The concept of warrant and subsequent research on it by Wendy Duff is a significant contribution, because it situates the mandates for creating and maintaining records in a legal, administrative, and professional context, and it presents a methodology for locating, compiling, and presenting the rules governing proper and adequate documentation in modern organizations. (p. 48)
Conclusions
RQ Are electronic recordkeeping systems truly inherently inferior to paper-based systems in their capacity to maintain authentic records over time? How tightly can recordkeeping be integrated into normal business processes, and where does one draw the line between how a business does its work and how it does its recordkeeping?
Type
Journal
Title
When Documents Deceive: Trust and Provenance as New Factors for Information Retrieval in a Tangled Web
Journal of the American Society for Information Science and Technology
Periodical Abbreviation
JASIST
Publication Year
2001
Volume
52
Issue
1
Pages
12
Publisher
John Wiley & Sons
Critical Arguments
"This brief and somewhat informal article outlines a personal view of the changing framework for information retrieval suggested by the Web environment, and then goes on to speculate about how some of these changes may manifest in upcoming generations of information retrieval systems. It also sketches some ideas about the broader context of trust management infrastructure that will be needed to support these developments, and it points towards a number of new research agendas that will be critical during this decade. The pursuit of these agendas is going to call for new collaborations between information scientists and a wide range of other disciplines." (p. 12) Discusses public key infrastructure (PKI) and Pretty Good Practice (PGP) systems as steps toward ensuring the trustworthiness of metadata online, but explains their limitations. Makes a distinction between the identify of providers of metadata and their behavior, arguing that it is the latter we need to be concerned with.
Phrases
<P1> Surrogates are assumed to be accurate because they are produced by trusted parties, who are the only parties allowed to contribute records to these databases. Documents (full documents or surrogate records) are viewed as passive; they do not actively deceive the IR system.... Compare this to the realities of the Web environment. Anyone can create any metadata they want about any object on the net, with any motivation. (p. 13) <P2> Sites interested in manipulating the results of the indexing process rapidly began to exploit the difference between the document as viewed by the user and the document as analyzed by the indexing crawler through a set of techniques broadly called "index spamming." <P3> Pagejacking might be defined generally as providing arbitrary documents with independent arbitrary index entries. Clearly, building information retrieval systems to cope with this environment is a huge problem. (p. 14) <P4> [T]he tools are coming into place that let one determine the source of a metadata assertion (or, more precisely and more generally) the identity of the person or organization that stands behind the assertion, and to establish a level of trust in this identity. (p. 16) <P5> It is essential to recognize that in the information retrieval context one is not concerned so much with identity as with behavior. ... This distinction is often overlooked or misunderstood in discussions about what problems PKI is likely to solve: identity alone does not necessarily solve the problem of whether to trust information provided by, or warranted by, that identity. ... And all of the technology for propagating trust, either in hierarchical (PKI) or web-of-trust identity management, is purely about trust in identity. (p. 16) <P6> The question of formalizing and recording expectations about behavior, or trust in behavior, are extraordinarily complex, and as far as I know, very poorly explored. (p. 16) <P7> [A]n appeal to certification or rating services simply shifts the problem: how are these services going to track, evaluate, and rate behavior, or certify skills and behavior? (p. 16) <P8> An individual should be able to decide how he or she is willing to have identity established, and when to believe information created by or associated with such an identity. Further, each individual should be able to have this personal database evolve over time based on experience and changing beliefs. (p. 16) <P9> [T]he ability to scale and to respond to a dynamic environment in which new information sources are constantly emerging is also vital.<P10> In determining what data a user (or an indexing system, which may make global policy decisions) is going to consider in matching a set of search criteria, a way of defining the acceptable level of trust in the identity of the source of the data will be needed. (p. 16) <P10> Only if the data is supported by both sufficient trust in the identity of the source and the behavior of that identity will it be considered eligible for comparison to the search criteria. Alternatively, just as ranking of result sets provided a more flexible model of retrieval than just deciding whether documents or surrogates did or did not match a group of search criteria, one can imagine developing systems that integrate confidence in the data source (both identity and behavior, or perhaps only behavior, with trust in identity having some absolute minimum value) into ranking algorithms. (p. 
17) <P11> As we integrate trust and provenance into the next generations of information retrieval systems we must recognize that system designers face a heavy burden of responsibility. ... New design goals will need to include making users aware of defaults; encouraging personalization; and helping users to understand the behavior of retrieval systems <warrant> (p. 18) <P12> Powerful paternalistic systems that simply set up trust-related parameters as part of the indexing process and thus automatically apply a fixed set of such parameters to each search submitted to the retrieval system will be a real danger. (p. 17)
Conclusions
RQ "These developments suggest a research agenda that addresses indexing countermeasures and counter-countermeasures; ways of anonymously or pseudononymously spot-checking the results of Web-crawling software, and of identifying, filtering out, and punishing attempts to manipulate the indexing process such as query-source-sensitive responses or deceptively structured pages that exploit the gap between presentation and content." (p. 14) "Obviously, there are numerous open research problems in designing such systems: how can the user express these confidence or trust constraints; how should the system integrate them into ranking techniques; how can efficient index structures and query evaluation algorithms be designed that integrate these factors. ... The integration of trust and provenance into information retrieval systems is clearly going to be necessary and, I believe, inevitable. If done properly, this will inform and empower users; if done incorrectly, it threatens to be a tremendously powerful engine of censorship and control over information access. (p. 17)
Type
Journal
Title
Grasping the Nettle: The Evolution of Australian Archives Electronic Records Policy
CA An overview of the development of electronic records policy at the Australian Archives.
Phrases
<P1> The notion of records being independent of format and of "virtual" records opens up a completely new focus on what it is that archival institutions are attempting to preserve. (p. 136) <P2> The import of Bearman's contention that not all infomation systems are recordkeeping systems challenges archivists to move attention away from managing archival records after the fact toward involvement in the creation phase of records, i.e., in the systems design and implementation process. (p. 139) <P3> The experience of the Australian Archives is but one slice of a very large pie, but I think it is a good indication of the challenges other institutions are facing internationally. (p. 144)
Conclusions
RQ How has the Australian Archives managed the transition from paper to electronic records? What issues were raised and how were they dealt with?
Type
Journal
Title
The role of standards in the archival management of electronic records
CA Technical standards, developed by national and international organizations, are increasingly important in electronic recordkeeping. Thirteen standards are summarized and their sponsoring organizations described.
Phrases
<P1> The challenge to archivists is to make sure that the standards being applied to electronic records systems today are adequate to ensure the long-term preservation and use of information contained in the systems. (p.31) <P2> While consensus can easily be established that data exchange standards offer a wealth of potential benefits, there are also a number of real barriers to implementation that make the road ahead for archivists a very bumpy one. (p.41)
Conclusions
RQ What is the current state of standardization in the archival management of electronic records, and what are the issues involved?
Type
Electronic Journal
Title
ARTISTE: An integrated Art Analysis and Navigation Environment
This article describes the objectives of the ARTISTE project ("An integrated Art Analysis and Navigation Environment"), which aims to build a tool for the intelligent retrieval and indexing of high-resolution images. The ARTISTE project will address professional users in the fine arts as the primary end-user base. These users provide services for the ultimate end-user, the citizen.
Critical Arguments
CA "European museums and galleries are rich in cultural treasures but public access has not reached its full potential. Digital multimedia can address these issues and expand the accessible collections. However, there is a lack of systems and techniques to support both professional and citizen access to these collections."
Phrases
<P1> New technology is now being developed that will transform that situation. A European consortium, partly funded by the EU under the fifth R&D framework, is working to produce a new management system for visual information. <P2> Four major European galleries (The Uffizi in Florence, The National Gallery and the Victoria and Albert Museum in London and the Louvre related restoration centre, Centre de Recherche et de Restauration des Mus├®es de France) are involved in the project. They will be joining forces with NCR, a leading player in database and Data Warehouse technology; Interactive Labs, the new media design and development facility of Italy's leading art publishing group, Giunti; IT Innovation, Web-based system developers; and the Department of Electronics and Computer Science at the University of Southampton. Together they will create web based applications and tools for the automatic indexing and retrieval of high-resolution art images by pictorial content and information. <P3> The areas of innovation in this project are as follows: Using image content analysis to automatically extract metadata based on iconography, painting style etc; Use of high quality images (with data from several spectral bands and shadow data) for image content analysis of art; Use of distributed metadata using RDF to build on existing standards; Content-based navigation for art documents separating links from content and applying links according to context at presentation time; Distributed linking and searching across multiple archives allowing ownership of data to be retained; Storage of art images using large (>1TeraByte) multimedia object relational databases. <P4> The ARTISTE approach will use the power of object-related databases and content-retrieval to enable indexing to be made dynamically, by non-experts. <P5> In other words ARTISTE would aim to give searchers tools which hint at links due to say colour or brush-stroke texture rather than saying "this is the automatically classified data". <P6> The ARTISTE project will build on and exploit the indexing scheme proposed by the AQUARELLE consortia. The ARTISTE project solution will have a core component that is compatible with existing standards such as Z39.50. The solution will make use of emerging technical standards XML, RDF and X-Link to extend existing library standards to a more dynamic and flexible metadata system. The ARTISTE project will actively track and make use of existing terminology resources such as the Getty "Art and Architecture Thesaurus" (AAT) and the "Union List of Artist Names" (ULAN). <P7> Metadata will also be stored in a database. This may be stored in the same object-relational database, or in a separate database, according to the incumbent systems at the user partners. <P8> RDF provides for metadata definition through the use of schemas. Schemas define the relevant metadata terms (the namespace) and the associated semantics. Individual RDF queries and statements may use multiple schemas. The system will make use of existing schemas such as the Dublin Core schema and will provide wrappers for existing resources such as the Art and Architecture thesaurus in a RDF schema wrapper. <P9> The Distributed Query and Metadata Layer will also provide facilities to enable queries to be directed towards multiple distributed databases. The end user will be able to seamlessly search the combined art collection. 
This layer will adhere to worldwide digital library standards such as Z39.50, augmenting and extending as necessary to allow the richness of metadata enabled by the RDF standard.
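As an illustration of the kind of Dublin Core-based RDF description such a metadata layer might store or exchange, the sketch below (Python with rdflib) describes a single artwork. The resource URI and values are invented examples, not ARTISTE's actual data.

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

g = Graph()
painting = URIRef("http://example.org/uffizi/objects/1234")  # hypothetical URI
g.add((painting, DC.title, Literal("Primavera")))
g.add((painting, DC.creator, Literal("Sandro Botticelli")))
g.add((painting, DC.type, Literal("painting")))

print(g.serialize(format="turtle"))

Because the description is plain RDF, additional schemas (for example, a thesaurus wrapper) can contribute statements about the same resource without disturbing the Dublin Core ones.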
Conclusions
RQ "In conclusion the Artiste project will result into an interesting and innovative system for the art analysis, indexing storage and navigation. The actual state of the art of content-based retrieval systems will be positively influenced by the development of the Artiste project, which will pursue the following goals: A solution which can be replicated to European galleries, museums, etc.; Deep-content analysis software based on object relational database technology.; Distributed links server software, user interfaces, and content-based navigation software.; A fully integrated prototype analysis environment.; Recommendations for the exploitation of the project solution by European museums and galleries. ; Recommendations for the exploitation of the technology in other sectors.; "Impact on standards" report detailing augmentations of Z39.50 with RDF." ... ""Not much research has been carried out worldwide on new algorithms for style-matching in art. This is probably not a major aim in Artiste but could be a spin-off if the algorithms made for specific author search requirements happen to provide data which can be combined with other data to help classify styles." >
SOW
DC "Four major European galleries (The Uffizi in Florence, The National Gallery and the Victoria and Albert Museum in London and the Louvre related restoration centre, Centre de Recherche et de Restauration des Mus├®es de France) are involved in the project. They will be joining forces with NCR, a leading player in database and Data Warehouse technology; Interactive Labs, the new media design and development facility of Italy's leading art publishing group, Giunti; IT Innovation, Web-based system developers; and the Department of Electronics and Computer Science at the University of Southampton. Together they will create web based applications and tools for the automatic indexing and retrieval of high-resolution art images by pictorial content and information."
Type
Electronic Journal
Title
Keeping Memory Alive: Practices for Preserving Digital Content at the National Digital Library Program of the Library of Congress
CA An overview of the major issues and initiatives in digital preservation at the Library of Congress. "In the medium term, the National Digital Library Program is focusing on two operational approaches. First, steps are taken during conversion that are likely to make migration or emulation less costly when they are needed. Second, the bit streams generated by the conversion process are kept alive through replication and routine refreshing supported by integrity checks. The practices described here provide examples of how those steps are implemented to keep the content of American Memory alive."
Phrases
<P1> The practices described here should not be seen as policies of the Library of Congress; nor are they suggested as best practices in any absolute sense. NDLP regards them as appropriate practices based on real experience, the nature and content of the originals, the primary purposes of the digitization, the state of technology, the availability of resources, the scale of the American Memory digital collection, and the goals of the program. They cover not just the storage of content and associated metadata, but also aspects of initial capture and quality review that support the long-term retention of content digitized from analog sources. <P2> The Library recognizes that digital information resources, whether born digital or converted from analog forms, should be acquired, used, and served alongside traditional resources in the same format or subject area. Such responsibility will include ensuring that effective access is maintained to the digital content through American Memory and via the Library's main catalog and, in coordination with the units responsible for the technical infrastructure, planning migration to new technology when needed. <P3> Refreshing can be carried out in a largely automated fashion on an ongoing basis. Migration, however, will require substantial resources, in a combination of processing time, out-sourced contracts, and staff time. Choice of appropriate formats for digital masters will defer the need for large-scale migration. Integrity checks and appropriate capture of metadata during the initial capture and production process will reduce the resource requirements for future migration steps. <warrant> We can be certain that migration of content to new data formats will be necessary at some point. The future will see industrywide adoption of new data formats with functional advantages over current standards. However, it will be difficult to predict exactly which metadata will be useful to support migration, when migration of master formats will be needed, and the nature and extent of resource needs. Human experts will need to decide when to undertake migration and develop tools for each migration step. <P4> Effective preservation of resources in digital form requires (a) attention early in the life-cycle, at the moment of creation, publication, or acquisition and (b) ongoing management (with attendant costs) to ensure continuing usability. <P5> The National Digital Library Program has identified several categories of metadata needed to support access and management for digital content. Descriptive metadata supports discovery through search and browse functions. Structural metadata supports presentation of complex objects by representing relationships between components, such as sequences of images. In addition, administrative metadata is needed to support management tasks, such as access control, archiving, and migration. Individual metadata elements may support more than one function, but the categorization of elements by function has proved useful. <P6> It has been recognized that metadata representations appropriate for manipulation and long-term retention may not always be appropriate for real-time delivery. <P7> It has also been realized that some basic descriptive metadata (at the very least a title or brief description) should be associated with the structural and administrative metadata. 
<P8> During 1999, an internal working group reviewed past experience and prototype exercises and compiled a core set of metadata elements that will serve the different functions identified. This set will be tested and refined as part of pilot activities during 2000. <P9> Master formats are well documented and widely deployed, preferably formal standards and preferably non-proprietary. Such choices should minimize the need for future migration or ensure that appropriate and affordable tools for migration will be developed by the industry. <warrant>
Conclusions
RQ "Developing long-term strategies for preserving digital resources presents challenges associated with the uncertainties of technological change. There is currently little experience on which to base predictions of how often migration to new formats will be necessary or desirable or whether emulation will prove cost-effective for certain categories of resources. ... Technological advances, while sure to present new challenges, will also provide new solutions for preserving digital content."
Type
Electronic Journal
Title
Electronic Records Research: Working Meeting May 28-30, 1997
CA Archivists are specifically concerned with records that are not easy to document -- records that are full of secret, proprietary or sensitive information, not to mention hardware and software dependencies. This front end of record-making and recordkeeping must be addressed as we define what electronic records are and are not, and how we are to deal with them.
Phrases
<P1> Driven by pragmatism, the University of Pittsburgh team looked for "warrant" in the sources considered authoritative by the practicioners of ancillary professions on whom archivists rely -- lawyers, auditors, IT personnel , etc. (p.3) <P2> If the record creating event and the requirements of 'recordness' are both known, focus shifts to capturing the metadata and binding it to the record contents. (p.7) <P3> A strong business case is still needed to justify the role of archivists in the creation of electronic record management systems. (p.10)
Conclusions
RQ Warrant needs to be looked at in different countries. Does the same core definition of what constitutes a record cut across state borders? What role do specific user needs play in complying with regulation and risk management?
CA Metadata is a key part of the information infrastructure necessary to organize and classify the massive amount of information on the Web. Metadata, just like the resources they describe, will range in quality and be organized around different principles. Modularity is critical to allow metadata schema designers to base their new creations on established schemas, thereby benefiting from best practices rather than reinventing elements each time. Extensibility and cost-effectiveness are also important factors. Controlled vocabularies provide greater precision and access. Multilingualism (translating specification documents into many languages) is an important step in fostering global metadata architecture(s).
Phrases
<P1> The use of controlled vocabularies is another important approach to refinement that improves the precision for descriptions and leverages the substantial intellectual investment made by many domains to improve subject access. (p.4) <P2> Standards typically deal with these issues through the complementary processes of internalization and localization: the former process relates to the creation of "neutral" standards, whereas the latter refers to the adaptation of such a neutral standard to a local context. (p.4)
Conclusions
RQ In order for the full potential of resource discovery that the Web could offer to be realized, a "convergence" of standards and semantics must occur.
The Semantic Web activity is a W3C project whose goal is to enable a 'cooperative' Web where machines and humans can exchange electronic content that has clear-cut, unambiguous meaning. This vision is based on the automated sharing of metadata terms across Web applications. The declaration of schemas in metadata registries advance this vision by providing a common approach for the discovery, understanding, and exchange of semantics. However, many of the issues regarding registries are not clear, and ideas vary regarding their scope and purpose. Additionally, registry issues are often difficult to describe and comprehend without a working example.
ISSN
1082-9873
Critical Arguments
CA "This article will explore the role of metadata registries and will describe three prototypes, written by the Dublin Core Metadata Initiative. The article will outline how the prototypes are being used to demonstrate and evaluate application scope, functional requirements, and technology solutions for metadata registries."
Phrases
<P1> Establishing a common approach for the exchange and re-use of data across the Web would be a major step towards achieving the vision of the Semantic Web. <warrant> <P2> The Semantic Web Activity statement articulates this vision as: 'having data on the Web defined and linked in a way that it can be used for more effective discovery, automation, integration, and reuse across various applications. The Web can reach its full potential if it becomes a place where data can be shared and processed by automated tools as well as by people.' <P3> In parallel with the growth of content on the Web, there have been increases in the amount and variety of metadata to manipulate this content. An inordinate amount of standards-making activity focuses on metadata schemas (also referred to as vocabularies or data element sets), and yet significant differences in schemas remain. <P4> Different domains typically require differentiation in the complexity and semantics of the schemas they use. Indeed, individual implementations often specify local usage, thereby introducing local terms to metadata schemas specified by standards-making bodies. Such differentiation undermines interoperability between systems. <P5> This situation highlights a growing need for access by users to in-depth information about metadata schemas and particular extensions or variations to schemas. Currently, these 'users' are human  people requesting information. <warrant> <P6> It would be helpful to make available easy access to schemas already in use to provide both humans and software with comprehensive, accurate and authoritative information. <warrant> <P7> The W3C Resource Description Framework (RDF) has provided the basis for a common approach to declaring schemas in use. At present the RDF Schema (RDFS) specification offers the basis for a simple declaration of schema. <P8> Even as it stands, an increasing number of initiatives are using RDFS to 'publish' their schemas. <P9> Registries provide 'added value' to users by indexing schemas relevant to a particular 'domain' or 'community of use' and by simplifying the navigation of terms by enabling multiple schemas to be accessed from one view. <warrant> <P10> Additionally, the establishment of registries to index terms actively being used in local implementations facilitates the metadata standards activity by providing implementation experience transferable to the standards-making process. <warrant> <P11> The overriding goal has been the development of a generic registry tool useful for registry applications in general, not just useful for the DCMI. <P12> The formulation of a 'definitive' set of RDF schemas within the DCMI that can serve as the recommended, comprehensive and accurate expression of the DCMI vocabulary has hindered the development of the DCMI registry. To some extent, this has been due to the changing nature of the RDF Schema specification and its W3C candidate recommendation status. However, it should be recognized that the lack of consensus within the DCMI community regarding the RDF schemas has proven to be equally as impeding. <P13> The automated sharing of metadata across applications is an important part of realizing the goal of the Semantic Web. Users and applications need practical solutions for discovering and sharing semantics. Schema registries provide a viable means of achieving this. <warrant>
Conclusions
RQ "Many of the issues regarding metadata registries are unclear and ideas regarding their scope and purpose vary. Additionally, registry issues are often difficult to describe and comprehend without a working example. The DCMI makes use of rapid prototyping to help solve these problems. Prototyping is a process of quickly developing sample applications that can then be used to demonstrate and evaluate functionality and technology."
SOW
DC "New impetus for the development of registries has come with the development activities surrounding creation of the Semantic Web. The motivation for establishing registries arises from domain and standardization communities, and from the knowledge management community." ... "The original charter for the DCMI Registry Working Group was to establish a metadata registry to support the activity of the DCMI. The aim was to enable the registration, discovery, and navigation of semantics defined by the DCMI, in order to provide an authoritative source of information regarding the DCMI vocabulary. Emphasis was placed on promoting the use of the Dublin Core and supporting the management of change and evolution of the DCMI vocabulary." ... "Discussions within the DCMI Registry Working Group (held primarily on the group's mailing list) have produced draft documents regarding application scope and functionality. These discussions and draft documents have been the basis for the development of registry prototypes and continue to play a central role in the iterative process of prototyping and feedback." ... The overall goal of the DCMI Registry Working Group (WG) is to provide a focus for continued development of the DCMI Metadata Registry. The WG will provide a forum for discussing registry-related activities and facilitating cooperation with the ISO 11179 community, the Semantic Web, and other related initiatives on issues of common interest and relevance.
Type
Electronic Journal
Title
Collection-Based Persistent Digital Archives - Part 1
The preservation of digital information for long periods of time is becoming feasible through the integration of archival storage technology from supercomputer centers, data grid technology from the computer science community, information models from the digital library community, and preservation models from the archivist's community. The supercomputer centers provide the technology needed to store the immense amounts of digital data that are being created, while the digital library community provides the mechanisms to define the context needed to interpret the data. The coordination of these technologies with preservation and management policies defines the infrastructure for a collection-based persistent archive. This paper defines an approach for maintaining digital data for hundreds of years through development of an environment that supports migration of collections onto new software systems.
ISBN
1082-9873
Critical Arguements
CA "Supercomputer centers, digital libraries, and archival storage communities have common persistent archival storage requirements. Each of these communities is building software infrastructure to organize and store large collections of data. An emerging common requirement is the ability to maintain data collections for long periods of time. The challenge is to maintain the ability to discover, access, and display digital objects that are stored within an archive, while the technology used to manage the archive evolves. We have implemented an approach based upon the storage of the digital objects that comprise the collection, augmented with the meta-data attributes needed to dynamically recreate the data collection. This approach builds upon the technology needed to support extensible database schema, which in turn enables the creation of data handling systems that interconnect legacy storage systems."
Phrases
<P1> The ultimate goal is to preserve not only the bits associated with the original data, but also the context that permits the data to be interpreted. <warrant> <P2> We rely on the use of collections to define the context to associate with digital data. The context is defined through the creation of semi-structured representations for both the digital objects and the associated data collection. <P3> A collection-based persistent archive is therefore one in which the organization of the collection is archived simultaneously with the digital objects that comprise the collection. <P4> The goal is to preserve digital information for at least 400 years. This paper examines the technical issues that must be addressed and presents a prototype implementation. <P5> Digital object representation. Every digital object has attributes that define its structure, physical context, and provenance, and annotations that describe features of interest within the object. Since the set of attributes (such as annotations) will vary across all objects within a collection, a semi-structured representation is needed. Not all digital objects will have the same set of associated attributes. <P6> If possible, a common information model should be used to reference the attributes associated with the digital objects, the collection organization, and the presentation interface. An emerging standard for a uniform data exchange model is the eXtended Markup Language (XML). <P7> A particular example of an information model is the XML Document Type Definition (DTD) which provides a description for the allowed nesting structure of XML elements. Richer information models are emerging such as XSchema (which provides data types, inheritance, and more powerful linking mechanisms) and XMI (which provides models for multiple levels of data abstraction). <P8> Although XML DTDs were originally applied to documents only, they are now being applied to arbitrary digital objects, including the collections themselves. More generally, OSDs can be used to define the structure of digital objects, specify inheritance properties of digital objects, and define the collection organization and user interface structure. <P9> A persistent collection therefore needs the following components of an OSD to completely define the collection context: data dictionary for collection semantics; digital object structure; collection structure; and user interface structure. <P10> The re-creation or instantiation of the data collection is done with a software program that uses the schema descriptions that define the digital object and collection structure to generate the collection. The goal is to build a generic program that works with any schema description. <P11> The information for which driver to use for access to a particular data set is maintained in the associated Meta-data Catalog (MCAT). The MCAT system is a database containing information about each data set that is stored in the data storage systems. <P12> The data handling infrastructure developed at SDSC has two components: the SDSC Storage Resource Broker (SRB) that provides federation and access to distributed and diverse storage resources in a heterogeneous computing environment, and the Meta-data Catalog (MCAT) that holds systemic and application or domain-dependent meta-data about the resources and data sets (and users) that are being brokered by the SRB. <P13> A client does not need to remember the physical mapping of a data set. It is stored as meta-data associated with the data set in the MCAT catalog. <P14> A characterization of a relational database requires a description of both the logical organization of attributes (the schema), and a description of the physical organization of attributes into tables. For the persistent archive prototype we used XML DTDs to describe the logical organization. <P15> A combination of the schema and physical organization can be used to define how queries can be decomposed across the multiple tables that are used to hold the meta-data attributes. <P16> By using an XML-based database, it is possible to avoid the need to map between semi-structured and relational organizations of the database attributes. This minimizes the amount of information needed to characterize a collection, and makes the re-creation of the database easier. <warrant> <P17> Digital object attributes are separated into two classes of information within the MCAT: system-level meta-data that provides operational information, including information about resources (e.g., archival systems, database systems, and their capabilities, protocols, etc.) and data objects (e.g., their formats or types, replication information, location, collection information, etc.); and application-dependent meta-data that provides information specific to particular data sets and their collections (e.g., Dublin Core values for text objects). <P18> Internally, MCAT keeps schema-level meta-data about all of the attributes that are defined. The schema-level attributes are used to define the context for a collection and enable the instantiation of the collection on new technology. <P19> The logical structures should not be confused with database schemas and are more general than that. For example, we have implemented the Dublin Core database schema to organize attributes about digitized text. The attributes defined in the logical structure that is associated with the Dublin Core schema contain information about the subject, constraints, and presentation formats that are needed to display the schema, along with information about its use and ownership. <P20> The MCAT system supports the publication of schemata associated with data collections, schema extension through the addition or deletion of new attributes, and the dynamic generation of the SQL that corresponds to joins across combinations of attributes. <P21> By adding routines to access the schema-level meta-data from an archive, it is possible to build a collection-based persistent archive. As technology evolves and the software infrastructure is replaced, the MCAT system can support the migration of the collection to the new technology.
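To make the instantiation step in P10 concrete, here is a minimal, hypothetical sketch of a generic program that re-creates a collection purely from a stored schema description plus the archived objects' attributes; the element names and records are invented and do not reflect the SDSC/MCAT formats:

```python
# Minimal sketch: a collection is re-created from a schema description.
# The schema and records here are illustrative only.
import xml.etree.ElementTree as ET

schema = {                      # semi-structured description of the object type
    "element": "email",
    "attributes": ["sender", "recipient", "date", "subject"],
}

records = [
    {"sender": "a@example.org", "recipient": "b@example.org",
     "date": "1999-03-01", "subject": "quarterly report"},
]

def instantiate(schema, records):
    """Rebuild an XML collection from any schema description of this shape."""
    root = ET.Element(schema["element"] + "-collection")
    for rec in records:
        obj = ET.SubElement(root, schema["element"])
        for attr in schema["attributes"]:
            if attr in rec:                  # attributes may vary per object
                ET.SubElement(obj, attr).text = rec[attr]
    return root

print(ET.tostring(instantiate(schema, records), encoding="unicode"))
```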
Conclusions
RQ Collection-Based Persistent Digital Archives - Part 2
SOW
DC "The technology proposed by SDSC for implementing persistent archives builds upon interactions with many of these groups. Explicit interactions include collaborations with Federal planning groups, the Computational Grid, the digital library community, and individual federal agencies." ... "The data management technology has been developed through multiple federally sponsored projects, including the DARPA project F19628-95-C-0194 "Massive Data Analysis Systems," the DARPA/USPTO project F19628-96-C-0020 "Distributed Object Computation Testbed," the Data Intensive Computing thrust area of the NSF project ASC 96-19020 "National Partnership for Advanced Computational Infrastructure," the NASA Information Power Grid project, and the DOE ASCI/ASAP project "Data Visualization Corridor." Additional projects related to the NSF Digital Library Initiative Phase II and the California Digital Library at the University of California will also support the development of information management technology. This work was supported by a NARA extension to the DARPA/USPTO Distributed Object Computation Testbed, project F19628-96-C-0020."
Type
Electronic Journal
Title
Collection-Based Persistent Digital Archives - Part 2
"Collection-Based Persistent Digital Archives: Part 2" describes the creation of a one million message persistent E-mail collection. It discusses the four major components of a persistent archive system: support for ingestion, archival storage, information discovery, and presentation of the collection. The technology to support each of these processes is still rapidly evolving, and opportunities for further research are identified.
ISBN
1082-9873
Critical Arguements
CA "The multiple migration steps can be broadly classified into a definition phase and a loading phase. The definition phase is infrastructure independent, whereas the loading phase is geared towards materializing the processes needed for migrating the objects onto new technology. We illustrate these steps by providing a detailed description of the actual process used to ingest and load a million-record E-mail collection at the San Diego Supercomputer Center (SDSC). Note that the SDSC processes were written to use the available object-relational databases for organizing the meta-data. In the future, it may be possible to go directly to XML-based databases."
Phrases
<P1> The processes used to ingest a collection, transform it into an infrastructure independent form, and store the collection in an archive comprise the persistent storage steps of a persistent archive. The processes used to recreate the collection on new technology, optimize the database, and recreate the user interface comprise the retrieval steps of a persistent archive. <P2> In order to build a persistent collection, we consider a solution that "abstracts" all aspects of the data and its preservation. In this approach, data objects and processes are codified by raising them above the machine/software dependent forms to an abstract format that can be used to recreate the object and the processes in any new desirable forms. <P3> The SDSC infrastructure uses object-relational databases to organize information. This makes data ingestion more complex by requiring the mapping of the XML DTD semi-structured representation onto a relational schema. <P4> The steps used to store the persistent archive were: (1) Define Digital Object: define meta-data, define object structure (OBJ-DTD) --- (A), define object DTD to object DDL mapping --- (B); (2) Define Collection: define meta-data, define collection structure (COLL-DTD) --- (C), define collection DTD structure to collection DDL mapping --- (D); (3) Define Containers: define packing format for encapsulating data and meta-data (examples are the AIP standard, Hierarchical Data Format, Document Type Definition). <P5> In the ingestion phase, the relational and semi-structured organization of the meta-data is defined. No database is actually created, only the mapping between the relational organization and the object DTD. <P6> Note that the collection relational organization does not have to encompass all of the attributes that are associated with a digital object. Separate information models are used to describe the objects and the collections. It is possible to take the same set of digital objects and form a new collection with a new relational organization. <P7> Multiple communities across academia, the federal government, and standards groups are exploring strategies for managing very large archives. The persistent archive community needs to maintain interactions with these communities to track development of new strategies for data management and storage. <warrant>
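As a sketch of the DTD-to-DDL mapping steps (B and D above), the following illustrative Python turns a logical description of a collection's attributes into a relational table definition; the table, column and type names are invented:

```python
# Minimal sketch of mapping a semi-structured logical description onto
# relational DDL. Names and SQL types are illustrative only.
collection = {
    "table": "email_collection",
    "attributes": {             # the logical organization (the "schema")
        "sender": "VARCHAR(256)",
        "recipient": "VARCHAR(256)",
        "sent_date": "DATE",
        "subject": "VARCHAR(512)",
    },
}

def to_ddl(desc):
    """Generate a CREATE TABLE statement from the logical description."""
    cols = ",\n  ".join(f"{name} {sqltype}"
                        for name, sqltype in desc["attributes"].items())
    return f"CREATE TABLE {desc['table']} (\n  {cols}\n);"

print(to_ddl(collection))
```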
Conclusions
RQ "The four major components of the persistent archive system are support for ingestion, archival storage, information discovery, and presentation of the collection. The first two components focus on the ingestion of data into collections. The last two focus on access to the resulting collections. The technology to support each of these processes is still rapidly evolving. Hence consensus on standards has not been reached for many of the infrastructure components. At the same time, many of the components are active areas of research. To reach consensus on a feasible collection-based persistent archive, continued research and development is needed. Examples of the many related issues are listed below:
Type
Electronic Journal
Title
Search for Tomorrow: The Electronic Records Research Program of the U.S. National Historical Publications and Records Commission
The National Historical Publications and Records Commission (NHPRC) is a small grant-making agency affiliated with the U.S. National Archives and Records Administration. The Commission is charged with promoting the preservation and dissemination of documentary source materials to ensure an understanding of U.S. history. Recognizing that the increasing use of computers created challenges for preserving the documentary record, the Commission adopted a research agenda in 1991 to promote research and development on the preservation and continued accessibility of documentary materials in electronic form. From 1991 to the present the Commission awarded 31 grants totaling $2,276,665 for electronic records research. Most of this research has focused on two issues of central concern to archivists: (1) electronic record keeping (tools and techniques to manage electronic records produced in an office environment, such as word processing documents and electronic mail), and (2) best practices for storing, describing, and providing access to all electronic records of long-term value. NHPRC grants have raised the visibility of electronic records issues among archivists. The grants have enabled numerous archives to begin to address electronic records problems, and, perhaps most importantly, they have stimulated discussion about electronic records among archivists and records managers.
Publisher
Elsevier Science Ltd
Critical Arguements
CA "The problem of maintaining electronic records over time is big, expensive, and growing. A task force on digital archives established by the Commission on Preservation and Access in 1994 commented that the life of electronic records could be characterized in the same words Thomas Hobbes once used to describe life: ÔÇ£nasty, brutish, and shortÔÇØ [1]. Every day, thousands of new electronic files are created on federal, state, and local government computers across the nation. A small but important portion of these records will be designated for permanent retention. Government agencies are increasingly relying on computers to maintain information such as census files, land titles, statistical data, and vital records. But how should electronic records with long-term value be maintained? Few government agencies have developed comprehensive policies for managing current electronic records, much less preserving those with continuing value for historians and other researchers. Because of this serious and growing problem, the National Historical Publications and Records Commission (NHPRC), a small grantmaking agency affiliated with the U.S. National Archives and Records Administration (NARA), has been making grants for research and development on the preservation and use of electronic documentary sources. The program is conducted in concert with NARA, which in 1996 issued a strategic plan that gives high priority to mastering electronic records problems in partnership with federal government agencies and the NHPRC.
Phrases
<P1> How can data dictionaries, information resource directory systems, and other metadata systems be used to support electronic records management and archival requirements? <P2> In spite of the number of projects the Commission has supported, only four questions from the research agenda have been addressed to date. Of these, the question relating to requirements for the development of data dictionaries and other metadata systems (question number four) has produced a single grant for a state information locator system in South Carolina, and the question relating to needs for archival education (question 10) has led to two grants to the Society of American Archivists for curricular materials. <P3> Information systems created without regard for these considerations may have deficiencies that limit the usefulness of the records contained on them. <warrant> <P4> The NHPRC has awarded major grants to four institutions over the past five years for projects to develop and test requirements for electronic record keeping: University of Pittsburgh (1993): a working set of functional requirements and metadata specifications for electronic record keeping systems; City of Philadelphia (1995, 1996, and 1997): a project to incorporate a subset of the Pittsburgh metadata specifications into a new human resources information system and other city systems as test cases and to develop comprehensive record keeping policies and standards for the city's information technology systems; Indiana University (1995): a project to develop an assessment tool and methodology for analyzing existing electronic records systems, using the Pittsburgh functional requirements as a model and the student academic record system and a financial system as test cases; Research Foundation of the State University of New York-Albany, Center for Technology in Government (1996): a project to identify best practices for electronic record keeping, including work by the U.S. Department of Defense and the University of British Columbia in addition to the University of Pittsburgh. The Center is working with the state's Adirondack Parks Agency in a pilot project to develop a system model for incorporating record keeping and archival considerations into the creation of networked computing and communications applications. <P5> No definitive solution has yet been identified for the problems posed by electronic records, although progress has been made in learning what will be needed to design functional electronic record keeping systems. <P6> With the proliferation of digital libraries, the need for long-term storage, migration and retrieval strategies for electronic information has become a priority for a wide variety of information providers. <warrant>
Conclusions
RQ "How best to preserve existing and future electronic formats and provide access to them over time has remained elusive. The answers cannot be found through theoretical research alone, or even through applied research, although both are needed. Answers can only emerge over time as some approaches prove able to stand the test of time and others do not. The problems are large because the costs of maintaining, migrating, and retrieving electronic information continue to be high." ... "Perhaps most importantly, these grants have stimulated widespread discussion of electronic records issues among archivists and record managers, and thus they have had an impact on the preservation of the electronic documentary record that goes far beyond the CommissionÔÇÖs investment."
SOW
DC The National Historical Publications and Records Commission (NHPRC) is the outreach arm of the National Archives and makes plans for and studies issues related to the preservation, use and publication of historical documents. The Commission also makes grants to non-Federal archives and other organizations to promote the preservation and use of America's documentary heritage.
Type
Electronic Journal
Title
The Dublin Core Metadata Inititiative: Mission, Current Activities, and Future Directions
Metadata is a keystone component for a broad spectrum of applications that are emerging on the Web to help stitch together content and services and make them more visible to users. The Dublin Core Metadata Initiative (DCMI) has led the development of structured metadata to support resource discovery. This international community has, over a period of 6 years and 8 workshops, brought forth: A core standard that enhances cross-disciplinary discovery and has been translated into 25 languages to date; A conceptual framework that supports the modular development of auxiliary metadata components; An open consensus building process that has brought to fruition Australian, European and North American standards with promise as a global standard for resource discovery; An open community of hundreds of practitioners and theorists who have found a common ground of principles, procedures, core semantics, and a framework to support interoperable metadata.
This document presents the ARTISTE three-level approach to providing an open and flexible solution for combined metadata and image content-based search and retrieval across multiple, distributed image collections. The intended audience for this report includes museum and gallery owners who are interested in providing or extending services for remote access, developers of collection management and image search and retrieval systems, and standards bodies in both the fine art and digital library domains.
Notes
ARTISTE (http://www.artisteweb.org/) is a European Commission supported project that has developed integrated content and metadata-based image retrieval across several major art galleries in Europe. Collaborating galleries include the Louvre in Paris, the Victoria and Albert Museum in London, the Uffizi Gallery in Florence and the National Gallery in London.
Edition
Version 2.0
Publisher
The ARTISTE Consortium
Publication Location
Southampton, United Kingdom
Accessed Date
08/24/05
Critical Arguements
CA Over the last two and a half years, ARTISTE has developed an image search and retrieval system that integrates distributed, heterogeneous image collections. This report positions the work achieved in ARTISTE with respect to metadata standards and approaches for open search and retrieval using digital library technology. In particular, this report describes three key aspects of ARTISTE: the transparent translation of local metadata to common standards such as Dublin Core and SIMI consortium attribute sets to allow cross-collection searching; a methodology for combining metadata and image content-based analysis into single search galleries to enable versatile retrieval and navigation facilities within and between gallery collections; and an open interface for cross-collection search and retrieval that advances existing open standards for remote access to digital libraries, such as OAI (Open Archive Initiative) and ZING SRW (Z39.50 International: Next Generation Search and Retrieval Web Service).
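A minimal sketch of the metadata-translation idea, with invented field names (not ARTISTE's actual RDF mappings): a query phrased against one gallery's local fields is rewritten against Dublin Core before being forwarded to the other collections:

```python
# Illustrative crosswalk from hypothetical local fields to Dublin Core.
LOCAL_TO_DC = {
    "artiste_du_tableau": "dc:creator",   # invented Louvre-style field names
    "titre": "dc:title",
    "date_de_creation": "dc:date",
}

def translate_query(local_query):
    """Rewrite {local_field: value} into {dublin_core_field: value}."""
    return {LOCAL_TO_DC[f]: v for f, v in local_query.items() if f in LOCAL_TO_DC}

print(translate_query({"titre": "La Joconde", "date_de_creation": "1503"}))
# {'dc:title': 'La Joconde', 'dc:date': '1503'}
```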
Conclusions
RQ "A large part of ARTISTE is concerned with use of existing standards for metadata frameworks. However, one area where existing standards have not been sufficient is multimedia content-based search and retrieval. A proposal has been made to ZING for additions to SRW. This will hopefully enable ARTISTE to make a valued contribution to this rapidly evolving standard." ... "The work started in ARTISTE is being continued in SCULTEUR, another project funded by the European Commission. SCUPLTEUR will develop both the technology and the expertise to create, manage, and present cultural archives of 3D models and associated multimedia objects." ... "We believe the full benefit of multimedia search and retrieval can only be realised through seamless integration of content-based analysis techniques. However, not only does introduction of content-bases analysis require modification to existing standards as outlines in this report, but it also requires a review if the use of semantics in achieving digital library interoperability. In particular, machine understandable description of the semantics of textual metadata, multimedia content, and content-based analysis, can provide a foundation for a new generation of flexible and dynamic digital library tools and services. " ... "Existing standards do not use explicit semantics to describe query operators or their application to metadata and multimedia content at individual sites. However, dynamically determining what operators and types are supported by a collection is essential to robust and efficient cross-collection searching. Dynamic use of published semantics would allow a collection and any associated content-based analysis to be changed  by its owner without breaking conformance to search and retrieval standards. Furthermore, individual sites would not need to publish detailed, human readable descriptions of available functionality.  
SOW
DC "Four major European galleries are involved in the project: the Uffizi in Florence, the national Gallery and the Victoria and Albert Museum in London, and the Centre de Recherche et de Restauration des Musees de France (C2RMF) which is the Louvre related restoration centre. The ARTISTE system currently holds over 160,000 images from four separate collections owned by these partners. The galleries have partnered with NCR, leading player in database and Data Warehouse technology; Interactive Labs, the new media design and development facility of Italy's leading art publishing group, Giunti; IT Innovation, a specialist in building innovative IT systems, and the Department of Electronics and Computer Science at the University of Southhampton." 
Type
Report
Title
Advice: Introduction to the Victorian Electronic Records Strategy (VERS) PROS 99/007 (Version 2)
This document is an introduction to the PROV Standard for the Management of Electronic Records (PROS 99/007), also known as the VERS Standard. It provides background information on the goals of VERS and on the VERS approach to preservation. Nothing in this document imposes any requirements on agencies.
Critical Arguements
CA The Victorian Electronic Records Strategy (VERS) addresses the cost-effective, long-term preservation of electronic records. The structure and requirements of VERS are formally specified in the Standard for the Management of Electronic Records (PROS 99/007) and its five technical specifications. This Advice provides background to the Standard. It covers: the history of the VERS project; the preservation theory behind VERS; how the five specifications support the preservation theory; and a brief introduction to the VERS Encapsulated Object (VEO). In this document we distinguish between the record and the content of the record. The content is the actual information contained in the record; for example, the report or the image. The record as a whole contains the record content and metadata that contains information about the record, including its context, description, history, and integrity control.
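Purely as an illustration of the encapsulation idea (element names are invented; the real VEO format is defined by the PROS 99/007 specifications), a record's content and metadata might be wrapped into one self-describing object like this:

```python
# Illustrative sketch of wrapping record content plus metadata into a single
# self-describing XML object, in the spirit of a VERS Encapsulated Object.
import base64
import xml.etree.ElementTree as ET

def encapsulate(content: bytes, metadata: dict) -> str:
    veo = ET.Element("EncapsulatedObject")
    meta = ET.SubElement(veo, "Metadata")
    for field, value in metadata.items():    # context, description, history
        ET.SubElement(meta, field).text = value
    ET.SubElement(veo, "Content",
                  encoding="base64").text = base64.b64encode(content).decode()
    return ET.tostring(veo, encoding="unicode")

print(encapsulate(b"Annual report 1999 ...",
                  {"Title": "Annual Report", "Creator": "Records Office"}))
```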
Conclusions
RQ
SOW
DC Public Record Office Victoria is the archives of the State Government of Victoria. It holds records from the beginnings of the colonial administration of Victoria in the mid-1830s to today and is responsible for ensuring the accountability of the Victorian State Government.
Type
Report
Title
Introduction to the Victorian Electronic Records Strategy (VERS) PROS 99/007 (Version 2)
CA VERS has two major goals: the preservation of electronic records and efficient management in doing so. Version 2 has an improved structure, additional metadata elements, requirements for preservation, and compliance requirements for agencies. "Export" compliance allows agencies to maintain their records within their own recordkeeping systems and add a module so they can generate the VERS format for export, especially for long-term preservation. "Native" compliance, in which records are converted to the long-term preservation format upon registration, is seen as the ideal approach.
Type
Web Page
Title
Archiving The Avant Garde: Documenting And Preserving Variable Media Art.
Archiving the Avant Garde is a collaborative project to develop, document, and disseminate strategies for describing and preserving non-traditional, intermedia, and variable media art forms, such as performance, installation, conceptual, and digital art. This joint project builds on existing relationships and the previous work of its founding partners in this area. One example of such work is the Conceptual & Intermedia Arts Online (CIAO) Consortium, a collaboration founded by the BAM/PFA, the Walker Art Center, and Franklin Furnace, that includes 12 other international museums and arts organizations. CIAO develops standardized methods of documenting and providing access to conceptual and other ephemeral intermedia art forms. Another example of related work conducted by the project's partners is the Variable Media Initiative, organized by the Guggenheim Museum, which encourages artists to define their work independently from medium so that the work can be translated once its current medium is obsolete. Archiving the Avant Garde will take the ideas developed in previous efforts and develop them into community-wide working strategies by testing them on specific works of art in the practical working environments of museums and arts organizations. The final project report will outline a comprehensive strategy and model for documenting and preserving variable media works, based on case studies to illustrate practical examples, but always emphasizing the generalized strategy behind the rule. This report will be informed by specific and practical institutional practice, but we believe that the ultimate model developed by the project should be based on international standards independent of any one organization's practice, thus making it adaptable to many organizations. Dissemination of the report, discussed in detail below, will be ongoing and widespread.
Critical Arguements
CA "Works of variable media art, such as performance, installation, conceptual, and digital art, represent some of the most compelling and significant artistic creation of our time. These works are key to understanding contemporary art practice and scholarship, but because of their ephemeral, technical, multimedia, or otherwise variable natures, they also present significant obstacles to accurate documentation, access, and preservation. The works were in many cases created to challenge traditional methods of art description and preservation, but now, lacking such description, they often comprise the more obscure aspects of institutional collections, virtually inaccessible to present day researchers. Without strategies for cataloging and preservation, many of these vital works will eventually be lost to art history. Description of and access to art collections promote new scholarship and artistic production. By developing ways to catalog and preserve these collections, we will both provide current and future generations the opportunity to learn from and be inspired by the works and ensure the perpetuation and accuracy of art historical records. It is to achieve these goals that we are initiating the consortium project Archiving the Avant Garde: Documenting and Preserving Variable Media Art."
Conclusions
RQ "Archiving the Avant Garde will take a practical approach to solving problems in order to ensure the feasibility and success of the project. This project will focus on key issues previously identified by the partners and will leave other parts of the puzzle to be solved by other initiatives and projects in regular communication with this group. For instance, this project realizes that the arts community will need to develop software tools which enable collections care professionals to implement the necessary new description and metadata standards, but does not attempt to develop such tools in the context of this project. Rather, such tools are already being developed by a separate project under MOAC. Archiving the Avant Garde will share information with that project and benefit from that work. Similarly, the prospect of developing full-fledged software emulators is one best solved by a team of computer scientists, who will work closely with members of the proposed project to cross-fertilize methods and share results. Importantly, while this project is focused on immediate goals, the overall collaboration between the partner organizations and their various initiatives will be significant in bringing together the computer science, arts, standards, and museum communities in an open-source project model to maximize collective efforts and see that the benefits extend far and wide."
SOW
DC "We propose a collaborative project that will begin to establish such professional best practice. The collaboration, consisting of the Berkeley Art Museum and Pacific Film Archive (BAM/PFA), the Solomon R. Guggenheim Museum, Rhizome.org, the Franklin Furnace Archive, and the Cleveland Performance Art Festival and Archive, will have national impact due to the urgent and universal nature of the problem for contemporary art institutions, the practicality and adaptability of the model developed by this group, and the significant expertise that this nationwide consortium will bring to bear in the area of documenting and preserving variable media art." ... "We believe that a model informed by and tested in such diverse settings, with broad public and professional input (described below), will be highly adaptable." ..."Partners also represent a geographic and national spread, from East Coast to Midwest to West Coast. This coverage ensures that a wide segment of the professional community and public will have opportunities to participate in public forums, hosted at partner institutions during the course of the project, intended to gather an even broader cross-section of ideas and feedback than is represented by the partners." ... "The management plan for this project will be highly decentralized ensuring that no one person or institution will unduly influence the model strategy for preserving variable media art and thereby reduce its adaptability."
CA "The purpose of this document is: (1) To provide a better understanding of the functionality that the MPEG-21 multimedia framework should be capable of providing; (2) To offer high level descriptions of different MPEG-21 applications against which the formal requirements for MPEG-21 can be checked; (3) To act as a basis for devising Core Experiments which establish proof of concept; (4) To provide a point of reference to support the evaluation of responses submitted against ongoing MPEG-21 Calls for Proposals; (5) To be a 'Public Relations' instrument that can help to explain what MPEG-21 is about."
Conclusions
RQ not applicable
SOW
DC The Moving Picture Experts Group (MPEG) is a working group of ISO/IEC, made up of some 350 members from various industries and universities, in charge of the development of international standards for compression, decompression, processing, and coded representation of moving pictures, audio and their combination. MPEG's official designation is ISO/IEC JTC1/SC29/WG11. So far MPEG has produced the following compression formats and ancillary standards: MPEG-1, the standard for storage and retrieval of moving pictures and audio on storage media (approved Nov. 1992); MPEG-2, the standard for digital television (approved Nov. 1994); MPEG-4, the standard for multimedia applications; MPEG-7, the content representation standard for multimedia information search, filtering, management and processing; and MPEG-21, the multimedia framework.
CA Discussion of the challenges faced by librarians and archivists who must determine which and how much of the mass amounts of digitally recorded sound materials to preserve. Identifies various types of digital sound formats and the varying standards to which they are created. Specific challenges discussed include copyright issues; technologies and platforms; digitization and preservation; and metadata and other standards.
Conclusions
RQ "Whether between record companies and archives or with others, some type of collaborative approach to audio preservation will be necessary if significant numbers of audio recordings at risk are to be preserved for posterity. ... One particular risk of preservation programs now is redundancy. ... Inadequate cataloging is a serious impediment to preservation efforts. ... It would be useful to archives, and possibly to intellectual property holders as well, if archives could use existing industry data for the bibliographic control of published recordings and detailed listings of the music recorded on each disc or tape. ... Greater collaboration between libraries and the sound recording industry could result in more comprehensive catalogs that document recording sessions with greater specificity. With access to detailed and authoritative information about the universe of published sound recordings, libraries could devote more resources to surveying their unpublished holdings and collaborate on the construction of a preservation registry to help reduce preservation redundancy. ... Many archivists believe that adequate funding for preservation will not be forthcoming unless and until the recordings preserved can be heard more easily by the public. ... If audio recordings that do not have mass appeal are to be preserved, that responsibility will probably fall to libraries and archives. Within a partnership between archives and intellectual property owners, archives might assume responsibility for preserving less commercial music in return for the ability to share files of preserved historical recordings."
There are many types of standards used to manage museum collections information. These "standards", which range from precise technical standards to general guidelines, enable museum data to be efficiently and consistently indexed, sorted, retrieved, and shared, both in automated and paper-based systems. Museums often use metadata standards (also called data structure standards) to help them: define what types of information to record in their database (or card catalogue); structure this information (the relationships between the different types of information). Following (or mapping data to) these standards makes it possible for museums to move their data between computer systems, or share their data with other organizations.
Notes
The CHIN Web site features sections dedicated to Creating and Managing Digital Content, Intellectual Property, Collections Management, Standards, and more. CHIN's array of training tools, online publications, directories and databases are especially designed to meet the needs of both small and large institutions. The site also provides access to up-to-date information on topics such as heritage careers, funding and conferences.
Critical Arguements
CA "Museums often want to use their collections data for many purposes, (exhibition catalogues, Web access for the public, and curatorial research, etc.), and they may want to share their data with other museums, archives, and libraries in an automated way. This level of interoperability between systems requires cataloguing standards, value standards, metadata standards, and interchange standards to work together. Standards enable the interchange of data between cataloguer and searcher, between organizations, and between computer systems."
Conclusions
RQ "HIN is also involved in a project to create metadata for a pan-Canadian inventory of learning resources available on Canadian museum Web sites. Working in consultation with the Consortium for the Interchange of Museum Information (CIMI), the Gateway to Educational Materials (GEM) [link to GEM in Section G], and SchoolNet, the project involves the creation of a Guide to Best Practices and cataloguing tool for generating metadata for online learning materials. " 
SOW
DC "CHIN is involved in the promotion, production, and analysis of standards for museum information. The CHIN Guide to Museum Documentation Standards includes information on: standards and guidelines of interest to museums; current projects involving standards research and implementation; organizations responsible for standards research and development; Links." ... "CHIN is a member of CIMI (the Consortium for the Interchange of Museum Information), which works to enable the electronic interchange of museum information. From 1998 to 1999, CHIN participated in a CIMI Metadata Testbed which aimed to explore the creation and use of metadata for facilitating the discovery of electronic museum information. Specifically, the project explored the creation and use of Dublin Core metadata in describing museum collections, and examined how Dublin Core could be used as a means to aid in resource discovery within an electronic, networked environment such as the World Wide Web." 
This is one of a series of guides produced by the Cedars digital preservation project. This guide concentrates on the technical approaches that Cedars recommends as a result of its experience. The accent is on preservation, without which continued access is not possible. The time scale is at least decades, i.e. way beyond the lifetime of any hardware technology. The overall preservation strategy is to remove the data from its medium of acquisition and to preserve the digital content as a stream of bytes. There is good reason to be confident that data held as a stream of bytes can be preserved indefinitely. Just as there is no access without preservation, preservation with no prospect of future access is a very sterile exercise. As well as preserving the data as a byte-stream, Cedars adds in metadata. This includes reference to facilities (called technical metadata in this document) for accessing the intellectual content of the preserved data. This technical metadata will usually include actual software for use in accessing the data. It will be stored as a preserved object in the overall archive store, and will be revised as technology evolves making new methods of access to preserved objects appropriate. There will be big economies of scale, as most, if not all, objects of the same type will share the same technical metadata. Cedars recommends against repeated format conversions, and instead argues for keeping the preserved byte-stream, while tracking evolving technology by maintaining the technical metadata. It is for this reason that Cedars includes only a reference to the technical metadata in the preserved data object. Thus future users of the object will be pointed to information appropriate to their own era, rather than that of the object's preservation. The monitoring and updating of this aspect of the technical metadata is a vital function of the digital library. In practice, Cedars expects that very many preserved digital objects will be in the same format, and will reference the same technical metadata. Access to a preserved object then involves Migration on Request, in that any necessary migration from an obsolete format to an appropriate current day format happens at the point of request. As well as recommending actions to be taken to preserve digital objects, Cedars also recommends the use of a permanent naming scheme, with a strong recommendation that such a scheme should be infinitely extensible.
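A toy sketch of Migration on Request as described above (format names and the converter are invented): the preserved byte-stream is never rewritten, and the shared technical-metadata entry is consulted only at the point of access:

```python
# Minimal sketch: preserved byte-streams reference a shared technical-metadata
# entry; any migration happens at access time. All names are illustrative.
TECHNICAL_METADATA = {
    # one entry shared by every preserved object of the same format;
    # the library revises these converters as technology evolves
    "wordstar-3": {"migrate": lambda b: b.decode("ascii", "replace")},
}

ARCHIVE = {
    "obj-0001": {"bytes": b"preserved byte-stream ...",
                 "format_ref": "wordstar-3"},    # a reference, not a copy
}

def access(object_id):
    obj = ARCHIVE[object_id]
    tool = TECHNICAL_METADATA[obj["format_ref"]]["migrate"]
    return tool(obj["bytes"])    # migration happens only when requested

print(access("obj-0001"))
```

Because the technical metadata is stored once per format and merely referenced by each object, revising it tracks evolving technology without touching the preserved byte-streams.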
Critical Arguements
CA "This document is intended to inform technical practitioners in the actual preservation of digital materials, and also to highlight to library management the importance of this work as continuing their traditional scholarship role into the 21st century."
This document provides some background on preservation metadata for those interested in digital preservation. It first attempts to explain why preservation metadata is seen as an essential part of most digital preservation strategies. It then gives a broad overview of the functional and information models defined in the Reference Model for an Open Archival Information System (OAIS) and describes the main elements of the Cedars outline preservation metadata specification. The next sections take a brief look at related metadata initiatives, make some recommendations for future work and comment on cost issues. At the end there are some brief recommendations for collecting institutions and the creators of digital content followed by some suggestions for further reading.
Critical Arguements
CA "This document is intended to provide a brief introduction to current preservation metadata developments and introduce the outline metadata specifications produced by the Cedars project. It is aimed in particular at those who may have responsibility for digital preservation in the UK further and higher education community, e.g. senior staff in research libraries and computing services. It should also be useful for those undertaking digital content creation (digitisation) initiatives, although it should be noted that specific guidance on this is available elsewhere. The guide may also be of interest to other kinds of organisations that have an interest in the long-term management of digital resources, e.g. publishers, archivists and records managers, broadcasters, etc. This document aimes to provide: A rationale for the creation and maintenance of preservation metadata to support digital preservation strategies, e.g. migration or emulation; An introduction to the concepts and terminology used in the influential ISO Reference Model for an Open Archival Information System (OAIS); Brief information on the Cedars outline preservation metadata specification and the outcomes of some related metadata initiatives; Some notes on the cost implications of preservation metadata and how these might be reduced.
Conclusions
RQ "In June 2000, a group of archivists, computer scientists and metadata experts met in the Netherlands to discuss metadata developments related to recordkeeping and the long-term preservation of archives. One of the key conclusions made at this working meeting was that the recordkeeping metadata communities should attempt to co-operate more with other metatdata initiatives. The meeting also suggested research into the contexts of creation and use, e.g. identifying factors that might encourage or discourage creators form meeting recordkeeping metadata requirements. This kind of research would also be useful for wider preservation metadata developments. One outcome of this meeting was the setting up of an Archiving Metadata Forum (AMF) to form the focus of future developments." ... "Future work on preservation metadata will need to focus on several key issues. Firstly, there is an urgent need for more practical experience of undertaking digital preservation strategies. Until now, many preservation metadata initiatives have largely been based on theoretical considerations or high-level models like the OAIS. This is not in itself a bad thing, but it is now time to begin to build metadata into the design of working systems that can test the viability of digital preservation strategies in a variety of contexts. This process has already begun in initiatives like the Victorian Electronic Records Stategy and the San Diego Supercomputer Center's 'self-validating knowledge-based archives'. A second need is for increased co-operation between the many metadata initiatives that have an interest in digital preservation. This may include the comparison and harmonisation of various metadata specifications, where this is possible. The OCLC/LG working group is an example of how this has been taken forward whitin a particular domain. There is a need for additional co-operation with recordkeeping metadata specialists, computing scientists and others in the metadata research community. Thirdly, there is a need for more detailed research into how metadata will interact with different formats, preservation strategies and communities of users. This may include some analysis of what metadata could be automatically extracted as part of the ingest process, an investigation of the role of content creators in metadata provision, and the production of user requirements." ... "Also, thought should be given to the development of metadata standards that will permit the easy exchange of preservation metadata (and information packages) between repositories." ... "As well as ensuring that digital repositories are able to facilitate the automatic capture of metadata, some thought should also be given to how best digital repositories could deal with any metadata that might already exist."
SOW
DC "Funded by JISC (the Joint Information Systems Committee of the UK higher education funding councils), as part of its Electronic Libraries (eLib) Programme, Cedars was the only project in the programme to focus on digital preservation." ... "In the digitial library domain, the development of a recommendation on preservation metadata is being co-ordinated by a working group supported by OCLC and the RLG. The membership of the working group is international, and inlcudes key individuals who were involved in the development of the Cedars, NEDLIB and NLA metadata specifications."
The CDISC Submission Metadata Model was created to help ensure that the supporting metadata for submission datasets meets the following objectives: Provide FDA reviewers with clear descriptions of the usage, structure, contents, and attributes of all datasets and variables; Allow reviewers to replicate most analyses, tables, graphs, and listings with minimal or no transformations; Enable reviewers to easily view and subset the data used to generate any analysis, table, graph, or listing without complex programming. ... The CDISC Submission Metadata Model has been defined to guide sponsors in the preparation of data that is to be submitted to the FDA. By following the principles of this model, sponsors will help reviewers to accurately interpret the contents of submitted data and work with it more effectively, without sacrificing the scientific objectives of clinical development.
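As an illustration of the variable-level metadata such a model calls for (the attribute set and values here are invented, not the normative CDISC list), each submitted variable carries the attributes a reviewer needs to interpret it:

```python
# Illustrative variable-level metadata for a safety domain; the attribute
# names and values are invented for this sketch.
variable_metadata = [
    {"dataset": "AE", "variable": "AETERM",
     "label": "Reported Term for the Adverse Event",
     "type": "char", "origin": "CRF page 12"},
    {"dataset": "AE", "variable": "AESTDT",
     "label": "Start Date of Adverse Event",
     "type": "date", "origin": "CRF page 12"},
]

def describe(dataset):
    """Render a reviewer-facing listing of one dataset's variables."""
    for v in variable_metadata:
        if v["dataset"] == dataset:
            print(f'{v["variable"]:8} {v["type"]:5} {v["label"]} [{v["origin"]}]')

describe("AE")
```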
Publisher
The Clinical Data Interchange Standards Consortium
Critical Arguements
CA "The CDISC Submission Data Model has focused on the use of effective metadata as the most practical way of establishing meaningful standards applicable to electronic data submitted for FDA review."
Conclusions
RQ "Metadata prepared for a domain (such as an efficacy domain) which has not been described in a CDISC model should follow the general format of the safety domains, including the same set of core selection variables and all of the metadata attributes specified for the safety domains. Additional examples and usage guidelines are available on the CDISC web site at www.cdisc.org." ... "The CDISC Metadata Model describes the structure and form of data, not the content. However, the varying nature of clinical data in general will require the sponsor to make some decisions about how to represent certain real-world conditions in the dataset. Therefore, it is useful for a metadata document to give the reviewer an indication of how the datasets handle certain special cases."
SOW
DC CDISC is an open, multidisciplinary, non-profit organization committed to the development of worldwide standards to support the electronic acquisition, exchange, submission and archiving of clinical trials data and metadata for medical and biopharmaceutical product development. CDISC members work together to establish universally accepted data standards in the pharmaceutical, biotechnology and device industries, as well as in regulatory agencies worldwide. CDISC currently has more than 90 members, including the majority of the major global pharmaceutical companies.
Type
Web Page
Title
CDISC Achieves Two Significant Milestones in the Development of Models for Data Interchange
CA "The Clinical Data Interchange Standards Consortium has achieved two significant milestones towards its goal of standard data models to streamline drug development and regulatory review processes. CDISC participants have completed metadata models for the 12 safety domains listed in the FDA Guidance regarding Electronic Submissions and have produced a revised XML-based data model to support data acquisition and archive."
Conclusions
RQ "The goal of the CDISC XML Document Type Definition (DTD) Version 1.0 is to make available a first release of the definition of this CDISC model, in order to support sponsors, vendors and CROs in the design of systems and processes around a standard interchange format."
SOW
DC "This team, under the leadership of Wayne Kubick of Lincoln Technologies, and Dave Christiansen of Genentech, presented their metadata models to a group of representatives at the FDA on Oct. 10, and discussed future cooperative efforts with Agency reviewers."... "CDISC is a non-profit organization with a mission to lead the development of standard, vendor-neutral, platform-independent data models that improve process efficiency while supporting the scientific nature of clinical research in the biopharmaceutical and healthcare industries"
Type
Web Page
Title
eXtensible rights Markup Language (XrML) 2.0 Specification Part I: Primer
This specification defines the eXtensible rights Markup Language (XrML), a general-purpose language in XML used to describe the rights and conditions for using digital resources.
Publisher
ContentGuard
Critical Arguements
CA This chapter provides an overview of XrML. It provides a basic definition of XrML, describes the need that XrML is meant to address, and explains design goals for the language.
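The core data model of a rights expression language links a principal, a right, a resource and a condition inside a grant; the following sketch mirrors that shape with simplified, non-normative element names (not the actual XrML 2.0 schema):

```python
# Illustrative sketch of a rights-language statement: who may exercise which
# right over which resource, under what condition. Element names simplified.
import xml.etree.ElementTree as ET

license_ = ET.Element("license")
grant = ET.SubElement(license_, "grant")
ET.SubElement(grant, "principal").text = "urn:example:user:alice"
ET.SubElement(grant, "right").text = "print"
ET.SubElement(grant, "resource").text = "urn:example:ebook:42"
ET.SubElement(grant, "condition").text = "before 2006-01-01"

print(ET.tostring(license_, encoding="unicode"))
```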
Conclusions
RQ not applicable
SOW
DC ContentGuard contributed XrML to MPEG-21, the OASIS Rights Language Technical Committee and the Open eBook Forum (OeBF). In each case they are using XrML as the base for their rights language specification. Furthest along is MPEG, where the process has reached Committee Draft. They have also recommended to other standards bodies to build on this work. ContentGuard will propose XrML to any standards organization seeking a rights language. Because of this progress ContentGuard has frozen its release of XrML at Version 2.0.
Type
Web Page
Title
PBCore: Public Broadcasting Metadata Dictionary Project
CA "PBCore is designed to provide -- for television, radio and Web activities -- a standard way of describing and using media (video, audio, text, images, rich interactive learning objects). It allows content to be more easily retrieved and shared among colleagues, software systems, institutions, community and production partners, private citizens, and educators. It can also be used as a guide for the onset of an archival or asset management process at an individual station or institution. ... The Public Broadcasting Metadata Dictionary (PBCore) is: a core set of terms and descriptors (elements) used to create information (metadata) that categorizes or describes media items (sometimes called assets or resources)."
Conclusions
RQ The PBCore Metadata Elements are currently in their first published edition, Version 1.0. Over two years of research and lively discussions have generated this version. ... As various users and communities begin to implement the PBCore, updates and refinements to the PBCore are likely to occur. Any changes will be clearly identified, ramifications outlined, and published to our constituents.
SOW
DC "Initial development funding for PBCore was provided by the Corporation for Public Broadcasting. The PBCore is built on the foundation of the Dublin Core (ISO 15836) ... and has been reviewed by the Dublin Core Metadata Initiative Usage Board. ... PBCore was successfully deployed in a number of test implementations in May 2004 in coordination with WGBH, Minnesota Public Radio, PBS, National Public Radio, Kentucky Educational Television, and recognized metadata expert Grace Agnew. As of July 2004 in response to consistent feedback to make metadata standards easy to use, the number of metadata elements was reduced to 48 from the original set of 58 developed by the Metadata Dictionary Team. Also, efforts are ongoing to provide more focused metadata examples that are specific to TV and radio. ... Available free of charge to public broadcasting stations, distributors, vendors, and partners, version 1.0 of PBCore was launched in the first quarter of 2005. See our Licensing Agreement via the Creative Commons for further information. ... Plans are under way to designate an Authority/Maintenance Organization."
The creation and use of metadata is likely to become an important part of all digital preservation strategies, whether they are based on hardware and software conservation, emulation or migration. The UK Cedars project aims to promote awareness of the importance of digital preservation, to produce strategic frameworks for digital collection management policies and to promote methods appropriate for long-term preservation, including the creation of appropriate metadata. Preservation metadata is a specialised form of administrative metadata that can be used as a means of storing the technical information that supports the preservation of digital objects. In addition, it can be used to record migration and emulation strategies, to help ensure authenticity, and to note rights management and collection management data; it will also need to interact with resource discovery metadata. The Cedars project is attempting to investigate some of these issues and will provide some demonstrator systems to test them.
Notes
This article was presented at the Joint RLG and NPO Preservation Conference: Guidelines for Digital Imaging, held September 28-30, 1998.
Critical Arguements
CA "Cedars is a project that aims to address strategic, methodological and practical issues relating to digital preservation (Day 1998a). A key outcome of the project will be to improve awareness of digital preservation issues, especially within the UK higher education sector. Attempts will be made to identify and disseminate: Strategies for collection management ; Strategies for long-term preservation. These strategies will need to be appropriate to a variety of resources in library collections. The project will also include the development of demonstrators to test the technical and organisational feasibility of the chosen preservation strategies. One strand of this work relates to the identification of preservation metadata and a metadata implementation that can be tested in the demonstrators." ... "The Cedars Access Issues Working Group has produced a preliminary study of preservation metadata and the issues that surround it (Day 1998b). This study describes some digital preservation initiatives and models with relation to the Cedars project and will be used as a basis for the development of a preservation metadata implementation in the project. The remainder of this paper will describe some of the metadata approaches found in these initiatives."
Conclusions
RQ "The Cedars project is interested in helping to develop suitable collection management policies for research libraries." ... "The definition and implementation of preservation metadata systems is going to be an important part of the work of custodial organisations in the digital environment."
SOW
DC "The Cedars (CURL exemplars in digital archives) project is funded by the Joint Information Systems Committee (JISC) of the UK higher education funding councils under Phase III of its Electronic Libraries (eLib) Programme. The project is administered through the Consortium of University Research Libraries (CURL) with lead sites based at the Universities of Cambridge, Leeds and Oxford."
Type
Web Page
Title
Metadata for preservation: CEDARS project document AIW01
This report is a review of metadata formats and initiatives in the specific area of digital preservation. It supplements the DESIRE Review of metadata (Dempsey et al. 1997). It is based on a literature review and on information picked up at a number of workshops and meetings, and is an attempt to briefly describe the state of the art in metadata for digital preservation.
Critical Arguements
CA "The projects, initiatives and formats reviewed in this report show that much work remains to be done. . . . The adoption of persistent and unique identifiers is vital, both in the CEDARS project and outside. Many of these initiatives mention "wrappers", "containers" and "frameworks". Some thought should be given to how metadata should be integrated with data content in CEDARS. Authenticity (or intellectual preservation) is going to be important. It will be interesting to investigate whether some archivists' concerns with custody or "distributed custody" will have relevance to CEDARS."
Conclusions
RQ Which standards and initiatives described in this document have proved viable preservation metadata models?
SOW
DC OAIS emerged out of an initiative spearheaded by NASA's Consultative Committee for Space Data Systems. It has been shaped and promoted by the RLG and OCLC. Several international projects have played key roles in shaping the OAIS model and adapting it for use in libraries, archives and research repositories. OAIS-modeled repositories include the CEDARS Project, Harvard's Digital Repository, Koninklijke Bibliotheek (KB), the Library of Congress' Archival Information Package for audiovisual materials, MIT's D-Space, OCLC's Digital Archive and TERM: the Texas Email Repository Model.
Type
Web Page
Title
Approaches towards the Long Term Preservation of Archival Digital Records
The Digital Preservation Testbed is carrying out experiments according to pre-defined research questions to establish the best preservation approach or combination of approaches. The Testbed will be focusing its attention on three different digital preservation approaches - Migration; Emulation; and XML - evaluating the effectiveness of these approaches, their limitations, costs, risks, uses, and resource requirements.
Language
English; Dutch
Critical Arguements
CA "The main problem surrounding the preservation of authentic electronic records is that of technology obsolescence. As changes in technology continue to increase exponentially, the problem arises of what to do with records that were created using old and now obsolete hardware and software. Unless action is taken now, there is no guarantee that the current computing environment (and thus also records) will be accessible and readable by future computing environments."
Conclusions
RQ "The Testbed will be conducting research to discover if there is an inviolable way to associate metadata with records and to assess the limitations such an approach may incur. We are also working on the provision of a proposed set of preservation metadata that will contain information about the preservation approach taken and any specific authenticity requirements."
SOW
DC The Digital Preservation Testbed is part of the non-profit organisation ICTU. ICTU is the Dutch organisation for ICT and government. ICTU's goal is to contribute to the structural development of e-government. This will result in improving the work processes of government organisations, their service to the community and interaction with the citizens. Government institutions, such as Ministries, design the policies in the area of e-government, and ICTU translates these policies into projects. In many cases, more than one institution is involved in a single project. They are the principals in the projects and retain control concerning the focus of the project. In case of the Digital Preservation Testbed the principals are the Ministry of the Interior and the Dutch National Archives.
Type
Web Page
Title
The Gateway to Educational Materials: An Evaluation Study, Year 4: A Technical Report submitted to the US Department of Education
CA The Gateway to Educational Materials (GEM) is a Web site created through the efforts of several groups, including the US Department of Education, The National Library of Education, and a team from Syracuse University. The goal of the project is to provide teachers with a broad range of educational materials on the World Wide Web. This study evaluates The Gateway as an online source of educational information. The purpose of this evaluation is to provide developers of The Gateway with information about aspects of the system that might need improvement, and to display lessons learned through this process to developers of similar systems. It is the fourth in a series of annual studies, and focuses on the effectiveness of The Gateway from the perspectives of end users and collection holders.
Type
Web Page
Title
METS : Metadata Encoding and Transmission Standard
CA "METS, although in its early stages, is already sufficiently established amongst key digital library players that it can reasonably be considered the only viable standard for digital library objects in the foreseeable future. Although METS may be an excellent framework, it is just that and only that. It does not prescribe the content of the metadata itself, and this is a continuing problem for METS and all other schema to contend with if they are to realize their full functionality and usefulness."
Conclusions
RQ The standardization (via some sort of cataloging rules) of the content held by metadata "containers" urgently needs to be addressed. If not, the full value of any metadata scheme, no matter how extensible or robust, will not be realized.
Type
Web Page
Title
Softening the borderlines of archives through XML - a case study
Archives have always had trouble getting metadata in formats they can process. With XML, these problems are lessening. Many applications today provide the option of exporting data into an application-defined XML format that can easily be post-processed using XSLT, schema mappers, etc., to fit the archives' needs. This paper highlights two practical examples of the use of XML in the Swiss Federal Archives and discusses advantages and disadvantages of XML in these examples. The first use of XML is the import of existing metadata describing debates at the Swiss parliament, whereas the second concerns preservation of metadata in the archiving of relational databases. We have found that the use of XML for metadata encoding is beneficial for the archives, especially for its ease of editing, built-in validation and ease of transformation.
Notes
The Swiss Federal Archives defines the norms and basis of records management and advises departments of the Federal Administration on their implementation. http://www.bar.admin.ch/bar/engine/ShowPage?pageName=ueberlieferung_aktenfuehrung.jsp
Critical Arguements
CA "This paper briefly discusses possible uses of XML in an archival context and the policies of the Swiss Federal Archives concerning this use (Section 2), provides a rough overview of the applications we have that use XML (Section 3) and the experiences we made (Section 4)."
Conclusions
RQ "The systems described above are now just being deployed into real world use, so the experiences presented here are drawn from the development process and preliminary testing. No hard facts in testing the sustainability of XML could be gathered, as the test is time itself. This test will be passed when we can still access the data stored today, including all metadata, in ten or twenty years." ... "The main problem area with our applications was the encoding of the XML documents and the non-standard XML document generation of some applications. When dealing with the different encodings (UTF-8, UTF-16, ISO-8859-1, etc) some applications purported a different encoding in the header of the XML document than the true encoding of the document. These errors were quickly identified, as no application was able to read the documents."
SOW
DC The author is currently a private digital archives consultant, but at the time of this article, was a data architect for the Swiss Federal Archives. The content of this article owes much to the work being done by a team of architects and engineers at the Archives, who are working on an e-government project called ARELDA (Archiving of Electronic Data and Records).
Type
Web Page
Title
Report of the Ad Hoc Committee for Development of a Standardized Tool for Encoding Finding Aids
This report focuses on the development of tools for the description and intellectual control of archives and the discovery of relevant resources by users. Other archival functions, such as appraisal, acquisition, preservation, and physical control, are beyond the scope for this project. The system developed as a result of this report should be useable on stand-alone computers in small institutions, by multiple users in larger organisations, and by local, regional, national, and international networks. The development of such a system should take into account the strategies, experiences, and results of other initiatives such as the European Union Archival Network (EUAN), the Linking and Exploring Authority Files (LEAF) initiative, the European Visual Archives (EVA) project, and the Canadian Archival Information Network (CAIN). This report is divided into five sections. A description of the conceptual structure of an archival information system, described as six layers of services and protocols, follows this introduction. Section three details the functional requirements for the software tool and is followed by a discussion of the relationship of these requirements to existing archival software application. The report concludes with a series of recommendations that provide a strategy for the successful development, deployment, and maintenance of an Open Source Archival Resource Information System (OSARIS). There are two appendices: a data model and a comparison of the functional requirements statements to several existing archival systems.
Notes
3. Functional Requirements, "Requirements for Information Interchange", 3.2: The system must support the current archival standards for machine-readable data communication, Encoded Archival Description (EAD) and Encoded Archival Context (EAC). A subset of elements found in EAD may be used to exchange descriptions based on ISAD(G), while elements in EAC may be used to exchange ISAAR(CPF)-based authority data.
Publisher
International Council on Archives Committee on Descriptive Standards
Critical Arguements
CA The Ad Hoc Committee agrees that it would be highly desirable to develop a modular, open source software tool that could be used by archives worldwide to manage the intellectual control of their holdings through the recording of standardized descriptive data. Individual archives could combine their data with that of other institutions in regional, national or international networks. Researchers could access this data either via a stand-alone computerized system or over the Internet. The model for this software would be the successful UNESCO-sponsored free library program, ISIS, which has been in widespread use around the developing world for many years. The software, with appropriate supporting documentation, would be freely available via an ICA or UNESCO web site or on CD-ROM. Unlike ISIS, however, the source code and not just the software should be freely available.
Conclusions
RQ "1. That the ICA endorses the functional requirements presented in this document as the basis for moving the initiative forward. 2. That the functional desiderata and technical specifications for the software applications, such as user requirements, business rules, and detailed data models, should be developed further by a team of experts from both ICA/CDS and ICA/ITC as the next stage of this project. 3. That following the finalization of the technical specifications for OSARIS, the requirements should be compared to existing systems and a decision made to adopt or adapt existing software or to build new applications. At that point in time, it will then be possible to estimate project costs. 4. That a solution that incorporates the functional requirements result in the development of several modular software applications. 5. That the implementation of the system should follow a modular strategy. 6. That the development of software applications must include a thorough investigation and assessment of existing solutions beginning with those identified in section four and Appendix B of this document. 7. That the ICA develop a strategy for communicating the progress of this project to members of the international archival community on a regular basis. This would include the distribution of progress reports in multiple languages. The communication strategy must include a two-way exchange of ideas. The project will benefit strongly from the ongoing comments, suggestions, and input of the members of the international archival community. 8. That a test-bed be developed to allow the testing of software solutions in a realistic archival environment. 9. That the system specifications, its documentation, and the source codes for the applications be freely available. 10. That training courses for new users, ongoing education, and webbased support groups be established. 11. That promotion of the software be carried out through the existing regional infrastructure of ICA and through UNESCO. 12. That an infrastructure for ongoing maintenance, distribution, and technical support be developed. This should include a web site to download software and supporting documentation. The ICA should also establish and maintain a mechanism for end-users to recommend changes and enhancements to the software. 13. That the ICA establishes and maintains an official mechanism for regular review of the software by an advisory committee that includes technical and archival experts. "
SOW
DC "The development of such a system should take into account the strategies, experiences, and results of other initiatives such as the European Union Archival Network (EUAN), the Linking and Exploring Authority Files (LEAF) initiative, the European Visual Archives (EVA) project, and the Canadian Archival Information Network (CAIN)."
Just like other memory institutions, libraries will have to play an important part in the Semantic Web. In that context, ontologies and conceptual models in the field of cultural heritage information are crucial, and the interoperability between these ontologies and models perhaps even more crucial. This document reviews four projects and models that the FRBR Review Group recommends for consideration with regard to interoperability with FRBR.
Publisher
International Federation of Library Associations and Institutions
Critical Arguements
CA "Just like other memory institutions, libraries will have to play an important part in the Semantic Web. In that context, ontologies and conceptual models in the field of cultural heritage information are crucial, and the interoperability between these ontologies and models perhaps even more crucial."
Conclusions
RQ 
SOW
DC "Some members of the CRM-SIG, including Martin Doerr himself, also are subscribers to the FRBR listserv, and Patrick Le Boeuf, chair of the FRBR Review Group, also is a member of the CRM-SIG and ISO TC46/SC4/WG9 (the ISO Group on CRM). A FRBR to CRM mapping is available from the CIDOC CRM-SIG listserv archive." ... This report was produced by the Cataloguing Section of IFLA, the International Federation of Library Associations and Institutions. 
This document is a draft version 1.0 of requirements for a metadata framework to be used by the International Press Telecommunications Council for all new and revised IPTC standards. It was worked on and agreed to by members of the IPTC Standards Committee, who represented a variety of newspapers, wire agencies, and other interested members of the IPTC.
Notes
Misha Wolf is also listed as author.
Publisher
International Press Telecommunications Council (IPTC)
Critical Arguements
CA "This Requirements document forms part of the programme of work called ITPC Roadmap 2005. The Specification resulting from these Requirements will define the use of metadata by all new IPTC standards and by new major versions of existing IPTC standards." (p. 1) ... "The purpose of the News Metadata Framework (NMDF) WG is to specify how metadata will be expressed, referenced, and managed in all new major versions of IPTC standards. The NMF WG will: Gather, discuss, agree and document functional requirements for the ways in which metadata will be expressed, referenced and managed in all new major versions of IPTC standards; Discuss, agree and document a model, satisfying these requirements; Discuss, agree and document possible approaches to expressing this model in XML, and select those most suited to the tasks. In doing so, the NMDF WG will, where possible, make use of the work of other standards bodies. (p. 2)
Conclusions
RQ "Open issues include: The versioning of schemes, including major and minor versions, and backward compatibility; the versioning of TopicItems; The design of URIs for TopicItem schemes and TopicItem collections, including the issues of: versions (relating to TopicItems, schemes, and collections); representations (relating to TopicItems and collections); The relationship between a [scheme, code] pair, the corresponding URI and the scheme URI." (p. 17)
SOW
DC The development of this framework came out of the 2003 News Standards Summit, which was attended by representatives from over 80 international press and information agencies ... "The News Standards Summit brings together major players--experts on news metadata standards as well as commercial news providers, users, and aggregators. Together, they will analyze the current state and future expectations for news and publishing XML and metadata efforts from both the content and processing model perspectives. The goal is to increase understanding and to drive practical, productive convergence." ... This is a draft version of the standard.
CA The metadata necessary for successful management and use of digital objects is both more extensive than and different from the metadata used for managing collections of printed works and other physical materials. Without structural metadata, the page image or text files comprising the digital work are of little use, and without technical metadata regarding the digitization process, scholars may be unsure of how accurate a reflection of the original the digital version provides. For internal management purposes, a library must have access to appropriate technical metadata in order to periodically refresh and migrate the data, ensuring the durability of valuable resources.
SOW
DC OAIS emerged out of an initiative spearheaded by NASA's Consultative Committee for Space Data Systems. It has been shaped and promoted by the RLG and OCLC. Several international projects have played key roles in shaping the OAIS model and adapting it for use in libraries, archives and research repositories. OAIS-modeled repositories include the CEDARS Project, Harvard's Digital Repository, Koninklijke Bibliotheek (KB), the Library of Congress' Archival Information Package for audiovisual materials, MIT's D-Space, OCLC's Digital Archive and TERM: the Texas Email Repository Model.
Type
Web Page
Title
NHPRC: Minnesota State Archives Strategic Plan: Electronic Records Consultant Project
National Historical Publications and Records Commission Grant No. 95-030
Critical Arguements
CA "The Electronic Records Consultant Project grant was carried out in conjunction with the strategic planning effort for the Minnesota Historical Society's State Archives program. The objective was to develop a plan for a program that will be responsive to the changing nature of government records." ... "The strategic plan that was developed calls for specific actions to meet five goals: 1) strengthening partnerships, 2) facilitating the identification of historically valuable records, 3) integrating electronic records into the existing program, 4) providing quality public service, and 5) structuring the State Archives Department to meet the demands of this plan."
Type
Web Page
Title
Recordkeeping Metadata Standard for Commonwealth Agencies
This standard describes the metadata that the National Archives of Australia recommends should be captured in the recordkeeping systems used by Commonwealth government agencies. ... Part One of the standard explains the purpose and importance of standardised recordkeeping metadata and details the scope, intended application and features of the standard. Features include: flexibility of application; repeatability of data elements; extensibility to allow for the management of agency-specific recordkeeping requirements; interoperability across systems environments; compatibility with related metadata standards, including the Australian Government Locator Service (AGLS) standard; and interdependency of metadata at the sub-element level.
Critical Arguements
CA Compliance with the Recordkeeping Metadata Standard for Commonwealth Agencies will help agencies to identify, authenticate, describe and manage their electronic records in a systematic and consistent way to meet business, accountability and archival requirements. In this respect the metadata is an electronic recordkeeping aid, similar to the descriptive information captured in file registers, file covers, movement cards, indexes and other registry tools used in the paper-based environment to apply intellectual and physical controls to records.
Conclusions
RQ "The National Archives intends to consult with agencies, vendors and other interested parties on the implementation and continuing evolution of the Recordkeeping Metadata Standard for Commonwealth Agencies." ... "The National Archives expects to re-examine and reissue the standard in response to broad agency feedback and relevant advances in theory and methodology." ... "The development of public key technology is one area the National Archives will monitor closely, in consultation with the Office for Government Online, for possible additions to a future version of the standard."
SOW
DC "This standard has been developed in consultation with recordkeeping software vendors endorsed by the Office for Government OnlineÔÇÖs Shared Systems Initiative, as well as selected Commonwealth agencies." ... "The standard has also been developed with reference to other metadata standards emerging in Australia and overseas to ensure compatibility, as far as practicable, between related resource management tools, including: the Dublin Core-derived Australian Government Locator Service (AGLS) metadata standard for discovery and retrieval of government services and information in web-based environments, co-ordinated by the National Archives of Australia; and the non-sector-specific Recordkeeping Metadata Standards for Managing and Accessing Information Resources in Networked Environments Over Time for Government, Social and Cultural Purposes, co-ordinated by Monash University using an Australian Research Council Strategic Partnership with Industry Research and Training (SPIRT) Support Grant."
This document is a revision and expansion of "Metadata Made Simpler: A guide for libraries," published by NISO Press in 2001.
Publisher
NISO Press
Critical Arguements
CA An overview of what metadata is and does, aimed at librarians and other information professionals. Describes various metadata schemas. Concludes with a bibliography and glossary.
Type
Web Page
Title
Use of Encoded Archival Description (EAD) for Manuscript Collection Finding Aids
Presented in 1999 to the Library's Collection Development & Management Committee, this report outlines support for implementing EAD in delivery of finding aids for library collections over the Web. It describes the limitations of HTML, provides an introduction to SGML, XML, and EAD, outlines the advantages of conversion from HTML to EAD, the conversion process, the proposed outcome, and sources for further information.
Publisher
National Library of Australia
Critical Arguements
CA As use of the World Wide Web has increased, so has the need of users to be able to discover web-based information resources easily and efficiently, and to be able to repeat that discovery in a consistent manner. Using SGML to mark up web-based documents facilitates such resource discovery.
Conclusions
RQ To what extent have the mainstream web browser companies fulfilled their commitment to support native viewing of SGML/XML documents?
Joined-up government needs joined-up information systems. The e-Government Metadata Standard (e-GMS) lays down the elements, refinements and encoding schemes to be used by government officers when creating metadata for their information resources or designing search interfaces for information systems. The e-GMS is needed to ensure maximum consistency of metadata across public sector organisations.
Publisher
Office of the e-Envoy, Cabinet Office, UK.
Critical Arguements
CA "The e-GMS is concerned with the particular facets of metadata intended to support resource discovery and records management. The Standard covers the core set of ÔÇÿelementsÔÇÖ that contain data needed for the effective retrieval and management of official information. Each element contains information relating to a particular aspect of the information resource, e.g. 'title' or 'creator'. Further details on the terminology being used in this standard can be found in Dublin Core and Part Two of the e-GIF."
Conclusions
RQ "The e-GMS will need to evolve, to ensure it remains comprehensive and consistent with changes in international standards, and to cater for changes in use and technology. Some of the elements listed here are already marked for further development, needing additional refinements or encoding schemes. To limit disruption and cost to users, all effort will be made to future-proof the e-GMS. In particular we will endeavour: not to remove any elements or refinements; not to rename any elements or refinements; not to add new elements that could contain values contained in the existing elements."
SOW
DC The e-GMS is promulgated by the British government as part of its e-government initiative. It is the technical cornerstone of the e-government policy for joining up the public sector electronically and providing modern, improved public services.
During the past decade, the recordkeeping practices in public and private organizations have been revolutionized. New information technologies, from mainframes to PCs to local area networks and the Internet, have transformed the way state agencies create, use, disseminate, and store information. These new technologies offer a vastly enhanced means of collecting information for and about citizens, communicating within state government and between state agencies and the public, and documenting the business of government. Like other modern organizations, Ohio state agencies face challenges in managing and preserving their records because records are increasingly generated and stored in computer-based information systems. The Ohio Historical Society serves as the official State Archives with responsibility to assist state and local agencies in the preservation of records with enduring value. The Office of the State Records Administrator within the Department of Administrative Services (DAS) provides advice to state agencies on the proper management and disposition of government records. Out of concern over its ability to preserve electronic records with enduring value and assist agencies with electronic records issues, the State Archives has adapted these guidelines from guidelines created by the Kansas State Historical Society. The Kansas State Historical Society, through the Kansas State Historical Records Advisory Board, requested a program development grant from the National Historical Publications and Records Commission to develop policies and guidelines for electronic records management in the state of Kansas. With grant funds, the KSHS hired a consultant, Dr. Margaret Hedstrom, an Associate Professor in the School of Information, University of Michigan, and formerly Chief of State Records Advisory Services at the New York State Archives and Records Administration, to draft guidelines that could be tested, revised, and then implemented in Kansas state government.
Notes
These guidelines are part of the ongoing effort to address the electronic records management needs of Ohio state government. As a result, this document continues to undergo changes. The first draft, written by Dr. Margaret Hedstrom, was completed in November of 1997 for the Kansas State Historical Society. That version was reorganized and updated and posted to the KSHS Web site on August 18, 1999. The Kansas Guidelines were modified for use in Ohio during September 2000.
Critical Arguements
CA "This publication is about maintaining accountability and preserving important historical records in the electronic age. It is designed to provide guidance to users and managers of computer systems in Ohio government about: the problems associated with managing electronic records, special recordkeeping and accountability concerns that arise in the context of electronic government; archival strategies for the identification, management and preservation of electronic records with enduring value; identification and appropriate disposition of electronic records with short-term value, and
Type
Web Page
Title
Requirements for Electronic Records Management Systems: (2) Metadata Standard
Requirements for Electronic Records Management Systems includes: (1) "Functional Requirements" (http://www.nationalarchives.gov.uk/electronicrecords/reqs2002/pdf/requirementsfinal.pdf); (2) "Metadata Standard" (the subject of this record); (3) Reference Document (http://www.nationalarchives.gov.uk/electronicrecords/reqs2002/pdf/referencefinal.pdf); and (4) "Implementation Guidance: Configuration and Metadata Issues" (http://www.nationalarchives.gov.uk/electronicrecords/reqs2002/pdf/implementation.pdf)
Publisher
Public Records Office, [British] National Archives
Critical Arguements
CA Sets out the implications for records management metadata in compliant systems. It has been agreed with the Office of the e-Envoy that this document will form the basis for an XML schema to support the exchange of records metadata and promote interoperability between ERMS and other systems.
SOW
DC The National Archives updated the functional requirements for electronic records management systems (ERMS) in collaboration with the central government records management community during 2002. The revision takes account of developments in cross-government and international standards since 1999.
Type
Web Page
Title
The MPEG-21 Rights Expression Language: A White Paper
CA Presents the business case for a Digital Rights Expression Language, an overview of the DRM landscape, a discussion of the history and role of standards in business, and some technical aspects of MPEG-21. "[U]nless the rights to ... content can be packaged within machine-readable licences, guaranteed to be ubiquitous, unambiguous and secure, which can then be processed consistently and reliably, it is unlikely that content owners will trust consign [sic] their content to networks. The MPEG Rights Expression Language (REL) is designed to provide the functionality required by content owners in order to create reliable, secure licences for content which can be used throughout the value chain, from content creator to content consumer."
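The licence structure described here can be sketched briefly. In the REL data model a licence holds grants, each naming a principal, a right, a resource and a condition; the element names below follow that model only loosely and the identifiers are invented:

    # Hedged sketch of the core MPEG REL pattern: licence -> grant ->
    # (principal, right, resource, condition). Not a conformant REL
    # document; identifiers and values are fictional.
    import xml.etree.ElementTree as ET

    license_ = ET.Element("license")
    grant = ET.SubElement(license_, "grant")
    ET.SubElement(grant, "keyHolder").text = "consumer-123"      # principal
    ET.SubElement(grant, "play")                                 # right
    ET.SubElement(grant, "digitalResource").text = "urn:track:42"  # resource
    ET.SubElement(grant, "validityInterval").text = "P30D"       # condition

    print(ET.tostring(license_, encoding="unicode"))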
Conclusions
RQ "While true interoperability may still be a distant prospect, a common rights expression language, with extensions based on the MPEG REL, can incrementally bring many of the benefits true interoperability will eventually yield. As extensions are created in multiple content verticals, it will be possible to transfer content generated in one securely to another. This will lead to cross channel fertilisation and the growth of multimedia content. At the same time, a common rights language will also lead to the possibility of broader content distribution (by enabling cross-DRM portability), thus providing more channel choice for consumers. It is this vision of the MPEG REL spreading out that is such an exciting prospect. ... The history of MPEG standards would seem to suggest that implementers will start building to the specification in mid-2003, coincidental with the completion of the standard. This will be followed by extensive take-up within two or three years, so that by mid 2006, the MPEG REL will be a pervasive technology, implemented across many different digital rights management and conditional access systems, in both the content industries and in other, non-rights based industries. ... The REL will ultimately become a 'transparent' technology, as invisible to the user as the phone infrastructure is today."
SOW
DC The Moving Picture Experts Group (MPEG) is a working group of ISO/IEC, made up of some 350 members from various industries and universities, in charge of the development of international standards for compression, decompression, processing, and coded representation of moving pictures, audio and their combination. MPEG's official designation is ISO/IEC JTC1/SC29/WG11. So far MPEG has produced the following compression formats and ancillary standards: MPEG-1, the standard for storage and retrieval of moving pictures and audio on storage media (approved Nov. 1992); MPEG-2, the standard for digital television (approved Nov. 1994); MPEG-4, the standard for multimedia applications; MPEG-7, the content representation standard for multimedia information search, filtering, management and processing; and MPEG-21, the multimedia framework.
Expanded version of the article "Ensuring the Longevity of Digital Documents" that appeared in the January 1995 edition of Scientific American (Vol. 272, Number 1, pp. 42-7).
Publisher
Council on Library and Information Resources
Critical Arguements
CA "It is widely accepted that information technology is revolutionizing our concepts of documents and records in an upheaval at least as great as the introduction of printing, if not of writing itself. The current generation of digital records therefore has unique historical significance; yet our digital documents are far more fragile than paper. In fact, the record of the entire present period of history is in jeopardy. The content and historical value of many governmental, organizational, legal, financial, and technical records, scientific databases, and personal documents may be irretrievably lost to future generations if we do not take steps to preserve them."
Conclusions
RQ "We must develop evolving standards for encoding explanatory annotations to bootstrap the interpretation of digital documents that are saved in nonstandard forms. We must develop techniques for saving the bit streams of software-dependent documents and their associated systems and application software. We must ensure that the hardware environments necessary to run this software are described in sufficient detail to allow their future emulation. We must save these specifications as digital documents, encoded using the bootstrap standards developed for saving annotations so that they can be read without special software (lest we be recursively forced to emulate one system in order to learn how to emulate another). We must associate contextual information with our digital documents to provide provenance as well as explanatory annotations in a form that can be translated into successive standards so as to remain easily readable. Finally, we must ensure the systematic and continual migration of digital documents onto new media, preserving document and program bit streams verbatim, while translating their contextual information as necessary."
This standard sets out principles for making and keeping full and accurate records as required under section 12(1) of the State Records Act 1998. The principles are: Records must be made; Records must be accurate; Records must be authentic; Records must have integrity; Records must be useable. Each principle is supported by mandatory compliance requirements.
Critical Arguements
CA "Section 21(1) of the State Records Act 1998 requires public offices to 'make and keep full and accurate records'. The purpose of this standard is to assist public offices to meet this obligation and to provide a benchmark against which a public office's compliance may be measured."
Conclusions
RQ None
SOW
DC This standard is promulgated by the State Records Agency of New South Wales, Australia, as required under section 12(1) of the State Records Act 1998.
CA NSW has issued their metadata standard because one of the "key methods" for assuring the long-term preservation of e-records is through the use of standardized sets of recordkeeping metadata. Not only can their metadata strategy help public offices meet their individual requirements for accu ...
Type
Web Page
Title
Archiving of Electronic Digital Data and Records in the Swiss Federal Archives (ARELDA): e-government project ARELDA - Management Summary
The goal of the ARELDA project is to find long-term solutions for the archiving of digital records in the Swiss Federal Archives. This includes the accession, the long-term storage, preservation of data, description, and access for the users of the Swiss Federal Archives. It is also coordinated with the basic efforts of the Federal Archives to realize a uniform records management solution in the federal administration and therefore to support the pre-archival creation of documents of archival value for the benefit of the administration as well as of the Federal Archives. The project is indispensable for the long-term execution of the Federal Archives Act: older IT systems are being replaced by newer ones, and a complete migration of the data is sometimes not possible or too expensive; small database applications, built and maintained by people with no IT background, are constantly increasing in number; and more and more administrative bodies are introducing records and document management systems.
Publisher
Swiss Federal Archives
Publication Location
Bern
Critical Arguements
CA "Archiving in general is a necessary prerequisite for the reconstruction of governmental activities as well as for the principle of legal certainty. It enables citizens to understand governmental activities and ensures a democratic control of the federal administration. And finally are archives a prerequisite for the scientific research, especially in the social and historical fields and ensure the preservation of our cultural heritage. It plays a vital role for an ongoing and efficient records management. A necessary prerequisite for the Federal Archives in the era of the information society will be the system ARELDA (Archiving of Electronic Data and Records)."
Conclusions
RQ "Because of the lack of standard solutions and limited or lacking personal resources for an internal development effort, the realisation of ARELDA will have to be outsourced and the cooperation with the IT division and the Federal Office for Information Technology, Systems and Telecommunication must be intensified. The guidelines for the projects are as follows:
SOW
DC ARELDA is one of the five key projects in the Swiss government's e-government strategy.
Type
Web Page
Title
Metadata Resources: Metadata Encoding and Transmission Standard (METS)
DC OAIS emerged out of an initiative spearheaded by NASA's Consultative Committee for Space Data Systems. It has been shaped and promoted by the RLG and OCLC. Several international projects have played key roles in shaping the OAIS model and adapting it for use in libraries, archives and research repositories. OAIS-modeled repositories include the CEDARS Project, Harvard's Digital Repository, Koninklijke Bibliotheek (KB), the Library of Congress' Archival Information Package for audiovisual materials, MIT's D-Space, OCLC's Digital Archive and TERM: the Texas Email Repository Model.
Museums and the Online Archive of California (MOAC) builds on existing standards and their implementation guidelines provided by the Online Archive of California (OAC) and its parent organization, the California Digital Library (CDL). Setting project standards for MOAC consisted of interpreting existing OAC/CDL documents and adapting them to the project's specific needs, while at the same time maintaining compliance with OAC/CDL guidelines. The present overview of the MOAC technical standards references both the OAC/CDL umbrella document and the MOAC implementation/adaptation document at the beginning of each section, as well as related resources which provide more detail on project specifications.
Critical Arguements
CA The project implements specifications for digital image production, as well as three interlocking file exchange formats for delivering collections, digital images and their respective metadata. Encoded Archival Description (EAD) XML describes the hierarchy of a collection down to the item level and traditionally serves for discovery of both the collection and the individual items within it. For viewing multiple images associated with a single object record, MOAC utilizes Making of America 2 (MOA2) XML. MOA2 makes the images representing an item available to the viewer through a navigable table of contents; the display mimics the behavior of the analog item by, for example, allowing end-users to browse through the pages of an artist's book. Through the further extension of MOA2 with Text Encoding Initiative (TEI) Lite XML, not only does every single page of the book display in its correct order, but a transcription of its textual content also accompanies the digital images.
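A rough sketch of the interlocking structure just described: a structural map that keeps the pages of an artist's book in order and points each page at both its image file and its TEI transcription. The element names imitate the METS/MOA2 pattern but are simplified for illustration:

    # Hedged sketch: a structural map ordering page images and linking
    # each page to an image file and a transcription. File identifiers
    # are invented.
    import xml.etree.ElementTree as ET

    book = ET.Element("structMap", type="physical")
    volume = ET.SubElement(book, "div", type="book", label="Artist's Book")
    for n in range(1, 4):
        page = ET.SubElement(volume, "div", type="page", order=str(n))
        ET.SubElement(page, "fptr", fileid=f"IMG{n:03}")   # page image
        ET.SubElement(page, "fptr", fileid=f"TEI{n:03}")   # transcription

    print(ET.tostring(book, encoding="unicode"))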
Conclusions
RQ "These two instances of fairly significant changes in the project's specifications may serve as a gentle reminder that despite its solid foundation in standards, the MOAC information architecture will continue to face the challenge of an ever-changing technical environment."
SOW
DC The author is Digital Media Developer at the UC Berkeley Art Museum & Pacific Film Archives, a member of the MOAC consortium.
Type
Web Page
Title
Imaging Nuggets: Metadata Encoding and Transmission Standard
CA The main advantages of METS consist of the following: first, it provides a syntax for transferring entire digital objects along with their associated metadata and other supporting files; second, it provides a functional syntax, a basis for providing users the means of navigating through and manipulating the object; third, it provides a syntax for archiving the data as an integrated whole.
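The three advantages enumerated above map onto the top-level sections that a METS document carries as one package. The section names below are genuine METS sections; their content is omitted in this sketch:

    # Hedged sketch of a METS package skeleton: descriptive metadata,
    # administrative metadata, file inventory and structural map held
    # together in a single XML document.
    import xml.etree.ElementTree as ET

    mets = ET.Element("mets")
    for section in ("metsHdr", "dmdSec", "amdSec", "fileSec", "structMap"):
        ET.SubElement(mets, section)  # contents omitted for brevity

    print(ET.tostring(mets, encoding="unicode"))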
CA One problem in the field of radio archives is the tendency to view anything that is not audio or video (in practice, text) as metadata. However, not all text is metadata. While all text can be seen as potentially useful due to the information it represents, the creators of P/FRA recommend standardizing only the essential information needed to describe and retrieve radio archive information.
Conclusions
RQ Rules need to be drafted specifying the content of metadata fields. While the authors extol the value of "good metadata" for resource discovery, prescribing the content of metadata containers is a problem here as in every other field.