CA Technical standards, developed by national and international organizations, are increasingly important in electronic recordkeeping. Thirteen standards are summarized and their sponsoring organizations described.
Phrases
<P1> The challenge to archivists is to make sure that the standards being applied to electronic records systems today are adequate to ensure the long-term preservation and use of information contained in the systems. (p.31) <P2> While consensus can easily be established that data exchange standards offer a wealth of potential benefits, there are also a number of real barriers to implementation that make the road ahead for archivists a very bumpy one. (p.41)
Conclusions
RQ What is the current state of standardization in the archival management of electronic records, and what are the issues involved?
Type
Electronic Journal
Title
A Spectrum of Interoperability: The Site for Science Prototype for the NSDL
"Currently, NSF is funding 64 projects, each making its own contribution to the library, with a total annual budget of about $24 million. Many projects are building collections; others are developing services; a few are carrying out targeted research. The NSDL is a broad program to build a digital library for education in science, mathematics, engineering and technology. It is funded by the National Science Foundation (NSF) Division of Undergraduate Education. . . . The Core Integration task is to ensure that the NSDL is a single coherent library, not simply a set of unrelated activities. In summer 2000, the NSF funded six Core Integration demonstration projects, each lasting a year. One of these grants was to Cornell University and our demonstration is known as Site for Science. It is at http://www.siteforscience.org/ [Site for Science]. In late 2001, the NSF consolidated the Core Integration funding into a single grant for the production release of the NSDL. This grant was made to a collaboration of the University Corporation for Atmospheric Research (UCAR), Columbia University and Cornell University. The technical approach being followed is based heavily on our experience with Site for Science. Therefore this article is both a description of the strategy for interoperability that was developed for Site for Science and an introduction to the architecture being used by the NSDL production team."
ISSN
1082-9873
Critical Arguments
CA "[T]his article is both a description of the strategy for interoperability that was developed for the [Cornell University's NSF-funded] Site for Science and an introduction to the architecture being used by the NSDL production team."
Phrases
<P1> The grand vision is that the NSDL become a comprehensive library of every digital resource that could conceivably be of value to any aspect of education in any branch of science and engineering, both defined very broadly. <P2> Interoperability among heterogeneous collections is a central theme of the Core Integration. The potential collections have a wide variety of data types, metadata standards, protocols, authentication schemes, and business models. <P3> The goal of interoperability is to build coherent services for users, from components that are technically different and managed by different organizations. This requires agreements to cooperate at three levels: technical, content and organizational. <P4> Much of the research of the authors of this paper aims at . . . looking for approaches to interoperability that have low cost of adoption, yet provide substantial functionality. One of these approaches is the metadata harvesting protocol of the Open Archives Initiative (OAI) . . . <P5> For Site for Science, we identified three levels of digital library interoperability: Federation; Harvesting; Gathering. In this list, the top level provides the strongest form of interoperability, but places the greatest burden on participants. The bottom level requires essentially no effort by the participants, but provides a poorer level of interoperability. The Site for Science demonstration concentrated on the harvesting and gathering, because other projects were exploring federation. <P6> In an ideal world all the collections and services that the NSDL wishes to encompass would support an agreed set of standard metadata. The real world is less simple. . . . However, the NSDL does have influence. We can attempt to persuade collections to move along the interoperability curve. <warrant> <P7> The Site for Science metadata strategy is based on two principles. The first is that metadata is too expensive for the Core Integration team to create much of it. 
Hence, the NSDL has to rely on existing metadata or metadata that can be generated automatically. The second is to make use of as much of the metadata available from collections as possible, knowing that it varies greatly from none to extensive. Based on these principles, Site for Science, and subsequently the entire NSDL, developed the following metadata strategy: Support eight standard formats; Collect all existing metadata in these formats; Provide crosswalks to Dublin Core; Assemble all metadata in a central metadata repository; Expose all metadata records in the repository for service providers to harvest; Concentrate limited human effort on collection-level metadata; Use automatic generation to augment item-level metadata. <P8> The strategy developed by Site for Science and now adopted by the NSDL is to accumulate metadata in the native formats provided by the collections . . . If a collection supports the protocols of the Open Archives Initiative, it must be able to supply unqualified Dublin Core (which is required by the OAI) as well as the native metadata format. <P9> From a computing viewpoint, the metadata repository is the key component of the Site for Science system. The repository can be thought of as a modern variant of the traditional library union catalog, a catalog that holds comprehensive catalog records from a group of libraries. . . . Metadata from all the collections is stored in the repository and made available to providers of NSDL service.
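The crosswalk step in this strategy can be sketched in code. The mapping below is a hypothetical illustration: the native field names (`resource_title`, `author`, etc.) are invented, and real NSDL crosswalks handle far richer native formats than this.

```python
# Sketch of a crosswalk from a hypothetical native metadata format to
# unqualified Dublin Core, echoing the Site for Science strategy of
# accumulating native-format records and mapping them for the central
# metadata repository. Field names on both sides are illustrative.
CROSSWALK = {
    "resource_title": "title",
    "author": "creator",
    "topic": "subject",
    "summary": "description",
    "release_date": "date",
}

def to_dublin_core(native_record):
    """Map a native-format record (a dict) to an unqualified Dublin Core dict.

    Fields without a crosswalk entry are simply dropped, mirroring the
    principle of using whatever metadata a collection supplies,
    however sparse or extensive it is.
    """
    dc = {}
    for field, value in native_record.items():
        element = CROSSWALK.get(field)
        if element is not None:
            dc.setdefault(element, []).append(value)
    return dc

record = {
    "resource_title": "Introductory Plate Tectonics",
    "author": "J. Smith",
    "internal_id": "geo-0042",  # no crosswalk entry: not carried over
}
print(to_dublin_core(record))
```

A service provider harvesting the repository would then see a uniform Dublin Core view regardless of each collection's native format.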
Conclusions
RQ 1 "Can a small team of librarians manage the collection development and metadata strategies for a very large library?" RQ 2 "Can the NSDL actually build services that are significantly more useful than the general web search services?"
CA Through OAI, access to resources is effected in a low-cost, interoperable manner.
Phrases
<P1> The need for a metadata format that would support both metadata creation by authors and interoperability across heterogeneous repositories led to the choice of unqualified Dublin Core. (p.16) <P2> OAI develops and promotes a low-barrier interoperability framework and associated standards, originally to enhance access to e-print archives, but now taking into account access to other digital materials. (p.16)
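A minimal sketch of what an unqualified Dublin Core record looks like inside an OAI response, and how a harvester might read it. The record content is invented; the `oai_dc` and `dc` namespace URIs are the standard ones defined by the OAI and the Dublin Core element set.

```python
import xml.etree.ElementTree as ET

# An unqualified Dublin Core record as carried in an OAI-PMH response.
# The record content below is a made-up example.
OAI_DC_RECORD = """\
<oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
           xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>A Spectrum of Interoperability</dc:title>
  <dc:creator>Example Author</dc:creator>
  <dc:date>2002</dc:date>
</oai_dc:dc>
"""

# Clark notation for the Dublin Core element namespace.
DC = "{http://purl.org/dc/elements/1.1/}"

root = ET.fromstring(OAI_DC_RECORD)
titles = [el.text for el in root.findall(DC + "title")]
creators = [el.text for el in root.findall(DC + "creator")]
print(titles, creators)
```

Because every OAI-compliant repository must supply unqualified Dublin Core, this one parsing routine works against any of them, which is the "low-barrier" point made above.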
Conclusions
RQ The many players involved in cultural heritage need to work together to define standards and best practices.
Type
Electronic Journal
Title
Review: Some Comments on Preservation Metadata and the OAIS Model
CA Criticizes some of the limitations of OAIS and makes suggestions for improvements and clarifications. Also suggests that OAIS may be too library-centric, to the detriment of archival and especially recordkeeping needs. "In this article I have tried to articulate some of the main requirements for the records and archival community in preserving (archival) records. Based on this, the conclusion has to be that some adaptations to the [OAIS] model and metadata set would be necessary to meet these requirements. This concerns requirements such as the concept of authenticity of records, information on the business context of records and on relationships between records ('documentary context')." (p. 20)
Phrases
<P1> It requires records managers and archivists (and perhaps other information professionals) to be aware of these differences [in terminology] and to make a translation of such terms to their own domain. (p. 15) <P2> When applying the metadata model for a wider audience, more awareness of the issue of terminology is required, for instance by including clear definitions of key terms. (p. 15) <P3> The extent to which the management of objects can be influenced differs with respect to the type of objects. In the case of (government) records, legislation governs their creation and management, whereas, in the case of publications, the influence will be mostly based on agreements between producers, publishers and preservers. (p. 16) <P4> [A]lthough the suggestion may sometimes be otherwise, preservation metadata do not only apply to what is under the custody of a cultural or other preserving institution, but should be applied to the whole lifecycle of digital objects. ... Preservation can be viewed as part of maintenance. <warrant> (p. 16) <P5> [B]y taking library community needs as leading (albeit implicitly), the approach is already restricting the types of digital objects. Managing different types of 'digital objects', e.g. publications and records, may require not entirely similar sets of metadata. (p. 16) <P6> Another issue is that of the requirements governing the preservation processes. ... There needs to be insight and, as a consequence, also metadata about the preservation strategies, policies and methods, together with the context in which the preservation takes place. <warrant> (p. 16) <P7> [W]hat do we want to preserve? Is it the intellectual content with the functionality it has to have in order to make sense and achieve its purpose, or is it the digital components that are necessary to reproduce it or both? (p. 16-17) <P8> My view is that 'digital objects' should be seen as objects having both conceptual and technical aspects that are closely interrelated. As a consequence of the explanation given above, a digital object may consist of more than one 'digital component'. The definition given in the OAIS model is therefore insufficient. (p. 17) <P9> [W]e have no fewer than five metadata elements that could contain information on what should be rendered and presented on the screen. How all these elements relate to each other, if at all, is unclear. (p. 17) <P10> What we want to achieve ... is that in the future we will still be able to see, read and understand the documents or other information entities that were once produced for a certain purpose and in a certain context. In trying to achieve this, we of course need to preserve these digital components, but, as information technology will evolve, these components have to be migrated or in some cases emulated to be usable on future hard- and software platforms. (p. 17) <P11> I would like to suggest including an element that reflects the original technical environment. (p. 18) <P12> Records, according to the recently published ISO records management standard 15489, are 'information created, received and maintained as evidence and information by an organisation or person, in pursuance of legal obligations or in the transaction of business'. ... The main requirements for records to serve as evidence or authoritative information sources are ... authenticity and integrity, and knowledge about the business context and about the interrelationship between records (e.g. in a case file). <warrant> (p. 18) <P13> It would have been helpful if there had been more acknowledgement of the issue of authenticity and the requirements for it, and if the Working Group had provided some background information about its view and considerations on this aspect and to what extent it is included or not. (p. 19) <P14> In order to be able to preserve (archival) records it will ... be necessary to extend the information model with another class of information that refers to business context. Such a subset could provide a structure for describing what in archival terminology is called information about 'provenance' (with a different meaning from that in OAIS). (p. 19) <P15> In order to accommodate the identified complexity it is necessary to distinguish at least between the following categories of relationships: relationships between intellectual objects ... in the archival context this is referred to as 'documentary context'; relationships between the (structural) components of one intellectual object ... ; [and] relationships between digital components. (p. 19-20) <P16> [T]he issue of appraisal and disposition of records has to be included. In this context the recently published records management standard (ISO 15489) may serve as a useful framework. It would make the OAIS model even more widely applicable. (p. 20)
Conclusions
RQ "There are some issues ... which need further attention. They concern on the one hand the scope and underlying concepts of the OAIS model and the resulting metadata set as presented, and on the other hand the application of the model and metadata set in a records and archival environment. ... [T]he distinction between physical and conceptual or intellectual aspects of a digital object should be made more explicit and will probably have an impact on the model and metadata set also. More attention also needs to be given to the relationship between the (preservation) processes and the metadata. ... In assessing the needs of the records and archival community, the ISO records management standard 15489 may serve as a very useful framework. Such an exercise would also include a test for applicability of the model and metadata set for record-creating organisations and, as such, broaden the view of the OAIS model." (p. 20)
SOW
DC OAIS emerged out of an initiative spearheaded by NASA's Consultative Committee for Space Data Systems. It has been shaped and promoted by the RLG and OCLC. Several international projects have played key roles in shaping the OAIS model and adapting it for use in libraries, archives and research repositories. OAIS-modeled repositories include the CEDARS Project, Harvard's Digital Repository, Koninklijke Bibliotheek (KB), the Library of Congress' Archival Information Package for audiovisual materials, MIT's D-Space, OCLC's Digital Archive and TERM: the Texas Email Repository Model.
Type
Report
Title
RLG Best Practice Guidelines for Encoded Archival Description
These award-winning guidelines, released in August 2002, were developed by the RLG EAD Advisory Group to provide practical, community-wide advice for encoding finding aids. They are designed to: facilitate interoperability of resource discovery by imposing a basic degree of uniformity on the creation of valid EAD-encoded documents; encourage the inclusion of particular elements; and develop a set of core data elements.
Publisher
Research Libraries Group
Publication Location
Mountain View, CA, USA
Language
English
Critical Arguments
<CA> The objectives of the guidelines are: 1. To facilitate interoperability of resource discovery by imposing a basic degree of uniformity on the creation of valid EAD-encoded documents and to encourage the inclusion of elements most useful for retrieval in a union index and for display in an integrated (cross-institutional) setting; 2. To offer researchers the full benefits of XML in retrieval and display by developing a set of core data elements to improve resource discovery. It is hoped that by identifying core elements and by specifying "best practice" for those elements, these guidelines will be valuable to those who create finding aids, as well as to vendors and tool builders; 3. To contribute to the evolution of the EAD standard by articulating a set of best practice guidelines suitable for interinstitutional and international use. These guidelines can be applied to both retrospective conversion of legacy finding aids and the creation of new finding aids.  
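As a rough illustration of what EAD encoding involves, the sketch below builds a minimal EAD 2002 fragment programmatically. The collection described is invented, and the handful of elements shown (`eadheader`, `eadid`, `archdesc`, `did`, `unittitle`, `unitdate`) is an illustrative core, not the guidelines' normative element list.

```python
import xml.etree.ElementTree as ET

# Build a skeletal EAD 2002 finding aid. Real finding aids carry far more:
# <filedesc>, <origination>, <scopecontent>, component-level <dsc>, etc.
ead = ET.Element("ead")

header = ET.SubElement(ead, "eadheader")
ET.SubElement(header, "eadid").text = "example-0001"  # hypothetical identifier

# Collection-level description: the <did> groups the core identifying elements.
archdesc = ET.SubElement(ead, "archdesc", level="collection")
did = ET.SubElement(archdesc, "did")
ET.SubElement(did, "unittitle").text = "Example Family Papers"
ET.SubElement(did, "unitdate", normal="1890/1920").text = "1890-1920"

print(ET.tostring(ead, encoding="unicode"))
```

Note the machine-readable `normal` attribute alongside the human-readable date text: encoding both is the kind of uniformity that lets finding aids from many repositories be indexed and displayed together.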
Conclusions
<RQ>
SOW
<DC> "RLG organized the EAD working group as part of our continuing commitment to making archival collections more accessible on the Web. We offer RLG Archival Resources, a database of archival materials; institutions are encouraged to submit their finding aids to this database." ... "This set of guidelines, the second version promulgated by RLG, was developed between October 2001 and August 2002 by the RLG EAD Advisory Group. This group consisted of ten archivists and digital content managers experienced in creating and managing EAD-encoded finding aids at repositories in the United States and the United Kingdom."
CA This is the first of four articles describing Geospatial Standards and the standards bodies working on these standards. This article will discuss what geospatial standards are and why they matter, identify major standards organizations, and list the characteristics of successful geospatial standards.
Conclusions
RQ Which federal and international standards have been agreed upon since this article's publication?
SOW
DC FGDC approved the Content Standard for Digital Geospatial Metadata (FGDC-STD-001-1998) in June 1998. FGDC is a 19-member interagency committee composed of representatives from the Executive Office of the President, Cabinet-level and independent agencies. The FGDC is developing the National Spatial Data Infrastructure (NSDI) in cooperation with organizations from State, local and tribal governments, the academic community, and the private sector. The NSDI encompasses policies, standards, and procedures for organizations to cooperatively produce and share geographic data.
Type
Web Page
Title
An Assessment of Options for Creating Enhanced Access to Canada's Audio-Visual Heritage
CA "This project was conducted by Paul Audley & Associates to investigate the feasibility of single window access to information about Canada's audio-visual heritage. The project follows on the recommendations of Fading Away, the 1995 report of the Task Force on the Preservation and Enhanced Use of Canada's Audio-Visual Heritage, and the subsequent 1997 report Search + Replay. Specific objectives of this project were to create a profile of selected major databases of audio-visual materials, identify information required to meet user needs, and suggest models for single-window access to audio-visual databases. Documentary research, some 35 interviews, and site visits to organizations in Vancouver, Toronto, Ottawa and Montreal provided the basis upon which the recommendations of this report were developed."
CA "The purpose of this document is: (1) To provide a better understanding of the functionality that the MPEG-21 multimedia framework should be capable of providing; (2) To offer high level descriptions of different MPEG-21 applications against which the formal requirements for MPEG-21 can be checked; (3) To act as a basis for devising Core Experiments which establish proof of concept; (4) To provide a point of reference to support the evaluation of responses submitted against ongoing MPEG-21 Calls for Proposals; (5) To be a 'Public Relations' instrument that can help to explain what MPEG-21 is about."
Conclusions
RQ not applicable
SOW
DC The Moving Picture Experts Group (MPEG) is a working group of ISO/IEC, made up of some 350 members from various industries and universities, in charge of the development of international standards for compression, decompression, processing, and coded representation of moving pictures, audio and their combination. MPEG's official designation is ISO/IEC JTC1/SC29/WG11. So far MPEG has produced the following compression formats and ancillary standards: MPEG-1, the standard for storage and retrieval of moving pictures and audio on storage media (approved Nov. 1992); MPEG-2, the standard for digital television (approved Nov. 1994); MPEG-4, the standard for multimedia applications; MPEG-7, the content representation standard for multimedia information search, filtering, management and processing; and MPEG-21, the multimedia framework.
CA Discussion of the challenges faced by librarians and archivists who must determine which and how much of the mass amounts of digitally recorded sound materials to preserve. Identifies various types of digital sound formats and the varying standards to which they are created. Specific challenges discussed include copyright issues; technologies and platforms; digitization and preservation; and metadata and other standards.
Conclusions
RQ "Whether between record companies and archives or with others, some type of collaborative approach to audio preservation will be necessary if significant numbers of audio recordings at risk are to be preserved for posterity. ... One particular risk of preservation programs now is redundancy. ... Inadequate cataloging is a serious impediment to preservation efforts. ... It would be useful to archives, and possibly to intellectual property holders as well, if archives could use existing industry data for the bibliographic control of published recordings and detailed listings of the music recorded on each disc or tape. ... Greater collaboration between libraries and the sound recording industry could result in more comprehensive catalogs that document recording sessions with greater specificity. With access to detailed and authoritative information about the universe of published sound recordings, libraries could devote more resources to surveying their unpublished holdings and collaborate on the construction of a preservation registry to help reduce preservation redundancy. ... Many archivists believe that adequate funding for preservation will not be forthcoming unless and until the recordings preserved can be heard more easily by the public. ... If audio recordings that do not have mass appeal are to be preserved, that responsibility will probably fall to libraries and archives. Within a partnership between archives and intellectual property owners, archives might assume responsibility for preserving less commercial music in return for the ability to share files of preserved historical recordings."
Type
Web Page
Title
CDL Digital Object Standard: Metadata, Content and Encoding
This document addresses the standards for digital object collections for the California Digital Library. Adherence to these standards is required for all CDL contributors and may also serve University of California staff as guidelines for digital object creation and presentation. These standards are not intended to address all of the administrative, operational, and technical issues surrounding the creation of digital object collections.
Critical Arguments
CA These standards describe the file formats, storage and access standards for digital objects created by or incorporated into the CDL as part of the permanent collections. They attempt to balance adherence to industry standards, reproduction quality, access, potential longevity and cost.
Conclusions
RQ not applicable
SOW
DC "This is the first version of the CDL Digital Object Standard. This version is based upon the September 1, 1999 version of the CDL's Digital Image Standard, which included recommendations of the Museum Educational Site Licensing Project (MESL), the Library of Congress and the MOA II participants." ... "The Museum Educational Site Licensing Project (MESL) offered a framework for seven collecting institutions, primarily museums, and seven universities to experiment with new ways to distribute visual information--both images and related textual materials." ... "The Making of America (MoA II) Testbed Project is a Digital Library Federation (DLF) coordinated, multi-phase endeavor to investigate important issues in the creation of an integrated, but distributed, digital library of archival materials (i.e., digitized surrogates of primary source materials found in archives and special collections). The participants include Cornell University, New York Public Library, Pennsylvania State University, Stanford University and UC Berkeley. The Library of Congress white papers and standards are based on the experience gained during the American Memory Pilot Project. The concepts discussed and the principles developed still guide the Library's digital conversion efforts, although they are under revision to accommodate the capabilities of new technologies and new digital formats." ...
"The CDL Technical Architecture and Standards Workgroup includes the following members with extensive experience with digital object collection and management: Howard Besser, MESL and MOA II digital imaging testbed projects; Diane Bisom, University of California, Irvine; Bernie Hurley, MOA II, University of California, Berkeley; Greg Janee, Alexandria Digital Library; John Kunze, University of California, San Francisco; Reagan Moore and Chaitanya Baru, San Diego Supercomputer Center, ongoing research with the National Archives and Records Administration on the long term storage and retrieval of digital content; Terry Ryan, University of California, Los Angeles; David Walker, California Digital Library"
There are many types of standards used to manage museum collections information. These "standards", which range from precise technical standards to general guidelines, enable museum data to be efficiently and consistently indexed, sorted, retrieved, and shared, both in automated and paper-based systems. Museums often use metadata standards (also called data structure standards) to help them: define what types of information to record in their database (or card catalogue); structure this information (the relationships between the different types of information). Following (or mapping data to) these standards makes it possible for museums to move their data between computer systems, or share their data with other organizations.
Notes
The CHIN Web site features sections dedicated to Creating and Managing Digital Content, Intellectual Property, Collections Management, Standards, and more. CHIN's array of training tools, online publications, directories and databases are especially designed to meet the needs of both small and large institutions. The site also provides access to up-to-date information on topics such as heritage careers, funding and conferences.
Critical Arguments
CA "Museums often want to use their collections data for many purposes, (exhibition catalogues, Web access for the public, and curatorial research, etc.), and they may want to share their data with other museums, archives, and libraries in an automated way. This level of interoperability between systems requires cataloguing standards, value standards, metadata standards, and interchange standards to work together. Standards enable the interchange of data between cataloguer and searcher, between organizations, and between computer systems."
Conclusions
RQ "CHIN is also involved in a project to create metadata for a pan-Canadian inventory of learning resources available on Canadian museum Web sites. Working in consultation with the Consortium for the Interchange of Museum Information (CIMI), the Gateway to Educational Materials (GEM) [link to GEM in Section G], and SchoolNet, the project involves the creation of a Guide to Best Practices and cataloguing tool for generating metadata for online learning materials."
SOW
DC "CHIN is involved in the promotion, production, and analysis of standards for museum information. The CHIN Guide to Museum Documentation Standards includes information on: standards and guidelines of interest to museums; current projects involving standards research and implementation; organizations responsible for standards research and development; Links." ... "CHIN is a member of CIMI (the Consortium for the Interchange of Museum Information), which works to enable the electronic interchange of museum information. From 1998 to 1999, CHIN participated in a CIMI Metadata Testbed which aimed to explore the creation and use of metadata for facilitating the discovery of electronic museum information. Specifically, the project explored the creation and use of Dublin Core metadata in describing museum collections, and examined how Dublin Core could be used as a means to aid in resource discovery within an electronic, networked environment such as the World Wide Web." 
This document provides some background on preservation metadata for those interested in digital preservation. It first attempts to explain why preservation metadata is seen as an essential part of most digital preservation strategies. It then gives a broad overview of the functional and information models defined in the Reference Model for an Open Archival Information System (OAIS) and describes the main elements of the Cedars outline preservation metadata specification. The next sections take a brief look at related metadata initiatives, make some recommendations for future work and comment on cost issues. At the end there are some brief recommendations for collecting institutions and the creators of digital content followed by some suggestions for further reading.
Critical Arguments
CA "This document is intended to provide a brief introduction to current preservation metadata developments and introduce the outline metadata specifications produced by the Cedars project. It is aimed in particular at those who may have responsibility for digital preservation in the UK further and higher education community, e.g. senior staff in research libraries and computing services. It should also be useful for those undertaking digital content creation (digitisation) initiatives, although it should be noted that specific guidance on this is available elsewhere. The guide may also be of interest to other kinds of organisations that have an interest in the long-term management of digital resources, e.g. publishers, archivists and records managers, broadcasters, etc. This document aims to provide: A rationale for the creation and maintenance of preservation metadata to support digital preservation strategies, e.g. migration or emulation; An introduction to the concepts and terminology used in the influential ISO Reference Model for an Open Archival Information System (OAIS); Brief information on the Cedars outline preservation metadata specification and the outcomes of some related metadata initiatives; Some notes on the cost implications of preservation metadata and how these might be reduced."
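The OAIS information model this guide introduces can be sketched as a data structure. An Archival Information Package pairs Content Information with Preservation Description Information, whose four classes here (provenance, reference, fixity, context) follow the reference model; the field types and example values are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ContentInformation:
    data_object: bytes        # the bit stream being preserved
    representation_info: str  # what is needed to interpret it, e.g. a format spec

@dataclass
class PreservationDescriptionInformation:
    provenance: list[str]     # history of custody and processing
    reference: str            # persistent identifier for the object
    fixity: str               # integrity check, e.g. a checksum
    context: str              # relationship to other objects

@dataclass
class ArchivalInformationPackage:
    content: ContentInformation
    pdi: PreservationDescriptionInformation

# A hypothetical package for a deposited document.
aip = ArchivalInformationPackage(
    content=ContentInformation(b"...", "PDF 1.4 specification"),
    pdi=PreservationDescriptionInformation(
        provenance=["deposited 2001 by the author"],
        reference="urn:example:0001",
        fixity="md5:d41d8cd98f00b204e9800998ecf8427e",
        context="part of the Example working papers series",
    ),
)
print(aip.pdi.reference)
```

The point of the structure is that the bit stream alone is not the package: without representation information and PDI, a future user could neither render the object nor trust it.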
Conclusions
RQ "In June 2000, a group of archivists, computer scientists and metadata experts met in the Netherlands to discuss metadata developments related to recordkeeping and the long-term preservation of archives. One of the key conclusions made at this working meeting was that the recordkeeping metadata communities should attempt to co-operate more with other metadata initiatives. The meeting also suggested research into the contexts of creation and use, e.g. identifying factors that might encourage or discourage creators from meeting recordkeeping metadata requirements. This kind of research would also be useful for wider preservation metadata developments. One outcome of this meeting was the setting up of an Archiving Metadata Forum (AMF) to form the focus of future developments." ... "Future work on preservation metadata will need to focus on several key issues. Firstly, there is an urgent need for more practical experience of undertaking digital preservation strategies. Until now, many preservation metadata initiatives have largely been based on theoretical considerations or high-level models like the OAIS. This is not in itself a bad thing, but it is now time to begin to build metadata into the design of working systems that can test the viability of digital preservation strategies in a variety of contexts. This process has already begun in initiatives like the Victorian Electronic Records Strategy and the San Diego Supercomputer Center's 'self-validating knowledge-based archives'. A second need is for increased co-operation between the many metadata initiatives that have an interest in digital preservation. This may include the comparison and harmonisation of various metadata specifications, where this is possible. The OCLC/RLG working group is an example of how this has been taken forward within a particular domain.
There is a need for additional co-operation with recordkeeping metadata specialists, computing scientists and others in the metadata research community. Thirdly, there is a need for more detailed research into how metadata will interact with different formats, preservation strategies and communities of users. This may include some analysis of what metadata could be automatically extracted as part of the ingest process, an investigation of the role of content creators in metadata provision, and the production of user requirements." ... "Also, thought should be given to the development of metadata standards that will permit the easy exchange of preservation metadata (and information packages) between repositories." ... "As well as ensuring that digital repositories are able to facilitate the automatic capture of metadata, some thought should also be given to how best digital repositories could deal with any metadata that might already exist."
SOW
DC "Funded by JISC (the Joint Information Systems Committee of the UK higher education funding councils), as part of its Electronic Libraries (eLib) Programme, Cedars was the only project in the programme to focus on digital preservation." ... "In the digital library domain, the development of a recommendation on preservation metadata is being co-ordinated by a working group supported by OCLC and the RLG. The membership of the working group is international, and includes key individuals who were involved in the development of the Cedars, NEDLIB and NLA metadata specifications."
The CDISC Submission Metadata Model was created to help ensure that the supporting metadata for these submission datasets meets the following objectives: Provide FDA reviewers with clear descriptions of the usage, structure, contents, and attributes of all datasets and variables; Allow reviewers to replicate most analyses, tables, graphs, and listings with minimal or no transformations; Enable reviewers to easily view and subset the data used to generate any analysis, table, graph, or listing without complex programming. ... The CDISC Submission Metadata Model has been defined to guide sponsors in the preparation of data that is to be submitted to the FDA. By following the principles of this model, sponsors will help reviewers to accurately interpret the contents of submitted data and work with it more effectively, without sacrificing the scientific objectives of clinical development.
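The objectives above hinge on capturing a consistent set of attributes for every submitted dataset. A minimal sketch of such a dataset-level record in Python; the attribute names (name, description, structure, key variables, location) are illustrative assumptions loosely following the kinds of properties the model describes, not the normative CDISC attribute set:

```python
# Illustrative sketch only: attribute names are assumptions, NOT the
# normative CDISC attribute set.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetMetadata:
    name: str                    # dataset name, e.g. "AE"
    description: str             # what the dataset contains
    structure: str               # e.g. "one record per adverse event per subject"
    key_variables: List[str] = field(default_factory=list)
    location: str = ""           # file name within the submission

ae = DatasetMetadata(
    name="AE",
    description="Adverse Events",
    structure="one record per adverse event per subject",
    key_variables=["USUBJID", "AESEQ"],
    location="ae.xpt",
)

# A reviewer-facing listing can be generated directly from such records.
print(f"{ae.name}: {ae.description} ({ae.structure}), keys: {', '.join(ae.key_variables)}")
```

Records of this kind are what let a reviewer locate, subset, and re-analyse a dataset without guessing at its contents.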
Publisher
The Clinical Data Interchange Standards Consortium
Critical Arguments
CA "The CDISC Submission Data Model has focused on the use of effective metadata as the most practical way of establishing meaningful standards applicable to electronic data submitted for FDA review."
Conclusions
RQ "Metadata prepared for a domain (such as an efficacy domain) which has not been described in a CDISC model should follow the general format of the safety domains, including the same set of core selection variables and all of the metadata attributes specified for the safety domains. Additional examples and usage guidelines are available on the CDISC web site at www.cdisc.org." ... "The CDISC Metadata Model describes the structure and form of data, not the content. However, the varying nature of clinical data in general will require the sponsor to make some decisions about how to represent certain real-world conditions in the dataset. Therefore, it is useful for a metadata document to give the reviewer an indication of how the datasets handle certain special cases."
SOW
DC CDISC is an open, multidisciplinary, non-profit organization committed to the development of worldwide standards to support the electronic acquisition, exchange, submission and archiving of clinical trials data and metadata for medical and biopharmaceutical product development. CDISC members work together to establish universally accepted data standards in the pharmaceutical, biotechnology and device industries, as well as in regulatory agencies worldwide. CDISC currently has more than 90 members, including the majority of the major global pharmaceutical companies.
Type
Web Page
Title
CDISC Achieves Two Significant Milestones in the Development of Models for Data Interchange
CA "The Clinical Data Interchange Standards Consortium has achieved two significant milestones towards its goal of standard data models to streamline drug development and regulatory review processes. CDISC participants have completed metadata models for the 12 safety domains listed in the FDA Guidance regarding Electronic Submissions and have produced a revised XML-based data model to support data acquisition and archive."
Conclusions
RQ "The goal of the CDISC XML Document Type Definition (DTD) Version 1.0 is to make available a first release of the definition of this CDISC model, in order to support sponsors, vendors and CROs in the design of systems and processes around a standard interchange format."
SOW
DC "This team, under the leadership of Wayne Kubick of Lincoln Technologies, and Dave Christiansen of Genentech, presented their metadata models to a group of representatives at the FDA on Oct. 10, and discussed future cooperative efforts with Agency reviewers."... "CDISC is a non-profit organization with a mission to lead the development of standard, vendor-neutral, platform-independent data models that improve process efficiency while supporting the scientific nature of clinical research in the biopharmaceutical and healthcare industries"
Type
Web Page
Title
eXtensible rights Markup Language (XrML) 2.0 Specification Part I: Primer
This specification defines the eXtensible rights Markup Language (XrML), a general-purpose language in XML used to describe the rights and conditions for using digital resources.
Publisher
ContentGuard
Critical Arguments
CA This chapter provides an overview of XrML. It provides a basic definition of XrML, describes the need that XrML is meant to address, and explains design goals for the language.
Conclusions
RQ Not applicable.
SOW
DC ContentGuard contributed XrML to MPEG-21, the OASIS Rights Language Technical Committee and the Open eBook Forum (OeBF). In each case they are using XrML as the base for their rights language specification. Furthest along is MPEG, where the process has reached Committee Draft. They have also recommended to other standards bodies to build on this work. ContentGuard will propose XrML to any standards organization seeking a rights language. Because of this progress ContentGuard has frozen its release of XrML at Version 2.0.
CA ContentGuard intends to submit XrML to standards bodies that are developing specifications that enable the exchange and trading of content as well as the creation of repositories for storage and management of digital content.
Type
Web Page
Title
PBCore: Public Broadcasting Metadata Dictionary Project
CA "PBCore is designed to provide -- for television, radio and Web activities -- a standard way of describing and using media (video, audio, text, images, rich interactive learning objects). It allows content to be more easily retrieved and shared among colleagues, software systems, institutions, community and production partners, private citizens, and educators. It can also be used as a guide for the onset of an archival or asset management process at an individual station or institution. ... The Public Broadcasting Metadata Dictionary (PBCore) is: a core set of terms and descriptors (elements) used to create information (metadata) that categorizes or describes media items (sometimes called assets or resources)."
Conclusions
RQ The PBCore Metadata Elements are currently in their first published edition, Version 1.0. Over two years of research and lively discussions have generated this version. ... As various users and communities begin to implement the PBCore, updates and refinements to the PBCore are likely to occur. Any changes will be clearly identified, ramifications outlined, and published to our constituents.
SOW
DC "Initial development funding for PBCore was provided by the Corporation for Public Broadcasting. The PBCore is built on the foundation of the Dublin Core (ISO 15836) ... and has been reviewed by the Dublin Core Metadata Initiative Usage Board. ... PBCore was successfully deployed in a number of test implementations in May 2004 in coordination with WGBH, Minnesota Public Radio, PBS, National Public Radio, Kentucky Educational Television, and recognized metadata expert Grace Agnew. As of July 2004 in response to consistent feedback to make metadata standards easy to use, the number of metadata elements was reduced to 48 from the original set of 58 developed by the Metadata Dictionary Team. Also, efforts are ongoing to provide more focused metadata examples that are specific to TV and radio. ... Available free of charge to public broadcasting stations, distributors, vendors, and partners, version 1.0 of PBCore was launched in the first quarter of 2005. See our Licensing Agreement via the Creative Commons for further information. ... Plans are under way to designate an Authority/Maintenance Organization."
Type
Web Page
Title
Schema Registry: activityreports: Recordkeeping Metadata Standard for Commonwealth Agencies
CA "The Australian SPIRT Recordkeeping Metadata Project was initially a project funded under a programme known as the Strategic Partnership with Industry -- Research and Training (SPIRT) Support Grant -- partly funded by the Australian Research Council. The project was concerned with developing a framework for standardising and defining recordkeeping metadata and produced a metadata element set eventually known as the Australian Recordkeeping Metadata Schema (RKMS). The conceptual frame of reference in the project was based in Australian archival practice, including the Records Continuum Model and the Australian Series System. The RKMS also inherits part of the Australian Government Locator Service (AGLS) metadata set."
This paper discusses how metadata standards can help organizations comply with the ISO 9000 standards for quality systems. It provides a brief overview of metadata, ISO 9000 and related records management standards. It then analyses in some depth the ISO 9000 requirements for quality records, and outlines the problems that some organizations have in complying with them. It also describes the metadata specifications developed by the University of Pittsburgh Electronic Recordkeeping project and the SPIRT Recordkeeping Metadata project in Australia and discusses the role of metadata in meeting ISO 9000 requirements for the creation and preservation of reliable, authentic and accessible records.
Publisher
Records Continuum Research Group
Critical Arguments
CA "During the last few years a number of research projects have studied the types of metadata needed to create, manage and make accessible quality records, i.e. reliable, authentic and useable records. This paper will briefly discuss the purposes of recordkeeping metadata, with reference to emerging records management standards, and the models presented by two projects, one in the United States and one in Australia. It will also briefly review the ISO 9000 requirements for records and illustrate how metadata can help an organization meet these requirements."
Conclusions
RQ "Quality records provide many advantages for organizations and can help companies meet the ISO 9000 certification. However, systems must be designed to create the appropriate metadata to ensure they comply with recordkeeping requirements, particularly those identified by records management standards like AS 4390 and the proposed international standard, which provide benchmarks for recordkeeping best practice. The Pittsburgh metadata model and the SPIRT framework provide organizations with standardized sets of metadata that would ensure the creation, preservation and accessibility of reliable, authentic and meaningful records for as long as they are of use. In deciding what metadata to capture, organisations should consider the cost of meeting the requirements of the ISO 9000 guidelines and any related records management best practice standards, and the possible risk of not meeting these requirements."
CA In March 2003, the intention of undertaking an international survey of LOM implementations was announced at the plenary meeting of the "Information Technology for Learning, Education and Training", ISO/IEC JTC1/SC36 sub-committee. The ISO/IEC JTC1/SC36 committee is international in both membership and emphasis, and has a working group, Working Group (WG) 4, "Management and Delivery for Learning, Education, and Training," which has been explicitly charged with the task of contributing to future standardization work on the LOM. <warrant> The international LOM Survey focuses on two questions: 1) "Which elements were selected for use or population?"; and 2) "How were these elements used, or what were the types of values assigned to them?" This report also attempts to draw a number of tentative suggestions and conclusions for further standardization work.
Conclusions
RQ Based on its findings, the preliminary survey report was able to suggest a number of conclusions: First, fewer and better-defined elements may be more effective than the range of choice and interpretive possibilities currently allowed by the LOM. This seems to be especially the case regarding educational elements, which are surprisingly underutilized for metadata that is ostensibly and primarily educational. Second, clear and easily-supported means of working with local, customized vocabularies would also be very valuable. Third, it also seems useful to ensure that structures are provided to accommodate complex but more conventional aspects of resource description. These would include multiple title versions, as well as multilingual descriptions and values.
SOW
DC On June 12, 2002, 1484.12.1 - 2002 Learning Object Metadata (LOM) was approved by the IEEE-Standards Association.
Type
Web Page
Title
Towards a Digital Rights Expression Language Standard for Learning Technology
CA The Learning Technology Standards Committee (LTSC) of the Institute of Electrical and Electronics Engineers (IEEE) concentrated on making recommendations for standardizing a digital rights expression language (DREL) with the specific charge to (1) Investigate existing standards development efforts for DREL and digital rights. (2) Gather DREL requirements germane to the learning, education, and training industries. (3) Make recommendations as to how to proceed. (4) Feed requirements into ongoing DREL and digital rights standardization efforts, regardless of whether the LTSC decides to work with these efforts or embark on its own. This report represents the achievement of these goals in the form of a white paper that can be used as a reference for the LTSC, reports on the current state of existing and proposed standardization efforts targeting digital rights expression languages, and makes recommendations concerning future work.
Conclusions
RQ The recommendations of this report are: 1. Maintain appropriate liaisons between learning technology standards development organizations and those standards development organizations standardizing rights expression languages. The purpose of these liaisons is to continue to feed requirements into broader standardization efforts and to ensure that the voice of the learning, education and training community is heard. 2. Support the creation of application profiles or extensions of XrML and ODRL that include categories and vocabularies for roles common in educational and training settings. In the case of XrML, a namespace for local context may be needed. (A namespace is required for both XrML and ODRL for the "application profile", or specifically the application (LT application) extension.) 3. Advocate the creation of a standard for expressing local policies in ways that can be mapped to rights expressions. This could be either through a data model or through the definition of an API or service. 4. Launch an initiative to identify models of rights enforcement in learning technology and to possibly abstract a common model for use by architecture and framework definition projects. 5. Further study the implications of patent claims, especially for educational and research purposes.
CA Overview of the program, including keynote speakers, papers presented, invited talks, future directions and next steps.
Conclusions
RQ Some steps to be taken: (1) Investigate potential move to a formal standards body/group and adopt their procedures and processes. Potential groups include; W3C, OASIS, ECMA, IEEE, IETF, CEN/ISS, Open Group. The advantages and disadvantages of such a move will be documented and discussed within the ODRL community. (2) Potential to submit current ODRL version to national bodies for adoption. (3) Request formal liaison relationship with the OMA. <warrant>
This document is a draft version 1.0 of requirements for a metadata framework to be used by the International Press Telecommunications Council for all new and revised IPTC standards. It was worked on and agreed to by members of the IPTC Standards Committee, who represented a variety of newspaper, wire agencies, and other interested members of the IPTC.
Notes
Misha Wolf is also listed as an author.
Publisher
International Press Telecommunications Council (IPTC)
Critical Arguments
CA "This Requirements document forms part of the programme of work called IPTC Roadmap 2005. The Specification resulting from these Requirements will define the use of metadata by all new IPTC standards and by new major versions of existing IPTC standards." (p. 1) ... "The purpose of the News Metadata Framework (NMDF) WG is to specify how metadata will be expressed, referenced, and managed in all new major versions of IPTC standards. The NMDF WG will: Gather, discuss, agree and document functional requirements for the ways in which metadata will be expressed, referenced and managed in all new major versions of IPTC standards; Discuss, agree and document a model, satisfying these requirements; Discuss, agree and document possible approaches to expressing this model in XML, and select those most suited to the tasks. In doing so, the NMDF WG will, where possible, make use of the work of other standards bodies." (p. 2)
Conclusions
RQ "Open issues include: The versioning of schemes, including major and minor versions, and backward compatibility; the versioning of TopicItems; The design of URIs for TopicItem schemes and TopicItem collections, including the issues of: versions (relating to TopicItems, schemes, and collections); representations (relating to TopicItems and collections); The relationship between a [scheme, code] pair, the corresponding URI and the scheme URI." (p. 17)
SOW
DC The development of this framework came out of the 2003 News Standards Summit, which was attended by representatives from over 80 international press and information agencies ... "The News Standards Summit brings together major players--experts on news metadata standards as well as commercial news providers, users, and aggregators. Together, they will analyze the current state and future expectations for news and publishing XML and metadata efforts from both the content and processing model perspectives. The goal is to increase understanding and to drive practical, productive convergence." ... This is a draft version of the standard.
Type
Web Page
Title
Metadata Reference Guide: ONIX ONline Information eXchange
CA According to EDItEUR, the group responsible for the maintenance of the ONIX standard, ONIX is the international standard for representing book, serial, and video product information in electronic form.
Type
Web Page
Title
NHPRC: Minnesota State Archives Strategic Plan: Electronic Records Consultant Project
National Historical Publications and Records Commission Grant No. 95-030
Critical Arguments
CA "The Electronic Records Consultant Project grant was carried out in conjunction with the strategic planning effort for the Minnesota Historical Society's State Archives program. The objective was to develop a plan for a program that will be responsive to the changing nature of government records." ... "The strategic plan that was developed calls for specific actions to meet five goals: 1) strengthening partnerships, 2) facilitating the identification of historically valuable records, 3) integrating electronic records into the existing program, 4) providing quality public service, and 5) structuring the State Archives Department to meet the demands of this plan."
Type
Web Page
Title
Minnesota Recordkeeping Metadata Standard (IRM Standard 20, Version 1.2)
<P1> The Minnesota Recordkeeping Metadata Standard is referenced as a "current standard" in the Minnesota Enterprise Technical Architecture under Chapter 4, "Data and Records Management Architecture." State agencies bound by the Architecture should reference that document for compliance requirements. <P2> The Minnesota Recordkeeping Metadata Standard is directly based upon the one developed by the National Archives of Australia (NAA), the Recordkeeping Metadata Standard for Commonwealth Agencies, version 1.0, May 1999. (p. 7) <warrant> <P3> The Minnesota Recordkeeping Metadata Standard (Minnesota Office of Technology standard IRM 20) was developed to facilitate records management by government entities at any level of government.
"The ERMS Metadata Standard forms Part 2 of the National Archives' 'Requirements for Electronic Records Management Systems' (commonly known as the '2002 Requirements'). It is specified in a technology independent manner, and is aligned with the e-Government Metadata Standard (e-GMS) version 2, April 2003. A version of e-GMS v2 including XML examples was published in the autumn of 2003. This Guide should be read in conjunction with the ERMS Metadata Standard. Readers may find the GovTalk Schema Guidelines (available via http://www.govtalk.gov.uk ) helpful regarding design rules used in building the schemas."
Conclusions
RQ Electronically enabled processes need to generate appropriate records, according to established records management principles. These records need to reach the ERMS that captures them with enough information to enable the ERMS to classify them appropriately, allocate an appropriate retention policy, etc.
SOW
DC This document is a draft.
Type
Web Page
Title
Recordkeeping Metadata Standard for Commonwealth Agencies
This standard describes the metadata that the National Archives of Australia recommends should be captured in the recordkeeping systems used by Commonwealth government agencies. ... Part One of the standard explains the purpose and importance of standardised recordkeeping metadata and details the scope, intended application and features of the standard. Features include: flexibility of application; repeatability of data elements; extensibility to allow for the management of agency-specific recordkeeping requirements; interoperability across systems environments; compatibility with related metadata standards, including the Australian Government Locator Service (AGLS) standard; and interdependency of metadata at the sub-element level.
Critical Arguments
CA Compliance with the Recordkeeping Metadata Standard for Commonwealth Agencies will help agencies to identify, authenticate, describe and manage their electronic records in a systematic and consistent way to meet business, accountability and archival requirements. In this respect the metadata is an electronic recordkeeping aid, similar to the descriptive information captured in file registers, file covers, movement cards, indexes and other registry tools used in the paper-based environment to apply intellectual and physical controls to records.
Conclusions
RQ "The National Archives intends to consult with agencies, vendors and other interested parties on the implementation and continuing evolution of the Recordkeeping Metadata Standard for Commonwealth Agencies." ... "The National Archives expects to re-examine and reissue the standard in response to broad agency feedback and relevant advances in theory and methodology." ... "The development of public key technology is one area the National Archives will monitor closely, in consultation with the Office for Government Online, for possible additions to a future version of the standard."
SOW
DC "This standard has been developed in consultation with recordkeeping software vendors endorsed by the Office for Government Online's Shared Systems Initiative, as well as selected Commonwealth agencies." ... "The standard has also been developed with reference to other metadata standards emerging in Australia and overseas to ensure compatibility, as far as practicable, between related resource management tools, including: the Dublin Core-derived Australian Government Locator Service (AGLS) metadata standard for discovery and retrieval of government services and information in web-based environments, co-ordinated by the National Archives of Australia; and the non-sector-specific Recordkeeping Metadata Standards for Managing and Accessing Information Resources in Networked Environments Over Time for Government, Social and Cultural Purposes, co-ordinated by Monash University using an Australian Research Council Strategic Partnership with Industry Research and Training (SPIRT) Support Grant."
Type
Web Page
Title
Use of Encoded Archival Description (EAD) for Manuscript Collection Finding Aids
Presented in 1999 to the Library's Collection Development & Management Committee, this report outlines support for implementing EAD in delivery of finding aids for library collections over the Web. It describes the limitations of HTML, provides an introduction to SGML, XML, and EAD, outlines the advantages of conversion from HTML to EAD, the conversion process, the proposed outcome, and sources for further information.
Publisher
National Library of Australia
Critical Arguments
CA As use of the World Wide Web has increased, so has the need of users to be able to discover web-based information resources easily and efficiently, and to be able to repeat that discovery in a consistent manner. Using SGML to mark up web-based documents facilitates such resource discovery.
Conclusions
RQ To what extent have the mainstream web browser companies fulfilled their committment to support native viewing of SGML/XML documents?
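The advantage of SGML/XML over HTML described in this report is that structurally meaningful elements can be generated and queried by software. A minimal sketch of such markup using Python's standard library; the element names follow common EAD usage (<ead>, <eadheader>, <archdesc>, <did>, <unittitle>), but the output is an illustration only, not a complete or valid EAD instance:

```python
# Sketch of an EAD-style finding-aid skeleton. Element names follow EAD
# usage, but this is an illustration, not a valid EAD document.
import xml.etree.ElementTree as ET

ead = ET.Element("ead")
header = ET.SubElement(ead, "eadheader")
ET.SubElement(header, "eadid").text = "example-0001"

archdesc = ET.SubElement(ead, "archdesc", attrib={"level": "collection"})
did = ET.SubElement(archdesc, "did")
ET.SubElement(did, "unittitle").text = "Papers of an Example Author"
ET.SubElement(did, "unitdate").text = "1901-1950"

xml_text = ET.tostring(ead, encoding="unicode")
print(xml_text)
```

Because <unittitle> and <unitdate> are explicit elements rather than undifferentiated HTML paragraphs, a search service can target them directly, which is the resource-discovery gain the report argues for.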
Joined-up government needs joined-up information systems. The e-Government Metadata Standard (e-GMS) lays down the elements, refinements and encoding schemes to be used by government officers when creating metadata for their information resources or designing search interfaces for information systems. The e-GMS is needed to ensure maximum consistency of metadata across public sector organisations.
Publisher
Office of the e-Envoy, Cabinet Office, UK.
Critical Arguments
CA "The e-GMS is concerned with the particular facets of metadata intended to support resource discovery and records management. The Standard covers the core set of 'elements' that contain data needed for the effective retrieval and management of official information. Each element contains information relating to a particular aspect of the information resource, e.g. 'title' or 'creator'. Further details on the terminology being used in this standard can be found in Dublin Core and Part Two of the e-GIF."
Conclusions
RQ "The e-GMS will need to evolve, to ensure it remains comprehensive and consistent with changes in international standards, and to cater for changes in use and technology. Some of the elements listed here are already marked for further development, needing additional refinements or encoding schemes. To limit disruption and cost to users, all effort will be made to future-proof the e-GMS. In particular we will endeavour: not to remove any elements or refinements; not to rename any elements or refinements; not to add new elements that could contain values contained in the existing elements."
SOW
DC The e-GMS is promulgated by the British government as part of its e-government initiative. It is the technical cornerstone of the e-government policy for joining up the public sector electronically and providing modern, improved public services.
To ensure that the digital collections submitted to RLG Cultural Materials can be discovered and understood, RLG has compiled these Descriptive Metadata Guidelines for contributors. While these guidelines reflect the needs of one particular service, they also represent a case study in information sharing across community and national boundaries. RLG Cultural Materials engages a wide range of contributors with different local practices and institutional priorities. Since it is impossible to find -- and impractical to impose -- one universally applicable standard as a submission format, RLG encourages contributors to follow the suite of standards applicable to their particular community (p.1).
Critical Arguments
CA "These guidelines . . . do not set a new standard for metadata submission, but rather support a baseline that can be met by any number of strategies, enabling participating institutions to leverage their local descriptions. These guidelines also highlight the types of metadata that enhance functionality for RLG Cultural Materials. After a contributor submits a collection, RLG maps that description into the RLG Cultural Materials database using the RLG Cultural Materials data model. This ensures that metadata from the various participant communities is integrated for efficient searching and retrieval" (p.1).
Conclusions
RQ Not applicable.
SOW
DC RLG comprises more than 150 research and cultural memory institutions, and RLG Cultural Materials elicits contributions from countless museums, archives, and libraries from around the world that, although they might retain local descriptive standards and metadata schemas, must conform to the baseline standards prescribed in this document in order to integrate into RLG Cultural Materials. Appendix A represents and evaluates the most common metadata standards with which RLG Cultural Materials is able to work.
Type
Web Page
Title
The MPEG-21 Rights Expression Language: A White Paper
CA Presents the business case for a Digital Rights Expression Language, an overview of the DRM landscape, a discussion of the history and role of standards in business, and some technical aspects of MPEG-21. "[U]nless the rights to ... content can be packaged within machine-readable licences, guaranteed to be ubiquitous, unambiguous and secure, which can then be processed consistently and reliably, it is unlikely that content owners will trust consign [sic] their content to networks. The MPEG Rights Expression Language (REL) is designed to provide the functionality required by content owners in order to create reliable, secure licences for content which can be used throughout the value chain, from content creator to content consumer."
Conclusions
RQ "While true interoperability may still be a distant prospect, a common rights expression language, with extensions based on the MPEG REL, can incrementally bring many of the benefits true interoperability will eventually yield. As extensions are created in multiple content verticals, it will be possible to transfer content generated in one securely to another. This will lead to cross channel fertilisation and the growth of multimedia content. At the same time, a common rights language will also lead to the possibility of broader content distribution (by enabling cross-DRM portability), thus providing more channel choice for consumers. It is this vision of the MPEG REL spreading out that is such an exciting prospect. ... The history of MPEG standards would seem to suggest that implementers will start building to the specification in mid-2003, coincidental with the completion of the standard. This will be followed by extensive take-up within two or three years, so that by mid 2006, the MPEG REL will be a pervasive technology, implemented across many different digital rights management and conditional access systems, in both the content industries and in other, non-rights based industries. ... The REL will ultimately become a 'transparent' technology, as invisible to the user as the phone infrastructure is today."
SOW
DC The Moving Picture Experts Group (MPEG) is a working group of ISO/IEC, made up of some 350 members from various industries and universities, in charge of the development of international standards for compression, decompression, processing, and coded representation of moving pictures, audio and their combination. MPEG's official designation is ISO/IEC JTC1/SC29/WG11. So far MPEG has produced the following compression formats and ancillary standards: MPEG-1, the standard for storage and retrieval of moving pictures and audio on storage media (approved Nov. 1992); MPEG-2, the standard for digital television (approved Nov. 1994); MPEG-4, the standard for multimedia applications; MPEG-7, the content representation standard for multimedia information search, filtering, management and processing; and MPEG-21, the multimedia framework.
Type
Web Page
Title
Interactive Fiction Metadata Element Set version 1.1, IFMES 1.1 Specification
This document defines a set of metadata elements for describing Interactive Fiction games. These elements incorporate and enhance most of the metadata formats currently in use for Interactive Fiction and attempt to bridge them to modern standards such as the Dublin Core.
Critical Arguments
CA "There are already many metadata standards in use, both in the Interactive Fiction community and the internet at large. The standards used by the IF community cover a range of technologies, but none are fully compatible with bleeding-edge internet technology like the Semantic Web. Broader-based formats such as the Dublin Core are designed for the Semantic Web, but lack the specialized fields needed to describe Interactive Fiction. The Interactive Fiction Metadata Element Set was designed with three purposes. One, to fill in the specialized elements that Dublin Core lacks. Two, to unify the various metadata formats already in use in the IF community into a single standard. Three, to bridge these older standards to the Dublin Core element set by means of the RDF subclassing system. It is not IFMES's goal to provide every single metadata element needed. RDF, XML, and other namespace-aware languages can freely mix different vocabularies, therefore IFMES does not subclass Dublin Core elements that do not relate to previous Interactive Fiction metadata standards. For these elements, IFMES recommends using the existing Dublin Core vocabulary, to maximize interoperability with other tools and communities."
Conclusions
RQ "Several of the IFMES elements can take multiple values. Finding a standard method of expressing multiple values is tricky. The approved method in RDF is either to repeat the predicate with different objects, or create a container as a child object. However, some RDF parsers don't work well with either of these methods, and many other languages don't allow them at all. XML has a value list format in which the values are separated with spaces, however this precludes spaces from appearing within the values themselves. A few legacy HTML attributes whose content models were never formally defined used commas to separate values that might contain spaces, and a few URI schemes accept multiple values separated by semicolons. The IFMES discussion group continues to examine this problem, and hopes to have a well-defined solution by the time this document reaches Candidate Recommendation status. For the time being IFMES recommends repeating the elements whenever possible, and using a container when that fails (for example, JSON could set the value to an Array). If an implementation simply must concatenate the values into a single string, the recommended separator is a space for URI and numeric types, and a comma followed by a space for text types."
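The fallback concatenation rule quoted above (repeat the element when possible; otherwise join values with a space for URI and numeric types, or a comma and a space for text types) can be sketched as follows. This is an illustrative sketch only; the function name and type labels are assumptions, not part of the IFMES specification:

```python
def join_values(values, value_type):
    """Concatenate multiple metadata values into a single string,
    following the IFMES fallback recommendation: a space separates
    URI and numeric values; a comma and a space separate text values.
    Repeating the element (or using a container such as a JSON array)
    is preferred; concatenation is the last resort."""
    separator = " " if value_type in ("uri", "numeric") else ", "
    return separator.join(values)

# Two URI values joined with a space:
print(join_values(["http://example.org/a", "http://example.org/b"], "uri"))
# Two text values joined with a comma and a space:
print(join_values(["Graham Nelson", "Emily Short"], "text"))
```

Note that the space separator is exactly why the XML value-list format mentioned in the quotation precludes spaces within URI or numeric values themselves.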
SOW
DC The authors are writers and programmers in the interactive fiction community.