CA "The Digital Preservation Testbed is researching three different approaches to long-term digital preservation: migration, emulation and XML. Not only will the effectiveness of each approach be evaluated, but also their limits, costs and application potential. Experiments are taking place on text documents, spreadsheets, emails and databases of different size, complexity and nature."
Conclusions
RQ "New experiments expected in 2002 are the migration of spreadsheets, conversion of spreadsheets and databases into XML and a proof of concept with the UVC for text documents and spreadsheets. ... Eventually at the end of 2003 the Testbed project will provide: advice on how to deal with current digital records; recommendations for an appropriate preservation strategy or a combination ofstrategies; functional requirements for a preservation function; cost models of the various preservation strategies; a decision model for preservation strategy; recommendations concerning guidelines and regulations."
SOW
DC "The Digital Preservation Testbed is part of the non-profit organisation ICTU. ICTU isthe Dutch organisation for ICT and government. ICTU's goal is to contribute to the structural development of e-government. This will result in improving the work processes of government organisations, their service to the community and interaction with the citizens. ... In case of the Digital Preservation Testbed the principals are the Ministry of the Interior, Jan Lintsen and the Dutch National Archives, Maarten van Boven. Together with Public Key Infrastructure, Digital Longevity is the fundament of the ELO-house."
This study focuses upon access to authentic electronic records that are no longer required in day-to-day operations and that have been set aside in a recordkeeping system or storage repository for future reference. One school of thought, generally associated with computer information technology specialists, holds that long-term access to electronic records is primarily a technological issue with little attention devoted to authenticity. Another school of thought, associated generally with librarians, archivists, and records managers, contends that long-term access to electronic records is as much an intellectual issue as it is a technological issue. This latter position is clearly evident in several recent research projects and studies about electronic records whose findings illuminate the discussion of long-term access to electronic records. Therefore, a review of eight research projects highlighting findings relevant for long-term access to electronic records begins this chapter. This review is followed by a discussion, from the perspective of archival science, of nine questions that a long-term access strategy must take into account. The nine issues are: What is a document?; What is a record?; What are authentic electronic records?; What does "archiving" mean?; What is an authentic reformatted electronic record?; What is a copy of an authentic electronic record?; What is an authentic converted electronic record?; What is involved in the migration of authentic electronic records?; What is technology obsolescence?
Book Title
Authentic Electronic Records: Strategies for Long-Term Access
Publisher
Cohasset Associates, Inc.
Publication Location
Chicago
ISBN
0970064004
Critical Arguments
CA "Building upon the key concepts and concerns articulated by the studies described above, this report attempts to move the discussion of long-term access to electronic records towarad more clearly identified, generally applicable and redily im(TRUNCATED)
Conclusions
RQ
SOW
DC This book chapter was written by Charles M. Dollar for Cohasset Associates, Inc. Mr. Dollar has "twenty-five years of experience in working with electronic records as a manager at the National Archives and Records Administration, as an archival educator at the University of British Columbia, and a consultant to governments and businesses in North America, Asia, Europe, and the Middle East." Cohasset Associates Inc. is "one of the nation's foremost consulting firms specializing in document-based information management."
Type
Journal
Title
Capturing records' metadata: Unresolved questions and proposals for research
The author reviews a range of the research questions still unanswered by research on the capture of metadata required for recordness. These include how to maintain inviolable linkages between records and their metadata in a variety of architectures, what structure metadata content should take, the semantics of records metadata and that of other electronic sources, how new metadata can be acquired by records over time, maintaining the meaning of contextual metadata over time, the use of metadata in records management, and the design of environments in which Business Acceptable Communications (BAC, those with appropriate evidential metadata) can persist.
Critical Arguments
CA "My research consists of model building which enables the construction of theories and parallel implementations based on shared assumptions. Some of these models are now being tested in applications, so this report reflects both what we do not yet know from abstract constructs and questions being generated by field testing. " ... Bearman overviews research questions such as semantics, syntax, structure and persistence of metadata that still need to be addressed.
Phrases
<P1> Records are evidence when they are bound to appropriate metadata about their content, structure and context. <P2> The metadata required for evidence is described in the Reference Model for Business Acceptable Communications (BAC). <P3> Metadata which is required for evidence must continue to be associated with the record to which it relates over time, and neither it nor the record content can be alterable. <P4> To date we have only identified three implementations which, logically, could allow metadata to retain this inviolable connection. Metadata can be: kept in a common envelope WITH a record (encapsulated), bound TO a record (by integrity controls within an environment), or LINKED with a record through a technical and/or social process (registration, key deposit, etc.). <P5> Metadata content was defined in order to satisfy a range of functional requirements of records, hence it ought to have a structure which enables it to serve these functions effectively and in concrete network implementations. <warrant> <P6> Clusters of metadata must operate together. Clusters of metadata are required by different processes which take place at different times, for different software clients, and within a variety of processes. Distinct functions will need access to specified metadata substructures and must be able to act on these appropriately. Structures have been proposed in the Reference Model for Business Acceptable Communications. <P7> Metadata required for recordness must, logically, be standard; that required for administration of recordkeeping systems is extensible and locally variable. <P8> Records metadata must be semantically homogeneous but it is probably desirable for it to be syntactically heterogeneous and for a range of protocols to operate against it. Records metadata management system requirements have both an internal and external aspect; internally they satisfy management requirements while externally they satisfy on-going recordness requirements. <P9> The metadata has to come either from a specific user/session or from rules defined to extract data either from a layer in the application or a layer between the application and the recording event. <P10> A representation of the business context must exist from which the record-creating event can obtain metadata values. <P11> Structural metadata must both define the dependent structures and identify them to a records management environment which is "patrolling" for dependencies which are becoming risky in the evolving environment, in order to identify needs for migration. <P12> BAC-conformant environments could reduce overheads if standards supported the uniform management of records from the point of issue to the point of receipt. Could redundancy now imposed by both paper and electronic processes be dramatically reduced if records referenced other records?
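To make <P4> concrete, here is a minimal Python sketch of the first of the three options, encapsulation: record content and metadata travel in a common envelope, and a digest computed over both makes any later alteration of either part detectable. The envelope layout and metadata keys are assumptions for illustration, not the BAC reference model itself.

```python
import hashlib
import json

def encapsulate(record_bytes: bytes, metadata: dict) -> dict:
    """Wrap a record and its metadata in a single envelope.

    A digest over both parts binds them together: any later change to
    either the content or the metadata invalidates the envelope.
    """
    digest = hashlib.sha256(
        record_bytes + json.dumps(metadata, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return {"content": record_bytes.hex(), "metadata": metadata, "digest": digest}

def verify(envelope: dict) -> bool:
    """Recompute the digest; False means the binding has been broken."""
    record_bytes = bytes.fromhex(envelope["content"])
    expected = hashlib.sha256(
        record_bytes
        + json.dumps(envelope["metadata"], sort_keys=True).encode("utf-8")
    ).hexdigest()
    return expected == envelope["digest"]

# Example: a transaction record with minimal, invented contextual metadata.
env = encapsulate(
    b"Purchase order 4711 approved.",
    {"creator": "orders@example.org", "created": "1997-05-28", "type": "approval"},
)
assert verify(env)
env["metadata"]["creator"] = "someone-else@example.org"  # tamper with the metadata
assert not verify(env)
```

The same digest-based binding underlies the other two options as well; what differs is where the check is enforced (inside a controlled environment, or by an external registry).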
Conclusions
RQ "All the proposed methods have some degree of external dependency. What are the implications software dependencies? Encapsulation, integrity controls and technico-social process are all software dependent. Is this avoidable? Can abstract reference models of the metadata captured by these methods serve to make them effectively software independent? " ... "What are the relative overhead costs of maintaining the systems which give adequate societal assurances of records retention following any of these approaches? Are there some strategies that are currently more efficient or effective? What are the organizational requirements for implementing metadata capture systems? In particular, what would the costs of building such systems within a single institution be versus the costs of implementing records metadata adhering communications servers on a universal scale?" ... "Can we model mechanisms to enable an integrated environment of recordkeeping throughout society for all electronically communicated transactions?" ... "Are the BAC structures workable? Complete? Extensible in ways that are known to be required? For example, metadata required for ÔÇ£recordnessÔÇØ is created at the time of the creation of the records but other metadata, as premised by the Warwick Framework, 2 may be created subsequently. Are these packets of metadata orthogonal with respect to recordness? If not, how are conflicts dealt with? " ... "Not all metadata references fixed facts. Thus, for example, we have premised that proper reference to a retention schedule is a citation to an external source rather than a date given within the metadata values of a record. Similar external references are required for administration of shifting access permissions. What role can registries (especially rights clearinghouses) play in a world of electronic records? How well do existing languages for permission management map to the requirements of records administration, privacy and confidentiality protection, security management, records retention and destruction, etc." ... "Not all records will be created with equally perfect metadata. Indeed risk-based decisions taken by organizations in structuring their recordsÔÇÖ capture are likely to result in conscious decisions to exclude certain evidential metadata. What are the implications of incomplete metadata on an individual organization level and on a societal level? Does the absence of data as a result of policy need to be noted? And if so, how?" ... "Since metadata has owners, howdo owners administer recordsÔÇÖ metadata over time? In particular, since records contain records, how are the layers of metadata exposed for management and administrative needs (if internal metadata documenting dependencies can slip through the migration process, we will end up with records that cannot serve as evidence. If protected records within unprotected records are not protected, we will end up with insecure records environments, etc. etc.)." ... "In principle, the BAC could be expressed as Dublin metadata 3 and insofar as it cannot be, the Dublin metadata will be inadequate for evidence. What other syntax could be used? How could these be comparatively tested?" .. "Could Dublin Core metadata, if extended by qualifying schema, serve the requirements of recordness? Records are, after all, documents in the Dublin sense of fixed information objects. What would the knowledge representation look like?" ... 
"Strategies for metadata capture currently locate the source of metadata either in the API layer, or the communications system, using data provided by the application (an analysis supports defining which data and where they can be obtained), from the user interface layer, or from the business rules defined for specified types of communication pathways. Can all the required metadata be obtained by some combination of these sources? In other words, can all the metadata be acquired from sources other than content created by the record-creator for the explicit and sole purpose of documentation (since such data is both suspect in itself and the demand for it is annoying to the end user)? " ... "Does the capture of metadata from the surrounding software layers require the implementation of a business-application specific engine, or can we design generic tools that provide the means by which even legacy computing systems can create evidential records if the communication process captures the interchange arising from a record-event and binds it with appropriate metadata?" ... "What kinds of representations of business processes and structures can best carry contextualizing metadata at this level of granularity and simultaneously serve end user requirements? Are the discovery and documentation representations of provenance going to have to be different? " ... "Can a generic level of representation of context be shared? Do standards such a STEP 4 provide adequate semantic rules to enable some meaningful exchange of business context information? " ... "Using past experiences of expired standards as an indicator, can the defined structural metadata support necessary migrations? Are the formal standards of the source and target environments adequate for actual record migration to occur?" ... "What metadata is required to document a migration itself?" ... "Reduction of redundancy requires record uses to impose post-creation metadata locks on records created with different retention and access controls. To what extent is the Warwick Framework relevant to these packets and can architectures be created to manage these without their costs exceeding the savings?" ... "A number of issues about proper implementation depend on the evolution (currently very rapid) of metadata strategies in the broader Internet community. Issues such as unique identification of records, external references for metadata values, models for metadata syntax, etc. cannot be resolved for records without reference to the ways in which the wider community is addressing them. Studies that are supported for metadata capture methods need to be aware of, and flexible in reference to, such developments."
Type
Electronic Journal
Title
ARTISTE: An integrated Art Analysis and Navigation Environment
This article describes the objectives of the ARTISTE project ("An integrated Art Analysis and Navigation Environment"), which aims to build a tool for the intelligent retrieval and indexing of high-resolution images. The ARTISTE project will address professional users in the fine arts as the primary end-user base. These users provide services for the ultimate end-user, the citizen.
Critical Arguments
CA "European museums and galleries are rich in cultural treasures but public access has not reached its full potential. Digital multimedia can address these issues and expand the accessible collections. However, there is a lack of systems and techniques to support both professional and citizen access to these collections."
Phrases
<P1> New technology is now being developed that will transform that situation. A European consortium, partly funded by the EU under the fifth R&D framework, is working to produce a new management system for visual information. <P2> Four major European galleries (The Uffizi in Florence, The National Gallery and the Victoria and Albert Museum in London and the Louvre related restoration centre, Centre de Recherche et de Restauration des Musées de France) are involved in the project. They will be joining forces with NCR, a leading player in database and Data Warehouse technology; Interactive Labs, the new media design and development facility of Italy's leading art publishing group, Giunti; IT Innovation, Web-based system developers; and the Department of Electronics and Computer Science at the University of Southampton. Together they will create web based applications and tools for the automatic indexing and retrieval of high-resolution art images by pictorial content and information. <P3> The areas of innovation in this project are as follows: Using image content analysis to automatically extract metadata based on iconography, painting style etc.; Use of high quality images (with data from several spectral bands and shadow data) for image content analysis of art; Use of distributed metadata using RDF to build on existing standards; Content-based navigation for art documents separating links from content and applying links according to context at presentation time; Distributed linking and searching across multiple archives allowing ownership of data to be retained; Storage of art images using large (>1 TB) multimedia object relational databases. <P4> The ARTISTE approach will use the power of object-related databases and content-retrieval to enable indexing to be made dynamically, by non-experts. <P5> In other words ARTISTE would aim to give searchers tools which hint at links due to, say, colour or brush-stroke texture rather than saying "this is the automatically classified data". <P6> The ARTISTE project will build on and exploit the indexing scheme proposed by the AQUARELLE consortia. The ARTISTE project solution will have a core component that is compatible with existing standards such as Z39.50. The solution will make use of emerging technical standards XML, RDF and XLink to extend existing library standards to a more dynamic and flexible metadata system. The ARTISTE project will actively track and make use of existing terminology resources such as the Getty "Art and Architecture Thesaurus" (AAT) and the "Union List of Artist Names" (ULAN). <P7> Metadata will also be stored in a database. This may be stored in the same object-relational database, or in a separate database, according to the incumbent systems at the user partners. <P8> RDF provides for metadata definition through the use of schemas. Schemas define the relevant metadata terms (the namespace) and the associated semantics. Individual RDF queries and statements may use multiple schemas. The system will make use of existing schemas such as the Dublin Core schema and will provide wrappers for existing resources such as the Art and Architecture Thesaurus in a RDF schema wrapper. <P9> The Distributed Query and Metadata Layer will also provide facilities to enable queries to be directed towards multiple distributed databases. The end user will be able to seamlessly search the combined art collection.
This layer will adhere to worldwide digital library standards such as Z39.50, augmenting and extending as necessary to allow the richness of metadata enabled by the RDF standard.
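As a rough sketch of how the multi-schema RDF metadata of <P8> might look in practice, the following Python example uses the third-party rdflib library to mix the Dublin Core element set with a hypothetical gallery namespace and to query across both. The namespace URI, the image identifier and the spectralBands term are invented for the example; ARTISTE's real schemas are not reproduced here.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC  # Dublin Core 1.1 element set, bundled with rdflib

# Hypothetical gallery namespace for project-specific extension terms.
GAL = Namespace("http://example.org/gallery/")

g = Graph()
painting = URIRef("http://example.org/gallery/images/uffizi-0042")
g.add((painting, DC.title, Literal("Primavera (detail)")))
g.add((painting, DC.creator, Literal("Sandro Botticelli")))
g.add((painting, GAL.spectralBands, Literal(5)))  # invented extension term

# A single query can mix the Dublin Core schema with local extension terms.
results = g.query(
    """
    SELECT ?title ?bands WHERE {
        ?img <http://purl.org/dc/elements/1.1/title> ?title ;
             <http://example.org/gallery/spectralBands> ?bands .
    }
    """
)
for title, bands in results:
    print(title, bands)
```

The point of the design is that discovery clients need only understand the shared Dublin Core terms, while richer clients can also exploit the gallery-specific vocabulary from the same graph.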
Conclusions
RQ "In conclusion the Artiste project will result into an interesting and innovative system for the art analysis, indexing storage and navigation. The actual state of the art of content-based retrieval systems will be positively influenced by the development of the Artiste project, which will pursue the following goals: A solution which can be replicated to European galleries, museums, etc.; Deep-content analysis software based on object relational database technology.; Distributed links server software, user interfaces, and content-based navigation software.; A fully integrated prototype analysis environment.; Recommendations for the exploitation of the project solution by European museums and galleries. ; Recommendations for the exploitation of the technology in other sectors.; "Impact on standards" report detailing augmentations of Z39.50 with RDF." ... ""Not much research has been carried out worldwide on new algorithms for style-matching in art. This is probably not a major aim in Artiste but could be a spin-off if the algorithms made for specific author search requirements happen to provide data which can be combined with other data to help classify styles." >
SOW
DC "Four major European galleries (The Uffizi in Florence, The National Gallery and the Victoria and Albert Museum in London and the Louvre related restoration centre, Centre de Recherche et de Restauration des Mus├®es de France) are involved in the project. They will be joining forces with NCR, a leading player in database and Data Warehouse technology; Interactive Labs, the new media design and development facility of Italy's leading art publishing group, Giunti; IT Innovation, Web-based system developers; and the Department of Electronics and Computer Science at the University of Southampton. Together they will create web based applications and tools for the automatic indexing and retrieval of high-resolution art images by pictorial content and information."
Type
Electronic Journal
Title
Electronic Records Research: Working Meeting May 28-30, 1997
CA Archivists are specifically concerned with records that are not easy to document -- records that are full of secret, proprietary or sensitive information, not to mention hardware and software dependencies. This front end of recordmaking and keeping must be addressed as we define what electronic records are and are not, and how we are to deal with them.
Phrases
<P1> Driven by pragmatism, the University of Pittsburgh team looked for "warrant" in the sources considered authoritative by the practitioners of ancillary professions on whom archivists rely -- lawyers, auditors, IT personnel, etc. (p.3) <P2> If the record creating event and the requirements of 'recordness' are both known, focus shifts to capturing the metadata and binding it to the record contents. (p.7) <P3> A strong business case is still needed to justify the role of archivists in the creation of electronic record management systems. (p.10)
Conclusions
RQ Warrant needs to be looked at in different countries. Does the same core definition of what constitutes a record cut across state borders? What role do specific user needs play in complying with regulation and risk management?
Type
Electronic Journal
Title
Metadata: The right approach, An integrated model for descriptive and rights metadata in E-commerce
If you've ever completed a large and difficult jigsaw puzzle, you'll be familiar with that particular moment of grateful revelation when you find that two sections you've been working on separately actually fit together. The overall picture becomes coherent, and the task at last seems achievable. Something like this seems to be happening in the puzzle of "content metadata." Two communities -- rights owners on one hand, libraries and cataloguers on the other -- are staring at their unfolding data models and systems, knowing that somehow together they make up a whole picture. This paper aims to show how and where they fit.
ISSN
1082-9873
Critical Arguments
CA "This paper looks at metadata developments from this standpoint -- hence the "right" approach -- but does so recognising that in the digital world many Chinese walls that appear to separate the bibliographic and commercial communities are going to collapse." ... "This paper examines three propositions which support the need for radical integration of metadata and rights management concerns for disparate and heterogeneous materials, and sets out a possible framework for an integrated approach. It draws on models developed in the CIS plan and the DOI Rights Metadata group, and work on the ISRC, ISAN, and ISWC standards and proposals. The three propositions are: DOI metadata must support all types of creation; The secure transaction of requests and offers data depends on maintaining an integrated structure for documenting rights ownership agreements; All elements of descriptive metadata (except titles) may also be elements of agreements. The main consequences of these propositions are: A cross-sector vocabulary is essential; Non-confidential terms of rights ownership agreements must be generally accessible in a standard form. (In its purest form, the e-commerce network must be able to automatically determine the current owner of any right in any creation for any territory.); All descriptive metadata values (except titles) must be stored as unique, coded values. If correct, the implications of these propositions on the behaviour, and future inter-dependency, of the rights-owning and bibliographic communities are considerable."
Phrases
<P1> Historically, metadata -- "data about data" -- has been largely treated as an afterthought in the commercial world, even among rights owners. Descriptive metadata has often been regarded as the proper province of libraries, a battlefield of competing systems of tags and classification and an invaluable tool for the discovery of resources, while "business" metadata lurked, ugly but necessary, in distribution systems and EDI message formats. Rights metadata, whatever it may be, may seem to have barely existed in a coherent form at all. <P2> E-commerce offers the opportunity to integrate the functions of discovery, access, licensing and accounting into single point-and-click actions in which metadata is a critical agent, a glue which holds the pieces together. <warrant> <P3> E-commerce in rights will generate global networks of metadata every bit as vital as the networks of optical fibre -- and with the same requirements for security and unbroken connectivity. <warrant> <P4> The sheer volume and complexity of future rights trading in the digital environment will mean that any but the most sporadic level of human intervention will be prohibitively expensive. Standardised metadata is an essential component. <warrant> <P5> Just as the creators and rights holders are the sources of the content for the bibliographic world, so it seems inevitable they will become the principal source of core metadata in the web environment, and that metadata will be generated simultaneously and at source to meet the requirements of discovery, access, protection, and reward. <P6> However, under the analysis being carried out within the communities identified above and by those who are developing technology and languages for rights-based e-commerce, it is becoming clear that "functional" metadata is also a critical component. It is metadata (including identifiers) which defines a creation and its relationship to other creations and to the parties who created and variously own it; without a coherent metadata infrastructure e-commerce cannot properly flow. Securing the metadata network is every bit as important as securing the content, and there is little doubt which poses the greater problem. <warrant> <P7> Because creations can be nested and modified at an unprecedented level, and because online availability is continuous, not a series of time-limited events like publishing books or selling records, dynamic and structured maintenance of rights ownership is essential if the currency and validity of offers is to be maintained. <warrant> <P8> Rights metadata must be maintained and linked dynamically to all of its related content. <P9> A single, even partial, change to rights ownership in the original creation needs to be communicated through this chain to preserve the currency of permissions and royalty flow. There are many options for doing this, but they all depend, among other things, on the security of the metadata network. <warrant> <P10> As digital media causes copyright frameworks to be rewritten on both sides of the Atlantic, we can expect measures of similar and greater impact at regular intervals affecting any and all creation types: yet such changes can be relatively simple to implement if metadata is held in the right way in the right place to begin with.
<warrant> <P11> The disturbing but inescapable consequence is that it is not only desirable but essential for all elements of descriptive metadata, except for titles, to be expressed at the outset as structured and standardised values to preserve the integrity of the rights chain. <P12> Within the DOI community, which embraces commercial and library interests, the integration of rights and descriptive metadata has become a matter of priority. <P13> What is required is that the establishment of a creation description (for example, the registration of details of a new article or audio recording) or of change of rights control (for example, notification of the acquisition of a work or a catalogue of works) can be done in a standardised and fully structured way. <warrant> <P14> Unless the chain is well maintained at source, all downstream transactions will be jeopardised, for in the web environment the CIS principle of "do it once, do it right" is seen at its ultimate. A single occurrence of a creation on the web, and its supporting metadata, can be the source for all uses. <P15> One of the tools to support this development is the RDF (Resource Description Framework). RDF provides a means of structuring metadata for anything, and it can be expressed in XML. <P16> Although formal metadata standards hardly exist within ISO, they are appearing through the "back door" in the form of mandatory supporting data for identifier standards such as ISRC, ISAN and ISWC. A major function of the INDECS project will be to ensure the harmonisation of these standards within a single framework. <P17> In an automated, protected environment, this requires that the rights transaction is able to generate automatically a new descriptive metadata set through the interaction of the agreement terms with the original creation metadata. This can only happen (and it will be required on a massive scale) if rights and descriptive metadata terminology is integrated and standardised. <warrant> <P18> As resources become available virtually, it becomes as important that the core metadata itself is not tampered with as it is that the object itself is protected. Persistence is now not only a necessary characteristic of identifiers but also of the structured metadata that attends them. <P19> This leads us also to the conclusion that, ideally, standardised descriptive metadata should be embedded into objects for its own protection. <P20> It also leads us to the possibility of metadata registration authorities, such as the numbering agencies, taking wider responsibilities. <P21> If this paper is correct in its propositions, then rights metadata will have to rewrite half of Dublin Core or else ignore it entirely. <P22> The web environment with its once-for-all means of access provides us with the opportunity to eliminate duplication and fragmentation of core metadata; and at this moment, there are no legacy metadata standards to shackle the information community. We have the opportunity to go in with our eyes open with standards that are constructed to make the best of the characteristics of the new digital medium. <warrant>
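A minimal Python sketch of the creation/party/agreement linkage that the paper argues must be maintained dynamically (<P7>, <P9>): determining the current owner of a right means replaying the chain of ownership agreements for a given right, territory and date. The entities and fields below are simplified assumptions for illustration; the CIS/INDECS models are far richer.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical, simplified entities (not the INDECS/DOI data model).
@dataclass
class Party:
    party_id: str
    name: str

@dataclass
class Agreement:
    right: str          # e.g. "reproduction"
    territory: str      # e.g. "DE"
    assignee: Party
    effective: date

@dataclass
class Creation:
    creation_id: str
    title: str
    agreements: list[Agreement] = field(default_factory=list)

def current_owner(creation: Creation, right: str, territory: str, on: date) -> Party | None:
    """Resolve ownership of a right by replaying agreements in effective order.

    The latest agreement for the matching right/territory already in force
    on the given date wins; None means ownership is unknown.
    """
    candidates = [
        a for a in creation.agreements
        if a.right == right and a.territory == territory and a.effective <= on
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda a: a.effective).assignee

work = Creation("iswc:T-000.000.001-0", "Example Song")
work.agreements.append(
    Agreement("reproduction", "DE", Party("p1", "Original Publisher"), date(1995, 1, 1)))
work.agreements.append(
    Agreement("reproduction", "DE", Party("p2", "Acquiring Publisher"), date(1998, 6, 1)))
print(current_owner(work, "reproduction", "DE", date(1999, 1, 1)).name)  # Acquiring Publisher
```

This is the sense in which a single change of rights control can be "communicated through the chain": downstream offers query the agreement history rather than copying ownership facts into each record.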
Conclusions
RQ "The INDECS project (assuming its formal adoption next month), in which the four major communities are active, and with strong links to ISO TC46 and MPEG, will provide a cross-sector framework for this work in the short-term. The DOI Foundation itself may be an appropriate umbrella body in the future. We may also consider that perhaps the main function of the DOI itself may not be, as originally envisaged, to link user to content -- which is a relatively trivial task -- but to provide the glue to link together creation, party, and agreement metadata. The model that rights owners may be wise to follow in this process is that of MPEG, where the technology industry has tenaciously embraced a highly-regimented, rolling standardisation programme, the results of which are fundamental to the success of each new generation of products. Metadata standardisation now requires the same technical rigour and commercial commitment. However, in the meantime the bibliographic world, working on what it has always seen its own part of the jigsaw puzzle, is actively addressing many of these issues in an almost parallel universe. The question remains as to how in practical terms the two worlds, rights and bibliographic, can connect, and what may be the consequences of a prolonged delay in doing so." ... "The former I encourage to make a case for continued support and standardisation of a flawed Dublin Core in the light of the propositions I have set out in this paper, or else engage with the DOI and rights owner communities in its revision to meet the real requirements of digital commerce in its fullest sense."
SOW
DC "There are currently four major active communities of rights-holders directly confronting these questions: the DOI community, at present based in the book and electronic publishing sector; the IFPI community of record companies; the ISAN community embracing producers, users, and rights owners of audiovisuals; and the CISAC community of collecting societies for composers and publishers of music, but also extending into other areas of authors' rights, including literary, visual, and plastic arts." ... "There are related rights-driven projects in the graphic, photographic, and performers' communities. E-commerce means that metadata solutions from each of these sectors (and others) require a high level of interoperability. As the trading environment becomes common, traditional genre distinctions between creation-types become meaningless and commercially destructive."
Type
Web Page
Title
An Assessment of Options for Creating Enhanced Access to Canada's Audio-Visual Heritage
CA "This project was conducted by Paul Audley & Associates to investigate the feasibility of single window access to information about Canada's audio-visual heritage. The project follows on the recommendations of Fading Away, the 1995 report of the Task Force on the Preservation and Enhanced Use of Canada's Audio-Visual Heritage, and the subsequent 1997 report Search + Replay. Specific objectives of this project were to create a profile of selected major databases of audio-visual materials, identify information required to meet user needs, and suggest models for single-window access to audio-visual databases. Documentary research, some 35 interviews, and site visits to organizations in Vancouver, Toronto, Ottawa and Montreal provided the basis upon which the recommendations of this report were developed."
Type
Web Page
Title
Model Requirements For The Management Of Electronic Records
CA "The MoReq specification is ... primarily intended to serve as a practical tool in helping organisations meet their business needs for the management of both computer-based and paper-based records. While its development has taken traditional archival science and records management disciplines into account, these have been interpreted in a manner appropriate to electronic environments. ... Examples of this pragmatic approach include the incorporation of requirements for document management, workflow, metadata and other related technologies. ... This specification attempts to cover a wide range of requirements -- for different countries, in different industries and with different types of records. The wide scope is intentional, but it leads to a significant limitation, namely that this single specification cannot represent a requirement which precisely maps onto existing requirements without modification."
Type
Web Page
Title
Approaches towards the Long Term Preservation of Archival Digital Records
The Digital Preservation Testbed is carrying out experiments according to pre-defined research questions to establish the best preservation approach or combination of approaches. The Testbed will be focusing its attention on three digital preservation approaches (migration, emulation and XML), evaluating the effectiveness of these approaches, their limitations, costs, risks, uses, and resource requirements.
Language
English; Dutch
Critical Arguments
CA "The main problem surrounding the preservation of authentic electronic records is that of technology obsolescence. As changes in technology continue to increase exponentially, the problem arises of what to do with records that were created using old and now obsolete hardware and software. Unless action is taken now, there is no guarantee that the current computing environment (and thus also records) will be accessible and readable by future computing environments."
Conclusions
RQ "The Testbed will be conducting research to discover if there is an inviolable way to associate metadata with records and to assess the limitations such an approach may incur. We are also working on the provision of a proposed set of preservation metadata that will contain information about the preservation approach taken and any specific authenticity requirements."
SOW
DC The Digital Preservation Testbed is part of the non-profit organisation ICTU. ICTU is the Dutch organisation for ICT and government. ICTU's goal is to contribute to the structural development of e-government. This will result in improving the work processes of government organisations, their service to the community and interaction with the citizens. Government institutions, such as Ministries, design the policies in the area of e-government, and ICTU translates these policies into projects. In many cases, more than one institution is involved in a single project. They are the principals in the projects and retain control concerning the focus of the project. In case of the Digital Preservation Testbed the principals are the Ministry of the Interior and the Dutch National Archives.
Type
Web Page
Title
Softening the borderlines of archives through XML - a case study
Archives have always had trouble getting metadata in formats they can process. With XML, these problems are lessening. Many applications today provide the option of exporting data into an application-defined XML format that can easily be post-processed using XSLT, schema mappers, etc., to fit the archives' needs. This paper highlights two practical examples of the use of XML in the Swiss Federal Archives and discusses advantages and disadvantages of XML in these examples. The first use of XML is the import of existing metadata describing debates at the Swiss parliament, whereas the second concerns preservation of metadata in the archiving of relational databases. We have found that the use of XML for metadata encoding is beneficial for the archives, especially for its ease of editing, built-in validation and ease of transformation.
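As an illustration of the post-processing step mentioned above, here is a small XSLT run from Python using the third-party lxml library, renaming an application-defined export element to a target element. Both element names are hypothetical stand-ins, not the Swiss Federal Archives' actual formats.

```python
from lxml import etree  # third-party; pip install lxml

# A minimal XSLT that renames an application-defined export element to the
# element name a target schema expects; everything else is copied verbatim.
xslt = etree.XML(b"""\
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="debate">
    <dossier><xsl:apply-templates select="@*|node()"/></dossier>
  </xsl:template>
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
</xsl:stylesheet>
""")
transform = etree.XSLT(xslt)

export = etree.XML(b"<debate><speaker>Example</speaker></debate>")
print(etree.tostring(transform(export)))
# b'<dossier><speaker>Example</speaker></dossier>'
```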
Notes
The Swiss Federal Archives defines the norms and basis of records management and advises departments of the Federal Administration on their implementation. http://www.bar.admin.ch/bar/engine/ShowPage?pageName=ueberlieferung_aktenfuehrung.jsp
Critical Arguments
CA "This paper briefly discusses possible uses of XML in an archival context and the policies of the Swiss Federal Archives concerning this use (Section 2), provides a rough overview of the applications we have that use XML (Section 3) and the experiences we made (Section 4)."
Conclusions
RQ "The systems described above are now just being deployed into real world use, so the experiences presented here are drawn from the development process and preliminary testing. No hard facts in testing the sustainability of XML could be gathered, as the test is time itself. This test will be passed when we can still access the data stored today, including all metadata, in ten or twenty years." ... "The main problem area with our applications was the encoding of the XML documents and the non-standard XML document generation of some applications. When dealing with the different encodings (UTF-8, UTF-16, ISO-8859-1, etc) some applications purported a different encoding in the header of the XML document than the true encoding of the document. These errors were quickly identified, as no application was able to read the documents."
SOW
DC The author is currently a private digital archives consultant, but at the time of this article, was a data architect for the Swiss Federal Archives. The content of this article owes much to the work being done by a team of architects and engineers at the Archives, who are working on an e-government project called ARELDA (Archiving of Electronic Data and Records).
Museums and the Online Archive of California (MOAC) builds on existing standards and their implementation guidelines provided by the Online Archive of California (OAC) and its parent organization, the California Digital Library (CDL). Setting project standards for MOAC consisted of interpreting existing OAC/CDL documents and adapting them to the project's specific needs, while at the same time maintaining compliance with OAC/CDL guidelines. The present overview of the MOAC technical standards references both the OAC/CDL umbrella document and the MOAC implementation/adaptation document at the beginning of each section, as well as related resources which provide more detail on project specifications.
Critical Arguments
CA The project implements specifications for digital image production, as well as three interlocking file exchange formats for delivering collections, digital images and their respective metadata. Encoded Archival Description (EAD) XML describes the hierarchy of a collection down to the item-level and traditionally serves for discovering both the collection and the individual items within it. For viewing multiple images associated with a single object record, MOAC utilizes Making of America 2 (MOA2) XML. MOA2 makes the images representing an item available to the viewer through a navigable table of contents; the display mimics the behavior of the analog item by e.g. allowing end-users to browse through the pages of an artist's book. Through the further extension of MOA2 with Text Encoding Initiative (TEI) Lite XML, not only does every single page of the book display in its correct order, but a transcription of its textual content also accompanies the digital images.
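As a schematic illustration of the navigable table of contents described above, the following Python sketch pairs each page image with its transcription in a single structural map. The element names are invented for the example and do not reproduce the MOA2 DTD or TEI Lite markup.

```python
import xml.etree.ElementTree as ET

def object_with_pages(object_id: str, pages: list[tuple[str, str]]) -> ET.Element:
    """Build a navigable table of contents linking each page image to its
    transcription, mimicking how a digitised artist's book is presented."""
    obj = ET.Element("digitalObject", id=object_id)
    toc = ET.SubElement(obj, "structMap")
    for n, (image_uri, transcription) in enumerate(pages, start=1):
        page = ET.SubElement(toc, "page", order=str(n))
        ET.SubElement(page, "image", href=image_uri)
        ET.SubElement(page, "text").text = transcription  # TEI-Lite-style content
    return obj

book = object_with_pages(
    "bampfa-0001",
    [("images/p1.tif", "Cover"), ("images/p2.tif", "First page text...")],
)
print(ET.tostring(book, encoding="unicode"))
```

The ordering attribute is what lets a viewer present the pages in sequence, which is the behaviour the project description attributes to the MOA2-encoded objects.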
Conclusions
RQ "These two instances of fairly significant changes in the project's specifications may serve as a gentle reminder that despite its solid foundation in standards, the MOAC information architecture will continue to face the challenge of an ever-changing technical environment."
SOW
DC The author is Digital Media Developer at the UC Berkeley Art Museum & Pacific Film Archives, a member of the MOAC consortium.