AudioVisual Heritage
and the Digital Universe

JTS 2007 Program Committee
Adrian Cosentini - ARSC
Thomas Christensen - FIAF
Henry Lindqvist - FIAT/IFTA
Lars Gaustad - IASA
Andris Kesteris - ICA
George Abbott - IFLA
Edward Tse - SEAPAVAA
Grover Crisp (Co-Chair)
Michael Friend (Co-Chair)

PROGRAM
Speakers and Abstracts

New Web-Based Technology
for Environmental Monitoring of Moving Image Collections

James M. Reilly
Director, Image Permanence Institute
Rochester Institute of Technology

An important aspect of preserving moving image collections and their associated documentation is maintaining an appropriate environment. Archivists face a number of difficult challenges in gathering environmental data, determining what it means for their collections, and planning for improvements. The requirements for a practical system include simple-to-use, inexpensive datalogger hardware; standardized and meaningful interpretive algorithms for temperature and humidity data; and easy access to reports and conclusions for archivists, facility managers, and collection administrators. The Image Permanence Institute at Rochester Institute of Technology has developed an integrated approach to environmental assessment that addresses these requirements by creating a new type of datalogger and shifting data storage, interpretation and reporting to a web server rather than local computers. This presentation describes the design philosophy and technical rationale for the major elements of this system, which include:

  • The PEM2, a datalogger designed to be a pipeline of data direct to the web. The PEM2 has no software. It writes the data in plain text to a USB flash drive.
  • A web server application where each institution stores and analyzes its data. Interpretation of data is performed using standard metrics for chemical change, physical damage, mold risk, and metal corrosion risk.
  • Automated reporting in the form of PDF documents generated on the web server.

The presentation will show examples of the uses of such a system in dealing with moving image collection storage problems.
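The interpretive step (turning raw temperature and humidity logs into risk metrics for chemical change, mold, and corrosion) can be illustrated with a toy Arrhenius-style calculation. The function and constants below are hypothetical illustrations, not IPI's published algorithms:

```python
import math

def relative_decay_rate(temp_c, rh_percent, ea=100_000.0, rh_exponent=1.3):
    """Decay rate relative to a 20 degC / 50% RH reference, using an
    Arrhenius temperature term and a power-law humidity term.
    ea (activation energy, J/mol) and rh_exponent are hypothetical values."""
    R = 8.314  # gas constant, J/(mol K)
    t_k, ref_k = temp_c + 273.15, 20.0 + 273.15
    arrhenius = math.exp(-ea / R * (1.0 / t_k - 1.0 / ref_k))
    humidity = (rh_percent / 50.0) ** rh_exponent
    return arrhenius * humidity

# Cool, dry storage slows chemical change relative to the reference:
assert relative_decay_rate(5.0, 30.0) < 1.0 < relative_decay_rate(30.0, 70.0)
```

A server-side report would apply a model of this general shape to each logged reading and accumulate a time-weighted average over the monitoring period.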

Open Source Archival Repositories
and Preservation Systems

Kevin Bradley
Curator of Oral History and Folklore and
Director of Sound Preservation at the National Library of Australia

The problem of digital preservation has captured the attention of collection managers all over the world. Predominantly, large institutions with archival responsibilities or well-funded projects with research concerns have supported loose cooperative arrangements amongst themselves and driven the digital preservation agenda with remarkable results, addressing a range of very complex and increasingly convoluted problems. The needs of many archival institutions are more prosaic. They require reliable, sustainable, preservation-standard archival digital storage that is affordable and appropriate to their needs. The priority is managing and preserving simple, discrete digital objects: images, audio, video and text.

There are a finite number of functions an archival digital repository must be able to perform. These are defined in the Reference Model for an Open Archival Information System (OAIS) as: Data Management, Ingest, Access, Administration, Preservation Planning and Archival Storage. It would appear that affordable hardware and open source software exist to support many of these functions, but not completely, and not in a single form. The UNESCO Memory of the World Sub-Committee on Technology (MoW SCoT) commissioned a report to test this hypothesis and identify development gaps, the resolution of which might be encouraged. The report was funded jointly by UNESCO MoW and the Australian Partnership for Sustainable Repositories (APSR), and submitted to UNESCO in April 2007. This paper describes the report's findings and proposes a method for carrying the work forward.

Digital Archiving of Motion Pictures

Dave Cavena
Engagement Architect, Sun Microsystems

Filmed entertainment is quickly moving from analog film to digital files for capturing and displaying images. When completed, this move will catch up with the decade-old move to digital in post-production workflows. Archiving of this invaluable, irreplaceable content, however, is still primarily reliant on outputting the images to film and storing the film in a vault under controlled environmental conditions, a process little changed by technology for over a century. Archiving of motion pictures is coming under increased scrutiny as analog film is quickly being replaced by digital bits in all phases of the imaging process, from capture to post-production, to projection in the theater, to distribution to the consumer over the internet, in packaged media or via digital television, whether broadcast, satellite or cable. Archiving for long-term preservation remains the only part of the workflow still reliant on film.

This paper will explore methods of using digital technology to archive motion pictures. Areas of discussion will include:

  • Capturing, digitizing and storing the image
  • Ensuring data integrity algorithmically
  • Bit error detection
  • Bit error correction
  • Archive management software development
  • Cost differentials - film vs. digital archive
  • Long-term retrieval of digitized images
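On the data integrity items above, detection is commonly handled with cryptographic digests recorded at ingest and re-checked on a schedule; correction then relies on redundancy (a verified second copy or erasure coding) rather than the digest itself. A minimal sketch, not drawn from the paper:

```python
import hashlib

def file_digest(path, chunk_size=1 << 20):
    """Stream a (possibly multi-gigabyte) image file in 1 MiB chunks and
    return its SHA-256 hex digest without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_fixity(path, recorded_digest):
    """Bit error detection: any single flipped bit changes the digest."""
    return file_digest(path) == recorded_digest
```

A digest can only detect corruption; repair comes from replacing the damaged file with a copy that still verifies, which is one argument for geographically separated duplicates.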

The goal in presenting this paper is to initiate serious discussion, examine and improve the model, and assist archivists and content creators in creating viable, scalable, cost-effective methods for digitally archiving filmed images.


HF-Bias Signal Pick-Up & Pre-processing for
Wow & Flutter Correction of Analogue Magnetic Tape - Analyses and Limitations in Practical Application


Nadja Wallaszkovits
Phonogrammarchiv, Austrian Academy of Sciences, Vienna

Heinrich Pichler
Audio Consultant, Vienna

In conventional transfer processes for analogue magnetic audio tape, the main focus is on the reproduction of the signal band carrying the primary audio content. Due to various limitations of standard playback systems, some additional technical information is lost, such as the information possibly provided by the HF bias signal recorded on the original tape.

Since the audio signal as well as the HF bias signal are similarly affected by wow and flutter, these deviations from the standardised tape speed are reflected in the HF bias signal, provided the bias frequency is constant. The correction of wow and flutter has already been discussed in theory, mostly with reference to the signal processing part. Practical implementations of signal processing have already been developed, using various automated and semi-automated detection routines in combination with non-uniform re-sampling methods.

The paper describes the problems and limitations of the practical implementation of HF bias signal pick-up from analogue magnetic tape at original replay speeds, to be implemented in a standard archival workflow using slightly modified standard playback facilities. Signal pre-processing in the analogue and digital domains is compared and, based on analyses of bias signals from professional as well as semi-professional recordings, the various practical problems are discussed: level instability and unknown frequency of the recorded HF bias signal; frequency variations, mainly with semi-professional devices of older generations, due to the instability of the bias oscillator; as well as effects of signal distortions, interferences and ultrasonic artefacts.
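Assuming the bias oscillator was stable at recording time, the ratio of the replayed bias frequency to its nominal value gives the instantaneous speed error per analysis block, from which a non-uniform resampling grid can be derived. A minimal sketch of that arithmetic only (the nominal frequency below is illustrative, and real detection of the bias frequency is the hard part the paper addresses):

```python
def speed_ratios(measured_bias_hz, nominal_bias_hz):
    """Per-block tape speed relative to nominal: a constant recorded bias
    replayed 1% high means the tape ran 1% fast during that block."""
    return [f / nominal_bias_hz for f in measured_bias_hz]

def corrected_time_axis(ratios, block_s):
    """Warped time grid for non-uniform resampling: a block replayed fast
    contains proportionally more original tape, so it maps to a longer
    span of original time."""
    t, axis = 0.0, [0.0]
    for r in ratios:
        t += block_s * r
        axis.append(t)
    return axis

ratios = speed_ratios([100_000.0, 101_000.0], 100_000.0)  # 2nd block 1% fast
axis = corrected_time_axis(ratios, 1.0)
assert abs(axis[2] - 2.01) < 1e-9
```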

The Motion Picture Archive:
A Real-World Implementation

Paul Collard
Vice President, Film and Digital Services, Ascent Media Group

Craig German
Vice President, Program Management for Digital Services, Ascent Media Group

This presentation describes the development approach, implementation details, and operations of a working system for a digital data media archive. Although the system was initially developed for the media and entertainment sector, it provides a state-of-the-art solution that can be applied to the preservation of digital data for the media industry and public sector archives at large. As the motion picture industry accelerates adoption of digital intermediate and file-based repurposing workflows, members of the media and entertainment community are moving toward file-based archives for the preservation and use of important media assets. Recommended solutions include the following system components:

- remote and in-facility content ingest points
- central metadata repository
- integration with the client or archive logistics system
- hierarchical storage management
- quality control software
- environmentally controlled data tape archival locations

The handling of content has historically been a compartmentalized, human-intensive activity focused on physical film or video elements. These physical objects are vaulted under environmentally-controlled conditions, physically transported, and used in printing, duplication, reformatting and other activities required by the everyday work of a major distribution entity. They are also labeled, stored and retrieved using primarily manual systems.

Today, content owners and archivists are interested in using centralized, "virtual" methods for archiving, searching and deploying content via integrated IT systems. Such systems offer the possibility of eliminating many undesirable physical aspects of preservation and archival, and also much of the uncertainty in providing high quality content to the end user. The archival systems address two classes of assets: "born digital" material such as digital intermediates (where the most original and highest resolution form of the media exists in the form of digital files), and the digital surrogates of legacy film and television elements (that is, film elements scanned to high-definition or higher resolution formats and digitally optimized for distribution in electronic media). The potential for improvements in efficiency, control, and cost reduction has stimulated active investment in file-based archival initiatives.

In an archival scenario, elements are received from the client or archive on Firewire drives and on data tape. At the time of receipt, a resource is evaluated for technical and structural quality (content is checked mathematically as well as visually for data integrity). An ISO-compliant metadata schema (developed in collaboration with the client or archive), designed to capture key technical information, version data, and other critical content to allow efficient use in a variety of human and machine-read contexts, is applied to the resource. Content is loaded into a SAN environment for technical evaluation and registered in the repository with the appropriate metadata and location ID. Upon technical validation, two sets of data tapes are generated and archived in geographically separated zones as a backup to the primary data resource. Regular integrity and migration reviews are scheduled for each element to ensure preservation.

This presentation describes a system developed to provide a long-term digital archive for major media assets (including digitized data files made from legacy film elements, DI-generated motion picture data, ancillary and added-value material, video masters, and the versions derived from these primary resources for distribution). This solution for long-term archival retention is part of a larger initiative with content owners and archives to provide a file-based archive of all elements that are part of the content lifecycle. The presentation essentially covers the design, implementation, and operation of a first-generation virtual archive.

Video Archiving: On the Way to the IT-World

Franz Pavuza and Julia Ahamer
Phonogrammarchiv, Austrian Academy of Sciences, Vienna

At the JTS 2004 the Vienna Phonogrammarchiv reported the start of its - at that time rather adventurous - new enterprise of linear video file archiving. Meanwhile, the archive looks back on a labour-intensive but in general successful project: many hours of valuable analogue footage have been transferred to the digital domain, using uncompressed data representation. While some tasks in some areas remain to be optimised, the archive considers linear archiving to be a viable and future-proof solution.

So video archiving is on its way to a new technical environment. The achievements of omnipresent Information Technology (IT) have opened doors for video archivists that their counterparts in the audio community have already passed through. The possibility of working freely and independently of proprietary chains, combined with emerging standards and recommendations from major institutions and expert groups, sets up an exciting new world for the technically oriented archivist.

Furthermore, the dramatically shrinking costs and the comparatively bright outlook for well-defined, technically sound and broadly supported storage media encourage the video archivist to approach the undisputed ideal of preserving the footage in a linear way, avoiding lossy compression and the undesirable data reductions originally provoked by limited storage space. In the long run, even for broadcast companies - who still heavily rely on proprietary structures - this development may lead to a rethinking of their preservation strategies.

The paper compares conventional and IT-based strategies from the technical and financial perspective and outlines benefits and possible drawbacks of the latter.


The EDCINE Project for Archives:
A System for Conservation and Access
Based on MXF and JPEG 2000

Arne Nowak
Fraunhofer Institute for Integrated Circuits, Germany

Luís Nunes
MOG Solutions, Portugal

Ernesto Santos
MOG Solutions, Portugal

Digital technologies can be used to ease and facilitate access to archived material. Digitally stored images and sound can be used to distribute films in a wide variety of different formats for different needs. Access copies for individual viewings, internet streaming, HDTV and even digital cinema presentation can be produced automatically and delivered without the costly movement of precious physical items.

Besides the problems of how to store such large amounts of data securely and how to ensure accessibility over a very long time, a key challenge for long term preservation is the definition of digital data formats suitable to this aim. In the course of the European EDCine project a system for conservation and access for digital film archives is developed, which is based on the open standards MXF and JPEG 2000. In this presentation we describe how film archives can take advantage of digital technologies without dependencies on proprietary software and file formats.

For the encoding of the image data, JPEG 2000 provides a good foundation for several reasons. The most important: it is an open standard that is well documented, and no patent or other claims restrict the development of appropriate systems, at least concerning the basic functionalities required in this context. One of its most important features is built-in scalability. This makes it possible to store a very high quality, high resolution version that is only slightly compressed or mathematically losslessly compressed, and to create lower resolution / lower quality access copies without the need to perform computationally demanding conversions. The presented digital film archive system makes use of this feature to automatically produce access and dissemination versions of stored films. Finally, the fact that JPEG 2000 has been chosen by SMPTE for the digital cinema distribution standards means that its use and the related know-how will become widespread in the industry.
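That resolution scalability comes from the dyadic wavelet decomposition at the core of JPEG 2000. A one-dimensional Haar toy (not the 5/3 or 9/7 filters the standard actually uses) shows the principle: the low-pass band alone is already a usable half-resolution signal, while keeping the detail band allows lossless reconstruction:

```python
def haar_split(samples):
    """One dyadic decomposition level: pairwise averages (low-pass) and
    pairwise half-differences (high-pass / detail)."""
    low = [(a + b) / 2 for a, b in zip(samples[0::2], samples[1::2])]
    high = [(a - b) / 2 for a, b in zip(samples[0::2], samples[1::2])]
    return low, high

def haar_merge(low, high):
    """Exact inverse of haar_split."""
    out = []
    for l, h in zip(low, high):
        out += [l + h, l - h]
    return out

scanline = [10, 12, 14, 18, 20, 20, 16, 8]
low, high = haar_split(scanline)
assert haar_merge(low, high) == scanline  # lossless at full resolution
# `low` on its own is the half-resolution access copy: no re-encode needed.
```

In a JP2 codestream the subbands are stored in resolution order, which is why a server can extract a proxy by reading only a prefix of the data.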

MXF is – just like JPEG 2000 – a well documented open standard for wrapping media files and storing them together with the associated metadata in one or more files. In our concept MXF is used to give the individual compressed image files a higher-level meaning by bundling them in the right order together with sound, technical and descriptive metadata. Technical metadata comprises information needed for playback, such as frame rate, aspect ratio and anamorphism, as well as historical metadata. The latter contains information about the origin of the digital objects, e.g. from which film element a certain image was scanned, how it was originally produced, what processing it underwent after scanning, etc. Descriptive metadata contains information about the contents of the movie. Unique identifiers are used to provide links between MXF files and to external databases. They can even point to film materials residing on a shelf in the archive.

The presentation describes a flexible system that makes use mainly of these two open standards and provides a scalable architecture, allowing film archives to make a smooth transition into the digital film era and to exploit the benefits of digital technologies even for their existing access copies by making them easily available on- and off-line.

MEMORIES:
Design for an Audio Advanced Acquisition & Semantic Indexation System Allowing Information Retrieval for the Access to Archive Content in Open Archival Information Systems

Jean-François Cosandier
Radio Suisse Romande

Dorothée Degimbe
Memnon Audio Archiving Services S. A.

Eric Lesage
Guy Maréchal
Thierry Leroy

Acquiring media contents, structuring and attaching metadata and controls (ontology and semantics), archiving, and exploiting in various modes (i.e. organising easy access and powerful searches for the users) are complex processes for which many approaches have been developed. The present project intends to contribute to the elaboration of solutions to that challenge, with three specific properties in mind:

  • Maximum computer assistance to the archivists for attaching ontology and semantics to the contents. The future operators will be assisted by an innovative facility of "Source separation", combined with classical "Speech to text" and "Wave to midi" functions.
  • Maximum computer assistance to the users (customers, scientists …) for searching and finding their targets in large databases. The future operators will be assisted by an innovative facility of "Advanced search based on semantic associations". Its development will apply, in the context of the media, strategies developed for biology and genetics, using the textual annotation associated with and within the media.
  • Open system approach. The project will design an implementation, named AXIS, of the reference model presented in the ISO standard "Open Archival Information Systems" (OAIS):
    • Open to the acquisition and to the exploitation of old archives and to new productions
    • Predictable persistence (i.e. very long term exploitation capability with assurance of the integrity)
    • Interoperability (i.e. capacity to exchange subsets of the databases between independent systems)
    • Scalability (i.e. capacity to operate from small to large systems)
    • Adaptability (i.e. capacity to be adapted to the specific needs of a context)

The project will demonstrate the three innovations on a prototype system.

It will generate two "General Software Libraries", one for ‘source separation’ and one for ‘advanced searches’. The results of the ‘open system’ development will be made freely available under an "Open Licence" as an "AXIS tool kit" covering the "Architectural definition", the "Technical specification [based exclusively on standards and norms] of the interchanges" and a "Software Development Kit" supporting the use of that open interchange.

Quality Control in Digital Cinematography

John Galt
Senior Vice President, Advanced Digital Imaging, Panavision

The photo-chemistry involved in the manufacture and processing of silver halide film emulsions has always required careful process control. Inherent in the technology has been the requirement for quality control at every stage, from negative manufacture to chemical development of camera negative and intermediate film elements, through final release printing. Over the past 100 years of this technology's evolution, a close collaboration between the film manufacturers, the film laboratories, and the end users has developed to the point where the process is almost taken for granted; although various problems can and do arise, they are quickly identified and remedied.

Mainly through television broadcasting, electronic motion imaging technology has been a major part of our entertainment and information systems for more than half a century. Yet it has been less than a decade since electronic imaging systems have been developed that rival the image quality of the silver halide-based motion picture film technology first developed over a century ago. The vigilant quality control process that we take for granted in film-based imaging systems must now be re-invented to encompass the new world of digital image capture, post-production and archiving.

This paper will explore the various issues and problems involved in developing an adequate quality control process for this nascent technology.

Film Archives: Needs and Requirements
in the D-Cinema Age

Nicola Mazzanti
Consultant, Film Archiving and Preservation
FIAF Technical Commission

Paul Read
M.Sc., Ph.D., FBKSTS. Paul Read Associates (UK)
Consultant Film and Digital Cinema Post-Production
FIAF Technical Commission

Journalists, technologists and now chief executives of film manufacturing organizations are predicting the demise of film in favour of digital cinema projection within a few years. When that will occur is still uncertain, but when it does the increased cost of print making, even if still possible, will increase the intrinsic value of all film elements and restrict archives (and all distributors) to digital formats for virtually all access and display.

With this change come several new imperatives, and the archive members of the ACE (the European Association of Film Archives) have engaged in defining the challenges and issues and in finding potential practical solutions to some of these problems. The first step of this work consisted of defining user requirements through a survey of film archives’ needs and aspirations:

1. In the medium term, but perhaps very soon, there needs to be some universal open access route for storing our digital versions of film, able to generate whatever version is required for access. This package should hold data that makes its D-cinema output, whether for in-house or distributed use, a near-authentic reproduction of the many characteristic film systems and formats of the cinema’s 110 years. Many aspects must be respected; for example the visual and aural characters of original image quality, photographic system, format, frame rate, aspect ratio, resolution equivalence, and projection conditions. The package should also be able to output lower quality versions for all other access purposes.

2. Archives already hold many digital versions of their film holdings and need to be able to access all these in a common parallel and browse-able manner too.

3. In parallel with increasing digital access will come diminishing conventional film projection; digital content will then represent the cinema alone. What links the digital content with its film origins is descriptive metadata, together with a new (or almost new) kind of metadata not so far widely recorded. Metadata will become the only link with the visual and aural characters of original film cinema, and these may only be retained as “technical metadata”. This data hardly exists in film archives today. It needs to reach back to record how the original process operated, pass through the elements digitized and the digitization process, and stretch into the digital chain used for restoration, format conversions, compression and every piece of data manipulation.

4. If archives are to retain tangible links with the film origins and to create, manage and utilize this technical metadata, archivists will need to be trained to understand the original film technology, in all its vast complexity and variation, as well as the content’s digital future.

A remaining issue still waits for an answer: when film in archives finally decays and no alternative preservation route is available, transfer to a digital version will be essential for long-term preservation. There is at present no alternative technology, and digital technology has no long-term security in any way comparable to the storage of analogue photochemical film in optimum conditions.

Preserving Digital Public Television

As part of the National Digital Information Infrastructure Preservation Program (NDIIPP), NYU, WNET, WGBH, and PBS have spent the past few years collaborating to preserve digital public television content. The two panels will discuss the development of the work and the project’s progress to date.
Coordinated by Nan Rubin, Project Director, Preserving Digital Public Television.

Part One
An Overview of MXF and the Search for the Video File Wrapper

Dave MacCarn
WGBH-TV

Thomas Edwards
PBS

Carl Fleischhauer
Library of Congress

The Society of Motion Picture and Television Engineers (SMPTE) has released the Material Exchange Format (MXF) for the interchange of audio-visual material. Many open source projects for video codecs have appeared. Has the technology caught up with the proposal for a "Universal Preservation Format (UPF)"? This presentation will evaluate whether the union of these "standards" can lead us to a digital moving image preservation format. It will detail the creation of MXF AS/PBS for video distribution and the extension of MXF for use in video archiving, including a new collaboration between U.S. public television and Turner Broadcasting to create an MXF wrapper for video production. It will also include a look at the availability of open source codecs and an example of storing digital moving image material with the application of these available technologies.

Part Two
Designing the Repository

James Bullen, Unni Pillai and Brian Hoffman
Digital Library Team, New York University

NYU is currently developing a digital preservation repository, built around DSpace, which is intended to archive materials in many different formats. This development provides the basis for designing the model repository for preserving digital public television. Our prototyping has raised some interesting challenges, such as dealing with very large video files, working with proprietary file formats and acquiring metadata from production workflows. In this session we will outline the repository design and discuss our approaches to some of these problems, including the use of Storage Resource Broker, Kepler, MXF, METS and PBCore.

Images for the Future

Giovanna Fossati
Curator, the Nederlands Filmmuseum

On September 19th, 2006 the Dutch government announced that it would fund an ambitious joint project by a number of Dutch archives under the name Images for the Future.

The project aims at preserving, digitizing and making accessible some 285,000 hours of film and video material, and almost three million photos. The digitized content will be accessible for educational use, but also for professionals and the general public. The plan also includes the creation of an infrastructure for distribution and the settlement of copyrights, where applicable, through Creative Commons licenses.

The partners in the project are Nederlands Filmmuseum, Institute for Sound and Vision, Nationaal Archief, Centraal Discotheek, Association of Public Libraries and the foundation Kennisland.

The Images for the Future project is now in the preparation phase. The execution phase will start in the summer of 2007 and is expected to be completed by the end of 2014. The project’s budget, granted by the Dutch government, amounts to 154 million euros.

This presentation will outline the project’s goals and will address, in particular, the strategies that are going to be adopted for film preservation and digitization. It will also promote a discussion on quality criteria and standards needed for such an ambitious project. The discussion at JTS2007 is expected to give precious feedback to the project, whose scope and magnitude will hopefully set an example for the audio-visual sector.

For more info on Images for the Future see:
www.beeldenvoordetoekomst.nl/documents/Beeldenvoordetoekomst_summary.pdf


Research Report on JPEG 2000 for Video Archiving

Ian Gilmour
Media Matters LLC.

This is a report on recent original research conducted in New York using actual production video footage from several different agencies. The primary aims of the project were to investigate and compare the file sizes and data rates of mathematically reversible [lossless] and irreversible [lossy] encoding.

The report discusses the quality and performance of JPEG 2000 along with other popular codecs, with a focus on a newly-developed real-time hardware encoder which wraps JP2 video and audio in MXF along with selected metadata.

Data is also presented on secondary savings and the business case for improved network transfer times, reduced costs for backup and disaster recovery, and in a simplified system architecture.

This paper will also follow up on the evolution of the digital storage issues raised at JTS 2004.

New Tools for Film Sound Restoration

Robert Heiber
President, Chace Audio

The removal of pops, crackle and hiss is a well-known sound restoration technology, as ubiquitous in sound restoration as wet-gate printing is in the laboratory. However, developers are continually working on new and more powerful tools that address more difficult problems with a narrowly focused solution. These new developments offer opportunities to correct more severely distressed or damaged audio and to make more successful repairs. Additionally, improvements in existing technologies offer new methodologies for film sound preservation and restoration work.

The development of these new tools has also created new responsibilities for archivists. The ability to rescue materials once thought unrecoverable can present quite a dilemma in determining the end of the useful life of legacy sound elements, like 35mm magnetic and optical sound.

Another issue facing archivists is whether to revisit earlier sound restorations that might now benefit from these new methods. With limited budgets, re-doing a program must be balanced against preserving and restoring other at-risk content that remains unprotected.

New Tools for Film Sound Restoration examines the “improvement-in-the-art” that has occurred since the late 1980s and the issues that this improvement brings.

Examples of the results that can now be achieved will be demonstrated with before-and-after examples of recently completed work on Vi gifter oss (We Are Getting Married, 1951) for the Norwegian Film Institute.


Migration of 1.5 million Hours of Audio-Visual Material
at The Swedish National Archive of Recorded
Sound and Moving Images

Martin Jacobson
Head of Technology and Development,
Swedish National Archive of Recorded Sound and Moving Images

During 2006 SLBA ran a project to establish an infrastructure for the mass-migration of substantial parts of its analogue audio and video collections to digital files, which are subsequently made available online. A number of “unconventional methods” were used, such as high-speed transfer, automation using robotics, and a suite of custom scripts that automatically process the digitized files. The infrastructure includes an in-house developed migration asset management system that handles both physical and logical material logistics, including metadata, final storage and linkage to the description database records. SLBA has made a first selection: all formats included, it will begin by migrating nearly 1.5 million hours in approximately 3 years, adding additional production lines as needed.

Much improved preservation and access capabilities motivate this enormous effort and SLBA would like to share their experiences, including these issues:

  • what issues were considered when creating a migration strategy?
  • why did SLBA decide not to outsource?
  • what were the stumbling blocks?
  • a look at the solutions, costs and metrics.

Presently two ¼-inch open-reel audio formats are being migrated to Broadcast Wave files at a rate of 1500 hours per day on one shift. By February 2007 SLBA will be underway with the robotic migration of 576 hours of audio per day from the data-tape format QIC, and also the robotic migration of VHS tapes to MPEG files at a rate of 252 hours per day through 12 VHS players running 24/7. Upcoming video formats to be migrated are Digital Betacam and DVC-Pro. With the help of some external consultancy, SLBA developed the robotic system by adapting a data-tape robot and creating machine control software, communication software, and quality control functions.
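The VHS figures above imply heavy but realistic utilization of the playback hardware. As a rough sanity check (an illustrative sketch; only the rates and player count come from the abstract, the derived numbers are simple arithmetic):

```python
# Rates quoted above for the SLBA VHS migration line.
players = 12            # VHS players running 24/7
hours_per_day = 24
content_per_day = 252   # hours of video migrated per day

capacity = players * hours_per_day        # 288 player-hours per day
utilization = content_per_day / capacity  # fraction of capacity in use

# Purely illustrative: how long the full 1.5 million hours would take
# if the VHS line alone had to carry it (in practice several parallel
# production lines handle different formats).
total_hours = 1_500_000
years_vhs_only = total_hours / content_per_day / 365

print(f"utilization: {utilization:.1%}")            # 87.5%
print(f"VHS-only timescale: ~{years_vhs_only:.0f} years")
```

The gap between 288 available player-hours and 252 migrated hours per day leaves roughly 12% of machine time for tape loading, rewinding and error handling.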

Spatial Resolutions: Restoring Motion Pictures in 4K

Daniel DeVincent
Director of Digital Imaging, Cineric, Inc.

This paper will outline the challenges of restoring motion pictures in a complete 4K digital workflow environment. There will be four specific areas for discussion:

  • A brief description of an all-4K digital workflow. A true 4K workflow means maintaining 4K spatial resolution without downsizing to 2K or less during the data management process. Once sampled at 4K resolution, the image is never resized, so there is less chance of incurring reconstruction artifacts or loss of natural sharpness.
  • The challenges of 4K vs. 2K regarding storage, time and resources. This would involve an explanation of the differences in the amount of storage needed, and the workstation time and data management necessary to work at very high resolutions.
  • Working with black-and-white images in 4K. Current film scanners and recorders are optimized for working with color film. This will cover the difficulties in re-engineering them for black-and-white work.
  • A comparison of the same images at both 4K and 2K. This will include a discussion of the extra time and costs involved at differing resolutions. Additionally, we will show comparisons of 35mm prints from recorded-out negatives produced from the 4K workflow versus the 4K digital file projection.
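The storage difference at stake can be quantified with simple arithmetic. The sketch below assumes uncompressed 10-bit RGB DPX frames (three 10-bit components packed into one 32-bit word) at full-aperture resolutions of 4096 × 3112 and 2048 × 1556; these assumptions are illustrative conventions of film scanning, not figures taken from the paper:

```python
BYTES_PER_PIXEL = 4  # 3 x 10-bit RGB packed into one 32-bit word (DPX)

def storage_bytes(width, height, frames):
    """Uncompressed storage for a sequence of 10-bit RGB DPX frames."""
    return width * height * BYTES_PER_PIXEL * frames

frames = 24 * 60 * 100          # a 100-minute feature at 24 fps
four_k = storage_bytes(4096, 3112, frames)
two_k = storage_bytes(2048, 1556, frames)

print(f"4K: {four_k / 1e12:.1f} TB, 2K: {two_k / 1e12:.1f} TB "
      f"({four_k // two_k}x more data at 4K)")
```

Doubling the linear resolution quadruples the pixel count, so every stage of an all-4K pipeline (scratch space, renders, backups) carries roughly four times the data of its 2K equivalent.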


Tools for Audio Preservation:
The Sound Directions Project

David Ackerman
Lead Engineer for the Harvard College Library’s Audio Preservation Service

Mike Casey
Co-chair of the ARSC Technical Committee and Associate Director for Recording Services, Archives of Traditional Music, Indiana University

Sound Directions is a research and development collaboration between Harvard University and Indiana University funded by the National Endowment for the Humanities in the U.S. The project is charged with developing detailed best practices and testing emerging standards for the preservation of audio in the digital domain. One output from the project has been the development of software tools to aid and automate parts of the preservation process. Harvard has developed a suite of 40+ cross-platform command line software utilities, designed to be interfaced together through batch/shell scripts. The resulting scripts form audio and metadata processing workflows that automate routine and mundane tasks in the audio preservation process. Indiana University has created FACET—the Field Audio Collection Evaluation Tool—to assess the preservation condition and level of risk carried by recorded sound collections. Indiana has also developed a technical metadata collection tool to gather and store data on source audio objects, digital files created during transfer, and the preservation transfer process.

Archiving and Delivery of Student Portfolios

Dirk Matthews
Assistant Director, Portfolio Center
Columbia College Chicago

One of the challenges facing institutions of higher education is archiving student work while assisting students in the creation of electronic portfolios. The Portfolio Center of Columbia College Chicago uses a digital archiving system compatible with a web publishing system that creates standalone student websites for display of student work. The web publishing system also generates XML files of student portfolios and metadata for archival purposes. Since 2004, the Portfolio Center has assisted graduating seniors in preparing their work for portfolios and reels that will help them secure post-graduate opportunities in their fields of study.

The presentation will discuss the development of the system, outline current methods used for archiving and web publishing, and discuss future goals.

Assessment and Prioritization:
Recent and Current Research and Development Projects

Introduction by Chris Lacinak
AudioVisual Preservation Solutions

Assessment and prioritization for preservation activities, including digitization, has been a topic of much interest over the past few years. These two activities result in pivotal decision points on which the long-term success of a preservation strategy hinges. As we cross the bridge from analog to digital, physical to electronic, there is arguably no more important task than proper selection and appropriate allocation of resources to overcome the challenges faced. While we have yet to arrive at an ideal solution for assessment and prioritization, a great deal of funding and research in recent years has resulted in significant progress. This panel will provide the most comprehensive survey and review to date of recent and current major assessment and prioritization projects on an international scale. As a whole, the projects represented approach all aspects of assessment and prioritization, including obsolescence, degradation, rights, value and uniqueness.

Part One – Assessment

PrestoSpace/CRCDG

Léon-Bavi Vilmont
Project Manager, PrestoSpace “Media Condition Assessment” Work Package CRCDG – Centre de Recherches sur la Conservation des Documents Graphiques

The PrestoSpace project consists of many in-depth projects that seek to enable and support mass migration. One of these projects, under Work Package 06, set forth the following tasks for “Media Condition Assessment”:

  • to understand the way video tapes degrade over time and become unplayable
  • to develop a method to measure the deterioration level in order to anticipate playback problems.

These objectives were seen as particularly important considering the financial and time impact of mass transfer operations of audiovisual (A/V) tapes for preservation and access purposes. Optimization of the preservation workflow requires effective prioritization of the media according to technical considerations.

Magnetic tape deterioration is a difficult notion to define because numerous parameters are involved: operator tape handling, the tape player, original media quality, material formulations, and chemical decay. As a consequence, studying magnetic tape deterioration requires multidisciplinary investigation, from mechanical player considerations to organic chemistry analysis. The CRCDG used a comprehensive study strategy involving all aspects of the problem. The deliverable of the project, D6.1: Report on video and audio tape deterioration mechanisms and considerations about implementation of a collection condition assessment method, reflects this approach.

The first part of the report (Part A) provides an overview of the reasons why a tape becomes unplayable and identifies specific chemical deterioration issues.

The second part of the report (Part B) draws on the results and data obtained from laboratory investigations to propose a condition assessment method for archival magnetic tape collections based on a statistical approach. A knowledge database, its management system, and applications are presented in detail as a recommended software tool.

A primary contributor on this project from the CRCDG will discuss the project, their findings and offer an update on recent follow up activities.

The Preservation of Magnetic Tape Collections – One Perspective

Jean-Louis Bigourdan
Image Permanence Institute, Rochester Institute of Technology

In 2003, the U. S. National Endowment for the Humanities, Division of Preservation and Access, awarded funding to the Image Permanence Institute (IPI) for a research project dealing with the preservation of magnetic tape collections. The main objective was to study the feasibility of developing a nondestructive diagnostic tool for magnetic tape collections analogous to A-D Strips®, acid-detector strips for acetate-based film, previously developed by IPI. IPI’s research focused on investigating three indicators of tape binder decay: free acidity, acetone extraction, and friction tests. The study was designed as the primary step in the development of a simple field diagnostic test. After extensive testing, it was determined that the data cast doubt on the feasibility of creating an easy-to-use diagnostic device for assessing magnetic tape condition. Although the number of materials tested was necessarily limited, differences in their behavior were repeatedly observed, and this inconsistency was considered to be a significant obstacle to the development of a diagnostic device.

Therefore, during the course of the project, the primary objective of the research shifted toward providing a perspective outlining a possible strategy for preserving magnetic records, addressing, in short, (1) the need for optimizing tape storage, (2) the need for facilitating the emergence of new automated tape transfer technology, and (3) the creation of a decision-making tool for implementing prioritized transfer programs.

This presentation will summarize experimental data developed during IPI’s research and discuss its practical significance to the preservation of magnetic tape collections.

The Preservation of Magnetic Tape Collections
– Another Perspective

Tanisha Jones
New York University

For the past several years, efforts have been underway to develop strategies for assessing magnetic media preservation needs, ranging from the work of the National Media Lab and the Smithsonian Institution to such projects as FACET and TAPE and, most recently, the IPI study. Informed in large part by these groundbreaking initiatives, New York University has embarked upon a related project funded by the Andrew W. Mellon Foundation to develop methodologies for assessing the condition of archival magnetic media based on visual and playback inspection in order to prioritize the relative need and appropriate pathways toward preservation.

As was recommended by IPI, a preservation decision-making tool in the form of a database is being developed as a component of the NYU project. This presentation will focus on the design of the tool and the particular challenges it presented, explaining how prioritization ratings were devised and calculated, and presenting recommendations for reformatting decision-making based on data gathered using the tool. Finally, the preliminary results of research into the use of random sampling as a methodology for assessing archival audio/visual materials will be discussed.

Part Two – Prioritization

The Field Audio Collection Evaluation Tool

Michael Casey
Indiana University

The Field Audio Collection Evaluation Tool (FACET) is a point-based tool for ranking the level of deterioration that collections exhibit and the amount of risk they carry. It assesses the characteristics of, preservation problems with, and modes of deterioration of various formats. This tool helps collection managers construct a prioritized list of collections by the level of risk they represent, enabling informed selection for preservation. This presentation will discuss the logic and planning behind the tool. Mike Casey will also walk through the tool to demonstrate its functionality and features.
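A point-based ranking of this kind can be pictured as summing risk points across a collection's characteristics and sorting on the total. The sketch below is purely illustrative: the factors, values and weights are invented for demonstration and are not FACET's actual scoring model.

```python
# Hypothetical point tables; FACET's real factors and weights differ.
RISK_POINTS = {
    "format": {"lacquer disc": 30, "acetate tape": 20, "polyester tape": 5},
    "condition": {"visible decay": 25, "minor wear": 10, "good": 0},
    "storage": {"uncontrolled": 15, "cool/dry": 0},
}

def risk_score(item):
    """Sum the points earned by each assessed characteristic."""
    return sum(RISK_POINTS[factor][value] for factor, value in item.items())

collections = {
    "Field recordings 1948": {"format": "lacquer disc",
                              "condition": "visible decay",
                              "storage": "uncontrolled"},
    "Oral histories 1985": {"format": "polyester tape",
                            "condition": "good",
                            "storage": "cool/dry"},
}

# Highest-risk collections first: a prioritized list for preservation.
ranked = sorted(collections,
                key=lambda c: risk_score(collections[c]), reverse=True)
```

The output of such a scheme is exactly what the abstract describes: a prioritized list of collections ordered by the level of risk they represent.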

Special Collections Material Survey Instrument

Janet Gertz
Columbia University

In 2005 the Andrew W. Mellon Foundation generously provided support to the Columbia University Libraries for a two-year project to develop and test a survey instrument to inventory and assess the physical condition and intellectual control of audio and moving image materials.

The tool provides a mechanism for (1) recording quantities and types of materials in detail, (2) documenting physical condition, (3) collecting information about intellectual control and intellectual property rights, and (4) evaluating potential research value.

Both survey-wide reports and collection-specific reports can be generated, as well as reports ranking collections by research importance, degree of physical damage, and lack of intellectual control, and a preservation priority ranking based on these factors to enable institutions to set priorities and establish long-term plans.

The survey instrument is being thoroughly tested in a survey of all the rare and unique audio and moving image materials held by Columbia. As of March 2007 almost 26,000 items had been surveyed. Janet Gertz will discuss the logic behind the tool as well as her experience in using it since 2003. She will also walk the audience through the use of the survey tool and exhibit its features and functionality.

The Task Force on Selection for Digital Transfer

Dietrich Schueller
Director, Phonogrammarchiv of the Austrian Academy of Sciences
IASA Technical Committee

The Task Force on Selection for Digital Transfer was commissioned by the IASA Executive Board in February 2000 to examine the issues underlying the process of setting priorities for the digital transfer of analogue and digital audio content, and to deliver a statement of principles for use by sound archives in their planning for digitisation. The members of the Task Force were drawn from IASA’s Cataloguing and Documentation, Discography, and Technical Committees, and its National Archives and Radio Sound Archives Sections. The Task Force released a document meeting the charge of the Executive Board in 2003.

This document examines the issues underlying the process of setting priorities for digital transfer. It analyses the various criteria which can be applied in the institutional, national, and international context, and identifies strategies for co-operation and co-ordination to avoid duplication of expenditure where institutions have overlapping holdings. It delivers a statement of principles which can be used by different kinds and sizes of sound archive in planning and setting priorities for digitisation. The issues examined include the following:

  • cultural, scientific, or academic significance of content
  • fragility of existing analogue carriers
  • primary institutional responsibilities
  • technical obsolescence of existing analogue platforms
  • present and future level of demand for use and access
  • restrictions on archival activity arising from intellectual property law
  • the resource required to generate metadata to support the digitised recordings

Dietrich Schueller will discuss the complex set of issues and principles based around institutional objectives and the intrinsic nature of audiovisual materials addressed by the Task Force.

Non-Contact Surface Metrology for Preservation and Sound Recovery from Mechanical Sound Recordings

P. J. Boltryk, M. Hill, J. W. McBride, A. Nascè
School of Engineering Sciences, University of Southampton, Southampton, UK

N. Bewley, W. Prentice
British Library Sound Archive, London, UK

Despite careful storage, early mechanical recordings on cylinders and flat disc formats have been identified as at risk from deterioration, caused mainly by material degradation and biological attack from mould growth. There is therefore an urgency to transfer the content of culturally-important artefacts to digital format to preserve the recordings’ content for posterity. However, some recordings are too precious to risk playback with conventional stylus methods, because the very act of mechanical stylus playback may in some circumstances cause wear that further damages the sound contained in the recording’s groove. Other artefacts, such as 78s exhibiting delamination of the shellac from the metallic substrate, may be too damaged for a stylus to be a practical method of transfer.

In recent years there has been a significant quantity of research aimed at developing optical measurement systems for mechanical recordings for non-contact sound recovery. 2-D imaging systems using high-resolution photography have been developed for flat disc recordings where the sound modulations are encoded as lateral undulations of the sound-carrying groove. However, in cylinder recordings and some 78s the modulations are in a vertical plane relative to the groove, in so-called ‘hill and dale’ modulations. To measure these features requires 3-D surface profiling using optical sensors that measure the surface topology by determining the displacement distance between the surface and the sensor.

Systems have been independently developed by the Ukrainian Institute for Information Recording Problems, Syracuse University (US) and Hokkaido University group in Japan for 3D measurement of the sound carrying groove. However, these methods require a tracking system to guide the optical sensor in the nominally helical path around the cylinder to follow the groove. This tracking must be robust at time of measurement, a task which is made difficult by damage and deformation of the artefact’s surface.

An alternative transfer strategy being developed through collaboration between the University of Southampton, the British Library Sound Archive, and TaiCaan Technologies Ltd uses optical sensors to measure the recording’s surface in its entirety. A significant outcome of this approach is a full high-precision digital record of the artefact’s surface form for preservation, which remains available for future research. The post-measurement processing of the surface topology data uses image and signal processing to reconstruct the audio content of the recording. This aspect of the research is aimed at facilitating access to the audio content of culturally-important artefacts by current generations.

In this paper we provide a detailed overview of the scanning process for cylinder recordings and the data processing techniques used to recover the audio, and describe the high sensor precision required for measuring the surface for successful audio extraction. We show examples of groove damage thought to originate from repeated stylus playback, and highlight the advantages offered by this scanning strategy for damaged or even broken recordings.
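For "hill and dale" recordings the audio is carried as vertical groove-depth modulation, so a measured depth profile can in principle be turned back into sound by removing the slow surface form and differentiating displacement into stylus velocity. The following is a minimal sketch of that idea under simplifying assumptions (constant groove speed, a straight-line model for the surface form); it is not the authors' actual processing chain.

```python
import math

def recover_audio(depth, sample_rate):
    """Recover a hill-and-dale signal from groove-depth samples.

    depth: groove depth sampled along the unwrapped groove path.
    For a vertical-cut recording the audio corresponds to stylus
    *velocity*, i.e. the derivative of the depth profile, after the
    slow surface form (warp, eccentricity) has been removed.
    """
    n = len(depth)
    # Crude surface-form removal: subtract a least-squares line fit.
    x_mean = (n - 1) / 2
    d_mean = sum(depth) / n
    slope = (sum((i - x_mean) * (d - d_mean) for i, d in enumerate(depth))
             / sum((i - x_mean) ** 2 for i in range(n)))
    flat = [d - (d_mean + slope * (i - x_mean)) for i, d in enumerate(depth)]
    # Central-difference derivative: displacement -> velocity.
    audio = [(flat[min(i + 1, n - 1)] - flat[max(i - 1, 0)])
             * sample_rate / 2 for i in range(n)]
    peak = max(abs(a) for a in audio) or 1.0
    return [a / peak for a in audio]  # normalized to full scale

# Synthetic demo: a 1 kHz tone cut into a tilted surface.
sr = 48_000
depth = [0.002 * i / sr + 5e-4 * math.sin(2 * math.pi * 1000 * i / sr)
         for i in range(sr)]
audio = recover_audio(depth, sr)
```

A real system must additionally cope with sensor noise, dropouts over damaged regions, and stitching of overlapping scan passes, which is where the image- and signal-processing effort described above comes in.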

Archiving Meets Automatic Speech Recognition -
Curse or Blessing?

Christophe Kummer
NOA Audio Solutions

The present paper examines general concepts behind automatic speech and language processing technologies, set against the requirements of audio archives. It is argued that current technologies in automatic speech recognition, text analysis and speaker technologies may be a good starting point for indexing speech from digitized archive audio material to create low-level descriptors for basic text mining. Together with semantic annotations created the traditional way, this additional information may be the key to an extended archival mining approach.

Digital Storage Options - a Transitional Perspective
How Current Storage Technologies Can Facilitate
Longevity and Access

Richard Hess
Vignettes Media

John Spencer
President, BMS/Chace

Jim Wheeler
Media Forward

Debate continues within the archival community over whether Gold CD-R, HDD, or data storage tape is the "best" choice for small digitization projects. This session will present the challenges of small archival digitization projects and offer examples of current technologies that provide a transitional approach: cost-effective, robust interim storage solutions.

Additionally, the concept of a "transitional repository" will be discussed, as there are many parts of a migration project that are not readily available to small archives, such as:

  • Structured metadata databases and templates for technical and descriptive documentation
  • Creation of checksums and data tape writing
  • Workflow consultation
  • Grant proposal review with an emphasis on the resultant digital files
  • Other tools/ hardware/ etc. not available to a small archive
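Of the services listed, checksum creation is the most mechanical, and it underpins everything else: a fixity manifest computed at ingest can be recomputed at any time to prove that files on interim storage have not silently changed. The sketch below is a generic illustration of that idea, not a tool the panel describes.

```python
import hashlib
import pathlib

def make_manifest(root, algorithm="md5"):
    """Compute a fixity manifest for every file under `root`.

    Returns {relative_path: hex_digest}. Recomputing the manifest
    later and comparing it verifies that nothing has changed.
    """
    root = pathlib.Path(root)
    manifest = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            h = hashlib.new(algorithm)
            with open(path, "rb") as f:
                # Read in 1 MiB chunks so large media files fit in memory.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            manifest[str(path.relative_to(root))] = h.hexdigest()
    return manifest
```

The same manifest travels with the files when they are written to data tape, so the receiving repository can verify the transfer end to end.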

The panel will discuss the current market trends that shape the digital environment, and provide insightful real-world solutions, as well as examples of how small archives have created a digital preservation file and metadata strategy to ensure their longevity as we continue to learn and understand the benefits and risks of digital technology. 

Using Audio Description Text for Shot-by-Shot
Indexing of Films

James M Turner
Université de Montréal

Suzanne Mathieu
Université de Montréal

The E-inclusion Research Network (<http://e-inclusion.crim.ca/?q=en>) has a goal of "creating powerful audio-video tools... to improve the richness of the multi-media experience for the blind, the deaf, the hard of hearing, and the hard of seeing". Project 3.1 of the research network involves identification of types of information needed by the visually handicapped to understand moving images. By analysing the audio description provided in a number of films, we identified the types of information described for the visually handicapped, and developed a classification of these types. We analysed the text of the audio description of individual shots, as well as that of user descriptions of the shots. By comparing the two, we can estimate the possibility of automatically deriving indexing to individual shots in a film. Indexing individual shots greatly increases the possibilities for studying films, but it is too expensive to produce such indexing other than automatically. By "recycling" the keywords in the audio description text as indexing terms, access to films at the shot level can be provided.
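The "recycling" step described above amounts to treating each shot's audio-description text as a bag of keywords and building an inverted index from term to shot number. A minimal sketch of that idea (the stopword list and sample descriptions are illustrative, not drawn from the project's data):

```python
import re
from collections import defaultdict

# Illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "of", "in", "on", "to", "at",
             "is", "are", "with", "his", "her", "he", "she", "they"}

def index_shots(audio_descriptions):
    """Build a term -> shot-numbers index from audio-description text.

    audio_descriptions: {shot_number: description string}. The keywords
    in each description become the indexing terms for that shot.
    """
    index = defaultdict(set)
    for shot, text in audio_descriptions.items():
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in STOPWORDS:
                index[word].add(shot)
    return index

shots = {
    1: "A woman walks along the harbour at dawn",
    2: "The woman boards a small fishing boat",
}
idx = index_shots(shots)
```

Querying `idx["boat"]` then returns the set of shots in which a boat is described, giving shot-level access at essentially no additional indexing cost.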

Metamorphosis of a Digital System:
A Retrospect to 7 Years of Growing
Experience for Audio Digitizing

Hermann Lewetz
Austrian Mediathek, Vienna

With the start of the new millennium, the Austrian Mediathek installed a complex digital system consisting of several modules: a digitising station (supported by a job database), a catalogue database combined with a special audio player, automatic procedures, a mass storage system, etc.

The system is ever changing, however, and the digitising in particular has been improved considerably since the system's initial start. At first, we digitised at a ratio of more than 1 to 7. Documentation was done manually, that is, without the metadata produced automatically by the recording application, and quality control was imperfect.

Defined workflows have also been developed, consisting of separate action modules. Some of these are automated, while others have to be executed manually. This allows the whole workflow to be split into quick and slow steps, and the workflows do not have to be completed one after the other. Complicated steps can therefore be collected and executed at another time, perhaps by another person, without interrupting the working processes.

Recordings can be made in parallel, up to 4 carriers at once, and there is an automatic analysing tool. Many different kinds of metadata, including the workflow steps, parameters used and comments, are collected.

Several other features of our system had to be improved, especially control of the enormous number of interrelated files the system now handles. The paper will identify these developments in our system and other critical areas in which practice still forces us to change or improve our workflow.


Automated Workflows in Mass Audio Archiving

Rob Poretti
Sascom, Toronto

Migrating large audio archives represents a daunting task. Once the archive is cataloged, assessed and prioritized for preservation activities, managing an efficient transition to a digital carrier poses its own set of challenges. This paper investigates computerized solutions for the mass-migration of analog and digital archival media to mass-storage systems.

Processes covered include:

  • Importing legacy data to generate workflows and system jobs.
  • Digitization of analog material with quality analysis.
  • Using quality-analysis metadata to drive automatic batch processes.
  • Using batch processing to generate multiple derivatives.
  • Reporting and exporting metadata to the preservation database.
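The derivative-generation step above is essentially a table of output profiles driving a job planner: each preservation master is expanded into one batch job per profile. A minimal illustrative sketch (the profile names and parameters are invented, not taken from the paper):

```python
# Hypothetical derivative profiles; a real system would carry full
# codec settings and target-path rules for each profile.
DERIVATIVE_PROFILES = {
    "access_mp3": {"suffix": ".mp3", "bitrate": "320k"},
    "browse_mp3": {"suffix": "_lo.mp3", "bitrate": "64k"},
}

def plan_derivatives(master_files):
    """Expand each preservation master into the batch jobs needed
    to produce every configured derivative."""
    jobs = []
    for master in master_files:
        stem = master.rsplit(".", 1)[0]
        for name, profile in DERIVATIVE_PROFILES.items():
            jobs.append({
                "source": master,
                "target": stem + profile["suffix"],
                "bitrate": profile["bitrate"],
                "profile": name,
            })
    return jobs

jobs = plan_derivatives(["tape_0001.wav", "tape_0002.wav"])
```

Because the planner is data-driven, adding a new derivative type means adding one profile entry rather than changing the workflow code, which is what makes such a batch processor extensible.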

Some archival media lend themselves to more efficient ingest strategies. For example, by their nature, optical media archives can be transferred at many times "real-time". New technologies now allow for faster than real-time transfers for analog media as well. The paper will investigate the ramifications of:

  • Multiple-stream digitization - up to 8 simultaneous devices.
  • Ingest speeds from 1/8th to 8x real time.
  • Forward/reverse digitization for cassettes and ¼-track reels.
  • Multi-channel digitization - up to 8 channels per stream.

Mass digitization strategies have their own sets of challenges. Operators need specialized tools to manage multiple streams simultaneously, especially when they originate from different media types. Purpose-built monitoring functions, required for digitizing in reverse or for high-speed ingest, will be investigated.

When the digital master is created, browsing copies and other derivatives may be required on a timely basis or in an automated fashion. This paper examines an approach to an extensible automated batch processor for digital archives that integrates into the entire archive system.

Save Our Audiovisual Memory (SAM)

Frédéric Dumas
French National Audiovisual Institute (INA)

FIAT / IFTA has been commissioned by the Group to take over the running and coordination of its activities. Since November 2006, Emmanuel Hoog, special envoy of FIAT and CEO of INA (Institut National de l'Audiovisuel), has chaired the Group. Sue Malden is the executive coordinator.

The Group consists of:

  • United Nations, represented by Lily Chau, Antonio da Silva
  • UNESCO, represented by Joie Springer
  • WBU - World Broadcasting Union, represented by David Baylor
  • EBU / UER - European Broadcasting Union, represented by David Wood
  • FIAT / IFTA - International Federation of Television Archives, represented by Emmanuel Hoog, Sue Malden, Dominique Saintville
  • Matt White (independent)

The Group was created in February 2006, following the session on endangered archives held at the World Electronic Media Forum (WEMF) on the occasion of the World Summit on the Information Society (WSIS, Tunis, 15-16 November 2005).

At the WEMF closing session, the recommendations were presented to Kofi Annan, Secretary-General of the UN. They included the creation of an ad hoc group tasked with proposing and implementing an action plan for the preservation of endangered archives, particularly in the developing world.

Furthermore, in its message to the heads of state and government attending the WSIS, the WEMF II rapporteur requested them to "provide support for urgent action to preserve the world's audiovisual heritage, enabling future generations to access archives on their own social and cultural history, and for the establishment of an international ad hoc group on audiovisual archives comprising the world's broadcasting unions, UNESCO, specialist organisations and financing agencies."

The Group has launched a world survey, sent by each of the eight regional broadcasting unions to their members, intended to estimate the magnitude of the issue and identify archive preservation / digitisation projects that may benefit from international support.

Based on the first results of the survey, a project has been designed. This presentation will detail technical issues related to the project.

Archival Cylinder Box: An ARSC Design
and Engineering Project

Bill Klinger
Association for Recorded Sound Collections

The world’s oldest sound recordings have yet to benefit from objectively calibrated audio extractions.

Cylinder records dominated the U.S. recording industry throughout its first 23 years (1889 to 1912). However, in 2007, the 82,000 titles known to have been commercially issued on cylinders, worldwide, still await proper archival transfer and preservation.

Promising advances in non-contact playback methods, now in development, may eventually provide the necessary calibrated extractions. In the meantime, at least one million surviving cylinder records are housed in historical containers that threaten the continued survival of the audio information carried on those cylindrical artifacts.

Commissioned by the Library of Congress National Recording Preservation Board, the Cylinder Subcommittee of the ARSC Technical Committee is developing an Archival Cylinder Box (ACB). The objective of the project is to define, design, and produce an optimized, low-cost, archival-quality container for use in safely storing and transporting a single “standard-size” cylinder phonograph record.

This talk presents 3-D CAD models, renderings, and animations that illustrate the advanced tools, processes, and materials employed to meet the technical challenges posed by the demanding ACB requirements. A prototype ACB will be available during the symposium, for review and comment.

For updates on the JTS 2007 Program send your email address to info@jts2007.org.