Performance of Analog-to-Digital Converters for Sound: Methods and Metrics

Title (author 1):
Ms
First names (author 1):
Kate
Surname (author 1): 
Murray
Institution: 
Library of Congress
Country: 
UNITED STATES
Other authors: 
Chris Lacinak, Carl Fleischhauer
Presentation type: 
spoken paper
Date: 
Sunday 27 September
Start time: 
12:00
Venue: 
Belvedere 1
Abstract: 

In 2012 the US Federal Agencies Digitization Guidelines Initiative (FADGI) Audio-Visual Working Group set out to explore the application of a matched pair of test methods and performance metrics for the measurement of Analog-to-Digital Converters (ADCs) employed in service of preservation and archiving. The goal was to advance the development of standard, cost-effective, simple mechanisms for measuring and reporting on the performance of ADCs.
FADGI’s expert consultant, Chris Lacinak, was tasked with this effort, focusing first on the valuable building blocks of the IASA TC-04 performance metrics and the AES-17 test method. The work involved testing five ADC devices with a broad suite of test methods and analyzing the results to help determine a core set of test methods and performance metrics, culminating in a FADGI guideline published in 2012. The FADGI recommendations differ slightly from the IASA set: five new metrics substitute for three IASA metrics and, in one case, an IASA metric has been divided into two parts. The guideline has since been accepted as the basis for an official AES standards project. The first part of this presentation will provide an overview of the process, enumerate the metrics and explain why they matter.
In 2014, FADGI continued this effort, tasking Chris Lacinak with field testing the guideline's metrics and methods in a set of federal agency preservation facilities, in order to review the guideline's usability and to identify areas in need of refinement. FADGI was also interested in levels of performance testing. Two performance-test setups are in play: the first is based on expensive analytic tools and provides a comprehensive result; the second is based on relatively inexpensive analytic tools and provides a partial result. The underlying assumption for the second setup is that some testing is better than none. Meanwhile, the field testing has confirmed most of the metrics and methods in the guideline while highlighting the need to adjust a few others. The second part of the presentation will offer an overview of the process, findings and refinements brought about through this latest effort.