The AudiAnnotate Project has been awarded a 2019 Digital Extension Grant from the American Council of Learned Societies to build on existing efforts by the HiPSTAS project and Brumfield Labs to produce and share (1) a web application that helps scholars create IIIF-AV annotations; (2) documentation describing three use cases that leverage this workflow in scholarship on poetry performance recordings; and (3) a workshop for sharing this work. The application and workflows will help users translate their own analyses of audio recordings into media annotations publishable as easy-to-maintain, static W3C Web Annotations that are associated with IIIF manifests, hosted in a GitHub repository, and viewable through presentation software. Local, national, and international partners include the Harry Ransom Center, the IIIF Consortium, and the SpokenWeb project.
As this project develops, documentation and tools are available at https://hipstas.github.io/AudiAnnotate/.
History: To annotate is to add comments to an object. To audiate is to sense the meaning of sound when it is not present. The AudiAnnotate project originates from the premise that facilitating the annotation of audio collections will accelerate access to, promote scholarship with, and extend our understanding of important audio collections, some of which are currently inaccessible and others of which could be lost forever. Audio collections are not discoverable without annotations. If we cannot discover an audio file, we will not use it in scholarship. If we do not use audio collections, the libraries and archives that hold massive collections of audio recordings from a diverse range of time periods, cultures, and contexts will not preserve them. The AudiAnnotate project's goal is to make audio and its interpretations more discoverable and usable by extending the newest IIIF (International Image Interoperability Framework) standard for audio through the development of the AudiAnnotate web application, along with documented workflows and workshops that facilitate the use of existing best-of-breed, open-source tools for audio annotation (Sonic Visualiser), public code and document repositories (GitHub), and audio presentation (Universal Viewer) to produce, publish, and sustain shareable W3C Web Annotations for individual and collaborative audio projects.
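Sonic Visualiser can export an annotation layer as plain text. As a minimal sketch of the first step in such a workflow, the snippet below parses rows of that kind of export into timestamped annotations; the two-column `time,label` format is an assumption for illustration, not AudiAnnotate's actual input specification.

```python
import csv
import io

def parse_annotation_layer(text):
    """Parse rows of (time-in-seconds, label) into a list of
    annotation dicts. The two-column CSV format is an assumed
    example of an exported annotation layer."""
    annotations = []
    for row in csv.reader(io.StringIO(text)):
        if len(row) < 2:
            continue  # skip blank or malformed rows
        annotations.append({"start": float(row[0]), "label": row[1].strip()})
    return annotations

# Hypothetical exported layer with two annotations
sample = "12.5,first stanza begins\n47.25,audience applause\n"
annotations = parse_annotation_layer(sample)
print(annotations)
```

Once parsed into a structured form like this, each timestamped label can be converted into a W3C Web Annotation and listed on a static webpage.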
Overview: Broadly speaking, the application and workflows developed in the AudiAnnotate project will help users translate their own analyses of audio recordings into media annotations publishable as easy-to-maintain, static W3C Web Annotations that are associated with IIIF manifests, hosted in a GitHub repository, and viewable through presentation software such as Universal Viewer. Specifically, the AudiAnnotate project is seeking funds for a year of development and outreach, from September 1, 2019 to August 31, 2020, in order to produce and share
(1) an AudiAnnotate web application that we will develop to help scholars translate their annotations into IIIF-AV annotations via two human-readable and shareable derivatives of their research: (a) an IIIF item that is playable in IIIF presentation tools such as Universal Viewer, with annotations overlaid on the original item; (b) a webpage that lists the annotations and timestamps, making the audio easily discoverable by other researchers and indexable by search engines even when the audio itself is unavailable. The AudiAnnotate tool will be a web application built in Ruby on Rails, running on a server at UT Austin, and released as open source.
(2) documentation that will describe three different use cases (described below) that will demonstrate this workflow in scholarship on poetry recordings; and
(3) a workshop for sharing this work and teaching others how to produce IIIF-AV annotations.
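The annotations described in (1) follow the W3C Web Annotation data model, which can target a time range of a recording with a media fragment (`#t=start,end`). Below is a minimal sketch of one such annotation; the URLs, identifier, and helper function are hypothetical placeholders, not AudiAnnotate's actual output.

```python
import json

def make_audio_annotation(audio_url, start, end, comment, anno_id):
    """Build a minimal W3C Web Annotation that targets a time
    range of an audio file via a media fragment (#t=start,end).
    All URLs and IDs here are hypothetical placeholders."""
    return {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "id": anno_id,
        "type": "Annotation",
        "motivation": "commenting",
        "body": {
            "type": "TextualBody",
            "value": comment,
            "format": "text/plain",
        },
        # Media fragment syntax selects seconds 'start' through 'end'
        "target": f"{audio_url}#t={start},{end}",
    }

anno = make_audio_annotation(
    "https://example.org/recordings/reading.mp3",  # hypothetical URL
    12.5,
    47.25,
    "First stanza of the reading",
    "https://example.org/annotations/1",  # hypothetical ID
)
print(json.dumps(anno, indent=2))
```

Because the result is plain JSON, annotations like this can be committed to a GitHub repository as static files and referenced from an IIIF manifest for display in viewers such as Universal Viewer.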
Reports: Among other things, the AudiAnnotate project is documenting existing annotation practices and tools for audio. The following reports focus on how users at the Harry Ransom Center at UT Austin have used Sonic Visualiser and Audacity.
- User Testing Workshop Report, Harry Ransom Center, November 22, 2019 by Bethany Radcliff
- Sonic Visualiser user report by Bethany Radcliff
- Audacity user report by Bethany Radcliff
People: The AudiAnnotate project is facilitating collaboration on two levels: across the project members and their extended project communities, and across the wide range of cultural heritage institutions and scholars who work with audio, including the IIIF AV Working Group and the SpokenWeb project.
- Principal Investigator: Tanya Clement, Associate Professor, Department of English, University of Texas, Austin
- Ben Brumfield, Co-lead, Brumfield Labs
- Sara Brumfield, Co-lead, Brumfield Labs
- Bethany Radcliff, Graduate Research Assistant, English Department and School of Information, UT Austin
- Liz Fischer, Graduate Research Assistant, English Department, UT Austin
- Kylie Warkentin, Undergraduate Research Assistant, English Department, UT Austin
- Jason Camlot, Associate Professor, Department of English and Associate Dean, Faculty Affairs (FAS) at Concordia University, Montreal, and Principal Investigator, SpokenWeb Project;
- Chris Cannam, Head Developer, Sonic Visualiser;
- Tom Crane, Technical Director, Digirati; Co-chair, IIIF A/V Technical Specification Group;
- Jon Dunn, Assistant Dean for Library Technologies at Indiana University; Co-chair, IIIF A/V Technical Specification Group; Director of Avalon Media System; PI of the Automated Metadata Project (AMP);
- Jim Kuhn, Associate Director for Library Division and Interim Head of Digital Services at the Harry Ransom Center, UT Austin.
These project partners are integral as advisors on this project as well as liaisons for a larger network of communities who will benefit from the AudiAnnotate use cases, workflows, and workshops for using IIIF for audio.