In 2012, the School of Information at the University of Texas at Austin and the Illinois Informatics Institute at the University of Illinois at Urbana-Champaign received an NEH Institutes in Advanced Technologies in the Digital Humanities grant to host two rounds of an Institute on High Performance Sound Technologies for Access and Scholarship (HiPSTAS) in May 2013 and May 2014. Humanists interested in sound scholarship, stewards of sound collections, and computer scientists and technologists versed in computational analytics and visualizations of sound gathered to develop more productive tools for advancing scholarship in spoken-word audio.
HiPSTAS participants included 20 junior and senior humanities faculty and advanced graduate students, as well as librarians and archivists from across the U.S., all interested in developing and using new technologies to access and analyze spoken-word recordings within audio collections. The collections we made available to participants included poetry from PennSound at the University of Pennsylvania, folklore from the Dolph Briscoe Center for American History at UT Austin, speeches from the Lyndon B. Johnson Library and Presidential Museum in Austin, and storytelling from the Native American Projects (NAP) at the American Philosophical Society in Philadelphia. Sound archivists from UT Austin, computer scientists and technology developers from I3 at Illinois, and representatives from each of the participating collections came together for the HiPSTAS Institute to discuss the collections, the work that researchers already do with audio cultural artifacts, and the work HiPSTAS participants could do with advanced computational analysis of sound.
At the first four-day meeting (“A-Side”), held at the iSchool at UT May 29 – June 1, 2013, participants were introduced to essential issues that archivists, librarians, humanities scholars, and computer scientists and technologists face in understanding the nature of digital sound scholarship and the possibilities of building an infrastructure for enabling such scholarship. At this first meeting, participants were also introduced to advanced computational analytics such as clustering, classification, and visualization.
Participants developed use cases for a year-long project in which they used advanced technologies to augment their research on sound. In the interim year, participants met virtually with the Institute Co-PIs (Clement, Auvil, and Tcheng) and reported periodically on their use cases and ongoing research within the developing environment.
In the second year, the participants returned to the HiPSTAS Institute for a two-day symposium (the “B-Side” meeting) at which they reported on their year of research. At this second event, the participants presented scholarship based on these new modes of inquiry and critiqued the tools and approaches they had tried during the development year. The meeting ended with a daylong session in which the group drafted recommendations for implementing HiPSTAS as an open-source, freely available suite of tools for supporting scholarship on and using audio files.