Assistant Professor, Department of English, Georgia State University
Hello sound people,
I’m curious about how machine learning and data vis can help correlate affect and sound, particularly for collections that mix speech and environmental audio. There are large, poorly described collections of oral histories that might become more useful if concordances of affect could be suggested automatically, so that researchers could search the collections for specific affects or affect patterns. My interest in sound studies grew out of my work in oral history.
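To make the idea concrete, here is a minimal sketch of what an affect concordance could look like once segments have been labeled. Everything here is hypothetical: the recording names, time spans, and affect labels are invented, and in a real system the labels would come from a model over acoustic features (pitch, energy, speech rate) rather than being hard-coded.

```python
from collections import defaultdict

# Hypothetical (recording, time span, affect) triples, standing in for the
# output of an affect classifier run over segmented oral-history audio.
segments = [
    ("interview_01", "00:00-00:45", "calm"),
    ("interview_01", "00:45-02:10", "grief"),
    ("interview_02", "00:00-01:30", "grief"),
    ("interview_02", "01:30-03:00", "anger"),
]

def build_affect_index(segments):
    """Invert labeled segments into a concordance: affect -> locations."""
    index = defaultdict(list)
    for recording, span, affect in segments:
        index[affect].append((recording, span))
    return index

index = build_affect_index(segments)

# A researcher could then query the collection by affect,
# getting back every recording and time span tagged with it:
print(index["grief"])
```

The interesting (and hard) part, of course, is producing the labels in the first place; the concordance itself is just an inverted index over whatever the labeling step emits.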
Digital humanities and the collective memory of population-level events is probably the best way to describe the silo in which my work takes place. Most of the time, that means collective traumas, such as those that occur when a population is the victim of a genocidal regime. Sometimes, it’s when an environmental tragedy strikes. Occasionally, I look at events and platforms that simply help me understand how communities form and how media are used to build them. Even more occasionally, I comment on that work on Twitter (@intransitive) or on a blog <http://60hzhumanism.wordpress.com/>.
Currently, I’m at Georgia State University, where I hold a joint appointment in English and Communication and am part of a cluster in New and Emerging Media <http://2cinem.gsu.edu>. That group connects Art + Design, English, Music, and Communication, and works on challenges in media production, media research, and media industries. I’m also affiliate faculty with our Center for Human Rights and Democracy and the Transcultural Violence Initiatives. Most of my time over the last two years has been spent as PI on a Digging into Data Challenge project looking at transversal reading of human rights violations corpora from many different countries. The team <http://digging.gsu.edu> has been great, bridging linguistics, computer science, history, information science, and national languages and literatures.