I was privileged to attend the Digital Humanities conference in Hamburg this year, thanks to DiSC (Digital Scholarship Commons). Part of the reason I took the time to write up all my notes as blog posts here on Tech-Know How was as a way of saying thank you for the opportunity, and to share with my colleagues at Emory a small part of the wealth of ideas and innovation I was exposed to.
A presentation on a tool for working with email archives, and two papers on text analysis: one testing methods for automated recognition of speech and thought representation, and another testing the value of a proposed feature for authorship analysis based on language around the use of names.
This session consisted of a presentation on an approach to multi-modal analysis comparing text, video, and subtitles for different representations of the same story; another on automated analysis of emotional affect in user-tagged images; and a third presentation on a process for doing aural analysis of digital texts.
This panel session on topic modeling started with presentations from Travis Brown (MITH/University of Maryland), David Mimno (NLP researcher and current maintainer of MALLET), and Rob Nelson (University of Richmond's Digital Scholarship Lab), and then continued with a lively, engaged discussion (see the panel abstract for the abstracts of all three presentations).
An analysis of the user and image networks of deviantART, a report on named-entity extraction reliability for historical data from messy texts, and an attempt to chart the growth of cultural complexity using Google Ngrams.
This was an odd mix of presentations (at least, I couldn't find any common theme), but fortunately for me I was interested in all three of them: an experimental 3D poetry visualization grounded in literary theory; Amy Earhart's compelling argument for the need to recover early digitization projects that are disappearing or even already gone; and Doug Reside's discussion of the successes and problems with DH Code Camps.
A session on three different geographical or otherwise map-related projects: applying 3D technologies to archaeology, investigating the precision of Ptolemy's maps, and the new map-making tool Neatline from UVA's Scholars' Lab.
This was a diverse session, ranging from detailed visual analysis of title pages in 17th-century medical texts, to the failure of traditional keyword analysis methods with authors like Dickens, to "macroanalysis" across some 3,500 19th-century literary texts.
This session included three very different approaches to visualizing aspects of English language and literature: a visualization tool for poetry, surprisingly beautiful tree-maps of the history of the English language, and vocabulary trends in English literature over the 18th and 19th centuries.
This session consisted of three different papers relating in one way or another to authorship attribution or verification: the first looked at the technique of "unmasking" to see if it might be used across genres; the next looked at error tolerance in most-frequent-word authorship attribution techniques across multiple languages; and the last went over the contested history of the twelve disputed Federalist papers.