This was an odd mix of presentations (at least, I couldn't find any common theme), but fortunately for me I was interested in all three of them: an experimental 3D poetry visualization grounded in literary theory; Amy Earhart's compelling argument for the need to recover early digitization projects that are disappearing or even already gone; and Doug Reside's discussion of the successes and problems with DH Code Camps.
A session on three different geographical or otherwise map-related projects: applying 3D technologies to archaeology, investigating the precision of Ptolemy's maps, and the new map-making tool Neatline from UVA's Scholars' Lab.
Story Points Completed: 27 sp
Average Velocity: 38.6 sp/itr
Story Points by Project
- Digital Archives: 3 pts
- ETD: 16 pts
- Open Emory: 4 pts
- Views of Rome: 4 pts
See all 2012 and previous years' Metrics spreadsheets
This was a diverse session - from detailed visual analysis of title pages in 17th-century medical texts, to the failure of traditional keyword analysis methods with authors like Dickens, to "macroanalysis" across some 3,500 19th-century literary texts.
This session included three very different approaches to visualizing aspects of English language and literature - a visualization tool for poetry, surprisingly beautiful tree-maps of the history of the English language, and vocabulary trends in English literature over the 18th and 19th centuries.
These are some rough planning notes related to the upcoming iteration.
Overall, the negotiations over the data in this project have been very difficult and have impeded our ability to work with it effectively. We might benefit from some mechanism, as part of the DiSC process, for vetting data and projects. This is outside the context of software engineering, but it does impact our ability to proceed, and going forward such a mechanism would significantly benefit our ability to deliver projects.
This session consisted of three different papers, each relating in some way to authorship attribution or verification: the first looked at the technique of "unmasking" to see if it might be used across genres; the next looked at error tolerance in most-frequent-word authorship attribution techniques in multiple languages; and the last went over the contested history of the twelve disputed Federalist papers.
This session was a panel with four speakers - two from the text encoding "team" and two from the analytics "team"; each of them spoke briefly and then there was an interesting and engaged conversation (see the abstract).
I enjoyed this tutorial from the Free Your Metadata group. This session was an actual, valuable workshop on using Google Refine to clean and refine metadata, and it was very well run (apparently because this team has had plenty of practice, running these workshops for libraries).
This session was billed as a workshop, but was really a series of presentations - sort of a mixed bag of things relating to annotation, ontologies, etc, as Rob Sanderson (who presented on Open Annotation) tweeted: