Today I went to the “Defining the Digital Humanities” lecture at Columbia University in New York City. It was a really interesting and thought-provoking talk, and I thought it would be worth sharing with y’all.
The bulk of Cohen’s talk was on the variety of ways in which new technologies have aided the study of the humanities and allowed scholars new insights into their fields, for example through the analysis of large bodies of knowledge, mainly texts but also other media.
The example Cohen gave was a graph showing how the use of the word “Christian” in the titles of books decreased during the 19th century. His point was that, given a starting point like that, a researcher could then switch to a close reading to investigate the linguistic and cultural shift that might accompany the data. To Cohen, aggregate data analysis is only one of the skills in a researcher’s toolbox, but it is one that allows researchers to differentiate anecdotal evidence from overall trends. Cohen believes the skills necessary to harness the digital humanities will be, and should be, taught in graduate school alongside, say, paleography.
Other projects and technologies brought up by Cohen:
- Papers of the War Department – This represents the possibility of bringing together a large number of objects that are geographically dispersed into one database where they can be found more easily. The Papers of the War Department project is meant to re-create the official documents of the War Department of the United States from before its office and all its records burned in a fire. The office’s papers can be mostly reconstructed because two copies were made of each document, one for the War Department and one for the recipient. The database allows for research that previously had been prohibitively expensive, as large numbers of documents are available for analysis without having to track down every archive that possesses a War Department document.
- Omeka – a free, open source publishing platform that allows for easier remixing of data, such as plugging in a map to help analyze existing data and look for patterns.
- Born-digital media is very fragile but can also be very useful. It tends to be more useful, or at least more manageable, in aggregate. Example: President Johnson’s White House produced 40,000 memos; President Clinton’s White House produced 4 million electronic memos. It is impossible to go through 4 million memos using traditional means.
- Zotero – a favorite of Columbia staff – can extract semantic data from the web to help users deal with the heterogeneous mixture of media and data found there.
- Scholarly communication is a field that can benefit greatly from the application of digital humanities. Cohen encourages his graduate students to blog and create a web presence. Blogs and other social media are useful because they allow for a more interdisciplinary conversation. Cohen also recommends The New Everyday, a media outlet for short essays (1,000-2,000 words) and multimedia.
- Twitter can be very useful because of its ability to quickly query a large number of people for hard-to-research information. Jay Rosen at NYU’s School of Journalism uses Twitter to run a “flying seminar” that over 50,000 people follow.
- Digital Humanities Now
Frabetti’s presentation primarily focused on a humanities analysis of technology, and on the ways in which the humanities can be critical of and shape technology.
The digital humanities is stabilizing and becoming increasingly embedded in the university, entering the discourse about universities at a time when universities are increasingly multinational and in crisis. This is problematic: given the humanities’ intimate ties with universities, the digital humanities run the risk of becoming part of the greater discourse on higher education. The field is also solidifying in a way that Frabetti finds problematic.
Frabetti pointed out that globalization itself requires the innovation from digitization to make itself financially feasible.
She cites Gary Hall of Culturemachine.net, who asks what the humanities can bring to the study of computer science and how they can shape technological development. Frabetti calls on humanities scholars not only to understand how to use software, but also to understand how it works and to analyze how software shapes the human being. She also discusses the writings of Stiegler and Tim Clark (in Deconstructions: A User’s Guide), namely that Western thought is based on the Aristotelian idea of separating technical knowledge from epistemology, and on the assumption that tools do not shape the person who wields them. This is of course problematic, given that we increasingly feel we have created Frankenstein’s monster: the technology we have created is increasingly affecting us in negative ways.
As a parable, Frabetti brings up the novel The Turing Option, in which the protagonist, Delaney, has a computer installed in a part of the brain he lost in an accident. Delaney must then relearn all that he had previously learned. Ultimately, however, Delaney finds that he can never reconstruct the brain he once had; there is no way to recover his original self. This story brings to light a process that is always happening in humans: we are always reconstructing our brains and changing our minds.
Frabetti questions the assumptions about instrumentality and rationality at the core of digitization. There is a gap between technology and language, and there needs to be space for a self-reflexive digital humanities. It is also necessary to examine universities’ role in digital creation, given their increasing political significance.
Buzzetti follows in Manfred Thaller’s footsteps in saying that the purpose of computer science is to create algorithms and systems, but for digital humanities, these algorithms and systems must be built around the disciplines they wish to incorporate.
- The digital wishes to create a simulation of the text it presents.
- Can visual representations of data replace reading and reflection?
- With knowledge representations, tools can create new relationships with the text.
- This problem can be solved through the unification of Web 2.0, the Semantic Web, and digital libraries to form Semantic Digital Libraries.