DH2014: Work with Digital Image Data

By now, most desks have been cleared of all things DH2014 to make way for a busy autumn. Maybe this is a good time to take a step back, have a look at notebook scribblings from Lausanne, and reflect on what one carried away.

I went into DH2014 with the particular aim of attending as many sessions as I could where people were talking about working with digital image data – especially with computational means of image processing or computer vision. Quite a number of papers and posters, and a panel, fit this bill. A few notes and some thoughts below:

  • Reside talked about having tested some out-of-the-box tools, such as TinEye’s MatchEngine, for automatically recognising similar-looking images in image collections (content-based image retrieval, CBIR). Test cases included recognising and retrieving images of Japanese woodblock prints held by different institutions, so that users can check whether the prints have been mis-attributed in their metadata, and taking first steps towards identifying actors in photographs of theatrical performances who remain unnamed in the records of those pictures. Regarding results, the latter seemed at best work in progress. (A rough sketch of this kind of image matching follows the list.)
  • Stahmer also talked about image-based searching of archives. They had developed an impressive system for the use case of finding similar-looking woodcut prints in an archive of broadside ballads: the system extracts feature points (“visual words”) from individual images, constructs a dictionary of these words, and uses it to estimate similarities between the images. Interestingly, once the visual words are extracted, the system handles the data in ways rather analogous to working with text (e.g. filtering out frequent words, or indexing and searching the content with Lucene). (See the bag-of-visual-words sketch below.)
  • Lorang et al. had explored identifying poems on newspaper pages by means of their visual characteristics (the interplay between text and whitespace on the page). They had extracted and labelled snippets manually and used these as training data for the neural network at the core of a system they had developed. Results-wise, this too was work in progress. (A toy version of the idea is sketched below.)
  • Arvanitopoulos and Süsstrunk presented a method for text line extraction from historical manuscript facsimiles based on a technique called seam carving. This binarisation-free method belongs to the early stages of the document image analysis pipeline, and their results look quite promising. (The core dynamic-programming step of seam carving is sketched below.)
  • A panel discussed developments in image analysis. The topics included, for example, advanced image acquisition techniques to enhance legibility (such as reflectance transformation imaging, RTI); the roles of the produced facsimiles in interpretation and related cognitive processes; and face recognition.
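
To make some of the above a bit more concrete, here are a few back-of-the-envelope sketches in Python. None of them are the presenters’ actual systems; they only illustrate the generic techniques. First, the flavour of similarity-based retrieval: the sketch below uses perceptual hashing (via the ImageHash library) to find near-duplicate images in a folder. This is not how TinEye’s MatchEngine works internally (that is proprietary), and the folder layout and file names are made up.

```python
# Rough illustration of near-duplicate image matching across a collection,
# using perceptual hashes (ImageHash). Not TinEye's actual method.
from pathlib import Path
from PIL import Image
import imagehash

def index_collection(folder):
    """Compute a perceptual hash for every JPEG in a folder (hypothetical layout)."""
    index = {}
    for path in Path(folder).glob("*.jpg"):
        index[path.name] = imagehash.phash(Image.open(path))
    return index

def find_similar(query_path, index, max_distance=10):
    """Return images whose hash lies within a Hamming distance of the query."""
    query_hash = imagehash.phash(Image.open(query_path))
    hits = [(name, query_hash - h) for name, h in index.items()
            if query_hash - h <= max_distance]
    return sorted(hits, key=lambda pair: pair[1])

# Usage (paths are made up):
# index = index_collection("woodblock_prints/")
# for name, dist in find_similar("query_print.jpg", index):
#     print(name, dist)
```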
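
Second, a minimal bag-of-visual-words pipeline of the kind Stahmer’s description brought to mind: local descriptors are clustered into a dictionary of “visual words”, each image becomes a word histogram, and histograms can then be compared much like text documents. The descriptor type, dictionary size and similarity measure here are my own assumptions, not theirs.

```python
# Minimal bag-of-visual-words sketch: ORB descriptors clustered into a
# "dictionary", each image turned into a normalised word histogram.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def bovw_histograms(paths, n_words=200):
    """Turn each image into a normalised histogram over a visual-word dictionary."""
    orb = cv2.ORB_create()
    per_image = []
    for p in paths:
        img = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(img, None)
        per_image.append(desc if desc is not None else np.empty((0, 32), np.uint8))
    # "Dictionary": cluster all local descriptors into n_words visual words.
    all_desc = np.vstack(per_image).astype(np.float32)
    kmeans = KMeans(n_clusters=n_words, n_init=10).fit(all_desc)
    # Each image becomes a word-count histogram, much like a text document.
    hists = []
    for desc in per_image:
        words = kmeans.predict(desc.astype(np.float32)) if len(desc) else []
        hist, _ = np.histogram(words, bins=np.arange(n_words + 1))
        hists.append(hist / max(hist.sum(), 1))
    return np.array(hists)

def cosine_similarity(a, b):
    """Compare two word histograms the way one would compare text documents."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
```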
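
Third, a toy version of the poem-spotting idea: describe a page snippet by how ink and whitespace are distributed down the page, then train a small neural network on manually labelled examples. Both the feature (a simple row-wise ink-density profile) and the classifier (scikit-learn’s MLPClassifier) are stand-ins of my own choosing, and the variable names for the labelled data are hypothetical.

```python
# Toy poem-vs-prose classifier: whitespace/ink distribution as features,
# a small neural network as the classifier. Not the authors' actual system.
import numpy as np
from PIL import Image
from sklearn.neural_network import MLPClassifier

def whitespace_profile(path, n_bins=32):
    """Fraction of dark pixels in horizontal bands: poems tend to show
    ragged right margins and blank stanza breaks."""
    img = np.asarray(Image.open(path).convert("L")) < 128  # True where ink
    rows = img.mean(axis=1)                                 # ink density per row
    bins = np.array_split(rows, n_bins)
    return np.array([b.mean() for b in bins])

# snippet_paths and labels (1 = poem, 0 = not poem) would come from the
# manual labelling step described in the talk; these names are hypothetical.
# X = np.stack([whitespace_profile(p) for p in snippet_paths])
# clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, labels)
# print(clf.predict(X[:5]))
```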
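
And finally, the core idea behind seam carving: a “seam” is a connected left-to-right path of pixels with minimal cumulative energy, found by dynamic programming. On a manuscript image where dark ink carries high energy, such seams tend to run through the whitespace between text lines. The paper’s actual method is considerably more involved; this shows only the basic dynamic-programming step, with the choice of energy map left as an assumption.

```python
# Bare-bones horizontal seam carving by dynamic programming.
import numpy as np

def horizontal_seam(energy):
    """Return one row index per column describing a minimum-energy seam."""
    h, w = energy.shape
    cost = energy.astype(float)
    # Accumulate minimal cost column by column; the seam may move up or
    # down by at most one pixel between neighbouring columns.
    for x in range(1, w):
        prev = cost[:, x - 1]
        up = np.roll(prev, 1);   up[0] = np.inf    # neighbour one row above
        down = np.roll(prev, -1); down[-1] = np.inf  # neighbour one row below
        cost[:, x] += np.minimum(np.minimum(up, prev), down)
    # Backtrack from the cheapest end point in the last column.
    seam = np.zeros(w, dtype=int)
    seam[-1] = int(np.argmin(cost[:, -1]))
    for x in range(w - 2, -1, -1):
        y = seam[x + 1]
        lo, hi = max(y - 1, 0), min(y + 2, h)
        seam[x] = lo + int(np.argmin(cost[lo:hi, x]))
    return seam

# The energy map could be, e.g., 255 minus a smoothed grayscale image,
# so that ink is costly and seams prefer the gaps between text lines:
# seam = horizontal_seam(255.0 - smoothed_gray)
```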

The above is only a small sample of the breadth and variety of image processing and computer vision applications within the digital humanities. The conference included more discussion, for example in the form of posters addressing problems related to document image analysis.

All in all, many of the above sessions were well attended, and in the hallways there was certainly a lot of interest in these kinds of techniques and plenty of hope directed at their applicability. There also seemed to be quite a few younger practitioners and newcomers involved in all of this.

In general, a sentiment shared by many seemed to be that we cannot expect computer scientists to develop specialised tools addressing the kinds of research questions people in DH are interested in. Instead, we need to modify existing tools and build new ones ourselves.

This translated into a need for more practical workshops and hands-on training, so that DH people can put existing image processing and computer vision tools and techniques to use and gain a better understanding of what goes on under the hood at the algorithmic level. Such events would help to curb false hopes directed at the techniques, and help people recognise the possibilities when it comes to their own research questions and data.

Let’s hope that there is something brewing here.

