
DiXiT Convention 2 – “Academia, Cultural Heritage, Society”

Design: Tessa Gengnagel


Opening words
“So, what is the use of your edition? And who actually uses it?” When you ask scholarly editors such questions, chances are you will see them flinch a little before they set out on a long and enthusiastic exposé about why exactly their material is so interesting and truly deserves a proper edition. If its popularity depended on the enthusiasm of its editors, the digital scholarly edition would conquer the world. But at the same time, editors are very much aware of the difficulties of—to name but a few—reaching an audience, ensuring preservation, using cultural heritage collections, and dealing with copyrighted material.

The second DiXiT convention set out to address issues like these. It targeted the academic community as well as a wider audience of those working in publishing and the GLAM sector. This made for a diverse program where—in line with the title—academia, society, and cultural heritage could meet. Interaction was further promoted by a series of evening excursions to Cologne’s breweries and pubs, with highlights including a visit to the Kolumba museum and a sword-fight demonstration in the steamy Stereo Wonderland. The program, which ran from Tuesday March 15 to Friday March 18, indeed included speakers from libraries and other cultural heritage institutes. The sessions covered many of the topics outlined above: licenses, community building, publishing, and funding. At the same time, many attendants and speakers at the conference were “regular” textual scholars or scholarly editors. This by no means affected the competence of the speakers: working in the field of digital editing has ensured their expertise in those topics.

Themes
Although the talks were of a diverse nature, some recurring themes could be spotted during the conference. First of all, speakers often asked questions of a reflective and existential nature: “What do we mean by editing now? What is our role as humanists in this evolving community?” Such questions are especially prominent in projects that use forms of citizen science and crowdsourcing, like the Open Greek and Latin Project and SunoikisisDC, presented by Professor Monica Berti; the Letters of 1916; and Hilde Bøe’s “Edvard Munch’s Writings”; but also the Lexicon of Scholarly Editing maintained by Wout Dillen. Another example is the Digital Panopticon, which enables collaboration between different databases. Since such projects facilitate (student) engagement, dissemination, and publicity, they also prompt the question of training: how can we adjust the content of DH programs to students from various backgrounds?

As a second theme, therefore, we have “collaboration” or “knowledge exchange” and the various forms this can take. Speakers repeatedly demonstrated ways in which fields can benefit from each other. Indeed, most editing projects today are collaborations between computer scientists and researchers, between academia and the commercial sector, between professionals and amateurs. They also combine tradition (for lack of a better term) and innovation. Professor Andreas Speer illustrated this in his discussion of critical editions of large scientific text corpora (e.g. the “Corpus Aristotelicum”). This type of material is often a challenging mix of big data and small data. Speer recommended the editing method of the printed edition, which has been developed by medieval scholars over several centuries and has proven its worth. Our increasing use of digital technologies does not mean we can now abandon traditional print-based methodologies.

Scholars and publishers
In her opening keynote, Claire Clivaz explored the collaboration between scholars and publishers. The digital medium, which facilitates the combination of audio, visuals, and text (oral and written), makes for a new, multimodal type of digital object. Since this object represents several voices and different levels of rhetoric, our current definitions of authorship and interpretation no longer hold and need to be “reinvented”. Another pressing issue is the preservation of such multimodal objects. Clivaz argued that the know-how of publishers is especially valuable here. She also suggested being more open to flexible modes of publishing, for instance through Sciencematters.io, which publishes small pieces of data or observations that are not necessarily part of a larger project.

At the same time, digital publishing methods change many things we took for granted. Anna Maria Sichani and Wout Dillen addressed such issues in their talks on, respectively, Open Access and licenses. Dillen made painfully clear that the issue of copyright in digital editing is “a mess”, yet both speakers concluded on a positive note. They emphasized the importance of creativity when you have to manoeuvre within legislative constraints and limitations.

Additionally, Sichani claimed that digital editions are more valuable once we consider them as “ongoing, evolving projects” instead of fixed, finite products. She proposed an “open ended deep access” model in which editors would publish not only their underlying data, such as XML files, but also their rejected proposals and other types of documentation. Such a model would provide an ideal test environment for new computational approaches. In the same vein, Øyvind Eide pointed out that scholarly commentary and annotations can be published independently of a (licensed) work. A good example is a recent ruling in France, which holds that a critical apparatus is not part of the text and thus not under copyright. Publishing such material can serve as advertisement, showing the public what is “out there”.

Scholars and research institutes
Alexander Czmiel proposed a similar layered model of an edition: from the data layer to the functionality and presentation layers. Each layer has different requirements if it is to be properly preserved and sustained. Czmiel distinguished three rules for sustainability that ideally all digital editions should follow: (1) formal documentation; (2) a package that stores all files of the digital edition; and (3) a well-assorted toolbox. Finally, he described an interdisciplinary collaboration when he argued for a team of “data curators” who would preserve and sustain the layers of digital editions. Research libraries are a good starting point for such a “data curator” team, said Czmiel, referring to the talk of Torsten Schassan earlier that week.

According to Schassan, the role of institutions (such as research libraries) is to increase the chance that the data “lives on” after the edition is completed. They can attain this goal by stimulating the process of normalization and by pushing individual researchers towards accuracy. Schassan stressed that libraries and editors should work together right from the start and think about the best approach. For instance, librarians can point editors to “weird” nesting structures in their encoding (e.g., variance in the use of the @type attribute); editors can teach librarians about their code, their encodings, and what they want to present.
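As a minimal sketch of what such normalization might look like in practice—the controlled vocabulary and helper functions below are invented for illustration, not tools presented at the conference—a library could survey which @type values a project actually uses and map variant spellings onto one preferred form:

```python
# Hypothetical sketch: surveying and harmonizing @type values in TEI files.
# The vocabulary and any file names passed in are invented for illustration.
import xml.etree.ElementTree as ET
from collections import Counter

TEI_NS = "{http://www.tei-c.org/ns/1.0}"
ET.register_namespace("", "http://www.tei-c.org/ns/1.0")

# Invented mapping from variant spellings to a preferred value.
NORMALIZE = {"chap": "chapter", "Chapter": "chapter", "sect": "section"}

def survey_types(paths):
    """Count every @type value found on <div> elements in the given files."""
    counts = Counter()
    for path in paths:
        root = ET.parse(path).getroot()
        for div in root.iter(TEI_NS + "div"):
            if "type" in div.attrib:
                counts[div.attrib["type"]] += 1
    return counts

def normalize_types(path, out_path):
    """Rewrite variant @type values to their preferred form and save the file."""
    tree = ET.parse(path)
    for div in tree.getroot().iter(TEI_NS + "div"):
        value = div.get("type")
        if value in NORMALIZE:
            div.set("type", NORMALIZE[value])
    tree.write(out_path, encoding="utf-8", xml_declaration=True)
```

Nothing more than a survey-then-rewrite routine, but it captures the kind of accuracy and consistency a library can push for before an edition’s data is handed over.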

Scholars and programmers
The ways in which we regard and process the digital object were also central to the closing keynote of Professor Arianna Ciula. She argued that digital technologies have become so ordinary that we forget their principles, how they work, and how they change us. It is not a coincidence, therefore, that modelling is the core research methodology of the digital humanities: making and manipulating external representations of phenomena helps to make sense of them.

“First understand, then model” was also the motto of Michael Pidd. In his presentation on computer-aided digital editions he emphasized that digital editions are at their best when they use computational techniques that go beyond the skills of the editors. This implies that scholars need to transfer the principles of their critical thinking into the learning algorithms of the computer. However, if scholarly editors ask computers to take over these critical thinking tasks, they also need to be able to assess the results produced by algorithms.

This last point was also made during the workshop on XML, entitled “Editing Beyond XML”. Although the three invited speakers have more than average knowledge of computational methods, they do not think humanists need to become full-on programmers themselves. They do, however, agree on the importance of “a basic understanding” of the workings of software—even if this is only on a conceptual level. Professor Manfred Thaller, one of the founders of the DH program in Cologne, believes that only when textual scholars grasp what is required to solve textual problems (such as variation) can software technologists come up with a more permanent solution to those problems. In the same vein, Professor Fabio Vitali stated that the complex problems of textual scholars are highly interesting for computer scientists: they form a nice challenge or puzzle to be solved.

Whether the editing community should actually move beyond XML, as the title of the workshop suggests, remained undecided. Indeed, as workshop organizer Fabio Ciotti acknowledged, TEI/XML is now part of a scholar’s “ecosystem”. Thaller did not think the community needs to move away from XML: as long as the tools that are developed stem from the same underlying assumptions, they will deal with problems in similar ways and will therefore be interoperable. Vitali also suggested keeping XML: it is the most advanced language that offers manifold ways to express critical knowledge of a text. Yes, it has some evident downsides (such as overlapping hierarchies), but there are ways to work around them. Why not just embrace its advantages and accept its limitations, he wondered. A good example is our urge to solve the “overlapping hierarchies” problem: mention this issue outside the textual scholarship community and you are met with blank stares.
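For readers outside the field, a brief illustration of why overlapping hierarchies are a problem at all: well-formed XML demands that elements nest strictly, so two structures that cross each other—for example a verse line running across a page break—cannot both be encoded as ordinary elements. The sample markup below is invented for demonstration and is not taken from any of the talks.

```python
# Minimal illustration of the overlapping-hierarchies problem in XML.
# The sample markup is invented for demonstration purposes only.
import xml.etree.ElementTree as ET

# A verse line that starts on one page and ends on the next: the <line>
# and <page> elements overlap, so the document is not well-formed.
overlapping = ("<text><page n='1'><line>Shall I compare thee</page>"
               "<page n='2'> to a summer's day?</line></page></text>")

try:
    ET.fromstring(overlapping)
except ET.ParseError as err:
    print("Not well-formed XML:", err)  # e.g. "mismatched tag"

# The usual workaround flattens one hierarchy into empty milestone
# elements (here <pb/>), so that only a single tree remains.
milestones = "<text><line>Shall I compare thee<pb n='2'/> to a summer's day?</line></text>"
print(ET.fromstring(milestones).tag)  # parses fine: 'text'
```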

Desmond Schmidt, too, stated that we ask too much of XML as a technology. With regard to interoperability, Schmidt pointed out that TEI/XML is not interoperable due to variance in markup: since texts are encoded for one type of output, they cannot be reused for another. As possible solutions, he proposed the technologies of the Multiversion Document (MVD) and stand-off properties. To illustrate his point, Schmidt showed how these technologies can deal with the complex problem of (variation within) Charles Harpur’s texts, giving a live demonstration of the recently launched Charles Harpur Archive. The plenary discussion at the end of the workshop confirmed the points of Vitali and Thaller: the complexities of text can be interesting for both textual scholars and computer scientists. Again, communication and collaboration are paramount.
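Stand-off properties may be unfamiliar to some readers, so here is a minimal sketch of the general idea—not of Schmidt’s actual MVD implementation: the text is stored as a plain string, and every property is recorded separately as a character range, so structures that would overlap in XML can simply coexist.

```python
# Minimal sketch of stand-off properties: the text stays plain, and markup
# lives outside it as (start, end, name) ranges over character offsets.
# This illustrates the general idea only, not Schmidt's MVD software.
from dataclasses import dataclass

@dataclass
class Property:
    start: int  # offset of the first character covered
    end: int    # offset one past the last character covered
    name: str   # e.g. "line", "page-1"

text = "Shall I compare thee to a summer's day?"

properties = [
    Property(0, len(text), "line"),    # the whole verse line
    Property(0, 20, "page-1"),         # the part printed on page 1 ...
    Property(20, len(text), "page-2"), # ... and the rest on page 2
]

# The "line" range crosses the page boundary; as plain ranges this causes
# no conflict, whereas nested XML elements here would not be well-formed.
for p in properties:
    print(f"{p.name:7} -> {text[p.start:p.end]!r}")
```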

Closing words
It is mildly ironic, perhaps, that writing a report on such a diverse conference brings up the same issues that present-day scholarly editors face. There is a wealth of equally fascinating material, and one can only do so much. The engaging poster slam, the museum lecture, and the abundant catering are but a few subjects that did not make the final cut. But, as we have learned, users and readers should not be overwhelmed with information—at least not without having some tools to find their way. Preferably interoperable tools, that is. Luckily, the conference program and abstracts can be found online, and many slides will be provided. And we can look forward to the third and final DiXiT convention, this fall in Antwerp.



