DiXiT ESR 7: But I Specifically Requested a Late Checkout!
Three and a half years ago, I got an e-mail forward from Professor Lewis Ulman with the subject line “Have you seen this?” and a link to a rather unusual Call for Applications. I read through the Fellowship offerings of this new network and was impressed, if also confused. What was this? Was this a thing? Did people put together broad, international research consortia in Digital Humanities topics? And, perhaps more importantly, was there actually funding for it? Especially THIS level of funding? Was this even real? I shrugged, dusted off my CV, and tossed an application (or six) into consideration. Probably nothing would come of it, but what the heck; nothing ventured, nothing gained.
A month later, I was invited to interview. Three months after that, I was on an airplane, Cologne-bound. Something came of it after all. Quite a bit of something, in fact. And three and a half years later, here I am, trying to make sense of it all, at least well enough to cram the details of it into a concise report.
My Research/Dissertation
Like many of my fellow Fellows, I found that my research did not proceed precisely along the lines envisioned in the original job posting. In my particular case, my early research into questions of “Mass Digitization Data for Scholarly Research and Digital Editions” – particularly the focus on the measurement and improvement of “quality digital images and text” – spawned a host of ancillary questions of its own. What was the scale of “mass” digitization, and how did it differ from other forms of digitization? What was the “data” in this scenario, and how did it interrelate with other considerations along similar axes (e.g. “information,” “analysis,” and even “meta-data”)? And, perhaps most importantly, what was “quality”? And, if we were starting from the assumption that it needed improving, why was it currently bad (or, at least, in need of improvement)?
From this, I backtracked, paying particular attention to issues of “quality.” Desmond Schmidt’s “Towards an Interoperable Digital Scholarly Edition” was then just being discussed in – and out of – the professional literature, and it occurred to me that my own “quality,” especially insofar as it was assumed missing or in need of improvement, dovetailed neatly with Schmidt’s complaint about interoperability: namely, that there was nothing wrong – no lack of quality – either with digital proxies as created by “mass” digitizers (however those were defined) or with scholarly editions as Schmidt criticized them. Rather, both were – if anything – too overfull of “qualities” that later users found not only useless, but often an active hindrance. How, then, could we encode textual data/metadata/information in such a way that multiple user expectations and priorities could be represented, while also leaving the facility to strip out those “qualities” (perhaps better understood as interpretations based on alternate, or even competing, editorial perspectives) which are of little or negative use to a given application? Schmidt’s own proposal – an admittedly elegant approach to stand-off markup – seemed to me to still suffer from two tremendous weaknesses. First, it loses the ability to recognize the interrelation and dependence of content objects. Second, while it allows for a fluid base-text (and content markup emendation based on that fluidity), it still allows for only a SINGLE base-text, however fluid, leaving open the question of what happens to supposedly interoperable use cases that diverge in their interpretation of a particular character or transcription. My dissertation, then, is a proposal of just such a system of encoding: one that seeks not simply to move our existing understanding of text to a new technology or method of markup, but to reconsider what we think text is, in light of our current difficulties in encoding it.
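To make that question a little more concrete, here is a minimal sketch, in Python rather than in any particular markup vocabulary, of what strippable “qualities” might look like as stand-off annotations over a base text. Everything here – the field names, the layers, the editors – is a hypothetical illustration, not Schmidt’s scheme and not my own proposal:

```python
# A minimal sketch of stand-off "qualities" over a single base text.
# All names here are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Annotation:
    start: int   # offset into the base text
    end: int     # offset one past the last annotated character
    layer: str   # the kind of "quality", e.g. "linguistic", "typographic"
    editor: str  # whose interpretation this quality represents
    value: str   # the claimed quality itself, e.g. "noun" or "italic"

base_text = "nothing ventured, nothing gained"

annotations = [
    Annotation(0, 7, "linguistic", "editor-A", "noun"),
    Annotation(0, 7, "typographic", "editor-B", "italic"),
    Annotation(18, 25, "linguistic", "editor-A", "noun"),
]

def strip_layers(annotations, unwanted):
    """Keep only the 'qualities' a given application actually wants."""
    return [a for a in annotations if a.layer not in unwanted]

# A purely linguistic application simply discards the typographic layer:
for a in strip_layers(annotations, {"typographic"}):
    print(base_text[a.start:a.end], "->", a.layer, a.value)
```

Note that even this toy inherits exactly the second weakness described above: the character offsets presume a single, agreed-upon base text, and the moment two use cases diverge in their transcription of that text, every annotation anchored to it is thrown into question.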
While this would normally be the place to brag about how great that dissertation is, highlight its conclusions, and instruct you on how to order your very own copy from Amazon… it is, unfortunately, not yet complete. Some early, unexpected administrative difficulties and some late, equally unexpected, personal difficulties set my writing schedule back, and I have currently finished only two of the five proposed case studies meant to test its premises. It is still on track (for a given definition of that phrase), and I look forward to sharing it with my faculty at the Ohio State University in the Autumn for further feedback and improvement. But even without a complete manifesto on the near horizon, I believe I can still make some tentative observations from the work so far.
- First, the “plain text” transcription which is so foundational to so much of our work is actually deeply, DEEPLY weird. I originally decentered the plain-text transcription in my data model because it is more analysis-dependent (and, thus, more prone to dissent in interpretation), but in every instance, attempting to reconstitute what I am currently calling “the descriptive transcription” (a description of the individual units of existing inscription in a document artifact) into “the transcription per se” (a re-presentation of the language structures so encoded) has proved that the relationship between a description of the inscription on the page and its re-presentation and transmediation is both extremely complex and extremely difficult to explicate.
- Second, the use of a prescriptive standard (in this case, Unicode) to carry the burden of describing inscriptions is inherently problematic. In fact, its only saving grace is that it follows hard in the footsteps of established practice, particularly the TEI practice of using descriptive type variations (e.g. “italic,” “bold,” et cetera) to describe variant typography. However, the vagaries of glyph-level inscription are far greater than those of typeface description, especially in manuscript sources. While this difficulty was forecast, the current manuscript test cases are demonstrating exactly how acute it becomes in practice.
- Finally, the greatest weakness of this model – if, indeed, it is a weakness rather than a feature – is its sheer bloat. A conventional encoding elides the multitude of analytical and interpretative choices that lead from the document sources to the completed editorial text, and that elision produces, as a by-product, a concision that is difficult to appreciate until one has tried to account for those choices explicitly. At present, I estimate that my explicit encodings run between 100x and 300x the character count of a comparable implicit encoding (see the sketch after this list). While the disk-space footprint can easily be offset with a byte-reduced schema (or by database compression), this bloat threatens to make an already sprawling and complex encoding completely unreadable to human readers.
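For the curious, here is a toy sketch of the distinction – and the bloat – described above. The record structure is a hypothetical illustration, not my dissertation’s actual model: each record describes one unit of inscription, the “transcription per se” is then derived from those descriptions, and the character counts of the two encodings are compared:

```python
# Toy illustration of descriptive transcription vs. "transcription per se".
# The record fields are invented for this sketch, not the real model.
import json

# Descriptive transcription: one record per unit of inscription, carrying
# the analytical claims that a plain-text transcription silently elides.
descriptive = [
    {"unit": i, "codepoint": f"U+{ord(c):04X}", "hand": "scribe-1",
     "legibility": "clear", "reading": c}
    for i, c in enumerate("nothing")
]

def transcription_per_se(records):
    """Derive the re-presented language structure from the descriptions.
    Real sources make this far messier: abbreviations, deletions, and
    contested readings do not map one-to-one onto plain characters."""
    return "".join(r["reading"] for r in sorted(records, key=lambda r: r["unit"]))

plain = transcription_per_se(descriptive)
print(plain)                                       # -> nothing
print(len(json.dumps(descriptive)) // len(plain))  # a ratio near the estimate above
```

Even in this trivially clean case the ratio lands near the low end of the estimate above; real manuscript sources, with their abbreviations, deletions, and contested readings, push both the record size and the difficulty of the derivation far higher.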
Life and Work Outside the Dissertation
There was much more to DiXiT than simply a research topic and the desk space to pursue it, though. The generous funding of the Marie Skłodowska-Curie Actions also made possible a wealth of opportunities for professional development and training that would have been not only unreachable, but unthinkable, without it. In this position, I was able to attend the ‘Medieval and Modern Manuscript Studies in the Digital Age’ workshop (Cambridge/London, 2014), the NeDiMAH ‘Beyond the Digital Humanities’ network event (London, 2015), and the Oxford Digital Humanities Summer School (Oxford, 2015), as well as several meetings of learned societies such as the European Society for Textual Scholarship (Helsinki, 2014 and Antwerp, 2016), the Text Encoding Initiative (Lyon, 2015 and Vienna, 2016), and the Alliance of Digital Humanities Organizations (Lausanne, 2014). This is, of course, in addition to DiXiT’s own core program of six conferences and workshops (London and Graz, 2014; Borås and Den Haag, 2015; and Cologne and Antwerp, 2016). These opportunities not only allowed me to learn new skills and to teach and present on those I already had, but also gave me the chance to meet and interact with practitioners in the field who would otherwise have been no more than names in my bibliography.
Nor were conferences and workshops my only opportunities for such professional development and networking. I owe a deep debt to all of the supervisors, partners, and Experienced Researchers in the DiXiT network for making themselves so readily available throughout the span of the program. Their support, insight, and – in particular – their difficult questions and skepticism provided mentoring and direction in an already difficult and fraught process. Most notably, during my research secondment to Oxford, James Cummings and Magdalena Turska generously provided FAR more of their time and insight, during the busiest segment of their professional year, than I had any right to hope for or expect.
But perhaps the most valuable aspect of the DiXiT network, both to my professional growth and to my research, was the presence of the full cohort of my fellow Early Stage Researchers. With our research topics so closely coinciding – where they didn’t absolutely overlap – our meetings always involved ideas for collaboration (some of which even came to fruition, as when Elena Spadini, Magda Turska, and I developed an early stand-off markup scheme for presentation at the 2015 TEI conference), enthusiastic debate, and mutual brainstorming. But perhaps even more than this professional cooperation, it was good to have a troupe of colleagues who knew, precisely, what the life of a DiXiT Fellow fully entailed. Our experience was new – probably unique – in our field, and as such was frequently chaotic, frustrating, and dispiriting; even our supervisors, who were as new to supervising in this kind of arrangement as we were to working within it, were often unsure of how we should proceed. In my own case, I’m sure it was the camaraderie of the Fellow cohort that helped me see the full Fellowship through.
Moving Forward
As previously mentioned, several unexpected difficulties have pushed back the delivery of my dissertation; so, technically, DiXiT isn’t quite OVER for me, even though I’m no longer funded and have returned to the United States. In the short term, I’m teaching for a standardized test preparation company and working in IT support to pay the bills, while writing like blazes in every moment I can snatch back from my employers. Come Autumn, I will be back teaching and writing at OSU, extending the work done for DiXiT into a second dissertation in that program. Casting further into the future, I still aim for a tenure-track professorial position at an R1 university in the US, even fully aware of the rather long odds of such a plan, especially in today’s American higher-education landscape. However, I fully expect that the experience, training, and professional network that DiXiT has provided will give me a significant advantage in that regard. And here’s hoping that two PhDs won’t hurt my chances, either.
Worst case scenario: Word around the network is that goat herding in Sardinia is always an attractive fallback plan.