In the previous post of this series, we surfaced some significant issues relating to the reconstruction of 1QSa II 11–12. This example is instructive because it raises questions of editorial methodology, textual presentation in a critical edition, interpretation, and commentary. The fragmentary and damaged state of the manuscript complicates any editorial method; syntactical, political, theological, and historical factors shape how we read these lines; and the medium we choose brings its own strengths and weaknesses when we present our research to readers and users. These issues are compounded by the interplay between editorial method and textual presentation, on the one hand, and the need to articulate a rationale for one's editorial decisions, on the other. In the era of digital media, we are fortunately positioned to bring some clarity to the reading of 1QSa II 11–12 in particular, and to the broader problem of materially reconstructing fragmentary manuscripts in general. But first we need to step back for a moment.
In this post, I want to resurface some issues of methodology I wrestled with as they pertain to reading, interpreting, and understanding ancient texts. For Qumran studies, most of the issues we face relate to the fragmentary state of the discovered manuscripts. When I began work on my doctoral thesis, I did not plan to make new editions of The Community Rule manuscripts from Qumran. My focus has always been on the legal issues of the text, but the more I studied the texts and the published editions, the more I realised how many assumptions about the text were predicated on assumptions about the materiality of the manuscripts. For this reason, I designed a methodology, not merely a dataset, for studying these manuscripts in a 2D and 3D digital environment. In some ways, I found that I was returning to ideas about editions I had articulated at two conferences in 2014. While my ideas in those conference papers have influenced the current trajectory of projects like Scripta Qumranica Electronica, I would also have to say the ideas of those papers are significantly out of date. Digital humanities is fast paced, at least faster than the humanities. At any rate, with the release of OpenAI's GPT-3, I would dare say any idea of a "digital edition" that does not incorporate computer vision and deep neural networks as real-time services for its users is tantamount to Origen's Hexapla rendered as a SQL database: outdated, clumsy, and a waste of money. In this post, I want to transition away from the analogue methods of DJD and build a new vocabulary for philological study, one that draws on the advancements in the media and technology of our era. To do this, I would like to quickly survey the ideas I had about digital editions in 2014, then provide a critique of my 2014 self.
In 2014, I argued that an exciting feature of the digital medium is the fundamental place that images can have in an edition: not as static media, but as material with which algorithms can interact and compute. In print editions, images of manuscripts rarely play a central role, despite their significance. To be sure, there have been some editions in which images did serve a significant role; DJD 32 is an example. Yet most, nearly all, of the DJD editions relegate the images of the manuscripts to the back of the volume. This may seem like an insignificant critique, but as we have seen, the transcription conventions of dots and circelli can be rather deceptive. Moreover, the DJD volumes print only one image, yet there are often as many as 15–20 different images of any given fragment. The digital medium presents many opportunities to create solutions, and a solution is part of one's intellectual property. But how should our solution model the data?
In 2014, I wrote that “image(s) of the fragment can be tagged with Regions of Interest (ROI) and be associated with an array of meta-data. Interoperability is possible vis-à-vis XML/TEI or JSON encoding standards; hence, it would be possible to associate the position of the word with the additional meta-data and interpretative decisions.”1 In that talk, I provided a relatively simple example from 1QIsaiahᵃ line 1. At the time, the example was not meant to showcase the full complex of issues, but rather to lay out a vision for digital editions. To be sure, the ideas I laid out in this paper were very ambitious, and they clearly envisioned an editorial pipeline that would result in editions far surpassing the analogue editions of DJD. To be clear, the DJD editions vary in quality, but my ideals, then and now, are more about resolving two fundamental issues: the fragmentary status of the 1947 Judaean Desert discoveries and understanding the scribal practices and legal interpretation attested in these ancient scrolls.
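The encoding idea quoted above can be illustrated with a few lines of Python. This is a minimal sketch only: the filename, field names, coordinates, and confidence labels are hypothetical illustrations of the concept, not a published schema from the 2014 paper or any existing project.

```python
import json

# Hypothetical sketch: tag a manuscript image with a Region of Interest
# (ROI) and attach editorial metadata to it. All names and values below
# are illustrative, not an established standard.
roi_record = {
    "image": "1QIsa-a_col01.jpg",                   # hypothetical image file
    "roi": {"x": 412, "y": 88, "w": 96, "h": 40},   # pixel bounding box
    "word": {
        "transcription": "חזון",
        "certainty": "probable",                    # editor's confidence label
        "note": "surface partially abraded",        # rationale for the reading
    },
}

# Serialise for interchange; the same structure maps readily onto
# TEI facsimile/zone elements or a database row.
encoded = json.dumps(roi_record, ensure_ascii=False)
decoded = json.loads(encoded)
print(decoded["roi"]["x"])  # → 412
```

The point of such a record is that the editor's interpretative decision (the transcription and its rationale) travels with the image coordinates, rather than being separated from them as in a print edition.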
At the conclusion of my presentation, I received good feedback from Alison Schofield and Daniel Falk. I was, however, also a bit disappointed, because I had wanted our conversation to focus on how we could move forward in addressing the fragmentary issues, so that we could eventually get to the real delight of reading texts. In my notes, I wrote: “I was a bit frustrated, however, that soon the conversation began to focus on the top-down, and which of course is a massive project. I had rather hoped that conversation would have helped in learning priorities from a bottom-up approach, that is, what is the first step towards creating a very helpful tool from which we can research. It seems that whatever a digital edition might be, the general idea is the ability for manipulation and malleability.”2 I do not say this to take away from Alison and Daniel’s feedback. Their feedback was helpful and insightful, and it prodded me to think about the issues from a top-down approach.
My frustration was likely related to a remark that George Brooke had made some months prior. In Copenhagen, George was the first to present at a conference entitled Material Philology in the Dead Sea Scrolls: New Approaches for New Text Editions.3 George began his presentation with a remark, and I paraphrase, that pontificating about the principles of digital editions should be achieved through the process of making digital editions. Since George’s paper was focused on “The Principles of Principal Editions of the Dead Sea Scrolls,” I understood his comment as a way to avoid repeating the delays that beset the publication of the print editions. That is, I understood him to say, “Let’s get to work and see what we learn!” George’s comment, furthermore, underscores the need for philological study of ancient manuscripts to work in concert with digital humanities methodologies.
To address the many assumptions and presumptions about The Community Rule manuscripts, I found myself returning to my ideas of 2014. I wanted to work directly with images of the manuscripts, and I wanted a better understanding of the textual issues, scribal practices, scribal performances, and manuscript reconstruction. I was more than ready to get to work. But I quickly realised that I needed a workspace; I needed a methodology, one of the core features of humanities research. So I found myself appreciating both the top-down issues raised by Daniel and Alison and George’s call to work. In other words, I would argue that George’s ideas implicitly critique any digital humanities project that divides editorial issues from technological developments and media.
To return to myself today: I now see editorial methods as intimately bound up with hermeneutical issues, in much the same way that data is packaged with code.4 Admittedly, the relationship between editing and interpretation needs to be spelled out in greater detail, but philology and the study of ancient scribes are better done in practice than in theory, which is why I began this series with 1QSa II 11–12. So we need to balance the bottom-up approach with a top-down approach as we create an editorial pipeline, a balance that approximates a methodological tension familiar from computer science, namely between generalisation (abstraction) and specifics (instances). This is why I began with an example, with a problem. This is also why digital humanists should receive credit for their intellectual contributions; any intellectual contribution, digital or analogue, should not be presented apart from the philologist, data scientist, or digital humanist who provided a solution to a research problem! To do otherwise is simply a breach of ethical standards and protocols.5
In the next couple of posts, we need to step back and talk a little about the virtual research environment JupyterLab, the programming language Python, the relational database management system MariaDB, and the document database MongoDB.
- James M. Tucker, “Digital Editions of the Scrolls and Fragments of the Judaean Desert: Preliminary Thoughts”, Presented at West Coast Qumran Study Group: Difficult Texts and Digital Tools, May 30 – June 1, 2014, University of Oregon. 🔗
- I keep a journal of every conference I attend, so that (1) I can properly cite people in the future, and (2) I can keep tabs on whom I met, which papers interested me, where we ate, etc. Here are my notes for the workshop.
- The online link to the abstracts now returns an error. I provide here a link to the copy I saved.
- I would argue that a philologist approximates the data scientist in terms of hermeneutical issues. I will spell this out in greater detail in another post.
- I would also argue that those who hire software companies need to be forthright with their audiences and attribute the work done by the software company. Anyone presenting work done by a second party should cite and give credit to the company that engineered the solution. To present contracted work as one’s own is deceptive and contrary to the ethics of digital humanists, who spend time and money becoming educated both in their field of study, e.g., medieval Latin manuscripts, and in the relevant realm of computer science, e.g., computer vision.