A human being is capable of taking in very few things at one time; we see only what is happening in front of us, here and now. Visualizing a simultaneous multiplicity of processes, however they may be interconnected, however they may complement one another, is beyond us. We experience this even with relatively simple phenomena… We observe a fragment of the process, the trembling of a single string in a symphonic orchestra of supergiants, and on top of that we know—we only know, without comprehending—that at the same time, above us and beneath us, in the plunging deep, beyond the limits of sight and imagination there are multiple, million fold simultaneous transformations connected to one another like the notes of musical counterpoint.
Stanislaw Lem, Solaris
Harold Fisk’s maps of the history of the Mississippi River (examples of which I’ve reproduced above and below), made in 1944, long before all of the technical interventions of GIS and the like, have become justly well-known both as works of art in themselves and as superb examples of ‘data visualization,’ that is, an image that takes a set of data and renders it into a form that is not just legible to humans in the narrow sense but expands the possibilities of perception and interpretation by being rendered in a new visual format. These maps convey geological processes whose effects we can perceive up close or through conventional maps—perceive, that is, only through their cumulative effects, the evidence and differentiation of one stage from another remaining largely or totally occluded not just to the unaided eye but often to the eye aided by other charts and maps and explanatory devices or bodies of data. What is so powerful about Fisk’s maps is precisely the degree to which they make immediately visible the ‘succession of processes’ of the Mississippi River’s changing course, the history of a geographical feature that at first glance, or even under sustained human observation over a given span of time, seems stable and continuous, subject to the rise and fall of water levels but otherwise fixed as a line on a map. Despite living in the flow of time and witnessing change and process and transformation, we struggle to actually make sense of it, to ‘see’ it in a meaningful sense, to see beings as process, as movement, as opposed to perceiving them as fixed objects of perception and reason.
At first glance, a work of data visualization like Fisk’s might seem to invalidate the rather pessimistic argument in this essay’s epigraph, drawn from Lem’s philosophical rumination on the nature of memory, knowledge, the human experience, and a fictional giant, apparently conscious extraterrestrial ocean. Perhaps such work does push back against Lem’s evident pessimism—aided by a good graphical intervention we can to some degree simultaneously ‘see’ the succession of courses the Mississippi River has taken over the course of the last few thousand years of history (going back to perhaps the Pleistocene). That said, what we see in such a map is of course only very partial: Fisk made decisions in interpreting his data (which included historical maps, visible traces in the landscape, and thousands and thousands of coring samples), with the very form of a map requiring a determination of which moments and lines to ‘fix,’ which elements of a dynamic, indeed quite literally fluid process to mark upon the page, how to smooth gaps in the data, and so forth. Much less could Fisk, or anyone else, capture in one visual schema the vast number of interactions taking place in determining a river’s course: the interplay of climate, weather, ecology, plate tectonics, underlying geology, and sheer contingency—seemingly random logjams, the flow of silt and sand and other particles, the gives and takes of previous iterations of the river, history’s traces compounding upon themselves.
To complicate matters further, we should keep in mind that Fisk’s meander bends represent an almost infinitesimally small portion of this river’s total history, which by some reckonings dates back eighty million years to the late Cretaceous, meaning Fisk’s maps indicate around 0.01% of the river’s history. Lem’s position, then, seemingly stands, even in relation to the comparatively simple process of reconstructing and mapping river history: not only are there decided limits to what we can take in of just the Mississippi River’s history in the Holocene; imagining, much less somehow perceiving, tens of millions of years of riverine process in all of its complexity and interconnectivity with other processes and events in earth’s history and ecology simply staggers the mind. Even the question of ‘what is the Mississippi River’ admits no easy answers—it isn’t just that you can’t step in the same river twice because of the flow of the water, it’s that just about the only stable thing about the Mississippi River as we know it is that it flows south into the (geologically older) Mississippi Embayment, the lower part of its course having since its emergence stayed in roughly the same area, even as almost everything else about its precise course, its catchment basins, its tributaries, and its flow has varied immensely. While we can in a manner of speaking envision this past, it’s unlikely, even given a perfect data set, that we could visually capture it in an elegant and immediately perceptible way after the manner of Fisk’s Holocene Mississippi River maps.
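For a sense of scale, the rough arithmetic behind that fraction can be sketched as a back-of-envelope check (both spans are loose assumptions drawn from the text, roughly the length of the Holocene against a late Cretaceous origin, not precise figures):

```python
# Back-of-envelope check: what fraction of the river's total history
# do Fisk's Holocene-era maps cover? Both spans are rough assumptions.
holocene_years = 12_000           # approximate length of the Holocene
total_history_years = 80_000_000  # late Cretaceous origin, by some reckonings

fraction = holocene_years / total_history_years
print(f"{fraction:.3%}")  # 0.015%, i.e. on the order of 0.01%
```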
Our task becomes infinitely vast if we take into account all of the possible interactions and entities involved—intimately, causatively involved—in the long history of just this one river, of the vast number of silt particles, bits of river cobble and gravel, ice floes, biotic communities, oceanic retreats and resurgences, and all the rest that have gone into making the Mississippi River. We can ‘listen,’ perhaps, to several strings in this orchestra at once, but we cannot really listen to all of them at once; the scale is simply too immense, and our range too limited in capacity. We are not reduced to ignorance, to be sure, but we soon run into limits of perception: things can be taken in piece by piece, yet to truly and simultaneously put them all together into an epistemic whole exceeds us.
This might not seem like a very sanguine introduction to a related problem in the study of human culture, but bear with me. The issue that I have in mind is somewhat akin, in fact, to the process Harold Fisk set out to map: the question of textual variability in manuscript culture, with the mutations and transformations and additions and subtractions visible in the long span of a text’s life history rather akin to the shifting and changing meander bends of the Father of Waters. The matter of textual variability is not just not a new question; it is arguably one of the oldest matters in the humanities, given that the very existence of a discrete text to study is dependent upon the choices one makes in fixing that text from the pre-print world. Rarely do works of any importance exist in one single form; for instance, the Qur’ān is for a range of reasons arguably one of the most stable works in human history, yet it too has a history, eventually solidified into canonical variations. Most texts, great and small, saw far greater variation over time (see my recent discussion of the Shāhnāmah tradition for more on these dynamics).
Historically, scholars have dealt with the matter of textual variability with one primary goal in mind: locating or reconstructing the original text, combing through and making sense of manuscript exemplars, tracing them into family trees and clusters, and so forth, working towards a critical edition of what they think to be the closest to the author’s original. Much of modern philological practice has its roots in New Testament textual criticism, which itself proceeded out of the very long history of Christian exegesis—a topic for another time, though one that intersects very much with our concerns here. And historically textual scholarship of this sort was slow work, involving laborious ‘test cores’ of manuscripts scattered literally around the world, which meant either obtaining a photocopy or microfilm somehow or going physically to examine the manuscript—we still do that, in fact, and I imagine will continue to do so in the future. That said, digitization has genuinely transformed the study of manuscripts and of premodern textual culture; it has made the process of obtaining and reading and otherwise using manuscripts inestimably easier and cheaper, giving audiences who would previously never have had an opportunity to explore these works access to them from the proverbial comfort of home, lounging in pajamas. Digitization also opens up potent possibilities for the understanding of textual culture and evolution, but before I get to those we should consider some of the problems that digitization can also generate.
There are, as I see it, two core problems with the digital reception of manuscript texts, one common to all digitally mediated texts, the other more particular to manuscripts. The first, basically universal problem is one that will be familiar to, I imagine, every single one of my readers: the difficulty of controlling digital texts, of maintaining attention, focus, and depth of reading, of simply staying on task. I have at the moment—I just counted them!—twenty books and articles open in PDF format on my computer, all of which I intend to read or am in the (obviously interrupted) process of reading. Now, a similar situation certainly prevails in my ‘to read’ row of books, but there is definitely a difference in how I approach the two sets of material: while I do switch between books, more often than not I will read at least a full chapter if not more in one sitting, since to interrupt my reading and pick up another tome would require several physical steps of bodily and material displacement. It isn’t just that it requires more physical work—it does, technically, though not that much more—but rather that there is a greater degree of intentionality involved. Browsing the internet or switching between tabs or open documents operates on a different sort of bodily reflex; there is a psychological smoothness to the operation. With digitized manuscripts we face a similar problem, though it is perhaps not quite as pronounced, or rather manifests somewhat differently: it is very easy—I do it myself more than I care to admit—to cycle among many discrete manuscripts, or to spend an inordinate amount of time searching through online catalogs and repositories for more material. The amount is not infinite, but it’s still a lot, plenty to browse and scour.
The other core problem is a bit more opaque, and is I think well described by a term that the American philosopher Albert Borgmann has used in relation to modern technology generally, ‘disburdenment,’ in which the textures, contingencies, allowances, and limits of the physical, non-digital world are effaced within the digital (or more generally the technological), a process usually seen as a marker of technological progress but which is, as Borgmann argues, much more complex and ambiguous on many fronts. In the case of manuscript culture specifically, there is always a possibility, and in some cases a necessity, of such disburdenment in a negative sense: for instance, the loss of the physical presence and tangibility of the text, something I’ve discussed at some length previously. Relatedly, digital manuscripts can easily become dehistoricized and ‘flattened’ out because of the sheer ease (well, relative to the pre-digital alternative!) with which we can access them and add them to our swollen digital libraries. The historical particularities of the text and of the object are effaced. That said, this maneuver is significantly not unique to digitized manuscripts; it is perhaps simply made easier, and harder to resist. In fact, the classic book-bound critical edition can be seen as a type of pre-digital disburdenment, useful to be sure in a great many ways, but also effacing and limiting: not only was the text ‘translated’ into the smooth regularity of typographic print, the complications and ambiguities of a sprawling manuscript (and perhaps also oral and aural) tradition were neatly and cleanly packaged for the reader into footnotes of variant readings and the like.
We then tend, consciously or not, to identify that critical edition, or the presumed ur-text it hopefully captures or at least strongly reflects, as the ‘real’ text, with everything that has come temporally after, the text extended in physical copies spread over space and time and diverse human communities of production and use, as either a ‘corruption’ or, more generously, a record of ‘reception.’
If pressed we would probably say that the current course of the Mississippi River is the ‘real’ one (though more likely the question would seem rather nonsensical!); in the case of texts and of human culture generally, by contrast, we tend to display a bias for originality, in the sense of the most chronologically prior. Of course, rivers do not have authors in the way texts do, and if we did want to speak of them as ‘authored’ we would speak of God and/or nature doing so, but with the recognition that they are ‘written’ as a process. While we can speak of the ‘birth’ of mountains, seas, and even continents, we know that such language does not map precisely onto the birth of literal organisms, and points to long periods of time in which tectonic forces pushed up mountains or split open seas.
Our ability to speak in such terms and to visualize—literally and metaphorically—such processes is largely thanks to the development of geology over the last three centuries or so, although intimations of geological change have been known to humans since time immemorial—the shifting of rivers, the rise of sea levels, the silting up of ponds, and so forth. Can we do something similar for the history of texts, specifically, texts as realized within and through manuscript culture? Can we ‘see’ the meander bends of a text, and not just that, but can we ‘see’ and deeply read and interpret a given textual product as a genuinely living entity, no single ‘layer’ or moment in the text’s historical iteration being privileged as the ‘real’ text but rather taking the whole as the ‘real’ text, realized in a distributed body of readers and copiers and communities of use? Might we be able to, if you will, re-burden our digitized manuscripts with greater depth and connectivity, to thicken our readings and rough out our perceptions? And might we do this working within and through the allowances of digital technology?
As you may have surmised, my interest here is not strictly a disinterested one: one of the projects that will be occupying a fair amount of my time in the coming year is an initiative to create what we’re calling ‘multi-text digital editions,’ though I suspect we will need to modify our descriptive language somewhat. Our broad goal is to devise digital editions of discrete (whatever that might mean!) individual texts as they appeared in manuscript form over time, using multiple overlapping manuscript exemplars, resulting in an edition that does not privilege one recension or even one ‘family’ of texts within the tradition but makes legible in a more or less simultaneous manner a wide swathe of the textual diversity within that tradition. Ideally—and here I speak more of my own vision, which may or may not be within the range of the technologically possible!—such a digital edition would permit movement between electronic text and apparatus, on the one hand, and the images of the manuscripts themselves, on the other, with their own commentary and other apparatuses visible and even encoded and linked to other instances, manifesting and mapping the complex textual and other ecologies in which these manuscripts and their makers and readers participated. The end result might be something a bit akin to Fisk’s maps of the Mississippi: a visualization, and a reading environment and practice (more on that momentarily), encompassing the ‘meander bends’ of manuscript texts, their being-in-process over time and through many hands and voices and contexts, not just the singularity of the original author of a prototypical ur-text.
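To make the idea a bit more concrete, here is one way the underlying data of such an edition might be modeled: witnesses, readings, and points of variation sitting side by side, with no single witness privileged as the ‘real’ text. This is a minimal sketch in Python; every name, field, and URL here is hypothetical, an illustration rather than our project’s actual design:

```python
from dataclasses import dataclass, field

@dataclass
class Witness:
    siglum: str          # conventional short identifier for the manuscript
    description: str     # date, provenance, shelfmark, etc.
    facsimile_url: str   # link to the digitized images of the manuscript

@dataclass
class Reading:
    witness_siglum: str  # which manuscript attests this reading
    text: str            # the reading as it appears in that witness
    folio: str = ""      # where it sits in the physical object

@dataclass
class Locus:
    """One point of variation; every witness's reading sits side by side."""
    label: str
    readings: list[Reading] = field(default_factory=list)

# Two invented witnesses of the same line, neither marked as 'original':
w1 = Witness("A", "15th-c. copy", "https://example.org/msA")
w2 = Witness("B", "17th-c. copy", "https://example.org/msB")
locus = Locus("line 1", [
    Reading("A", "the river turns east", folio="3r"),
    Reading("B", "the river bends east", folio="12v"),
])
print([r.text for r in locus.readings])
```

The design choice worth noting is that a `Locus` holds a flat list of co-equal readings rather than one ‘main’ text plus variants in an apparatus, which is one way of encoding the refusal to privilege any single recension.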
Now, as it stands, there are other ways of getting at such maps of dynamism—the phylogenetic approaches that scholars have attempted over the years, and which we hope to apply in a—perhaps!—new way to the Shāhnāmah tradition, are one such avenue. But there is a limit to this sort of approach, and it gets at one of the fundamental differences, arguably, between the sciences and the humanities, lying in the nature of textual communication itself: the interpretation of texts cannot be reduced to the quantitative or the distant, even as we ought not discount those possibilities. Texts need to be read; the vast range of interpretation for which we have recourse to texts depends upon an actual reader’s engagement with the words, though the style and manner of the reading will of course differ a great deal depending on many factors. What I want to tentatively propose is the possibility of a ‘deep reading’ of premodern texts in their manuscript environments, not just acknowledging diversity and transformation but actually working to read, to humanly read and interpret and annotate and all the rest, along and within and across the meander bends of the text-in-process.
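For readers unfamiliar with those phylogenetic approaches, their quantitative kernel can be sketched very simply: treat each witness as a sequence of readings at shared points of variation and count disagreements, on the assumption that witnesses which disagree least are likely closest on the family tree. The witnesses and readings below are invented, and real stemmatological work is of course far more involved:

```python
from itertools import combinations

# Each witness is a sequence of variant readings at the same three loci.
witnesses = {
    "A": ["turns", "east", "swift"],
    "B": ["bends", "east", "swift"],
    "C": ["bends", "west", "slow"],
}

def distance(x, y):
    """Count loci at which two witnesses disagree (a Hamming distance)."""
    return sum(a != b for a, b in zip(x, y))

# Pairwise disagreement counts across all witness pairs:
pairs = {
    (p, q): distance(witnesses[p], witnesses[q])
    for p, q in combinations(witnesses, 2)
}
closest = min(pairs, key=pairs.get)
print(pairs)    # ('A','B'): 1, ('A','C'): 3, ('B','C'): 2
print(closest)  # ('A', 'B') — the pair most likely to share a near ancestor
```

The limit the paragraph above names is visible even here: the numbers can suggest which witnesses cluster together, but they say nothing about what any of the readings mean, which only an actual reader can supply.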
So far as I can tell, no one has used the term ‘deep reading’ in this manner—when I have seen it in use, it indicates the opposite of shallow, skimming reading, the sort of reading to which we in the digital age tend to resort. Since I’d like to repurpose ‘deep reading’ for my own different (though not unrelated) sense, I’d prefer we call such deliberate, careful reading ‘slow reading’ instead. The deep reading I propose is of course likely to be slow reading, given that it involves not just looking at textual evolution at scale but also a real reading of the text-as-process or, perhaps better, the text-in-process, interpreting and contextualizing a text in its long historical evolution, as opposed to fixing it at one given point in its history (usually the point of origin). There are certainly limits which interpretation will run up against: often the ‘why’ of textual evolution will remain totally opaque, the result of scribal error and hence textual mutation, or of choices and modifications or paratextual inclusions whose historical dynamics we can only guess at from our vantage point.
There is a danger in getting overwhelmed, of becoming too burdened by the historical contingencies of textual reality mediated through digital tools, particularly if we include (and at some level I think we ought to!) everything else that actually often makes up these premodern texts: the interlinears, the commentaries, the stacks of texts winding around and within the ‘main’ one, the readers’ notes, the erasures, all the stuff of book history and codicology. None of these elements in fact existed as truly separable pieces but functioned as an integrated if noisy and perhaps clashing whole across the page and through the book, a form of textuality that we still struggle to comprehend, even as, ironically, digital media permit a closer approximation of these past forms than does typographical print (in its modern iterations, at least—other worlds were possible, and did exist for a while, it’s worth remembering). Negotiating all these things is hard, and there’s a reason we often skip over them, sometimes simply because the handwriting is far less legible, sometimes because it’s just too much, sometimes because we lack the theoretical frames or a cogent reading practice. The past is another country, they read differently there—which has many implications for our acts of interpretation, starting with the necessity of identification and recognition, hard enough in themselves.
Thus a program of digitally-aided ‘deep reading’ would entail several things at once: the ‘deep’ signals both a diachronic stance, reading a given text as it appeared within a large (relative to one human reader, anyway—the time scale could be a couple hundred years, or a thousand) swathe of time, as well as a reading that operates in the contextual depth of the text, the many iterations and moments of constitution through paratext, reader response, transformation, and so forth. ‘Reading,’ because while such a project would make distant examination and mapping possible, the resulting text-in-process would still be read, the meanings and nuances of the words and phrases and all the rest examined, the reader moving along the flow of the text’s historical trajectory and making her arguments accordingly. What precisely would this look like? How does one speak of the meaning of a text in, not a ‘deconstructed’ environment really, but if you will a ‘sedimented,’ almost ‘hyper-constructed’ environment, in which meaning becomes both more and less concrete than in ‘traditional’ readings guided by ur-texts and critical editions?
There are plenty of other open questions, and a lot depends on what shape our digital reading environment and methods of collation and connection end up taking; while theoretical concerns will guide that work, as is always the case the allowances of the technology, our skill sets, and our budget are all limiting factors. It’s very tricky terrain to navigate, to put it mildly. But it’s also vital terrain, not just for improving scholarly knowledge of the past, but for helping us make sense of deeper questions of meaning-making, of the shape and shaping role of technologies of text and communication, of how we know and perceive and make ourselves known to others, and how we might adjust or transform or even reject particular tools or ways of doing things in the present and the future.
Scientific knowledge—including, if I may be permitted the inclusion, the kind of historical inquiry and epistemic interventions scholars of my sort make—can only ever supplement and augment the ordinary human capacity for perception to a limited degree; intricate geological maps of earth processes or complex digital editions of manuscript texts, as useful and powerful as they can truly be, can only integrate so much, can only take us so far. We are not computers, and our sense perceptions, memories, ‘stores’ of knowledge, abilities, and so forth, are not programs or pieces of data, even if through a really rather miraculous intellectual alchemy we are able to produce such things and even turn around and project our tools and creations onto our inestimably more complex and nearly ineffable actual nature as conscious rational embodied organisms.
But this means that the interface between the knowledge produced with our tools—be they literal technological instruments and outputs or the vaster ‘tools’ of knowledge—and the ‘real world’ will always be imperfect and will not lead to an exponentially greater direct apprehension and containment of reality. Awareness of those limits (an awareness that unsurprisingly is often hard to maintain!) can much better guide and direct our attempts at ‘seeing’ the world and comprehending its patterns, dynamics, its truth, through our epistemic and technological and other means, even as we bear in mind that what we are hearing and seeing is only really a fragmentary portion of a much greater whole. Such boundaries can be a cause of existential angst, of a despair in the face of the enormity of the human universe, to say nothing of the unimaginably vaster cosmos as a whole, or we can graciously accept our limitations and learn to live within them, celebrating the fact that much will always remain a mystery to us. But therein lies a whole other train of thought, for another time.
Note to readers: as mentioned in a previous post, I am hosting a reading group this spring, History of Islamicate Text Technologies (see link for description), which will cover similar ground to the above. While we are close to a full roster at present, it is still possible to add a couple more, so if this sounds like something you’d be interested in, drop me a line. We will most likely be meeting Mondays at 1:00 pm Eastern US time.