Author Archives: Sarah Garnett Kinniburgh

Timekeeping + lightning talks

Building on Farah’s most recent post, which collected the projects presented at the CUNY DHI Lightning Talks this past Tuesday: I loved hearing about these ongoing and potential projects across CUNY and I’m so grateful to have all of those links in one place for reference. I also wanted to speak to the structure of the lightning talks and, from the role of observer/student/audience member, consider the condensed presentation format in more detail. The defining characteristic of a lightning talk is its brevity — a few minutes at most — and to this end I wanted to share a blog post from Danica Savonick, Assistant Professor of English, SUNY Cortland, and former Futures Initiative Fellow and HASTAC Scholar, “Timekeeping as feminist pedagogy.” (Available here.)

Replicating the bolded text in her original post, Savonick defines timekeeping as “deliberately structuring how much of a given amount of time is allotted to different tasks, communicating this information to participants, helping participants prepare to work within these time constraints, helping them stay on time in the moment, and encouraging an awareness of time constraints in others.” In this particular post, Savonick’s examples focus on the undergraduate classroom and how to teach time awareness, not just by assigning one presentation and using an aggressive noise to cut a student off in the middle of a thought on the day of said presentation, but by building practice into the course. She notes that, “[w]ithout a doubt, the number one ‘mistake’ students make in these facilitations is trying to cover too much material in too little time. By the end of the semester and with lots of practice, students learn to scale back their plans and cut back from three activities to two, or from two discussion questions to one.” The quotation marks around “mistake” emphasize what a genuine challenge this process of time management can be, no matter the length or function of the presentation at hand and no matter the education level of the speaker. (Consider an ambitious lecture session, even in upper-level classes with an experienced instructor, that tries to “get the class back on track” after falling behind on material; even if the instructor is glancing at the clock every few minutes, it can be difficult to adjust material on the spot.) Even when the speaker makes a good-faith effort to meet the time constraint and/or be considerate of the other potential uses of “their” time, they often find that they simply have too much information and too little time, this time, and have to decide how to act in the moment.

The lightning talks were a fascinating exercise in practicing this entire process of timekeeping and I am so glad I went to witness them. They not only gave a sense of the ongoing projects across CUNY but provided a relatively unforgiving framework for this timekeeping process to occur, particularly in contrast to more traditional formats at this level: possibly a lecture, where the speaker can look at the clock periodically for the hour or so, or a speech, with a few layers of etiquette that would help protect the speaker if they go over their allotted time.  

The lightning talk restrictions also got me thinking about the consequences of timekeeping across settings, from the lightning talks themselves to any conversation oriented around academics. Even students and/or scholars who would follow instructions for a written paper to the letter — and even those who have the foresight to do a practice run or five of a presentation — might find themselves speaking past a time cutoff for one reason or another, or waiting to speak rather than listening fully. The consequence of this failure on the speaker’s part is that someone else does not get to speak to the best of their ability in their time. Echoing elements of our conversations throughout the semester, Savonick reminds us that “[f]eminist pedagogy teaches that silence is not an absence, but the effect of power. It encourages us to listen to those voices that have historically been silenced and to change the structural conditions so that their voices are heard.” To this end, I am also curious about another aspect of this process: not just the consequences of failing to keep one’s time, but what happens physically and emotionally to us and to our audience when we do not do so, from rushing through material to apologizing out of turn, and what this might mean for engaging with a learning community.

(Moving towards) a network analysis of U.S. Senators

In determining what exactly constitutes a network analysis, Miriam Posner’s website directed me to an older version of Scott Weingart’s website, with his introduction to networks available here. (Weingart received the 2011 Paul Fortier Prize, among other recognitions at the top of our field, and/but/so his website — a WordPress! — is a great resource across the board. Highly recommended.) He writes that, “[i]f you’re studying something with networks, odds are you’re doing so because you think the objects of your study are interdependent rather than independent. Representing information as a network implicitly suggests not only that connections matter, but that they are required to understand whatever’s going on.”

This week, I wanted to focus on connections or relationships with a topic that I believe reflects interdependency, so I took this third praxis assignment as an opportunity to explore the concentration of power that has long defined the upper levels of government in the United States. I was curious about how network analysis might make sense of what I view as a relatively closed circuit of people, predominantly white and predominantly men, whose relationships with each other may go back to college — or earlier, in the case of certain recent high-profile hearings — and serve to further consolidate their influence.

To explore this concept in more detail, I aimed to do a network analysis of the current 100 United States Senators to discover connections in their young adult lives through their time as undergraduates. Many Senators attended one of a few law schools, but I was curious about finding other similarities even earlier in their higher education, whether a shared undergraduate institution or a similar set of experiences as an undergraduate at different institutions. It came to me that I could possibly use the tools for this week to visualize who — if anyone — would have been at the same school at the same time, for example. I ended up creating a dataset of each Senator’s name, college, graduation date, degree type, field of study, and additional information, including fraternity or sorority affiliation and whether they served as student body president, were the first in their family to graduate college, graduated as a member of Phi Beta Kappa, or fell into a few other categories that appeared in academic/professional overviews around the Internet. It’s not that I think being in a fraternity or sorority is an accomplishment (my own Greek life experience was a net positive, but I’d hardly call it an achievement), but I wanted to include this element of undergraduate life as a potential link between Senators.

To consolidate all of this information in a Numbers file, I largely took Wikipedia at its word and dug for details about several Senators elsewhere as well. Some biographies described how so-and-so “graduated from [institution] in [year] with a [B.A./B.S./B.B.A.] in [field],” which was optimal for my purpose, but many biographies instead included partial information that left rows blank in my spreadsheet: reading that a Senator “holds a B.A. from [institution]” or “graduated summa cum laude from [institution] in [year]” would send me to other sources, including alumni magazine profiles and commencement speaker information. For a few Senators, even this secondary step did not turn up the details I wanted, particularly with comparatively older Senators who graduated from college in the 1960s and 1970s, whose fields of study remained buried deeper than I could dig this week.

At first, I lumped everything into a single column for “additional information,” separated with semicolons, but this catch-all approach had its limits. Most obviously, it didn’t allow me to sort the data by any one consideration (if I wanted to see the Senators who graduated cum laude or higher, for example) that would illuminate a potential overlap in experience. To fix this, I broke the column up into a few not-catchy additional columns: “PBK?”, “other academic Greek,” “Greek social,” “first-gen,” “leadership,” and “athletics.” The column for “leadership” then became “class president?” and “valedictorian?”, making the table harder to navigate than before, when these considerations were in one place. I soon realized another limit of this new approach: some of this information might not be as accurate as I wanted it to be. Beyond deliberately untrue information in a biography, a Senator who graduated as their college or university’s valedictorian might not have that title in their Wikipedia biography (which anyone can edit), whether because so much else later in life eclipsed it or for any other reason; or maybe the institution did not even identify a valedictorian in the first place, even though the person graduated at the top of their class.
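The move from one catch-all column to sortable per-category columns is simple to sketch. A minimal Python version, where the Senator names, raw strings, and category labels are all hypothetical placeholders rather than values from my actual spreadsheet:

```python
# Split a semicolon-separated catch-all column into one yes/no column
# per category, so rows can be sorted and filtered by any single trait.
raw = {
    "Senator A": "Phi Beta Kappa; class president",
    "Senator B": "first-gen; varsity athletics",
}

CATEGORIES = ["Phi Beta Kappa", "class president", "first-gen", "varsity athletics"]

table = {
    name: {cat: cat in [item.strip() for item in info.split(";")] for cat in CATEGORIES}
    for name, info in raw.items()
}

# Now a single-trait query is a simple filter:
pbk_members = [name for name, cols in table.items() if cols["Phi Beta Kappa"]]
print(pbk_members)  # ['Senator A']
```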

The final consideration, which is fundamental in hindsight, is that I assumed that shared titles or organizations would lead to shared experiences that would lend themselves to network analysis. To be fair, there might be something about a group of people in the same Greek-letter organization at the same institution at the same time, or even across time, as the fascinating social experience of Homecoming illustrates, that ties them together, but it is hard to generalize from this feeling to a network. Do Doug Jones (who graduated from the University of Alabama, Class of 1976, with his B.S. in Political Science) and Michael Bennet (Wesleyan University, 1987, B.A., History) feel any kinship at all for their shared status as brothers in Beta Theta Pi, for example, and would it be fair to say that this kinship has affected their politics in any way, which is what my initial interest in this entire dataset seems to suggest? Or do John Cornyn (Trinity University, 1973) and Pat Roberts (Kansas State University, 1958) have some unspoken bond thanks to their B.A. in journalism?

I originally began pulling together this information to create a network analysis of what United States Senators might have shared early in their adult lives — institutions, honors, social organizations — but encountered fundamental problems with this very curiosity, not to mention the steep learning curve of putting the data into place. In working to express the data through Palladio and Gephi, I found that the platforms did not respond to my organizational approach or my questions, giving me a string of error messages and forcing me to return to the data over and over again to fill it in and rework its structure. I am going to try a few more rounds of editing my spreadsheet and exporting it to a CSV file over the next few days, but I have also considered that a different method entirely might give more insight.
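One way around those error messages is to hand the tools a pre-built edge list rather than the raw spreadsheet. A sketch of that step in Python, assuming a few illustrative rows (one Senator is invented so that at least one pair matches; Gephi's spreadsheet importer recognizes "Source" and "Target" columns):

```python
import csv
from itertools import combinations

# Illustrative rows standing in for the spreadsheet; "Senator X" is
# invented so that at least one pair shares an institution.
rows = [
    {"name": "Doug Jones", "college": "University of Alabama"},
    {"name": "Michael Bennet", "college": "Wesleyan University"},
    {"name": "John Cornyn", "college": "Trinity University"},
    {"name": "Senator X (hypothetical)", "college": "University of Alabama"},
]

# One undirected edge per pair of Senators with the same undergraduate
# institution; the shared school becomes the edge label.
edges = [
    (a["name"], b["name"], a["college"])
    for a, b in combinations(rows, 2)
    if a["college"] == b["college"]
]

with open("shared_college_edges.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Source", "Target", "Label"])
    writer.writerows(edges)

print(edges)  # [('Doug Jones', 'Senator X (hypothetical)', 'University of Alabama')]
```

Pre-computing the pairs this way also makes the structural decisions explicit before any visualization tool gets involved.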

Much of the challenge of compiling and adjusting the dataset was figuring out how to intervene in the data as little as possible while aiming for accuracy and consistency. To paraphrase Micki Kaufman’s answer to questions last week about her method for working with a large quantity of documents processed with optical character recognition, I wanted to remember that we look for patterns where the data is cleanest, so the moment that you begin cleaning data, you begin influencing it, even subconsciously. This lesson only felt more important the more I worked with this data, even on a small, limited scale, and realized how much my own interests and decisions affected any potential takeaways.

To return to Weingart’s post about network analysis, “Relationships (presumably) exist. Friendships, similarities, web links, authorships, and wires all fall into this category. Network analysis generally deals with one or a small handful of types of relationships, and then a multitude of examples of that type.” He uses the examples of authorship and collaboration as types or ways to describe relationships between types of nodes and introduces the distinction between asymmetric relationships, or directed edges that can be visualized with an arrow flowing one way, and symmetric relationships, or undirected edges that can be visualized with a line between nodes implying that the flow of the relationship is the same in both directions. For my purposes, I was only interested in finding the potential undirected edges, the undergraduate-level features that current Senators have in common that could possibly indicate shared experiences and start the process of understanding various intangible “benefits of the doubt” that seem to hold real weight in political situations. Moving forward with trying to explore the concentrations of political power in the federal government, I think it might make sense to incorporate a greater sense of asymmetric relationships (who has clerked for whom, for example, rather than who were classmates on the same level or who shared an experience “equally”), or else to work with nodes that offer less room for interpretation on my end.
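Weingart's symmetric/asymmetric distinction maps neatly onto data structures: an undirected edge is an unordered pair, a directed edge an ordered one. A small Python sketch using the shared-fraternity pair quoted above (the clerkship pair is invented purely for illustration):

```python
from itertools import combinations

# The two Beta Theta Pi brothers mentioned above; other fields omitted.
fraternity = {
    "Doug Jones": "Beta Theta Pi",
    "Michael Bennet": "Beta Theta Pi",
    "John Cornyn": None,
}

# Undirected (symmetric) edge: a shared affiliation ties both people
# equally, so an unordered pair (frozenset) is the natural representation.
undirected_edges = {
    frozenset({a, b})
    for a, b in combinations(fraternity, 2)
    if fraternity[a] is not None and fraternity[a] == fraternity[b]
}

# Directed (asymmetric) edge: "A clerked for B" flows one way, so an
# ordered tuple preserves the direction. (A purely hypothetical pair.)
directed_edges = {("Junior Clerk", "Senior Judge")}

assert frozenset({"Doug Jones", "Michael Bennet"}) in undirected_edges
assert ("Senior Judge", "Junior Clerk") not in directed_edges  # direction matters
```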

Re: Phototrails

I spent some time this week with Phototrails, a Mellon Foundation-funded collaboration between the University of Pittsburgh’s Department of History of Art and Architecture, the California Institute for Telecommunications and Information Technology’s Software Studies Initiative, and The Graduate Center. Phototrails maps patterns between 2.3 million Instagram photos from 13 global cities and describes itself as a work of cultural analytics, using computational methods to identify “visual signatures” within this vast amount of data for each city. Our conversation last week about mapping, representing, or else visualizing personal places was on my mind; I was drawn to Phototrails in part because my moves this past summer — to Boston in June and New York in August — have prompted me to think about how to represent my time in these distinctive cities via social media to family and friends in Virginia. Do I share the iconic or expected images (the Manhattan skyline from the Manhattan Bridge, for example, or the arch in Washington Square Park), offer variations on themes (a crowd of people in a museum with Starry Night tucked in a corner), or geotag otherwise nondescript images to signal that, yes, I am in these spaces (a top-down view of my coffee on a table that could be anywhere in the world, only identifiable as my neighborhood coffee shop in New York because of the geotag)? After learning more about what Phototrails aimed to accomplish, I not only wanted to evaluate how my own visual data about and representations of experiences of New York might fit into a dataset or approach to data, but to share a few takeaways about visualizing photographs as qualitative data points and photographic metadata.

First, I wanted to describe the project’s visualization layouts, borrowing language from those sections on the website. The team describes four options for presenting the data: radial visualizations, which organize photos in a circle across their visual attributes (hue, brightness, texture), location, and timing; montage visualizations, which offer a more grid-like organization; PhotoPlot software, available for more investigation here; and points and lines, which use a color-coded system on a gradient to capture the time of day that each photo was taken. The idea with these various layouts is that the data can adjust to show visual characteristics of the data as well as metadata (filters, spatial coordinates, upload date and time). Phototrails describes a “multi-scale reading” capable of “moving between the global-scale cultural and social patterns and the close-ups revealing patterns of individual users,” a middle ground between close and distant reading of behavior, experiences, and representations. With this information in mind, however, I began to wonder what other information may have been captured. (This is where I loved Drucker’s distinction of data (a “given”) and capta (that which is captured). She elaborates that “capta is not an expression of idiosyncracy, emotion, or individual quirks, but a systematic expression of information understood as constructed, as phenomena perceived according to principles of interpretation” and I am still puzzling over whether this notion undercuts the idea that we can find something like a pattern across 2.3 million individual photographs.)

In this sense, Phototrails reminded me of our conversations about text analysis, as when some of us became uncertain about whether and how Voyant would store our data and ended up pursuing different lines of thought than originally planned. In the case of Phototrails, I was curious about how the team gained access to these 2.3 million photographs, then realized that they were publicly posted on Instagram. What are the ethical implications of conducting a large-scale project like this, drawing on social media where those who “participated” in the project might not know that they offered data for this purpose? How do ideas about informed consent — those ideas that shape the concept of the IRB and standards for human-based research, but also notions of privacy more broadly — intersect with this type of scholarship that necessarily casts a wide net and, in many ways, crowdsources from a crowd that often does not recognize itself? It reminds me of when I noticed signs at The Graduate Center orientation that said, essentially, “your presence in this space is consent to be photographed and documented on film,” and found myself acting differently — smiling and gesturing more, going into a corner to check a notification on my phone — because I had this heightened awareness of the potential future uses of my image. Because the participants in this project did not have the benefit of such a sign, the Phototrails data is arguably more “real” or “authentic,” but the uneasiness lingers.

At the same time, there are parallels between this challenge of digital data collection and more traditional methods of analysis. It makes me uncomfortable to know that any of my own data, visual and otherwise, might very well end up in someone’s research and take on meaning(s) that I did not intend, and that I probably will never even know. In the same way, a nineteenth-century farmhand might have kept a diary that now offers insight into labor conditions in their industry, but the diary more likely served a set of purposes in its time and took on new meaning later. Part of conducting responsible research, whether focusing on objects or literature or documents, is recognizing these multiple layers accordingly and not distorting or overstating one aspect to get a desired result.

This is where I found myself disagreeing with Phototrails’s own distinction between big data and thick data. “Zooming into a particular city in specific times, we suggest that social media can also be used for local reading of social and cultural activity,” the Phototrails team wrote. “In other words, we do not necessarily have to aggregate user generated content and digital traces for the purpose of Durkheim-like mapping of society (where individual people and their particular data trajectories and media diaries become invisible). Instead, we can do ‘thick reading’ of the data, practicing ‘data ethnography’ and ‘data anthropology.’” In my mind, a thick reading of this data would include explanations for why a user shared one location and not another (even to the level of sharing the street address versus a building name, as Sandy mentioned in her discussion of mapping stops on a global tour), information about captions, details about the hashtags, and the consideration of whether the photo was taken in New York or just tagged there, all non-visual components that influence a visual signature. Without such context, I think this project is an ambitious and impressive example of visualizing big data, but it falls just short of the thick reading that reaches the possible depth of “cultural, social, and political insights about particular (local) places and particular time periods” it aimed for.

Like many of us have mentioned in class and in other conversations, I also find the sense of collapsed boundaries — the idea that we are all constantly, quietly, accidentally providing data, whether it ends up in a peer-reviewed academic journal and helps provide a new perspective on an important social issue or whether someone uses the same data for something far more unsettling — troubling. To illustrate this, I followed the Phototrails website links to its newer project, Selfiecity. Selfiecity addresses the selfie in artistic, theoretical, and quantitative frameworks, including visualizations of 3,200 selfies around the world and an interactive photoset. Close to the bottom of a page of insights, the website offers headshots and bios of a team of eight and then, at the very bottom, a single attribution of sorts: “A DigitalThoughtFacility project, 2014.” The link greets you with a description of OFFC, “a research and design studio based in New York City” that describes its work as follows: “We work with global brands, research institutions and start-ups to explore new product applications for today’s emerging technologies.” This isn’t to say that corporate interests can’t engage with DH scholarship — that’s a huge, ongoing conversation about higher education and business in general — but just to note the curious flow from project to project. This week of readings and projects has provided a good path forward for continuing to explore the interplays between access, democracy, inclusion, and privacy, particularly in the middle ground between close and distant reading.

Reflecting on spatial humanities beyond literature

As the syllabus starts to expand from focusing primarily on (the) text as data, I was admittedly apprehensive about the readings this week. I remember a few vocabulary terms related to space and mapping from a formative class on modernist and contemporary literature and the Anthropocene (focusing on the figure of the flaneur and psychogeography), but I have never used GIS software or any mapping technology in my own work, have spent very little time talking about space in academic contexts, and have only recently begun to learn about geography as a discipline. This relatively “blank slate” turned out to be helpful for this week’s readings. As we have emphasized throughout the semester, digital humanities as a field can/should incorporate a sense of play into the process of inquiry, and it’s much easier to play with a low sense of expectation and an open mind.

I began this week with The Spatial Humanities: GIS and the Future of Humanities Scholarship, edited by David J. Bodenhamer, John Corrigan, and Trevor M. Harris, and worked through the text from designated beginning to end. The opening chapter of The Spatial Humanities, “Turning toward Place, Space, and Time,” by Edward L. Ayers, was eye-opening in several capacities, starting with its overview of the history of GIS across disciplines and how geography as a field fits into this history. Rather than using the familiar (and, at times, frustratingly flexible) term “interdisciplinary,” Ayers quotes geographer Stanley Brunn’s assertion that geography is “the bridging discipline or an interfacing or fusing discipline” (Ayers 2). I am not sure if more scholars have written on the difference between this bridging/interfacing/fusing cluster and interdisciplinarity, or if there is even a substantial enough difference to discuss, but I liked the new phrasing and imagery here. This sense of bridging means that geography crosses and studies “the relationships between the human and physical phenomena” — in this case, focusing on physical phenomena through space, in the same way that Ayers suggests the historian takes time as their organizing unit (3). (I am still not entirely convinced that “history is, at heart, a humanistic discipline rather than a social science,” but we’ll leave this discussion alone for now.) Geography and history make perfect sense as interfacing fields examining the same landscape, particularly with the help of Bakhtin’s critical vocabulary (Ayers 4), sociological insights like Andrew Abbott’s description of time as “a series of overlapping presents” (quoted in Ayers, 5), and the notion of “landscape” itself (explored in detail in “Representations of Space and Place in the Humanities” by Gary Lock, Chapter 6 of the same collection), all of which help locate humanities scholarship in three and four dimensions.

The editors’ chapter, “Challenges for the Spatial Humanities: Toward a Research Agenda,” goes more into the details of the spatial humanities specifically, rather than the history of GIS or GIScience. In particular, I was drawn to their warnings against relying uncritically on the positivist technology of GIS itself for humanities and humanist scholarship (168) and the related importance of applying sound humanistic judgment to spatial and temporal data that often, as we briefly touched on in our discussion of Miriam Posner’s “What’s Next: The Radical, Unrealized Potential of Digital Humanities” (recommended for September 25), collapses into convenient and inaccurate categories. On this wavelength, the editors mention metaphorical space as well — that of text clouds, for example (172) — but I felt myself glossing over this section and wanting to read more about how to ethically, respectfully explore actual/real/non-metaphorical time and space as a humanist (borrowing from the social scientist’s toolkit), returning to the concept of respecting the complexity of your data.

In addition to a sense of play and the importance of designing research that respects the full range of data, this week saw another recurring theme (in my mind) from the semester so far: the fact that humanities scholarship can extend beyond those very few texts typically designated as “literature.” This isn’t about expanding the canon necessarily (a separate and valuable discussion!), but rather expanding my own idea about what humanities scholarship can and should be. I was convinced that our conversation about mapping in DH would all be related to literature or literary history: mapping the Republic of Letters, for example, or else following the plot of a novel around a plot of land. This week disproved my assumption that mapping — or any spatial/temporal approach to humanities work — has to follow or replicate an existing narrative and instead showed that mapping can, no matter the exact technology used, present multiple overlapping or even competing narratives about a time and/or place.

It was here that the section on “The Humanities in the Digital Humanities” (22) in HyperCities: Thick Mapping in the Digital Humanities spoke to me the most, particularly with the spark of an idea about creating a thick map of something to do with culture, art, and/or information in Europe during the Cold War (a period of history that has been catching my attention more and more recently). I didn’t get the chance to complete a map of my own this week in time to write about it and reflect fully, in part because I was so happy to realize that, amazingly, I didn’t need to limit myself to a literary topic, but I hope to be able to return to these readings later in the semester with a bit of distance. The concept of deep/thick mapping — mentioned in these readings, with more differences between space and place and other practical details outlined here — is one of the most fascinating concepts we’ve encountered so far this semester because it incorporates so many of the values and practices that have cut across readings and discussions. With this in mind, the idea of building a meaningful, layered, expansive map of a historical site is something that I want to return to for sure, especially now that I don’t feel limited to mapping information drawn from or related to literature.

Text mining the ASA Dissertation Prize

The mangle came out in full force as I was deciding which texts to use for this assignment. My first thought was to explore the language of DH research centers around the world, aiming to identify similarities or differences in their mission statements. This idea proved to be too ambitious (and a little too meta for me today), and in my efforts to scale it back I ran into the problem of picking a few centers to compare and, by extension, determining which centers were “representative” of DH. Even on a small scale with low stakes, that task felt a little loaded, so I switched directions.

I then uploaded a corpus of 70 texts from an undergraduate sociology seminar on cultural dimensions of violence, covering a range of historical and contemporary examples from a variety of disciplines, but ran into the problem(s) of formatting. The PDF format clouded Voyant’s reading process, as the language on the cover page of each document (“use,” “published,” “rights”) registered as the most frequently used words. As much as I wanted to work with a larger corpus, I could not figure out how to upload only the text of articles or a page range from a PDF — not to mention that multiple articles registered as having 0 words in them — so I decided to take a different tack and pick texts that I could streamline more easily.
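When the PDFs can be reduced to plain text, that cover-page noise can also be filtered with an explicit stop list before counting, which is roughly what Voyant's stopword settings do. A toy Python sketch, with an invented snippet standing in for an article whose front matter contaminated the counts:

```python
from collections import Counter

# Toy snippet standing in for an article whose cover page skewed the counts.
text = """Use of this article is subject to publisher rights.
Published 1999. All rights reserved.
Violence in cultural context shapes how violence is remembered."""

# Front-matter terms that dominated the raw Voyant results for the PDFs.
BOILERPLATE = {"use", "published", "rights", "reserved", "subject", "publisher"}

words = [w.strip(".,").lower() for w in text.split()]
counts = Counter(w for w in words if w not in BOILERPLATE and len(w) > 2)

print(counts.most_common(1))  # [('violence', 2)]
```

With the boilerplate excluded, the top term reflects the article's subject rather than its rights statement.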

With the writing process, publishing, and peer review on my mind from last class, I decided to think about the early stages of the prestige/reputation economy and went to the American Sociological Association website. Of the ASA’s annual awards, including teaching and career accomplishment awards as well as more outward-facing awards for public sociology and social reporting, I was drawn to the Dissertation Award as a way to explore the language of peer review and evaluation. The submission instructions are fairly detailed, but the selection criteria, not so much:

“The ASA Dissertation Award honors the ASA members’ best PhD dissertation from among those submitted by advisers and mentors in the discipline. Dissertations from PhD recipients with degrees awarded in the current year, will be eligible for consideration for the following year’s award. (e.g. PhD recipients with degrees awarded in the 2018 calendar year will be eligible for consideration for the 2019 ASA Dissertation Award.)”

To get more information about this particular prize, I pulled the press releases for each award decision from 2008-2018 to give me information about the 13 dissertations (counting two years with joint winners and not counting years that included an honorable mention), and put the body of each release into a separate PDF. With much cleaner documents to run through Voyant, I uploaded all 13 at the same time and started to dig in. I was curious to see if the language of these press releases betrayed some logic or reasoning behind the language of the selection criteria. I hypothesized that each press release would include some mention of timeliness and/or novelty (why this particular research mattered to the field at the time and/or why the method contributed something to a subfield or entire field) and that the results in Voyant would show this language accordingly.

Instead, I was struck by the most frequently used words (besides “dissertation”): global (31 times total in the 13 documents); political (25); cultural (24); and social (24). None of these words were used in every text; “global” was concentrated heavily in Kimberly Kay Hoang’s “New Economies of Sex and Intimacy in Vietnam” (which won the award in 2012) and Larissa Buchholz’s “The Global Rules of Art” (2013). Of the most frequently used words, though, “political” appeared in 12 out of 13, only absent in the write-up of Alice Goffman’s “On the Run” (2011), and “social” was only absent in Christopher Michael Muller’s “Historical Origins of Racial Inequality in Incarceration in the United States” (2015). In both cases of absence, I found that these absent words could have applied to the project at hand (i.e., the announcement of “On the Run” could have mentioned the political components to Goffman’s methods and “Historical Origins of Racial Inequality in Incarceration in the United States” touches on undeniably social dimensions), which made me think about who was writing these releases in the first place and choosing which dimensions of the projects to highlight: intensive fieldwork in one case, novel methods in another.

On this note, the releases for the 2017 and 2018 projects (Karida Brown’s “Before they were Diamonds: The Intergenerational Migration of Kentucky’s Coal Camp Blacks” and Juliette Galonnier’s “Choosing Faith and Facing Race: Converting to Islam in France and the United States,” respectively) plummeted in word count (105 and 149) compared to the average of 642 words from 2008-2016, which peaked with Goffman’s 789-word announcement in 2011 and had been decreasing since 2013. The vocabulary density of these announcements has also been on the rise, though not consistently, fluctuating between a high of .771 in 2017 (Brown’s “Before they were Diamonds”) and a low of .451 in 2013 (Daniel Menchik’s “The Practices of Medicine”), and the average words per sentence has varied widely as well, from 85.5 words/sentence in 2009 (Claire Laurier Decoteau’s “The Bio-Politics of HIV/AIDS in Post-Apartheid South Africa”) to 20.9 words/sentence in 2015 (Muller’s “Historical Origins of Racial Inequality in Incarceration in the United States”). Although vocabulary density and average words/sentence tell their own stories, the most striking difference in my eyes has been document length. The sudden drop from ~400 to <150 words makes me think that the winners of the Dissertation Award used to write their own announcements, but that a shift between 2016 and 2017 moved the announcements much closer to a dense, factual press release format with little embellishment and no outside quotations from supervisors or mentors.
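For context on the two Voyant statistics cited above: vocabulary density is simply the ratio of unique words to total words, and words per sentence is total words over sentence count. A minimal sketch of both, run on an invented snippet rather than one of the actual announcements (Voyant’s own tokenizer differs in its details):

```python
import re

def voyant_style_stats(text):
    """Approximate two of Voyant's summary statistics for one document.
    Vocabulary density = unique words / total words;
    average words per sentence = total words / number of sentences.
    (A rough sketch; Voyant's tokenization rules are more elaborate.)"""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    density = len({w.lower() for w in words}) / len(words)
    avg_wps = len(words) / len(sentences)
    return round(density, 3), round(avg_wps, 1)

# A stand-in snippet, not one of the real press releases
density, wps = voyant_style_stats(
    "The committee praised the work. The work was praised."
)
# density == 0.556 (5 unique words / 9 total); wps == 4.5
```

A repetitive document therefore scores a low density, which is one reason very short, tightly written releases like the 2017 announcement can score so high: almost every word appears only once.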

I was also interested to discover through Voyant that these announcements generally do not make a big deal out of each dissertation’s timeliness or novelty: “timely” appeared in three of the 13 announcements, “new” (in context) in two, “groundbreaking” or “breaking new ground” in two, and “ambitious” in three. This was not the kind of language I predicted. Instead, the announcements often mentioned other aspects of quality, from Decoteau’s “masterful” research to Buchholz’s “theoretically and methodologically sophisticated analysis,” and, in seven of the 13 documents, a “contribution” to the field without pointing back to novelty or newness specifically. In a certain way, this lack of specific language about timeliness or novelty, paired with a focus on overall quality, creates an in-group feeling. The reader learns about the content of each project — what each scholar studied and how they approached it — and is left to read the rest for themselves.

In hindsight, I think I was hoping that the write-ups for each dissertation award would give a bit more insight into the selection and review process for this particular prize. However, the selection process for many prizes — from receiving a named award from a scholarly organization to securing a spot at a music festival to being signed to a particular modeling agency — is often deliberately vague, and, even after the fact, information can be limited about how a committee arrived at a decision. This example suggests that, for now, certain academic prizes are no exception.

Peer review + power dynamics in Planned Obsolescence

Keeping in the spirit of Sandy’s post on collaboration vs. “ownership,” I wanted to mention Fitzpatrick’s idea of peer review, share my hesitancy about her diagnosis of the problem and solution, and hopefully hear what everyone else thought about it.

In Planned Obsolescence, Fitzpatrick draws on Mario Biagioli’s “From Book Censorship to Academic Peer Review” to describe how “peer review functions as a self-perpetuating disciplinary system, inculcating the objects of discipline into becoming its subjects” (Fitzpatrick 22). As Biagioli puts it, “subjects take turns at disciplining each other into disciplines” in academia (12). This concept makes sense across types of peer review; Biagioli focuses on the royal academies and the associated “republic of letters” as a way to conceptualize peer review beyond a singular project, and I am also thinking of contemporary practices that are designed to evaluate and recalibrate a power dynamic (like the time I realized that the department head in the back of a classroom was actually there to evaluate the instructor).

This entire process of peer review, but particularly the familiar version that Fitzpatrick considers in detail in her first and third chapters, is wrapped up in notions of who counts as a peer. We have discussed the idea of collaboration throughout the semester, starting with the notion that DH projects often accommodate, even require, a variety of skills and contributions; Sandy’s post speaks to this point and flags the critical “decision point about whose contributions to include” in the first place as a good place to start for identifying a project’s collaborators and expanding our notion of a peer. All of this points to a more inclusive notion of the peer which, in turn, aligns with a field like DH that strives to be participatory and democratic in multiple senses of those words.

The peer review process that Fitzpatrick outlines in Chapter 3 seems like a good place to start putting this expanded idea of the peer into practice. She compares how digital commenting functions as one level of peer review for projects such as “Holy of Holies,” the Iraq Study Group Report, her own article “CommentPress: New (Social) Structures for New (Networked) Texts,” Expressive Processing, and a digital release of The Golden Notebook (112-117), describing a spectrum of options from an entirely open commenting feature, where any reader could leave a comment, to relatively closed-off systems where only select readers could provide feedback. As I made my way through this chapter, the phrase “the wisdom of the crowd” (which we first encountered in the context of The Digital Humanities Manifesto 2.0 as described in “This Is Why We Fight” by Lisa Spiro) kept coming to mind. From my perspective, this notion underlies Fitzpatrick’s model for online peer review, which strives to be a social, open process while “managing the potential for chaos” (117). (Granted, this chaotic or more generally negative mob/mass/crowd was much more familiar to me from French history, Romantic literature, early urban sociology, and general concern about trolling, but I have come around to the idea that the crowd can be a force for good in so many DH contexts.)

However, Fitzpatrick also notes that the author of Expressive Processing found that “the preexistence of the community was an absolute necessity” (116) for its comment structure to be useful. This experience logically translates to other projects: peer review that turns to the “wisdom of the crowd” can only be as helpful as its crowd. I see how the crowd might offer more varied feedback and how a more expansive notion of peer review in general could amplify the voices of individuals who may not have gotten the chance to participate in the process otherwise, whether because they fall slightly outside of academic circles, have not yet acquired the prestige to “do peer review” for a publisher, or for any other reason. But to become a member of that peer review community or crowd in the first place — one of the seven women with commenting privileges on The Golden Notebook, for example — I see the same social and technical barriers to access that we have talked about in class. As a result, I am struggling to see how a more democratic comment structure in digital spaces changes the disciplinary power dynamic of peer review. In your reading, does Fitzpatrick’s proposed version of peer review (in certain contexts) adequately address this power dynamic?