
Marxism, Sexism, and Racism

Hi all, many congratulations on completing the semester. A special shout-out to Patrick Smyth; I apologize that I can't make your party tonight. But I wanted to write a blog post about my final project. As you may recall, I am doing a textual analysis project based on letters written between Karl Marx and Frederick Engels in the latter half of the 1850s. I am largely interested in how these letters influenced their other published works, such as Capital. But that isn't what I want to blog about today. There was one area of the proposal I submitted yesterday that I wish to discuss now, and that has to do with the inclusiveness of my work.


I was struck earlier in the semester by Lauren F. Klein's "Distant Reading after Moretti." It really brings to light issues of inclusiveness in the Digital Humanities and academia as a whole. I realize I am a white male working on a project about the work of two other white males, so I wonder what biases I carry from the beginning in terms of race and gender. I also question whether there is inherent racism or sexism in Marxist thought as a whole. What must I do to avoid such biases and to find any instances of racism or sexism in the texts I analyze?


Textual analysis, or distant reading, is, as Klein says, "unwelcoming to women." What can I do in my work to be inclusive of feminist and gender thought (and of course race) as I analyze these letters throughout next semester? How can I create a project that invites people of all backgrounds to engage with my work? I don't think I will be able to answer any of these questions in this blog post, but I want to stress how important it is that I think about them.


Marxism, in general, boils down to equality, but is that enough to be free of any of the issues I highlighted? Marx and Engels, as I've said, were white European men, and perhaps had a Eurocentric mindset. I'd be very interested in reading about racism and sexism in Marxism if anyone has article or book suggestions. There is so much to read on the topic, and in my landscape audit I didn't really see anything regarding these issues. I wonder how women, gender-nonconforming people, and people of color interact with Marxist thought differently than I would. Has anyone come across something troubling in their own readings of Marxism?


Digital Humanities is evolving and growing, as is academia with movements like #metoo and #blacklivesmatter. But when working with products of the canon, we can be sure to encounter ways of thinking that differ from the norms of today. That is not to say there is no value in these works; they just need to be interpreted in the context of their own time. I guess the best thing I can do for now is to read works on Marxist thought written by women and people of color. I know they are out there; one of my professors this past semester was a woman, a Marxist, and a feminist. Is anyone else considering their own inherent biases in the work they are planning to do next semester?


Again, congratulations on finishing the semester. It's been a true pleasure working with all of you over the course of the term, and a major learning experience to be sure. I hope you have the happiest of holiday breaks, however you choose to enjoy it.

I have a big problem with marginalizing the plight of the addict with a phenomenon I don’t see as real

The purpose of this short blog post is to get feedback from any of you who have read the piece and may be able to shed more light on what she is saying, so I can discuss it tomorrow better informed. I realize I am posting late, so maybe I won't get much response, but here goes… I am referring to the meth mouth section and her implication that white addicts are read merely as markers of "good" or "bad" in terms of their whiteness, and not in terms of their mental illness, addiction. I don't think addiction is a good thing; I think it is a truly tragic, devastating thing. And I think her connecting the plight of the white addict as a mere societal "inconvenience" or "stain" to their overall whiteness is a very toxic and problematic claim. I just feel that racializing addiction for white people in such a way discounts the lived experience of countless people who suffer from a devastating illness. Can someone who agreed with what she said please explain what I am missing? I understand addiction can be looked at differently when it comes to race, but I don't see her claims as anything but insulting to addicted people who happen to be white. I think there were many more responsible avenues for the discussion of surveillance, racism, and addiction.

The Demise of Daesh

With today being Veterans Day, I took inspiration from a Vice News video I saw on Facebook about two hipster Brooklynites who took a year off work and volunteered to fight Daesh (ISIS) with the Kurdish Army. I thought it was such a cool thing to do, and so my own contribution is a mapping project that illustrates where Daesh held land, where it most recently had control, and where it has been eliminated by the US. I realize I am very late with this praxis; however, I had a medical issue when it was due, and my other two were on time, so this is just for my own benefit.

I used Carto to make my map. I had a hard time figuring out how to make the layers work for me, and eventually it told me I had exceeded my allotted amount and needed to upgrade my account if I wanted more. The reason I had so many layers was the purple and orange polygons (I will explain the logic of it all in a minute). I was having a hard time adding more than one polygon to a single layer, but I eventually got it to work.

Other than that, Carto is very intuitive. There is definitely a learning curve that I haven't quite gotten over entirely, but it was rewarding to play with this tool. The source of my data was a BBC article on the annihilation of Daesh over time (https://www.bbc.com/news/world-middle-east-27838034). I basically just combined two of the maps they had: one showing Daesh control and another visualizing where the US destroyed Daesh. I did this very crudely, and not all of the towns on the BBC map would come up in Carto; when I searched for al-Qaim, it took me to Yemen, which is not the right answer. So I had to guess as best I could. I looked for Daesh-related datasets but came up empty-handed. ISIS has a lot of connotations other than the terror organization, so that search too was useless.
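If I redo this, one workaround would be to geocode the town names outside Carto and then import the coordinates, constraining each query by country so that al-Qaim resolves to Iraq rather than Yemen. Here is a minimal sketch using the geopy library (my own assumption about tooling, not something I actually used for this praxis; the town list is illustrative):

```python
# Sketch: geocode town names from the BBC article before importing
# them into Carto, constraining each query by country to avoid the
# wrong-country matches I ran into inside Carto's search box.
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="daesh-map-praxis")  # hypothetical app name

towns = [("al-Qaim", "Iraq"), ("Raqqa", "Syria"), ("Mosul", "Iraq")]  # illustrative

for town, country in towns:
    location = geolocator.geocode(f"{town}, {country}")
    if location is None:
        print(f"Could not geocode {town}")
        continue
    print(town, location.latitude, location.longitude)
```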

So here is the map…


I couldn't figure out how to embed the image, so I just took a screenshot of the entire workstation so anyone not familiar with Carto could see how it is set up. As for a legend: the red dots show US-led coalition strikes against Daesh, of which there were 13,315 in Iraq and 14,660 in Syria. The purple polygons and one line signify Daesh control of territory as of Jan 5, 2015, and the orange polygons show its significant loss of control as of Jan 8, 2018.

This was a great way to spend my morning. Seeing the demise of Daesh visualized is very rewarding… and if anyone happens to know those two Brooklynites I referenced earlier, please introduce me! One of them even talked about reading Hamlet and all of Shakespeare during his deployment. He is a theater designer in real life, just to illustrate his unique personality a bit!

Zotero Workshop

Zotero is the coolest thing I haven't been using. I am so pumped I went to this workshop. I hate creating citations and bibliographies with a passion, and citation generators are garbage and full of flaws. Zotero remedies all of this and keeps track of everything you find and want saved, with both the PDF and the catalog information (you can adjust it to save both or only one or the other).


Zotero is very easy to install. Just be warned not to have Microsoft Word open while you do it, because Zotero comes with a handy Word extension that won't install correctly if that program is open. The Word extension is what allows you to add citations and bibliographies with ease to documents you are working on. You can also switch between different citation styles (Chicago, MLA, etc.) with no problem. I didn't ask, but I doubt it works with Apple Pages. You also need the Zotero extension for your browser, so you can pull the data from the pages you find.


What's more, beyond the downloaded client, all of your data is saved on the Zotero.org site. You need to create an account, of course, but all you have to do to exchange updated information between the two is hit sync. This is also helpful for updating the website with PDFs and data you drag into the client from your hard drive or desktop.


It works with academic databases and even newspaper articles like the New York Times. The possibilities for making your research easier are rather endless. Better yet, there is an active troubleshooting forum with its own community following, plus a massive amount of documentation to help with general troubleshooting and with gaining mastery. The instructor of the workshop led me to documentation that will even help me install Zotero on my iPad or another mobile device, though he did suggest making an appointment with him to help ensure that the installation goes smoothly. He was very knowledgeable; he is Stephen Klein, the Digital Services Librarian (sklein@gc.cuny.edu).


Another cool function is being able to add notes and tags, so you can keep yourself organized within your workspace (the downloaded client). For example, I searched for articles on Hamlet critical theory for my other class and tagged them as relating to mental health. The tags display in their own section, so you can browse through each one specifically.


One thing Mr. Klein suggested was double-checking that the data was pulled correctly into the citation area. He said things sometimes get rough with page numbers and the like, so you just want to make sure everything loaded properly. He demoed pulling in an article, and sure enough, there were all sorts of errors in the citation involving stray characters, quotation marks, and markup language, so it is not entirely flawless.


One attendee had trouble installing the browser extension on his Surface Pro, which is a tablet that functions as a computer. He even had Chrome and Firefox, not just the default Microsoft Edge. So, if you use this device, you may need additional help to get things working.


Another thing to consider is that Zotero's free storage allotment does run out, but you can purchase additional storage (there's even an unlimited plan) pretty cheaply. Mr. Klein said it does take a while to fill up the free allotment, though.


Other than that, I guess it's just important to stress that these citations can follow you for life. If you are working on a device you don't usually use, just log in and sync later. You can also easily disconnect the downloaded client from an account, so if you are using someone else's computer you don't leave your library on their device. I highly recommend this workshop and this software if you aren't already using it.

“it’s just awful trying to find a humanities dataset”

What is the value of teaching methodological tools without the theoretical support that informs analysis? But what good is the theoretical when students struggle to learn arduous methodology in software like RStudio? Learning to program, from the outset, seems impossible. It is literally learning a new language, one that is mathematical and statistical. Andrew Goldstone articulates some very promising angles for approaching these dilemmas.

When it comes to having the methodological skillset, the big question is "so what?" What can you say about the visualization on your screen? I recently had this problem with my network analysis: okay, I curated data and created networks of characters who converse with one another in Hamlet. What good is that for scholarship? Well, it is certainly good for my own personal scholarship. We were all told going into these Praxis assignments that the projects were more about getting experience with digital tools than about revealing anything groundbreaking. You need to test the waters before you can commit to a full-on swan dive. Goldstone understands this, but at the same time he was teaching at the Ph.D. level, where results matter. His course sounded intensely painful but very rewarding at the same time.

My experience with RStudio is limited to one class I took. It is really hard software to learn because, beyond the software's functionality, there is also the problem of interpreting the R language and making it "do stuff" for you. In Goldstone's fast-paced, one-semester textual analysis course, the students sounded highly committed, intelligent, and professional, but that would be a given from the outset of the course's design; they are Ph.D. students, after all. How could he design pedagogy that would lead his students to create intelligent work and mobilize them to ask worthwhile questions of that work, all in a very short time frame?

It seems that Goldstone had three major takeaways from his experience with this trial run of his course:


"1. Cultivating technical facility with computer tools—including programming languages—should receive less attention than methodologies for analyzing quantitative or aggregative evidence. Despite the widespread DH interest in the former, it has little scholarly use without the latter.

2. Studying method requires pedagogically suitable material for study, but good teaching datasets do not exist. It will require communal effort to create them on the basis of existing research.

3. Following the "theory" model, DH has typically been inserted into curricula as a single-semester course. Yet as a training in method, the analysis of aggregate data will undoubtedly require more time, and a different rationale, than that offered by what Gerald Graff calls "the field-coverage principle" in the curriculum."

When I took Digital Humanities courses at NYU, the layout of the program was much different from ours. In the first semester of their sequence, you take an Intro to Python course that proved very challenging, especially for people with little programming experience (like me), because, like Goldstone's course, it met only once a week. I struggled with the homework and went to office hours every Monday morning. I would have benefited from this back-end approach of learning to look at and analyze what is being quantified before being expected to create it on my own. Then, when I went back to the programming course at a later time, I would know what to expect to come out of the "other end."

In DH courses, the internet is our oyster, to take up Goldstone's second point. In other words, it is all of our responsibility to keep an eye out for that perfect database that has everything we all need (does that exist?). Sometimes it does take being a little creative and resourceful; I had to make my Hamlet dataset by hand, but is that the worst thing for an intro-level course?

There isn't much to say about the third point other than that, having already drunk the Kool-Aid of this program, we know one semester just won't cut it. There are so many concepts, theories, methods, programs, languages, practitioners, and articles to read. We are lucky to call our program home because we get the time we need to delve into all of that.

While learning to work with data, we must learn not only how to make the data "do stuff" but also how to ask the right questions of it at the right time. Because, as Goldstone points out, how can one be sure a "trend is real, and not a random fluctuation?" It's fun to look at data and believe you are pointing something worthwhile out. It's less fun to be told by someone who knows better that what you're looking at isn't actually interesting.
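One simple way to check whether an apparent trend is more than a random fluctuation is a permutation test: shuffle the data many times and see how often chance alone produces a difference as large as the one you observed. A minimal sketch (my own illustration, not Goldstone's code), comparing average word counts between two hypothetical groups of documents:

```python
# Sketch of a permutation test: is the observed difference in group
# means larger than what random shuffling of the labels produces?
import random

group_a = [1200, 1340, 1105, 1410, 1290]  # hypothetical word counts
group_b = [980, 1020, 1150, 990, 1060]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(group_a) - mean(group_b)
pooled = group_a + group_b
n_a = len(group_a)
trials = 10_000
extreme = 0

for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:n_a]) - mean(pooled[n_a:])
    if abs(diff) >= abs(observed):
        extreme += 1

print(f"observed difference: {observed:.1f}, p ≈ {extreme / trials:.3f}")
```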

It is important to learn to program because, as Goldstone also points out, when using GUI tools you are limited to the confines of the system. He uses Voyant as an example. Without knowledge of coding, you are locked out of asking any questions other than the ones Voyant allows. Perhaps this is another weakness in the tool that could be addressed in our letter to Voyant (if that hasn't happened already).
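As a small illustration of what I mean (my own example, not one from Goldstone or Voyant), even a few lines of code let you ask a question a GUI might not expose, such as how often two particular words appear in the same sentence of a text file (the file name here is hypothetical):

```python
# Sketch: count how often two chosen words co-occur in the same
# sentence, a question a fixed GUI tool may not let you ask directly.
import re

def cooccurrence_count(path, word_a, word_b):
    with open(path, encoding="utf-8") as f:
        text = f.read().lower()
    # crude sentence split on ., !, ? -- good enough for a sketch
    sentences = re.split(r"[.!?]+", text)
    return sum(1 for s in sentences if word_a in s and word_b in s)

print(cooccurrence_count("hamlet.txt", "death", "father"))  # hypothetical file
```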

The problem with learning too much methodology at once is the question of what scholarly good it is serving. A balance of the methodological and the theoretical is essential for keeping checks and balances. I know that in my RStudio course there was a great deal of both; my professor made a point of including theoretical readings alongside practical assignments every week. I learned a great deal, and that class is what turned me on to data and DH. It is only through understanding the theoretical that the methodological clicks in such a way that scholars can ask appropriate questions. This is a very important aspect of pedagogy to me and is something that is put into practice in our program.

And of course, Goldstone makes an excellent point that having guided datasets for beginner students is a great way for them to get their feet wet, "so that instead of being forced to fish for interesting phenomena in an empty ocean, students can follow a trajectory from exploration to valid argument." It is always helpful to have a guide, especially when learning something as new and complex as programming or any other kind of work with data.

Hamlet SNA

For my network analysis, I explored character interactions in Shakespeare's Hamlet. I chose this text because I had just finished reading it again for a project in my textual analysis class. Since I had refamiliarized myself with its layout, I figured I would be able to easily spot discrepancies in the output from Gephi, the software I used. However, everything came out as well as I could have hoped. I also knew there are many characters in Hamlet, so it would make for an entertaining social network analysis.

Since I don't know of any databases with Shakespearean play edge lists, I had to make my data by hand. This was the most time-consuming part. I went through the play and created an edge (a pair of characters) in Excel each time a character interacted with another, at most once per scene. In other words, it was a personal choice to mark each interaction only once per scene; it would have been too tedious to do so every time characters spoke to one another, as there are many long exchanges, and that route seemed to leave much more room for error. However, I did wonder how different my network analysis would have looked if I had done it that way. Perhaps in the future, if I have more time, I'll go back and try it, or perhaps there is something built into the software where I could count the number of interactions and plug that in as an edge weight, as opposed to having a binary pair in two Excel columns for each interaction.
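For what it's worth, here is a minimal sketch of that weighted approach, assuming a simple two-column CSV with one row per interaction (the file name and column names are hypothetical): counting repeated pairs with pandas produces a Weight column that Gephi's spreadsheet importer recognizes.

```python
# Sketch: turn a raw list of speaker/addressee pairs into a weighted
# edge list, so repeated interactions become an edge weight for Gephi.
import pandas as pd

# hypothetical input: one row per interaction, columns Source and Target
edges = pd.read_csv("hamlet_edges.csv")

weighted = (
    edges.groupby(["Source", "Target"])
         .size()
         .reset_index(name="Weight")
)

# Gephi's spreadsheet importer picks up Source, Target, and Weight columns
weighted.to_csv("hamlet_edges_weighted.csv", index=False)
print(weighted.head())
```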

One issue I had, which could be seen as a weakness of the data, was deciding exactly who was interacting with whom. There are scenes where, say, King Claudius is speaking: do I mark an edge with everyone present in the scene? It was a judgement call, but I didn't do it that way. I came to my own conclusion about who was likely being addressed (multiple people in some instances), and I never made an edge out of the entire cast of a scene. But it is possible that I missed some interactions by overlooking characters being addressed as I went through the text. This is where stronger textual analysis skills would have come in handy, so I wouldn't have had to do this manually, but I am a ways off from writing programs that could pull out this kind of data.

There are five acts in Hamlet, so I had six Excel worksheets: one for each act of five or so scenes, and a final composite of all the scenes to show the relations across the entire play. To be clear, I only made one edge per interacting pair per scene; each act's edges sit on their own worksheet, plus the final compounded list, which had 77 edges and 33 nodes. It did work out that you can see some thicker edges, which signify that a pair interacted more than once per act, so that is a cool thing to have illustrated.

Once my data was imported into Gephi, I spent a good deal of time choosing how I wanted to lay out my output. I couldn't leave the nodes in place as they were, because they were too close to one another to be visible or useful, so I had to drag nodes around to different areas of the Gephi workspace. I also had to play around with Gephi a bit before collecting my data to figure out how I wanted to lay out my worksheets in Excel. I realized there would be a learning curve in how Gephi reads sheets and spreadsheets. As far as I could tell, one workspace in Gephi equals one worksheet in Excel. So, I would be happy to learn how to combine workspaces, to avoid having to do that manually in my final worksheet as I ended up doing.
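I haven't found a way to merge workspaces inside Gephi itself, but one workaround would be to combine the per-act worksheets into a single edge list before importing. A minimal sketch, assuming the acts live in one workbook (a hypothetical hamlet_acts.xlsx) with one sheet per act:

```python
# Sketch: concatenate per-act edge-list worksheets into one file so
# Gephi only needs a single import for the whole play.
import pandas as pd

# sheet_name=None reads every worksheet into a dict of DataFrames
acts = pd.read_excel("hamlet_acts.xlsx", sheet_name=None)

combined = pd.concat(acts.values(), ignore_index=True)
combined.to_csv("hamlet_all_acts.csv", index=False)
print(f"{len(combined)} edges across {len(acts)} worksheets")
```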

I wouldn't say my analysis reveals anything too provocative that you wouldn't have understood just by reading Hamlet. It is pretty apparent who speaks the most and to whom. But it was fun and meaningful to take a literary text, mine it for data, and then play around with that data in Gephi. It has been a while since I last used this software, so it was good to reacquaint myself with it. One thing I never really mastered in Gephi is how to find different types of visualizations, such as the work of Micki Kaufman, who made many. Mine is pretty bare bones, but it goes to show what social network analysis looks like and does, in case anyone in our class had questions about it in terms of practicality. And I felt like a real digital humanist!

Act 1

Act 2

Act 3

Act 4

Act 5

All Acts

Apology

Before I say anything at all, let me just say how sorry I am to everyone in this class for my behavior on the blog this weekend. I wanted to say this in class, but we ran out of time and I had to leave for another class. Listening to your comments tonight (even though I had already come to the realization) solidified for me how wrong I was in dismissing Hannah's comment on sexism in the article. Even if I disagreed with the notion, which I don't, it still would have been wrong of me to attack any of you for expressing your interpretation of the text. That is not the spirit of academia, nor is it consistent with how I wish to be understood myself. Hearing so many of you say that you are concerned about expressing yourselves on the blog or in class, and knowing of few other examples of this sort of rhetoric in our discussions, I feel fully responsible for that concern.


Let me just say that all of you should consider me an ally. We all have our crosses to bear and carry our own baggage, in one way or another. I myself am labeled as a mentally disabled veteran by the VA and the Army, so I should know better than to minimize someone else’s plight. As I said in the blog responses, it is entirely vital that we are on guard as academics to find examples of marginalization in the work we observe. I am really appalled that I reacted that way, and not that it justifies it, but it came out of a totally selfish and bratty place. I had just posted my own blog on Drucker when I saw Hannah had posted hers. I suppose I took her comment as an attack on the article that I just worked hard to conceptualize in my own blog, and just responded irresponsibly. It is never ever right to overlook the experience of others in academia and the “real” world.


Sorry (to all of you, and to Hannah). Please do not associate my name with a fear of saying what you want in class or on the blogs. I will do much better going forward at containing my own emotional reactions, though I have to say I am a strongly opinionated and passionate person, which is just about the only reason I've gotten this far and through everything I have been through. But an academic blog is not the place for Facebook-style trolling, and I get that. PLEASE do not ever stop yourself from saying what you feel, and know that I do respect you; but with such passionate opinions and my particular emotional and mental circumstances, it gets hard for me at times to keep my mouth shut when I really need to do just that.


If you already think less of me, then I deserve that. But I promise to work harder at being fully open to everything all of you have to say, and I am learning so much from everyone's diverse experiences and perspectives. I hope we can all facilitate a pleasant environment for all of us to work in, and I will do my part to ensure that I do so especially. If you have individual concerns about the situation, please do not hesitate to reach out to me, even if you just want to tell me off, because I deserve that too. Thank you for a wonderful learning experience thus far, and I look forward to working together throughout the duration of our time in the program.

SpecLab and Drucker: Theoretical and Practical Design and Computation

Patrick Grady O’Malley

Like the littérateur concerned with their prose, a Digital Humanist seeks to express their humanistic interests with the digital toolkit provided by modernity and the laws of technology. How does one use HTML appropriately to express one's thoughts and vision? Which markup language is most appropriate for the task at hand? Not only must the language be spot on when creating a digital work (language here referring to that of a literary project), but so must its code. Understanding expansive (and growing) digital languages to put one's dream on a screen is the plight we all face as emerging Digital Humanists.

In order to successfully render a quality project, consider the rules of design that dictate the visual arts. Aesthetically pleasing work is mandatory not just in fine art or graphic design, but in our world too. "Features such as sidebars, hot links, menus, and tabs have become so rapidly conventionalized that their character as representations has become invisible. Under scrutiny, the structural hierarchy of information coded into buttons, bars, windows, and other elements of the interface reveals the rhetoric of display" [9]. This reiterates the importance of design choice, but it also brings to light the notion that certain design elements may easily be overlooked by the user as a commodity to be expected. In other words, our hard work in choosing how to visualize our project may barely be noticed, at least by those who aren't really looking. Nonetheless, these tedious decisions must be made for the relevance of the project and the objects it represents.

Users expect good-looking interfaces that are founded in functionality. When design is coupled with text to be explored, I can see how it would be easy to overlook the functionality of one versus the other, but I suppose that is the nature of collaboration among experts who bring different skill sets to a project. Only then can something worthwhile and exceptional be achieved. The coding of both the design and the text is a skill in and of itself, furthering the idea that "Humanists are skilled at complexity and ambiguity. Computers, as is well known, are not." [7] A computer will only do what you tell it to, so artistic and intellectual integrity remains with us, and for as much as people say that computers make people lazy, I'd say we all have good evidence that this is not the case in any sense.

With regard to all of these considerations, the author clearly takes the stance that attention to detail is of the essence. When discussing how to chunk or tag texts in XML, the author states that "Such decisions might seem trivial, hairsplitting, but not if attention to material features of a text is considered important." [13] In other words, while it may be tempting to leave certain elements alone, the finished project suffers and worthy reputations are diminished. This is certainly not the path I hope to travel, even though in my daily life I'm frequently looking for nice ways to cut a corner. But we do what we do for the expansion of scholarship, "art for art's sake!" so to speak.
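To illustrate the kind of hairsplitting decision Drucker means (a toy example of my own, loosely TEI-flavored, not one from the book): even tagging a single line of verse forces a choice about what the markup models, the speech or the metrical line, and each choice encodes a different idea of the text. Python here only serves to show the two encodings side by side:

```python
# Sketch: two defensible ways to tag the same line, each embodying a
# different model of the text (speech-centered vs. line-centered).
import xml.etree.ElementTree as ET

speech_centered = "<sp who='Hamlet'><l>To be, or not to be, that is the question:</l></sp>"
line_centered = "<l n='56'><speaker>Hamlet</speaker> To be, or not to be, that is the question:</l>"

for label, xml in [("speech-centered", speech_centered),
                   ("line-centered", line_centered)]:
    root = ET.fromstring(xml)
    print(label, "-> root element:", root.tag, "| attributes:", root.attrib)
```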

The modeling and structuring of a project is the true core of what is being visualized. "It is all an expression of form that embodies a generalized idea of the knowledge it is presenting." [16] Without a thorough intellectual plan that takes into account the many considerations of design ("visualization, psychology and physiology of vision, and cultural history of visual epistemology" [21]) and computation (statistics, coding, logic theory), the end result is not thorough: "the metatext is only as good as the model of knowledge it encodes." [15] I have heard of TEI and been exposed to it "under the roof" in a minimal sense, but I know little of the dictates of the "organization setting standards for knowledge representation" [14] in a broader sense, or really in any way that I could work with at this point in my early career. But I am aware there are rules of functionality that must be interpreted for appropriate text layout. As I broaden my skillset in text analysis, I'm sure the process will become more and more intuitive; however, I'd be lying if I said it wasn't a bit intimidating right now.

Are the objects we are creating tangible in nature? Or do they only stem from tangible products (books, paintings, song lyrics)? Is there value in discerning between the two? Is the output we create secondary to the primary source it comes from, or do our projects take on a new life of their own? "A discourse field is indeterminate, neither random, chaotic, nor fixed, but probabilistic. It is also social, historical, rooted in real and traceable material artifacts." [28] As Digital Humanists, without criteria or standards that dictate the work we do, or an underlying philosophy of a project, what are we left with? There is little point in even bothering to make anything if you can't summarize what its purpose is, intellectually, from the outset.

Every object has its place in history, and I believe it is our job to bring that historicity into modernity in order to illuminate the changing nature of the humanities over centuries. "We are far less concerned with making devices to do things—sort, organize, list, order, number, compare—than with creating ways to expose any form of expression (book, work, text, image, scholarly debate, bibliographical research, description, or paraphrase) as an act of interpretation (and any interpretive act as a subjective deformance)." [26] In other words, we are learning to read between the blurry lines of theory and practicality, and to create work that harbors the two amongst a host of scholarly concerns and quandaries.

Jameson and His Theory Explored

Patrick Grady O’Malley

 

For the text analysis assignment, I used two literary theory articles as my corpus: Jameson's "Third-World Literature in the Era of Multinational Capitalism" and a response to that very article by Ahmad, "Jameson's Rhetoric of Otherness and the 'National Allegory.'" I chose these two articles because I wanted to play around with theoretical texts in response to my latest blog post. In the future, I would consider looking for importable text files of the literary examples discussed in these articles, to compare and contrast what can be found in the theoretical work versus the literary.


One weakness I noticed right off the bat (or maybe it is my own ignorance of the tool): I would have liked to be able to load the two articles separately but look through the results comparatively. In other words, I wanted separate results in the same window or screen. Right now, I am just operating with two different browser windows of Voyant and looking back and forth between them.
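This is the kind of gap a little code can fill. A minimal sketch (my own workaround, not a Voyant feature), assuming the two articles are saved as plain text files with hypothetical names, printing their top words side by side:

```python
# Sketch: compare the most frequent words of two texts side by side,
# something I couldn't get Voyant to show me in a single window.
import re
from collections import Counter

def top_words(path, n=10, min_len=4):
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    # dropping very short words is a crude stand-in for a stopword list
    counts = Counter(w for w in words if len(w) >= min_len)
    return counts.most_common(n)

jameson = top_words("jameson.txt")  # hypothetical file names
ahmad = top_words("ahmad.txt")

for (jw, jc), (aw, ac) in zip(jameson, ahmad):
    print(f"{jw:<15}{jc:<6}{aw:<15}{ac}")
```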


The first notable thing about the results is that the word "world" is by far the most frequent word in both texts. This is not surprising, considering they are articles on world literature. Since Jameson's article is theoretical in nature and Ahmad's is a response to that work, the differences begin to pile up after that one similarity. In Jameson's article, the next four most frequent words are "cultural," "political," "social," and "new." These seem to sum up the theme of most of what he was saying throughout the article, which argued for the necessity of considering first-, second-, and third-world literature through different lenses. Ahmad's piece argued against that standpoint, and one of his major contentions was that world literature should be thought of as belonging to a single world. So his next four most frequent words are less thematic in nature and more supportive of his argument: "texts," "Jameson's," "experience," and "theory."

Jameson's word cloud

Ahmad's word cloud

As you can see in the word cloud images, there is some overlap in word distribution, namely "world" and "literature." But what I find striking is how Jameson's article (as per the word cloud) emphasizes the "social" and the "political" in a broader sense, with those two words appearing prominently. In Ahmad's critique, however, the social and political are less nuanced and more direct, with words such as "capitalist," "colonialism," and "Urdu." This makes sense considering that Ahmad's piece is a critique, and his argument is more specific and less generalized than Jameson's overarching work.


"World literature" is the most frequently appearing collocate pair in both articles. What is interesting is that Jameson's second most frequent collocate is the writer Lu Xun (and the possessive Lu Xun's), which, with the name and the possessive counted together, actually appears more often than "world literature." He spends a good deal of time discussing Lu Xun's work, but he also talks about other authors, so I am surprised the name and its possessive had such frequency.


One can also see in the collocates my point about Ahmad being more specific about particular theoretical and political issues. He has collocates near the top such as "experience/imperialism," "experience/colonialism," "capitalist/world," and "Jameson's/rhetoric." Meanwhile, Jameson's top collocates are more optimistic in nature: "world/culture," "great/great," and "world/intellectual." I suppose it could be argued, given Jameson's collocate "world/intellectual," that Jameson himself comes to personify that intellectual, given the appearance of his own name and "rhetoric" among Ahmad's top collocates.
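Collocates are also easy to pull outside Voyant if you want to tweak the window size or the filtering yourself. A rough sketch with NLTK (my own illustration, with a hypothetical file name):

```python
# Sketch: find top bigram collocations in a text, an alternative to
# Voyant's collocates panel with adjustable filtering.
import re
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

with open("jameson.txt", encoding="utf-8") as f:  # hypothetical file name
    tokens = re.findall(r"[a-z']+", f.read().lower())

finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(3)                      # drop pairs seen fewer than 3 times
finder.apply_word_filter(lambda w: len(w) < 4)   # crude stopword filter

for pair in finder.nbest(BigramAssocMeasures.pmi, 10):
    print("/".join(pair))
```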

Jameson's word bubble

Ahmad's word bubble

It is striking that Ahmad's word bubble is the one to include "literature," as that is really the focus of both articles. "Social," "cultural," "political," and "new" in Jameson's attest to his work being more descriptive and larger in scope than Ahmad's rebuttal, which reads the article closely and responds very specifically.


Overall, I can see how mining the text of theoretical works can be very useful in building a corpus of theory that can help with my research goals. Distant reading of theoretical works shows the overall nature of the work at a surface level and the prevalence of concepts and themes at a closer level.