
Final Project: DH in Prison

Here are excerpts from my project proposal. If you want to read the whole proposal let me know and I’ll share it with you.

Photograph from Vera Institute of Justice Reimagining Prison Report

In view of the devastating effects of mass incarceration in the United States, and in an effort to address the needs of incarcerated people as they rebuild their lives, I propose to design and develop an undergraduate college-level course in digital skills and digital humanities to be taught in prison. Although education is a powerful tool for successful reentry, only 35% of prisons in the United States currently offer college courses.[1] Digital humanities are hardly taught at all. An environmental scan of college programs in prisons shows a low occurrence of digital humanities courses in curricula, largely due to a scarcity of hard and soft infrastructure to support digital work and because incarcerated people are generally forbidden access to the internet. This gap, or digital divide, presents us with an opportunity to build a course that does not yet exist and to innovate by exploring ways to teach specific digital skills without an internet connection. By developing minimal computing software we will create course materials easily exportable to low-tech environments around the world. We will produce a course curriculum, syllabus, lesson plans with datasets, open source documentation and a project website.

This project comes at a time when the field of Digital Humanities is turning from seeing itself under a big tent to being under no tent. Teaching digital humanities and digital skills in prison is an opportunity to share the work we do in the field with a population that, on the one hand, given its disadvantages, will benefit greatly from having a digital edge and, on the other, will add new perspectives and contributions to the field, expanding its scope by bringing to the fore the interests and concerns of communities traditionally underrepresented in digital humanities.

Photograph from Vera Institute of Justice Reimagining Prison Report

Photograph by anonymous

[1]Bender, Kathleen. “Education Opportunities in Prison Are Key to Reducing Crime.” Center for American Progress, March 2, 2018.

See also Reimagining Prison Report. Vera Institute of Justice, October 10, 2018.

Digital Technologies in the Public University: More Money-Making or Access for All?

Reading the introduction to Promises and Perils of Digital History by Dan Cohen and Roy Rosenzweig last week, I was intrigued by their mention of neo-Luddite Marxist critic David Noble, which led me on a tangent that ties in with the pieces on pedagogy we’re reading this week.

Because universities have traditionally hierarchized individual authorities as sources of knowledge, and because DH aims to break this hierarchy down, I was interested to see that Cohen and Rosenzweig introduce Noble by aligning him with another neo-Luddite, conservative American historian Gertrude Himmelfarb, who, writing in 1996, didn’t like digital technologies because their equalizing power makes “no authority […] privileged over any other” [Himmelfarb qtd. in Cohen and Rosenzweig 1]. Although the equalizing power that Himmelfarb is afraid of is something DH embraces, Noble doesn’t engage with this but instead warns us against technology’s power to serve as a tool to mass-market higher education. In “Digital Diploma Mills: The Automation of Higher Education” (1998) Noble warns that

…the trend towards automation of higher education as implemented in North American universities [in 1998] is a battle between students and professors on one side, and university administrations and companies with “educational products” to sell on the other. It is not a progressive trend towards a new era at all, but a regressive trend, towards the rather old era of mass production, standardization and purely commercial interests. [Noble para 1]

Noble takes issue not with technology itself but with what capitalists use it for. In the 1980s and ‘90s, he writes, universities were the focus of “a change in social perception which has resulted in the systematic conversion of intellectual activity into intellectual capital and, hence, intellectual property” [Noble para 8]. Research, he argues, was being commodified, and knowledge turned into “proprietary products” that can be bought and sold. As these changes took place, universities were implicated “as never before in the economic machinery” [Noble para 9]. Universities began to allocate funds for science and engineering research – because research had become a commodity – at the expense of education. Then instruction too was commercialized and shaped in a corporate model where costs were minimized by replacing human teachers with computer-based instruction. I think back to the wave of MOOCs that attempted to capitalize on the growing global demand for university degrees and certification around 2012 and what a poor substitute these were for seminars. These were Mills indeed. Then came learning management systems, writes Noble, and educational maintenance organizations contracted through outside organizations. Noble expresses concern that faculty lost the rights to their work as they uploaded syllabi and course content to university websites only to see their scholarship outsourced (I trust that Noble’s concern about ownership of intellectual property is a concern that scholarship might not be freely shared, not a concern that faculty lose power over capital they ‘rightfully’ own). It was also unclear, writes Noble, who owned student educational records once students had uploaded their work to digital sites [para 30]. This is an important question, and I hope that FERPA protects student privacy in digital media better now than it did in 1998.
Having said that, I think of the query we recently began to write to Voyant about what it does with the corpora we upload, and the question appears to be just as pertinent now. Noble saw students as “no better than guinea pigs” in a massive money-making experiment gone totally wrong [para 30].

In 1998 it seemed to Noble that the technological revolution in higher education was all about corporations (including universities that had become de facto corporations) exploiting the capital that universities had come to contain. And “behind this effort are the ubiquitous technozealots who simply view computers as the panacea for everything, because they like to play with them” [Noble para 15]. Ha. A big problem with Noble’s neo-Luddite position is that he marks a division between people who use computers and those who don’t, as if these were two species apart. It’s important to keep in mind that Noble was writing in 1998. I wonder whether his position towards Digital Humanities would have changed by today (Noble died in 2010) in view of the turn towards free open source digital resources and in view of DH’s growing impact on scholarship, publishing, peer review, tenure and promotion, noted by Matthew Kirschenbaum in “What is Digital Humanities and What’s it Doing in English Departments?” (2012) and taken up by Stephen Brier in “Where’s the Pedagogy? The Role of Teaching and Learning in the Digital Humanities” (2012).

To get a sense of how present pedagogy was in digital humanities work in 2012, Brier looked for the key words pedagogy, teaching, learning and classroom in a summary of NEH grants for DH start-up projects from 2007 to 2010, and found hardly any instances of these key terms. This does not mean that no NEH start-up grants were devoted to pedagogical DH projects, writes Brier, but it does suggest that “these approaches are not yet primary in terms of digital humanists’ own conceptions of their work.” To start a conversation about the implications of digital technologies in higher education, Brier focuses on the City University of New York, the largest public university system in the United States and one which has grown tremendously over the past five decades in large part, writes Brier, thanks to its readiness to undertake radical experiments in pedagogy and open access.

One of these projects, the Writing Across the Curriculum (WAC) project, came into being to continue the mission that CUNY’s Open Admissions policy, dismantled by the CUNY Board of Trustees in 1999, aimed to accomplish, namely, to ensure that all high school graduates be able to enroll in college and get a college degree. WAC aims to do this by having writing fellows teach writing skills to students who need them. WAC brought digital technologies into the classroom in a natural way, writes Brier, because most writing fellows were interested in developing them.

Brier then points us towards The American Social History Project/Center for Media Learning/New Media Lab, which he co-founded in 1981 and which is deeply committed to using digital media for teaching history in high schools and at the undergraduate level. He goes on to discuss the Interactive Technology and Pedagogy Doctoral Certificate Program at the GC, the Instructional Technology Fellows Program at the Macaulay Honors College, Matt Gold’s “Looking for Whitman” project, the CUNY Academic Commons and the GC Digital Humanities Initiative. Now we also have the MA in Digital Humanities and many other initiatives that have come into being since 2012. Given the wealth of initiatives for educational reform developed with digital technologies within CUNY, I like to think that Noble would reverse his Marxist critique of digital technologies in the university were he alive today to witness the equalizing power for educational change digital technologies clearly provide.

A Network Analysis of our Initial Class Readings

This praxis project visualizes a network analysis of the bibliographies from the September 4th required readings in our class syllabus plus the recommended “Digital Humanities” piece by Professor Gold. My selection of topic was inspired by a feeling of being swamped by PDFs and links that were accumulating in my “readings” folder with little easy-to-reference surrounding context or differentiation. Some readings seemed to be in conversation with each other, but it was hard to keep track. I wanted a visualization to help clarify points of connection between the readings. This is inherently reductionist and (unless I’m misquoting here, in which case sorry!) it makes Professor Gold “shudder”, but charting things out need not replace the things themselves. To me, it’s about creating helpful new perspectives from which to consider material and ways to help it find purchase in my brain.

Data Prep
I copy/pasted author names from the bibliographies of each reading into a spreadsheet. Data cleaning (and a potential point for the introduction of error) consisted of manually editing names as needed to make all follow the same format (last name, first initial). For items with summarized “et al” authorship, I looked up and included all author names.
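
The cleaning step can be sketched in code. I did the edits by hand in the spreadsheet, so the function and sample names below are purely hypothetical illustrations of the rule I was applying:

```python
def normalize_author(raw):
    """Normalize a cited name to 'Last, F.' (last name, first initial)."""
    raw = " ".join(raw.split())  # collapse stray whitespace
    if "," in raw:               # already in 'Last, First ...' order
        last, rest = (p.strip() for p in raw.split(",", 1))
    else:                        # 'First ... Last' order
        parts = raw.split()
        last, rest = parts[-1], " ".join(parts[:-1])
    return f"{last}, {rest[0].upper()}." if rest else last

print(normalize_author("Matthew K. Gold"))  # → Gold, M.
print(normalize_author("Klein,  Lauren"))   # → Klein, L.
```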

I performed the network analysis in Cytoscape, aided by Miriam Posner’s clear and helpful tutorial. Visualizing helped me identify and fix errors in the data, such as an extra space causing two otherwise identical names to display separately.

The default Circular Layout option in the “default black” style rendered an attractive graph with the nodes arranged around two perfect circles, but unfortunately the labels overlapped and many were illegible. To fix the overlapping I individually adjusted the placement of the nodes, dragging alternating nodes either toward or away from the center to create room for each label to appear and be readable in its own space. I also changed the label color from gray to white for improved contrast and added yellow directional indicators, as discussed below. I think the result is beautiful.

Network Analysis Graph
Click the placeholder image below and a high-res version will open in a new tab. You can zoom in and read all labels on the high-res file.

An interactive version of my graph is available on CyNetShare, though unfortunately that platform is stripping out my styling. The un-styled, harder-to-read, but interactive version can be seen here.

Author nodes in this graph are white circles and connecting edges are green lines. This network analysis graph is directional. The class readings are depicted with in-bound connections from the works cited terminating in yellow diamond shapes. From the clustering of yellow diamonds around certain nodes, one can identify that our readings were authored by Kirschenbaum, Fitzpatrick, Gold, Klein, Spiro, Hockey, Alvarado, Ramsey, and (off in the lower left) Burke. Some of these authors cited each other, as can be seen by the green edges between yellow-diamond-cluster nodes. Loops at a node indicate the author citing themselves. Multiple lines connecting the same two nodes indicate citations of multiple pieces by the same author.
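
The bookkeeping behind such a directed multigraph can be sketched with a plain edge list. The graph itself was built in Cytoscape; the edges below are an invented subset just to show the idea:

```python
from collections import Counter

# Invented subset of the edge list; each edge runs from a cited author to
# the author of the class reading that cites them, matching the in-bound
# yellow-diamond convention in the graph. A repeated pair is a multiple
# citation; a self-pair is a self-citation loop.
edges = [
    ("Hockey", "Kirschenbaum"),
    ("Kirschenbaum", "Gold"),
    ("Spiro", "Gold"),
    ("Gold", "Klein"),
    ("Spiro", "Spiro"),        # self-loop: an author citing their own work
    ("Kirschenbaum", "Gold"),  # repeated pair: two Kirschenbaum pieces cited
]

in_degree = Counter(dst for _, dst in edges)   # citations received by a reading
out_degree = Counter(src for src, _ in edges)  # how often an author is cited

print(out_degree["Kirschenbaum"])  # → 2
```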

It is easy to see in this graph that all of the readings were connected in some way, with the exception of an isolated two-node constellation in the lower left of my graph. That constellation represents “The Humane Digital” by Burke, which had only one item (by J. Scott) in its bibliography. Neither Burke nor Scott authored or was cited in any of the other readings, so they have no connections to the larger graph.

The vast majority of the nodes fall into two concentric circle forms. The outer circle contains the names of those who were cited in only one of the class readings. The inner circle contains those who were cited in more than one reading, including citations by readings-authors of other readings-authors. These inner circle authors have greater out-degree connectedness and therefore more influence in this graphed network than do the outer circle authors. The authors with the highest degree of total connections among the inner circle are Gold, Klein, Kirschenbaum, and Spiro. The inner circle is a hub of interconnected digital humanities activity.
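
That inner/outer split is just a count of how many distinct readings cite each author. A small sketch over an invented edge list (same cited-author-to-reading-author direction as the graph):

```python
from collections import defaultdict

# Invented edges: (cited author, author of the citing class reading).
citations = [
    ("Spiro", "Gold"), ("Hockey", "Gold"),
    ("Spiro", "Klein"), ("Hockey", "Kirschenbaum"),
    ("Busa", "Spiro"),
]

cited_in = defaultdict(set)
for cited, reading in citations:
    cited_in[cited].add(reading)

# Inner circle: cited in more than one reading; outer circle: exactly one.
inner = {a for a, readings in cited_in.items() if len(readings) > 1}
outer = {a for a, readings in cited_in.items() if len(readings) == 1}
print(sorted(inner))  # → ['Hockey', 'Spiro']
```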

We can see that Spiro and Hockey had comparatively extensive bibliographies, but that Spiro’s work has many more connections to the inner-circle digital humanities hub. This is likely at least partly due to the fact that Hockey’s piece is from 2004, while the rest of the readings are from 2012 or 2016 (plus one to be published next year, in 2019). One possible factor: some of the other authors may not yet have been publishing related work when Hockey was writing her piece in the early 2000s. Six of our readings were from 2012, the year of Spiro’s piece. Perhaps a much richer and more interconnected conversation about the digital humanities developed at some point between 2004 and 2012.

This network analysis and visualization is useful to me as a mnemonic aid for keeping the readings straight. It can also serve to point a student of the digital humanities toward authors they may find it useful to read more of or follow on Twitter.

A Learning about Names
I have no indication that this is or isn’t occurring in my network analysis, but in the process of working on this I realized that any name change, such as one due to a change in marital status, would make an author appear as two different people. This predominantly affects women and, without a corrective in place, could make them appear less central in graphed networks.

There are instances where people may have published with different sets of initials. In the bibliography to Hockey’s ‘The History of Humanities Computing,’ an article by ‘Wisbey, R.’ is listed just above a collection edited by ‘Wisbey, R. A.’ These may be the same person but it cannot be determined with certainty from the bibliography data alone. Likewise, ‘Robinson, P.’ and ‘Robinson, P. M. W.’ are separately listed authors for works about Chaucer. These are likely the same person, but without further research I cannot be 100% certain. I chose to not manually intervene and so these entries remain separate. It is useful to be aware that changing how one lists oneself in authorship may affect how algorithms understand the networks to which you belong.
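
A simple heuristic can at least flag such candidate duplicates for human review: same surname, with one entry’s initials a prefix of the other’s. A sketch (it flags possibilities only; it cannot confirm identity):

```python
def possible_same_author(a, b):
    """Flag entries like 'Wisbey, R.' and 'Wisbey, R. A.' as possible
    duplicates: surnames match and one set of initials is a prefix of
    the other. A heuristic for human review, not a confirmation."""
    def split(name):
        last, _, initials = name.partition(",")
        return last.strip().lower(), initials.replace(".", "").split()
    last_a, ini_a = split(a)
    last_b, ini_b = split(b)
    if last_a != last_b:
        return False
    shorter, longer = sorted([ini_a, ini_b], key=len)
    return longer[:len(shorter)] == shorter

print(possible_same_author("Wisbey, R.", "Wisbey, R. A."))        # → True
print(possible_same_author("Robinson, P.", "Robinson, P. M. W.")) # → True
```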

Potential Problems
I would like to learn to what extent the following are problematic and what remedies may exist. My network analysis graph:

  • Doesn’t distinguish between authors and editors
  • Splits collaborative works apart into individual authors
  • Doesn’t include works that had no author or editor listed

Postscript: Loose Ties to a Current Reading
In “How Not to Teach Digital Humanities,” Ryan Cordell suggests that introductory classes should not lead with “meta-discussions about the field” or “interminable discussions of what counts or does not count [as digital humanities]”. In his experience, undergraduate and graduate students alike find this unmooring and dispiriting.

He recommends that instructors “scaffold everything [emphasis in the original]” to foster student engagement. There is no one-size-fits-all in pedagogy. Even for the same student, learning may happen more quickly, or information may be stickier, if it is presented in context or in more than one way. Providing multiple ways into the information that a course covers can lead to good student learning outcomes. It can also be useful to provide scaffolding for next steps, or for going beyond the basics, for students who want to learn more. My network analysis graph is not perfect, but having something as a visual reference is useful to me and likely to other students as well.

Cordell also endorses teaching how the digital humanities are practiced locally and clearly communicating how courses will build on each other. This can help anchor students in where their institution and education fit in with the larger discussions about what the field is and isn’t. Having gone through the handful of assigned “what is DH” pieces, I look forward to learning more about the local CUNY GC flavor in my time as a student here. This is an exciting field!


Update 11/6/18:

As I mentioned in the comments, it was bothering me that certain authors who appeared in the inner circle rightly belonged in the outer circle. These were authors who were cited once in the introductions to the Debates in the Digital Humanities volumes co-authored by M. K. Gold and L. Klein. Due to a challenge depicting co-authorship, M. K. Gold and L. Klein appear separately in the network graph, so these authors appeared to be cited twice (once each by Gold and Klein) rather than the one time they were cited in the pieces co-authored by Gold and Klein.

I have attempted to clarify the status of those authors in the new version of my visualization below by moving them into the outer ring. It’s not a perfect solution, as each author still shows two edges instead of one, but it does make the visualization somewhat less misleading and clarifies who are the inner circle authors.


Text mining praxis: mining for evidence of course learning outcomes in student writing

I’ve been hearing more and more about building corpora of student writing of late, and while I haven’t actually consulted any of these, I was happy to have the opportunity to see what building a small corpus of student writing would be like in Voyant. I was particularly excited about using samples from ENGL 21007: Writing for Engineering which I taught at City College in Spring 2018, because I had a great time teaching that course and know the writing samples well.

Of the four essays written in ENGL 21007 I chose the first assignment, a memo, because it is all text (the subsequent assignments contain graphs, charts and images and I wasn’t sure how these would behave in Voyant). I downloaded the student essays from Blackboard as .docx and redacted them in Microsoft Word. This was a bad move because Microsoft Word 365 held on to the metadata, so student email accounts showed up when I uploaded my corpus to Voyant. I quickly removed my corpus from Voyant and googled “how do I remove the metadata,” then decided that it would be faster to convert all the .docx files to .pdf and redact them with Acrobat Pro (I got a one-week free trial), so I did this, zipped everything up, and voilà.

22 Essays Written by Undergraduate Engineering Majors at City College of New York, Spring 2018

I love how Voyant automatically saves my corpus to the web. No registration, no logging in and out. There must be millions of corpora out there.

I was excited to see how the essays looked in Voyant and what I could do with them there. I decided to get the feeling of Voyant by first asking a simple question: what did students choose to write about? The assignment was to locate something on the City College campus, in one’s community or on one’s commute to college that could be improved with an engineering solution.

Cirrus view shows most frequently used words in 22 memos written by engineering majors in ENGL 21007

What strikes me as I look at the word cloud is that students’ concern with “time” (61 occurrences) was only slightly less marked than the reasonable – given that topics had to be related to the City College campus – concern with “students” (66 occurrences). I was interested to see that “escalators” (48 occurrences) got more attention than “windows” (40 occurrences), but I think we all felt strongly about both. “Subway” (56 occurrences) and “MTA” (50 occurrences), which refer to the same thing, were a major concern. Uploading samples of student writing and seeing them magically visualized in a word cloud summarizes the topics we addressed in ENGL 21007 in a useful and powerful way.
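
Under the hood, Cirrus is essentially a stop-worded frequency count. A toy version over two invented sentences (Voyant’s actual stop list is much larger):

```python
import re
from collections import Counter

# Two invented sentences standing in for the 22 memos; Voyant's Cirrus
# performs the same kind of stop-worded count over the real corpus.
corpus = [
    "The escalators near the subway entrance waste students time.",
    "Broken windows and slow escalators frustrate students every day.",
]

stopwords = {"the", "and", "near", "every"}
tokens = re.findall(r"[a-z]+", " ".join(corpus).lower())
freq = Counter(t for t in tokens if t not in stopwords)

print(freq.most_common(2))  # → [('escalators', 2), ('students', 2)]
```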

Secondly, and in a more pedagogical vein, I wanted to see how Voyant could be used to measure the achievement of course learning outcomes in a corpus of student writing. This turned out to be a far more difficult question than my first, simple one of what students chose to write about. The challenge lies in figuring out what query will tell me whether the eight English 21007 course learning outcomes listed on the CCNY First Year Writing Program website were achieved through the essay assignment that produced the 22 samples I put in Voyant, and whether evidence of having achieved or not achieved these outcomes can be mined from student essays with Voyant. Two of the course learning outcomes seemed more congenial to the memo assignment than the others. These are:

“Negotiate your own writing goals and audience expectations regarding conventions of genre, medium, and rhetorical situation.”

“Practice using various library resources, online databases, and the Internet to locate sources appropriate to your writing projects.”

To answer the question of whether students were successful in negotiating their writing goals would require knowing what their goals were. Not knowing this, I set this part of the question aside. Audience expectations were easier. In the assignment prompt I had told students that the memo had to be addressed to the department, office or institution that had the power to approve the implementation of proposed engineering solutions, or the power to move engineering proposals on to the department, office or institution that could eventually approve them. There are, however, many differently named addressees in the student memos I put in this corpus. Furthermore, addressing the memo to an official body does not by itself achieve the course learning outcome.

My question therefore becomes: what general conventions of genre, medium and rhetorical situation do all departments, offices or institutions expect to see in a memorandum, and how do I identify these in a query? What words or what combinations of words constitute memo-speak? To make things worse (or better :)!), I had told students that they could model their memos on the examples of memos I gave them or, if they preferred, they could model them differently so long as they were coherent and good. I therefore cannot rely on form to measure conventions of genre. I’m sorry to say I have no answers to my questions as of yet; I’m still trying to figure out how to ask my corpus whether students negotiated audience expectations regarding conventions of genre, medium and rhetorical situation (having said this, I think I can rule out medium, because I asked students to write the memo in Microsoft Word).

The second course learning outcome I selected – that students practice using library resources, online databases and the internet – strikes me as more quantifiable than the first.

Only one of 22 memos contains the words “Works Cited”

Unfortunately, I hadn’t required students to do research for the memos I put in the corpus. When I looked for keywords that would indicate that students had done some research, Voyant came up with one instance of “Bibliography,” one instance of “Works Cited” and no instances of “references” or “sources.” The second course learning outcome I selected is not as congenial to the memo assignment – or the memo assignment not as congenial to that course learning outcome – as I first thought.
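
Outside Voyant, this particular check is easy to script over the raw text files. A sketch with invented stand-ins for the memo texts:

```python
# Check each essay for markers of research/citation practice.
# `essays` is a hypothetical stand-in for the 22 redacted memo texts.
essays = {
    "memo01": "…proposal text… Works Cited\nMTA. Subway ridership data…",
    "memo02": "…proposal text citing nothing…",
}

markers = ("works cited", "bibliography", "references", "sources")

def has_research_marker(text):
    lower = text.lower()
    return any(m in lower for m in markers)

with_research = [name for name, text in essays.items()
                 if has_research_marker(text)]
print(with_research)  # → ['memo01']
```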

I tried asking Veliza the bot for help in figuring out whether course learning outcomes had been achieved (Veliza is the sister of Eliza, the famous chatbot psychotherapist, and says she isn’t too good at text analysis yet). Veliza answers questions with expressions of encouragement or more questions, but she’s not much help. The “from text” button in the lower right corner of the Veliza tool is kind of fun because it fetches sentences from the text (according to what criteria I haven’t a clue), but conversation quickly gets surreal because Veliza doesn’t really engage.

In conclusion, I am unsure how to use text mining to measure course learning outcomes in student writing done in 200-level courses. I think that Voyant may work better for measuring course learning outcomes in courses with more of an emphasis on vocabulary and grammar, such as EAL. It’s a bit of a dilemma for me, because I think that the achievement of at least some course learning outcomes should be measurable in the writing students produce in a course.