As we discussed in class on Nov 27th, Simone Browne’s chapter “Race and Surveillance” brings us back to the task of constructing, or weaving, an infrastructure for the equal distribution of social materials, tools, and knowledge among everyone engaged in building a community. Browne’s cautious methodology (which, I sense, is “rationally” and scientifically shaped by her scholarly training) treats technology and its operation as “marginalizing surveillance,” focusing on its unequal “racializing outcomes” at the level of governmental control of populations. Rather than widening her scholarly attention to “marginalizing technology” as a lens onto the subconscious layers of science and technology, and onto the ways they perceive every social agent and actor, along with their socio-geographies, through the lines of race and class, Browne examines the factual elements of “racializing outcomes” by addressing how governmental agencies use photography (e.g., mug shots), collected data, biometrics, and so forth. The correlations between technology and racializing surveillance, however, I believe permeate the everyday suspicion, paranoia, and interpersonal (and communal) interaction that all of us in the class shape and control under a logic of safety, sanity, and self-care. As much as we live with class anxieties and pursue idealized or standardized values of well-being, we tend to categorize others who do not promote those values efficiently as “shameful” or even “dangerous” beings and to push them away from our boundaries of living and communicating. Though I don’t want to sound emotional or metaphorical, I would say that the technological, cognitive, and social mechanism of surveillance endlessly reproduces shame, so long as anxieties about class and racial degeneration persist in a system that uses marginalizing technologies and others social differences along racial lines.
After reading Browne’s chapter, which centers on state surveillance and its technology and history, I wanted to ask what happens when these racializing (and classifying) technologies are used in the private sector, in agreement with governmental policies that justify surveillance technologies and media as means of perceiving the “truth” of individuals at the expense of the privacy and dignity of selected individuals and communities. This question reminded me of an article I read earlier this year about a surveillance technology that groups, and potentially criminalizes, neighbors of color: http://bostonreview.net/race-law-justice/clarence-harlan-orsi-hoverboarding-while-black (Please take a look when you have a moment.) My question is: if contemporary modes of self-care and well-being are promoted by such marginalizing technology, and “citizens” support it, how would we structure and operate the digital humanities as a form of resistance to it? Although Browne’s chapter and the other readings didn’t discuss it in particular, I believe that bell hooks’ argument for the “oppositional gaze” still offers an alternative way for us to approach DH differently from governmental (as well as privatized) technologies and networks (across virtual and actual socio-geographies). How can we build DH platforms where the marginalized can “look back” at system-builders and agencies without fear of being punished or stigmatized? As we discussed in class, learning from the narratives, storytelling, and experiences of “repair” on the side of the marginalized within our social infrastructure would be a first step.
But I believe this will require interdisciplinary work that even provides legal aid to social actors and agents so they can counteract the dominant use of technology and its networks for the governmentalization of populations, since such oppositional acts of resistance can be promptly criminalized under the norms of safety in the modern societies we now live in. I don’t have a concrete answer here, but I think binding the purpose of DH to the “oppositional gaze” under appropriate legal support is important, because reparative or resistant narratives can easily dissipate under the master narrative of safety and development. (For example, if activist hackers deliberately create glitches to oppose marginalizing technologies, how will they avoid punishment?)