post-class response to Simone Browne’s “race and surveillance”

As we discussed in class on Nov 27th, Simone Browne’s chapter “Race and Surveillance” brings us back to the task of constructing, or weaving, an infrastructure that distributes social materials, tools, and knowledge equally among everyone engaged in the making of a community. Browne’s cautious methodology (which is, I sense, “rationally” and scientifically processed for her scholarship) treats technology and its operation as “marginalizing surveillance” and focuses on its unequal “racializing outcomes” at the level of governmental control of populations. Rather than widening her scholarly attention to “marginalizing technology” as a lens onto the subconscious layers of science and technology, the ways it perceives every social agent and actor, as well as their socio-geographies, along lines of race and class, Browne examines the factual elements of those “racializing outcomes” by addressing how governmental agencies use photographs (e.g., mug shots), collected data, biometrics, and so forth.

However, I believe the correlations between technology and racializing surveillance also permeate the everyday suspicion, paranoia, and interpersonal (and communal) interaction that all of us in the class shape and control in the logic of safety, sanity, and self-care. Insofar as we live with class anxieties and pursue idealized or standardized values of well-being, we tend to categorize others who do not promote those values efficiently as “shameful” or even “dangerous” beings and to push them away from our boundaries of living and communicating. Without wanting to sound emotional or metaphorical, I would say that the technological, cognitive, and social mechanism of surveillance is endlessly shaming, so long as anxieties about class and racial degeneration do not diminish in a system that uses marginalizing technologies and others social differences along racial lines.

After reading Browne’s chapter, which centers on state surveillance, its technology, and its history, I wanted to ask what happens when these racializing (and classifying) technologies are taken up by the private sector, in agreement with governmental policies that justify using surveillance technologies and media to perceive the “truth” of individuals at the expense of the privacy and dignity of selected individuals and communities. The question reminded me of an article I read earlier this year about surveillance technology that groups, and potentially criminalizes, neighbors of color: http://bostonreview.net/race-law-justice/clarence-harlan-orsi-hoverboarding-while-black (please take a look when you have a moment).

My question is: if contemporary forms of self-care and well-being are promoted by such marginalizing technology, and “citizens” support it, how would we structure and operate the digital humanities as a form of resistance to it? Although Browne’s chapter and the other readings did not discuss it in particular, I believe bell hooks’ argument for the “oppositional gaze” still functions as an alternative way for us to approach DH differently from governmental (as well as privatized) technologies and networks, across both virtual and actual socio-geographies. How can we build DH platforms where the marginalized can “look back” at system-builders and agencies without fear of being punished or stigmatized? As we discussed in class, learning from the narratives, storytelling, and experiences of “repair” on the side of the marginalized within the social infrastructure would be a first step. But I believe this will also require interdisciplinary work, including legal aid for social actors and agents who counteract the dominant use of technology and its networks for the governmentalization of the population, since such oppositional acts of resistance can be promptly criminalized by the norms of safety in the modern societies we now live in. I don’t have a concrete answer here, but I think binding the purpose of DH and the “oppositional gaze” together under appropriate legal support is important, precisely because reparative or resistant narratives can easily be dissipated under the master narrative of safety and development. (For example, if activist hackers deliberately create glitches to oppose marginalizing technologies, how are they going to avoid punishment?)

4 thoughts on “post-class response to Simone Browne’s “race and surveillance””

  1. Nancy Foasberg

    Hyemin, what a great post — and thanks for sharing that Boston Review article! I think your point about “shaping and controlling [interaction] in the logic of safety, sanity, and self-care” is a really smart one. When the technologies that enable surveillance and encourage this kind of suspicion, paranoia, and racism are the very same ones on which social connections are formed and enacted, AND the ones on which people hope to share their critiques of that surveillance, it becomes really difficult to challenge them. Can I engage in an “oppositional gaze” on Facebook or (shudder) NextDoor without just being banned?
    I love your idea of creating DH platforms that could mount some kind of challenge to the logics of the Facebooks of the world. On the other hand, oppositional or “alternative” social networks seem to arise every year, only to be crushed under the difficulty of administering them and the domination of their audience by the bigger networks.

    1. Jean ʒɑ̃ (they/them) Post author

      Thank you for the comment, Nancy. I’d like to know more about those “alternative” social network platforms. I think they need funds to hire people to manage them, as well as supportive policies. (An ephemeral approach to alternative networks might still be poetically utopian and suggest something more subtle or perceptive, but resistance, or an oppositional gaze, also needs infrastructural duration to have an impact on the social construct of living together.)

      1. Nancy Foasberg

        I’m not sure they’d necessarily fit all your requirements, but social networks like Diaspora, Ello, and currently Mastodon and Pillowfort (they haven’t fully launched yet) were all born out of a desire to get away from the bigger, commercial social networks. You’re totally right about infrastructure: there’s a lot of work that goes into building it, iterating on it, and managing it. But it’s also really difficult to build a network, because everyone wants to be where everyone else is, so new networks are always going to be at a disadvantage. I think it says something that a lot of these networks emerged at moments when users were disgusted with some existing network; but if disgust were enough to kill a social network, Facebook would be long gone by now.
        I don’t know the answer to this problem. It’s a difficult one.

        1. Jean ʒɑ̃ (they/them) Post author

          Thank you for the further thoughts and information. Yes, it’s very challenging. I suspect people also want convenience and (relatively) immediate rewards from networking, so, as you observed, it’s hard to abandon old “disgusting” platforms like Facebook.
