I Guess We Should Talk about Tumblr

We talked last week about the oppressive effects of a lot of media-based technology, so I guess it’s a convenient time for Tumblr to have announced their latest policies.

For those whose news stream may differ from mine, what happened is this: Tumblr was removed from Apple’s App Store over child pornography on the platform (a genuine problem). They said they were planning to change their policies, and yesterday they announced the new policy, which includes a ban on “female-presenting nipples” (???) and vaguely defined “sex acts,” with exceptions for political or medical situations and fine art (who gets to decide what’s fine art and what’s just smutty art? Tumblr does, apparently). There are lots of news stories about it; this is one: https://www.latimes.com/business/technology/la-fi-tn-tumblr-adult-content-ban-20181203-story.html

In any case, there are several interesting features of this situation in relation to our readings and discussions:

  1. The enforcement mechanism here is Apple. These changes don’t come in response to the needs of people using the platform. There have been plenty of complaints from Tumblr users about the content that’s made available there, but those complaints have apparently been ineffective (judging by the fact that this is even happening). The changes also don’t come in response to any kind of official regulation. Noble asks a lot of questions about how online companies are to be held accountable for the content they present; it appears that the answer, at present, is via rules set by other kinds of corporate gatekeepers (advertisers, of course, also wield a lot of power in this area).
  2. Tumblr is relying on algorithms to do this work. These algorithms aren’t very accurate (many users have already posted examples of content that has been flagged for no obvious reason at all), but they’re probably a lot faster than humans, and Tumblr apparently has so much trust in them that they’re removing Safe Mode (another algorithmic tool, doubtless with many problems of its own) to rely on this general filtering exclusively. Noble, again, has a lot to say about how relying on algorithms to identify acceptable and unacceptable content can shield a company from accountability; that seems to be pretty clearly what is happening here. They’ve also muted certain hashtags entirely; there isn’t a lot of subtlety to their approach (see the sketch after this list for a sense of how blunt that kind of filtering is).
  3. The focus on pornographic content, especially when that content is so poorly defined, puts a particular kind of bracket around what’s considered acceptable and unacceptable. Child pornography should obviously be removed (and prosecuted), but the particular framing of sexual content in this (and similar) policies tells us something about the platform’s priorities. For instance, Tumblr is not trying to eliminate racist content from the platform; many users have posted examples of white supremacist blogs that show up quite easily in a search. Meanwhile, by framing the policy around what is and isn’t considered pornographic, the context and purpose of a blog is not considered, so they’re not specifically going after pornbots, which are a persistent nuisance on the site. And we know from experience with other kinds of “filtering” used on the internet that policies like this often target LGBTQ information, because they’re created with the unspoken assumption that sexual minorities are somehow inherently sexualized. Sex workers who use Tumblr are also likely to be targeted (which is in line with a “no sexual content” policy, and which I can understand from a corporate point of view, but which is dangerous to sex workers who’ve been using the platform as a relatively safe place to do their work). So this is done in a way that is more harmful to the more vulnerable users of Tumblr. One of the things I really appreciated about Noble’s analysis was that she was critical of pornographic content specifically as it sexualizes marginalized people in an exploitative way; she writes that “[m]arginalized and oppressed people are linked to the status of their group and are less likely to be afforded individual status and insulation from the status of the group with which they are identified” (26). She is interested in how this reflects from online pornographic content into the broader society, but here’s an example where we can see that it also reflects back into how work created by members of marginalized groups is treated when internet companies decide to take a more active role.
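For anyone curious what “not a lot of subtlety” looks like in practice, here is a minimal, purely hypothetical sketch of tag- and keyword-based muting. Nothing here comes from Tumblr’s actual system; the tag lists and function name are made up for illustration, just to show why context-free matching over-blocks health, art, and everyday posts while missing bad actors who avoid the obvious words.

```python
# Hypothetical sketch of blunt tag/keyword filtering -- NOT Tumblr's real system.

# Made-up list of tags a platform might decide to mute outright.
MUTED_TAGS = {"nsfw", "porn", "nude"}

# Made-up list of words that trigger a flag anywhere in the post text.
FLAGGED_WORDS = {"nude", "sex"}

def is_hidden(post_tags, post_text):
    """Return True if a post would be hidden by this naive filter."""
    # Any muted tag hides the post, regardless of context or intent.
    if any(tag.lower() in MUTED_TAGS for tag in post_tags):
        return True
    # Substring matching has no sense of context, so "Sussex" or
    # "nude photography in fine art" gets caught just like actual porn.
    text = post_text.lower()
    return any(word in text for word in FLAGGED_WORDS)

# A health/art-history post and an everyday post both get hidden,
# while a pornbot that avoids the listed words slips through.
print(is_hidden(["health"], "Self-exams and nude photography in fine art"))  # True
print(is_hidden(["daily life"], "Moving to Sussex next month!"))             # True
print(is_hidden(["aesthetic"], "DM me for spicy content ;)"))                # False
```

Real systems are more elaborate than this, of course, but the basic failure mode (flagging by surface features with no understanding of context or purpose) is exactly what users are reporting.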

It’s not clear what this will do to Tumblr. A lot of people are comparing it to “Strikethrough,” which happened on LiveJournal in 2007, and I don’t know enough about LiveJournal to say to what extent that contributed to the decline of that site.

But it’s interesting. I have lots of questions about who CAN be trusted to make decisions about how search and social media work; corporate platforms aren’t making a great argument for themselves as the appropriate guardians for this sort of thing.

(aaaannd I should note that as I’m choosing the tags for this post, I’m making decisions based on how it’ll be algorithmically “seen.” If I tag it “pornography” because it talks about that kind of content, will it disappear from the blog and/or from Google? Have I already used the word too many times in this post?? Remember how we talked about the Panopticon in class???)