Yet another debate over Facebook’s control over its users’ content simmered this week, though it was a bit different from the privacy flaps of the past. A coalition of feminist groups called Women, Action, and the Media wrote an open letter to Facebook last week urging it to remove content that trivializes or glorifies violence against women, noting that Facebook already moderates what it considers hate speech and pornographic content.
The groups also campaigned to Facebook’s advertisers, succeeding in getting several of them to pull their advertising until Facebook took some action. Facebook ultimately responded by posting a statement saying it hadn’t policed gender-related hate speech as well as it should have and vowing to take several steps to more closely moderate such content. The New York Times has a good, quick summary tying together the advertiser campaign and Facebook’s response.
Valleywag’s Sam Biddle argued that all Facebook did was try to placate the protesters rather than commit to any real action, while Forbes’ Kashmir Hill and Reuters’ Jack Shafer noted that Facebook probably didn’t do this out of any morally consistent concern over content, but simply because of advertiser pressure. Hill concluded that “the procedure appears to be that they will draw the line when advertisers start complaining to them,” and Shafer argued that Facebook has only pushed this discourse underground, further away from the voices of reason and shame.
And while everyone seemed to agree that Facebook is well within its rights to police speech on its own platform (and that it’s clamping down on a particularly heinous form of speech in this case), they also wondered about the precedent it sets. Mathew Ingram of GigaOM wondered about the slippery slope of what Facebook considers hate speech.