An opinion piece in the NYT:
Manal al-Sharif, co-founder and leader of the #Women2Drive movement and founder and CEO of Women2Hack Academy, is author of the memoir “Daring to Drive: A Saudi Woman’s Awakening.”
As a Saudi Arabian woman who has lived most of her life under one of the last surviving absolute monarchies in the world, the closest I have come to experiencing democracy has been in challenging the status quo through my tweets.
In 2016 a lot of Americans felt that way. Donald Trump's victory was maybe more Arab Spring than the Arab Spring itself; way less foreign intervention, I'll bet.
For activists and citizen journalists in the Arab world, social media has become a powerful way to express dissent, to disrupt and to organize. Digital activism, however, comes at a high price: The very tools we use for our cause can be — and have been — used to undermine us. While social media platforms were designed as a way to connect people online, activists used them as technological tools of liberation, devising creative hacks to defy state censorship, connect with like-minded people, mobilize the masses, influence public opinion, push for social change and ignite revolutions. With these opportunities came risks: The more we posted and engaged, the more vulnerable we became, as our aggregated data was weaponized against us.
Likewise, after the catastrophe of Trump, the socials and old media rally to shut down dissent by classifying our arguments as Hate; by weaponizing our words against us. Regardless of truth, or of genuine "hate" for that matter.
Over time, such data can be used to build an accurate picture not only of users’ preferences, likes and behaviors, but also of their beliefs, political views and intimate personal details; things that even their family and friends may not know about them.
It strikes me that “build[ing] an accurate picture” of “beliefs, political views” is precisely one of the things those combating Hate Online are trying to do to right wingers.
Attempts to censor right-wing speech online increasingly look to individuals' histories and associations, likes and links, since systems that flag word combinations in actual speech can always be dodged with creative phrasing, as this article laments:
To try and answer that, we need to step way, way back and first talk about bigotry not as an algorithm, but as social entity. Who exactly are bigots and what makes them tick, not by dictionary definition one would expect to find in a heavily padded college essay, but by practical, real world manifestations that quickly make them stand out. They don’t just use slurs, or bash liberal or egalitarian ideas by calling them something vile or comparing them to some horrible disease, which means the bigots in question will quickly catch on to how they’re being filtered out and switch to more subtle or confusing terms, maybe even treating it like a game.
White supremacists keep behaving in un-hateful fashion, unfortunately. But when did "hate" become forbidden? We lapsed, in a fit of absentmindedness, from robust freedom of speech into a bizarre system ostensibly censoring the emotion "hate".
Just note how Google’s algorithm goes astray when given quotes light on invective but heavy on the bigoted subtext and what’s known in journalist circles as dog whistles. Sarcasm adds another problem. How could you know on the basis of one comment that the person isn’t just mocking a bigot by pretending to be them, or conversely, mocking those calling out his bigoted statements? Well, the obvious answer is that we need context every time we evaluate a comment because two of the core features of bigotry are sincerity and a self-defensive attitude. Simply put, bigots say bigoted things because they truly believe them, and they hate being called bigots for it.
Google's "harassment tool" did not impress. Richard Spencer's "at the end of the day, America belongs to white men" somehow scored only 29 percent toxic on their meter rating speech from "healthy" to "toxic" (why not "unhealthy"? is this the difference between hate and Hate?). The disappointment with which censorship proponents in the media greet these programs, and the way they go about testing them (plugging in crimespeak quotes to see if they pass), reveals comically that it's content, not hate, they're after.
If they have their way, perhaps after Trump we can expect internet censorship to focus on individuals and their associations, to just choke off the "hate" at the source (or, counter-intuitively, maybe they'll let up, no longer in a panic because of him).
I fear we'll look back on this already repressive time as the era when the free speech cops thought they could get away with writing tickets on the street, instead of kicking down your door.