Free speech in the filter age

Opinion | Tuesday, 20 February 2018, 15:38
By: Times News Service

Germany’s Network Enforcement Act – under which social-media platforms like Facebook and YouTube can be fined up to €50 million ($63 million) for failing to delete “obviously illegal” posts within 24 hours of receiving a notification – has been controversial from the start.
After it took full effect in January, there was a tremendous outcry, with critics from across the political spectrum arguing that it amounts to an invitation to censorship. The government, they protested, was relinquishing its powers to private interests.
Such fears are misplaced. To be sure, Germany’s Netzwerkdurchsetzungsgesetz (or NetzDG) is the strictest regulation of its kind in a Europe that is growing increasingly annoyed with America’s powerful social-media companies. And critics do have some valid points about the law’s weaknesses. But the possibilities for free expression will remain abundant, even if some posts are deleted mistakenly.
The truth is that the law sends an important message: democracies won’t stay silent while their citizens are exposed to hateful and violent speech and images – content that, as we know, can spur real-life hate and violence.
Refusing to protect the public, especially the most vulnerable, from dangerous content in the name of “free speech” actually serves the interests of those who are already privileged, beginning with the powerful companies that drive the dissemination of information.
Speech has always been filtered. In democratic societies, everyone has the right to express themselves within the boundaries of the law, but no one has ever been guaranteed an audience. To have an impact, citizens have always needed to appeal to – or bypass – the “gatekeepers” who decide which causes and ideas are relevant and worth amplifying, whether through the media, political institutions, or protest.
The same is true today, except that the gatekeepers are the algorithms that automatically filter and rank all contributions. Of course, algorithms can be programmed however their owners like, meaning that they could place a premium on the qualities prized by professional journalism: credibility, intelligence, and coherence.
But today’s social-media platforms are far more likely to prioritize potential for advertising revenue above all else. So the noisiest are often rewarded with a megaphone, while less polarizing, less privileged voices are drowned out, even if they are providing the smart and nuanced perspectives that can truly enrich public discussions.
If the algorithm doesn’t do the job of silencing less privileged voices, online trolls often step in, directing hateful and threatening speech at whomever they choose. Women and minorities are particularly likely to be victims of online harassment, but anyone may be targeted. The German blogger Richard Gutjahr, for example, became the object of conspiracy theories and the target of intense harassment after being present at two terrorist attacks within two weeks of each other.
Victims of online harassment often respond with self-censorship, and many, with their sense of security and even self-worth eroded, remove themselves from social media altogether. In this sense, by offering blanket protections in the name of “free speech,” countries actually privilege hate speech. But why should a victim’s rights count less than those of their bullies?
In a democracy, the rights of the many cannot come at the expense of the rights of the few. In the age of algorithms, government must, more than ever, ensure the protection of vulnerable voices, even erring on the side of victims at times. If already-vulnerable people are besieged by mobs of extremists and aggressors, it is entirely understandable that they will fear speaking up. If that happens, “free speech” is dead.
Not all NetzDG critics dispute this assessment: some agree that the speech of the vulnerable does need extra protection. But they argue that the necessary protections are already in place.
After all, severe insult and incitement to hatred and violence are prohibited, and perpetrators can be prosecuted. French President Emmanuel Macron, for example, favours focusing on strengthening the judicial system’s ability to deal with hate speech and misinformation.
But, in the digital age, speed is decisive. The technology is instant, and online posts can be shared widely within minutes. Democratic institutions move rather slowly – much too slowly for police and the courts to be effective in fighting trolls and online hate. And many victims are not in any position to hire a high-quality lawyer, as Gutjahr did. Relying on the state’s most cumbersome institutions alone is not an effective strategy for protecting free speech on today’s digital communication networks.
Hate speech and other kinds of dangerous and illegal content must be attacked at the source. On one hand, there is a need for increased media literacy on the part of consumers, who need to be taught, from a young age, about the real-world consequences of online hate speech.
On the other hand – and this is what the NetzDG attempts to do – social-media platforms must ensure that their products are designed in ways that encourage responsible use.
But this is no quick fix. On the contrary, it demands a fundamental rethink of business models that facilitate and even reward hate speech. Firms cannot be allowed to profit from damaging content, while shrugging off responsibility for its consequences.
Instead, they must revise their algorithms to flag, more effectively and scrupulously, content that humans should monitor and assess, while entrenching in all of their business decisions an awareness of their responsibility in the fight for truly free speech.
This may contradict the straightforward business logic of doing whatever maximizes profit and shareholder value. But it is, without a doubt, what is best for society. The German government is right to push companies in the proper direction. - Project Syndicate