In the last few years, driven by pressure from governments and the public, technology companies have been ramping up censorship on their platforms. Almost overnight, private companies are policing what is opinion and what is propaganda, what is disagreement and what is abuse, who may speak and who should be silenced, without accountability or oversight. In a drive to rein in the chaos of the internet, ad hoc, knee-jerk, and heavy-handed approaches have combined to create a dangerous time for free speech and individual rights on the internet.

Many of these measures have been adopted voluntarily and preemptively by tech companies to avert government regulation by demonstrating to lawmakers that they can police their own platforms.

For example, in 2018 Twitter suspended over 70 million accounts and introduced new rules restricting advertising. YouTube is working on new ways to identify ‘authoritative sources’. WhatsApp is now flagging forwarded messages and working with fact-checkers. Google announced changes to its search algorithms “to help surface more authoritative pages and demote low-quality content.” Facebook changed its newsfeed algorithm, hired editors to moderate content, and partnered with third-party fact-checking organizations. The company has also teamed up with the Atlantic Council, the International Republican Institute (IRI), and the National Democratic Institute (NDI) to help it “slow the global spread of misinformation that could influence elections”.

In other cases, tech companies are suspending accounts and moderating content due to demands from governments. They are not required to disclose such requests, and while social media and app store platforms have been urged to be fully transparent about how often and why they have complied with takedown requests, the results are mixed.

Because many of the platforms’ moderation policies and community standards are opaque, it is hard for affected people to know why their speech has been curtailed, and what recourse they have to dispute it. Was their account flagged by other users, by the company itself, or by the government?

The lack of accountability can have very real and very negative consequences. Rules against hate speech and inflammatory content have been used to silence women of color, remove images of indigenous peoples, muzzle journalists, and censor images of breastfeeding and childbirth. Arbitrary changes to search algorithms reduced traffic to some alternative news sites overnight by up to 70%. Restrictions on violent imagery have been used to block videos of police brutality. Requests by governments have been used to remove documentation of ethnic cleansing and war atrocities.

What’s to be done?

Censorship is a powerful weapon that can easily be misused, especially when the lines between governments and corporations have become so blurred. As the Electronic Frontier Foundation argues, censorship should be the last resort against hate speech, fraud and online abuse, not the first. Many countries already have defamation laws that could be applied online without infringing the rights of individuals to engage in public debate. Likewise, regulations governing media and publishing could be applied in digital domains. In the US, election laws prohibit foreign governments from using ads to influence support for candidates, and require materials distributed by foreign governments to clearly display their origin and to be filed with the US Attorney General. In many cases, the laws just need to be updated to cover online media and then enforced.

Censorship is a political act. Organizations that play fact-checking or moderation roles need to be strictly impartial. Facebook’s partnerships with the IRI, NDI and Atlantic Council have raised concerns because of their partisan affiliations. The first two organizations are explicitly connected with the two dominant US political parties, and the Atlantic Council receives its funding in part from foreign governments and defense contractors. It is legitimate to question whether the political agendas of these organizations might influence the restrictions that they advocate for, especially since their focus with Facebook is specifically on foreign interference in elections.

Accountability and transparency are crucial and sorely lacking in these developments. Companies should be required to disclose which content is being restricted and why. There need to be processes in place to appeal decisions. The Manila Principles provide a framework to protect companies from government overreach while protecting the rights of users on their platforms—ideally these principles should serve as a blueprint for companies and governments moving forward.

Algorithmic decision-making and artificial intelligence are increasingly determining the information diets of billions of people. Evidence is mounting that the social biases of their creators become embedded in these systems, so it’s imperative that they are subject to independent audits that examine the downstream effects of their use—both intended and unintended.

Better yet, people should be put in charge of their own information flows. Too many decisions about what is and isn’t important are being taken out of people’s hands by corporate platforms whose primary interest is to serve advertisers. A new generation of platforms, such as Factr, restores control to end users by enabling them to manage their own information based on the sources they trust and the filters that are relevant to them. Why use intermediaries whose primary motivation is to mine your data and attention, and resell them for profit to whoever will pay?

Unfortunately, transforming the “censor-first” mentality is a massive challenge. The tech giants fight every effort at regulation tooth and nail, and they have very deep pockets. Many governments find it very useful to be able to outsource censorship to companies that aren’t as accountable to the public on issues of free speech. Governments that do want to safeguard citizens’ rights find the legislative complexity of regulating a global industry very challenging (although GDPR has shown that with political will it is possible). And the speed of technological innovation means governance efforts will always be several steps behind.

That doesn’t mean we should submit to the status quo, however. As end users of these systems, we have an important role to play in shaping the future of the internet we inhabit. Our decisions about the tools we choose to adopt, and the compromises we are willing to accept, will ultimately play a massive role in the services that companies provide.

Every day we discover more of the hidden costs of these technologies. We need to demand solutions that protect our rights to free thought and to free speech—instead of entrusting them to the powerful and unscrupulous. They have proven unworthy of our trust.