General Monitoring: A Restriction on Freedom of Speech

Mouine Meddeb
Project Officer at LEED Initiative
09-01-2022
Opinion
3 mins

"Freedom of opinion and expression are fundamental rights of every human being. Indispensable for individual dignity and fulfillment, they also constitute essential foundations for democracy… ".

EU Guidelines on Human Rights Defenders

General Monitoring contravenes the foundational human rights principles of proportionality and necessity by subjecting users to automated and often arbitrary decision-making.

The practice is inherently hard to pin down, since it requires searching for content without being able to define in advance what is being sought. Even for those who believe online intermediaries should be more proactive in detecting, deprioritizing, or removing certain user speech, requirements that intermediaries review all content before publication, often called “general monitoring” or “upload filtering”, raise serious human rights concerns, both for freedom of expression and for privacy.

General monitoring is highly problematic both when it is directly required by law and when it becomes effectively mandatory because of the legal risks of not doing so. Such indirect requirements incentivize platforms to proactively monitor user behavior, filter and check user content, and remove anything that is controversial, objectionable, or potentially illegal in order to avoid legal responsibility. This inevitably leads to over-censorship of online content, as platforms seek to avoid liability for failing to act “reasonably” or for not removing content they “should have known” was harmful.

At the same time, some governments are resorting to more aggressive and heavy-handed approaches to intermediary regulation, with policymakers across the globe calling on platforms to remove allegedly legal but ‘undesirable’ or ‘harmful’ content from their sites, while also expecting platforms to detect and remove illegal content. In doing so, states fail to protect the fundamental right to freedom of expression and fall short of their obligations to ensure a free online environment with no undue restrictions on legal content, whilst also restricting the rights of users to share and receive impartial and unfiltered information. This has a chilling effect on the individual’s right to free speech: users change their behavior and abstain from communicating freely when they know they are being actively observed, leading to a pernicious culture of self-censorship.

When platforms cooperate with government agencies, they become inherently biased in favor of the government’s favored positions. Such cooperation gives government entities outsized influence to manipulate content moderation systems for their own political goals: to control public dialogue, suppress dissent, silence political opponents, or blunt social movements. And once such systems are established, it is easy for governments, and particularly law enforcement, to use them to coerce and pressure platforms into moderating speech they would not otherwise have chosen to moderate.

For example, the Israeli Cyber Unit has boasted of high compliance rates of up to 90 percent with its takedown requests across all social media platforms. But these requests unfairly target Palestinian rights activists, news organizations, and civil society, and one such incident prompted the Facebook Oversight Board to recommend that Facebook “formalizes a transparent process on how it receives and responds to all government requests for content removal and ensure that they are included in transparency reporting.”

How can users’ rights to privacy and free speech be protected whilst ensuring illegal content can be detected and removed?

The Pathway towards Safe and Rights-Protective Regulation

A coalition of NGOs has developed the Manila Principles on Intermediary Liability, which emphasize that intermediaries should not be held liable for user speech unless the content in question has been fully adjudicated as illegal and a court has validly ordered its removal. They further argue that it should be up to independent, impartial, and autonomous judicial authorities to determine whether the material at issue is unlawful. Elevating courts to adjudicate content removal also means that liability is no longer based on the inaccurate decisions of platforms, and it ensures that takedown orders are limited to the specific piece of content that courts or similar authorities have found illegal.

In conclusion, regulators should encourage more effective voluntary action against harmful content and adopt moderation frameworks that are consistent with human rights, in order to keep the internet free and to limit the power of government agencies in flagging and removing potentially illegal content.

The article represents the views of its writer and not those of LEED Initiative.