What constitutes harm?
This is the question currently being debated by Canadian lawmakers over the Online Harms Bill, a controversial piece of legislation under development that seeks to protect vulnerable people online and curb the dissemination of ‘harmful’ content.
In July 2022, the Canadian Heritage Department, which is tasked with overseeing the bill, restarted the consultation process as part of the federal government’s efforts to revive it.
Canadian Heritage Minister Pablo Rodriguez has been trying to push the bill forward despite the widespread criticism that met an earlier version introduced last year. While lawmakers haven’t yet reached a consensus on what exactly is meant by ‘harm,’ there seems to be unanimous agreement that “we have to do something,” Rodriguez told reporters.
Rodriguez added, “[P]eople are seeing things that they shouldn’t be seeing on the internet, facing threats, receiving all kinds of stuff, very nasty stuff… and it’s our obligation as a government to act.”
Rodriguez is also advancing another highly contested bill, Bill C-11, which would see a more regulated internet for Canadian users and creators alike. It seems he is on a mission to regulate the internet in more ways than one.
Based on the initial proposal, we know that lawmakers have identified five types of content they deem harmful: terrorist content; content that incites violence; hate speech; non-consensual sharing of intimate images; and child sexual exploitation content. While these categories are defined in the Criminal Code, lawmakers may further modify them for a ‘regulatory context,’ which could cause confusion, increase censorship, and potentially undermine the Canadian Charter of Rights and Freedoms.
Every one of these categories is open to different interpretations depending on who is assigning meaning to it. We already know that hate speech laws can be used as a tool to silence marginalized voices or quash dissent. Terrorist content is likewise open to varying interpretations; some Canadians have expressed concern that Muslim Canadians may be disproportionately impacted. Even defining ‘child sexual exploitation content’ has become more contested in this age of polarization and cultural degradation.
Some Canadians are even calling for the removal of ‘conspiracy theory’ content. This category, much like ‘harm’ itself, runs up against the same challenges and limitations. Who would determine what counts as a ‘conspiracy theory’? The label could easily be used as a tool to silence anyone who holds an opinion that deviates from the status quo.
Lawmakers are also tossing around the idea of implementing a 24-hour takedown requirement for ‘harmful’ content. This could push online platforms into over-compliance, preemptively taking down content that does not fall within the ‘harmful’ categories. Canadians who want to challenge the removal of their content would also face a new appeal system, which could further bureaucratize an increasingly sanitized and regulated internet.
The government is expected to hold public hearings to give Canadians the opportunity to voice their concerns, though based on past hearings (particularly for Bill C-11), it’s unclear whether these concerns will actually be fairly heard by MPs.
Ultimately, those who argue for heavy-handed moderation often cite ‘harm’ as the reason certain speech should be censored, yet they are frequently short on details as to what would actually be considered harmful.
While defining ‘harm’ is one part of the problem, the other is actually enforcing it.
While the government can pass laws stipulating what is harmful and on what grounds, it’s up to online platforms to actually enforce these new rules. These platforms already wield enormous power online; this move would give them even more power to censor content with the government’s go-ahead.
Tech companies are already susceptible to their own set of biases. Content moderators may apply the law loosely depending on their own belief systems, and the wider the definition of ‘harm,’ the easier it becomes for them to justify removing content. Platforms may go a step further and remove content they personally disagree with, not content that is necessarily ‘harmful.’ While this is already happening on major tech platforms, this law would shift even more power toward Big Tech and away from the user.
Bills like this are dangerous because they rely on broad concepts that can easily be interpreted in ways that punish users.
Nations, including Canada, may already have hate speech laws that delineate what is considered hateful, but regulating the broader concept of ‘harm’ would go much further.
In a polarized climate, this bill could easily be used to silence wrongthink, punish dissidents, and cast a chilling effect over online discourse. Ultimately, passing a bill that primarily revolves around the premise of what is deemed ‘harmful’ would be a disaster for online speech.