The UK government has once again revealed modifications to the Online Safety Bill, its sprawling and divisive attempt to police online content. It claims the latest round of changes to the draft is designed to safeguard online users from anonymous trolling. The Bill as a whole has far broader goals, including a sweeping content moderation regime covering both explicitly illegal content and ‘legal but harmful’ content, with the stated aim of protecting children from a range of online harms, including cyberbullying, pro-suicide content, and exposure to pornography.
Meanwhile, critics claim that the Bill would stifle free expression and isolate the UK, resulting in a splinternet Britain, while increasing the legal risk and cost of doing business in the country. (Unless, of course, you’re a member of the ‘safety tech’ club that sells services to help platforms with their compliance.) Two parliamentary committees have examined the draft law in recent months. One advocated for a strategy more tightly focused on unlawful content, while the other cautioned that the government’s approach is both a risk to online speech and unlikely to be robust enough to meet safety concerns – so it’s safe to say that ministers are under pressure to make changes. As a result, the bill continues to evolve or, to put it another way, expand in scope.
Other recent (significant) modifications to the draft include a mandate that adult content websites use age verification technology, as well as a large extension of the liability regime, with a longer list of illegal content added to the face of the bill. Platforms will also be compelled to give users tools to control how much (potentially) harmful but technically legal content they are exposed to, under new rules which the Department for Digital, Culture, Media and Sport (DCMS) says will only apply to the largest digital businesses.
Online safety campaigners regularly link account anonymity to the growth of targeted abuse such as racist hate speech or cyberbullying, but it’s unclear what evidence they’re relying on beyond anecdotal complaints about abusive anonymous accounts. Examples of abusive content being disseminated by named and verified accounts are just as easy to uncover. Not least from the sharp-tongued secretary of state for digital, Nadine Dorries, whose recent tweets criticizing an LBC journalist resulted in an awkward gotcha moment during a parliamentary committee hearing.
The point is that single cases, no matter how high-profile, don’t tell you much about systemic issues. Meanwhile, a recent judgment by the European Court of Human Rights reiterated the importance of online anonymity as a vehicle for “the free movement of opinions, ideas, and information,” with the court plainly stating that anonymity is a crucial component of freedom of expression. Clearly, UK legislators must tread carefully if the government’s claim that the law will make the UK “the safest place to go online” while still protecting free speech is not to ring hollow.
Given that internet trolling is a systemic issue that is particularly problematic on certain high-reach, mainstream, ad-funded platforms, where truly vile content can be massively amplified, lawmakers might be better served by considering the financial incentives linked to how content spreads — expressed through ‘data-driven’ content-ranking and surfacing algorithms (such as Facebook’s polarizing “engagement-based ranking,” as called out by whistleblower Frances Haugen).