How the EU's Digital Services Act Will Limit Free Speech
The regulation obliges platforms to remove “illegal content” and threatens massive fines if social media companies don’t step up censorship.
"A safer internet for everyone," the Commissioner for Internal Market Thierry Breton told his followers on X (formerly Twitter) on August 25 to mark the entry into force of the first phase of the Digital Services Act (DSA). "The DSA is here. Here to protect free speech against arbitrary decisions and at the same time to protect our citizens and democracies against illegal content," he added.
Sounds pretty good, right? Who wouldn't want a safer internet, democracy, and free speech? Unfortunately, these are just slogans: instead of protecting free speech, the new regulation actually gives those in power far more effective tools to restrict freedom of expression.
Understandably, Breton did not tell his followers about this. Nor is it enough to consult the materials explaining the new regulation to understand it. Take, for example, the website introducing the Digital Services Act. Looking at it, one has to ask: what does any of this have to do with freedom of expression? The aim, it says, is to create a safe digital space, since more and more people are buying goods online; the proliferation of counterfeit goods must therefore be combated.
It is not until the very end of the page, where the need to address current digital issues is mentioned, that we encounter the familiar concepts related to freedom of expression: hate speech and disinformation. These are the proud-sounding labels under which truthful information is often blocked, people's opinions ridiculed, and people cancelled and even prosecuted. I have written before about the poor state of freedom of expression in various European countries. If you refuse to call a man who calls himself a woman a woman and say outright on Facebook that he is a man, you may face criminal prosecution for hate speech – the Norwegian example. Compare a politician to a genital, or share a photo of another politician with an untrue quote attributed to her, and you may find a squad of policemen outside your door early one morning – the situation in Germany. Similarly ludicrous cases can be found in France, Finland, and elsewhere.
Much of what happened during the Covid pandemic is a similar cautionary tale of how true information can be misrepresented and how social media companies, under pressure from governments, can prevent its spread. This is something Facebook founder Mark Zuckerberg has acknowledged and regretted. Because of such restrictions on the dissemination of truthful information, a federal judge in the United States recently barred employees of President Joe Biden's administration from interacting with social media companies. According to the judge, the US government appears to have assumed the role of an Orwellian Ministry of Truth during the pandemic: posts on Covid, including messages from respected scientists that differed from the government talking points, were repeatedly censored.
The DSA, as an EU-wide regulation, however, covers all of this ground again – and potentially goes a step further in restricting freedom of expression, since the regulation applies across the whole European Union (EU) at once. If a message can be banned under German law, it can no longer remain available in Poland, where it might not otherwise be banned.
Broadly defined “illegal content”
So what will the DSA do? In simple terms, it requires online service providers to remove illegal content from their platforms. Once notified of such content, they must do so swiftly. According to the Act, "any information that, in itself or in relation to an activity, including the sale of products or the provision of services, is not in compliance with Union law or the law of any Member State which is in compliance with Union law, irrespective of the precise subject matter or nature of that law" is classified as illegal.
The long explanatory part of the Act adds that the concept of "illegal content" must be considered and defined broadly for the purposes of this regulation, "to cover information relating to illegal content, products, services and activities". "In particular, that concept should be understood to refer to information, irrespective of its form, that under the applicable law is either itself illegal, such as illegal hate speech or terrorist content and unlawful discriminatory content, or that the applicable rules render illegal in view of the fact that it relates to illegal activities," the explanatory memorandum states.
So it is not just a question of preventing the sale of counterfeit goods or removing terrorist, pedophile, and similar criminal content from social media and video platforms, but also of removing so-called hate speech and opinions classified as discriminatory.
Trusted flaggers
To make reporting of this 'illegal content' possible, companies have had to create an easy-to-find, universally accessible reporting mechanism on their platforms. So whether you see something truly criminal or simply dislike someone's language, you can report it to the platform.
As a matter of priority, platforms will have to deal with reports of content from 'trusted flaggers'. The status of trusted flagger will be granted by the body designated by the Member State as Digital Services Coordinator and “should only be awarded to entities, and not individuals, that have demonstrated, among other things, that they have particular expertise and competence in tackling illegal content and that they work in a diligent, accurate and objective manner”. Such status could, for example, according to the explanatory memorandum, be granted to Europol. It could also be given to organisations that are “committed to notifying illegal racist and xenophobic expressions online”, according to the same explanation. "In particular, industry associations representing their members' interests are encouraged to apply for the status of trusted flaggers," the regulation says.
Companies also have a range of other obligations relating to reporting, risk assessment, etc. Particular attention is paid to crisis response. According to the regulation, "a crisis should be considered to occur when extraordinary circumstances occur that can lead to a serious threat to public security or public health in the Union or significant parts thereof". "Such crises could result from armed conflicts or acts of terrorism, including emerging conflicts or acts of terrorism, natural disasters such as earthquakes and hurricanes, as well as from pandemics and other serious cross-border threats to public health," the explanatory memorandum states. In the event of a crisis, the Commission will be empowered to require the platforms to take a range of different measures, including moderating content and prominently displaying information on the crisis situation provided by “Member States’ authorities or at Union level, or, depending on the context of the crisis, by other relevant reliable bodies”.
The fines are steep
Platforms that fail to comply with or breach their obligations under the DSA could face significant fines – up to 6% of the global turnover of the company that owns the platform. For example, Meta, the parent company of Facebook and Instagram, had a turnover of $116.6 billion (€107 billion) last year, of which 6% would be $7 billion (€6.4 billion). X (formerly Twitter) had a turnover of $4.4 billion (€4.05 billion) last year, of which 6% would be $264 million (€243 million). The Commission can also suspend a platform's activities in Europe. With sanctions like these, it is to be expected that platforms will want to mitigate risk and will err on the side of restricting more content rather than less. Users who post content classified as illegal will face penalties ranging from restricted distribution of their posts to account closure.
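For readers who want to verify the arithmetic, here is a minimal sketch of the maximum-fine calculation. It assumes only the turnover figures cited above and the dollar-to-euro rate they imply; neither the company list nor the exchange rate comes from the regulation itself.

```python
# Sketch: the DSA's maximum fine is 6% of a company's global annual turnover.
# Turnover figures are the 2022 numbers cited in the article; the USD->EUR
# rate is merely the one implied by the article's own conversion.

DSA_MAX_FINE_RATE = 0.06

turnover_usd = {
    "Meta": 116.6e9,  # 2022 turnover in USD
    "X": 4.4e9,       # 2022 turnover in USD
}

usd_to_eur = 107e9 / 116.6e9  # rate implied by the $116.6B = €107B conversion

for name, turnover in turnover_usd.items():
    fine_usd = turnover * DSA_MAX_FINE_RATE
    fine_eur = fine_usd * usd_to_eur
    print(f"{name}: max fine ${fine_usd / 1e9:.2f}B (~€{fine_eur / 1e9:.2f}B)")
```

Running this reproduces the figures above: roughly $7 billion (€6.4 billion) for Meta and $264 million (€243 million) for X.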
Initially, the new requirements applied to the largest online service providers – 19 in total – that have at least 45 million monthly users in Europe. These include the best-known social media platforms, search engines, and online retailers: Facebook, Instagram, Google, X, YouTube, Amazon, etc. Smaller providers have half a year to adapt.
Do the EU leaders themselves believe they are defending freedom of expression with the DSA?
All in all, it is quite clear that, even if this act helps to some extent against criminal content and the sale of counterfeit goods, it does not protect freedom of expression as Breton assured us it would. In fact, earlier comments by Europe's top politicians, including Breton himself, show that it is precisely about restricting freedom of expression.
For example, Elon Musk, the owner of X, who has positioned himself as a champion of free speech, has previously had a lengthy dispute with the EU over how much content should be censored on the X platform. X, or Twitter at the time, decided in May to withdraw from the EU's voluntary code to fight disinformation online. Commenting on the move, French Digital Minister Jean-Noël Barrot said: “Twitter, if it repeatedly doesn’t follow our rules, will be banned from the EU.”
Breton also spoke on the same subject, referring at the time directly to the new rules now in force. "Obligations remain. You can run but you can’t hide. Our teams are ready for enforcement," he said.
In July, however, Breton commented on French President Emmanuel Macron's suggestion that, in the event of mass unrest like that just seen in France, the authorities might consider cutting off access to at least some social media platforms. Breton said that content inciting hatred and calling for riots, killing, and the burning of cars should be deleted immediately. "If they fail to do so, they will be immediately sanctioned. We have teams who can intervene immediately," he said. "If they don't act immediately, then yes, at that point we'll be able not only to impose a fine but also to ban the operation [of the platforms] on our territory," he added.
Surely no one will dispute that public calls to kill someone require an effective response to protect public order. But how can we be sure that diligent officials will react only in such cases? Examples to the contrary abound – our published analysis "How censorship has exploded" gives a good overview of the issue (Part 1 and Part 2).
In the case of the French protests, we might ask: will they ban only footage of bad people rioting in the streets, or will they also want to ban, for example, footage of riot police using excessive force against protesters if such videos go viral and the conduct is condemned? At what point will these Breton teams, ready to enforce the rules, swing into action? We shall soon see.