How will the EU's new act deal with online lies and fake news?

The new Digital Services Act represents incremental progress, but it remains far from clear whether its provisions will improve anti-disinformation efforts, writes Ethan Shattock, PhD candidate, Department of Law

Disinformation has only intensified over the past year. Covid-19 has proven to be a fertile breeding ground for lies, continuing into 2021 with the vaccine rollout. In Ireland, digital lies circulated following the shooting of George Nkencho, while campaigns of disinformation flooded last year's US presidential election.

These events are symptomatic of a social media landscape that continually threatens democratic institutions. In December 2020, the European Union introduced the Digital Services Act to replace outdated rules for digital platforms with standards fit for the 21st century. While hailed by some as a watershed moment for big tech regulation, its role in curtailing digital disinformation remains ambiguous. While the act carries important signs of progress, it represents only an incremental step in the continuing battle between European institutions and anti-democratic actors who seek to influence elections.

The act is the latest attempt by the European Union to regulate how "digital services" operate. The obsolete E-Commerce Directive has until now been the flagship European legal framework for regulating digital services in the single market. However, the communication landscape has changed significantly since the directive's introduction in 2000.

Major platforms like Facebook, Twitter and YouTube were absent from the public sphere back then, but are now important conduits not just for commerce but also for democratic elections. Unsurprisingly, this has prompted calls for reform.

What does the act do?
Broadly, the act targets how technology platforms safeguard the rights of users and encourages greater transparency in how they operate. It introduces harmonised standards for tackling illegal content and provides a basic framework to give users greater insight into how content is moderated and distributed through algorithms.
Provisions of the act introduce mechanisms for "flagging" unlawful content, challenging content moderation decisions, and guarding against misuse and abuse of "very large online platforms". The act also gives researchers limited access to data on how the largest platforms operate. Through these developments, the act takes a step in the right direction by embedding two crucial principles: transparency and accountability.
Less certain is how the act will insulate European democracy from digital disinformation. The act is supposed to mark a transition from self-regulation to co-regulation but, under the current self-regulatory approach, platforms like Facebook and Twitter effectively craft their own rules for how they tackle disinformation.

While the European Commission has technically issued guidelines since 2018 through the Code of Practice on Disinformation, large platforms have never been legally obliged to incorporate procedures from these guidelines. This lack of enforceability has led to implementation gaps in how different platforms safeguard against disinformation. As a form of co-regulation, the act allows the EU to set an essential framework through a legislative proposal, with the platforms filling in the details of how it will materialise.

'Harmful but not illegal content'
An ongoing problem that the act will not resolve is the issue of harmful but not illegal content. The act focuses on how platforms curtail "illegal" content, and introduces "trusted flaggers" to notify platforms when such content is identified. New transparency obligations require platforms to share data on how they suspend those who disseminate "manifestly illegal content".
Risk assessments that platforms carry out under the act primarily address how to avoid the misuse of services to disseminate illegal content. However, unlike many other forms of harmful content, disinformation is often not strictly illegal. While co-ordinated disinformation campaigns distort the democratic process, the false content they disseminate is often not unlawful, and can therefore avoid the legal scrutiny applied to child pornography or material inciting terrorism.

Because the act focuses heavily on illegal content, its main contribution to fighting disinformation is to beef up the existing guidelines under the current Code of Practice, which the European Commission has pledged to update by spring 2021. While this comes across as a type of self-regulation 2.0, promising developments are in the legislative pipeline through the European Democracy Action Plan, which includes planned legislation to improve the transparency of sponsored political content.

It must also be acknowledged that the proposed "risk management" procedures within the act demonstrate an awareness of "intentional manipulation" of services and "inauthentic use or automated exploitation" of networks. While encouraging, these steps are piecemeal in the bigger picture.

Not a catch-all solution
Legislation alone cannot minimise disinformation, and the problem is not as simple as removing individual pieces of "bad" content. Modern disinformation, often not illegal, exists along the nodes of a sophisticated and layered system of content distribution, a system shaped by users as well as platforms. Sophisticated actors exploit existing social divisions and weaponise platforms in dynamic and inventive ways, as seen in the spread of Covid-19 misinformation on WhatsApp in Ireland and the widespread lies on TikTok before the US presidential election.
The act is playing catch-up with technology. What is fundamentally needed are legal frameworks that can both keep pace with and anticipate technological change. For disinformation, the act is not the holy grail, but it hopefully lays the regulatory scaffolding for more concrete progress.