Elon Musk's platform still has obligations to combat disinformation under EU law but the true extent of these remains uncertain, writes PhD student Ethan Shattock, School of Law and Criminology
Recently, EU internal market Commissioner Thierry Breton announced that Twitter had left the Union's Code of Practice on Disinformation. Accompanying this was the commissioner's stern warning that Elon Musk's platform could 'run' but not 'hide' from obligations to combat disinformation under the EU's new Digital Services Act.
Highlighting legal requirements which come into effect on August 25th, the Commissioner's message was clear. Large social media platforms will not have voluntary discretion to tackle—or refuse to tackle—the spread of disinformation under new EU law. This poses timely questions about what Twitter's departure from the EU Code of Practice on Disinformation means, and about the platform's obligations under the Digital Services Act that are relevant in this area.
What does the EU Code of Practice on Disinformation say?
The Code of Practice on Disinformation was introduced in 2018 due to concerns surrounding Russian interference in European Parliament elections. Initial members of the 2018 Code—known as 'signatories'—not only included social media giants like Twitter and Facebook but also internet search providers such as Google.
Collaborating with the European Commission, these platforms signed off on several voluntary commitments designed to limit the spread of disinformation on their respective platforms. Key commitments involved platforms pledging to increase transparency of political advertising, to disrupt revenue flowing to accounts that spread disinformation, and to empower researchers with knowledge that could shed light on platform measures to limit misleading content.
While the European Commission praised Twitter's compliance with several commitments in an assessment in 2020, the Code's effectiveness has been hampered by several flaws. One critical flaw is that platforms have not consistently provided researchers with valuable data, despite agreeing to do so under the Code's voluntary commitments. A related problem is that the Code's commitments contained vague terminology while providing platforms with considerable leeway in how they adopt measures to tackle misleading content.
Spurred by these failures, the EU beefed up the Code of Practice on Disinformation in 2022. The revamped Code provides more detailed commitments, a wider range of members, and more concrete standards to measure how platforms minimize the spread of disinformation on their platforms. While remaining a voluntary initiative, the renewed Code of Practice is accompanied by legally binding obligations for platforms such as Twitter to combat disinformation under the EU’s new Digital Services Act.
Will the Digital Services Act force Twitter to combat disinformation?
The Digital Services Act is a significant EU law which sets out a wide range of obligations for online platforms of all shapes and sizes. Crucially, this legislation sets out its most extensive obligations for entities which are considered to be 'Very Large Online Platforms.' These are platforms which have over 45 million 'average monthly active recipients' of their service across the EU. As Twitter fits into this category, it is designated as a Very Large Online Platform in the Digital Services Act's framework.
This status is vital in the disinformation context. Under the Digital Services Act, such platforms must identify and mitigate 'systemic risks' arising from their services. Importantly, 'systemic risks' not only include the 'dissemination of illegal content' but also the 'intentional manipulation' of services with negative effects on 'electoral processes.' Such risks may also extend to situations whereby individuals create fake accounts. In other words, disinformation is considered to be a systemic risk which Very Large Online Platforms like Twitter are obliged to curtail under the Digital Services Act.
Does the Digital Services Act give Twitter any wiggle room?
While Twitter's obligations to combat 'systemic risks’ are intended to include problems like disinformation, the true extent of these obligations under the EU’s Digital Services Act remains unclear. Importantly, this law does not directly specify that Very Large Online Platforms must adopt measures to remove disinformation as part of their obligations to combat systemic risks.
Relatedly, the Digital Services Act provides discretion for platforms such as Twitter to decide on appropriate measures to combat risks such as disinformation. Such measures under this legislation may simply include limiting advertisements, but could also involve considerable modifications to how platforms control what content their users are exposed to. Considering perceptions of Musk as a ‘free speech absolutist’, it's possible that the discretionary nature of the Digital Services Act enables tycoons such as Musk to simply refuse to acknowledge disinformation as a problem requiring intervention.
The true extent of how Twitter approaches disinformation under the Digital Services Act will be overseen by the European Commission and a set of 'independent auditors' which this new law establishes. Specifically, these stakeholders will assess how Very Large Online Platforms are complying with their obligations to mitigate risks. In principle, this introduces a legal obligation for platforms like Twitter to report on how they are taking measures to protect their users from risks such as disinformation.
But in practice, the Digital Services Act still provides leeway in how Twitter takes action, or even in whether it classifies disinformation as a systemic risk requiring effective countermeasures. Going forward, the role of the European Commission and independent auditors in assessing compliance with the Digital Services Act will be crucial in providing a clear picture of how platforms such as Twitter are tackling—or failing to tackle—systemic risks such as disinformation.
This article originally appeared on RTÉ Brainstorm