Disinformation: The EU Commission’s response to the Covid-19 infodemic and the feasibility of a consumer-centric solution
By Ruairi Harrison
As conversations around the globe about online disinformation grow in gravity and frequency, it is tempting to view disinformation as a 21st-century problem. Yet the phenomenon can be traced back to Octavian’s struggle for power in the turbulent civil war period that followed Caesar’s assassination. The future first Roman Emperor manipulated information concerning his adversary, Marcus Antonius, using brief rhetorical notes engraved on coins and circulated around Rome. These notes painted his rival as a drunk, a womaniser and a headstrong soldier incapable of ruling an empire. They ultimately proved effective in winning the public’s support, and their simple, accessible form and message could be compared to a modern-day ‘Tweet’. Think of Trump calling the mail-in ballot system into disrepute in a series of easy-to-read tweets devoid of evidence.
Although any comparison between the outgoing US President and Caesar Augustus may begin and end there, the potency of 21st-century digital disinformation is what sets the issue apart from earlier incarnations, including Stalin’s dezinformatsiya campaigns. Online disinformation campaigns weaponise the user community itself: while a campaign may be traced back to, for example, a Kremlin operation, its damage is done by the millions of users who spread its messages as fact. The definitional difference between mis- and disinformation is therefore crucial: ‘misinformation’ is the sharing of factually incorrect information without any intent to mislead, whereas ‘disinformation’ is the sharing of false or misleading content with the intent to deceive.
In light of the increased focus on disinformation after the 2016 Brexit vote and US Presidential Election, the EU Commission scrambled to address the problem, eventually drafting the 2018 Code of Practice on Disinformation. Even before the Covid-19 pandemic, voices from within the EU institutions and among academic commentators agreed that the soft-law nature of the 2018 Code made it a lacklustre response to disinformation. The online infodemic that accompanied the coronavirus then brought renewed attention to the scale and urgency of the problem, its capacity to blur the lines between fact and fiction and its divisive impact on society. Unfortunately, many of the decisive questions surrounding the issue remain unanswered, including:
1. Should disinformation be regulated in the first place?
2. Who should decide what is (and is not) disinformation?
3. If disinformation is regulated, who should enforce the rules and how should they be enforced?
4. Is there a softer approach available than EU-wide regulation?
5. What weight of responsibility rests on users vis-à-vis online platforms?
Addressing all these questions is beyond the scope of a blog post. Nevertheless, these are unavoidable questions which one hopes will be addressed in the EU’s upcoming Digital Services Act (DSA).
Although there are several ‘legal solutions’ to the EU’s disinformation problem, this post will focus on one which speaks specifically to questions 4 and 5 above: the potential of a ‘consumer empowerment’ solution to disinformation. This could be dismissed as an idealised solution, or one which is only workable in the long term, but its capacity to avoid the direct curtailment of online free speech (which an EU regulation would inevitably entail) means it should be the EU Commission’s first port of legislative call.
Moving beyond protecting consumers to empowering consumers
Looking first to the importance of consumer protection online, the initial stages of the Covid-19 pandemic exposed the vulnerability of EU consumers in an unprecedented global crisis. The EU Commission noted how this manipulation took many forms: phishers used buzzwords such as ‘coronavirus’ and ‘mask’ to direct consumers to websites that harvested personal data and bank details and spread malware onto users’ devices. Scammers and opportunistic manufacturers engaged in consumer fraud, selling ‘miracle products’ that purported to ‘cure’ coronavirus on the back of unsubstantiated health claims. Alongside this came, of course, an unprecedented increase in general health misinformation, as users struggled to make sense of a virus that posed far more questions than answers. Facebook’s more interventionist approach to health disinformation (compared with political disinformation) and the EU Consumer Protection Cooperation Network’s ‘sweep’ of platforms have been successful in removing millions of misleading claims and scam products.
Yet it is imperative that EU law does not see online consumers solely as needing protection. Empowering consumers to recognise false or misleading content is a sustainable solution to tackling online disinformation. Given the frequency (and often disguised nature) of online disinformation, it is idealistic to think that platforms can tackle every misleading post, even with a new EU regulation directing them how to do so. More importantly, from a fundamental rights perspective, a regulatory approach which prioritised the removal of all misleading posts, without appreciating the value of open discourse, would undoubtedly infringe EU citizens’ rights to freedom of expression (and information).
Recent EU Commission statements at least acknowledge the impact of greater regulation on freedom of expression and refer to the need to empower consumers as one of the ‘lessons learned’ from the infodemic. Unfortunately, there is no discussion of why this softer approach should be the first port of call when assessing a long-term solution to disinformation. Moreover, the importance of digital media literacy skills in equipping consumers to detect disinformation should be highlighted as the crucial lesson of the 2020 infodemic. This could take the form of mandatory media literacy classes at secondary school level, so that critical analysis of information becomes ingrained in how young people consume media.
It must be remembered that the impressionability of teenagers and young adults makes them perfect targets for disinformation. If we give these groups the skills to detect bias and disguised political agendas, and help them form positive online habits such as seeking out multiple, verifiable news sources, we help to effect positive, sustainable change online. This change will ensure that informed public debate can take place, thereby promoting broader democratic values.
Finally, empowering users to address disinformation also offsets certain weaknesses of disinformation regulation. Although AI disinformation detection tools have improved drastically in the past year, human content moderation still plays a role on platforms. Any future EU regulation should harness the potential of an educated user community to detect the more nuanced instances of mis- and disinformation that AI tools currently miss.
The Considerable Caveat & Concluding Remarks
This approach could be criticised as idealistic in such a politically divisive time. As things stand, underlying biases are woven into the fabric of the online community; handing over the reins to that community could therefore lead to an even more partisan Fox/CNN-esque divide, with different bubbles of the internet propagating their own ‘truth’. A prime example of how users’ inherent biases continue to escalate disinformation is the rapid spread of the QAnon conspiracy from the fringes of the internet onto Facebook, Instagram and Twitter. The scale and variance of this conspiracy theory have led the tech giants to admit that they cannot fully curb its circulation. It follows that the central caveat to the consumer empowerment solution is that it must be acknowledged as inadequate in the short term. A stricter co-regulation framework is therefore needed to address disinformation at the EU level. It remains to be seen how the EU Commission will frame the disinformation problem and enforce new rules, but this will most likely manifest itself in the upcoming DSA.
In short, consumer empowerment is not the ‘be-all and end-all’ solution to disinformation. It is nonetheless submitted as an imperative facet of any other legal solution in ensuring a more sustainable, stable online environment for the EU’s future. One hopes that the DSA will go beyond paying mere ‘lip service’ to the importance of empowering the user community as a softer, long-term measure against online disinformation. If some form of mandatory EU digital media literacy programme could be put in place, its effects on users would go some way towards curbing the scale of disinformation when the next infodemic inevitably arises.