
4. Open and informed public debates

The spread of misinformation and hate speech on social media can have detrimental consequences for democracy and public debate, since it may prevent citizens from telling fact from falsehood or from engaging in public debate at all, especially as some groups have been shown to be more vulnerable to and more exposed to these phenomena. The Nordics should have transparent and informed public debates. Public debate in the Nordics has been characterised by freedom of speech, education and access to credible news, as well as a strong culture of credible information use (Hallin & Mancini, 2004; Syvertsen et al., 2014; Ohlsson, Blach-Ørsten & Willig, 2021). This should continue to be the case.

Hallin, D. C. & Mancini, P. (2004). Comparing Media Systems: Three Models of Media and Politics. Cambridge University Press.
Syvertsen, T., Enli, G., Mjøs, O. J. & Moe, H. (2014). The Media Welfare State: Nordic Media in the Digital Era. University of Michigan Press. https://doi.org/10.2307/j.ctv65swsg
Ohlsson, J., Blach-Ørsten, M. & Willig, I. (2021). Covid-19 och de nordiska nyhetsmedierna [Covid-19 and the Nordic news media]. Nordicom. https://doi.org/10.48335/9789188855473

Recommendation 4A – Give public service media a strong digital mandate for online presence, content creation and development of platforms for democratic debate online

Disinformation, diminishing trust in democratic institutions and a digital divide in access to quality news content all challenge democratic debate in many countries. The Nordic countries have robust and pluralistic national media systems in which both commercial and public service media play an important role in informing public debate. The latter, however, by their nature hold a special responsibility for fostering democratic debate and participation.
Public service media should be able to effectively navigate the digital landscape and lead the way when it comes to transparency and the promotion of spaces for open public dialogue.
We recommend that Nordic public service media be given a strong mandate for online presence and content creation: a mandate to freely utilise digital productions in all relevant formats and to develop new competencies and practices in the transparent use of algorithms and technologies in order to strengthen public cohesion.
We recommend that Nordic public service media be given a mandate to develop, together with other national and Nordic partners and in accordance with established state aid rules, public service alternatives to commercial online platforms for participation in democratic debate online.

Recommendation 4B – Step up support for independent fact-checkers

The spread of disinformation online compels democracies across the world, including in the Nordics, to defend trustworthy public debate. In combination with media literacy, one way to counter the spread of false information online is to fact-check statements and online reporting in order to detect, and inform the public about, new disinformation campaigns and coordinated activities.
We recommend that the Nordic countries step up their support for independent fact-checking organisations that guarantee diversity, independence and expertise in countering mis- and disinformation. For their part, fact-checkers should implement and continuously improve the tools and practices in their processes so that they can counter false information online at a pace that matches its spread.

Recommendation 4C – Push for better content moderation in the Nordics

Content moderation is an important tool for weeding out harmful content on online platforms. Such moderation is largely carried out by artificial intelligence systems developed for English-language content, with limited human involvement in the process. Online platforms' transparency about their moderation practices is inadequate, which hampers public oversight. Moderation in smaller languages may thus be of much lower quality than in larger languages, which has consequences for who and what controls both freedom of speech and its limits in the Nordic countries (Ytringsfrihetskommisjonen, 2022).

Ytringsfrihetskommisjonen (2022). En åpen og opplyst offentlig samtale [An open and informed public conversation]. Ytringsfrihetskommisjonens utredning. Retrieved from Regjeringen.no: https://www.regjeringen.no/contentassets/753af2a75c21435795cd21bc86faeb2d/no/pdfs/nou202220220009000dddpdfs.pdf

The Digital Services Act establishes a right of insight into platforms' moderation practices and requires twice-yearly transparency reporting from very large online platforms. However, greater insight into how our democratic debate is moderated by Big Tech is needed in order to keep trust in the democratic system high.
Online platforms should consequently do more to support public oversight.
We recommend that the Nordic countries jointly push for high-quality moderation in the Nordics – both in the EU and vis-à-vis Big Tech. This may include calls for platforms to employ Nordic moderators who can perform high-quality moderation of content in the Nordics with respect for the distinct Nordic cultures, democratic values, freedom of speech and freedom of information.
We recommend that the Nordic countries push for more transparency in moderation practices with a view to securing transparent, high-quality moderation in the Nordics. This should include an obligation to disclose information on both algorithmic and manual moderation practices, categorised by language and cultural background (e.g. what content and which actors are downgraded or deleted, and by whom).

Recommendation 4D – Initiate a Nordic task force to oppose the risks to democracy from disinformation generated by artificial intelligence

The past year brought several breakthroughs in content generated by artificial intelligence (AI). The most publicly known was the release of ChatGPT (GPT-3.5), with its convincing AI-powered text generation, but other AI tools for generating images, voice and video were also released. These tools mark the acceleration of an era in which artificial intelligence will not only filter our democratic conversation but also produce some of its content.
While the technology is fascinating, its misuse to manipulate and undermine democratic debates and elections poses a particularly serious threat to the trust-based democracies of the Nordics.
With the proposed AI Act currently being negotiated in the EU and expected to enter into force in 2025 or 2026, AI-generated deepfakes will likely be subject to transparency obligations under which users must be informed when a piece of content is AI-generated or manipulated. This proposed legislation constitutes an important step towards addressing the challenges posed by disinformation generated by artificial intelligence. However, we worry that transparency will not be enough: disinformation is created and spread with hostile intent, and we cannot rely on hostile actors to comply with European regulations.
We recommend that the Nordic countries act promptly on this new risk and commission a provisional Nordic task force on AI-powered disinformation. Composed of experts at the intersection of technology, policy and disinformation, and working in collaboration with relevant Big Tech companies, the task force should explore short-term mitigating actions as well as structural, long-term counter-measures to the risks from AI-generated content, complementing those proposed in the AI Act. The results should inform the governments of the Nordic countries as well as relevant Big Tech companies, and the Nordic countries should work to lift relevant solutions to the European level to ensure an efficient response to this growing challenge.