Recommendation 4D – Initiate a Nordic task force to counter the risks to democracy posed by AI-generated disinformation
The past year saw several breakthroughs in content generated by artificial intelligence (AI). The most widely publicised was the release of ChatGPT (GPT-3.5), with its convincingly human-like text generation, alongside the release of other AI tools for generating images, voice, and video. These tools mark the start of an era in which artificial intelligence will not only filter our democratic conversation but also produce some of its content.
While the technology is fascinating, its misuse to manipulate and undermine democratic debates and elections poses a particularly serious threat to the trust-based democracies of the Nordics.
With the proposed AI Act currently being negotiated in the EU and expected to enter into force in 2025 or 2026, AI-generated deepfakes will likely be subject to transparency obligations requiring that users be informed when a piece of content is AI-generated or manipulated. This proposed legislation constitutes an important step towards addressing the challenges of AI-generated disinformation. However, we worry that transparency will not be enough. Disinformation is created and spread with hostile intent, and we cannot rely on hostile actors to comply with European regulations.