European AI Act: Mandatory Labeling for AI-Generated Content

Authenticity & AI Detection

April 9, 2024





Discover the European AI Act's new requirement for transparent labeling of AI-generated content. Learn how digital watermarking technology plays a key role in distinguishing synthetic media, and explore the implications for AI providers and the fight against deepfakes.

The European Artificial Intelligence Act ("AI Act") was adopted by the European Parliament on March 13, 2024, and will come into effect in May 2024, 20 days after its publication in the Official Journal of the EU. Prohibitions on AI systems deemed to pose "unacceptable risks" will be enforced six months later (November 2024), governance rules and obligations for general-purpose AI models will apply after 12 months (May 2025), and full implementation is expected after 24 months (May 2026).

The law applies to all artificial intelligence systems made available within the European Union's territory, regardless of where their providers are located, excluding systems used exclusively for scientific research or military purposes.

General Approach of the AI ACT

The law regulates AI systems through a risk-based approach, defining four levels of risk for AI systems, each with specific prohibitions and obligations:

The four levels of risk defined by the AI Act

Generative AI systems are considered limited-risk systems

Among the various AI systems, generative artificial intelligence systems are described in Article 50(2) as AI systems, including general-purpose AI systems, that generate synthetic content such as audio, images, videos, or text. Examples of these systems include Adobe’s Project Music GenAI Control for audio generation, Midjourney for image creation, OpenAI’s Sora for video production, and Anthropic’s Claude text generator. These systems are classified as limited-risk artificial intelligence systems under the AI Act and are subject to transparency requirements, such as the obligation to inform users that AI has generated the content.

The most powerful multimodal generative AI models (capable of generating any type of content), such as OpenAI’s ChatGPT or Google’s Gemini, are treated as general-purpose AI models that may pose systemic risk. In addition to transparency obligations, they are subject to detailed assessments and adversarial testing to prevent significant incidents with adverse consequences, such as the production of illegal content.

Transparency Obligation: Mandatory Labeling of AI-Generated Content

All providers of generative AI systems, regardless of their location, size, power, or nature, whether open (free and open-source) or proprietary, must comply with fundamental transparency obligations. These obligations allow the public to know whether content has been artificially generated and to distinguish it from authentic, human-created content.

The key challenge is that these systems can be used to create ultra-realistic deepfakes or fake news, contributing to large-scale disinformation campaigns, fraud, identity theft, or deception. To counter this phenomenon, the AI Act (Art. 50(2)) mandates that "Providers of AI systems, including GPAI systems, generating synthetic audio, image, video or text content, shall ensure the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust, and reliable as far as this is technically feasible, taking into account specificities and limitations of different types of content, costs of implementation, and the generally acknowledged state-of-the-art, as may be reflected in relevant technical standards."

A Mandatory, Effective, Robust, and Reliable Digital Watermarking

The transparency obligation goes far beyond a simple statement such as "AI-generated content", which can easily be removed; it requires robust, machine-readable technological methods that reliably signal to the public that the content is synthetic.
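To make "machine-readable" concrete, here is a deliberately simplified toy sketch (not IMATAG's technology, nor a method endorsed by the AI Act) of how a label can be hidden in the least significant bits of an image's pixel values: the picture looks unchanged to a viewer, but a detector can recover the mark.

```python
def embed_label(pixels, label):
    """Hide an ASCII label in the least significant bit of each pixel value."""
    bits = [(ord(c) >> i) & 1 for c in label for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for label")
    # Overwrite each pixel's lowest bit with one label bit (change of at most 1).
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract_label(pixels, length):
    """Recover a hidden ASCII label of known character length from pixel LSBs."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return "".join(
        chr(int("".join(map(str, bits[i:i + 8])), 2))
        for i in range(0, len(bits), 8)
    )

# A grayscale "image" as a flat list of 8-bit values.
image = list(range(40, 200, 2))          # 80 pixels
marked = embed_label(image, "AI-GEN")
assert extract_label(marked, 6) == "AI-GEN"
```

Note that this naive LSB scheme is destroyed by compression or resizing, which is precisely why the Act demands robust solutions: industrial watermarks must survive such transformations.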

Recital 133 of the AI Act states that "such techniques and methods should be sufficiently reliable, interoperable, effective and robust as far as this is technically feasible, taking into account available techniques or a combination of such techniques, such as watermarks, metadata identifications, cryptographic methods for proving provenance and authenticity of content, logging methods, fingerprints or other techniques, as may be appropriate."

The AI Office, created by the AI Act and currently being set up, will conduct a state-of-the-art review in this field and define the standards that all generative AI providers must follow. One combination, currently at the highest standard, would be to adopt the C2PA standard, which attaches metadata unique to the content (author, date, generative system used, etc.) and protects it against alteration with a cryptographic key, and to pair it with a robust digital watermark, such as IMATAG's, to prevent the deletion of that metadata. The content is then protected against any alteration or removal of its metadata and remains reliably sourced and intact throughout its life in the informational sphere.
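The principle of cryptographically protected provenance metadata can be illustrated with a simplified sketch. This is not the C2PA format itself (C2PA uses certificate-based signatures in a standardized manifest), and the key and field names below are invented for illustration; it only shows why tampering with either the content or its metadata becomes detectable.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"provider-signing-key"  # illustrative only; C2PA uses certificates

def attach_provenance(content: bytes, metadata: dict) -> dict:
    """Bind provenance metadata to the content with a keyed signature."""
    manifest = dict(metadata, content_hash=hashlib.sha256(content).hexdigest())
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {
        "manifest": manifest,
        "signature": hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest(),
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that neither the content nor its metadata was altered."""
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and record["manifest"]["content_hash"]
        == hashlib.sha256(content).hexdigest()
    )

image_bytes = b"\x89PNG...fake image data"
record = attach_provenance(
    image_bytes, {"generator": "some-model", "date": "2024-04-09"}
)
assert verify_provenance(image_bytes, record)
assert not verify_provenance(image_bytes + b"tampered", record)
```

The limitation this sketch shares with real metadata schemes is that the record can simply be stripped from the file, which is exactly the gap an invisible watermark embedded in the pixels themselves is meant to close.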

Content Credentials (C2PA standard)

Labeling AI-generated content with an invisible watermark

Unlocking the Future of Content Authentication: IMATAG's Breakthrough in Watermarking AI-Generated Images
Generated content can also be watermarked during the generation process (read our article)

Penalties for Non-Compliance with the Labeling Obligation

Failure to comply with this labeling obligation can result in a fine of up to EUR 15 million or up to 3% of total worldwide annual turnover for the preceding financial year, whichever is higher.

Time: The Crucial Factor Against Deepfakes

General-purpose AI systems that include synthetic content generation functionalities must comply by May 2025, and all other systems by May 2026. This timeline is considered very late, especially since the capabilities, widespread adoption, and ultra-realism of generative and general-purpose AI systems have accelerated significantly in recent months.

This concern about the late deployment timing of the generative AI labeling obligation was recently shared by Ms. Anna Ascani, Vice President of the Italian Chamber of Deputies, who is considering the adoption of a national law to accelerate its deployment. This reaction was particularly in response to a deepfake of Ukrainian President Volodymyr Zelensky's spokesman claiming responsibility for a terrorist attack in Moscow, which spread across all online platforms, highlighting once again the impact of disinformation due to artificially generated content.

The US Executive Order of October 30, 2023, on safe, secure, and trustworthy artificial intelligence plans for widespread deployment of digital watermarking of generated content starting in early 2025, much earlier than the European AI Act prescribes.

For their part, at the Munich Security Conference on February 16, 2024, many players in the generative AI field (including Anthropic, Google, Meta, Microsoft, OpenAI, and Stability AI) committed to the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections", pledging to deploy technologies, notably digital watermarking, to detect AI-generated content and counter harmful AI-generated material intended to deceive voters.

Given this urgency, and to facilitate the transition to the new regulatory framework, the European Commission has launched the AI Pact, a voluntary initiative that invites AI developers from Europe and elsewhere to comply with the AI Act's obligations, including content watermarking, ahead of the legal deadlines.

Beyond the Law: Ethics

Beyond legal obligations, no generative AI provider wants their name associated with a disinformation campaign, fraud, or manipulation of public opinion for reasons of image, reputation, and, ultimately, economic viability.

Labeling generated content plays a crucial role in combating deepfakes and misinformation, mitigating information overload for internet users by clearly distinguishing authentic materials from generated ones, and supporting media literacy efforts. It is a matter of accountability and ethics, as recently highlighted by the non-binding UN General Assembly resolution of March 21, 2024, sponsored by the United States and co-sponsored by 123 countries, including China, which firmly places the labeling of generated content in the field of ethics before it enters the field of pure legal compliance.

Generative AI providers from all countries, don't wait to mark your generated content with IMATAG's technology, one of the most robust and secure in the world!

Photo: C2PA compliant AI-generated image.

Learn more about IMATAG's solution to insert invisible watermarks in your visual content.


