Starting August 2, 2026, the transparency obligations of the EU AI Act will apply, and if you are based in the EU, this of course affects you as a content professional. However, plenty of misinformation is circulating about this law. Just the other day, for instance, I saw the claim that soon all AI-generated content will have to be labeled as such, without exception. Spoiler alert: that is nonsense. The law makes no such sweeping demand.
Hopefully, this article brings you more clarity for your daily content work. In it, I take a close look at Article 50 of the AI Act and explain which rules actually apply to content creation with Artificial Intelligence.
Furthermore, I address the fact that this topic isn’t just about legal issues. It also has an ethical component.
The most important rule for text
Let’s first look at what the law says about text. Here, paragraph 4 of Article 50 is crucial. It initially states a seemingly clear basic rule: Anyone who uses AI to generate or manipulate text, and publishes it, must disclose this. This applies to texts intended to inform the public on “matters of public interest.”
The exact quote:
“Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated.”
To clarify one term: In legal jargon, we are “deployers of an AI system” when we use services like ChatGPT, Gemini, Claude, etc. But the real question is: Do we have to label everything? No, because the law also formulates a crucial exception to this disclosure obligation. You will find it in the sentence that directly follows the quote above:
“This obligation shall not apply […] where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content.”
Translated into everyday language: You do not have to disclose the use of AI if two conditions are met. First, the text must have been reviewed and checked by humans. Second, a person or a company must hold editorial responsibility for its publication.
In other words: The sometimes misquoted “obligation to label everything” only targets fully automated texts. The idea is to make clear to the readers that the facts and statements in a text were generated by a machine without human oversight.
For daily content work, this means: If you proofread an AI-generated draft, edit it, and verify information, the legal obligation to label it no longer applies. At the same time, however, you are just as responsible for such AI content as for any other publication. You cannot shift this responsibility onto the provider of the AI tool.
Rules for images, video, and audio
While the law leaves a lot of leeway for editorial teams when it comes to text, the requirements for visual and audio content are stricter at first glance. Here too, paragraph 4 of Article 50 provides the foundation. In the first part, it states:
“Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated.”
There is no editorial exception here like the one we saw for text. It is irrelevant whether a person or an editorial team stands behind the content and assumes responsibility.
Do you therefore have to label all AI-generated images, videos, and audio files now? Here too, the answer is: no, not without exception. The key term is “deep fake”. According to the definitions of the AI Act, it means …:
“… AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful;”
In a nutshell: The legislator is evidently primarily concerned with the danger of deception.
An example of this interpretation: You write a text for the tourism website of a seaside resort and have the AI generate a beach photo. In this case you have to label it, because the image does not show the real beach, yet your readership could easily assume it does. If, on the other hand, you write an advice article for vacationers and use an AI-generated beach photo purely for illustrative purposes, this obligation does not apply.
Another example: You work for a fashion shop and have AI images generated in which fictional models wear the offered clothing items. You have to label this because these images can look real but do not reflect reality. However, if you create a brochure for a supermarket and use an AI image of grapes, this is not necessary. It is, after all, merely a symbolic representation. Nobody expects the grapes in the store to look exactly like the ones in the product image.
At the same time, there is an important special rule for artistic formats. The law phrases it like this:
“Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.”
This offers you a certain degree of freedom in creative projects. If you create a satirical video or a work of art, you still have to make the use of AI transparent. But you are not forced to slap a huge warning label on the image. A subtle note in the credits, in the metadata, or in the accompanying image description should suffice in such cases.
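If you want to place such a subtle note directly in a file’s metadata, a few lines of code are enough. Here is a minimal sketch in Python using the Pillow library; it writes a custom text field into a PNG copy of the image. The field name “AI-Disclosure” and the file names are my own placeholders for illustration, not anything the AI Act or an industry standard prescribes.

```python
# Minimal sketch: embed an AI-disclosure note in a PNG's metadata.
# Requires Pillow (pip install Pillow). The "AI-Disclosure" key is an
# arbitrary label chosen for this example, not an official standard.
from PIL import Image, PngImagePlugin

img = Image.open("satire-artwork.png")

meta = PngImagePlugin.PngInfo()
meta.add_text("AI-Disclosure", "This image was generated with the help of AI.")

img.save("satire-artwork-labeled.png", pnginfo=meta)
```

For JPEGs and professional workflows, dedicated tools that write IPTC or XMP fields are the more robust route, but the principle is the same: the note travels with the file without hampering the enjoyment of the work.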
What the tool providers must deliver
So far, we have mainly talked about ourselves as users (the “deployers” of the AI tools). But the AI Act also holds the developers of the AI models themselves accountable. These are referred to as “providers”. Their tasks are regulated by paragraph 2 of Article 50.
It states there:
“Providers of AI systems […] shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated.”
Companies like OpenAI or Midjourney must therefore develop their systems in such a way that the generated content contains invisible metadata or digital watermarks. These should be machine-readable in order to make AI content detectable on a technical level.
For you as a content professional, I have a practical tip if you want to be on the safe side: pay attention to whether the tools you use actually meet this requirement. Open-source models, which you find on creative platforms among other places, could be a weak point here. Remember: whoever publishes the content bears the responsibility for it. If the technical labeling is missing, that could become a problem. A quick plausibility check like the one sketched below can at least tell you whether common markers are present.
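If you want to run such a check yourself, here is a rough sketch in Python. It is only a heuristic, under the assumption that AI provenance is embedded as readable text in the file, for example as the IPTC digital source type value “trainedAlgorithmicMedia” or as part of a C2PA (Content Credentials) manifest. A missing marker therefore proves nothing, and the file name is a placeholder.

```python
# Rough heuristic: scan a file's raw bytes for common AI-provenance markers.
# Not a reliable detector: metadata can be stripped, stored differently,
# or replaced by an invisible watermark this script cannot see.
from pathlib import Path

MARKERS = [
    b"trainedAlgorithmicMedia",  # IPTC digital source type used for AI-generated media
    b"c2pa",                     # label that typically appears in C2PA / Content Credentials manifests
]

def find_provenance_markers(path: str) -> list[str]:
    data = Path(path).read_bytes()
    return [marker.decode() for marker in MARKERS if marker in data]

hits = find_provenance_markers("generated-image.jpg")
if hits:
    print("Possible AI-provenance markers found:", ", ".join(hits))
else:
    print("No known markers found (which does not prove anything).")
```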
However, paragraph 2 also contains an exception:
“This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof […]”
This means: If you use an AI merely as an assistant, for example to find typos, make a sentence flow more smoothly, or adjust the brightness of a photo, these activities very likely do not fall under the transparency obligations.
The unavoidable gray areas
As with every new law, there’s room for interpretation. Clear boundaries and rules will only emerge in legal practice. For us content professionals, this means living with certain gray areas for now.
For instance, the term “public interest” caught my attention when it comes to texts. It is not defined in the legal text. Does this refer to everything that is not internal? Or does it only cover particularly important information, such as health and financial topics? Let’s take a blog post on a company website or a specialist article in an industry magazine as an example: Do these also fall under “public interest”? And where is the line drawn here?
The topic of deception regarding “deep fakes” also strikes me as rather vague. When could a video, an image, or an audio recording “falsely appear to a person to be authentic or truthful”? And at what point is that even a problem?
Furthermore, the law speaks of “standard editing”. This is permitted as long as the semantics of a text or image do not change “substantially”. But this boundary is naturally fluid, as we all know. Correcting spelling mistakes seems unproblematic to me. But if an AI reformulates a paragraph and adjusts nuances of content in the process, is that already a “substantial” alteration?
The EU AI Office is supposed to draw up codes of practice that will hopefully answer these and other questions. Ideally, these documents will provide detailed examples of how to implement the rules in everyday work. Until these guidelines are published, common sense is primarily required for interpretation. And as is so often the case, when in doubt, it is better to be more cautious than seems absolutely necessary.
Transparency as an ethical decision
However, the legal framework of the AI Act is only one side of the coin. For me, the topic of transparency also has an ethical dimension: Shouldn’t you always label AI content for the sake of honesty, even if the law does not require it?
In the content industry, we are unfortunately confronted with a dilemma at this point. On the one hand, behaving ethically naturally feels good; we are doing “the right thing”. On the other hand, many people react harshly to anything involving AI. Content created with machine assistance is then dismissed as worthless “slop”, no matter how much time, effort, and dedication went into it.
The temptation to conceal the artificial help is correspondingly great. However, I advise against that, especially when content is created almost exclusively by machines.
How I handle it myself: Example Smart Content Report
Using a concrete example: I put a lot of energy and passion into the Smart Content Report. This website and the newsletter cost me quite a few hours a month. Without AI assistance, this would be impossible for me to do.
It is very tempting to simply conceal my use of AI. I could let everything appear under my own name, or add a few fictional authors so it isn’t so obvious.
Instead, I opted for transparency, even if many will dismiss my work as “slop” as a result.
This is why there are two author names on this site:
You will find my own name where I took the lead on a piece of content. That applies to this post, for example. Google Gemini assisted me with the writing, but the lion’s share of the work comes from me. I reformulated, added, guided, and also corrected: Gemini, for instance, had ignored the word “deep fake” in the legal text in the section on image, video, and audio, and didn’t immediately understand that the strict rules only apply under certain circumstances. That was a perfect example of why human oversight is so important.
The abbreviation “SCR”, on the other hand, is used when the AI performs a central part of the work. I always choose the topics, but an AI writes the first version of the text. I check and revise it, handle all the manual tasks such as the tags, and determine the headline.
In the author description, this is explained as follows:
“Articles with the author name SCR are created with the help of AI. All topics are manually picked by Jan Tissler. Each article is checked and edited by him before publication. He takes full editorial responsibility.”
According to the AI Act, I wouldn’t have to disclose this. But I do it anyway.
Moreover, you can read exactly how I create this website and which prompts I use.
I hope that this honesty combined with my passion for the topic will prevail in the end. And if that shouldn’t be the case, I still have one important consolation: I can continue to look myself in the eye with a clear conscience.
