Too good to be true? How to recognize AI content

Job van den Berg
February 1, 2026
AI tools are becoming so good that distinguishing real content from generated content is increasingly difficult. We scroll past perfect photos, flawless voices, and stutter-free videos, while the technology behind the scenes keeps improving rapidly. That's impressive, but also risky: how do you still know what to trust?

What does the law say?

In Article 50, the EU AI Act requires clear labeling when content has been generated or manipulated by AI, whether text, images, audio, or video. In practice, such labeling is still rare and enforcement remains limited. So don't count on a label; take your own checks seriously.

Quick check: think like a fact checker

Start with a healthy dose of doubt. Does something seem too good to be true: too perfect, too fitting, too spectacular? Then assume AI may have played a role. Next, look at the context: where was it published, who is sharing it, and what interest does the sender have? Sources you don't know and accounts with no history deserve extra scrutiny.

The date is a signal, not proof

Always check the publication or upload date. Videos from years ago may have been re-uploaded or edited, and AI existed even then, though it was often less convincing. So treat the date as an indication, not as conclusive evidence. Combine it with other signals, such as inconsistencies in shadows, hands, teeth, earrings, text on signs, or off lip-sync.

Practical checks that do work

  • Search in reverse (reverse image/video search) to find earlier uses or an alternative context.
  • Watch for artifacts: deformed fingers, “glassy” skin, unnatural patterns, garbled letters and logos.
  • Listen critically: monotone intonation, unnatural breathing pauses, or an oddly uniform vocal timbre can indicate AI audio.
  • Compare multiple sources: real events leave traces with reliable media and eyewitnesses.
  • Check the creator: does the account have a track record, a clear bio, and a consistent style?
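One more signal you can check yourself is an image's metadata: some AI tools write their name into the file, and provenance standards such as C2PA embed a content-credentials manifest. Below is a minimal sketch using the Pillow library; the marker list is illustrative, not exhaustive, and absence of markers proves nothing, since most platforms strip metadata on upload.

```python
from PIL import Image  # pip install Pillow

def ai_metadata_hints(path):
    """Scan an image's metadata for hints of AI generation.

    A hit is a clue, not proof; an empty result means nothing,
    because platforms routinely strip metadata. The marker list
    below is an illustrative assumption, not a complete registry.
    """
    markers = ("stable diffusion", "midjourney", "dall-e",
               "generated", "c2pa", "firefly")
    hints = []
    with Image.open(path) as img:
        # PNG text chunks and similar key/value pairs land in img.info
        for key, value in img.info.items():
            if any(m in f"{key}={value}".lower() for m in markers):
                hints.append(f"info: {key}")
        # EXIF Software (305) and Artist (315) fields
        exif = img.getexif()
        for tag in (305, 315):
            value = str(exif.get(tag, "")).lower()
            if any(m in value for m in markers):
                hints.append(f"exif tag {tag}: {value}")
    return hints
```

A script like this only supplements the checks above: a clean result still calls for reverse search and source comparison.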

AI makes creation more accessible, but it also makes deception easier. Don't rely blindly on labels or likes; use your own verification routine. In doubt? Ask for a source (“source?”) first instead of sharing the post itself. That way you help not only yourself but also your network become more resistant to AI-driven disinformation.

