Too good to be true? How to recognize AI content


AI tools have become so good that telling real content from generated content is increasingly difficult. We scroll past flawless photos, natural-sounding voices, and seamless videos, while the technology behind them keeps improving rapidly. That's impressive, but also tricky: how do you still know what to trust?
Article 50 of the EU AI Act requires clear labeling when content has been generated or manipulated by AI, covering text, images, audio, and video. In practice, such labels are still rare and enforcement is limited for now. So don't count on a label; take your own checks seriously.
Start with a healthy dose of doubt. Is something too good to be true: too perfect, too fitting, too spectacular? Then assume AI may have played a role. Next, look at the context: where was it published, who is sharing it, and what interest does the sender have? Unknown sources and accounts with no history deserve extra scrutiny.
Always check the publication or upload date. Videos from years ago may have been re-uploaded or edited, and AI existed back then too, though it was usually less convincing. Treat the date as a clue, not as conclusive evidence. Combine it with other signals, such as inconsistencies in shadows, hands, teeth, earrings, text on signs, or unnatural lip-sync.
AI makes creation more accessible, but it also makes deception easier. Don't rely blindly on labels or likes; build your own verification routine. In doubt? Share your question ("source?") first instead of forwarding the message itself. That way you help not only yourself but also your network become more resistant to AI-driven disinformation.

