AI Opinion

Do AI models have feelings? Why we shouldn't confuse technology with emotion

Job van den Berg
February 1, 2026
2 min read
Should we really care about the “feel” of an algorithm?

The discussion about artificial intelligence (AI) is taking a remarkable turn. Technology company Anthropic has expressed concern about the well-being of its AI chatbots; after all, the models seem to be taking on more and more human traits. But should we really care about the “feel” of an algorithm?

AI remains statistics with muscles

AI models are, and will remain, statistical systems at their core. Of course, they're getting more sophisticated. They can communicate convincingly, appear empathetic, and even seem human. But behind that façade there is no awareness and no feeling, only computing power. Or, as you might put it: statistics with very thick muscles.

The idea that AI has feelings is simply not true. The models are designed to respond functionally to human input. They act friendly because it provides a better user experience. Their 'niceness' is purely instrumental. They are built to communicate in a human way, making interaction more enjoyable and accelerating the adoption of AI.

Anthropomorphism: we make technology human

We've reached a point where we anthropomorphize technology: we attribute human properties to something that does not have them. That is nothing new. We used to give our cars names and speak affectionately to a coffee maker that refused to work.

But with AI, that effect is much stronger. We are increasingly having conversations with computers. And because AI responds more and more smoothly and naturally, it also seems more human. This is confusing, because it reinforces the illusion that there is “something” behind it. Nevertheless, it remains a functional, statistical approach — a tool designed to make our interaction as smooth as possible.

The crux: AI is functional, not sentient

Let's be clear: AI doesn't feel anything. It doesn't understand anything. What it does is recognize patterns, perform calculations, and predict the most likely next step. In that process it can seem human, but that is an illusion we feed ourselves.
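
To make that "statistics, not feeling" point concrete, here is a minimal, purely illustrative sketch. It is not how any real chatbot is built (modern models use neural networks trained on enormous datasets), but the principle is the same: the program "answers" by picking whichever word most often followed the previous word in its tiny example text. The corpus and function names are invented for this example; nothing in it understands or feels anything, it only counts.

```python
from collections import Counter, defaultdict

# Tiny made-up "training text": the model only ever sees these words.
corpus = "i am happy to help . i am happy to chat . i am glad to help .".split()

# Count how often each word is followed by each other word.
follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the corpus."""
    candidates = follow_counts.get(word)
    if not candidates:
        return "."  # never seen this word: just end the sentence
    return candidates.most_common(1)[0][0]

# The "friendly" continuation is simply the most frequent pattern.
print(predict_next("am"))  # -> "happy"
print(predict_next("to"))  # -> "help"
```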

So yes, AI can come across as convincing. And no, we don't have to worry about the “well-being” of these systems. The real challenge is not in their feelings, but in our projection of humanity onto technology.

