AI Fundamentals

How to prevent errors with AI search engines (Perplexity & ChatGPT Search)

Job van den Berg
February 1, 2026 · 3 min read
AI tools are powerful, but they are not oracles. With the right approach, you can achieve much better, more reliable results.

AI tools like Perplexity and ChatGPT Search are rapidly emerging as alternative search engines. They promise immediate answers to your questions, complete with a source citation. Sounds ideal, right?

But it is increasingly apparent that these tools don't always give you the right answer. And that is partly due to how we phrase our questions.

AI search engines: fast, smart, but also susceptible to confirmation bias

One of the major pitfalls of AI search engines is confirmation bias: the tool is inclined to confirm the premise of your question, even if that premise is wrong.

Practical example:
I had just bought a new car and asked ChatGPT Search:
👉 "Is it true that option XYZ is included in this version?"
Answer: Yes.

A little later, I asked the same question, but phrased as:
👉 "Is it true that option XYZ is not included in this version?"
Answer: Yes.

Two opposite answers, both presented with conviction. How is that possible?

Why this happens: the role of language models and statistics

AI search engines such as ChatGPT Search combine a language model with a search engine. They use statistical probabilities to generate responses based on your prompt. So if you ask an affirmative question, chances are the model will confirm it, because that is what it "thinks" you are looking for.

💡 In short: asking affirmative questions often leads to misleading answers.

Prompting is key β€” even with search-based AI

What a lot of people don't realize: good prompting is also crucial for these AI search engines. We're used to keeping search queries simple, like we do at Google. But AI needs more context.

So ask neutral or open questions instead:
❌ Not: "Is it true that..."
✅ Instead: "What are the arguments for and against the presence of option XYZ in this version?"

By making your prompt more neutral and complete, you reduce the chance of incorrect or one-sided answers.
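If you query these tools programmatically, you can apply the same principle before sending the prompt. The sketch below is purely illustrative (the function name and the rewrite template are my own, not from any library): it catches the leading "Is it true that..." pattern from the example above and rewrites it into a neutral, two-sided question.

```python
import re

def neutralize_question(question: str) -> str:
    """Rewrite a leading 'Is it true that X?' question into a neutral,
    open prompt that asks for evidence on both sides.

    Illustrative sketch only: real questions take many forms, and this
    simple pattern match covers just the 'Is it true that...' case.
    """
    match = re.match(r"(?i)^is it true that\s+(.*?)\??$", question.strip())
    if match:
        claim = match.group(1).rstrip(".")
        return (f"What evidence supports or contradicts the claim that "
                f"{claim}? Please cite sources for both sides.")
    # No leading pattern found: leave the question unchanged.
    return question

print(neutralize_question(
    "Is it true that option XYZ is included in this version?"))
```

Note that both phrasings from the car example ("is included" and "is not included") collapse into the same balanced prompt structure, which is exactly the point: the model no longer knows which answer you are hoping for.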

What is more reliable? Deep Research via ChatGPT Plus

Fortunately, there is a better option for really serious queries: Deep Research within the paid version of ChatGPT (ChatGPT Plus with GPT-4 Turbo). With Deep Research:

  • the web is actively searched;
  • sources are collected and analysed;
  • it takes a little longer (20-30 minutes on average);
  • you get a comprehensive report in response.

This approach is much more reliable because it does not rely on language statistics alone: it also applies critical reasoning and looks up real data.

Summary: How to prevent AI search engine deception

✅ Don't ask affirmative questions
✅ Use neutral, open prompts
✅ Think critically about the answer
✅ Use Deep Research for key queries
✅ Always check the source (and how reliable it is)

Please note: AI tools are powerful, but they are not oracles. With the right approach, you can achieve much better, more reliable results.


Want to learn more? Book a keynote with us or build reliable solutions with The Automation Group.


Remy Gieling
Job van den Berg
