How to prevent errors with AI search engines (Perplexity & ChatGPT Search)


AI tools like Perplexity and ChatGPT Search are rapidly emerging as alternative search engines. They promise immediate answers to your questions, complete with a source citation. Sounds ideal, right?
But it is becoming increasingly clear that these tools don't always give you the right answer. And that is partly due to how we ask our questions.
One of the major pitfalls of using AI search engines is confirmation bias: the tool is inclined to confirm the premise of your question, even if that premise is wrong.
Practical example:
I just got a new car and asked via ChatGPT Search:
👉 "Is it true that option XYZ is included in this version?"
Answer: Yes.
A little later, I asked the question again, but as:
👉 "Is it true that option XYZ is not included in this version?"
Answer: Yes.
Two opposite answers, both presented with conviction. How is that possible?
AI search engines such as ChatGPT Search combine a language model with a search engine. They use statistical probabilities to generate responses based on your prompt. And if you ask an affirmative question, chances are that the model wants to confirm it too, because that's what it "thinks" you're looking for.
💡 In short: Asking affirmative questions often leads to misleading answers.
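The car example above can be turned into a simple consistency check: ask the model the same claim phrased both ways and see whether the answers contradict each other. The sketch below is a toy illustration with a stubbed, hypothetical `agreeable_model` function standing in for a real AI search engine; it is not an actual API and only simulates the agree-with-the-premise behavior described here.

```python
# Toy demonstration of a confirmation-bias consistency check.
# `agreeable_model` is a hypothetical stand-in for an AI search engine
# that tends to confirm whatever premise it is given.

def agreeable_model(question: str) -> str:
    """Stand-in model: confirms any 'Is it true that ...' premise."""
    if question.lower().startswith("is it true"):
        return "Yes"
    return "(open answer)"

def shows_confirmation_bias(claim: str) -> bool:
    """Ask the claim and its negation as affirmative questions.
    If both get the same answer, the model confirmed contradictory
    premises, which signals confirmation bias."""
    positive = agreeable_model(f"Is it true that {claim}?")
    negated_claim = claim.replace(" is ", " is not ")
    negative = agreeable_model(f"Is it true that {negated_claim}?")
    return positive == negative  # same answer to opposite claims -> biased

claim = "option XYZ is included in this version"
print(shows_confirmation_bias(claim))  # prints True: both phrasings confirmed
```

With a real tool you would send both phrasings to the actual search engine instead of the stub; getting a confident "Yes" to both, as in the car example, is the red flag to watch for.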
What a lot of people don't realize: good prompting is also crucial for these AI search engines. We're used to keeping search queries simple, like we do at Google. But AI needs more context.
So, rather ask neutral or open questions:
❌ Not: "Is it true that..."
✅ Better: "What are the arguments for and against the presence of option XYZ in this version?"
By making your prompt more neutral and complete, you reduce the chance of incorrect or one-sided answers.
Fortunately, there is a better option for really serious queries: Deep Research, available in the paid version of ChatGPT (ChatGPT Plus with GPT-4 Turbo). This approach is much more reliable because it does not rely on language statistics alone, but also applies critical reasoning and looks up real data.
✅ Don't ask affirmative questions
✅ Use neutral, open prompts
✅ Think critically about the answer
✅ Use Deep Research for key queries
✅ Always check the source (and how reliable it is)
Please note: AI tools are powerful, but they are not oracles. With the right approach, you can achieve much better, more reliable results.
Want to learn more? Book a keynote with us or build reliable solutions via The Automation Group.

