
Distrust in AI models: Four ways to overcome C-suite executive hesitancy and improve higher-level decision-making

The rise of artificial intelligence (AI) and the development of AI-powered programs have been accompanied by distrust. It is now well known that AI investments have not done enough to build trust in the insights artificial intelligence delivers.

This distrust of AI is predominantly seen among the C-suite, with executives resisting the use of AI-driven decision-making. C-suite executives have long resisted data analytics for higher-level decision-making, relying instead on gut-level decisions based on field experience.

AI use in low-level decision-making

According to Harvard Business Review, AI is already being widely adopted for “tactical, low-level decision-making” in many industries. This includes adoption of AI for credit scoring, upselling recommendations, chatbots, or managing machine performance.

However, the ability of AI programs to make accurate decisions is yet to be proven for higher-level strategic decisions. For C-suite executives to trust AI decisions, it needs to prove its mettle in areas such as “recasting product lines, changing corporate strategies, re-allocating human resources across functions, or establishing relationships with new partners.”

“AI is mainly being used for tactical rather than strategic purposes — in fact, finding a cohesive long-term AI strategic vision is rare,” Amit Joshi and Michael Wade of IMD Business School in Switzerland write in an examination of AI activities among financial and retail organisations.

A survey by Deloitte found that 67 per cent of executives say they are “not comfortable” accessing or using data from advanced analytics systems. Even among companies with strong data-driven cultures, 37 per cent expressed discomfort.

A similar survey by KPMG found that 67 per cent of CEOs often prefer to make decisions based on their own intuition and experience rather than on insights generated through data analytics. Overcoming the C-suite’s distrust of AI is a challenge, and HBR offers four ways to boost executive confidence in AI-assisted decision-making.

Reliable AI models

One problem with AI models is that they often deliver a negative experience, and AI biases do little to build trust in them. Data scientists and analysts who have worked on failed AI models broadly agree that the common denominator behind such failures is a lack of quality data.

C-suite executives are particularly wary of the vast amount of unstructured data being used to create machine learning (ML) and deep learning (DL) models. Unstructured data is easier to collect, but it is unusable until it is properly classified, labelled, and cleansed so that AI systems can build and train models on it.

“As a result, data fed into AI systems may be outdated, not relevant, redundant, limited, or inaccurate. Partial data fed into AI/ML models will only provide a partial view of the enterprise,” Andy Thurai and Joe McKendrick note for HBR.

It is imperative that companies focus on data preparation to create reliable AI models capable of delivering sound results. To gain executive confidence, context and reliability are key, and the right tooling makes it possible to build such models.
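To make that concrete, the sketch below shows the kind of basic data-quality checks that can run before any model training starts. It is a minimal illustration, not a prescription from the HBR authors: it assumes a pandas DataFrame loaded from a hypothetical customers.csv file with a last_updated column, and simply reports duplicates, missing values, and how stale the newest record is.

```python
from typing import Optional

import pandas as pd

def data_quality_report(df: pd.DataFrame, date_column: Optional[str] = None) -> dict:
    """Summarise basic quality issues before a dataset is used to train a model."""
    report = {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        # Share of missing values per column, worst offenders first.
        "missing_ratio": df.isna().mean().sort_values(ascending=False).to_dict(),
    }
    if date_column is not None:
        # Flag stale data: how old is the most recent record?
        newest = pd.to_datetime(df[date_column]).max()
        report["days_since_newest_record"] = int((pd.Timestamp.now() - newest).days)
    return report

# Hypothetical customer dataset; file name and column names are illustrative only.
customers = pd.read_csv("customers.csv")
print(data_quality_report(customers, date_column="last_updated"))
```

A report like this does not fix the data, but it gives executives and data teams a shared, inspectable starting point for judging whether a dataset is fit to feed a model.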

Avoid data biases

Another source of executive hesitancy is the ongoing and justifiable concern that AI results may lead to discrimination within the organisation or harm customers. This once again highlights the need to cleanse the data of any biases.

If an AI model is trained on biased data, the resulting model will be skewed towards biased recommendations. AI models, and the decisions they make, are only as unbiased as the data behind them. There are an estimated 175 human biases, and incoming data needs to be analysed for these and other negative traits.

The data used in higher-level decision-making needs to be thoroughly vetted to assure executives that it is “proven, authoritative, authenticated, and from reliable sources.”
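One simple, widely used check is sketched below: the disparate impact ratio, which compares positive-outcome rates across groups in historical data. The example is hypothetical (a loan_decisions.csv file with gender and approved columns) and is not a check the HBR authors specify; a real bias audit would go well beyond a single metric.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest to the highest positive-outcome rate across groups.

    Values well below 1.0 (a common rule of thumb flags anything under 0.8)
    suggest the historical data, or decisions derived from it, may be skewed
    against one group and deserves a closer look before it trains a model.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.min() / rates.max())

# Hypothetical historical lending data with a binary 'approved' outcome.
loans = pd.read_csv("loan_decisions.csv")
print(f"Disparate impact ratio: {disparate_impact_ratio(loans, 'gender', 'approved'):.2f}")
```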

AI that supports ethical and moral decision-making

Businesses are under pressure to prove that they operate morally and ethically, and AI-assisted decisions need to reflect those values as well. For executives this matters because they want their company to be seen as one that holds ethical and moral values and operates with integrity.

The need to ensure that AI delivers ethical and moral decisions also carries legal liability. Wrong decisions made by AI-assisted models can be challenged in court and will only draw further scrutiny.

The European Commission has proposed the AI Liability Directive, which would allow those affected by AI models to seek compensation through a well-defined legal framework. One way executives can overcome their hesitancy and uphold ethical and moral values is by ensuring that human values are applied to their AI systems.

Explainable AI

The most critical step in eliminating executive hesitancy is creating AI models that are explainable. Explainable AI remains a major challenge, and most AI decisions don’t have explainability built into them.

“When a decision is made and an action is taken that risks millions of dollars for an enterprise, or it involves people’s lives/jobs, saying AI made this decision so we are acting on it is not good enough,” the authors write.

The results produced by AI, and the actions taken based on an AI model, cannot be opaque. AI systems need to be developed in such a way that the data used to train the model, and the decisions it produces, can be easily explained. A third-party governance framework built into higher-level decision-making will further elevate confidence in such AI models.
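To illustrate what building explainability in can look like in practice, the sketch below uses scikit-learn’s permutation importance to rank which input features a trained model actually relies on. It is a minimal, hypothetical example (a deal_outcomes.csv dataset with a binary succeeded label), not the approach the HBR authors mandate, and strategic-level models would typically pair such feature attributions with a governance review.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical dataset: numeric features describing past deals, plus a binary label.
data = pd.read_csv("deal_outcomes.csv")
X, y = data.drop(columns=["succeeded"]), data["succeeded"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
# Features with large drops are the ones the model's predictions actually depend on,
# which gives executives a concrete answer to "why did the model decide this?".
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```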

Editorial Staff