Explainable AI Principles: What Should You Know About XAI

by ITRex, August 28th, 2021

Too Long; Didn't Read

Explainable artificial intelligence (XAI) refers to a set of techniques, design principles, and processes that help developers/organizations add a layer of transparency to AI algorithms so that they can justify their predictions. XAI can describe AI models, their expected impact, and potential biases. With this technology, human experts can understand the resulting predictions and build trust and confidence in the results.

Glenn Rodriguez, an inmate at a New York correctional facility, was due for parole soon. He had been on his best behavior and was looking forward to being released and starting a new life. To his horror, he was denied parole. The AI algorithm the parole board used had given him a poor score, and because it wasn’t explainable, no one knew that something had gone terribly wrong. Mr. Rodriguez fought his case and was eventually released, but only after spending an unnecessary extra year in prison.

Unfortunately, this type of mistake can occur wherever AI is deployed. If we can’t see the reasoning behind an algorithm’s decisions, we can’t spot the problem. You can prevent this issue in your organization by following explainable AI principles while developing your artificial intelligence solution.

So, what is explainable artificial intelligence (XAI)? How do you decide on the right level of explainability for your sector? And which challenges should you expect along the way?

What is explainable AI, and why should you care?

When speaking of AI, many people picture black-box algorithms that take in millions of data points, work their magic, and deliver unexplainable results that users are simply expected to trust. Such a model is created directly from data, and not even the engineers who built it can explain how it arrives at its outputs.

Black-box models, such as neural networks, excel at challenging prediction tasks. They produce remarkably accurate results, but no one can understand how they arrived at their predictions.

In contrast, with explainable white-box AI, users can understand the rationale behind the model’s decisions, which makes it increasingly popular in business settings. White-box models are not as technically impressive as black-box algorithms, but the tradeoff buys transparency and a higher level of trust, which makes them preferable in highly regulated industries.

What is explainable AI?

Explainable AI (XAI) refers to a set of techniques, design principles, and processes that help developers and organizations add a layer of transparency to AI algorithms so that they can justify their predictions. XAI can describe AI models, their expected impact, and potential biases. With this technology, human experts can understand the resulting predictions and build trust and confidence in the results.

When speaking of explainability, it all boils down to what you want to explain. 

There are two possibilities: 

  1. Explaining the AI model’s pedigree: how the model was trained, which data was used, which types of bias are possible, and how they can be mitigated.
  2. Explaining the overall model: this is also called “model interpretability.”

There are two approaches to achieving model interpretability:

  • Proxy modeling: a more understandable model, such as a decision tree, is used as an approximation of a more complex AI model. Even though this technique gives a simple overview of what to expect, it remains an approximation and can deviate from the original model’s behavior (see the sketch after this list).
  • Design for interpretability: designing AI models in a way that forces simple, easy-to-explain behavior. This technique can result in less powerful models, as it removes some of the more sophisticated tools from the developer’s toolkit.
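To make proxy modeling concrete, here is a minimal sketch in Python with scikit-learn. Everything in it is an assumption for illustration: the synthetic dataset, the choice of a gradient-boosting classifier as the “black box,” and a depth-limited decision tree as the surrogate, trained on the black box’s predictions so its rules can be read and audited.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in data; a real project would use its own features.
X, y = make_classification(n_samples=2000, n_features=6, random_state=42)

# 1. Train the opaque, high-accuracy "black box" model.
black_box = GradientBoostingClassifier(random_state=42).fit(X, y)

# 2. Train an interpretable surrogate on the black box's predictions,
#    not on the original labels, so it mimics the black box's behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=42)
surrogate.fit(X, black_box.predict(X))

# 3. Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.1%}")

# 4. The surrogate's handful of rules can be printed and audited.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```

The fidelity score is a reminder of the caveat above: the surrogate explains an approximation of the black box, not the black box itself.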


Explainable AI principles

The US National Institute of Standards and Technology (NIST) developed four explainable AI principles:

  • The system should be able to explain its output and provide supporting evidence, at a minimum. There are several explanation types:
    ○ Explanations that benefit the end-user 
    ○ Explanations that are designed to gain trust in the system 
    ○ Explanations that are expected to meet regulatory requirements 
    ○ Explanations that can help with algorithm development and maintenance 
    ○ Explanations that benefit the model’s owner, such as movie recommendation engines
  • The given explanation has to be meaningful, enabling users to complete their tasks. If there is a range of users with diverse skill sets, the system needs to provide several explanations catering to the available user groups
  • The explanation needs to accurately reflect the process the system used to generate its output; this explanation accuracy is distinct from output accuracy
  • The system has to operate within its designed knowledge limits to ensure a reasonable outcome

Explainable AI example

XAI can provide a detailed, model-level explanation of why a particular decision was made, expressed as a set of understandable rules. In the simplified loan application example below, applicants who are denied a loan receive a straightforward justification: every applicant who is over 40 years old, saves less than $433 per month, and applies for credit with a payback period of over 38 years will be denied a loan. The same goes for younger applicants who save less than $657 per month.
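As a rough illustration, the rules above could be encoded and surfaced to the applicant like this. The thresholds simply mirror the simplified example, and the function is hypothetical; in a real system, the rules would come from a trained, interpretable model rather than being hard-coded.

```python
# Hypothetical sketch: the rule thresholds are taken from the simplified
# example above, not from a real credit-scoring model.
def explain_loan_decision(age: int, monthly_savings: float, payback_years: int) -> str:
    if age > 40 and monthly_savings < 433 and payback_years > 38:
        return ("Denied: applicants over 40 who save less than $433 per month "
                "and request a payback period over 38 years do not qualify.")
    if age <= 40 and monthly_savings < 657:
        return "Denied: applicants aged 40 or younger must save at least $657 per month."
    return "Approved: the application meets all rule thresholds."

print(explain_loan_decision(age=45, monthly_savings=400, payback_years=40))
print(explain_loan_decision(age=30, monthly_savings=700, payback_years=20))
```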

Why is explainable AI important?

In some industries, an explanation is necessary for AI algorithms to be accepted, whether because of regulations, the human factor, or both. Think about brain tumor classification: no doctor will be comfortable preparing for surgery based solely on “the algorithm said so.” And what about loan granting? Clients whose applications are denied will want to understand why. Yes, there are more tolerant use cases where an explanation is not essential. For instance, predictive maintenance is not a matter of life or death, but even there, employees will feel more confident knowing why particular equipment might need preemptive repair.

Senior management often understands the value of AI applications, but they also have their concerns. According to Gaurav Deshpande, VP of Marketing at TigerGraph, there is always a “but” in executives’ reasoning: “…but if you can’t explain how you arrived at the answer, I can’t use it. This is because of the risk of bias in the black box AI system that can lead to lawsuits and significant liability and risk to the company brand as well as the balance sheet.” 

The ideal XAI solution is one that is reasonably accurate and can explain its results to practitioners, executives, and end users. Incorporating explainable AI principles into intelligent software:

  • Brings relief to system users. They understand the reasoning and can get behind the decisions being made. For example, a loan officer will be more comfortable informing a customer that their loan application was declined if they can understand how the decision was made.
  • Ensures compliance. By verifying the provided explanation, users can see whether the algorithm’s rules are sound and in accordance with the law and ethics.
  • Allows for system optimization. When designers and developers see the explanation, they can spot what is going wrong and fix it.
  • Helps eliminate bias. When users view the explanation, they can spot biased judgment, override the system’s decision, and correct the algorithm to avoid similar scenarios in the future.
  • Empowers employees to act upon the system’s output. For example, an XAI system might predict that a particular corporate customer will not renew their software license. The manager’s first reaction might be to offer a discount, but what if the real reason for leaving is poor customer service? The system’s explanation will reveal this.
  • Empowers people to take action. XAI enables the parties affected by certain decisions to challenge and potentially change the outcome (such as in mortgage-granting decisions).

Which industries need XAI the most?

  • Healthcare
  • Finance
  • Automotive
  • Manufacturing

1. Explainable AI in healthcare

AI has many applications in healthcare. Various AI-powered medical solutions can save doctors’ time on repetitive tasks, allowing them to focus primarily on patient-facing care. Additionally, algorithms are good at diagnosing various health conditions, as they can be trained to spot minor details that escape the human eye. However, when doctors cannot explain an outcome, they are hesitant to use the technology and act on its recommendations.

One example comes from Duke University Hospital. A team of researchers installed a machine learning application called Sepsis Watch, which would send an alert when a patient was at risk of developing sepsis. The researchers discovered that doctors were skeptical of the algorithm and reluctant to act on its warnings because they did not understand it. 

This lack of trust passes on to patients, who are hesitant to be examined by AI. Harvard Business Review published a study in which participants were invited to take a free assessment of their stress level. 40% of the participants registered for the test when they knew a human doctor would perform the evaluation; only 26% signed up when an algorithm would perform it.

When it comes to diagnosis and treatment, the decisions made can be life-changing, so it is no surprise that doctors demand transparency. Luckily, with explainable AI, this is becoming a reality. For example, Keith Collins, CIO of SAS, mentioned his company is already developing such technology. Here is what he said: “We’re presently working on a case where physicians use AI analytics to help detect cancerous lesions more accurately. The technology acts as the physician’s ‘virtual assistant,’ and it explains how each variable in an MRI image, for example, contributes to the technology identifying suspicious areas as probable for cancer while other suspicious areas are not.”

2. XAI in finance

Finance is another heavily regulated industry where decisions need to be explained. It is vital that AI-powered solutions are auditable; otherwise, they will struggle to enter the market. 

AI can help assign credit scores, assess insurance claims, and optimize investment portfolios, among other applications. However, if the algorithms provide biased output, it can result in reputational loss and even lawsuits. 

Not long ago, Apple made headlines with its Apple Card product, which turned out to be biased against women, assigning them lower credit limits. Apple’s co-founder, Steve Wozniak, confirmed this claim. He recalled that he and his wife have no separate bank accounts or separate assets, and yet, when they applied for the Apple Card, his credit limit was ten times higher than his wife’s. As a result of this unfortunate event, the company was investigated by the New York State Department of Financial Services.

With explainable AI, companies can avoid such scandals by justifying the output. Loan granting, for example, is one use case that can benefit from XAI: the system would be able to justify its final recommendation and give clients a detailed explanation if their loan application is declined, allowing them to improve their credit profiles and reapply later.

3. Explainable AI in the automotive industry

Autonomous vehicles operate on vast amounts of data, requiring AI to analyze and make sense of it all. However, the system’s decisions need to be transparent for drivers, technologists, authorities, and insurance companies in case of any incidents. 

Also, it is crucial to understand how vehicles will behave in an emergency. Here is how Paul Appleby, former CEO of the data management software company Kinetica, voiced his concern: “If a self-driving car finds itself in a position where an accident is inevitable, what measures should it take? Prioritize the protection of the driver and put pedestrians in grave danger? Avoid pedestrians while putting the passengers’ safety at risk?”

These are tough questions to answer, and people would disagree on how to handle such situations. But it is important to set guidelines that the algorithm can follow in such cases. This will help passengers decide whether they are comfortable traveling in a car designed to make certain decisions. Additionally, after an incident, the provided explanation will help developers improve the algorithm in the future.

4. Explainable artificial intelligence in manufacturing

AI has many applications in manufacturing, including predictive maintenance, inventory management, and logistics optimization. With its analytical capabilities, this technology can add to the “tribal knowledge” of human employees. But it is easier to adopt decisions when you understand the logic behind them. 

Heena Purohit, Senior Product Manager for IBM Watson IoT, explains how their AI-based maintenance product approaches explainable AI. The system offers human employees several options for repairing a piece of equipment, and every option comes with a confidence score expressed as a percentage, so the user can still consult their “tribal knowledge” and expertise when making a choice. Each recommendation can also surface the knowledge graph output together with the input used in the training phase.
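The sketch below shows one way such confidence-ranked recommendations might be presented. It is purely illustrative, not IBM’s product or API, and the options, scores, and rationales are made-up placeholder values.

```python
from dataclasses import dataclass

@dataclass
class RepairOption:
    action: str        # recommended maintenance action
    confidence: float  # model confidence, 0..1
    rationale: str     # short explanation tied to the model's inputs

# Placeholder recommendations; a real system would produce these from
# its maintenance model and knowledge graph.
options = [
    RepairOption("Replace bearing on motor 3", 0.82,
                 "Vibration spectrum matches past bearing failures."),
    RepairOption("Re-tension drive belt", 0.64,
                 "Belt-slip signature in current draw over the last 48 hours."),
    RepairOption("No action; re-inspect in two weeks", 0.31,
                 "Readings remain close to the normal operating range."),
]

# Present the options ranked by confidence so the technician can weigh
# each suggestion against their own experience.
for opt in sorted(options, key=lambda o: o.confidence, reverse=True):
    print(f"{opt.confidence:5.0%}  {opt.action}\n       why: {opt.rationale}")
```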

Challenges on the way to explainable AI

The need to compromise on the predictive power 

Black-box algorithms, such as neural networks, have high predictive power but offer no justification for their output. As a result, users need to trust the system blindly, which can be challenging in certain circumstances. White-box AI offers the much-needed explainability, but its algorithms need to remain simple, compromising on predictive power.
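Here is a minimal sketch of this trade-off on an assumed synthetic dataset: a small neural network stands in for the black box and a depth-limited decision tree for the white box. The exact accuracy gap depends entirely on the data and tuning.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic data for illustration only.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black box: a small neural network.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(X_train, y_train)
# White box: a shallow decision tree whose few splits can be inspected.
white_box = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print(f"Neural network accuracy: {black_box.score(X_test, y_test):.3f}")
print(f"Shallow tree accuracy  : {white_box.score(X_test, y_test):.3f}")
# The shallow tree typically scores lower, but its handful of rules can be
# printed and audited, which the network's weights cannot.
```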

For example, AI has applications in radiology, where algorithms produce remarkable results, classifying brain tumors and spotting breast cancer faster than humans. However, when doctors decide on a patient’s treatment, it can be a life-and-death situation, and they want to understand why the algorithm arrived at a given diagnosis. It is daunting for doctors to rely on something they do not understand.

The concept of explainability 

There is no universal definition of explainability. It is often a subjective concept. Users might expect one type of explanation, while developers are willing to provide something else. Also, different audiences require tailored justifications, which results in one XAI system having to explain the same output in several different ways.

Security and robustness-related issues 

With XAI, if clients gain access to the algorithm’s decision-making process, they might behave adversarially, deliberately changing their behavior to influence the output. One study also raised the concern that someone with technical skills could recover parts of the dataset used to train the algorithm from its explanations, thereby violating privacy regulations.

How to get started with responsible and explainable AI

When your company is preparing to deploy responsible XAI solutions, the first step is to determine what exactly you need to explain and to whom.

Some of the questions addressed during the planning stage can be:

  • Is it a particular decision or the overall model?
  • Who is your audience? Are they technical users?
  • Do you need to provide different explanations for every output, or does one explanation suffice for everyone involved?

Next, your company needs to decide on the degree of explainability. Not all AI-based tools require the same degree of interpretability. PwC identifies six application criticality components that will help you determine your desired XAI level:

  • Decision impact: how will the decision affect all the involved parties? This is not limited to revenue-related aspects. For example, an incorrect cancer diagnosis will have a dire impact, while recommending an irrelevant movie will not be a big issue.
  • Control level: is it an autonomous AI system that makes decisions and acts upon them, or does the system’s output recommend actions to humans who can choose whether to follow or not?
  • Risks: any potential harm the algorithms can cause, whether to operations, sales, or employees, as well as any ethical, legal, environmental, or health-related considerations.
  • Regulations: the legal framework of the industry and whether the decisions need to be explained.
  • Reputation: how the system interacts with stakeholders — within the business and with the society in general.
  • Rigor: whether the application is accurate and able to generalize its conclusions to unseen data.

Finally, even after explainable artificial intelligence is in place, it is best to take steps to ensure your data usage stays ethical. Perform timely audits of your algorithms: a strong explainability feature can reveal any bias sneaking into your software, while a limited-explainability solution may let bias go unnoticed. You can also join the Partnership on AI consortium, or even develop your own set of ethical data usage principles, as Microsoft did; its principles include fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

On a final note…

Implementing traditional AI is challenging enough, let alone its explainable and responsible version. Despite the obstacles, XAI will bring relief to your employees, who will be more motivated to act upon the system’s recommendations when they understand the rationale behind them. Moreover, it will help you comply with your industry’s regulations and ethical considerations.

If you have an idea for an explainable AI solution to build, or if you are still unsure of how explainable your software needs to be, consult ITRex XAI experts.