If you’ve ever received a letter from a bank explaining how different financial factors affected a credit application, you’ve seen explainable AI at work: a computer uses math and a complex set of formulas to calculate a score and determine whether to approve or deny your application.
Some data points weigh more heavily than others in that decision. Perhaps your long history of paying on time or your low debt load helps get your application approved.
Likewise, explainable AI shows humans how it arrives at decisions by evaluating the different inputs to a computation. While this may sound arcane or relevant only to the most hardcore data folks, explainable AI offers significant business benefits that anyone interested in applied AI should consider. It also provides a window into how AI works and builds trust in its recommendations.
Not all AI is explainable
Humans build AI systems, but those builders cannot always determine exactly how an AI will reach a particular decision or output. Such AI systems are sometimes called “opaque” because it’s hard to know what’s going on inside them. They make a decision or spit out a number, but it’s hard to see exactly what process led to that result.
However, many AI systems are built so that humans can understand how they came to their conclusions. These are called “interpretable.” In some industries and countries, there is growing regulatory pressure to require explainability when AI is used in areas such as financial services, human resources, and healthcare. Marketing also has a role to play in responsible AI governance, especially when explainable analytics are used in the marketing process.
What Explainable AI Offers Businesses
Explainable AI can reveal which factors are most important to the system as a whole, and which factors are most important to any particular decision or output. In data science, these factors are often referred to as “features”.
Explanations come at two levels:
- Global interpretation. For example, if you are predicting customer churn, you might learn that customer service interactions and website visits are the top features guiding the model’s churn predictions across all customers.
- Local interpretation. A global interpretation of AI predictions is helpful, but identifying the most influential features behind a specific prediction is often even more valuable. In the churn example, why is a particular customer predicted to be highly likely to churn? The reasons specific to that customer might be a recent decrease in activity and late payments. Once you understand those reasons, you can decide to take the right action to keep that customer, or let them churn.
Ideally, explainable AI systems that predict customer churn will provide both global and local explanations. You can gain detailed insight into the importance of features at two levels – broader issues related to churn as well as specific account-level explanations. Both interpretations are of great use to businesses leveraging AI.
Use global interpretation to guide business strategy
A global interpretation of a predictive model helps you understand how the model functions and assess whether it is performing as expected. The features that most strongly guide the model’s predictions should make sense, even if some of them surprise you. As you examine the important features, you may also notice problems: features ranked in an unexpected order, or features that you know should not be influencing the model at all. The global feature importance information can guide the next steps to improve the model.
Once you are comfortable with the model, the global interpretation of the model’s predictions can also serve as a valuable source of information to guide business strategy. These advanced predictive insights can reveal key areas for investment or improvement.
For example, let’s go back to the customer churn scenario. Suppose you notice that an important predictor in the churn model is the feature representing the number of times a customer has contacted customer service. This insight may help you determine that customer service training and staffing are areas your business needs to invest in to help reduce churn across the board.
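To make the global view concrete, here is a minimal sketch, assuming a Python/scikit-learn stack and entirely synthetic churn data; it ranks features by permutation importance, and the feature names (support_contacts, site_visits_30d, and so on) are hypothetical.

```python
# A minimal sketch of global interpretation on synthetic data: fit a churn
# model, then rank features by permutation importance. The feature names and
# the data-generating process here are hypothetical, purely for illustration.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "support_contacts": rng.poisson(2.0, n),   # calls/emails to customer service
    "site_visits_30d": rng.poisson(10.0, n),   # website visits in the last 30 days
    "late_payments": rng.poisson(0.5, n),
    "tenure_months": rng.integers(1, 60, n),
})
# Synthetic churn label driven mostly by support contacts and low activity.
logit = (0.6 * X["support_contacts"] - 0.2 * X["site_visits_30d"]
         + 0.8 * X["late_payments"] - 0.03 * X["tenure_months"])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Global view: which features matter most to the model's predictions overall?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda item: -item[1])
for name, score in ranking:
    print(f"{name:>18}: {score:.3f}")
```

If a feature like support_contacts floats to the top of a ranking like this, that is the kind of signal that might justify the investment in service training and staffing described above.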
Use local interpretation to personalize the customer experience
This kind of high-level guidance is just one way explainable AI can help achieve business goals. Even more valuable is the ability to explain individual predictions, for example, the predicted behavior of each customer scored by the model.
Knowing which features contribute most to the predicted outcome for an individual customer is invaluable information. You can now address those factors precisely in an attempt to influence (or improve) that customer’s outcome.
In the case of churn, perhaps the strongest predictor for a particular customer who is likely to churn is a recent drop in their activity on your service. Armed with these details, you can now plan how to reach that particular customer in order to retain them. Maybe it’s a promotion or a special event invitation that rekindles their interest, keeps them engaged, and keeps them from churning.
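As a rough illustration, here is a minimal sketch of a local explanation on synthetic data, reusing the same hypothetical feature names as above. It uses a simple perturbation approach (swap each feature for a typical value and watch the score move) as a stand-in for more rigorous local-attribution methods such as SHAP or LIME.

```python
# A minimal sketch of local interpretation: for one customer, see how much the
# predicted churn probability changes when each feature is replaced by the
# population median. Feature names and data are hypothetical; this is a
# simplified stand-in for methods such as SHAP or LIME.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 2000
X = pd.DataFrame({
    "support_contacts": rng.poisson(2.0, n),
    "site_visits_30d": rng.poisson(10.0, n),
    "late_payments": rng.poisson(0.5, n),
    "tenure_months": rng.integers(1, 60, n),
})
logit = 0.6 * X["support_contacts"] - 0.2 * X["site_visits_30d"] + 0.8 * X["late_payments"]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

customer = X.iloc[[0]]                       # one specific account
base_prob = model.predict_proba(customer)[0, 1]
print(f"Predicted churn probability: {base_prob:.2f}")

# Swap one feature at a time for the median customer's value; large drops in
# the churn score mark the features driving this particular prediction.
for col in X.columns:
    counterfactual = customer.copy()
    counterfactual[col] = X[col].median()
    delta = base_prob - model.predict_proba(counterfactual)[0, 1]
    print(f"{col:>18}: contribution ~ {delta:+.3f}")
```

The feature with the largest positive contribution is the one to address first: if it is a drop in site visits, a re-engagement promotion is a natural candidate action.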
Of course, you won’t wade through row after row of feature importance details for each of your thousands of customers and hand-pick the right action for each one. Instead, you can use these row-level predictive insights to define customer segments, each of which receives different actions matched to its needs.
Customer segmentation is already a common activity in business analysis, but it is usually based on a set of business rules built from past customer behavior. When explainable AI provides forward-looking predictions, those segments can instead be based on what is likely to happen in the future. For anyone who wants to personalize and shape tomorrow’s customer experience, that is a far more valuable perspective than simply seeing what happened yesterday.
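Here is a minimal sketch of what that segmentation step might look like, assuming a churn probability and a top driver have already been produced per customer by a model and a local-explanation step like the ones sketched above; the thresholds, segment names, and actions are hypothetical.

```python
# A minimal sketch of prediction-driven segmentation: map each customer's
# churn probability and top local driver to a next-best action. All values,
# thresholds, and segment names here are hypothetical.
import pandas as pd

# Assume these scores came out of a churn model plus a local-explanation step.
scores = pd.DataFrame({
    "customer_id": [101, 102, 103, 104, 105],
    "churn_probability": [0.82, 0.15, 0.67, 0.91, 0.34],
    "top_driver": ["support_contacts", "tenure_months", "site_visits_30d",
                   "late_payments", "site_visits_30d"],
})

def assign_segment(row):
    """Translate a forward-looking prediction into a concrete action."""
    if row["churn_probability"] < 0.5:
        return "healthy: standard nurture campaign"
    if row["top_driver"] == "support_contacts":
        return "at risk: proactive service outreach"
    if row["top_driver"] == "site_visits_30d":
        return "at risk: re-engagement promotion or event invitation"
    return "at risk: retention offer"

scores["segment"] = scores.apply(assign_segment, axis=1)
print(scores[["customer_id", "churn_probability", "segment"]])
```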
The Limits of Explainable AI
It is important to note that while explainable approaches offer clear advantages for achieving business outcomes, they are not without limitations. First, for complex machine learning models, the only form of interpretability available is often feature importance. It is difficult to understand exactly how those features are combined inside the model to generate the final prediction.
Of course, even explainable AI is useless if those features are based on poor-quality data. If you don’t have accurate, meaningful features based on clean, well-prepared data, your model’s performance and interpretability will suffer.
This problem is why data science professionals are increasingly adopting a data-centric mindset that emphasizes high-quality data and well-constructed features. (I’ve suggested elsewhere that this perspective should be extended from NLP and computer vision applications to tabular data.) While many data scientists spend a lot of time tinkering with models, that time may be better invested in data preparation, which pays off in model performance, interpretability, and business outcomes.
Regardless of how it is achieved, model interpretability deepens understanding of important business processes and enables action at both the strategic level and the individual level based on what is likely to happen next. In an era when businesses need as much foresight as possible amid rapidly changing market conditions, explainable AI provides an invaluable window into current and future trends.