Explainable AI is a set of tools that helps engineers understand how machine learning models arrive at predictions, and it provides the transparency needed to instil confidence in the results of these complex models.

“Explainable AI is a set of tools and techniques that help us to understand model decisions and uncover problems like bias,” says Stephane Marouani, the Country Manager at MathWorks Australia. “Explainability can help those working with AI understand how machine learning models arrive at predictions, which can be as simple as understanding which features drive model decisions but more difficult when explaining complex models.”

AI is transforming nearly every industry and application area, and with that transformation comes a demand for highly accurate AI models. These models can often be more accurate than traditional methods, yet that accuracy can come at a price. Most advanced AI models are “black boxes”: the underlying neural network techniques make results hard to trace and the line of reasoning difficult to understand.

This is truer than ever today, as two separate but interconnected megatrends converge: the race to electrify everything and the ubiquitous integration of AI. AI is being deployed in heavy construction vehicles that are connected to the cloud and to each other within a fleet, while those same vehicles are being electrified to reach zero emissions.

For all the benefits of moving to more complex AI models and larger data sets, such as greater decision accuracy and predictive power, understanding what is happening inside the model becomes increasingly challenging. In general, more powerful models tend to be less explainable, so engineers will need new approaches to maintain confidence in their AI models as predictive power increases.

Explainability methods fall into two categories: global and local. Global methods provide an overview of the most influential variables in the model based on the input data and predicted output. They include feature ranking, which sorts features by their impact on model predictions, and partial dependence plots, which home in on one specific feature’s impact on model predictions across the whole range of its values.
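
As a minimal illustration of both global methods, the following MATLAB sketch trains a regression ensemble on the built-in carsmall data set and then calls predictorImportance and plotPartialDependence from Statistics and Machine Learning Toolbox; the data set and model choice are illustrative, not specific recommendations from the article.

```matlab
% Train a boosted regression ensemble, then apply two global
% explainability methods: feature ranking and partial dependence.
load carsmall
tbl = table(Horsepower, Weight, Acceleration, Displacement, MPG);
mdl = fitrensemble(tbl, 'MPG');     % boosted trees predicting fuel economy

% Feature ranking: sort predictors by their impact on model predictions
imp = predictorImportance(mdl);
bar(categorical(mdl.PredictorNames), imp)
ylabel('Predictor importance')

% Partial dependence: how predicted MPG changes across the range of Weight
figure
plotPartialDependence(mdl, 'Weight')
```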

Local methods explain a single prediction result. The most popular local methods, sketched in the example after this list, are:

  • LIME for machine and deep learning: Local Interpretable Model-agnostic Explanations (LIME) can be used in both traditional machine learning and deep neural network debugging. The idea is to approximate a complex model with a simple, explainable model in the vicinity of a point of interest and thus determine which predictors most influenced the decision.
  • Shapley values: The Shapley value of a feature at a query point explains the deviation of the prediction from the average prediction due to that feature. Use Shapley values to demonstrate the contribution of individual features to a prediction at the specified query point.
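
A minimal MATLAB sketch of both local methods, again assuming Statistics and Machine Learning Toolbox; the Fisher iris classifier and query point here are purely illustrative:

```matlab
% Explain one prediction of a black-box classifier with LIME and Shapley values.
load fisheriris
mdl = fitcensemble(meas, species);          % black-box ensemble classifier
queryPoint = meas(10,:);                    % the single prediction to explain

% LIME: fit a simple interpretable model in the vicinity of the query point
limeExplainer = lime(mdl, meas);
limeExplainer = fit(limeExplainer, queryPoint, 2);   % two most important predictors
figure, plot(limeExplainer)

% Shapley values: each feature's contribution to the deviation of this
% prediction from the average prediction
shapExplainer = shapley(mdl, meas);
shapExplainer = fit(shapExplainer, queryPoint);
figure, plot(shapExplainer)
```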

Visualisations are one of the best ways to assess explainability when building models for image processing or computer vision applications. Local methods like Grad-CAM and occlusion sensitivity can identify locations in images and text that most strongly influenced the model’s prediction.
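
For example, a brief sketch with a pretrained image classifier, assuming Deep Learning Toolbox and its SqueezeNet support package; the network and test image are illustrative only:

```matlab
% Visualise which image regions drive a pretrained classifier's prediction.
net = squeezenet;                                      % pretrained CNN
inputSize = net.Layers(1).InputSize(1:2);
img = imresize(imread('peppers.png'), inputSize);      % built-in test image
label = classify(net, img);                            % predicted class

% Grad-CAM: gradient-based map of the regions supporting the prediction
map = gradCAM(net, img, label);
figure, imshow(img), hold on
imagesc(map, 'AlphaData', 0.5), colormap jet, title('Grad-CAM')

% Occlusion sensitivity: mask image patches and observe how the score changes
occMap = occlusionSensitivity(net, img, label);
figure, imshow(img), hold on
imagesc(occMap, 'AlphaData', 0.5), colormap jet, title('Occlusion sensitivity')
```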

Explainability is also important when facing regulatory requirements. For example, in 2018 the European Union introduced a right to explanation in the General Data Protection Regulation (GDPR). As AI models become more pervasive and more complex, safety-critical applications are good candidates for explainable AI. Explainability can be included in verification and validation processes to ensure that minimum standards are met and that engineers can have confidence in the reliability and robustness of the system against data bias or adversarial attacks.

“The more complex the AI model is, the harder it is to explain the results,” states Marouani. “This sets an obvious challenge for engineers. How do we create complex AI models to tackle large data sets and increase decision accuracy while demonstrating transparency in the model? As explainability methods continue to be developed, engineers will need to make sure AI tools provide transparency; however, they must also be able to demonstrate the results and transparency of their models to end users from the beginning.”

“I started my career in the 90s working with decision-making software based on truth maintenance system techniques such as rule or inference engines,” explains Marouani. “Such systems were popular then, and remain so now, because it is easy to demonstrate the line of reasoning from facts to conclusions. Neural network techniques, which are the basis of machine learning models, were not as popular with businesses because the decisions and associated biases were harder to detect and explain.

“In the last 20 years, with the strong investment in AI, explainability techniques have been an important development in reducing the angst that business users feel about AI-based automated decisions.

“As AI becomes more complex and pervasive across many different industries, I foresee increased pressure from business users, as well as regulation, for AI explainability. Engineers will need to integrate explainability into the development cycle of their systems.”

mathworks.com.au