

An introduction to Explainable Artificial Intelligence (XAI)

Recent developments in Artificial Intelligence (AI) are introducing new Machine Learning techniques to solve increasingly complicated problems with higher predictive capacity. However, this predictive power comes with increasing complexity, which can make these models difficult to interpret. Despite the fact that these models produce very accurate results, an explanation is needed in order to understand and trust their decisions. This is where eXplainable Artificial Intelligence (XAI) takes the stage. XAI is an emerging field that focuses on different techniques to break the black-box nature of Machine Learning models and produce human-level explanations. The black-box represents models that are too complex to interpret, in other words, opaque models such as the highly-favoured Deep Learning models. Of course, not all Machine Learning models are too complex; there are also transparent models like linear/logistic regression and decision trees. These models can provide some information about the relationship between a feature value and the target outcome, which makes them easier to interpret. However, this is not the case for complex models.

Why is XAI needed?

As explained by a colleague in a previous blog, "can we trust AI if we don't know how it works?", the main reason why we need XAI is to enable trust. It's important to explain the model's decisions to be able to trust them, especially when a decision based on the predictions concerns human safety.

Another reason XAI is needed is to detect and understand the bias in these decisions. Since bias can be found in many aspects of daily life, it's not a surprise that it can exist in the datasets used in practice, even the ones that are well established. Bias problems can result in unfairness with respect to sensitive characteristics such as gender or race. This issue is also investigated in a recent documentary on Netflix emphasizing the importance of biased Machine Learning algorithms and their effect on society. XAI can be useful in helping us to detect and understand fairness issues and therefore be able to eliminate them. If you need more convincing, you can find many more reasons mentioned in the why of explainable AI.

There is a wide variety of methods used for XAI, but since it is a very new field, there isn't an established consensus on the terms and the taxonomy yet. Therefore, there can be different perspectives on categorising these methods. Here are three different approaches to categorisation.

Categorisation based on the method's applicability to different models

The first approach is based on the method's applicability to different models. When we're able to apply the explanation method to any model after it's trained, we call it model-agnostic. As models gain more predictive power, they can lose their transparency; applying the explanation method after the training process therefore avoids sacrificing predictive power for interpretability. This can be seen as an advantage compared to model-specific methods, which are limited to one specific type of model; however, it doesn't mean that one is always better than the other. Model-specific approaches are also called intrinsic methods because they have access to the internals of a model, such as the weights in a linear regression model.
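To make the distinction concrete, here is a minimal sketch using scikit-learn on a toy regression dataset (the dataset, models and parameters are only illustrative, not part of the original article). It contrasts a model-specific explanation, reading the coefficients of a linear regression, with a model-agnostic one, permutation importance, which only needs predictions from a fitted model.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Toy data standing in for a real tabular dataset.
X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model-specific / intrinsic: a linear regression exposes its internals,
# so the learned coefficients directly explain the model.
linear = LinearRegression().fit(X_train, y_train)
print("Linear coefficients:", linear.coef_)

# Model-agnostic: permutation importance only needs the fitted model's
# predictions, so it works for any estimator, including opaque ones.
opaque = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
result = permutation_importance(opaque, X_test, y_test, n_repeats=10, random_state=0)
print("Permutation importances:", result.importances_mean)

Both approaches answer "which features matter", but only the first relies on the model's internal structure; the second treats the model purely as a prediction function.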

Categorisation based on the scope of the explanation

Another approach is categorisation based on the scope of the explanation. The explanation might be needed at the instance level or the model level. Local interpretability aims to explain the decisions a model makes about a single instance. On the other hand, global explanations aim to answer the question of how the parameters of a model affect its decisions overall. Therefore, when the model has many parameters, global explainability can be difficult to achieve.

Categorisation based on the stage that we apply the explainability methods

The third approach is categorisation based on the stage at which we apply the explainability methods. These stages are pre-modelling, in-modelling and post-modelling (post-hoc). The aim of pre-modelling explainability is more towards exploring and understanding the dataset. For in-modelling, the aim is to develop models that are self-explaining or fully observable. Most of the scientific research related to XAI focuses on post-hoc methods, which aim to explain complex models after the training process. Since model-agnostic methods are also applied after training the model, they are post-hoc methods by nature as well.

Using SHapley Additive exPlanations (SHAP) as an explanation method

To be able to understand the decisions of a model, we need to use an explanation method.
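As a minimal sketch of how such an explanation method might be applied (assuming the shap library, with xgboost and the California housing data chosen here purely for illustration), the snippet below produces both a local explanation for a single prediction and a global overview of feature contributions, tying back to the scope-based categorisation above.

import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Train an opaque model on a standard tabular dataset.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# Compute SHAP values: one additive contribution per feature per prediction.
explainer = shap.Explainer(model)
shap_values = explainer(X)

# Local explanation: how each feature pushed a single prediction
# above or below the expected model output.
shap.plots.waterfall(shap_values[0])

# Global explanation: the distribution of feature contributions
# across the whole dataset.
shap.plots.beeswarm(shap_values)

Because SHAP values are computed per instance, the same object serves both the local (waterfall) and global (beeswarm) views.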
