Visualizing AI: PDP Reliability Using d-ICE

Introduction

Hello everyone, this is Yushiro, a data scientist at HACARUS. 

These days, with libraries such as Scikit-learn, TensorFlow, and PyTorch, it is becoming increasingly easy to build machine learning models from data. These models are commonly used to make predictions for new data based on patterns learned from past data.

However, as these models become more complex, they increasingly turn into black boxes, making it difficult for users to understand the reasoning behind a model's decisions. This black-box problem is especially serious in fields that require detailed explanations, such as medicine.

For this reason, the interpretability of machine learning models, often referred to as explainable AI (XAI), has attracted a lot of attention lately. Some of the better-known methods include LIME and SHAP, and a number of other methods for interpreting machine learning models have also been proposed. In the previous article, I introduced several of them, including the Partial Dependence Plot (PDP) and the Individual Conditional Expectation (ICE) plot.

Unfortunately, the PDP rests on the assumption that the model's features are independent, which can lead to misinterpretation when features interact with each other. ICE plots address this by visualizing how the predicted value changes for each individual instance. However, when the number of instances is large, the visualization becomes cluttered and difficult to interpret.

In this article, we will discuss a convenient tool, the derivative Individual Conditional Expectation (d-ICE) plot, which can be used to visualize the reliability of PDPs. This topic is also covered in Chapter 5 of the book Interpretable Machine Learning by Christoph Molnar.

Explaining PDP, ICE, and d-ICE

To begin, I want to give a brief overview of PDP and ICE plots.

ICE plot: A visualization of how the predicted value changes when only a certain explanatory variable of an individual instance is varied.

PDP: The average of the predicted values over all of the ICE curves.
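To make the relationship between the two concrete, here is the standard formulation (notation mine, following common XAI references such as Molnar's book): if $\hat{f}^{(i)}(x)$ denotes the ICE curve of instance $i$, i.e. the model's prediction when the feature of interest is set to $x$ while all other features keep instance $i$'s observed values, then the PDP is simply the pointwise average of the ICE curves:

$$\hat{f}_{PDP}(x) = \frac{1}{n} \sum_{i=1}^{n} \hat{f}^{(i)}(x)$$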

PDPs average the model's predictions over the marginal distribution of the other features, so they evaluate the model at synthetic feature combinations that may never occur in real data. If the features interact, these unlikely instances make the average misleading, and the PDP becomes unreliable.

Next, let's look at an example that uses PDP and ICE plots, following my previous article. We will use the relationship between tumor radius and the predicted probability of a benign tumor, from a Random Forest model trained on the breast cancer dataset.

# Imports needed to run the snippets in this article
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence
from sklearn.preprocessing import MinMaxScaler

# Load the breast cancer dataset
dataset = load_breast_cancer()
df = pd.DataFrame(dataset.data, columns=dataset.feature_names)

# Select the features to use
feature_names = df.columns[[7, 20, 21, 24, 27, 28]]
df = df[feature_names]

# Normalize the dataset to the range [0, 1]
mms = MinMaxScaler(feature_range=(0, 1), copy=True)
X = mms.fit_transform(df)
X = pd.DataFrame(X, index=df.index, columns=df.columns)

# Extract the target labels (0 = malignant, 1 = benign)
y = dataset.target

# Train a Random Forest classifier
rfc = RandomForestClassifier()
rfc.fit(X, y)

# Compute and draw the PDP and ICE plots
feature_id = 1
grid_resolution = 200
pdp_ice_result = partial_dependence(
    estimator=rfc, X=X, features=[feature_id], percentiles=(0, 1),
    grid_resolution=grid_resolution, method='brute', kind='both')

pdp = pdp_ice_result["average"][0]
ices = pdp_ice_result["individual"][0]
grid_value = pdp_ice_result["values"][0]

plt.title("PDP and ICE for " + feature_names[feature_id])
plt.plot(grid_value, pdp, 'r')                 # PDP in red
plt.plot(grid_value, ices.T, 'c', alpha=0.05)  # ICE curves in cyan
plt.ylabel('benign prob.')
plt.xlabel(feature_names[feature_id])
plt.grid(True, linestyle='dotted', lw=0.5)

Based on the PDP above, we can see that as the tumor radius increases, the probability of the tumor being judged benign decreases; in other words, the model judges larger tumors to be malignant. On the other hand, some of the ICE curves seem to show a different trend. Each ICE curve visualizes how the prediction for one instance changes when only this explanatory variable is varied, so instances that contradict the PDP hint at an interaction effect.

When using PDPs, it is important to determine whether their interpretation can be trusted, and that depends on whether interactions are present. The easiest way to check is to inspect the ICE plot. However, when the number of instances is large, it is difficult to judge intuitively how much interaction is present.

Therefore, to visualize whether the feature of interest interacts with the other features, and to what extent, we will use a method known as d-ICE. d-ICE differentiates each ICE curve and looks at the spread of the derivatives across instances. Where there is a strong interaction between features, the slopes of the ICE curves vary greatly from instance to instance, which increases the standard deviation measured by d-ICE. For a more in-depth explanation of this method, please refer to Chapter 5 of Interpretable Machine Learning.
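To state the idea as a formula (notation mine; see Molnar's book for the full treatment): d-ICE takes the derivative of each ICE curve and measures the standard deviation of those derivatives across the $n$ instances,

$$\mathrm{sd}(x) = \operatorname{std}_{i=1,\dots,n}\left[\frac{\partial \hat{f}^{(i)}(x)}{\partial x}\right]$$

If the feature of interest does not interact with the others, all ICE curves share the same slope everywhere and $\mathrm{sd}(x)$ is zero. In practice the derivative is approximated with finite differences, which is exactly what the code below does.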

Now, let’s actually visualize the variance using d-ICE. 

# d-ICE: numerical derivative of each ICE curve along the grid
dices = np.diff(ices, 1, axis=1)
# Standard deviation of the derivatives across all instances
std_dice = np.std(dices, axis=0)

plt.title("std of d-ICE for " + feature_names[feature_id])
plt.ylabel('std d-ICE')
plt.xlabel(feature_names[feature_id])
# np.diff shortens the grid by one point, so skip the first grid value
plt.plot(grid_value[1:], std_dice, 'c')
plt.grid(True, linestyle='dotted', lw=0.5)

Looking at the d-ICE plot above, we can see that the model contains interactions between features around tumor radius values of 0.2 and 0.4.

Next, we will visualize the variability of the PDP using the standard deviation of the d-ICE with the following code:

sig = 2  # two standard deviations: roughly a 95% interval
pdp_upper = pdp.copy()
pdp_lower = pdp.copy()
# The d-ICE grid is one point shorter, so the band starts at the second grid value
pdp_upper[1:] += sig * std_dice
pdp_lower[1:] -= sig * std_dice
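The band itself can then be overlaid on the PDP. The exact plotting code is in the Colab notebook; a minimal sketch using matplotlib's fill_between (my choice here, not necessarily the original code) would look like this:

# Overlay the PDP (red) with the ±2-std band derived from d-ICE
plt.title("PDP with d-ICE band for " + feature_names[feature_id])
plt.plot(grid_value, pdp, 'r')
plt.fill_between(grid_value, pdp_lower, pdp_upper, color='c', alpha=0.3)
plt.ylabel('benign prob.')
plt.xlabel(feature_names[feature_id])
plt.grid(True, linestyle='dotted', lw=0.5)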

The figure above visualizes the variability of the PDP's predictions caused by interactions. The width of the band shows the degree to which the feature of interest interacts with other features, which lets us see which parts of the PDP we can actually trust.

d-ICE in the Absence of Interaction

Testing the Random Forest model, d-ICE found a strong interaction near the decision boundary, where the predicted value is around 0.5. Now that we understand what a d-ICE plot looks like when there is strong feature interaction, let's look at a model without interaction (for example, a linear model).

Below are the PDP, ICE, and d-ICE plots for a logistic regression model trained on the same dataset as before.
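The plots themselves come from the Colab notebook; a sketch of how they could be reproduced, reusing X, y, feature_id, and the plotting code from above and assuming scikit-learn's LogisticRegression (a linear model with no interaction terms), would be:

from sklearn.linear_model import LogisticRegression

# Train a logistic regression model on the same data
lrc = LogisticRegression(max_iter=1000)
lrc.fit(X, y)

# Recompute the PDP and ICE curves for the same feature
pdp_ice_result = partial_dependence(
    estimator=lrc, X=X, features=[feature_id], percentiles=(0, 1),
    grid_resolution=grid_resolution, method='brute', kind='both')
pdp = pdp_ice_result["average"][0]
ices = pdp_ice_result["individual"][0]
grid_value = pdp_ice_result["values"][0]

# d-ICE: without interactions the ICE slopes barely vary across instances,
# so the standard deviation stays close to zero along the whole grid
dices = np.diff(ices, 1, axis=1)
std_dice = np.std(dices, axis=0)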

Looking at these plots, superimposing the d-ICE variability on the PDP shows that there is no interaction within the model. This also means that the PDP is reliable in its entirety.

Conclusion

In today's article, I introduced how to use d-ICE to check the reliability of PDPs, which visualize the explanatory and interpretive properties of a model. You can run these experiments yourself: all of the Python code used here can be found on Google Colab.

d-ICE is also a valuable tool when you are unsure about the explainability of a model. In cases where the model's performance doesn't improve or its predictions cannot be explained, using d-ICE in conjunction with PDP and ICE plots can give you a clue.

Thank you for reading today's article. I hope you found it informative and interesting; if so, please try conducting some of these experiments for yourself. I look forward to seeing you again in the third and final installment of my short article series.
