Explain Python Machine Learning Models with SHAP Library


  • 11 September 2021
  • Muhammad Fawi
  • Machine Learning

Using the SHapley Additive exPlanations (SHAP) Library to Explain Python ML Models

Almost always after developing an ML model, we find ourselves in a position where we need to explain it. Even when the model is very good, it is still a black box that needs to be deciphered. Explaining a model is an important step in a data science project that we usually overlook. The SHAP library makes explaining Python machine learning models, even deep learning ones, easy thanks to its intuitive visualizations. It also shows feature importances and how each feature affects the model output.

Here we are going to explore some of SHAP’s power in explaining a Logistic Regression model.

We will use the Bank Marketing dataset[1] to predict whether a customer will subscribe to a term deposit.

Data Exploration

We will start by importing all necessary libraries and reading the data. We will use the smaller dataset in the bank-additional zip file.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import shap
import zipfile
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, precision_recall_curve
from sklearn.metrics import accuracy_score, precision_score
from sklearn.metrics import recall_score, auc, roc_curve

zf = zipfile.ZipFile("bank-additional.zip")
df = pd.read_csv(zf.open("bank-additional/bank-additional.csv"), sep = ";")

df.shape
# (4119, 21)

Let’s look closely at the data and its structure. We will not go in depth into the exploratory data analysis step; however, we will see what the data looks like and compute some summary and descriptive statistics.

df.isnull().sum().sum() # no NAs
# 0

## looking at numeric variables summary stats
df.describe()

Let’s have a quick look at how the object (categorical) variables are distributed between the two classes: yes and no.

## counts
df.groupby("y").size()
# y
# no     3668
# yes     451
# dtype: int64

num_cols = list(df.select_dtypes(np.number).columns)
print(num_cols)
# ['age', 'duration', 'campaign', 'pdays', 'previous', 'emp.var.rate', 'cons.price.idx', 'cons.conf.idx', 'euribor3m', 'nr.employed']

obj_cols = list(df.select_dtypes(object).drop("y", axis = 1).columns)
print(obj_cols)
# ['job', 'marital', 'education', 'default', 'housing', 'loan', 'contact', 'month', 'day_of_week', 'poutcome']

df[obj_cols + ["y"]].groupby("y").agg(["nunique"])
#          job  marital  education  default  housing     loan  contact    month  day_of_week  poutcome
#      nunique  nunique    nunique  nunique  nunique  nunique  nunique  nunique      nunique   nunique
# y
# no        12        4          8        3        3        3        2       10            5         3
# yes       12        4          7        2        3        3        2       10            5         3

It seems the categorical variables are distributed similarly between the two classes.

Admittedly, this is a quick and shallow analysis, but EDA is beyond the scope of this post.

Feature Preprocessing

Now it is time to prepare the features for the logistic regression model: scale the numeric variables and one-hot encode the categorical ones. We will use ColumnTransformer to apply different preprocessors to different columns and wrap everything in a pipeline.

## change classes to float
df["y"] = np.where(df["y"] == "yes", 1., 0.)

## the pipeline
scaler = Pipeline(steps = [
    ## there are no NAs anyways
    ("imputer", SimpleImputer(strategy = "median")),
    ("scaler", StandardScaler())
])

encoder = Pipeline(steps = [
    ("imputer", SimpleImputer(strategy = "constant", fill_value = "missing")),
    ("onehot", OneHotEncoder(handle_unknown = "ignore")),
])

preprocessor = ColumnTransformer(
    transformers = [
        ("num", scaler, num_cols),
        ("cat", encoder, obj_cols)
])

pipe = Pipeline(steps = [("preprocessor", preprocessor)])

We split the data into train and test sets, fit the pipeline on the training data, and transform both sets.

X_train, X_test, y_train, y_test = train_test_split(
    df.drop("y", axis = 1), df.y,
    stratify = df.y,
    random_state = 13,
    test_size = 0.25)

X_train = pipe.fit_transform(X_train)
X_test = pipe.transform(X_test)

Returning to the exploratory phase: a good way to visualize one-hot encoded data, i.e. sparse matrices of 1s and 0s, is imshow(). We will look at the last contact month feature, which has now been converted into several columns with a 1 in the month when the contact happened. The plot will also be split between yes and no.

First let’s get the new feature names from the pipeline.

## getting feature names from the pipeline
nums = pipe["preprocessor"].transformers_[0][2]
obj = list(pipe["preprocessor"].transformers_[1][1]["onehot"].get_feature_names(obj_cols))
fnames = nums + obj

len(fnames) ## new number of columns due to one hot encoder
# 62
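
A quick note: the post uses get_feature_names, which worked with the scikit-learn version available at the time; in scikit-learn 1.0 and later it was renamed to get_feature_names_out. On a newer version the equivalent would look roughly like this:

## scikit-learn >= 1.0: get_feature_names was renamed to get_feature_names_out
obj = list(pipe["preprocessor"].transformers_[1][1]["onehot"].get_feature_names_out(obj_cols))
fnames = nums + obj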

Let’s now visualize!

from matplotlib.colors import ListedColormap

print([i for i in obj if "month" in i])
# ['month_apr', 'month_aug', 'month_dec', 'month_jul', 'month_jun', 'month_mar', 'month_may', 'month_nov', 'month_oct', 'month_sep']

## filter the train data on the month columns
tr = X_train[:, [True if "month" in i else False for i in fnames]]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize = (15, 7))
fig.suptitle("Subscription per Contact Month", fontsize = 20)
cmapmine1 = ListedColormap(["w", "r"], N = 2)
cmapmine2 = ListedColormap(["w", "b"], N = 2)
ax1.imshow(tr[y_train == 0.0], cmap = cmapmine1, interpolation = "none", extent = [3, 6, 9, 12])
ax1.set_title("Not Subscribed")
ax2.imshow(tr[y_train == 1.0], cmap = cmapmine2, interpolation = "none", extent = [3, 6, 9, 12])
ax2.set_title("Subscribed")
plt.show()

Of course, we would need to sort the columns in calendar order and add axis labels to make the plot more readable (a quick sketch follows), but the point here is just to quickly visualize sparse matrices of 1s and 0s.
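
Something like this might do it; the month_* column names and calendar order come from the encoder output above, and the snippet is illustrative rather than part of the original post:

## put the one-hot month columns in calendar order and label the x axis
month_order = ["month_mar", "month_apr", "month_may", "month_jun", "month_jul",
               "month_aug", "month_sep", "month_oct", "month_nov", "month_dec"]
month_idx = [fnames.index(m) for m in month_order if m in fnames]

fig, ax = plt.subplots(figsize = (8, 7))
ax.imshow(X_train[y_train == 1.0][:, month_idx], cmap = cmapmine2,
          aspect = "auto", interpolation = "none")
ax.set_xticks(range(len(month_idx)))
ax.set_xticklabels([m.replace("month_", "") for m in month_order if m in fnames], rotation = 45)
ax.set_title("Subscribed, contact months in calendar order")
plt.show()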

Model Development

Now it is time to develop the model and fit it.

clf = LogisticRegression(
    solver = "newton-cg", max_iter = 50, C = .1, penalty = "l2"
)
clf.fit(X_train, y_train)
# LogisticRegression(C=0.1, max_iter=50, solver='newton-cg')

Now we will look at the model’s AUC and then pick a threshold for predicting on the test data.

y_pred_proba = clf.predict_proba(X_test)[:, 1]
fpr, tpr, _ = roc_curve(y_test, y_pred_proba)
roc_auc = auc(fpr, tpr)

plt.plot(fpr, tpr, ls = "--", label = "LR AUC = %0.2f" % roc_auc)
plt.plot([0, 1], [0, 1], c = "r", label = "No Skill AUC = 0.5")
plt.legend(loc = "lower right")
plt.ylabel("true positive rate")
plt.xlabel("false positive rate")
plt.show()

The model shows a very good AUC. Let’s now find the threshold that gives the best trade-off between recall and precision.

precision, recall, threshold = precision_recall_curve(
    y_test, y_pred_proba)

tst_prt = pd.DataFrame({
    "threshold": threshold,
    "recall": recall[1:],
    "precision": precision[1:]
})

tst_prt_melted = pd.melt(tst_prt, id_vars = ["threshold"],
    value_vars = ["recall", "precision"])

sns.lineplot(x = "threshold", y = "value",
    hue = "variable", data = tst_prt_melted)

We can see that 0.3 looks like a good threshold. Let’s test it on the test data.

y_pred = np.zeros(len(y_test))
y_pred[y_pred_proba >= 0.3] = 1.

print("Accuracy: %.2f%%" % (100 * accuracy_score(y_test, y_pred)))
print("Precision: %.2f%%" % (100 * precision_score(y_test, y_pred)))
print("Recall: %.2f%%" % (100 * recall_score(y_test, y_pred)))
# Accuracy: 91.65%
# Precision: 61.54%
# Recall: 63.72%
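
The confusion_matrix import from earlier has not been used yet; as an optional extra check (not in the original write-up), the raw error counts at this threshold could be inspected like so:

## optional: true/false positives and negatives at the 0.3 threshold
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("TN: %d, FP: %d, FN: %d, TP: %d" % (tn, fp, fn, tp))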

Great! The model is performing well. It could probably be improved further, but for now let’s try to explain how it behaves with SHAP.

Model Explanation and Feature Importance

Introducing SHAP

From SHAP’s documentation: SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions.

In brief, and setting the math aside, this is how it works: when we pass a model and a training dataset, a base value is calculated, which is the average model output over that training dataset. SHAP values are then calculated for each feature of each example. Each feature, through its SHAP value, pushes the model output for that example away from the base value, to the left or to the right. In a binary classification model, features that push the model output above the base value contribute to the positive class, while features that push it below the base value contribute to the negative class.

Let’s see what this looks like. First we define our explainer and calculate the SHAP values.

explainer = shap.Explainer(clf, X_train, feature_names = np.array(fnames))
shap_values = explainer(X_test)
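
One property worth knowing: the explanation for each row is additive, i.e. the base value plus the sum of that row’s SHAP values reconstructs the model output for the row. For a linear model like this one the explained output is the raw margin (log-odds), so a quick sanity check, assuming the default explainer settings, might look like this:

## sanity check: base value + sum of SHAP values ~= model margin (log-odds) for a row
i = 0
print(shap_values[i].base_values + shap_values[i].values.sum())
print(clf.decision_function(X_test[i:i + 1])[0])
## the two printed numbers should match (up to floating point error)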

Now let’s visualize how this works in an example.

Individual Visualization

## we init JS once in our session
shap.initjs()

ind = np.argmax(y_test == 0)
print("actual is:", y_test.values[ind], "while pred is:", y_pred[ind])
shap.plots.force(shap_values[ind])
# actual is: 0.0 while pred is: 0.0

We can see how the shown (scaled) values of duration, number of employees, 3-month Euribor and contact via telephone = 1 push the model output below the base value (-3.03), resulting in a negative example. Meanwhile, last contact in June rather than May and a scaled consumer price index of 1.53 tried to push it to the right, but couldn’t beat the blue force.

We can also look at the same example with a waterfall plot, which represents the cumulative sum and shows how the SHAP values add up to take the model output from the base value to its final value.

shap.plots.waterfall(shap_values[ind])

We can see the tug-of-war between the features pushing left and right until we arrive at the output. The numbers on the left side are the actual (transformed) feature values in the data, while the numbers inside the graph are the SHAP values of each feature for this example.
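
If you prefer numbers over plots, the same information shown in the waterfall can be pulled straight from the Explanation object. A small illustrative snippet, not from the original post:

## top contributions for this example, sorted by absolute SHAP value
contrib = pd.DataFrame({
    "feature": fnames,
    "value": shap_values[ind].data,
    "shap_value": shap_values[ind].values
})
print(contrib.reindex(contrib["shap_value"].abs().sort_values(ascending = False).index).head(10))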

Let’s look at a positive example using the same two graphs.

ind = np.argmax(y_test == 1)
print("actual is:", y_test.values[ind], "while pred is:", y_pred[ind])
shap.plots.force(shap_values[ind])
# actual is: 1.0 while pred is: 1.0

shap.plots.waterfall(shap_values[ind])

It is now quite clear how the feature values contribute to the positive class. From the two examples we can see that a high duration contributes to the positive class while a low duration contributes to the negative one. The opposite holds for the number of employees: a high nr_employed contributes to the negative class and a low nr_employed contributes to the positive class.
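
To check this pattern over the whole test set rather than just two rows, SHAP’s scatter (dependence) plot shows how a feature’s SHAP value changes with the feature’s value. For duration it would look roughly like this (an extra illustration beyond the original post):

## SHAP value of duration vs. the (scaled) duration value for every test example
shap.plots.scatter(shap_values[:, "duration"])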

Collective Visualization

We saw how the force plot shows how the features explain the model output, but only for one observation. We will now look at the same force plot for multiple observations at the same time.

shap.force_plot(explainer.expected_value, shap_values.values, X_test, feature_names = fnames)

This plot (interactive in the notebook) is the same as the individual force plot; just imagine many force plots rotated 90 degrees and stacked side by side, one per example. A heatmap can also be viewed to see the effect of each feature on each example.

shap.plots.heatmap(shap_values)

The heatmap shows the SHAP value of each feature for each example in the data. Above the map, the model output for each example is shown as a small line plot going above and below the base line.

Another very useful plot is the beeswarm. It gives an overview of which features are most important for the model: like the heatmap, it plots the SHAP values of every feature for every sample, and it sorts the features by the sum of their SHAP value magnitudes over all examples.

shap.plots.beeswarm(shap_values)

We can see that duration is the most important variable, and that a high duration increases the probability of the positive class, subscription in our example, while a high number of employees decreases the probability of subscription.

We can also get the mean of the absolute shap values for each feature and plot a bar chart.

shap.plots.bar(shap_values)
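
The bar chart is essentially plotting the mean absolute SHAP value per feature; if the numbers themselves are needed, a quick sketch to compute them is:

## mean |SHAP| per feature, i.e. the quantities behind the bar plot
mean_abs_shap = pd.Series(np.abs(shap_values.values).mean(axis = 0), index = fnames)
print(mean_abs_shap.sort_values(ascending = False).head(10))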

Fantastic! We have seen how SHAP can help explain our logistic regression model with very useful visualizations. The library can explain many kinds of models, including neural networks, and the project’s GitHub repo has many notebook examples.
