
6 Recommended Python Frameworks for Building Explainable Artificial Intelligence Systems (XAI)

WBOY · Released 2023-04-26 10:49:08

AI can behave like a black box: it makes decisions on its own, but people don't know why. We build a model, feed in data, and get results out, yet we cannot explain why the AI reached that particular conclusion. There is a need to understand the reasoning behind how an AI arrives at a conclusion, rather than just accepting an output with no context or explanation.

Interpretability is designed to help people understand:

  • How did the model learn?
  • What did it learn?
  • Why does it make a particular decision for a given input?
  • Is the decision reliable?

In this article, I will introduce 6 Python frameworks for interpretability.

SHAP

SHAP (SHapley Additive exPlanations) is a game-theoretic method for explaining the output of any machine learning model. It uses the classic Shapley value from game theory and its related extensions to connect optimal credit allocation with local explanations (see the paper for details and citations).

Shapley values explain the contribution of each feature in the dataset to the model's prediction. Lundberg and Lee first published the SHAP algorithm in 2017, and it has since been widely adopted across many different fields.


Use pip or conda to install the shap library.

# install with pip
pip install shap
# install with conda
conda install -c conda-forge shap


Waterfall chart built with the SHAP library


Beeswarm chart built with the SHAP library


Partial dependence plots built with the SHAP library

LIME

In the field of interpretability, LIME is one of the earliest and best-known methods. It helps explain what machine learning models are learning and why they predict the way they do. LIME currently supports explanations for tabular data, text classifiers, and image classifiers.

Knowing why a model predicts the way it does is crucial for tuning it. With LIME's explanations you can understand why the model behaves as it does; if the model does not behave as planned, chances are a mistake was made during the data preparation phase.


Use pip to install

pip install lime

Local explanation plot built with LIME

Beeswarm chart built with LIME

Shapash

“Shapash is a Python library that makes machine learning interpretable and understandable by everyone. Shapash provides several types of visualization with clear, explicit labels that everyone can understand. Data scientists can understand their models more easily and share their results, while end users can grasp how the model made its judgment through a standard summary.”

To present findings that combine data stories, insights, and models, interactivity and attractive charts are essential. The best way for business users and data scientists/analysts to present and interact with AI/ML results is to visualize them and put them on the web. The Shapash library can generate interactive dashboards and offers a large collection of visualization charts. It is tied to SHAP/LIME interpretability: it can use SHAP or LIME as its backend, which means it essentially provides better-looking charts on top of them.

Feature contribution chart built with Shapash

Interactive dashboard created with the Shapash library

Local explanation plot built with Shapash

InterpretML

InterpretML is an open source Python package that provides machine learning interpretability algorithms to researchers. InterpretML supports training interpretable models (glassbox), as well as interpreting existing ML pipelines (blackbox).

InterpretML demonstrates two types of interpretability: glassbox models, machine learning models designed for interpretability (e.g. linear models, rule lists, generalized additive models), and blackbox interpretability techniques used to explain existing systems (e.g. partial dependence, LIME). With a unified API encapsulating multiple methods and a built-in, extensible visualization platform, this package enables researchers to easily compare interpretability algorithms. InterpretML also includes the first implementation of the Explainable Boosting Machine, a powerful, interpretable glassbox model that can be as accurate as many blackbox models.


Local explanation interactive graph built using InterpretML


Global explanation graph built using InterpretML

ELI5

ELI5 is a Python library that can help debug machine learning classifiers and interpret their predictions. Currently the following machine learning frameworks are supported:

  • scikit-learn
  • XGBoost, LightGBM, CatBoost
  • Keras

ELI5 offers two main ways to explain a classification or regression model:

  • Inspect the model's parameters to explain how the model works globally;
  • Inspect an individual prediction to explain why the model made that particular decision.


Use the ELI5 library to generate global weights


Use the ELI5 library to generate local weights

OmniXAI

OmniXAI (short for Omni eXplainable AI) is a Python library recently developed and open-sourced by Salesforce. It provides omni-way explainable AI and machine learning capabilities to address several pain points that arise when judging machine learning models in practice. For data scientists and ML researchers who need to explain various types of data, models, and explanation techniques at different stages of the ML pipeline, OmniXAI aims to be a one-stop comprehensive library that makes explainable AI easy.


Below is a comparison of what OmniXAI provides with other similar libraries:



Source: 51cto.com