ml-model-explanation
aj-geddes/useful-ai-prompts · updated Apr 8, 2026
Model explainability makes machine learning decisions transparent and interpretable, enabling trust, compliance, debugging, and actionable insights from predictions.
ML Model Explanation
Explanation Techniques
- Feature Importance: Global feature contribution to predictions
- SHAP Values: Game theory-based feature attribution
- LIME: Local linear approximations for individual predictions
- Partial Dependence Plots: Feature relationship with predictions
- Attention Maps: Visualization of model focus areas
- Surrogate Models: Simpler interpretable approximations
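The surrogate-model idea above can be sketched in a few lines: train an interpretable model on the black box's *predictions* (not the true labels), then measure fidelity, the fraction of points on which surrogate and black box agree. The dataset, model choices, and tree depth below are illustrative assumptions, not part of the original listing.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box model we want to explain
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Global surrogate: a shallow tree trained to mimic the black box's predictions
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on held-out data
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate fidelity: {fidelity:.2f}")
```

A low fidelity means the surrogate's rules should not be trusted as an explanation of the black box, however readable the tree is.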
Explainability Types
- Global: Overall model behavior and patterns
- Local: Explanation for individual predictions
- Feature-Level: Which features matter most
- Model-Level: How different components interact
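To make the global/local distinction concrete, here is a hedged sketch: the model's built-in `feature_importances_` gives one global score per feature, while a LIME-style weighted linear fit around a single instance gives a local explanation for that prediction alone. The perturbation scale and proximity kernel are illustrative choices, not prescribed values.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Global: one importance score per feature, for the model as a whole
global_importance = model.feature_importances_

# Local (LIME-style): perturb around one instance, weight samples by
# proximity, and fit a linear surrogate whose coefficients explain
# this one prediction
rng = np.random.default_rng(0)
x0 = X[0]
Z = x0 + rng.normal(scale=0.5, size=(200, X.shape[1]))
p = model.predict_proba(Z)[:, 1]
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1))
local_fit = Ridge(alpha=1.0).fit(Z, p, sample_weight=weights)
local_importance = local_fit.coef_
```

Note the two vectors need not agree: a feature can matter little globally yet dominate one instance's prediction.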
Python Implementation
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.inspection import partial_dependence, permutation_importance
import warnings
warnings.filterwarnings('ignore')
print("=== 1. Feature Importance Analysis ===")
# Create dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_redundant=5, random_state=42)
feature_names = [f'Feature_{i}' for i in range(20)]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Train models
rf_model = RandomForestClassifier(n_estimators=100, random_state=42)
rf_model.fit(X_train, y_train)
gb_model = GradientBoostingClassifier(n_estimators=100, random_state=42)
gb_model.fit(X_train, y_train)
# Feature importance methods
print("\n=== Feature Importance Comparison ===")
# 1. Impurity-based importance (default)
impurity_importance = rf_model.feature_importances_
# 2. Permutation importance
perm_importance = permutation_importance(rf_model, X_test, y_test, n_repeats=10, random_state=42)
# Create comparison dataframe
importance_df = pd.DataFrame({
    'Feature': feature_names,
    'Impurity': impurity_importance,
    'Permutation': perm_importance.importances_mean
}).sort_values('Impurity', ascending=False)
print("\nTop 10 Most Important Features (by Impurity):")
print(importance_df.head(10)[['Feature', 'Impurity']])
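Impurity-based and permutation importances can disagree: impurity scores are computed on training data and are biased toward features with many split points. One quick way to quantify agreement between the two rankings is a Spearman rank correlation; the small dataset below is a self-contained illustration rather than the script's own data.

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
perm = permutation_importance(rf, X_te, y_te, n_repeats=5, random_state=0)

scores = pd.DataFrame({
    'impurity': rf.feature_importances_,
    'permutation': perm.importances_mean,
})
# Spearman correlation compares the two importance *rankings*
rho = scores['impurity'].corr(scores['permutation'], method='spearman')
print(f"Spearman rank correlation: {rho:.2f}")
```

A rank correlation well below 1 is a signal to prefer the permutation scores, which are computed on held-out data.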
# 2. SHAP-like Feature Attribution
print("\n=== 2. SHAP-like Feature Attribution ===")
class SimpleShapCalculator:
    def __init__(self, model, X_background):
        self.model = model
        self.X_background = X_background
        self.baseline = model.predict_proba(X_background.mean(axis=0).reshape(1, -1))[0]

    def predict_difference(self, X_sample):
        """Get prediction difference from baseline"""
        pred = self.model.predict_proba(X_sample)[0]
        return pred - self.baseline

    def calculate_shap_values(self, X_instance, n_iterations=100):
        """Approximate SHAP values via random coalitions (crude Monte Carlo)"""
        n_features = X_instance.shape[1]
        shap_values = np.zeros(n_features)
        for i in range(n_iterations):
            # Random coalition: features in the subset keep their actual values
            subset_mask = np.random.random(n_features) > 0.5
            # With vs. without the out-of-coalition features (replaced by background)
            X_with = X_instance.copy()
            X_without = X_instance.copy()
            X_without[0, ~subset_mask] = self.X_background[0, ~subset_mask]
            # Marginal contribution of the replaced features (positive class)
            contribution = (self.predict_difference(X_with)[1] -
                            self.predict_difference(X_without)[1])
            # Spread the contribution across the replaced features
            shap_values[~subset_mask] += contribution / n_iterations
        return shap_values
shap_calc = SimpleShapCalculator(rf_model, X_train)
# Calculate SHAP values for a sample
sample_idx = 0
shap_vals = shap_calc.calculate_shap_values(X_test[sample_idx:sample_idx+1], n_iterations=50)
print(f"\nSHAP Values for Sample {sample_idx}:")
shap_df = pd.DataFrame({
    'Feature': feature_names,
    'SHAP_Value': shap_vals
}).sort_values('SHAP_Value', key=abs, ascending=False)
print(shap_df.head(10)[['Feature', 'SHAP_Value']])
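The Monte Carlo scheme above is a rough approximation. For a handful of features, exact Shapley values can be computed by enumerating every coalition with the standard weighting factorial formula. The toy linear "model" below is an assumption chosen for checkability: with mean imputation of absent features, a linear model's Shapley value for feature i reduces to coef_i * (x_i - background_i).

```python
import math
from itertools import combinations
import numpy as np

def exact_shapley(value_fn, n_features):
    """Exact Shapley values by enumerating all feature coalitions.
    value_fn(subset) -> model output when only `subset` features are present."""
    phi = np.zeros(n_features)
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = (math.factorial(len(S)) * math.factorial(n_features - len(S) - 1)
                     / math.factorial(n_features))
                phi[i] += w * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi

# Toy "model": linear, with absent features imputed by a background point
coef = np.array([2.0, -1.0, 0.5])
background = np.array([0.0, 1.0, 2.0])
x = np.array([1.0, 3.0, 2.0])

def value(subset):
    z = background.copy()
    for j in subset:
        z[j] = x[j]
    return float(coef @ z)

phi = exact_shapley(value, 3)
print(phi)  # for this linear model: coef * (x - background)
```

The enumeration is O(2^n), so this only serves as a ground-truth check against sampling-based approximations on small feature sets.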
# 3. Partial Dependence Analysis
print("\n=== 3. Partial Dependence Analysis ===")
# Calculate partial dependence for top features
top_features = importance_df['Feature'].head(3).values
top_feature_indices = [feature_names.index(f) for f in top_features]
pd_data = {}
for feature_idx in top_feature_indices:
    pd_result = partial_dependence(rf_model, X_test, [feature_idx])
    pd_data[feature_names[feature_idx]] = pd_result