Fairness API Reference
Fairness Metrics
assert_fairness
High-level function to assert fairness metrics for a model.
```python
from ml_assert.fairness import assert_fairness

# Assert fairness metrics
assert_fairness(
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive_features,
    metrics=["demographic_parity", "equal_opportunity"],
    threshold=0.1,
)
```
Parameters
- y_true: True labels
- y_pred: Predicted labels
- sensitive_features: Array of sensitive feature values
- metrics: List of fairness metrics to check
- threshold: Maximum allowed difference between groups (default: 0.1)
Supported Metrics
- demographic_parity: Equal positive prediction rates across groups
- equal_opportunity: Equal true positive rates across groups
- equalized_odds: Equal true positive and false positive rates across groups
- treatment_equality: Equal ratio of false negatives to false positives across groups
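For intuition, the demographic parity check can be thought of as comparing positive prediction rates between groups and failing when the largest gap exceeds the threshold. The sketch below uses a hypothetical helper, demographic_parity_gap, with toy data; it illustrates the metric only and is not the library's internal implementation.

```python
import numpy as np

# Toy data: binary predictions and a binary sensitive feature
y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 1])
sensitive_features = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

def demographic_parity_gap(y_pred, sensitive_features):
    """Largest difference in positive prediction rate between any two groups."""
    groups = np.unique(sensitive_features)
    rates = [y_pred[sensitive_features == g].mean() for g in groups]
    return max(rates) - min(rates)

# Same spirit as assert_fairness with threshold=0.1
gap = demographic_parity_gap(y_pred, sensitive_features)
assert gap <= 0.1, f"Demographic parity gap {gap:.3f} exceeds threshold 0.1"
```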
Explainability
assert_feature_importance
Assert minimum feature importance scores.
```python
from ml_assert.fairness import assert_feature_importance

# Assert feature importance
assert_feature_importance(
    model=model,
    X=X,
    min_importance=0.1,
    features=["feature1", "feature2"],
)
```
Parameters
- model: Trained model with a feature_importances_ attribute
- X: Feature matrix
- min_importance: Minimum importance score (0.0 to 1.0)
- features: List of features to check (optional)
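As a concrete end-to-end sketch, the example below trains a scikit-learn RandomForestClassifier, which exposes the feature_importances_ attribute this check relies on, and then calls assert_feature_importance with the signature documented above. The dataset, feature names, and threshold are illustrative assumptions, not part of the library.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from ml_assert.fairness import assert_feature_importance

# Illustrative training data with named features (assumed, not from the library)
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "feature1": rng.normal(size=200),
    "feature2": rng.normal(size=200),
    "noise": rng.normal(size=200),
})
y = (X["feature1"] + X["feature2"] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Fail if either named feature falls below the minimum importance score
assert_feature_importance(
    model=model,
    X=X,
    min_importance=0.1,
    features=["feature1", "feature2"],
)
```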
assert_shap_values
Assert minimum feature importance based on SHAP values.
```python
from ml_assert.fairness import assert_shap_values

# Assert SHAP values
assert_shap_values(
    model=model,
    X=X,
    min_importance=0.1,
    features=["feature1", "feature2"],
)
```
Parameters
- model: Trained model
- X: Feature matrix
- min_importance: Minimum absolute SHAP value
- features: List of features to check (optional)
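For intuition, this check can be understood as computing the mean absolute SHAP value per feature and comparing it against min_importance. The sketch below shows how such values could be obtained with the shap package for a tree-based model; the model, data, and manual comparison are illustrative assumptions and not necessarily how assert_shap_values works internally.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Illustrative data and model (assumed for this sketch)
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "feature1": rng.normal(size=200),
    "feature2": rng.normal(size=200),
})
y = (X["feature1"] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Compute SHAP values for a tree model
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Depending on the shap version, binary classifiers may return a list per class
# or a 3D array (samples, features, classes); keep the positive class if so.
if isinstance(shap_values, list):
    shap_values = shap_values[1]
elif getattr(shap_values, "ndim", 2) == 3:
    shap_values = shap_values[:, :, 1]

# Mean absolute SHAP value per feature, the quantity compared to min_importance
mean_abs = np.abs(shap_values).mean(axis=0)
for name, value in zip(X.columns, mean_abs):
    print(f"{name}: mean |SHAP| = {value:.3f}")
```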