
What-If Analysis & Model Comparison

Two powerful modules in one workspace. What-If Analysis lets you explore model behavior interactively. Model Comparison provides a 20-tab diagnostic workspace with SHAP, PDP, ROC curves, fairness metrics, and more across all your models.

platform.coreplexml.io
ML Studio with What-If analysis and Model Comparison workspace showing SHAP, PDP, and performance diagnostics

Key Capabilities

Everything you need to get the most out of these modules.

What-If Scenarios

Compare baseline vs modified scenarios side-by-side. See exactly how changes affect predictions with SHAP explanations.

Model Comparison

Full diagnostic workspace with 20 analysis tabs across 6 categories. Compare 10+ models simultaneously.

Explainability Suite

SHAP analysis, LIME explanations, variable importance, partial dependence plots, and feature interactions.

Advanced Analytics

ROC curves, confusion matrices, calibration plots, learning curves, fairness metrics, and ensemble builder.

What-If Analysis

Explore model behavior interactively. Change input features, create scenarios, and compare predictions with SHAP explanations — no code required.

1. Create Analysis Session

Select a deployed model and provide baseline input values. The system automatically generates an input form from your dataset schema.

2. Build Scenarios

Modify feature values to create alternative scenarios. Change age, income, credit score — any input — and name each scenario for easy reference.

3. Compare Predictions

Run all scenarios against the deployed model. View side-by-side predictions with delta values showing exactly how each change impacts the outcome.

4. Understand Why

SHAP contributions reveal which features drive each prediction. See per-feature impact with directional indicators for business-friendly explanations.
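
In plain terms, the comparison in steps 3 and 4 reduces to computing deltas between baseline and scenario predictions. A minimal sketch in plain Python (illustrative only, not the platform's internals):

```python
def compare_scenarios(baseline, scenarios):
    """Compute prediction deltas for each named scenario vs. the baseline."""
    rows = []
    for name, prediction in scenarios.items():
        rows.append({
            "scenario": name,
            "prediction": prediction,
            "delta": prediction - baseline,  # positive = higher than baseline
        })
    return rows

baseline_price = 393_000  # hypothetical baseline prediction
results = compare_scenarios(baseline_price, {"Bigger lot": 425_000, "No garage": 371_000})
for row in results:
    print(f"{row['scenario']}: ${row['prediction']:,} ({row['delta']:+,} vs baseline)")
```

This is the same delta shown in the "$425,000 (+$32,000 vs baseline)" example below; the platform additionally attaches SHAP contributions to each scenario.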

Supports all prediction types

Regression

Numeric outputs with SHAP contributions

Predicted price: $425,000 (+$32,000 vs baseline)

Binary Classification

Probability + class label with explanations

Fraud: 87.3% probability (High Risk)

Multiclass

All class probabilities + predicted class

Category A: 62%, B: 25%, C: 13%
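
If you consume these outputs programmatically, a small formatter can normalize the three shapes. The field names below are illustrative assumptions, not the platform's response schema:

```python
def format_prediction(kind, payload):
    """Render a prediction payload as a one-line summary per task type."""
    if kind == "regression":
        return f"Predicted: {payload['value']:,.0f}"
    if kind == "binary":
        return f"{payload['label']}: {payload['probability']:.1%} probability"
    if kind == "multiclass":
        parts = ", ".join(f"{c}: {p:.0%}" for c, p in payload["probabilities"].items())
        return f"{parts} -> predicted {payload['predicted_class']}"
    raise ValueError(f"unknown prediction type: {kind}")

print(format_prediction("binary", {"label": "Fraud", "probability": 0.873}))
# -> Fraud: 87.3% probability
```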

What-If scenario analysis

platform.coreplexml.io/studio/sessions/...
What-If Studio scenario comparison with baseline predictions and SHAP contributions

Scenario comparison with SHAP contributions

Model Comparison Workspace

A full diagnostic workspace with 20 analysis tabs across 6 categories. Select models from any experiment, compare them simultaneously, and make data-driven decisions about which model to deploy.

20
Analysis Tabs
6
Categories
10+
Models Compared
CSV, PNG
Export Formats

Compare

2 tabs
Overview

Performance chart comparing all models on AUC, Accuracy, F1, LogLoss, or RMSE. Toggle between chart and table view.

Metrics Table

Complete metrics matrix — MAE, MSE, RMSE, RMSLE, AIC, training time, and more — across every model in one exportable table.
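
For reference, the core error metrics in this table reduce to a few lines of Python (a sketch of the standard formulas; the platform computes these server-side):

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute MAE, MSE, and RMSE from raw predictions."""
    n = len(y_true)
    errors = [p - t for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n       # mean absolute error
    mse = sum(e * e for e in errors) / n        # mean squared error
    return {"mae": mae, "mse": mse, "rmse": math.sqrt(mse)}

print(regression_metrics([3.0, 5.0, 2.0], [2.5, 5.0, 3.0]))
```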

Classify

4 tabs
Confusion Matrix

Side-by-side confusion matrices for each model. Visualize true/false positives and negatives at a glance.
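
The underlying counts are simple to reproduce for a binary model (a plain-Python sketch, not platform code):

```python
def confusion_counts(y_true, y_pred):
    """Count TP/FP/FN/TN for a binary classifier (1 = positive class)."""
    counts = {"tp": 0, "fp": 0, "fn": 0, "tn": 0}
    for t, p in zip(y_true, y_pred):
        if t == 1 and p == 1:
            counts["tp"] += 1
        elif t == 0 and p == 1:
            counts["fp"] += 1
        elif t == 1 and p == 0:
            counts["fn"] += 1
        else:
            counts["tn"] += 1
    return counts

print(confusion_counts([1, 0, 1, 1, 0], [1, 1, 0, 1, 0]))
# {'tp': 2, 'fp': 1, 'fn': 1, 'tn': 1}
```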

ROC Curves

Receiver Operating Characteristic curves overlaid for direct comparison. AUC values annotated per model.

Precision-Recall

PR curves showing the precision-recall trade-off for each model, critical for imbalanced datasets.

Calibration

Calibration plots showing how well predicted probabilities match actual outcomes. Identify overconfident or underconfident models.
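
Calibration curves come from binning predicted probabilities and comparing each bin's mean prediction to its observed positive rate. A minimal sketch:

```python
def calibration_bins(probs, labels, n_bins=5):
    """Return (mean predicted probability, observed positive rate) per bin.
    A well-calibrated model has the two values close in every bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    out = []
    for members in bins:
        if members:
            mean_pred = sum(p for p, _ in members) / len(members)
            observed = sum(y for _, y in members) / len(members)
            out.append((round(mean_pred, 3), round(observed, 3)))
    return out

print(calibration_bins([0.1, 0.15, 0.9, 0.95], [0, 0, 1, 1], n_bins=2))
```

Bins where the mean prediction exceeds the observed rate indicate overconfidence; the reverse indicates underconfidence.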

Explain

3 tabs
Variable Importance

Grouped bar chart showing feature importance rankings across all models simultaneously. Spot which features matter most.

SHAP Analysis

SHAP feature impact with bar, violin, and beeswarm plot types. Per-model SHAP values with individual feature explanations.

Partial Dependence

1D PDP, 2D interaction heatmaps, and ICE plots. See how each feature influences predictions across its range.
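
A 1D PDP averages the model's prediction over the dataset while sweeping one feature across a grid of values. A toy sketch (the model and data here are hypothetical):

```python
def partial_dependence_1d(model_fn, rows, feature, grid):
    """Average predictions over all rows with `feature` forced to each grid value."""
    curve = []
    for value in grid:
        preds = [model_fn({**row, feature: value}) for row in rows]
        curve.append(sum(preds) / len(preds))
    return curve

# Toy model: price rises with size, falls with age (purely illustrative).
model = lambda r: 100 * r["size"] - 5 * r["age"]
data = [{"size": 10, "age": 2}, {"size": 12, "age": 8}]
print(partial_dependence_1d(model, data, "size", [8, 10, 12]))
# [775.0, 975.0, 1175.0]
```

ICE plots keep the per-row curves instead of averaging them, which exposes heterogeneous effects that the mean curve hides.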

Analyze

3 tabs
Predictions

Actual vs. predicted scatter plots and residual analysis. Identify patterns in prediction errors.

Error Analysis

Error distribution by feature value ranges. Find where your model struggles and why.

Data Exploration

Feature distribution histograms and correlation analysis. Understand the data your models were trained on.

Advanced

3 tabs
Gains & Lift

Cumulative gains and lift charts for evaluating model effectiveness at different population percentiles.
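
Gains and lift follow from ranking rows by model score and measuring the share of positives captured at each population percentile. A sketch:

```python
def gains_and_lift(scores, labels, percentiles=(10, 20, 50, 100)):
    """For each percentile: gain = fraction of all positives captured in the
    top-scored slice; lift = gain relative to random targeting."""
    ranked = [y for _, y in sorted(zip(scores, labels), key=lambda t: -t[0])]
    total_pos = sum(ranked)
    out = []
    for pct in percentiles:
        k = max(1, len(ranked) * pct // 100)
        gain = sum(ranked[:k]) / total_pos
        lift = gain / (k / len(ranked))
        out.append((pct, round(gain, 3), round(lift, 3)))
    return out

print(gains_and_lift([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0], percentiles=(50, 100)))
# [(50, 1.0, 2.0), (100, 1.0, 1.0)]
```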

Learning Curves

Training vs. validation performance as data size increases. Diagnose overfitting, underfitting, and data sufficiency.
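
A rough version of the diagnosis logic, for intuition (the thresholds are illustrative assumptions, not the platform's rules):

```python
def diagnose_fit(train_scores, val_scores, gap_threshold=0.1):
    """Flag overfitting when the final train/validation gap is large, and
    underfitting when both final scores are low (higher score = better)."""
    gap = train_scores[-1] - val_scores[-1]
    if gap > gap_threshold:
        return "overfitting"
    if train_scores[-1] < 0.6:  # illustrative cutoff
        return "underfitting"
    return "good fit"

print(diagnose_fit([0.95, 0.96, 0.97], [0.80, 0.81, 0.82]))
# overfitting
```

A validation curve still climbing at the largest data size suggests more training data would help.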

Fairness Metrics

Demographic parity, equal opportunity, and disparate impact metrics. Audit models for bias across protected groups.
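
Two of these metrics are straightforward to compute from group labels and binary predictions (a sketch of the standard definitions):

```python
def fairness_metrics(groups, predictions):
    """Demographic parity (positive-prediction rate per group) and disparate
    impact (ratio of the lowest group rate to the highest; 1.0 = parity)."""
    by_group = {}
    for g, p in zip(groups, predictions):
        by_group.setdefault(g, []).append(p)
    positive_rate = {g: sum(ps) / len(ps) for g, ps in by_group.items()}
    disparate_impact = min(positive_rate.values()) / max(positive_rate.values())
    return positive_rate, disparate_impact

rates, di = fairness_metrics(["A", "A", "B", "B"], [1, 1, 1, 0])
print(rates, di)
# {'A': 1.0, 'B': 0.5} 0.5
```

A disparate-impact ratio below 0.8 is a common audit threshold (the "four-fifths rule").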

Operations

4 tabs
Parameters

Full hyperparameter comparison table. Diff configurations across models to understand what drives performance differences.

Ensemble Builder

Build custom weighted ensembles from selected models. Optimize weights and evaluate the combined model.
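
At prediction time, a weighted ensemble is just a normalized blend of per-model outputs. A sketch (the weights here are assumptions, not optimized values):

```python
def weighted_ensemble(predictions, weights):
    """Blend per-model prediction lists with normalized weights.
    `predictions` is a list of lists: one prediction list per model."""
    total = sum(weights)
    norm = [w / total for w in weights]  # weights need not sum to 1
    return [sum(w * p for w, p in zip(norm, row)) for row in zip(*predictions)]

# Two models' predictions for three rows, blended 70/30.
blend = weighted_ensemble([[100, 200, 300], [110, 190, 310]], [0.7, 0.3])
print(blend)
```

The Ensemble Builder's weight optimization amounts to searching this weight vector for the best validation metric.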

Deployment

Latency, throughput, and resource usage metrics for deployed models. Compare operational characteristics.

Experiment Tracking

Complete experiment lineage — dataset version, training config, runtime, and metric evolution over time.

Model Comparison workspace

platform.coreplexml.io/automl/compare
Model Comparison performance chart — RMSE comparison across 10 models with sidebar navigation showing all 20 analysis tabs

Performance comparison chart (10 models, 5 metrics)

platform.coreplexml.io/automl/compare → Metrics Table
All Metrics table comparing MAE, MSE, RMSE, RMSLE, training time across 10 models

Complete metrics table with export

platform.coreplexml.io/automl/compare → Variable Importance
Variable importance comparison — grouped bar chart showing feature importance across all 10 models

Variable importance comparison across models

platform.coreplexml.io/automl/compare → SHAP Analysis
SHAP feature impact analysis with bar chart and individual feature explanations

SHAP analysis with feature impact and LIME explanations

platform.coreplexml.io/automl/compare → Partial Dependence
Partial Dependence Plot showing feature-target relationship with 1D, 2D, and ICE plot options

Partial dependence with 1D, 2D heatmap, and ICE modes

platform.coreplexml.io/automl/compare → Learning Curves
Learning curves showing training vs validation performance and overfitting diagnostics

Learning curves with overfitting/underfitting diagnosis

Analysis across industries

From loan underwriting to manufacturing quality — use What-If scenarios and model comparison to make better decisions.

Loan Underwriting

Use What-If to test how income or credit score changes affect approval. Use Compare to select the best model from dozens of candidates with fairness auditing.

Insurance Pricing

Compare model candidates on calibration plots to ensure accurate premium predictions. Run What-If scenarios for different risk profiles.

Customer Churn

Compare models on ROC and precision-recall curves for imbalanced churn data. Use SHAP to explain which factors drive churn predictions.

Healthcare Outcomes

Run fairness metrics across demographic groups. Use PDP plots to understand how treatment dosage affects predicted outcomes.

Fraud Detection

Compare model performance at different thresholds using gains/lift charts. Run What-If tests on edge-case transactions.

Manufacturing Quality

Use learning curves to determine if more training data would help. Build optimized ensembles from multiple model types.

Automate with the SDK

Both What-If sessions and model comparison data are available programmatically through the Python SDK.

what_if_analysis.py
from coreplexml import CorePlexMLClient

client = CorePlexMLClient(
    base_url="https://api.coreplexml.io",
    api_key="sk_your_api_key"
)

# Create a What-If session
session = client.studio.create_session(
    project_id="proj_abc",
    deployment_id="dep_fraud_v2",
    baseline_input={
        "amount": 150, "merchant": "grocery",
        "hour": 14, "country": "US"
    }
)
print(f"Baseline prediction: {session['baseline']['prediction']}")

# Add high-risk scenario
scenario = client.studio.create_scenario(
    session_id=session["id"],
    name="High-risk transaction",
    changes={"amount": 9500, "hour": 3, "country": "NG"}
)

# Run and compare
result = client.studio.run_scenario(scenario["id"])
print(f"Scenario: {result['prediction']} (delta: {result['delta']:+.2%})")

model_comparison.py
# Model Comparison API (reuses the client created in what_if_analysis.py)
models = client.models.list(
    project_id="proj_abc",
    experiment_id="exp_t1",
    limit=10
)

# Get metrics for comparison
for model in models["items"]:
    metrics = model["metrics"]
    print(f"{model['algorithm']}: "
          f"RMSE={metrics.get('rmse', 'N/A')}, "
          f"MAE={metrics.get('mae', 'N/A')}")

# Get SHAP values for a model
shap = client.models.get_shap(
    model_id="mod_xgb_v2",
    feature_count=10
)
for feat in shap["features"]:
    print(f"  {feat['name']}: {feat['mean_impact']:.4f}")

# Get variable importance comparison
varimp = client.models.get_variable_importance(
    model_id="mod_xgb_v2"
)

# Get partial dependence plot data
pdp = client.models.get_pdp(
    model_id="mod_xgb_v2",
    feature="tenure_months",
    nbins=20
)

ML Studio API

Endpoints for What-If sessions, scenario management, model metrics, SHAP analysis, partial dependence, and experiment leaderboards.

POST
/api/studio/sessions

Create a What-If analysis session with baseline input

GET
/api/studio/deployments/{id}/schema

Get input schema for auto-generating forms

POST
/api/studio/sessions/{id}/scenarios

Create a new scenario with modified feature values

POST
/api/studio/scenarios/{id}/run

Execute scenario and get prediction with SHAP

GET
/api/studio/sessions/{id}/compare

Compare all scenarios side-by-side with deltas

GET
/api/models

List models with filters for project, experiment, algorithm

GET
/api/models/{id}

Get model details, metrics, and hyperparameters

GET
/api/models/{id}/variable-importance

Get variable importance rankings for a model

GET
/api/models/{id}/shap

Get SHAP feature impact values (bar, violin, beeswarm)

GET
/api/models/{id}/pdp

Get partial dependence plot data (1D, 2D, ICE)

GET
/api/models/{id}/contributions

Get per-prediction SHAP contributions

GET
/api/experiments/{id}/leaderboard

Get ranked model leaderboard with metrics
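
The endpoints above can also be called without the SDK. A sketch against the documented leaderboard route, using only the standard library — the Bearer-token auth scheme is an assumption here; check the API Reference for the exact header:

```python
from urllib.request import Request, urlopen  # urlopen used only when executing

def leaderboard_request(base_url, experiment_id, api_key):
    """Build an authenticated GET request for the leaderboard endpoint."""
    url = f"{base_url}/api/experiments/{experiment_id}/leaderboard"
    return Request(url, headers={"Authorization": f"Bearer {api_key}"})

req = leaderboard_request("https://api.coreplexml.io", "exp_t1", "sk_your_api_key")
print(req.full_url)
# Executing requires network access:
#   import json; leaderboard = json.load(urlopen(req))
```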

Ready to get started?

Start building with CorePlexML today. Free tier available — no credit card required.