
Build ML Pipelines in Python

The official CorePlexML Python SDK. Train models, deploy endpoints, and manage your entire ML lifecycle programmatically.

$ pip install coreplexml

Up and running in 60 seconds

Install the SDK, authenticate with your API key, and start building.

terminal
# Install from PyPI
$ pip install coreplexml

# Requirements
# Python 3.9+
# requests >= 2.28
authenticate.py
from coreplexml import CorePlexMLClient

client = CorePlexMLClient(
    base_url="https://your-instance.coreplexml.io",
    api_key="sk_your_api_key"
)
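
Hardcoding a key is fine for a quick test, but in real code you will usually read credentials from the environment. A minimal sketch; the `CPML_BASE_URL` and `CPML_API_KEY` variable names are illustrative choices, not an SDK convention:

```python
import os

# Read connection settings from the environment; the fallback base URL
# and empty-string default keep this snippet runnable anywhere.
base_url = os.environ.get("CPML_BASE_URL", "https://your-instance.coreplexml.io")
api_key = os.environ.get("CPML_API_KEY", "")

if not api_key:
    print("Warning: CPML_API_KEY is not set")
```

Pass `base_url` and `api_key` to `CorePlexMLClient` exactly as in the snippet above.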

End-to-end in one script

From data upload to production predictions — a complete ML workflow in a single Python file.

quickstart.py
from coreplexml import CorePlexMLClient

client = CorePlexMLClient(base_url="https://api.coreplexml.io", api_key="sk_xxx")

# Create a project
project = client.projects.create(name="Churn Prediction", description="Q1 2026 model")

# Upload training data
dataset = client.datasets.upload(
    project_id=project["id"],
    file_path="customers.csv",
    name="Customer Data"
)

# Train with AutoML
experiment = client.experiments.create(
    project_id=project["id"],
    dataset_version_id=dataset["dataset_version_id"],
    target_column="churn",
    problem_type="classification"
)

# Wait for training to complete
result = client.experiments.wait(experiment["id"])
print(f"Best model: {result['best_model_id']}")

# Deploy to production
deployment = client.deployments.create(
    project_id=project["id"],
    model_id=result["best_model_id"],
    name="churn-prod",
    stage="production"
)

# Make predictions
prediction = client.deployments.predict(
    deployment_id=deployment["id"],
    inputs={"age": 35, "tenure": 24, "monthly_charges": 65.50}
)
print(f"Churn probability: {prediction['probability']:.2%}")

One client, 13 modules

Access every CorePlexML resource through intuitive, namespaced methods — from AutoML training to A/B testing and alerts.

Projects

Create and manage workspaces. Organize datasets, experiments, and deployments under a single project.

projects = client.projects.list()
project = client.projects.create(name="My Project")

Datasets

Upload CSVs, manage dataset versions, and retrieve column metadata for your training data.

dataset = client.datasets.upload(project_id, "data.csv", name="Training Data")
versions = client.datasets.versions(dataset_id)

Experiments (AutoML)

Launch AutoML training runs with automatic algorithm selection and hyperparameter tuning.

exp = client.experiments.create(project_id, dataset_version_id, target_column="target")
result = client.experiments.wait(exp["id"], timeout=3600)

Deployments (MLOps)

Deploy models to production endpoints with canary rollouts, A/B testing, and real-time monitoring.

dep = client.deployments.create(project_id, model_id, name="prod", stage="production")
pred = client.deployments.predict(dep["id"], inputs={...})

Privacy Suite

Detect and transform PII across 72+ types. Built-in HIPAA, GDPR, PCI-DSS, and CCPA profiles.

policy = client.privacy.create_policy(project_id, name="HIPAA", profile="hipaa")
session = client.privacy.create_session(policy_id, dataset_id)
client.privacy.detect(session["id"])

SynthGen

Generate privacy-safe synthetic data with CTGAN, CopulaGAN, and TVAE engines.

model = client.synthgen.create_model(project_id, dataset_version_id, model_type="ctgan")
synthetic = client.synthgen.generate(model["id"], num_rows=10000)

Batch Predictions

Run predictions on entire datasets asynchronously. Upload CSV, start a batch job, and download results when ready.

job = client.predictions.create(deployment_id, file_path="batch.csv")
result = client.predictions.wait(job["id"])
client.predictions.download(job["id"], "output.csv")

Streaming Predictions

Real-time WebSocket streaming for batch inference progress and live prediction results.

for row in client.streaming.predict(deployment_id, data):
    print(row["prediction"], row["confidence"])

Model Registry

Semantic versioning, stage transitions (dev → staging → prod), model cards, and lineage tracking.

ver = client.registry.create_version(project_id, model_id, version="1.2.0")
client.registry.transition_stage(ver["id"], stage="production")

Reports

Generate PDF reports for model performance, feature importance, drift analysis, and deployment summaries.

report = client.reports.generate(project_id, kind="performance")
result = client.reports.wait(report["id"])
client.reports.download(report["id"], "report.pdf")

A/B Testing

Create experiments between model variants with configurable traffic splits and statistical analysis.

test = client.ab_tests.create(project_id, model_a, model_b, split=50)
results = client.ab_tests.get_results(test["id"])

Alerts

Configure monitoring alert rules with multi-channel notifications: Slack, email, and webhooks.

rule = client.alerts.create_rule(deployment_id, metric="drift_psi", threshold=0.2)
client.alerts.add_channel(rule["id"], channel_type="slack")

Admin

Platform administration: manage users, view system settings, and monitor platform health.

users = client.admin.list_users(page=1, per_page=50)
settings = client.admin.manage_settings(gpu_enabled=True)

Everything you need, nothing you don't

The SDK mirrors every API endpoint in a clean, Pythonic interface — with smart defaults so you can move fast.

  • Full CRUD for all platform resources across 13 modules
  • Blocking wait with polling for async operations (training, report generation)
  • Typed error handling (AuthenticationError, NotFoundError, ValidationError, APIError)
  • Configurable timeouts and retry logic
  • File upload support (CSV, Excel, JSON, XML)
  • Batch and streaming predictions (REST + WebSocket)
  • Report generation and download (7 report types)
  • Model registry with semantic versioning and stage management
  • A/B testing with statistical significance analysis
  • Multi-channel alert configuration (Slack, email, webhooks)
  • Privacy Suite and SynthGen integration
  • Admin operations for user and platform management
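
The blocking `wait` helpers above boil down to a poll loop with a deadline. The sketch below illustrates the pattern only; `wait_until`, `poll`, and `done` are hypothetical names, not the SDK's internals.

```python
import time

def wait_until(poll, done, timeout=60.0, interval=0.01):
    """Call poll() repeatedly until done(status) is true or the deadline passes."""
    deadline = time.monotonic() + timeout
    while True:
        status = poll()
        if done(status):
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError("operation did not complete in time")
        time.sleep(interval)
```

The SDK's `experiments.wait` and `reports.wait` expose the same shape: a `timeout` bound plus polling against the server-side job status.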

Typed exceptions, clear diagnostics

Every error type is a distinct exception with actionable context. No more guessing from raw HTTP codes.

error_handling.py
from coreplexml import CorePlexMLClient
from coreplexml.exceptions import (
    AuthenticationError,
    NotFoundError,
    ValidationError,
    APIError,
)

client = CorePlexMLClient(base_url="https://api.coreplexml.io", api_key="sk_xxx")

try:
    experiment = client.experiments.create(
        project_id="proj_abc",
        dataset_version_id="dsv_123",
        target_column="revenue"
    )
except AuthenticationError:
    print("Invalid or expired API key")
except NotFoundError as e:
    print(f"Resource not found: {e.resource_id}")
except ValidationError as e:
    print(f"Invalid params: {e.errors}")
except APIError as e:
    print(f"HTTP {e.status_code}: {e.message}")
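
Because the exceptions are typed, transient failures can be retried selectively while permanent ones (bad credentials, invalid params) fail fast. Below is an illustrative retry-with-backoff wrapper, not part of the SDK; `TransientError` stands in for whichever exceptions you treat as retryable, for example `APIError` on 5xx responses.

```python
import time

def with_retries(fn, retryable=(Exception,), attempts=3, base_delay=0.1):
    """Call fn(), retrying on retryable exceptions with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except retryable:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(base_delay * (2 ** attempt))

class TransientError(Exception):
    pass

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("temporary failure")
    return "ok"

print(with_retries(flaky, retryable=(TransientError,)))  # prints "ok"
```

Wrapping only the calls that hit the network, and only for the error types you consider transient, avoids masking genuine client-side mistakes such as a `ValidationError`.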

Start building with the SDK today

Get your API key, install the package, and ship your first model in minutes.