Cloud Cost Optimization for ML Workloads With NVIDIA DCGM
SQL partitioning, predictive forecasting, GPU partitioning, and spot VMs can cut ML costs by up to 60% without hurting performance.
Introduction
Running machine learning (ML) workloads in the cloud can become prohibitively expensive when teams overlook resource orchestration. Large-scale data ingestion, GPU-based inference, and ephemeral tasks often rack up unexpected fees. This article offers a detailed look at advanced strategies for cost management, including:
- Dynamic Extract, Transform, Load (ETL) schedules using SQL triggers and partitioning
- Time-series modeling—Seasonal Autoregressive Integrated Moving Average (SARIMA) and Prophet—with hyperparameter tuning
- GPU provisioning with NVIDIA DCGM and multi-instance GPU configurations
- In-depth autoscaling examples for AI services
Our team reduced expenses by 48% while maintaining performance for large ML pipelines. This guide outlines our process in code.
Advanced ETL Management With SQL Partitioning and Triggers
Partitioned ETL Logs for Cost Insights
A typical ETL costs table might suffer from slow queries if it grows unbounded. We partitioned usage logs by month to accelerate cost analysis.
CREATE TABLE etl_usage_logs (
    pipeline_id    VARCHAR(50) NOT NULL,
    execution_time INTERVAL,
    cost           NUMERIC(10, 2),
    execution_date TIMESTAMP NOT NULL
) PARTITION BY RANGE (execution_date);

CREATE TABLE IF NOT EXISTS etl_usage_jan2025
    PARTITION OF etl_usage_logs
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
This setup ensures queries run efficiently, even when the dataset is massive.
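To see the benefit, we point routine cost reports at a bounded date range so the planner scans only the relevant partition. The snippet below is a minimal sketch in Python, assuming PostgreSQL and psycopg2; the connection string and the rollup query are illustrative, not our production reporting code.

import psycopg2

# Hypothetical connection string; replace with your own PostgreSQL DSN.
conn = psycopg2.connect("dbname=ml_costs user=etl_reader")

# Restricting execution_date to January lets the planner prune every
# partition except etl_usage_jan2025, so the scan stays small.
query = """
    SELECT pipeline_id, SUM(cost) AS monthly_cost
    FROM etl_usage_logs
    WHERE execution_date >= '2025-01-01'
      AND execution_date < '2025-02-01'
    GROUP BY pipeline_id
    ORDER BY monthly_cost DESC;
"""

with conn, conn.cursor() as cur:
    cur.execute(query)
    for pipeline_id, monthly_cost in cur.fetchall():
        print(f"{pipeline_id}: ${monthly_cost}")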
Dynamic ETL Trigger Based on Data Volumes
Instead of blindly scheduling hourly ETL runs, we used an event-based trigger that checks data volume thresholds.
CREATE OR REPLACE FUNCTION run_etl_if_threshold() RETURNS TRIGGER AS $$
BEGIN
    -- run_etl_pipeline is our ETL entry point, defined elsewhere
    IF (SELECT COUNT(*) FROM staging_table) > 100000 THEN
        PERFORM run_etl_pipeline('staging_table');
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER etl_threshold_trigger
AFTER INSERT ON staging_table
FOR EACH STATEMENT
EXECUTE FUNCTION run_etl_if_threshold();
This logic only executes the ETL when the staging_table surpasses 100,000 rows. We observed a 44% reduction in ETL costs.
Advanced Time-Series Cost Forecasting
SARIMA With Custom Seasonality
Our cost dataset exhibited weekly and monthly seasonality. Basic SARIMA might overlook multiple cycles, so we tuned multiple seasonal orders.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.statespace.sarimax import SARIMAX
# Load the historical cloud cost data (daily costs indexed by date)
cost_data = pd.read_csv('cloud_costs.csv', parse_dates=['date'], index_col='date')
# First-difference the daily costs to remove trend (the first row becomes NaN)
cost_data['diff_cost'] = cost_data['cost'].diff(1)
# SARIMA on the differenced series: (1,1,1) non-seasonal terms plus a
# weekly (period-7) seasonal MA term to capture the weekly cycle
model = SARIMAX(
    cost_data['diff_cost'].dropna(),
    order=(1, 1, 1),
    seasonal_order=(0, 1, 1, 7),
    trend='n'
)
results = model.fit(disp=False)
forecast_steps = 30
prediction = results.get_forecast(steps=forecast_steps)
forecast_ci = prediction.conf_int()
Our advanced SARIMA approach detected weekly cost spikes (typically Monday usage surges). We used differencing for daily fluctuations and a separate seasonal term for weekly patterns.
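We acted on these forecasts by flagging projected over-budget days ahead of time. The snippet below is a minimal sketch that continues from the fit above: because the model forecasts differenced costs, it rebuilds cost levels first; the daily_budget figure is illustrative, not a value from our deployment.

# The model forecasts *differenced* costs, so rebuild cost levels by
# cumulatively summing the predicted changes on top of the last observed cost.
predicted_diffs = prediction.predicted_mean
level_forecast = cost_data['cost'].iloc[-1] + predicted_diffs.cumsum()

# Illustrative daily budget (USD); in practice this comes from finance.
daily_budget = 1200.0

over_budget_days = level_forecast[level_forecast > daily_budget]
if not over_budget_days.empty:
    print("Projected over-budget days:")
    print(over_budget_days.round(2))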
Enhanced Prophet With Capacity and Holidays
Prophet can incorporate capacity constraints (upper and lower bounds) and custom holiday events.
from prophet import Prophet  # current package name; older installs use fbprophet

# Prophet expects columns named 'ds' (date) and 'y' (value)
prophet_df = cost_data.reset_index()[['date', 'cost']]
prophet_df.columns = ['ds', 'y']

# Known discount windows modeled as custom "holidays"
holidays = pd.DataFrame({
    'holiday': 'cloud_discount',
    'ds': pd.to_datetime(['2025-02-15', '2025-11-25']),
    'lower_window': 0,
    'upper_window': 1
})

m = Prophet(
    growth='logistic',
    holidays=holidays,
)

# Logistic growth needs explicit capacity bounds on both the training and future frames
prophet_df['cap'] = 30000
prophet_df['floor'] = 0
m.fit(prophet_df)

future = m.make_future_dataframe(periods=60)
future['cap'] = 30000
future['floor'] = 0
forecast = m.predict(future)
We recognized that after a certain point, departmental budgets capped daily spending. By modeling these constraints, Prophet produced more realistic forecasts.
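A quick way to sanity-check the logistic-growth forecast is to confirm that predictions saturate below the cap and to total projected spend over the horizon. This is a small illustrative follow-up to the code above; the 90%-of-cap threshold is an arbitrary choice for the example.

# Inspect the capacity-constrained forecast: yhat should level off near the
# cap rather than growing without bound.
horizon = forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail(60)

projected_total = horizon['yhat'].sum()
print(f"Projected spend over the next 60 days: ${projected_total:,.0f}")

# Days where even the optimistic bound approaches the $30,000/day cap
near_cap = horizon[horizon['yhat_upper'] > 0.9 * 30000]
print(f"{len(near_cap)} forecast days come within 10% of the daily cap")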
GPU Provisioning With NVIDIA DCGM and MIG
GPU Monitoring
We used NVIDIA DCGM for comprehensive monitoring of GPU memory, temperature, and SM usage. The following script captures DCGM output to local log files, which we then ship to a time-series database (e.g., InfluxDB):
#!/usr/bin/env bash
# Poll DCGM once a minute and append SM activity and memory stats to log files.
# (dcgmi diag -r 3 runs a full level-3 diagnostic; dcgmi dmon is a lighter option.)
while true; do
  report=$(dcgmi diag -r 3)
  echo "$report" | grep "SM Active" >> /var/log/gpu_sm_usage.log
  echo "$report" | grep "Memory"    >> /var/log/gpu_mem_usage.log
  sleep 60
done
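For teams that prefer to collect samples in Python before shipping them to a time-series store, the sketch below polls nvidia-smi as a lightweight stand-in for DCGM field queries and appends CSV rows to a local file. The log path and sampling interval are illustrative.

import subprocess
import time

LOG_PATH = "/var/log/gpu_metrics.csv"   # illustrative path
INTERVAL_SECONDS = 60

def sample_gpu_metrics() -> str:
    """Return one CSV line per GPU: timestamp, utilization (%), memory used (MiB)."""
    return subprocess.check_output(
        [
            "nvidia-smi",
            "--query-gpu=timestamp,utilization.gpu,memory.used",
            "--format=csv,noheader,nounits",
        ],
        text=True,
    )

while True:
    with open(LOG_PATH, "a") as log_file:
        log_file.write(sample_gpu_metrics())
    time.sleep(INTERVAL_SECONDS)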
Multi-Instance GPU (MIG)
On NVIDIA A100 or newer GPUs, MIG enables partitioning the GPU into multiple logical instances. This approach helps allocate just enough GPU resources per inference job.
# Create GPU instances (a 1g.5gb and a 2g.10gb slice), then their compute instances
nvidia-smi mig -i 0 -cgi 1g.5gb,2g.10gb
nvidia-smi mig -i 0 -cci
We reconfigured our inference service so that smaller models run on smaller MIG partitions. This yielded a 30% cost reduction versus running multiple large GPU instances.
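To route a lightweight model onto one of the smaller slices, the serving process can be pinned to a specific MIG device before the ML framework initializes CUDA. The sketch below is illustrative: the MIG UUID shown is a placeholder, and real UUIDs are discovered with nvidia-smi -L.

import os
import subprocess

# List all GPUs and MIG devices; MIG instances show up with "MIG-..." UUIDs.
print(subprocess.check_output(["nvidia-smi", "-L"], text=True))

# Placeholder UUID; copy the real one from the nvidia-smi -L output above.
MIG_DEVICE = "MIG-00000000-0000-0000-0000-000000000000"

# Must be set before the ML framework (PyTorch, TensorFlow, ...) initializes
# CUDA, so the serving process only sees its assigned slice.
os.environ["CUDA_VISIBLE_DEVICES"] = MIG_DEVICE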
Autoscaling AI Services
Kubernetes Custom Metrics
We often need more than CPU usage to scale ML microservices. By exposing custom GPU or memory metrics through Prometheus and surfacing them to Kubernetes via a custom metrics adapter, we can scale on those signals directly.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ai-inference-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ai-inference
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Pods
    pods:
      metric:
        name: "gpu_util"
      target:
        type: AverageValue
        averageValue: "60"
This autoscaler adds replicas when the average gpu_util across pods exceeds 60 (i.e., 60% GPU utilization), as shown in the exporter sketch below.
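The HPA above assumes each pod exposes a gpu_util metric that Prometheus can scrape and an adapter can surface to Kubernetes. A minimal sidecar-style exporter might look like this sketch; the port, polling interval, and metric name are assumptions that must match your Prometheus scrape config and adapter rules, and NVIDIA's dcgm-exporter is the usual production alternative.

import subprocess
import time

from prometheus_client import Gauge, start_http_server

# Metric name must line up with the HPA's pods metric ("gpu_util") after any
# renaming done by the Prometheus adapter.
gpu_util = Gauge("gpu_util", "GPU utilization percentage for this pod")

def read_gpu_utilization() -> float:
    """Read utilization for GPU 0 via nvidia-smi (dcgm-exporter is an alternative)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits", "-i", "0"],
        text=True,
    )
    return float(out.strip())

if __name__ == "__main__":
    start_http_server(9400)  # port is illustrative; match your scrape config
    while True:
        gpu_util.set(read_gpu_utilization())
        time.sleep(15)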
Spot Instances for Non-Critical Tasks
For cost-conscious experiments, we scheduled some model training on AWS Spot Instances or Google Cloud Preemptible VMs:
aws ec2 request-spot-instances --spot-price "0.20" --instance-count 2 --type persistent --launch-specification file://spot_specification.json
Losing these instances won't disrupt mission-critical inference, and using them saves significant budget on training and batch tasks.
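Because Spot capacity can be reclaimed with only a short warning, our training jobs watch the EC2 instance metadata service for an interruption notice and checkpoint when one appears. The sketch below is a minimal illustration: save_checkpoint is a placeholder for whatever checkpointing your training loop already does, and IMDSv2 setups additionally require a session token.

import time

import requests

# EC2 instance metadata endpoint for Spot interruption notices; returns 404
# until AWS schedules a reclaim.
SPOT_ACTION_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def save_checkpoint():
    """Placeholder: persist model weights and optimizer state to durable storage."""
    print("Interruption notice received, checkpointing to durable storage...")

while True:
    try:
        resp = requests.get(SPOT_ACTION_URL, timeout=2)
        if resp.status_code == 200:
            save_checkpoint()
            break
    except requests.RequestException:
        pass  # metadata service unreachable; retry on the next poll
    time.sleep(5)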
Real-World Results
After implementing advanced partitioning, time-series forecasting, GPU partitioning, and custom autoscaling, our monthly cloud bill dropped by 48%, all while supporting 10 million daily inferences and daily ETL over a 5 TB dataset.
- SQL Partitioning and Triggers
  - Achieved a 44% reduction in ETL costs
  - Removed idle runs by automating table partitions
- SARIMA + Prophet
  - Decreased overall cloud spending by 48%
  - Forecasted usage spikes to preempt over-allocation
- MIG GPU Partitioning
  - Lowered GPU expenses by 30%
  - Allocated smaller GPU slices for less intensive models
- Spot/Preemptible VMs
  - Reduced training overhead by 60%
  - Used transient instances where interruptions posed minimal risk
Conclusion
Controlling cloud spending for ML workloads isn’t just about imposing surface-level cost caps. By integrating predictive analytics, database partitioning, targeted GPU provisioning, and well-chosen VM types, teams can capture substantial savings without downgrading model performance. Our experience demonstrates that a methodical blend of SQL triggers, time-series forecasting, and flexible GPU resources delivers measurable financial benefits. If your ML budget seems unwieldy, these techniques can help you tighten spending while still meeting ambitious performance goals.