Getting Started With Snowflake Snowpark ML: A Step-by-Step Guide
This guide discusses setting up the Snowpark ML library, configuring your environment, and implementing a basic ML use case.
Snowflake’s Snowpark brings machine learning (ML) closer to your data by enabling developers and data scientists to use Python for ML workflows directly within the Snowflake Data Cloud.
Here are some of the advantages of using Snowpark for machine learning:
- Process data and build models within Snowflake, reducing data movement and latency.
- Scale ML tasks efficiently using Snowflake's elastic compute capabilities.
- Centralize data pipelines, transformations, and ML workflows in one environment.
- Write code in Python, Java, or Scala for seamless library integration.
- Integrate Snowpark with tools like Jupyter and Streamlit for enhanced workflows.
In this tutorial, I'll walk you through setting up the Snowpark ML library, configuring your environment, and implementing a basic ML use case.
Step 1: Prerequisites
Before getting into Snowpark ML, ensure you have the following:
- A Snowflake account.
- SnowSQL CLI or any supported Snowflake IDE (e.g., Snowsight).
- Python 3.8+ installed locally.
- Necessary Python packages: snowflake-snowpark-python and scikit-learn.
Install the required packages using pip:
pip install snowflake-snowpark-python scikit-learn
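If you want to confirm the installs before moving on, pip can report what is already in your environment (this check is optional):
pip show snowflake-snowpark-python scikit-learn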
Step 2: Set Up Snowpark ML Library
1. Ensure your Snowflake account is Snowpark-enabled. You can verify or enable it via your Snowflake admin console.
2. Create a stage in Snowflake to hold your Python packages and model artifacts:
CREATE STAGE my_python_lib;
3. Upload your required Python packages (like scikit-learn) to the stage. For example, use this command to upload a file:
snowsql -q "PUT file://path/to/your/package.zip @my_python_lib AUTO_COMPRESS=TRUE;"
4. Grant permissions to the Snowpark role to use external libraries:
GRANT USAGE ON STAGE my_python_lib TO ROLE my_role;
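As an optional sanity check, you can list the stage contents to confirm the upload landed where you expect:
LIST @my_python_lib;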
Step 3: Configure Snowflake Connection in Python
Set up your Python script to connect to Snowflake:
from snowflake.snowpark import Session
# Define your Snowflake connection parameters
connection_parameters = {
"account": "your_account",
"user": "your_username",
"password": "your_password",
"role": "your_role",
"warehouse": "your_warehouse",
"database": "your_database",
"schema": "your_schema"
}
# Create a Snowpark session
session = Session.builder.configs(connection_parameters).create()
print("Connection successful!")
Step 4: A Simple ML Use Case – Predicting Customer Attrition
Data Preparation
1. Load a sample dataset into Snowflake:
CREATE OR REPLACE TABLE cust_data (
cust_id INT,
age INT,
monthly_exp FLOAT,
attrition INT
);
INSERT INTO cust_data VALUES
(1, 25, 50.5, 0),
(2, 45, 80.3, 1),
(3, 30, 60.2, 0),
(4, 50, 90.7, 1);
2. Access the data in Snowpark:
df = session.table("cust_data")
print(df.collect())
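collect() pulls every row back to the client as Row objects, which is fine for this tiny sample. For exploration you can also convert the Snowpark DataFrame to pandas (this assumes pandas is installed locally):
# Convert the Snowpark DataFrame to a local pandas DataFrame for inspection
pdf = df.to_pandas()
print(pdf.head())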
Building an Attrition Prediction Model
1. Extract features and labels:
from snowflake.snowpark.functions import col
features = df.select(col("age"), col("monthly_exp"))
labels = df.select(col("attrition"))
2. Train a logistic regression model locally using scikit-learn (a quick hold-out evaluation sketch follows this list):
from sklearn.linear_model import LogisticRegression
import numpy as np
# Prepare data
X = np.array(features.collect())
y = np.array(labels.collect()).ravel()
# Train model
model = LogisticRegression()
model.fit(X, y)
print("Model trained successfully!")
3. Save the trained model locally and upload it to the Snowflake stage (a Python-only upload alternative is sketched after this list):
import pickle

# Serialize the trained model to a local file
pickle.dump(model, open("attrition_model.pkl", "wb"))
snowsql -q "PUT file://attrition_model.pkl @my_python_lib AUTO_COMPRESS=TRUE;"
Predict Customer Attrition in Snowflake
1. Use a Snowflake UDF to load and apply the model:
from snowflake.snowpark.types import IntegerType, FloatType
import pickle

# Define the prediction function; the pickled model file must be accessible
# to the UDF (for example, by attaching the staged file via register()'s imports argument).
def predict_attrition(age, monthly_exp):
    model = pickle.load(open("attrition_model.pkl", "rb"))
    return int(model.predict([[age, monthly_exp]])[0])

# Register the UDF so it can be called from Snowpark DataFrames
predict_attrition_udf = session.udf.register(
    predict_attrition, name="predict_attrition",
    return_type=IntegerType(), input_types=[IntegerType(), FloatType()],
    packages=["scikit-learn"])
2. Apply the UDF to predict attrition:
result = df.select("cust_id", predict_attrition_udf(col("age"), col("monthly_exp")).alias("attrition_prediction"))
result.show()
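Since the predictions are just another Snowpark DataFrame, you can persist them for downstream use; for example (the table name is illustrative):
# Write the scored rows back to a Snowflake table
result.write.mode("overwrite").save_as_table("cust_attrition_predictions")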
Best Practices for Snowflake Snowpark in ML
- Use Snowflake's SQL engine for preprocessing to boost performance (see the sketch after this list).
- Design efficient UDFs for non-native computations and limit data passed to them.
- Version and store models centrally for easy deployment and tracking.
- Monitor resource usage with query profiling and optimize warehouse scaling.
- Validate pipelines with sample data before running on full datasets.
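To illustrate the first practice above: filtering and feature engineering expressed on a Snowpark DataFrame are pushed down to Snowflake's engine rather than executed on the client. A small sketch using the cust_data table from earlier (the derived column name is arbitrary):
from snowflake.snowpark.functions import col

# Filter and derive a feature in Snowflake; only the final rows are pulled back
prepped = (
    df.filter(col("monthly_exp") > 55)
      .with_column("exp_per_year_of_age", col("monthly_exp") / col("age"))
)
prepped.show()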
Conclusion
You’ve successfully set up Snowpark ML, configured your environment, and implemented a basic attrition prediction model. Snowpark allows you to scale ML workflows directly within Snowflake, reducing data movement and improving operational efficiency.