The Tech Files: Pushing Jupyter Notebooks to Production

Learn how one company embarked upon their data science journey to incorporate recommendation algorithms into their client-facing product.

By Teja Srivastasa · Feb. 21, 18 · Analysis


No one can deny how large the online support community for data science is. Today, it’s possible to teach yourself Python and other programming languages in a matter of weeks. And if you’re ever in doubt, there’s a StackOverflow thread or something similar waiting to give you the perfect piece of code to help you.

But when it came to pushing that code to production, we found very little documentation online. Most data scientists seem to work on Python notebooks in a silo. They process large volumes of data and analyze it — but within the confines of Jupyter Notebooks. And most of the resources we've found while growing as data scientists revolve around Jupyter Notebooks.

Our Solution

I began by evaluating Python web frameworks. The first big name was Django, followed by Flask. After studying a few comparisons, I went ahead with Flask. Since all we wanted was to expose an API to trigger our Python code, I felt Flask was the right choice, as it's lightweight (i.e., the package size is relatively small). I found the closest match for a flask-starter app on GitHub and cloned it.

I then read the already-written code to understand how web apps are configured in Python (requirements.txt, .cfg files, etc.). Then, I wrote the first API and ran it locally. KA-BOOM.
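
To make that concrete, here is a minimal sketch of such a first API; the file name (app.py) and the route are illustrative, not from our actual codebase:

    # app.py: a minimal Flask app exposing a single API route.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/ping")
    def ping():
        return jsonify(status="ok")

    if __name__ == "__main__":
        app.run(debug=True)  # local development only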

Integration

I downloaded the Python version of the notebook from Jupyter and structured the code into modules. I then attached it to the API that I wrote earlier, passing the input variables (in our case, just the job configured by an employer) as a route param, and the integration was done. I tested it locally, and it worked!
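
As a hypothetical sketch of that wiring (the module and route names here are mine), the notebook logic becomes an importable module and the job ID arrives as a route param:

    from flask import Flask, jsonify

    # recommender.py holds the code exported from the notebook
    # (File > Download as > Python, or: jupyter nbconvert --to script).
    import recommender

    app = Flask(__name__)

    @app.route("/recommendations/<int:job_id>")
    def recommendations(job_id):
        # job_id is the input variable: the job configured by an employer.
        return jsonify(recommender.run(job_id))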

Deployment

What remained at this point was deployment. Okay, but how? All our other microservices run in Docker containers, so I initially decided to find a way to dockerize the Flask application. But what struck me was that, given our requirements and volume, we didn't need an application running 24/7. Since this was the first version of our recommendation engine, it was a fairly simple algorithm that, when benchmarked, always completed in under 30 seconds. I felt this was the perfect use case for a serverless architecture.

Serverless

I went into my AWS console and opened Lambda but had no idea where to begin. Lambda has a single handler function inside which all your code should run. Writing the code in the Lambda console wouldn't work for us, as we had external dependencies. Then I came across Zappa, which deploys Flask apps on AWS Lambda and auto-configures the entire infrastructure. After reading the README, I was amazed at its applications and capabilities.
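
For context, this is the bare shape of a Lambda function on its own; Zappa generates an equivalent wrapper around your Flask app, so you never write it yourself:

    # One handler; all your code runs inside it.
    def lambda_handler(event, context):
        # event carries the trigger payload (e.g. an API Gateway request)
        return {"statusCode": 200, "body": "ok"}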

Zappa

Zappa needs a virtual environment to auto-package your dependencies, since any Python web app lists them in requirements.txt. Follow Zappa's instructions, though you might get stuck configuring IAM roles on AWS. Here's a tutorial on how to deploy Flask-Ask skills to AWS Lambda with Zappa.
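
The basic workflow looks like this; the stage name (dev) and every value in the settings file are illustrative, and zappa init will generate the file for you interactively:

    python -m venv venv && source venv/bin/activate
    pip install flask zappa
    zappa init        # generates zappa_settings.json
    zappa deploy dev  # first deployment; creates the AWS resources

A generated zappa_settings.json looks roughly like this, where app_function points at the Flask instance (module.variable):

    {
        "dev": {
            "app_function": "app.app",
            "project_name": "recommender",
            "runtime": "python3.6",
            "aws_region": "us-east-1",
            "s3_bucket": "zappa-recommender-deploys"
        }
    }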

In my application, Zappa creates the API Gateway, registers the Lambda function, and routes the logs to CloudWatch. All of this setup is done automatically. You can also listen to events in AWS and trigger code execution based on events like an S3 file upload, a message queue, or a cron job.
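
For instance, Zappa's settings file can subscribe a function to an S3 upload; the bucket name and function below are hypothetical:

    "events": [{
        "function": "app.process_upload",
        "event_source": {
            "arn": "arn:aws:s3:::my-data-bucket",
            "events": ["s3:ObjectCreated:*"]
        }
    }]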


What We Learned

Moving Jupyter Notebooks to production is now a lot easier using Zappa. Once you are ready to update your code, all you have to do is run zappa update. Keep in mind that the cost of productionizing is pretty low, since AWS Lambda's free tier includes one million requests per month.
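
Assuming the dev stage from the earlier settings file, that update is a one-liner:

    zappa update dev  # repackages and pushes the new code without touching the infrastructure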

Here is an example to get started with deploying data science models, using the classic Iris dataset. We'll experiment, build and deploy the best model, and perform predictions on new data points.
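
Here's a minimal sketch of what that could look like with scikit-learn behind the same Flask app; the endpoint name and the JSON payload shape are my assumptions, not a fixed convention:

    # Train at import time so the fitted model ships inside the deployment package.
    from flask import Flask, jsonify, request
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    app = Flask(__name__)

    iris = load_iris()
    model = RandomForestClassifier(n_estimators=50).fit(iris.data, iris.target)

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expects JSON like {"features": [5.1, 3.5, 1.4, 0.2]}.
        features = request.get_json()["features"]
        label = model.predict([features])[0]
        return jsonify(species=str(iris.target_names[label]))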

All you have to do is:

  1. Create an interface to auto-convert your Jupyter Notebook to Python applications at scale (one conversion approach is sketched after this list).

  2. Create a Jenkins pipeline to trigger deployment upon code commit.
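
For step 1, nbconvert is one way to script the conversion, e.g. inside a Jenkins build step; the notebook name is hypothetical:

    jupyter nbconvert --to script model_notebook.ipynb  # emits model_notebook.py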

Check out this tutorial on how to embed Jupyter Notebooks into your Python application.
