Introducing Functions as a Service (FaaS)
There's no reason cloud providers should have a monopoly on serverless computing. Check out this open source framework that helps you use FaaS.
Functions as a Service (FaaS) is a framework for building serverless functions on top of containers. I began this project as a proof of concept in October last year when I wanted to understand if I could run Alexa skills or Lambda functions on Docker Swarm. After some initial success, I released the first version of the code in Golang on GitHub in December.
This post gives a straightforward introduction to serverless computing, then covers my top 3 features introduced in FaaS over the last 500 commits, and finishes with what's coming next and how to get involved.
From that first commit, FaaS went on to gain momentum and over 2,500 stars on GitHub along with a small community of developers and hackers, who have been giving talks at meetups, writing their own cool functions, and contributing code. The highlight for me was winning a place in Moby's Cool Hacks keynote session at Dockercon in Austin in April. The remit for entries was to push the boundaries of what Docker was designed to do.
What Is Serverless?
Architecture Is Evolving
"Serverless" is a misnomer — we're talking about a new architectural pattern for event-driven systems. For this reason, serverless functions are often used as connective glue between other services or in an event-driven architecture. In the days of old, we called this a service bus.
Serverless is an evolution
Serverless Functions
A serverless function is a small, discrete, and reusable chunk of code that:
- Is short-lived
- Is not a daemon (long-running)
- Does not publish TCP services
- Is not stateful
- Makes use of your existing services or third-party resources
- Executes in a few seconds (based on AWS Lambda's default)
We also need to make a distinction between serverless products from IaaS providers and open source software projects.
On one hand, we have serverless products from IaaS providers such as Lambda, Google Cloud Functions, and Azure Functions. On the other hand, we have frameworks such as FaaS, which let an orchestration platform such as Docker Swarm or Kubernetes do the heavy lifting.
Cloud Native — bring your favorite cluster.
A serverless product from an IaaS vendor is completely managed, so it offers a high degree of convenience and per-second/minute billing. On the flip side, you are also very much tied into the provider's release and support cycle. Open-source FaaS exists to promote diversity and offer choice.
What's the Difference With FaaS?
FaaS builds upon industry-standard Cloud Native technology:
The FaaS stack
The difference with the FaaS project is that any process can become a serverless function through the watchdog component and a Docker container. That means three things:
- You can run code in whatever language you want
- For however long you need
- Wherever you want to
Going serverless shouldn't have to mean rewriting your code in another programming language. Just carry on using whatever languages your business and team need.
For example, cat or sha512sum can be used as functions without any changes, since functions communicate through stdin/stdout. Windows functions are also supported through Docker CE.
This is the primary difference between FaaS and the other open-source serverless frameworks which depend on bespoke runtimes for each supported language.
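To make that stdin/stdout contract concrete, here is a minimal sketch of what a handler process boils down to, assuming the watchdog passes the HTTP request body to the process on stdin and returns whatever the process writes to stdout (the uppercasing is only an illustration):

import sys

# The watchdog pipes the request body to stdin and returns stdout
# as the HTTP response, so a plain script is already a function.
request_body = sys.stdin.read()
sys.stdout.write(request_body.upper())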
Let's look at three of the big features that have come along since Dockercon, including CLI and function templating, Kubernetes support, and asynchronous processing.
1. The New CLI
Easy Deployments
I added a CLI to the FaaS project to make deploying functions easier and scriptable. Prior to this, you could use the API Gateway's UI or curl. The CLI allows functions to be defined in a YAML file and then deployed to the API Gateway.
Finnian Anderson wrote a great intro to the FaaS CLI on Practical Dev/dev.to
Utility Script and Brew
There is an installation script available, and John McCabe helped the project get a recipe on brew.
$ brew install faas-cli
or
$ curl -sL https://cli.get-faas.com/ | sudo sh
Templating
With templating, you only need to write a handler in your chosen programming language; the CLI then uses a template to bundle it into a Docker container, with the FaaS magic handled for you.
There are two templates provided for Python and Node.js, but you can create your own easily.
There are three actions the CLI supports:
- -action build: creates Docker images locally from your templates
- -action push: pushes your built images to your desired registry or the Hub
- -action deploy: deploys your FaaS functions
If you have a single-node cluster, you don't need to push your images to deploy them.
Here's an example of the CLI configuration file in YAML:
provider:
  name: faas
  gateway: http://localhost:8080
functions:
  url_ping:
    lang: python
    handler: ./sample/url_ping
    image: alexellis2/faas-urlping

sample.yml
Here is the bare minimum handler for a Python function:
def handle(req):
    print(req)
This is an example that pings a URL over HTTP for its status code:
import requests

def print_url(url):
    try:
        r = requests.get(url, timeout=1)
        print(url + " => " + str(r.status_code))
    except:
        print("Timed out trying to reach URL.")

def handle(req):
    print_url(req)

./sample/url_ping/handler.py
If you need additional pip modules, then also place a requirements.txt file alongside your handler.py file.
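For the url_ping sample above, which imports requests, that requirements.txt would need just one line:

requests

You can then build everything defined in sample.yml with: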
$ faas-cli -action build -f ./sample.yml
You'll then find a Docker image called alexellis2/faas-urlping, which you can push to the Docker Hub with -action push and deploy with -action deploy.
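Once deployed, you can call the function through the API Gateway. As a quick sketch (the /function/<name> route and the localhost:8080 address are assumptions based on the sample.yml above), the same requests library used in the handler works for invoking it:

import requests

# POST the request body (here, a URL to ping) to the deployed function
# via the API Gateway and print whatever the function wrote to stdout.
r = requests.post("http://localhost:8080/function/url_ping",
                  data="https://github.com")
print(r.status_code, r.text)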
You can find the CLI in its own repo.
2. Kubernetes Support
As a Docker Captain, I focus primarily on learning and writing about Docker Swarm, but I have always been curious about Kubernetes. I started learning how to set up Kubernetes on Linux and Mac and wrote three tutorials on it, which were well received in the community.
Architecting Kubernetes Support
Once I had a good understanding of how to map Docker Swarm concepts over to Kubernetes, I wrote a technical prototype and managed to port all the code over in a few days. I opted to create a new microservice daemon to speak to Kubernetes rather than introducing additional dependencies to the main FaaS codebase.
FaaS proxies calls to the new daemon via a standard RESTful interface for operations such as Deploy, List, Delete, Invoke, and Scale.
Using this approach meant that the UI, the CLI, and auto-scaling all worked out of the box without changes. The resulting microservice is maintained in a new GitHub repository called FaaS-netes and is available on the Docker Hub. You can set it up on your cluster in around 60 seconds.
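As a rough sketch of that RESTful interface (the /system/functions route is an assumption based on the gateway's default API), listing deployed functions is the same call whether Docker Swarm or FaaS-netes is doing the work behind the gateway:

import requests

# Ask the API Gateway for the functions it knows about; the gateway
# proxies the request to the Swarm or Kubernetes (FaaS-netes) back-end.
for fn in requests.get("http://localhost:8080/system/functions").json():
    print(fn.get("name"), fn.get("replicas"))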
Watch a Demo of Kubernetes Support
In this demo, I deploy FaaS to an empty cluster, then run through how to use the UI and Prometheus, and trigger auto-scaling too.
But Wait... Aren't There Other Frameworks That Work on Kubernetes?
There are probably two categories of Serverless frameworks for Kubernetes — those which rely on a highly specific runtime for each supported programming language and ones like FaaS, which let any container become a function.
FaaS has bindings to the native API of Docker Swarm and Kubernetes, meaning it uses first-class objects that you are already used to managing such as Deployments and Services. This means there is less magic and code to decipher when you get into the nitty gritty of writing your new applications.
A consideration when picking a framework is whether you want to contribute features or fixes. OpenWhisk, for instance, is written in Scala. Most of the others are written in Golang.
3. Asynchronous Processing
One of the traits of a serverless function is that it's small and fast, typically completing synchronously within a few seconds. There are several reasons why you may want to process your function asynchronously:
- It's an event and the caller doesn't need a result
- It takes a long time to execute or initialize, e.g. TensorFlow or other machine learning workloads
- You're ingesting a large number of requests as a batch job
- You want to apply rate limiting
I started a prototype for asynchronous processing via a distributed queue. The implementation uses the NATS Streaming project but could be extended to use Kafka or any other abstraction that looks like a queue.
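An asynchronous call looks much like a synchronous one from the caller's point of view, except that the request is placed on the queue and acknowledged immediately instead of returning the function's result. A sketch, assuming the prototype gateway exposes queued invocations under an /async-function/<name> route:

import requests

# Queue the work rather than waiting for it: the gateway puts the request
# onto NATS Streaming and acknowledges it without returning a result.
r = requests.post("http://localhost:8080/async-function/url_ping",
                  data="https://github.com")
print(r.status_code)  # an "accepted" style status code, not the function's output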
I have a Gist available for trying the asynchronous code out:
What's Next?
Thanks to the folks at Packet.net, a new logo and website will be going live soon.
Packet are automating the Internet and offer great-value bare-metal infrastructure in the cloud.
Speaking
I'll be speaking on Serverless and FaaS at LinuxCon North America in September. Come and meet me there, and if you can't make it, follow me on Twitter: @alexellisuk.
Get Started!
Please show your support for FaaS by starring the GitHub repository and sharing this blog post on Twitter.
You can get started with the TestDrive over on GitHub:
I'm most excited about the growing Kubernetes support and asynchronous processing. It would also be great to have someone take a look at running FaaS on top of Azure Container Instances.
All Contributions Are Welcome
Whether you want to help with issues, coding features, releasing the project, scripting, tests, benchmarking, documentation, updating samples, or even blogging about it, there is something for everyone, and it all helps keep the project moving forward.
So if you have feedback, ideas, or suggestions then please post them to me @alexellisuk or via one of the GitHub repositories.
Not sure where to start? Get inspired by the community talks and sample functions, including machine learning with TensorFlow, ASCII art, and easy integrations.