The Gorilla Guide to Serverless on Kubernetes, Chapter 6: Simplify Development With Fission
This excerpt details how Fission's tight integration with Kubernetes can make the deployment and operations cycles less painful.
Deployment and Operations
Fission supports declarative deployments using build specs. These specs describe Fission resources, such as functions and triggers, and allow developers to deploy functions anywhere. This tames the complexity of deploying across different environments, ensuring that a function is deployed consistently in each of them.
These build specs use Kubernetes' custom resources and are stored as configuration files, which can be checked into version control. In a later release, Fission will support automatic deployment from the source repository.
In addition, Fission automatically generates the initial configuration, giving the developer a ready-to-use template to customize further. This saves time up front while preserving flexibility.
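As a concrete illustration, the workflow with the Fission CLI looks roughly like this (the function name and source file are hypothetical; command names follow recent Fission releases):

    # Create a specs/ directory holding the declarative deployment configuration
    fission spec init

    # Generate a spec file for a new function instead of creating it directly
    # on the cluster; the YAML lands in specs/ and can go into version control
    fission function create --name hello --env nodejs --code hello.js --spec

    # Apply every spec in specs/ to the currently configured cluster
    fission spec apply

Because the generated files are ordinary Kubernetes custom resources, the same specs can be applied unchanged to development, staging, and production clusters.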
Fission also saves time once the code is written. For every developer, deploying to production is the anxiety-inducing moment. Automated canary deployments help manage that risk by initially sending only a small fraction of traffic to the new version. As trust is gained, more traffic is shifted to the new version, until older versions are ultimately removed from the production roster. Conversely, if end users turn out to be experiencing issues, Fission re-routes them to the older, more stable code so the issue can be resolved.
Fission has configuration settings for the distribution of traffic between versions (and how it shifts over time), as well as the error-rate threshold at which a rollout is abandoned.
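In practice, a canary rollout is configured with a single command. Here is a minimal sketch using Fission's canary-config support (the function and trigger names are hypothetical, and flag names may differ slightly between Fission versions):

    # Shift traffic from fn-v1 to fn-v2 in 10% steps every minute,
    # rolling back automatically if more than 10% of requests to fn-v2 fail
    fission canary-config create --name canary-1 \
        --funcN fn-v2 \
        --funcN-1 fn-v1 \
        --httptrigger route-fn \
        --increment-step 10 \
        --increment-interval 1m \
        --failure-threshold 10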
Monitoring and Metrics
Monitoring has traditionally been a tricky area for FaaS because of the short-lived nature of containers and functions, and because of the amount of engineering cloud providers have had to pour into monitoring solutions for their own FaaS offerings.
For many FaaS services, this means that monitoring is very basic and doesn't integrate with more traditional third-party monitoring solutions or open APIs. But because of Fission's integration with Kubernetes and service meshes like Istio, much of that grunt work has already been done. Fission plugs into Kubernetes monitoring, resulting in a first-class monitoring experience for FaaS.
Fission aggregates function logs using Fluentd; the logs are then stored in a database, providing a lightweight, searchable solution.
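Retrieving those logs is a one-liner with the Fission CLI (the function name is hypothetical, and depending on the Fission version the subcommand is log or logs):

    # Fetch the aggregated logs for a single function
    fission function log --name hello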
Fission is also integrated with Prometheus, the de facto standard metrics system. For every function, Fission automatically tracks request counts (function call count), timing (execution time and overhead), success/failure rates, response sizes, and error codes. These metrics are fed into Prometheus automatically, without adding any code to the functions. Fission also attaches contextual information (such as cold versus warm starts) to the metrics to allow better interpretation.
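Because the metrics land in a standard Prometheus server, they can be queried with ordinary PromQL. A minimal sketch, assuming Prometheus is reachable on localhost:9090 and that your Fission version exports a counter named fission_function_calls_total (verify the exact metric names against your deployment's /metrics output):

    # Per-function request rate over the last five minutes,
    # via the standard Prometheus HTTP query API
    curl -s 'http://localhost:9090/api/v1/query' \
        --data-urlencode 'query=rate(fission_function_calls_total[5m])'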
Balancing Cost and Performance
Ideally, functions that aren't running cost nothing. But we also want every function to respond quickly, even when it's called for the first time. Since the costs of disk and memory vary widely, there are performance-versus-cost tradeoffs to be aware of.
This is part of a larger issue: the actual cost of public cloud FaaS services depends heavily on the usage pattern of the functions. In some cases, running the same code in containers or even VMs can be much cheaper than running it as serverless functions. Costs also vary across clouds. In the pay-per-use models popular with public clouds, there is a real danger of costs getting out of hand, even when compared to containers or VMs.
For instance, how do you keep the cost of idle functions small while keeping latency low for frequently used functions? This is the "cold start performance" problem. All FaaS services experience it, and each solves it with a different approach. Fission's approach is to provide a tunable cost-performance tradeoff, including a pool of pre-warmed environments for functions.
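These knobs are exposed when creating environments and functions. A sketch under stated assumptions (the environment uses Fission's stock Node.js image; the function names and scale values are illustrative):

    # Keep three pre-warmed pods around so first calls avoid a cold start
    fission environment create --name nodejs --image fission/node-env --poolsize 3

    # Alternatively, trade latency for cost: scale to zero when idle,
    # accepting a cold start on the first request after a quiet period
    fission function create --name report --env nodejs --code report.js \
        --executortype newdeploy --minscale 0 --maxscale 5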
By contrast, Kubernetes-based FaaS, regardless of where it runs, has a simpler cost model: you pay per container, per VM, or even per server for capacity. If you can then use spare resources sitting unused on previously purchased hardware, you can change the cost structure completely.
In this scenario, optimizing for cost doesn't mean scaling back resources, limiting performance and increasing latency; instead, it means using what you already have in a smart way, providing new ways to develop code without breaking the bank.
In the next post, we'll discuss serverless in the real world, with practical examples for common types of applications.
To learn more about Serverless, download the Gorilla Guide: Serverless on Kubernetes.