The following is adapted from a presentation given by Owen Garrett at nginx.conf 2015, held in San Francisco in September. You can view the video of the talk here.
2015 has been a really, really big year for NGINX. As Gus said in his keynote presentation, the company has grown, our user base has grown, and the number of customers using our software and taking advantage of our services continues to grow.
But what I want to talk to you more about is how the product has changed.
0:30 Massive Pace of Development in 2015
Many of you track us on Mercurial, our source control system. If you’re tracking the check-ins, you can really get a sense of what we’re building into the product. There’s been a phenomenal amount of work done over the last 12 months in the open source product: over 600 production or near-production-ready check-ins, and over 100 new features and updates, including TCP proxying. A year ago, we could proxy and scale web applications. Now, you can use us for just about anything. If it goes over TCP, we’ll scale it.
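To give a flavor of what that looks like, here is a minimal sketch of TCP load balancing with the `stream` module (available since NGINX 1.9.0); the backend addresses and port are placeholders:

```nginx
# Load-balance a generic TCP service across two backends.
# Addresses and ports here are placeholders.
stream {
    upstream backend {
        server 10.0.0.1:3306;
        server 10.0.0.2:3306;
    }

    server {
        listen 3306;         # accept TCP connections on this port
        proxy_pass backend;  # and proxy them to the upstream group
    }
}
```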
HTTP/2 is another feature that I know people have been looking for and anticipating for months. We published an early release six or seven weeks ago, took feedback, iterated, and improved it. Now, just yesterday, we pushed out the 1.9.5 release with HTTP/2. Some great performance improvements are here. We’re continually investing to grow and improve the performance of NGINX, creating features such as thread pools to help with content caching and disk-heavy workloads, as well as socket sharding.
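For reference, the features just mentioned are each enabled with a directive or two. This is only a sketch, with placeholder certificate paths: HTTP/2 support requires NGINX 1.9.5, the `reuseport` socket-sharding option arrived in 1.9.1, and thread pools in 1.7.11.

```nginx
# main context: a named thread pool for offloading blocking disk I/O
thread_pool io_pool threads=32;

http {
    server {
        # HTTP/2 over TLS; reuseport shards the listen socket across workers
        listen 443 ssl http2 reuseport;

        # placeholder certificate paths
        ssl_certificate     /etc/nginx/example.crt;
        ssl_certificate_key /etc/nginx/example.key;

        location /downloads/ {
            # read large files in the thread pool instead of blocking the worker
            aio threads=io_pool;
        }
    }
}
```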
Caching updates, SSL updates, and lots and lots of community-driven updates as well, to help you build, deliver, and scale ever bigger applications.
With NGINX Plus, we’ve continued to build on that. Our vision for NGINX Plus is that if you’re running a cluster of two or more NGINX devices, you’re going to need some sort of load balancer in front. That’s the gap we’re addressing with NGINX Plus: Take the technology that you love and trust, the configuration that you already know, and deploy that in front to help you scale out massively.
With NGINX Plus you get load balancing extensions, high availability improvements, great dashboard and monitoring capabilities, and all of this developed to an incredibly high standard by a small but immensely talented team of engineers, backed up by our community of vendors and independent developers contributing to the core code.
But open source is still our main focus. Over 75% of our engineering resources last year went into features that went directly into NGINX open-source. These are features that you can benefit from right now.
Why do people use NGINX? Gus’s keynote highlighted some of the market share figures, including the number of websites using NGINX. We are now the most commonly deployed platform for the top 10,000 websites, which is a fantastic milestone. But what does that mean once we dive into the data?
It’s not just that people are taking their old infrastructure, tearing it down, and putting NGINX in its place. Yes, a lot of people are doing that, but it also means that innovators, people like you who are building new services and breaking into the top ten thousand or hundred thousand websites, are almost without fail relying on NGINX to deliver your applications.
3:55 Why the Top Sites Choose NGINX
Every year the answer is consistent. You cite the speed, the performance, the fact that it’s lightweight, reliable, stable, secure, configuration is easy, and there’s a great community behind it. What that means for you in production is that you can get uncompromising performance for your services. You can deliver an awesome user experience, and you can do all of this within a sensible budget, using a sensible amount of resources, not having to spend a stupid amount of money on hardware or software solutions to help you keep your market-leading, game-changing service running.
4:58 Building a Great Application…
That, in a nutshell, is what we’re about. You build fantastic services. You use the technology stack and the tools that work best for you. But no matter how great your application is, if you deploy that application live in the real world and it doesn’t deliver the level of performance and user experience that you and your business require, then it hasn’t succeeded.
Building the application is half the battle, delivering the application is the other half. And that’s what we’re here to help with.
Of course, things are changing. Looking forward, you’re using different development techniques, different toolchains, different stacks. There’s pressure and a desire to build and deliver new applications faster and faster, to accelerate the development cycle, to accelerate the deployment cycle. The applications that you will build in the future, and that many of you are already building now, will be dramatically different to the applications that you have built in the past.
No more LAMP stacks, Java stacks. No more monolithic big-bang delivery. The modern web requires a new approach to application delivery. It’s a journey that we’re all taking, and this is a journey that we want to be a part of with you.
6:30 Flawless Application Delivery
NGINX is about flawless application delivery: helping you build and deliver innovative, mission-critical, business-critical services to your users without fault. As technology grows and changes, we need to grow and extend the range of tools, techniques, and technology that we deliver to you as part of NGINX.
To enable flawless application delivery in the modern world with rich, fluid, changing, turbulent stacks, you need control. You need to be able to program how that application is delivered. You want to be able to customize the platform on which that application is delivered to end-users.
You need extensibility. There are features that are challenging to implement in an application that you can more easily implement in the layer that delivers it, in NGINX. So, you want to be able to extend the capability of NGINX.
You also want to have authority. You want to know that your application is operating correctly. You want to be confident that it’s meeting its performance needs, and you want to be confident that NGINX is correctly configured to give your users the optimum experience.
I’d like to talk about these three themes as part of our roadmap for the year ahead. And I want to begin by inviting Igor, the original author of NGINX, who is going to come up and tell us a little bit about some of the work he’s been doing and what we’re going to announce around programmability for NGINX.
Igor: Since I started developing NGINX, I always thought that NGINX should have some capability to program applications inside it. My first attempt to make this happen was in 2005 when I tried to embed Perl inside NGINX.
Igor: nginScript allows people to work with request details. They can define variables that can be used across all NGINX configurations, and they can also write content handlers that can send responses to the client.
Owen: Right now, if somebody wanted to get started with this technology, which we’ve called nginScript, what could they do?
Igor: Today, we have released this preliminary version of nginScript, and you can find it in our repository.
Owen: So at NGINX.com/nginScript you can check out our blog post about nginScript, see a code sample, and check it out from the repo.
Igor will be speaking tomorrow in a session, “Configuring NGINX.” Make sure to catch that and learn a little bit more about the technology that sits behind this and the future roadmap. Igor, thank you very much.
nginScript is the first technology we’re announcing today, and the first pillar in a new wave of configuring and extending NGINX to make it a richer and more powerful platform for you. It addresses the challenge of programming your infrastructure to adapt and respond to traffic as you deliver it to your users.
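To give a flavor of the capabilities Igor described, here is a sketch using the njs module syntax that grew out of this preview (the original preview release embedded the JavaScript inline in the configuration, so the exact directives differed); the file, variable, and function names here are illustrative.

```nginx
http {
    js_import http.js;   # illustrative filename

    # a JavaScript-defined variable usable across the configuration
    js_set $request_summary http.summarize;

    server {
        listen 8000;

        location /hello {
            # a content handler written in JavaScript
            js_content http.hello;
        }
    }
}
```

And the corresponding script file:

```javascript
// http.js -- illustrative handlers

// value of $request_summary wherever the variable is evaluated
function summarize(r) {
    return r.method + ' ' + r.uri;
}

// content handler: sends a response directly to the client
function hello(r) {
    r.return(200, 'Hello from nginScript\n');
}

export default { summarize, hello };
```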
13:30 Announcing Dynamic Modules
The next technology that we’re talking about this week at nginx.conf is about extensibility and dynamic modules. There is a broad community around creating extensions and modifications to extend the capability of NGINX. Igor mentioned Lua, that’s a fantastic example.
But to date, there’s been a challenge with building those extensions: they need to be built as modules and compiled into NGINX at build time. That makes it difficult for you to distribute that additional functionality to your end users, difficult for operating system vendors to package that functionality and ship it, and difficult for us to include it in the certified, supported binaries that we provide to many of our users.
That’s why we’ve taken the step of refactoring that interface and enabling dynamic modules. This gives the ability to create extensions for NGINX written in C, build those outside the NGINX core, using the same API that we have now, but then compile those externally and load them in at runtime so that you get a much richer development environment. You get a much better way to distribute those changes, those updates to your customer base, your community, and your users.
Our goal with this capability is to make it as easy as possible to take an existing NGINX module and make it dynamic. Our goal, and we are already there for the large majority of modules, is for you to be able to take a module and compile it as a dynamic module with a slight change to the build specification but no code changes. And to help you port and adopt this interface and become part of the broader NGINX community, we’ve established a developer relations team.
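In practice, the workflow looks roughly like this. The module name and paths are placeholders, and the `--add-dynamic-module` configure flag and `load_module` directive shipped with this work in NGINX 1.9.11:

```shell
# Build a third-party module outside the NGINX core as a dynamic module
./configure --add-dynamic-module=/path/to/ngx_http_example_module
make modules

# Install the resulting shared object alongside NGINX
cp objs/ngx_http_example_module.so /etc/nginx/modules/

# Then load it at runtime, at the top of nginx.conf (main context):
#   load_module modules/ngx_http_example_module.so;
```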
My colleague Ruslan, one of our lead developers, is speaking tomorrow on exactly this topic. He has been working with Maxim and the rest of the team to implement this capability.
Finally, for flawless application delivery, you need to know what’s going on. You want to be sure that you have configured and are using NGINX to its best capabilities.
So for that, I’d like to invite Andrew to the stage. Andrew is one of NGINX’s founders, and he’s been working on a project over the last few months to help you get the most out of your NGINX development.
Owen: Andrew, I know you’ve been working with our community to understand some of the challenges that they face. There are about 140 million websites running NGINX. What sort of challenges do people face as they try to deploy and configure NGINX?
Andrew: First of all, there is this problem of visibility and awareness; “What’s up with my NGINX instances? Are they down? Are they overloaded? Was there a configuration slip-up?” Basically, “What’s going on?”
Certainly, there are many great APM and general monitoring tools out there, but NGINX is so critical to the application infrastructure and it’s also quite specialized, so we thought it’s really important to be able to drill down into what NGINX is doing specifically.
And then, there is this problem of configuration and security as well. It’s pretty easy to start with NGINX, but it’s also pretty easy to make configuration errors. Security issues happen with all sorts of open-source software as well, so it’s important to have a holistic view of what’s going on with the NGINX infrastructure.
Owen: Right. So, we’ve decided to build a management, visualization, and monitoring platform, which we’re calling NGINX Amplify.
18:00 NGINX Amplify
Andrew: Yes, it’s called NGINX Amplify and it’s basically a SaaS tool. There is a Python-based agent which runs on each monitored host. The agent collects a set of metrics for the operating system and NGINX and reports them securely to the SaaS platform, where you can quickly drill down and see what’s going on with practically everything: Are the hosts up or down? What is the traffic pattern? How many requests per second are coming in? What’s happening with NGINX? And some other things as well.
Owen: Cool. So these are some screenshots of the early build. We have some capability to monitor and track what NGINX is doing, but you also talked about configuration and some of the issues people have with that.
Andrew: Right, exactly. So, NGINX configuration can be really black magic. There are a bunch of issues that you can face, so NGINX Amplify includes an analysis tool which can track your configuration on the fly, analyze it, and give a report, suggesting better ways of solving common problems with NGINX configuration.
Looking forward, we are planning to introduce dynamic analysis capabilities, so that we can match your reported metrics against your configuration on the fly and suggest better ways of tuning NGINX to your traffic patterns (or vice versa), conserving memory if you’re allocating more than you need, and other optimizations.
Owen: Okay. I know this is at an early stage and it’s a work in progress, but if someone wanted to participate in learning about this, trying it out and giving us feedback, what could they do?
Andrew: We’re currently finalizing the initial build. You can also sign up for a private beta. We’re hoping for great feedback from our users so we can make it better.
Owen: A lot is going on within the team at NGINX, but we would be remiss not to mention the work that our partners and community contributors are doing.
Trustwave has been working with the ModSecurity project on a native port of ModSecurity for NGINX, which they’ve promised me has some fantastic performance characteristics. Adobe and Kong have both built API gateways on top of NGINX. And we’re really pleased to announce that Google has joined the NGINX cloud family: we’re now providing NGINX Plus as a subscription service on Google Cloud Platform, as well as on Amazon and Azure.
And there is plenty more from our hardware partners, our community members, and many others. I hope you mingle, learn how people are using NGINX, and take that away to use in your own businesses, making the most of the technology that we provide.