As an operations guy I don’t really keep up with the latest and greatest in software development. I often hear developers discussing new languages or libraries, but I rarely care enough to dig in and figure out why they matter to me. Node.js has been no exception: I’ve seen plenty of tweets, blog posts, and articles that mention it, but until now I haven’t had the need or the desire to find out why it’s important.
There are three main reasons why I’ve chosen to learn about Node.js right now:
- Node.js is now showing up seemingly everywhere (everywhere I spend my digital time, anyway), so I feel compelled to really dive in and learn about this rapidly advancing platform.
- Node.js adoption is growing at a rapid pace and operations staff need to know what is coming so they can be prepared for it.
- My company, AppDynamics, just acquired Nodetime, a Node.js monitoring software company. You can read about it here.
So here is what I have found out about Node.js:
In my search for answers I came across this great post on Mozilla.org.
Realization #2: Node.js is used to build server side applications.
This information brings up an important question: isn’t a single-threaded application server a bad thing if you want high scalability? I had to do more research.
Maybe the rest of the definition explains how Node.js overcomes its single-threaded nature: an “event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.”
First I need to understand what “event-driven” means. The definition itself is quite easy to understand. There is a lot of information about event-driven programming available on the web, but I liked this description from a Princeton University page the best:
“In computer programming, event-driven programming or event-based programming is a programming paradigm in which the flow of the program is determined by events—i.e., sensor outputs or user actions (mouse clicks, key presses) or messages from other programs or threads.
Event-driven programming can also be defined as an application architecture technique in which the application has a main loop which is clearly divided down to two sections: the first is event selection (or event detection), and the second is event handling. In embedded systems the same may be achieved using interrupts instead of a constantly running main loop; in that case the former portion of the architecture resides completely in hardware.
Event-driven programs can be written in any language, although the task is easier in languages that provide high-level abstractions, such as closures. Some integrated development environments provide code generation assistants that automate the most repetitive tasks required for event handling.”
Realization #3: The Node.js platform provides abstractions that make it easier to create event-driven applications.
This realization raised a question in my mind: Is it better, faster, or more scalable to use event-driven programming instead of multi-threaded programming? As it turns out, trying to answer this question is like falling down a never-ending rabbit hole. There are a ton of arguments on both sides and nothing that swayed me in either direction. It seems like one of those choices where you use what works better for your circumstance. Sorry, but you’ll have to figure out whether multi-threaded or event-driven is better for you on your own.
Moving on to the last part of the Node.js definition: “lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.”
I’m going to have to trust that it is lightweight and efficient, since I am not a developer, but the last part of the definition (distributed devices) has me asking another question: how do you scale a Node.js application? “Distributed” implies horizontal scaling, and as we discovered earlier Node.js is single threaded, so one process can only use one CPU at any given time. How do you scale Node.js in the real world when you have a massive spike in workload? I found my answer in this excellent blog post. I’ll summarize here: Node.js has built-in functionality you can use to have multiple processes listen on the same port. Since each process can use a single CPU, you can scale out across a multi-CPU server by running multiple processes. You can also scale across multiple hosts by putting a reverse-proxying, load-balancing web server in front of them. This is more traditional clustering, so it should be familiar if you have experience running distributed applications. Another important part of scaling Node.js is using Node only to serve dynamic content, while offloading static content to either a web server like nginx or a CDN.
Realization #4: Node.js scales on a per-processor basis as well as across servers.
There seem to be many success stories out there for Node.js, but as with any technology it must be applied properly. Just like every other programming framework, there will be efficient code and poorly performing code. Anyone using Node.js will eventually need to troubleshoot their application(s) for performance and scalability issues, and that is where Nodetime comes into play. If you’re currently using or planning to use Node.js for an application, you should either read this blog on monitoring Node.js or just jump right over to the Nodetime website and start your free trial today.