
Cluster Computing with Node.js



A single instance of Node runs in a single thread. To take advantage of multi-core systems, we may want to launch a cluster of Node processes to handle the load!

That is to say, if a system has 8 cores, a single instance of Node would use only one of them. To make the most of the machine, we can put all the cores to work with the wonderful concept of workers, and, more interestingly, the workers can all share the same port!

The cluster module is still in the Stability: 1 - Experimental phase. Check out the code below to enjoy the awesomeness of the cluster!

var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork one worker per CPU core.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  // In case a worker dies!
  cluster.on('exit', function(worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });

  // As workers come up.
  cluster.on('listening', function(worker, address) {
    console.log("A worker with #" + worker.id + " is now connected to " +
      address.address + ":" + address.port);
  });

  // When the master gets a msg from a worker, increment the request count.
  var reqCount = 0;
  Object.keys(cluster.workers).forEach(function(id) {
    cluster.workers[id].on('message', function(msg) {
      if (msg.info && msg.info == 'ReqServMaster') {
        reqCount += 1;
      }
    });
  });

  // Track the number of requests served.
  setInterval(function() {
    console.log("Number of request served = ", reqCount);
  }, 1000);
} else {
  // Workers can share the same port!
  http.Server(function(req, res) {
    res.end("Hello from Cluster!");
    // Notify the master about the request.
    process.send({ info: 'ReqServMaster' });
  }).listen(8000);
}
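The exit handler in the master only logs a worker's death. A common extension, not part of the original snippet, is to fork a replacement so the pool stays at one worker per core. A minimal sketch, where installRespawn is a hypothetical helper name:

```javascript
// Hypothetical helper: whenever a worker exits, log it and fork a
// replacement so the cluster keeps its full complement of workers.
function installRespawn(clusterApi) {
  clusterApi.on('exit', function(worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died; forking a replacement');
    clusterApi.fork();
  });
}
```

In the master branch, calling installRespawn(cluster) instead of the plain logging handler would keep the pool at full strength; be careful to avoid a tight respawn loop if workers crash immediately on startup.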

On a quad-core machine, the output would be as below; for each hit, two workers are responding.

Number of request served =  0
A worker with #2 is now connected to
A worker with #4 is now connected to
A worker with #1 is now connected to
A worker with #3 is now connected to
Number of request served =  0
Number of request served =  2
Number of request served =  4
Number of request served =  6

One can also benchmark this with ApacheBench: ab -n 1000 -c 5 http://127.0.0.1:8000/

Benchmarking (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software:        
Server Hostname:
Server Port:            8000
Document Path:          /
Document Length:        19 bytes
Concurrency Level:      5
Time taken for tests:   0.171 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      94000 bytes
HTML transferred:       19000 bytes
Requests per second:    5841.57 [#/sec] (mean)
Time per request:       0.856 [ms] (mean)
Time per request:       0.171 [ms] (mean, across all concurrent requests)
Transfer rate:          536.24 [Kbytes/sec] received
Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      1
  80%      1
  90%      1
  95%      2
  98%      4
  99%      5
 100%      8 (longest request)
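The derived figures in the report are just arithmetic on the totals: requests per second is completed requests over elapsed time, and the two "Time per request" lines divide that out with and without the concurrency factor. Recomputing them from the totals above (any small differences from the report come from ab using unrounded internal timings):

```javascript
// Recompute ab's derived metrics from the raw totals in the report.
var completeRequests = 1000;
var timeTakenSec = 0.171;
var concurrency = 5;

// Requests per second (mean).
var reqPerSec = completeRequests / timeTakenSec;
// Time per request in ms, as seen by one concurrent client.
var msPerReqMean = concurrency * timeTakenSec * 1000 / completeRequests;
// Time per request in ms, across all concurrent requests.
var msPerReqAll = timeTakenSec * 1000 / completeRequests;

console.log(reqPerSec.toFixed(2) + ' req/sec, ' +
            msPerReqMean.toFixed(3) + ' ms (mean), ' +
            msPerReqAll.toFixed(3) + ' ms (across all)');
```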





Published at DZone with permission of Hemanth HM, DZone MVB. See the original article here.


