
3 Common Node.js Design Patterns That Are Misused




Today we’re excited to start our new three-part series on the right way to code common Node.js design patterns, from Node.js contributor and AppNeta engineer Stephen Belanger. With six years of experience, Stephen has helped the Node.js community thrive and has also contributed robust application performance monitoring through his work on the AppNeta instrumentation for Node.js.

From event emitters and streams (covered below) to constructors (part 2) and promises (part 3), patterns are central to Node.js development. While traditional design patterns exist in the JavaScript world, many have been retrofitted and updated to take advantage of the asynchronous nature of Node.js. There are many ways to use the most common design patterns, so outlined below are common uses alongside the common mistakes that even seasoned developers make when starting out in Node.js. To illustrate these patterns, examples have been included in the explanations, and links have been added to the relevant docs on the nodejs.org site.

Common Patterns

With so many new developers joining the Node.js community, I’ve seen a lot of misunderstanding around some of the common design patterns that stem from similar patterns in other languages. Node.js can be tricky, but it can also be fast if done correctly. Below I’m outlining the first three patterns in a series of posts where we’ll tackle how you should be using Node.js.


Callbacks

Callbacks are possibly the most central pattern of Node.js development. Most APIs in Node.js core are based on callbacks, so it is important to understand how they work.

A callback is an anonymous function given to another function with the intent of calling it later. It looks something like this:

keyValueStore.get('my-data', function (err, data) {
  // handle the error, if any, then use the data
})

This is a particular variety of callback Node.js users often refer to as an errback. An errback always has an error parameter first and subsequent parameters can be used to pass whatever data the interface is expected to return.
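
To make this concrete, here’s a minimal sketch of a function that accepts an errback. The `readJsonFile` helper is made up for illustration; it isn’t part of Node.js core:

var fs = require('fs')

// A hypothetical helper following the errback convention:
// the error comes first, the result second.
function readJsonFile (path, callback) {
  fs.readFile(path, 'utf8', function (err, contents) {
    if (err) return callback(err)
    var data
    try {
      data = JSON.parse(contents)
    } catch (parseErr) {
      return callback(parseErr)
    }
    callback(null, data)
  })
}

readJsonFile('config.json', function (err, config) {
  if (err) return console.error('failed to load config', err)
  console.log('loaded config', config)
})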

Callback-accepting functions in Node.js almost always expect errbacks; one notable exception is fs.exists, which looks like this:

var fs = require('fs')

fs.exists('/etc/passwd', function (exists) {
  console.log(exists ? "it's there" : 'no passwd!')
})

An important thing to understand about callbacks is that in Node.js they are usually, but not always, asynchronous.

Calling `fs.readFile` is async:

fs.readFile('something', function (err, data) {
  console.log('this is reached second')
})

console.log('this is reached first')

But calling `list.forEach` is sync:

var list = [1]

list.forEach(function (v) {
  console.log('this is reached first')
})

console.log('this is reached second')

Confusing, right? This trips up most people new to Node.js. The way to understand it is that any code which interacts with data outside of process memory should be async; disks and networks are slow, so we don’t want to wait for them.
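
One practical takeaway: if you write your own callback-accepting function, keep it consistently asynchronous. As a sketch, reusing the hypothetical `keyValueStore` from earlier and a made-up in-memory cache, you can defer a synchronously available result with `setImmediate` so callers always observe the same ordering:

var cache = {}

function getValue (key, callback) {
  if (cache[key]) {
    // Defer the cached case too, so the callback is always async.
    return setImmediate(function () {
      callback(null, cache[key])
    })
  }
  keyValueStore.get(key, function (err, data) {
    if (err) return callback(err)
    cache[key] = data
    callback(null, data)
  })
}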

Event Emitters

An event emitter is a special construct designed to allow an interface to designate many callbacks for many different behaviors that may occur once, many times, or even never. In Node.js, event emitters work by exposing a common API on an object, providing several functions for registering and triggering these callbacks. This interface is often attached via inheritance.

Consider this example:

var EventEmitter = require('events').EventEmitter

var emitter = new EventEmitter()

emitter.on('triggered', function () {
  console.log('The event was triggered')
})

emitter.emit('triggered')

Event emitters are themselves synchronous, but the things they are attached to sometimes are not, which is another source of confusion regarding asynchrony in Node.js. The path from calling `emitter.emit(…)` to the callback given to `emitter.on(…)` being triggered is synchronous, but the emitter object can be passed around and `emit` may be called at some point in the future.
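
As a sketch of the inheritance approach, with a made-up `Job` type for illustration, you can attach the emitter interface using `util.inherits` and see that `emit` fires its listeners synchronously:

var EventEmitter = require('events').EventEmitter
var util = require('util')

// A made-up Job type that inherits the emitter interface.
function Job () {
  EventEmitter.call(this)
}
util.inherits(Job, EventEmitter)

var job = new Job()

job.on('done', function () {
  console.log('this is reached first')
})

// emit is synchronous: the listener above runs before emit returns.
job.emit('done')
console.log('this is reached second')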


Streams

A stream is a special variety of event emitter designed specifically for consuming a sequence of data events without having to buffer the entire sequence in memory. This is particularly useful in cases where a sequence is infinite.

A common use for streams is to read files. Loading a large file into memory all at once does not scale well, so you can use streams to allow you to operate on chunks of the file. To read a file as a stream, you’d do something like this:

var fs = require('fs')

var file = fs.createReadStream('something')

file.on('error', function (err) {
  console.error('an error occurred', err)
})

file.on('data', function (data) {
  console.log('I got some data: ' + data)
})

file.on('end', function () {
  console.log('no more data')
})
The `data` event is actually part of the “flowing” or “push” mode introduced in the streams 1 API. It pushes data through the pipe as fast as it can. Often what one really needs, to scale well, is a pull stream, which can be done using the `readable` event and the `read()` function.

file.on('readable', function () {
  var chunk
  while ((chunk = file.read()) !== null) {
    console.log('I got some data: ' + chunk)
  }
})
Note that you need to check if the chunk is null, as streams are null terminated.

Streams include extra functions beyond what event emitters provide, and they come in a few varieties: Readable, Writable, Duplex, and Transform streams cover the various forms of data access.
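
As a minimal sketch of a Transform stream, using the simplified constructor available in Node.js v4 and later, here’s one that simply upper-cases whatever text passes through it:

var Transform = require('stream').Transform

var upperCaseStream = new Transform({
  transform: function (chunk, encoding, callback) {
    // Pass the transformed chunk downstream and signal completion.
    callback(null, chunk.toString().toUpperCase())
  }
})

process.stdin.pipe(upperCaseStream).pipe(process.stdout)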

If you want to read from a file stream, but also want to write the output to a write stream, all that event machinery might be a bit excessive. There’s a handy function on streams called `pipe` that automatically propagates the appropriate events from the source stream to the target stream. It’s also chainable, which is great for using Transform streams to interpret data protocols.

A theoretical use of a stream to parse a JSON array coming over the network might look something like this:

socket.pipe(jsonParseStream()).pipe(eachObject(function (item) {
  console.log('got an item', item)
}))

The `jsonParseStream` would emit each element of the parsed array, as it gets to it, as an object mode stream. The `eachObject` stream would then be able to receive each of those object mode events and do some operation on them, in this case logging them.
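
Neither `jsonParseStream` nor `eachObject` exists in Node.js core; as a sketch of how one might build the latter, `eachObject` could be an object-mode Writable stream (again using the simplified constructor from Node.js v4+):

var Writable = require('stream').Writable

// A sketch of the hypothetical eachObject helper: an object-mode
// Writable stream that invokes fn for every item written to it.
function eachObject (fn) {
  return new Writable({
    objectMode: true,
    write: function (item, encoding, callback) {
      fn(item)
      callback()
    }
  })
}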

It’s important to understand that error events are not propagated across piped streams. For safety, you would therefore need to write the previous example more like this:

socket
  .on('error', function (err) {
    console.error('a socket error occurred', err)
  })
  .pipe(jsonParseStream())
  .on('error', function (err) {
    console.error('a json parsing error occurred', err)
  })
  .pipe(eachObject(function (item) {
    console.log('got an item', item)
  }))
  .on('error', function (err) {
    console.error('an iteration error occurred', err)
  })

What's Next?

Tune in next week where I’ll continue this series with 3 more common patterns and how not to misuse them!
