
Storing Created Files With Node.js

Writing to cloud storage has never been easier. With options ranging from uploading a file on disk to piping streams, there is a solution for virtually every use case.

· Web Dev Zone


You have the following requirement: You need to create a file in your web application (say, creating a PDF) and store said file to be accessed later by your web application.

For storage, cloud services have come into their own as of late. Take, for example, two offerings from a pair of the world's leading cloud providers: Amazon's S3 Storage service and Microsoft Azure's Storage service. By offloading storage and management to a remote, cloud-based service, we are able to (affordably) work with the files we need without burdening our own application/storage servers.

And given current network throughput and available bandwidth, accessing these services via HTTP has made the whole solution a simple, workable one.

So you say to yourself, "Great! I have this super-modern Node.js application that can create my files. Time to save me some blobs and documents in the cloud!"

Not so fast, bucko.

(Image courtesy of frinkiac.com)

Let's have a look at a few ways to handle this scenario, and see if we can't find an ideal way to do it. "Look before you leap," as they say.

Option 1: The Easy Way

One might argue that the quickest (or, at least, the most obvious) way to create and upload your file is the following:

  1. Create the file and save it to local disk temporarily.

  2. Use the desired cloud storage API and point it at the file on disk.

  3. Send.

  4. Delete/clean up any temporary files stored.

Sure, this looks easy, but any dramatic increase in traffic will leave your server I/O-bound, since every request requires several disk operations to save, read, and ultimately delete the temporary file. Not such a hot option.

How can we improve upon this?

Option 2: This Disk-ussion Is Over!

What about cutting out the disk ops altogether? What if we employed a Node.js specialty: in-memory streams?

The process would look something like this:

  1. Create a memory stream.

  2. Create the file and .pipe() it to the memory stream above.

  3. Hand the memory stream to your cloud API, which reads it to create a new, API-specific stream.

  4. Finish writing to the stream above.

This is definitely an improvement, but the intermediate stream feels unnecessary and, at worst, means buffering the entire file in your server's RAM.

Is there a better way still?

Option 3: API Streams

This option is proof that the cloud providers are listening to the needs of developers. For example, both AWS and Azure allow you to pipe directly to an API-specific stream. So now your process goes something like this:

  1. Create an API-specific stream.

  2. Create the file and .pipe() it to the stream from above.

  3. You're done.

No intermediate streams or unnecessary steps! The code itself is brief and to the point.



Happy writing!


