
Uploading and Downloading Files: Buffering in Node.js

In this post, we'll learn how to perform file uploads and downloads using buffered binds and fetches.


Of the various ways to transfer files to and from Oracle Database, buffering in Node.js is the easiest to implement. In this post, you'll learn how to perform file uploads and downloads using buffered binds and fetches.

Please note: This post is part of a series on uploading and downloading files with Node.js and Oracle Database. See the first post in the series for details on the project and links to the other parts. Get the code here.

Overview

Buffering means that a file's contents are fully materialized (buffered) in Node.js before being transferred to either the database or the client. This is in contrast to streaming, where the contents are passed along in smaller chunks as they become available. Streaming is more efficient with respect to memory utilization, but it is also more complex to implement.

When buffering in Node.js, it's essential to consider the amount of memory available to the Node.js process, the number of concurrent file uploads and downloads, and the size of the files being transferred. For example, if your Node.js process has 1 GB of memory and you have at most 10 concurrent transfers, and the max file size is 50 MB, then buffering may be ideal. But if you double the concurrent transfers or the file sizes, then you may run into problems. Also, the underlying Oracle Call Interface (OCI) libraries limit buffers to 1 GB, so you'll need to use streaming APIs for files larger than that.
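As a quick sanity check, the example numbers above can be worked out directly. This is just worst-case arithmetic, not a real capacity plan:

```javascript
// A back-of-the-envelope check of the scenario above. The figures (1 GB of
// memory, 10 concurrent transfers, 50 MB max file size) come from the
// example in the text.
const availableMemory = 1024 * 1024 * 1024;  // 1 GB for the Node.js process
const maxConcurrentTransfers = 10;
const maxFileSize = 1024 * 1024 * 50;        // 50 MB per file

// Worst case: every transfer buffers a max-size file at the same time.
const worstCaseUsage = maxConcurrentTransfers * maxFileSize; // 500 MB

console.log(worstCaseUsage <= availableMemory); // true: 500 MB fits in 1 GB
```

Doubling either the concurrency or the file size pushes the worst case to 1 GB, which is exactly where the "you may run into problems" warning kicks in.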

Uploading Files

Controller Logic

In a typical Node.js web server, the request object passed to the request listener is an instance of the http.IncomingMessage class. This class implements the Readable Stream interface, which allows you to get incoming data via the data event.

To buffer the data in Node.js, all you need to do is push each chunk into an array. Once the data is done streaming in, it can be concatenated and passed to the database logic to be inserted. Here's what the controller logic that handles an HTTP POST looks like using buffering.

async function post(req, res, next) {
  try {
    const maxFileSize = 1024 * 1024 * 50; // 50 MB
    let contentBuffer = [];
    let totalBytesInBuffer = 0;
    let contentType = req.headers['content-type'] || 'application/octet-stream';
    let fileName = req.headers['x-file-name'];

    if (!fileName) {
      res.status(400).json({error: `The file name must be passed via the x-file-name header`});
      return;
    }

    req.on('data', chunk => {
      contentBuffer.push(chunk);
      totalBytesInBuffer += chunk.length;

      // Look to see if the file size is too large.
      if (totalBytesInBuffer > maxFileSize) {
        req.pause();

        res.header('Connection', 'close');
        res.status(413).json({error: `The file size exceeded the limit of ${maxFileSize} bytes`});

        req.connection.destroy();
      }
    });

    // Could happen if the client cancels the upload.
    req.on('aborted', function() {
      // Nothing to do with buffering, garbage collection will clean everything up.
    });

    req.on('end', async function() {
      contentBuffer = Buffer.concat(contentBuffer, totalBytesInBuffer);

      try {
        const fileId = await files.create(fileName, contentType, contentBuffer);

        res.status(201).json({fileId: fileId});
      } catch (err) {
        console.error(err);

        res.header('Connection', 'close');
        res.status(500).json({error: 'Oops, something broke!'});

        req.connection.destroy();
      }
    });
  } catch (err) {
    next(err);
  }
}

Overview:

  • Several variables are declared at the top of the function, including an array named contentBuffer that will hold the data as it streams into Node.js. Note that the file name and content type are sent from the client as HTTP headers, since the body of the request is reserved for the file content.
  • Each time the data event fires, the chunk of raw data passed in is pushed onto the contentBuffer array, and the running total of buffered bytes is checked against the cap set via maxFileSize. If the cap is exceeded, the stream is paused, an error is returned to the client, and the connection is destroyed.
  • A no-op listener is added to the aborted event, which fires if the client aborts the request (say, by refreshing the browser in the middle of an upload). When buffering, nothing needs to be done here since the data was never passed to the database APIs; garbage collection will eventually free the memory used.
  • When the end event fires, the array of chunks is converted into a single Buffer and passed along to the database API to be inserted. If the insert succeeds, the id of the file is returned to the client; otherwise a generic error is returned.

Database Logic

Once the controller logic has passed along the buffered file content and related metadata, the database logic can execute the SQL to insert it. This is where the driver's ability to accept buffers as bind variables comes in very handy.

const createSql =
 `insert into jsao_files (
    file_name,
    content_type,
    blob_data
  ) values (
    :file_name,
    :content_type,
    :content_buffer
  ) returning id into :id`;

async function create(fileName, contentType, contentBuffer) {
  const binds = {
    file_name: fileName,
    content_type: contentType,
    content_buffer: contentBuffer,
    id: {
      type: oracledb.NUMBER,
      dir: oracledb.BIND_OUT
    }
  };

  const result = await database.simpleExecute(createSql, binds);

  return result.outBinds.id[0];
}

Overview:

  • A SQL insert statement is declared. The bind variable named :content_buffer is mapped to the blob_data column of the table used to store files.
  • Within the create function, a variable named binds is declared for the bind definitions. Note that contentBuffer is a simple "in" bind, just like fileName and contentType; the driver does all the heavy lifting. Finally, the statement is executed and the id of the new row is returned.
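The simpleExecute helper used above isn't shown in this post. Here is a hedged sketch of what it likely does: check out a connection, execute the statement, and release the connection. The driver object is injected as a parameter purely so the flow can be demonstrated (and tested) without a live database; the real helper presumably uses the node-oracledb module directly, and the fake driver below is invented for illustration:

```javascript
// Sketch of a simpleExecute-style helper: open, execute, always close.
async function simpleExecute(driver, sql, binds = {}, opts = {}) {
  opts.autoCommit = true; // commit DML (such as the insert above) immediately

  let conn;

  try {
    conn = await driver.getConnection(); // checkout from the connection pool
    return await conn.execute(sql, binds, opts);
  } finally {
    if (conn) {
      await conn.close(); // release the connection back to the pool
    }
  }
}

// A minimal fake driver standing in for node-oracledb.
const fakeDriver = {
  getConnection: async () => ({
    execute: async (sql, binds, opts) => ({ outBinds: { id: [42] } }),
    close: async () => {}
  })
};

simpleExecute(fakeDriver, 'insert ...', {}, {})
  .then(result => console.log(result.outBinds.id[0])); // 42
```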

Downloading Files

Controller Logic

When downloading a file from the database, the controller logic requests the entire file contents from a database API. Once the file data is obtained, it's sent to the client in a single shot.

async function get(req, res, next) {
  try {
    const id = parseInt(req.params.id, 10);

    if (isNaN(id)) {
      res.status(400).json({error: `Missing or invalid file id`});
      return;
    }

    const rows = await files.get(id);

    if (rows.length === 1) {
      res.status(200);

      res.set({
        'Cache-Control': 'no-cache',
        'Content-Type': rows[0].content_type,
        'Content-Length': rows[0].file_length,
        'Content-Disposition': 'attachment; filename=' + rows[0].file_name
      });

      res.send(rows[0].blob_data);
    } else {
      res.status(404).end();
    }
  } catch (err) {
    next(err);
  }
}

Overview:

  • The id of the file to be downloaded is parsed from the request parameters, validated, and passed along to the get method of the database API to fetch the entire file.
  • If the database API returns a row, the HTTP response status is set to 200 OK and headers (including metadata related to the file) are added to the response. Smart clients, such as browsers, know how to use those headers when downloading files. Finally, the file contents are sent to the client by passing the buffer to the send method of the response object. If no row is found, a 404 is returned instead.
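One caveat worth noting: embedding a raw file name in the Content-Disposition header breaks for names containing spaces or quotes. A slightly more defensive construction (this helper is an addition, not part of the original article):

```javascript
// Build a Content-Disposition value using RFC 6266's quoted-string form,
// escaping any embedded double quotes in the file name.
function contentDisposition(fileName) {
  const escaped = fileName.replace(/"/g, '\\"');
  return `attachment; filename="${escaped}"`;
}

console.log(contentDisposition('my report.pdf')); // attachment; filename="my report.pdf"
```

File names outside ASCII need the RFC 6231-style filename* parameter as well, which is beyond the scope of this sketch.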

Database Logic

When it comes to fetching the file content back out of the database, the fetchInfo object can be used to fetch the data as a fully materialized buffer.

const getSql =
 `select file_name "file_name",
    dbms_lob.getlength(blob_data) "file_length",
    content_type "content_type",
    blob_data "blob_data"
  from jsao_files
  where id = :id`;

async function get(id) {
  const binds = {
    id: id
  };
  const opts = {
    fetchInfo: {
      blob_data: {
        type: oracledb.BUFFER
      }
    }
  };

  const result = await database.simpleExecute(getSql, binds, opts);

  return result.rows;
}

Overview:

  • A SQL statement to fetch the file data is declared. dbms_lob.getlength is used to return the file's size for the Content-Length header.
  • Within the get function, variables are declared for the bind definitions and the execute options. In the execute options, fetchInfo overrides the default return type of the BLOB column (a Lob/readable stream) so that it comes back as a Node.js Buffer. Finally, the query is executed and the resulting rows are passed back to the controller logic.
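As an aside, node-oracledb also offers a global alternative to per-query fetchInfo: setting fetchAsBuffer once at startup (available since driver version 1.13) makes every BLOB column come back as a Buffer automatically. A minimal sketch:

```javascript
const oracledb = require('oracledb');

// With this set once at startup, BLOB columns are fetched as Buffers
// everywhere, and individual queries no longer need a fetchInfo override.
oracledb.fetchAsBuffer = [oracledb.BLOB];
```

The same 1 GB buffer limit applies either way, so this is a convenience, not a workaround for large files.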

I hope you agree that the driver's buffer APIs make inserting and fetching file contents very simple. All it takes is a few lines of code to add file upload and download capabilities to your application. The only drawbacks are related to file size limits and scalability. In the next part, I'll show you how to use streaming APIs to address both issues.


Published at DZone with permission of

