
How to Bring Swagger and Node.js Together

Devs love writing code, but not necessarily docs. We take a look at how to integrate Swagger and Node to help you programmatically automate the doc writing process.

By Valentin Zlydnev · May 02, 2018 · Tutorial

What is any developer's favorite activity? Coding, of course. But a developer who does nothing except write code is in a pretty sad situation: to achieve good results, software engineers need to communicate with each other, and in the software development process, documentation is one of the best ways to support that communication.

One of the best examples of documentation improving the lives of frontend developers, backend developers, and testers comes from Swagger. The platform's features include descriptions of input and output models, request parameters, the ability to switch environments, and model inheritance and extension. However, as Benjen Stark once said, "You know, my brother once told me that nothing someone says before the word 'BUT' really counts." In this context, our "BUT" is the combination of Swagger and Node.js. Swagger and Node.js exist as separate entities, and, in our non-magic-enabled world, combining them isn't as smooth a process as developers would like.

There are several different ways to approach the integration of Swagger and Node.js.

Using the Swagger-Node Module

The first method is to use the swagger-node module. This module is really easy to work with:

$> npm install -g swagger
$> swagger project create hello-world
$> swagger project edit
# ... edit controllers, add descriptions ...
$> swagger project start

However, this method has one significant disadvantage. In real-life, non-experimental situations, the information required to generate the documentation (input and output parameters, filters, and so on) has to be described right next to the controller, making its code heavy, hard to read, and difficult to understand. Accordingly, this isn't the best method for a serious project.

Using Swagger UI

The second method, which is better suited to large-scale projects, is Swagger UI. It can be used as a module of an existing application or launched in a separate Docker container:

$> npm install swagger-ui-dist express
// app.js
const express = require('express')
// swagger-ui-dist ships the prebuilt Swagger UI bundle and exposes its location on disk
const pathToSwaggerUi = require('swagger-ui-dist').absolutePath()
const app = express()
// Serve the Swagger UI static assets at the application root
app.use(express.static(pathToSwaggerUi))
app.listen(3000)
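
For the UI to actually display your API, it also needs to be able to fetch the specification file. A minimal sketch, assuming the spec is a file called swagger.yaml sitting next to app.js (both the name and the location are assumptions for illustration):

// app.js (continued) – also expose the spec so the Swagger UI page
// can be pointed at it (e.g. via its explore bar)
const path = require('path')
app.get('/swagger.yaml', (req, res) => {
  res.sendFile(path.join(__dirname, 'swagger.yaml'))
})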

Creating the Docker Container

To create the Docker container, you can use a ready-made image, launch the container, and point it at the documentation file. However, I recommend a different approach, one that allows finer-grained tuning (a minimal Dockerfile sketch follows the list):

  1. Wrap Swagger UI and NGINX in a Docker container.
  2. Indicate the endpoint of the app so the UI can make proper requests.
  3. Add a link to the documentation file.
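
A minimal sketch of such a container, assuming the Swagger UI assets come from a local swagger-ui-dist install and the spec is called swagger.yaml (the file names, paths, and the nginx.conf are illustrative assumptions, not a prescribed layout):

# Dockerfile (sketch)
FROM nginx:alpine

# Prebuilt Swagger UI assets (e.g. copied from node_modules/swagger-ui-dist)
COPY node_modules/swagger-ui-dist /usr/share/nginx/html

# The generated documentation file the UI will load
COPY docs/swagger.yaml /usr/share/nginx/html/swagger.yaml

# Custom NGINX config: serve the UI and proxy /api to the application endpoint
COPY nginx.conf /etc/nginx/conf.d/default.conf

EXPOSE 80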

At this point, it becomes clear that you need a single, unified configuration file. However, working with one giant file that contains the entire project's documentation is a highly uncomfortable experience. The only way to handle any project bigger than "Hello, World!" is to divide everything into modules.

Dividing Documentation Into Modules

There are two ways to divide documentation into modules: you can use third-party libraries, or you can solve the problem on your own. The most popular library makes it possible to use the internal reference mechanism:

$ref: '../path/to/some/file'

This is obviously convenient. However, this approach limits you: references to model descriptions only work within the same file.
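
For context, such a reference usually sits inside a path or schema description, along the lines of this hypothetical fragment (the file layout and model name are made up for illustration):

paths:
  /api/user/:
    get:
      responses:
        200:
          description: Current user
          schema:
            $ref: './models/user.yaml#/UserModel'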

You may think that a home-grown solution will be less sophisticated, but let's not jump to conclusions. Swagger documentation consists of several big sections, described in YAML (or JSON) format:

paths:
    /api/login/:
    /api/user/:
    /api/some-model/:
definitions:
    LoginResponseModel:
    LoginPayloadModel:
    UserModel:

Collecting and Generating Documentation

After looking at this kind of representation, you may conclude that one of the most convenient ways to collect documentation from separate files is simple concatenation: each file describes its own paths and models, and the generator merges them into one document.
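
For example, a per-module file might look like the hypothetical login.yaml below, which contributes its own paths and definitions keys to the merged document:

# docs/login.yaml (hypothetical module file)
paths:
  /api/login/:
    post:
      summary: Log a user in
      responses:
        200:
          description: Successful login
          schema:
            $ref: '#/definitions/LoginResponseModel'
definitions:
  LoginResponseModel:
    type: object
    properties:
      token:
        type: string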

The documentation generator itself can also be divided into several logical parts. The first one is preparatory: require the necessary libraries and define a "header" for the Swagger documentation.

const _ = require('lodash');
const fs = require('fs');
const path = require('path');
const yaml = require('js-yaml');

// Common "header" merged into the top of the generated documentation
const header = {
    swagger: '2.0',
    info: {
        version: '1.0.0',
        title: 'Some Title',
        description: 'Some funny description'
    },
    schemes: ['http', 'https'],
    basePath: '/api'
};

The next important part is a function that opens the directory passed as an argument, reads the list of files, processes them, and combines them into the final documentation object.

/* object */ function readDocs(dir /* string */) {
    const files = fs.readdirSync(dir);
    const docs = {};

    files.forEach(file => {
        // Only process YAML documentation files
        if (! /\.yaml$/.test(file)) {
            /* do nothing */
            return;
        }
        const fullpath = path.join(dir, file);

        const data = yaml.load(fs.readFileSync(fullpath, 'utf-8'));
        addGeneratedFields(data);
        // Deep-merge this module's paths and definitions into the combined document
        _.merge(docs, data);
    });

    return docs;
}

The addGeneratedFields function (called from readDocs above and shown below) adds required fields to every declared endpoint, such as the Authorization header and the variables used to access individual elements of collections. In the long run, this saves a lot of time and lines of code, because you don't have to copy these parameters everywhere and they are described consistently. Furthermore, adding a new required parameter later becomes much easier and faster.

/* void */ function addGeneratedFields(data /* object */) {
    const authParam = { 
      in: 'header',
      name: 'Authorization',
      description: 'Authorization token using bearer schema',
      required: true, type: 'string'
    };
    const pathRequiredParam = (name /* string */) => ({
      in: 'path',
      name: name,
      required: true,
      type: 'string',
      description: 'Path parameter'
    });

    // Files that only contain model definitions have no paths to augment
    if (! ('paths' in data)) {
        return;
    }

    for (const path in data.paths) {
        for (const method in data.paths[path]) {
            const parameters = data.paths[path][method].parameters || [];

            // Extract every {param} placeholder from the route and declare it as a required path parameter
            const pathParams = path.match(/{([\w\s\|]+)}/gi);
            if (pathParams) {
                // reverse() keeps the original order after the repeated unshift below
                for (const pathParam of pathParams.reverse()) {
                    const paramName = pathParam.replace('{', '').replace('}', '');
                    parameters.unshift(pathRequiredParam(paramName));
                }
            }

            parameters.unshift(authParam);
            data.paths[path][method].parameters = parameters;
        }
    }
}
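
To make the effect concrete: a route such as /api/user/{id}/ described with no parameters at all would come out of the generator with the Authorization header and the path variable already attached, roughly like this (the route itself is a hypothetical example):

paths:
  /api/user/{id}/:
    get:
      parameters:
        - in: header
          name: Authorization
          description: Authorization token using bearer schema
          required: true
          type: string
        - in: path
          name: id
          required: true
          type: string
          description: Path parameter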

Saving Your Documentation to a File

The last required part is saving everything to a file. To do that, it's recommended to use fs.createWriteStream and stream.write from the Node.js core.

module.exports = function GenerateDocs(dir /* string */, outFile /* string */) {
    const stream = fs.createWriteStream(outFile, { encoding: 'utf-8', flags: 'w' });

    stream.once('open', fd => {
        stream.write('# This file is generated\n');
        stream.write(yaml.dump(header));
        stream.write(yaml.dump(readDocs(dir)));
        stream.end();
    });
}
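
Assuming the generator above lives in a module of its own (the file names below are illustrative), using it is a one-liner:

// build-docs.js (hypothetical entry point)
const generateDocs = require('./generate-docs');

// Collect every ./docs/*.yaml module into a single public/swagger.yaml
generateDocs('./docs', './public/swagger.yaml');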

Fewer than 100 lines of code are needed to make your documentation modular. Undoubtedly, they can be improved and extended by:

  • Adding segmentation by subdirectories.
  • Prefixing model names with the file name if the documentation is fairly voluminous and you are worried about potential name collisions.
  • Implementing Swagger's allOf, oneOf, or anyOf model inheritance mechanisms.
  • Using limit/offset/page parameters for list routes.


Big-Picture Benefits That Are Worth the Effort

After looking at the final result, you may be tempted to think, "Hey, aren't we just reinventing the wheel here?" You are right and wrong at the same time. Self-written code like this lets you tailor the documentation process precisely to your team (and that process may look completely different for yours). Call it a necessary evil that makes writing documentation easier and more convenient. Later on, your "reinvented wheel" may even grow into a flexible library ready to be used across the board.

Swagger is a great tool that improves communication across the software development team. However, it's important to remember that documentation is a much broader term that includes not only Swagger but also descriptions of processes, records of agreements, and step-by-step instructions. Writing good documentation is as difficult as writing good code, BUT the final result is absolutely worth it. And as Benjen Stark would remind us, nothing that comes before the "but" in that sentence really matters.


Published at DZone with permission of Valentin Zlydnev, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
