How to Install JS-errors Tracker




In contemporary web development, collecting and analyzing JavaScript errors is an important challenge. Fast error collection and analysis improve the quality and stability of software. Today, there are many companies offering services to solve this problem. As a rule, these are subscription services, where a user pays a fee for a certain number of received events.

In this article, I will show you how to install and set up a fault-tolerant LogPacker Cluster for JS-error collection and analysis, absolutely free. The service is designed for any load: it is easily scalable and can process an unlimited number of events.

The main LogPacker advantage is that it works with any type of log file on any platform, such as:

  • Server logs (Linux, macOS, and Windows Server).
  • Mobile logs and crashes (iOS, Android, and Windows Phone).
  • Custom application logs (any language).
  • JS-errors.
  • Any third-party applications and databases (full support out of the box).

Let’s walk through the primary steps of log collection and analysis, using JavaScript errors as the example:

  • Infrastructure setup.
  • Cluster setup for log collection and analysis.
  • Dashboard setup.
  • JS-tracker connection to a website.
  • Notification setup.

We will review each point in detail, starting with the architecture configuration for storing and analyzing log files.

Infrastructure Setup

LogPacker Cluster stores and processes JS-errors. It can consist of several linked LogPacker servers, or it can be a standalone application installed on a Linux server. Several nodes in a cluster allow load balancing and concurrent writes of logs to different storage types.

We provide a free license with a limit of five servers. This means that you can build a five-node cluster for free that can withstand substantial load.

First, let’s see how to run a cluster of two servers. After registration, you will be able to download the console daemon application, which needs to be installed on both servers. You can do this, for example, with the RPM/DEB packages, or manually from the tar archive.

Let’s consider each installation method in detail.

Install RPM:

sudo rpm -ihv logpacker-1.0_27_10_2015_03_45_30-1.x86_64.rpm

The daemon will be installed to /opt/logpacker.

Install DEB:

sudo dpkg -i logpacker_1.0-27-10-2015_03-45-30_amd64.deb

Installation from the tar archive lets you install the LogPacker daemon to any directory and run several daemons on one machine (if necessary):

tar -xvf logpacker-27-10-2015_03-45-30.tar.gz

Before running it, we need to set up log storage. By default, this is Elasticsearch on localhost:9200. The LogPacker server supports the following storage types: file, Elasticsearch, MySQL, PostgreSQL, MongoDB, HBase, InfluxDB, Kafka, Tarantool, and Memcached. We will set up the servers for concurrent writes to two services: Elasticsearch and Apache Kafka.
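Conceptually, the concurrent write to several storages is a fan-out of each event to every configured backend. Here is a minimal JavaScript sketch of the idea (illustrative only; in-memory arrays stand in for Elasticsearch and Kafka, and this is not LogPacker's actual code):

```javascript
// In-memory stand-ins for the two configured storage backends.
var storages = {
  elasticsearch: [],
  kafka: []
};

// Fan the same event out to every configured storage.
function writeEvent(event) {
  Object.keys(storages).forEach(function (name) {
    storages[name].push(event);
  });
}

writeEvent({ level: "error", message: "TypeError: x is undefined" });
console.log(storages.elasticsearch.length, storages.kafka.length); // 1 1
```

A real cluster performs these writes in parallel and buffers events if a backend is temporarily unavailable.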

Apache Kafka installation is described in detail on DigitalOcean.

Kafka Server is available at

We also need Supervisord to daemonize the LogPacker application. Let’s create the /etc/supervisord.d/logpacker_daemon.ini file with the following content:

[program:logpacker_daemon]
command=/opt/logpacker/logpacker_daemon --server
autostart=true
autorestart=true

We inform Supervisord of the new program and apply the change:

sudo supervisorctl reread
sudo supervisorctl update

In the end, we should get the following architecture:

[Image: architecture scheme]

Our infrastructure is ready, and we can start LogPacker cluster setup.

Cluster Setup for Log Collection and Analysis

Next, we need to join the servers into a cluster. Each server has a configuration file, configs/server.ini, that we need to edit.

For full configuration, it is enough to list the other cluster nodes on only one server. Here is an example of the first server’s configuration:





If you are working with ES version 2 or higher, you need to change the provider’s name to elasticsearch2.

In the configuration file of the second server and the following ones, we need to change the following:


Let’s run all LogPacker nodes with the help of supervisorctl:

sudo supervisorctl start logpacker_daemon

Now our cluster can receive logs, errors, and any other events. Because messages arrive from different devices and networks, we need to expose an externally accessible API port. A proxy pass to the local port in Nginx works well for this:

server {
    listen 80;
    server_name logpacker.mywebsite.com;
    location / {
        proxy_set_header    X-Real-IP   $remote_addr;
        proxy_set_header    Host        $http_host;
        proxy_pass          http://localhost:9997;
    }
}

It is possible to scale the API across all five servers and install a load balancer to distribute the load.
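How a balancer spreads incoming events across the API nodes can be sketched as simple round-robin (the node addresses below are hypothetical; in practice Nginx's upstream module or a dedicated load balancer does this for you):

```javascript
// Hypothetical LogPacker API nodes sitting behind the balancer.
var nodes = [
  "http://lp-node1:9997",
  "http://lp-node2:9997",
  "http://lp-node3:9997"
];
var next = 0;

// Pick the next node in round-robin order.
function pickNode() {
  var node = nodes[next];
  next = (next + 1) % nodes.length;
  return node;
}

console.log(pickNode()); // http://lp-node1:9997
console.log(pickNode()); // http://lp-node2:9997
```

Round-robin is only one strategy; least-connections or IP-hash balancing may suit some deployments better.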

Dashboard Setup

Now, let’s turn to displaying the received data. The cluster has two standard ways of displaying logs:

Let’s consider the first one. To set up Kibana for displaying logs in real time, we need:

This setup provides the following dashboards:

  • Server Logs.
  • JS-errors.
  • Mobile Logs.

JavaScript errors will be available in the JS-errors dashboard, with full information about each error, including User Agent, IP address, etc.

[Image: Kibana dashboard]

JS-Script Setup

Now, let’s set up the script that collects errors and sends them to the cluster. The JS-script is available in your user account on my.logpacker.com.

You need to specify the URL of the running cluster. There are also two optional parameters, User ID and Username, which contain JS code returning their values.
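For example, if your application exposes a global user object (the `currentUser` object below is hypothetical; adapt it to however your site stores the logged-in user), the two optional parameters could be filled like this:

```javascript
// Hypothetical global your application might set after login.
var currentUser = { id: 123, name: "alice" };

// The optional tracker parameters are plain JS expressions returning strings.
var userID = currentUser ? String(currentUser.id) : "";
var userName = currentUser ? currentUser.name : "";

console.log(userID);   // 123
console.log(userName); // alice
```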

The finished script looks like this:

<script type="text/javascript">
    var clusterURL = "http://logpacker.mywebsite.com";
    var userID = "";
    var userName = "";
    (function() {
        var lp = document.createElement("script");
        lp.type = "text/javascript";
        lp.async = true;
        lp.src = ("https:" == document.location.protocol ? "https://" : "http://") + "logpacker.com/js/logpacker.js";
        var s = document.getElementsByTagName("script")[0];
        s.parentNode.insertBefore(lp, s);
    })();
</script>

Then add this script to all pages of your website (or several websites).
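Under the hood, browser error trackers of this kind typically hook `window.onerror`. Here is a simplified sketch of the mechanism (with a stubbed `window` so it also runs outside a browser; the real logpacker.js internals may differ):

```javascript
// Stub a browser-like global so the sketch runs in Node as well as a browser.
var window = typeof window !== "undefined" ? window : {};
var captured = [];

// A tracker installs a global error handler like this.
window.onerror = function (message, source, line, column) {
  // Build the payload that would be sent to the cluster URL.
  captured.push({
    message: message,
    source: source,
    line: line,
    column: column,
    timestamp: Date.now()
  });
  return true; // suppress the browser's default console logging
};

// Simulate an uncaught error being reported.
window.onerror("Uncaught TypeError: x is undefined", "app.js", 42, 13);
console.log(captured.length); // 1
```

In a real browser, the handler fires automatically for any uncaught error on the page.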

Notification Setup

By default, LogPacker Cluster sends all received and processed fatal errors to your account’s email once an hour. This requires a local sendmail installed on your server.

The service also supports the following types of notifications:

  • Sendmail (by default)
  • Slack
  • SMTP
  • Twilio SMS
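The hourly batching of fatal errors described above can be sketched like this (a conceptual illustration only, not LogPacker's implementation; a real daemon would flush on a timer rather than on each report):

```javascript
var INTERVAL_MS = 60 * 60 * 1000; // default: flush once an hour
var buffer = [];
var lastFlush = -INTERVAL_MS; // so the very first report flushes immediately

// Collect an error; return the batch to send when the interval has elapsed.
function report(error, now) {
  buffer.push(error);
  if (now - lastFlush >= INTERVAL_MS) {
    var batch = buffer.splice(0); // drain the buffer
    lastFlush = now;
    return batch; // would be handed to sendmail/Slack/SMTP/Twilio
  }
  return null; // still within the interval
}

console.log(report("fatal #1", 1000));               // first flush: ["fatal #1"]
console.log(report("fatal #2", 2000));               // null (within the hour)
console.log(report("fatal #3", 1000 + INTERVAL_MS)); // ["fatal #2", "fatal #3"]
```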

Message intervals and levels can be set in the configs/notify.ini file:


And that’s it: we have set up a system for JS-error collection and analysis. It is easy to set up and scale, so it can handle any load, while the server uses minimal resources and stays fault-tolerant thanks to clustering.

With this system in place, implementing log collection and analysis is straightforward. At the moment, LogPacker supports all popular platforms.

In a competitive IT market, software quality and stability are among the most important aspects. LogPacker effectively solves the problem of log collection, transfer, and analysis.

Register and install your own LogPacker Cluster absolutely free!

data sources, devops, javascript, js, log analysis, log management, web development

Opinions expressed by DZone contributors are their own.
