Microservices Approach to Processing System Logs into Real Time Actionable Events
System logs are now being generated from more sources than ever, each one as crucial as the last. Can traditional processing and architecture handle this growing and changing scale? Or is there a better fit?
With the growth in scale of IT services, software, and infrastructure across industries small and large, local and global, real-time monitoring and notification of critical system and security logs has become increasingly important. These logs are not just logs; they are events that a business needs to listen to and act on in real time to stay agile and be prepared for the challenges of the marketplace.
Given that logs contain vast amounts of data, and that it is logistically impossible to inspect every log manually on every network or system, logging is an important but often neglected tool for preemptively reacting to a crisis. Although there are applications that consolidate logs into a central place, what is needed is the ability to automatically unlock pertinent events for real-time monitoring and notification and make them actionable. Making these system events actionable as they happen depends on your ability to make the right data available at the right time to the right person or system. This is critical, since we live in a world that is increasingly event driven thanks to the convergence of systems, devices, people, and processes through advances in internet, mobile, and communications technologies.
But how is this possible without some level of component redesign, or perhaps a redesign from the ground up, of large distributed or legacy systems? A microservices-based solution comes to the rescue…
A Microservices-Based Solution
With the rapid increase in cloud-hosted services and SaaS products over the last few years, there has been a paradigm shift in the application landscape described most clearly by two words — diverse and distributed. What this means is that data is distributed across your diverse and specialized SaaS applications on various cloud-hosted services. To get a view of this distributed data across cloud services, you need an integration architecture that legacy SOA (Service-Oriented Architecture) cannot provide. Although the core tenets of SOA are still valid and widely applicable, it has to evolve to support the world of cloud, SaaS, PaaS, and IoT.
The good news is that we have such an evolution of SOA architecture fit for today's world: microservices architecture. In a nutshell, microservices are independent, atomic, and portable services that can be deployed anywhere (on premise, in the cloud, and on any system or operating system) with no deployment dependencies. These microservices do one well-defined task very well and can be chained together using Message Oriented Middleware (MOM) like RoboMQ to create complex business processes. Microservices architecture thus provides a "Lego" approach to building applications that are auto-scalable, future proof, elastic, and expandable. You can follow our other blog on this topic to get into the architectural and philosophical details of the Lego approach to building applications.
A typical log generation, acquisition, and subsequent preemptive action can be illustrated as shown in Figure 1 below. This is a high-level workflow that can be applied to any IT or technology system, monitoring process, or business application where automated events can be triggered based on logs generated from any system and in any format.
Figure 1: Event processing using Microservices
If you follow the picture from left to right, we have system events available as log files, database records, or simply data streams from applications, systems, or devices. These events are captured or acquired by the middleware messaging platform. The events are subsequently processed by a chain of microservices designed for the organization's specific business needs. During this processing, the log events may be reformatted, evaluated against thresholds or known alert conditions, or fed to a machine learning system to learn and identify threats and issues. At the end of the processing, an action is taken, such as making a phone call, sending an SMS or email, feeding the event to a real-time analytics engine, or creating a case or a ticket in Salesforce, ServiceNow, or Jira.
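This capture-process-act flow can be sketched as a chain of small, single-purpose functions. The following is a minimal, hypothetical Python illustration; none of the names here are RoboMQ APIs, they only show how the stages hand events to one another:

```python
# Hypothetical sketch of the capture -> process -> act chain.
# In a real deployment each step would be its own microservice,
# chained through the messaging fabric rather than function calls.

def capture(raw_line):
    """Turn a raw log line into a structured event."""
    level, _, message = raw_line.partition(":")
    return {"level": level.strip().upper(), "message": message.strip()}

def evaluate(event):
    """Check the event against known alert conditions."""
    event["actionable"] = event["level"] in {"ERROR", "CRITICAL"}
    return event

def act(event):
    """Produce the outgoing action for an actionable event."""
    if event["actionable"]:
        return f"ALERT: {event['message']}"
    return None

# Chain the steps, as the messaging layer would for real microservices.
for line in ["INFO: backup finished", "ERROR: disk /dev/sda1 full"]:
    result = act(evaluate(capture(line)))
    if result:
        print(result)  # -> ALERT: disk /dev/sda1 full
```

In production, each function would consume from and publish to a message queue, so stages can scale and fail independently.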
For the purpose of this article, we will be using the microservices platform provided by RoboMQ. All the tasks referenced above are performed using our connectors and adapters to various enterprise systems and almost all protocols through the "ThingsConnect" suite of adapters and connectors. Microservices development needs three core components that are provided by RoboMQ. Let me introduce these, since they will be referred to in the following sections:
- API Gateway — RoboMQ provides an enhanced multi-protocol API Gateway through its ThingsConnect suite of adapters and connectors, so that you can process events from any system in any protocol or format.
- Messaging Fabric — At the core, RoboMQ is a Message Oriented Middleware (MOM) that provides a truly distributed and federated messaging layer, also referred to as a "Hybrid Messaging Cloud".
- Microservice Framework — We provide a microservices framework and have built our platform from the ground up using this framework.
In this use case, all the components are built as microservices, and the messaging fabric provides the chaining of services over a robust and scalable infrastructure that supports guaranteed and reliable delivery of the log information. The microservices layer consists of well-defined atomic tasks or components that can be assembled to create an event-driven workflow that automatically generates events from any source file or database and outputs to any destination system, including the next microservice in the chain of processing. The following sequence describes the event processing scenario as shown in Figure 2 below:
- The first step of event processing involves "listener" microservices that capture and listen for log events from files, server logs, or API calls. A listener can additionally filter events based on criticality (e.g. errors, warnings, information). When processing bulk logs, such as server log files, a listener emits each log event as an individual message. This allows on-demand scaling and parallel processing of the log events through the processing streams.
- The next step in the chain is a set of microservices called "executors", which take the captured event and, based on a rule set, determine what action or alert should be generated and which system should receive it.
- The final step in the process is a set of microservices called "adapters", which extract the event content and convert it to the format or protocol of the receiving system. This could be a REST call for ServiceNow or Salesforce, or an SMTP call for email or SMS. The adapter takes care of this protocol translation and any needed data transformation. There are wide possibilities here using the RoboMQ ThingsConnect suite of adapters and connectors, which support any-to-any integration.
Figure 2: Log processing through chaining of Microservices
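As a concrete sketch of the first step, a listener for syslog-style files might split bulk logs into one message per event and filter by criticality. This is an illustrative stand-in, not RoboMQ code; the `publish` stub takes the place of a real publish call to the messaging fabric:

```python
import re

def publish(event):
    """Stand-in for publishing one message to the messaging fabric."""
    print(f"[{event['severity']}] {event['message']}")

def listen(log_lines):
    """Emit each log line of interest as an individual message."""
    published = []
    # Forward only events whose text carries a severity keyword.
    pattern = re.compile(r"\b(error|warning|critical)\b", re.IGNORECASE)
    for line in log_lines:
        match = pattern.search(line)
        if match:
            event = {"severity": match.group(1).lower(),
                     "message": line.strip()}
            publish(event)
            published.append(event)
    return published

sample = [
    "Jan 12 03:14:07 host kernel: error: I/O failure on /dev/sda1",
    "Jan 12 03:14:09 host sshd: session opened for user root",
    "Jan 12 03:14:12 host app: warning: memory usage above 80%",
]
events = listen(sample)  # publishes the error and the warning only
```

Because each matching line becomes its own message, downstream executors can process them independently and in parallel.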
You can run one or more instances of each of the above three microservices to provide scaling by parallel processing. The good news is that you do not need to make any code or configuration changes; just throw more microservices into the mix and they auto load-balance the workload when using RoboMQ.
Putting It All Together
As an example, we'll take the typical Linux system logs that we are all familiar with. In the picture below is a scenario where the system log contains all kinds of information. Some of it, shown in red, is critical and needs immediate action; some entries are simply warnings, which could be benign but may point to a low-priority corrective action needed in the long run.
Figure 3: Processing System logs as complex event processing stream
When processed through the scheme suggested in Figure 2, these critical events are captured in real time and trigger an immediate action: escalating the incident, notifying a desired recipient, or logging it to a tracking or case management tool. Various options are possible and can be chosen depending on the business need. You could, for example:
- Create a case in Salesforce to engage technical support and triage teams to react to an issue or an incident.
- Generate an SMS alert for specified system administrators to act upon immediately.
- Log the alert as a record in any relational or big data database for historical analysis or to provide an audit trail.
- Process the event through a machine learning system and then take one of the above actions based on the recommendation from machine learning.
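An executor's rule set for choosing among these actions can be as simple as a severity-to-action table, with each action handed to an adapter that performs the protocol translation. The following is a hypothetical sketch; the rule names and adapter stub are illustrative, not RoboMQ APIs:

```python
# Hypothetical rule set: which action each event severity triggers.
RULES = {
    "critical": "sms_alert",        # page an administrator
    "error": "salesforce_case",     # engage support and triage teams
    "warning": "db_record",         # keep for historical analysis
}

def adapter(action, event):
    """Stub for the adapter layer (REST, SMTP, database write, ...)."""
    return f"{action}: {event['message']}"

def execute(event):
    """Look up the action for an event and hand it to the adapter."""
    # Unrecognized severities fall back to the audit trail.
    action = RULES.get(event["severity"], "db_record")
    return adapter(action, event)

print(execute({"severity": "critical", "message": "disk /dev/sda1 full"}))
# -> sms_alert: disk /dev/sda1 full
```

Keeping the rules in a table rather than in code means new actions, such as a machine learning recommendation step, can be added without touching the listener or adapter services.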
This use case is a simple but good example of Complex Event Processing (CEP), which is possible using RoboMQ. To help build such workflows, we have created a library of fully Dockerized connectors, adapters, and utility components that provide the building blocks of microservices that can be chained together. These microservices can be deployed in the cloud, on premise in your data center, or on container management platforms like Google Kubernetes, AWS Container Engine, or IBM Bluemix, with core infrastructure provided by RoboMQ.
Published at DZone with permission of fred.yatzeck, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.