Publish/Subscribe Logging Options
Zato publish/subscribe message queues and topics offer several ways to gain insight into the inner workings of all the components taking part in message delivery, and this article presents an overview of the mechanisms available.
Two Types of Logging
Logging is broken out into two categories:
- Files or other destinations — there are several files where messages and events may be stored.
- GUI in web-admin — dedicated parts of web-admin let one check runtime configuration and observe events taking place in internal components.
Files or Other Destinations
By default, several files are employed to keep information about pub/sub messages; each file stores data with its own characteristics. As with other log files, details such as logging levels are kept in the logging.conf file for each server in a cluster.
Note that logging to files is simply what pub/sub uses in its default settings. Everything in logging.conf is based on Python's standard logging facilities, so it is possible to reconfigure it in any way desired to send logging entries to other destinations instead of, or in addition to, what the configuration specifies by default. Likewise, it is possible to disable these loggers altogether if they are not needed.
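For instance, because these are ordinary Python loggers, they can be reconfigured at will. The sketch below assumes the main pub/sub logger is named zato_pubsub; confirm the exact logger names in your own logging.conf before adapting it.

```python
import logging

# Assumption: the pub/sub logger is named 'zato_pubsub' - check the
# logger names in your server's logging.conf before relying on this.
logger = logging.getLogger('zato_pubsub')

# Send entries to an additional destination, here a plain stream handler ..
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
logger.addHandler(handler)

# .. or disable the logger altogether if it is not needed:
# logger.disabled = True
```

Any handler supported by Python's logging, such as SysLogHandler or HTTPHandler, can be attached in the same way.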
zato_pubsub is the main file with internal events about the handling of pub/sub messages. Any time a message is published or received by one of the endpoints, a new log entry is added here.
Moreover, the file contains plenty of extra information: whether there are any matching subscribers for a message, what their subscription keys are, whether they are currently connected, what happened to the message if there were none and, if the message could not be delivered, what the reason was, how many attempts have been made so far, all the relevant timestamps, headers, and other metadata.
In short, it is the full life-cycle of publications and subscriptions for each message processed. This is where all the details of what is going on under the hood can be found.
Unlike the file above, the sole purpose of zato_pubsub_audit is to save the data and metadata of all the messages received by and published to topics, and nothing else.
There is no additional information on what happened to each message, or when and where it was delivered; it is purely an audit log of all the messages that Zato publishes.
It is at times convenient to use it precisely because it has basic information only, without any details.
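That simplicity makes the audit log easy to work with programmatically. The sketch below filters audit lines by message ID; the line format shown is purely hypothetical, so consult the actual entries in your own zato_pubsub_audit file before adapting it.

```python
# Hypothetical audit-line layout: timestamp - action - message ID - topic.
# The real zato_pubsub_audit format may differ - this only illustrates
# the kind of filtering a bare audit log makes convenient.
def find_message(lines, msg_id):
    """ Return all audit lines mentioning the given message ID. """
    return [line for line in lines if msg_id in line]

lines = [
    '2019-01-17 12:00:01 - pub - zpsm001 - /my/topic',
    '2019-01-17 12:00:02 - pub - zpsm002 - /my/topic',
]

find_message(lines, 'zpsm001')  # -> the first line only
```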
Prevention of Data Loss
The whole purpose of zato_pubsub_overflow is to retain messages that would otherwise be dropped if topics reached their maximum depth while subscribers were slow to consume them.
Consider a topic with a maximum depth of 10,000 in-RAM messages. If there is a steady inflow of messages to this topic but subscribers do not consume them continuously, meaning the messages are not placed in their queues in a timely fashion, this limit, the maximum depth, will eventually be reached.
The question then is, what should be done with such topics overflowing with messages? By default, Zato will save them to disk just in case they should be kept around for manual inspection or resubmission.
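This overflow behaviour can be illustrated with a simplified model; this is not Zato's actual implementation, only a sketch of the idea that messages beyond the maximum in-RAM depth are diverted to a separate store rather than dropped.

```python
# A simplified model of topic overflow - not Zato's real code.
class Topic:
    def __init__(self, max_depth=10000):
        self.max_depth = max_depth
        self.in_ram = []    # Messages kept in RAM, up to max_depth
        self.overflow = []  # Stand-in for the zato_pubsub_overflow log on disk

    def publish(self, msg):
        if len(self.in_ram) < self.max_depth:
            self.in_ram.append(msg)
        else:
            # Topic is full - retain the message for inspection or resubmission
            self.overflow.append(msg)

topic = Topic(max_depth=2)
for msg in 'abc':
    topic.publish(msg)

topic.in_ram    # -> ['a', 'b']
topic.overflow  # -> ['c']
```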
If this is not needed, the zato_pubsub_overflow logger can be configured with level WARN or ERROR. This disables logging to disk and any such superfluous messages are simply discarded.
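Raising the level can be done in logging.conf or, as a quick sketch, directly through Python's logging API; the logger name zato_pubsub_overflow is an assumption here, so verify it against your own configuration.

```python
import logging

# Assumption: the overflow log is driven by a logger named
# 'zato_pubsub_overflow' - confirm the name in logging.conf.
# Raising its level above INFO stops overflowing messages from
# being written to disk.
logging.getLogger('zato_pubsub_overflow').setLevel(logging.WARN)
```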
GUI in Web-Admin
Several web-admin screens let one understand what kind of internal events and structures participate in publish/subscribe:
- Delivery tasks — their job is to deliver messages from subscriber queues to endpoints.
- Sync tasks — they are responsible for internal synchronization of state, data, and metadata between topics and queues.
Of particular interest is the event log — it shows step-by-step what happens to each message published in terms of decision making and code branches.
Note that the log has a maximum size of 1,000 events, with new events replacing older ones; on a busy server it will therefore cover a few seconds at most, so it is primarily of importance in environments with low traffic to topics, such as development and test ones.
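The event log's replacement policy can be modelled as a fixed-size ring buffer: once 1,000 entries are stored, each new event evicts the oldest one. A bounded deque behaves exactly this way.

```python
from collections import deque

# A fixed-size ring buffer, as a model of the 1,000-event log:
# appending to a full deque discards the oldest entry.
event_log = deque(maxlen=1000)

for i in range(1500):
    event_log.append('event-%d' % i)

len(event_log)  # -> 1000
event_log[0]    # -> 'event-500' (the first 500 events were evicted)
```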
Note that runtime pub/sub information in web-admin is always presented for each server process separately — for instance, if a server is restarted, its process ID (PID) will likely change and all the data structures will be repopulated. The same holds for the event log which is not preserved across server restarts.
Publish/subscribe messages and their corresponding internal mechanisms are covered by a comprehensive logging infrastructure that lets one understand both the overall picture and the low-level details of the functionality.
From the audit log, through events, to tracing of individual server processes, every angle is addressed to make sure that topics and queues work smoothly and reliably.