Monitoring the Systems That Monitor the Weather
Monitoring your IoT system is as important as gathering IoT data. This overview covers a stack you can use to visualize data about your IoT solution.
InfluxDB is a wickedly cool time-series database, which makes it super easy to store time-based measurements, such as weather readings, but also more traditional metrics, like server CPU and disk utilization.
In the past, I’ve used Munin, an RRDtool-based monitoring utility, but it feels kind of clunky these days. RRDtool can be a bit of a bugger to extract data back out of, and there’s a certain degree of lossy compression in the graphs too, which can make particularly spiky events look a lot smoother than they really were.
Enter Telegraf.
Telegraf is one of the other components produced by InfluxData, and it’s basically a massively extensible, plugin-driven data collector.
There are many dozens of plugins, batteries included: the project ships a long list of ‘input’ plugins that you get for free when you install it.
It supports squirting data into a wide variety of outputs too, not just InfluxDB, but things like Elasticsearch, MQTT, Graylog, AMQP (for RabbitMQ), plain old TCP sockets, and many others.
In this case, I’m just using it to gather stats from the server’s CPU, memory, disk, and so on, and squirt those into InfluxDB, using the same Grafana instance that hosts the weather dashboard to host a few system dashboards for the various bits of architecture in my home lab.
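A minimal telegraf.conf for that kind of setup might look something like the sketch below. The interval, URL, and database name are placeholders (not the author's actual config), but the plugin names and options are the standard ones Telegraf ships with:

```toml
# Collect basic system metrics and write them to a local InfluxDB.
[agent]
  interval = "10s"          # how often to sample

# Input plugins: CPU, memory, disk, and network counters.
[[inputs.cpu]]
  percpu = true             # per-core stats
  totalcpu = true           # plus an aggregate

[[inputs.mem]]

[[inputs.disk]]

[[inputs.net]]              # provides bytes_recv/bytes_sent counters

# Output plugin: a v1-style InfluxDB endpoint.
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"
```

Grafana then points at that `telegraf` database as a datasource, and each dashboard panel is just a query against it.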
If you can’t find a plugin to monitor the thing you want to monitor, you have two options: you can either write a native plugin in Go (if that’s your bag), or use the built-in [[inputs.exec]] plugin, which will run a script and slurp in whatever it writes to stdout.
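As a sketch, an [[inputs.exec]] stanza looks like this — the script path is hypothetical, but `commands`, `timeout`, and `data_format` are the plugin's real options:

```toml
# Run a custom script and parse its stdout.
[[inputs.exec]]
  commands = ["/usr/local/bin/my_custom_check.sh"]  # hypothetical script
  timeout = "5s"
  data_format = "influx"    # expect InfluxDB line protocol on stdout
```

With `data_format = "influx"`, the script just prints line protocol, e.g. a line like `my_measurement,host=pi value=21.4`, and Telegraf forwards it on to whatever outputs are configured.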
Now, I know this sounds like a sponsored post, but I can guarantee you it’s not. I just like sharing the neat tools I’ve discovered, and the slightly unusual ways I’m using them.
In Grafana, I’ve configured a few simple graphs for things like load average, disk utilization, and memory usage, and I’m starting to get some pretty graphs out of it.
Network throughput requires a subtly different approach because we’re after a rate, not a counter, so we have to use the built-in derivative(value, time) function, as derivative(mean($measurement), 1s), to get a rate in bytes/second, then multiply by 8 to get bits/second.
The full query looks like this:

SELECT derivative(mean("bytes_recv"), 1s) * 8 FROM "net" WHERE "host" = 'werahost' AND $timeFilter GROUP BY time($__interval) fill(none)
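Under the hood, derivative(..., 1s) is just the delta between consecutive values divided by the elapsed time, normalized to one second. A quick sketch of that arithmetic in Python (the sample counter values are invented):

```python
# What derivative(value, 1s) * 8 works out to for a monotonically
# increasing byte counter: delta_bytes / delta_seconds * 8 = bits/sec.

def rate_bps(samples):
    """samples: list of (timestamp_seconds, bytes_recv) pairs.
    Returns the bits/second rate for each consecutive interval."""
    rates = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        rates.append((v1 - v0) / (t1 - t0) * 8)
    return rates

# Three samples, 10 seconds apart: +250 kB, then +500 kB.
samples = [(0, 1_000_000), (10, 1_250_000), (20, 1_750_000)]
print(rate_bps(samples))  # → [200000.0, 400000.0]
```

Grafana’s $__interval variable controls the GROUP BY bucket width, so the mean() smooths within each bucket before the derivative is taken.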
Published at DZone with permission of Tom O'Connor, DZone MVB. See the original article here.