[This article was written by Jervin Real]
A few years back Deva wrote about how to use tcpdump on very busy hosts. That post sparked my interest in exploring the actual impact of tcpdump on such hosts. In this post, I want to highlight how much of an impact there really is, and what options you have to make query collection more effective.
Some things you need to know:
- The test is a sysbench read-only workload: 8 tables, 8 threads, 1,000,000 rows each, with a 16GB buffer pool. The dataset fits fully in memory.
- sysbench is run on the same host. Over a 1Gbps connection sysbench can saturate the network and therefore skew the netcat streaming tests, so I decided to run it locally.
- There are 13 tests, 5 minutes each with a 1-minute interval between them, varying in how the dump file is captured.
The first, as a baseline, is the MySQL slow query log:

A: mysql -e 'set global long_query_time=0, slow_query_log=1; select sleep(300); set global long_query_time=1, slow_query_log=0;'

The second group is tcpdump with the -w option, which means tcpdump itself writes to the capture file:

B: $DUMPCMD -i any -G 300 -W 1 -Z root -w tcpdump.pcap port 3306
C: $DUMPCMD -i any -G 300 -W 1 -Z root -w tcpdump.pcap 'port 3306 and tcp[1] & 7 == 2 and tcp[3] & 7 == 2'
D: $DUMPCMD -i any -G 300 -W 1 -Z root -w tcpdump.pcap 'port 3306 and ( tcp[1] & 7 == 2 or tcp[3] & 7 == 2 )'

The third group uses "packet-buffered" output (the -U option) to see whether there is any improvement in response time:

E: $DUMPCMD -i any -G 300 -W 1 -Z root -U -w tcpdump.pcap port 3306
F: $DUMPCMD -i any -G 300 -W 1 -Z root -U -w tcpdump.pcap 'port 3306 and tcp[1] & 7 == 2 and tcp[3] & 7 == 2'
G: $DUMPCMD -i any -G 300 -W 1 -Z root -U -w tcpdump.pcap 'port 3306 and ( tcp[1] & 7 == 2 or tcp[3] & 7 == 2 )'

The next group streams the capture to a remote location via netcat:

H: $DUMPCMD -i any -G 300 -W 1 -Z root -w - port 3306 | nc remote_ip 33061
I: $DUMPCMD -i any -G 300 -W 1 -Z root -U -w - port 3306 | nc remote_ip 33062
J: $DUMPCMD -i any -G 300 -W 1 -Z root -U -w - 'port 3306 and tcp[1] & 7 == 2 and tcp[3] & 7 == 2' | nc remote_ip 33063

The last group, the one most of us are probably accustomed to, pipes the decoded packets to a file:

K: timeout -s KILL 300 $DUMPCMD -i any port 3306 > tcpdump.pcap
L: timeout -s KILL 300 $DUMPCMD -i any 'port 3306 and tcp[1] & 7 == 2 and tcp[3] & 7 == 2' > tcpdump.pcap
M: timeout -s KILL 300 $DUMPCMD -i any 'port 3306 and ( tcp[1] & 7 == 2 or tcp[3] & 7 == 2 )' > tcpdump.pcap

$DUMPCMD is defined as: tcpdump -s 65535 -x -nn -q -tttt
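Since the -w variants write raw binary pcap data, the capture has to be decoded before analysis. A sketch of the post-processing step, following the tcpdump workflow documented for pt-query-digest (file names here follow the tests above):

```shell
# Read the binary capture back and re-emit it as text, using the same
# decoding flags as $DUMPCMD minus the -w (snaplen is already fixed in
# the capture file, so -s is not needed on read):
tcpdump -r tcpdump.pcap -x -nn -q -tttt > tcpdump.txt

# Aggregate the decoded packets into a query report:
pt-query-digest --type tcpdump tcpdump.txt > report.txt
```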
- Each group has an AND and an OR variation of the port filter. I wanted to see how much additional impact the extra port expressions have. As you will see below, they do not have a significant impact on performance, but they do change the number of queries captured.
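The `& 7 == 2` comparisons in the filters above are a cheap sampling trick: 3306 modulo 8 is 2, so the low three bits of the port's low byte are 010. A quick sanity check of the arithmetic:

```shell
# 3306 = 0b110011101010; its low three bits are 010 = 2, so a byte
# holding the low byte of port 3306 satisfies "& 7 == 2".
echo $((3306 & 7))   # prints 2
echo $((3306 % 8))   # equivalent check, also prints 2
```

In the AND variant both port bytes must match, so roughly 1 in 8 client ephemeral ports pass the filter (a sample of connections); in the OR variant any connection involving port 3306 passes.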
I’ve graphed the sysbench data during the test and labeled each test for easy comparison on the graphs.
- We see that piping the data as decoded packets to a file (K, L, M) has significant overhead in terms of the number of captured queries, response time, and read requests completed.
- Using the slow log has about 30% overhead in response time and nearly a 20% drop in throughput, but it has the highest number of queries captured.
- Writing captured packets directly to a binary file using the -w option has the lowest response time overhead, around 10%. Throughput drops depending on how much filtering is involved, though, and there are noticeable stalls when the operating system flushes the page cache. This side effect causes sysbench to drop to 0 reads, or even reach response times of several seconds!
- Streaming packets to a remote server with enough network bandwidth and IO capacity, combined with the -w option to capture binary data, produces 20-25% overhead in response time and a 10-15% drop in throughput, with no stalls and a number of queries captured close to the slow query log.
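The streaming tests (H, I, J) assume a listener on the remote end writing the stream to disk. A minimal sketch, using port 33061 from test H (traditional netcat variants want `nc -l -p 33061` instead):

```shell
# On the remote capture server: accept the connection on the agreed
# port and write the incoming binary pcap stream straight to disk.
nc -l 33061 > tcpdump-remote.pcap
```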
Use the tcpdump -w option in all cases and decode later. If you are looking for an overall view of ALL your queries, streaming the tcpdump data to a remote server is also ideal. If you only have low bandwidth, i.e. 100Mbps, this might not be enough: 5 minutes of binary tcpdump data produced a 31GB file, which is a 105MB/s requirement! In that case, consider writing to a separate partition with enough IO capacity.
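The bandwidth figure is simple arithmetic from the capture size and window:

```shell
# 31GB of binary pcap data written over a 300-second (5-minute) window:
echo $((31 * 1024 / 300))   # ~105 (MB/s sustained)
```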
If you are using Percona Server or MariaDB and are only looking to capture a portion of your workload, i.e. table scans or on-disk temporary tables, or want to rate-limit the collection, the extended slow query logging capability in these versions is also an excellent way to capture the data you need.
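A minimal sketch of that approach on Percona Server (the variable names below come from its extended slow log feature; exact names and accepted values vary by version and between Percona Server and MariaDB, so check your release's documentation):

```shell
# Hypothetical sampling setup: log only full scans and on-disk temp
# tables, and only for 1 in every 100 sessions, to bound overhead.
mysql -e "SET GLOBAL slow_query_log = 1;
          SET GLOBAL long_query_time = 0;
          SET GLOBAL log_slow_filter = 'full_scan,tmp_table_on_disk';
          SET GLOBAL log_slow_rate_limit = 100;
          SET GLOBAL log_slow_verbosity = 'full';"
```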