Virtual Log Files: 200 or 1000?
How many is too many? Does it even really matter? Read this article to find out the answer!
Last week, I had the privilege of reviewing possibly the best SQL Server production environment I've seen in Canada. During the follow-up meeting, the senior DBA and I had a discussion about Virtual Log Files (VLFs), disagreeing on the maximum number of Virtual Log Files a transaction log should have. I said 200, and he said 1000.
Both numbers are arbitrary, so let's explore why VLFs exist and why we might prefer one over the other.
To give you a succinct refresher, here's what I wrote about VLFs in my book:
A transaction log file is split into logical segments, called virtual log files. These segments are dynamically allocated when the transaction log file is created, and whenever the file grows. The size of each VLF is not fixed and is based on an internal algorithm, which depends on the version of SQL Server, the current file size, and file growth settings.
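To see how many VLFs a transaction log currently has, you can count them directly. A minimal sketch, assuming SQL Server 2016 SP2 or later (older versions would use `DBCC LOGINFO` instead, which returns one result row per VLF):

```sql
-- sys.dm_db_log_info returns one row per VLF in the database's
-- transaction log, so counting rows gives the VLF count.
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());
```

Run this in the context of the database you want to inspect; `DB_ID()` resolves to the current database.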
In a transaction log with too many or too few VLFs, we might experience performance issues under a normal workload as well as during the backup and restore process.
So what is the "right" amount? In customer engagements, I follow a guideline proposed by Glenn Berry of SQLskills.com in his Diagnostic Information Queries: keep the number of VLFs at or below 200. In my opinion, any number higher than that is cause for concern.
On the other hand, Brent Ozar Unlimited has a popular script that proposes a maximum VLF count of 1000. To Brent, a number higher than that is cause for concern.
So who is right? It doesn't really matter, because when we consider the number of VLFs in a transaction log file, what we care about are excessive counts. I've seen databases with more than 15,000 VLFs, which is clearly too high. If, on the other hand, a transaction log file has 201 VLFs, I'm not going to insist that the DBA shrink and resize the log immediately. Instead, I'll raise a flag and ask questions about the file growth settings.
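When a count really is excessive, the usual remedy is to shrink the log and grow it back in a few large steps with a sensible fixed growth increment. A hedged sketch, where the database name, logical file name, and sizes are all placeholders you would replace with your own values:

```sql
-- Hypothetical example: SalesDB and SalesDB_log are placeholder names,
-- and the sizes below depend entirely on your workload.
USE SalesDB;

-- Shrink the logical log file down as far as possible
-- (may require log backups first if active VLFs are at the end).
DBCC SHRINKFILE (SalesDB_log, TRUNCATEONLY);

-- Resize the log to its working size in one operation, with a fixed
-- growth increment instead of a percentage, to keep future VLF counts low.
ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_log, SIZE = 8GB, FILEGROWTH = 1GB);
```

A fixed `FILEGROWTH` in the 512 MB to a few GB range generally produces far fewer VLFs over time than the old 10% or 1 MB defaults.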
As a baseline, 200 VLFs is a reasonable maximum. If a transaction log file is small but has a lot of VLFs, that points to a problem with the file growth settings, which is a relatively easy fix. If a transaction log file has more than 1000 VLFs, that should really set off alarm bells. Periodically keeping an eye on the VLF count, or using more in-depth health checks like dbSnitch, goes a long way toward proactively identifying growing problems rather than reactive firefighting.
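For that kind of periodic check, a single server-wide query is handy. A sketch, again assuming SQL Server 2016 SP2 or later for `sys.dm_db_log_info`:

```sql
-- One row per database, with its VLF count, highest first.
SELECT d.name AS database_name,
       COUNT(li.database_id) AS vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
ORDER BY vlf_count DESC;
```

Anything near the top of that list with a count in the thousands is a candidate for the shrink-and-resize treatment described above.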
In my customer's case, where they have very fast storage, low CPU utilization, and mostly small databases (below 100 GB), having 200 VLFs is a manageable target amount and doesn't appear to be causing noticeable performance issues. With good file growth settings, the VLF count won't even matter.
Share your VLF counts with me on Twitter at @bornsql.

Photo by Bernard Hermant on Unsplash.
Published at DZone with permission of Randolph West, DZone MVB. See the original article here.