5 Steps to Avoid Database Pitfalls
Databases require proactive management to keep them healthy and optimized. Here are five ways to keep your database happy and running well.
Databases are the safety deposit boxes of your business. They store and disseminate the data that keeps your business running. Without them, growth would be nearly impossible. When working with this valuable cargo, it's important to apply certain measures to help ensure extended use and sound performance. Failure to monitor and properly maintain your databases often leads to data loss and even business failure. Ask two DBAs each for the five things that you need to do to maintain healthy databases and you'll probably get about 20 answers. Nevertheless, here are five things to do to keep your databases healthy.
Dump and Load the Database
"By failing to prepare, you are preparing to fail." - Benjamin Franklin. Over time, databases become fragmented (scattered), which leads to sluggish performance. To check the level of fragmentation, run a database analysis report and look at the scatter factor. A scatter factor of 1.0 is good, but anything over 1.6 means the database is working harder than it should. An overworked database wears out more quickly and costs you more through poor performance; the fix is a dump and load. If for some reason you can't perform a dump and load, perform an index rebuild to defragment the indexes. An index rebuild is less effective than a dump and load, but it will still help performance.
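The scatter-factor check above is easy to automate. Here is a minimal sketch that flags tables whose scatter factor exceeds the 1.6 threshold; the report line format it parses is an assumption for illustration, not the exact output of any particular analysis tool.

```python
import re

# Threshold from the article: anything over 1.6 suggests a dump and load.
SCATTER_THRESHOLD = 1.6

def tables_needing_dump_and_load(report_lines):
    """Return (table, scatter) pairs whose scatter factor exceeds the threshold.

    Assumed (hypothetical) line shape: "<table-name>  <scatter-factor>".
    """
    flagged = []
    for line in report_lines:
        match = re.match(r"^(\S+)\s+([0-9.]+)$", line.strip())
        if match:
            table, scatter = match.group(1), float(match.group(2))
            if scatter > SCATTER_THRESHOLD:
                flagged.append((table, scatter))
    return flagged

report = [
    "customer   1.2",
    "order      1.9",
    "orderline  2.4",
]
print(tables_needing_dump_and_load(report))  # [('order', 1.9), ('orderline', 2.4)]
```

A check like this can run nightly against the latest analysis report and open a ticket when a table crosses the threshold, so the dump and load happens on your schedule rather than the database's.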
Test Backups and Restore to Another Machine
I've had many customers experience a disaster and then find out that their backups don't work. For example, a customer recently called me in a state of panic: "We just had a fail and we accessed our backup and it's no good. The last good backup we have is from four months ago!" I figured out that they had some after-image files that I could roll forward to recover the data. The recovery wasn't cheap, it wasn't easy, and it took four days to complete. It probably saved the company, though. This is a prime example of why you need to test your backups regularly.
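The principle is that a backup only counts once you have restored it somewhere else and verified the data. A minimal sketch of that discipline, using SQLite purely for illustration (the same idea applies to any production database):

```python
import os
import sqlite3
import tempfile

def backup_and_verify(source_path, backup_path):
    """Copy the database via SQLite's online backup API, then reopen the
    copy fresh and confirm it passes an integrity check and still contains
    the expected rows."""
    src = sqlite3.connect(source_path)
    dst = sqlite3.connect(backup_path)
    src.backup(dst)  # consistent online copy
    src.close()
    dst.close()

    # "Restore to another machine": here, simply reopen the copied file.
    restored = sqlite3.connect(backup_path)
    ok = restored.execute("PRAGMA integrity_check").fetchone()[0] == "ok"
    count = restored.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
    restored.close()
    return ok, count

with tempfile.TemporaryDirectory() as d:
    live = os.path.join(d, "live.db")
    copy = os.path.join(d, "backup.db")
    conn = sqlite3.connect(live)
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO accounts (name) VALUES (?)", [("a",), ("b",)])
    conn.commit()
    conn.close()
    print(backup_and_verify(live, copy))  # (True, 2)
```

The key step is the second half: reopening the copy and checking both integrity and row counts. A backup job that only reports "file written" tells you nothing about whether the file restores.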
Test Your Disaster Recovery Plan
The wrong time to test your DR plan is after you have a disaster. This may sound obvious, but you'd be surprised how many companies create disaster recovery and business continuity plans and then fail to test them. You may have a DR plan, but if you never time a failover, it can be an unpleasant surprise when it takes an entire day or longer. The trade-off is simple: how long it takes to restore and resume running versus how much data you lose. This may be the moment you ask yourself, "Do we even have a DR plan?" If you don't, you need one now. If you work for a publicly held company and you don't have a DR plan in place, you're in violation of the Sarbanes-Oxley Act of 2002, and the consequences could be significant. Make sure your DR plan is in place and tested.
Check the Database's File Area Growth
DBAs sometimes make the mistake of ignoring a database's growth spurts. When a file area runs out of space, the database will crash and can corrupt data. This is a simple thing to check, and if you use a managed database administration service, you won't run into this problem at all. Running out of space is a bad way to crash and is easily prevented, especially since disk space is cheap today. Simply monitoring your databases lets you avoid this simple and inexcusable mishap.
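Monitoring free space is a one-liner in most environments. A minimal sketch using Python's standard library; the 10% free threshold is an arbitrary example value, so tune it to your own growth rate and how fast you can add disk:

```python
import shutil

def space_alert(path, min_free_fraction=0.10):
    """Return True if free space on the filesystem holding `path` has
    dropped below the given fraction of total capacity."""
    usage = shutil.disk_usage(path)
    return (usage.free / usage.total) < min_free_fraction

# Example: check the volume holding the database's file area.
# "/" stands in for your actual data directory.
if space_alert("/", min_free_fraction=0.10):
    print("WARNING: database file area is running low on space")
```

Run something like this from cron or your monitoring agent against each file area, and page someone well before the area actually fills.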
Don't Ignore Performance Indicators
The database can tell you exactly what is wrong; the DBA's job is to know what to look for. Metrics like buffer hits, database reads, index utilization, SQL queries, and large files give you a good picture of where you stand on performance. Users can also be helpful, because they'll certainly let you know when the database is slow.

When buffer hits fall below 80%, you need more buffers. If you watch your database reads and see them triple over one month and stay that high, something is clearly wrong, and it's more than likely a code or index issue. For index utilization, anything below 60% should be rebuilt; underutilization can make queries run 10 times slower, and the fix, compacting the indexes, is a relatively easy task. Then there are SQL queries: a good number of users run reports with SQL products, and if SQL reporting slows your database down, consider real-time replication so you don't risk the stability of your transactional database. Finally, large file enablement should be done every time you create a database, but a lot of DBAs simply forget to do it. The problem comes when a file exceeds the 2GB limit: it shuts down your database.
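The rules of thumb above can be encoded as a simple health check. The thresholds (80% buffer hits, 60% index utilization, reads tripling against a baseline) come from the text; the metric names and input format here are assumptions for illustration:

```python
def health_warnings(metrics):
    """Return a list of warnings from a dict of raw performance counters.

    Expected (assumed) keys: buffer_hits, buffer_requests,
    index_utilization, db_reads, baseline_db_reads.
    """
    warnings = []
    hit_ratio = metrics["buffer_hits"] / metrics["buffer_requests"]
    if hit_ratio < 0.80:
        warnings.append(f"buffer hit ratio {hit_ratio:.0%} below 80%: add buffers")
    if metrics["index_utilization"] < 0.60:
        warnings.append("index utilization below 60%: compact/rebuild indexes")
    if metrics["db_reads"] > 3 * metrics["baseline_db_reads"]:
        warnings.append("database reads tripled vs. baseline: check code and indexes")
    return warnings

sample = {
    "buffer_hits": 750, "buffer_requests": 1000,
    "index_utilization": 0.55,
    "db_reads": 9_000_000, "baseline_db_reads": 2_500_000,
}
for warning in health_warnings(sample):
    print(warning)
```

Feeding real counters into a check like this turns "the users say it feels slow" into a concrete list of things to fix, which is exactly the proactive posture the article argues for.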
The Bottom Line
Proactive database monitoring is smarter and more cost-effective than reactive database administration. A managed database administration (MDBA) service can look after your databases and keep you out of the weeds. Without MDBA, you need to stay on top of the database's health yourself in order to extend its longevity. For now, databases need human intervention to work as designed and perform optimally. In a few years, they might not.
Published at DZone with permission of Barbara Ware, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.