Writing MySQL Purge Jobs in Ruby with the Cleansweep Gem
Originally written by Bill Kayser.
About the simplest thing you can do with a relational database is tell it to delete some data:
delete from accounts where accounts.cancelled = true
You don't have to be a DBA to figure out what that does. If only everything was that simple!
If you're using MySQL, the challenges of deleting data slowly unfold as you collect more and more of it. Delete operations can be very expensive and cause contention, and deleting too many records at a time can impact other sessions trying to modify the table.
You can sometimes work around this by deleting in chunks:
repeat until done:
  delete from accounts where accounts.cancelled = true limit 500
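In Ruby, that loop might look something like this minimal sketch (illustrative only; it assumes an ActiveRecord connection to the database and the accounts table above):

# Minimal sketch: delete in chunks of 500 until a chunk comes back
# smaller than the limit, meaning no matching rows remain.
# Assumes ActiveRecord is already connected to the MySQL database.
loop do
  deleted = ActiveRecord::Base.connection.delete(
    'delete from accounts where accounts.cancelled = true limit 500'
  )
  break if deleted < 500
end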
But this approach can lead to unnecessarily re-scanning rows and locking rows in long-running transactions. And it only gets more complicated from there:
- How do you monitor progress and tune your script?
- What if you are moving data?
- How do you avoid introducing replication lag when deleting rows at a high rate?
The most common patterns we encounter for purging data are deleting by timestamps and deleting orphan data. There's an excellent utility in the free Percona Toolkit called pt-archiver that provides many features to help you write purge scripts efficiently and with no impact on your application in production. In particular, it allows you to explicitly identify the index to traverse when doing deletes.
Here's a simple example deleting rows of the errors table that are older than one week and not flagged for archive:
pt-archiver \
  --source u=dbuser,D=production,t=errors,i=index_on_timestamp \
  --purge \
  --limit 1000 \
  --commit-each \
  --bulk-delete \
  --where "timestamp < now() - interval 1 week and archive = 0"
This specifies that rows should be deleted 1,000 at a time, descending the timestamp index and ensuring that each iteration starts scanning where the last row was deleted. It does this by first querying for the error ids to purge, and then deleting them separately. In this case, that avoids re-scanning rows where archive is not 0.
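In ActiveRecord terms, that query-then-delete pattern might be sketched like this (illustrative only; pt-archiver does this internally, and the Error model here is assumed to map to the errors table):

# Phase 1: select just the ids of the next chunk along the timestamp index.
ids = Error.where('timestamp < ? and archive = ?', 1.week.ago, 0)
           .order(:timestamp)
           .limit(1000)
           .pluck(:id)
# Phase 2: delete exactly those rows by primary key, so rows where
# archive is not 0 are never re-scanned on the next pass.
Error.where(id: ids).delete_all unless ids.empty?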
Building the Cleansweep Ruby Gem
While pt-archiver works great for purges with single table scans, New Relic engineers weren't able to use it for more complex operations deleting orphan data using joins instead of subqueries. We also missed all the tools for scripting, scheduling, monitoring, and notification we use for our Ruby on Rails tasks.
I decided to build a Ruby gem called Cleansweep to give us many of pt-archiver's benefits while leveraging ActiveRecord features that simplify the construction of purge jobs, make it easy to purge orphan data, and integrate cleanly with our existing Ruby tools.
Using the Cleansweep gem, you specify a purge in Ruby like this:
copier = CleanSweep::PurgeRunner.new model: Error,
                                     index: 'index_on_timestamp' do |model|
  model.where('timestamp < ? and archive = ?', 1.week.ago, false)
end
copier.execute_in_batches
This creates a PurgeRunner instance and yields an Arel scope you can use to specify the where clause. It then iterates through the table, deleting in chunks as it traverses the timestamp index.
We delete orphan rows by joining with tables that reference the data we want to delete:
copier = CleanSweep::PurgeRunner.new model: Error do |model|
  model.joins('left join applications app on errors.app_id = app.id')
       .where('app.id is null')
end
In this version I don't specify an index. By default, the Cleansweep gem uses the primary key or the first unique index, and traverses that index in one direction. It works in chunks, first querying for the rows and then deleting them. You can preview the exact queries with:
puts copier.print_queries
You can use this output to examine query plans and tune the criteria:
Initial query:
  select `errors`.`id` from `errors` force index(primary)
    left join applications app on errors.app_id = app.id
    where (app.id is null)
    order by `errors`.`id` asc limit 500

Chunk query:
  select `errors`.`id` from `errors` force index(primary)
    left join applications app on errors.app_id = app.id
    where (app.id is null) and (`errors`.`id` > 1001)
    order by `errors`.`id` asc limit 500

Delete statement:
  delete from `errors` where (`errors`.`id` in (1, 3, 4, … 1001))
Being able to specify the scope becomes especially helpful as the criteria get more complex. For instance, perhaps you want to delete by a timestamp, but only on certain accounts. Or maybe you want to delete orphan data, but only need to look at a certain type of error.
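For example, a purge that combines a timestamp cutoff with an account filter might look like this sketch (the account_id column and the Account model are assumptions for illustration):

copier = CleanSweep::PurgeRunner.new model: Error,
                                     index: 'index_on_timestamp' do |model|
  # Only purge old errors belonging to cancelled accounts
  # (account_id and the Account model are hypothetical here).
  model.where('timestamp < ?', 1.week.ago)
       .where(account_id: Account.where(cancelled: true).select(:id))
end
copier.execute_in_batches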
More Bells and Whistles
The Cleansweep gem also offers a number of other bells and whistles (a few are sketched together after this list):
- Option to throttle the rate of deletes by sleeping between chunks
- Built-in custom instrumentation for monitoring with New Relic
- Option to print progress statistics at an interval you specify
- Ability to copy rows into another table instead of deleting them
- Ability to purge rows in one table using ids in another table. At New Relic, we use this to purge satellite tables by building a temporary table of ids and creating Cleansweep instances for each table that references the ids in the temp table
- Ability to suspend when the replication lag exceeds a certain time threshold
- Ability to suspend when the history list size exceeds a certain threshold
- Ability to traverse an index in reverse order, or not traverse one at all
- Accepts a logger instance to use for logging; we use this to pass in a facade to our remote logger
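To give a feel for how these might fit together, here's a hedged sketch; the option names (chunk_size, sleep, max_repl_lag, logger) are assumptions based on the feature list above, so check the gem's documentation for the exact keywords your version supports:

# Option names below are assumed for illustration; consult the
# Cleansweep documentation for the real keywords.
copier = CleanSweep::PurgeRunner.new model: Error,
                                     chunk_size: 500,    # rows per chunk (assumed)
                                     sleep: 0.5,         # pause between chunks, in seconds (assumed)
                                     max_repl_lag: 60,   # suspend if replicas lag over 60s (assumed)
                                     logger: Rails.logger do |model|
  model.where('timestamp < ?', 1.week.ago)
end
copier.execute_in_batches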
This only scratches the surface of all the things the Cleansweep gem can do. You can find details, documentation, and more examples at http://bkayser.github.com/cleansweep .