Right after releasing 1.3.0 I realized that I had forgotten about one thing.
If you're using the ext3 file system (and possibly others, I'm not sure), removing a large file can cause problems due to heavy I/O traffic.
We hit this problem earlier at one of our client sites, and devised a way to remove large files by truncating them, piece by piece, until they are small enough to be removed in one go. I wrote about it earlier, of course.
Unfortunately, I forgot about this when releasing 1.3.0, but as soon as I tried to deploy at the client site, I noticed the missing functionality.
So, today I released 1.3.1, which adds two options, --truncate and --sleep, to omnipitr-backup-slave:
If --truncate is specified, and is greater than 0, it will cause omnipitr-backup-slave to remove large files (larger than the --truncate value) in steps.
In pseudocode:
if param('truncate') {
    file_size = file_to_be_removed.size()

    # Shrink the file one chunk at a time, pausing between
    # steps, so the filesystem never frees too much at once.
    while ( file_size > param('truncate') ) {
        file_size = file_size - param('truncate')
        file_to_be_removed.truncate_to( file_size )
        sleep( param('sleep') )
    }
}

# At this point the file is at most param('truncate') bytes,
# so it can be unlinked in one go.
file_to_be_removed.unlink()
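For the curious, here is a minimal, standalone sketch of the same idea in Perl (the language OmniPITR is written in). This is not the actual OmniPITR code; the remove_in_steps function and the example path are made up for illustration:

use strict;
use warnings;
use Time::HiRes qw( usleep );

# Shrink a file chunk by chunk, then unlink it.
# $step is the chunk size in bytes, $sleep_ms the pause between steps.
sub remove_in_steps {
    my ( $path, $step, $sleep_ms ) = @_;
    my $size = ( stat $path )[7];
    die "Cannot stat $path: $!" unless defined $size;
    while ( $size > $step ) {
        $size -= $step;
        truncate( $path, $size ) or die "Cannot truncate $path: $!";
        usleep( $sleep_ms * 1000 );    # usleep() takes microseconds
    }
    unlink $path or die "Cannot unlink $path: $!";
}

# Hypothetical usage: remove in ~1MB steps, sleeping 500 ms between them.
remove_in_steps( '/tmp/some-large-file', 1_000_000, 500 );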
So, for example, specifying --truncate=1000000 will remove the file by first truncating it in ~1MB steps.
The --sleep parameter is used to delay removal of the next part of the file (it's used only in the truncating loop, so it has no effect when --truncate is not given). Its value is in milliseconds, and defaults to 500 (0.5 second).
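To put the two options together: with --truncate=1000000 and the default --sleep of 500, a 10GB file is removed in roughly 10,000 truncation steps, which takes about 83 minutes, most of it spent sleeping, so for very large files you may want a larger --truncate value or a smaller --sleep. A call could look like this (all the other, unrelated omnipitr-backup-slave options are elided as "..."):

omnipitr-backup-slave ... --truncate=1000000 --sleep=500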
Hope you'll find it useful.