Using newsyslog to rotate backups
newsyslog(8) allows you to keep system log files at manageable sizes.
Log files record various system activities and can be useful when diagnosing
problems. The system appends to the end of each file, so entries can be read in
chronological order from top to bottom. If this growth is not monitored and
limited, storage space will eventually be exhausted. This is where
newsyslog.conf(5) helps.
newsyslog(8) uses the concept of rotation: old data is removed and new data
is added to a fresh, empty file. You decide how much data you want to keep,
how often to rotate, etc., and newsyslog does the rest.
newsyslog(8) can decide to archive a file based on any of three criteria (illustrated in the sketch after this list):
- size
- elapsed time
- time of day
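For illustration, here is roughly how each trigger is expressed in a
newsyslog.conf(5) entry. The /var/log/example.log path is a made-up
placeholder; the fields are pathname, mode, count, size (in kilobytes),
when, and flags:

# rotate once the file exceeds 100 KB (size)
/var/log/example.log  644  7  100  *     Z
# rotate every 24 hours (elapsed time)
/var/log/example.log  644  7  *    24    Z
# rotate every day at 23:00 (time of day)
/var/log/example.log  644  7  *    $D23  Z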
The motivation
The problem I’m solving is not log files, but it is related: disk space
consumed by backup files. On a daily basis, the databases are dumped to disk
and then copied from the database server to my server at home. At present, I
include the date in the file name after rsyncing. I also tar up the file. The
existing solution looks a bit like this:
YYYYMMDD=`date "+%Y.%m.%d"`
FILES=`echo *.sql`

for i in $FILES
do
    tar -cvzf archive/`basename ${i} .sql`.${YYYYMMDD}.sql.tgz ${i}
done
The issue with this solution is removing older backups. I’m already backing
up the .sql files via Bacula; I keep these additional copies just because I
can. However, they need to be trimmed.
The newsyslog pattern
Here is what worked for me:
/home/dan/backups/old-backups/*.sql dan:dan 640 60 * $D23 GZB
The key point is the G flag. From man newsyslog.conf:
G     indicates that the specified logfile_name is a shell pattern,
      and that newsyslog(8) should archive all filenames matching
      that pattern using the other options on this line.  See
      glob(3) for details on syntax and matching rules.
This is exactly what I need. The Z flag requests compression. The B flag indicates the log file
has a special format; do not append an ASCII message to it.
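Reading the rest of that line against newsyslog.conf(5), the fields break
down like this (my annotations, not from the man page):

/home/dan/backups/old-backups/*.sql   shell pattern matching the files to rotate
dan:dan   owner and group for the rotated archives
640       mode (permissions) for the rotated archives
60        count: keep 60 old archives
*         size: ignore file size when deciding to rotate
$D23      when: rotate every day at 23:00
GZB       flags: glob, gzip compression, binary file (no message)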
I did have to do some fine-tuning, but over time it came out to just what I needed:
$ ls -lt | head -20
total 63929678
-rw-r--r--  1 dan  dan  5088        Oct 17 04:29 globals.sql
-rw-r--r--  1 dan  dan  3835195066  Oct 17 04:29 pentabarf_pgcon.sql
-rw-r--r--  1 dan  dan  731189913   Oct 17 04:26 pentabarf_bsdcan.sql
-rw-r--r--  1 dan  dan  1900659     Oct 17 04:26 pentabarf.sql
-rw-r--r--  1 dan  dan  623710100   Oct 17 04:26 openx.sql
-rw-r--r--  1 dan  dan  192664      Oct 17 04:24 fsphorum.sql
-rw-r--r--  1 dan  dan  4280026122  Oct 17 04:24 freshports.org.sql
-rw-r--r--  1 dan  dan  2170423     Oct 17 04:17 fpphorum.sql
-rw-r--r--  1 dan  dan  482262      Oct 17 04:17 bsdcert.sql
-rw-r-----  1 dan  dan  1319        Oct 16 04:29 globals.sql.0.gz
-rw-r-----  1 dan  dan  1496507511  Oct 16 04:29 pentabarf_pgcon.sql.0.gz
-rw-r-----  1 dan  dan  276787790   Oct 16 04:26 pentabarf_bsdcan.sql.0.gz
-rw-r-----  1 dan  dan  680104      Oct 16 04:26 pentabarf.sql.0.gz
-rw-r-----  1 dan  dan  60034865    Oct 16 04:26 openx.sql.0.gz
-rw-r-----  1 dan  dan  819085732   Oct 16 04:24 freshports.org.sql.0.gz
-rw-r-----  1 dan  dan  69310       Oct 16 04:24 fsphorum.sql.0.gz
-rw-r-----  1 dan  dan  113826      Oct 16 04:17 bsdcert.sql.0.gz
-rw-r-----  1 dan  dan  471781      Oct 16 04:17 fpphorum.sql.0.gz
-rw-r-----  1 dan  dan  1319        Oct 15 04:29 globals.sql.1.gz
This solution has been running since September 24. I like it.
What would you have done?
Great article and an interesting approach! I may adopt newsyslog for some of our stuff, too! We use a simple script I made to dump our databases out to a path. The script can be found here: http://gist.github.com/219233
The script can run via a cron job at your desired frequency. To prune old backups, we have another cron job that uses the find command:
0 2 * * * /usr/bin/find /path/to/backups -type f -mtime +30d -delete
If you wanted to be more specific about the names of the files being deleted, you could do:
0 2 * * * /usr/bin/find /path/to/backups -type f -mtime +30d -name 'mybackup*' -delete
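(Note: the d suffix on -mtime is FreeBSD find(1) syntax for days; GNU find expects plain -mtime +30.)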
I prefer to use a scripted approach.
#!/bin/sh
#
# mysql database backup
# author: Gianluca Sordiglioni
# date: 2002.12.31
#
# Dump all the databases using mysqldump.
#
# put login password in ~/.my.cnf:
#
# [client]
# user=root
# password=yourpassword
#
# Be sure to have enough space left in $BKTEMP and $BKPATH dirs!
#
BKPREF="mysql_backup_`hostname -s`"
BKNAME="$BKPREF.`date +%Y.%m.%d.at.%H.%M.%S`" # how to name your files
BKPATH="/usr/backup" # backup directory
BKFULL=$BKPATH/$BKNAME # backup full pathname
BKTEMP="/usr/tmp" # temporary directory
BKMAX=8 # number of backup files (days?) to keep
BKTOT=`ls $BKPATH/$BKPREF* | wc -l`
BKDEL=`expr $BKTOT - $BKMAX`
# Loop thru all databases and dump them.
echo "Begin MySQL backup at `date`"
mkdir $BKFULL
for db in $(/usr/local/bin/mysqlshow | cut -f2 -d"|" | cut -f2 | tail +4 | tail -r | tail +2)
do
    if [ $db != '..' ] && [ $db != '.' ]; then
        echo $db
        /usr/local/bin/mysqldump -c --add-drop-table --add-drop-database $db | gzip -9 > $BKFULL/$db.gz
    fi
done
# (Copy it offsite, if needed)
# Remove the old backup files we created
#
if [ $BKTOT -gt $BKMAX ]
then
    for file in $(ls -rt $BKPATH/$BKPREF* | head -n $BKDEL)
    do
        rm -rf $file
    done
fi
echo "End MySQL backup at `date`"
Thank you for posting that.
—
The Man Behind The Curtain