drive to drive backup
Right after I finished adding a new drive, I had to copy the contents of an existing drive to the new one. This was pretty easy. The more I learn about tar, the more I like it.
tar is your friend
First, I mounted the destination drive as /backup:
# mount /dev/wd1c /backup
I wanted to back up all of wd0, which consists of three slices: /, /var, and /usr.
xymix (someone on IRC) told me about these two methods:
cd /SRC;dump 0f - . | (cd /DST; restore -rf - )
cd /SRC;tar -cf - . | (cd /DST; tar xpf - )
So here’s what I did for the three slices I wanted to backup:
(cd / ; tar -cvlf - .) | (cd /backup/ ; tar xpf -)
(cd /var ; tar -cvlf - .) | (cd /backup/var ; tar xpf -)
(cd /usr ; tar -cvlf - .) | (cd /backup/usr ; tar xpf -)
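Before running this on real slices, the same pipeline can be rehearsed on scratch directories. A minimal sketch; the /tmp paths are just examples, not from the original:

```shell
# Rehearse the tar pipeline on scratch directories (paths are examples).
mkdir -p /tmp/src /tmp/dst
echo "hello" > /tmp/src/file.txt
# c creates an archive on stdout (f -), x extracts from stdin,
# p preserves permissions. The article's -l flag tells BSD tar to stay
# on one filesystem; it is omitted here since the scratch dirs share one.
(cd /tmp/src ; tar -cf - .) | (cd /tmp/dst ; tar xpf -)
```

Note that p preserves permissions on extract, but you need to run the real copies as root for file ownership to be preserved as well.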
I have used this method in the past with great success.
But there are some performance issues with this method. When you use NFS to move a lot of data, performance is asymmetrical: writing to NFS is much slower than reading from it.
When you write a block to an NFS server, the next write operation stalls while you wait for an ack from the server confirming that the previous block was successfully written. When you read from NFS, there is no wait for an ack, so things go much faster. Reading can be 2 to 10 times faster than writing.
As this applies to producing a backup: have the recipient machine mount the drives of the machine to be backed up, then run the commands on the recipient. This has an added advantage: you are not consuming as much CPU on the machine being backed up, so if it is a production server, for example, the impact of the backup is reduced. If you compress the backup, keeping the compression process off the server makes a huge difference under load.
An added advantage of pulling the files rather than pushing them is that you can now build a "backup server" that keeps the backup scripts on a single central machine, so that maintenance and scheduling are simplified.
Also, NFS is how I used to do this… A more secure method is to move the data with ssh and not run NFS at all. This has the disadvantage of running processes on the machine being backed up, but if you compress on the backup server, you can still keep most of the workload off the machine being archived.
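As a sketch of the ssh approach, the script below pulls /var from a hypothetical host named "webserver" and compresses it on the backup server; the hostname and all paths are assumptions for illustration, not from the original:

```shell
# Write a small pull-backup script; "webserver" and all paths are examples.
cat > /tmp/pull-var-backup.sh <<'EOF'
#!/bin/sh
# tar runs on the remote machine, but gzip runs here on the backup server,
# keeping the compression load off the machine being archived.
ssh webserver 'cd /var && tar -cf - .' | gzip > /backup/webserver-var.tar.gz
# Sanity-check the archive without extracting it
gzip -t /backup/webserver-var.tar.gz
EOF
chmod +x /tmp/pull-var-backup.sh
```

For unattended runs you would pair this with ssh key authentication, so no password prompt blocks the job.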
A useful set of tools to do ssh based backups (to disk or to files) is contained in the ports/sysutils/flexbackup package. Flexbackup is not the end-all backup script, but it is very useful in reducing the time needed to automate a backup server. An interesting feature of flexbackup is it uses "levels" in a similar manner to dump, but it supports dump, tar, afio, cpio and zip. So you can have your dump level advantages and still use tar if that is what you prefer.
Also, keep an eye out for snapshot support, which will add some new ways of backing up your data. Hopefully snapshot support will find its way into a release soon…
Backup scheduler
I’m a FreeBSD newbie, and I’m looking for a step-by-step overview of the best way to back up one FreeBSD machine to another over a network, preferably with a cron job so it runs overnight at a specified time. I’ve been told this can be done, but my problem is that I know practically nothing about tar. I have done a file transfer from one FreeBSD box to another using FTP, so I’m thinking you could set up a cron job to tar what you want, and then another command to FTP it to the other box? Or if someone has a better way to do it, I’m open to that too.
Many thanks in advance,
Use NFS and either tar, cpio, dd, pax, or whatnot.
NFS-mount a remote directory from the destination computer. Then use tar, cpio, or whatever to copy the data from one directory to the NFS directory. Unmount the NFS share when you’re done.
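A minimal sketch of that workflow as a nightly script; the hostname "webserver", the export path, and the mount point are all examples, and the source box must export the directory via NFS:

```shell
# Nightly pull backup over NFS; hostnames and paths are examples.
cat > /tmp/nightly-backup.sh <<'EOF'
#!/bin/sh
# Mount the remote export, copy it with a tar pipeline, then unmount.
mount webserver:/usr/local/www /mnt/backup-src
(cd /mnt/backup-src ; tar -cf - .) | (cd /backup/webserver ; tar xpf -)
umount /mnt/backup-src
EOF
chmod +x /tmp/nightly-backup.sh
```

Then schedule it from root's crontab, e.g. `0 3 * * * /tmp/nightly-backup.sh` to run it at 3 a.m. every night.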
We use this to backup our webservers. Much easier and more convenient than using tape or whatnot.