How to back up a doc root using tar - Linux

I have a question about performing routine backups on our web server. We are currently running Apache, and we want to back up our doc root. I have a shell script that runs nightly, and the command it uses is: sudo tar cvzf filename targetFilename.
My question is: is it safe to run a tar cvzf command on a doc root while files are being read, written, and created? Is there a better way to do this? Is it a good idea to shut down Apache while creating the tar file?
I've done some research and couldn't find a straightforward answer.
Thank you for your help!

It's safe. The system prevents two processes from writing the same file at the same time because the file is locked, and the tar backup is only reading the files while it creates a new archive, so it's OK. That said, shutting down the web server first is the best way to back up the files, since it guarantees nothing changes while the archive is being written.
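For what it's worth, a minimal sketch of such a nightly job, assuming the doc root lives at /var/www/html, Apache runs as the apache2 systemd service, and /backups already exists (all of these are assumptions; stopping Apache is optional but gives a consistent snapshot):
#!/bin/bash
# Hypothetical nightly docroot backup; paths and the service name are assumptions.
DOCROOT="/var/www/html"
DEST="/backups/docroot_$(date +%Y%m%d).tar.gz"
systemctl stop apache2        # optional: stop Apache so nothing changes mid-archive
tar -czf "$DEST" -C "$(dirname "$DOCROOT")" "$(basename "$DOCROOT")"
systemctl start apache2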

Related

Create new TAR file in current directory to a new directory without using "mv" (shell script)

I'm currently setting up a backup manager to automatically archive directories from a web server. I'm searching for a way to create a new TAR file of the current directory in a new (different) directory without using mv after the archiving process. See my commands:
dcreate=$(date +%Y_%d_%m)
tar -cvpzf backup_$dcreate.tar.gz plugins/folder_to_archive/
This works fine, but I'm struggling now with how to move the archive to a new directory right after the archiving process finishes, for example:
plugins/plugin_name/ to plugins/backups/
Any help appreciated.
Regards
The -f option of tar sets the destination file for the archive; it can be anywhere you want, i.e., you can change
-cvpzf "backup_$dcreate.tar.gz"
to
-cvpzf "plugins/backups/backup_$dcreate.tar.gz"
if you want your new archive to be created in plugins/backups/
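Putting it together with the date variable from the question, the whole job might look something like this (creating the backups directory first is an assumption; drop the mkdir if it already exists):
dcreate=$(date +%Y_%d_%m)
mkdir -p plugins/backups    # make sure the target directory exists
tar -cvpzf "plugins/backups/backup_$dcreate.tar.gz" plugins/folder_to_archive/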

bash: sending compressed files while compressing others

I have a simple bash script to download a lot of log files over a pretty slow network. I can compress the logs on the remote side. Basically it's:
ssh: compress whole directory
scp: download archive
ssh: rm archive
Using lzma gives great compression, but compressing the whole directory is slow. Is there any tool, or an easy way to write a script, that allows me to compress a single file (or a bunch of files) and start downloading them while other files/chunks are still being compressed? I was thinking about launching compression for every single file in the background and, in a loop, downloading/rsyncing the files with the correct extension, but then I don't know how to check whether a compression process has finished its work.
The easiest way would be to compress them in transit using ssh -C. However, if you have a large number of small files, you are better off tarring and gzipping/bzipping the whole directory at once using tar zcf or tar jcf. You may be able to start downloading the file while it's still being written, though I haven't tried it.
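For example, something along these lines (user@server and the log path are placeholders):
# Compress only in transit (ssh -C) and stream a plain tar to the local side:
ssh -C user@server 'tar cf - /var/log/myapp' > logs.tar
# Or let tar gzip on the remote end and just redirect the stream:
ssh user@server 'tar zcf - /var/log/myapp' > logs.tar.gz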
The best solution I found is here. In my case it was:
ssh -T user@example.com 'tar ... | lzma -5 -' > big.compressed
Try sshing into your server, going to the log directory, and using GNU Parallel to compress all the logs in parallel; as each one is compressed, change its name to add the .done suffix so you can rsync them. On the server you would run:
cd <LOG DIRECTORY>
rm ALL_COMPRESSED.marker
parallel 'lzma {}; mv {}.lzma {}.lzma.done' ::: *.log
touch ALL_COMPRESSED.marker
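On your local machine, a rough sketch of the matching download loop might be (user@server, the remote log path and the 30-second poll interval are all assumptions):
#!/bin/bash
# Keep pulling finished (.done) archives until the marker file shows up locally.
until [ -f ./logs/ALL_COMPRESSED.marker ]; do
  rsync -av --include='*.lzma.done' --include='ALL_COMPRESSED.marker' \
        --exclude='*' user@server:/var/log/myapp/ ./logs/
  sleep 30
done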

How to take a backup of a Linux system with a shell script?

I am facing a problem taking a shell-script backup of my system (Linux). Please help me understand how to take a shell-script backup of my system.
Thanks in advance
I prefer rsync for backing up my CentOS boxes. It is very good at both file transfer and file synchronization and offers tons of options for compression and such. This is a modified one-line example from my BASH backup script that backs up all of my media onto a temporarily mounted external hard drive:
rsync -avP --stats /media/* /mnt/ntfs/Media 1>/log/stats.info 2>>log/bkup.err
You can substitute /media/* with the directories you want to back up; the simplest would be /*, which backs up everything.
You can also use the --exclude directive to exclude directories.
The other simple method is a plain tar archive of all the important system files, something like:
tar cvfj /root/sysBkup.bz2 --exclude=/root/sysBkup.bz2 /etc /var /root /sys
Then move that backup to a remote share with the next line of the bash script. But I would recommend getting familiar with rsync; it is a very handy backup utility.
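A sketch of how those two steps might sit together in one script (the remote host, log paths and the directory list are assumptions; adjust them to your system):
#!/bin/bash
# Hypothetical nightly backup: archive key directories, then push the archive to a remote share.
tar cvjf /root/sysBkup.bz2 --exclude=/root/sysBkup.bz2 /etc /var /root
rsync -avP /root/sysBkup.bz2 user@backuphost:/backups/ 1>>/var/log/bkup.info 2>>/var/log/bkup.err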

How to do a backup of files using the terminal?

I've already done a backup of my database, using mysqldump like this:
mysqldump -h localhost -u dbUsername -p dbDatabase > backup.sql
After that, the file is in a location outside public access on my server, ready for download.
How can I do something like that for files? I've tried to google it, but I get all kinds of results except that.
I need to tell the server (running Ubuntu) to back up all the files inside folder X and put them into a zip file.
Thanks for your help.
You can use tar for creating backups. For a full system backup, run
tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz /
to create a gzipped tar file of your whole system. You might need additional excludes like
--exclude=/proc --exclude=/sys --exclude=/dev/pts.
For a single folder:
tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz /your/folder
If you are outside of the single folder you want to back up, the --exclude=/backup.tar.gz isn't needed.
More details are here, for example (you can do it over the network, split the archive, etc.).
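Since the question explicitly asks for a zip file: if the zip utility is installed, a direct equivalent for a single folder would be something like this (paths are examples):
# Recursively zip folder X into an archive kept outside the public web root.
zip -r /var/backups/siteX_$(date +%F).zip /var/www/siteX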

tar a folder into multiple files over SSH

Here is the thing:
I have a server with 85 GB of total disk space, and right now I have a folder of about 50 GB containing over 60,000 files.
Now I want to download these files to my localhost, and in order to do that I need to tar the folder, but I can't tar the whole folder because of the disk-space limitation.
So I'm looking for a way to archive the folder into two 25 GB tar files, like part1.tar and part2.tar, but when the first part is done it should pause and ask for something (the next part's name, permission, anything) so I can transfer the first part to another server and then continue archiving part2. Or a way to tar half of the folder, say the first 30,000 files, and then tar the rest.
Any idea? Thanks in advance
One of the earliest applications of rsync was to implement mirroring or backup for multiple Unix clients to a central Unix server using rsync/ssh and standard Unix accounts.
I use rsync to move compressed (and uncompressed) files between servers.
I think the command should be something like this
rsync -av host::src /dest
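The host::src form is the rsync-daemon syntax; over plain SSH the same idea would look something like this (host and paths are placeholders):
# Pull the folder incrementally; re-running resumes/updates instead of starting over.
rsync -avP user@host:/path/to/bigfolder/ /local/destination/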
The rsync solution was good enough, but I found the answer to the main question:
tar -c -M --tape-length=30000000 --file=filename.tar foldername
After reaching about 29 GB, tar asks you to change the tape (in my case, that meant transferring the first part to the other server and removing it); hit Enter to continue. Additionally, it is possible to give the next part its own name:
Prepare volume #2 for `filename.tar' and hit return:
n filename2.tar
Because it is going to take time, I suggest using a screen session over SSH:
http://thelinuxnoob.com/linux/screen-in-ssh/
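If you want to automate the "transfer the finished part, then continue" step instead of answering the prompt by hand, GNU tar can run a script between volumes via -F/--new-volume-script; it exports TAR_ARCHIVE, TAR_VOLUME and TAR_FD to that script. A rough sketch, assuming user@otherserver:/backup/ is the host with free space and next-volume.sh is the (hypothetical) helper:
#!/bin/bash
# next-volume.sh -- run by GNU tar each time a volume fills up.
# TAR_ARCHIVE is the volume just written, TAR_VOLUME is the number of the
# volume about to start, and TAR_FD is where tar reads the next name from.
scp "$TAR_ARCHIVE" user@otherserver:/backup/ && rm -f "$TAR_ARCHIVE"
echo "filename${TAR_VOLUME}.tar" >&$TAR_FD
It would be invoked roughly as:
tar -c -M --tape-length=30000000 -F ./next-volume.sh -f filename.tar foldername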
