I want to back up my NAS onto multiple DVDs. What I had in mind was a script that does the following:
-Create a folder for each DVD
-Copy the files and file structure into the DVD folders
-Stop / go to the next DVD folder when the current DVD folder is full, i.e. the trigger is 4 GByte (which makes the calculation easy for the example)
I have a data source with 10 GB of data, so this will be 3 DVDs. The script first creates three folders: DVD-1, DVD-2 and DVD-3. Next, the copy starts by putting 4 GB into the DVD-1 folder. After that, the remaining files must go into DVD-2 and DVD-3.
As far as I know, rsync and cp don't bother with calculating this. I know it is an option to do this using archives like zip, tar or gz, but at first I want to try it with unpacked files.
Is all of the above possible with standard Linux bash commands, or is it insane?
No, there isn't any standard tool that does this out of the box, but it's pretty simple to code up, and there are a few projects that do it:
https://unix.stackexchange.com/questions/18628/generating-sets-of-files-that-fit-on-a-given-media-size-for-tar-t
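If you want to roll your own, here is a minimal sketch (assuming GNU find and stat; the source and staging paths are placeholders): it walks the tree, keeps a running total, and starts a new DVD-N folder whenever the next file would push the total past the 4 GB trigger.
#!/bin/bash
src=/path/to/nas/share                 # placeholder: the data to back up
dest=/path/to/staging                  # placeholder: where DVD-1, DVD-2, ... are created
limit=$((4 * 1000 * 1000 * 1000))      # the 4 GB trigger from the question; adjust to taste

disc=1
used=0
mkdir -p "$dest/DVD-$disc"

find "$src" -type f -print0 | while IFS= read -r -d '' file; do
    size=$(stat -c %s "$file")
    # start a new DVD folder once this file would exceed the limit
    if (( used + size > limit )); then
        disc=$((disc + 1))
        used=0
        mkdir -p "$dest/DVD-$disc"
    fi
    rel=${file#"$src"/}                # path relative to the source
    mkdir -p "$dest/DVD-$disc/$(dirname "$rel")"
    cp -p "$file" "$dest/DVD-$disc/$rel"
    used=$((used + size))
done
Note that this creates the DVD-N folders as it goes rather than all three up front, and a single file larger than the limit will still overflow its folder.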
Related
I have multiple directories of files and I need to copy them into a specific folder using the terminal. How do I do that? I also have access to the GUI of the system, as all of this is being done in a virtual machine over SSH.
According to its manpage, cp is capable of copying multiple source files to one output directory.
The syntax is as follows:
cp /dir1/file1 /dir1/file2 /dir2/file1_2 /outputdir/
Using this command, you can copy files from multiple directories (/dir1/ and /dir2/ in this example) to one output directory (/outputdir/).
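If some of the sources are directories rather than plain files, add the recursive flag (again, the paths are just illustrative):
cp -r /dir1 /dir2 /outputdir/
This copies the directories themselves, so you end up with /outputdir/dir1 and /outputdir/dir2.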
I am trying to create a shell script to copy folders, and the files within those folders, from one Linux machine to another Linux machine. After copying, I would like to delete only the files that were copied. I want to retain the folder structure as is.
E.g.
Machine X has a main folder named F with subfolders A, B and C, each of which has 10 files.
I would like to make a copy such that machine Y has a folder named F with subfolders A, B and C containing the same files. Once the copy of all folders and files is complete, it should delete all the files in the source folder but retain the folders.
The code below is untested. Use with care and back up first.
Something like this should get you started:
#!/bin/bash
srcdir=...
set -ex
rsync \
--verbose \
--recursive \
"${srcdir}/" \
user@host:/dstdir/
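# delete only the files that were just copied; -type f leaves the directory tree in place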
find "${srcdir}" -type f -delete
Set the srcdir variable and the remote argument to rsync to taste.
The rsync options are just from memory, so they may need tweaking. Read the documentation, especially options regarding deletion, backup, permissions and links.
(I'd rather not answer questions that show no signs of effort, but my fingers were itching, so there you go.)
scp the files, check the exit code of scp, and then delete the files locally.
Something like scp files user@remotehost:/path/ && rm files
If scp fails, the second part of the command won't execute.
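Applied to the layout in the earlier question, a minimal sketch (host and paths are placeholders) that copies the tree and then deletes only the files on success would be:
scp -r /path/to/F user@remotehost:/dstdir/ && find /path/to/F -type f -delete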
I have a tar archive on an NTFS drive on a Windows machine; it contains a folder whose files reside on a drive on my Linux machine. I try to update the archive from a bash shell script on my Linux machine with the -u (--update) tar option, so that only new versions of archive members are appended to the archive. However, due to the "time skew" between file times on the two filesystems, tar appends ALL the files in the folder to the archive, even if the folder does not contain any new versions of files at all.
So the problem is: how do I add to an archive on machine B only new versions of files from a folder on machine A, when there is time skew between the machines?
Is there a way to solve this so that the mtimes of individual files in the archive are preserved or changed only insignificantly (e.g. adjusted 10 minutes ahead to negate the time skew)? This could probably be accomplished by calling tar individually to append each file (see the sketch below), but is there a more optimal solution?
Maybe there is a way to change the mtime individually for each file when it is added to the archive? The --after-date option, for appending only files modified after a certain date, apparently is not quite a suitable filter for this task.
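For reference, a rough sketch of that per-file approach, filtering against a timestamp file with an explicit skew allowance instead of tar's own comparison (the paths, the stamp file and the 10-minute skew are all assumptions):
skew=600                               # assumed clock skew to tolerate, in seconds
stamp=/path/to/last-run.stamp          # touched after each successful run
limit=$(( $(stat -c %Y "$stamp") + skew ))
find folder -type f -print0 | while IFS= read -r -d '' f; do
    if [ "$(stat -c %Y "$f")" -gt "$limit" ]; then
        tar -rf archive.tar "$f"       # -r appends without tar's own mtime check
    fi
done
touch "$stamp"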
Here is the thing:
I have a server with 85 GB of total disk space, and right now I have a folder of about 50 GB containing over 60,000 files.
Now I want to download these files to my localhost, and in order to do that I need to tar the folder, but I can't tar the whole folder because of the disk space limitation.
So I'm looking for a way to archive the folder into two 25 GB tar files, like part1.tar and part2.tar, but when the first part is done it should wait and ask for something like the next part's name or permission to continue, so I can transfer the first part to another server and then continue archiving part2. Or a way to tar half of the folder, like the first 30,000 files, and then tar the rest.
Any ideas? Thanks in advance.
One of the earliest applications of rsync was to implement mirroring or backup for multiple Unix clients to a central Unix server using rsync/ssh and standard Unix accounts.
I use rsync to move compressed (and uncompressed) files between servers.
I think the command should be something like this:
rsync -av host::src /dest
The rsync solution was good enough, but I found the solution to the main question:
tar -c -M --tape-length=30000000 --file=filename.tar foldername
After reaching 29 GB you will need to change the tape (in my case, transferring the first part and removing it) and hit Enter to continue. Additionally, it is possible to give the next part a name:
Prepare volume #2 for `filename.tar' and hit return:
n filename2.tar
Because it is going to take time, I suggest using a screen session over SSH:
http://thelinuxnoob.com/linux/screen-in-ssh/
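For example (the session name is arbitrary):
screen -S backup     # start a named session and run the tar command inside it
screen -r backup     # reattach after detaching with Ctrl-a d or after a dropped connection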
I have a nightly backup script that makes a backup of any files that have been modified on one server and then syncs them across to our backup server.
/var/backups/backup-2011-04-02/backuped/ (the backed-up files and folders)
The format above is the nightly incremental backup, which copies all the modified files and folders into a date-stamped folder with a backuped folder underneath.
I'm thinking of a script which would run after the backup script and merge all the files in /var/backups/backup-2011-04-02/backuped/ into /var/www/live/documents.
So in theory I need to merge a number of different folders from the backup into the live www folder on the backup server, in the right date order.
So what's the best way to go about this script?
You could run rsync on each backup directory to the destination, in order of creation:
$ for f in `ls -tr /var/backups`; do rsync -aL "/var/backups/$f" /var/www/live/documents/; done
Of course you can put this line in a nightly cron job. The only thing to look out for is that the line above will choke if the filenames in your backup directory have spaces in them, but it looks like they don't, so you should be OK.
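A space-safe variant (assuming the backup directories are named backup-YYYY-MM-DD, so the shell glob already sorts them oldest first) that merges the contents of each backuped folder directly into the destination:
for d in /var/backups/backup-*/backuped/; do rsync -aL "$d" /var/www/live/documents/; done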