Currently I am only rsync-ing directories, like this:
* * * * * rsync -avz /var/www/public_html/images root@<remote-ip>:/var/www/public_html
So how do I rsync one single file, like /var/www/public_html/.htaccess?
You do it the same way as you would a directory, but you specify the full path to the filename as the source. In your example:
rsync -avz --progress /var/www/public_html/.htaccess root@<remote-ip>:/var/www/public_html/
As mentioned in the comments: since -a implies --recursive, one little typo can kick off a full directory tree transfer, so a more fool-proof approach might be to just use -vz, or replace -a with -lptgoD.
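Given that warning, it can be worth previewing the transfer first; rsync's -n (--dry-run) option lists what would be sent without copying anything (same paths as above):
rsync -vzn /var/www/public_html/.htaccess root@<remote-ip>:/var/www/public_html/
If the output shows just the one file, drop the -n and run it for real.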
Basic syntax
rsync options source destination
Example
rsync -az /var/www/public_html/filename root@<remote-ip>:/var/www/public_html
Michael Place's answer works great if, relative to the root directory for both the source and target, all of the directories in the file's path already exist.
But what if you want to sync a file with this source path:
/source-root/a/b/file
to a file with the following target path:
/target-root/a/b/file
and the directories a and b don't exist?
You need to run an rsync command like the following:
rsync -r --include="/a/" --include="/a/b/" --include="/a/b/file" --exclude="*" [source] [target]
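Alternatively (a sketch, not from the answer above): rsync's -R (--relative) option can create the implied directories for you. The /./ in the source path marks where the reproduced path begins:
rsync -aR /source-root/./a/b/file [target]
This recreates a/b/file under the target root without any include/exclude juggling.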
To date, two of the answers aren't quite right (they'll get more than one file) and the other isn't as simple as it could be. Here's a simpler answer, IMO.
The following gets exactly one file, but you have to create the dest directory with mkdir. This is probably the fastest option:
mkdir -p ./local/path/to/file
rsync user@remote:/remote/path/to/file/ -zarv --include "filename" --exclude "*" ./local/path/to/file/
If there is only one instance of file in /remote/path, rsync can create the directories for you if you do the following. This will probably take a little more time because it searches more directories. Plus it will create empty directories for directories in /remote/path that are not in ./local.
cd ./local
rsync user@remote:/remote/path -zarv --include "*/" --include "filename" --exclude "*" .
Keep in mind that the order of --include and --exclude matters.
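For example, with a hypothetical flat src/ directory containing filename among other files, the first matching rule wins:
rsync -zarv --include "filename" --exclude "*" src/ dest/   # filename is copied; the include matches first
rsync -zarv --exclude "*" --include "filename" src/ dest/   # nothing is copied; "*" matches everything first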
Aside from the good answers above, note that rsync expects the destination to be a directory and not a filename. Suppose you are copying the word-list file words to /tmp; don't do this:
rsync -az /usr/share/dict/words /tmp/words # does not work
cp is tolerant of this form, but rsync isn't - it will fail because it doesn't see a directory at /tmp/words. Snip off the destination filename and it works:
rsync -az /usr/share/dict/words /tmp
Note that rsync won't let you change the filename during the copy, and cp will.
Related
Hello, I am trying to copy all files from my Documents directory to a backup directory that has a timestamp. So I create a folder called bk$(timestamp of the folder) and try to copy the files from the Documents directory into that newly created, uniquely named folder. This will run from a crontab: each time the backup kicks in, it should create a new directory uniquely identified by its timestamp. For some reason I cannot get cp or cpio -mdp to work. Someone mentioned I could use the $PATH variable, which seems promising; if that is the solution, could someone help me make it work?
bkdest=home/user/backup/
bksource="/home/user/Documents/"
export PATH=/$bkdest:$PATH
mkdir /"$bkdest"bk.$(date +%Y_%m_%d_%H_%M_%S)
cp /"$bksource"* $PATH
Here is my other approach, which I also tried to make work:
cp $bksource * ls | tail -l | $PATH
I could have gone with ctime, but unfortunately that does not work with the folder creation date.
This was my approach, but it finds the latest created folder, not file:
find $HOME -type d -daystart -ctime 0
If someone could please help me copy to that new folder, I would really appreciate it. Thank you!
Store the target name in a variable:
bkdest=/home/user/backup
bksource=/home/user/Documents
target=${bkdest}/bk.$(date +%Y_%m_%d_%H_%M_%S)
mkdir -p "$target"
cp "${bksource}"/* "${target}"/
Note I tidied up your use of variables a little.
Also, this won't copy subdirectories. For that you need to use cp -R. When I do backups I prefer to use rsync.
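For instance, a minimal rsync equivalent of the cp line above (same variables; the trailing slashes make rsync copy the contents of Documents, subdirectories included):
rsync -a "${bksource}"/ "${target}"/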
I did not fully understand your approach or what exactly you want to do, but here goes.
CP Approach
You should not use cp for backups; rsync is far more suitable for this. But if for some reason you really need to use cp, you can use the following script.
#!/bin/bash
BKP_DIR=/tmp/bkp
BKP_SRC=/tmp/foo
SNAPSHOT=${BKP_DIR}/$(date +%F.%H-%M-%S.%N)
mkdir -p ${SNAPSHOT}
cp -r ${BKP_SRC}/* ${SNAPSHOT}
Rsync Approach
No big change here.
#!/bin/bash
BKP_DIR=/tmp/bkp
BKP_SRC=/tmp/foo
SNAPSHOT=${BKP_DIR}/$(date +%F.%H-%M-%S.%N)
rsync -a ${BKP_SRC}/ ${SNAPSHOT}/
Improved Rsync Approach (RECOMMENDED)
#!/bin/bash
BKP_DIR=/tmp/bkp
BKP_SRC=/tmp/foo
SNAPSHOT=${BKP_DIR}/$(date +%F.%H-%M-%S.%N)
LATEST=${BKP_DIR}/latest
rsync \
  --archive \
  --delete \
  --backup \
  --backup-dir=${SNAPSHOT} \
  --log-file=${BKP_DIR}/rsync.log \
  ${BKP_SRC}/ ${LATEST}/
EXPLAINING: --archive plus --delete will make sure that $LATEST is a perfect copy of $BKP_SRC; files that no longer exist in $BKP_SRC will be deleted from $LATEST. The --archive option also ensures that permissions and owners are maintained, symlinks are copied as symlinks, and more (see man rsync for more information).
The --backup plus --backup-dir options create a backup directory to hold differential files. In other words, all files that were deleted or modified since the last backup are put there, so you do not lose them when they are removed from $LATEST.
--log-file is optional, but it is always good to keep logs for debugging purposes.
At the end you have an incremental backup.
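To run it unattended, you could drop it into cron; a sketch, assuming the script above is saved as /usr/local/bin/backup.sh (a hypothetical path) and made executable:
0 2 * * * /usr/local/bin/backup.sh
This runs the snapshot nightly at 02:00.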
I have a directory containing a set of subdirectories and files. I need to recursively copy all the content of this directory to all the subdirectories of another directory, also recursively.
How do I achieve this, preferably without using a script and only with the cp command?
You can write this in a script but you don't have to. Just write it line by line in the terminal:
# $TARGET is the directory containing subdirectories where you want to STORE the copies
# $SOURCE is the directory containing the subdirectories you want to COPY
for dir in "$TARGET"/*/; do
  cp -r "$SOURCE"/* "$dir"
done
Only uses cp and runs on both bash and zsh.
You can't. cp can copy from multiple sources, but will only copy to a single destination. You need to arrange to invoke cp multiple times - once per destination - using, as you say, a loop or some other tool.
tar cf - * | ( cd /target; tar xfp -)
The first part of the command, before the pipe, instructs tar to create an archive of everything in the current directory and write it to standard output (the - in place of a file name frequently indicates stdout).
The commands within parentheses cause the shell to change directory to the target directory and untar data from standard input. Since the cd and tar commands run in a subshell created by the parentheses, the cd does not affect the invoking shell.
The -p option in the tar extraction command directs tar to preserve permission and ownership information, if possible given the user executing the command. If you are running the command as superuser, this option is turned on by default and can be omitted.
You can also use the following command, but it seems to be quite a bit slower than tar:
cp -a * /target
I am trying to copy a filesystem for a device I am programming for. After spending a long time trying to figure out why the filesystem I was installing wasn't working, I found out that cp didn't get the job done. I used du -s to check the sizes of the original filesystem and of the one I copied with cp -r; as it turns out, they differ by about 150 bytes.
Something is telling me that symbolic links or some sort of kernel objects aren't being copied correctly.
Is it possible to copy a folder/file system exactly? If so how would I go about it?
Try doing this the straightforward way:
cp -a src target
From man cp:
-a, --archive
same as -dR --preserve=all
It preserves permissions, symlinks, and more.
I tried all of the code here on my Linux machine. The rsync approach proposed by @seanmcl turned out to be the right one, while the others failed to keep owners and/or some special files, or ended with permission denied. The exact code is:
$ sudo rsync -aczvAXHS --progress /var/www/html /var/www/backup
Just remember to use only the directory name, and not to put a slash (/) or a wildcard (/*) at the end of the source and target names; otherwise the hidden files directly below the source are not copied.
Another popular option is to use tar c source | (cd target && tar x). See this linuxdevcenter.com article.
The most accurate way I know of copying files is with cpio:
cd /path/to/source
find . -xdev -print0 | cpio -oa0V | (cd /path/to/target && cpio -imV)
Not really easy to use, but this is very precise, preserving timestamps, owners, permissions, special files.
Rsync is the best way to copy a filesystem. There are myriad options that let you control exactly what is copied.
This is what I do, for example to duplicate directory A -> B:
$ mkdir B
$ cd A
$ cp -a ./ ../B
I want a simple and working (multiple) exclude option inside my rsync command. Let's say I want to exclude a file and a directory:
/var/www/html/test.txt
/var/www/html/images/
What I did is:
rsync -avz --exclude="/var/www/html/test.txt" --exclude="/var/www/html/images/" /var/www/html root#xx.xx.xx.xx:/var/www
or
rsync -avz --exclude=/var/www/html/test.txt --exclude=/var/www/html/images/ /var/www/html root@xx.xx.xx.xx:/var/www
or
rsync -avz --exclude /var/www/html/test.txt --exclude /var/www/html/images/ /var/www/html root@xx.xx.xx.xx:/var/www
However, the --exclude option is NOT working!
Everything gets transferred!
How do I do this in a simple format like the above?
Note: I also don't want to use an external exclusion-list file. I just want it all in one simple command.
I solved it myself after reading up and testing many times. The real problem was my misunderstanding of the --exclude option's usage format.
I don't know how others do it, but I just found out that:
the --exclude path CANNOT be the full absolute path!
I was using paths like --exclude /var/www/html/text.txt, which is why it did NOT work. So I used them like this:
--exclude text.txt --exclude images/
.. and it WORKS!
I personally like the --exclude={text.txt,images/} format.
Remember that with rsync, all exclude (or include) paths beginning with / are anchored to the root of the transfer, which in your example will be the /var/www/html directory! So if you specify /text.txt, it matches only the file at the root of your transfer directory, not files deeper in the tree. You can find more info and examples in the FILTER RULES section of man rsync.
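Applied to your command, an anchored version might look like this (note the trailing slash on the source, which makes the contents of /var/www/html the root of the transfer, so the anchored patterns match as intended):
rsync -avz --exclude=/test.txt --exclude=/images/ /var/www/html/ root@xx.xx.xx.xx:/var/www/html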
I want to create a script that copies my project and makes a zip archive of it. I want to exclude all folders named .svn in all subdirectories. Any suggestions?
I'd use rsync's FILTER RULES for this:
Create an .rsync-filter file (in the origin directory) containing, e.g.
- .svn/
Now run rsync like an exalted copy:
rsync -aFF origin/ destination/
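Since the end goal is a zip archive, you could then zip the filtered copy (assuming the standard zip utility; project.zip is a name of your choosing):
zip -r project.zip destination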
You can do this using rsync. Although this is designed to synchronise directories across servers, it can also be used to copy directories on a single machine.
rsync has a --exclude option to exclude files and directories by pattern. See http://www.samba.org/ftp/rsync/rsync.html for help and examples.
Just call the zip utility on your project’s folder and use the -r option for recursive plus the -x option to exclude files / folders by pattern.
zip -r target-filename.zip source-folder -x \*exclude-pattern\*
exclude-pattern in your case would be .svn
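For instance, with a hypothetical project folder named myproject:
zip -r myproject.zip myproject -x \*.svn\*
This skips every path containing .svn.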
See also man zip