How to copy files to a timestamped, auto-generated folder? - linux

Hello, I am trying to copy all files from the Documents directory to a backup directory stamped with the current time. I create a folder called bk.$(timestamp) and then try to copy the files from Documents into that newly created, uniquely named folder. This will run from a crontab, so each time the backup kicks in it creates a new directory uniquely identified by its timestamp. For some reason I cannot get cp or cpio -mdp to work. Someone mentioned I could use the $PATH variable, which seems promising; if that is the solution, I would appreciate help making it work.
bkdest=home/user/backup/
bksource="/home/user/Documents/"
export PATH=/$bkdest:$PATH
mkdir /"$bkdest"bk.$(date +%Y_%m_%d_%H_%M_%S)
cp /"$bksource"* $PATH
My other approach, which I also tried and could not get to work:
cp $bksource * ls | tail -l | $PATH
I could have gone with ctime, but unfortunately that does not work with the folder creation date.
This was my approach for finding the latest created folder (not file):
find $HOME -type d -daystart -ctime 0
If someone could please help me copy to that new folder, I would really appreciate it. Thank you!

Store the target name in a variable:
bkdest=/home/user/backup
bksource=/home/user/Documents
target=${bkdest}/bk.$(date +%Y_%m_%d_%H_%M_%S)
mkdir -p "$target"
cp "${bksource}"/* "${target}"/
Note that I tidied up your use of variables a little, and quoted them so paths with spaces survive.
Also, this won't copy subdirectories; for that you need cp -R. When I do backups, I prefer to use rsync.
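Since the backup is meant to run from cron, here is a minimal sketch of how the pieces might fit together as a standalone script plus a crontab entry. The script location /home/user/bin/backup.sh and the 02:00 schedule are assumptions, not from the question:
#!/bin/bash
# backup.sh - copy Documents into a fresh timestamped folder (sketch)
bkdest=/home/user/backup
bksource=/home/user/Documents
target=${bkdest}/bk.$(date +%Y_%m_%d_%H_%M_%S)   # unique name per run
mkdir -p "$target"
cp -R "${bksource}"/* "$target"/                 # -R picks up subdirectories
A matching crontab entry (crontab -e) to run it nightly at 02:00:
0 2 * * * /home/user/bin/backup.sh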

I did not fully understand your approach or what exactly you want to do, but here it goes.
CP Approach
You should not use cp for backups; rsync is far more suitable for this. But if for some reason you really need to use cp, you can use the following script.
#!/bin/bash
BKP_DIR=/tmp/bkp
BKP_SRC=/tmp/foo
SNAPSHOT=${BKP_DIR}/$(date +%F.%H-%M-%S.%N)   # one directory per run
mkdir -p "${SNAPSHOT}"
cp -r "${BKP_SRC}"/* "${SNAPSHOT}"
Rsync Approach
No big change here.
#!/bin/bash
BKP_DIR=/tmp/bkp
BKP_SRC=/tmp/foo
SNAPSHOT=${BKP_DIR}/$(date +%F.%H-%M-%S.%N)
mkdir -p "${BKP_DIR}"   # rsync only creates the last path component itself
rsync -a "${BKP_SRC}"/ "${SNAPSHOT}"/
Improved Rsync Approach (RECOMMENDED)
#!/bin/bash
BKP_DIR=/tmp/bkp
BKP_SRC=/tmp/foo
SNAPSHOT=${BKP_DIR}/$(date +%F.%H-%M-%S.%N)
LATEST=${BKP_DIR}/latest
mkdir -p "${BKP_DIR}"   # needed for the log file on the first run
rsync \
  --archive \
  --delete \
  --backup \
  --backup-dir="${SNAPSHOT}" \
  --log-file="${BKP_DIR}/rsync.log" \
  "${BKP_SRC}"/ "${LATEST}"/
EXPLAINING: --archive plus --delete makes sure that $LATEST is a perfect copy of $BKP_SRC, meaning that files that no longer exist in $BKP_SRC are deleted from $LATEST. The --archive option also ensures that permissions and owners are maintained, symlinks are copied as symlinks, and more (see man rsync for details).
The --backup plus --backup-dir options create a backup directory to hold differential files. In other words, all files that were deleted or modified since the last backup are put there, so you do not lose them when they are removed from $LATEST.
--log-file is optional, but it is always good to keep logs for debugging purposes.
At the end you have an incremental backup.
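One thing the script above does not do is prune old snapshots, so $BKP_DIR grows forever. A minimal sketch, assuming a 30-day retention is wanted and that the snapshot directories are the only entries in /tmp/bkp whose names start with "20" (which the date +%F format guarantees):
# remove snapshot directories not modified in the last 30 days
find /tmp/bkp -maxdepth 1 -type d -name '20*' -mtime +30 -exec rm -rf {} +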

Related

Bash Scripting with xargs to BACK UP files

I need to copy a file from multiple locations to the backup directory while retaining its directory structure. For example, I have a file "a.txt" at the following locations: /a/b/a.txt /a/c/a.txt /a/d/a.txt /a/e/a.txt. I now need to copy this file from those locations to the backup directory /tmp/backup. The end result should be:
when I list /tmp/backup/a, it should contain b/a.txt, c/a.txt, d/a.txt and e/a.txt.
For this, I used the command: echo /a/*/a.txt | xargs -I {} -n 1 sudo cp --parent -vp {} /tmp/backup. It throws the error "cp: cannot stat '/a/b/a.txt /a/c/a.txt /a/d/a.txt /a/e/a.txt': No such file or directory".
The -I option is taking the complete output of echo as a single value instead of individual values (as -n 1 would). If someone can help debug this issue, that would be very helpful, rather than providing an alternative command.
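The culprit is documented in the xargs man page: -I implies -L 1, so input is consumed one line at a time, and the single line printed by echo becomes one {} replacement (-n 1 is ignored once -I is given). Feeding one path per line keeps the original cp approach intact; note the canonical GNU spelling is --parents (--parent only works as an unambiguous abbreviation):
printf '%s\n' /a/*/a.txt | xargs -I {} sudo cp --parents -vp {} /tmp/backup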
Use rsync with the --relative (-R) option to keep (parts of) the source paths.
I've used a wildcard for the source to match your example command rather than the explicit list of directories mentioned in your question.
rsync -avR /a/*/a.txt /tmp/backup/
Do the backups need to be exactly the same as the originals? In most cases, I'd prefer a little compression. tar (https://man7.org/linux/man-pages/man1/tar.1.html) does a great job of bundling things, including the directory structure.
tar cvzf /path/to/backup/tarball.tgz /source/path/
tar can't update compressed archives, so you can skip the compression:
tar uf /path/to/backup/tarball.tar /source/path/
This gives you versioning of a sort, as it only updates changed files but keeps both the before and after versions.
If you have time and cycles and still want the compression, you can decompress before and recompress after.
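If you do go that route, a minimal sketch of the cycle using the same file names as above (gunzip recognizes .tgz and leaves a .tar behind):
gunzip /path/to/backup/tarball.tgz                # leaves tarball.tar
tar uf /path/to/backup/tarball.tar /source/path/  # update changed files
gzip /path/to/backup/tarball.tar                  # leaves tarball.tar.gz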

Is it possible to create a folder named after the archive's filename inside the tar file you are creating?

Let's say I'm trying to tar.gz all the files and folders in /usr/local/bin/data/*
The file name would be data-2015-10-01.tar.gz. When I untar it, is it possible for the root directory to be data-2015-10-01, followed by the contents of whatever is inside data/*?
If not, how can I tar /usr/local/bin/data/* but start at the /data/ folder level?
Unfortunately I can't do the following, since the program spits out /usr/local/bin/data/ and I'm unable to change it:
cd /usr/local/bin
tar ... data/*
There are a couple of ways to do what I think you're trying to accomplish. First, you can use the -C option to tar when creating the archive. That changes tar's current working directory to that directory before creating the archive. Not strictly required in your case, but probably helpful.
# tar -C /usr/local/bin -czf data-2015-10-01.tar.gz data
(No glob here: the shell would expand data/* relative to your current directory, not the -C directory.)
That at least gets you to a single directory named data. If you have control of the extraction (manually or via a script you provide to whoever is unpacking this), then you can do something like this on the extraction:
# mkdir -p data-2015-10-01 && tar -C data-2015-10-01 --strip-components=1 -xzf data-2015-10-01.tar.gz
This will strip the first path component, which is "data", and extract everything from there into the directory passed to -C, data-2015-10-01. So it isn't specifically tar that's doing the renaming, but you effectively end up with the same result.
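To sanity-check the top-level directory inside an archive before extracting, list the first few member names (same archive name as above):
# tar -tzf data-2015-10-01.tar.gz | head -3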
I've accomplished something similar with a symlink. This is not a great solution if you have (or might have) symlinks in the directory structure you're trying to archive. I have to say that I prefer @geis' solution to strip out the top-level directory on extract, but this gives you another option.
ln -s /usr/local/bin/data data-2015-10-01
tar -czvhf data-2015-10-01.tar.gz data-2015-10-01/
rm data-2015-10-01
(Note the additional -h option in the tar invocation.)

How to duplicate a folder exactly

I am trying to copy a filesystem for a device I am programming for. After spending much time trying to figure out why the filesystem I was installing wasn't working, I found out that cp didn't get the job done. I used du -s to check the size of the original filesystem and of the one I copied with cp -r; as it turns out, they differ by about 150 bytes.
Something is telling me that symbolic links or some sort of kernel objects aren't being copied correctly.
Is it possible to copy a folder/file system exactly? If so how would I go about it?
Try doing this the straightforward way:
cp -a src target
From man cp:
-a, --archive
    same as -dR --preserve=all
It preserves permissions, symlinks, and more.
I tried all of the code here on my Linux box. The rsync proposed by @seanmcl turned out to be the right one, while the others failed to keep owners and/or some special files, or were denied permission. The exact code is:
$ sudo rsync -aczvAXHS --progress /var/www/html /var/www/backup
Just remember to use just the directory names: a trailing wildcard (/*) would skip the hidden files directly below the source, and a trailing slash (/) would copy the contents of html into backup itself rather than into backup/html.
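For reference, here is the same command annotated flag by flag (nothing new, just comments):
# -a  archive mode: recursion plus permissions, owners, times, symlinks
# -c  compare files by checksum instead of size and mtime
# -z  compress data in transit
# -v  verbose output
# -A  preserve ACLs          -X  preserve extended attributes
# -H  preserve hard links    -S  handle sparse files efficiently
sudo rsync -aczvAXHS --progress /var/www/html /var/www/backup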
Another popular option is to use tar c source | (cd target && tar x). See this linuxdevcenter.com article.
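Spelled out with explicit paths, and with -p on the extracting side so permissions survive (run it as root if owners must survive too), that pipe might look like this:
(cd /path/to/source && tar cf - .) | (cd /path/to/target && tar xpf -)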
The most accurate way I know of copying files is with cpio:
cd /path/to/source
find . -xdev -print0 | cpio -oa0V | (cd /path/to/target && cpio -imdV)
It is not really easy to use, but it is very precise, preserving timestamps, owners, permissions and special files.
Rsync is the best way to copy a file system. There are myriad arguments that let you control exactly what is copied.
This is what I do, for example, to duplicate directory A -> B:
$ mkdir B
$ cd A
$ cp -a ./ ../B

How to RSYNC a single file?

Currently I am only rsync-ing directories, like this:
* * * * * rsync -avz /var/www/public_html/images root@<remote-ip>:/var/www/public_html
So how do I rsync one single file, like /var/www/public_html/.htaccess?
You do it the same way as you would a directory, but you specify the full path to the filename as the source. In your example:
rsync -avz --progress /var/www/public_html/.htaccess root@<remote-ip>:/var/www/public_html/
As mentioned in the comments: since -a includes recursion, one little typo can kick off a full directory-tree transfer, so a more fool-proof approach might be to just use -vz, or to replace -a with -lptgoD.
Basic syntax
rsync options source destination
Example
rsync -az /var/www/public_html/filename root@<remote-ip>:/var/www/public_html
Michael Place's answer works great if, relative to the root directory for both the source and target, all of the directories in the file's path already exist.
But what if you want to sync a file with this source path:
/source-root/a/b/file
to a file with the following target path:
/target-root/a/b/file
and the directories a and b don't exist?
You need to run an rsync command like the following:
rsync -r --include="/a/" --include="/a/b/" --include="/a/b/file" --exclude="*" [source] [target]
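A possibly simpler alternative is rsync's --relative (-R) option: a /./ marker in the source path tells rsync where the preserved part of the path begins, and the implied directories are created on the target as needed. A sketch with the paths from the example:
rsync -aR /source-root/./a/b/file /target-root/
# creates /target-root/a/b/file, making a/ and b/ along the way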
To date, two of the answers aren't quite right (they'll get more than one file) and another isn't as simple as it could be. Here's a simpler answer, IMO.
The following gets exactly one file, but you have to create the dest directory with mkdir. This is probably the fastest option:
mkdir -p ./local/path/to/file
rsync user@remote:/remote/path/to/file/ -zarv --include "filename" --exclude "*" ./local/path/to/file/
If there is only one instance of the file in /remote/path, rsync can create the directories for you if you do the following. This will probably take a little more time because it searches more directories, and it will create empty directories for directories in /remote/path that are not in ./local:
cd ./local
rsync user@remote:/remote/path -zarv --include "*/" --include "filename" --exclude "*" .
Keep in mind that the order of --include and --exclude matters.
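To see why the order matters: rsync acts on the first pattern that matches, so if --exclude "*" comes first it matches every file and directory before the includes are ever consulted, and nothing is transferred:
# wrong order: nothing is copied
rsync -zarv --exclude "*" --include "*/" --include "filename" user@remote:/remote/path .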
Aside from the good answers above, a note on how rsync treats the destination path. Like cp, rsync accepts either a directory or a filename as the destination. Suppose you are copying the word-list file words to /tmp; both of these work:
rsync -az /usr/share/dict/words /tmp/words   # explicit destination filename
rsync -az /usr/share/dict/words /tmp         # copy into the directory
As with cp, the first form also lets you rename the file during the copy.

How can I recursively copy a directory into another and replace only the files that have changed?

I am looking to do a specific copy in Fedora.
I have two folders:
'webroot': holding ALL web files/images etc
'export': folder containing thousands of PHP, CSS, JS documents that are exported from my SVN repo.
The export directory contains many of the same files/folders that the webroot does; however, the webroot contains additional ones not found in export.
I'd like to merge all of the contents of export into my webroot with the following options:
- Overwrite the file in webroot if export's version contains different code than webroot's (live) version.
- Preserve the permissions/users/groups of the file if it is overwritten (the export version replacing the live version). NOTE: I would like webroot's permissions/ownership maintained, but with export's contents.
- No prompting/stopping of the copy of any kind (i.e. not verbose).
- Recursive copy: obviously I would like to copy all files, folders and subfolders found in export.
I've done a bit of research into cp. Would this do the job?
cp -pruf ./export /path/to/webroot
It might, but any time the corresponding files in export and webroot have the same content but different modification times, you'd wind up performing an unnecessary copy. You'd probably get slightly smarter behavior from rsync:
rsync -pr ./export /path/to/webroot
Besides, rsync can copy files from one host to another over an SSH connection if you ever need that. Plus, it has a zillion options you can specify to tweak its behavior; look in the man page for details.
EDIT: with respect to your clarification about what you mean by preserving permissions: you'd probably want to leave off the -p option.
-u overwrites an existing file only if the destination is older than the source
-p preserves the permissions and dates
-f forces the copy, removing destination files that cannot be opened
-r makes the copy recursive
So it looks like you have all the correct arguments to cp.
Sounds like a job for cpio (and hence, probably, something GNU tar can do too):
cd export
find . -print | cpio -pvdm /path/to/webroot
If you need owners preserved, you have to do it as root, of course. The -p option is 'pass mode', meaning copy between locations; -v is verbose (but not interactive; there's a difference); -d means create directories as necessary; -m means preserve modification time. By default, without the -u option, cpio won't overwrite files in the target area that are newer than the ones from the source area.
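Since GNU tar is mentioned as an alternative, here is a hedged sketch of the equivalent pipe; --keep-newer-files (GNU tar only) mirrors cpio's default of not overwriting files in the target that are newer:
cd export
tar cf - . | (cd /path/to/webroot && tar --keep-newer-files -xpf -)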
