Update a file if it is common to two directories and delete it when it is uniquely in one directory - linux

Suppose Directory1 has:
1. File1
2. File2
3. Subdirectory1
Subdirectory1 has:
3.1. File3
3.2. File4
and Directory2 has:
1. File1
2. File3
3. Subdirectory1
Subdirectory1 has:
3.1. File3
3.2. File6
If any file is uniquely present in Directory2, it has to be deleted.
If a file is present in both Directory1 and Directory2, the file from Directory1 has to be copied to Directory2 with the same folder structure [updates].

Simply use diff, e.g.:
diff -r dir1 dir2 | grep dir1
Only in dir1: file2
Only in dir1/subdir1: file4
Only in dir2/subdir1: file6
You can then process that with awk, or store the result in a temporary file and use it in your script.
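A minimal sketch of how that output could drive the update/delete logic from the question (using diff -rq for one-line results; it assumes the directory names dir1 and dir2 and filenames without spaces or colons):
diff -rq dir1 dir2 | while read -r line; do
  case "$line" in
    "Only in dir2"*)
      # "Only in dir2/subdir1: file6" -> dir2/subdir1/file6
      path=$(printf '%s\n' "$line" | sed 's/^Only in //; s/: /\//')
      rm -rf "$path"
      ;;
    Files*differ)
      # "Files dir1/f and dir2/f differ" -> copy the dir1 version over
      cp -p "$(printf '%s\n' "$line" | awk '{print $2}')" \
            "$(printf '%s\n' "$line" | awk '{print $4}')"
      ;;
  esac
done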

You seem to be describing a mirroring function; have a look at the nice open-source tool rsync.
https://rsync.samba.org/
It can do all of that and even more (also remote synchronization via LAN or via SSH if needed).
rsync -options --otherOptions sourceDir targetDir
Usually you would use these command-line options:
rsync -av /src/foo /dest
or
rsync -av /src/foo/ /dest/foo
Note: without the trailing "/" on /src/foo, rsync copies the directory itself into /dest, creating a foo subdirectory there; with the trailing "/" it copies only the directory's contents. The two commands above therefore produce the same result, and you can pick whichever form you prefer.
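For the directories in the question, a mirroring call could look like this (a sketch: --delete removes files that exist only in Directory2 and common files are updated from Directory1; note it also copies files that exist only in Directory1):
rsync -av --delete Directory1/ Directory2/
Remote synchronization works the same way, e.g. over SSH (user, host, and paths are placeholders):
rsync -av -e ssh Directory1/ user@remotehost:/path/to/Directory2/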

Related

rsync files/folders from a list in an input file and exclude from an exclude file

I have two files containing lists of folders. I want to sync the folders, with relative paths, to the destination, excluding the paths listed in the exclude file.
$ cat include.txt
/home/user
/etc
/data/app
/boot
$ cat exclude.txt
/data/app/temp
/etc/aide
I have tried using --include-from and --files-from but can't seem to figure it out.
This seems to sync the folders, but not the files:
rsync -av --files-from=include.txt / /destination
Ultimately I want to sync to /destination and have the folder structure look like:
/destination/home/user
/destination/home/user/...
/destination/etc
/destination/etc/...
/destination/data/app
/destination/data/app/...
/destination/boot
/destination/boot/...
Just add the -r and --exclude-from options and you should be good to go. (--files-from disables the recursion that -a normally implies, which is why -r must be given explicitly.)
rsync -av -r --files-from=./include.txt --exclude-from=./exclude.txt / /destination/
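Adding -n (--dry-run) lets you preview what would be transferred before running it for real:
rsync -avn -r --files-from=./include.txt --exclude-from=./exclude.txt / /destination/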

Zip folder exclude some folders

I'm trying to back up my www folder, but hidden folders like .config inside www are added to the backup. I want to exclude the folder "backups" and all folders (and files) starting with a dot.
The problem is that it copies all the hidden folders like .config to the zip-file.
Current code:
zip -r /var/www/backups/site/$(date +\%Y-\%m-\%d-\%H-\%M).zip /var/www -x "*backups*" "*.*" "*/.*"
This should work for you.
zip -r --exclude=*backups* --exclude=*/.* /var/www/backups/site/$(date +\%Y-\%m-\%d-\%H-\%M).zip /var/www
Use the Linux find command, filter out the unwanted paths with grep -v, then pipe the result into zip.
The following command excludes all paths under the current directory that contain "backups" or have "/." in the path, then pipes the remaining files into zip (the null-separated variants keep filenames containing spaces intact):
find . -print0 | grep -zv "\(backups\|/\.\)" | xargs -0 zip archive.zip

rsync recursively and exclude content of specific directory, not the directory

I have a directory tree like this :
dir1/
file11
file12
file13
...
file1548216479524594
dir2/
file21
file22
dir3/
dir31/
file311
file312
dir32/
file321
I would like to rsync entire directory tree but without content of directory dir1.
If I use the basic rsync command :
rsync --progress -v -ar --delete --exclude="dir1/*" src/ dst/
It works. But if I use -n to do a dry run before executing, it takes very long because dir1 contains a lot of files (I do not know why the dry run lists all the files, even the excluded ones).
If I use --exclude="dir1/", the dry run is fast, but I don't get my directory tree.
How can I make the rsync dry run fast (avoiding recursing into the very numerous files of dir1) while still copying the entire directory tree without the content of dir1?
In recent versions of rsync, you can use the -F option and put a file ".rsync-filter" in the directory src, containing:
- dir1/***
That seemed to work for me. I'm assuming that your hierarchy above is all under "src/".
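If you would rather not create a filter file, the same rule can be given inline; assuming the hierarchy is all under src/, this should be equivalent to the -F approach:
rsync --progress -av --delete --filter='- dir1/***' src/ dst/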

Create zip file and ignore directory structure

I need to create a zip file using this command:
zip /dir/to/file/newZip /data/to/zip/data.txt
This works, but the created zip file contains a directory structure mimicking the path to the raw file. That is a lot of extra folders I don't need.
I didn't find an answer in a cursory glance over the man page or a Google hunt.
You can use -j.
-j
--junk-paths
Store just the name of a saved file (junk the path), and do not
store directory names. By default, zip will store the full path
(relative to the current directory).
Using -j won't work along with the -r option.
So the workaround can be this:
cd path/to/parent/dir/;
zip -r complete/path/to/name.zip ./* ;
cd -;
Or in-line version
cd path/to/parent/dir/ && zip -r complete/path/to/name.zip ./* && cd -
You can redirect the output to /dev/null if you don't want the cd - output to appear on screen.
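Running the cd in a subshell avoids having to cd back at all, and zipping . instead of ./* also picks up hidden files. A sketch with the same placeholder paths:
( cd path/to/parent/dir/ && zip -r complete/path/to/name.zip . )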
Use the -j option:
-j Store just the name of a saved file (junk the path), and do not
store directory names. By default, zip will store the full path
(relative to the current path).
Somewhat related - I was looking for a solution to do the same for directories.
Unfortunately the -j option does not work for this :(
Here is a good solution on how to get it done:
https://superuser.com/questions/119649/avoid-unwanted-path-in-zip-file
Alternatively, you could create a temporary symbolic link to your file:
ln -s /data/to/zip/data.txt data.txt
zip /dir/to/file/newZip !$
rm !$
This also works for a directory (zip follows symbolic links by default, so the archive stores the link target's contents).
Just use the -jrm options: -j junks the paths, -r recurses, and -m moves the files into the archive (deleting the originals after they are zipped):
zip -jrm /path/to/file.zip /path/to/file
Retain the parent directory so unzip doesn't spew files everywhere
When zipping directories, keeping the parent directory in the archive helps avoid littering your current directory when you later unzip the archive file.
So, to avoid retaining all the intermediate paths, and since you can't use -j and -r together (you'll get an error), you can do this instead:
cd path/to/parent/dir/;
zip -r ../my.zip "../$(basename "$PWD")"
cd -;
The "../$(basename "$PWD")" is the magic that retains the parent directory.
So now unzip my.zip will give a folder containing all your files:
parent-directory
├── file1
├── file2
└── dir1
    ├── file3
    └── file4
Instead of littering the current directory with the unzipped files:
file1
file2
dir1
├── file3
└── file4

cp command failing in Linux

I am facing a problem with a copy command while executing a shell script on RHEL 5.
The executed command is:
cp -fp /fir1/dir2/*/bin/file1 `find . -name file1 -print`
The error is:
cp: Target ./6e0476aec9667638c87da1b17b6ccf46/file1 must be a directory
Could you please suggest some ideas as to why it would be failing?
Thanks,
Robert.
When cp is called with more than two filenames as arguments, it treats the last one as a target directory, and copies all the files named in the other arguments into that target directory. So, for example,
cp file1 file2 dir3
will create dir3/file1 and dir3/file2. It seems that in your case, the pattern /fir1/dir2/*/bin/file1 matches more than one filename, so cp is trying to treat the result of find as a target directory - which it isn't - and failing.
You can't copy many files to one location unless that location is a directory.
cp should be used like this: cp sourcefile destinationfile or cp source1 source2 destinationdir.
As the others said, you cannot copy multiple files to one file using cp. On the other hand, if you want to append the contents of multiple files into one destination file, you can use cat.
For instance:
cat file1 file2 file3 > destinationfile
It is hard to answer without knowing what you are trying to achieve.
If, for example, you want to copy all files named "file1" within a directory structure to a target place /tmp, building the same directory structure there, this command will do the trick:
cd /dir1/dir2
find . -name file1 | cpio -pvd /tmp
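With GNU coreutils you can get the same effect without cpio by using cp --parents, which recreates the source path under the target directory. A sketch under the same assumptions (all matches are regular files named file1):
cd /dir1/dir2
find . -type f -name file1 -exec cp --parents {} /tmp \;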
You cannot copy multiple files to a file, only to a directory, i.e.
cp file1 file2 file3 file4
is not possible; you need
cp file1 file2 file3 dir1
