Recursively copy contents of directory to all target directories - linux

I have a directory containing a set of subdirectories and files. I need to recursively copy all the content of this directory to all the subdirectories of another directory, also recursively.
How do I achieve this, preferably without using a script and only with the cp command?

You can write this in a script but you don't have to. Just write it line by line in the terminal:
# $TARGET is the directory containing subdirectories where you want to STORE the copies
# $SOURCE is the directory containing the subdirectories you want to COPY
for dir in "$TARGET"/*/; do      # glob instead of parsing ls output, which breaks on spaces
    cp -r "$SOURCE"/. "$dir"     # the trailing /. also picks up hidden files
done
Only uses cp and runs on both bash and zsh.

You can't do it with cp alone. cp can copy multiple sources, but only to a single destination. To do what you want, you need to invoke cp multiple times, once per destination, using a loop or, as you say, some other tool.
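For example, using find as that other tool (a sketch, reusing the $SOURCE and $TARGET placeholders from the loop above):
find "$TARGET" -mindepth 1 -maxdepth 1 -type d -exec cp -r "$SOURCE"/. {} \;
This invokes cp once for each immediate subdirectory of $TARGET.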

tar cf - * | ( cd /target; tar xpf - )
The first part of the command, before the pipe, instructs tar to create an archive of everything in the current directory and write it to standard output (the - in place of a file name conventionally indicates stdout).
The commands within the parentheses run in a subshell: the shell changes to the target directory and untars the data from standard input there, without affecting the current directory of the calling shell.
The -p option in the extraction command directs tar to preserve permissions and ownership information, as far as the user executing the command is allowed to. If you are running the command as the superuser, this option is on by default and can be omitted.
You can also use the following command, though it tends to be slower than the tar pipeline (-a is archive mode, preserving permissions, ownership, and timestamps):
cp -a * /target

Related

Bash Scripting with xargs to BACK UP files

I need to copy a file from multiple locations to the BACK UP directory while retaining its directory structure. For example, I have a file "a.txt" at the following locations: /a/b/a.txt, /a/c/a.txt, /a/d/a.txt and /a/e/a.txt. I now need to copy this file from all these locations to the backup directory /tmp/backup. The end result should be:
when I list /tmp/backup/a, it should contain b/a.txt, c/a.txt, d/a.txt and e/a.txt.
For this, I had used the command: echo /a/*/a.txt | xargs -I {} -n 1 sudo cp --parent -vp {} /tmp/backup. This is throwing the error "cp: cannot stat '/a/b/a.txt /a/c/a.txt a/d/a.txt a/e/a.txt': No such file or directory"
The -I option is taking the complete output of echo as a single value instead of individual values (the way -n 1 would). If someone can help debug this issue, that would be very helpful, rather than providing an alternative command.
Use rsync with the --relative (-R) option to keep (parts of) the source paths.
I've used a wildcard for the source to match your example command rather than the explicit list of directories mentioned in your question.
rsync -avR /a/*/a.txt /tmp/backup/
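If you want to keep the xargs approach the question asks about: the problem is that echo emits all four paths on a single line, and -I makes xargs substitute whole lines rather than individual words, so {} receives the entire string. Printing one path per line fixes it; a sketch assuming GNU coreutils (note the spelling --parents):
printf '%s\n' /a/*/a.txt | xargs -I {} sudo cp --parents -vp {} /tmp/backup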
Do the backups need to be exactly the same as the originals? In most cases, I'd prefer a little compression. tar (https://man7.org/linux/man-pages/man1/tar.1.html) does a great job of bundling things, including the directory structure.
tar cvzf /path/to/backup/tarball.tgz /source/path/
tar can't update compressed archives, so to use update mode you have to skip the compression:
tar uf /path/to/backup/tarball.tar /source/path/
This gives you versioning of a sort, since update only appends files that have changed, keeping both the before and after versions.
If you have time and cycles and still want the compression, you can decompress before and recompress after.
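If you do take that route, the round trip might look like this (a sketch, reusing the paths from the examples above):
gunzip /path/to/backup/tarball.tgz                  # decompress; leaves tarball.tar
tar uf /path/to/backup/tarball.tar /source/path/    # append changed files
gzip /path/to/backup/tarball.tar                    # recompress; produces tarball.tar.gz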

Extract tar archive excluding a specific folder and its contents

With PHP I am using exec("tar -xf archive.tar -C /home/user/target/folder") to extract the contents of a specific archive (archive.tar) into the target directory (/home/user/target/folder), so that all existing contents of the target directory will be overwritten by the new ones that are contained in the archive.
It works fine and all files in the target directory are being overwritten after extract, but there is one directory in the archive that I would like to omit (from extracting and thus overwriting the existing one in the target folder)...
For example, the archive.tar contains:
folderA/
folderB/
folderC/
folderD/
fileA.php
fileB.php
fileC.xml
How could I extract (and overwrite) all except (for example) folderC/? In other words, I want folderC and its contents to remain intact in the user's directory and not be overwritten by the one contained in the tar archive.
Any suggestions?
(Tar on the hosting server is GNU version 1.23.)
You can use '--exclude' to omit a folder:
tar -xf archive.tar -C /home/user/target/folder --exclude="folderC"
There is the --exclude PATTERN option in the tar tool.
Check: tar on linuxcommand.org
To be on the safe side, you could remove all write permissions from the folder. For example:
$ chmod 000 folderC/
And then do a normal tar extract (as a regular user). You'll get some error messages on the console, but your folder will remain untouched. When the tar run finishes, change the folder back to its original permissions. For example:
$ chmod 775 folderC/
Of course the '--exclude' tar option is the right solution to this particular problem, but if you are not completely 100% sure about a command's syntax and you're handling critical data, my solution puts you on the safe side :-).
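Put together, the whole sequence might look like this (a sketch using the paths from the question; the 775 mode comes from the example above):
chmod 000 /home/user/target/folder/folderC
tar -xf archive.tar -C /home/user/target/folder    # fails for folderC, extracts the rest
chmod 775 /home/user/target/folder/folderC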
Write --exclude='./folder' at the beginning of the tar command.
In your case that is,
exec("tar -x --exclude='./C' -f archive.tar -C /home/user/target/folder")

Copy directories on schedule linux

I am looking for help with a script that will allow me to copy the contents of a directory, or the whole directory, to another directory, either on a schedule or when new files arrive in the source.
For example:
/stuff/folder1/file.txt
Copy to:
/stuff/folder2/file.txt
either when new files arrive or on a recurring schedule.
I am using a CentOS machine.
You can use the following loop to update folder2 whenever you save new files to folder1 or change it (inotifywait is part of the inotify-tools package):
while inotifywait -r -e modify -e move -e create -e delete /stuff/folder1; do
    cp -r /stuff/folder1/. /stuff/folder2/
done
For the schedule part, I would put cp -r /stuff/folder1/. /stuff/folder2/ into a cron job. Instead of cp you can also use rsync. Please also have a look at the manpage of inotifywait.
Note: The above loop starts the copy after the first file is altered inside folder1. If you modify many files in folder1 at the same time, you might want to put a sleep command inside the while loop. Even better, in that case, is to run the copy command at the end of whatever program alters the files in folder1.
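For the cron route, the crontab entry could look like this (the five-minute interval is just an example; install it with crontab -e):
*/5 * * * * cp -r /stuff/folder1/. /stuff/folder2/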

Modifying files nested in tar archive

I am trying to do a grep and then a sed to search for specific strings inside files, which are inside multiple tars, all inside one master tar archive. Right now, I modify the files by:
1. First extracting the master tar archive.
2. Then extracting all the tars inside it.
3. Then doing a recursive grep and then a sed to replace a specific string in the files.
4. Finally, packaging everything again into tar archives, and all the archives back inside the master archive.
Pretty tedious. How do I do this automatically using shell scripting?
There isn't going to be much of an option except automating the steps you outline, for the reasons demonstrated by the caveats in the answer by Kimvais.
tar modify operations
The tar command has some options to modify existing tar files. They are, however, not appropriate for your scenario for multiple reasons, one of them being that it is the nested tarballs that need editing rather than the master tarball. So, you will have to do the work longhand.
Assumptions
Are all the archives in the master archive extracted into the current directory or into a named/created sub-directory? That is, when you run tar -tf master.tar.gz, do you see:
subdir-1.23/tarball1.tar
subdir-1.23/tarball2.tar
...
or do you see:
tarball1.tar
tarball2.tar
(Note that nested tars should not themselves be gzipped if they are to be embedded in a bigger compressed tarball.)
master_repackager
Assuming you have the subdirectory notation, then you can do:
for master in "$#"
do
tmp=$(pwd)/xyz.$$
trap "rm -fr $tmp; exit 1" 0 1 2 3 13 15
cat $master |
(
mkdir $tmp
cd $tmp
tar -xf -
cd * # There is only one directory in the newly created one!
process_tarballs *
cd ..
tar -czf - * # There is only one directory down here
) > new.$master
rm -fr $tmp
trap 0
done
If you're working in a malicious environment, use something other than xyz.$$ for the directory name. However, this sort of repackaging is usually not done in a malicious environment, and the chosen name based on the process ID is sufficient to give everything a unique name. The use of tar -f - for input and output allows you to switch directories but still handle relative pathnames on the command line. There are likely other ways to handle that if you want. I also used cat to feed the input to the sub-shell so that the top-to-bottom flow is clear; technically, I could improve things by using ) > new.$master < $master at the end, but that hides some crucial information multiple lines later.
The trap commands make sure that (a) if the script is interrupted (signals HUP, INT, QUIT, PIPE or TERM), the temporary directory is removed and the exit status is 1 (not success) and (b) once the subdirectory is removed, the process can exit with a zero status.
You might need to check whether new.$master exists before overwriting it. You might need to check that the extract operation actually extracted stuff. You might need to check whether the sub-tarball processing actually worked. If the master tarball extracts into multiple sub-directories, you need to convert the 'cd *' line into some loop that iterates over the sub-directories it creates.
All these issues can be skipped if you know enough about the contents and nothing goes wrong.
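For illustration, the first of those checks could be a guard at the top of the loop (a sketch; skipping an existing output file rather than overwriting is my own choice here):
if [ -e "new.$master" ]; then
    echo "new.$master already exists; skipping" >&2
    continue
fi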
process_tarballs
The second script is process_tarballs; it processes each of the tarballs on its command line in turn, extracting the file, making the substitutions, repackaging the result, etc. One advantage of using two scripts is that you can test the tarball processing separately from the bigger task of dealing with a tarball containing multiple tarballs. Again, life will be much easier if each of the sub-tarballs extracts into its own sub-directory; if any of them extracts into the current directory, make sure you create a new sub-directory for it.
for tarball in "$@"
do
    # Extract $tarball into a sub-directory.
    tar -xf $tarball
    # Locate the appropriate sub-directory and set $subdirectory to it.
    (
        cd $subdirectory
        find . -type f -print0 | xargs -0 sed -i 's/name/alternative-name/g'
    )
    mv $tarball old.$tarball
    tar -cf $tarball $subdirectory
    rm -f old.$tarball
done
You should add traps to clean up here, too, so the script can be run in isolation from the master script above and still not leave any intermediate directories around. In the context of the outer script, you might not need to be so careful to preserve the old tarball before the new one is created (so rm -f $tarball instead of the move and remove commands), but treated in its own right, the script should be careful not to damage anything.
Summary
What you're attempting is not trivial.
For debuggability, the job is split into two scripts that can be tested independently.
Handling the corner cases is much easier when you know what is really in the files.
You can probably run sed over the tar stream itself, since tar does not do any compression of its own.
e.g.
zcat archive.tar.gz | sed -e 's/foo/bar/g' | gzip > archive2.tar.gz
However, beware that this will also replace foo with bar in file names, user names, and group names, and it ONLY works if foo and bar are of equal length, since the tar headers record each member's exact size and changing the data length corrupts the archive.

Command to zip a directory using a specific directory as the root

I'm writing a PHP script that downloads a series of generated files (using wget) into a directory and then zips them up using the zip command.
The downloads work perfectly, and the zipping mostly works. I run the command:
zip -r /var/www/oraviewer/rgn_download/download/fcst_20100318_0319.zip /var/www/oraviewer/rgn_download/download/fcst_20100318_0319
which yields a zip file with all the downloaded files, but it contains the full /var/www/oraviewer/rgn_download/download/ directories, before reaching the fcst_20100318_0319/ directory.
I'm probably just missing a flag, or something small, from the zip command, but how do I get it to use fcst_20100318_0319/ as the root directory?
I don't think zip has a flag to do that. I think the only way is something like:
cd /var/www/oraviewer/rgn_download/download/ && \
zip -r fcst_20100318_0319.zip fcst_20100318_0319
(The backslash is just for clarity, you can remove it and put everything on one line.)
Since PHP is executing the command in a subshell, it won't change your current directory.
I also got it to work by using this command:
exec('cd '.$_SERVER['DOCUMENT_ROOT'].' && zip -r com.zip "./"');
cd /home/public_html/site/upload/ && zip -r sub_upload.zip sub_upload/
Use the -j or --junk-paths option in your zip command.
From the zip man page:
-j
--junk-paths
Store just the name of a saved file (junk the path), and do not store
directory names. By default, zip will store the full path (relative
to the current directory).
