I am currently using the following command to upload my site content:
scp -r web/* user@site.com:site.com/
This works great except that the .htaccess file is not sent. Presumably, this is because it's hidden.
I have tried adding a second line to send the file explicitly:
scp -r web/.htaccess user@site.com:site.com/.htaccess
This works great except now I have to enter my password twice.
Any thoughts on how to make this deploy with only 1 or 0 entries of my password?
Just combine the two commands:
scp -r web/* web/.htaccess user@site.com:site.com/
If you want 0 entries of your password you can set up public key authentication for ssh/scp.
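If you go the public key route, a minimal sketch (assuming OpenSSH and that you don't already have a key pair):

ssh-keygen -t ed25519           # generate a key pair; accept the defaults, optionally set a passphrase
ssh-copy-id user@site.com       # append your public key to ~/.ssh/authorized_keys on the server
scp -r web/* web/.htaccess user@site.com:site.com/   # now runs without a password prompt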
Some background info: the * wildcard does not match so-called "dot-files" (i.e. files whose name begins with a dot).
Some shells allow you to set an option so that * does match dot-files. Be careful with that, though: depending on the shell and the pattern, such globs can also match . (the current directory) and .. (the parent directory), which is usually not what is intended and can be quite surprising! (rm -rf * deleting the parent directory is probably not the best way to start a day ...)
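In bash, for example, the option is dotglob, and bash's * never matches . or .. even with the option set, so there it is reasonably safe:

shopt -s dotglob                        # make * include dot-files (bash)
scp -r web/* user@site.com:site.com/    # .htaccess is now picked up
shopt -u dotglob                        # restore the default behaviour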
A word of caution - don't attempt to match dotted files (like .htaccess) with .* - this inconveniently also matches .., and would result in copying all the files on the path to the root directory. I did this once (with rm, no less!) and I had to rebuild the server because I'd messed with /var.
@jwmittag:
I just did a test on Ubuntu, and .* does match .. when I use cp. Here's an example:
root@krash:/# mkdir a
root@krash:/# mkdir b
root@krash:/# mkdir a/c
root@krash:/# touch a/d
root@krash:/# touch a/c/e
root@krash:/# cp -r a/c/.* b
cp: will not create hard link `b/c' to directory `b/.'
root@krash:/# ls b
d e
If .* did not match .., then d shouldn't be in b.
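If you need a glob that matches dot-files but can never expand to . or .., a common idiom is the pattern pair .[!.]* ..?* — the first matches names beginning with a single dot, the second catches names beginning with two dots (like ..foo). Applied to the test above (errors from a non-matching pattern suppressed):

cp -r a/c/.[!.]* a/c/..?* b 2>/dev/null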
I've got (what feels like) a fairly simple problem but my complete lack of experience in bash has left me stumped. I've spent all day trying to synthesize a script from many different SO threads explaining how to do specific things with unintuitive commands, but I can't figure out how to make them work together for the life of me.
Here is my situation: I've got a directory full of nested folders each containing a file with extension .7 and another file with extension .pc, plus a whole bunch of unrelated stuff. It looks like this:
Folder A
    Folder 1
        Folder x
            data_01.7
            helper_01.pc
            ...
        Folder y
            data_02.7
            helper_02.pc
            ...
        ...
    Folder 2
        Folder z
            data_03.7
            helper_03.pc
            ...
        ...
Folder B
...
I've got a script that I need to run in each of these folders that takes in the name of the .7 file as an input.
pc_script -f data.7 -flag1 -other_flags
The current working directory needs to be the folder with the .7 file when running the script and the helper.pc file also needs to be present in it. After the script is finished running, there are a ton of new files and directories. However, I need to take just one of those output files, result.h5, and copy it to a new directory maintaining the same folder structure but with a new name:
Result Folder/Folder A/Folder 1/Folder x/new_result1.h5
I then need to run the same script again with a different flag, flag2, and copy the new version of that output file to the same result directory with a different name, new_result2.h5.
The folders all have pretty arbitrary names, though there aren't any spaces or special characters beyond underscores.
Here is an example of what I've tried:
#!/bin/bash

DIR=".../project/data"

for d in */ ; do
    for e in */ ; do
        for f in */ ; do
            for PFILE in *.7 ; do
                echo "$d/$e/$f/$PFILE"
                cd "$DIR/$d/$e/$f"

                echo "Performing operation 1"
                pc_script -f "$PFILE" -flag1
                mkdir -p ".../results/$d/$e/$f"
                mv "results.h5" ".../project/results/$d/$e/$f/new_results1.h5"

                echo "Performing operation 2"
                pc_script -f "$PFILE" -flag 2
                mv "results.h5" ".../project/results/$d/$e/$f/new_results2.h5"
            done
        done
    done
done
Obviously, this didn't work. I've also tried using find with -execdir but then I couldn't figure out how to insert the name of the file into the script flag. I'd appreciate any help or suggestions on how to carry this out.
Another, perhaps more flexible, approach is to use the find command with the -exec option to run a short "helper-script" for each file ending in ".7" found below a directory path. The -name option allows find to locate all files ending in ".7" below a given directory using simple file-globbing (wildcards). The helper-script then performs the same operations on each file found by find and handles copying result.h5 to the proper directory.
The form of the command will be:
find /path/to/search -type f -name "*.7" -exec /path/to/helper-script '{}' \;
Where -type f tells find to only return files (not directories) and -name "*.7" matches names ending in ".7". Your helper-script needs to be executable (e.g. chmod +x helper-script) and, unless it is in your PATH, you must provide the full path to the script in the find command. The '{}' will be replaced by the filename (including relative path) and passed as an argument to your helper-script. The \; simply terminates the command executed by -exec.
(note there is another form of -exec called -execdir, and another terminator '+' that runs the command on batches of files at once -- -execdir is a bit safer, but has additional PATH requirements for the command being run. Since you have only one ".7" file per directory, there isn't much benefit here)
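For illustration only, the -execdir form would look like the line below; note the helper-script below would then need adjusting, since -execdir already runs the command from the file's own directory and passes something like ./data_01.7:

find /path/to/search -type f -name "*.7" -execdir /path/to/helper-script '{}' \;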
The helper-script just does what you need to do in each directory. Based on your description it could be something like the following:
#!/bin/bash

dir="${1%/*}"   ## trim file.7 from end of path

cd "$dir" || {  ## change to directory or handle error
    printf "unable to change to directory %s\n" "$dir" >&2
    exit 1
}

destdir="/Result_Folder/$dir"   ## set destination dir for result.h5

mkdir -p "$destdir" || {        ## create with all parent dirs or exit
    printf "unable to create directory %s\n" "$destdir" >&2
    exit 1
}

ls *.pc >/dev/null 2>&1 || exit 1   ## check a .pc file exists or exit

file7="${1##*/}"    ## trim path from file.7 name

pc_script -f "$file7" -flag1 -other_flags   ## first run
## check result.h5 exists and is non-empty and copy to destdir
[ -s "result.h5" ] && cp -a "result.h5" "$destdir/new_result1.h5"

pc_script -f "$file7" -flag2 -other_flags   ## second run
## check result.h5 exists and is non-empty and copy to destdir
[ -s "result.h5" ] && cp -a "result.h5" "$destdir/new_result2.h5"
Which essentially stores the path part of the file.7 argument in dir and changes to that directory. If unable to change to the directory (due to permissions, etc.) the error is handled and the script exits. Next, the full directory structure is created below your Result_Folder with mkdir -p, with the same error handling if the directory cannot be created.
ls is used as a simple check to verify that a file ending in ".pc" exists in that directory. There are other ways to do this, such as piping the results to wc -l, but that spawns additional subshells that are best avoided.
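If you are willing to rely on bash, the compgen builtin is an alternative that avoids the external ls entirely; it succeeds only if the glob matches at least one name:

compgen -G '*.pc' > /dev/null || exit 1   ## check a .pc file exists or exit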
(also note that Linux and Mac have files ending in ".pc" used by pkg-config when building programs from source -- they should not conflict with your files -- but be aware they exist, in case you ever find yourself chasing weird ".pc" files)
After all tests are performed, the path is trimmed from the current ".7" filename, storing just the filename in file7. The file7 variable is then used in your pc_script command (which should also include the full path to the script if it is not in your PATH). After pc_script is run, [ -s "result.h5" ] is used to verify that result.h5 exists and is non-empty before copying that file to your Result_Folder location.
That should get you started. Using find to locate all ".7" files is a simple way to let the tool designed to find files do its job, rather than trying to hand-roll your own traversal. That way you only have to concentrate on what should be done for each file found. (note: I don't have pc_script or the files, so I have not tested this end-to-end, but it should be very close if not right-on-the-money)
There is nothing wrong with writing your own routine, but using find eliminates a lot of the places where bugs can hide in your own solution.
Let me know if you have further questions.
I'm trying to create a GNU Makefile rule that copies files (found via VPATH) from one directory to another, preserving their directory structure.
There are zillions of ways to do this (starting with cp -r) but it seems that none of them work in the context of make, where the copying is initiated in the target directory.
E.g.
cp ../src/foo.c ../src/bar.c .
All the source files share a common directory (only known at runtime), and this common directory should be stripped away.
E.g., given

$ srcdir=../../knurgl
$ cp ${srcdir}/src/foo.c ${srcdir}/src/bar.c .

the desired result is:

$ find . -type f
./src/foo.c
./src/bar.c
Even though the common directory is known at runtime, it can be arbitrary and may even be the current directory . (in which case the operation should be a no-op).
This is what I tried:
cp
cp --parents ${srcdir}/src/foo.c ${srcdir}/src/bar.c .
but rightfully this refuses to work when called from the target directory (as it would always copy the files onto themselves).
tar
tar c ${srcdir}/src/foo.c ${srcdir}/src/bar.c | tar x
this strips away the leading relative components (../), but keeps the rest (so I end up with ./knurgl/src/foo.c instead of ./src/foo.c).
The --strip-components option doesn't help me much, as i don't know the depth of ${srcdir}.
Instead of
cp --parents ${srcdir}/src/foo.c ${srcdir}/src/bar.c .
(which doesn't work because it doesn't strip $srcdir) you can write
(wd=$PWD; cd "$srcdir" && cp --parents src/foo.c src/bar.c "$wd")
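Wrapped into a Makefile rule this could look like the sketch below (copy-sources and SRCS are made-up names, and the recipe line must be tab-indented):

SRCS := src/foo.c src/bar.c

copy-sources:
	wd=$$PWD; cd $(srcdir) && cp --parents $(SRCS) $$wd

Note the doubled $$, so that make passes a single $ through to the shell.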
make has built-in functions for handling strings. To replace old_base_dir with new_base_dir in the variable path, call:
$(path:old_base_dir/%=new_base_dir/%)
You can also let it perform the substitution on a list:
$(foreach path,$(path_list),$(path:old_base_dir/%=new_base_dir/%))
Here, the variable path_list contains multiple files. Note though that this will break if the file names contain spaces.
The manual of GNU make describes many more useful functions.
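For instance, a minimal sketch (old_base_dir, new_base_dir and the file list are placeholders):

path_list := old_base_dir/src/foo.c old_base_dir/src/bar.c

# a substitution reference applies to every word of the list at once
new_list := $(path_list:old_base_dir/%=new_base_dir/%)

all:
	@echo $(new_list)   # prints: new_base_dir/src/foo.c new_base_dir/src/bar.c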
I have created a test directory structure:
t1.html
t2.php
b/t1.html
b/t2.php
c/t1.html
c/t2.php
All files contain the string "HELLO".
The following commands are run from the root folder above:
> grep -r "HELLO" *
b/t1.html:HELLO
b/t2.php:HELLO
c/t1.html:HELLO
c/t2.php:HELLO
t1.html:HELLO
t2.php:HELLO
> grep -r --include=*.html "HELLO" *
b/t1.html:HELLO
c/t1.html:HELLO
t2.php:HELLO
Why is it including the correct .html files from the sub-directories, but the .php file from the current directory?
If I pop up a level, to the directory above my whole structure, then it gives the following result:
grep -r --include=*.html "HELLO" *
a/t1.html:HELLO
a/c/t1.html:HELLO
a/b/t1.html:HELLO
This is what I expected when run from within my structure.
I assume I can achieve the goal using find+grep together, but I thought this was valid usage of grep.
Thanks for any help.
Andy
Use a dot instead of the asterisk:
grep -r HELLO .
The asterisk gets evaluated by the shell and replaced with the list of all the files in the current directory (whose names don't start with a dot). All of them are then grepped recursively.
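If you want to keep the extension filter, also quote the pattern so the shell passes it to grep untouched:

grep -r --include="*.html" "HELLO" .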
We have a tomcat server located at /opt/tomcat7.0 and I want to sync only its logs directory to a remote server. I am trying the following rsync command with an exclude-everything * rule and an include for logs, but it doesn't sync anything.
Following are the tomcat directories (I only want to sync the logs directory):
[rsync#server1]$ ls /opt/tomcat7.0
bin/ conf/ lib/ logs/ temp/ webapps/ work/
Here is the rsync command:
[rsync#logserver]$ rsync -avz --delete --copy-links --include='logs' --exclude='*' server1:/opt/tomcat7.0 /path/to/destination/.
receiving incremental file list
sent 25 bytes received 10 bytes 6.36 bytes/sec
total size is 0 speedup is 0.00
What am I doing wrong?
Any reason not to do:
rsync -avz --delete --copy-links server1:/opt/tomcat7.0/logs /path/to/destination/.
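Note the missing trailing slash on the source: written this way, rsync creates a logs/ directory at the destination; with server1:/opt/tomcat7.0/logs/ it would copy only the directory's contents straight into the destination.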
The manpage explains why this does not work and what can be done to make it work:
Note that, when using the --recursive (-r) option (which is implied by -a), every subcomponent of every path is visited from the top down, so include/exclude patterns get applied recursively to each subcomponent's full name (e.g. to include "/foo/bar/baz" the subcomponents "/foo" and "/foo/bar" must not be excluded). The exclude patterns actually short-circuit the directory traversal stage when rsync finds the files to send. If a pattern excludes a particular parent directory, it can render a deeper include pattern ineffectual because rsync did not descend through that excluded section of the hierarchy. This is particularly important when using a trailing '*' rule. For instance, this won't work:

+ /some/path/this-file-will-not-be-found
+ /file-is-included
- *

This fails because the parent directory "some" is excluded by the '*' rule, so rsync never visits any of the files in the "some" or "some/path" directories. One solution is to ask for all directories in the hierarchy to be included by using a single rule: "+ */" (put it somewhere before the "- *" rule), and perhaps use the --prune-empty-dirs option. Another solution is to add specific include rules for all the parent dirs that need to be visited. For instance, this set of rules works fine:

+ /some/
+ /some/path/
+ /some/path/this-file-is-found
+ /file-also-included
- *
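Applied to this question, a filter set along those lines should do it. A sketch (note the trailing slash on the source, so the pattern is anchored relative to tomcat7.0 itself):

rsync -avz --delete --copy-links --include='/logs/***' --exclude='*' server1:/opt/tomcat7.0/ /path/to/destination/

The '/logs/***' rule matches both the logs directory and everything beneath it, so no extra "+ */" rule is needed, and nothing else survives the --exclude='*'.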
I'm having problems getting my rsync syntax right and I'm wondering if my scenario can actually be handled with rsync. First, I've confirmed that rsync is working just fine between my local host and my remote host. Doing a straight sync on a directory is successful.
Here's what my filesystem looks like:
uploads/
1260000000/
file_11_00.jpg
file_11_01.jpg
file_12_00.jpg
1270000000/
file_11_00.jpg
file_11_01.jpg
file_12_00.jpg
1280000000/
file_11_00.jpg
file_11_01.jpg
file_12_00.jpg
What I want to do is run rsync only on files that begin with "file_11_" in the subdirectories and I want to be able to run just one rsync job to sync all of these files in the subdirectories.
Here's the command that I'm trying:
rsync -nrv --include="**/file_11*.jpg" --exclude="*" /Storage/uploads/ /website/uploads/
This results in 0 files being marked for transfer in my dry run. I've tried various other combinations of --include and --exclude statements, but either continued to get no results or got everything as if no include or exclude options were set.
Anyone have any idea how to do this?
The problem is that --exclude="*" says to exclude (for example) the 1260000000/ directory, so rsync never examines the contents of that directory, so never notices that the directory contains files that would have been matched by your --include.
I think the closest thing to what you want is this:
rsync -nrv --include="*/" --include="file_11*.jpg" --exclude="*" /Storage/uploads/ /website/uploads/
(which will include all directories, and all files matching file_11*.jpg, but no other files), or maybe this:
rsync -nrv --include="/[0-9][0-9][0-9]0000000/" --include="file_11*.jpg" --exclude="*" /Storage/uploads/ /website/uploads/
(same concept, but much pickier about the directories it will include).
rsync include exclude pattern examples:
"*" means everything
"dir1" transfers empty directory [dir1]
"dir*" transfers empty directories like: "dir1", "dir2", "dir3", etc...
"file*" transfers files whose names start with [file]
"dir**" transfers every path that starts with [dir] like "dir1/file.txt", "dir2/bar/ffaa.html", etc...
"dir***" same as above
"dir1/*" does nothing
"dir1/**" does nothing
"dir1/***" transfers [dir1] directory and all its contents like "dir1/file.txt", "dir1/fooo.sh", "dir1/fold/baar.py", etc...
A final note: don't rely on asterisks at the beginning of a pattern for matching paths, like "**dir" (they are fine for single folder or file names, but not for paths), and note that more than two asterisks don't work for file names.
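For example, to transfer dir1 with all of its contents and nothing else (src/ and dest/ are placeholder paths):

rsync -av --include='dir1/***' --exclude='*' src/ dest/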
Here's my "teach a person to fish" answer:
Rsync's syntax is definitely non-intuitive, but it is worth understanding.
First, use -vvv to see the debug info for rsync.
$ rsync -nr -vvv --include="**/file_11*.jpg" --exclude="*" /Storage/uploads/ /website/uploads/
[sender] hiding directory 1280000000 because of pattern *
[sender] hiding directory 1260000000 because of pattern *
[sender] hiding directory 1270000000 because of pattern *
The key concept here is that rsync applies the include/exclude patterns recursively, directory by directory. For each name, the first matching include/exclude pattern wins and processing stops there.
The first directory it evaluates is /Storage/uploads, which contains the dirs 1260000000/, 1270000000/, 1280000000/. None of them match the include **/file_11*.jpg. All of them match the exclude *. So they are excluded, and rsync ends.
The solution is to include all dirs (*/) first. Then 1260000000/, 1270000000/ and 1280000000/ are included, since they match */. Descending into 1260000000/, file_11_00.jpg matches --include="file_11*.jpg", so it is included. And so forth.
$ rsync -nrv --include='*/' --include="file_11*.jpg" --exclude="*" /Storage/uploads/ /website/uploads/
./
1260000000/
1260000000/file_11_00.jpg
1260000000/file_11_01.jpg
1270000000/
1270000000/file_11_00.jpg
1270000000/file_11_01.jpg
1280000000/
1280000000/file_11_00.jpg
1280000000/file_11_01.jpg
https://download.samba.org/pub/rsync/rsync.1