BASH File Spaces - linux

Hi, it appears that if my strings have spaces in them, it won't work properly. My entire script is here:
#!/bin/bash
echo $#; echo $#
MoveToTarget() {
#This takes 2 arguments: source and target
echo ""$1" "$2""
cp -rf "$1"/* "$2"
rm -r "$1"
}
WaitForProcessToEnd() {
#This takes 1 argument. The PID to wait for
#Unlike the AutoIt version, this sleeps 1 second
while [ $(kill -0 "$1") ]; do
sleep 1
done
}
RunApplication() {
#This takes 1 argument: the path to the application to execute
open "$1"
}
#our main code block
pid="$1"
SourcePath="$2"
DestPath="$3"
ToExecute="$4"
WaitForProcessToEnd $pid
MoveToTarget "$SourcePath" "$DestPath"
RunApplication "$ToExecute"
exit
Note that I have tried the variables like $DestPath with and without quotes around them, with no luck. This code gets run with a Python script, and when the arguments are passed, quotes are around them. I appreciate any help!
Edit: (Python script)
bootstrapper_command = r'"%s" "%s" "%s" "%s" "%s"' % (bootstrapper_path, os.getpid(), extracted_path, self.app_path, self.postexecute)
shell = True
subprocess.Popen(bootstrapper_command, shell=shell)

Bash quotes are syntactic, not literal. Greg's Wiki, as usual, has the most excellent explanation you could wish for.
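A minimal sketch of what "syntactic, not literal" means in practice (the variable and directory name are invented for the example):
dir="My Folder"          # the quotes group the two words; no quote characters are stored in dir
printf '<%s>\n' $dir     # unquoted expansion word-splits: prints <My> then <Folder>
printf '<%s>\n' "$dir"   # quoted expansion stays one word: prints <My Folder>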

Try removing the *, it isn't needed for recursive copy.
cp -rf "$1"/* "$2"
to:
cp -rf "$1/" "$2"
I think globbing was ruining your quoting that was protecting you from spaces in filenames.
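A quick, hedged way to test the function in isolation with space-containing paths (the directory names are invented, and this assumes the MoveToTarget function from the question, with the cp line changed as above, has been sourced):
# hedged test harness: exercise MoveToTarget with spaces in both paths
mkdir -p "/tmp/My Source Dir" "/tmp/My Target Dir"
touch "/tmp/My Source Dir/a file.txt"
MoveToTarget "/tmp/My Source Dir" "/tmp/My Target Dir"
ls -l "/tmp/My Target Dir"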

Related

Bash Loop with counter gives a count of 1 when no item found. Why?

In the function below my counter works fine as long as an item is found in $DT_FILES. If the find is empty for that folder the counter gives me a count of 1 instead of 0. I am not sure what I am missing.
What the script does here is: 1) make a variable containing all the parent folders; 2) loop through each folder, cd inside it, and make a list of all files that contain the string "-DT-"; 3) if it finds a file that doesn't end with ".tif", copy the DT file and give the copy a .tif extension. Very simple.
I count the number of times the loop did create a new file with the ".tif" extension.
So I am not sure why I am getting a count of 1 at times.
function create_tifs()
{
IFS=$'\n'
# create list of main folders
LIST=$( find . -maxdepth 1 -mindepth 1 -type d )
for f in $LIST
do
echo -e "\n${OG}>>> Folder processed: ${f} ${NONE}"
cd ${f}
DT_FILES=$(find . -type f -name '*-DT-*' | grep -v '.jpg')
if (( ${#DT_FILES} ))
then
count=0
for b in ${DT_FILES}
do
if [[ "${b}" != *".tif" ]]
then
# cp -n "${b}" "${b}.tif"
echo -e "TIF created ${b} as ${b}.tif"
echo
((count++))
else
echo -e "TIF already done ${b}"
fi
done
fi
echo -e "\nCount = ${count}"
}
I can't repro your problem, but your code contains several dubious constructs. Here is a refactoring that might coincidentally also remove whatever problem you were experiencing.
#!/bin/bash
# Don't use non-portable function definition syntax
create_tifs() {
# Don't pollute global namespace; don't attempt to parse find output
# See also https://mywiki.wooledge.org/BashFAQ/020
local f
for f in ./*/; do
# prefer printf over echo -e
# print diagnostic messages to standard error >&2
# XXX What are these undeclared global variables?
printf "\n%s>>> Folder processed: %s %s" "$OG" "$f" "$NONE" >&2
# Again, avoid parsing find output
find "$f" -name '*-DT-*' -not -name '*.jpg' -exec sh -c '
for b; do
if [[ "${b}" != *".tif" ]]
then
# cp -n "${b}" "${b}.tif"
printf "TIF created %s as %s.tif\n" "$b" "$b" >&2
# print one line for wc
printf ".\n"
else
# XXX No newline, really??
printf "TIF already done %s" "$b" >&2
fi
done' _ {} +
# Missing done!
done |
# Count lines produced by printf inside tif creation
wc -l |
sed 's/.*/Count = &/'
}
This could be further simplified by using find ./*/ instead of looping over f but then you don't (easily) get to emit a diagnostic message for each folder separately. Similarly, you could add -not -name '*.tif' but then you don't get to print "tif already done" for those.
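For reference, a hedged sketch of that simplified form (single find over all subdirectories, with -not -name '*.tif' added, so there is no per-folder message and no "tif already done" line):
# hedged sketch: one find over every subdirectory, one count at the end
find ./*/ -type f -name '*-DT-*' -not -name '*.jpg' -not -name '*.tif' -exec sh -c '
    for b; do
        # cp -n "$b" "$b.tif"
        printf "TIF created %s as %s.tif\n" "$b" "$b" >&2
        printf ".\n"
    done' _ {} + |
wc -l |
sed 's/.*/Count = &/'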
Tangentially perhaps see also Correct Bash and shell script variable capitalization; use lower case for your private variables.
Printing a newline before your actual message (like in the first printf) is a weird antipattern, especially when you don't do it consistently. The usual arrangement is to put a newline at the end of each emitted message.
If you've got Bash 4.0 or later you can use globstar instead of (the error-prone) find. Try this Shellcheck-clean code:
#! /bin/bash -p
shopt -s dotglob extglob nullglob globstar
function create_tifs
{
local dir dtfile
local -i count
for dir in */; do
printf '\nFolder processed: %s\n' "$dir" >&2
count=0
for dtfile in "$dir"**/*-DT-!(*.jpg); do
if [[ $dtfile == *.tif ]]; then
printf 'TIF already done %s\n' "$dtfile" >&2
else
cp -v -n -- "$dtfile" "$dtfile".tif
count+=1
fi
done
printf 'Count = %d\n' "$count" >&2
done
return 0
}
shopt -s ... enables some Bash settings that are required by the code:
dotglob enables globs to match files and directories whose names begin with a dot (.). find shows such files by default.
extglob enables "extended globbing" (including patterns like !(*.jpg)). See the extglob section in glob - Greg's Wiki.
nullglob makes globs expand to nothing when nothing matches (otherwise they expand to the glob pattern itself, which is almost never useful in programs).
globstar enables the use of ** to match paths recursively through directory trees.
Note that globstar is potentially dangerous in versions of Bash prior to 4.3 because it follows symlinks, possibly leading to processing the same file or directory multiple times, or getting stuck in a cycle.
The -v option with cp causes it to print details of what it does. You might prefer to drop the option and print a different format of message instead.
See the accepted, and excellent, answer to Why is printf better than echo? for an explanation of why I used printf instead of echo.
I didn't use cd because it often leads to problems in programs.
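As a small, hedged illustration of how those shopt settings and the !( ) glob pattern work together (the file and directory names are invented for the example):
#!/bin/bash
shopt -s dotglob extglob nullglob globstar
# suppose these files exist:
#   scans/2020/a-DT-scan   scans/2020/b-DT-scan.jpg   scans/.hidden/c-DT-scan
printf '%s\n' scans/**/*-DT-!(*.jpg)
# expected output: scans/2020/a-DT-scan and scans/.hidden/c-DT-scan;
# the .jpg file is excluded by !(*.jpg), the dot-directory is reached thanks to dotglob,
# and with nullglob a pattern that matches nothing would simply expand to nothing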

bash: set variable inside loop when piping find and grep [duplicate]

I want to perform a computation on all *.bin files inside a given directory. Initially I was working with a for loop:
var=0
for i in `ls *.bin`
do
perform computations on $i ....
var+=1
done
echo $var
However, in some directories there are too many files resulting in an error: Argument list too long
Therefore, I was trying it with a piped while-loop:
var=0
ls *.bin | while read i;
do
perform computations on $i
var+=1
done
echo $var
The problem now is that, because of the pipe, subshells are created. Thus, echo $var returns 0.
How can I deal with this problem?
The original Code:
#!/bin/bash
function entropyImpl {
if [[ -n "$1" ]]
then
if [[ -e "$1" ]]
then
echo "scale = 4; $(gzip -c ${1} | wc -c) / $(cat ${1} | wc -c)" | bc
else
echo "file ($1) not found"
fi
else
datafile="$(mktemp entropy.XXXXX)"
cat - > "$datafile"
entropy "$datafile"
rm "$datafile"
fi
return 1
}
declare acc_entropy=0
declare count=0
ls *.bin | while read i ;
do
echo "Computing $i" | tee -a entropy.txt
curr_entropy=`entropyImpl $i`
curr_entropy=`echo $curr_entropy | bc`
echo -e "\tEntropy: $curr_entropy" | tee -a entropy.txt
acc_entropy=`echo $acc_entropy + $curr_entropy | bc`
let count+=1
done
echo "Out of function: $count | $acc_entropy"
acc_entropy=`echo "scale=4; $acc_entropy / $count" | bc`
echo -e "===================================================\n" | tee -a entropy.txt
echo -e "Accumulated Entropy:\t$acc_entropy ($count files processed)\n" | tee -a entropy.txt
The problem is that the while loop is part of a pipeline. In a bash pipeline, every element of the pipeline is executed in its own subshell [ref]. So after the while loop terminates, the while loop subshell's copy of var is discarded, and the original var of the parent (whose value is unchanged) is echoed.
One way to fix this is by using Process Substitution as shown below:
var=0
while read i;
do
# perform computations on $i
((var++))
done < <(find . -type f -name "*.bin" -maxdepth 1)
Take a look at BashFAQ/024 for other workarounds.
Notice that I have also replaced ls with find because it is not good practice to parse ls.
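One of the workarounds listed there, for the record, is bash's lastpipe option; here is a minimal sketch, assuming bash 4.2 or later and a non-interactive script (lastpipe needs job control to be off, which it is in scripts by default):
#!/bin/bash
# hedged sketch: with lastpipe set (and job control off, as in a normal script),
# the last element of a pipeline runs in the current shell, so the variable
# assigned inside the loop survives it
shopt -s lastpipe
var=0
find . -maxdepth 1 -type f -name '*.bin' | while read -r i; do
    # perform computations on "$i"
    ((var++))
done
echo "$var"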
A POSIX-compliant solution would be to use a named pipe (FIFO). This solution is very nice, portable, and POSIX, but it writes something to the hard disk.
mkfifo mypipe
find . -type f -name "*.bin" -maxdepth 1 > mypipe &
while read line
do
# action
done < mypipe
rm mypipe
Your pipe is a file on your hard disk. If you want to avoid having useless files, do not forget to remove it.
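If you want the cleanup to happen even when the script exits early, a hedged variant using a temporary directory and an EXIT trap might look like this (mktemp -d is widely available but not strictly mandated by POSIX):
#!/bin/sh
# hedged sketch: put the FIFO in a temporary directory and remove it automatically
tmpdir=$(mktemp -d) || exit 1
trap 'rm -rf "$tmpdir"' EXIT
mkfifo "$tmpdir/mypipe"
find . -maxdepth 1 -type f -name "*.bin" > "$tmpdir/mypipe" &
var=0
while read -r line
do
    # action, e.g. perform computations on "$line"
    var=$((var + 1))
done < "$tmpdir/mypipe"
echo "$var"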
Researching the generic issue of passing variables from a subshelled while loop back to the parent, one solution I found, missing here, was to use a here-string. As that is bash-ish and I preferred a POSIX solution, I noted that a here-string is really just a shortcut for a here-document. With that knowledge at hand, I came up with the following, which avoids the subshell and thus allows variables to be set in the loop.
#!/bin/sh
set -eu
passwd="username,password,uid,gid
root,admin,0,0
john,appleseed,1,1
jane,doe,2,2"
main()
{
while IFS="," read -r _user _pass _uid _gid; do
if [ "${_user}" = "${1:-}" ]; then
password="${_pass}"
fi
done <<-EOT
${passwd}
EOT
if [ -z "${password:-}" ]; then
echo "No password found."
exit 1
fi
echo "The password is '${password}'."
}
main "${#}"
exit 0
One important note to all copy-pasters: the here-document is set up using the hyphen (<<-), indicating that leading tabs are to be ignored. This is needed to keep the layout somewhat nice. It is important to note because Stack Overflow doesn't render tabs in 'code' and replaces them with spaces. Grmbl. SO, don't mangle my code just because you guys favor spaces over tabs; it's irrelevant in this case!
This probably breaks with different editors (and editor settings) and whatnot, so the alternative would be to have it as:
done <<-EOT
${passwd}
EOT
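For comparison, the bash-only here-string mentioned above avoids the whole tab/indentation issue; a minimal sketch (not POSIX sh, and the sample data is invented):
#!/bin/bash
# hedged sketch: the same lookup, but fed by a bash-only here-string (<<<),
# so there is no subshell and no tab/indentation concern at all
passwd="username,password,uid,gid
root,admin,0,0"
password=
while IFS="," read -r _user _pass _uid _gid; do
    [ "${_user}" = "${1:-}" ] && password="${_pass}"
done <<< "${passwd}"
echo "The password is '${password:-<not found>}'."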
This could be done with a for loop, too:
var=0;
for file in `find . -type f -name "*.bin" -maxdepth 1`; do
# perform computations on "$file"
((var++))
done
echo $var

Bash Programming how to call "$i+1"

I am writing a script that will allow me to change a char in a string from "#" to something else, if I call an argument in terminal.
eg if I write
./myprogram testText.txt -r a
the -r argument will remove all "#" from testText.txt and replace them with "a"
My problem is I do not know how to write "If -r is $x, $x+1 is the char I want for replacement"
This is purely a syntax problem, I'm a bash noob :P. Here is the part of code I'm trying to work with.
for i in $*
do
if [[ $i = "-r" ]]
then
$customHashChoice=$((i+1))
# ^^^^^ Problematic Line ^^^^
fi
done
Try this:
customHashChoice=($(getopt "r:" "$@" 2>/dev/null))
if [ "${customHashChoice[0]}" == "-r" ]; then
customHashChoice="${customHashChoice[1]}"
else
echo "-r option is missing. Aborting..."
exit 1
fi
Syntax: getopt optstring parameters
From manual: getopt is used to break up (parse) options in command lines for easy parsing by shell procedures, and to check for legal options. It uses the GNU getopt(3) routines to do this.
Here, optstring is r:. It means, that the script accepts an option -r & the option takes an argument (implied by :).
The output of getopt "r:" "$@" is as below:
-r <argument to -r option> -- <unmatched parameters>
e.g. for command-line arguments,
./myprogram testText.txt -r a
getopt "r:" "$#" returns
-r a -- testText.txt
This output is stored in an array, and the second element of the array is used if the first element is equal to -r.
i=1
while [ "$i" -le $# ]
do
if [[ ${!i} = "-r" ]]
then
i=$(($i + 1))
customHashChoice=${!i}
i=$(($i + 1))
continue
fi
# do something useful
i=$(($i + 1))
done
The command line arguments are numbered 1 through $#. The above loops through each of them. It first checks whether the current argument is -r and, if so, sets customHashChoice.
In the above, i contains the argument number. So, $i gives the value of i. To access the i'th command line argument, one uses ${!i}.
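A tiny illustration of that indirection, with invented positional parameters:
set -- testText.txt -r a   # fake positional parameters for the demo
i=2
echo "${!i}"               # prints -r  (the value of the 2nd positional parameter)
i=3
echo "${!i}"               # prints a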
A more standard approach
The standard way to process command line arguments in shell scripts is getopts. It can handle many options. Here is sample code that takes an option -r and requires it to have an argument, which is assigned to the shell variable char (the leading colon in the option string puts getopts into silent mode, so the :) and \?) branches below are the ones that report errors):
while getopts :r: arg ; do case $arg in
r) char="$OPTARG" ;;
:) echo "${0##*/}: Must supply an argument to $OPTARG." ; exit 1 ;;
\?) echo "Invalid option" ; exit 1 ;;
esac
done
shift $(($OPTIND - 1))
echo "I will replace # with $char in file $1"
For getopts to work, the options have to come first. So, your command line would become:
./myprogram -r a testText.txt
If this is not acceptable, you can roll your own custom option processor. In the long run, there is some advantage, however, to standardizing on the usual approach.
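Tying this back to the original goal: once getopts has set char and shift has left the file name in $1, the actual replacement might look like this (a hedged sketch; tr is used to sidestep sed delimiter and escaping issues, and it behaves the same on BSD and GNU):
# hedged sketch: replace every '#' in the file named by $1 with $char,
# reusing the variables set by the getopts snippet above
tr '#' "$char" < "$1" > "$1.tmp" && mv -- "$1.tmp" "$1"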
You could do something like the following:
#!/bin/bash
val=
xval=
fname=$1
while [ "$*" != "" ]; do
case $1 in
"-r") val="${2}"; shift ;;
"-x") xval="${2}"; shift ;;
esac
shift
done
echo ${fname} ${val} ${xval}
Then when you pass the command like so
./myprogram testText.txt -r a
fname will be testText.txt, and the arguments will be parsed (where the -r will pick up a); for any other values you might want to parse, you'll need variable names to assign and test against. The output would be:
testText.txt a
Hope that helps

Find all files where no part of the path of the file is a symbolic link

Is there an easy way to find all files where no part of the path of the file is a symbolic link?
Short:
find myRootDir -type f -print
This would answer the question.
Take care not to add a slash at the end of the specified dir (not myRootDir/ but myRootDir).
This won't print anything other than real files in a real path:
no symlinked files, and no files in symlinked dirs.
But...
If you want to ensure that no part of a specified dir's path is a symlink, there is a little bash function that could do the job:
isPurePath() {
if [ -d "$1" ];then
while [ ! -L "$1" ] && [ ${#1} -gt 0 ] ;do
set -- "${1%/*}"
if [ "${1%/*}" == "$1" ] ;then
[ ! -L "$1" ] && return
set -- ''
fi
done
fi
false
}
if isPurePath /usr/share/texmf/dvips/xcolor ;then echo yes; else echo no;fi
yes
if isPurePath /usr/share/texmf/doc/pgf ;then echo yes; else echo no;fi
no
So you could find all files where no part of the path of the file is a symbolic link by running this command:
isPurePath myRootDir && find myRootDir -type f -print
So if something is printed, there is no symlink in any part of the path!
You can use this script (copy/paste the whole code into a shell):
cat<<'EOF'>sympath
#!/bin/bash
cur="$1"
while [[ $cur ]]; do
cur="${cur%/*}"
if test -L "$cur"; then
echo >&2 "$cur is a symbolic link"
exit 1
fi
done
EOF
${cur%/*} is a bash parameter expansion
EXAMPLE
chmod +x sympath
./sympath /tmp/foo/bar/base
/tmp/foo/bar is a symbolic link
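To apply it over a whole tree, you could combine it with find, much as in the other answers here (a hedged sketch; the stderr redirect silences sympath's diagnostic so only the clean paths are printed):
find myRootDir -type f -exec ./sympath {} \; -print 2>/dev/null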
I don't know of any easy way, but here's an answer that fully answers your question, using two methods (which are, in fact, essentially the same):
Using an auxiliary script
Create a file called hasnosymlinkinname (or choose a better name --- I've always sucked at choosing names):
#!/bin/bash
name=$1
if [[ "$1" = /* ]]; then
name="$(pwd)/$1"
else
name=$1
fi
IFS=/ read -r -a namearray <<< "$name"
for ((i=0;i<${#namearray[@]}; ++i)); do
IFS=/ read name <<< "${namearray[*]:0:i+1}"
[[ -L "$name" ]] && exit 1
done
exit 0
Then chmod +x hasnosymlinkinname. Then use with find:
find /path/where/stuff/is -exec ./hasnosymlinkinname {} \; -print
The script works like this: using IFS trickery, we decompose the filename into each part of the path (separated by the /) and put each part in an array namearray. Then, we loop through the (cumulative) parts of the array (joined with the / thanks to some IFS trickery) and if this part is a symlink (see the -L test), we exit with a non-success return code (1); otherwise, we exit with a success return code (0).
Then find runs this script on all files in /path/where/stuff/is. If the script exits with a success return code, the name of the file is printed out (but instead of -print you could do whatever else you like).
Using a one(!)-liner (if you have a large screen) to impress your grand-mother (or your dog)
find /path/where/stuff/is -exec bash -c 'if [[ "$0" = /* ]]; then name=$0; else name="$(pwd)/$0"; fi; IFS=/ read -r -a namearray <<< "$name"; for ((i=0;i<${#namearray[@]}; ++i)); do IFS=/ read name <<< "${namearray[*]:0:i+1}"; [[ -L "$name" ]] && exit 1; done; exit 0' {} \; -print
Note
This method is 100% safe regarding spaces or funny symbols that could appear in file names. I don't know how you'll use the output of this command, but please make sure that you'll use a good method that will also be safe regarding spaces and funny symbols that could appear in a file name, i.e., don't parse its output with another script unless you use -print0 or similar smart thing.

about the Linux Shell 'While' command

code:
path=$PATH:
while [ -n $path ]
do
ls -ld ${path%%:*}
path=${path#*:}
done
I want to get each part of the path. When I run the script, it never gets out of the while loop. Please tell me why. Is there some problem with 'while [ -n $path ]'?
The final cut never results in an empty string. If you have a:b:c, you'll strip off the a and then the b, but never the c. I.e., this:
${path#*:}
will always result in a non-empty string for the last piece of the path. Since the -n check succeeds as long as the string is non-empty, your loop runs forever.
If $path doesn't have a colon in it, ${path#*:} will return $path. So you have an infinite loop.
p="foo"
$ echo ${p#*:}
foo
$ p="foo:bar"
$ echo ${p#*:}
bar
You have some bugs in your code. This should do the trick:
path=$PATH
while [[ $path != '' ]]; do
# you can replace echo with whatever you need, like ls -ld
echo ${path%%:*}
if echo $path | grep ':' >/dev/null; then
path=${path#*:}
else path=''
fi
done
Your path, once it is initialized, will always test true in the unquoted [ -n $path ] check. This is the main reason why you never get out of the while loop.
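A quick illustration of that, for the record (a sketch):
empty=""
[ -n $empty ] && echo "true: unquoted, the test collapses to [ -n ] and succeeds"
[ -n "$empty" ] || echo "false: quoted, the test really sees an empty string"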
