Bash shell script function gives "find: missing argument to `-exec'" error - linux

I wrote a function in a Bash shell script to search a Linux tree for filenames matching a pattern containing a regular expression, with colour highlighting:
function ggrep {
LS_="ls --color {}|sed s~./~~"
[ -n "$1" -a "$1" != "*" ] && NAME_="-iname $1" || NAME_=
[ -n "$2" ] && EXEC_="egrep -q \"$2\" \"{}\" && $LS_ && egrep -n \"$2\" --color=always \"{}\"|sed s~^B~\ B~" || EXEC_=$LS_
FIND_="find . -type f $NAME_ -exec sh -c \"$EXEC_\" \\;"
echo -e \\e[7m $FIND_ \\e[0m
$FIND_
}
e.g. ggrep a* lists all files starting with a under the current directory tree,
and ggrep a* x lists files starting with a and containing x.
When I run it, I get:
find: missing argument to `-exec'
even though I get the correct output when I copy and paste the line output by "echo" into the terminal. Can anyone please tell me what I've done wrong?
Secondly, it would be great if ggrep * x listed all files containing x, but * expands to a list of filenames and I need to use \* or '*' instead. Is there a way around this? Thanks!

Terminate the find command with \; instead of \\;.
find . -type f $NAME_ -exec sh -c \"$EXEC_\" \;

eval $FIND_
in the last line of the function body works fine for me.
Expansions in Bash are generally not recursive, so if you load a command from a variable, you should always use eval to force the expanded string to be reprocessed as if it were fresh input. Quotes inside an already-expanded string are treated as literal characters, not as quoting.
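A minimal throwaway illustration of the difference (not part of the original function):
#!/bin/bash
# A command string that contains quotes:
cmd='echo "hello   world"'

# Plain expansion: the quotes are NOT re-parsed, they are passed along
# as literal characters, so this prints: "hello world" (quotes included)
$cmd

# eval reprocesses the expanded string as fresh shell input,
# so the quotes are honored and the triple space is preserved:
eval "$cmd"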
To your second problem, I think there is no satisfactory solution. The shell will always expand * before passing it to anything controlled by you. You can disable this expansion, but that is a global setting. Anyway, I think that this expansion could actually act in favor of your function. Consider rewriting it in a way that takes advantage of it. (I did not analyze whether the current version was close to that or not.)

Related

how to pass asterisk into ls command inside bash script

Hi… Need a little help here…
I tried to emulate the DOS' dir command in Linux using bash script. Basically it's just a wrapped ls command with some parameters plus summary info. Here's the script:
#!/bin/bash
# default to current folder
if [ -z "$1" ]; then var=.;
else var="$1"; fi
# check file existence
if [ -a "$var" ]; then
# list contents with color, folder first
CMD="ls -lgG $var --color --group-directories-first"; $CMD;
# sum all files size
size=$(ls -lgGp "$var" | grep -v / | awk '{ sum += $3 }; END { print sum }')
if [ "$size" == "" ]; then size="0"; fi
# create summary
if [ -d "$var" ]; then
folder=$(find $var/* -maxdepth 0 -type d | wc -l)
file=$(find $var/* -maxdepth 0 -type f | wc -l)
echo "Found: $folder folders "
echo " $file files $size bytes"
fi
# error message
else
echo "dir: Error \"$var\": No such file or directory"
fi
The problem is that when the argument contains an asterisk (*), the ls within the script behaves differently compared to the same ls command given directly at the prompt. Instead of returning the whole file list, the script only returns the first file. See the video below for the comparison in action. I don't know why it behaves like that.
Anyone knows how to fix it? Thank you.
Video: problem in action
UPDATE:
The problem has been solved. Thank you all for the answers. Now my script works as expected. See the video here: http://i.giphy.com/3o8dp1YLz4fIyCbOAU.gif
The asterisk * is expanded by the shell when it parses the command line. In other words, your script doesn't get a parameter containing an asterisk, it gets a list of files as arguments. Your script only works with $1, the first argument. It should work with "$@" instead.
This is because when you retrieve $1 you assume the shell does NOT expand *.
In fact, when * (or another glob) matches, it is expanded into one word per matching file, and those words are passed as $1, $2, etc.
You're lucky you simply got the first file. If a file's path contains spaces, the unquoted $CMD expansion in your script will additionally break it apart at the spaces, and you'll get an error.
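A minimal sketch of that fix, keeping only the listing part of the script and looping over every argument instead of just $1:
#!/bin/bash
# Sketch: handle every argument the shell passed in, not just $1.
# Default to the current directory when no argument was given.
[ $# -eq 0 ] && set -- .
for var in "$@"; do
    if [ -e "$var" ]; then
        ls -lgG --color --group-directories-first -- "$var"
    else
        echo "dir: Error \"$var\": No such file or directory"
    fi
done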
Seriously, read this and especially this. Really.
And please don't do things like
CMD=whatever you get from user input; $CMD;
You are begging for trouble. Don't execute arbitrary string from the user.
Both answers above already answered your question, so I'm going to be a bit more verbose.
Your terminal is (probably) running the bash interpreter. This is the program that parses your input line(s) and does things based on your input.
When you enter some line, bash starts the following workflow:
parsing and lexical analysis
expansion
brace expansion
tilde expansion
variable expansion
arithmetic and other substitutions
command substitution
word splitting
filename generation (globbing)
removing quotes
Only after all of the above will bash
execute some external command, like ls or dir.sh, etc.,
or perform some internal action for the known keywords and builtins like echo, for, if, etc.
As you can see, the second-to-last step is filename generation (globbing). So, in your case, if test* matches some files, bash expands the wildcard characters (i.e. does the globbing).
So,
when you enter dir.sh test*,
and test* matches some files,
bash does the expansion first,
and afterwards executes the command dir.sh with the already-expanded filenames,
e.g. the script gets executed (in your case) as: dir.sh test.pas test.swift
BTW, it works exactly the same way for your ls example:
bash expands ls test* to ls test.pas test.swift,
then executes ls with the above two arguments,
and ls prints the result for those two arguments.
In other words, ls doesn't even see the test* argument - whenever possible, bash expands the wildcard characters (* and ?).
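You can watch this happen with a throwaway echo (reusing the two example files above):
# echo shows exactly what the command to its right would receive:
echo dir.sh test*
# prints: dir.sh test.pas test.swift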
Now back to your script: add after the shebang the following line:
echo "the $0 got this arguments: $#"
and you will immediately see the real arguments your script was executed with.
In such cases it is also good practice to run the script in debug mode, e.g.
bash -x dir.sh test*
and you will see exactly what the script does.
Also, you can do the same for your current interpreter; just enter into the terminal
set -x
and try running dir.sh test* - and you will see how bash executes the dir.sh command. (To stop the debug mode, just enter set +x.)
Everybody is giving you valuable advice which you should definitely follow!
But here is the real answer to your question.
To pass unexpanded arguments to any executable you need to single quote them:
./your_script '*'
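Inside the script you can then let the shell expand the pattern at the point where you actually want the file list. A rough sketch (assuming the caller quoted the pattern as above; note the deliberately unquoted expansion also word-splits, which is fine for a sketch):
#!/bin/bash
# $1 arrives as the literal pattern, e.g. *, because the caller quoted it.
pattern=$1
# Expanding the variable unquoted performs the globbing here, in the script:
for f in $pattern; do
    echo "matched: $f"
done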
The best solution I have is to use the eval command, in this way:
#!/bin/bash
cmd="some command \"with_quetes_and_asterisk_in_it*\""
echo "$cmd"
eval $cmd
The eval command takes its arguments and evaluates them as the shell would a freshly entered command line.
This solves my problem when I need to call a command with asterisk '*' in it from a script.

listing file in unix and saving the output in a variable(Oldest File fetching for a particular extension)

This might be a very simple thing for a shell scripting programmer, but I'm pretty new to it. I was trying to execute the command below in a shell script and save the output into a variable
inputfile=$(ls -ltr *.{PDF,pdf} | head -1 | awk '{print $9}')
The command works fine when I run it from the terminal but fails when executed through a shell script (sh). Why does the command fail? Does it mean the shell script doesn't support the command, or am I doing it wrong? Also, how do I know whether a command will work in a shell script or not?
Just to give you a glimpse of my requirement, I was trying to get the oldest file from a particular directory (I also want to make sure upper case and lower case extensions are handled). Is there any other way to do this ?
The above command will work correctly only if BOTH *.pdf and *.PDF files are present in the directory you are currently in.
If you would like to execute it in a directory with only one of those you should consider using e.g.:
inputfiles=$(find . -maxdepth 1 -type f \( -name "*.pdf" -or -name "*.PDF" \) | xargs ls -1tr | head -1 )
NOTE: The above command doesn't work with filenames containing newlines, or with long lists of found files.
Parsing ls is always a bad idea. You need another strategy.
How about you make a function that gives you the oldest file among the ones given as arguments? The following works in Bash (adapt to your needs):
get_oldest_file() {
# get oldest file among files given as parameters
# return is in variable get_oldest_file_ret
local oldest f
for f do
[[ -e $f ]] && [[ ! $oldest || $f -ot $oldest ]] && oldest=$f
done
get_oldest_file_ret=$oldest
}
Then just call as:
get_oldest_file *.{PDF,pdf}
echo "oldest file is: $get_oldest_file_ret"
Now, you probably don't want to use brace expansions like this at all. In fact, you very likely want to use the shell options nocaseglob and nullglob:
shopt -s nocaseglob nullglob
get_oldest_file *.pdf
echo "oldest file is: $get_oldest_file_ret"
If you're using a POSIX shell, it's going to be a bit trickier to have the equivalent of nullglob and nocaseglob.
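A rough POSIX-shell sketch of the same idea (the function name is hypothetical; note that the -ot file test is a widespread extension supported by dash and others, but not strictly required by POSIX):
# Sketch: oldest *.pdf / *.PDF file in the current directory, POSIX-style.
get_oldest_file_posix() {
    oldest=
    for f in ./*.pdf ./*.PDF; do
        [ -e "$f" ] || continue   # skip a pattern that matched nothing
        if [ -z "$oldest" ] || [ "$f" -ot "$oldest" ]; then
            oldest=$f
        fi
    done
    printf '%s\n' "$oldest"
}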
Is perl an option? It's ubiquitous on Unix.
I would suggest:
perl -e 'print ((sort { -M $b <=> -M $a } glob ( "*.{pdf,PDF}" ))[0]);';
Which:
uses glob to fetch all files matching the pattern,
sorts, using -M, which is the relative modification age (in days),
fetches the first element ([0]) off the sort,
and prints that.
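To capture the result in a shell variable, the one-liner slots straight into a command substitution (a sketch reusing the question's variable name):
# Sketch: store the oldest matching file in the question's variable.
inputfile=$(perl -e 'print ((sort { -M $b <=> -M $a } glob ( "*.{pdf,PDF}" ))[0]);')
echo "oldest file is: $inputfile"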
As @gniourf_gniourf says, parsing ls is a bad idea: it means leaving globs unquoted, and generally not accounting for funny characters in file names.
find is your friend:
#!/bin/sh
get_oldest_pdf() {
#
# echo path of oldest *.pdf (case-insensitive) file in current directory
#
find . -maxdepth 1 -mindepth 1 -iname "*.pdf" -printf '%T@ %p\n' \
| sort -n \
| tail -1 \
| cut -d' ' -f2-
}
whatever=$(get_oldest_pdf)
Notes:
find has numerous ways of formatting the output, including
things like access time and/or write time. I used '%T@ %p\n',
where %T@ is the last write time in UNIX time format, including the fractional part.
This will never contain a space, so it's safe to use as a separator.
The numeric sort and tail get the last item, sorting by the time;
cut removes the time from the output.
I used pipe notation (IMO much easier to read and maintain), with the help of \.
The shell code should run on any POSIX shell (though -maxdepth, -mindepth and -printf are GNU find extensions).
You could easily adjust the function to parametrize the pattern,
the time used (access/write), the search depth, or the starting dir.

If multiple directories exist then move the directories - test if a globbing pattern matches anything

I want to know how I can use an if statement in a shell script to check the existence of multiple directories.
For example, if /tmp has subdirectories test1, test2, test3, I want to move them to another directory.
I am using if [ -d /tmp/test* ]; then mv test* /pathOfNewDir
but it does not work on the if statement part.
The -d test only accepts one argument, so you'll need to test each directory individually. I would also not recommend moving test* as it may match more than you intended.
Use the double-bracket test syntax (e.g. if [[ -d...), which is bash-specific but tends to be clearer and have fewer gotchas than the single-bracket syntax. If you just need to check a few directories, you can do it with a simple statement like if [[ -d /tmp/test1 && -d /tmp/test2 && -d /tmp/test3 ]]; then...
Unfortunately, the shell's file-testing operators (such as -d and -f) operate on a single, literal path only:
A conditional such as [ -d /tmp/test* ] won't work, because if /tmp/test* expands to multiple matches, you'll get a syntax error (only 1 argument accepted).
The bash variant [[ -d /tmp/test* ]] doesn't work either, because no globbing (pathname expansion) is performed inside [[ ... ]].
To test whether a globbing pattern matches anything, the cleanest approach is to define an auxiliary function (this solution is POSIX-compliant):
exists() { [ -e "$1" ]; }
Invoke it with an [unquoted] pattern, e.g.:
exists foo* && echo 'HAVE MATCHES'
# or, in an `if` statement:
if exists foo*; then # ...
The only caveat is that if shopt -s failglob is in effect in bash, an error message will be printed to stderr if there's no match, and the rest of the command will not be executed.
See below for an explanation of the function.
Applied to your specific scenario, we get (using bash syntax):
# Define aux. function
exists() { [[ -e $1 ]]; }
exists /tmp/test*/ && mv /tmp/test*/ /path/to/new/dir
Note the trailing / in /tmp/test*/ to ensure that only directories match, if any.
&& ensures that the following command is only executed if the function's exit code indicates true.
mv /tmp/test*/ ... moves all matching directories at once to the new target directory.
Alternatively, capture the globbing results in a helper array variable:
if matches=(/tmp/test*/) && [[ -e ${matches[0]} ]]; then
mv "${matches[#]}" /path/to/new/dir
fi
Or, process matches individually:
for d in /tmp/test*/; do
[[ -e $d ]] || break # exit, if no actual match
# Process individual match.
mv "$d" /path/to/new/dir
done
Explanation of auxiliary function exists() { [ -e "$1" ]; }:
It takes advantage of several shell features:
If you invoke it with a[n unquoted] pattern such as exists foo*, the shell will expand foo* to all matching files/directories and pass their names as individual arguments to the function.
If there are no matches, the pattern will be passed as is to the function - this behavior is mandated by POSIX.
Caveat: bash has configuration items that allow changing this behavior (shell options failglob and nullglob) - though by default it acts as mandated by POSIX in this case. (zsh, sadly, by default fails if there's no match.)
Inside the function, it's sufficient to examine the 1st argument ($1) to determine whether any matches were found:
If the 1st argument, $1, refers to an actual, existing filesystem item (as indicated by the exit code of the -e file-test operator), the implication is that the pattern indeed matched something (at least one, possibly more items).
Otherwise, the implication is that the pattern was passed as is, implying that no matches were found.
Note that the exit code of the -e test - due to being the last command in the function - implicitly serves as the exit code of the function as a whole.
It looks like you may want to use find:
find /tmp -maxdepth 1 -type d -name "test*" -exec mv {} /target/directory \;
This finds all test* directories directly under /tmp without recursion and moves them to /target/directory.
This approach uses ls and grep to create a list of matching directories or write an error in case no such directories are found:
IFS="
" # input is separated with newlines
if dirs=$( ls -1 -F | grep "^test.*/" | tr -d "/" )
then
# directories found - move them:
for d in $dirs
do
mv "$d" "$target_directory"/
done
else
# no directories found - send error
fi
While it would seem feasible to use find for such a task, find does not directly provide feedback on the number of matches as required by the OP according to the comments.
Note: Using ls for the task introduces a few limitations on filenames. This approach will not work with filenames containing newlines or wildcard characters.
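If you do want feedback on the number of matches without parsing ls, one alternative sketch (assuming bash and a find with -print0 support, such as GNU find) collects the directories into an array first:
# Sketch: collect matching directories NUL-safely, then test how many matched.
dirs=()
while IFS= read -r -d '' d; do
    dirs+=("$d")
done < <(find /tmp -mindepth 1 -maxdepth 1 -type d -name 'test*' -print0)

if [ "${#dirs[@]}" -gt 0 ]; then
    mv -- "${dirs[@]}" "$target_directory"/
else
    echo "no matching directories found" >&2
fi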

"For" loop in bash script only run once

The goal of the script is simple.
I have many directories which contain some captured traffic files.
I want to run a command for each directory, so I came up with a script. But I don't know why the script only runs for the first match.
#!/bin/bash
# Collect throughput from a group of directory containing capture files
# Group of directory can be specify by pattern
# Usage: ./collectThroughputList [regex]
# [regex] is the name pattern of the group of directory
for DIR in $( ls -d $1 ); do
if test -d "$DIR"; then
echo Collecting throughputs from directory: "$DIR"
( sh collectThroughput.sh $DIR > $DIR.txt )
fi
done
echo Done\!
I try it with:
for DIR in $1; do
or
for DIR in `ls -d $1`; do
or
for DIR in $( ls -d "$1" ); do
or
for DIR in $( ls -d $1 ); do
But the result is the same: the for loop runs only one time.
Finally I found this one and did some tricks to make it work. However, I would like to know why my first script doesn't work.
find *Delay50ms* -type d -exec bash -c "cd '{}' && echo enter '{}' && ../collectThroughput.sh ../'{}' > ../'{}'.txt" \;
"*Delay*" is the directory pattern name that I want to run the command with.
Thanks for pointing out the issues.
Since you want to find all sub-directories under $1, use it like this:
for DIR in $(find $1 -type d)
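A variant of the same idea that survives spaces in directory names (a sketch assuming bash and GNU find, reusing the question's helper script):
# Sketch: iterate over sub-directories NUL-safely instead of word-splitting.
while IFS= read -r -d '' DIR; do
    echo "Collecting throughputs from directory: $DIR"
    sh collectThroughput.sh "$DIR" > "$DIR.txt"
done < <(find "$1" -type d -print0)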
Problem
Most probably the problem you are encountering is that you are trying to use a pattern like * as the argument to your script.
Running it with something like:
my_script *
What's happening here is, that the shell will expand * prior to calling your script.
Thus, after word splitting has been performed, $1 in your script just references the first entry of the expansion.
Example
Given the following directory layout:
directory_a
directory_b
directory_c
Calling my_script * will result in:
my_script directory_a directory_b directory_c
being called, so your loop just iterates over $(ls -d directory_a), which is in fact nothing but directory_a alone.
Solution
To have the program run with $1=* you would have to escape the * prior to calling your script.
Try running:
my_script \*
to see that it effectively does what it is intended to do. This way $1 in your script will contain * instead of directory_a, which most probably is how you wanted your script to work.
As mikyra has pointed out, the shell expands your argument * to all entries in your directory prior to passing it to your script.
If you want shell expansion of your wildcards (e.g. * matches all but hidden files), you can simply leave the expansion to the shell and use the result, by iterating over all arguments rather than just the first one:
for DIR in "$@"; do
# ...
done
if you want to do the expansion yourself (e.g. because the pattern should be applied only to a pre-filtered list or to files in a different directory, or because you want regex-expansion rather than shell globbing), you have to protect the argument from being expanded by the shell, either using backslash notation (like mikyra's \*) or by using quotes (which is often easier to use):
my_script "*"

Recursively look for files with a specific extension

I'm trying to find all files with a specific extension in a directory and its subdirectories with bash (latest Ubuntu LTS release).
This is what's written in a script file:
#!/bin/bash
directory="/home/flip/Desktop"
suffix="in"
browsefolders ()
for i in "$1"/*;
do
echo "dir :$directory"
echo "filename: $i"
# echo ${i#*.}
extension=`echo "$i" | cut -d'.' -f2`
echo "Erweiterung $extension"
if [ -f "$i" ]; then
if [ $extension == $suffix ]; then
echo "$i ends with $in"
else
echo "$i does NOT end with $in"
fi
elif [ -d "$i" ]; then
browsefolders "$i"
fi
done
}
browsefolders "$directory"
Unfortunately, when I start this script in terminal, it says:
[: 29: in: unexpected operator
(with $extension instead of 'in')
What's going on here, where's the error?
Apart from the missing curly brace, this:
find "$directory" -type f -name "*.in"
is a bit shorter than that whole thing (and safer - deals with whitespace in filenames and directory names).
Your script is probably failing for entries that don't have a . in their name, making $extension empty.
find {directory} -type f -name '*.extension'
Example: To find all csv files in the current directory and its sub-directories, use:
find . -type f -name '*.csv'
The syntax I use is a bit different from what @Matt suggested:
find $directory -type f -name \*.in
(it's one less keystroke).
Without using find:
du -a $directory | awk '{print $2}' | grep '\.in$'
Though the find command can be useful here, the shell itself provides options to achieve this requirement without any third-party tools. The bash shell provides an extended glob support option with which you can get the file names under recursive paths that match the extensions you want.
The extended option is extglob, which needs to be set using shopt as below. Options are enabled with the -s flag and disabled with the -u flag. Additionally you could use a couple more options: nullglob, in which an unmatched glob is swept away entirely, replaced with a set of zero words, and globstar, which allows recursing through all the directories.
shopt -s extglob nullglob globstar
Now all you need to do is form the glob expression to include the files of a certain extension, which you can do as below. We use an array to populate the glob results because, when quoted properly and expanded, filenames with special characters remain intact and don't get broken by the shell's word-splitting.
For example to list all the *.csv files in the recursive paths
fileList=(**/*.csv)
The option ** is to recurse through the sub-folders and *.csv is glob expansion to include any file of the extensions mentioned. Now for printing the actual files, just do
printf '%s\n' "${fileList[@]}"
Using an array and doing a proper quoted expansion is the right way when used in shell scripts, but for interactive use, you could simply use ls with the glob expression as
ls -1 -- **/*.csv
This could very well be expanded to match multiple file types, i.e. files ending with any of several extensions (similar to adding multiple -name flags in a find command). For example, consider the case of needing to get all recursive image files, i.e. of extensions *.gif, *.png and *.jpg; all you need is
ls -1 -- **/+(*.jpg|*.gif|*.png)
This could very well be expanded to negate results also. With the same syntax, one could use the results of the glob to exclude files of certain types. Assuming you want to exclude file names with the extensions above, you could do
excludeResults=(**/!(*.jpg|*.gif|*.png))
printf '%s\n' "${excludeResults[@]}"
The construct !() is a negate operation to not include any of the file extensions listed inside and | is an alternation operator just as used in the Extended Regular Expressions library to do an OR match of the globs.
Note that this extended glob support is not available in the POSIX Bourne shell and is purely specific to recent versions of bash. So if you are considering portability of your scripts across POSIX and bash shells, this option wouldn't be right.
find "$PWD" -type f -name "*.in"
There's a { missing after browsefolders ()
All $in should be $suffix
The line with cut gets you only the middle part of front.middle.extension. You should read up in your shell manual on ${varname%%pattern} and friends.
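For instance, a sketch of what those expansions look like (the file name here is purely illustrative):
# Hypothetical name used only for illustration:
i="front.middle.in"
extension=${i##*.}   # strip the longest prefix up to the last dot -> "in"
stem=${i%.*}         # strip the shortest suffix from the last dot -> "front.middle"
echo "$extension $stem"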
I assume you do this as an exercise in shell scripting, otherwise the find solution already proposed is the way to go.
To check for proper shell syntax, without running a script, use sh -n scriptname.
To find all the pom.xml files in your current directory and print them, you can use:
find . -name 'pom.xml' -print
find $directory -type f -name "*.in"|grep $substring
for file in "${LOCATION_VAR}"/*.zip
do
echo "$file"
done
