How do I write a file path which includes a regular expression - linux

Depending on the system I am working on, there might be 2 different possible paths (mutually exclusive):
System1: /tmp/aword/foo
System2: /tmp/bword/foo
I am supposed to echo something into the foo file regardless of which system I encounter (through a shell script).
How do I include a regular expression within the path itself, to take the correct (existent) path?
Some things I have tried:
#doesn't work
echo Hello > /tmp/(a|b)word/foo
#doesn't work
echo Hello > /tmp/[a|b]word/foo
Is there a way of doing this without having to include a test for path existence beforehand?

If it literally is aword and bword and you know that only one of them exists, you can use
echo 'Hello' > /tmp/[ab]word/foo
This is a shell pattern and documented in the Bash manual or the POSIX sh spec.
If, however, both paths exist, Bash will complain with
-bash: [ab]word: ambiguous redirect
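If you cannot rule out both directories existing (or just want to avoid the ambiguous redirect), a minimal sketch that loops over the glob instead, writing into whichever directory is actually present:

for dir in /tmp/[ab]word; do
    # If nothing matches, the pattern stays literal, so test before writing.
    [ -d "$dir" ] && echo 'Hello' > "$dir/foo"
done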


"read" command not executing in "while read line" loop [duplicate]

First post here! I really need help on this one; I looked the issue up on Google, but can't manage to find a useful answer. So here's the problem.
I'm having fun coding something like a framework in bash. Everyone can create their own module and add it to the framework. BUT. To know what arguments the script requires, I created an "args.conf" file that must be in every module, and that looks kinda like this:
LHOST;true;The IP the remote payload will connect to.
LPORT;true;The port the remote payload will connect to.
The first column is the argument name, the second defines whether it's required or not, and the third is the description. Anyway, long story short, the framework is supposed to read the args.conf file line by line and ask the user for a value for every argument. Here's the piece of code:
info "Reading module $name argument list..."
while read line; do
echo $line > line.tmp
arg=`cut -d ";" -f 1 line.tmp`
requ=`cut -d ";" -f 2 line.tmp`
if [ $requ = "true" ]; then
echo "[This argument is required]"
else
echo "[This argument isn't required, leave a blank space if you don't wan't to use it]"
fi
read -p " $arg=" answer
echo $answer >> arglist.tmp
done < modules/$name/args.conf
tr '\n' ' ' < arglist.tmp > argline.tmp
argline=`cat argline.tmp`
info "Launching module $name..."
cd modules/$name
$interpreter $file $argline
cd ../..
rm arglist.tmp
rm argline.tmp
rm line.tmp
succes "Module $name execution completed."
As you can see, it's supposed to ask the user for a value for every argument... But:
1) The read command seems not to execute. It just gets skipped, and the argument ends up with no value.
2) Despite the fact that the args.conf file contains 3 lines, the loop seems to execute just a single time. All I see on the screen is "[This argument is required]" just once, and then the module just launches (and crashes, because it doesn't have the required arguments...).
I really don't know what to do here... I hope someone has an answer ^^'.
Thanks in advance!
(and sorry for any mistakes, I'm French)
Alpha.
As @that other guy pointed out in a comment, the problem is that all of the read commands in the loop are reading from the args.conf file, not from the user. The way I'd handle this is by redirecting the conf file over a different file descriptor than stdin (fd #0); I like to use fd #3 for this:
while read -u3 line; do
...
done 3< modules/$name/args.conf
(Note: if your shell's read command doesn't understand the -u option, use read line <&3 instead.)
There are a number of other things in this script I'd recommend against:
Variable references without double-quotes around them, e.g. echo $line instead of echo "$line", and < modules/$name/args.conf instead of < "modules/$name/args.conf". Unquoted variable references get split into words (if they contain whitespace) and any wildcards that happen to match filenames will get replaced by a list of matching files. This can cause really weird and intermittent bugs. Unfortunately, your use of $argline depends on word splitting to separate multiple arguments; if you're using bash (not a generic POSIX shell) you can use arrays instead; I'll get to that.
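A quick illustration of the hazard (hypothetical value):

line='LHOST;true;use * for any host'
echo $line     # unquoted: word-split, and the * may expand to filenames in the current directory
echo "$line"   # quoted: printed verbatim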
You're using relative file paths everywhere, and cding in the script. This tends to be fragile and confusing, since file paths are different at different places in the script, and any relative paths passed in by the user will become invalid the first time the script cds somewhere else. Worse, you aren't checking for errors when you cd, so if any cd fails for any reason, the entire rest of the script will run in the wrong place and fail bizarrely. You'd be far better off figuring out where your system's root directory is (as an absolute path), then referencing everything from it (e.g. < "$module_root/modules/$name/args.conf").
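One common way to pin that down is a sketch like this ($module_root is the name assumed in the examples here, and $0 is assumed to be a path to the script):

# Resolve the directory containing this script as an absolute path.
module_root=$(cd "$(dirname "$0")" && pwd) || exit 1
conf_file="$module_root/modules/$name/args.conf"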
Actually, you're not checking for errors anywhere. It's generally a good idea, when writing any sort of program, to try to think of what can go wrong and how your program should respond (and also to expect that things you didn't think of will also go wrong). Some people like to use set -e to make their scripts exit if any simple command fails, but this doesn't always do what you'd expect. I prefer to explicitly test the exit status of the commands in my script, with something like:
command1 || {
echo 'command1 failed!' >&2
exit 1
}
if command2; then
echo 'command2 succeeded!' >&2
else
echo 'command2 failed!' >&2
exit 1
fi
You're creating temp files in the current directory, which risks random conflicts (with other runs of the script at the same time, any files that happen to have names you're using, etc). It's better to create a temp directory at the beginning, then store everything in it (again, by absolute path):
module_tmp="$(mktemp -dt module-system)" || {
echo "Error creating temp directory" >&2
exit 1
}
...
echo "$answer" >> "$module_tmp/arglist.tmp"
(BTW, note that I'm using $() instead of backticks. They're easier to read, and don't have some subtle syntactic oddities that backticks have. I recommend switching.)
Speaking of which, you're overusing temp files; a lot of what you're doing with them can be done just fine with shell variables and built-in shell features. For example, rather than reading lines from the config file, storing them in a temp file, and using cut to split them into fields, you can simply echo to cut:
arg="$(echo "$line" | cut -d ";" -f 1)"
...or better yet, use read's built-in ability to split fields based on whatever IFS is set to:
while IFS=";" read -u3 arg requ description; do
(Note that since the assignment to IFS is a prefix to the read command, it only affects that one command; changing IFS globally can have weird effects, and should be avoided whenever possible.)
Similarly, storing the argument list in a file, converting newlines to spaces into another file, then reading that file... you can skip any or all of these steps. If you're using bash, store the arg list in an array:
arglist=()
while ...
arglist+=("$answer") # or ("$arg=$answer")? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" "${arglist[#]}"
(That messy syntax, with the double-quotes, curly braces, square brackets, and at-sign, is the generally correct way to expand an array in bash).
If you can't count on bash extensions like arrays, you can at least do it the old messy way with a plain variable:
arglist=""
while ...
arglist="$arglist $answer" # or "$arglist $arg=$answer"? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" $arglist
... but this runs the risk of arguments being word-split and/or expanded to lists of files.
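Putting those suggestions together, here's a hedged sketch of what the whole loop might look like in bash (your info and succes helpers, and the $module_root variable from above, are assumed):

info "Reading module $name argument list..."
arglist=()
while IFS=';' read -u3 arg requ description; do
    if [ "$requ" = "true" ]; then
        echo "[This argument is required]"
    else
        echo "[This argument isn't required; leave it blank to skip it]"
    fi
    read -p " $arg=" answer    # fd 0 is still the terminal, so this prompts the user
    arglist+=("$answer")
done 3< "$module_root/modules/$name/args.conf"
(cd "$module_root/modules/$name" && "$interpreter" "$file" "${arglist[@]}") || {
    echo "module $name failed" >&2
    exit 1
}
succes "Module $name execution completed."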

How to get the complete calling command of a BASH script from inside the script (not just the arguments)

I have a BASH script that has a long set of arguments and two ways of calling it:
my_script --option1 value --option2 value ... etc
or
my_script val1 val2 val3 ..... valn
This script in turn compiles and runs a large FORTRAN code suite that eventually produces a netcdf file as output. I already have all the metadata in the netcdf output global attributes, but it would be really nice to also include the full run command one used to create that experiment. Thus another user who receives the netcdf file could simply reenter the run command to rerun the experiment, without having to piece together all the options.
So that is a long way of saying, in my BASH script, how do I get the last command entered from the parent shell and put it in a variable? i.e. the script is asking "how was I called?"
I could try to piece it together from the option list, but the very long option list and two interface methods would make this long and arduous, and I am sure there is a simple way.
I found this helpful page:
BASH: echoing the last command run
but this only seems to work to get the last command executed within the script itself. The asker also refers to use of history, but the answers seem to imply that the history will only contain the command after the programme has completed.
Many thanks if any of you have any idea.
You can try the following:
myInvocation="$(printf %q "$BASH_SOURCE")$((($#)) && printf ' %q' "$@")"
$BASH_SOURCE refers to the running script (as invoked), and $@ is the array of arguments; (($#)) && ensures that the following printf command is only executed if at least 1 argument was passed; printf %q is explained below.
While this won't always be a verbatim copy of your command line, it'll be equivalent - the string you get is reusable as a shell command.
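Since the goal is to store this in the netcdf output's metadata, one possible follow-up (a sketch assuming the NCO ncatted tool is available; out.nc and the attribute name run_command are placeholders):

# Append the reconstructed command line as a global attribute (-h: leave the history attribute alone).
ncatted -O -h -a run_command,global,o,c,"$myInvocation" out.nc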
chepner points out in a comment that the printf %q approach will only capture what the original arguments were ultimately expanded to:
For instance, if the original command was my_script $USER "$(date +%s)", $myInvocation will not reflect these arguments as-is, but will rather contain what the shell expanded them to; e.g., my_script jdoe 1460644812
chepner also points out that getting the actual raw command line as received by the parent process will be (next to) impossible. Do tell me if you know of a way.
However, if you're prepared to ask users to do extra work when invoking your script or you can get them to invoke your script through an alias you define - which is obviously tricky - there is a solution; see bottom.
Note that use of printf %q is crucial to preserving the boundaries between arguments - if your original arguments had embedded spaces, something like $0 $* would result in a different command.
printf %q also protects against other shell metacharacters (e.g., |) embedded in arguments.
printf %q quotes the given argument for reuse as a single argument in a shell command, applying the necessary quoting; e.g.:
$ printf %q 'a |b'
a\ \|b
a\ \|b is equivalent to single-quoted string 'a |b' from the shell's perspective, but this example shows how the resulting representation is not necessarily the same as the input representation.
Incidentally, ksh and zsh also support printf %q, and ksh actually outputs 'a |b' in this case.
If you're prepared to modify how your script is invoked, you can pass $BASH_COMMAND as an extra argument: $BASH_COMMAND contains the raw[1] command line of the currently executing command.
For simplicity of processing inside the script, pass it as the first argument (note that the double quotes are required to preserve the value as a single argument):
my_script "$BASH_COMMAND" --option1 value --option2
Inside your script:
# The *first* argument is what "$BASH_COMMAND" expanded to,
# i.e., the entire (alias-expanded) command line.
myInvocation=$1 # Save the command line in a variable...
shift # ... and remove it from "$@".
# Now process "$@", as you normally would.
Unfortunately, there are only two options when it comes to ensuring that your script is invoked this way, and they're both suboptimal:
The end user has to invoke the script this way - which is obviously tricky and fragile (you could, however, check in your script whether the first argument contains the script name and error out if not).
Alternatively, provide an alias that wraps the passing of $BASH_COMMAND as follows:
alias my_script='/path/to/my_script "$BASH_COMMAND"'
The tricky part is that this alias must be defined in all end users' shell initialization files to ensure that it's available.
Also, inside your script, you'd have to do extra work to re-transform the alias-expanded version of the command line into its aliased form:
# The *first* argument is what "$BASH_COMMAND" expanded to,
# i.e., the entire (alias-expanded) command line.
# Here we also re-transform the alias-expanded command line to
# its original aliased form, by replacing everything up to and including
# "$BASH_COMMMAND" with the alias name.
myInvocation=$(sed 's/^.* "\$BASH_COMMAND"/my_script/' <<<"$1")
shift # Remove the first argument from "$@".
# Now process "$@", as you normally would.
Sadly, wrapping the invocation via a script or function is not an option, because $BASH_COMMAND truly only ever reports the current command's command line, which in the case of a script or function wrapper would be the line inside that wrapper.
[1] The only thing that gets expanded are aliases, so if you invoked your script via an alias, you'll still see the underlying script in $BASH_COMMAND, but that's generally desirable, given that aliases are user-specific.
All other arguments and even input/output redirections, including process substitutions <(...), are reflected as-is.
"$0" contains the script's name, "$#" contains the parameters.
Do you mean something like echo $0 $*?

How to prevent execution of command in ZSH?

I wrote a hook for the command line:
# Transforms command 'ls?' to 'man ls'
function question_to_man() {
if [[ $2 =~ '^\w+\?$' ]]; then
man ${2[0,-2]}
fi
}
autoload -Uz add-zsh-hook
add-zsh-hook preexec question_to_man
But when I do:
> ls?
After exiting from man I get:
> zsh: no matches found: ls?
How can I get rid of the message about the wrong command?
? is special to zsh and is the wildcard for a single character. That means that if you type ls?, zsh tries to find matching file names in the current directory (any three-letter name starting with "ls").
There are two ways to work around that:
You can make "?" "unspecial" by quoting it: ls\?, 'ls?' or "ls?".
You make zsh handle the cases where it does not match better:
The default behaviour if no match can be found is to print an error. This can be changed by disabling the NOMATCH option (also NULL_GLOB must not be set):
setopt NO_NOMATCH
setopt NO_NULL_GLOB
This will leave the word untouched if there is no matching file.
Caution: In the (maybe unlikely) case that there is a file with a matching name, zsh will try to execute a command with the name of the first matching file. That is if there is a file named "lsx", then ls? will be replaced by lsx and zsh will try to run it. This may or may not fail, but will most likely not be the desired effect.
Both methods have their pros and cons. Method 1 is probably not exactly what you are looking for, and method 2 does not always work and also changes your shell's behaviour.
Also (as @chepner noted in his comment), preexec runs in addition to, not instead of, a command. That means you may get the help for ls, but zsh will still try to run ls? or even lsx (or another matching name).
To avoid that, I would suggest defining a command_not_found_handler function instead of preexec. From the zsh manual:
If no external command is found but a function command_not_found_handler exists the shell executes this function with all command line arguments. The function should return status zero if it successfully handled the command, or non-zero status if it failed. In the latter case the standard handling is applied: ‘command not found’ is printed to standard error and the shell exits with status 127. Note that the handler is executed in a subshell forked to execute an external command, hence changes to directories, shell parameters, etc. have no effect on the main shell.
So this should do the trick:
command_not_found_handler () {
if [[ $1 =~ '\?$' ]]; then
man ${1%\?}
return 0
else
return 1
fi
}
If you have a lot of matching file names but seldom mistype commands (the usual reason for "Command not found" errors), you might want to consider using this instead:
command_not_found_handler () {
man ${1%?}
}
This does not check for "?" at the end, but just cuts away any last character (note the missing "\" in ${1%?}) and tries to run man on the rest. So even if a file name matches, man will be run unless there is indeed a command with the same name as the matched file.
Note: This will interfere with other tools using command_not_found_handler for example the command-not-found tool from Ubuntu (if enabled for zsh).
That all being said, zsh has a widget called run-help which can be bound to a key (in Emacs mode it is bound to Alt+H by default) and then runs man for the current command.
The main advantages of using run-help over the above are:
You can call it any time while typing a longer command, as long as the command name is complete.
After you leave the manpage, the command is still there unchanged, so you can continue editing it.
You can even bind it to Alt+? to make it more similar: bindkey '^[?' run-help
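If you go that route, a minimal sketch for your .zshrc (this assumes a stock setup, where zsh defines run-help as an alias for man):

autoload -Uz run-help
(( ${+aliases[run-help]} )) && unalias run-help   # replace the default run-help=man alias with the function
bindkey '^[?' run-help                            # Alt+? shows the manpage for the command being typed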

Read filename with * shell bash

I'm new to Linux and I want to write a bash script that can read in a file name from a directory, where the name starts with LED plus some numbers (e.g. LED5.5.002).
In that directory there is only one file that starts with LED. The problem is that this file will be updated every time, so the next time it will be, for example, LED6.5.012, and counting.
I searched and tried a little bit and came to this solution:
export fspec=/home/led/LED*
LedV=`basename $fspec`
echo $LedV
If I enter those commands one by one in my terminal it works fine (LedV=LED5.5.002), but if I run it in a bash script the result is LedV=LED*.
I searched for another solution:
a=/home/led/LED*
LedV=$(basename $a)
echo $LedV
but here again the same: entered one by one it's OK, but in a script, LedV=LED*.
It's probably something small, but because of my lack of knowledge of Linux I cannot find it. So can someone tell me what is wrong?
Thanks! Jan
Shell expansions don't happen on scalar assignments, so in
varname=foo*
the expansion of "$varname" will literally be "foo*". It's more confusing when you consider that echo $varname (or in your case basename $varname; either way without the double quotes) will cause the expansion itself to be treated as a glob, so you may well think the variable contains all those filenames.
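A quick demonstration of that confusion (hypothetical file names):

$ touch foo1 foo2
$ varname=foo*
$ echo "$varname"   # quoted: the literal pattern stored in the variable
foo*
$ echo $varname     # unquoted: the glob expands at use time
foo1 foo2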
Array expansions are another story. You might just want
fspec=( /path/LED* )
echo "${fspec[0]##*/}" # A parameter expansion to strip off the dirname
That will work fine for bash. Since POSIX sh doesn't have arrays like this, I like to give an alternative approach:
for fspec in /path/LED*; do
break
done
echo "${fspec##*/}"
$ pwd
/usr/local/src
$ ls -1 /usr/local/src/mysql*
/usr/local/src/mysql-cluster-gpl-7.3.4-linux-glibc2.5-x86_64.tar.gz
/usr/local/src/mysql-dump_test_all_dbs.sql
If you only have one file, you will only get one result:
MyFile=`ls -1 /home/led/LED*`
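(As a side note, parsing ls output breaks on unusual filenames; here's a glob-only sketch that avoids it, assuming bash:)

shopt -s nullglob             # an unmatched glob expands to nothing instead of itself
matches=(/home/led/LED*)
if [ "${#matches[@]}" -eq 1 ]; then
    MyFile=${matches[0]##*/}  # strip the directory part, like basename
    echo "$MyFile"
fi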

What does this bash script command mean (sed -e)?

I'm totally new to bash scripting, but I want to solve this problem.
the command is:
objfil=`echo ${srcfil} | sed -e "s,c$,o,"`
The idea of the bash script is to go through the source files and check if there is an adjacent object file in the OBJ directory; if so, the rest of the program runs smoothly, and if not, the iteration skips the current source file and moves on to the next one. It works with .c files but not with the headers, since the object filenames depend on the .c files. I want to rewrite this command so it checks the object files not just for the .c files but for the .h files too, but without skipping the headers. I know I have to do something else too, but I need to understand exactly what this line does before I can move on. Thanks. (Sorry for my English.)
UPDATE:
if test -r ${curOBJdir}/${objfil}
then
    cp -v ${srcfil} ./SAVEDSRC/${srcfil}
    fdone="NO"
    linenums=ALL
else
    fdone="YES"
    err="${curOBJdir}/${objfil} is missing - ${srcfil} skipped)"
    echo ${err}
    echo ${err} >>${log}
fi
while test ${fdone} == "NO"
do
    #rest of code ...
Here is the rest of the program. I tried to comment out the "test" part to skip the comparison, because I only want my script to work on the .h files without checking whether e.g. abc.h has an abc.o file. (The object file generation is needed because at the end of the script there's a comparison between the hexdumps of the original and the modified object files.) The whole script is for replacing the basic types with typedefs, like int with sint32_t for example.
This command substitutes a c at the end of the line with an o:
srcfil=abcd.c
objfil=`echo ${srcfil} | sed -e "s,c$,o,"`
echo $objfil
Output:
abcd.o
P.S. It uses a non-default match/replace separator for the s command: the default is /, but here it is , (handy when the pattern or replacement contains slashes).
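If you want the same line to cover headers as well, one hedged possibility (this is an assumption about the intended behaviour, reusing the fdone flag from the question's code) is to anchor on the extension and branch:

objfil=`echo ${srcfil} | sed -e "s,\.[ch]$,.o,"`  # foo.c or foo.h -> foo.o
case ${srcfil} in
    *.h) fdone="NO" ;;                            # never skip headers
    *)   if test -r ${curOBJdir}/${objfil}; then  # keep the existing check for .c files
             fdone="NO"
         else
             fdone="YES"
         fi ;;
esac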
