Convert a batch file function to bash - linux

I'm trying to convert this batch file, which runs a Python script, into a bash script. I need help converting the batch file's wait function, which waits for an action to complete, into bash.
script.py wait-for-job <actionID> is the actual call that waits for the specific action to complete. The wait function basically assigns a value from the log file to a variable and then passes that variable as a parameter to a python script (script.py).
The log file is written continuously after each action and the last line (from which the action ID is fetched) looks something like this:
02/10/2019 00:00:00 AM Greenwich Mean Time print_action_id():250 INFO Action ID: 123456
The wait function in the batch file is as follows:
:wait
#echo off
for /f "tokens=11" %%i in (C:\Users\DemoUser\Dir\file.log) do ^
set ID=%%i
#echo on
script.py wait-for-job --action-id %ID%
EXIT /B 0
I tried implementing the same thing in bash like below but it did not seem to work (I'm new to shell scripting and I'm sure it's all wrong):
for $a in (tail -n1 /home/DemoUser/Dir/file.log); do
ID=$($a | awk { print $12})
script.py wait-for-job --action-id $ID
done

The following reads each line of the file, pulls out the ID, and uses it to call the Python script. First we declare the paths and variables, then we run the loop. Note that your attempt tried to execute the log line itself as a command (${line} | awk ...); the line has to be fed to awk via echo instead. Also, the sample log line has eleven whitespace-separated fields, so the action ID is $11 in awk, the same field that tokens=11 selects in the batch file:
#!/bin/bash
typeset file=/home/DemoUser/Dir/file.log
typeset py_script=/path/to/script.py
readonly PY=/path/to/python

while IFS= read -r line; do
    "${PY}" "${py_script}" wait-for-job --action-id "$(echo "${line}" | awk '{print $11}')"
done < "${file}"
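Since the batch loop overwrites ID on every iteration, only the last line's value survives, so the whole loop can collapse into a single tail/awk pipeline. A minimal runnable sketch, using a throwaway temp file in place of /home/DemoUser/Dir/file.log:

```shell
# Build a throwaway log file shaped like the one in the question.
log=$(mktemp)
printf '%s\n' \
  '02/10/2019 00:00:00 AM Greenwich Mean Time print_action_id():250 INFO Action ID: 123455' \
  '02/10/2019 00:01:00 AM Greenwich Mean Time print_action_id():250 INFO Action ID: 123456' \
  > "$log"

# Take only the last line and grab its final field (the action ID).
ID=$(tail -n1 "$log" | awk '{print $NF}')
echo "$ID"
rm -f "$log"
```

Using $NF (the last field) sidesteps counting columns; use $11 instead if the field position is guaranteed.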

Related

Redirect parallel process bash script output to individual log file

I have a requirement where I need to pass multiple arguments to the script to trigger a parallel process for each argument. Now I need to capture each process's output in a separate log file.
for arg in test_{01..05} ; do bash test.sh "$arg" & done
The piece of code above only gives parallel processing for the input arguments. I tried exec > >(tee "/path/of/log/$arg_filedate +%Y%m%d%H.log") 2>&1, but it created a single log file named with just the date, and the file was empty. Can someone suggest what's going wrong here, or a better way that doesn't use the parallel package?
Try:
data_part=$(date +%Y%m%d%H)
for arg in test_{01..05} ; do bash test.sh "$arg" > "/path/to/log/${arg}_${data_part}.log" & done
If I use "$arg_date +%Y%m%d%H.log" it creates a file with just the date, without the arg.
Yes, because $arg_ is parsed as a variable name:
arg_=blabla
echo "$arg_" # will print blabla
echo "${arg_}" # equal to the above
To separate _ from arg, use braces: "${arg}_" expands the variable arg and then appends the literal string _.
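Putting it together, a runnable sketch of the fix; test.sh here is a stand-in created on the fly so the example is self-contained (swap in your real script and log directory):

```shell
# Create a stand-in test.sh that just echoes its argument.
logdir=$(mktemp -d)
cat > "$logdir/test.sh" <<'EOF'
#!/bin/bash
echo "processing $1"
EOF

date_part=$(date +%Y%m%d%H)
for arg in test_{01..05}; do
    # one background job and one log file per argument
    bash "$logdir/test.sh" "$arg" > "$logdir/${arg}_${date_part}.log" 2>&1 &
done
wait   # block until every background job has finished

ls "$logdir"/*.log
```

The wait at the end matters if anything after the loop depends on the logs being complete.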

Unix: What does cat by itself do?

I saw the line data=$(cat) in a bash script (just declaring an empty variable) and am mystified as to what that could possibly do.
I read the man pages, but it doesn't have an example or explanation of this. Does this capture stdin or something? Any documentation on this?
EDIT: Specifically how the heck does doing data=$(cat) allow for it to run this hook script?
#!/bin/bash
# Runs all executable pre-commit-* hooks and exits after,
# if any of them was not successful.
#
# Based on
# http://osdir.com/ml/git/2009-01/msg00308.html
data=$(cat)
exitcodes=()
hookname=`basename $0`
# Run each hook, passing through STDIN and storing the exit code.
# We don't want to bail at the first failure, as the user might
# then bypass the hooks without knowing about additional issues.
for hook in $GIT_DIR/hooks/$hookname-*; do
test -x "$hook" || continue
echo "$data" | "$hook"
exitcodes+=($?)
done
https://github.com/henrik/dotfiles/blob/master/git_template/hooks/pre-commit
cat will catenate its input to its output.
In the context of the variable capture you posted, the effect is to assign the statement's (or containing script's) standard input to the variable.
The command substitution $(command) will return the command's output; the assignment will assign the substituted string to the variable; and in the absence of a file name argument, cat will read and print standard input.
The Git hook script you found this in captures the commit data from standard input so that it can be repeatedly piped to each hook script separately. You only get one copy of standard input, so if you need it multiple times, you need to capture it somehow. (I would use a temporary file, and quote all file name variables properly; but keeping the data in a variable is certainly okay, especially if you only expect fairly small amounts of input.)
Doing:
t#t:~# temp=$(cat)
hello how
are you?
t#t:~# echo $temp
hello how are you?
(A single Ctrl-D on a line by itself following "are you?" terminates the input.)
As the manual says:
cat - concatenate files and print on the standard output
Also
cat Copy standard input to standard output.
Here, cat concatenates your STDIN into a single string and assigns it to the variable temp.
Say your bash script script.sh is:
#!/bin/bash
data=$(cat)
Then, the following commands will store the string STR in the variable data:
echo STR | bash script.sh
bash script.sh < <(echo STR)
bash script.sh <<< STR
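The hook's capture-and-replay pattern can be sketched minimally: stdin is read once into a variable, then piped to several consumers (a second $(cat) would find stdin already exhausted):

```shell
reuse_stdin() {
    local data
    data=$(cat)                     # capture all of stdin once
    echo "$data" | wc -l            # consumer 1: count lines
    echo "$data" | tr 'a-z' 'A-Z'   # consumer 2: upper-cased copy
}

result=$(printf 'line one\nline two\n' | reuse_stdin)
echo "$result"
```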

bash script, execute shell from another shell and assign results to a variable

I have this legacy delete script. It does some delete work on a remote application. When the delete processing is done, it returns
completed successfully
or it returns
nothing more to delete
Program will exit
I would like to capture its output and keep executing the delete script as long as its output is "completed successfully". I am unable to assign the result of the script to a variable. I am running the shell script from folder X while the delete script is in folder Y.
Besides the script below, I also tried:
response=$(cd $path_to_del;./delete.sh ...)
I am unable to make this work.
#!/bin/bash
path_to_del=/apps/Utilities/unix/
response='completed successfully'
counter=1
while [[ $response == *successfully* ]]
do
response= working on batch number: $counter ...
echo $response
(cd $path_to_del;./delete.sh "-physicalDelete=true") > $response
echo response $response
((counter++))
done
echo deleting Done!
The general ways to pass output from a subshell to the parent shell are:
variable="$(command in subshell)"
or
read -r variable < <(command)
therefore the modifications to your script could look like:
response="$(cd $path_to_del;./delete.sh "-physicalDelete=true")"
or
cd $path_to_del
response="$(./delete.sh "-physicalDelete=true")"
this line will fail and needs fixing:
response= working on batch number:
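A runnable sketch of the corrected loop; the real delete.sh is replaced by a stand-in that reports success twice and then stops, purely so the example can run end to end:

```shell
# Stand-in for the remote delete script: two successful batches, then done.
workdir=$(mktemp -d)
cat > "$workdir/delete.sh" <<'EOF'
#!/bin/bash
left=$(cat batches_left 2>/dev/null || echo 2)
if (( left > 0 )); then
    echo $((left - 1)) > batches_left
    echo "completed successfully"
else
    echo "nothing more to delete"
fi
EOF
chmod +x "$workdir/delete.sh"

path_to_del=$workdir
response='completed successfully'
counter=1
while [[ $response == *successfully* ]]; do
    echo "working on batch number: $counter ..."
    # capture the script's output; the (cd ...) subshell keeps the cd local
    response=$(cd "$path_to_del" && ./delete.sh "-physicalDelete=true")
    echo "response: $response"
    ((counter++))
done
echo "deleting done!"
```

The two fixes relative to the question's script: the output is captured with $(...) instead of being redirected into a file named by $response, and the status message is printed with echo instead of a broken assignment.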

How to show line number when executing bash script

I have a test script which has a lot of commands and generates lots of output. I use set -x or set -v and set -e, so the script stops when an error occurs. However, it's still rather difficult for me to locate at which line execution stopped in order to find the problem.
Is there a method which can output the line number of the script before each line is executed?
Or output the line number before the command echo generated by set -x?
Or any method which can deal with my script line location problem would be a great help.
Thanks.
You mention that you're already using -x. The value of the variable PS4 is the prompt printed before the command line is echoed when the -x option is set; it defaults to + followed by a space.
You can change PS4 to emit the LINENO (The line number in the script or shell function currently executing).
For example, if your script reads:
$ cat script
foo=10
echo ${foo}
echo $((2 + 2))
Executing it thus would print line numbers:
$ PS4='Line ${LINENO}: ' bash -x script
Line 1: foo=10
Line 2: echo 10
10
Line 3: echo 4
4
http://wiki.bash-hackers.org/scripting/debuggingtips gives the ultimate PS4 that would output everything you will possibly need for tracing:
export PS4='+(${BASH_SOURCE}:${LINENO}): ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'
In Bash, $LINENO contains the line number of the script that is currently executing.
If you need to know the line number where the function was called, try $BASH_LINENO. Note that this variable is an array.
For example:
#!/bin/bash
function log() {
echo "LINENO: ${LINENO}"
echo "BASH_LINENO: ${BASH_LINENO[*]}"
}
function foo() {
log "$#"
}
foo "$#"
See here for details of Bash variables.
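A related trick, since the original goal was finding where a failing script stopped: combine set -e with an ERR trap that prints $LINENO. A small sketch (the child script is written to a temp file so the failure doesn't abort the current shell):

```shell
script=$(mktemp)
cat > "$script" <<'EOF'
trap 'echo "error at line $LINENO" >&2' ERR
set -e
echo "step one"
false
echo "never reached"
EOF

# set -e stops the child at the failing command; the trap reports where.
out=$(bash "$script" 2>&1 || true)
echo "$out"
rm -f "$script"
```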
PS4 with a value that uses $LINENO is what you need. E.g., take the following script (myScript.sh):
#!/bin/bash -xv
PS4='${LINENO}: '
echo "Hello"
echo "World"
Output would be:
./myScript.sh
+ PS4='${LINENO}: '
3: echo Hello
Hello
4: echo World
World
Workaround for shells without LINENO
In a fairly sophisticated script I wouldn't like to see all line numbers; rather I would like to be in control of the output.
Define a function
echo_line_no () {
grep -n "$1" $0 | sed "s/echo_line_no//"
# grep the line(s) containing input $1 with line numbers
# replace the function name with nothing
} # echo_line_no
Use it with quotes like
echo_line_no "this is a simple comment with a line number"
Output is
16: "this is a simple comment with a line number"
if the number of this line in the source file is 16.
This basically answers the question How to show line number when executing bash script for users of ash or other shells without LINENO.
Anything more to add?
Sure. Why do you need this? How do you work with this? What can you do with this? Is this simple approach really sufficient or useful? Why do you want to tinker with this at all?
Want to know more? Read reflections on debugging
Simple (but powerful) solution: place echo statements around the code you think causes the problem, and move them line by line until the messages no longer appear on screen, because the script stopped at an earlier error.
Even more powerful solution: Install bashdb the bash debugger and debug the script line by line
If you're using $LINENO within a function, it will cache the first occurrence. Instead use ${BASH_LINENO[0]}

Initiating dynamic variables (variable variables) in bash shell script

I am using PHP CLI through bash shell. Please check Manipulating an array (printed by php-cli) in shell script for details.
In the following shell code I am able to echo the key- value pairs that I get from the PHP script.
IFS=":"
# parse php script output by read command
php $PWD'/test.php' | while read -r key val; do
echo $key":"$val
done
Following is the output for this -
BASE_PATH:/path/to/project/root
db_host:localhost
db_name:database
db_user:root
db_pass:root
Now I just want to initialize dynamic variables inside the while loop so that I can use them, e.g. $BASE_PATH having the value '/path/to/project/root' and $db_host having 'localhost'.
I come from a PHP background. I would like something like PHP's $$key = $val.
Using eval introduces security risks that must be considered. It's safer to use declare:
# parse php script output by read command
while IFS=: read -r key val; do
echo $key":"$val
declare $key=$val
done < <(php $PWD'/test.php')
If you are using Bash 4, you can use associative arrays:
declare -A some_array
# parse php script output by read command
while IFS=: read -r key val; do
echo $key":"$val
some_array[$key]=$val
done < <(php $PWD'/test.php')
Using process substitution <() and redirecting it into the done of the while loop keeps the loop in the current shell instead of a subshell, so the variables it sets persist. Setting IFS for only the read command eliminates the need to save and restore its value.
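A self-contained sketch of the associative-array variant, feeding it fixed key:value lines via printf in place of the PHP script's output:

```shell
declare -A config   # associative arrays require bash 4+

# Split each key:value line and store it in the array;
# process substitution keeps the loop in the current shell.
while IFS=: read -r key val; do
    config[$key]=$val
done < <(printf 'BASE_PATH:/path/to/project/root\ndb_host:localhost\n')

echo "${config[BASE_PATH]}"
echo "${config[db_host]}"
```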
You may try using the eval construct in BASH:
key="BASE_PATH"
value="/path/to/project/root"
# Assign $value to variable named "BASE_PATH"
eval ${key}="${value}"
# Now you have the variable named BASE_PATH you want
# This will get you output "/path/to/project/root"
echo $BASE_PATH
Then, just use it in your loop.
EDIT: a read loop fed by a pipe runs in a sub-shell, which will not allow you to use the variables outside of the loop. You may restructure the read loop so that the sub-shell is not created:
# get the PHP output to a variable
php_output=`php test.php`
# parse the variable in a loop without creating a sub-shell
IFS=":"
while read -r key val; do
eval ${key}="${val}"
done <<< "$php_output"
echo $BASE_PATH
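As a further alternative to eval for plain (non-array) dynamic names, bash's printf -v assigns to a variable whose name is computed at run time, without eval's quoting pitfalls:

```shell
key=BASE_PATH
val=/path/to/project/root

# printf -v NAME writes the formatted string into the variable NAME.
printf -v "$key" '%s' "$val"
echo "$BASE_PATH"
```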