How do I display git output in bash and store one string from the output in a variable?

I am running a git push command in bash which generates some errors.
RESPONSE=$(git push "$target" --all | grep "error:" || true)
generates output on the screen, but the variable $RESPONSE is empty.
If I change the command to do this:
RESPONSE=$(git push "$target" --all 2>&1 | grep "error:" || true)
the command runs silently but actually captures the needed error in $RESPONSE:
echo $RESPONSE
error: failed to push some refs to
'ssh://git#git.test.test.com:7999/test/test-test.git'
I really need to run this git push command in a way that holds the error above in $RESPONSE yet still shows the entire output on the screen.
Running
RESPONSE=$(git push "$target" --all 2>&1 | tee -a log | grep "error:" || true) did not help, unless I am missing something.

One solution is to use tee; just not exactly the way you showed. Taking it step by step will perhaps make the solution easier to understand:
git push "$target" --all
will send the error you want to STDERR. That's why you added 2>&1, to redirect STDERR to STDOUT.
git push "$target" --all 2>&1
Then your pipeline (grep, etc.) is able to pick it up and eventually the variable capture is able to see it when you do
RESPONSE=$(git push "$target" --all 2>&1 | grep "error:" || true)
But because the error is no longer going to STDERR, and STDOUT is now being captured instead of sent to the screen, the output disappears.
So what you want to use tee for, is to put the output on both STDERR (for the screen) and STDOUT (for your pipeline and eventual variable capture).
RESPONSE=$(git push "$target" --all 2>&1 | tee >(cat 1>&2) | grep "error:" || true)
This will likely work as you intend, but be aware that everything you see on the screen - all output from the git command, error or otherwise - is now being passed on STDERR.
There aren't many practical reasons why this would be better than the answer about capturing to the variable and then echoing the variable (per miimote's answer), but if for some reason the non-sequential command structure seems better to you, this is a way to do it.
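A variation on the same idea, assuming a system where /dev/stderr is available (as it is on Linux and macOS), is to have tee write the copy straight to that device instead of using process substitution:
RESPONSE=$(git push "$target" --all 2>&1 | tee /dev/stderr | grep "error:" || true)
The same caveat applies: everything you see on screen is arriving via STDERR.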

The first line
RESPONSE=$(git push "$target" --all | grep "error:" || true)
stores the output of the command in the variable RESPONSE; that is how any construction like VAR=$(command) works in bash. But command substitution only captures STDOUT, so when an error occurs the variable stays empty while the error message still appears on the screen for the user.
If you add 2>&1, you redirect STDERR to STDOUT (file descriptor 1), so the error message is captured as well and ends up in $RESPONSE.
You could do this
RESPONSE=$(git push "$target" --all 2>&1 | grep "error:" || true); echo "$RESPONSE"
You can read more about command substitution and redirections.
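A minimal demonstration of why the 2>&1 matters: command substitution captures only STDOUT, so anything written to STDERR bypasses the variable and goes straight to the screen.
$ OUT=$( { echo "to stdout"; echo "to stderr" >&2; } )
to stderr
$ echo "$OUT"
to stdout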

Related

Shell script to run a process in background parse its output and start service if the previous process contains a string

I need to write a shell script that starts a process in the background and parses its output until it verifies the output contains no errors. The process will keep running in the background, as it needs to listen on ports. If the process output contains an error, the script should exit.
Based on the output of the previous process (it didn't contain any errors and the process was able to establish a connection to the DB), run the next command.
I have tried many approaches suggested on Stack Overflow, including:
https://unix.stackexchange.com/questions/12075/best-way-to-follow-a-log-and-execute-a-command-when-some-text-appears-in-the-log
https://unix.stackexchange.com/questions/45941/tail-f-until-text-is-seen
https://unix.stackexchange.com/questions/137030/how-do-i-extract-the-content-of-quoted-strings-from-the-output-of-a-command
/home/build/a_process 2>&1 | tee "output_$(date +"%Y_%m_%d").log"
tail -fn0 "output_$(date +"%Y_%m_%d").log" | \
while read line ; do
    if echo "$line" | grep -q "Listening"
    then
        /home/build/b_process 2>&1 | tee "output_script_$(date +"%Y_%m_%d").log"
    elif echo "$line" | grep -q "error occurred in load configuration" || echo "$line" | grep -q "Binding Failure"
    then
        exit 1
    fi
done
The problem is that since the process keeps running, despite its output containing the text I was searching for, the script gets stuck parsing the output and never stops watching or tailing it. As a result it is never able to execute the next command.
On the surface, the issue is with the tee command (a_process ... | tee).
Recall that a pipeline will result in the shell:
Creating the pipes between the commands.
Waiting for the LAST command to finish.
Since tee will not finish until a_process is done, and since a_process is a daemon, your script may wait forever (or at least until a_process exits).
In this case, consider sending the whole pipeline to the background.
log_file="output_$(date +"%Y_%m_%d").log"
( /home/build/a_process 2>&1 | tee "$log_file" ) &
tail -fn0 "$log_file" |
...
Side note: consider putting the log file name in a variable, as above. This will make the script easier to maintain (and understand).
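Putting the pieces together, a minimal sketch (the match strings are taken from your script; note that tail may linger until a_process next writes to the log, since grep -q only terminates it via SIGPIPE):
log_file="output_$(date +"%Y_%m_%d").log"
( /home/build/a_process 2>&1 | tee "$log_file" ) &

# Block until one of the interesting strings shows up, then stop tailing.
if tail -fn0 "$log_file" | grep -qE "Listening|error occurred in load configuration|Binding Failure"; then
    if grep -q "Listening" "$log_file"; then
        /home/build/b_process 2>&1 | tee "output_script_$(date +"%Y_%m_%d").log"
    else
        exit 1
    fi
fi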

Get output of FOR with EOF in bash

I've created a bash script to temporarily help me send some files to an FTP server based on the id of a commit: I get the last commit, collect the files it touched, and send them as listed below.
#!/bin/bash
commit_hash=$(git log --format="%H" -n 1)
[[ -z "$1" ]] || commit_hash=$1
files=$(git diff-tree --no-commit-id --name-only -r $commit_hash)
echo -e $(git log -1 $commit_hash --pretty=format:"%h - %an, %ar : %s");
printf "\n"
HOST=
USER=
PASS=
for file in $files; do
ftp -nv $HOST << EOF
user $USER $PASS
cd /www/example
passive
put $file
bye
EOF
done
Of course it isn't the best approach, but it automates some things I am currently working on.
Is it possible to catch the ftp output of the heredoc and apply some filters, with pipelines for example? I only want to know whether the transfer completed successfully.
I presume you mean you want to catch the output of the ftp command whose input is redirected from the heredoc; the heredoc itself does not produce output in any sense that anything other than the associated command can see.
But you can redirect the command's output. The thing to remember is that the heredoc begins on the next line, not immediately after the associated redirection operator. Thus, you can add a pipeline to another command after the heredoc operator. For example:
$ cat << EOF | grep flag
flag this line
not this line
or this line
flag this
last flag
EOF
Output:
flag this line
flag this
last flag
Do not use a for loop for this. See Bash FAQ 001.
commit_hash=${1:-$(git log --format="%H" -n 1)}
while IFS= read -r file; do
ftp -nv "$HOST" << EOF
user $USER $PASS
cd /www/example
passive
put $file
bye
EOF
done < <(git diff-tree --no-commit-id --name-only -r "$commit_hash")
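As for filtering for success: classic FTP servers answer a successful upload with a "226 Transfer complete" reply, so as a sketch (assuming your server uses the standard reply codes, with HOST, USER, and PASS set as before) you can pipe the whole ftp session into grep:
while IFS= read -r file; do
    if ftp -nv "$HOST" << EOF | grep -q "^226"
user $USER $PASS
cd /www/example
passive
put $file
bye
EOF
    then
        echo "uploaded: $file"
    else
        echo "upload failed: $file" >&2
    fi
done < <(git diff-tree --no-commit-id --name-only -r "$commit_hash")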

Pre-commit hook for Subversion fails

I need the most basic hook to prevent empty-comment check-ins. I googled, found a sample bash script, shortened it, and here is what I have:
#!/bin/sh
REPOS="$1"
TXN="$2"
# Make sure that the log message contains some text.
SVNLOOK=/usr/bin/svnlook
ICONV=/usr/bin/iconv
SVNLOOKOK=1
$SVNLOOK log -t "$TXN" "$REPOS" | \
grep "[a-zA-Z0-9]" > /dev/null || SVNLOOKOK=0
if [ $SVNLOOKOK = 0 ]; then
    echo "Empty log messages are not allowed. Please provide a proper log message." >&2
    exit 1
fi
# Comments should have more than 5 characters
LOGMSG=$($SVNLOOK log -t "$TXN" "$REPOS" | grep "[a-zA-Z0-9]" | wc -c)
if [ "$LOGMSG" -lt 6 ]; then
    echo -e "Please provide a meaningful comment when committing changes." 1>&2
    exit 1
fi
Now I'm testing it with TortoiseSVN and here is what I see:
Commit failed (details follow): Commit blocked by pre-commit hook
(exit code 1) with output: /home/svn/repos/apress/hooks/pre-commit:
line 11: : command not found Empty log messages are not allowed.
Please provide a proper log message. This error was generated by a
custom hook script on the Subversion server. Please contact your
server administrator for help with resolving this issue.
What is the error? svnlook is in /usr/bin.
I'm very new to Linux and don't understand what's happening.
To debug your script you'll have to run it manually.
To do that you'll have to get the sample values for the parameters passed to it.
Change the beginning of your script to something like
#!/bin/sh
REPOS="$1"
TXN="$2"
echo "REPOS = $REPOS, TXN = $TXN" >/tmp/svnhookparams.txt
Do a commit and check the file /tmp/svnhookparams.txt for the values.
Then do another change to the script:
#!/bin/sh
set -x
REPOS="$1"
TXN="$2"
This will enable echo of all commands run by the shell.
Now run your script directly from the terminal, passing it the values you got previously.
Check the output for invalid commands or empty variable assignments.
If you have problems with that, post the output here.
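For example, if the logged values were /home/svn/repos/apress and a transaction id of 123-1 (a placeholder; use whatever your file shows), the manual run would look like:
$ sh -x /home/svn/repos/apress/hooks/pre-commit /home/svn/repos/apress 123-1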
$PATH is empty when hook scripts run, so you need to specify the full path for every external command. My guess is that grep is not being found.
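If that is the cause, a minimal fix is to set PATH explicitly at the top of the hook (or call /usr/bin/grep by its full path, as you already do for svnlook):
#!/bin/sh
# Hooks run with an empty environment, so set PATH explicitly.
PATH=/usr/bin:/bin
export PATH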
I'm answering my own question.
This didn't work:
$SVNLOOK log -t "$TXN" "$REPOS" | \
grep "[a-zA-Z0-9]" > /dev/null || SVNLOOKOK=0
It had to be one line (the "line 11: : command not found" message suggests there was stray whitespace after the backslash, which breaks the line continuation):
$SVNLOOK log -t "$TXN" "$REPOS" | grep "[a-zA-Z0-9]" > /dev/null || SVNLOOKOK=0

Redirect lsof exit code into variable

I'm trying to test whether a file is open and then do something with the exit code. Currently doing it like this:
FILE=/usr/local/test.sh
lsof "$FILE" | grep -q COMMAND &>/dev/null
completed=$?
Is there any way you can push the exit code straight into a local variable rather than redirecting output to /dev/null and capturing the '$?' variable?
Well, you could do:
lsof "$FILE" | grep -q COMMAND; completed=$?
There's no need to redirect anything, as grep -q is quiet anyway. If you want to perform some action when the grep succeeds, just use the && operator; storing the exit status in that case is probably unnecessary.
lsof "$FILE" | grep -q COMMAND && echo 'Command was found!'

Why does set -e cause my script to exit when it encounters the following?

I have a bash script that checks some log files created by a cron job that have time stamps in the filename (down to the second). It uses the following code:
CRON_LOG=$(ls -1 $LOGS_DIR/fetch_cron_{true,false}_$CRON_DATE*.log 2> /dev/null | sed 's/^[^0-9][^0-9]*\([0-9][0-9]*\).*/\1 &/' | sort -n | cut -d ' ' -f2- | tail -1 )
if [ -f "$CRON_LOG" ]; then
printf "Checking $CRON_LOG for errors\n"
else
printf "\n${txtred}Error: cron log for $CRON_NOW does not exist.${txtrst}\n"
printf "Either the specified date is too old for the log to still be around or there is a problem.\n"
exit 1
fi
CRIT_ERRS=$(cat $CRON_LOG | grep "ERROR" | grep -v "Duplicate tracking code")
if [ -z "$CRIT_ERRS" ]; then
printf "%74s[${txtgrn}PASS${txtrst}]\n"
else
printf "%74s[${txtred}FAIL${txtrst}]\n"
printf "Critical errors detected! Outputting to console...\n"
echo $CRIT_ERRS
fi
So this bit of code works fine, but I'm trying to clean up my scripts now and implement set -e at the top of all of them. When I do that to this script, it exits with error code 1. Note that I have errors from the first statement dumping to /dev/null; this is because some days the filename contains the word "true" and other days "false". Anyway, I don't think this is my problem, because the script outputs "Checking xxxxx.log for errors" before exiting when I add set -e to the top.
Note: the $CRON_DATE variable is derived from user input. I can run the exact same statement from the command line ("./checkcron.sh 01/06/2010") and it works fine without the set -e statement at the top of the script.
UPDATE: I added "set -x" to my script and narrowed the problem down. The last bit of output is:
Checking /map/etl/tektronix/logs/fetch_cron_false_010710054501.log for errors
++ cat /map/etl/tektronix/logs/fetch_cron_false_010710054501.log
++ grep ERROR
++ grep -v 'Duplicate tracking code'
+ CRIT_ERRS=
[1]+ Exit 1 ./checkLoad.sh...
So it looks like the problem is occurring on this line:
CRIT_ERRS=$(cat $CRON_LOG | grep "ERROR" | grep -v "Duplicate tracking code")
Any help is appreciated. :)
Thanks,
Ryan
Adding set -x, which prints a trace of the script's execution, may help you diagnose the source of the error.
Edit:
Your grep is returning an exit code of 1 since it's not finding the "ERROR" string.
Edit 2:
My apologies regarding the colon suggestion; I didn't test it.
However, the following works (I tested this one before spouting off) and avoids calling the external cat. Because you're setting a variable using the results of a subshell and set -e looks at the subshell as a whole, you can do this:
CRIT_ERRS=$(cat $CRON_LOG | grep "ERROR" | grep -v "Duplicate tracking code"; true)
$ bash -c 'f=`false`; echo $?'
1
$ bash -c 'f=`true`; echo $?'
0
$ bash -e -c 'f=`false`; echo $?'
$ bash -e -c 'f=`true`; echo $?'
0
Note that backticks (and $()) "return" the error code of the last command they run. Solution:
CRIT_ERRS=$(cat $CRON_LOG | grep "ERROR" | grep -v "Duplicate tracking code" | cat)
Redirecting error messages to /dev/null does nothing about the exit status returned by the script. The reason your ls command isn't causing the error is because it's part of a pipeline, and the exit status of the pipeline is the return value of the last command in it (unless pipefail is enabled).
Given your update, it looks like the command that's failing is the last grep in the pipeline. grep only returns 0 if it finds a match; otherwise it returns 1, and if it encounters an error, it returns 2. This is a danger of set -e: things can fail even when you don't expect them to, because commands like grep return a non-zero status even when there hasn't been an actual error. set -e also fails to exit on errors earlier in a pipeline, and so may miss some errors.
The solutions given by geocar or ephemient (piping through cat or using || : to ensure that the last command in the pipe returns successfully) should help you get around this, if you really want to use set -e.
Asking for set -e makes the script exit as soon as a simple command exits with a non-zero exit status. This combines perniciously with your ls command, which exits with a non-zero status when asked to list a non-existent file, which is always the case for you because the true and false variants don't co-exist.
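For completeness, here is a sketch combining the guards mentioned above with bash's pipefail option, which makes a pipeline's exit status reflect any failing stage rather than only the last one (reusing $CRON_LOG from the script above):
#!/bin/bash
set -e
set -o pipefail   # a pipeline now fails if ANY stage fails, not just the last

# grep exits 1 on "no matches", which is expected and fine here,
# so guard the whole pipeline:
CRIT_ERRS=$(grep "ERROR" "$CRON_LOG" | grep -v "Duplicate tracking code" || true)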
