I'm trying to write a script that does things on my Linux computer, but it does not respect wait commands.
This is my code, which does not work:
cat file.txt | while read line || [[ -n $line ]]; do
QUEST="$(./fi $line | grep -oE " fi " &> A.txt; echo $? >"$dir")" & proc=$!
wait "$proc"
read ret <"$dir"
if [[ "$QUEST" != "" ]];then echo "$line" &>> A.txt; fi; unset QUEST;
done &> /dev/null & wait
It has to do one quest at a time and save the output (which may or may not exist).
When you run a command in the background, it's run in a subshell. Any variable assignments are not visible in the original shell, so the assignment to QUEST doesn't work.
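A quick way to see that effect (the variable name here is arbitrary):
VAR=old
VAR=$(echo new) & wait
echo "$VAR"   # still prints "old": the assignment happened in a background subshell and was lost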
You don't need to do that in the background, since you're immediately waiting for the command to finish. Just run it normally.
while read line || [[ -n $line ]]; do
QUEST="$(./fi $line | grep -oE " fi " &> A.txt)"
ret=$?
if [[ "$QUEST" != "" ]]
then echo "$line" &>> A.txt
fi
done &> /dev/null < file.txt
unset QUEST
There's also no need to write $? to $dir. The exit status of a variable assignment from a command substitution is the exit status of the command.
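For instance (a minimal sketch with a placeholder pattern and file name):
QUEST=$(grep -oE "pattern" somefile.txt)
ret=$?   # this is grep's exit status; the assignment simply passes it through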
I have a pretty simple bash script that coordinates running a couple of Python scripts. What I am having trouble figuring out is why, after running the bash script (. bash_script.sh), the terminal hangs. I can't Ctrl+C, Ctrl+Z or do anything except restart the SSH session. All I see is just a blinking cursor. Checking all the log files indicates a 0 exit status with no errors in the scripts themselves. Running ps aux | grep bash_script.sh does not show anything running either. Is there any way to debug this?
#!/bin/bash
exec >> <DIR>/logfile.log 2>&1
script_message () {
    status_arg=$1
    if [[ $status_arg = "pass" ]]; then
        printf "Script Done\n"
        printf '=%.0s' {1..50}
        printf "\n"
    elif [[ $status_arg = "fail" ]]; then
        printf "Script Failed\n"
        printf '=%.0s' {1..50}
        printf "\n"
    else
        :
    fi
}
current_date=$(date '+%Y-%m-%d %H:%M:%S')
day=$(date +%u)
hour=$(date +%H)
printf "RUN DATE: $current_date\n"
# activate virtual env
source /<VENV DIR/bin/activate>
python <PYTHON SCRIPT>.py >> <DIR>/logfile2.log 2>&1
retVal=$?
if [[ $retVal -eq 0 && $day -eq 4 ]]; then
    python <PYTHON SCRIPT 2>.py >> <DIR>/logfile3.log 2>&1
    script_message pass
elif [[ $retVal -eq 0 ]]; then
    script_message pass
else
    #:
    script_message fail
fi
echo $?
I have some problem with a shell script.
In our office we set up only a few commands that are available to devs when they SSH to the server. This is configured with the help of the .ssh/authorized_keys file, and the available command for the user there is a bash script:
#!/bin/sh
if [[ $1 == "--help" ]]; then
cat <<"EOF"
This script has the purpose to let people remote execute certain commands without logging into the system.
For this they NEED to have a homedir on this system and uploaded their RSA public key to .ssh/authorized_keys (via ssh-copy-id)
Then you can alter that file and add some commands in front of their key eg :
command="/usr/bin/dev.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty
The user will do the following: ssh testuser@server tail testserver.example.com/2017/01/01/user.log
EOF
exit 0;
fi
# set global variable
set $SSH_ORIGINAL_COMMAND
# set the syslog path where the files can be found
PATH="/opt/syslog/logs"
# strip ; or any other unwanted signs out of the command, this prevents them from breaking out of the setup command
if [[ $1 != "" ]]; then
COMMAND=$1
COMMAND=${COMMAND//[;\`]/}
fi
if [[ $2 != "" ]]; then
ARGU1=$2
ARGU1=${ARGU1//[;\`]/}
fi
if [[ $3 != "" ]]; then
ARGU2=$3
ARGU2=${ARGU2//[;\`]/}
fi
if [[ $4 != "" ]]; then
ARGU3=$4
ARGU3=${ARGU3//[;\`]/}
fi
# checking for the commands
case "$COMMAND" in
less)
ARGU2=${ARGU1//\.\./}
FILE=$PATH/$ARGU1
if [ ! -f $FILE ]; then
echo "File doesn't exist"
exit 1;
fi
#echo " --------------------------------- LESS $FILE"
/usr/bin/less $FILE
;;
grep)
if [[ $ARGU2 == "" ]]; then
echo "Pls give a filename"
exit 1
fi
if [[ $ARGU1 == "" ]]; then
echo "Pls give a string to search for"
exit 1
fi
ARGU2=${ARGU2//\.\./}
FILE=$PATH/$ARGU2
/usr/bin/logger -t restricted-command -- "------- $USER Executing grep $ARGU1 \"$ARGU2\" $FILE"
if [ ! -f $FILE ]; then
echo "File doesn't exist"
/usr/bin/logger -t restricted-command -- "$USER Executing $@"
exit 1;
fi
/bin/grep $ARGU1 $FILE
;;
tail)
if [[ $ARGU1 == "" ]]; then
echo "Pls give a filename"
exit 1
fi
ARGU1=${ARGU1//\.\./}
FILE=$PATH/$ARGU1
if [ ! -f $FILE ]; then
echo "File doesn't exist"
/usr/bin/logger -t restricted-command -- "$USER Executing $@ ($FILE)"
exit 1;
fi
/usr/bin/tail -f $FILE
;;
cat)
ARGU2=${ARGU1//\.\./}
FILE=$PATH/$ARGU1
if [ ! -f $FILE ]; then
echo "File doesn't exist"
exit 1;
fi
/bin/cat $FILE
;;
help)
/bin/cat <<"EOF"
# less LOGNAME (eg less testserver.example.com/YYYY/MM/DD/logfile.log)
# grep [ARGUMENT] LOGNAME
# tail LOGNAME (eg tail testserver.example.com/YYYY/MM/DD/logfile.log)
# cat LOGNAME (eg cat testserver.example.com/YYYY/MM/DD/logfile.log)
In total the command looks like this : ssh user#testserver.example.com COMMAND [ARGUMENT] LOGFILE
EOF
/usr/bin/logger -t restricted-command -- "$USER HELP requested $@"
exit 1
;;
*)
/usr/bin/logger -s -t restricted-command -- "$USER Invalid command $@"
exit 1
;;
esac
/usr/bin/logger -t restricted-command -- "$USER Executing $@"
The problem is this:
when I try to exec some command, it takes only the first argument; if I try to cover several files using brace expansion like {n,n1,n2}, it doesn't work:
[testuser@local ~]$ ssh testuser@syslog.server less srv1838.example.com/2017/02/10/local1.log |grep 'srv2010' | wc -l
0
[testuser@local ~]$ ssh testuser@syslog.server less srv2010.example.com/2017/02/10/local1.log |grep 'srv2010' | wc -l
11591
[testuser@local ~]$ ssh testuser@syslog.server less srv{1838,2010}.example.com/2017/02/10/local1.log |grep 'srv2010' | wc -l
0
[testuser@local ~]$ ssh testuser@syslog.server less srv{2010,1838}.example.com/2017/02/21/local1.log |grep 'srv2010' | wc -l
11591
Could someone help me: how can I parse/count the command arguments to make this work?
Thank you and have a nice day!
The number of arguments for a bash script would be $#. As a quick example:
#!/bin/bash
narg=$#
typeset -i i
i=1
while [ $i -le $narg ] ; do
    echo " $# $i: $1"
    shift
    i=$i+1
done
gives, for bash tst.sh a b {c,d}
4 1: a
3 2: b
2 3: c
1 4: d
In your script, the command to execute (cat, less, ...) explicitly gets only the second argument to the script. If you want to read all arguments, you should do something like this (note: this is only a hint; I removed all sorts of checks, etc.):
command="$1"
shift
case $command in
(grep) pattern="$1"
       shift
       while [ $# -gt 0 ] ; do
           grep "$pattern" "$1"
           shift
       done
       ;;
esac
Note: I added some quotes as a comment suggested, but, this being only a hint, you should look carefully at quoting and at the checks in your own script.
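For example, reusing the hosts and paths from your own tests, a call such as
ssh testuser@syslog.server grep srv2010 srv{1838,2010}.example.com/2017/02/10/local1.log
(the braces expand on your local machine before ssh sends the command) would arrive as command=grep and pattern=srv2010, followed by two file arguments, and the while loop would then grep each of those files in turn.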
The less command is working now:
case "$COMMAND" in
less)
if [[ $ARGU1 == "" ]]; then
echo "Pls give a filename"
exit 1
fi
FILES_LIST=${#:2}
FILE=(${FILES_LIST//\.\./})
for v in "${FILE[#]}";do
v=${v//[;\']/}
if [ ! -f $v ]; then
echo "File doesn't exist"
fi
/usr/bin/less $PATH/$v
done;;
The tail command also works with 2 or more files, but unfortunately I can't execute tail -f on two files.
I'm writing a bash script to read a set of files line by line and perform some edits. To begin with, I'm simply trying to move the files to backup locations and write them out as-is, to test the script is working. However, it is failing to copy the last line of each file. Here is the snippet:
while IFS= read -r line
do
echo "Line is ***$line***"
echo "$line" >> $POM
done < $POM.backup
I obviously want to preserve whitespace when I copy the files, which is why I have set the IFS to null. I can see from the output that the last line of each file is being read, but it never appears in the output.
I've also tried an alternative variation, which does print the last line, but adds a newline to it:
while IFS= read -r line || [ -n "$line" ]
do
echo "Line is ***$line***"
echo "$line" >> $POM
done < $POM.backup
What is the best way to do this read-write operation, so the files are written exactly as they are, with the correct whitespace and no newlines added?
The command that is adding the line feed (LF) is not the read command, but the echo command. read does not return the line with the delimiter still attached to it; rather, it strips the delimiter off (that is, it strips it off if it was present in the line, in other words, if it just read a complete line).
So, to solve the problem, you have to use echo -n to avoid adding back the delimiter, but only when you have an incomplete line.
Secondly, I've found that when providing read with a NAME (in your case line), it trims leading and trailing IFS whitespace, which I don't think you want. But this can be solved by not providing a NAME at all, and using the default return variable REPLY, which preserves all whitespace.
So, this should work:
#!/bin/bash
inFile=in;
outFile=out;
rm -f "$outFile";
rc=0;
while [[ $rc -eq 0 ]]; do
read -r;
rc=$?;
if [[ $rc -eq 0 ]]; then ## complete line
echo "complete=\"$REPLY\"";
echo "$REPLY" >>"$outFile";
elif [[ -n "$REPLY" ]]; then ## incomplete line
echo "incomplete=\"$REPLY\"";
echo -n "$REPLY" >>"$outFile";
fi;
done <"$inFile";
exit 0;
Edit: Wow! Three excellent suggestions from Charles Duffy, here's an updated script:
#!/bin/bash
inFile=in;
outFile=out;
while { read -r; rc=$?; [[ $rc -eq 0 || -n "$REPLY" ]]; }; do
if [[ $rc -eq 0 ]]; then ## complete line
echo "complete=\"$REPLY\"";
printf '%s\n' "$REPLY" >&3;
else ## incomplete line
echo "incomplete=\"$REPLY\"";
printf '%s' "$REPLY" >&3;
fi;
done <"$inFile" 3>"$outFile";
exit 0;
After review, I wonder if:
{
line=
while IFS= read -r line
do
echo "$line"
line=
done
echo -n "$line"
} <$INFILE >$OUTFILE
is just not enough...
Here is my initial proposal:
#!/bin/bash
INFILE=$1
if [[ -z $INFILE ]]
then
echo "[ERROR] missing input file" >&2
exit 2
fi
OUTFILE=$INFILE.processed
# a way to know if last line is complete or not :
lastline=$(tail -n 1 "$INFILE" | wc -l)
if [[ $lastline == 0 ]]
then
echo "[WARNING] last line is incomplete -" >&2
fi
# we always add a newline; if the last line was already complete, the end of the file is simply seen as an extra empty line
echo | cat $INFILE - | {
first=1
while IFS= read -r line
do
if [[ $first == 1 ]]
then
echo "First Line is ***$line***" >&2
first=0
else
echo "Next Line is ***$line***" >&2
echo
fi
echo -n "$line"
done
} > $OUTFILE
if diff $OUTFILE $INFILE
then
echo "[OK]"
exit 0
else
echo "[KO] processed file differs from input"
exit 1
fi
The idea is to always add a newline at the end of the file and to print newlines only between the lines that are read.
This should work for just about any text file, provided it does not contain a NUL byte (a \0 character), in which case that byte will be lost.
The initial test can be used to decide whether an incomplete text file is acceptable or not.
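To see why the tail -n 1 | wc -l test detects an incomplete last line, here is a quick check (the file names are just examples):
printf 'a\nb\n' > complete.txt     # last line ends with a newline
printf 'a\nb'   > incomplete.txt   # last line has no trailing newline
tail -n 1 complete.txt | wc -l     # prints 1
tail -n 1 incomplete.txt | wc -l   # prints 0, because tail does not add the missing newline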
Add a newline if the line does not already end with one. Like this:
while IFS= read -r line
do
echo "Line is ***$line***";
printf '%s' "$line" >&3;
if [[ ${line: -1} != '\n' ]]
then
printf '\n' >&3;
fi
done < $POM.backup 3>$POM
I have a bash script that shows "Segment Violation" on this line:
sp-sc-auth "${sopUrl}" 8809 8908 > /dev/null &
but when sp-sc-auth is executed from the terminal, it works fine.
I set:
set -o pipefail
set -o errexit
set -o xtrace
set -o nounset
and the script continues executing but throws that "Segment Violation" error...
The system is 64-bit Debian.
Thanks in advance.
Regards
The ugly code:
#!/usr/bin/env bash
# Init
set -o pipefail
set -o errexit
#set -o xtrace
set -o nounset
__DIR__="$(cd "$(dirname "${0}")"; echo $(pwd))"
__BASE__="$(basename "${0}")"
__FILE__="${__DIR__}/${__BASE__}"
ARG1="${1:-Undefined}"
display_usage() {
scriptName=$(basename $0)
echo -e "Uso:\n "${scriptName}" [6,7,8,9,10 o 12]"
echo "Sin especificar el canal, búsqueda de retransmisiones"
}
parse_arenavision() {
url="http://www.arenavision.in/agenda"
if ! av=$(curl -s "${url}");then
echo "Sin conexión"
exit 1
fi
started="off"
declare -a _list
element=""
while read line
do
if [[ $line =~ (([0-9][0-9]+/[0-9]+/[0-9]+.*)) ]]; then
element=$(echo "${BASH_REMATCH[0]}" | sed -r 's#CET|AV([^6789]|1[02])##g; s#<br />##g; s#//|&.*;##g; s#/\s*$##g; s#INGLATERRA/PREMIER LEAGUE#PREMIER#g; s#ITALIA/SERIE A#SERIE A#g; s#ITALIA/SERIE A#SERIE A#g;' | tr -dc '[:print:]')
if [[ "${element}" =~ (.*AV[6789]|.*AV10|.*AV12) ]]; then
_list+=("${element}")
fi
started="on"
else
if [[ ${started} == "on" ]]; then
break
fi
fi
done <<< "${av}"
for i in "${_list[#]}"; do
if [[ "${i}" =~ (.*BALONCESTO.*) ]]; then
echo -e "\e[92m${i}\e[0m"
elif [[ "${i}" =~ (.*LIGA BBVA.*) ]]; then
echo -e "\e[37m${i}\e[0m"
else
echo "${i}"
fi
done
}
case $ARG1 in
"Undefined" )
parse_arenavision
exit 0
;;
[6789] )
page="${ARG1}"
;;
10 )
page="${ARG1}"
;;
* )
display_usage
exit 1
;;
esac
# Delete "zombies"
if pgrep -f "sp-sc"
then
kill -9 `pgrep -f "sp-sc-auth"`
fi
url="http://www.arenavision.in/arenavision$page"
# Get url content and url sop
if ! content=$(curl -s "${url}");then
echo "Sin conexión"
fi
if [[ $content =~ (sop://([A-Za-z0-9_]+|\.)+:[0-9]+) ]]; then
sopUrl=${BASH_REMATCH[1]}
else
echo "No se ha encontrado la url"
exit 1
fi
# Connect ArenaVision 1
children=""
trap 'kill $children 1>/dev/null 2>&1; exit 143' EXIT
sp-sc-auth "${sopUrl}" 8809 8908 > /dev/null &
children="$!"
# Check if exists
line='[ ]'
for i in {0..15}
do
replace="${line/ /#}"
line=$replace
echo -ne "Comprobando sopcast ${replace}" \\r
sleep 1
done
echo -ne "\033[2K"
if ! kill -0 "${children}" 1>/dev/null 2>&1; then
echo "Sin emisión"
exit 1
else
echo -ne "Comprobando sopcast [ OK ]" \\r
echo
fi
# Open VLC player
line='[ ]'
for i in {0..25}
do
replace="${line/ /#}"
line=$replace
echo -ne "Cargando reproductor ${replace}" \\r
sleep 1
done
if ! kill -0 "${children}" 1>/dev/null 2>&1; then
echo "Fallo en recepción"
exit 1
else
vlc http://localhost:8908/tv.asf 1>/dev/null 2>&1
echo -ne "\033[2K"
fi
exit 0
errexit cannot work on programs run in the background, so this is unsurprising -- the inline command is simply starting a background process, and that (starting a background process) succeeds, even if the process itself subsequently fails.
If you call wait $! subsequently, then errexit will be able to take effect, as the wait call will exit with the exit status of the program itself. (Of course, if you can call wait $!, then this raises the question of why you were backgrounding the initial program to start with).
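A minimal sketch of that wait-based variant, reusing the command from your script:
set -o errexit
sp-sc-auth "${sopUrl}" 8809 8908 > /dev/null &
wait $!   # errexit can now fire, because wait returns sp-sc-auth's exit status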
If you always want to kill the parent script if the child fails, you can do this instead:
(sp-sc-auth "$sopUrl" 8809 8908 >/dev/null || kill $$) &
$$ evaluates to the PID of the parent shell, not the subshell, so this will act accordingly.
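You can verify that yourself; note that BASHPID (a bash 4+ variable) reports the current process, while $$ keeps reporting the original shell:
echo "parent: $$"
( echo "\$\$ inside the subshell: $$"; echo "subshell PID: $BASHPID" )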
As for the segfault itself, "program X segfaults" is a question too vague to be addressed here. To even start debugging that, you'd need to collect the core dump created on its failure (enabling cores if necessary), install debug symbols for sopcast, and use gdb to collect a stack trace from the core file created on failure.
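If you do go down that road, the core-dump workflow looks roughly like this (paths and the core file name vary by system, so treat these as placeholders):
ulimit -c unlimited                  # allow core dumps in the current shell
./script.sh                          # reproduce the crash
gdb "$(command -v sp-sc-auth)" core  # open the binary together with the core file
# then, inside gdb, run: bt          # print the stack trace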
I am trying to write a small bash script to monitor the output of RiotShield (a 3rd party player scraper for League of Legends) for crashes. If a keyword is found in the log it should kill the process and restart it.
Here is my bash script as is:
#!/bin/bash
crash[1]="disconnected"
crash[2]="38290209"
while true; do
list=$(tail log.log)
#clear
echo "Reading Log"
echo "========================================"
echo $list
for item in ${list//\\n/ }
do
for index in 1 2
do
c=${crash[index]}
#echo "Crash Word:" $c
if [[ "$c" == *"$item"* ]]; then
echo "RiotShield has crashed."
echo "Killing RiotShield."
kill $(ps aux | grep '[R]iotShield.exe' | awk '{print $2}')
echo "RiotShield killed!"
echo "Clearing log."
echo > log.log
echo "Starting RiotShield"
(mono RiotShield.exe >> log.log &)
fi
done
done
sleep 10
done
My crash array contains keywords that I know show up in the log when it crashes. I have 38290209 in there only for testing purposes, as it is my summoner ID on League of Legends, and the moment I perform a search for my summoner name the ID shows up in the log.
The problem is that even when disconnected and 38290209 do not show up in the log, my
if [[ "$c" == *"$item"* ]]; then
fires, kills the RiotShield process and then relaunches it.
The length of the crash array will grow as I find more keywords for crashes, so I can't just do
if [[ "$c" == "*disconnected*" ]]; then
Please and thanks SOF
EDIT:
Adding working code:
#!/bin/bash
crash[1]="disconnected"
crash[2]="error"
while true; do
list=$(tail log.log)
clear
echo "Reading Log"
echo "========================================"
echo $list
for index in 1 2
do
c=${crash[index]}
#echo "Crash Word:" $c
if [[ $list == *$c* ]]; then
echo "RiotShield has crashed."
echo "Crash Flag: " $c
echo "Killing RiotShield."
kill $(ps aux | grep '[R]iotShield.exe' | awk '{print $2}')
echo "RiotShield killed!"
echo "Clearing log."
echo > log.log
echo "Starting RiotShield"
(mono RiotShield.exe >> log.log &)
fi
done
sleep 10
done
I think you have the operands in your expression the wrong way around. It should be:
if [[ $item == *$c* ]]; then
because you want to see if a keyword ($c) is present in the line ($item).
Also, I'm not sure why you need to break the line into items by doing this: ${list//\\n/ }. You can just match the whole line.
Also note that double-quotes are not required within [[.
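A small illustration of why the operand order matters (the sample strings are made up):
c="disconnected"
item="client disconnected unexpectedly"
[[ $item == *$c* ]] && echo "keyword found"     # correct: does the text contain the keyword?
[[ $c == *$item* ]] && echo "never printed"     # reversed: does the keyword contain the text?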