Bash script not returning any output - Linux

I'm a newbie in the Linux environment, and I'm starting to create an automated smoke test for several commands we frequently use at our company. Basically, I want a shell script that runs through multiple commands and validates each command's output.
The first test case I started writing checks that our service can be successfully stopped and started. After researching bash scripts, I came up with this:
#!/bin/bash
sudo service companyservice stop | grep 'Stopping companyservice ... [ OK ]' &> /dev/null \
if [ $? == 0 ] then echo "Stopping Company Service: SUCCESS" \
else echo "Stopping Company Service: FAIL. GO HARASS A DEVELOPER" \
fi
sudo service companyservice start | grep 'Starting companyservice ... [ OK ]' &> /dev/null \
if [ $? == 0 ] then echo "Starting Company Service: SUCCESS" \
else echo "Starting Company Service: FAIL. GO HARASS A DEVELOPER" \
fi
I saved this as SmokeTest.sh, but when running sh SmokeTest.sh on the command line, I see nothing on the output. No error, no failure, no success. Nothing.
Any help or hints are much appreciated. I am using Red Hat 6.6.
Also, is this the right way to automate this on Linux if I want to validate commands' output?

Your line continuation characters \ at the end of the sudo lines are making the if part of the command line you're running. Get rid of those, and you should start to see syntax errors, because you don't have a ; after the condition of each if statement, before the then.
Also, on the lines with the continuation characters, you're redirecting stderr to /dev/null, which is why you don't see bash complaining about the situation.
As you noted, it's possible to omit the ; in an if statement, but if you do, the then must be on the next line:
if [ -z "$var" ] then
is wrong, but
if [ -z "$var" ]; then
or
if [ -z "$var" ]
then
are both acceptable.
Also, the meaning of the line continuation character might have gotten a little lost. If a line of bash ends with \, the following line is treated as part of the current line. So in your example:
sudo service companyservice stop | grep 'Stopping companyservice ... [ OK ]' &> /dev/null \
if [ $? == 0 ] then echo "Stopping Company Service: SUCCESS" \
else echo "Stopping Company Service: FAIL. GO HARASS A DEVELOPER" \
fi
is actually treated as a single line like
sudo service companyservice stop | grep 'Stopping companyservice ... [ OK ]' &> /dev/null if [ $? == 0 ] then echo "Stopping Company Service: SUCCESS" else echo "Stopping Company Service: FAIL. GO HARASS A DEVELOPER" fi
which is not right. If you remove the \ from each line and settle on a way to write the if...then, you should be in much better shape.
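Putting those fixes together, a corrected version might look like this (a sketch, assuming the service really prints those exact strings to stdout; note that grep wants -F here, because [ OK ] would otherwise be parsed as a regex bracket expression):
#!/bin/bash
# -q: suppress grep's output and use only its exit status
# -F: match the string literally instead of as a regular expression
if sudo service companyservice stop | grep -qF 'Stopping companyservice ... [ OK ]'; then
    echo "Stopping Company Service: SUCCESS"
else
    echo "Stopping Company Service: FAIL. GO HARASS A DEVELOPER"
fi

if sudo service companyservice start | grep -qF 'Starting companyservice ... [ OK ]'; then
    echo "Starting Company Service: SUCCESS"
else
    echo "Starting Company Service: FAIL. GO HARASS A DEVELOPER"
fi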

&> /dev/null directs all output and errors to /dev/null, so you will see no output and no errors. Remove these parts, or redirect to a file instead, for example:
&> log.txt

Related

Use of echo >> produces inconsistent results

I've been trying to understand a problem that's cropped up with some of the scripts we use at work.
To generate many of our script logs, we utilize the exec command and file redirects to print all output from the script to both the terminal and a log file. Occasionally, for information that doesn't need to be displayed to the user, we do a straight redirect to the log file.
The issue we're seeing occurs on the last line of output to the file when we're printing the number of errors that occurred during that execution: The text doesn't get printed to the file.
In an attempt to diagnose the problem, I wrote a simplified version of our production script (script1.bash) and a test script (script2.bash) to tease it out.
script1.bash
#!/bin/bash

log_name="${USER}_`date +"%Y%m%d-%H%M%S"`_${HOST}_${1}.log"
log="/tmp/${log_name}"
log_tmp="/tmp/temp_logs"

err_count=0

finish()
{
    local ecode=0
    if [ $# -eq 1 ]; then
        ecode=${1}
    fi

    # This is the problem line
    echo "Error Count: ${err_count}" >> "${log}"

    mvlog
    local success=$?

    exec 1>&3 2>&4

    if [ ${success} -ne 0 ]; then
        echo ""
        echo "WARNING: Failed to save log file to ${log_tmp}"
        echo ""
        ecode=$((ecode+1))
    fi
    exit ${ecode}
}

mvlog()
{
    local ecode=1
    if [ ! -d "${log_tmp}" ]; then
        mkdir -p "${log_tmp}"
        chmod 775 "${log_tmp}"
    fi
    if [ -d "${log_tmp}" ]; then
        rsync -pt --bwlimit=4096 "${log}" "${log_tmp}/${log_name}" 2> /dev/null
        [ $? -eq 0 ] && ecode=0
        if [ ${ecode} -eq 0 ]; then
            rm -f "${log}"
        fi
    fi
}

exec 3>&1 4>&2 > >(tee "${log}") 2>&1

ecode=0
echo
echo "Some text"
echo
finish ${ecode}
script2.bash
#!/bin/bash

runs=10000
logdir="/tmp/temp_logs"

if [ -d "${logdir}" ]; then
    rm -rf "${logdir}"
fi

for i in $(seq 1 ${runs}); do
    echo "Conducting run #${i}/${runs}..."
    ${HOME}/bin/script1.bash ${i}
done

echo "Scanning logs from runs..."
total_count=`find "${logdir}" -type f -name "*.log*" | wc -l`
missing_count=`grep -L 'Error Count:' ${logdir}/*.log* | grep -c /`

echo "Number of runs performed: ${runs}"
echo "Number of log files generated: ${total_count}"
echo "Number of log files missing text: ${missing_count}"
My first test indicated that roughly 1% of the time the line isn't written to the log file. I then tried several different methods of handling this line of output:
1. Echo and Wait
echo "Error Count: ${err_count}" >> "${log}"
wait
2. Alternate print method
printf "Error Count: %d\n" ${err_count} >> "${log}"
3. No Explicit File Redirection
echo "Error Count: ${err_count}"
4. Echo and Sleep
echo "Error Count: ${err_count}" >> "${log}"
sleep 0.2
Of these, #1 and #2 each had a 1% fail rate, while #4 had a staggering 99% fail rate. #3 was the only method with a 0% fail rate.
At this point, I'm at a loss for why this behavior is occurring, so I'm asking the gurus here for any insight.
(Note that the simple solution is to implement #3, but I want to know why this is happening.)
Without testing, this looks like a race condition between your script and tee. It's generally better to avoid multiple programs writing to the same file at the same time.
If you do insist on having multiple writers, make sure they are all in append mode, in this case by using tee -a. Appends to the local filesystem are atomic, so all writes should make it (this is not necessarily true for networked file systems).
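In this script, the direct echo "Error Count: ..." >> "${log}" is already appending; the non-appending writer is tee inside the process substitution, so the fix would be (a sketch of the changed redirection line only):
exec 3>&1 4>&2 > >(tee -a "${log}") 2>&1
Since the log file doesn't exist when the script starts, -a changes nothing about the initial contents; it only makes tee open the file in append mode, so its writes always go to the current end of the file and can no longer overwrite the final echo.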

Can a shell script make itself run in the background after running some steps?

I have a BBB-based custom embedded Linux board with a BusyBox shell (ash).
I have a situation where my script must run in the background, with the following conditions:
There must be only one instance of the script.
The wrapper script needs to know whether the script started successfully in the background or not.
There is another wrapper script which starts and stops my script; the wrapper script is shown below.
#!/bin/sh

export PATH=/bin:/sbin:/usr/bin:/usr/sbin

readonly TEST_SCRIPT_PATH="/home/testscript.sh"
readonly TEST_SCRIPT_LOCK_PATH="/var/run/${TEST_SCRIPT_PATH##*/}.lock"

start_test_script()
{
    local pid_of_testscript=0
    local status=0

    # Run test script in background
    "${TEST_SCRIPT_PATH}" &

    #---------Now when this point is hit, the lock file must be created.-----
    if [ -f "${TEST_SCRIPT_LOCK_PATH}" ];then
        pid_of_testscript=$(head -n1 ${TEST_SCRIPT_LOCK_PATH})
        if [ -n "${pid_of_testscript}" ];then
            # &> is a bashism; redirect both streams portably for ash
            kill -0 ${pid_of_testscript} > /dev/null 2>&1 || status="${?}"
            if [ ${status} -ne 0 ];then
                echo "Error starting testscript"
            else
                echo "testscript started successfully"
            fi
        else
            echo "Error starting testscript.sh"
        fi
    fi
}

stop_test_script()
{
    local pid_of_testscript=0
    local status=0

    if [ -f "${TEST_SCRIPT_LOCK_PATH}" ];then
        pid_of_testscript=$(head -n1 ${TEST_SCRIPT_LOCK_PATH})
        if [ -n "${pid_of_testscript}" ];then
            kill -0 ${pid_of_testscript} > /dev/null 2>&1 || status="${?}"
            if [ ${status} -ne 0 ];then
                echo "testscript not running"
                rm "${TEST_SCRIPT_LOCK_PATH}"
            else
                # send SIGTERM signal
                kill -TERM "${pid_of_testscript}"
            fi
        fi
    fi
}

# Script starts from here.
case ${1} in
    'start')
        start_test_script
        ;;
    'stop')
        stop_test_script
        ;;
    *)
        echo "Usage: ${0} [start|stop]"
        exit 1
        ;;
esac
Now the actual script, testscript.sh, looks something like this:
#!/bin/sh
# Filename : testscript.sh

export PATH=/bin:/sbin:/usr/bin:/usr/sbin
set -eu

LOCK_FILE="/var/run/${0##*/}.lock"
FLOCK_CMD="/bin/flock"
FLOCK_ID=200

eval "exec ${FLOCK_ID}>>${LOCK_FILE}"
"${FLOCK_CMD}" -n "${FLOCK_ID}" || exit 0
echo "${$}" > "${LOCK_FILE}"

# >>>>>>>>>>-----Now run the code in background---<<<<<<

handle_sigterm()
{
    # cleanup
    "${FLOCK_CMD}" -u "${FLOCK_ID}"
    if [ -f "${LOCK_FILE}" ];then
        rm "${LOCK_FILE}"
    fi
}

# POSIX sh expects the signal name without the SIG prefix
trap handle_sigterm TERM

while true
do
    echo "do something"
    sleep 10
done
Now in the above script, at the point marked "Now run the code in background", I am sure that either the lock file was successfully created or an instance of this script is already running. So I can then safely run the rest of the code in the background, and the wrapper script can check the lock file and find out whether the process mentioned in it is running or not.
Can a shell script make itself run in the background?
If not, is there a better way to meet all the conditions?
I think you can look into the job control built-ins, specifically bg.
Job Control Commands
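For example, in an interactive shell (job control is off by default inside scripts, so this is mostly useful at a prompt):
sleep 100    # press Ctrl+Z here; the shell stops the job
bg           # resume the stopped job in the background
jobs         # lists the sleep job as Running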
When processes say they background themselves, what they actually do is fork and exit the parent. You can do the same by running whichever commands, functions or statements you want with & and then exiting.
#!/bin/sh
echo "This runs in the foreground"
sleep 3
while true
do
    sleep 10
    echo "doing background things"
done &
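To also satisfy the wrapper's conditions, the backgrounded part could record its own PID before the parent exits; a minimal sketch (hypothetical, reusing the lock file name from testscript.sh rather than its flock logic):
#!/bin/sh
# foreground part: setup steps run here

# background part: the long-running loop
while true
do
    echo "do something"
    sleep 10
done &

# $! is the PID of the job just backgrounded; store it so the wrapper
# can check it with kill -0, then let the foreground parent exit
echo "$!" > "/var/run/${0##*/}.lock"
exit 0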

Bash script output not going to stdout

I have a build process, kicked off by Make, that executes a lot of child scripts.
A couple of these child scripts require root privileges, so instead of running everything as root, or everything as sudo, I'm trying to only execute the scripts that need to be as root, as root.
I'm accomplishing this like so:
execute_as_user() {
    su "$1" -s /bin/bash -c "$2;exit \$?"
}
Arg $1 is the user to run the script as, arg $2 is the script.
Arg $1 is either root (gotten with: $(whoami) since everything is under sudo), or the current user's account (gotten with: $(logname))
The entire build is kicked off as:
sudo make all
Sample from the Makefile:
LOG="runtime.log"
ROTATE_LOG:=$(shell bash ./scripts/utils/rotate_log.sh)
system:
/bin/bash -c "time ./scripts/system.sh 2>&1 | tee ${LOG}"
My problem is... none of the child scripts are printing output to stdout. I believe it to be some sort of issue with an almost recursive call of su root... but I'm unsure. From my understanding, these scripts should already be outputting to stdout, so perhaps I'm mistaken about where the output is going?
To be clear, I'm seeing no output in either the logfile or the terminal (stdout).
Updating for clarity:
Previously, I just ran all the scripts either with sudo or as the logged-in user... which, with my makefile above, would print to the terminal (stdout) and the logfile. Adding the execute_as_user() function is where the issue cropped up. The scripts execute and build the project... there's just no display that it's working, and no logs.
UPDATE
Here are some snippets:
system.sh snippet:
execute_script() {
    echo "Executing as user $3: $2"
    RETURN=$(execute_as_user $3 ${SYSTEM_SCRIPTS}/$2)
    if [ ${RETURN} -ne ${OK} ]
    then
        error $1 $2 ${RETURN}
    fi
}

build_package() {
    local RETURN=0
    case "$1" in
        system)
            declare -a scripts=(\
                "rootfs.sh" \
                "base_files.sh" \
                "busybox.sh" \
                "iana-etc.sh" \
                "kernel.sh" \
                "firmware.sh" \
                "bootscripts.sh" \
                "network.sh" \
                "dropbear.sh" \
                "wireless_tools.sh" \
                "e2fsprogs.sh" \
                "shared_libs.sh"
            )
            for SCRIPT_NAME in "${scripts[@]}"; do
                execute_script $1 ${SCRIPT_NAME} $(logname)
                echo ""
                echo -n "${SCRIPT_NAME}"
                show_status ${OK}
                echo ""
            done

            # finalize base system
            echo ""
            echo "Finalizing base system"
            execute_script $1 "finalize.sh" $(whoami)
            echo ""
            echo -n "finalize.sh"
            show_status ${OK}
            echo ""

            # package into tarball
            echo ""
            echo "Packing base system"
            execute_script $1 "archive.sh" $(whoami)
            echo ""
            echo -n "archive.sh"
            show_status ${OK}
            echo ""

            echo ""
            echo -n "Build System: "
            show_status ${OK}
            ;;
        *)
            echo "$1 is not supported!"
            exit 1
    esac
}
Sample child script executed by system.sh:
cd ${CLFS_SOURCES}/
tar -xvjf ${PKG_NAME}-${PKG_VERSION}.tar.bz2
cd ${CLFS_SOURCES}/${PKG_NAME}-${PKG_VERSION}/

make distclean
RESPONSE=$?
if [ ${RESPONSE} -ne 0 ]
then
    pkg_error ${RESPONSE}
    exit ${RESPONSE}
fi

ARCH="${CLFS_ARCH}" make defconfig
RESPONSE=$?
if [ ${RESPONSE} -ne 0 ]
then
    pkg_error ${RESPONSE}
    exit ${RESPONSE}
fi

# fixup some bugs with musl-libc
sed -i 's/\(CONFIG_\)\(.*\)\(INETD\)\(.*\)=y/# \1\2\3\4 is not set/g' .config
sed -i 's/\(CONFIG_IFPLUGD\)=y/# \1 is not set/' .config
etc...
Here's the entire system.sh script:
https://github.com/SnakeDoc/LiLi/blob/master/scripts/system.sh
(I know the project is messy... it's a learn-as-you-go style project)
Previously, I just ran all the scripts either with sudo or as the logged-in user... which, with my makefile above, would print to the terminal (stdout) and the logfile. Adding the execute_as_user() function is where the issue cropped up. The scripts execute and build the project... there's just no display that it's working, and no logs.
Just a guess, but you're probably not calling your function or not calling it properly:
execute_as_user() {
    su "$1" -s /bin/bash -c "$2;exit \$?"
}

execute_as_user "$@"
I also noticed that you're not passing any arguments to the script at all. Is this intentional?
./scripts/system.sh ???

Custom service start is not working in OEL6. Where am I going wrong?

I have created a service in /etc/init.d. It calls another script, catalina.sh, within it.
It starts fine with
./Service_name start
but when I try
service Service_name start
the script is executed, but the service does not start.
Here is the relevant part of the script file:
start(){
    echo -n "Starting $VAR"
    PID="$(pgrep -f $VAR)"
    if [ "$PID" = "" ]
    then
        cd /home/com/Analytics/servers/$VAR/bin
        ./catalina.sh start >/dev/null
        while [ $temp -lt $startime ]
        do
            sleep 5
            echo -n " ."
            temp=$(( $temp + 5 ))
        done
        echo -e "\e[0;32m [ OK ] \e[0m"
    else
        echo -e "\e[0;31m [ FAILED ] \e[0m"
        echo -e "\e[0;33m $VAR is already running. \e[0m"
    fi
}
Also, I would like to mention that
service Service_name stop
and
service Service_name status
work fine.
Actually, the problem was that catalina.sh requires JAVA_HOME or JRE_HOME to be set when it is called as a service (service runs the init script with a mostly empty environment, so variables exported in the login shell are not inherited). It was fixed by exporting JAVA_HOME:
export JAVA_HOME=path_to_jdk
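For example (a sketch; the JDK path below is hypothetical and machine-specific), the export can go near the top of the init script so it is set regardless of how the script is invoked:
#!/bin/sh
# /etc/init.d/Service_name
# service(8) runs this with a stripped environment, so set JAVA_HOME here
export JAVA_HOME=/usr/java/default    # hypothetical path; point at your JDK
export PATH="${JAVA_HOME}/bin:${PATH}"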

How to tell the status of a Linux daemon

We have a Linux daemon in C and a bash script to start it. The daemon sometimes fails to start because of configuration file errors, but the script reports that the daemon started successfully. A snippet of the script is shown below; could someone tell me what's wrong with it?
...
case "$1" in
start)
echo -n "Starting Demo Daemon: "
sudo -u demouser env DEMO_HOME=$DEMO_HOME /usr/local/demouser/bin/democtl startup > /dev/null 2> /dev/null
if [ "$?" = "0" ]; then
echo_success
else
echo_failure
fi
echo
;;
...
Thanks!
I feel there is nothing wrong with the script; it is the responsibility of the daemon to return a non-zero exit status if it fails to start properly, and based on that the script will display the message (which I think it does).
You can add the following lines to your script to get the running status of your Linux daemon:
status=`ps -aef | grep "\/usr\/local\/demouser\/bin\/democtl" | grep -v grep | wc -l`
if [ "$status" = "1" ]; then
    echo_success
else
    echo_failure
fi
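A slightly more robust variant of the same check (assuming pgrep from procps or BusyBox is available) avoids having to filter out the grep process itself:
# count processes whose full command line contains the daemon path
status=$(pgrep -fc '/usr/local/demouser/bin/democtl')
if [ "$status" -ge 1 ]; then
    echo_success
else
    echo_failure
fi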
