Filter specific output of bash -x - linux

I'm trying to filter specific output produced when running a script with bash -x. Here is the code I have:
touch ./log/time_$now_time.txt
touch ./log/session_$now_time.log
case $multinode in
    true) whiptail --title "Multinode system" --msgbox "Multinode system found! Redirecting to the Multinode Menu..." 10 60
        cd multinode
        script -q -t 2> ../log/time_$now_time.txt -c "bash -x ./menu.sh" -f ../log/session_$now_time.log ;;
    false) whiptail --title "Singlenode system" --msgbox "Singlenode system found! Redirecting to the Singlenode Menu..." 10 60
        cd singlenode
        script -q -t 2> ../log/time_$now_time.txt -c "bash -x ./menu.sh" -f ../log/session_$now_time.log ;;
    *)
        echo "Invalid option. Quitting"
        break ;;
esac
Is there a way to log all the +++ output produced by bash -x, but not display it?
If I redirect all output to /dev/null I lose the whiptail view as well. I don't want to lose the bash -x output; I just don't want it displayed.
Here is what I see when I start my script:
+++ date +%d_%m_%Y
++ now=11_09_2017
+ true
++ whiptail --title 'Multinode Main Menu' --menu '\n\n\n\n\n\n\n\n'...
+ case $OPTION in
+ echo 'Bye !'
Bye !
#
How can I hide all the +++ lines, but still log them in the session log file?

The special variable BASH_XTRACEFD lets bash write its -x trace output to a file descriptor other than 2 (stderr).
Reference: https://www.gnu.org/software/bash/manual/html_node/Bash-Variables.html
exec 5> commands.txt
BASH_XTRACEFD=5
set -x
Or, to integrate it into the provided script:
... "BASH_XTRACEFD=5 bash -x ./menu.sh 5> commands.log" ...
Also, inside the script session, error output (2>, which also carries the -x trace) currently ends up in the same file given by script's -f option; is that intended?
To have the errors and traced commands of menu.sh in a separate file, away from the gnu script messages:
... "bash -x ./menu.sh 2> err_and_commands.log" ...

Related

Getting exit code from SSH command running in background in Linux (Ksh script)

I'm trying to run a few commands in parallel on a couple of remote servers using SSH, and I need to get the correct exit code from those commands, but with no success.
I am using the commands below:
(ssh -o LogLevel=Error "remote_server1" -n " . ~/.profile 1>&- 2>&-;echo "success" 2>&1 > /dev/null" ) & echo $? > /tmp/test1.txt
(ssh -o LogLevel=Error "remote_server2" -n " . ~/.profile 1>&- 2>&-;caname 2>&1 > /dev/null" ) & echo $? > /tmp/test2.txt
The result is always "0" (from echo $?), even when I force a failure as in the second example above. Any idea?
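For context, $? immediately after a command ending in & reports only whether the job was launched in the background, not the remote command's status; wait returns the real exit status of the background job. A minimal sketch along those lines (using the question's placeholder host names):
ssh -o LogLevel=Error remote_server1 -n '. ~/.profile 1>&- 2>&-; caname > /dev/null 2>&1' &
pid1=$!
ssh -o LogLevel=Error remote_server2 -n '. ~/.profile 1>&- 2>&-; caname > /dev/null 2>&1' &
pid2=$!
wait "$pid1"; echo $? > /tmp/test1.txt   # real exit status of the first ssh
wait "$pid2"; echo $? > /tmp/test2.txt   # real exit status of the second ssh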

Bash return code error handling when using heredoc input

Motivation
I'm in a situation where I have to run multiple bash commands with a single bash invocation without the possibility to write a full script file (use case: Passing multiple commands to a container in Kubernetes). A common solution is to combine commands with ; or &&, for instance:
bash -c " \
echo \"Hello World\" ; \
ls -la ; \
run_some_command "
In practice, writing bash scripts like that turns out to be error-prone, because I often forget a semicolon, leading to subtle bugs.
Inspired by this question, I was experimenting with writing scripts in a more standard style by using a heredoc:
bash <<EOF
echo "Hello World"
ls -la
run_some_command
EOF
Unfortunately, I noticed that there is a difference in exit code error handling when using a heredoc. For instance:
bash -c " \
run_non_existing_command ; \
echo $? "
outputs (note that $? properly captures the exit code):
bash: run_non_existing_command: command not found
127
whereas
bash <<EOF
run_non_existing_command
echo $?
EOF
outputs (note that $? fails to capture the exit code compared to standard script execution):
bash: line 1: run_non_existing_command: command not found
0
Why is the heredoc version behaving differently? Is it possible to write the script in the heredoc style and maintaining normal exit code handling?
Why is the heredoc version behaving differently?
Because $? is expanded by the current (outer) shell before the heredoc content is handed to the inner bash.
The following will output 1, i.e. the exit status of the preceding false command:
false
bash <<EOF
run_non_existing_command
echo $?
EOF
It's the same in principle as the following, which will print 5:
variable=5
bash <<EOF
variable="This is ignored"
echo $variable
EOF
Is it possible to write the script in the heredoc style and maintaining normal exit code handling?
If you want to have the $? expanded inside the subshell, then:
bash <<EOF
run_non_existing_command
echo \$?
EOF
or
bash <<'EOF'
run_non_existing_command
echo $?
EOF
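Either way, run_non_existing_command now fails inside the child shell and $? is expanded there afterwards, so the expected output is the "command not found" message followed by 127.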
Also note that:
bash -c \
run_non_existing_command ;
echo $? ;
is effectively the same as:
bash -c run_non_existing_command
echo $?
The echo $? is not executed inside bash -c.
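For completeness, a single-quoted -c string keeps both commands inside the child shell and stops the outer shell from expanding $? early; a minimal sketch:
bash -c 'run_non_existing_command; echo $?'
# bash: run_non_existing_command: command not found
# 127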

Start a Python script in a Screen session

I am currently working on a little bash script to start a .py file in a Screen session and could use some help.
I have these 2 Files:
test.py (located at /home/developer/Test/):
import os
print("test")
os.system("ping -c 5 www.google.de>>/home/developer/Test/test.log")
test.sh (located at /home/developer/):
#!/bin/bash
Status="NULL"
if ! screen -list | grep -q "foo";
then
    Status="not running"
else
    Status="running"
fi
echo "Status: $Status"
read -p "Press [Enter] key to start/stop."
if [[ $Status == "running" ]]
then
    screen -S foo -p 0 -X quit
    echo "Stopped Executing"
elif [[ $Staus == "not running" ]]
then
    screen -dmS foo sh
    screen -S foo -X python /home/developer/Test/test.py
    echo "Created new Instance"
else
    exit 1
fi
It works as intended until it has to start the Python script, i.e. this line:
screen -S foo -X python /home/developer/Test/test.py
When running it in my normal shell I get:
test
sh: 1: cannot create /home/developer/Test/test.log: Permission denied
My questions:
I understand the cause of the Permission denied error (it works with sudo), but how do I grant permissions, and more interestingly, to whom do I grant them? (python? screen? my user?)
Is the line that creates the new instance in which the script runs correct as written?
Can you think of a better way to execute a Python script that has to run day and night, can be started and stopped, and doesn't block the shell?
To answer your questions:
You should not need to use sudo at all if the proper owner and group are set on the files involved, e.g.:
$ chown <user>:<group> <script name>
The line creating the new instance does not look correct; it should be more like:
screen -S foo -d -m /usr/bin/python /home/developer/Test/test.py
Use the full path to the python executable, and remove the useless preceding line screen -dmS foo sh.
Screen is more than adequate for such tasks.
Other problems in your script:
Add a shebang to the Python script (e.g. #!/usr/bin/python)
Typo on line 20 of test.sh: it should be $Status, not $Staus
You may need to create test.log before executing your script (e.g. touch test.log)
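Applying those fixes, the start branch of test.sh would look roughly like this (a sketch, keeping the question's session name and paths):
elif [[ $Status == "not running" ]]
then
    # -d -m starts a detached session running python directly;
    # no intermediate sh session is needed
    screen -dmS foo /usr/bin/python /home/developer/Test/test.py
    echo "Created new Instance"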

Bash script output not going to stdout

I have a build process, kicked off by Make, that executes a lot of child scripts.
A couple of these child scripts require root privileges, so instead of running everything as root, or everything as sudo, I'm trying to only execute the scripts that need to be as root, as root.
I'm accomplishing this like so:
execute_as_user() {
su "$1" -s /bin/bash -c "$2;exit \$?"
}
Arg $1 is the user to run the script as, arg $2 is the script.
Arg $1 is either root (gotten with: $(whoami) since everything is under sudo), or the current user's account (gotten with: $(logname))
The entire build is kicked off as:
sudo make all
Sample from the Makefile:
LOG="runtime.log"
ROTATE_LOG:=$(shell bash ./scripts/utils/rotate_log.sh)
system:
/bin/bash -c "time ./scripts/system.sh 2>&1 | tee ${LOG}"
My problem is... none of the child scripts are printing output to stdout. I believe it to be some sort of issue with an almost-recursive call of su root... but I'm unsure. From my understanding, these scripts should already be outputting to stdout, so perhaps I'm mistaken about where the output is going?
To be clear, I'm seeing no output in either the logfile or the terminal (stdout).
Updating for clarity:
Previously, I just ran all the scripts either with sudo or just as the logged-in user... which, with my makefile above, would print to the terminal (stdout) and the logfile. Adding the execute_as_user() function is where the issue cropped up. The scripts execute and build the project... there's just no display "that it's working" and no logs.
UPDATE
Here is some snippets:
system.sh snippet:
execute_script() {
echo "Executing as user $3: $2"
RETURN=$(execute_as_user $3 ${SYSTEM_SCRIPTS}/$2)
if [ ${RETURN} -ne ${OK} ]
then
error $1 $2 ${RETURN}
fi
}
build_package() {
local RETURN=0
case "$1" in
system)
declare -a scripts=(\
"rootfs.sh" \
"base_files.sh" \
"busybox.sh" \
"iana-etc.sh" \
"kernel.sh" \
"firmware.sh" \
"bootscripts.sh" \
"network.sh" \
"dropbear.sh" \
"wireless_tools.sh" \
"e2fsprogs.sh" \
"shared_libs.sh"
)
for SCRIPT_NAME in "${scripts[@]}"; do
execute_script $1 ${SCRIPT_NAME} $(logname)
echo ""
echo -n "${SCRIPT_NAME}"
show_status ${OK}
echo ""
done
# finalize base system
echo ""
echo "Finalizing base system"
execute_script $1 "finalize.sh" $(whoami)
echo ""
echo -n "finalize.sh"
show_status ${OK}
echo ""
# package into tarball
echo ""
echo "Packing base system"
execute_script $1 "archive.sh" $(whoami)
echo ""
echo -n "archive.sh"
show_status ${OK}
echo ""
echo ""
echo -n "Build System: "
show_status ${OK}
;;
*)
echo "$1 is not supported!"
exit 1
esac
}
Sample child script executed by system.sh:
cd ${CLFS_SOURCES}/
tar -xvjf ${PKG_NAME}-${PKG_VERSION}.tar.bz2
cd ${CLFS_SOURCES}/${PKG_NAME}-${PKG_VERSION}/
make distclean
RESPONSE=$?
if [ ${RESPONSE} -ne 0 ]
then
pkg_error ${RESPONSE}
exit ${RESPONSE}
fi
ARCH="${CLFS_ARCH}" make defconfig
RESPONSE=$?
if [ ${RESPONSE} -ne 0 ]
then
pkg_error ${RESPONSE}
exit ${RESPONSE}
fi
# fixup some bugs with musl-libc
sed -i 's/\(CONFIG_\)\(.*\)\(INETD\)\(.*\)=y/# \1\2\3\4 is not set/g' .config
sed -i 's/\(CONFIG_IFPLUGD\)=y/# \1 is not set/' .config
etc...
Here's the entire system.sh script:
https://github.com/SnakeDoc/LiLi/blob/master/scripts/system.sh
(I know the project is messy... it's a learn-as-you-go style project)
Previously, I just ran all the scripts either with sudo or just as the
logged-in user... which, with my makefile above, would print to the
terminal (stdout) and the logfile. Adding the execute_as_user() function
is where the issue cropped up. The scripts execute and build the
project... there's just no display "that it's working" and no logs.
Just a guess, but you're probably not calling your function or not calling it properly:
execute_as_user() {
su "$1" -s /bin/bash -c "$2;exit \$?"
}
execute_as_user "$@"
I also noticed that you're not passing any arguments to the script at all. Is this intentional?
./scripts/system.sh ???
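Another possible culprit, judging from the system.sh snippet above: execute_as_user is invoked inside a command substitution (RETURN=$(execute_as_user ...)), and command substitution captures the child's stdout into the variable instead of letting it reach the terminal or the tee in the Makefile. A minimal illustration:
# nothing appears on screen here; "hello" is captured into OUT
OUT=$(echo "hello")
echo "$OUT"    # the text only reaches stdout now
To keep the output visible while still checking for failure, one option is to test the function's exit status directly, e.g. if ! execute_as_user "$3" "${SYSTEM_SCRIPTS}/$2"; then ... fi.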

How to execute one shell command on completion of other

I am writing a shell script that generates a csv file and then attaches and sends that csv file with the mutt command on Linux. The problem is that the csv file is not generated yet when the mutt command executes, so it says the file is not found. Is there any way to check that the csv-generation command has completed, so that only then the mutt command executes? Below are the contents of my script; the two statements are executed one after the other.
mysql --user=root --password= erpint -B -e "select * from user_info;" | sed "s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" > /home/mayuri/detail.csv
mutt -s "Mutt attach" srini#erpint.com -a /home/mayuri/detail.csv < /home/mayuri/detail.csv
Using bash, to check whether the file exists:
# generate your file ...
if [ ! -f /YourPathToTheFile/yourFile.txt ]
then
    echo "no file found, exiting and doing nothing"
    exit 1
fi
# send your file
So, literally, wait for the file to exist:
while [ ! -f /home/mayuri/detail.csv ]; do
    sleep 1
done
mutt -s "Mutt attach" srini@erpint.com -a /home/mayuri/detail.csv < /home/mayuri/detail.csv
You can use $?, which is set to 0 if the previous command succeeded and to a non-zero value if it failed:
mysql --user=root --password= erpint -B -e "select * from user_info;" | sed "s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" > /home/mayuri/detail.csv
if [ $? -eq 0 ]; then
    mutt -s "Mutt attach" srini@erpint.com -a /home/mayuri/detail.csv < /home/mayuri/detail.csv
fi
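One caveat: the csv generation is a pipeline, so $? reflects the status of its last command (sed), not mysql. A sketch that fails the pipeline if any stage fails and chains the send (pipefail is bash-specific):
set -o pipefail   # pipeline status becomes the last non-zero stage's status
mysql --user=root --password= erpint -B -e "select * from user_info;" \
    | sed "s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" > /home/mayuri/detail.csv \
    && mutt -s "Mutt attach" srini@erpint.com -a /home/mayuri/detail.csv < /home/mayuri/detail.csv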
