I have a basic script that outputs various status messages. e.g.
~$ ./myscript.sh
0 of 100
1 of 100
2 of 100
...
I wanted to wrap this in a parent script that runs a sequence of child scripts and sends an email upon overall completion, e.g. topscript.sh:
#!/bin/bash
START=$(date +%s)
/usr/local/bin/myscript.sh
/usr/local/bin/otherscript.sh
/usr/local/bin/anotherscript.sh
RET=$?
END=$(date +%s)
echo -e "Subject:Task Complete\nBegan on $START and finished at $END and exited with status $RET.\n" | sendmail -v group#mydomain.com
I'm running this like:
~$ topscript.sh >/var/log/topscript.log 2>&1
However, when I run tail -f /var/log/topscript.log to inspect the log, I see nothing, even though top shows that myscript.sh is currently executing and therefore presumably producing status messages.
Why isn't the stdout/stderr from the child scripts being captured in the parent's log? How do I fix this?
EDIT: I'm also running these on a remote machine, connected via ssh using pseudo-tty allocation, e.g. ssh -t user@host. Could the pseudo-tty be interfering?
I just tried the following: I have three files t1.sh, t2.sh, and t3.sh, all with the same content:
#!/bin/bash
for((i=0;i<10;i++)) ; do
echo $i of 9
sleep 1
done
And a script called myscript.sh with the following content:
#!/bin/bash
./t1.sh
./t2.sh
./t3.sh
echo "All Done"
When I run ./myscript.sh > topscript.log 2>&1 and then, in another terminal, run tail -f topscript.log, I see the lines appear in the log file just fine.
Perhaps the programs run by your subscripts use a large output buffer? I know that when I've run Python scripts before, Python has a pretty big output buffer, so you don't see any output for a while. Do you actually see the entire output in the email that gets sent out at the end of topscript.sh? Is it just that you don't see the output while the processes run?
try
unbuffer topscript.sh >/var/log/topscript.log 2>&1
Note that unbuffer is not always available as a standard binary on older Unix platforms and may require finding and installing a package that provides it (it is part of the expect package).
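If unbuffer isn't available, stdbuf from GNU coreutils can often force line buffering instead (an alternative sketch, not from the original suggestion; it won't help programs that manage their own buffering internally):
stdbuf -oL topscript.sh >/var/log/topscript.log 2>&1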
I hope this helps.
I wrote a bash script, and I don't have the option of downloading pssh. However, I need to run multiple ssh commands in parallel. There is no problem so far, but when the ssh commands run, I want to see the output from each remote machine sequentially. I mean that each ssh produces multiple lines of output, and they get mixed up because more than one ssh is running at once.
#!/bin/bash
pid_list=""
while read -r list
do
ssh user#$list 'commands'&
c_pid=$!
pid_list="$pid_list $c_pid"
done < list.txt
for pid in $pid_list
do
wait $pid
done
What should I add to the code to take the output unmixed?
The most obvious way to me would be to write each output to its own file and cat the files at the end:
#!/bin/bash
me=$$
pid_list=""
while read -r list
do
ssh user@$list 'hostname; sleep $((RANDOM%5)); hostname ; sleep $((RANDOM%5))' > /tmp/log-${me}-$list &
c_pid=$!
pid_list="$pid_list $c_pid"
done < list.txt
for pid in $pid_list
do
wait $pid
done
cat /tmp/log-${me}-*
rm /tmp/log-${me}-* 2> /dev/null
I didn't handle stderr because that wasn't in your question. Nor did I address the order of output, because that isn't specified either, nor whether the output should appear as each host finishes. If you want those aspects covered, please improve your question.
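If you do want output to appear live as each host produces it, one common alternative (a sketch, not part of the answer above) is to tag every line with its host so the interleaved output stays readable:
ssh user@$list 'commands' 2>&1 | sed "s/^/$list: /" &
The lines from different hosts still interleave, but each one is attributable to its source.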
I'd like to run a Linux console command from a terminal, preventing it from accessing the TTY by itself (which will, for example, often happen when the command tries to request a password from the user - this should just fail). The closest I've gotten to a solution is this wrapper:
temp=`mktemp -d`
echo "$#" > $temp/run.sh
mkfifo $temp/out $temp/err
setsid sh -c "sh $temp/run.sh > $temp/out 2> $temp/err" &
cat $temp/err 1>&2 &
cat $temp/out
rm -f $temp/out $temp/err $temp/run.sh
rmdir $temp
This runs the command as expected without TTY access, but passing the stdout/stderr output through the FIFO pipes does not work for some reason. I end up with no output at all even though the process wrote to stdout or stderr.
Any ideas?
Well, thank you all for having a look. It turns out that the script already contained a working approach; it just contained a typo which caused it to fail. I've corrected it in the question so it may serve as future reference.
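For reference, usage of the corrected wrapper looks something like this (assuming it is saved as no-tty.sh, a hypothetical name):
$ sh no-tty.sh sudo ls /root
sudo: no tty present and no askpass program specified
Because setsid detaches the command from the controlling terminal, sudo cannot open a TTY to prompt for a password, so it fails instead of prompting, which is the desired behaviour.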
I logged in to a virtual machine via ssh and tried to run a script in the background; the script is shown below:
#!/bin/bash
APP_NAME=`basename $0`
CFG_FILE=$1
. $CFG_FILE #just some variables
CMD=$2
PID_FILE="$PIDS_DIR/$APP_NAME.pid"
CUR_LOG_DIR=$LOGS_RUNNING
echo $$ > $PID_FILE
#Main script code
#This script shall be called using the following syntax
# $ nohup script_name output_dir &
TIMESTAMP=`date +"%Y%m%d%H%M%S"`
CAP_INTERFACE="eth0"
/usr/sbin/tcpdump -nei $CAP_INTERFACE -s 65535 -w file_result
rm $PID_FILE
The result should be tcpdump running in the background, writing the capture to file_result.
The script is called with:
nohup $SCRIPT_NAME $CFG_FILE start &
And it is stopped by calling the STOP_SCRIPT:
##STOP_SCRIPT
PID_FILE="$PIDS_DIR/$APP_NAME.pid"
if [ -f $PID_FILE ]
then
PID=`cat $PID_FILE`
# send SIGTERM to kill all children of $PID
pkill -TERM -P $PID
fi
When I check file_result after running the stop script, it is empty.
What is happening? How can I solve it?
I found this link: https://it.toolbox.com/question/launching-tcpdump-processes-in-background-using-ssh-060614
The author seems to have faced a similar issue. They discuss race conditions, but I didn't completely understand it.
I'm not sure what you're trying to accomplish by having the startup script itself continue to run, but here's an approach that I think accomplishes what you're trying to do, namely start tcpdump and have it continue to run, immune to hangups, via nohup. I've simplified things a bit for illustrative purposes - feel free to add back any variables you need, such as the nohup.out output directory, TIMESTAMP, etc.
Script #1: tcpdump_start.sh
#!/bin/sh
rm -f nohup.out
nohup /usr/sbin/tcpdump -ni eth0 -s 65535 -w file_result.pcap &
# Write tcpdump's PID to a file
echo $! > /var/run/tcpdump.pid
Script #2: tcpdump_stop.sh
#!/bin/sh
if [ -f /var/run/tcpdump.pid ]
then
kill `cat /var/run/tcpdump.pid`
echo tcpdump `cat /var/run/tcpdump.pid` killed.
rm -f /var/run/tcpdump.pid
else
echo tcpdump not running.
fi
To start tcpdump, just run tcpdump_start.sh.
To stop the tcpdump instance started with tcpdump_start.sh, just run tcpdump_stop.sh.
The captured packets will be written to the file_result.pcap file, and yes, it's a pcap file, not a text file, so it helps to name it with the proper file extension. The tcpdump statistics will be written to the nohup.out file when tcpdump is terminated.
I too had faced problems when running tcpdump over an SSH session.
In my case, I was running
sudo nohup tcpdump -w {pcap_dump_file} {filter} > /dev/null 2>&1 &
Running this command as a background process over a Paramiko SSH session was the problem.
To get around this, I used the screen utility of Linux.
screen is an easy-to-use tool for keeping long-running processes alive, like a service.
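For example, a capture can be started in a detached screen session and checked on later (the session name capture is just an illustration):
screen -dmS capture sudo tcpdump -ni eth0 -w /tmp/dump.pcap
screen -r capture
The first command starts tcpdump inside a detached session; the second reattaches to it so you can watch or stop the capture.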
This might be an old post, but it is still relevant. I couldn't understand why no file was being created, only to realise that the file might not be created until a certain amount of data has been captured.
https://github.com/the-tcpdump-group/tcpdump/issues/485
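In that situation, tcpdump's -U (packet-buffered) option makes it flush each packet to the save file as it is captured, rather than waiting for an output buffer to fill; a sketch based on the start script above:
nohup /usr/sbin/tcpdump -U -ni eth0 -s 65535 -w file_result.pcap &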
This is the first question that I've posted here. I tried to do a thorough search, but if I haven't (and the answer is obvious somewhere else), please just let me know.
I have a script that runs a program for me, here it is:
csv_file=../data/teste_nohup.csv
trace_file=../data/gnp.trace
declare -i n=100
declare -i p=1
declare -i counter=0
while [ $counter -lt 3 ]
do
n=100
nice -19 sage gnptest.py ${n} ${p} | tee -a ${csv_file}
notify-send "finished test gnp ${n} ${p}"
counter+=1  # counter must be incremented or the loop never terminates
done
So, what I'm trying to do is run the gnptest.py program a few times, and have the result be written to the csv_file.
The problem is, that depending on the input, the program may take a long time to complete. So I'd like to connect to the server over ssh, start the program, close the terminal, and check the output file from time to time.
I've tried nohup and disown. Nohup creates a huge nohup.out file, full of errors that I don't get while running the script normally (it complains about the -lt operand, for example). But the biggest problem I'm facing is that neither command (nohup or disown -h) runs the program and sends the output to the file I've specified in the csv_file variable, which is done with the tee command. Also, neither of them seems to continue running after I log out...
Any help will be much appreciated.
Thanks in advance!!
I have just joined, so I cannot add a comment.
Please try using redirection instead of tee in the script, as sketched below.
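For example, the tee pipeline inside the loop could become a plain append redirection (a sketch of the corresponding line from the question):
nice -19 sage gnptest.py ${n} ${p} >> ${csv_file}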
And to get rid of nohup.out, use the following to run the script:
nohup script.sh > /dev/null 2>&1 &
If the above produces an error, use
nohup script.sh > /dev/null 2>&1 </dev/null &
Hope this will help.
Here is what I'm entering in Terminal:
curl --silent https://raw.githubusercontent.com/githubUser/repoName/master/installer.sh | bash
The WordPress-installing bash script contains a "read password" command that is supposed to wait for the user to input their MySQL password. But for some reason that doesn't happen when I run it with the "curl githubURL | bash" command. When I download the script via wget and run it via "sh installer.sh", it works fine.
What could be the cause of this? Any help is appreciated!
If you want to run a script hosted on a remote server without saving it locally, you can try this.
#!/bin/bash
RunThis=$(lynx -dump http://127.0.0.1/example.sh)
if [ $? = 0 ] ; then
bash -c "$RunThis"
else
echo "There was a problem downloading the script"
exit 1
fi
In order to test it, I wrote an example.sh:
#!/bin/bash
# File /var/www/example.sh
echo "Example read:"
read line
echo "You typed: $line"
When I run Script.sh, the output looks like this.
$ ./Script.sh
Example read:
Hello World!
You typed: Hello World!
Unless you absolutely trust the remote script, I would avoid doing this without examining it before executing it.
It wouldn't stop for read because, when you pipe the script into bash, you fork a child whose standard input is the pipe carrying the script itself, not your terminal, so read consumes the next line of the script (or hits end-of-file) instead of waiting for keyboard input.
The child also cannot give values back to the parent (modify the parent's environment); throughout the process, your interactive shell remains the parent process.
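Two common workarounds (sketches using the same URL as above): either run the script from a string, so stdin stays attached to the terminal, or have read take its input explicitly from the controlling terminal inside the script:
# Option 1: fetch first, then execute; stdin remains the terminal
bash -c "$(curl --silent https://raw.githubusercontent.com/githubUser/repoName/master/installer.sh)"
# Option 2: inside installer.sh, read directly from the terminal
read password < /dev/tty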