How to store output of "watch" to a file? - linux

I'm trying to display the progress of a file copy done with the dd command.
I run this command: dd if=/dev/zero of=file.txt count=1024 bs=10485760
Then, in another terminal, I run this command to display the progress:
watch -n 1 kill -USR1 $pid_dd
My problem is that I have tried to redirect the output of watch to a file, but without success.
I have tried the solution proposed in this link: How to save output of "watch" to file
while true
do
watch -n 1 kill -USR1 $pid_dd | tee -a output_watch.txt
sleep 2
done
I don't know how to redirect the output of this command to a file. My solution doesn't work.

I have tested the loop above without the "watch" command, like this:
while true
do
kill -USR1 $pid_dd | tee -a output_watch.txt
sleep 2
done
The problem is that the "output_watch.txt" file is empty. I don't understand why.

kill -USR1 $pid_dd doesn't output anything. You have indeed successfully captured its output, which is empty. It sends a signal that causes the dd process -- an entirely different process -- to output progress data on its stderr, which you didn't redirect.
If you simply redirect dd ... 2>&1 | tee -a file, it becomes more difficult to determine the PID of dd, which you need. Depending on your shell, consider instead:
dd ... 2> >(tee -a file) & pid=$!
watch -n1 kill -USR1 $pid
# you don't need a loop around watch, it already _is_ a loop
# or _instead_
while kill -USR1 $pid; do sleep 1; done
Alternatively, see pv which already does almost what you want (if available). Although described as monitoring the copying of data from one process to another, it actually does stdin to stdout which can (both) be files instead of piped processes.
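For illustration, a minimal sketch of the pv route (assuming pv is installed; the file names are just examples): -s gives pv the expected size so it can show a percentage, and -n makes it print plain numeric progress on stderr, which is easy to log.
dd if=/dev/zero bs=10485760 count=1024 | pv -s 10G -n > file.txt 2>> pv_progress.txt
# dd's "of=" is dropped here because pv now writes the data to file.txt;
# the numeric percentages land in pv_progress.txt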

Exactly, @dave_thompson! It works; I have written this code:
dd ... 2>>/home/file.txt & pid_dd=$!
watch -n 1 kill -USR1 $pid_dd
The file "file.txt" has all information about the dd progress.
Thanks so much.

Related

Why doesn't tcpdump run in background?

I logged in to a virtual machine via ssh and tried to run a script in the background; the script is shown below:
#!/bin/bash
APP_NAME=`basename $0`
CFG_FILE=$1
. $CFG_FILE #just some variables
CMD=$2
PID_FILE="$PIDS_DIR/$APP_NAME.pid"
CUR_LOG_DIR=$LOGS_RUNNING
echo $$ > $PID_FILE
#Main script code
#This script shall be called using the following syntax
# $ nohup script_name output_dir &
TIMESTAMP=`date +"%Y%m%d%H%M%S"`
CAP_INTERFACE="eth0"
/usr/sbin/tcpdump -nei $CAP_INTERFACE -s 65535 -w file_result
rm $PID_FILE
The result should be tcpdump running in the background, redirecting its output to file_result.
The script is called with:
nohup $SCRIPT_NAME $CFG_FILE start &
And it is stopped by calling the STOP_SCRIPT:
##STOP_SCRIPT
PID_FILE="$PIDS_DIR/$APP_NAME.pid"
if [ -f $PID_FILE ]
then
PID=`cat $PID_FILE`
# send SIGTERM to kill all children of $PID
pkill -TERM -P $PID
fi
When I check file_result after running the stop script, it is empty.
What is happening? How can I solve it?
I found this link: https://it.toolbox.com/question/launching-tcpdump-processes-in-background-using-ssh-060614
The author seems to have faced a similar issue. They discuss race conditions, but I didn't completely understand it.
I'm not sure what you're trying to accomplish by having the startup script itself continue to run, but here's an approach that I think accomplishes what you're trying to do, namely start tcpdump and have it continue to run immune to hangups via nohup. I've simplified things a bit for illustrative purposes - feel free to add any variables back as you see fit, such as the nohup.out output directory, TIMESTAMP, etc.
Script #1: tcpdump_start.sh
#!/bin/sh
rm -f nohup.out
nohup /usr/sbin/tcpdump -ni eth0 -s 65535 -w file_result.pcap &
# Write tcpdump's PID to a file
echo $! > /var/run/tcpdump.pid
Script #2: tcpdump_stop.sh
#!/bin/sh
if [ -f /var/run/tcpdump.pid ]
then
kill `cat /var/run/tcpdump.pid`
echo tcpdump `cat /var/run/tcpdump.pid` killed.
rm -f /var/run/tcpdump.pid
else
echo tcpdump not running.
fi
To start tcpdump, just run tcpdump_start.sh.
To stop the tcpdump instance started with tcpdump_start.sh, just run tcpdump_stop.sh.
The captured packets will be written to the file_result.pcap file, and yes, it's a pcap file, not a text file, so it helps to name it with the proper file extension. The tcpdump statistics will be written to the nohup.out file when tcpdump is terminated.
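A small usage sketch (reusing the file name from the script above): once some packets have been flushed to disk, you can sanity-check the capture with tcpdump itself.
/usr/sbin/tcpdump -nr file_result.pcap | head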
I too had faced problems when running tcpdump over an SSH session.
In my case, I was running
sudo nohup tcpdump -w {pcap_dump_file} {filter} > /dev/null 2>&1 &
Running this command over a Paramiko SSH session as a background process was the problem.
To get around this, I used the screen utility of Linux.
screen is an easy-to-use tool for keeping long-running processes alive, like a service.
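A minimal sketch of that approach, assuming screen is installed and reusing the paths from the script above; -dmS starts a detached, named session that survives the SSH disconnect.
screen -dmS capture /usr/sbin/tcpdump -ni eth0 -s 65535 -w file_result.pcap
# later: reattach with "screen -r capture", or stop it with "screen -S capture -X quit"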
This might be an old post, but it is also relevant. I couldn't understand why no file was being created, only to realise that the file might not be created until a certain amount of data has been captured.
https://github.com/the-tcpdump-group/tcpdump/issues/485

suspend a shell command without pid

I need something like $command & stop. This should execute a command and immediately suspend it. The application later resumes the command to get complete results.
I understand that a job can be suspended by sending a stop signal to the corresponding PID.
$kill -SIGSTOP 12753
When we execute a command, we rarely know its PID. An extra command is needed to look up the PID and act on it. I want to avoid that extra command and the time gap it introduces.
Basically, the application measures network performance. It triggers all the commands and puts them in a halted state; the halted commands are then resumed according to the kind of traffic needed.
The process ID of the most recently started background command is available in the shell parameter $!:
$ command & kill -SIGSTOP $!
(Check the documentation for your shell's implementation of kill for the correct format.)
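For illustration, a minimal sketch (the traffic-generating command and address here are hypothetical; substitute your own):
iperf3 -c 192.0.2.1 -t 60 & pid=$!   # hypothetical network-measurement command
kill -STOP "$pid"                    # suspend it immediately after starting
# ... later, when this kind of traffic is needed ...
kill -CONT "$pid"                    # resume it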
Try killall with the --signal option where you can specify the name of the process.
linux:~ # killall
Usage: killall [OPTION]... [--] NAME...
killall -l, --list
killall -V, --version
-e,--exact require exact match for very long names
-I,--ignore-case case insensitive process name match
-g,--process-group kill process group instead of process
-i,--interactive ask for confirmation before killing
-l,--list list all known signal names
-q,--quiet don't print complaints
-r,--regexp interpret NAME as an extended regular expression
-s,--signal SIGNAL send this signal instead of SIGTERM
-u,--user USER kill only process(es) running as USER
-v,--verbose report if the signal was successfully sent
-V,--version display version information
-w,--wait wait for processes to die
Verified by starting md5sum in a shell session:
linux$ md5sum
and in another session, ran:
killall -s SIGSTOP md5sum
yielding the following in the md5sum session:
[1]+ Stopped md5sum
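A one-line follow-up sketch for the same demo: resuming works the same way, with the continue signal.
killall -s SIGCONT md5sum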
Kindly confirm: do you want to halt your command, or run it in the background (append '&' to your command)?
If your application is expected to start the halted command later, then why don't you start the command (to be halted) from that application itself?
This helps:
sleep 5 & kill -SIGSTOP $!
Above, sleep (a demo command) is executed for 5 seconds in the background.
Next, kill is used to stop it, using its PID obtained via $!.
Demo and kludge using timeout (for some reason timeout interprets a '0s' duration as "run forever") to stop yes before it outputs anything:
# run 'yes' command, let it print 5 numbered lines, but stop it immediately
timeout -s SIGSTOP .000000001s yes | head -n 5 | cat -n
Output (to STDERR):
[1]+ Stopped timeout -s SIGSTOP .000000001s yes | head -n 5 | cat -n
Now restart it:
fg > /dev/null
Output:
1 y
2 y
3 y
4 y
5 y
A technique for users stuck with coreutils v8.12 or earlier (pre-2011), in which timeout lacks sub-second intervals. It requires waiting a second.
Wrap the command string in a shell invocation, preceded by a 1s wait -- so timeout waits 1 second, and simultaneously, so does the command string. Total wait time: 1 second.
timeout -s SIGSTOP 1s sh -c "sleep 1s; yes | head -n 5 | cat -n"
Output is the same as before, fg is the same too.
Finesse: if waiting even 1 second before sleeping is too much, it can be run in the background like so:
timeout -s SIGSTOP 1s sh -c "sleep 1s; yes | head -n 5 | cat -n" &
Output (process number will vary):
[1] 14601
Then after a second, the output will be the same as the previous two timeout examples.
Assuming you are always running the same command, you can find its name in the ps output: launch it in one terminal, then open a new terminal and run
ps -ely
after retrieving the command name:
command & kill -SIGSTOP $(pidof command_name)
pidof needs the exact command name to be able to find the pid.
then to resume it:
kill -SIGCONT $(pidof command_name)
If the command name is not constant but there is a pattern, you can create a script like this (you can call it pof.sh):
# pof.sh: print the PID column (field 3 of "ps -ely") for every line matching $1
ps -ely | grep "$1" | tr -s ' ' | cut -d" " -f3
command & kill -SIGSTOP $(bash pof.sh pattern)
One drawback of this script is that if many lines match the pattern, it will return all of their PIDs. If that is a problem, you can put the output in an array and go on from there, as shown below.
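A minimal sketch of the array variant, assuming bash 4+ for mapfile and the pof.sh script above:
mapfile -t pids < <(bash pof.sh pattern)   # one matching PID per array element
kill -SIGSTOP "${pids[@]}"                 # stop all of them
# ... later ...
kill -SIGCONT "${pids[@]}"                 # resume all of them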

Don't show the output of kill command in a Linux bash script [duplicate]

How can you suppress the Terminated message that comes up after you kill a
process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that
reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command. (Learn more about $! here.)
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.
see notify_of_job_status() in jobs.c.
As you say, you can redirect standard error to /dev/null, but then you miss any other error messages. You can make it temporary by doing the redirection in a subshell which runs the script. This leaves the original environment alone.
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error, by redirecting a new filedescriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this -- the only upside over the first approach is that it saves a sub-shell invocation, while being more complicated and possibly even altering the behavior of the script, if the script itself alters file descriptors.
EDIT:
For a more appropriate answer, check the one given by Mark Edgar below.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
The Terminated message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
trap 'exit 0' TERM ## here is the key
while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1 # wait trap is done
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid ## no need to redirect stdin/stderr
sleep 1 # wait kill is done
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here, in parentheses), you will not influence the job control settings of the current shell. The only disadvantage is that you need to get the PID of your background process back to the current shell if you want to check whether it has terminated, or evaluate its return code; see the sketch below.
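A minimal sketch of getting that PID back, assuming bash; redirecting the background command's output is what lets the command substitution return immediately instead of waiting for the command to finish:
pid=$( set +m; sleep 30 >/dev/null 2>&1 & echo $! )
# ... later ...
kill "$pid" 2>/dev/null   # the parent shell prints no job-status message for it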
This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message... I was running mpg123 in background mode.
It could only silently be killed by sending a ctrl-c (SIGINT) instead of a SIGTERM (default).
disown did exactly the right thing for me -- the exec 3>&2 is risky for a lot of reasons -- set +bm didn't seem to work inside a script, only at the command prompt
I had success with adding 'jobs 2>&1 >/dev/null' to the script; I'm not certain whether it will help anyone else's script, but here is a sample.
while true; do echo $RANDOM; done | while read line
do
echo Random is $line the last jobid is $(jobs -lp)
jobs 2>&1 >/dev/null
sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
sleep 30 &
pid="${!}"
sleep 5
kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output:
function killCmd() {
kill $1
}
killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal.
ex:
{ kill -9 $PID; } 2>/dev/null

passing control+C in linux shell script

In a shell script I have a command like pid -p PID, and after it I have some more commands. But as soon as the pid -p PID command runs, I have to press Ctrl+C to exit from it, and only then do the further commands execute. I want to run this periodically: I have everything I need in a shell script and I want to put it into crontab. The only thing that bothers me is, if I schedule this script in crontab, then when the pid -p PID command runs, how will I supply the Ctrl+C that allows the further commands to execute? Please help.
My script is like this, a very simple one:
top -p $1
free -m
netstat -antp|grep 3306|grep $1
jmap -dump:file=my_stack$RANDOM.bin $1
You can send signals with kill. In your case, however, you can just restrict top to one or a few iterations:
top -p $1 -n 1
Update:
You can redirect the output of a command to a file. Either overwrite the file each time
command.sh >file.txt 2>&1
or append to a file
command.sh >>file.txt 2>&1
If you don't want the error output, leave out the 2>&1 part.
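Putting both points together, a cron-friendly sketch of the original script (the output file name is just an example, assuming $1 is the PID to inspect); top's batch mode (-b) with -n 1 prints one iteration and exits on its own, so no Ctrl+C is needed and the output can go straight to a file:
#!/bin/bash
OUT="/var/log/proc_snapshot_$(date +%Y%m%d%H%M%S).txt"   # example output file
top -b -n 1 -p "$1" >> "$OUT"                            # one snapshot, batch mode
free -m >> "$OUT"
netstat -antp | grep 3306 | grep "$1" >> "$OUT"
jmap -dump:file="my_stack$RANDOM.bin" "$1" >> "$OUT"     # heap dump, as in the original script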
pid -p PID &
some_pid=$!
kill -s INT $some_pid

Redirecting Output of Bash Child Scripts

I have a basic script that outputs various status messages. e.g.
~$ ./myscript.sh
0 of 100
1 of 100
2 of 100
...
I wanted to wrap this in a parent script, in order to run a sequence of child-scripts and send an email upon overall completion, e.g. topscript.sh
#!/bin/bash
START=$(date +%s)
/usr/local/bin/myscript.sh
/usr/local/bin/otherscript.sh
/usr/local/bin/anotherscript.sh
RET=$?
END=$(date +%s)
echo -e "Subject:Task Complete\nBegan on $START and finished at $END and exited with status $RET.\n" | sendmail -v group@mydomain.com
I'm running this like:
~$ topscript.sh >/var/log/topscript.log 2>&1
However, when I run tail -f /var/log/topscript.log to inspect the log, I see nothing, even though running top shows that myscript.sh is currently executing and therefore presumably outputting status messages.
Why isn't the stdout/stderr from the child scripts being captured in the parent's log? How do I fix this?
EDIT: I'm also running these on a remote machine, connected via ssh using pseudo-tty allocation, e.g. ssh -t user@host. Could the pseudo-tty be interfering?
I just tried the following: I have three files, t1.sh, t2.sh, and t3.sh, all with the following content:
#!/bin/bash
for((i=0;i<10;i++)) ; do
echo $i of 9
sleep 1
done
And a script called myscript.sh with the following content:
#!/bin/bash
./t1.sh
./t2.sh
./t3.sh
echo "All Done"
When I run ./myscript.sh > topscript.log 2>&1 and then in another terminal run tail -f topscript.log I see the lines being output just fine in the log file.
Perhaps the things being run in your subscripts use a large output buffer? I know that when I've run Python scripts before, they have a pretty big output buffer, so you don't see any output for a while. Do you actually see the entire output in the email that gets sent out at the end of topscript.sh? Is it just that you're not seeing the output while the processes run?
try
unbuffer topscript.sh >/var/log/topscript.log 2>&1
Note that unbuffer is not always available as a standard binary on old-style Unix platforms and may require finding and installing a package that provides it (it is typically shipped with expect).
I hope this helps.
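An alternative sketch if unbuffer isn't available, assuming GNU coreutils: stdbuf asks the child's stdio to line-buffer stdout, which often makes the log update in real time (it won't help programs that manage their own buffering).
stdbuf -oL /usr/local/bin/topscript.sh >/var/log/topscript.log 2>&1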