I have a program that waits for commands on STDIN. It takes about 2 seconds to become ready to accept those commands, and after every command there needs to be a delay of at least 1 second.
So far I have tried this inside my script:
./myprogram << EOF
command1
command2
command3
EOF
The above works only sometimes, depending on how long the program takes to start and how long the commands take to execute.
Are you sure the pauses are really needed? Most programs buffer their input and will seamlessly run the next command once they have finished the previous one.
If the pauses are needed, this is a job for expect. It's been a while since I've used expect, but you want a script that looks pretty much like this:
spawn myprogram # start your program
sleep 2 # wait 2 seconds
send "command1\r" # send a command
sleep 1
send "command2\r"
sleep 1
send "exit\r"
wait # wait until the program exits
(A big "gotcha" is that each line of input must be terminated with \r (and NOT \n). That's easy to miss.)
This can be improved, however: you waste time if a command takes less than a second to run, and sometimes a command may take longer than expected. Since most interactive programs display some sort of prompt, it's better to use that as the cue. Expect makes this very easy. For this example I'm assuming your program prints "Ready>" when it's ready to accept a new command.
spawn myprogram
expect "Ready>"
send "command1\r"
expect "Ready>"
send "command2\r"
expect "Ready>"
send "exit\r"
wait
You'll have to consult the expect documentation for more advanced features, e.g. to add error handling (what to do if some command fails?).
You could try this:
( sleep 2
  echo command1
  sleep 1
  echo command2
  sleep 1
  echo command3
  sleep 1 ) | ./myprogram
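Unlike a here-document, the pipeline delivers each line when the corresponding echo actually runs, so ./myprogram sees command1 two seconds in, command2 a second later, and so on. A brace group reads the same way, if you prefer that spelling:
{ sleep 2
  echo command1
  sleep 1
  echo command2
  sleep 1
  echo command3
  sleep 1; } | ./myprogram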
or look into the expect command.
Use sleep 1 between your commands
You may try the following:
./myprogram << EOF
command1
$(sleep 2)
command2
$(sleep 2)
command3
EOF
Note, though, that the command substitutions in a here-document are expanded when the shell builds the document, before ./myprogram even starts, so the sleeps above delay the program's start rather than pacing the individual commands. I would strongly suggest you take a look at expect instead:
http://linuxaria.com/howto/2-practical-examples-of-expect-on-the-linux-cli
Does the wait command work in a csh script to wait for more than 1 PID to finish?
That is, will wait accept several PIDs and block until all of the listed PIDs have completed before moving on to the next line?
e.g.
wait $job1_pid $job2_pid $job3_pid
nextline
The online documentation I usually see shows the wait command with only one PID, although I have read of wait being used with multiple PIDs, for example here:
http://www2.phys.canterbury.ac.nz/dept/docs/manuals/unix/DEC_4.0e_Docs/HTML/MAN/MAN1/0522____.HTM
which says: "If one or more pid operands are specified that represent known process IDs, the wait utility waits until all of them have terminated".
No, the builtin wait command in csh can only wait for all jobs to finish. The command in the documentation you're referencing is a separate executable, probably located at /usr/bin/wait or similar, and it cannot be used for what you want here.
I recommend using bash and its more powerful wait builtin, which does allow you to wait for specific jobs or process ids.
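For example, a minimal bash sketch (note that the wait builtin also accepts several PIDs at once):
#!/bin/bash
sleep 4 & pid1=$!
sleep 7 & pid2=$!
wait "$pid1" "$pid2"    # returns once both processes have terminated
echo "both jobs finished"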
From the tcsh man page, wait waits for all background jobs. tcsh is compatible with csh, which is what the university documentation you linked refers to.
wait    The shell waits for all background jobs. If the shell is
        interactive, an interrupt will disrupt the wait and cause the
        shell to print the names and job numbers of all outstanding jobs.
You can find this exact text in the csh documentation here.
The wait described in that documentation is a separate executable that takes a list of process IDs.
However, a process can only wait for its own children, and your background jobs are children of your shell, not of the wait process; so the wait executable has no chance of doing the right thing in a shell script.
For instance, on OS X, /usr/bin/wait is this shell script:
#!/bin/sh
# $FreeBSD: src/usr.bin/alias/generic.sh,v 1.2 2005/10/24 22:32:19 cperciva Exp $
# This file is in the public domain.
builtin `echo ${0##*/} | tr \[:upper:] \[:lower:]` ${1+"$@"}
Anyway, I can't get the /usr/bin/wait executable to work reliably in a csh script, because the background jobs are not child processes of the /usr/bin/wait process itself.
#!/bin/csh -f
setenv PIDDIR "`mktemp -d`"
sleep 4 &
# csh has no equivalent of $!, so fish the PID of the sleep out of ps
ps ax | grep 'slee[p]' | awk '{ print $1 }' > $PIDDIR/job
# unreliable: the sleep is not a child of the /usr/bin/wait process
/usr/bin/wait `cat $PIDDIR/job`
I would highly recommend writing this script in bash or similar instead, where the builtin wait does allow you to wait for specific PIDs, and where capturing the PIDs of background jobs is easier.
#!/bin/bash
sleep 4 &
pid_sleep_4="$!"
sleep 7 &
pid_sleep_7="$!"
wait "$pid_sleep_4"
echo "waited for sleep 4"
wait "$pid_sleep_7"
echo "waited for sleep 7"
If you don't want to rewrite the entire csh script you're working on, you can call out to bash from inside it, like so:
#!/bin/csh -f
bash <<'EOF'
sleep 4 &
pid_sleep_4="$!"
sleep 7 &
pid_sleep_7="$!"
wait "$pid_sleep_4"
echo "waited for sleep 4"
wait "$pid_sleep_7"
echo "waited for sleep 7"
'EOF'
Note that you must end that heredoc with 'EOF' including the single quotes: csh compares each input line against the word after << exactly as it was written.
I am running a command over SSH which I have set to repeat in a loop, using either watch or while.
The code:
while sleep SECONDS; do command; done
or
watch -n SECONDS command
Example:
while sleep 5; do ls; done
In this example, ls will run every 5 seconds indefinitely, until it is stopped with CTRL+C or the terminal is closed.
In my use case I am using a different command (one that happens to produce no output), but the results are the same.
Both forms display nothing when the command produces no output (unlike ls), even though the command is indeed executed every x seconds.
The thing is, I want to see some output at each execution.
For example,
root@server:~ # while sleep 5; do <command>; done
.. executed at 00:00:05
.. executed at 00:00:10
.. executed at 00:00:15
.. executed at 00:00:20
.. executed at 00:00:25
.. executed at 00:00:30
and so on, every x seconds, until it is stopped.
Ideally I would like it to print text like the above, including the time of each execution.
I can't just add a printf into it; I tried that.
# this doesn't work
while sleep 5; do <command> printf "..."; done
where ... is the text I want it to print at each execution. The problem is that it does print, but it breaks my command. I tried the printf before the command too, but then the command is not executed.
How can I get the desired output so I don't just stare at a blank screen? At every 5 seconds the interval is easy to count, but at longer intervals like 120 seconds it is much harder to tell when the command executed. Output showing the time of each execution would be extremely helpful.
I am on CentOS 6 by the way.
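For what it's worth, the failure described comes from the missing separator: without a ; between <command> and printf, the printf and its arguments are passed to <command> as extra arguments. A minimal sketch of the loop being asked for, keeping <command> as a placeholder and letting date supply the timestamp:
while sleep 5; do
    <command>
    printf '.. executed at %s\n' "$(date +%H:%M:%S)"
done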
I need to launch a process within a shell script. (It is a special logging process.) It needs to live for most of the shell script, while some other processes will run, and then at the end we will kill it.
A problem that I am having is that I need to launch this process, and wait for it to "warm up", before proceeding to launch more processes.
I know that I can wait for a line of input from a pipe using read, and I know that I can spawn a child process using &. But when I use them together, it doesn't work like I expect.
As a mockup:
When I run this (sequential):
(sleep 1 && echo "foo") > read
my whole shell blocks for 1 second, and the echo of foo is consumed by read, as I expect.
I want to do something very similar, except that I run the "foo" job in parallel:
(sleep 1 && echo "foo" &) > read
But when I run this, my shell doesn't block at all; it returns instantly. I don't know why the read doesn't wait for a line to appear on the pipe.
Is there some easy way to combine "spawning of a job" (&) with capturing the stdout pipe within the original shell?
An example that is very close to what I actually need, and which I need to rephrase somehow, is this:
(sleep 1 && echo "foo" && sleep 20 &) > read; echo "bar"
and I need it to print "bar" after exactly one second, not immediately and not 21 seconds later.
Here's an example using named pipes, pretty close to what I used in the end. Thanks to Luis for suggesting named pipes in his comments.
#!/bin/sh
# Set up temporary fifo
FIFO=/tmp/test_fifo
rm -f "$FIFO"
mkfifo "$FIFO"
# Spawn a second job that writes to the FIFO after some time.
# The sequence is grouped in parentheses so that the echo (and not
# just the final sleep) has its output redirected to the FIFO.
( sleep 1 && echo "foo" && sleep 20 ) >"$FIFO" &
# Block the main job until a line arrives on the FIFO
read line <"$FIFO"
# So that we can see when the main job unblocks
echo "$line"
Thanks also to commenter Emily E.: the misbehaving example I posted was indeed writing to a file called read instead of using the shell builtin command read.
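A fixed name under /tmp can collide with someone else's file; a variant that keeps the fifo in a private directory created by mktemp -d looks like this (same structure, with cleanup added):
#!/bin/sh
DIR=$(mktemp -d) || exit 1
FIFO="$DIR/fifo"
mkfifo "$FIFO"
( sleep 1 && echo "foo" && sleep 20 ) >"$FIFO" &
read line <"$FIFO"
echo "$line"
rm -rf "$DIR"    # safe: the background writer keeps its open descriptor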
I am running a shell script, something like sh script.sh, in bash. The script contains many lines, some of which take seconds and others days to execute. How can I kill the sh command without killing the command it is currently running (the current line of the script)?
You haven't specified exactly what should happen when you 'kill' your script, but I'm assuming that you'd like the currently executing line to complete and the script then to exit before doing any more work.
This is probably best achieved by coding your script to receive such a request and respond appropriately; I don't think there is any magic in Linux to do this for you.
For example, you could:
trap a signal and then set a variable, or
check for the existence of a file (e.g. one created with touch /var/tmp/trigger).
Then, after each line in your script, you'd check whether the trap had been called (or the trigger file created) and, if so, exit. If the trigger has not been set, you continue on and do the next piece of work.
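For instance, a minimal sketch of the trigger-file variant (the path /var/tmp/trigger matches the example above; step_one and step_two stand in for real lines of work):
#!/bin/sh
TRIGGER=/var/tmp/trigger
step_one                                             # a long-running line
[ -e "$TRIGGER" ] && { rm -f "$TRIGGER"; exit 0; }   # asked to stop?
step_two                                             # the next line of work
[ -e "$TRIGGER" ] && { rm -f "$TRIGGER"; exit 0; }
Another terminal then runs touch /var/tmp/trigger, and the script exits at the next check, once the line it is currently on has completed.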
To the best of my knowledge, you can't trap SIGKILL (kill -9); if someone sends that to your process, it will die.
HTH, Ace
The only way I can think of to achieve this is for the parent process to trap the kill signal, set a flag, and then check that flag before executing each further command in the script.
The subprocesses also need to be immune to the kill signal. bash seems to behave differently from ksh in this respect, and the script below seems to work fine.
#!/bin/bash
QUIT=0
trap "QUIT=1; echo 'term'" TERM   # record the signal; don't exit yet

function terminated {
    if ((QUIT == 1))
    then
        echo "Terminated"
        exit
    fi
}

function subprocess {
    typeset -i N
    while ((N++ < 3))
    do
        echo $N
        sleep 1
    done
}

while true
do
    subprocess
    terminated    # exit between work units if TERM was received
    sleep 3
done
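To try it (assuming the script above is saved as loop.sh; the name is just for illustration):
./loop.sh & pid=$!
sleep 5
kill -TERM "$pid"   # the trap sets QUIT=1 and prints 'term'
wait "$pid"         # the script finishes the current subprocess run, then exits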
I assume your script has been running for days and you don't want to kill it without knowing whether its current child has finished.
Find the pid of your process, using ps.
Then
child=$(pgrep -P "$pid")             # PID of the script's current child
while kill -s 0 "$child" 2>/dev/null # signal 0 only tests that it is alive
do
    sleep 1
done
kill "$pid"                          # the child has finished; kill the script
I tried searching online but didn't find a definite answer for this. Does the & symbol in Linux run two jobs in parallel, or one after the other?
for example:
command1 & command2
Here, will command1 and command2 run in parallel, or will command2 run only AFTER command1 finishes? What exactly is happening here?
The reason I'm asking is that in my command1 and command2 I am calling scripts, with different arguments, that write data to the same text file. After running the aforementioned script, I see the output of command2 appended after command1's output. Is this the expected behaviour if they are truly working in parallel?
Try this on for size:
$ ls & pwd
[1] 7592             <--- "ls" being put in the background as job #1, with pid 7592
/home/marc           <--- output of "pwd"
$ stuff
^------------------- shell waiting for next input
  ^^^^^------------- output of the "ls" command, appearing after the new prompt
a & b places the a program in the background and immediately starts executing the b command as well. It is not necessarily parallel in the strict sense, but it is two completely separate processes running at the same time, and they happen to be sharing a common output: your terminal.
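Since the two processes run concurrently, their writes to a shared file can interleave, or can happen to land one block after the other, depending on timing and buffering. A small illustration (out.txt is just an example name):
( for i in 1 2 3; do echo "first $i"; sleep 1; done ) >> out.txt &
( for i in 1 2 3; do echo "second $i"; sleep 1; done ) >> out.txt
wait                # let the backgrounded writer finish too
cat out.txt         # lines from the two loops typically interleave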