I'm working in Linux. I have two programs that run indefinitely (that is, they won't stop unless I kill the process). I want to run program 1 first and then start program 2 after 20 seconds (both have to run simultaneously, as one reads a file written by the other). Currently I run the two programs by manually keeping track of the time. Is there a way to automate this? That is, is there a command, or can a program be written, to do this?
prog1 &
sleep 20
prog2
Using the shell:
$ program1 & sleep 20 ; program2
If one program reads from a file written by the other, you should consider using a pipe to pass output from one to the input of the other:
$> program1 | program2
I'm assuming that you have control over these two programs and can get them to write to stdout and read from stdin.
I need to launch a process within a shell script. (It is a special logging process.) It needs to live for most of the shell script, while some other processes will run, and then at the end we will kill it.
A problem that I am having is that I need to launch this process, and wait for it to "warm up", before proceeding to launch more processes.
I know that I can wait for a line of input from a pipe using read, and I know that I can spawn a child process using &. But when I use them together, it doesn't work like I expect.
As a mockup:
When I run this (sequential):
(sleep 1 && echo "foo") > read
my whole shell blocks for 1 second, and the echo of foo is consumed by read, as I expect.
I want to do something very similar, except that I run the "foo" job in parallel:
(sleep 1 && echo "foo" &) > read
But when I run this, my shell doesn't block at all, it returns instantly -- I don't know why the read doesn't wait for a line to be printed on the pipe?
Is there some easy way to combine "spawning of a job" (&) with capturing the stdout pipe within the original shell?
An example that is very close to what I actually need is this, which I need to rephrase somehow:
(sleep 1 && echo "foo" && sleep 20 &) > read; echo "bar"
and I need for it to print "bar" after exactly one second, and not immediately, or 21 seconds later.
Here's an example using named pipes, pretty close to what I used in the end. Thanks to Luis for his comments suggesting named pipes.
#!/bin/sh
# Set up temporary fifo
FIFO=/tmp/test_fifo
rm -f "$FIFO"
mkfifo "$FIFO"
# Spawn a second job that writes "foo" to the FIFO after one second
# (the group ensures the redirection covers the echo, not just the last command)
{ sleep 1 && echo "foo" && sleep 20; } >"$FIFO" &
# Block the main job on getting a line from the FIFO
read line <"$FIFO"
# So that we can see when the main job continues
echo "$line"
Thanks also to commenter Emily E.: the misbehaving example I posted was indeed writing to a file called read, instead of using the shell builtin command read.
I have 2 programs that I want to run, programA.py and programB.py. When I run them manually, I have to open separate terminals and type the following commands:
terminal1
python programA.py
terminal2
python programB.py
Each of these programs then outputs some data on the command line. Lastly, programA.py has to be fully started and waiting before programB.py can start (it takes ~2 s for programA.py to start and be ready to accept data).
If I am running these programs in Ubuntu, how can I write a bash script that accomplishes that? Right now, I have the following:
#!/bin/bash
python programA.py
python programB.py
This starts programA.py, but because programA.py then waits for input, programB.py doesn't start until you close out of programA.py. How can I change my script to run the two programs simultaneously?
Edit:
Using the advice given by Andreas Neumann below, changing the script to the following successfully launches the two programs:
#!/bin/bash
python programA.py &
sleep 5
python programB.py &
However, when both programs are launched this way, they don't work properly. Basically, programA.py sets up a listening socket and then creates an interface that the user works with. programB.py then starts afterwards and runs a process that talks to programA.py over the socket. With the above script, programA starts, the script waits, programB starts, and then A and B connect and form the interface, but programB doesn't run its background processes correctly.
Updated Answer
If you find my original answer below doesn't work, yet you still want to solve your question with a single script, you could do something like this:
#!/bin/bash
xterm -e "python programA.py" &
sleep 5
python programB.py
Original Answer
If programA is creating a user interface, you probably need that to be in the foreground, so start programB in the background:
{ sleep 5; python programB.py; } &
python programA.py
#!/bin/bash
python programA.py &
sleep 5 # give enough time to start
python programB.py &
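A fixed sleep can be fragile if programA.py takes longer than usual to come up. As a sketch (assuming programA.py listens on TCP port 5000, which is a made-up number here — use whatever port your program actually binds), you can poll with bash's /dev/tcp pseudo-device until the socket accepts connections:

```shell
#!/bin/bash
# Poll until something is listening on 127.0.0.1:$1, making up to $2
# attempts one second apart. Uses bash's /dev/tcp, so this needs bash
# (not plain sh); the port number is an assumption for illustration.
wait_for_port() {
  local port=$1 tries=${2:-30} i
  for ((i = 0; i < tries; i++)); do
    # Opening /dev/tcp/host/port succeeds only once the socket accepts
    if ( : >"/dev/tcp/127.0.0.1/$port" ) 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Hypothetical usage, replacing the fixed `sleep 5`:
#   python programA.py &
#   wait_for_port 5000 && python programB.py &
#   wait
```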
I want to have a shell script that configures several things and then calls two other shell scripts. I want these two scripts to run in parallel, and I want to be able to see their live output.
Here is my first script which calls the other two
#!/bin/bash
#CONFIGURE SOME STUFF
$path/instance2_commands.sh
$path/instance1_commands.sh
These two processes deploy two different applications, and each takes around 5 minutes, so I want to run them in parallel and also see their live output so I know where they are in the deployment. Is this possible?
Running both scripts in parallel can look like this:
#!/bin/bash
#CONFIGURE SOME STUFF
$path/instance2_commands.sh >instance2.out 2>&1 &
$path/instance1_commands.sh >instance1.out 2>&1 &
wait
Notes:
wait pauses until the children, instance1 and instance2, finish
2>&1 on each line redirects error messages to the relevant output file
& at the end of a line causes the main script to continue running after forking, thereby producing a child that is executing that line of the script concurrently with the rest of the main script
each script should send its output to a separate file. Sending both to the same file will be visually messy and impossible to sort out when the instances generate similar output messages.
you may attempt to read the output files while the scripts are running with any reader, e.g. less instance1.out; however, output may be stuck in a buffer and not up to date. To fix that, the programs would have to open stdout in line-buffered or unbuffered mode. It is also up to you to refresh the display, e.g. with tail -f or less's F command.
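As a concrete sketch of the buffering point: GNU coreutils provides stdbuf, which can force line buffering for programs that use C stdio. The command below is a stand-in for instance1_commands.sh:

```shell
#!/bin/bash
# stdbuf -oL forces line-buffered stdout, so each log line reaches the
# file as soon as it is printed and `tail -f instance1.out` stays current.
stdbuf -oL sh -c 'echo "deploy step 1"; echo "deploy step 2"' \
    >instance1.out 2>&1 &
wait
cat instance1.out
```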
Example D from an article on Apache Spark and parallel processing on my blog provides a similar shell script for calculating sums of a series for Pi on all cores, given a C program for calculating the sum on one core. This is a bit beyond the scope of the question, but I mention it in case you'd like to see a deeper example.
It is very possible; change your script to look like this (the trailing & is what makes them run in parallel):
#!/bin/bash
#CONFIGURE SOME STUFF
$path/instance2_commands.sh >> script.log 2>&1 &
$path/instance1_commands.sh >> script.log 2>&1 &
wait
They will both output to the same file and you can watch that file by running:
tail -f script.log
If you like, you can output to two different files instead. Just change each line to output (>>) to a second file name.
This is how I ended up writing it, using Paul's instructions.
source $path/instance2_commands.sh >instance2.out 2>&1 &
source $path/instance1_commands.sh >instance1.out 2>&1 &
tail -q -f instance1.out -f instance2.out --pid $!
wait
sudo rm instance1.out
sudo rm instance2.out
The logs from my two processes were different, so I didn't care that they weren't kept apart; that is why I merged them into one stream.
Background: I have to revive my old program, which unfortunately fails when it comes to communicating with a subprocess. The program is written in C++ and creates a subprocess for writing, with an open pipe for reading. Nothing crashes, but there is no data to read.
My idea is to recreate entire scenario in bash, so I could interactively check what is going on.
Things I used in C++:
mkfifo for creating pipe, there is a bash equivalent
popen for creating subprocess (in my case for writing)
espeak -x -q -z 1> /dev/null 2> /tmp/my-pipe
open and read -- for opening the pipe and then reading, I hope simple cat will suffice
fwrite -- for writing to subprocess, will just redirection work?
So I hope open, read and fwrite will be straightforward, but how do I launch a program as a process (what is popen in bash)?
bash naturally makes piping between processes very easy, so commands to create and open pipes are not normally needed
program1 | program2
This is the equivalent of program1 running popen("program2","w");
It could also be achieved by program2 running popen("program1","r");
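bash also has a built-in that is closer to popen() in spirit: coproc (bash 4+) starts a background process and exposes file descriptors for its stdin and stdout, so the parent shell can both write to it and read from it. A minimal sketch, using cat as the subprocess:

```shell
#!/bin/bash
# coproc starts `cat` as a child; ${CAT[1]} is a fd writing to its
# stdin (like popen(..., "w")), and ${CAT[0]} reads its stdout.
coproc CAT { cat; }

echo "hello" >&"${CAT[1]}"   # like fwrite() to the subprocess
read -r reply <&"${CAT[0]}"  # read what the subprocess wrote back
echo "$reply"                # prints: hello

fd=${CAT[1]}
exec {fd}>&-                 # closing the write end (like fclose) lets cat exit
wait
```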
If you explicitly want to use a named pipe:
mkfifo /tmp/mypipe
program1 >/tmp/mypipe &
program2 </tmp/mypipe
rm /tmp/mypipe
A thought that might solve your original problem (and is a consideration for using pipes in shell):
Using stdio functions such as popen, fwrite, etc. involves buffering. If the program on the write end of the pipe writes only a small amount of data, the program on the read end won't see any of it until a full block of data has been written, at which point the block is pushed along the pipe. If you want the data to arrive sooner, you need to either call fflush() on the writing end, or fclose() if you are not planning to send any more data. Note that in bash, I don't believe there is any equivalent of fflush.
You simply run the process in the background.
espeak -x -q -z >/dev/null 2>/tmp/mypipe &
First, the background to this intriguing challenge. During development and testing, the continuous integration build often fails with deadlocks, loops, or other issues that result in a never-ending test, so all the mechanisms for notifying that a build has failed become useless.
The solution will be to have the build script time out if there is zero output to the build log file for more than 5 minutes, since the build routinely writes out the names of unit tests as it proceeds. That's the best way to identify that it's "frozen".
Okay. Now the nitty gritty...
The build server uses Hudson to run a simple bash script that invokes the more complex build script based on Nant and MSBuild (all on Windows).
So far all solutions around the net involve a timeout on the total run time of the command. But that solution fails in this case because the tests might hang or freeze in the first 5 minutes.
What we've thought of so far:
First, here's the high-level bash command that runs the full test suite in Hudson.
build.sh clean free test
That command simply sends all the Nant and MSBuild build logging to stdout.
It's obvious that we need to tee that output to a file:
build.sh clean free test 2>&1 | tee build.out
Then, in parallel, a command needs to sleep, check the modification time of the file, and, if it is more than 5 minutes old, kill the main process. A kill -9 will be fine at that point; nothing graceful is needed once it has frozen.
That's the part you can help with.
In fact, I made a script like this over 15 years ago, to kill the connection on a data phone line to Japan after periods of inactivity, but I can't remember how I did it.
Sincerely,
Wayne
build.sh clean free test 2>&1 | tee build.out &
sleep 300
kill -KILL %1
You may be able to use timeout:
timeout 300 command
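Note that GNU coreutils timeout bounds the command's total runtime rather than inactivity, which is the limitation the question calls out. A quick check of its behavior:

```shell
#!/bin/bash
# `timeout` sends TERM (or the signal given with -s) when the limit
# expires, and exits with status 124 on a timeout.
timeout 1 sleep 5
echo "exit status: $?"   # prints: exit status: 124
```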
Solved this myself by writing a bash script.
It's called iotimeout with one parameter which is the number of seconds.
You use it like this:
build.sh clean dev test | iotimeout 120
iotimeout has 2 loops.
One is a simple while read line loop that echoes each line, but it also uses the touch command to update the modification time of a temp file every time it writes a line. Unfortunately, it wasn't possible to monitor the build.out file directly, because Windoze doesn't update the file's modification time until you close the file. Oh well.
The other loop runs in the background; it's a forever loop that sleeps 10 seconds and then checks the modification time of the temp file. If that ever exceeds 120 seconds old, the loop forces the entire process group to exit.
The only tricky part was returning the exit code of the original program; bash gives you the PIPESTATUS array to solve that. Also, figuring out how to kill the entire process group took some research, but it turns out to be easy: just kill 0.
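The description above can be sketched as a small bash filter. The GNU stat -c %Y call, mktemp stamp file, and the 10-second poll interval are assumptions filling in the details of the text:

```shell
#!/bin/bash
# Sketch of an inactivity-timeout filter along the lines described above.
# Assumes GNU stat (-c %Y prints mtime in seconds); adjust for BSD stat.
iotimeout() {
  local limit=${1:-120}
  local stamp
  stamp=$(mktemp)
  touch "$stamp"

  # Background watchdog: every 10 seconds, check how old the stamp is;
  # if it exceeds the limit, take down the whole process group.
  (
    while sleep 10; do
      if [ $(( $(date +%s) - $(stat -c %Y "$stamp") )) -gt "$limit" ]; then
        echo "iotimeout: no output for ${limit}s" >&2
        kill 0
      fi
    done
  ) >/dev/null &
  local watchdog=$!

  # Foreground loop: pass each line through and refresh the stamp.
  local line
  while IFS= read -r line; do
    printf '%s\n' "$line"
    touch "$stamp"
  done

  kill "$watchdog" 2>/dev/null
  rm -f "$stamp"
}

# Demo: lines arrive immediately, so the watchdog never fires
printf 'test one\ntest two\n' | iotimeout 60
```

The real build would then run as build.sh clean dev test 2>&1 | iotimeout 120, with ${PIPESTATUS[0]} giving build.sh's own exit code, as described above.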