Is there any kind of program or script with which I can manually queue application processes on Ubuntu? For example, say 40 processes are running at a given point in time and 10 more are about to run some time later. Is there any way I can tell the system to run, say, 3 of those 10 at the same time, and after they complete to run the remaining 7 one at a time in a specific order?
You could achieve that result with a job-aware shell (zsh, bash).
For instance in bash:
# run first 3 apps in background and in parallel
app1 &
app2 &
app3 &
# wait for all background jobs to finish
wait
# run the remaining apps in specified order
app4
app5
...
The & means run the program in the background (i.e. you get another shell prompt the moment the program is started). All backgrounded jobs run in parallel. However, backgrounded jobs cannot read standard input (i.e. you can't provide keyboard input to them - well, you can by bringing them to the foreground first, but that's another story).
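To illustrate that last point, here is a minimal sketch of the interactive shell commands involved (%1 refers to whichever job the jobs command lists first):
jobs        # list the jobs started from this shell
fg %1       # bring job 1 back to the foreground so it can read the keyboard
# press Ctrl-Z to suspend it again, then:
bg %1       # resume it in the background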
Related
I have an uploader service which needs to run every 5 minutes, and it definitely finishes within 5 minutes, so there are never two parallel sessions.
I am wondering what would be a good strategy for running this: either schedule it as a cron job on the host, or start a Go program with an infinite loop which executes the program and sleeps (Golang: Implementing a cron / executing tasks at a specific time).
If your task is...
On Unix
Stand alone
Periodic
Has an acceptable startup time
cron will be better than rolling your own scheduler just for the one service. It will guarantee the process always runs at the correct time and has rudimentary error reporting. There's no need to add a watchdog in case your infinite loop has an error; cron will simply run the process again in 5 minutes.
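For example, a minimal crontab entry could look like this (the uploader path is a placeholder):
# run the uploader every 5 minutes
*/5 * * * * /usr/local/bin/uploader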
If cron is insufficient, look into other job schedulers before rolling your own.
I have an uploader service which needs to run every 5 minutes and it definitely finishes within 5 minutes so there are never two parallel sessions.
These are famous last words. I would suggest adding some form of locking. For example, write your PID to a file in /var/run and check whether that process is running. There's even a little pidfile library for Go.
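If you end up on the cron route, flock(1) is a shell-level alternative to a PID file; a sketch (the lock and uploader paths are placeholders) that simply skips a run while the previous one still holds the lock:
# -n means fail immediately instead of waiting for the lock
*/5 * * * * flock -n /var/run/uploader.lock /usr/local/bin/uploader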
Take a look at systemd: you can run a script from a timer and set a maximum execution time for the script.
https://wiki.archlinux.org/index.php/Systemd/Timers
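A minimal sketch of such a timer/service pair (unit names and the ExecStart path are placeholders; RuntimeMaxSec is the per-run execution limit):
# uploader.service
[Unit]
Description=Run the uploader once

[Service]
Type=oneshot
ExecStart=/usr/local/bin/uploader
RuntimeMaxSec=240

# uploader.timer
[Unit]
Description=Run the uploader every 5 minutes

[Timer]
OnBootSec=5min
OnUnitActiveSec=5min

[Install]
WantedBy=timers.target
You would then enable it with systemctl enable --now uploader.timer.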
I am using Tcl/Expect for testing an Ethernet switch. I want to run some test tool repeatedly, every 2 seconds, in the background. The output can be ignored.
That is I want to do the following:
Have one block of code which repeats the following over and over until stopped from the other block of code:
exec { $myCommand }
sleep $sleepDuration
$myCommand can be either a list or a string, whatever is more convenient. $sleepDuration is a number.
In the other block of code I will do normal operations, including spawning processes and talking to them or interacting with already existing connections.
Is that possible with Tcl? I am running on Debian Linux 6 in VMware. The code doesn't need to be portable to other platforms.
I have tried using watch but I am not sure how to do the following:
exec {watch $myCommand}
switch to an already connected process
interact with it
switch to watch
send "\x03"
In the simplest case, where the command runs quickly, you use the every pattern:
proc every {delay script} {
    # run the script at the global level, then reschedule this same call
    uplevel #0 $script
    after $delay [info level 0]
}
every 2000 {exec $yourCommand $yourArgument &}
Then, as long as you're servicing the Tcl event loop, the command will be run every 2 seconds (== 2000 milliseconds).
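If nothing else in your script keeps the event loop running (Expect's expect command services it; a plain non-interactive tclsh script may not), one minimal way to block while the timer keeps firing is:
vwait forever   ;# waits on a variable that is never set, so timer events keep being serviced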
To start with, I am a beginner at programming, so apologies for the lack of accurate terminology in my question, but hopefully I will manage to get my points across!
Would you have any suggestions for how, in bash or tcsh, I can run a long background process which itself launches a few programs, runs three long processes in parallel on different cores, and waits for all three to complete before proceeding?
I have written a shell script (for bash) to apply an image filter to each frame of a short but heavy video clip (it's a scientific tomogram actually but this does not really matter). It is supposed to:
Create a file with a script to convert the whole file to a different format using an em2em piece of software.
Split the converted file into three equal parts and filter each set of frames in a separate process, on separate cores of my Linux server (to speed things up), using a program called spider. First, three batch-mode files (filter_1/2/3.spi) with the required filtration parameters are created, and then three subprocesses are launched:
spider spi/spd #filter_1 & # The first process to be launched by the main script and run in the background on one core
spider spi/spd #filter_2 & # The second background process run on the next core
spider spi/spd #filter_3 # The third process to be run in parallel with the two above and be finished before proceeding further.
These filtered fragments are then put together at the end.
Because I wanted the 3 filtration steps to run simultaneously, I sent the first two to the background with a simple & and kept the last one in the foreground, so that the main script waits for all three to finish (they should finish at roughly the same time) before proceeding to reassemble the 3 chunks. This all works fine when I run my script in the foreground, but it throws a lot of output from the many subprocesses onto the terminal. I can reduce it with:
$ ./My_script 2>&1 > /dev/null
But each spider process still returns
*****Spider normal stop*****
to the terminal. And when I try to send the main script to the background, it keeps getting stopped.
Would you have any suggestions how I can run the main script in the background and still get it to run the 3 spider sub-processes in parallel somehow?
Thanks!
You can launch each spider in the background, storing the process ids which you can later use in a wait command, such as:
spider spi/spd #filter_1 &
sp1=$!
spider spi/spd #filter_2 &
sp2=$!
spider spi/spd #filter_3 &
sp3=$!
wait $sp1 $sp2 $sp3
If you want to get rid of output, apply redirections on each command.
Update: actually, you don't even need to store the PIDs; a wait without parameters will automatically wait for all spawned children.
First, if you are using bash you can use wait to wait for each process to exit. For example, all the messages will be printed only when all processes have finished:
sleep 10 &
P1=$!
sleep 5 &
P2=$!
sleep 6 &
P3=$!
wait $P1
echo "P1 finished"
wait $P2
echo "P2 finished"
wait $P3
echo "P3 finished"
You can use the same idea to wait for the spider processes to finish and only then merge the results.
Regarding the output, you can try to redirect each one to /dev/null instead of redirecting all the output of the script:
sleep 10 &> /dev/null &
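If you also want the main script itself to survive being put in the background without getting stopped by the terminal, a minimal sketch is to detach it with nohup and redirect all of its streams up front:
# detach from the terminal and discard all input/output
nohup ./My_script < /dev/null > /dev/null 2>&1 &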
I have a cron job I need to run every 7 days to aggregate a bunch of data using a PHP script. The process is pretty CPU intensive and can take a decent amount of time. Despite setting it to run at 4 am (when we get the least traffic), users are starting to notice some downtime when the script runs. Is there a way to run this in the background only when the CPU is not being used or has a free thread?
Thanks!
In the cron job line, you can wrap the php command with one of the 'nice', 'chrt' or 'loadwatch' programs.
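For example, a sketch of the crontab line using nice (the script path is a placeholder; 19 is the lowest CPU priority, and prefixing ionice -c3 would additionally demote disk I/O):
# every Sunday at 4 am, at the lowest CPU priority
0 4 * * 0 nice -n 19 php /path/to/aggregate.php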
I coded a monitoring program in RPG that checks if the fax/400 is operational.
And now I want this program to check every 15 minutes.
Instead of placing a job every 15 minutes in the job scheduler (which would be ugly to manage), I made the program wait between checks using DLYJOB.
Now how can I make this program "place itself" in memory so it keeps running?
(I thought of using SBMJOB, but I can't figure in which job queue I could place it.)
A good job queue to use for an endlessly running job would be QSYSNOMAX, which allows an unlimited number of jobs to run.
You could submit the job to that queue from your startup program (QSTRUP) and it will simply remain running all the time.
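A sketch of that submit, with the library, program and job names as placeholders:
SBMJOB CMD(CALL PGM(MYLIB/FAXMON)) JOB(FAXMON) JOBQ(QSYS/QSYSNOMAX)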
Here is what I have done in the past. There are two approaches to this:
1) Submit a new job every time the program runs, with a DLYJOB before it runs.
2) Create a loop and only end when a certain condition is met.
What I did with a Monitor MSGW program was the following:
PGM
    DCL        VAR(&TIME) TYPE(*CHAR) LEN(6)
    DCL        VAR(&STOPTIME) TYPE(*CHAR) LEN(6) +
                 VALUE('200000')
    /* Setup my program (run only once) */
START:
    /* Perform my actions */
    RTVSYSVAL  SYSVAL(QTIME) RTNVAR(&TIME)
    IF         COND(&TIME *GE &STOPTIME) THEN(GOTO CMDLBL(END))
    DLYJOB     DLY(180)
    GOTO       CMDLBL(START)
END:
ENDPGM
This will run continuously until 8:00 pm. Then I add this to the job scheduler to submit every morning.
As far as which job queue goes: I am using QINTER, but it could really run anywhere. Make sure you choose a subsystem with enough available jobs, as this will take one up.
The downside of running in QINTER is that if the program starts to hit 100% CPU, it will use up all of your interactive CPU and effectively lock up your system.
I know of 3 ways to do that.
1) Using a data queue: there is a parameter to tell it to wait endlessly or for a given time interval.
2) Using the OVRDBF command: there is a parameter there to tell it not to end at end-of-file, making your program keep on waiting.
3) Easiest to implement: SBMJOB to call a program that loops forever, e.g. with DOW 1=1; you can insert code that checks for a certain time interval before each iteration. You can have your logic inside the loop check for the fax, process it, and then go back to waiting.