At line x of Script A, I want to start running Script B, which takes 10 seconds to complete. However, I do not want line x+1 to be executed 10 seconds after line x. Is there a way to achieve this?
Script B is simply an independent series of commands sent to another external device. Script A is used to monitor that device. Script B does not return anything, and the code after line x in Script A does not rely on Script B.
Overall, I want to trigger the start of Script B and let it run independently of Script A, while Script A keeps running continuously.
Thanks to the comment, I found subprocess.Popen() can do the work.
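A minimal sketch of the fire-and-forget pattern with subprocess.Popen() (the file name script_b.py is a placeholder for the real Script B):

```python
import subprocess
import sys

# Popen starts the child process and returns immediately, so the next
# line of Script A runs without waiting for B's ~10 seconds to elapse.
# "script_b.py" is a placeholder path; substitute the real script.
proc = subprocess.Popen([sys.executable, "script_b.py"])
print("Script B launched with pid", proc.pid)
# ... line x+1 of Script A continues here ...
```

This differs from subprocess.run() or os.system(), both of which block until the child finishes.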
Actually, Script B can be simplified into one single function (Function B). In this case, shall I create a Python script (Script B) that only calls Function B, and use subprocess.Popen() to run Script B from Script A?
Or is there a better way to call Function B directly, rather than through Script B, in a similar fashion to subprocess.Popen()?
I am trying to call Function B directly because my task is very time-dependent, and half a second of delay may be significant. I measured the delay from line x-1 in Script A to line 1 in Script B when calling Script B from Script A: it is ~450 ms. I suspect the delay comes from the time the interpreter needs to start up, compile Script B, and execute it, even though Script B is only one or two lines long.
Thank you very much!
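Since Function B lives in Script A's process, a thread avoids the interpreter-startup cost entirely. A sketch, with function_b as a stand-in for the real device-command sequence:

```python
import threading
import time

def function_b():
    # stand-in for the real Function B; the sleep simulates the ~10 s
    # of external-device work
    time.sleep(0.1)

start = time.perf_counter()
# daemon=True so the thread won't keep Script A alive on exit
t = threading.Thread(target=function_b, daemon=True)
t.start()                                   # returns immediately
launch_ms = (time.perf_counter() - start) * 1000
print(f"thread launched in {launch_ms:.2f} ms")
```

Because Function B only sends commands to an external device (I/O-bound work), the GIL is not a concern here; if it ever became CPU-bound, multiprocessing.Process offers the same interface in a separate process.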
How to run two scripts at the same time:
Use GNU parallel (sudo apt install parallel if it is not already installed on your system):
Alan#Enigma:~$ parallel python ::: TheScript_A TheScript_B [ TheScript_C [ ... ]]
This approach is much cheaper than trying to orchestrate process spawns from inside the first Python session. That is possible, yet the processing costs, latency side effects, and software-engineering costs are all higher than using the smart O/S services already present for this simple problem definition.
Reading man parallel gives you all the smart bash-scripting options for flexible, parametrised process control, as one may need and wish, for example:
...
total_jobs()   number of jobs in total
slot()         slot number of job
seq()          sequence number of job
...
I have a background task that needs to be run repeatedly, every hour or so, sending me an email whenever the task emitted non-trivial output.
I'm currently using cron for that, but it's somewhat ill-suited: it forces me to choose exact times at which the command is run, and it doesn't prevent overlap.
An alternative would be to run the script in a loop with sleep 3600 at the end of each iteration, but this then needs extra work to make sure the script is always restarted after boot and the like.
Ideally, I'd like a cron-like tool where I can give a set of commands to run repeatedly with approximate execution rates and the tool will run them "when convenient" and without overlapping execution of different iterations of a command (or even without overlapping execution of any command).
Short of writing such a tool myself, what would be the recommended approach?
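The sleep-loop approach can at least be made overlap-safe with flock(1): flock -n exits immediately if the lock is already held, so two iterations (or two copies of the wrapper) never run the task at once. A sketch, where the echo and the short sleep are placeholders for ./task.sh and "sleep 3600":

```shell
# flock -n takes the lock or fails fast, so iterations never overlap
# even when the task overruns its interval
for i in 1 2; do                          # in production: while true; do
    flock -n /tmp/mytask.lock -c 'echo "task ran"'
    sleep 1                               # in production: sleep 3600
done
```

A systemd timer with OnUnitInactiveSec= gives similar "roughly hourly, never overlapping" semantics and also survives reboots, since a timer will not start its service while a previous run is still active.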
We have a startup script for an application (owned and developed by a different team, but deployments are managed by us) which prompts Y/N to confirm starting after deployment. The number of times it prompts varies, depending on the changes in the release.
So the number of prompts can vary from 1 to N (it might even be 100 or more).
We have automated the deployment and startup using Jenkins shell-script jobs, but the number of startup prompts is hardcoded to 20, which might sometimes be too few.
Could anyone please advise how the number of prompts can be handled dynamically? We need to pass Y whenever the output contains the pattern "Do you really want to start".
I checked a few options like expect and read, but was not able to come up with a solution.
Thanks in advance!
In general, the best way to handle this is by (a) using a standard process management system, such as your distro's preferred init system; or, if that's not possible, (b) to adjust the script to run noninteractively (e.g., with a --yes or --noninteractive option).
Barring that, assuming your script reads from standard input and not the TTY, you can use the standard program yes and pipe it into the command you're running, like so:
$ yes | ./deploy
yes prints y (or its argument) over and over until it's killed, usually by SIGPIPE.
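A quick way to see the behavior, using a toy stand-in for ./deploy that reads two answers from standard input:

```shell
# toy stand-in for ./deploy: reads two Y/N answers from standard input
cat > /tmp/ask_demo.sh <<'EOF'
#!/bin/sh
read -r first
read -r second
echo "got: $first $second"
EOF
chmod +x /tmp/ask_demo.sh

# yes Y emits "Y" forever, answering however many prompts appear;
# "|| true" keeps the pipeline status clean under pipefail, since yes
# itself exits via SIGPIPE once the reader is done
yes Y | /tmp/ask_demo.sh || true    # prints: got: Y Y
```

Because yes never runs out of output, it does not matter whether the script asks once or a hundred times.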
If your process is reading from /dev/tty instead of standard input, and you really can't convince the other team to come to their senses and add an appropriate option, you'll need to use expect for this.
I have two commands: one is logging things in the background and won't stop until I kill it, and the second one will eventually stop on its own.
Let's mark them A and B respectively.
I want to execute:
A and B in parallel
Wait for B to finish
Kill A
[do some more stuff]
and repeat that in a loop.
I'm running on macOS and can't update to Bash 4.x because of its GPLv3 license.
Preferably A (the logger) would start first but I wouldn't mind if it's undefined or B would start first since the difference in time is negligible.
Help would be appreciated.
Thanks
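The sequence above can be sketched with Bash 3.x features only (the sleep commands are stand-ins for A and B, and the infinite loop is cut to two iterations):

```shell
for i in 1 2; do                 # the real version: while true; do
    sleep 100 &                  # start A (the logger) in the background
    a_pid=$!
    sleep 1                      # run B in the foreground; blocks until B is done
    kill "$a_pid"                # B finished, so kill A
    wait "$a_pid" 2>/dev/null || true   # reap A; ignore its kill status
    # [do some more stuff]
done
```

Since B runs in the foreground, no Bash 4 wait -n is needed; the job control in the stock macOS Bash 3.2 is enough. A is started first simply because its line comes first.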
I have two services, A and B, which I want to start on boot, but A should start first and only then should B start.
I enabled the services using systemctl enable service_name.
Now the services are starting, but not in order, i.e. B is starting before A. Is there any way I can configure their start order?
You can add the following command at the end of A's startup script, and disable B from starting on boot: systemctl start B
They're starting out of order because Linux uses "makefile style concurrent boot" during startup -- and the A process is taking longer to start than the B process. The simplest way to delay process B is with a sleep command -- a few seconds is likely enough -- though this will delay the completion of startup by a fixed amount (and, if process A takes a variable time to start, as with opening a wifi connection etc., this may not always work unless you set the time higher than it usually needs to be).
More reliable, and possibly with less startup delay, would be to use something like pgrep proc_a (or ps -e | grep -c proc_a) to check for the existence of process A (or a child of A) as a condition for starting process B -- put this in a short loop with a 1 or 2 second sleep (so it doesn't hog all your CPU while it waits) and it'll effectively keep B back until A is running, without unnecessary delay.
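Since the services are already managed with systemctl, the ordering can also be declared in B's unit file rather than with sleeps or polling. A sketch, where the unit and binary names are placeholders:

```ini
# /etc/systemd/system/B.service  (unit names are placeholders)
[Unit]
Description=Service B
# After= orders B after A; Requires= adds the dependency, so starting
# B also pulls A in, and B is not started if A fails.
After=A.service
Requires=A.service

[Service]
ExecStart=/usr/local/bin/service-b

[Install]
WantedBy=multi-user.target
```

Note that After= alone only orders the units; Requires= (or the weaker Wants=) adds the actual dependency. Run systemctl daemon-reload after editing the unit file.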
I'm learning C and would like to get a sense for just how much faster some of my C code is than its python equivalent.
I'm running Ubuntu 12.04
From the command line you can use the time command. This will give you the execution time of the program in three separate modes (by default):
a. real time indicates how much time it took overall;
b. user time indicates how much time was spent executing in user space;
c. system time indicates how much time was spent executing in kernel space.
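A minimal demonstration, with sleep as a placeholder for the program being measured (bash's builtin time writes its report to stderr):

```shell
# in practice you would time the two implementations back to back:
#   time ./my_c_program
#   time python3 my_script.py
# demonstrated here with a placeholder command:
{ time sleep 0.2 ; } 2> timing.txt
cat timing.txt        # real/user/sys report from bash's builtin time
```

Redirecting stderr as above lets you keep the timing report separate from the program's own output.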
The above is how to measure time from the command line. You can also measure program execution time from within the program, using a system call like gettimeofday().
You have the answer to your question there in the title: the time command will measure the time it takes for a command to complete.