I have a script which updates a web application. The web application is spread across 2 servers. Here is a rundown of the script
The shell script updates the git repository.
The shell script stops the application server.
The shell script stops the web server.
The shell script instructs the application server to checkout the latest git update.
The shell script instructs the web server to checkout the latest git update.
The shell script starts the application server.
The shell script starts the web server.
Each of the 7 steps is done one after the other, synchronously. The total run time is approximately 9 seconds. To reduce downtime, however, many of these steps could be done asynchronously.
For example, steps 4 and 5 could be done at the same time. I want to start steps 4 and 5 asynchronously (e.g. running in the background), but I cannot figure out how to wait until they are both completed before going further.
You might want to use command grouping to maintain which steps need to be synchronous:
step1
( step2 && step4 && step6 ) &
( step3 && step5 && step7 ) &
wait && echo "all done"
Launch steps 4 and 5 in the background in your script (by ending each with &), then simply call the wait bash builtin before running step 6.
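A minimal sketch of that suggestion; step4, step5, and step6 here are placeholder functions standing in for the real commands:

```shell
#!/bin/bash
# Steps 4 and 5 run in the background; wait blocks until both
# background jobs have exited, and only then does step 6 run.
step4() { sleep 1; echo "step 4 done"; }
step5() { sleep 1; echo "step 5 done"; }
step6() { echo "step 6 done"; }

step4 &
step5 &
wait        # blocks until both background jobs have finished
step6
```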
You are looking for the wait command.
wait: wait [id]
Wait for job completion and return exit status.
Waits for the process identified by ID, which may be a process ID or a
job specification, and reports its termination status. If ID is not
given, waits for all currently active child processes, and the return
status is zero. If ID is a job specification, waits for all processes
in the job's pipeline.
Exit Status:
Returns the status of ID; fails if ID is invalid or an invalid option is
given.
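For example, wait can take the PID of a specific background job, and its return value is that job's exit status:

```shell
#!/bin/bash
# $! holds the PID of the most recently started background job;
# wait "$pid" returns that job's exit status.
(exit 3) &      # background job that exits with status 3
pid=$!
wait "$pid"
status=$?
echo "background job exited with status $status"   # prints: background job exited with status 3
```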
Related
I need to wait until the current job is done, and not run other jobs that are waiting to be executed.
When I throw new Error('test'), the current job (job1) is retried, because I set backoff and attempts, and that is working: I can see the system trying to execute job1 again and again. But if in the meantime I create a new job (job2), it is executed immediately and does not wait for job1.
I need job2 not to execute until I call job1.done();
This feature is implemented only in the paid Pro version of Bull.
I created a cronjob to periodically run an executable compiled from Python.
20 23 * * * .../TestInno/inno_main >>.../TestInno/Log/Inno_cronlog_`date +\%Y\%m\%d\%H\%M\%S`.log
The executable consists of 2 tasks:
update database
send email notification
When I run the executable in command line, it does perfectly.
However, the cronjob only does task 1 (judging from the log file).
Since there's a few-minute pause while connecting to the SMTP server to send the email, I suspect that cron thinks the job is done and kills the task.
Is it right?
If so, how to prevent it?
We have a plan to set up an automation that automatically sends data to cloud storage once the server shuts down or halts.
We will use the common:
ln -s /etc/ec2-termination /etc/rc0.d/S01ec2-termination
My question is: say my ec2-termination script takes 10 minutes to execute. Will the system wait for that init script to complete before it shuts down?
No, the server won't wait. The usual termination window is about 2 minutes, no longer.
https://github.com/miglen/aws/blob/master/ec2/run-script-on-ec2-instance-termination.md
There are tools like npm-run-all that allow persistent processes to run in parallel in one process. I am interested in doing this with redis and a node server.
However I am looking for a way to run the two in parallel, but only run the node process when the redis process is verifiably successful.
Is there any unix / bash tool that can achieve what I want?
I can see this working in two ways:
Option 1
A tool that checks for specific stdout from a process. For instance, redis writes "Ready to accept connections" to stdout; the tool would watch for this as a regular expression. When it has been matched, an event would fire internally and the node server would be run.
Option 2
A tool that checks if / when the HTTP connection is available for a specific server, and when it receives a proper health-check response the internal event is fired and the node server is run. There would also need to be a timeout involved. The downside is that this only works for processes that spin up servers and endpoints on a consistent local port.
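Option 2 can be approximated with a small polling loop. The function name, URL, and timeout below are placeholders, not part of any existing tool:

```shell
#!/bin/bash
# Sketch of Option 2: poll an HTTP endpoint until it responds,
# giving up after a timeout (in seconds).
wait_for_http() {
    local url="$1" timeout="${2:-30}" elapsed=0
    until curl -fsS "$url" >/dev/null 2>&1; do
        elapsed=$((elapsed + 1))
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "timed out waiting for $url" >&2
            return 1
        fi
        sleep 1
    done
}

# hypothetical usage: wait_for_http "http://127.0.0.1:8080/health" 30 && node server.js
```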
How about a script that polls redis with the PING command?
#!/bin/bash
# Poll redis until it answers PONG, then start node.
X="$(redis-cli ping)"
echo "${X}"
while [ "${X}" != "PONG" ]; do
    echo "redis not yet ready"
    sleep 50
    X="$(redis-cli ping)"
done
echo "Let's start node"
I configured an Ubuntu server (AWS EC2 instance) as a cron server; 9 cronjobs run between 4:15-7:15 and 21:00-23:00. I wrote a cron job on another system (EC2 instance) to stop this cron server after 7:15 and start it again at 21:00. I want the cron server to stop by itself after the execution of the last script. Is it possible to write such a script?
When you start the temporary instance, specify
--instance-initiated-shutdown-behavior terminate
Then, when the instance has completed all its tasks, simply run the equivalent of
sudo halt
or
sudo shutdown -h now
With the above flag, this will tell the instance that shutting down from inside the instance should terminate the instance (instead of just stopping it).
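For reference, that flag is passed when launching the instance. With the modern AWS CLI (assuming it is available), that would look roughly like this; the AMI id, instance type, and key name are placeholders:

```shell
# Hypothetical launch command; replace the ids with your own values.
aws ec2 run-instances \
    --image-id ami-12345678 \
    --instance-type t3.micro \
    --key-name my-key \
    --instance-initiated-shutdown-behavior terminate
```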
Yes, you can add an EC2 stop command to the end of the last script.
You'll need to:
install the ec2 api tools
put your AWS credentials on the instance, or create IAM credentials that have authority to stop instances
get the instance id, perhaps from the instance-data
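As an alternative to the old EC2 API tools, the modern AWS CLI can do this (this substitution is an assumption on my part); the last script could end with something like:

```shell
# Hypothetical self-stop; requires the AWS CLI on the instance and
# IAM permission ec2:StopInstances. The metadata service supplies the id.
INSTANCE_ID="$(curl -s http://169.254.169.254/latest/meta-data/instance-id)"
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
```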
Another option is to run the cron jobs as commands from the controlling instance. The main cron job might look like this:
run processing instance
wait for sshd to accept connections
ssh to processing instance, running each processing script
stop processing instance
This approach gets all the processing jobs done back to back, leaving your instance up for the least amount of time, and you don't have to put the credentials on the instance.
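A rough sketch of that controlling-instance approach; the instance id, host, user, and script paths are placeholders:

```shell
#!/bin/bash
# Start the worker, wait for sshd, run the jobs over ssh, stop the worker.
aws ec2 start-instances --instance-ids "$INSTANCE_ID"
until ssh -o ConnectTimeout=5 ec2-user@"$HOST" true 2>/dev/null; do
    sleep 5     # wait for sshd to accept connections
done
ssh ec2-user@"$HOST" '/path/to/job1.sh && /path/to/job2.sh'
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
```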
If your use case allows for the instance to be terminated instead of stopped, then you might be able to replace the start/stop cron jobs with EC2 autoscaling. It now sports schedules for running instances.
http://docs.amazonwebservices.com/AutoScaling/latest/DeveloperGuide/index.html?scaling_plan.html