Stop an EC2 instance after the execution of a script - cron

I configured an Ubuntu server (AWS EC2 instance) as a cron server; 9 cron jobs run between 4:15-7:15 and 21:00-23:00. I wrote a cron job on another system (EC2 instance) to stop this cron server after 7:15 and start it again at 21:00. I want the cron server to stop by itself after the execution of the last script. Is it possible to write such a script?

When you start the temporary instance, specify
--instance-initiated-shutdown-behavior terminate
Then, when the instance has completed all its tasks, simply run the equivalent of
sudo halt
or
sudo shutdown -h now
With the above flag set, a shutdown initiated from inside the instance terminates the instance instead of just stopping it.
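For example, a minimal sketch with the modern AWS CLI (the AMI ID and instance type are placeholders):
# Launch a throwaway instance that terminates itself on shutdown
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --instance-initiated-shutdown-behavior terminate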

Yes, you can add an EC2 stop command to the end of the last script.
You'll need to:
install the EC2 API tools
put your AWS credentials on the instance, or create IAM credentials that have authority to stop instances
get the instance ID, perhaps from the instance metadata
A sketch of this approach follows the list.
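Here is a minimal version using the modern AWS CLI instead of the legacy API tools (it assumes the CLI is installed and the instance's credentials permit ec2:StopInstances):
# Look up this instance's own ID from the instance metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# Ask EC2 to stop this instance
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"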
Another option is to run the cron jobs as commands from the controlling instance. The main cron job might look like this:
start the processing instance
wait for sshd to accept connections
ssh to the processing instance, running each processing script
stop the processing instance
This approach gets all the processing jobs done back to back, leaving your instance up for the least amount of time, and you don't have to put the credentials on the instance. A rough sketch of such a controller script appears below.
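This sketch assumes the AWS CLI; the instance ID, SSH user, and job paths are placeholders:
#!/bin/bash
set -e
ID=i-0123456789abcdef0   # placeholder processing-instance ID
aws ec2 start-instances --instance-ids "$ID"
aws ec2 wait instance-running --instance-ids "$ID"
HOST=$(aws ec2 describe-instances --instance-ids "$ID" \
    --query 'Reservations[0].Instances[0].PublicDnsName' --output text)
# Wait for sshd to accept connections
until ssh -o ConnectTimeout=5 ubuntu@"$HOST" true 2>/dev/null; do sleep 5; done
# Run each processing script back to back
ssh ubuntu@"$HOST" '/opt/jobs/job1.sh && /opt/jobs/job2.sh'
aws ec2 stop-instances --instance-ids "$ID"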
If your use case allows for the instance to be terminated instead of stopped, then you might be able to replace the start/stop cron jobs with EC2 Auto Scaling, which now supports scheduled actions for running instances.
http://docs.amazonwebservices.com/AutoScaling/latest/DeveloperGuide/index.html?scaling_plan.html
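As a hedged example with today's AWS CLI, scheduled actions could replace the start/stop cron jobs roughly like this (the group name and exact times are placeholders):
# Bring up one instance at 21:00 and scale back to zero after the night window
aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name cron-workers \
    --scheduled-action-name nightly-up \
    --recurrence "0 21 * * *" --desired-capacity 1
aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name cron-workers \
    --scheduled-action-name nightly-down \
    --recurrence "15 23 * * *" --desired-capacity 0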

Related

PM2 Cluster Mode. Execution on one process blocks the event loop on other processes as well

I am using PM2 in cluster mode and have 2 instances of my Node.js application running. I have some long-running cron jobs (about 30 seconds) that I am trying to run. I place an if statement before the execution of the cron jobs to ensure that they only run on the first process:
if (process.env.NODE_APP_INSTANCE === '0') { // environment variables are strings
  myCronFunction()
}
The goal was that, since there are two processes and PM2 should be load balancing them, if the cron job executes on process one, then process two would still be available to respond to requests. I am not sure what's going on, whether PM2 is failing to load balance them, or something else. But when my cron job executes on instance one, instance two still does not respond to requests until after the job on instance one finishes executing.
I'm not sure why that is. It is my understanding that they are supposed to be completely independent of one another.
Anyone have any ideas?

Docker container in Azure Logic App does not exit properly

I have a curious issue getting a Docker container to run and exit properly in an Azure Logic App.
I have a Python script that prints hello world, then sleeps for 30 minutes. The reason for sleeping is to make the script run longer, so that I can test whether the container in the Logic App exits at the right moment (when the script finishes running) rather than when the loop times out.
First, I confirm that the container runs and exits properly in PowerShell:
PS C:\Users\cgiltner> docker run helloworld
Running 'Hello World' at 2019-11-26 17:53:48
Hello, World!
Sleeping for 30 minutes...
Awake after 30 minutes
PS C:\Users\cgiltner>
I have the container set up in a Logic App as follows: there is an “Until” loop configured to run until “State” equals “Succeeded”.
But when I run it, the “Until” loop continues for 1 hour, which is the default timeout period for an Until loop (PT1H).
Looking at the properties of the container, I can see that the state of the container never changed from “Running”.
Just to clarify, the container IS running and executes the script successfully. The problem is that it does not exit when the script is actually done; rather, it waits until the timeout period ends. There is no error message or failure indicating that it timed out; it simply moves to the next step. This has big implications in a complex Logic App where multiple steps need to happen after containers run: it can cause things to take hours.
For your issue, what you need to know first is that the Logic App's first action creates the Azure Container Instance, but when that action completes, the creation of the container instance is still not finished; the action only returns a pending state, and that state is never updated afterwards. Your second action, the Until loop, then waits for the Succeeded state, so it keeps delaying until the timeout.
The solution is to add a pure delay action after the creation of the Azure Container Instance, and then add actions to get the properties and logs of the containers in the container group.
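Outside the Logic App, the same state check can be illustrated with the Azure CLI (the resource group and container group names are placeholders):
# Poll the container's current state until it leaves "Running"
az container show --resource-group myRG --name helloworld-group \
    --query "containers[0].instanceView.currentState.state" --output tsv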

Callback on host node after slurm job has been allocated

I'd like to do two things in sequence:
Submit a job with sbatch
Once the job has been allocated, retrieve the hostname of the allocated node and, using that name, execute a second command on the host (login) node.
Step 2 is the hard part. I suppose I could write a Python script that polls squeue. Is there a better way? Can I set up a callback that Slurm will execute once a job starts running?
(In case you're curious, my motivation is to launch a Jupyter notebook on a compute node and automatically set up ssh port forwarding as in this blog post.)
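A minimal shell sketch of the polling approach the question mentions; the batch script name and port are placeholders:
# Submit the job and capture its ID
jobid=$(sbatch --parsable notebook.sbatch)
# Poll squeue until the job is RUNNING
while [ "$(squeue -j "$jobid" -h -o %T)" != "RUNNING" ]; do sleep 5; done
# Read the allocated node's name and forward the notebook port from the login node
node=$(squeue -j "$jobid" -h -o %N)
ssh -N -L 8888:localhost:8888 "$node"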

systemd: start a service after another one has stopped

I have 2 services that I need to start.
The first service downloads jobs required by the second service.
First service
[Unit]
Description=First
After=network.target
Second service
[Unit]
Description=Second
After=First
The problem is that they both start at the same time; I need the second service to wait until the first one is dead.
I don't want to use sleep because the download jobs can be big.
Thank you.
In your first service add
ExecStopPost=/bin/systemctl start Second
What this does: when the first service terminates, the option above is triggered and the second service is started.
This particular option (ExecStopPost=) lets you configure commands that are executed after the service has stopped. This includes cases where the commands configured in ExecStop= were used, where the service does not have any ExecStop= defined, or where the service exited unexpectedly.
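A minimal sketch of the first unit with that option in place; the unit name second.service and the script path are assumptions:
[Unit]
Description=First
After=network.target

[Service]
Type=oneshot
# Placeholder for the download jobs
ExecStart=/usr/local/bin/download-jobs.sh
# Runs after this service stops, for any reason, and starts the second unit
ExecStopPost=/bin/systemctl start second.service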

Does gearmand with libdrizzle work while the MySQL database is down for a while?

Use-Case:
gearmand is fully operational with libdrizzle as the persistence layer to a MySQL database.
The drizzle connection breaks (e.g. the gearmand database is locked for some minutes during nightly backups, the MySQL server crashes, or there are network problems reaching the database server).
Question:
Does gearmand keep working without the persistence layer (MySQL) in that moment and catch up later?
Answer
No.
Details
Debian 6
gearmand 1.1.8 (via https://launchpad.net/gearmand)
exactly 5000 jobs to be created via doBackground
persist the jobs into MySQL
/usr/local/sbin/gearmand -q mysql --mysql-user user1 --mysql-password pass1 --mysql-db gearmand
Scenario #1
Scenario:
Enable READ lock for gearman queue table
Result:
The script that creates the background tasks is put on hold.
After removing the READ lock, the script continues and creates all 5000 jobs successfully.
Note: I just tested the lock for some seconds. The script might crash due to a timeout.
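For reference, the lock in this scenario can be reproduced from the mysql client roughly like this; the table name gearman_queue is an assumption, since gearmand's queue table name is configurable:
# Hold a READ lock on the queue table for 30 seconds in one session
mysql gearmand -e "LOCK TABLES gearman_queue READ; SELECT SLEEP(30); UNLOCK TABLES;"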
Scenario #2
Scenario:
Stop the entire MySQL server instance (hosting the gearman queue)
Result:
Without the mysqld, the jobs cannot be created.
Only 3974 jobs out of 5000 were created.
gearmand output:
mysql_stmt_prepare failed: Can't connect to local MySQL server through socket X
PHP script output:
PHP Warning: GearmanClient::doBackground(): gearman_client_run_tasks:QUEUE_ERROR:QUEUE_ERROR
Unfortunately, in my test scenarios, gearmand stops working if the MySQL persistence layer is unavailable.
