We have a plan to set up an automation that automatically sends data to cloud storage once the server shuts down or halts.
We will use the common approach:
ln -s /etc/ec2-termination /etc/rc0.d/S01ec2-termination
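For reference, a minimal sketch of what such an /etc/ec2-termination script could look like (the bucket name, data path, and use of the aws CLI are assumptions, not part of the original setup):
#!/bin/sh
# /etc/ec2-termination -- hypothetical sketch: push local data to S3 before the instance halts.
# The aws CLI must be installed and have credentials; bucket and data path are placeholders.
# Depending on the distribution, rc0.d scripts may be invoked with "start" or "stop",
# so the sync runs regardless of the argument.
LOG=/var/log/ec2-termination.log
echo "$(date) - starting final sync" >> "$LOG"
aws s3 sync /var/data s3://my-backup-bucket/$(hostname)/ >> "$LOG" 2>&1
echo "$(date) - sync finished" >> "$LOG"
exit 0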
My question is: let's say my ec2-termination script takes 10 minutes to execute. Will the system wait for that init script to complete before it shuts down?
No, the server won't wait. The usual termination window is ~2 minutes, not longer.
https://github.com/miglen/aws/blob/master/ec2/run-script-on-ec2-instance-termination.md
This question actually means a couple of things.
First of all, I want to ask what exactly happens when a dyno sleeps.
If I have global variables stored in an array in my bot, do they get wiped when it sleeps (meaning that I have to actually save everything to external files)? Basically, is my in-memory data cleared when it sleeps, so that when it wakes up my bot no longer has that data?
Secondly, for the 550 free dyno hours, can I set a sleep schedule (e.g., 01:00-07:00 am), or is it not a daily limit (18 h/day) but a monthly limit (so running 24/7 uses up the hours until I have 0 for the rest of the month)?
Adding on to what @Beppe C said in his answer, in order to understand dyno sleeping, you need to understand the difference between web and worker dynos.
Web dynos allow you to show a webpage, some file content, etc. This dyno goes to sleep periodically (after 30 minutes of inactivity). This is where the phrase "dyno sleeping" comes from. If you insist on using a web dyno (as in you need to show content), the best way to schedule it is by using the Heroku Scheduler add-on. However, I don't recommend this, as it requires you to enter your credit card information. Otherwise, you could use a cron job service.
Worker dynos are different. As long as you don't use up your 550 hours in a given month (the hour limit is monthly), they will stay running without sleeping. With a worker dyno, any global variables will be saved for the duration of the time that the process stays running, unless you stop the process. The downside is that they work in the background, meaning that you can't show/display web content with them. If you need to show web content, stick with a web dyno.
To start a worker process for an app, scale up the app like so:
# scale up
heroku ps:scale worker=1 -a {appname}
Read more documentation about worker dynos here: https://devcenter.heroku.com/articles/background-jobs-queueing
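Note that heroku ps:scale worker=1 only does something if a worker process type is declared in the app's Procfile. A minimal sketch for a node.js bot (bot.js is a placeholder entry point) could be:
# Procfile at the app root -- declares the process types that heroku ps:scale refers to
worker: node bot.js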
My advice? If you don't need to show web content, use a worker dyno that periodically shuts off at a given time every day. Since you seem to be using node.js, maybe use setInterval to check the time?
A sleeping dyno is when the virtual node shuts down, stopping the application which runs on it. The application memory is cleared (yes, all your variables and arrays), and it cannot process any requests until the dyno restarts.
Any data which needs to survive should be persisted to an external storage (external file system, database).
A web dyno goes to sleep after 30 minutes of inactivity (i.e., no HTTP requests have arrived); you cannot schedule this.
A worker dyno does not go to sleep; it runs until your free quota has run out.
You can always scale down (shutdown) and scale up (restart) using the command line.
# scale down
heroku ps:scale web=0 -a {appname}
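If you do want something like a daily sleep window, one sketch (assuming the Heroku CLI is installed and authenticated on some always-on machine; the app name is a placeholder) is to drive these commands from cron:
# crontab on a machine with the Heroku CLI: sleep between 01:00 and 07:00
0 1 * * * heroku ps:scale worker=0 -a myapp
0 7 * * * heroku ps:scale worker=1 -a myapp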
I am using Node.js on Google App Engine with an endpoint for a cron job. When the REST endpoint is called, I want to proceed with my cron job after returning the response to the caller. The cron task will continue for about an hour. Will GAE terminate the task if it runs for an hour or more? I suppose GAE should not kill my Node.js server process, because that way my application would stop. I want to know if there is any possibility for the task to end prematurely due to some restriction on GAE.
It depends on which type of scaling you have selected: https://cloud.google.com/appengine/docs/standard/java/an-overview-of-app-engine
Requests on basic and manual scaling can run indefinitely; automatic scaling has a 60-second deadline for HTTP requests and 10 minutes for task queue requests. If you're not sure which type of scaling you have, you probably have automatic.
You could set up a microservice with basic scaling specifically for tasks like this, so that your primary service can stay on automatic scaling.
You could also split up your cron task into several tasks and then daisy-chain them using push queues (i.e., your cron task launches, does some work, then launches task2 and dies; task2 launches, does some work, launches task3 and dies; etc.).
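For the separate-service approach, a rough sketch (service name, directory layout, and scaling values are assumptions) is to give the long-running endpoint its own app.yaml with basic scaling and deploy it next to the main service:
# cron-tasks/app.yaml for the long-running service would declare something like:
#   runtime: nodejs10
#   service: cron-tasks
#   basic_scaling:
#     max_instances: 1
#     idle_timeout: 30m
gcloud app deploy cron-tasks/app.yaml   # deploy the basic-scaling service
gcloud app deploy app.yaml              # the primary service stays on automatic scaling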
Use-Case:
gearmand is fully operational with libdrizzle as the persistence layer to a MySQL database.
The drizzle connection fails (e.g., the gearmand database is locked for some minutes during nightly backups, the MySQL server crashes, or there are network problems reaching the database server).
Question:
Does gearmand keep working without the persistence layer (MySQL) in that moment and catch up later?
Answer
No.
Details
Debian 6
gearmand 1.1.8 (via https://launchpad.net/gearmand)
exactly 5000 jobs to be created via doBackground
persist the jobs into mysql
/usr/local/sbin/gearmand -q mysql --mysql-user user1 --mysql-password pass1 --mysql-db gearmand
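To verify that jobs really land in the queue table, you can query it directly (a sketch; the table name is configurable via --mysql-table, gearman_queue being a common default):
mysql -u user1 -ppass1 gearmand -e "SELECT COUNT(*) FROM gearman_queue"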
Scenario #1
Scenario:
Enable READ lock for gearman queue table
Result:
The script, which creates the background tasks, is on hold.
After removing the READ lock, the script continues and creates all 5000 jobs successfully.
Note: I just tested the lock for some seconds. The script might crash due to a timeout.
Scenario #2
Scenario:
Stop the entire mysql server instance (with the gearman queue)
Result:
Without mysqld, the jobs cannot be created.
Only 3974 of the 5000 jobs were created.
gearmand output:
mysql_stmt_prepare failed: Can't connect to local MySQL server through
socket X
PHP script output:
PHP Warning: GearmanClient::doBackground():
gearman_client_run_tasks:QUEUE_ERROR:QUEUE_ERROR
Unfortunately, in my test scenarios, gearmand stops working if the MySQL persistence layer is unavailable.
I configured an Ubuntu server (AWS EC2 instance) as a cron server; 9 cron jobs run between 4:15-7:15 and 21:00-23:00. I wrote a cron job on another system (EC2 instance) to stop this cron server after 7:15 and start it again at 21:00. I want the cron server to stop by itself after the execution of the last script. Is it possible to write such a script?
When you start the temporary instance, specify
--instance-initiated-shutdown-behavior terminate
Then, when the instance has completed all its tasks, simply run the equivalent of
sudo halt
or
sudo shutdown -h now
With the above flag, this will tell the instance that shutting down from inside the instance should terminate the instance (instead of just stopping it).
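With the current aws CLI, launching such a temporary instance could look roughly like this (the AMI ID, instance type, and key name are placeholders):
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --key-name my-key \
    --instance-initiated-shutdown-behavior terminate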
Yes, you can add an ec2-stop-instances command to the end of the last script.
You'll need to:
install the EC2 API tools
put your AWS credentials on the instance, or create IAM credentials that have authority to stop instances
get the instance ID, perhaps from the instance metadata (see the sketch below)
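As a sketch with the current aws CLI rather than the old EC2 API tools (assumes an IAM role or credentials allowed to call ec2:StopInstances and a configured region):
# fetch this instance's ID from the instance metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# append this to the end of the last cron script to stop the instance
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"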
Another option is to run the cron jobs as commands from the controlling instance. The main cron job might look like this:
run processing instance
wait for sshd to accept connections
ssh to processing instance, running each processing script
stop processing instance
This approach gets all the processing jobs done back to back, leaving your instance up for the least amount of time, and you don't have to put the credentials on the instance.
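A rough sketch of such a controlling cron job with the aws CLI (the instance ID, user, and script paths are placeholders):
#!/bin/sh
# hypothetical controller job: start the processing instance, run the jobs over ssh, stop it again
ID=i-0123456789abcdef0
aws ec2 start-instances --instance-ids "$ID"
aws ec2 wait instance-running --instance-ids "$ID"
HOST=$(aws ec2 describe-instances --instance-ids "$ID" \
    --query 'Reservations[0].Instances[0].PublicDnsName' --output text)
# wait for sshd to accept connections
until ssh -o ConnectTimeout=5 ubuntu@"$HOST" true 2>/dev/null; do sleep 10; done
ssh ubuntu@"$HOST" '/opt/jobs/job1.sh && /opt/jobs/job2.sh'
aws ec2 stop-instances --instance-ids "$ID"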
If your use case allows for the instance to be terminated instead of stopped, then you might be able to replace the start/stop cron jobs with EC2 autoscaling. It now sports schedules for running instances.
http://docs.amazonwebservices.com/AutoScaling/latest/DeveloperGuide/index.html?scaling_plan.html
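With the current aws CLI (the answer above predates it), a scheduled start/stop pair could look roughly like this; the group name and times are placeholders:
# scale the group to one instance at 21:00 UTC and back to zero at 07:30 UTC, daily
aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name cron-asg --scheduled-action-name start-cron \
    --recurrence "0 21 * * *" --min-size 1 --max-size 1 --desired-capacity 1
aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name cron-asg --scheduled-action-name stop-cron \
    --recurrence "30 7 * * *" --min-size 0 --max-size 0 --desired-capacity 0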
SOLUTION:
The solution that I found: using the low-level nohup program, which makes the process ignore the hangup signal sent by PuTTY when the connection is closed.
So, instead of ./gearman-manager start I did nohup ./gearman-manager start
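To also keep the worker's echoes somewhere readable, the output can be redirected explicitly (the log path here is arbitrary):
nohup ./gearman-manager start > /var/log/gearman-manager.log 2>&1 &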
NOTE: Still, I would like to know why it was slowing down when closing PuTTY, or why it continues in the first place if it has received the hangup signal.
I have a problem with execution of a gearman worker after I close a putty session.
This is what I have:
a gearman client that is started by a cron job and checks something in the DB (infinite loop)
a gearman manager, started with the gearman-manager start command, that receives the client's tasks and manages the calls to a worker
a gearman worker that reads from/writes to the DB and echoes the status of the current job
When I start gearman-manager I can see the echoes from my worker when it receives tasks and when it executes them. Tasks (updates in the DB) are executed roughly once per second...
A) When I close the PuTTY session, the rate of changes in the DB drops enormously (to roughly one per 10 seconds)?! Could you tell me why this is?
B) When I log back in with PuTTY, I don't get the output of gearman-manager back on the screen. I expected that I would log back in and see it continue to echo the status like it did before I closed PuTTY. Could this be because gearman-manager is started by root while the echoes come from a .php script run as the gearman user? Or maybe, when I log back in, the process is in the background?
You don't see the output when you create a new tty because the process was bound to the previous tty. Unless you use something like screen to keep the tty alive, you aren't going to see that output with a new terminal.
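A minimal screen workflow for this, if you go that route (the session name is arbitrary):
screen -S gearman          # start a named session
./gearman-manager start    # run the manager inside it; output stays bound to this session
# detach with Ctrl-a d, close putty; later reattach with:
screen -r gearman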