Scheduling Azure Instances

I'd like to run a single Azure instance on a predetermined schedule (e.g. 9 AM-5 PM EST, Mon-Fri) to reduce billing, and I'm wondering about the best way to go about it.
Two parts to the question:
Can the Service Management API [1] be used to set the InstanceCount to 0 on a predetermined schedule?
If so, are you still billed for this service, as is the case with suspended deployments?
[1] http://blogs.msdn.com/b/gonzalorc/archive/2010/02/07/auto-scaling-in-azure.aspx

You can't set the instance count to zero, but you can suspend the deployment, then delete it, and later redeploy it, all programmatically.
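For illustration, here is a rough Python sketch of that suspend-then-delete flow against the classic Service Management REST API. The subscription ID, service name, and certificate paths are placeholders, and authentication assumes a management certificate uploaded to the subscription; run it from whatever scheduler fires at the start and end of your 9-5 window.

    import requests

    SUBSCRIPTION = "<subscription-id>"   # placeholder
    SERVICE = "<hosted-service-name>"    # placeholder
    BASE = (f"https://management.core.windows.net/{SUBSCRIPTION}"
            f"/services/hostedservices/{SERVICE}/deploymentslots/production")
    HEADERS = {"x-ms-version": "2012-03-01", "Content-Type": "application/xml"}
    CERT = ("management-cert.pem", "management-key.pem")  # placeholder paths

    # 1. Suspend the deployment (note: you are still billed while it exists).
    suspend = ('<UpdateDeploymentStatus xmlns="http://schemas.microsoft.com/windowsazure">'
               "<Status>Suspended</Status></UpdateDeploymentStatus>")
    requests.post(BASE + "/?comp=status", data=suspend,
                  headers=HEADERS, cert=CERT).raise_for_status()

    # 2. Delete the deployment to stop compute billing; redeploy in the
    #    morning with a Create Deployment call and your stored .cspkg/.cscfg.
    requests.delete(BASE, headers=HEADERS, cert=CERT).raise_for_status()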

Microsoft shipped the Autoscaling Application Block (Wasabi), which will guard your budget by changing instance counts based on timetables. It offers many other features, including an optimizing stabilizer that takes care of the hourly billing boundaries (concretely, it limits scale-up operations to the beginning of the hour and scale-down operations to the end of the hour).
See my detailed answer with supported scenarios on this thread.
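The stabilizer's windows are configurable in Wasabi, but the underlying idea is easy to illustrate. This plain-Python snippet is not Wasabi's API, and the window sizes are made-up defaults:

    from datetime import datetime

    def stabilizer_allows(action, now=None, up_window=15, down_window=10):
        """Allow scale-up only in the first `up_window` minutes of the
        clock hour and scale-down only in the last `down_window` minutes,
        since (classic) Azure billed by whole clock hours."""
        minute = (now or datetime.utcnow()).minute
        if action == "up":
            return minute < up_window
        if action == "down":
            return minute >= 60 - down_window
        return False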

Steve covered your first bullet point.
For the second: if you suspend your deployment, you are still billed for it. You have to delete the deployment to stop the accrual of compute-hours.

Alternatively, you could use Lokad.CQRS or Lokad.Cloud to combine the tasks that don't need to run all the time onto a single compute instance.
Of course, this approach is not universally applicable; depending on the specifics of your application, it may not suit your case.

Best way to implement background “timer” functionality in Python/Django

I am trying to implement a Django web application (on Python 3.8.5) which allows a user to create “activities” where they define an activity duration and then set the activity status to “In progress”.
The POST action to the View writes the new status, the duration and the start time (the end time, derived from the start time and duration, could of course also be stored here).
The back-end should then keep track of the duration and automatically change the status to “Finished”.
User actions can also change the status to “Finished” before the calculated end time (i.e. the timer no longer needs to be tracked).
I am fairly new to Python, so I need some advice on the smartest way to implement such a concept.
It needs to be efficient and scalable – I’m currently using a Heroku Free account so have limited system resources, but efficiency would also be important for future production implementations of course.
I have looked at the Python threading Timer, and this seems to work on a basic level, but I’ve not been able to determine what kind of constraints this places on the system – e.g. whether the spawned Timer thread might prevent the main thread from finishing and releasing resources (i.e. Heroku Dyno threads), etc.
I have read that persistence might be a problem (if the server goes down), and I haven’t found a way to cancel the timer from another process (the .cancel() method seems to rely on having the original object to cancel, and I’m not sure if this is achievable from another process).
I was also wondering about a more “background” approach, i.e. a single process which is constantly checking the database looking for activity records which have reached their end time and swapping the status.
But what would be the best way of implementing such a server?
Is it practical to read the database every second to find records with an end time of “now”? I need the status to change in real-time when the end time is reached.
Is something like Celery a good option, or is it overkill for a single process like this?
As I said I’m fairly new to these technologies, so I may be missing other obvious solutions – please feel free to enlighten me!
Thanks in advance.
To achieve this you need some kind of task-scheduling functionality. For a fast, simple implementation, the Timer object from the threading module is a good solution.
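As a minimal sketch of the Timer approach (the Activity model, its fields, and the app name are placeholders, not something Django provides):

    from threading import Timer

    from django.utils import timezone

    from myapp.models import Activity  # hypothetical app and model

    def finish_activity(activity_id):
        # Filter-and-update so nothing breaks if the user already
        # finished the activity manually in the meantime.
        Activity.objects.filter(id=activity_id, status="In progress") \
                        .update(status="Finished")

    def start_activity(activity, duration_seconds):
        activity.status = "In progress"
        activity.start_time = timezone.now()
        activity.save()
        # Caveat: the timer lives in this process; a dyno restart loses it,
        # and cancelling it from another process is not possible.
        Timer(duration_seconds, finish_activity, args=[activity.id]).start()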
A more complete solution is to use Celery. Even if you are new to it, digging in will be time well spent: Celery acts as a queue manager and distributes your work easily across several threads or processes.
You mentioned that you want it to be efficient and scalable, so I suspect you will end up needing more scheduling and multiprocessing functionality of this kind; for that reason my recommendation is Celery.
You can integrate it into your Django application easily by following the documentation: Integrate Django with Celery.
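A sketch of what that can look like, assuming a configured Celery app (broker and all) and the same hypothetical Activity model; scheduling the task with an ETA means the status flips at the end time even if the web process restarts:

    from celery import shared_task

    from myapp.models import Activity  # hypothetical app and model

    @shared_task
    def finish_activity(activity_id):
        # Same filter-and-update trick: harmless if already finished.
        Activity.objects.filter(id=activity_id, status="In progress") \
                        .update(status="Finished")

    # In the view, after saving the new activity:
    #     finish_activity.apply_async(args=[activity.id], eta=activity.end_time)

Celery can also revoke a scheduled task by its task id from any process, which addresses the .cancel() limitation of threading.Timer.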

Blueprism - Limit Process Accessibility To Certain Resources

I am trying to setup my processes so that certain bots/resources can and can't run them. I can see a capability drop down, but I don't know how to limit the capabilities.
I am almost certain that the capability feature is not complete yet and that it will make its appearance in upcoming Blue Prism versions.
As far as I can tell, you could either take advantage of some of the multi-team environment functions and hide some resources or processes from certain groups of people, or you could use the GetResourceName() function to terminate your process if it is not executed on one of the whitelisted machines.
Not ideal; we'll just have to see what the capabilities turn out to be.

Block Resource in Optaplanner Job scheduling

I've managed to use the Job scheduling example for a project I'm working on. I have an additional constraint I would like to add: some Resources should be blocked at certain times. For example, a Global renewable Resource shouldn't be used between minutes 10 and 20. Is this already doable, and if not, how can it be done in the score calculation?
Thanks
Use a custom shadow variable listener to predict the starting time of each task.
Then simply have a hard constraint to check that the task won't overlap with its blocks.
Penalize the amount of overlap to avoid a "score trap".
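OptaPlanner itself is Java, but the scoring idea is language-agnostic; here is a plain-Python illustration (not OptaPlanner API) of why penalizing the amount of overlap beats a flat penalty:

    def overlap(task_start, task_end, block_start, block_end):
        """Minutes by which a task overlaps a blocked window (0 if none)."""
        return max(0, min(task_end, block_end) - max(task_start, block_start))

    # A flat -1 per violation leaves the solver blind inside the trap; a
    # proportional penalty rewards every step that shrinks the overlap.
    assert overlap(5, 15, 10, 20) == 5    # runs 5 minutes into the block
    assert overlap(12, 14, 10, 20) == 2   # better: only 2 minutes overlap
    assert overlap(0, 8, 10, 20) == 0     # fully outside: no penalty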

Azure autoscale scale-in kills in-use instances

I'm using Azure Autoscale feature to process hundreds of files. The system scales up correctly to 8 instances and each instance processes one file at a time.
The problem is with scaling in. Because the scale-in rules seem to be based on metrics across ALL instances, if I tell it to reduce the instance count back to 1 after the average CPU load drops below 25%, it will arbitrarily kill instances that are still processing data.
Is there a way to prevent it from shutting down individual instances that are still in use?
Scale-down removes the highest instance numbers first. For example, if you have WorkerRole_IN_0, WorkerRole_IN_1, ..., WorkerRole_IN_8, and you then scale down by 1, Azure will remove WorkerRole_IN_8 first. Azure has no idea what your code is doing (i.e. whether it is still processing a file or is finished and ready to shut down).
You have a few options:
If the file processing is quick, you can delay the shutdown for up to 5 minutes in the OnStop event, giving your instance enough time to finish processing the file. This is the easiest solution to implement, but not the most reliable.
If processing the file can be broken up into shorter chunks of work, then you can have the instances process chunks until the file is complete. This way it doesn't really matter if an arbitrary instance is shut down, since you don't lose any significant amount of work and another instance will pick up where it left off. See https://learn.microsoft.com/en-us/azure/architecture/patterns/pipes-and-filters for a pattern. This is the ideal solution, as it is an optimized architecture for distributed workloads, but some workloads (e.g. image/video processing) may not break up easily.
You can implement your own autoscale algorithm and manually shut down individual instances that you choose. To do this you would call the Delete Role Instance API (https://msdn.microsoft.com/en-us/library/azure/dn469418.aspx); a rough sketch of the call follows below. This requires some external process to monitor your workload and execute management operations, so it may not be a good solution depending on your infrastructure.
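For illustration, a Python sketch of that Delete Role Instances call for one specific, known-idle instance; the subscription, service, and deployment names plus the certificate paths are placeholders, and auth assumes a management certificate:

    import requests

    SUBSCRIPTION = "<subscription-id>"    # placeholder
    SERVICE = "<cloud-service-name>"      # placeholder
    DEPLOYMENT = "<deployment-name>"      # placeholder
    URL = (f"https://management.core.windows.net/{SUBSCRIPTION}"
           f"/services/hostedservices/{SERVICE}/deployments/{DEPLOYMENT}"
           "/roleinstances/?comp=delete")
    HEADERS = {"x-ms-version": "2013-08-01", "Content-Type": "application/xml"}
    CERT = ("management-cert.pem", "management-key.pem")  # placeholder paths

    # Name the exact instance your monitoring process has decided is idle.
    body = ('<RoleInstances xmlns="http://schemas.microsoft.com/windowsazure" '
            'xmlns:i="http://www.w3.org/2001/XMLSchema-instance">'
            "<Name>WorkerRole_IN_8</Name></RoleInstances>")
    requests.post(URL, data=body, headers=HEADERS, cert=CERT).raise_for_status()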

What is the default / maximum value for WEBJOBS_RESTART_TIME?

I have a continuous WebJob and sometimes it can take a REALLY, REALLY long time to process (i.e. several days). I'm not interested in partitioning it into smaller chunks to get it done faster (by doing more of the work in parallel). Having it run slow and steady is fine with me. I was looking at the documentation about WebJobs here, where it lists all the settings, but it doesn't specify the defaults or maximums for these values. I was curious if anybody knew.
Since the docs say
"WEBJOBS_RESTART_TIME - Timeout in seconds between when a continuous job's process goes down (for any reason) and the time we re-launch it again (Only for continuous jobs)."
it doesn't matter how long your process runs.
Please clarify your question, as most of it is irrelevant to what you're asking at the end.
If you want to know the min, I'd say try 0. For the max, try MAX_INT (2147483647 seconds); that's about 68 years. That should do it ;).
There is no "max run time" for a continuous WebJob. Note that, in practice, there are no assurances about how long a given instance of the Web App hosting your WebJob will exist, so your WebJob may restart anyway. It's always good design to make your continuous job idempotent, meaning it can be restarted many times and will pick back up where it left off.
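As a minimal sketch of that restart-safe design (the checkpoint file and work items are illustrative, not a WebJobs API):

    import json
    import os

    CHECKPOINT = "checkpoint.json"  # illustrative path; must survive restarts

    def load_last_done():
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                return json.load(f)["last_done"]
        return -1

    def save_last_done(index):
        with open(CHECKPOINT, "w") as f:
            json.dump({"last_done": index}, f)

    def process(item):
        pass  # stand-in for the real (slow) unit of work

    work_items = list(range(1000))  # stand-in for the real backlog
    for i in range(load_last_done() + 1, len(work_items)):
        process(work_items[i])
        save_last_done(i)  # after a restart we resume at i + 1, not at 0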
