I have a short-running container that does background processing (no ingress), deployed to the Azure Container Apps service in Azure. My config is min replicas 0 (for when the container completes its work and exits) and max replicas 1 (I only want one instance of my container running at any time).
I want to start my container once every hour; it generally runs for about 3 minutes, completes its task, and exits.
Is there any way with Azure Container Apps to schedule the start of my container? At the moment I have resorted to running my Azure DevOps pipeline on a schedule, which calls the az containerapp update command, but it feels like the wrong way to go about this.
There's no scheduling concept in Container Apps. Here are some ideas:
1. Enable ingress and create a Function or a Logic App that runs on a schedule and "pings" the Container App to start the process (a minimal sketch follows this list).
2. Create a Logic App that runs on a schedule, creates a Container Instance every hour, waits for it to complete, and deletes it.
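For idea 1, here is a minimal sketch of the scheduled ping as a timer-triggered Azure Function in TypeScript (v4 Node.js programming model). The function name, the Container App FQDN, and the /start route are placeholders you'd replace with your own; with min replicas 0 and ingress enabled, the HTTP request itself is what scales the app up to one replica.

import { app, InvocationContext, Timer } from "@azure/functions";

app.timer("hourlyContainerPing", {
  schedule: "0 0 * * * *", // NCRONTAB: fire at the top of every hour
  handler: async (timer: Timer, context: InvocationContext): Promise<void> => {
    // Placeholder URL: replace with your Container App's ingress FQDN and route.
    const res = await fetch("https://<your-container-app-fqdn>/start");
    context.log(`Container App ping returned HTTP ${res.status}`);
  },
});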
I have a Logic App that is triggered when my connections go above 120; it runs a PowerShell script which reduces the number of connections. The problem I am facing is that once it runs and the connections drop back below 120, the Logic App is triggered again because the alert fires again, generally within minutes. Is there a way I can tweak this Logic App so it won't trigger again for, say, 10 minutes after it has run, to stop my PowerShell script from running twice?
You could keep a persistent value stored in any one of the cloud services; let me take Azure Blob Storage, for instance.
The currently running instance persists its run time to the blob.
When the next instance is triggered, it checks the last run time from the blob; if that was less than 10 minutes ago, your logic skips the execution of the PowerShell.
The overall logic: read the last run time from the blob, compare it to the current time, then either skip or run the script and write the current time back (a code sketch follows the note below).
Note:
Logic Apps don't have a built-in concept of persistent storage. You can use Azure SQL, Cosmos DB, SharePoint, Azure Storage, etc., via their built-in connectors to achieve this persistence.
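If you prefer doing the check in code rather than with designer actions, here's a minimal sketch of the same pattern using the @azure/storage-blob SDK; the "state" container, the "last-run.txt" blob, and the connection-string environment variable are assumptions.

import { BlobServiceClient } from "@azure/storage-blob";

const COOLDOWN_MS = 10 * 60 * 1000; // 10-minute window

// Reads the last run time from a blob and decides whether to run.
// Assumes AZURE_STORAGE_CONNECTION_STRING is set and the "state" container exists.
export async function shouldRun(): Promise<boolean> {
  const service = BlobServiceClient.fromConnectionString(
    process.env.AZURE_STORAGE_CONNECTION_STRING!
  );
  const blob = service.getContainerClient("state").getBlockBlobClient("last-run.txt");

  let lastRun = 0;
  try {
    lastRun = Number((await blob.downloadToBuffer()).toString());
  } catch {
    // First run: the blob does not exist yet, so treat the last run as "never".
  }

  if (Date.now() - lastRun < COOLDOWN_MS) {
    return false; // ran less than 10 minutes ago, skip this trigger
  }

  const now = String(Date.now());
  await blob.upload(now, Buffer.byteLength(now)); // record this run (overwrites)
  return true;
}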
I have a feed job running as a pod on AKS, triggered by a Logic App every hour. Purpose: the data is processed and uploaded to Azure Blob Storage. I planned to move this to Azure Container Instances (ACI), so I automated the ACI deployment via Jenkins, and it works as expected. Now I'm looking to stop the Azure Container Instance once the data processing is completed and start it back up before the next hourly trigger, so the Logic App can initiate it. Should I have a job in Jenkins delete the ACI after 10 minutes and run the build again before the hour, or
What would be the best approach to stop/delete the ACI once the data is uploaded to the storage account, and start it back before the next hour?
I have completed this request; I took the approach below:
The initial build and deployment are handled by Jenkins, deploying to AKS via virtual nodes onto ACI.
One Logic App uses the ACI connector to start the container group and then triggers the API.
Another Logic App uses Event Grid to detect when the file is modified on the storage account, and then stops the ACI via the ACI connector.
This ensures my ACI is stopped once the work is completed and started back up when needed.
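Should you ever want to script the same start/stop operations instead of using the ACI connector, a sketch along these lines with the @azure/arm-containerinstance SDK should work; the subscription ID, resource group, and container group names are placeholders.

import { DefaultAzureCredential } from "@azure/identity";
import { ContainerInstanceManagementClient } from "@azure/arm-containerinstance";

// Placeholder names: substitute your own subscription, resource group, and container group.
const client = new ContainerInstanceManagementClient(
  new DefaultAzureCredential(),
  "<subscription-id>"
);

// Start the container group before the hourly trigger fires (start is a long-running operation).
export async function startFeedJob(): Promise<void> {
  await client.containerGroups.beginStartAndWait("<resource-group>", "<container-group>");
}

// Stop it once the blob has been updated; a stopped group is deallocated and not billed.
export async function stopFeedJob(): Promise<void> {
  await client.containerGroups.stop("<resource-group>", "<container-group>");
}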
When I read the documentation for Azure WebJobs, I found the statement that Continuous WebJobs run on all instances of the web app.
My WebJob workflow:
I need to prepare a report on the newly created users in my application at 12 AM EST and email it to me daily. This time is changeable through the UI, so I need the job to run continuously to find and run the schedule at the selected time.
My Question
If the WebJob runs on all instances, say two instances are currently running for my web app:
Will I receive two emails, with the WebJob on each instance preparing and sending the report?
Or will I get only one email, irrespective of how many WebJob instances are running?
A Continuous WebJob by default runs on all instances of your App Service Plan.
Whether your WebJob will run twice depends on how you implemented it. A WebJob listening on a Service Bus Queue will only process each queue message once, no matter how many instances there are (though if processing fails, the message will be processed more than once).
If you wish, you can also make the WebJob run on a single instance by including a settings.job file in your WebJob with the following content:
{
"is_singleton": true
}
If I want to schedule a task to run, say, once every 30 minutes, I could do this with a basic timeout or use a Node module like node-schedule.
But if I deploy my app to a cloud such as Amazon AWS or Azure and scale the instances to, say, 10, will this task then be scheduled to run 10 times, once on each instance? How can I avoid this, or am I thinking about how cloud instances work in the wrong way?
If you want to use an Azure component, you could use the Azure scheduler:
Create jobs that run on your schedule
Azure Scheduler lets you create jobs in the cloud that reliably invoke services inside and outside of Azure, such as calling HTTP/S endpoints or posting messages to Azure Storage queues. You can choose to run jobs right away, on a recurring schedule, or at some point in the future.
Azure Scheduler
It entirely depends on your code, but I'd imagine that yes, it will run on each instance, probably slightly out of sync too.
This article describes quite nicely the problems with scheduled tasks:
http://dejanglozic.com/2014/07/21/node-js-apps-and-periodic-tasks/
My advice would be to try and avoid scheduled tasks as much as possible.
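If you do keep a scheduled task, one way to make sure only one of the 10 instances executes it is a shared lock. Here is a minimal sketch using an Azure blob lease together with node-schedule; the connection-string environment variable, the "locks" container, and the "every-30-min" blob are assumptions that you'd provision up front.

import { BlobServiceClient } from "@azure/storage-blob";
import * as schedule from "node-schedule";

// Placeholder task body.
async function runTheTask(): Promise<void> {
  // ... the actual work ...
}

// Assumes AZURE_STORAGE_CONNECTION_STRING is set, and that a "locks" container
// holding an empty "every-30-min" blob was created up front.
const service = BlobServiceClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING!
);
const lease = service
  .getContainerClient("locks")
  .getBlobClient("every-30-min")
  .getBlobLeaseClient();

// The schedule fires on every instance, but only the instance that wins the
// 60-second lease actually runs the task; the others skip this tick.
schedule.scheduleJob("*/30 * * * *", async () => {
  try {
    await lease.acquireLease(60);
  } catch {
    return; // another instance holds the lock
  }
  await runTheTask();
});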
I'm working on the deployment processes for a web application which runs inside an Azure cloud service.
I deploy to the staging slot; once all the instances report a status of RoleReady, I do a VIP swap into the production slot. The aim is that I can deploy a new version and my users won't have to wait while the site warms up.
I have added a certain amount of warmup to RoleEntryPoint.OnStart; essentially it hits a number of the application's endpoints to let the caches spin up and view compilation run. What I'm seeing is that the instances all report Ready before this process has completed.
How can I tell if my application has warmed up before I swap staging into production? The deploy script I'm using is a derivative of https://gist.github.com/chartek/5265057.
The role instance does not report Ready until the OnStart method finishes and the Run method begins. You can validate this by looking at the guest agent logs on the VM itself (see http://blogs.msdn.com/b/kwill/archive/2013/08/09/windows-azure-paas-compute-diagnostics-data.aspx for more info about those logs).
When you access the endpoints, are you waiting for the responses or just sending the requests? See Azure Autoscale Restarts Running Instances for code which hits the endpoints in OnStart and waits for the responses before letting the instance move to the Ready state.