I have a curious issue getting a docker container set up to run and exit properly in an Azure logic app.
I have a Python script that prints hello world and then sleeps for 30 minutes. The sleep is there to make the script run longer, so I can test whether the container action in the logic app exits when the script actually finishes, and not when the loop times out.
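For reference, a minimal sketch of what that test script might look like (reconstructed from the console output below; the real script may differ):

```python
import time
from datetime import datetime

def main(sleep_minutes=30):
    # Print a timestamped greeting, then sleep so the container stays
    # "Running" long enough to observe when the Logic App loop exits.
    print(f"Running 'Hello World' at {datetime.now():%Y-%m-%d %H:%M:%S}")
    print("Hello, World!")
    print(f"Sleeping for {sleep_minutes} minutes...")
    time.sleep(sleep_minutes * 60)
    print(f"Awake after {sleep_minutes} minutes")

# In the container this is invoked as main(), i.e. with the full 30 minutes.
```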
First, I confirm that the container is running properly and exiting properly in powershell:
PS C:\Users\cgiltner> docker run helloworld
Running 'Hello World' at 2019-11-26 17:53:48
Hello, World!
Sleeping for 30 minutes...
Awake after 30 minutes
PS C:\Users\cgiltner>
I have the container set up in a logic app as follows, there is an “Until” loop that is configured to run until “State” = “succeeded”
But when I run it, the “until” loop continues for 1 hour, which is the default timeout period for an until loop (PT1H)
Looking at the properties of the container, I can see that the state of the container never changed from “Running”
Just to clarify, the container IS running and executing the script successfully. The problem is that the loop does not exit when the script is actually done; it waits until the timeout period elapses. There is no error message or failure indicating that it timed out; the run simply moves on to the next step. This has big implications in a complex logic app where multiple steps need to happen after containers run: it causes things to take hours.
The first thing to understand is that the first action of your Logic App only starts the creation of the Azure Container Instance; by the time that action completes, the container group is usually still being provisioned. The action returns the pending state it observed at creation time, and that output is never refreshed. Your Until action then keeps checking that stale output for the succeeded state, so it waits until the loop times out.
The solution is to add a Delay action after the creation of the Azure Container Instance, and then add an action that gets the properties (and logs) of the containers in the container group, so that each iteration of the Until loop evaluates a freshly fetched state.
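The pattern the Until loop should implement, sketched here in Python with an illustrative get_state callable (not an Azure API), looks like this:

```python
import time

def wait_for_state(get_state, target="Succeeded", delay=10, max_attempts=360):
    """Poll get_state() until it returns target; the key point is that the
    state is re-fetched on every iteration instead of being read once."""
    for _ in range(max_attempts):
        if get_state() == target:
            return True
        time.sleep(delay)  # the Delay action's counterpart
    return False
```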
Related
I have a cron job which processes lakhs (hundreds of thousands) of records with API calls and runs very frequently. The issue is that sometimes the API calls time out and the cron job gets stuck in a running state. When the next trigger time comes, a new cron job is started on a different thread. This results in duplicate data and multiple instances of the same job running on multiple threads. How do I stop this?
I have a webjob in Azure, hosted on an App Service that is not used for anything else. I am currently deploying the webjob from Visual Studio, but this will change in the future, as it's not yet in production. It's a .NET Core 3.1 webjob that compiles to an EXE, but that shouldn't matter to this question (and I'm aware of Azure Functions, but they are also not part of my question).
The webjob is a continuous webjob triggered by a queue. I have set it up to run 10 batches simultaneously. I have looked online, but the answers I found were unclear.
My question is: Let's say I have 3 jobs running. Then I deploy a new version of the EXE file. This seems to work without problems. But what happens to the jobs that are running? Will they continue running to the end? Or will it fail and stop? I haven't quite managed to sort that out and I wanted to ask here in case someone have helpful experience on this.
My queue related config is like this, if that's helpful:
.ConfigureWebJobs(b =>
{
    b.AddAzureStorageCoreServices();
    b.AddAzureStorage(a =>
    {
        a.BatchSize = 10;                                // fetch up to 10 queue messages at a time
        a.NewBatchThreshold = 5;                         // fetch a new batch when fewer are still in flight
        a.MaxDequeueCount = 1;                           // a message is tried once before going to the poison queue
        a.MaxPollingInterval = TimeSpan.FromSeconds(20); // wait at most 20 s between queue polls
    });
})
Thank you!
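For anyone reading along, here is a rough sketch of how those queue knobs interact, as I understand the WebJobs SDK documentation (a simplification, not the actual SDK internals):

```python
# Assumed simplification of the WebJobs SDK queue listener settings above.
BATCH_SIZE = 10          # messages fetched per batch (a.BatchSize)
NEW_BATCH_THRESHOLD = 5  # fetch more once the in-flight count drops below this

def should_fetch_new_batch(in_flight):
    # The host grabs another batch when the number of messages still being
    # processed falls below the threshold.
    return in_flight < NEW_BATCH_THRESHOLD

def max_concurrent_messages():
    # Upper bound on messages processed in parallel per host instance.
    return BATCH_SIZE + NEW_BATCH_THRESHOLD
```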
But what happens to the jobs that are running? Will they continue running to the end? Or will it fail and stop?
When a WebJob that uses the WebJobs SDK picks up a message from a queue, it acquires it with a 10-minute lease. If the job process dies while processing the message, the lease expires after 10 minutes and the message goes back into the queue. If the WebJob is restarted, it will pick that message up again. The message is only deleted if the function completes successfully.
Therefore, if the job dies and restarts immediately, as in the case of a redeploy, it might take up to 10 minutes to pick the message up again. For the same reason, it is recommended to either save state yourself or make the function idempotent.
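A minimal sketch of what "make the function idempotent" can mean for queue processing (illustrative names; in a real webjob the processed-id set would live in durable storage, not in memory):

```python
processed_ids = set()  # would be a table/blob in a real deployment

def handle_message(msg_id, payload, do_work):
    # If the lease expired and the message is redelivered, the duplicate
    # delivery becomes a harmless no-op.
    if msg_id in processed_ids:
        return "skipped"
    do_work(payload)
    processed_ids.add(msg_id)
    return "done"
```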
In the WebJobs Dashboard you will see two invocations for the same message. One of them will be marked as Never Finished because the function execution never completed.
Unfortunately, there is no out-of-the-box solution to prevent jobs from running during a deploy. You would have to create your own logic that notifies the job (through a queue message, perhaps) that a deploy is about to start, and then aborts the host. The host abort will wait for any executing function to stop and will prevent new ones from starting. However, this is tricky if you have multiple instances of the webjob, because only one of them will get the notification.
I am using PM2 in cluster mode and have 2 instances of my node.js application running. I have some long executing cron jobs (about 30 seconds) that I am trying to run. I am placing an if statement before the execution of the cron jobs to ensure that they only run on the first process via
if (process.env.NODE_APP_INSTANCE === 0) {
myCronFunction()
}
The goal was that since there are two processes, and PM2 should be load balancing them, if the cron job executes on process one, process two would still be available to respond to requests. I am not sure whether PM2 is failing to load balance them or something else is going on, but when my cron job executes on instance one, instance two still does not respond to requests until the job on instance one finishes executing.
I'm not sure why that is. It is my understanding that they are supposed to be completely independent of one another.
Anyone have any ideas?
I am very new to Azure so I am not sure if my question is stated correctly but I will do my best:
I have an app that sends data in the form (1.bin, 2.bin, 3.bin...), always in consecutive order, to a blob input container. When this happens, it triggers an Azure Function via QueueTrigger, and the output of the function (1output.bin, 2output.bin, 3output.bin...) is stored in a blob output container.
When the function fails, it retries 5 times before giving up. When it succeeds, it runs just once and that's it.
I am not sure what happened last week, but since then, after each successful run the function sits idle for about 7 minutes and then starts the process again as if it were the first time. So, for example, the blob container receives 22.bin, the function processes it and generates 22output.bin; it is supposed to stop after that, but seven minutes later it is processing 22.bin again.
I don't think it is the app, because each time the app sends data, even if it is the same data, it names it with the next number (in my example, 23.bin). But that is not what happens here: it just processes 22.bin again, as if the trigger queue message was not cleared after the successful run, and it keeps doing so over and over until I stop the function and make it crash in order to stop it.
Any idea why this is happening, and what I can try to correct it, would be greatly appreciated. I am just starting to learn about all this.
One thing that could be happening is that the function execution time is exceeding 5 minutes, the default timeout for function apps on the Consumption plan. When that limit is hit, the Functions runtime terminates the current execution and restarts the function host, and the queue message, never having been completed, becomes visible again and is processed again.
One way to test this would be to create a Function App on a Standard App Service plan instead of the Consumption plan; a function app on a dedicated plan has no execution time limit by default. You can also log the function's start and end times to see whether it takes longer than 5 minutes to finish processing a queue message.
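The suggested start/end logging can be as simple as wrapping the message handler (a sketch; the function and message names are illustrative):

```python
import logging
import time

logging.basicConfig(level=logging.INFO)

def timed_process(message_name, work):
    # Log when processing starts and how long it took, to check whether it
    # exceeds the 5-minute Consumption plan timeout.
    start = time.monotonic()
    logging.info("start processing %s", message_name)
    work(message_name)
    elapsed = time.monotonic() - start
    logging.info("finished %s in %.1f s", message_name, elapsed)
    return elapsed
```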
Subject says it all really :) Say I've got a pretty busy Azure continuous webjob that is processing from an azure Queue:
public static void ProcessQueue([QueueTrigger("trigger")] Info info)
{ .... }
If I re-deploy the webjob, I can see that any currently executing job seems to be aborted (I see a "Never Finished" status). Is that job replayed after I release or is it lost forever?
Also, is there a nice way to make sure that no jobs are running when we deploy webjobs, or is it up to the developer to code a solution to that (such as a config flag that is checked every run).
Thanks
When a WebJob that uses the WebJobs SDK picks up a message from a queue, it acquires it with a 10-minute lease. If the job process dies while processing the message, the lease expires after 10 minutes and the message goes back into the queue. If the WebJob is restarted, it will pick that message up again. The message is only deleted if the function completes successfully.
Therefore, if the job dies and restarts immediately, as in the case of a redeploy, it might take up to 10 minutes to pick the message up again. For the same reason, it is recommended to either save state yourself or make the function idempotent.
In the WebJobs Dashboard you will see two invocations for the same message. One of them will be marked as Never Finished because the function execution never completed.
Unfortunately, there is no out-of-the-box solution to prevent jobs from running during a deploy. You would have to create your own logic that notifies the job (through a queue message, perhaps) that a deploy is about to start, and then aborts the host. The host abort will wait for any executing function to stop and will prevent new ones from starting. However, this is tricky if you have multiple instances of the webjob, because only one of them will get the notification.
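The "abort the host before deploying" idea could look roughly like this (an illustrative stand-in, not the actual WebJobs host API):

```python
import threading

class DrainableHost:
    """Stops accepting new work once draining; lets in-flight work finish."""

    def __init__(self):
        self._draining = threading.Event()
        self._lock = threading.Lock()
        self._active = 0

    def try_start(self):
        # Refuse new function invocations once a deploy notification arrived.
        if self._draining.is_set():
            return False
        with self._lock:
            self._active += 1
        return True

    def finish(self):
        with self._lock:
            self._active -= 1

    def begin_drain(self):
        # Called when the deploy-notification queue message is received.
        self._draining.set()

    def idle(self):
        # The deploy can proceed once no functions are still executing.
        with self._lock:
            return self._active == 0
```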