I have a few Azure Functions sharing the same code, so I created a batch file for publishing my libs. It is a simple .bat file: for each of my Azure Functions, it connects to the host and uses robocopy to synchronize folders.
However, each time I publish, currently running functions are dropped. I want to avoid that. Is there a way to let a running function finish its work naturally?
I think it's possible because when I publish I don't overwrite the running DLLs in place; I just copy files into the <azure-function-url>/site/wwwroot folder.
NOTE:
The function calls an async method without await, so the async method has not completed its work when the source changes. (I'm not focusing on that problem here; thanks Matt for the comment, it opened my eyes.)
The Functions runtime is designed to allow functions to exit gracefully in the event of host restarts; see here.
Not awaiting your async calls is an antipattern in functions, as we won't be able to track your function execution. We use the returned Task to determine when your function has finished. If you do not return a Task, we assume your function has completed when it returns.
In your case, that means we will kill the host on restarts while your orphaned asynchronous calls are running. If you fail to await async calls, we also don't guarantee successful:
Logging
Output bindings
Exception handling
Do: static async Task Run(...){ await asyncMethod(); }
Don't: static void Run(...){ asyncMethod(); }
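A minimal sketch of the "Do" pattern in context, assuming a queue-triggered C# (v1) function and a hypothetical DoWorkAsync helper:
public static async Task Run([QueueTrigger("myqueue-items")] string message, TraceWriter log)
{
    // Awaiting and returning the Task lets the runtime track completion,
    // so graceful shutdown, logging, output bindings and exception handling behave as expected.
    await DoWorkAsync(message);
    log.Info($"Finished processing: {message}");
}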
Related
For HTTP functions in Firebase, the docs say:
Terminate HTTP Functions DOC link
Always end an HTTP function with send(), redirect(), or end(). Otherwise, your function might continue to run and be forcibly terminated by the system. See also Sync, Async and Promises.
QUESTION
Is something similar necessary to other types of functions like the ones triggered by Firestore events?
Do I need to return something (even if it's null) or some other command to explicitly end it?
Is something similar necessary to other types of functions like the ones triggered by Firestore events?
All the other types of Cloud Functions (i.e. all Cloud Functions except HTTP Cloud Functions) require that you return a promise that resolves when the asynchronous work is complete. This includes background-triggered Cloud Functions (e.g. the ones triggered by Firestore events).
If your Cloud Function does not include asynchronous operations, you can return a simple value, like null, when all the work is finished. You would also do this if you want to cancel the Cloud Function execution, for example when a pre-condition is not fulfilled. The official Cloud Functions samples show several examples, in particular here and here.
I would suggest you watch the three videos about "JavaScript Promises" from the Firebase video series (https://firebase.google.com/docs/functions/video-series/), which explain this in detail.
According to the official documentation about Terminating background functions:
You must signal when background functions have completed. Otherwise, your function can continue to run and be forcibly terminated by the system. You can signal function completion in each runtime as described below:
In the Node.js runtimes version 8 and above, signal function completion by either:
1. Invoking the callback argument
2. Returning a Promise
3. Wrapping your function using the async keyword (which causes your function to implicitly return a Promise)
4. Returning a value
If invoking the callback argument or synchronously returning a value, ensure that all asynchronous processes have completed first. If returning a Promise, Cloud Functions ensures that the Promise is settled before terminating.
I have a queue-triggered Azure Function with a DurableOrchestrationClient. I am able to start a new execution of my orchestration function, which triggers multiple ActivityTrigger functions and waits for them all to finish. Everything works great.
My issue is that I am unable to check the status of my orchestration function ("TestFunction"). GetStatusAsync always returns null. I need to know when the orchestration function is actually complete so I can process the returned object (a bool).
public static async void Run([QueueTrigger("photostodownload", Connection = "QueueStorage")] PhotoInfo photoInfo, [OrchestrationClient] DurableOrchestrationClient starter, TraceWriter log)
{
    var x = await starter.StartNewAsync("TestFunction", photoInfo);
    Thread.Sleep(2 * 1000);
    var y = await starter.GetStatusAsync(x);
}
StartNewAsync enqueues a message into the control queue; it doesn't mean that the orchestration starts immediately.
GetStatusAsync returns null if the instance either doesn't exist or has not yet started running. So the orchestration probably just hasn't started yet within those 2 seconds of sleep that you have.
Rather than having a fixed wait timeout, you should either periodically poll the status of your orchestration, or send something like a Done event from the orchestration as the last step of the workflow.
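A minimal polling sketch for the Run method above, assuming Durable Functions 1.x types (DurableOrchestrationStatus, OrchestrationRuntimeStatus); the loop bound and delay are arbitrary:
var instanceId = await starter.StartNewAsync("TestFunction", photoInfo);
DurableOrchestrationStatus status = null;
// Poll until the orchestration reaches a terminal state (or we give up).
for (var i = 0; i < 30; i++)
{
    status = await starter.GetStatusAsync(instanceId);
    if (status != null &&
        status.RuntimeStatus != OrchestrationRuntimeStatus.Running &&
        status.RuntimeStatus != OrchestrationRuntimeStatus.Pending)
    {
        break;
    }
    await Task.Delay(TimeSpan.FromSeconds(2));
}
if (status?.RuntimeStatus == OrchestrationRuntimeStatus.Completed)
{
    // Output is the orchestration's return value (the bool in this case).
    var result = status.Output.ToObject<bool>();
    log.Info($"TestFunction finished with result {result}");
}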
Are you using Functions 1.0 or 2.0? A similar issue has been reported for the Functions 2.0 runtime on GitHub.
https://github.com/Azure/azure-functions-durable-extension/issues/126
Also, when you say everything works great, do you mean the ActivityTrigger functions complete execution?
Are you running the functions locally, or are they deployed on Azure?
I'm still looking for a way to run an asynchronous call before returning a response. In other words, I have a long asynchronous process that should start running before the response is returned; the user should not have to wait for the end of this process:
$data = ....
...// Here, call an asynchronous function <<----
return $this->getSuccessResponse($data);
I tried Events, Thread, and Process, but with no result.
What should I do? (Something other than RabbitMQ.)
You can use a queuing system like Beanstalkd. With the LeezyPheanstalkBundle bundle you can manage the queues.
In the controller, insert the job into the queue. Then, in a console command running under Supervisor, execute your task.
Edit:
You can use an EventSubscriber (for example on the kernel.terminate event, which runs after the response has been sent).
I am working in Node.js with Express on a web app that communicates with MongoDB frequently. I am currently running production with my own job queue system, which only begins processing a job once the previous job has been completed (an approach that kue also seems to take).
To me this seems wildly inefficient. I was hoping for a more asynchronous job queue, so I am looking for advice on how other Node.js developers queue their jobs and structure their processing.
One of my ideas is to process any jobs that are received immediately and return the resulting data in the order of addition.
Also to be considered: currently each user has their own independent job queue instance. Is this normal practice? Is there any reason why this should not be the case (i.e., should all users send jobs to one universal queue)?
Any comments/advice are appreciated.
Why did you build your own queue system? You did quite a lot of work to serialize an async queue with addLocalJob.
Why don't you just do something like
on('request', function(req, res) { queryDB(parameter, function(result) { res.send(result); }); });
? Full parallel access, no throttle, no (async) waiting.
If you really want to do it "by hand" in your own code, why not execute the first n elements of your trafficQueue instead of only the first?
If you want to throttle the DB, there are two ways:
Use a library like async and its parallelLimit function.
Connect to your MongoDB over HTTP(S) and use the Node built-in http.globalAgent.maxSockets.
Hope this helps.
I'm using an MVC4 ApiController to upload data to an Azure blob. Here is the sample code:
public Task PostAsync(int id)
{
return Task.Factory.StartNew(() =>
{
// CloudBlob.UploadFromStream(stream);
});
}
Does this code even make sense? I think ASP.NET is already processing the request in a worker thread, so running UploadFromStream in another thread doesn't seem to make sense since it now uses two threads to run this method (I assume the original worker thread is waiting for this UploadFromStream to finish?)
So my understanding is that async ApiController only makes sense if we are using some built-in async methods such as HttpClient.GetAsync or SqlCommand.ExecuteReaderAsync. Those methods probably use I/O Completion Ports internally so it can free up the thread while doing the actual work. So I should change the code to this?
public Task PostAsync(int id)
{
// only to show it's using the proper async version of the method.
return Task.Factory.FromAsync(BeginUploadFromStream, EndUploadFromStream, ...);
}
On the other hand, if all the work in the Post method is CPU/memory intensive, then the async version PostAsync will not help throughput of requests. It might be better to just use the regular "public void Post(int id)" method, right?
I know it's a lot questions. Hopefully it will clarify my understanding of async usage in the ASP.NET MVC. Thanks.
Yes, most of what you say is correct. Even down to the details with completion ports and such.
Here is a tiny error:
I assume the original worker thread is waiting for this UploadFromStream to finish?
Only your task thread is running; you're using the async pipeline, after all. It does not wait for the task to finish, it just hooks up a continuation (just like with HttpClient.GetAsync).
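For completeness, a sketch of a genuinely I/O-bound version, assuming the classic Microsoft.WindowsAzure.Storage SDK (a version that exposes UploadFromStreamAsync); the connection string, container name and blob name are placeholders:
public async Task PostAsync(int id)
{
    // Placeholder connection string for illustration.
    var account = CloudStorageAccount.Parse(storageConnectionString);
    var blob = account.CreateCloudBlobClient()
                      .GetContainerReference("uploads")
                      .GetBlockBlobReference(id.ToString());

    using (var stream = await Request.Content.ReadAsStreamAsync())
    {
        // No extra thread is blocked: the request thread is released
        // while the storage SDK performs the network I/O.
        await blob.UploadFromStreamAsync(stream);
    }
}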