Azure Durable Functions, unexpected value of 'IsReplaying' flag

I'm trying to understand Azure Durable Functions behavior. Specifically about how the Orchestrator function gets replayed. I thought I was getting the hang of it until I found one value of the Context.IsReplaying flag that didn't make sense to me.
My code is very "hello world"-ish: it has an orchestrator function that calls two activity functions one after the other.
[FunctionName("OrchestratorFn")]
public static async Task<object> Orchestrator(
[OrchestrationTrigger] IDurableOrchestrationContext context,
ILogger log
) {
log.LogInformation($"--------> Orchestrator started at {T()}, isReplay={context.IsReplaying}");
string name = context.GetInput<string>();
string name1 = await context.CallActivityAsync<string>("A_ActivityA", name);
string name2 = await context.CallActivityAsync<string>("A_ActivityB", name1);
log.LogInformation($"--------> Orchestrator ended at {T()}, isReplay={context.IsReplaying}");
return new {
OutputFromA = name1,
OutputFromB = name2
};
}
[FunctionName("A_ActivityA")]
public static async Task<object> ActivityA(
[ActivityTrigger] string input,
ILogger log
) {
log.LogInformation($"--------> Activity A started at {T()}");
await Task.Delay(3000);
log.LogInformation($"--------> Activity A ended at {T()}");
return input + "-1";
}
[FunctionName("A_ActivityB")]
public static async Task<object> ActivityB(
[ActivityTrigger] string input,
ILogger log
) {
log.LogInformation($"--------> Activity B started at {T()}");
await Task.Delay(3000);
log.LogInformation($"--------> Activity B ended at {T()}");
return input + "-2";
}
In the console output (I've cut out everything except the output where I log time), this is what I see:
[1/26/2020 12:56:40 PM] ------> DurableClient Function Running at 56.40.8424.
[1/26/2020 12:56:49 PM] ------> DurableClient Function END at 56.49.5029.
[1/26/2020 12:57:03 PM] ------> Orchestrator started at 57.03.7915, isReplay=False
[1/26/2020 12:57:04 PM] ------> Activity A started at 57.04.1905
[1/26/2020 12:57:07 PM] ------> Activity A ended at 57.07.2016
[1/26/2020 12:57:24 PM] ------> Orchestrator started at 57.24.8029, isReplay=True
[1/26/2020 12:57:40 PM] ------> Activity B started at 57.40.4136
[1/26/2020 12:57:43 PM] ------> Activity B ended at 57.43.4258
[1/26/2020 12:57:53 PM] ------> Orchestrator started at 57.53.1490, isReplay=True
[1/26/2020 12:57:59 PM] ------> Orchestrator ended at 57.59.0736, isReplay=False
It's the 'isReplay=False' on the very last line that I can't explain. Why is this? Shouldn't isReplay be 'True'?
I'm using Microsoft.Azure.WebJobs.Extensions.Durable v2.1.1

No, it should not be isReplay=True, because that line really only executes once. Whenever the orchestrator awaits a call, it stops its own execution right there and waits for that call to finish. When the call completes, the orchestrator runs through all the code up to its last stopping point again, without making the outbound calls again.
Since there is no further await after your last logging statement, that line is only reached once, on the final execution, which at that point is no longer replaying.
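As an aside, if the duplicated log lines during replay are noisy, Durable Functions 2.x exposes a replay-safe logger on the orchestration context; a minimal sketch:
[FunctionName("OrchestratorFn")]
public static async Task<object> Orchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext context,
    ILogger log)
{
    // The replay-safe logger suppresses output while context.IsReplaying is true,
    // so each message is logged only once per orchestration instance.
    ILogger safeLog = context.CreateReplaySafeLogger(log);

    safeLog.LogInformation("Orchestrator started");
    string name = context.GetInput<string>();
    string name1 = await context.CallActivityAsync<string>("A_ActivityA", name);
    string name2 = await context.CallActivityAsync<string>("A_ActivityB", name1);
    safeLog.LogInformation("Orchestrator ended");

    return new { OutputFromA = name1, OutputFromB = name2 };
}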

Related

Order Process with Azure Durable Functions or not

I am creating an architecture to process orders from our e-commerce website, which gets 10,000 or more orders every hour. We are using an external third-party order fulfillment service, and they have about 5 steps/APIs that we have to run, which are dependent on each other.
I was thinking of using a fan-out/fan-in approach with durable functions.
My plan
Once an order is created on our end, we store it in a table with an order-completed flag.
Run a timer-triggered Azure Function that runs the durable orchestrator, which calls the activity functions for each step.
If it fails, the timer will pick up the order again until it is completed. But my question is: should we put the order on Service Bus and pick it up from there instead of using the timer trigger?
Because there can be more than 10,000 records each hour, we would have to run a query in the timer-triggered function, find the orders that are not completed, and start the durable orchestrator 10,000 times in a loop. My first question: can I run the durable function in parallel for 10,000 records?
If I use a Service Bus trigger to start the durable orchestrator, it will automatically run the function and its orchestration 10,000 times in parallel, right? But in that case I will have to build a dead-letter queue function/process so that if an order fails, we are able to move it back to the active topic.
Questions:
Is a durable function the correct approach, or is there a better and easier approach?
If yes, is a timer trigger or a Service Bus trigger better for starting the orchestrator function?
Can I run the durable function orchestrator in parallel from a timer-triggered Azure Function? I am not talking about calling the activity functions in parallel; those cannot run in parallel because we need the output of one to be the input of the next.
This use case fits function chaining. It can be done as follows:
Have the ordering system put a message on a queue (Storage or Service Bus).
Create an Azure Function with a storage queue trigger or Service Bus trigger. This is also the client function that starts the orchestration.
Create an orchestration function that invokes the 5 step APIs, with one activity function for each (similar to the function chaining example).
Create five activity functions, one for each API.
Ordering system
var clientOptions = new ServiceBusClientOptions
{
    TransportType = ServiceBusTransportType.AmqpWebSockets
};

//TODO: Replace the "<NAMESPACE-NAME>" and "<QUEUE-NAME>" placeholders.
await using var client = new ServiceBusClient(
    "<NAMESPACE-NAME>.servicebus.windows.net",
    new DefaultAzureCredential(),
    clientOptions);
ServiceBusSender sender = client.CreateSender("<QUEUE-NAME>");

var message = new ServiceBusMessage($"{orderId}");
await sender.SendMessageAsync(message);
Client function
public static class OrderFulfilment
{
    [FunctionName("OrderFulfilment")]
    public static async Task<string> Run(
        [ServiceBusTrigger("<QUEUE-NAME>", Connection = "ServiceBusConnection")] string orderId,
        [DurableClient] IDurableOrchestrationClient starter,
        ILogger log)
    {
        log.LogInformation(orderId);
        // Start a new orchestration instance for this order.
        return await starter.StartNewAsync("ChainedApiCalls", orderId);
    }
}
Orchestration function
[FunctionName("ChainedApiCalls")]
public static async Task<object> Run([OrchestrationTrigger] IDurableOrchestrationContext fulfillmentContext)
{
try
{
// .... get order with orderId
var a = await context.CallActivityAsync<object>("ApiCaller1", null);
var b = await context.CallActivityAsync<object>("ApiCaller2", a);
var c = await context.CallActivityAsync<object>("ApiCaller3", b);
var d = await context.CallActivityAsync<object>("ApiCaller4", c);
return await context.CallActivityAsync<object>("ApiCaller5", d);
}
catch (Exception)
{
// Error handling or compensation goes here.
}
}
Activity functions
[FunctionName("ApiCaller1")]
public static string ApiCaller1([ActivityTrigger] IDurableActivityContext fulfillmentApiContext)
{
string input = fulfillmentApiContext.GetInput<string>();
return $"API1 result";
}
[FunctionName("ApiCaller2")]
public static string ApiCaller2([ActivityTrigger] IDurableActivityContext fulfillmentApiContext)
{
string input = fulfillmentApiContext.GetInput<string>();
return $"API2 result";
}
// Repeat 3 more times...

Durable function and CPU resource > 80 %

I am running on an Azure Consumption Plan and notice high CPU usage for what I understand to be a simple task. I would like to know if my approach is correct. At times the CPU was above 80%.
I have created a scheduler function, which executes once a minute and checks a SQL DB for IoT devices that need to be controlled at specific times. If a device needs to be controlled, a durable function is executed. This durable function simply sends the device a message and waits for the reply before sending another request.
What I am doing is simply polling the durable function and then sleeping or delaying the function for x seconds as shown below:
[FunctionName("Irrimer_HttpStart")]
public static async Task<HttpResponseMessage> HttpStart(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")]HttpRequestMessage req,
[OrchestrationClient]DurableOrchestrationClient starter,
ILogger log)
{
// Function input comes from the request content.
log.LogInformation($"Started Timmer Irr");
dynamic eventData = await req.Content.ReadAsAsync<object>();
string ZoneNumber = eventData.ZoneNumber;
string instanceId = await starter.StartNewAsync("Irrtimer", ZoneNumber);
return starter.CreateCheckStatusResponse(req, instanceId);
}
[FunctionName("Irrtimer")]
public static async Task<List<string>> RunOrchestrator(
    [OrchestrationTrigger] DurableOrchestrationContext context, ILogger log)
{
    log.LogInformation($"Time Control Started--->");
    Iot_data_state iotstatedata = new Iot_data_state();
    iotstatedata.NextState = "int_zone";
    var outputs = new List<string>();
    outputs.Add("Stating Durable");
    iotstatedata.zonenumber = context.GetInput<string>();
    iotstatedata.termination_counter = 0;

    while (iotstatedata.NextState != "finished")
    {
        iotstatedata = await context.CallActivityAsync<Iot_data_state>("timer_irr_process", iotstatedata);
        outputs.Add(iotstatedata.NextState + " " + iotstatedata.now_time);
        if (iotstatedata.sleepduration > 0)
        {
            DateTime deadline = context.CurrentUtcDateTime.Add(TimeSpan.FromSeconds(iotstatedata.sleepduration));
            await context.CreateTimer(deadline, CancellationToken.None);
        }
    }
    return outputs;
}
I have another function, "timer_irr_process", which is simply a case statement that performs the queries as required and then sets the delay in seconds before it needs to be invoked again. When it gets to the "Finish" case, the durable function is no longer needed and it exits.
The kind of tasks I am trying to handle efficiently is sending a message to an IoT device to switch it on, observing its behaviour in case it has been controlled manually, and various other things. If it malfunctions, send a message to a user; if it performs correctly and the task is finished, close the durable function.
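For reference, a minimal sketch of what such an activity might look like (the Iot_data_state fields come from the orchestrator above; the state names, queries and delays are assumptions):
[FunctionName("timer_irr_process")]
public static Iot_data_state TimerIrrProcess([ActivityTrigger] Iot_data_state state, ILogger log)
{
    switch (state.NextState)
    {
        case "int_zone":
            // e.g. send the switch-on command to the device
            state.NextState = "check_device";
            state.sleepduration = 30;   // re-check in 30 seconds
            break;
        case "check_device":
            // e.g. query the device status and decide whether to keep monitoring
            state.NextState = "finished";
            state.sleepduration = 0;
            break;
        default:
            state.NextState = "finished";
            state.sleepduration = 0;
            break;
    }
    state.now_time = DateTime.UtcNow.ToString("HH:mm:ss");
    return state;
}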
Is there any efficient way of doing this?

Azure durable function running multiple times on startup when running locally

I have an HTTP-triggered Azure durable function with an orchestration trigger called "ExecuteWork" and two activities, "HandleClaimsForms" and "HandleApplicationForms". I will add the definitions for them below. The function is used to process PDFs in a blob storage container. When running locally, it executes "HandleClaimsForms" four or five times on startup without being called.
Here are the logs that it is producing:
Functions:
Function1: [GET,POST] http://localhost:7071/api/Function1
ExecuteWork: orchestrationTrigger
HandleApplicationForms: activityTrigger
HandleClaimsForms: activityTrigger
[2022-06-07T12:39:44.587Z] Executing 'HandleClaimsForms' (Reason='(null)', Id=c45878fe-35c8-4a57-948e-0b43da969427)
[2022-06-07T12:39:44.587Z] Executing 'HandleClaimsForms' (Reason='(null)', Id=0fb9644d-6748-4791-96cf-a92f6c161a97)
[2022-06-07T12:39:44.587Z] Executing 'HandleClaimsForms' (Reason='(null)', Id=9a39a169-a91d-4524-b5e5-63e6226f70ec)
[2022-06-07T12:39:44.587Z] Executing 'HandleClaimsForms' (Reason='(null)', Id=b3697f6b-7c96-4497-826f-3894359ff361)
[2022-06-07T12:39:44.587Z] Executing 'HandleClaimsForms' (Reason='(null)', Id=3ca3bbce-1657-453b-a5b3-e9dbdb940302)
Here are the Function definitions:
Function entrypoint
[FunctionName("Function1")]
public async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
[DurableClient] IDurableOrchestrationClient starter,
ILogger log)
{
string instanceID = await starter.StartNewAsync("ExecuteWork");
return starter.CreateCheckStatusResponse(req, instanceID);
}
Orchestration trigger
[FunctionName("ExecuteWork")]
public async Task<bool> ProcessForms(
[OrchestrationTrigger] IDurableOrchestrationContext context,
ILogger log)
{
bool success = true;
try
{
await context.CallActivityAsync("HandleClaimsForms", success);
await context.CallActivityAsync("HandleApplicationForms", success);
return success;
}
catch (Exception err)
{
log.LogInformation($"The following error was thrown: {err}");
success = false;
return success;
}
}
HandleClaimsForms activity
[FunctionName("HandleClaimsForms")]
public async Task<bool> ProcessClaimsForms(
    [ActivityTrigger] bool success)
{
    await _docHandler.Handle();
    return success;
}
HandleApplicationForm activity
[FunctionName("HandleApplicationForms")]
public async Task<bool> ProcessApplicationForms(
[ActivityTrigger]bool success)
{
await _appHandler.HandleJsonApplicationFormAsync();
return success;
}
One workaround you can follow to resolve the above issue.
Based on the Microsoft documentation about reliability:
Durable Functions uses event sourcing transparently. Behind the scenes, the await (C#) or yield (JavaScript/Python) operator in an orchestrator function yields control of the orchestrator thread back to the Durable Task Framework dispatcher.
The orchestrator wakes up and re-executes the entire function from scratch to rebuild the local state whenever an orchestration function is given more work to do (for instance, when a response message is received or a durable timer expires). The Durable Task Framework analyses the orchestration's execution history if the code tries to invoke a function or do any other async task during the replay. In the event that it discovers that the activity function has already run and produced a result, it replays that result while the orchestrator code keeps running. Replay continues until the function code terminates or until additional async work has been scheduled.
Another approach is to use dependency injection, as sketched below.
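For example, if _docHandler and _appHandler are injected into the function classes, they can be registered in a Startup class; a minimal sketch, assuming hypothetical IDocHandler/IAppHandler abstractions:
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyFunctionApp.Startup))]

namespace MyFunctionApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Hypothetical handler registrations; the function classes then
            // receive these dependencies through constructor injection.
            builder.Services.AddSingleton<IDocHandler, DocHandler>();
            builder.Services.AddSingleton<IAppHandler, AppHandler>();
        }
    }
}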
For more information, please refer to these similar SO threads: HTTP Trigger on demand azure function calling itself multiple times and Azure Durable Function Activity seems to run multiple times and not complete.

Multiple Azure EventHub trigger to single function in Azure Function app

I want to run the same functionality (with a few changes based on the message data) from two different Event Hubs.
Is it possible to attach two consumer groups to a single function?
It did not work even though I added it to function.json.
The short answer is no. You cannot bind multiple input triggers to the same function:
https://github.com/Azure/azure-webjobs-sdk-script/wiki/function.json
A function can only have a single trigger binding, and can have multiple input/output bindings.
However, you can call the same "shared" code from multiple functions by either wrapping the shared code in a helper method, or using Precompiled Functions.
Recommended practice here is to share business logic between functions by using the fact that a single function app can be composed of multiple functions.
MyFunctionApp
| host.json
|____ business
| |____ logic.js
|____ function1
| |____ index.js
| |____ function.json
|____ function2
| |____ index.js
| |____ function.json
In "function1/index.js" and "function2/index.js"
var logic = require("../business/logic");
module.exports = logic;
The function.json of function1 and function2 can be configured to different triggers.
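For example, function1/function.json might declare an Event Hub trigger for the first hub, while function2 points at the second hub (or another consumer group); the names and settings here are placeholders:
{
  "bindings": [
    {
      "type": "eventHubTrigger",
      "direction": "in",
      "name": "eventHubMessages",
      "eventHubName": "eh-1",
      "connection": "EH1_CONN_STR",
      "consumerGroup": "$Default",
      "cardinality": "many"
    }
  ]
}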
In "business/logic.js
module.exports = function (context, req) {
    // This is where shared code goes. As an example, for an HTTP trigger:
    context.res = {
        body: "<b>Hello World</b>",
        status: 201,
        headers: {
            'content-type': "text/html"
        }
    };
    context.done();
};
Is it possible to attach two consumer groups to a single function?
Assuming you're looking for a trigger and don't want to do your own polling using EventProcessorClient inside your function. You could schedule a function to periodically use the API to fetch messages from multiple event hubs and process them, but then you need to implement all the built-in logic (polling, handling multiple partitions, checkpointing, scaling, ...) that you get when you use triggers.
A couple of workarounds:
Capture: If the event hubs are in the same namespace, you can enable Capture on all of them, then create an Event Grid trigger for your function. You'll get an event containing the path of the capture file, e.g.:
{
  "topic": "/subscriptions/9fac-4e71-9e6b-c0fa7b159e78/resourcegroups/kash-test-01/providers/Microsoft.EventHub/namespaces/eh-ns",
  "subject": "eh-1",
  "eventType": "Microsoft.EventHub.CaptureFileCreated",
  "id": "b5aa3f62-15a1-497a-b97b-e688d4368db8",
  "data": {
    "fileUrl": "https://xxx.blob.core.windows.net/capture-fs/eh-ns/eh-1/0/2020/10/28/21/39/01.avro",
    "fileType": "AzureBlockBlob",
    "partitionId": "0",
    "sizeInBytes": 8011,
    "eventCount": 5,
    "firstSequenceNumber": 5,
    "lastSequenceNumber": 9,
    "firstEnqueueTime": "2020-10-28T21:40:28.83Z",
    "lastEnqueueTime": "2020-10-28T21:40:28.908Z"
  },
  "dataVersion": "1",
  "metadataVersion": "1",
  "eventTime": "2020-10-28T21:41:02.2472744Z"
}
Obviously this is not real-time; the minimum capture window you can set is 1 minute, and there may be a small delay between when the captured Avro file is written and when your function is invoked.
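A rough sketch of an Event Grid-triggered function that reads the fileUrl from such an event (this assumes the Azure.Messaging.EventGrid binding and System.Text.Json; downloading and parsing the Avro blob is left out):
[FunctionName("CaptureFileHandler")]
public static void CaptureFileHandler(
    [EventGridTrigger] EventGridEvent eventGridEvent,
    ILogger log)
{
    // The Microsoft.EventHub.CaptureFileCreated payload contains the blob URL of the Avro file.
    using var data = JsonDocument.Parse(eventGridEvent.Data.ToString());
    string fileUrl = data.RootElement.GetProperty("fileUrl").GetString();

    log.LogInformation($"New capture file for {eventGridEvent.Subject}: {fileUrl}");
    // Download the Avro file from blob storage and process the captured events here.
}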
At least in Java there is no restriction that you must have a separate class for each function. So you can do this:
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.EventHubTrigger;
import com.microsoft.azure.functions.annotation.FunctionName;

public class EhConsumerFunctions {

    private void processEvent(String event) {
        // process...
    }

    @FunctionName("eh1_consumer")
    public void eh1_consumer(
            @EventHubTrigger(name = "event", eventHubName = "eh-ns", connection = "EH1_CONN_STR") String event,
            final ExecutionContext context) {
        processEvent(event);
    }

    @FunctionName("eh2_consumer")
    public void eh2_consumer(
            @EventHubTrigger(name = "event", eventHubName = "eh-ns", connection = "EH2_CONN_STR") String event,
            final ExecutionContext context) {
        processEvent(event);
    }
}
and define EH1_CONN_STR and EH2_CONN_STR in your app settings.
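The same pattern works in C# as well; a minimal sketch with two Event Hub triggered functions sharing one helper (hub names and connection setting names are placeholders):
public static class EhConsumerFunctions
{
    // Shared processing logic used by both triggers.
    private static void ProcessEvent(string message, ILogger log)
    {
        log.LogInformation($"Processing: {message}");
    }

    [FunctionName("Eh1Consumer")]
    public static void Eh1Consumer(
        [EventHubTrigger("eh-1", Connection = "EH1_CONN_STR")] string message,
        ILogger log)
    {
        ProcessEvent(message, log);
    }

    [FunctionName("Eh2Consumer")]
    public static void Eh2Consumer(
        [EventHubTrigger("eh-2", Connection = "EH2_CONN_STR")] string message,
        ILogger log)
    {
        ProcessEvent(message, log);
    }
}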

How to do Async in Azure WebJob function

I have an async method that gets api data from a server. When I run this code on my local machine, in a console app, it performs at high speed, pushing through a few hundred http calls in the async function per minute. When I put the same code to be triggered from an Azure WebJob queue message however, it seems to operate synchronously and my numbers crawl - I'm sure I am missing something simple in my approach - any assistance appreciated.
(1) The WebJob function that listens for a message on the queue and kicks off the API get process when a message is received:
public class Functions
{
    // This function will get triggered/executed when a new message is written
    // on an Azure Queue called queue.
    public static async Task ProcessQueueMessage([QueueTrigger("myqueue")] string message, TextWriter log)
    {
        var getAPIData = new GetData();
        getAPIData.DoIt(message).Wait();
        log.WriteLine("*** done: " + message);
    }
}
(2) The class that, outside Azure, works in async mode at speed:
class GetData
{
    // wrapper that is called by the message function trigger
    public async Task DoIt(string MessageFile)
    {
        await CallAPI(MessageFile);
    }

    public async Task<string> CallAPI(string MessageFile)
    {
        // create a list of sample APIs to call...
        var apiCallList = new List<string>();
        apiCallList.Add("localhost/?q=1");
        apiCallList.Add("localhost/?q=2");
        apiCallList.Add("localhost/?q=3");
        apiCallList.Add("localhost/?q=4");
        apiCallList.Add("localhost/?q=5");

        // setup httpclient
        HttpClient client =
            new HttpClient() { MaxResponseContentBufferSize = 10000000 };
        var timeout = new TimeSpan(0, 5, 0); // 5 min timeout
        client.Timeout = timeout;

        // create a list of http api get Task...
        IEnumerable<Task<string>> allResults = apiCallList.Select(str => ProcessURLPageAsync(str, client));
        // wait for them all to complete, then move on...
        await Task.WhenAll(allResults);
        return allResults.ToString();
    }

    async Task<string> ProcessURLPageAsync(string APIAddressString, HttpClient client)
    {
        string page = "";
        HttpResponseMessage resX;
        try
        {
            // set the address to call
            Uri URL = new Uri(APIAddressString);
            // execute the call
            resX = await client.GetAsync(URL);
            page = await resX.Content.ReadAsStringAsync();
            string rslt = page;
            // do something with the api response data
        }
        catch (Exception ex)
        {
            // log error
        }
        return page;
    }
}
First, because your triggered function is async, you should use await rather than .Wait(). Wait() will block the current thread.
public static async Task ProcessQueueMessage([QueueTrigger("myqueue")] string message, TextWriter log)
{
    var getAPIData = new GetData();
    await getAPIData.DoIt(message);
    log.WriteLine("*** done: " + message);
}
In any case, you'll be able to find useful information in the documentation:
Parallel execution
If you have multiple functions listening on different queues, the SDK will call them in parallel when messages are received simultaneously.
The same is true when multiple messages are received for a single queue. By default, the SDK gets a batch of 16 queue messages at a time and executes the function that processes them in parallel. The batch size is configurable. When the number being processed gets down to half of the batch size, the SDK gets another batch and starts processing those messages. Therefore the maximum number of concurrent messages being processed per function is one and a half times the batch size. This limit applies separately to each function that has a QueueTrigger attribute.
Here is a sample code to configure the batch size:
var config = new JobHostConfiguration();
config.Queues.BatchSize = 50;
var host = new JobHost(config);
host.RunAndBlock();
However, it is not always a good option to have too many threads running at the same time, and it could lead to bad performance.
Another option is to scale out your webjob:
Multiple instances
if your web app runs on multiple instances, a continuous WebJob runs on each machine, and each machine will wait for triggers and attempt to run functions. The WebJobs SDK queue trigger automatically prevents a function from processing a queue message multiple times; functions do not have to be written to be idempotent. However, if you want to ensure that only one instance of a function runs even when there are multiple instances of the host web app, you can use the Singleton attribute.
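If you do want only one invocation at a time even across instances, a minimal sketch of applying the WebJobs SDK Singleton attribute to the queue function from above:
public class Functions
{
    // With [Singleton], only one invocation of this function runs at a time,
    // even when the WebJob is scaled out to multiple instances.
    [Singleton]
    public static async Task ProcessQueueMessage([QueueTrigger("myqueue")] string message, TextWriter log)
    {
        var getAPIData = new GetData();
        await getAPIData.DoIt(message);
        log.WriteLine("*** done: " + message);
    }
}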
Have a read of the WebJobs SDK documentation: the behaviour you should expect is that your process will run and process one message at a time, but it will scale up if more instances of your App Service are created. If you had multiple queues, they would trigger in parallel.
To improve performance, see the configuration settings section in the documentation linked above, which covers the number of messages that can be triggered in a batch.
If you want to process multiple messages in parallel, though, and don't want to rely on instance scaling, then you need to use threading instead (async isn't about multi-threaded parallelism, but about making more efficient use of the thread you're using). So your queue trigger function should read the message from the queue, then create a thread, "fire and forget" that thread, and return from the trigger function. This will mark the message as processed and allow the next message on the queue to be processed, even though in theory you're still processing the earlier one. Note that you will need to include your own logic for error handling and for ensuring that data won't get lost if your thread throws an exception or can't process the message (e.g. put it on a poison queue).
The other option is to not use the [QueueTrigger] attribute at all, and instead use the Azure Storage Queues SDK directly to connect to the queue and process the messages per your requirements.
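A rough sketch of that manual approach, assuming the Azure.Storage.Queues package and reusing the GetData class from above (the queue name and connection setting are placeholders):
using System;
using System.Linq;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

class QueuePoller
{
    static async Task Main()
    {
        // Placeholder connection setting and queue name.
        var queueClient = new QueueClient(
            Environment.GetEnvironmentVariable("AzureWebJobsStorage"), "myqueue");

        while (true)
        {
            // Pull up to 16 messages; they stay invisible for 5 minutes while being processed.
            QueueMessage[] messages = await queueClient.ReceiveMessagesAsync(
                maxMessages: 16, visibilityTimeout: TimeSpan.FromMinutes(5));

            // Process the whole batch in parallel.
            await Task.WhenAll(messages.Select(async msg =>
            {
                var getAPIData = new GetData();
                await getAPIData.DoIt(msg.MessageText);

                // Only delete the message once processing has succeeded.
                await queueClient.DeleteMessageAsync(msg.MessageId, msg.PopReceipt);
            }));

            if (messages.Length == 0)
            {
                // Back off briefly when the queue is empty.
                await Task.Delay(TimeSpan.FromSeconds(5));
            }
        }
    }
}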
