Can I use an output binding argument in a foreach, multiple times?
[FunctionName("OnClientConnectedDisconnected")]
public async Task Run(
[EventGridTrigger] EventGridEvent eventGridEvent,
[SignalR(HubName = "Lobby")] IAsyncCollector<SignalRMessage> signalRMessage,
[SignalR(HubName = "Lobby")] IAsyncCollector<SignalRGroupAction> signalRGroupMessage,
ILogger log)
{
...
...
foreach (var player in onlineFriends)
{
await signalRGroupMessage.AddAsync(
new SignalRGroupAction
{
GroupName = $"Group_{player}",
Action = GroupAction.Add,
UserId = eventGridData.UserId
}
);
}
}
Whether you can call an output binding multiple times in a function depends on the binding type and how it is written. For example, the table storage output binding will add multiple entities to a table (once for each call), but the queue binding will only post one message. I'm not seeing that the SignalR output binding allows multiple calls.
Remember that bindings are optional--an alternate strategy would be to write your own code to create your group and exit the function after the loop.
Yes, you can.
In C#, you can use the SignalR output binding multiple times; the invocation happens at the point where you call await signalRGroupMessage.AddAsync().
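For completeness, both collectors in the signature above can be called any number of times in the same loop. A minimal sketch (the Target name friendStatusChanged and the notification payload are made up for illustration):
foreach (var player in onlineFriends)
{
    // One group action per friend through the group-action collector...
    await signalRGroupMessage.AddAsync(new SignalRGroupAction
    {
        GroupName = $"Group_{player}",
        Action = GroupAction.Add,
        UserId = eventGridData.UserId
    });

    // ...and, if needed, one message per friend through the message collector.
    await signalRMessage.AddAsync(new SignalRMessage
    {
        GroupName = $"Group_{player}",
        Target = "friendStatusChanged",               // hypothetical client-side method name
        Arguments = new object[] { eventGridData.UserId }
    });
}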
I am creating an architecture to process orders from an ecommerce website that receives 10,000 or more orders every hour. We use an external third-party order fulfillment service, and they have about 5 steps/APIs that we have to run, each dependent on the previous one.
I was thinking of using a fan-in/fan-out approach, where we can use durable functions.
My plan
Once the order is created on our end, we store it in a table with an "order completed" flag.
Run a timer-triggered Azure Function that runs the durable function orchestrator, which calls the activity functions for each step.
Now if it fails, the timer will pick up the order again until it is completed. But my question is: should we put this order on Service Bus and pick it up from there instead of using the timer trigger?
Because there can be more than 10,000 records each hour, we would have to run a query in the timer-triggered function to find orders that are not completed and run the durable orchestrator 10,000 times in a loop. My first question - can I run the durable function in parallel for 10,000 records?
If I use a Service Bus trigger to start the durable orchestrator, it will automatically run the Azure Function and the durable orchestration 10,000 times in parallel, right? But in that case, I will have to build a dead-letter queue function/process so that if it fails, we are able to move it back to the active topic.
Questions:
Is a durable function the correct approach, or is there a better and easier approach?
If yes, is a timer trigger or a Service Bus trigger better to start the orchestrator function?
Can I run the durable function orchestrator in parallel through a timer-triggered Azure Function? I am not talking about calling activity functions in parallel; those cannot run in parallel because we need the output of one to be the input of the next.
This use case fits function chaining. This can be done by:
Have the ordering system put a message on a queue (Storage or Service Bus).
Create an Azure Function with a Storage queue trigger or Service Bus trigger. This will also be the client function that starts the orchestration function.
Create an orchestration function that invokes the 5 step APIs, one activity function for each (similar to the function chaining example).
Create five activity functions, one for each API.
Ordering system
var clientOptions = new ServiceBusClientOptions
{
    TransportType = ServiceBusTransportType.AmqpWebSockets
};

// TODO: Replace the "<NAMESPACE-NAME>" and "<QUEUE-NAME>" placeholders.
ServiceBusClient client = new ServiceBusClient(
    "<NAMESPACE-NAME>.servicebus.windows.net",
    new DefaultAzureCredential(),
    clientOptions);
ServiceBusSender sender = client.CreateSender("<QUEUE-NAME>");

// Send the order id as the message body; the client function below picks it up.
var message = new ServiceBusMessage($"{orderId}");
await sender.SendMessageAsync(message);
Client function
public static class OrderFulfilment
{
    [FunctionName("OrderFulfilment")]
    public static async Task<string> Run(
        [ServiceBusTrigger("<QUEUE-NAME>", Connection = "ServiceBusConnection")] string orderId,
        [DurableClient] IDurableOrchestrationClient starter,
        ILogger log)
    {
        log.LogInformation(orderId);
        // Pass the order id as the orchestration *input* (third argument); a null
        // instance id lets the framework generate one.
        return await starter.StartNewAsync("ChainedApiCalls", null, orderId);
    }
}
Orchestration function
[FunctionName("ChainedApiCalls")]
public static async Task<object> Run([OrchestrationTrigger] IDurableOrchestrationContext fulfillmentContext)
{
try
{
// .... get order with orderId
var a = await context.CallActivityAsync<object>("ApiCaller1", null);
var b = await context.CallActivityAsync<object>("ApiCaller2", a);
var c = await context.CallActivityAsync<object>("ApiCaller3", b);
var d = await context.CallActivityAsync<object>("ApiCaller4", c);
return await context.CallActivityAsync<object>("ApiCaller5", d);
}
catch (Exception)
{
// Error handling or compensation goes here.
}
}
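If the fulfillment APIs can fail transiently, each step can also be wrapped with retries. A small sketch (the retry interval and attempt count are assumptions, not values from the original answer):

// Inside the orchestrator, a plain CallActivityAsync can be replaced with the retrying variant.
var retryOptions = new RetryOptions(
    firstRetryInterval: TimeSpan.FromSeconds(5),
    maxNumberOfAttempts: 3);
var a = await context.CallActivityWithRetryAsync<object>("ApiCaller1", retryOptions, null);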
Activity functions
[FunctionName("ApiCaller1")]
public static string ApiCaller1([ActivityTrigger] IDurableActivityContext fulfillmentApiContext)
{
string input = fulfillmentApiContext.GetInput<string>();
return $"API1 result";
}
[FunctionName("ApiCaller2")]
public static string ApiCaller2([ActivityTrigger] IDurableActivityContext fulfillmentApiContext)
{
string input = fulfillmentApiContext.GetInput<string>();
return $"API2 result";
}
// Repeat 3 more times...
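For illustration, here is roughly what one of these activities might look like if it forwarded the previous step's result to a real fulfillment endpoint over HTTP. The URL, payload shape, and shared static HttpClient are assumptions, not part of the original answer (assumes the usual System.Net.Http and System.Text usings):

// Hypothetical variant of ApiCaller1 that calls an external API.
private static readonly HttpClient httpClient = new HttpClient();

[FunctionName("ApiCaller1")]
public static async Task<string> ApiCaller1([ActivityTrigger] IDurableActivityContext fulfillmentApiContext)
{
    string input = fulfillmentApiContext.GetInput<string>();
    var response = await httpClient.PostAsync(
        "https://fulfillment.example.com/api/step1",                     // placeholder endpoint
        new StringContent(input ?? string.Empty, Encoding.UTF8, "application/json"));
    response.EnsureSuccessStatusCode();
    return await response.Content.ReadAsStringAsync();                   // becomes the input of ApiCaller2
}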
I am trying to track down some occasional Non-Deterministic workflow detected: TaskScheduledEvent: 0 TaskScheduled ... errors in a durable function project of ours. It is infrequent (3 times in 10,000 or so instances).
When comparing the orchestrator code to the constraints documented here, there is one pattern we use that I am not clear on. In an effort to make the orchestrator code cleaner and more readable, we use some private async helper functions to make the actual CallActivityWithRetryAsync call, sometimes wrapped in an exception handler for logging; the main orchestrator function then awaits this helper function.
Something like this simplified sample:
[FunctionName(Name)]
public static async Task RunPipelineAsync(
[OrchestrationTrigger]
DurableOrchestrationContextBase context,
ILogger log)
{
// other steps
await WriteStatusAsync(context, "Started", log);
// other steps
await WriteStatusAsync(context, "Completed", log);
}
private static async Task WriteStatusAsync(
DurableOrchestrationContextBase context,
string status,
ILogger log
)
{
log.LogInformationOnce(context, "log message...");
try
{
var request = new WriteAppDocumentStatusRequest
{
//...
};
await context.CallActivityWithRetryAsync(
"WriteAppStatus",
RetryPolicy,
request
);
}
catch(Exception e)
{
// "optional step" will log errors but not take down the orchestrator
// log here
}
}
In reality these tasks are combined and used with Task.WhenAll, roughly as in the sketch below. Is it valid to be calling these async functions despite the fact that they are not directly on the context?
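A simplified sketch of the combined usage (the second helper and its name are made up for illustration; both helpers ultimately await context.CallActivityWithRetryAsync as in the sample above):

var statusTask = WriteStatusAsync(context, "Started", log);
var auditTask = WriteAuditStatusAsync(context, "Started", log);   // hypothetical second helper
await Task.WhenAll(statusTask, auditTask);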
Yes, what you're doing is perfectly safe because it still results in deterministic behavior. As long as you aren't doing any custom thread scheduling or calling non-durable APIs that have their own separate async callbacks (for example, network APIs typically have callbacks running on a separate thread), you are fine.
If you are ever unsure, I highly recommend you use our Durable Functions C# analyzer to analyze your code for coding errors. This will help flag any coding mistakes that could result in non-deterministic workflow errors.
UPDATE
Note: the current version of the analyzer will require you to add a [Deterministic] attribute to your private async function, like this:
[Deterministic]
private static async Task WriteStatusAsync(
DurableOrchestrationContextBase context,
string status,
ILogger log
)
{
// ...
}
This lets it know that the private async method is being used by your orchestrator function and that it also needs to be analyzed. If you're using Durable Functions 1.8.3 or below, the [Deterministic] attribute will not exist. However, you can create your own custom attribute with the same name and the analyzer will honor it. For example:
[Deterministic]
private static async Task WriteStatusAsync(
DurableOrchestrationContextBase context,
string status,
ILogger log
)
{
// ...
}
// Needed for the Durable Functions analyzer
class Deterministic : Attribute { }
Note, however, that we are planning on removing the need for the [Deterministic] attribute in a future release, as we're finding it may not actually be necessary.
How to use "waiting for external" event functionality of durable task framework in the code. Following is a sample code.
context.ScheduleWithRetry<LicenseActivityResponse>(
typeof(LicensesCreatorActivity),
_retryOptions,
input);
I am using the ScheduleWithRetry<> method of the context for scheduling my task on DTF, but when an exception occurs in the code, the above method retries for the _retryOptions number of times.
After the retries are exhausted, the orchestration status is marked as Failed.
I need a way to resume my orchestration on DTF after correcting the cause of the exception.
I have looked into the GitHub code for the method concerned, but with no success.
I have considered two solutions:
Call a framework method (if one exists) to re-queue the orchestration from the state where it failed.
Wrap the orchestration code in try/catch, and in the catch section call a method such as CreateOrchestrationInstanceWithRaisedEventAsync that puts the orchestration in a holding state until an external event triggers it again. A user (via a front-end application) would fire the external event to resume once they have corrected whatever was causing the exception.
These are my understandings; if one of the above is possible, please guide me with some technical suggestions. Otherwise, point me to the correct approach for this task.
For the community's benefit, Salman resolved the issue by doing the following:
"I solved the problem by creating a sub orchestration in case of an exception occurs while performing an activity. The sub orchestration lock the event on azure as pending state and wait for an external event which raise the locked event so that the parent orchestration resumes the process on activity. This process helps if our orchestrations is about to fail on azure durable task framework"
I have figured out the solution to my problem by using "Signal Orchestrations", adapted from code in a GitHub repository.
Following is the solution diagram for the problem.
In this diagram, before the solution was implemented, we only had "Process Activity", which actually executes the activity.
The Azure Storage table stores the multiplier values for an instanceId and activity name. Why we implemented this will become clear later.
The monitoring website is the platform from which a user can re-queue/retry the orchestration activity.
Now we have a pre-step and a post-step.
1. Get Retry Option (Pre-Step)
This method sets the values of the RetryOptions instance.
private RetryOptions ModifyMaxRetries(OrchestrationContext context, string activityName)
{
var failedInstance =
_azureStorageFailedOrchestrationTasks.GetSingleEntity(context.OrchestrationInstance.InstanceId,
activityName);
var configuration = Container.GetInstance<IConfigurationManager>();
if (failedInstance.Result == null)
{
return new RetryOptions(TimeSpan.FromSeconds(configuration.OrderTaskFailureWaitInSeconds),
configuration.OrderTaskMaxRetries);
}
var multiplier = ((FailedOrchestrationEntity)failedInstance.Result).Multiplier;
return new RetryOptions(TimeSpan.FromSeconds(configuration.OrderTaskFailureWaitInSeconds),
configuration.OrderTaskMaxRetries * multiplier);
}
If we have an entry in our Azure Storage table for the instanceId and activity name, we take the multiplier value from the table and use it to update the retry count when creating the RetryOptions instance. Otherwise we use the default retry count, which comes from our config.
Then:
We process the activity with the scheduled number of retries (in case the activity fails).
2. Handle Exceptions (Post-Step)
This method handles the exception when the activity fails to complete even after the number of retries set for it in the RetryOptions instance.
private async Task HandleExceptionForSignal(OrchestrationContext context, Exception exception, string activityName)
{
var failedInstance = _azureStorageFailedOrchestrationTasks.GetSingleEntity(context.OrchestrationInstance.InstanceId, activityName);
if (failedInstance.Result != null)
{
_azureStorageFailedOrchestrationTasks.UpdateSingleEntity(context.OrchestrationInstance.InstanceId, activityName, ((FailedOrchestrationEntity)failedInstance.Result).Multiplier + 1);
}
else
{
//const multiplier when first time exception occurs.
const int multiplier = 2;
_azureStorageFailedOrchestrationTasks.InsertActivity(new FailedOrchestrationEntity(context.OrchestrationInstance.InstanceId, activityName)
{
Multiplier = multiplier
});
}
var exceptionInput = new OrderExceptionContext
{
Exception = exception.ToString(),
Message = exception.Message
};
await context.CreateSubOrchestrationInstance<string>(typeof(ProcessFailedOrderOrchestration), $"{context.OrchestrationInstance.InstanceId}_{Guid.NewGuid()}", exceptionInput);
}
The above code first tries to find the instanceId and activity name in Azure Storage. If an entry already exists, we increment its multiplier; otherwise we add a new row for the instanceId and activity name with the default multiplier value of 2.
Then we create a new exception-context instance to send the exception message and details to the sub-orchestration (which will be shown to a user on the monitoring website). The sub-orchestration waits for an external event fired by a user against the instanceId of the sub-orchestration.
Whenever that event is fired from the monitoring website, the sub-orchestration completes and control returns to the parent orchestration, which starts over. This time, when the pre-step runs, it finds the entry in the Azure Storage table with a multiplier, so the retry options are updated by multiplying the default retry count by that multiplier.
In this way we can keep our orchestrations going and prevent them from failing.
Following is the sub-orchestration class.
internal class ProcessFailedOrderOrchestration : TaskOrchestration<string, OrderExceptionContext>
{
private TaskCompletionSource<string> _resumeHandle;
public override async Task<string> RunTask(OrchestrationContext context, OrderExceptionContext input)
{
await WaitForSignal();
return "Completed";
}
private async Task<string> WaitForSignal()
{
_resumeHandle = new TaskCompletionSource<string>();
var data = await _resumeHandle.Task;
_resumeHandle = null;
return data;
}
public override void OnEvent(OrchestrationContext context, string name, string input)
{
_resumeHandle?.SetResult(input);
}
}
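For reference, the external event that resumes the sub-orchestration can be raised from the monitoring website through the Durable Task Framework's TaskHubClient. A hypothetical sketch (assumes a taskHubClient already wired to the same task hub; the event name is a placeholder, and the sample's OnEvent does not filter by name anyway):

var instance = new OrchestrationInstance { InstanceId = subOrchestrationInstanceId };
await taskHubClient.RaiseEventAsync(instance, "ResumeOrder", "retry");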
I have an async method that gets API data from a server. When I run this code on my local machine in a console app, it performs at high speed, pushing through a few hundred HTTP calls per minute in the async function. When I put the same code into an Azure WebJob triggered from a queue message, however, it seems to operate synchronously and my numbers crawl - I'm sure I am missing something simple in my approach - any assistance appreciated.
(1) The WebJob function that listens for a message on a queue and kicks off the API get process when a message is received:
public class Functions
{
// This function will get triggered/executed when a new message is written
// on an Azure Queue called myqueue.
public static async Task ProcessQueueMessage ([QueueTrigger("myqueue")] string message, TextWriter log)
{
var getAPIData = new GetData();
getAPIData.DoIt(message).Wait();
log.WriteLine("*** done: " + message);
}
}
(2) The class that, outside Azure, runs in async mode at speed...
class GetData
{
// wrapper that is called by the message function trigger
public async Task DoIt(string MessageFile)
{
await CallAPI(MessageFile);
}
public async Task<string> CallAPI(string MessageFile)
{
/// create a list of sample APIs to call...
var apiCallList = new List<string>();
apiCallList.Add("localhost/?q=1");
apiCallList.Add("localhost/?q=2");
apiCallList.Add("localhost/?q=3");
apiCallList.Add("localhost/?q=4");
apiCallList.Add("localhost/?q=5");
// setup httpclient
HttpClient client =
new HttpClient() { MaxResponseContentBufferSize = 10000000 };
var timeout = new TimeSpan(0, 5, 0); // 5 min timeout
client.Timeout = timeout;
// create a list of http api get Task...
IEnumerable<Task<string>> allResults = apiCallList.Select(str => ProcessURLPageAsync(str, client));
// wait for them all to complete, then move on...
await Task.WhenAll(allResults);
return allResults.ToString();
}
async Task<string> ProcessURLPageAsync(string APIAddressString, HttpClient client)
{
string page = "";
HttpResponseMessage resX;
try
{
// set the address to call
Uri URL = new Uri(APIAddressString);
// execute the call
resX = await client.GetAsync(URL);
page = await resX.Content.ReadAsStringAsync();
string rslt = page;
// do something with the api response data
}
catch (Exception ex)
{
// log error
}
return page;
}
}
First, because your triggered function is async, you should use await rather than .Wait(). Wait() will block the current thread.
public static async Task ProcessQueueMessage([QueueTrigger("myqueue")] string message, TextWriter log)
{
var getAPIData = new GetData();
await getAPIData.DoIt(message);
log.WriteLine("*** done: " + message);
}
In any case, you'll be able to find useful information in the documentation:
Parallel execution
If you have multiple functions listening on different queues, the SDK will call them in parallel when messages are received simultaneously.
The same is true when multiple messages are received for a single queue. By default, the SDK gets a batch of 16 queue messages at a time and executes the function that processes them in parallel. The batch size is configurable. When the number being processed gets down to half of the batch size, the SDK gets another batch and starts processing those messages. Therefore the maximum number of concurrent messages being processed per function is one and a half times the batch size. This limit applies separately to each function that has a QueueTrigger attribute.
Here is a sample code to configure the batch size:
var config = new JobHostConfiguration();
config.Queues.BatchSize = 50;
var host = new JobHost(config);
host.RunAndBlock();
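A slightly fuller configuration sketch, with the other queue-related knobs I believe the WebJobs SDK exposes on the same object (property names from memory; verify them against your SDK version):

var config = new JobHostConfiguration();
config.Queues.BatchSize = 50;                                 // messages fetched per batch
config.Queues.NewBatchThreshold = 25;                         // fetch a new batch when this many remain
config.Queues.MaxPollingInterval = TimeSpan.FromSeconds(10);  // back-off when the queue is idle
config.Queues.MaxDequeueCount = 5;                            // message goes to the poison queue after this many failures
var host = new JobHost(config);
host.RunAndBlock();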
However, it is not always a good option to have too many threads running at the same time, and it could lead to poor performance.
Another option is to scale out your WebJob:
Multiple instances
If your web app runs on multiple instances, a continuous WebJob runs on each machine, and each machine will wait for triggers and attempt to run functions. The WebJobs SDK queue trigger automatically prevents a function from processing a queue message multiple times; functions do not have to be written to be idempotent. However, if you want to ensure that only one instance of a function runs even when there are multiple instances of the host web app, you can use the Singleton attribute.
Have a read of the WebJobs SDK documentation - the behaviour you should expect is that your process will run and process one message at a time, but it will scale up if more instances of your App Service are created. If you have multiple queues, they will trigger in parallel.
To improve performance, see the configuration settings section in the link I sent you, which covers the number of messages that can be triggered in a batch.
If you want to process multiple messages in parallel, though, and don't want to rely on instance scaling, then you need to use threading instead (async isn't about multi-threaded parallelism, but about making more efficient use of the thread you're using). So your queue trigger function should read the message from the queue, then create a thread, "fire and forget" that thread, and return from the trigger function. This will mark the message as processed and allow the next message on the queue to be processed, even though in theory you're still processing the earlier one. Note that you will need to include your own logic for error handling and for ensuring the data won't get lost if your thread throws an exception or can't process the message (e.g. put it on a poison queue). A rough sketch of this pattern follows.
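A rough illustration of that fire-and-forget pattern, reusing the GetData class from the question (the error handling is deliberately a stub; persisting the failed message is left to you):

public static void ProcessQueueMessage([QueueTrigger("myqueue")] string message, TextWriter log)
{
    // Fire and forget: the trigger function returns (and the message is considered
    // processed) while the real work continues on a background task.
    Task.Run(async () =>
    {
        try
        {
            var getAPIData = new GetData();
            await getAPIData.DoIt(message);
        }
        catch (Exception ex)
        {
            // The queue message is already gone at this point, so persist the failure
            // yourself, e.g. re-enqueue to a poison queue with the storage SDK (omitted).
            Console.Error.WriteLine($"Failed to process '{message}': {ex}");
        }
    });
    log.WriteLine("*** dispatched: " + message);
}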
The other option is to not use the [QueueTrigger] attribute, and instead use the Azure Storage queues SDK directly to connect to the queue and process the messages per your requirements.
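A rough sketch of polling the queue yourself with the classic Microsoft.WindowsAzure.Storage SDK (which the WebJobs SDK of this era sits on top of); the connection string setting and queue name are assumptions:

public static async Task PollQueueAsync()
{
    var account = CloudStorageAccount.Parse(
        Environment.GetEnvironmentVariable("AzureWebJobsStorage"));
    var queue = account.CreateCloudQueueClient().GetQueueReference("myqueue");

    while (true)
    {
        // Fetch up to 32 messages and process them in parallel.
        var messages = (await queue.GetMessagesAsync(32)).ToList();
        await Task.WhenAll(messages.Select(async m =>
        {
            await new GetData().DoIt(m.AsString);
            await queue.DeleteMessageAsync(m);   // delete only after successful processing
        }));

        if (messages.Count == 0)
        {
            await Task.Delay(TimeSpan.FromSeconds(5));   // back off when the queue is idle
        }
    }
}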
I have a Node.js timerTrigger Azure function that processes a collection and queues the processing results for further processing (by a Node.js queueTrigger function).
The code is something like the following:
module.exports = function (context, myTimer) {
collection.forEach(function (item) {
var items = [];
// do some work and fill 'items'
var toBeQueued = { items: items };
context.bindings.myQueue = toBeQueued;
});
context.done();
};
This code will only queue the last toBeQueued and not each one I'm trying to queue.
Is there any way to queue more than one item?
Update
To be clear, I'm talking about queueing a toBeQueued in each iteration of forEach, not just queueing an array. Yes, there is an issue with Azure Functions because of which I cannot queue an array, but I have a workaround for it; i.e., { items: items }.
Not yet, but we'll address that within the week, stay tuned :) You'll be able to pass an array to the binding as you're trying to do above.
We have an issue tracking this in our public repo here. Thanks for reporting.
Mathewc's answer is the correct one wrt Node.
For C#, you can do this today by specifying ICollector<T> as the type of your output queue parameter.
Below is an example I have of two output queues, one of which I add via a for loop.
public static void Run(Item inbound, DateTimeOffset InsertionTime, ICollector<Item> outbound, ICollector<LogItem> telemetry, TraceWriter log)
{
log.Verbose($"C# Queue trigger function processed: {inbound}");
telemetry.Add(new LogItem(inbound, InsertionTime));
if(inbound.current_generation < inbound.max_generation)
{
for(int i = 0; i < inbound.multiplier; i++) {
outbound.Add(Item.nextGen(inbound));
}
}
}