We are switching our Azure WebJobs over to Azure Functions (for multiple reasons besides this post), but internally we can't really agree on the architecture for those functions.
Currently we have one WebJob that handles a single task from A to Z.
E.g. status emails (triggered by the scheduler):
1. Looks up all recipients
2. Sends an email to each of them
3. Logs success/failure for each individual recipient
4. Logs success/failure for the whole run
And we have multiple WebJobs that do similar tasks.
Now we have 3 ways we could implement this in the future:
1. One-to-one conversion: move the complete WebJob functionality into one Azure Function.
2. Split the above process into 4 different Azure Functions, e.g. one that looks up the recipients and then calls another function that sends out the email, and so on.
3. Combine all WebJobs into one Azure Function.
Personally, I tend towards option 3, but some team members lean towards option 1. What do you think?
I would go with option 3 too. You can use Durable Functions to control the workflow and create activities for each step (1-4). For example:
[FunctionName("Chaining")]
public static async Task<object> Run(
[OrchestrationTrigger] IDurableOrchestrationContext context)
{
try
{
var recipients = await context.CallActivityAsync<object>("GetAllRecipients", null);
foreach(var recipient in recipients)
{
//maybe return a complex object with more info about the failure
var success = await context.CallActivityAsync<object>("SendEmail", recipient);
if (! success)
{
await context.CallActivityAsync<object>("LogError", recipient);
}
}
return await context.CallActivityAsync<object>("NotifyCompletion", null);
}
catch (Exception ex)
{
// Error handling or compensation goes here.
await context.CallActivityAsync<object>("LogError", ex);
}
}
more info: https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview
My problem is that I don't know how to handle external calls that mutate state but also need validation before they are executed.
Here is my command handler:
public async Task<IAggregateRoot> ExecuteAsync(Command command)
{
    var sandbox = await _aggregateStore.GetByIdAsync<Sandbox>(command.SandboxId);
    var response = await _azureService.CreateRedisInstance(sandbox.Id);
    if (response.IsSuccess)
    {
        sandbox.CreateRedisDetails(response);
        return sandbox;
    }
    sandbox.FailSetup(response.Errors.Select(e => e.Message));
    return sandbox;
}
The problem here is that the sandbox aggregate needs to be in the correct state before the external service is called, and I cannot satisfy both concerns in one place. My only idea is to create a separate method, CanCreateRedisInstance, that checks whether the aggregate state is valid, and only then call the external service. What I don't like is that I'm introducing validation methods:
public async Task<IAggregateRoot> ExecuteAsync(Command command)
{
    var sandbox = await _aggregateStore.GetByIdAsync<Sandbox>(command.SandboxId);
    if (!sandbox.CanCreateRedisInstance())
    {
        throw new ValidationException("something");
    }
    var response = await _azureService.CreateRedisInstance(sandbox.Id);
    if (response.IsSuccess)
    {
        sandbox.CreateRedisDetails(response);
        return sandbox;
    }
    sandbox.FailSetup(response.Errors.Select(e => e.Message));
    return sandbox;
}
The other approach I thought of is to make the whole process more CQRS-ish:
public async Task<IAggregateRoot> ExecuteAsync(Command command)
{
    var sandbox = await _aggregateStore.GetByIdAsync<Sandbox>(command.SandboxId);
    sandbox.ScheduleRedisInstanceCreation();
    return sandbox;
}
public void ScheduleRedisInstanceCreation()
{
    if (RedisInstanceDetails != null)
    {
        throw new ValidationException("something");
    }
    RedisInstanceDetails = RedisInstanceDetails.Scheduled(/* some arguments */);
    AddEvent(new RedisInstanceCreationScheduled(/* some arguments */));
}
The RedisInstanceCreationScheduled event is sent to a queue and picked up by an event handler, which calls the external service and, based on the result, creates other events:
public async Task ExecuteAsync(RedisInstanceCreationScheduled @event)
{
    var sandbox = await _aggregateStore.GetByIdAsync<Sandbox>(@event.SandboxId);
    var response = await _azureService.CreateRedisInstance(sandbox.Id);
    if (response.IsSuccess)
    {
        sandbox.CreateRedisDetails(response);
    }
    else
    {
        sandbox.FailSetup(response.Errors.Select(e => e.Message));
    }
    _aggregateStore.Save(sandbox);
}
However, this approach adds some extra complexity, and I am not quite sure whether an event handler should modify an aggregate.
Both approaches are possible.
Why shouldn't validation stay in the handler? When you change something in the domain, the domain object also validates the action and denies it if it's not possible. Here you just need to interact with an external service to verify it.
The external service is just an interface in the domain layer, which you implement with a concrete class in the infrastructure layer. Hence you don't have a direct binding to Azure, but a service, let's say CloudService, whose implementation uses Azure. This allows you to throw domain-related exceptions from classes that live in the infrastructure layer.
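As a minimal sketch of that separation (the interface, method, and response-type names here are illustrative, not taken from the original code):

// Domain layer: just an abstraction, no reference to Azure.
public interface ICloudService
{
    Task<RedisCreationResponse> CreateRedisInstanceAsync(Guid sandboxId);
}

// Infrastructure layer: the Azure-specific implementation, free to translate
// provider failures into domain exceptions.
public class AzureCloudService : ICloudService
{
    public async Task<RedisCreationResponse> CreateRedisInstanceAsync(Guid sandboxId)
    {
        // ...call Azure here and map the result/errors to domain types...
        throw new NotImplementedException();
    }
}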
The CQRS approach is also valid, but you have to take care when you use it.
You can, for example, start a saga where you ask the external service to create the instance (CreateRedisInstance) and then, according to the event you get back (success or failure), proceed with the next handler. But you really have to take care of intermediate states: what should be done to handle failures between the two actions? You also need a rollback of the first action if the second one ends in failure.
That said, I would go with the first one if there's no real need to handle a complex process. Moreover, it looks like this is all related to the same domain (no cross-domain actions are required), hence there's no real need to increase complexity with a saga in which every success/failure status must be correctly handled.
I have a CRM system; when a contact is added, I want to add them to an accounting system.
I have set up a webhook in the CRM system that passes the contact to an Azure Function. The Azure Function connects to the accounting system's API and creates the contact there.
There is some other processing I need to do before the user can be added to the accounting system.
I need about a 5-minute delay after receiving the webhook before I can add the user to the accounting system.
I would rather not add a pause or delay statement in the Azure Function, as there is a timeout limit; also, it's on a Consumption plan, so I want each function to finish quickly.
I am using PowerShell Core.
Is a Service Bus Queue the best way to do this?
You could use a Timer in a Durable Function for this. Then you won't need an extra component like a queue; a Durable Function is all you need. For example (warning: I haven't compiled this):
Note: Durable Functions do support PowerShell, but I don't ;-) so the code below is just to illustrate the concept.
[FunctionName("Orchestration_HttpStart")]
public static async Task<HttpResponseMessage> HttpStart(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestMessage req,
[DurableClient] IDurableOrchestrationClient starter,
ILogger log)
{
// Function input comes from the request content.
string content = await req.Content.ReadAsStringAsync();
string instanceId = await starter.StartNewAsync("Orchestration", content);
log.LogInformation($"Started orchestration with ID = '{instanceId}'.");
return starter.CreateCheckStatusResponse(req, instanceId);
}
[FunctionName("Orchestration")]
public static async Task Run(
[OrchestrationTrigger] IDurableOrchestrationContext context)
{
var requestContent = context.GetInput<string>();
DateTime waitAWhile = context.CurrentUtcDateTime.Add(TimeSpan.FromMinutes(5));
await context.CreateTimer(waitAWhile, CancellationToken.None);
await context.CallActivityAsync("ProcessEvent", requestContent);
}
[FunctionName("ProcessEvent")]
public static string ProcessEvent([ActivityTrigger] string requestContent, ILogger log)
{
// Do something here with requestContent
return "Done!";
}
"I would rather not add a pause or delay statement in the Azure Function, as there is a timeout limit; also, it's on a Consumption plan, so I want each function to finish quickly."
The 5-minute delay introduced by the timer won't count as active time, so you won't run out of time on the Consumption plan during those minutes.
"Is a Service Bus Queue the best way to do this?"
You can use one, but an Azure Storage queue is cheaper for your scenario.
What you can do is create a timer-triggered function (0 */5 * * * *) that checks for messages in a queue. If the time between the execution and the time the message was created is greater than 5 minutes, you process and complete the message; otherwise, don't complete the message and it will return to the queue for the next execution.
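A minimal sketch of that timer approach in C#, assuming the Azure.Storage.Queues package; the queue name and the processing step are illustrative:

[FunctionName("DelayedContactProcessor")]
public static async Task Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
{
    var queueClient = new QueueClient(
        Environment.GetEnvironmentVariable("AzureWebJobsStorage"), "contacts"); // hypothetical queue name
    QueueMessage[] messages = await queueClient.ReceiveMessagesAsync(maxMessages: 32);
    foreach (var message in messages)
    {
        // Only process messages that have been waiting for at least 5 minutes.
        if (DateTimeOffset.UtcNow - message.InsertedOn >= TimeSpan.FromMinutes(5))
        {
            // ...add the contact to the accounting system here...
            await queueClient.DeleteMessageAsync(message.MessageId, message.PopReceipt);
        }
        // Otherwise leave the message alone; it becomes visible again for the next run.
    }
}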
I have the following queue-triggered Azure Function, which is bound to an Azure table for output:
[FunctionName("TestFunction")]
public static async Task<IActionResult> Run(
[QueueTrigger("myqueue", Connection = "connection")]string myQueueItem,
[Table("TableXyzObject"), StorageAccount("connection")] IAsyncCollector<TableXyzObject> tableXyzObjectRecords)
{
var tableAbcObject = new TableXyzObject();
try
{
tableAbcObject.PartitionKey = DateTime.UtcNow.ToString("MMddyyyy");
tableAbcObject.RowKey = Guid.NewGuid();
tableAbcObject.RandomString = myQueueItem;
await tableXyzObjectRecords.AddAsync(tableAbcObject);
}
catch (Exception ex)
{
}
return new OkObjectResult(tableAbcObject);
}
public class TableXyzObject : TableEntity
{
public string RandomString { get; set; }
}
}
}
I am looking for a way to read 15 messages from a poison queue, which is different from myqueue (the queue trigger on the above Azure Function), and batch-insert them into a dynamic table (tableXyz, tableAbc, etc.) based on a few conditions in the queue message. Since we have different poison queues, we want to pick up messages from multiple poison queues (the name of the poison queue will be provided in the myqueue message). This is done to avoid spinning up a new Azure Function every time we have a new poison queue.
Following is the approach I have in mind:
--> I might have to fetch 15 queue messages using a new QueueClient and its ReceiveMessages(15) method from the Azure.Storage.Queues package.
--> And do a batch insert using the TableBatchOperation class (I cannot use an output binding).
Is there any better approach than this?
Unfortunately, storage queues don't have a great solution for this. If you want it to be dynamic, then the idea of implementing your own queue clients and table writes is probably your best option. The one thing I would suggest changing is to use a timer trigger instead of a queue trigger. If you are putting a message on your trigger queue every time you add something to the poison queue, it would work as is; but if not, a timer trigger ensures that poisoned messages are handled in a timely fashion.
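A rough sketch of what that could look like, assuming the Azure.Storage.Queues and Microsoft.Azure.Cosmos.Table packages; the queue/table names and the fixed TableXyzObject target are illustrative (the real code would pick the queue and table from the message contents):

[FunctionName("PoisonQueueProcessor")]
public static async Task Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
{
    string connection = Environment.GetEnvironmentVariable("connection");
    var queueClient = new QueueClient(connection, "myqueue-poison");
    QueueMessage[] messages = await queueClient.ReceiveMessagesAsync(maxMessages: 15);
    if (messages.Length == 0) return;

    var table = CloudStorageAccount.Parse(connection)
        .CreateCloudTableClient()
        .GetTableReference("TableXyz");

    // All entities in one batch must share a partition key; here they all use today's date.
    var batch = new TableBatchOperation();
    foreach (var message in messages)
    {
        batch.Insert(new TableXyzObject
        {
            PartitionKey = DateTime.UtcNow.ToString("MMddyyyy"),
            RowKey = Guid.NewGuid().ToString(),
            RandomString = message.MessageText
        });
    }
    await table.ExecuteBatchAsync(batch);

    // Only delete the messages once the batch insert has succeeded.
    foreach (var message in messages)
        await queueClient.DeleteMessageAsync(message.MessageId, message.PopReceipt);
}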
Original Answer (incorrectly relating to Service Bus queues)
Bryan is correct that creating a new queue client inside your function isn't the best way to go about this. Fortunately, the Service Bus extension does allow batching; unfortunately, the docs haven't quite caught up yet.
Just make your trigger receive an array:
[QueueTrigger("myqueue", Connection = "connection")]string myQueueItem[]
You can set your max batch size in the host.json:
"extensions": {
"serviceBus": {
"batchOptions": {
"maxMessageCount": 15
}
}
}
I have a scenario in which I am calling RegisterMessageHandler of SubscriptionClient class of Azure Service Bus library.
Basically, I am using a trigger-based approach to receive messages from Service Bus in one of my services, a stateless service running in a Service Fabric environment.
So I am not closing the subscriptionClient object immediately; rather, I keep it open for the lifetime of the service so that it keeps receiving messages from the Azure Service Bus topic.
When the service needs to shut down (for whatever reason), I want to handle the cancellation token that Service Fabric passes into the service.
My question is: how can I handle the cancellation token in the handler registered via RegisterMessageHandler, which gets called whenever a new message is received?
I also want to close the subscription client gracefully, i.e. if a message is already being processed, I want that message to finish processing completely before the connection is closed.
Below is the code I am using.
Currently we are following the approach below:
1. Locking the processing of the message with a semaphore and releasing the lock in the finally block.
2. Calling cancellationToken.Register to react when cancellation happens, acquiring and releasing the lock in the Register callback.
public class AzureServiceBusReceiver
{
    private SubscriptionClient subscriptionClient;
    private static Semaphore semaphoreLock;

    public AzureServiceBusReceiver(ServiceBusReceiverSettings settings)
    {
        semaphoreLock = new Semaphore(1, 1);
        subscriptionClient = new SubscriptionClient(
            settings.ConnectionString, settings.TopicName, settings.SubscriptionName, ReceiveMode.PeekLock);
    }

    public void Receive(CancellationToken cancellationToken)
    {
        var options = new MessageHandlerOptions(e =>
        {
            return Task.CompletedTask;
        })
        {
            AutoComplete = false,
        };

        subscriptionClient.RegisterMessageHandler(
            async (message, token) =>
            {
                semaphoreLock.WaitOne();
                try
                {
                    // Checking inside try/finally so the semaphore is always
                    // released, even when we bail out early.
                    if (subscriptionClient.IsClosedOrClosing)
                        return;
                    CancellationToken combinedToken = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken, token).Token;
                    // message processing logic
                }
                catch (Exception ex)
                {
                    await subscriptionClient.DeadLetterAsync(message.SystemProperties.LockToken);
                }
                finally
                {
                    semaphoreLock.Release();
                }
            }, options);

        cancellationToken.Register(() =>
        {
            // Wait for any in-flight message to finish before closing.
            semaphoreLock.WaitOne();
            if (!subscriptionClient.IsClosedOrClosing)
                subscriptionClient.CloseAsync().GetAwaiter().GetResult();
            semaphoreLock.Release();
        });
    }
}
Implement the message client as an ICommunicationListener, so when the service is closed, you can block the call until message processing is complete.
Don't use a static Semaphore, so you can safely reuse the code within your projects.
Here is an example of how you can do this.
And here's the NuGet package created from that code.
And feel free to contribute!
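A minimal sketch of that ICommunicationListener shape, reusing the AzureServiceBusReceiver from the question (the wiring here is illustrative):

public class ServiceBusCommunicationListener : ICommunicationListener
{
    private readonly AzureServiceBusReceiver _receiver;
    private readonly CancellationTokenSource _cts = new CancellationTokenSource();

    public ServiceBusCommunicationListener(AzureServiceBusReceiver receiver)
    {
        _receiver = receiver;
    }

    public Task<string> OpenAsync(CancellationToken cancellationToken)
    {
        _receiver.Receive(_cts.Token);
        return Task.FromResult(string.Empty); // no listening address to publish
    }

    public Task CloseAsync(CancellationToken cancellationToken)
    {
        // Signal the receiver; its Register callback waits on the semaphore, so
        // the client is only closed after any in-flight message completes.
        _cts.Cancel();
        return Task.CompletedTask;
    }

    public void Abort()
    {
        _cts.Cancel();
    }
}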
I developed a couple of microservices using Azure Functions; every service has an independent use case and its own programming language.
Now I have a use case that requires calling all the services in the order below, so I developed one more Azure Function that invokes each service in the given order. The code below runs fine.
public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)] HttpRequestMessage req, TraceWriter log)
{
    string returnValue = string.Empty;
    dynamic data = await req.Content.ReadAsStringAsync();
    if (data == null)
    {
        return req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a value in the request body");
    }
    else
    {
        string body = data.ToString();
        var transformResult = await HttpRestHelper.CreateRequestAsync(AppConfiguration.TransformServiceEndPoint, body, HttpMethod.POST);
        var validationResult = await HttpRestHelper.CreateRequestAsync(AppConfiguration.ValidationServiceEndPoint, transformResult.Result.ToString(), HttpMethod.POST);
        if (validationResult.Result != null && Convert.ToBoolean(validationResult.Result))
        {
            var encryptionResult = await HttpRestHelper.CreateRequestAsync(AppConfiguration.EncryptionServiceEndPoint, transformResult.Result.ToString(), HttpMethod.POST);
            var storageResult = await HttpRestHelper.CreateRequestAsync(AppConfiguration.StorageServiceEndPoint, encryptionResult.Result.ToString(), HttpMethod.POST);
            returnValue = storageResult.Result.ToString();
        }
        else
        {
            returnValue = "Validation Failed";
        }
        return req.CreateResponse(HttpStatusCode.OK, returnValue, "text/plain");
    }
}
Question
If every microservice takes 1 minute to execute, I have to wait ~4 minutes in my super service and get billed for 4+ minutes. (We shouldn't need to pay for waiting time :) https://www.youtube.com/watch?v=lVwWlZ-4Nfs)
I want to use Azure Durable Functions here, but I couldn't find any method to call an external URL.
Please help me or suggest a better solution.
Thanks in advance!
Durable Orchestration Functions don't work with arbitrary HTTP endpoints. Instead, you need to implement the individual steps as Activity-triggered functions.
The orchestration uses message queues behind the scenes rather than HTTP. HTTP is request-response in nature, so you would have to keep the connection open and thus pay for the waiting time.
A queue-based orchestrator also gives you some extra resilience in the face of intermittent failures.
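A minimal sketch of that shape, reusing HttpRestHelper and the AppConfiguration endpoints from the question (the activity names are illustrative):

[FunctionName("SuperServiceOrchestrator")]
public static async Task<string> RunOrchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    var body = context.GetInput<string>();
    var transformed = await context.CallActivityAsync<string>("Transform", body);
    var isValid = await context.CallActivityAsync<bool>("Validate", transformed);
    if (!isValid)
        return "Validation Failed";
    var encrypted = await context.CallActivityAsync<string>("Encrypt", transformed);
    return await context.CallActivityAsync<string>("Store", encrypted);
}

[FunctionName("Transform")]
public static async Task<string> Transform([ActivityTrigger] string body)
{
    // The HTTP call lives inside the activity; the orchestrator is unloaded
    // between awaits, so you are not billed while the activities run.
    var result = await HttpRestHelper.CreateRequestAsync(AppConfiguration.TransformServiceEndPoint, body, HttpMethod.POST);
    return result.Result.ToString();
}
// Validate, Encrypt and Store would follow the same pattern for the other endpoints.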