Azure Function: store requests and trigger the function again later

I have an Azure Function that takes a POST request, does some processing, and then sends a POST with the content of the input request to another endpoint.
I have a requirement that I somehow have to store all incoming requests, and after a fixed time period the Azure Function needs to send the same POST request again.
What I could do is set up cloud storage to hold all the incoming requests and have a second Azure Function with a timer trigger that reads from the storage and sends the requests again. My problem with this is that I would have to set up additional storage, and I am looking for a more cost-efficient method.
Does anyone know an alternative way to "store" incoming requests and have the same Azure Function pick them up again later?

Sounds like what you're looking for is Durable Functions, which can handle exactly this kind of scenario and will do all the complicated parts of storing state/context during delays. They are backed by Azure Storage, but as has been said, this is one of the cheapest services available on Azure.
For what you've described, you might want to use function chaining, combined with a durable timer between the processing steps in the chained functions:
// inside an Orchestrator function ([OrchestrationTrigger] DurableOrchestrationContext context)
// do some initial processing (possibly in an Activity function)
// ...
DateTime waitTime = context.CurrentUtcDateTime.Add(TimeSpan.FromDays(1));
await context.CreateTimer(waitTime, CancellationToken.None);

// call the next Activity function (parameters can also be passed in)
await context.CallActivityAsync("DoNextSetOfThings", null);
At a high level, you'd have something along the lines of:
An HTTP endpoint which you POST to initially
An Orchestrator function which handles the chaining and delays
One or more Activity functions that do the work and can accept parameters and return results to the Orchestrator
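The steps above boil down to "forward now, wait a fixed period, forward again". A minimal in-memory Node.js sketch of that flow (this is NOT the Durable Functions API; a durable timer survives process restarts, while the plain setTimeout below does not, and the forward callback and delayMs parameter are illustrative stand-ins):

```javascript
// In-memory sketch of "forward the request now, then again after a fixed delay".
// `forward` stands in for the outgoing POST to the downstream endpoint.
async function handleIncoming(request, forward, delayMs) {
  await forward(request);                                       // initial POST
  await new Promise((resolve) => setTimeout(resolve, delayMs)); // "timer" stand-in
  await forward(request);                                       // re-send the same request later
}
```

The point of the Durable Functions version is exactly that the delay and the stored request are checkpointed to storage, so the re-send still happens even if the host recycles in between.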

Related

Azure Function - [Error] "Messages cannot be larger than 65536 bytes" - but without a queue

My Function runs directly from a Logic App; the Function sits at the end and receives the message from the Logic App, and sometimes this message can be very large. That's why I've avoided using a queue.
Unfortunately I'm hitting the same error even without a queue, and I cannot find documentation that explains whether it's a Function limitation or whether it's just because I'm using an HTTP request as the trigger.
If I use a blob, will the Function give me the same error with the same data?
The Function uses an HTTP trigger (as it is linked to the Logic App), and the data passed (as a string) can be very small or very large.
The data flows correctly from the Logic App to the Function; I can print the data in the Function when it arrives from the Logic App, but after that I get the error.

Pass HTTP request from Azure Function through Event Grid

I've started thinking through a prototype architecture for a system I want to build based on Azure Functions and Event Grid.
What I would like to achieve is to have a single point of entry (Function) which a variety of external vendors will send Webhook (GET) HTTP requests to. The purpose of the Function is to add some metadata to the payload, and publish the package (metadata + original payload from vendor) to an Event Grid. The Event Grid will then trigger another Function, whose purpose is to respond to the original Webhook HTTP request with e.g. a status 204 HTTP code.
The diagram below is a simplified version of the architecture; the Event Grid will of course publish events to other Functions as well, but for the sake of simplicity…
The challenge I'm facing at the moment is that the context of the original Webhook HTTP request from external vendor is lost after the first Function is triggered. Trying to send the context as part of the event payload to Event Grid feels like an anti-pattern, and regardless I cannot get it working (the .done() function is lost somewhere in the event). Trying to just use context.res = {} and context.done() in the last Function won't respond to the vendor's original HTTP request.
Any ideas here? Is the whole architecture just one big anti-pattern -- will it even work? Or do I have to immediately send the HTTP response in the first Function triggered by the vendor's request?
Thank you!
You are mixing two different patterns: message-driven and event-driven.
Azure Event Grid is a distributed Pub/Sub eventing push model, where a subscriber registers an interest in a source in a loosely decoupled manner.
In your scenario, you want to use an eventing model inside a synchronous request-response message exchange pattern. The request's message exchange context cannot flow through the Pub/Sub eventing model and back to an anonymous endpoint, which is effectively where the response message must be delivered.
However, there are several options for logically integrating these two different patterns; the following are some of them:
using a request-replyTo message exchange pattern, i.e., full-duplex communication with one channel for the request and another for the replyTo;
using a request-response message exchange pattern with a polled state. Basically, your first Function will wait for the subscriber's state and then return to the caller. In a distributed internet architecture, an Azure lease blob can be used to share state between the synchronous part and the asynchronous eventing part.
In your scenario, the first Function would create this lease blob, fire an event to Azure Event Grid, and then periodically poll the state in the lease blob until the aggregation process (multiple subscribers, etc.) has finished.
Also, for this kind of pattern, you can use Azure Durable Functions to simplify integration with the event-driven Event Grid model.
The following sequence diagram shows the use of an Azure lease blob for sharing a "request state" in the distributed model. Note that this pseudo sync/async pattern is suitable for cases where the request-response completes within a short time, less than 60 seconds.
For more details about using a lease blob within an Azure Function, see my answer here.
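As a sketch of the polled-state option, here is a minimal Node.js simulation in which an in-memory Map stands in for the lease blob; the function names, polling interval, timeout, and the 204 result are all illustrative, not part of any Azure API:

```javascript
// In-memory stand-in for the lease blob; the real pattern would read/write
// a blob in Azure Storage instead of this Map.
const requestState = new Map();

// Simulated event subscriber: finishes its work after delayMs and writes the state.
function subscriberCompletes(correlationId, result, delayMs) {
  setTimeout(() => requestState.set(correlationId, { done: true, result }), delayMs);
}

// First function: create the state entry, "fire the event", then poll until done.
async function handleRequest(correlationId, { intervalMs = 20, timeoutMs = 1000 } = {}) {
  requestState.set(correlationId, { done: false });
  // ... publish the event to Event Grid here ...
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const state = requestState.get(correlationId);
    if (state && state.done) return state.result;        // response is ready
    await new Promise((r) => setTimeout(r, intervalMs)); // poll interval
  }
  throw new Error("timed out waiting for subscriber state");
}
```

The timeout matters: as the answer notes, this pseudo sync/async bridge is only reasonable when the whole round trip stays well under the HTTP timeout window.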

How do I use an Azure Storage Queue in a request / response mode?

As we port more of our node.js code into Azure Functions we see references in the Azure docs that using Storage Queues is the preferred way to delegate processing responsibility instead of using http requests to call other functions.
What is the request/response design pattern we should use for this delegation? Specifically, how can the response sent back through a queue be delivered only to the Function where the request originated?
Here's an example of what we want to do:
Request comes in to an HTTP Trigger Function A
Function A places message into Queue X (in JSON format) with the first key being the unique requestId: "ABC345"
Function A starts listening to Queue Y for the response
Function B dequeues this message and does its work
Function B places message with work results added into Queue Y with requestId: "ABC345"
Function A sees this message with requestId: "ABC345" and returns the HTTP response
How can we get Function A to pick up only the request that it is waiting for?
The getMessage method doesn't seem to be able to selectively listen to a queue, only to grab the top message:
getMessage(queue [, options], callback)
Another angle on this would be if we want multiple Worker Functions to listen to Queue X. Function C would process all requests that have requestType: "query" and Function D would process all requestType: "blob". Without this filtering we would use one queue per Worker function. Is that the right way to do it, too?
Note: We're using node.js but I'm assuming that the Queue API's are equivalent across all SDK's.
Azure Queues really don't do request-response.
HTTP processing should not wait on queues. HTTP requests should return quickly (synchronously, < 1 minute), whereas queues are used for longer asynchronous background processing. If HTTP is waiting on queues, it should use the 202 long-running pattern.
Consider:
keep using HTTP. You can port to Functions and keep your underlying patterns.
use queues in a fully asynchronous manner. So A queues a message to kick off B and returns; A2 listens for the response from B.
check out the Durable Functions preview. This allows synchronous calls exactly like what you want. It's in preview, but see https://github.com/Azure/azure-functions-durable-extension
It looks like Azure Logic Apps is the future of orchestrating multiple functions in a request/response pattern. Using Logic Apps you can set up an HTTP trigger (among many others) then set up several functions to run sequentially with conditional logic:
https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-azure-functions

Azure functions with complex c# code

I have complex C# code with multiple classes (and the classes have different functions), and I want to implement it as an Azure Function. The problem is that the architecture is as follows: stream data comes into the function as input, and after complex calculations inside the class functions, I need to return the calculated values as a stream again. The values to be returned live inside the class functions, and I am having trouble finding a way to return them to the Run function. Is there an easy way to do it?
The structure is like this:
public static void Run(string myQueueItem, TraceWriter log)
{
    // gets data from the Service Bus once per second
    // call Function1 ...
}

public class Class1
{
    public void Function1()
    {
        // call Function2 ...
    }
}

public class Class2
{
    public void Function2()
    {
        // the output of interest is produced in here; the program creates an output
        // after 30-31 seconds and then continues to create one about every 20 seconds
    }
}
Many thanks
Your question is not very clear, but based on your comments I understood that your processing logic and calculation results depend on multiple incoming messages.
Azure Function (Run method) will be called once per each Service Bus message in your queue. That means that you need to persist some state, e.g. the previous messages, across Function calls.
Azure Functions themselves don't provide any reliable in-process state storage, so you need to make use of external storage (e.g. SQL Database, Cosmos DB, Table Storage etc).
Your function flow would look something like this:
Run is called for an incoming Service Bus message.
You load the previous state of the function from external storage.
You instantiate your Class1/Class2/etc. hierarchy.
You pass the incoming message AND your state to Class1/Class2.
Your logic produces an output message, new state, or both.
You persist the state back to the storage.
You return the output message to the output binding.
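A minimal sketch of this flow in Node.js, with an in-memory Map standing in for the external storage and a toy "aggregate every three messages" rule in place of the real calculation (both are illustrative assumptions):

```javascript
// In-memory stand-in for external storage (Table Storage, Cosmos DB, ...).
const stateStore = new Map();

// One "Run" invocation: load state, fold in the new message, persist, maybe emit.
function run(key, message) {
  const state = stateStore.get(key) ?? { messages: [] };  // load previous state
  state.messages.push(message);                           // message + state go to the logic
  let output = null;
  if (state.messages.length >= 3) {                       // logic decides when to emit
    output = state.messages.reduce((a, b) => a + b, 0);   // e.g. aggregate the window
    state.messages = [];
  }
  stateStore.set(key, state);                             // persist state back
  return output;                                          // goes to the output binding
}
```

The key property is that each invocation is stateless on its own; everything it needs from earlier messages comes from the store, which is what makes the function safe to scale out or restart.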
If you don't want any external state at all, Azure Functions might not be the right service for you; e.g., you could use a WebJob instead, which runs continuously and can keep a series of messages in memory.
Edit: As @Gaurav suggested, you should take a look at Durable Functions, but they are still at an early preview stage.
You should take a look at Azure Durable Functions announced recently. Unfortunately I have only read about it and not used it, hence I will not be able to propose how exactly it will solve your problem.
One neat thing I liked about this is that unlike your regular Functions, they are stateful in nature which lets you persist the local state.
Another thing I liked about this is that it is intended for long running tasks which is something you're after.
Looking at your question, I believe the Function Chaining pattern could be useful to you.
First of all, Azure Functions are not designed for complex processing. Instead, you can consider Worker Roles, or a microservice if your solution is already on Service Fabric.

Azure Storage Queue - correlate response to request

When a Web Role places a message onto a Storage Queue, how can it poll for a specific, correlated response? I would like the back-end Worker Role to place a message onto a response queue, with the intent being that the caller would pick the response up and go from there.
Our intent is to leverage the Queue in order to offload some heavy processing onto the back-end Worker Roles in order to ensure high performance on the Web Roles. However, we do not wish to respond to the HTTP requests until the back-end Workers are finished and have responded.
I am actually in the middle of making a similar decision. In my case I have a WCF service running in a Web Role which should offload calculations to Worker Roles. When the result has been computed, the Web Role will return the answer to the client.
My basic data-structure knowledge tells me that I should avoid using something that is designed as a queue in a non-queue way. That means a queue should always be serviced in a FIFO-like manner. So if you use queues for both requests and responses, the threads waiting to return data to the client will have to wait until the calculation message is at the "top" of the response queue, which is not optimal. If you store the responses in Azure Tables instead, the threads poll for messages, creating unnecessary overhead.
What I believe is a possible solution to this problem is using a queue for the requests. This enables use of the competing consumers pattern and thereby load balancing. On messages sent into this queue you set the CorrelationId property of the message. For the reply, the pub/sub part ("topics") of Azure Service Bus is used together with a correlation filter. When your back end has processed the request, it publishes a result to a response topic with the CorrelationId given in the original request. Now this response can be retrieved by your client by calling CreateSubscription (sorry, I can't post more than two links apparently; google it) using that correlation filter, and it should get notified when the answer is published. Note that the CreateSubscription part should only be done once, in the OnStart method. Then you can do an async BeginReceive on that subscription, and the role will be notified in the given callback when a response for one of its requests is available. The CorrelationId will tell you which request the response is for. So your last challenge is giving this response back to the thread holding the client connection.
This could be achieved by creating a Dictionary with the correlationIds (probably GUIDs) as keys and responses as values. When your Web Role gets a request, it creates the GUID, sets it as the CorrelationId, adds it to the dictionary, fires the message to the queue, and then calls Monitor.Wait() on the GUID object. Then have the receive method invoked by the topic subscription add the response to the dictionary and call Monitor.Pulse() on that same GUID object. This awakens your original request thread, and you can now return the answer to your client. (Basically you just want your thread to sleep and not consume any resources while waiting.)
The queues on Azure Service Bus have many more capabilities and paradigms, including pub/sub capabilities, which can address issues with queue servicing across multiple instances.
One approach with pub/sub is to have one queue for requests and one for responses. Each requesting instance would also subscribe to the response queue with a filter on the header, such that it would only receive the responses targeted at it. The request message would, of course, contain the value to be placed in the response header to drive the filter.
For a Service Bus-based solution, there are samples available for implementing the request/response pattern with queues and topics (pub/sub).
Let the Worker Role keep polling and processing messages. As soon as a message is processed, add an entry to Table Storage with the required correlationId (RowKey) and the processing result, before deleting the processed message from the queue.
Then the Web Roles just need to do a lookup in the table with the desired correlationId (RowKey) and PartitionKey.
Have a look at using SignalR between the Worker Role and the browser client. Your Web Role puts a message on the queue, returns a result to the browser (something simple like 'waiting...'), and the browser is hooked up to the Worker Role with SignalR. That way your Web Role carries on doing other stuff and doesn't have to wait for a result from the asynchronous processing; only the browser needs to.
There is nothing intrinsic to Windows Azure queues that does what you are asking. However, you could build this yourself fairly easily. Include a message ID (GUID) in your push to the queue, and when processing is complete, have the worker push a new message with that message ID into a response queue. Your web app can poll this queue to determine when processing has completed for a given command.
We have done something similar and are looking to use something like SignalR to help reply back to the client when commands are completed.
