Peek and Complete Message using different Receiver Instances - Azure Service Bus

Scenario
When business transactions are performed, we're supposed to make that data available to end clients.
Current Design
Our web app publishes transaction messages to a topic on the Azure Service Bus.
We expose APIs to clients through which they can consume the data from those transactions.
Upon calling these APIs, we read the messages from the Subscription and return them to the client.
Problem
We want a guaranteed delivery - we want to make sure the client acknowledges the delivery of the data. So we don't want to remove the message from the subscription immediately. We want to keep it until the client acknowledges it.
So we only want to do a "Peek" instead of "Receive".
So the client calls the first API, to get the data, where we do a Peek.
And once the client has received the packets, the client would call a second API, to acknowledge.
At this point, we want to remove the message from the Subscription, making it Complete.
Per the documentation, the Service Bus message receiver is designed so that a Complete can be performed only by the same Receiver instance that performed the Peek, and we observed the same behavior when we tried it out.
The two calls are separate APIs, so we cannot do the Peek and the Complete using the same Receiver instance.
We are considering options to somehow make the Receiver a singleton shared across the APIs within that App Service.
However, this will be a problem when the App Service scales out.
Is there a different way to achieve what we're trying to do here?

There is an option available in Azure Service Bus to defer messages. Once a message is deferred, it can be received with the help of its sequence number.
The first API should receive the message and, instead of completing it, defer it and return the data along with the sequence number.
The second API (which has the sequence number) can then receive the deferred message from the Subscription and complete it. Refer here for more details.
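As a rough sketch with the @azure/service-bus JavaScript SDK (v7), where the connection string and entity names are placeholders and error handling is omitted:

import { ServiceBusClient } from "@azure/service-bus";

const sbClient = new ServiceBusClient("<connection-string>");
const receiver = sbClient.createReceiver("<topic>", "<subscription>");

// First API: receive the message, defer it, and return the data to the
// client together with the sequence number to acknowledge with later.
const [message] = await receiver.receiveMessages(1);
const sequenceNumber = message.sequenceNumber;
await receiver.deferMessage(message);

// Second API (acknowledge): any receiver instance can fetch the deferred
// message by its sequence number and complete it.
const [deferred] = await receiver.receiveDeferredMessages(sequenceNumber);
await receiver.completeMessage(deferred);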

Another option would be to not use a Service Bus client on your backend; instead, your clients could work directly with Service Bus using its REST API (assuming they can't use the AMQP client, if I am understanding your scenario correctly).
There are APIs to
Peek-Lock
Renew Lock
Unlock
Delete (Complete)
You could also proxy these requests through your backend itself, or through a service like APIM if you are already using it.
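As a hedged sketch of the two calls most relevant here (the namespace, entity names, and SAS token are placeholders):

// Peek-Lock: locks the message at the head of the subscription; the lock
// token and message ID come back in the BrokerProperties response header.
const base = "https://<namespace>.servicebus.windows.net/<topic>/subscriptions/<subscription>";
const res = await fetch(`${base}/messages/head?timeout=60`, {
  method: "POST",
  headers: { Authorization: SAS_TOKEN },
});
const props = JSON.parse(res.headers.get("BrokerProperties"));

// Delete (Complete): settles the locked message using the lock token.
await fetch(`${base}/messages/${props.MessageId}/${props.LockToken}`, {
  method: "DELETE",
  headers: { Authorization: SAS_TOKEN },
});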
PS: Cross-posting the answer to the same query on the MSDN forum.

Related

Azure Service Bus conditional message locking

Is it possible to implement the following pseudo scenario with an Azure Service Bus?
I have a function that can scale out to 50 instances, it uses a service bus trigger. I would like to guarantee that related messages are only processed if an existing related message is NOT currently being processed.
Let's say I have a message (Message A) being processed by a function instance that's associated with UserID 1234. Another message (Message B) appears on the queue which is also associated with UserID 1234; the Service Bus should "ignore" it because a related message is already being processed. Another message (Message C) with UserID 9876 appears on the queue; this gets handled straight away because there is no in-flight message with UserID 9876.
Message A finishes processing and Message B is now picked up.
Currently I have a routing function which consumes the initial service bus trigger and then routes it to one of 10 functions each of which is responsible for messages where the last digit of the UserID is 0-9.
This means that if function "4" is busy with a request, it won't be able to process any other requests where the UserID ends in 4, thus guaranteeing the system cannot process a related message at the same time. It does its job but doesn't scale.
There's no conditional locking. From the description, it sounds like you want to process messages associated with the same user ID one at a time. For that, Azure Service Bus has a feature called Message Sessions.
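For illustration, a minimal sketch with the @azure/service-bus SDK (queue name and payload are placeholders): stamp each message with the user ID as its session ID, and each consumer then takes an exclusive lock on one session at a time:

import { ServiceBusClient } from "@azure/service-bus";

const sbClient = new ServiceBusClient("<connection-string>");

// Publisher side: the session ID partitions messages by user.
const sender = sbClient.createSender("<session-enabled-queue>");
await sender.sendMessages({ body: payload, sessionId: String(userId) });

// Consumer side: acceptNextSession takes an exclusive lock on one session,
// so messages for the same user are processed strictly one at a time.
const receiver = await sbClient.acceptNextSession("<session-enabled-queue>");
const [message] = await receiver.receiveMessages(1);

Note that the queue (or subscription) must be created with sessions enabled for this to work.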
As far as I know, Azure Service Bus has a locking mechanism built into it. So no matter the message, if you have a single queue you are reading from, when a function picks a message up, the rest of your functions will not pick up that same message. To solve the issue of not processing a duplicate message by userID, I would recommend using something like Table Storage to check whether that userID has been processed already (so in your function, when you pick up a message, you insert the userID into Azure Table Storage before doing any processing, and also check whether it exists before processing).
service bus - https://learn.microsoft.com/en-us/azure/service-bus-messaging/message-transfers-locks-settlement
table storage - https://learn.microsoft.com/en-us/azure/cosmos-db/table/quickstart-dotnet?toc=https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fazure%2Fstorage%2Ftables%2Ftoc.json&bc=https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fazure%2Fbread%2Ftoc.json&tabs=azure-cli%2Cwindows
In summary, I think the solution here would be to combine technologies and use a central store that your scaled-out functions can check to see what has been processed by other functions, whatever that central database is.
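A rough sketch of that check with the @azure/data-tables SDK (the table name and keys are made up for illustration):

import { TableClient } from "@azure/data-tables";

const table = TableClient.fromConnectionString("<connection-string>", "ProcessedUsers");

try {
  // createEntity fails with a 409 Conflict if the row already exists, so
  // this doubles as an atomic "claim" on the user ID across instances.
  await table.createEntity({ partitionKey: "users", rowKey: String(userId) });
  // ... safe to process the message here ...
} catch (err) {
  if (err.statusCode === 409) {
    // Another function instance already claimed this user ID; skip or defer.
  } else {
    throw err;
  }
}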

Is it possible for Azure API Management to synchronously post to Azure Service Bus?

I am converting a monolithic application to microservices. I have set up an API Management layer and a Service Bus, all within a Service Fabric. The idea is to use messages to communicate with the microservices so they do not know about each other.
When one microservice needs information it posts a message to the service bus and it gets fulfilled and a reply is sent and correlated.
The only problem is that API Management posts the message to the service bus and returns without waiting for a reply, so the client does not get a response.
Is there a way to have the API Management wait for a reply?
Would this need a sort of broker service in-between?
Is it better to just have a REST layer on each microservice that the API Management could call but then the services would use the service bus?
Thanks for any help.
UPDATE:
I think the only way to have API Management wait is to use a Logic App. Not sure about this.
Any Azure experts out there?
The way APIM is behaving is actually expected.
Service Bus is meant to decouple different (micro)services and inherently doesn't have a request-response style of operation, though one can be implemented on top of it.
Here is one way you can design/implement your system.
First, for a request-response style operation with Service Bus, one way you can achieve it is by using two queues.
One for sending the request (along with some Unique ID - GUID will do) and the other for receiving the response (which again contains the Unique ID sent in the request).
Instead of having APIM work with Service Bus, call a Logic App or Function which does this for you.
Finally, waiting for the response is something that will depend on your use case.
If you have a very long-running task, it's best to follow the async pattern implemented by both Logic Apps and Functions (using Durable Functions), which return a 202 Accepted response immediately with a status URI that your client can poll for updates.
But if it's a quick response (before the HTTP request times out), you could wait for the response service bus message and return the response then. For this, your Logic App or Function would have to poll/wait for the service bus message with the same unique ID and then return the response.
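A minimal sketch of the correlation part with the @azure/service-bus SDK (queue names are placeholders, and this naive single-waiter loop is for illustration only):

import { randomUUID } from "crypto";
import { ServiceBusClient } from "@azure/service-bus";

const sbClient = new ServiceBusClient("<connection-string>");

// Send the request with a unique ID and tell the handler where to reply.
const correlationId = randomUUID();
await sbClient.createSender("requests").sendMessages({
  body: requestPayload,
  correlationId,
  replyTo: "responses",
});

// Wait on the response queue until a message with the same ID arrives.
const receiver = sbClient.createReceiver("responses");
let reply;
while (!reply) {
  const [msg] = await receiver.receiveMessages(1, { maxWaitTimeInMs: 5000 });
  if (msg && msg.correlationId === correlationId) {
    reply = msg.body;
    await receiver.completeMessage(msg);
  } else if (msg) {
    // Someone else's reply; abandon it so its waiter can pick it up.
    await receiver.abandonMessage(msg);
  }
}

With several concurrent waiters, a session-enabled reply queue (using the correlation ID as the reply's session ID) is a cleaner way to route each response to the right waiter.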

How can I control acknowledgement in Cloud PubSub using Node.js

Basically, I have created a Cloud Function (written in Node.js) which triggers on messages published to a Cloud Pub/Sub topic and loads that data into a BigQuery table.
A message in a topic gets deleted after the Cloud Function reads it. I understand that the subscriber internally sends an acknowledgement, and as a result the message gets deleted from the topic.
I want to control the acknowledgement sent to the publisher. How can this be achieved? I didn't find any documentation on this.
Google Cloud Functions does not allow you to control the acknowledgement of the Cloud Pub/Sub message. Upon completion of the function, the message is acknowledged for the subscription. If you want finer-grained control over acknowledgements, then you will need to use Google Cloud Pub/Sub directly. There is a Node.js client library.
Just some clarifying notes on acknowledgements: Acknowledging a message for a single subscription doesn't mean the message is deleted for the topic, only for the subscription. Other independent subscriptions will still receive the message and have to acknowledge it. This is also independent of the acknowledgement sent to the publisher. When a Google Cloud Pub/Sub message is published, the publish call is acknowledged (i.e., a response is sent to the publisher) once Google Cloud Pub/Sub has saved the message and guarantees it will be delivered to subscriptions. This is one of the main advantages of an asynchronous message delivery system: receiving the message from the publisher (and acknowledging the publish) is independent of delivering the message via a subscription (which is separately acknowledged by the subscriber).
If I understand correctly: you made a Pub/Sub topic and placed a Cloud Function within the same project as this topic. The Cloud Function is deployed with a google.pubsub.topic.publish trigger for the specified topic.
When using a queue/topic, producer and consumer operate independently of each other. This enables a loosely coupled architecture, which has its own advantages and disadvantages.
If the publisher publishes a message to the topic, it gets confirmation that the message was sent to the topic successfully. Otherwise your code will throw an exception (connection refused, forbidden, etc.). For Node.js and other languages, there are Pub/Sub client SDKs which you can use to publish messages fairly easily.
When the message is on the topic, it will go to the subscribers, which can be push or pull subscriptions. At this point, acknowledgement becomes important. Google Pub/Sub, like other queues/topics, is designed for guaranteed delivery. This means that if a message could not be delivered, delivery is retried after some (configurable) time, until the total lifetime is exceeded (the default is 7 days).
When using a pull subscription and you want to let the topic know that you successfully received the message, you would need something like this in Node.js:
message.ack();
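In context, a minimal pull-style subscriber with the @google-cloud/pubsub client could look roughly like this (the subscription name is a placeholder):

import { PubSub } from "@google-cloud/pubsub";

const subscription = new PubSub().subscription("<subscription-name>");

subscription.on("message", (message) => {
  // ... process message.data here ...
  message.ack(); // acknowledge only after successful processing
});
subscription.on("error", (err) => console.error(err));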
When using a push subscription to an API or an HTTP Cloud Function, you would need to return an HTTP status code. Pub/Sub expects a success status code (e.g. 200 or 204):
res.status(204).send();
The only way I have found to reliably control which messages get acknowledged and which don't in a Cloud Function is by using the REST service APIs.
This is because the Node.js Pub/Sub client handles acknowledgements and manages connections in the background, which is not allowed in a Cloud Function.
However, the REST APIs are fairly easy to use and give fine-grained control over which messages get acknowledged.
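For instance, a sketch of pulling and acknowledging via REST (project and subscription names are placeholders, and accessToken is assumed to be an OAuth 2.0 token with the Pub/Sub scope):

const base = "https://pubsub.googleapis.com/v1/projects/<project>/subscriptions/<subscription>";
const headers = {
  Authorization: `Bearer ${accessToken}`,
  "Content-Type": "application/json",
};

// Pull a batch of messages.
const pullRes = await fetch(`${base}:pull`, {
  method: "POST",
  headers,
  body: JSON.stringify({ maxMessages: 10 }),
});
const { receivedMessages = [] } = await pullRes.json();

// Acknowledge only the messages that were processed successfully.
await fetch(`${base}:acknowledge`, {
  method: "POST",
  headers,
  body: JSON.stringify({ ackIds: receivedMessages.map((m) => m.ackId) }),
});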

Run scheduler on Azure based on user specific scheduled time

We have an API to fetch the latest transaction data of a user based on the scheduled Next_Refresh_Time. Each user has a different scheduled refresh time. Since we have thousands of users, we have to run a scheduler to fetch the data. Please suggest the best way to do it.
Each user has a different scheduled refresh time. Since we have thousands of users, we have to run a scheduler to fetch the data.
You could add a queue message and specify initialVisibilityDelay based on the Next_Refresh_Time value when a user logs in, and then create and run a queue-trigger WebJob to process the queue message and fetch the latest data (and, if the current user is still online, add a new message to the queue with the same content and initialVisibilityDelay as the original message).
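A rough sketch of the enqueueing side with the @azure/storage-queue SDK (queue name and payload are illustrative; note that a queue message's visibility timeout is capped at 7 days):

import { QueueClient } from "@azure/storage-queue";

const queue = new QueueClient("<storage-connection-string>", "refresh-jobs");

// Hide the message until the user's scheduled refresh time.
const delaySeconds = Math.max(
  0,
  Math.floor((nextRefreshTime.getTime() - Date.now()) / 1000)
);
await queue.sendMessage(JSON.stringify({ userId }), {
  visibilityTimeout: delaySeconds,
});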
Besides, if you'd like to push the latest data to a specific connected user in real time, SignalR would help you implement that, and it can be used on a variety of client platforms. You could save the connection ID of a logged-in user in the queue message, and then call a hub method in the WebJob function to push data to the connected user based on that connection ID.
The following thread and article explain how to establish a connection and call a hub method.
SignalR - Broadcasting over a Hub in another Project from outside of a Hub
Hubs API for SignalR

Should I use an Azure service (such as Scheduler) for sending REST messages to my bot, or use a separate thread for notifications?

I am creating a bot using Microsoft Bot Framework (BotBuilder) and want it to message the user when an appointment is about to begin.
I currently use the Microsoft Graph API to access the user's Office 365 calendar and store the appointments. A background thread then keeps track of time and messages the user when an appointment is about to start.
The current idea is to use Graph webhooks to notify my bot about new appointments.
My question is, would it be smarter to use an Azure service (such as Scheduler) to keep track of the appointments and send REST messages to my bot, which will then send a message to the user?
My worry is that, as the number of users rises, the number of appointments and time checks will become too large, and that Azure services might handle this load better.
This is a perfect fit for Azure Functions with an HTTP trigger.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook
This article explains how to configure and work with HTTP triggers and bindings in Azure Functions. With these, you can use Azure Functions to build serverless APIs and respond to webhooks.
Azure Functions provides the following bindings:
An HTTP trigger lets you invoke a function with an HTTP request. This can be customized to respond to webhooks.
An HTTP output binding allows you to respond to the request.
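As a sketch (Node.js programming model; all names are illustrative), an HTTP-triggered function that receives a Graph webhook change notification and hands it to the bot could look like:

// index.js of an HTTP-triggered Azure Function (sketch only).
module.exports = async function (context, req) {
  // Graph webhook validation handshake: echo the validation token back.
  if (req.query && req.query.validationToken) {
    context.res = { status: 200, body: req.query.validationToken };
    return;
  }
  // Handle the change notifications, e.g. look up the appointment and
  // have the bot proactively message the user (not shown here).
  const notifications = (req.body && req.body.value) || [];
  context.log(`Received ${notifications.length} notification(s)`);
  context.res = { status: 202 };
};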
