Scalable Request-Response pattern using Azure Service Bus

We are evaluating Azure Service Bus for a request-response pattern between our web server and app server. We are planning to have two queues:
Request Queue
Response Queue
The web server will push a message to the request queue and subscribe to the response queue.
By comparing the MessageId of the request with the CorrelationId of the response, it can match each response to its original request and send the result back to the browser.
But in the cloud, with elastic scaling, the number of web server (and app server) instances can increase or decrease at any time.
We are wondering whether this pattern will still work well here.
To make this work, we would have to have one request queue and multiple topics (one for each web server instance).
This has two downsides:
Along with increasing/decreasing web server instances, we would have to create/delete topics as well.
Every message would be pushed to all the topics, so every message would be processed by all the web servers, which is not efficient.
Please share your thoughts.
Thanks in advance.

When you scale out your endpoint, you don't want instance affinity. You want to rely on competing consumers and not care which instance of your endpoint processes a given message.
For example, if you receive a response and write it to a database, you most likely don't care which instance of the endpoint wrote the data. But if you have some in-memory state, or any other information available only to the endpoint instance that originated the request, and processing reply messages requires that information, then you have instance affinity and need to either remove it or use technology that lets you work around it: for example, something like SignalR with a backplane to communicate a reply message to all your web endpoint instances.
Note that ideally you should avoid instance affinity as much as you can.
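For illustration, here is a minimal sketch of the MessageId/CorrelationId mechanics from the question, using the Python SDK (azure-servicebus); the queue names and connection string are placeholders. Note that with several web server instances competing on one response queue, any instance may receive any reply, which is exactly the affinity problem described above.

```python
import uuid
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"  # placeholder

def send_request(payload: str) -> str:
    """Web server side: push a request; return the ID used to correlate."""
    request_id = str(uuid.uuid4())
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_sender("request-queue") as sender:
            sender.send_messages(ServiceBusMessage(
                payload,
                message_id=request_id,
                reply_to="response-queue",  # where the app server should reply
            ))
    return request_id

def handle_requests() -> None:
    """App server side: reply, echoing MessageId back as CorrelationId."""
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_receiver("request-queue") as receiver, \
             client.get_queue_sender("response-queue") as sender:
            for msg in receiver:
                sender.send_messages(ServiceBusMessage(
                    f"processed: {str(msg)}",
                    correlation_id=msg.message_id,  # the correlation key
                ))
                receiver.complete_message(msg)
```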

I know this is old, but thought I should comment to complete this thread.
I agree with Sean.
In principle, do not design with instance affinity in mind.
Any design should work irrespective of the number of instances and of which instance runs the code.
Microsoft recommends the same when designing application architectures for the cloud.
In your case, I do not think you should plan to have one topic for each instance.
You should just put the request messages into one topic, with a subscription to allow your receiving app service to process those request messages.
When your receiving app service scales out, your design needs to allow multiple receivers (multiple instances) to read messages from the same subscription, which is described in the Competing Consumers pattern.
https://learn.microsoft.com/en-us/azure/architecture/patterns/competing-consumers
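For illustration, here is a minimal sketch of such a competing-consumer worker with the Python SDK (azure-servicebus); the topic, subscription, and connection string are placeholder names. Every scaled-out instance runs the same loop, and Service Bus delivers each message to exactly one of them.

```python
from azure.servicebus import ServiceBusClient

CONN_STR = "<service-bus-connection-string>"  # placeholder

def run_worker() -> None:
    """Run this identical loop on every receiving instance."""
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_subscription_receiver(
            topic_name="requests",           # the single request topic
            subscription_name="processors",  # shared by all instances
        ) as receiver:
            for msg in receiver:             # peek-lock by default
                print(f"this instance handled: {str(msg)}")
                receiver.complete_message(msg)
```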
Please post what you have finally implemented.

Related

Azure Service Bus Queues vs Topics for one-to-many (unique)

I have an online service hosted on Azure that asynchronously sends data to on-premises clients.
Each client is identified by a unique code.
Currently there is a single topic, with a subscription for each client that has a filter on the unique code, which is sent as a property of the message. No message is ever broadcast to all the clients.
I feel that using a topic this way is wrong.
The alternative that comes to my mind is a dedicated queue for each client, created on first contact.
Could this be a better approach?
Thanks
In my opinion using Topics and Subscriptions is the right way to go. Here's the reason why:
Currently the routing logic (which message needs to go to which subscription) is handled by Azure Service Bus, based on the rules you have configured. If you go with queues, the routing logic will have to move into your hosted service. You'll also need to ensure that the queue exists before sending each message. I think this will increase the complexity at your service level.
Furthermore, topics and subscriptions would enable you to build audit-trail functionality (not sure if you're looking for this): you can create a separate subscription with a rule that delivers all messages (a True SQL rule) to that subscription, alongside the client-specific subscriptions.
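A rough sketch of that setup with the Python management client (azure-servicebus); the topic, subscription, and property names here are made up for illustration:

```python
from azure.servicebus.management import (
    CorrelationRuleFilter,
    ServiceBusAdministrationClient,
)

CONN_STR = "<service-bus-connection-string>"  # placeholder

with ServiceBusAdministrationClient.from_connection_string(CONN_STR) as admin:
    # One subscription per client, filtering on the client's unique code,
    # carried as an application property on every message.
    admin.create_subscription("client-events", "client-0042")
    # A new subscription gets a catch-all $Default rule; remove it so
    # only the client-specific rule applies.
    admin.delete_rule("client-events", "client-0042", "$Default")
    admin.create_rule(
        "client-events", "client-0042", "only-0042",
        filter=CorrelationRuleFilter(properties={"client_code": "0042"}),
    )

    # A separate audit subscription: its automatically created $Default
    # rule is already a catch-all (true filter), so it receives a copy
    # of every message.
    admin.create_subscription("client-events", "audit")

# Senders then stamp each message with the routing property, e.g.:
# ServiceBusMessage(body, application_properties={"client_code": "0042"})
```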
Creating a separate queue for each client is not advisable; this is exactly the problem topics solve.
If you have a separate queue for each client, then the server needs to send messages to multiple queues, which becomes tedious as the number of clients increases.
Having a single topic with multiple subscriptions is easy to manage, as the server sends every message to a single topic.

Sharing Azure Service Fabric Reliable Queues Between Services

I'm diving into Service Fabric (from the Cloud Services world) and am hitting a few speed bumps with how ReliableQueues work.
Let's say I have 2 stateful services StatefulService1 and StatefulService2.
If I need StatefulService1 to send a message to a queue that StatefulService2 will pick up and read, can I use ReliableQueues, or are ReliableQueues isolated within the service they are created in?
If that is the case, then what is the purpose of having ReliableQueues? The usual pattern behind them is for another process to act on the messages. I understand why isolating a Dictionary to a service would make sense, but not a queue...
Is my best option to rely on a traditional approach to send this message, such as a Storage queue, or does Service Fabric offer a solution for passing messages between services?
UPDATE
Just want to clarify that I did attempt to dequeue a message created in StatefulService1 from within StatefulService2, and it came up empty. Dequeuing from within StatefulService1 worked fine, as expected.
Reliable Collections are in-memory data structures that are not intended for inter-service communication. If you would like to establish a communication channel between StatefulService1 and StatefulService2, you have the following options:
Use communication listeners. You can have custom listeners for the protocol of your choice, including HTTP, WCF, or your own protocol; see the Service Fabric documentation on service communication. For example, StatefulService2 can open an HTTP endpoint that StatefulService1 can POST/GET to.
Use an external queuing system, like Service Bus, Event Hubs, or Kafka, where StatefulService1 posts events. StatefulService2 can then be a consumer service that reads events from the queue and processes them.
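The services themselves are typically C#, but to illustrate the external-queue option, here is a rough sketch of the two halves in Python with azure-servicebus (the queue name and connection string are placeholders):

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"  # placeholder

def publish(event: str) -> None:
    """StatefulService1 side: post an event to the shared queue."""
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_sender("service1-to-service2") as sender:
            sender.send_messages(ServiceBusMessage(event))

def consume() -> None:
    """StatefulService2 side: pull events and process them."""
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_receiver("service1-to-service2") as receiver:
            for msg in receiver:
                print(f"processing: {str(msg)}")
                receiver.complete_message(msg)
```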
Reliable Collections (queue and dictionary) are not intended for communication. With queues, each operation is a two-phase commit (2PC), so only one process can access the queue at any point in time. Note that when you use stateful services with partitions, both services have to be on the same partition to access the data; different partitions cannot access the same data.
Relying on either traditional methods or implementing your own communication listener is the way to go. With the traditional way, keep in mind that you'll need to decide whether to partition your queues the same way your services are partitioned.
I don't see why a service can't host a reliable collection/queue, and other services can access it via one of three transports: Remoting, WCF and HTTP.
Obviously, the reliable service will have to expose the collection/queue via an API or implement an IService interface.
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-connect-and-communicate-with-services
You have to add a fault-handling retry pattern to your calling code (see https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-communication); in this case you don't need a queue to hold data between service calls.

Ways to make a broker on Azure for anonymous HTTP API messages?

We need an API on Azure that would store messages sent to it via HTTP (a broker) in case my system (a Cloud Service) is unavailable or the DB is busy. It's not easy to change exactly what message will be sent. What are the ways to build such a broker on Azure?
A Service Bus queue looks interesting, but it needs Shared Access Signatures as far as I understand.
Another web role could be a solution, but it takes time to implement.
A virtual machine with some tool (MSMQ?) seems like an option, but it requires maintenance.
What do you think?
Your scenario is a prime candidate for applying a Queue-Centric Work Pattern.
From http://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/queue-centric-work-pattern:
If either your Worker(s) or Database become unavailable, messages are still placed in durable storage and consumed later.
The task queue can take the form of an Azure Storage queue or a Service Bus queue. In every great design, the least complex component that does the job wins. In this case that would be Azure Storage queues: durable, reliable, very few moving parts. Unless you absolutely need strict FIFO ordering, in which case you go with Service Bus.
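A minimal sketch of that pattern with the Python azure-storage-queue package; the queue name is illustrative and assumed to exist already:

```python
from azure.storage.queue import QueueClient

CONN_STR = "<storage-connection-string>"  # placeholder

def accept_message(body: str) -> None:
    """Broker side: durably store the incoming payload right away."""
    queue = QueueClient.from_connection_string(CONN_STR, "incoming-work")
    queue.send_message(body)

def drain() -> None:
    """Worker side: consume messages; delete only after success."""
    queue = QueueClient.from_connection_string(CONN_STR, "incoming-work")
    for msg in queue.receive_messages():
        handle(msg.content)
        queue.delete_message(msg)  # message reappears later if we crash first

def handle(content: str) -> None:
    print(f"handled: {content}")
```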
From https://msdn.microsoft.com/en-us/library/dn568101.aspx:
This solution offers the following benefits:
It enables an inherently load-leveled system that can handle wide variations in the volume of requests sent by application instances. The queue acts as a buffer between the application instances and the consumer service instances, which can help to minimize the impact on availability and responsiveness for both the application and the service instances (as described by the Queue-based Load Leveling pattern). Handling a message that requires some long-running processing to be performed does not prevent other messages from being handled concurrently by other instances of the consumer service.
It improves reliability. If a producer communicates directly with a consumer instead of using this pattern, but does not monitor the consumer, there is a high probability that messages could be lost or fail to be processed if the consumer fails. In this pattern messages are not sent to a specific service instance, a failed service instance will not block a producer, and messages can be processed by any working service instance.
It does not require complex coordination between the consumers, or between the producer and the consumer instances. The message queue ensures that each message is delivered at least once.
It is scalable. The system can dynamically increase or decrease the number of instances of the consumer service as the volume of messages fluctuates.
It can improve resiliency if the message queue provides transactional read operations. If a consumer service instance reads and processes the message as part of a transactional operation, and if this consumer service instance subsequently fails, this pattern can ensure that the message will be returned to the queue to be picked up and handled by another instance of the consumer service.
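The last benefit maps to Service Bus's peek-lock receive mode. A minimal sketch with the Python SDK, assuming a queue named "work":

```python
from azure.servicebus import ServiceBusClient

CONN_STR = "<service-bus-connection-string>"  # placeholder

def handle(msg) -> None:
    """Stand-in for your processing logic."""
    print(f"processing: {str(msg)}")

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_receiver("work") as receiver:  # peek-lock by default
        for msg in receiver:
            try:
                handle(msg)
                receiver.complete_message(msg)  # success: message removed
            except Exception:
                receiver.abandon_message(msg)   # failure: back on the queue
```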
Given you can't change the client, I would proxy the call. Recreate the API using the API Management service in Azure, and change the web URL to point to the API Management proxy.
The proxy can then easily delegate to a Function App, as Aravind mentioned in the comments to your question, by using API Management policies.

Can Azure Service Bus Sub/Topic implementation work for this approach?

I have potentially tens or even hundreds of thousands of clients that need to communicate with a central server.
Communication is in the form of:
receive command from central servers (process it on the client)
respond with a status to central servers
I would like to avoid having the client machines talk to any intermediate web/API servers; instead, I want them to go directly to ASB.
No client can see another client's messages, whatsoever. I understand I can use SAS tokens to grant clients temporary privileges and renew them on a schedule, which is great and works within my architecture. However, I'm not sure whether I can use a single ASB topic and give each client its own subscription inside it.
Is ASB even the right technology for this? Can I somehow get by with only two queues/subscriptions (request/reply), or must I create an individual queue for each individual client?
TIA
It’s difficult to tell without knowing more about the nature of the messages you are sending – e.g. how many are being sent. However, with this many clients you may be coming up against the quotas which are shown here:
https://msdn.microsoft.com/en-us/library/azure/ee732538.aspx
The salient limitations are:
100 concurrent connections per entity (i.e. topic, queue or subscription)
2,000 subscriptions per topic
10,000 queues or topics per service bus namespace
100,000 correlation filters per topic
It’s worth taking a look at the Azure scalability scenarios described in the second half of this document:
https://msdn.microsoft.com/en-us/library/azure/hh528527.aspx
It may be possible to get the broadcast side of things going by having clients connect with correlation filters, though I have not tried using them at this scale.
If you want to have lots of senders going to a single queue, then you should consider using the Service Bus REST API for message sending; a minimal example follows.
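For example, a hand-rolled Python sender (namespace, queue, and policy names are placeholders) that generates the SAS token itself, so a client needs nothing but HTTP:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

import requests  # third-party; pip install requests

def make_sas_token(resource_uri: str, key_name: str, key: str,
                   ttl_seconds: int = 3600) -> str:
    """Build a Service Bus SAS token (HMAC-SHA256 over URI + expiry)."""
    expiry = str(int(time.time()) + ttl_seconds)
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    to_sign = f"{encoded_uri}\n{expiry}".encode()
    signature = base64.b64encode(
        hmac.new(key.encode(), to_sign, hashlib.sha256).digest()
    ).decode()
    return (f"SharedAccessSignature sr={encoded_uri}"
            f"&sig={urllib.parse.quote_plus(signature)}"
            f"&se={expiry}&skn={key_name}")

uri = "https://mynamespace.servicebus.windows.net/status-queue"
token = make_sas_token(uri, "client-send-policy", "<shared-access-key>")
resp = requests.post(f"{uri}/messages",   # REST endpoint for sending
                     data=b'{"status": "ok"}',
                     headers={"Authorization": token})
resp.raise_for_status()                   # expect 201 Created
```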
Otherwise, I'm afraid you may want to consider a proxy...

Azure Loosely Coupled / Scalable

I have been struggling with this concept for a while. I am attempting to come up with a loosely coupled Azure component design that is completely scalable, using queues and worker roles that dequeue and process the items. I can scale the worker roles at will, and publishing to the queue is never an issue. So far so good, but it seems that the only real-world model this works for is fire-and-forget. It would work fantastically for logging and other one-way operations, but let's say I want to upload a file using queues/worker roles, save it to blob storage, and then get a response back once it is complete. Or should this type of model not be used for online apps? What is the best way to send a notification back once an operation is completed? Do I create a response queue and then (somehow) retrieve the associated response? Any help is greatly appreciated!
I usually do a polling model:
1. Client (usually a browser) sends a request to do some work.
2. Front end (web role) enqueues the work and replies with an ID.
3. Back end (worker role) processes the queue and stores the result in a blob or table entity named with that ID.
4. Client polls ("Is it done yet?") at some interval.
5. Front end checks whether the blob or table entity is there and replies accordingly.
See http://blog.smarx.com/posts/web-page-image-capture-in-windows-azure for one example of this pattern.
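A compact sketch of this flow, using Flask for the front end and Azure Storage for the queue and result blob; all names here (routes, queue, container) are illustrative:

```python
import uuid
from azure.storage.blob import BlobClient
from azure.storage.queue import QueueClient
from flask import Flask, jsonify

CONN_STR = "<storage-connection-string>"  # placeholder
app = Flask(__name__)

@app.route("/work", methods=["POST"])
def start_work():
    """Front end: enqueue the job and hand the client an ID."""
    job_id = str(uuid.uuid4())
    QueueClient.from_connection_string(CONN_STR, "jobs").send_message(job_id)
    return jsonify({"id": job_id}), 202

@app.route("/work/<job_id>", methods=["GET"])
def poll(job_id):
    """Front end: check whether the result blob exists yet."""
    blob = BlobClient.from_connection_string(CONN_STR, "results", job_id)
    if blob.exists():
        result = blob.download_blob().readall().decode()
        return jsonify({"done": True, "result": result})
    return jsonify({"done": False})

# The worker (not shown) dequeues the job ID from "jobs", does the work,
# and uploads the result to the "results" container under that ID.
```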
You could also look into the Service Bus (AppFabric) instead of using storage queues. With the Service Bus you can send messages, use queues, and so on, all from the Service Bus. You could then switch to publish/subscribe instead of polling!
