Sharing Azure Service Fabric Reliable Queues Between Services

I'm diving into Service Fabric (from the Cloud Services world) and am hitting a few speed bumps with how ReliableQueues work.
Let's say I have 2 stateful services StatefulService1 and StatefulService2.
If I need StatefulService1 to put a message on a queue that StatefulService2 will pick up and read, am I able to use ReliableQueues, or are ReliableQueues isolated within the service they are created in?
If that is the case, then what is the purpose of having ReliableQueues? The usual pattern behind them is for another process to act on the messages. I understand why isolating a Dictionary to a service would make sense, but not a queue...
Is my best option to rely on a traditional approach to send this message such as a Storage Queue or does ServiceFabric offer a solution for passing message queues between services?
UPDATE
Just want to clarify that I did attempt to dequeue a message created in StatefulService1 from within StatefulService2 and it came up empty. Dequeuing from within StatefulService1 worked fine as expected.

Reliable Collections are in-memory data structures that are not intended for inter-service communication. If you would like to establish a communication channel between StatefulService1 and StatefulService2, you have the following options:
Use Communication Listeners. You can have custom listeners for the protocol of your choice, including HTTP, WCF, or your own protocol. You can read more about it in this section. For example, StatefulService2 can open up an HTTP endpoint that StatefulService1 can POST/GET to.
Use an external queuing system, like Service Bus, Event Hubs, or Kafka, that StatefulService1 can post events to. StatefulService2 can then be a consumer service that reads events from the queue and processes them.
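For a concrete feel of the second option, here is a minimal sketch using the Python azure-servicebus SDK; the connection string, queue name, and payload are placeholders, and in practice the sender code would live in StatefulService1 and the receiver loop in StatefulService2:

```python
# pip install azure-servicebus
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"   # placeholder
QUEUE = "service1-to-service2"                 # hypothetical queue name

# Producer side (StatefulService1)
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(queue_name=QUEUE) as sender:
        sender.send_messages(ServiceBusMessage("new work item"))

# Consumer side (StatefulService2)
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_receiver(queue_name=QUEUE, max_wait_time=5) as receiver:
        for msg in receiver:
            print("received:", str(msg))
            receiver.complete_message(msg)   # remove the message once it has been processed
```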

Reliable Collections (queue and dictionary) are not intended for communication. Queue operations take part in a two-phase commit (2PC) transaction, so only one process can access the queue at any point in time. Note that when you use stateful services with partitions, to access the data both pieces of code have to be on the same partition; different partitions cannot access the same data.
Relying on either traditional methods or implementing your own communication listener is the way to go. With the traditional way, keep in mind that you'll need to decide whether to partition your queues the same way your services are partitioned.

I don't see why a service can't host a reliable collection/queue and let other services access it via one of three transports: remoting, WCF, and HTTP.
Obviously, the reliable service will have to expose the collection/queue via an API or implement an IService interface.
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-connect-and-communicate-with-services

You have to add a fault-handling retry pattern to your calling code (see https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-communication); in that case you don't need a queue to hold data between service calls.
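As a generic illustration of that retry idea (not the Reliable Services client itself; the operation callable, the exception tuple, and the delays are all assumptions), a small backoff wrapper might look like this:

```python
import random
import time

def call_with_retries(operation, transient_errors, attempts=5, base_delay=0.2):
    """Retry a callable on transient failures, with exponential backoff and jitter."""
    for attempt in range(attempts):
        try:
            return operation()
        except transient_errors:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the failure
            # Back off exponentially and add jitter so retries don't synchronize.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Usage sketch (hypothetical call):
# call_with_retries(lambda: call_other_service(payload), (ConnectionError, TimeoutError))
```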

Related

Azure Service Bus Queues vs. Topics (Pub/Sub)

Need a bit of architectural guidance. I have a set of stateless services that do various functions. My architecture allows for multiple copies of each service to run at the same time (as they are stateless), allowing me to:
scale up as needed for handling larger workloads
have fault-tolerance (if one instance of a service fails, no problem as there will be others to take on that work).
However, I don't want duplication of work.
If Service A, Instance 1 has already taken Job ABC, I don't want Service A, Instance 2 to take on that same job. So, I could avoid this problem by using Azure Service Bus Queues: only a single worker would get a particular item from the queue, and it would only be reassigned to another worker if the first one didn't mark it as complete within a set time.
So what's an appropriate use-case for Topics (Pub/Sub)? It seems like if I ever have multiple copies of the same service, I must rely on Queues. Is that right?
Asked another way, is there a way to use Topics in Azure Service Bus or similar products/services but avoid duplication of work? Also, if there is a way to lock a message (for a short period of time) when using Topics, is it possible to lock that message to just one instance of Service A (so no other instances of Service A will have access to it) but have the message broadcast to Service B, Service C, etc.?
is there a way to use Topics in Azure Service Bus or similar products/services but avoid duplication of work?
Yes, there is. Basically, you would treat each subscription as a queue. What you need to do is define proper filters so that one kind of message is routed to a single subscription (that way it acts as a queue) and have multiple listeners (service instances in your case) listen to that specific subscription only.
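A rough sketch of that setup with the Python azure-servicebus administration client (the topic, subscription, and jobType property are hypothetical names): create one subscription per message kind, replace its default catch-all rule with a SQL filter, and let all instances of one service compete on that subscription.

```python
# pip install azure-servicebus
from azure.servicebus.management import ServiceBusAdministrationClient, SqlRuleFilter

CONN_STR = "<service-bus-connection-string>"   # placeholder
TOPIC = "jobs"                                 # hypothetical topic
SUB = "service-a"                              # subscription that acts as Service A's queue

admin = ServiceBusAdministrationClient.from_connection_string(CONN_STR)
admin.create_subscription(TOPIC, SUB)

# Drop the default rule (it matches every message) and add a filter so that only
# one kind of message lands in this subscription.
admin.delete_rule(TOPIC, SUB, "$Default")
admin.create_rule(TOPIC, SUB, "service-a-only",
                  filter=SqlRuleFilter("jobType = 'service-a-work'"))

# Senders would set the matching application property, e.g.
# ServiceBusMessage("payload", application_properties={"jobType": "service-a-work"})
```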
Also, if there is a way to lock a message (for a short period of time) when using Topics, is it possible to lock that message to just one instance of Service A (so no other instances of Service A will have access to it) but the message will be broadcast to Service B, Service C, etc.?
It is certainly possible to lock a message. For that you will need to fetch messages in Peek-Lock mode. However, if multiple receivers (service instances) are competing on the same subscription, only one of them will be able to lock the message and access it; for the other receivers on that subscription, the message stays invisible while the lock is held. Within a single subscription you can't have one receiver acquire the lock while the others still receive the message. Each subscription, though, gets its own copy of a published message, which is what lets Service B, Service C, etc. receive it independently of Service A's lock.
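A hedged sketch of Peek-Lock receiving with the Python SDK (topic and subscription names, and the process function, are placeholders): while one receiver holds the lock, other receivers on the same subscription cannot see the message; completing it removes it, abandoning it makes it visible again.

```python
# pip install azure-servicebus
from azure.servicebus import ServiceBusClient, ServiceBusReceiveMode

CONN_STR = "<service-bus-connection-string>"   # placeholder
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    receiver = client.get_subscription_receiver(
        topic_name="jobs", subscription_name="service-a",
        receive_mode=ServiceBusReceiveMode.PEEK_LOCK)   # lock instead of delete-on-receive
    with receiver:
        for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            try:
                process(msg)                       # hypothetical processing step
                receiver.complete_message(msg)     # done: remove it from the subscription
            except Exception:
                receiver.abandon_message(msg)      # release the lock so another instance can retry
```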
Azure Functions triggers would provide everything you are looking for out of the box.
If you are not leveraging any advanced queuing features of Service Bus, then I would recommend you look at Storage Queues to save some money.
If you need Service Bus, then you can use Service Bus triggers.
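If plain Storage Queues turn out to be enough, a minimal send/receive sketch with the Python azure-storage-queue SDK (connection string and queue name are placeholders, and the queue is assumed to already exist) looks like this:

```python
# pip install azure-storage-queue
from azure.storage.queue import QueueClient

CONN_STR = "<storage-account-connection-string>"   # placeholder
queue = QueueClient.from_connection_string(CONN_STR, queue_name="work-items")

queue.send_message("job 123")                               # producer side

for msg in queue.receive_messages(visibility_timeout=30):   # consumer side; message hidden for 30 s
    print("processing:", msg.content)
    queue.delete_message(msg)    # delete only after the work has succeeded
```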
Hope that helps.

In Service Fabric, Are reliable queues only available to the same service type?

I created a pair of services in Service Fabric: one reads from the source database and, if it finds any new items, adds them to a reliable queue; the other one tries to dequeue from the reliable queue and creates the records in the other database where I need them.
If both of these processes are in the same service, everything works, but if I separate this functionality into two different services, the second service's queue is always empty, which tells me the queues are not the same.
Hence my question: is a reliable queue only available to instances of the same service type? Is there any way to make a reliable queue available to two or more service types? If I want to share the same queue across service types, do I have to use Service Bus instead?
I hope my question makes sense, I have been trying to find this in the documentation, but I do not see anything helpful there, maybe I am looking in the wrong place.
A reliable collection is indeed only available to one particular stateful service type. The whole idea behind it is that the data (reliable collection) lives where the code (service) lives.
If you want to access the queue from another service, you could expose methods that manipulate the queue on the service interface and have other services call this service. See this repo for some inspiration. Or use another messaging service like Azure Service Bus or Azure Storage Queues.

Scalable Request Response pattern using Azure Service Bus

We are evaluating Azure Service Bus for use between the web server and app server in a request-response pattern. We are planning to have two queues:
Request Queue
Response Queue
The web server will push a message to the request queue and subscribe to the response queue.
By comparing the MessageId and CorrelationId, it can match the response, which can then be sent back to the browser.
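For illustration, a minimal sketch of that correlation in Python with the azure-servicebus SDK (queue names and the request_sender, response_sender, and received objects are placeholders; assume senders and a receiver have been opened as in the earlier snippets):

```python
# pip install azure-servicebus
import uuid
from azure.servicebus import ServiceBusMessage

# Web server: tag the request with a unique id and say where to reply.
request_id = str(uuid.uuid4())
request = ServiceBusMessage("do work",
                            message_id=request_id,
                            reply_to="response-queue")    # hypothetical queue name
request_sender.send_messages(request)                     # sender for the request queue

# App server: copy the request's MessageId into the reply's CorrelationId.
reply = ServiceBusMessage("result", correlation_id=received.message_id)
response_sender.send_messages(reply)                      # sender for the response queue

# Web server: a received reply matches the original request when
# reply.correlation_id == request_id.
```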
But in the cloud, with elastic scaling, we can increase/decrease the number of web server (and app server) instances.
We are wondering whether this pattern will work optimally here.
To make this work, we will have to have one request queue and multiple topics (one for each web server instance).
This has two downsides:
Along with increasing/decreasing web server instances, we will have to create/delete topics as well.
Every message will be pushed to all the topics, so every message will be processed by all the web servers, which is not efficient.
Please share your thoughts.
Thanks In Advance
When you scale out your endpoint, you don't want to have an instance affinity. You want to rely on the competing consumers and not care which instance of your endpoint processes messages.
For example, if you receive a response and write it to a database, most likely you don't care which instance of an endpoint has written the data. But if you have some in-memory state or any other information only available to the endpoint that originated the request, and processing reply messages requires that information, then you have instance affinity and need to either remove it or use technology that allows you to address it. For example, something like SignalR with a backplane to communicate a reply message to all your web endpoint instances.
Note that ideally you should avoid instance affinity as much as you can.
I know this is old, but thought I should comment to complete this thread.
I agree with Sean.
In principle, do not design with instance affinity in mind.
Any design should work irrespective of number of instances and whichever instance runs the code.
Microsoft does recommend the same when designing application architecture for running in the cloud.
In your case, I do not think you should plan to have one topic for each instance.
You should just put the request messages into one topic, with a subscription to allow your receiving app service to process those request messages.
When your receiving app service scales out, that's where your design needs to allow messages to be read from the subscription by multiple receivers (multiple instances), which is described in the Competing Consumers pattern.
https://learn.microsoft.com/en-us/azure/architecture/patterns/competing-consumers
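A hedged sketch of that competing-consumers side in Python (topic and subscription names are placeholders): every scaled-out instance of the receiving service runs the same loop against the same subscription, and Service Bus hands each message to only one of them.

```python
# pip install azure-servicebus
from azure.servicebus import ServiceBusClient

CONN_STR = "<service-bus-connection-string>"   # placeholder

def worker_loop(instance_name: str) -> None:
    """Run by every instance of the receiving app service; instances compete for messages."""
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_subscription_receiver(topic_name="requests",
                                              subscription_name="processor") as receiver:
            for msg in receiver:                       # yields one locked message at a time
                print(f"{instance_name} handling {msg.message_id}")
                receiver.complete_message(msg)         # no other instance saw this message
```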
Please post what you have finally implemented.

Ways to make a broker at Azure for anonymous HTTP API messages?

We need an API in Azure that would store messages sent to it (a broker) via HTTP in case my system (a Cloud Service) is unavailable or the DB is busy. It's not easy to change what exact message will be sent. What are the ways to build such a broker in Azure?
Service Bus Queue looks interesting but it needs Shared Access Signatures as far as I understand.
Another WebRole could be a solution, but it takes time to implement.
A Virtual Machine with some tool (MSMQ?) seems like an option, but it requires maintenance.
What do you think?
Your scenario is a prime candidate for applying a Queue-Centric Work Pattern.
From http://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/queue-centric-work-pattern:
If either your Worker(s) or Database become unavailable, messages are still placed in durable storage and consumed later.
The Task Queue can take the form of an Azure Storage Queue or a Service Bus Queue. In every great design, the least complex component that does the job wins. In this case that would be Azure Storage Queues, durable, reliable, very few moving parts. Unless you absolutely need precision FIFO ordering, in which case you go with Service Bus.
From https://msdn.microsoft.com/en-us/library/dn568101.aspx:
This solution offers the following benefits:
It enables an inherently load-leveled system that can handle wide variations in the volume of requests sent by application instances. The queue acts as a buffer between the application instances and the consumer service instances, which can help to minimize the impact on availability and responsiveness for both the application and the service instances (as described by the Queue-based Load Leveling pattern). Handling a message that requires some long-running processing to be performed does not prevent other messages from being handled concurrently by other instances of the consumer service.
It improves reliability. If a producer communicates directly with a consumer instead of using this pattern, but does not monitor the consumer, there is a high probability that messages could be lost or fail to be processed if the consumer fails. In this pattern messages are not sent to a specific service instance, a failed service instance will not block a producer, and messages can be processed by any working service instance.
It does not require complex coordination between the consumers, or between the producer and the consumer instances. The message queue ensures that each message is delivered at least once.
It is scalable. The system can dynamically increase or decrease the number of instances of the consumer service as the volume of messages fluctuates.
It can improve resiliency if the message queue provides transactional read operations. If a consumer service instance reads and processes the message as part of a transactional operation, and if this consumer service instance subsequently fails, this pattern can ensure that the message will be returned to the queue to be picked up and handled by another instance of the consumer service.
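To make that last point concrete, here is a rough sketch of a queue-centric worker over an Azure Storage Queue (the queue name and write_to_database are hypothetical): the message stays invisible only for the visibility timeout, so if this instance dies before deleting it, another instance will pick it up again later.

```python
# pip install azure-storage-queue
import time
from azure.storage.queue import QueueClient

CONN_STR = "<storage-account-connection-string>"   # placeholder
queue = QueueClient.from_connection_string(CONN_STR, queue_name="tasks")

while True:
    for msg in queue.receive_messages(visibility_timeout=60):  # hide each message for 60 s
        try:
            write_to_database(msg.content)   # hypothetical processing step
            queue.delete_message(msg)        # delete only once the work definitely succeeded
        except Exception:
            # Leave the message alone: it reappears after the visibility timeout
            # and another (or the same) worker instance will retry it.
            pass
    time.sleep(5)   # idle poll interval when the queue is empty
```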
Given you can't change the client, I would proxy the call. Recreate the API using the API Management service in Azure, and change the web URL to point to the API Management proxy.
The proxy can then easily delegate to a Function App, as Aravind mentioned in the comments on your question, by using API Management policies.

Background Worker or Worker with Service Bus for SQL Database access?

I'm building a game for Windows Phone 8 and would like to use Windows Azure SQL Database for storing my users' data (mostly scores and rankings).
I have been reading Azure's documentation on SQL Database and found this link which describes just the scenario I'm looking for (it's Scenario B in the picture): I want my clients (the game running in a user's windows phone) to get data from an SQL Server through a middle application also hosted on Windows Azure.
By reading the documentation further (personally I think it's really messy and hard to find what you're looking for in there), I've learned that I could use Cloud Services for this middle application; however, I'm not sure whether I should use a background worker that provides an HTTP API or a worker with a Service Bus Relay (I discovered that I can use Service Bus in WP8 in this link).
I've got a few questions that I couldn't find an answer to:
1) What would be the "standard" way to go in this case?
2) If both ways are acceptable, are there other advantages to using a Service Bus other than an easier way to connect and send messages to my middle application? What are the disadvantages?
3) Is a cloud service really what I'm looking for (and not just a VM with the middle application code running in it)?
It's difficult to answer this sort of question as there are lots of considerations. I don't believe there is necessarily a 'standard way'.
The Service Bus' relay service's purpose is to help traverse firewalls and NATs, not something that directly relates to your scenario, I suspect.
The Service Bus, though, also includes a messaging capability that provides queues, topics, and subscriptions you can use to exchange messages between clients, or between client and server.
You could use the phone client to write and read messages to/from queues. You would then have a worker role hosting your application logic and accessing the database as needed.
Some of the advantages of using messaging include load levelling, which helps handle peaks in traffic (at the expense of latency); helping to separate concerns; and allowing you to accept requests from clients while the backend is down, which helps with resiliency.
In theory they can also help you deliver messages to the client in the same fashion, by using a queue or subscription per client, but for a large number of clients this may become a management issue.
On the downside, you would have to work with what is a proprietary protocol and will need to understand the characteristics and limitations of Service Bus. You will need to manage the queues and topics over time. There will also be some increased latency, although that is typically not an issue. Finally, you will have to implement asynchronous messaging on the client side, which has advantages but is also harder to implement.
I would imagine that many architectures follow the Web API route by using a web role cloud service that exposes the API. The web role can then perform any business logic and connect to the database in the background.
A third option, which you didn't mention, is to use Windows Azure Mobile Services and implement your business logic as a service API there.
