I'm trying to use the request-reply pattern as described in the Microsoft docs (https://learn.microsoft.com/en-us/azure/service-bus-messaging/message-sessions#request-response-pattern):
"Multiple applications can send their requests to a single request queue, with a specific header parameter set to uniquely identify the sender application. The receiver application can process the requests coming in the queue and send replies on the session enabled queue, setting the session ID to the unique identifier the sender had sent on the request message. The application that sent the request can then receive messages on the specific session ID and correctly process the replies."
As I understand it, it should be possible to send a message from multiple applications, have the receiver handle each message, and send a response that is only picked up by the original sender.
Maybe I'm wrong, but something a bit like this.
This doesn't seem to be documented (sessions are only described for ordered message handling), and I've had no luck finding out how to implement it.
Does anybody have an idea/experience with this?
I am using .NET Core 3.1 with the Microsoft.Azure.ServiceBus package (4.1.2).
OK, it took some time to figure out, but I think I was able to achieve the setup from the diagram.
Here is the process, as it may help others (a code sketch follows the steps):
I have one normal queue (the PostNL queue) and one shared 'applications' queue that is session-enabled.
An application (e.g. App1) sends a message to the PostNL queue using a QueueClient, setting a unique SessionId on the message.
The receiver handles the incoming messages through QueueClient.RegisterMessageHandler.
The receiver processes the message and sends a reply to the applications queue using QueueClient.SendAsync (the reply message has its SessionId set to the sender's unique SessionId).
The sender accepts that session with session = SessionClient.AcceptMessageSessionAsync("UniqueSessionId").
The sender can then receive messages in this session using session.ReceiveAsync.
(All the other applications listening on the applications queue will not compete for these reply messages, as long as they use other session IDs.)
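For anyone who wants a concrete starting point, here is a minimal sketch of the steps above using Microsoft.Azure.ServiceBus 4.x. The queue names, connection string, and payload handling are placeholder assumptions; the only point is where the SessionId travels.

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class RequestReplySketch
{
    const string ConnectionString = "<service-bus-connection-string>"; // placeholder
    const string RequestQueue = "postnl";        // plain queue
    const string ReplyQueue = "applications";    // session-enabled queue

    // Sender side (e.g. App1): send a request tagged with a unique session id,
    // then wait for the reply on that session of the applications queue.
    static async Task<string> SendRequestAndWaitForReplyAsync(string payload)
    {
        var uniqueSessionId = Guid.NewGuid().ToString();

        var requestClient = new QueueClient(ConnectionString, RequestQueue);
        await requestClient.SendAsync(new Message(Encoding.UTF8.GetBytes(payload))
        {
            SessionId = uniqueSessionId // tells the receiver which session to reply on
        });
        await requestClient.CloseAsync();

        var sessionClient = new SessionClient(ConnectionString, ReplyQueue);
        var session = await sessionClient.AcceptMessageSessionAsync(uniqueSessionId);
        var reply = await session.ReceiveAsync(TimeSpan.FromSeconds(30));
        await session.CompleteAsync(reply.SystemProperties.LockToken);
        await session.CloseAsync();

        return Encoding.UTF8.GetString(reply.Body);
    }

    // Receiver side: handle requests from the PostNL queue and post each reply to the
    // session-enabled applications queue, copying the session id from the request.
    static void StartReceiver()
    {
        var requestClient = new QueueClient(ConnectionString, RequestQueue);
        var replyClient = new QueueClient(ConnectionString, ReplyQueue);

        requestClient.RegisterMessageHandler(async (message, token) =>
        {
            var replyBody = Encoding.UTF8.GetBytes("processed: " + Encoding.UTF8.GetString(message.Body));
            await replyClient.SendAsync(new Message(replyBody)
            {
                SessionId = message.SessionId // only the original sender's session sees this reply
            });
            await requestClient.CompleteAsync(message.SystemProperties.LockToken);
        },
        new MessageHandlerOptions(args => Task.CompletedTask) { AutoComplete = false });
    }
}
```

Error handling, timeouts, and disposing the clients are omitted here; in a real service you would also want to handle the case where no reply arrives within the receive timeout.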
Intro
We're developing a system to support multiple kinds of real-time messages (chat) and updates (activity notifications).
That is, user A can receive messages via WebSocket for:
receiving new chat messages
receiving updates for some activity, for example if someone likes their photo
and more.
We use one single WebSocket connection to send all these different messages to the client.
However, we also support multiple applications/clients being open by the user at the same time.
(i.e. user A is connected in their web browser and also from their mobile app at the same time).
Architecture
We have a "Hub" that stores a map of UserId to a list of active websocket sessions.
(user:123 -> listOf(session#1, session#2))
Each client, once its WebSocket connection is established, has its own Consumer which subscribes to a Pulsar topic named after the user id (e.g. the user:123 topic).
If user A is connected on both mobile and web, each client has its own Consumer for topic user:A.
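As a sketch, the Hub's registry might look something like this. WebSocketSession is a stand-in for whatever wraps a live socket; the real types aren't shown in the question.

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

// Placeholder for whatever object wraps one live websocket connection.
class WebSocketSession { }

class Hub
{
    // e.g. "user:123" -> [session#1, session#2]
    private readonly ConcurrentDictionary<string, List<WebSocketSession>> _sessionsByUser =
        new ConcurrentDictionary<string, List<WebSocketSession>>();

    public void Register(string userId, WebSocketSession session)
    {
        var sessions = _sessionsByUser.GetOrAdd(userId, _ => new List<WebSocketSession>());
        lock (sessions) sessions.Add(session);
    }

    public IReadOnlyList<WebSocketSession> SessionsFor(string userId)
    {
        if (!_sessionsByUser.TryGetValue(userId, out var sessions))
            return new WebSocketSession[0];
        lock (sessions) return sessions.ToArray();
    }
}
```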
When user A sends a new message from session #1 to user B, the flow is (a code sketch follows the list):
the user makes a REST POST request to send a message.
the service stores the new message in the DB.
the service sends a Pulsar message to topics user:B and user:A.
the service returns a 200 status code plus the created Message response.
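A minimal sketch of that request flow, assuming hypothetical IChatStore and IUserTopicPublisher abstractions (neither is named in the question) and ignoring validation and error handling:

```csharp
using System.Threading.Tasks;

// Hypothetical abstractions standing in for the DB and the Pulsar producer.
public class ChatMessage
{
    public string Id { get; set; }
    public string FromUser { get; set; }
    public string ToUser { get; set; }
    public string Text { get; set; }
}

public interface IChatStore { Task<ChatMessage> SaveAsync(string fromUser, string toUser, string text); }
public interface IUserTopicPublisher { Task PublishAsync(string userTopic, ChatMessage message); }

public class SendMessageHandler
{
    private readonly IChatStore _store;
    private readonly IUserTopicPublisher _publisher;

    public SendMessageHandler(IChatStore store, IUserTopicPublisher publisher)
    {
        _store = store;
        _publisher = publisher;
    }

    // Handles the REST POST: persist, fan out to both users' topics, return the created message.
    public async Task<ChatMessage> HandleAsync(string fromUser, string toUser, string text)
    {
        var message = await _store.SaveAsync(fromUser, toUser, text);   // store in DB
        await _publisher.PublishAsync("user:" + toUser, message);       // notify the receiver
        await _publisher.PublishAsync("user:" + fromUser, message);     // ...and the sender's other sessions
        return message;                                                 // 200 + created message
    }
}
```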
Problem
If user A has two sessions open (two clients/websockets) and they send a message from session #1, how can we make sure only session #2 gets the message?
Since user A has already received the 200 response with the created message in session #1, there's no need to send the message to them again via their Consumer.
I'm not sure if it's a Pulsar configuration, or perhaps our architecture is wrong.
"How can we make sure only session #2 gets the message?"
I'm going to address this at the app level.

Prepend a unique nonce (e.g. a GUID) to each message sent, and maintain a short list of recently sent nonces, aging them out so we never have more than, say, half a dozen. Upon receiving a message, check to see if we sent it, that is, check to see if its nonce is in the list. If so, silently discard it (a sketch of this follows below).

Equivalently, name each connection. You could roll a GUID just once when a new websocket is opened, or you could incorporate some of the websocket's addressing bits into the name. Prepend the connection name to each outbound message, and discard any received message which has a "sender" of "self".
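Here is a minimal sketch of the de-dup idea, shown in C# although the approach is language-agnostic; the message framing (nonce + "|" + body) and the cap of six tracked nonces are illustrative assumptions.

```csharp
using System;
using System.Collections.Generic;

// Tracks the nonces of recently sent messages so the sender can recognise
// (and silently drop) its own messages when they come back over the socket.
class SentMessageTracker
{
    private readonly Queue<string> _recentNonces = new Queue<string>();
    private const int MaxTracked = 6; // "half a dozen"

    // Call before sending: prepend a fresh nonce to the outbound payload.
    public string Tag(string body)
    {
        var nonce = Guid.NewGuid().ToString("N");
        _recentNonces.Enqueue(nonce);
        while (_recentNonces.Count > MaxTracked)
            _recentNonces.Dequeue(); // age out old nonces
        return nonce + "|" + body;
    }

    // Call on receive: true if we sent this message ourselves and should discard it.
    public bool IsOwnMessage(string taggedBody)
    {
        var nonce = taggedBody.Split('|')[0];
        return _recentNonces.Contains(nonce);
    }
}
```

The connection-name variant is the same helper with a single fixed identifier per websocket instead of a fresh nonce per message.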
With this de-dup'ing approach there's still some wasted network bandwidth; we can quibble about it if you wish. When the K-th websocket is created, we could instead create K topics, each excluding a different endpoint, but that sounds like more work than it's worth!
I'm trying to understand how to do two-way communication with Google Pub/Sub with the following architecture.
EDIT: I meant to say subscribers instead of consumers
I'm trying to support the following workflow:
The UI sends a request to an API service to kick off an async process.
The API service publishes the request to a topic to begin the process kick-off.
The consumer picks up the message and runs the async process service.
Once the async process service is done, it publishes to a process-complete topic.
Here is where I want the UI to pick up the process-complete message, and I'm trying to figure out the best approach.
So two questions:
Are multiple topics the preferred approach when you want to do two-way communication back to the client? Or is there a way to do this with a single topic with multiple subscriptions?
How should the consumer of the process-complete topic get the response back to the UI? Should the UI be the consumer of the subscription? Or should I send it back to the API service and publish a websocket message? Both approaches seem to have tradeoffs.
Multiple topics are going to be preferred in this situation, one for messages going to the asynchronous processors and then one for the responses that go back. Otherwise, your asynchronous processors are going to needlessly receive the response messages and have to ack them immediately, which is unnecessary extra delivery of messages.
With regard to getting the response back to the UI, the UI should not be the consumer of the subscription. In order to do that, you'd need every running instance of the UI to have its own subscription because otherwise, they would load balance messages across them and you couldn't guarantee that the particular client that sent the request would actually receive the response. The same would be true if you have multiple API servers that need to receive particular responses based on the requests that transmitted through them. Cloud Pub/Sub isn't really designed for topics and subscriptions to be ephemeral in this way; it is best when these are created once and all of the data is transmitted across them.
Additionally, having the UI act as a subscriber means that you'd have to have the credentials in the UI to subscribe, which could be a security issue.
You might also consider not using a topic for the asynchronous response. Instead, you could encode as part of the message the address or socket of the client or API server that expects the response. Then, the asynchronous processor could receive a message, process it, send a response to the address specified in the message, and then ack the message it received. This would ensure responses are routed to where they need to go and minimize the delivery of messages that subscribers just ack that they don't need to process, e.g., messages that were intended for a different API server.
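As a sketch of that last idea, assuming the Google.Cloud.PubSub.V1 client and a hypothetical replyTo attribute carrying the caller's callback address (the topic, subscription, and address names are illustrative):

```csharp
using System.Threading.Tasks;
using Google.Cloud.PubSub.V1;
using Google.Protobuf;

class ReplyAddressSketch
{
    // API server: publish the request with the address it wants the response sent to.
    static async Task PublishRequestAsync(string projectId, string topicId, string payload)
    {
        var publisher = await PublisherClient.CreateAsync(TopicName.FromProjectTopic(projectId, topicId));
        await publisher.PublishAsync(new PubsubMessage
        {
            Data = ByteString.CopyFromUtf8(payload),
            Attributes = { { "replyTo", "https://api-server-7.internal/process-complete" } } // hypothetical address
        });
    }

    // Asynchronous processor: do the work, send the response to the address carried in
    // the message, and only then ack the Pub/Sub message.
    static async Task RunProcessorAsync(string projectId, string subscriptionId)
    {
        var subscriber = await SubscriberClient.CreateAsync(
            SubscriptionName.FromProjectSubscription(projectId, subscriptionId));

        await subscriber.StartAsync(async (msg, cancellationToken) =>
        {
            var result = Process(msg.Data.ToStringUtf8());               // the long-running work
            await SendResponseAsync(msg.Attributes["replyTo"], result);  // e.g. an HTTP POST back
            return SubscriberClient.Reply.Ack;
        });
    }

    static string Process(string payload) => "done: " + payload;                          // placeholder
    static Task SendResponseAsync(string address, string result) => Task.CompletedTask;   // placeholder
}
```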
I am creating a chat application where I have a REST API and a Socket.IO server. What I want to do is: the user sends messages to the REST API, the API persists those messages in the database and then sends them to a RabbitMQ queue, and RabbitMQ delivers these messages to Socket.IO if the receiving user is online. Otherwise the message stays in the queue, and when the user comes online they retrieve the queued messages. However, I want to implement this the way WhatsApp does: messages are for a particular user, and that user should only receive the messages meant for them, i.e. I don't want to broadcast messages, I want only the particular user to receive them.
Chat should be a near-real-time application, and there are multiple ways of modeling such a thing. You can use HTTP polling or HTTP long polling, but some time ago a new application-level protocol was introduced: WebSocket. It would be a good fit for you, along with STOMP messages. Here you can check a brief example, and sending messages to specific users is also supported out of the box (example).
To send messages to specific sockets you can use rooms: io.to(room).emit(msg). Every socket is a part of a room with the same name as the socket id.
I wouldn't wait for the message to be written to the database before sending it out through Socket.IO; your API can do both at once. When a user connects, they can retrieve their messages from the database and then listen for new ones on their socket.
Running ServiceStack 4.0.54 at the moment, and what I want to accomplish is to provide clients a service whereby they can send one-way HTTP requests.
The service itself is simple: for each message, open a DB connection and save some value.
What I don't want is to get a flood of requests within a minute and have to open up 1000 connections to the DB.
Ideally the client would send their requests over HTTP and fill a queue. ServiceStack would then, every X milliseconds or once a MAX number of messages has been queued, send them to the service.
This way we don't have messages queued up for too long, and we only process X messages at a time.
I've looked through http://docs.servicestack.net/messaging but something isn't clicking.
The InMemoryTransientMessageService doesn't buffer, it processes the message as soon as it receives it. You'd need to use one of the other MQ Servers to have the requests published to dedicated queues in the configured MQ Broker which are then processed serially outside the context of the HTTP Request, the concurrency of which can be controlled using the threadCount when registering the handler.
When you have an MQ Server registered, any requests sent using the SendOneWay API (or the /oneway pre-defined route) are automatically published to the configured MQ Server.
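A minimal sketch of what that registration could look like, assuming Redis MQ (the ServiceStack.Server package) and a hypothetical SaveValue request DTO; any of the supported MQ brokers are registered the same way:

```csharp
using Funq;
using ServiceStack;
using ServiceStack.Messaging;
using ServiceStack.Messaging.Redis;
using ServiceStack.Redis;

// Hypothetical request DTO and service that saves a value to the DB.
public class SaveValue : IReturnVoid
{
    public string Value { get; set; }
}

public class SaveValueService : Service
{
    public void Any(SaveValue request)
    {
        // open a DB connection and persist request.Value here
    }
}

public class AppHost : AppHostBase
{
    public AppHost() : base("One-way MQ demo", typeof(SaveValueService).Assembly) { }

    public override void Configure(Container container)
    {
        container.Register<IRedisClientsManager>(c => new RedisManagerPool("localhost:6379"));

        // Requests sent via SendOneWay (or /oneway) get published to this broker and are
        // processed outside the HTTP request; the thread count caps the concurrency.
        var mqServer = new RedisMqServer(container.Resolve<IRedisClientsManager>(), retryCount: 2);
        mqServer.RegisterHandler<SaveValue>(ExecuteMessage, noOfThreads: 2);
        container.Register<IMessageService>(mqServer);
        mqServer.Start();
    }
}
```

As far as I know there is no built-in flush-every-X-milliseconds batching; the handler is invoked per message, with the registered thread count limiting how many are processed at once.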
I am writing a client/server-based chat. The server is the central component and handles all the incoming and outgoing messages. The clients are the chat users: they see the chat in a frame and can also write chat messages. These messages are sent to the server, and the server in turn updates all clients.
My problem is synchronisation of the clients. Since the server is multi-threaded, messages can be received from clients while updates (in the form of messages) have to be sent out as well. Since each client is updated in its own thread, there is no guarantee that all clients will receive the same messages. We have a synchronisation problem.
How do I solve it?
I have messed with timestamps and a buffer, but that is not a good solution either, because there is no guarantee that after assigning a timestamp the message is put into the buffer immediately afterwards.
I should add that I do not know the clients. That is, I only have one open connection in each thread on the server. I do not have an array of clients or anything like that to keep track of all the clients.
I suggest that you implement a queue for each client proxy (the object that manages the communication with each client).
Each iteration of your server object's work (on its own thread):
1. It reads messages from the queues of all client proxies first.
2. It decides whether it needs to send out any messages, based on its internal logic and the incoming messages.
3. It prepares and puts any outgoing messages into the queues of all its client proxies.
The client proxy thread's work schedule is this:
1. Read from the communication channel.
2. Write to the client-proxy-to-server queue (if any messages were received).
3. Read from the server-to-client-proxy queue.
4. Write to the communication channel to the client (if needed).
You may have to have a mutex on each queue; a rough sketch follows below.
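A rough sketch of that layout, shown in C# since the question's language isn't stated; ConcurrentQueue stands in for the per-queue mutex, and the actual socket reads and writes are elided:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

// One proxy per connected client; it owns the two queues between itself and the server.
class ClientProxy
{
    public ConcurrentQueue<string> ToServer { get; } = new ConcurrentQueue<string>();
    public ConcurrentQueue<string> ToClient { get; } = new ConcurrentQueue<string>();

    // Runs on the client proxy's own thread.
    public void RunLoop()
    {
        while (true)
        {
            // 1. Read from the communication channel (socket read, elided).
            // 2. Write anything received to the ToServer queue.
            // 3. Drain the server-to-client-proxy queue.
            while (ToClient.TryDequeue(out var outgoing))
            {
                // 4. Write to the communication channel to the client (elided).
            }
            Thread.Sleep(10); // placeholder pacing
        }
    }
}

// Runs on the server's own thread: drain all inbound queues, then fan out to all clients.
class ChatServer
{
    private readonly List<ClientProxy> _proxies = new List<ClientProxy>();

    public void RunLoop()
    {
        while (true)
        {
            foreach (var proxy in _proxies)
            {
                while (proxy.ToServer.TryDequeue(out var incoming))
                {
                    // Internal logic decides what to send; here we simply broadcast,
                    // so every client is handed the same messages in the same order.
                    foreach (var target in _proxies)
                        target.ToClient.Enqueue(incoming);
                }
            }
            Thread.Sleep(10); // placeholder pacing
        }
    }
}
```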
Hope that helps