Following up on the question, since I can't comment.
I followed Brandon Yarbrough's instructions and everything is configured, but the problem is that I am not receiving anything. The script says:
Listening for messages on projects/[project_id]/subscriptions/projects/[project_id]/subscriptions/subtestbucketthhh
I am having serious problems making message delivery fail-proof in a chat system.
With several Node.js servers and live WebSocket communication to the clients, I use RabbitMQ to call back the correct consumer on a specific node.
I declare my queues as {durable: true, prefetch:1, expires: 2*3600*1000, autoDelete: true}
consumerOption is {noAck: false, exclusive: false}
Once I receive a message from the server, I call back the client, get the confirmation, and use message.ack(false).
Sometimes a message appears with a pending ACK in Rabbit and, as I would expect, the consumers stop being called back.
Here is my overall strategy:
1- When the socket disconnects, I recover the queue using queue.recover() during the reconnection/connection (more frequent).
2- When I send a message to the server and do not receive it back, I send a message to the server asking it to recover the queue.
3- I use the socket callback function to send the ack confirmation. On the server, I use message.ack(false). The server keeps a hashmap {[ackCode: string]: RabbitMessage}, and I send the ackCode back to the server so it can retrieve the correct message and ack it.
4- If the client has not received any message for 2 minutes, it asks the server to recover the queue.
This last step should not exist, but even with it, sometimes I send a recover-queue request to the server, the server executes the command, and nothing happens: the chat freezes.
These are very difficult events to debug. I am using a TypeScript library that has had no commits in three years, and this could be one of the causes.
Regarding the strategy: is it correct? Any idea what I could be facing?
What I've learned, and why I think I couldn't use Rabbit to solve the specific problem mentioned in the original post.
The domain: a "chat" where message order is very important (some messages form chains) and we must be sure that the message will be delivered if/when the client is online.
The problem: we have several Node.js servers, and sockets are spread among them. Sockets drop all the time, and it is common for a client connection that was on the first server to reconnect on another. We don't use cookies, and session affinity by IP won't handle the issue.
Limitations: I can't activate a consumer that is already active on another server, so if a customer's queue is tied to server 1, I can't activate it on server 2. And all the messages that need to be sent are tied to this specific queue.
Another limitation is that I don't have an easy way to consume a queue, re-queue messages, know in advance how many unacked messages the queue holds, aggregate them, and bulk-send them via socket.
The solution: I am no longer using {noAck: false}; I control the acks in a Redis queue instead. Thus I use Rabbit purely as pub/sub, to wake up the correct consumer to send the message over the socket. When Rabbit wakes me up, the first thing I do is put the message at the end of a Redis queue. When I send messages via socket, I always start from the beginning of the queue, regardless of the message that just woke me up. I send the message, wait for the callback event, and if it is not OK, I re-queue the messages.
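The "always drain from the head" discipline described above can be sketched like this (a plain array stands in for the Redis list here; in the real setup the push/shift calls would be Redis RPUSH/LPOP, and `sendAndWait` is a hypothetical function that resolves true when the client's socket callback confirms):

```javascript
// Sketch of the drain-from-the-head discipline described above.
const queue = []; // oldest message at index 0 (the head)

// Rabbit wake-up: append the new message at the tail (like RPUSH).
function onRabbitMessage(message) {
  queue.push(message);
}

// Send messages starting from the head, regardless of which message
// triggered the wake-up, so ordering is preserved.
async function drain(sendAndWait) {
  while (queue.length > 0) {
    const message = queue[0];
    const ok = await sendAndWait(message);
    if (!ok) {
      // Client did not confirm: leave the message at the head and stop;
      // a later wake-up or recovery will retry from here.
      return false;
    }
    queue.shift(); // confirmed: remove from the head (like LPOP)
  }
  return true;
}
```

Because the ack state lives in the queue itself rather than in Rabbit, any server that can reach Redis can take over the drain loop.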
After decoupling the pub/sub from the queue/ack control, I can now easily move my Rabbit pub/sub from one server to another (declaring it with socket.id instead of the client queue), with no concern about losing any message. I am also now capable of much more advanced operations on my queue.
As my use case doesn't let me use the full power of exchanges/bindings (I have complex routing rules), I am evaluating the possibility of moving from Rabbit to Redis pub/sub, but in that case I would still keep the pub/sub separate from the queue.
After more than a month trying to make Rabbit work in this scenario, I think I was using a good technology for the wrong use case. It is much simpler now.
I am using Artemis ActiveMQ for internal asynchronous processes of my application.
All the connection logic is handled by Spring Integration.
I've encountered a low-disk-space scenario on the Artemis server. This resulted in the Artemis server blocking my message producers without any warning (except a warning in the Artemis server log). However, it could be any other blocking scenario.
The application continued to produce messages, without being aware that the messages aren't written to the queue.
How can my application (the producer) be informed about such an infrastructure issue, so I can throw an exception or log an error that will be visible on my application's end?
If your application sends messages asynchronously then there's no way for it to know about problems sending the message (except for problems that happen specifically on the client). Sending messages async is "fire-and-forget"; the client just sends them and doesn't really care about what happens to them. You'd need to send them synchronously in order to get any indication of a problem on the broker.
Like ActiveMQ, the Artemis server supports producer flow control (I've personally never used it). While the ActiveMQ documentation explicitly states that it also applies to async producers provided you set the producer window size on the connection factory, the Artemis documentation says nothing about it. But the windowing concept is the same, so you should probably give it a shot.
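The fire-and-forget vs. synchronous distinction above can be illustrated in plain JavaScript (this is not the Artemis/Spring API; `sendToBroker` is a hypothetical transport call that resolves when the broker acknowledges):

```javascript
// Synchronous-style send: wait for the broker's acknowledgement, so a
// blocked broker surfaces as a timeout the producer can log or throw on.
async function sendSync(sendToBroker, message, timeoutMs) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error('broker did not confirm in time')), timeoutMs);
  });
  try {
    return await Promise.race([sendToBroker(message).then(() => 'ok'), timeout]);
  } finally {
    clearTimeout(timer); // don't leave the timer pending after success
  }
}

// Fire-and-forget: the promise is deliberately not awaited, so a broker
// that silently blocks the producer goes completely unnoticed here.
function sendAsync(sendToBroker, message) {
  sendToBroker(message).catch(() => { /* swallowed */ });
}
```

The low-disk-space scenario in the question is exactly the first case: with `sendAsync`-style publishing, the caller returns immediately and never learns the broker stopped accepting writes.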
I'm running ServiceStack 4.0.54 at the moment, and what I want to accomplish is to provide clients a service whereby they can send one-way HTTP requests.
The service itself is simple. For the message, open a DB connection and save some value.
What I don't want to happen is to get a flood of requests within a minute and have to open up 1000 connections to the DB.
Ideally, the client would send their requests over HTTP and fill the queue. ServiceStack would then, every X milliseconds or once a MAX number of messages has been queued, send them to the service.
This way we don't have messages queued up for too long, and we only process X number of messages at a time.
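The time-or-size batching described above can be sketched generically (this is not ServiceStack code; `createBatcher` and its options are hypothetical names for illustration):

```javascript
// Buffer incoming requests and flush either every flushMs milliseconds
// or as soon as maxBatch messages have been queued, whichever comes first.
function createBatcher(processBatch, { maxBatch = 100, flushMs = 500 } = {}) {
  let buffer = [];
  let timer = null;

  function flush() {
    if (timer) { clearTimeout(timer); timer = null; }
    if (buffer.length === 0) return;
    const batch = buffer;
    buffer = [];
    processBatch(batch); // e.g. one DB connection for the whole batch
  }

  return {
    add(message) {
      buffer.push(message);
      if (buffer.length >= maxBatch) flush();              // size trigger
      else if (!timer) timer = setTimeout(flush, flushMs); // time trigger
    },
    flush,
  };
}
```

The time trigger bounds how long a message can sit in the buffer, and the size trigger bounds how many are processed at once, which is exactly the pair of guarantees the question asks for.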
I've looked through http://docs.servicestack.net/messaging but something isn't clicking.
The InMemoryTransientMessageService doesn't buffer, it processes the message as soon as it receives it. You'd need to use one of the other MQ Servers to have the requests published to dedicated queues in the configured MQ Broker which are then processed serially outside the context of the HTTP Request, the concurrency of which can be controlled using the threadCount when registering the handler.
When you have an MQ Server registered, any requests sent using the SendOneWay API (or the /oneway pre-defined route) are automatically published to the configured MQ Server.
I am writing a client/server-based chat. The server is the central component and handles all the incoming and outgoing messages. The clients are the chat users. They see the chat in a frame and can also write chat messages. These messages are sent over to the server. The server in turn updates all clients.
My problem is synchronisation of the clients. Since the server is multi-threaded, messages can be received from clients and updates (in the form of messages) have to be sent out as well. Since each client is updated in its own thread, there is no guarantee that all clients will receive the same messages. We have a synchronisation problem.
How do I solve it?
I have messed with timestamps and a buffer, but this is not a good solution either, because there is no guarantee that, after a timestamp is assigned, the message will be put into the buffer immediately afterwards.
I shall add that I do not know the clients. That is, I only have one open connection in each thread on the server. I do not have an array of clients or something like that to keep track of all the clients.
I suggest that you implement a queue for each client proxy (that's the object that manages the communication with each client).
Each iteration of your server object's work (on its own thread):
1. It reads messages from the queues of all client proxies first
2. Decides if it needs to send out any messages based on its internal logic and incoming messages
3. Prepares and puts any outgoing messages into the queues of all its client proxies.
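One iteration of that server loop can be sketched like this (JavaScript rather than the asker's Java, and plain arrays standing in for thread-safe queues; the internal logic here is the simplest possible: broadcast everything):

```javascript
// One pass of the server thread: drain every proxy's inbound queue, then
// fan the resulting messages out to every proxy's outbound queue in one
// fixed order, so all clients see the same message sequence.
function serverIteration(proxies) {
  // 1. Read messages from the queues of all client proxies.
  const incoming = [];
  for (const proxy of proxies) {
    while (proxy.toServer.length > 0) incoming.push(proxy.toServer.shift());
  }
  // 2-3. Decide what to send (here: broadcast every chat message) and put
  // it on every proxy's queue.
  for (const message of incoming) {
    for (const proxy of proxies) proxy.toClient.push(message);
  }
}
```

The ordering guarantee comes from the single server thread: because it alone decides the sequence and writes the same sequence into every outbound queue, clients can no longer diverge.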
The client proxy thread's work schedule is this:
1. Read from the communication channel.
2. Write to the queue from client proxy to server (if received any messages).
3. Read from the queue from server to client proxy.
4. Write to communication channel to client (if needed).
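One pass of that proxy schedule might look like this (again a JavaScript sketch with arrays as queues; `readFromSocket` and `writeToSocket` are hypothetical non-blocking I/O helpers):

```javascript
// One pass of the client proxy's schedule (steps 1-4 above).
function proxyIteration(proxy, readFromSocket, writeToSocket) {
  // 1-2. Read from the communication channel; enqueue anything received
  // on the proxy-to-server queue.
  const received = readFromSocket();
  for (const message of received) proxy.toServer.push(message);
  // 3-4. Drain the server-to-proxy queue and write it out to the client.
  while (proxy.toClient.length > 0) writeToSocket(proxy.toClient.shift());
}
```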
You may have to have a mutex on each queue.
Hope that helps