I have two identical sites which will consume RabbitMQ messages using the new RabbitMQ client. Ideally, the producer should be able to designate the site either by queue name or by routing key. The former I can do as a Publish parameter, but the latter I have no access to. Furthermore, on the service side, the consumer appears only able to subscribe to convention-based queue names, i.e. mq.myrequest.inq, and I don't seem to be able to take advantage of the routing key.
Is there a way I can publish and subscribe using my own routing key, or register the handler based on an explicit queue name, e.g. mq.myrequest.site1.inq?
There isn't. ServiceStack's RabbitMQ support is convention-based around type names and is opinionated towards functioning as a work queue. It was designed to be config-free and simple to use, so it automatically takes care of the details of which exchanges, routing keys and queue names to use.
If you need advanced or custom configuration, it's best to use the underlying RabbitMQ.Client directly instead.
I am now working on an application that saves data into a database using a REST API. The basic flow is: REST API -> object -> save to database. I want to introduce a queue into the application, with the idea that the producer and the consumer are both part of the one application described above.
Is it possible for the Node.js application to act as both producer and consumer of the queue? Knowing that Node.js is a single-threaded language, do I have any choice other than creating two applications: one producing to the queue, and a second one actively waiting for messages on the queue and saving them to the database?
Also, the requirement here would be for the application to process, on restart, any item that hasn't been acknowledged on the queue. That also makes me think that the 'two applications' architecture is the best idea here.
Thank you for the help.
Yes, Node.js can do that, and it is well suited to I/O-intensive use cases like this one. The question is what you are trying to achieve: message queues are meant to make different applications communicate with each other, so if what you need is an in-process event bus, a broker is total overkill. There are easier and more efficient ways to propagate messages between decoupled components of the same Node.js app; one of them is EventEmitter, which lets your components collaborate in a pub/sub fashion.
If you are convinced that an AMQP broker is your solution, you just need to:
Define a "producer" class that publishes data on an exchange myExchange
Define a "consumer" class that declares a queue myQueue
Create a binding at application startup between myExchange and myQueue, based on some routing key. Then, when a message is received by the "consumer", you need to acknowledge it after the DB save. Once a message is acked it is destroyed, since it has already been consumed. After an error, you can decide to recover the message via NACK. A minimal sketch follows this list.
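Here is a minimal sketch of those steps using the amqplib package; the connection URL, the order.save routing key, and the saveToDb function are assumptions for illustration:

    const amqp = require('amqplib');

    async function start() {
      const conn = await amqp.connect('amqp://localhost');
      const ch = await conn.createChannel();

      // Declare the exchange and queue, and bind them with a routing key
      await ch.assertExchange('myExchange', 'direct', { durable: true });
      await ch.assertQueue('myQueue', { durable: true });
      await ch.bindQueue('myQueue', 'myExchange', 'order.save');

      // Producer side: called by the REST layer
      function publish(order) {
        ch.publish('myExchange', 'order.save',
          Buffer.from(JSON.stringify(order)), { persistent: true });
      }

      // Consumer side: ack only after the DB save succeeds;
      // nack with requeue so a failed message can be recovered
      await ch.consume('myQueue', async (msg) => {
        try {
          await saveToDb(JSON.parse(msg.content.toString())); // placeholder
          ch.ack(msg);
        } catch (err) {
          ch.nack(msg, false, true);
        }
      });

      return publish;
    }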
There are Node.js libraries that make this code easier, such as Rascal.
Short answer: YES, and use two separate connections for publishing and consuming.
Is it possible for the NodeJS application to act as both producer and consumer of the queue?
I would even say that it is a good use case, matching extremely well with the Node.js philosophy and threading mechanism.
Knowing that Node.js is a single-threaded language, do I have any choice other than creating two applications: one producing to the queue, and a second one actively waiting for messages on the queue and saving them to the database?
You can have one application handling both; just be aware that if your client publishes too fast for the server to handle, RabbitMQ can apply back pressure on the TCP connection, and consuming on a back-pressured TCP connection would greatly affect consumer performance.
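For instance, with the amqplib package (the connection URL and queue name are assumptions), a sketch of the two-connection layout might look like:

    const amqp = require('amqplib');

    async function main() {
      // Two separate connections: broker back pressure can throttle
      // the publishing connection without stalling the consuming one
      const pubConn = await amqp.connect('amqp://localhost');
      const subConn = await amqp.connect('amqp://localhost');
      const pubCh = await pubConn.createChannel();
      const subCh = await subConn.createChannel();

      await pubCh.assertQueue('work', { durable: true });
      await subCh.assertQueue('work', { durable: true });

      // Consumer: save to the database here, then acknowledge
      await subCh.consume('work', (msg) => {
        subCh.ack(msg);
      });

      // Producer: publish on its own connection
      pubCh.sendToQueue('work', Buffer.from('payload'), { persistent: true });
    }

    main().catch(console.error);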
Spring Cloud Stream is based on at-least-once delivery. This means that in some rare cases a duplicate message can arrive at an endpoint.
Does Spring Cloud Stream keep a buffer of already received messages?
The Idempotent Receiver pattern in the Enterprise Integration Patterns book suggests:
Design a receiver to be an Idempotent Receiver, one that can safely receive the same message multiple times.
Does Spring Cloud Stream control duplicate messages in consumers?
Update:
A paragraph from the Spring Cloud Stream documentation says:
4.5.1. Durability
Consistent with the opinionated application model of Spring Cloud Stream, consumer group subscriptions are durable. That is, a binder implementation ensures that group subscriptions are persistent and that, once at least one subscription for a group has been created, the group receives messages, even if they are sent while all applications in the group are stopped.
Anonymous subscriptions are non-durable by nature. For some binder implementations (such as RabbitMQ), it is possible to have non-durable group subscriptions.
In general, it is preferable to always specify a consumer group when binding an application to a given destination. When scaling up a Spring Cloud Stream application, you must specify a consumer group for each of its input bindings. Doing so prevents the application’s instances from receiving duplicate messages (unless that behavior is desired, which is unusual).
I think your assumptions about the responsibility of the spring-cloud-stream framework are incorrect.
Spring-cloud-stream, in a nutshell, is a framework responsible for connecting and adapting producers/consumers provided by the developer to the message broker(s) exposed by the spring-cloud-stream binder (e.g., Kafka, Rabbit, Kinesis etc.).
So connecting to a broker, receiving a message from the broker, deserialising it, invoking user code, serialising the message and sending it back to the broker is within the scope of the framework's responsibility. You can look at it as purely infrastructure.
What you're describing is more of an application concern since the actual receiver is something that user would develop as part of the spring-cloud-stream development experience, hence responsibility for idempotence would reside with such user.
Also, on top of that, most brokers already handle idempotency (in a way) by ensuring that a particular message is delivered only once. That said, if someone sends an identical message to such a broker, the broker will have no idea that it is a duplicate, so the requirement for idempotency and/or deduplication is still valid. But as you can see, it is not as straightforward given the number of factors in play: your understanding of idempotence could be different from mine, hence our approaches could differ as well.
One last thing (partially to prove my last point): "can safely receive the same message multiple times" is all the pattern states, but what does "safely" really mean to you vs. me vs. some other person?
If you are concerned about the case where the application receives and processes a message from the broker but crashes before it acknowledges the message, that can happen. The Spring Cloud Stream app starters provide support for auto-configuring a persistent message metadata store, which backs Spring Integration's IdempotentReceiverInterceptor. An example of this is in the SFTP source app starter. By default, the SFTP source uses an in-memory metadata store, so it would not survive a restart, but it can be customized to use a persistent store.
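Outside of Spring, the idempotent-receiver idea can be sketched in Node.js terms: record each message id in a store that survives restarts, and skip ids already seen. This is only an illustration, not the Spring mechanism; the Redis key prefix and handleMessage are assumptions, and it naively records the id before processing:

    const { createClient } = require('redis');

    // Wrap a handler so duplicate deliveries of the same message id
    // are ignored. Uses Redis as the persistent metadata store.
    async function makeIdempotent(handleMessage) {
      const redis = createClient();
      await redis.connect();

      return async function receive(msg) {
        // SET ... NX succeeds only the first time this id is seen
        const first = await redis.set('handled:' + msg.id, '1', { NX: true });
        if (first === null) return; // duplicate delivery: skip
        await handleMessage(msg);
      };
    }

A production version would record the id only after (or atomically with) successful processing, which is the ordering the Spring Integration interceptor manages for you.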
So I'd like to do the following: every N seconds, get X messages from a session-enabled queue (peek-lock) and then send them together (in a single request) to the next processing point. Here are the options I've come up with so far:
"Get messages from a queue" action
Seems like it requires me to hardcode a session id beforehand(?), which is not that handy.
"Batch receiver" logic app
It's still in preview.
Custom trigger
Seems like it will work, but requires extra coding.
Any suggestions on how to effectively achieve it via Logic Apps with stuff available today?
You don't need Sessions specifically to retrieve a certain number of messages in a batch... just read 10 messages, then do whatever processing you need.
If you need to also retrieve the messages in order, then yes, use a Session enabled Queue where all callers use the same SessionId.
Keep in mind, the SessionId is an arbitrary application value, so you can use the same value as the queue name if you want. I don't see this as any kind of hurdle; it's just how it works.
You can use a Recurrence Trigger at whatever interval you need.
Sessions are primarily for grouping messages. The SessionId can be any arbitrary application value, whether fixed (HighPriority/LowPriority) or determined at runtime, such as a GUID, if you're doing correlation among specific related messages. Now that I think about it, the FIFO side effect seems mostly there to support correlation scenarios.
One way to address this is to set the maximum concurrency on the logic app.
Go to the settings of the Service Bus receiving action, then enable concurrency with a degree of parallelism of 10.
I have no clue if it's better to ask this here, or over on Programmers.SE, so if I have this wrong, please migrate.
First, a bit about what I'm trying to implement. I have a node.js application that takes messages from one source (a socket.io client), and then does processing on the message, which might result in zero or more messages back out, either to the sender, or other clients within that group.
For the processing, I would like to essentially just shove the message into a queue, then it works its way through various message processors that might kick off their own items, and eventually, the bit running socket.io is informed "Hey, send this message back"
As a concrete example, say a user signs into the service. That sign-in message is placed in the queue, where the authorization processor gets it, does its thing, then places a message back in the queue saying the client has been authorized. This goes back to the socket.io socket that is connected to the client, along with other clients that might be interested. It can also go to other subsystems that might want to do more processing on authorization (looking up user info, sending more info to the client based on their data, etc.).
If I wanted strong coupling, this would be easy, but I tried that before and it just devolves into a mess of spaghetti code that's very fragile, and I would like to avoid that. Another wrench in the setup is that this should be cluster-able, which is where the real problem comes in. There might be more than one, say, authorization processor running, but an authorization message should be processed only once.
So, in short, I'm looking for a pattern/technique that will allow me to, essentially, have multiple "groups" of subscribers for a message, and the message will be processed only once per group.
I thought about maybe having each instance of a processor generate a unique name that would be used as a list in Redis. This name would then be registered with some sort of dispatch handler and placed into a set for that group of subscribers. Then, when a message arrives, the dispatcher pulls a random member out of that set and pushes the message onto that list. While it seems like this would work, it seems somewhat over-complicated and fragile.
The core problem is I've never designed a system like this, so I'm not even sure the proper terms to use or look up. If anyone can point me in the right direction for this, I would be most appreciative.
I think what you're describing is similar to the https://www.getbridge.com/ service. I tried it but ended up writing my own based on ZeroMQ; it lets you register services (req -> <- rep) and channels, which are pub/sub workers.
As for the design, I used client -> broker -> services & channels, all plug-and-play using auto-discovery: the services register their schema with the brokers, which open a TCP connection so that brokers on other servers can communicate with that broker group's services. Internal services and clients then connect via Unix sockets or IPC channels, whichever is preferred.
I ended up wrapping around the redis publish/subscribe functions a bit to do this. Each type of message processor gets a "group name", and there can be multiple instances of the processor within that group (so multiple instances of the program can run for clustering).
When publishing a message, I generate an incremental ID, store the full message in a string key under that ID, then publish the ID.
On the receiving end, the first thing the subscriber does is attempt to add the message ID it just got from the publisher into a set of received messages for that group with sadd. If sadd returns 0, the message has already been grabbed by another instance, and it just returns. If it returns 1, the full message is pulled out of the string key and sent to the listener.
Of course, this relies on Redis executing commands atomically on a single thread, which I imagine will continue to be the case.
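A sketch of that scheme with the node-redis v4 client (the key names and the channel/group arguments are illustrative):

    const { createClient } = require('redis');

    // Publisher: store the payload under an incremental id, then
    // publish just the id on the channel
    async function publish(client, channel, payload) {
      const id = await client.incr(channel + ':counter');
      await client.set(channel + ':msg:' + id, JSON.stringify(payload));
      await client.publish(channel, String(id));
    }

    // Subscriber: sAdd is atomic, so only one instance in the group
    // "claims" each id; the others see 0 and return immediately
    async function subscribe(client, channel, group, handler) {
      const sub = client.duplicate();
      await sub.connect();
      await sub.subscribe(channel, async (id) => {
        const claimed = await client.sAdd(channel + ':' + group + ':seen', id);
        if (claimed === 0) return; // another instance grabbed it
        const raw = await client.get(channel + ':msg:' + id);
        if (raw) await handler(JSON.parse(raw));
      });
    }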
What you might be looking for is an AMQP implementation, where you can bind queues to custom exchanges and implement a pub/sub model.
RabbitMQ is a popular AMQP implementation with lots of client libraries, including one for Node.js.
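With a fanout exchange, the "once per group" requirement falls out naturally: each processor group declares its own named queue bound to the exchange, every group gets a copy of each message, and instances within a group share their queue as competing consumers. A sketch with the amqplib package (the URL and names are illustrative):

    const amqp = require('amqplib');

    async function joinGroup(group, handler) {
      const conn = await amqp.connect('amqp://localhost');
      const ch = await conn.createChannel();
      await ch.assertExchange('events', 'fanout', { durable: true });
      // One queue per group; all instances of the group consume from
      // it, so each message is handled once per group
      await ch.assertQueue(group, { durable: true });
      await ch.bindQueue(group, 'events', '');
      await ch.consume(group, async (msg) => {
        await handler(JSON.parse(msg.content.toString()));
        ch.ack(msg);
      });
    }

    // e.g. every authorization instance calls
    // joinGroup('authorization', authorize)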
We are thinking of separate queues for:
Request (RequestQueue)
Response (ResponseQueue)
Scenario:
Worker role will putMessage to RequestQueue, e.g. GetOrders.
Third party will monitor RequestQueue. If they see a GetOrders request, they will getMessage, process it, and put the response in ResponseQueue.
Question:
If I putMessage to RequestQueue, I would like to get the results back from ResponseQueue. Is there an easy way to achieve this, and how?
Thank you.
No, this is not possible. If you put a message in a queue, you must pop the message from the same queue (it will not magically appear in any other queue). Perhaps if you explained more why you think you need two separate queues here for push/pop, there might be a more expansive answer and suggestion.
EDIT: Perhaps I misunderstood your intent. I guess I don't get the question now - can you help clarify. You seem to be asking how to put a message on one queue, acknowledge it by putting another message on another queue, and have someone read the acknowledgment from the second queue? What is the question here? I should point out that you won't want some 3rd party to read directly from a Windows Azure queue as that would require sharing the master storage key with them (a non-starter). Perhaps you are looking for how to have 3rd parties read from a queue?
EDIT 2: Sounds like you want messages consumed by a 3rd party. Windows Azure queues are probably not a good fit here, as I mentioned, for security reasons (you would need to share the master key). Instead, you could either layer a WCF service over the queue (using queues via a proxy) or use the queueing from the Service Bus, which allows you to have separate credentials. Using the Service Bus capability might be the right choice here in terms of simplicity. Take a look here for demos.
Have a worker of some sort monitor the question queue, then post an answer to the answer queue. Interface out the queue managers and you shouldn't have any problems using any sort of queue tech. Also, the worker doesn't really need to use a queue for answers. A sketch follows the caveats below.
Caveats:
Worker service has access to both queues
Each queue item contains a serialized foreign key to identify itself.
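A pseudocode-style sketch of that flow in Node.js; the queues interface, the handleRequest function, and the requestId field are all assumptions standing in for whichever queue tech you interface out:

    const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

    // Third-party worker: poll the request queue, post correlated answers
    async function worker(queues) {
      while (true) {
        const msg = await queues.get('RequestQueue'); // e.g. { requestId, type: 'GetOrders' }
        if (!msg) { await sleep(1000); continue; }
        const result = await handleRequest(msg); // placeholder for real work
        await queues.put('ResponseQueue', { requestId: msg.requestId, result });
      }
    }

    // Caller: match responses to requests by the serialized key
    async function awaitResponse(queues, requestId) {
      while (true) {
        const msg = await queues.get('ResponseQueue');
        if (msg && msg.requestId === requestId) return msg.result;
        if (msg) await queues.put('ResponseQueue', msg); // not ours, put it back
        await sleep(1000);
      }
    }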