How to "seed" a rabbot queue - node.js

We have a distributed application made up of various Docker containers that communicate internally via RabbitMQ (using Rabbot). This is all working well, including using RabbitMQ as a container-independent timer via message TTLs and dead-lettering. Each of the containers asserts (configures) the full topology on startup; whichever starts first actually sets it up.
However, we now need to seed our timers exactly once, when the RabbitMQ topology is first set up. Is there any way to seed a RabbitMQ queue with a message on creation, or to detect whether the queue already existed on the RabbitMQ server, i.e. whether the configuration call actually created the queue?
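One possibility is to probe the queue with a passive declare before asserting the topology: if the probe fails, the queue did not exist yet and this container should seed it. A minimal sketch, using plain amqplib rather than Rabbot (so the API differs from what the app already uses; the queue name and payload are placeholders). Note that check-then-seed is not atomic, so two containers starting at exactly the same moment could still race.

const amqp = require("amqplib");

async function queueExists(conn, name) {
  const ch = await conn.createChannel();
  try {
    await ch.checkQueue(name); // passive declare: rejects if the queue is missing
    await ch.close();
    return true;
  } catch (err) {
    return false; // the failed check already closed the channel
  }
}

async function setup() {
  const conn = await amqp.connect("amqp://localhost");
  const existed = await queueExists(conn, "timers");
  // ... assert the full topology here, as each container already does ...
  if (!existed) {
    const ch = await conn.createChannel();
    ch.sendToQueue("timers", Buffer.from(JSON.stringify({ seed: true })));
    await ch.close();
  }
}

setup().catch(console.error);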

Related

Looking for a Node.js queue system that allows only one consumer at a time (similar to Single Active Consumer in RabbitMQ)

My situation requires that the next message only starts processing after the previous one finishes. (The message-processing function is an async function.)
RabbitMQ fits my needs with its Single Active Consumer functionality combined with a prefetch of 1. From the docs on Single Active Consumer:
A queue is declared and some consumers register to it at roughly the same time.
The very first registered consumer becomes the single active consumer: messages are dispatched to it and the other consumers are ignored.
The single active consumer is cancelled for some reason or simply dies. One of the registered consumers becomes the new single active consumer and messages are now dispatched to it. In other terms, the queue fails over automatically to another consumer.
However, RabbitMQ itself cannot be installed via npm.
Are there any alternative queue systems that do this and can be installed via npm?
As I said in my reply to @paulsm4: the reason I'm not considering RabbitMQ at the moment is that I plan to deploy my app to Heroku and would like to keep things as simple as possible, without a third-party RabbitMQ add-on. However, I will be using Redis anyway, so any library that depends on Redis is fine.
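Since Redis is available, one npm-installable option is bull. A minimal sketch (queue name and handler are made up): with a single worker process and a concurrency of 1, jobs are processed strictly one after another. Note this mimics single-active-consumer behaviour only while exactly one worker process runs; bull distributes jobs across processes rather than failing over.

const Queue = require("bull");
const queue = new Queue("sequential-jobs", process.env.REDIS_URL);

// The next job is picked up only after the previous async processor resolves.
queue.process(1, async (job) => {
  await handleMessage(job.data); // hypothetical async message handler
});

queue.add({ text: "first" });
queue.add({ text: "second" }); // processed only after "first" finishes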

Avoid consuming the same events in parallel from EventHub

I'm using:
the Azure platform to run a microservice-architecture software solution;
the microservices use Azure Event Hub for communicating in special cases;
Kubernetes with 2 clusters (primary, secondary);
per application namespace, there is 1 event-listener pod running per cluster, consuming from the Event Hub.
The last point is the relevant one for my current problem:
The load balancers share traffic between the primary and secondary clusters. This means that 2 event-listener pods are running per application at the same time. They just react to events, but sometimes they consume the same event from the Event Hub, which causes duplicated notification mails.
So finally my question is: how can I avoid reading the same event twice at the same time? I thought the Event Hub offset is always increasing, but starting at the same moment is not guaranteed.
You will need to use a separate consumer group per pod to avoid epoch errors.
That said, both pods will then read the same events, so you have two options.
Have an active-passive setup: one consumer group, one pod that reads the events and delegates the work for each event. If that pod fails, a health/heartbeat mechanism brings the second pod online.
Have an active-active setup: two consumer groups, two active pods. You will need to implement idempotent processing.
Idempotent processing, where processing the same message multiple times produces the same result, is good practice regardless of the approach. It would also allow you to replay a batch of events in which one errored without adverse effects on the integrity of your data.
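A minimal sketch of idempotent handling, assuming an ioredis client and events that carry a unique id (both assumptions): a duplicate delivery is detected by an atomic SET ... NX and skipped.

const Redis = require("ioredis");
const redis = new Redis(process.env.REDIS_URL);

async function handleEvent(event) {
  // SET ... NX returns null when the key already exists, i.e. a duplicate delivery.
  const firstSeen = await redis.set(`seen:${event.id}`, 1, "EX", 86400, "NX");
  if (!firstSeen) return; // skip side effects such as notification mails
  await sendNotificationMail(event); // hypothetical side effect
}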
I would opt for the first option; a single Event Hub reader will process thousands of events per second and pass the work off to your microservices.
If you have lower volumes of messages and need guaranteed message processing, then Service Bus may be a better choice, since messages there can be locked, completed and abandoned.
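For illustration, a minimal @azure/service-bus sketch of the lock/complete/abandon flow (the queue name and mail helper are placeholders):

const { ServiceBusClient } = require("@azure/service-bus");

const sb = new ServiceBusClient(process.env.SERVICEBUS_CONNECTION_STRING);
const receiver = sb.createReceiver("notifications"); // peek-lock is the default mode

receiver.subscribe(
  {
    async processMessage(msg) {
      try {
        await sendMail(msg.body);            // hypothetical side effect
        await receiver.completeMessage(msg); // done: remove from the queue
      } catch (err) {
        await receiver.abandonMessage(msg);  // failed: release the lock for a retry
      }
    },
    async processError(args) {
      console.error(args.error);
    },
  },
  { autoCompleteMessages: false }
);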

Node.js application acting as producer and consumer

I am working on an application that saves data into a database via a REST API. The basic flow is: REST API -> object -> save to database. I wanted to introduce a queue into the application, with the idea of the producer and the consumer both being part of this one application.
Is it possible for a Node.js application to act as both producer and consumer of the queue? Knowing that Node.js is single-threaded, do I have any choice other than creating two applications: one producing to the queue, and the second one actively waiting for messages on the queue and saving them to the database?
Also, a requirement here is that on restart the application must process any item on the queue that hasn't been acknowledged. That also makes me think that the 'two applications' architecture is the best idea here.
Thank you for the help.
Yes, Node.js is able to do that, and it is well suited to any I/O-intensive use case. The real point here is: what are you trying to achieve? Message queues are meant to make different applications communicate; if what you need is an in-process event bus, they are total overkill. There are easier and more efficient ways to propagate messages between decoupled components of the same Node.js app; one of them is EventEmitter, which lets your components collaborate in a pub/sub fashion.
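For example, a minimal in-process pub/sub sketch (the event name and payload are made up):

const { EventEmitter } = require("events");
const bus = new EventEmitter();

// "Consumer" component: subscribes to a topic without knowing the producer.
bus.on("user.saved", (user) => {
  console.log("run side effects for", user.id);
});

// "Producer" component: publishes; every subscriber of the topic is invoked.
bus.emit("user.saved", { id: 42 });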
If you are convinced that an AMQP broker is your solution, you just need to:
Define a "producer" class that publishes data on an exchange myExchange.
Define a "consumer" class that declares a queue myQueue.
Create a binding at application startup between myExchange and myQueue, based on some routing key. Then, when a message is received by the "consumer", acknowledge it after the db save; once acked, the message is destroyed, since it has been consumed. After an error you can instead return the message to the queue with a NACK (see the sketch below).
There are Node.js libraries that make this code easier, such as Rascal.
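The steps above, sketched with plain amqplib (which Rascal wraps); the exchange, queue, routing key and db helper are the placeholders named in the list:

const amqp = require("amqplib");

async function main() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertExchange("myExchange", "direct", { durable: true });
  await ch.assertQueue("myQueue", { durable: true });
  await ch.bindQueue("myQueue", "myExchange", "some.routing.key");

  // Consumer: ack only after the db save succeeds; nack requeues on failure.
  await ch.consume("myQueue", async (msg) => {
    if (msg === null) return; // consumer was cancelled
    try {
      await saveToDb(JSON.parse(msg.content.toString())); // hypothetical db call
      ch.ack(msg);
    } catch (err) {
      ch.nack(msg, false, true); // requeue for another attempt
    }
  });

  // Producer: publish through the exchange with the bound routing key.
  ch.publish("myExchange", "some.routing.key", Buffer.from(JSON.stringify({ hello: "world" })));
}

main().catch(console.error);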
Short answer: yes, and use two separate connections for publishing and consuming.
Is it possible for a Node.js application to act as both producer and consumer of the queue?
I would even say that it is a good use case, matching extremely well with the Node.js philosophy and threading model.
Knowing that Node.js is single-threaded, do I have any choice other than creating two applications: one producing to the queue, and the second one actively waiting for messages on the queue and saving them to the database?
You can have one application handling both; just be aware that if your client publishes too fast for the server to handle, RabbitMQ can apply back pressure on the TCP connection, and consuming on a back-pressured TCP connection would greatly hurt consumer performance. Hence the advice to use separate connections.
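A short sketch of the two-connection pattern (amqplib again; the queue name is a placeholder): broker back pressure on the publishing connection then cannot stall deliveries to the consumer.

const amqp = require("amqplib");

async function start() {
  // Publishing and consuming get their own TCP connections on purpose.
  const pubConn = await amqp.connect("amqp://localhost");
  const conConn = await amqp.connect("amqp://localhost");
  const pubCh = await pubConn.createChannel();
  const conCh = await conConn.createChannel();

  await conCh.assertQueue("jobs");
  await conCh.consume("jobs", (msg) => {
    if (msg === null) return;
    // ... save to the database, then ack ...
    conCh.ack(msg);
  });

  // REST handlers publish via pubCh, never via conCh.
  pubCh.sendToQueue("jobs", Buffer.from(JSON.stringify({ id: 1 })));
}

start().catch(console.error);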

How to access a worker's queued requests?

I'm implementing a web server using Node.js which must serve a lot of concurrent requests. As Node.js processes requests one by one, it keeps them in an internal queue (in libuv, I guess).
I also want to run my web server using the cluster module, so there will be one request queue per worker.
Questions:
If a worker dies, how can I retrieve its queued requests?
How can I put the retrieved requests into other workers' queues?
Is there any API to access a live worker's request queue?
Regarding question 3: I want to keep the queued requests somewhere such as Redis (if possible), so that in case of a server crash, failure or even a hardware restart I can retrieve them.
As you mentioned in the tags that you are already using (or want to use) Redis, you can use a Redis-based queue manager to do all this work for you.
Check out https://github.com/OptimalBits/bull (or its alternatives).
bull has the concept of a queue: you add jobs to the queue and listen to the same queue from different processes/VMs. bull will send each job to only one listener, and you can control how many jobs each listener processes at the same time (the concurrency level).
In addition, if a job fails to run (in other words, the queue listener threw an error), bull can hand the same job to a different listener.
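A minimal bull sketch of that setup (the Redis connection string and handler are assumptions): jobs live in Redis, so work that was in flight when a process died is picked up again rather than lost.

const Queue = require("bull");
const requestQueue = new Queue("requests", process.env.REDIS_URL);

// Producer side (e.g. the web server): persist each piece of work as a job,
// with up to 3 attempts in case processing throws.
requestQueue.add({ url: "/api/users", body: { name: "Ada" } }, { attempts: 3 });

// Worker side: process up to 5 jobs concurrently; bull's stalled-job recovery
// re-dispatches jobs whose worker crashed mid-processing.
requestQueue.process(5, async (job) => {
  return handleRequest(job.data); // hypothetical handler
});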

Develop a clock and workers in node.js on heroku

I'm working on a service that needs to analyze data from social media networks every five minutes for different users. I'm developing it in node.js and I will deploy it on Heroku.
According to this article on the Heroku website, the best way to do that is to separate the logic of the scheduler from the logic of the worker. The idea is to have one dyno dedicated to scheduling tasks, to avoid duplication; this dyno instructs a farm of workers (n dynos, as needed) to do the tasks.
Here is the Procfile for this architecture:
web: node web.js
worker: node worker.js
clock: node clock.js
The problem is how to implement this in node.js. I googled it, and the suggestion is to use a message queue system (like IronMQ, RabbitMQ or CloudAMQP). But I'm trying to keep my code and app simple, with as few add-ons as possible.
The question is: is there a way to communicate directly from my scheduler (clock) to the worker dynos?
Thanks for your answers.
Heroku dynos do not have fixed IP addresses, so there is no way to open a direct connection between them. That's why you need a separate server instance with a static IP (or other fixed endpoint) to act as a go-between.
You have at least two viable options: a RabbitMQ-type message queue, or a stripped-down version using a Redis pub/sub feed. I generally use the latter because it's quick, simple, and sufficiently robust for all my needs (e.g. if a message gets lost once in a blue moon, it's no big deal). If, however, it is essential that you never lose a message, use a full-blown message queue like RabbitMQ.
Setting up the Redis implementation is very straightforward. There are several Redis add-ons (I use RedisCloud) with free and inexpensive plans. When you provision one, you get an endpoint to connect to and a password. Then you connect your web dyno(s) and worker dyno(s) to the Redis instance so that your web app publishes tasks to a channel and the worker subscribes to that channel.
If you need the web app to hear back after task completion, create a second channel on which the worker publishes task-completion messages and the web app listens for them.
To avoid duplicated tasks, have the workers take tasks off a Redis list (RPUSH from the clock, blocking BLPOP in the workers) rather than a plain pub/sub channel: a pub/sub message is delivered to every subscriber, whereas a list element is popped by exactly one worker.
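A sketch of that list-based variant with ioredis (an assumed client; the key name and task handler are made up):

const Redis = require("ioredis");

// Clock dyno: enqueue a task every five minutes.
const producer = new Redis(process.env.REDIS_URL);
setInterval(() => {
  producer.rpush("tasks", JSON.stringify({ kind: "analyze", at: Date.now() }));
}, 5 * 60 * 1000);

// Worker dyno: block until a task arrives, process it, repeat.
const consumer = new Redis(process.env.REDIS_URL);
async function workLoop() {
  for (;;) {
    const [, payload] = await consumer.blpop("tasks", 0); // 0 = block indefinitely
    await analyze(JSON.parse(payload)); // hypothetical task handler
  }
}
workLoop().catch(console.error);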
If I understood correctly, you want to run the clock as one app and the workers as separate apps? Sure, there is a direct way: open a connection from the clock app to the worker apps.
For example, have every worker open a client socket connection to the clock. The clock can then communicate with them and relay orders.
Or use WebRTC. That way the workers talk to the clock, but they can also talk to each other.
Or expose an (authenticated) HTTP(S) REST endpoint on the worker where it receives tasks: POST /tasks would create a task on the worker (see the sketch below). If the task is short, the worker can reply right away, so the clock knows the job is done. If it's a longer task, the worker can acknowledge it and later call an endpoint on the clock, something like PUT /tasks/32, to say it's done.
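A hypothetical Express sketch of that worker endpoint (the route and task runner are made up):

const express = require("express");
const app = express();
app.use(express.json());

app.post("/tasks", (req, res) => {
  // Short task: do the work and reply right away so the clock knows it's done.
  runTask(req.body) // hypothetical task runner
    .then((result) => res.status(200).json(result))
    .catch((err) => res.status(500).json({ error: err.message }));
});

app.listen(process.env.PORT || 3000);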
Or, even more directly, open a raw net connection to the clock on worker start (or the other way around), or use dgram and send UDP messages between worker and clock.
In any case, I also believe the people suggesting an MQ like RabbitMQ are right: it is much better to just push jobs/tasks onto a queue, which can then distribute tasks as needed and, based on the unacked count on the job queue, let you spin up more workers when needed.
But your question is very broad, so to get more specific answers you could provide a few more details.
This might be helpful.
http://blog.andyjiang.com/intermediate-cron-jobs-with-heroku/
Basically you require the worker file directly into the clock file.
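In other words, something like this (the file and export names are assumptions):

// clock.js: requires the worker code directly and triggers it on a timer.
const { runWorker } = require("./worker"); // hypothetical export from worker.js
setInterval(runWorker, 5 * 60 * 1000);     // every five minutes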
I solved it in an easy way with the following three steps:
Set credit card information in my Heroku account;
Installed the Heroku Scheduler add-on (you can use the command heroku addons:create scheduler:standard --app <yourAppName>);
Set up the script to run as a scheduled job.
More info here or here.
