job-launching-gateway and persistent queue - spring-integration

I'm working on a project with spring-boot, spring-batch and spring-integration.
I have already configured spring-integration to start a spring-batch job when a new message arrives.
I send a message to the Spring Integration channel attached to the JobLaunchingGateway and, for each message, the JobLaunchingGateway tries to launch the job on a thread from its TaskExecutor.
Let the channel be backed by a persistent queue (e.g., ActiveMQ).
Let the task-executor pool size be equal to 2.
I would like to configure the system so that when all executor threads are busy, new messages are not consumed by the JobLaunchingGateway but remain on the persistent queue.
Is it possible? Are there any best practices?
Any feedback will be appreciated.
Thanks in advance.

You can add a queue limit to the TaskExecutor and use the `CallerBlocksPolicy` for the `RejectedExecutionHandler`.
However, in the event of a failure, you will lose the task(s) in the queue.
It's generally better to use a message-driven channel, set the concurrency to two, and run the jobs on the listener container thread rather than handing them off to a TaskExecutor.
The additional benefit is that if the job fails, or the machine crashes, you won't lose the request. Once you hand a message over to the TaskExecutor, it is gone from the queue.
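A minimal sketch of that message-driven arrangement, assuming an XML configuration with ActiveMQ behind a JMS inbound adapter; the bean names, queue name, and the transformer that builds the JobLaunchRequest are placeholders:

```xml
<!-- At most two listener threads consume from the queue; further
     messages stay on the broker until a thread frees up. -->
<int-jms:message-driven-channel-adapter
        connection-factory="connectionFactory"
        destination-name="job.requests"
        concurrent-consumers="2"
        max-concurrent-consumers="2"
        acknowledge="transacted"
        channel="jobRequests"/>

<!-- DirectChannel: everything downstream, including the job itself,
     runs on the listener container thread. -->
<int:channel id="jobRequests"/>

<!-- Hypothetical transformer turning the message into a JobLaunchRequest. -->
<int:transformer input-channel="jobRequests" output-channel="jobLaunches"
                 ref="jobLaunchRequestTransformer"/>

<batch-int:job-launching-gateway request-channel="jobLaunches"
                                 reply-channel="jobResults"
                                 job-launcher="jobLauncher"/>
```

Because the session is transacted and no TaskExecutor is involved, a failure before the job completes rolls the message back onto the ActiveMQ queue for redelivery.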

Related

How to create multiple single-threaded subscribers for a Spring publish-subscribe channel

I have a Spring Integration publish-subscribe channel.
Now, I create a subscriber to this channel whenever a user logs in, so if there are 10 users, there will be 10 subscribers.
I want all these subscribers to be single-threaded. Is there any way to achieve that? Please advise. Thanks.
Well, if you configure a PublishSubscribeChannel with an executor, each subscriber gets its own copy of the message on a separate thread from that executor. If your subscriber flow does not switch threads itself, all of its steps are performed on that thread.
It is not good practice to spawn as many threads as you have subscribers. Now imagine you have 1,000, or even a million...
It is better to configure the PublishSubscribeChannel with a reasonably sized ThreadPoolTaskExecutor and let subscribers compete for its threads, rather than watch your system die from too many threads.
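A sketch of that configuration; the channel id, bean name, and pool sizes are hypothetical and should be tuned for your load:

```xml
<int:publish-subscribe-channel id="loginEvents" task-executor="pubSubExecutor"/>

<!-- Bounded pool: all subscribers compete for these threads instead of
     each getting a thread of its own. -->
<bean id="pubSubExecutor"
      class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <property name="corePoolSize" value="4"/>
    <property name="maxPoolSize" value="8"/>
    <property name="queueCapacity" value="100"/>
</bean>
```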

I want to re-queue into RabbitMQ when there is an error, with values added to the queue payload

I have a peculiar problem to solve.
I have configured RabbitMQ as the message broker and it is working, but when processing fails in the consumer I currently acknowledge with a nack, which blindly re-queues whatever payload originally arrived. I want to add some more fields to the payload and re-queue it in a simple way.
For Example:
When the consumer gets payload data from RabbitMQ, it processes it and tries to do some work based on it on multiple host machines; if one machine is not reachable, I need to process that part alone after some time.
Hence I'm planning to re-queue the failed data, with one more field (the machine name), back to the queue, so it will be processed again by the existing logic.
How can I achieve this? Can someone help me?
When a message is requeued, it will be placed at its original position in the queue, if possible. If that is not possible (due to concurrent deliveries and acknowledgements from other consumers when multiple consumers share a queue), the message will be requeued to a position closer to the queue head. This way you will end up in an infinite loop (consuming and requeuing the same message). To avoid this, you can positively acknowledge the message and publish it to the queue with the updated fields. Publishing the message puts it at the end of the queue, so you will be able to process it again after some time.
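A minimal sketch of the ack-and-republish step, assuming the amqplib client and JSON payloads; the field names are placeholders, and publishing back to the delivery's routing key assumes the default exchange:

```javascript
// Add the failed machine name (and a timestamp) to the payload
// before republishing.
function addRetryInfo(payload, machineName) {
  return { ...payload, failedMachine: machineName, retriedAt: Date.now() };
}

// Inside the consumer callback (channel and msg come from amqplib):
function handleFailure(channel, msg, machineName) {
  const original = JSON.parse(msg.content.toString());
  const updated = addRetryInfo(original, machineName);

  // Positively ack the original so it is NOT requeued in place...
  channel.ack(msg);
  // ...and publish the updated payload to the end of the same queue.
  channel.sendToQueue(msg.fields.routingKey,
                      Buffer.from(JSON.stringify(updated)));
}
```

The existing consumer logic then sees the message again later, with the extra fields available to route the retry to the right machine.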
Reference https://www.rabbitmq.com/nack.html

How do I log or monitor JMS messages placed on a queue?

We are on JBoss EAP 6.4.
There is a JMS queue with a consumer that immediately consumes everything placed on the queue.
I want to check what is placed on the queue, but to my tool (Hermes JMS) the queue always looks empty.
I tried configuring fine-level logging for the "org.hornetq" category, but messages placed on the queue are not logged at any debug level.
Is there a way to see what is placed on the queue when it is immediately consumed?
Thanks,
Valery
If you don't need to see the content but only the flow, you could execute the read-resource CLI command, or create a custom script that reads the queue counters at certain intervals and calculates the flow. You would just need to parse the output and do the math; a simple and efficient tool of your own.
CLI command to read a JMS queue:
/subsystem=messaging/hornetq-server=default/jms-queue=testQueue/:read-resource(recursive=false,proxies=false,include-runtime=true,include-defaults=true)
Reference for CLI
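The "read and calculate" idea boils down to differencing the messages-added runtime counter between two polls. The runCli helper in the commented loop below is hypothetical (it would shell out to jboss-cli.sh); only the arithmetic is concrete:

```javascript
// Throughput from two successive readings of the messages-added counter.
function messageFlow(prevAdded, currAdded, intervalSeconds) {
  return (currAdded - prevAdded) / intervalSeconds;
}

// Hypothetical polling loop:
// let lastAdded = 0;
// setInterval(() => {
//   const res = runCli('/subsystem=messaging/hornetq-server=default' +
//                      '/jms-queue=testQueue/:read-resource(include-runtime=true)');
//   console.log('msg/s:', messageFlow(lastAdded, res['messages-added'], 10));
//   lastAdded = res['messages-added'];
// }, 10000);
```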

How to check if there are no messages to consume in an AMQ queue with stompit in node.js

I am using the stompit package in node.js to connect to an AMQ queue and subscribe to messages. I used the ConnectFailover class to create the connection and the ChannelPool class to create a pool.
The problem I am facing is that once the connection is made, if there is no message in the queue the client stays connected.
What I need is a way to disconnect if there are no messages to read from the queue. I don't see any such option in the stompit documentation.
There is no way to do that with STOMP as per this issue. As a general rule, brokers like AMQ rarely allow consumers to inspect queue properties like message count.
Unless you can somehow leverage JMX from your node.js code, the easiest way would be to create a timer with client.disconnect() as a callback and wait for an amount of time suitable for your system. Whenever a message is consumed, reset the timer.
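One way to structure that timer so the idle logic is easy to test; the stompit calls appear only as comments, and the idle threshold is an assumed tuning knob:

```javascript
// Tracks message activity; the caller disconnects when idle too long.
class IdleTracker {
  constructor(idleMs) {
    this.idleMs = idleMs;
    this.lastMessageAt = Date.now();
  }
  // Call this on every consumed message to reset the idle window.
  touch(now = Date.now()) {
    this.lastMessageAt = now;
  }
  shouldDisconnect(now = Date.now()) {
    return now - this.lastMessageAt >= this.idleMs;
  }
}

// Hypothetical usage with stompit:
// const tracker = new IdleTracker(30000);
// channel.subscribe(headers, (err, msg) => { tracker.touch(); /* ... */ });
// setInterval(() => {
//   if (tracker.shouldDisconnect()) client.disconnect();
// }, 5000);
```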

How to automatically assign a worker for message processing?

After the master has forked the workers and now wants to start sending messages to the worker processes, is specifying a worker before sending a message the only way to pass the message? The documentation suggests so.
const worker = cluster.fork();
worker.send('hi there');
If yes, what is the scheduling policy all about? Is there a way where we could:
master.sendToWorker('Hi there!');
and it automatically selects the worker according to the default/configured algorithm?
The scheduling policy is for handling incoming connections. If you have 3 workers that are express applications, when a user connects, only one worker will handle the request. It will either be round-robin (the default) or left to the OS's choice, so that does not give you much flexibility.
Now, that does not help with your request, which is to send messages from the master. The correct solution depends on the nature of the messages you'd like to send.
If you are sending a message to make the worker start a task, messages might not be the best solution, you might like to use a job queue instead. But if you'd like to use messages anyways, your master could simply take note of available workers and arbitrarily send the message to a free one, removing it from the available workers until it reports to have finished.
You could simply use a round-robin implementation of your own; in one line of code it would look like this:
workersList[++messageCount % workersList.length].send('message');
If you wanted to use the native policy, you could have your workers listen on a specific port and have your master send messages to that port on localhost. It should work, but you'll have to implement your own messaging system...
IMO, if you want to send a message, you know who you want to send it to. If you want to send a message to a "random" recipient, it may be because a message might not be the appropriate way to communicate for that scenario.
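The "take note of available workers" idea above might be sketched like this; the worker objects only need a send() method, and the "done" handshake is an assumed convention between master and worker:

```javascript
// Master-side pool: dispatch to a free worker, hand it the next pending
// message (or return it to the free list) when it reports done.
class WorkerPool {
  constructor(workers) {
    this.free = [...workers];
    this.pending = [];
  }
  dispatch(message) {
    const worker = this.free.shift();
    if (!worker) {                 // nobody free: hold the message
      this.pending.push(message);
      return false;
    }
    worker.send(message);
    return true;
  }
  markDone(worker) {               // call when the worker signals 'done'
    const next = this.pending.shift();
    if (next) worker.send(next);
    else this.free.push(worker);
  }
}
```

With real cluster workers, you would call pool.markDone(worker) from worker.on('message', ...) when the worker reports completion.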
