How do I log or monitor JMS messages placed on a queue? - jboss6.x

We are on JBoss EAP 6.4.
There is a JMS queue with a consumer that immediately consumes everything placed on it.
I want to check what is placed on the queue, but to my tool (Hermes JMS) the queue always appears empty.
I tried configuring fine-level logging for the "org.hornetq" category, but messages placed on the queue are not logged at any debug level.
Is there a way to see what is placed on the queue when it is immediately consumed?
Thanks,
Valery

If you don't need to see the message content but only the flow, you could execute the read-resource CLI command, or create a custom script that reads the queue counters at set times and calculates the flow. You would just need to parse the output and do the math: a simple and efficient tool of your own.
CLI command to read jms queue
/subsystem=messaging/hornetq-server=default/jms-queue=testQueue/:read-resource(recursive=false,proxies=false,include-runtime=true,include-defaults=true)
Reference for CLI
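To turn those readings into a throughput figure, a small script can poll the queue's messages-added counter and compute the rate between samples. A minimal sketch, assuming the jboss-cli.sh path, queue name, and output parsing match your setup:

```python
import re
import subprocess

def flow_rate(samples):
    """Messages per second between the first and last (timestamp, messages_added) sample."""
    (t0, c0), (t1, c1) = samples[0], samples[-1]
    return (c1 - c0) / (t1 - t0)

def read_messages_added(cli="jboss-cli.sh"):
    # Hypothetical invocation; adjust controller host/port and queue name to your setup.
    cmd = [cli, "-c", "--command=/subsystem=messaging/hornetq-server=default/"
           "jms-queue=testQueue:read-attribute(name=messages-added)"]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    match = re.search(r'"result"\s*=>\s*(\d+)L?', out)
    return int(match.group(1))

# Example: 60 messages added over one minute -> 1.0 msg/s
samples = [(0.0, 100), (60.0, 160)]
print(flow_rate(samples))  # 1.0
```

Calling read_messages_added periodically and feeding the samples to flow_rate gives you the monitoring you asked about, without ever removing a message from the queue.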

Related

How do I make sure my application only processes 1 message at a time?

I have a spring-integration application, which uses a message-driven-channel-adapter to receive an XML message from a WebSphere MQ queue and then passes it to a Spring Integration channel for storing into a database.
How do I make sure only 1 message is processed at a time, i.e. no message can be processed until the preceding message has reached a specific endpoint (in my case, a service-activator)?
If you don't use any queue or executor channels, and you leave the adapter's concurrency at its default of 1, the entire flow will run on the single container thread.
Hence, only one message will be processed at a time.
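As a sketch, assuming the spring-integration-jms namespace (queue, channel, and bean names here are placeholders), a configuration like this keeps everything on one consumer thread:

```xml
<!-- Single consumer (the default); the downstream service-activator
     runs on the same listener container thread. -->
<int-jms:message-driven-channel-adapter id="mqIn"
        connection-factory="connectionFactory"
        destination-name="inboundQueue"
        channel="inChannel"/>

<!-- Direct channel: no hand-off to another thread. -->
<int:channel id="inChannel"/>

<int:service-activator input-channel="inChannel"
        ref="dbWriter" method="store"/>
```

Because the channel is a direct channel and there is only one consumer, the adapter cannot receive the next message until dbWriter.store() returns.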

How to know if the queue has already been read fully using PEEK method in Azure Service Bus

I am using Azure Service Bus REST API to receive messages.
The requirement is to have a scheduled job read messages from Azure Service Bus Queues and forward them for processing. If processed successfully, then delete them from the Queue; otherwise keep them in the Queue to be processed in the next scheduled job. I am using the Peek-Lock Message (Non-Destructive Read) method (https://learn.microsoft.com/en-us/rest/api/servicebus/peek-lock-message-non-destructive-read).
The problem I am facing is, inside my loop, how do I know that I have read the queue fully, so that I do not re-read the same queue again?
Your requirement is somewhat problematic.
If processed successfully, then delete them from the Queue or keep them in the Queue to be processed in the next scheduled job.
Successful processing should always result in message completion. Otherwise, you're asking for trouble. When processing messages in peek-lock mode, the message is locked for up to 5 minutes. It's your responsibility to complete it if the processing is successful. If it wasn't completed, that's a sign the processing wasn't successful and it should be read again given your requirement. Do not leave successfully processed messages in the queue.
The problem i am facing is inside my loop, how to know that i have read the queue fully so that i do not re-read the same queue again.
You shouldn't be concerned about this. Read messages and process them. If processing fails, the message will reappear; otherwise it should be removed. If you want to handle idempotency, i.e. ensure that a message is not processed more than once, then upon successful processing and prior to completion store the message ID (assuming it's unique) in a data store, and validate every new message against that data store.
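A sketch of that receive loop, with an in-memory set standing in for the durable data store and a caller-supplied process() callback; the real code would complete or delete messages via the peek-lock REST operations or an SDK:

```python
def run_once(messages, processed_ids, process):
    """Drain one batch of (message_id, body) pairs with an idempotency check.

    processed_ids stands in for a durable data store keyed by message ID.
    Returns the bodies that were actually processed this run.
    """
    handled = []
    for message_id, body in messages:
        if message_id in processed_ids:
            continue                   # already processed: just complete/delete it
        process(body)                  # your actual work
        processed_ids.add(message_id)  # record BEFORE completing the message
        # ...then complete/delete the message via the Unlock/Delete REST call
        handled.append(body)
    return handled

store = set()
first = run_once([("a", 1), ("b", 2)], store, process=lambda b: None)
second = run_once([("a", 1), ("c", 3)], store, process=lambda b: None)  # "a" redelivered
print(first, second)  # [1, 2] [3]
```

Note that the ID is recorded before completion: if the job crashes in between, the message reappears but the store check prevents double processing.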

Get messages from a queue only retrieves a single message

I've created an Azure Service Bus and a new logic app using a manual trigger. I then add a "Get messages from a queue (peek-lock)" action to the app and set the maximum message count to "20".
I then create 5 new messages in my queue manually and trigger my new logic app. When I look at the execution of my app, I see that only ONE message was retrieved (and I checked that 4 messages are still in my queue).
It seems like the count of "20" is not being honored. I also checked the settings of my Service Bus queue, and the "Maximum Delivery Count" is set to "10". That should at least give me batches of 10 (instead of 20).
What am I missing?
It is not simple to answer without more details, but I hope this helps.
If you are using a WebJob, make sure the associated AzureWebJobsStorage account is created in Classic mode, not in Resource Manager mode. Otherwise your WebJob can crash in less than 20 seconds without reading all queue messages.
Does your logic app involve a ServiceBusTrigger? If so, it may be that the first call to the method marked with the trigger fails with an exception, and the other messages are not read.
Let me know if I misunderstood any details.

job-launching-gateway and persistent queue

I'm working on a project with spring-boot, spring-batch and spring-integration.
I have already configured spring-integration to start a spring-batch job when a new message arrives.
I send a message to the spring-integration channel attached to the JobLaunchingGateway and, for each message, the JobLaunchingGateway tries to start a new job on the TaskExecutor.
Let the channel be backed by a persistent queue (ActiveMQ, for example).
Let the task-executor pool-size be equal to 2.
I would like to configure the system so that, when the executor pool is already fully used, new messages are not consumed by the JobLaunchingGateway but remain on the persistent queue.
Is it possible? Are there any best practices?
Any feedback will be appreciated.
Thanks in advance.
You can add a queue limit to the TE and use the `CallerBlocksPolicy` for the `RejectedExecutionHandler`.
However, in the event of a failure, you will lose the task(s) in the queue.
It's generally better to use a message-driven channel, set the concurrency to two and run the jobs on the listener container thread rather than using a TE to run the job.
The additional benefit is if the job fails, or the machine crashes, you won't lose that request. Once you hand over to the TE, the message is gone from the queue.
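`CallerBlocksPolicy` is a Spring class for Java thread pools; as a language-neutral illustration of the same idea (submission blocks the caller when the workers and the bounded queue are full, instead of rejecting the task), here is a Python analog using a semaphore:

```python
from concurrent.futures import ThreadPoolExecutor
from threading import Semaphore

class CallerBlocksExecutor:
    """submit() blocks the caller until a slot (worker or queued task) frees up."""
    def __init__(self, workers=2, queue_limit=2):
        self._pool = ThreadPoolExecutor(max_workers=workers)
        self._slots = Semaphore(workers + queue_limit)

    def submit(self, fn, *args):
        self._slots.acquire()  # caller blocks here when the pool is saturated
        fut = self._pool.submit(fn, *args)
        fut.add_done_callback(lambda _: self._slots.release())
        return fut

    def shutdown(self):
        self._pool.shutdown(wait=True)

ex = CallerBlocksExecutor(workers=2, queue_limit=1)
futures = [ex.submit(lambda x: x * x, i) for i in range(6)]
ex.shutdown()
print([f.result() for f in futures])  # [0, 1, 4, 9, 16, 25]
```

The trade-off described above still applies: anything sitting in the in-memory queue (or semaphore slots) is lost on a crash, which is why a message-driven channel with concurrency 2 is the more robust option.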

Requeue or delete messages in Azure Storage Queues via WebJobs

I was hoping if someone can clarify a few things regarding Azure Storage Queues and their interaction with WebJobs:
To perform recurring background tasks (i.e. add to queue once, then repeat at set intervals), is there a way to update the same message delivered in the QueueTrigger function so that its lease (visibility) can be extended as a way to requeue and avoid expiry?
With the above-mentioned pattern for recurring background jobs, I'm also trying to figure out a way to delete/expire a job 'on demand'. Since this doesn't seem possible outside the context of WebJobs, I was thinking of maybe storing the messageId and popReceipt for the message(s) to be deleted in Table storage as persistent cache, and then upon delivery of message in the QueueTrigger function do a Table lookup to perform a DeleteMessage, so that the message is not repeated any more.
Any suggestions or tips are appreciated. Cheers :)
Azure Storage Queues are used to store messages that may be consumed by your Azure Webjob, WorkerRole, etc. The Azure Webjobs SDK provides an easy way to interact with Azure Storage (that includes Queues, Table Storage, Blobs, and Service Bus). That being said, you can also have an Azure Webjob that does not use the Webjobs SDK and does not interact with Azure Storage. In fact, I do run a Webjob that interacts with a SQL Azure database.
I'll briefly explain how the Webjobs SDK interact with Azure Queues. Once a message arrives to a queue (or is made 'visible', more on this later) the function in the Webjob is triggered (assuming you're running in continuous mode). If that function returns with no error, the message is deleted. If something goes wrong, the message goes back to the queue to be processed again. You can handle the failed message accordingly. Here is an example on how to do this.
The SDK will call a function up to 5 times to process a queue message. If the fifth try fails, the message is moved to a poison queue. The maximum number of retries is configurable.
Regarding visibility: when you add a message to the queue, there is a visibility timeout property. By default it is zero. Therefore, if you want to process a message in the future, you can do so (up to 7 days ahead) by setting this property to the desired value.
Optional. If specified, the request must be made using an x-ms-version of 2011-08-18 or newer. If not specified, the default value is 0. Specifies the new visibility timeout value, in seconds, relative to server time. The new value must be larger than or equal to 0, and cannot be larger than 7 days. The visibility timeout of a message cannot be set to a value later than the expiry time. visibilitytimeout should be set to a value smaller than the time-to-live value.
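So scheduling a message for later amounts to computing the delay in seconds and clamping it to the documented range of 0 to 7 days. A sketch; the azure-storage-queue call at the end is indicative only, and the queue name and connection string are assumptions:

```python
SEVEN_DAYS = 7 * 24 * 3600  # documented maximum visibility timeout

def visibility_timeout(delay_seconds):
    """Clamp a desired delay to the allowed [0, 7 days] range."""
    return max(0, min(int(delay_seconds), SEVEN_DAYS))

print(visibility_timeout(3600))            # 3600 (run in one hour)
print(visibility_timeout(30 * 24 * 3600))  # 604800 (capped at 7 days)

# Indicative only -- requires the azure-storage-queue package and a real account:
# from azure.storage.queue import QueueClient
# queue = QueueClient.from_connection_string(conn_str, "tasks")
# queue.send_message("do-the-thing", visibility_timeout=visibility_timeout(3600))
```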
Now the suggestions for your app.
I would just add a message to the queue for every task that you want to accomplish. The message will obviously have the pertinent information for processing. If you need to schedule several tasks, you can run a Scheduled Webjob (on a schedule of your choice) that adds messages to the queue. Then your continuous Webjob will pick up that message and process it.
Add a GUID to each message that goes to the queue. Store that GUID in some other domain of your application (a database). So when you dequeue the message for processing, the first thing you do is check against your database if the message needs to be processed. If you need to cancel the execution of a message, instead of deleting it from the queue, just update the GUID in your database.
There's more info here.
Hope this helps,
As for the first part of the question, you can use the Update Message operation to extend the visibility timeout of a message.
The Update Message operation can be used to continually extend the invisibility of a queue message. This functionality can be useful if you want a worker role to "lease" a queue message. For example, if a worker role calls Get Messages and recognizes that it needs more time to process a message, it can continually extend the message's invisibility until it is processed. If the worker role were to fail during processing, eventually the message would become visible again and another worker role could process it.
You can check the REST API documentation here: https://msdn.microsoft.com/en-us/library/azure/hh452234.aspx
For the second part of your question, there are really multiple ways, and your method of storing the id/popReceipt as a lookup is a possible option. You could also have a WebJob dedicated to receiving messages on a different queue (e.g. plz-delete-msg): you send it a message containing the messageId, and that WebJob can use the Get Messages operation and then delete the target message. (You can make the job generic by passing the queue name as well!)
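A toy model of that dedicated delete job, with plain dicts standing in for the queues; real code would use the Get Messages and Delete Message REST operations:

```python
def handle_delete_requests(delete_requests, queues):
    """Process delete commands of the form {"queue": name, "messageId": id}.

    queues maps queue name -> {messageId: body}; it stands in for real
    storage queues and is mutated in place. Unknown queues/ids are ignored,
    mirroring a delete of an already-consumed message.
    """
    for req in delete_requests:
        target = queues.get(req["queue"], {})
        target.pop(req["messageId"], None)  # Get Message + Delete in the real API

queues = {"work": {"m1": "a", "m2": "b"}}
handle_delete_requests([{"queue": "work", "messageId": "m1"}], queues)
print(queues)  # {'work': {'m2': 'b'}}
```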
https://msdn.microsoft.com/en-us/library/azure/dd179474.aspx
https://msdn.microsoft.com/en-us/library/azure/dd179347.aspx
