Hi,
I've got two BPEL processes. Process A puts messages in a queue and Process B consumes the messages and does some work.
What I'm looking for is a way to limit the number of messages being handled at the same time, i.e. to limit the number of Process B instances running simultaneously.
adapter.jms.receive.threads - this parameter indicates the number of poller threads that are created when an adapter endpoint is activated. The default is 1. Each poller thread receives its own message that is processed independently and thus allows for increased throughput.
I think this parameter does what I'm looking for, but changing it makes no difference.
To test it, I push a bunch of messages into the queue, and an execution instance is created immediately for each of them, no matter what value I set in adapter.jms.receive.threads.
Shouldn't this property limit the number of requests being handled simultaneously? Can you think of any reason why it isn't working? Am I missing some configuration? Is there a compatibility issue?
You did not specify which exact version you are using, but since you mentioned adapter.jms.receive.threads I assume you are on Oracle BPEL 11g or later.
The behaviour you describe occurs if you don't override the default value of the bpel.config.oneWayDeliveryPolicy property, which is "async.persist". With async.persist, the poller thread merely persists the incoming message and returns immediately, so instances are created asynchronously by the engine no matter how many poller threads you configure. Changing bpel.config.oneWayDeliveryPolicy on your component to "sync" makes the poller thread run the process instance itself, so adapter.jms.receive.threads effectively caps the number of instances running simultaneously.
Specifically, add the following property to your component definition inside the composite.xml file:
<property name="bpel.config.oneWayDeliveryPolicy" type="xs:string" many="false">sync</property>
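For completeness, adapter.jms.receive.threads itself is set as a binding property on the adapter's service entry in the same composite.xml, along these lines (the service name and thread count below are placeholders):

    <service name="ConsumeFromQueue">
      <binding.jca config="ConsumeFromQueue_jms.jca">
        <property name="adapter.jms.receive.threads">5</property>
      </binding.jca>
    </service>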
I have the following JMeter context:
In Concurrency Thread Group 1, I have a JSR223 Sampler which sends request messages to MQ queue 1 and, for each message, gets the JMSMessageID and an epoch timestamp (derived from JMS_IBM_PutDate + JMS_IBM_PutTime) and puts them into one variable. Underneath this sampler, an Inter-Thread Communication PostProcessor element takes the data from this variable and puts it into a FIFO queue.
In Concurrency Thread Group 2, I have another JSR223 Sampler with code that reads from MQ queue 2 the response messages for all the messages sent to MQ queue 1.
To do this (and to be able to calculate the response time for each message), before the JSR223 Sampler executes, an Inter-Thread Communication PreProcessor element gets a message ID and a timestamp from the FIFO queue (with a 60-second timeout) and passes them over in a variable with which the JSR223 Sampler can calculate the request-response time for each message.
I want to stress-test the system, so I gradually increase the requests per second every minute (for script-testing purposes) in both thread groups, like so:
I use the tstFeedback function of the Concurrency Thread Group for this:
${__tstFeedback(ThroughputShapingTimerIn,1,1000,10)}
My problem is this:
When I gradually increase the target TPS load, the consumer threads keep up (synchronized) with the producer threads during the first 4 target TPS steps, but as time passes and the load increases, the consumer threads seem to take longer and longer to find and consume the messages. It's as though the consumer threads can no longer keep up with the producer threads, despite both thread groups having the same load pattern. This eventually causes queue 2, which holds the response messages, to fill up. Here is a visual representation of what I mean:
The consumer sample count ends up much lower than the producer sample count. My expectation is that they should be more or less equal...
I need to understand how to debug this script and isolate the cause:
I think something happens at the inter-thread synchronization level, because the consumer threads sometimes get null values from the FIFO queue. I need to understand what gets put into that FIFO queue and what gets taken off of it.
How can I print what is present in the FIFO list at each iteration?
Does anyone have any suggestions for what could be the cause of this behavior and how to mitigate it?
Any help/suggestion is greatly appreciated.
First of all, take a look at the jmeter.log file: you have at least 865 errors there, so I strongly doubt your Groovy scripts are doing what they're supposed to be doing.
Don't run your test in GUI mode; it is only for test development and debugging. When it comes to execution, you should be using command-line non-GUI mode.
When you call __fifoPop() you can save the value into a JMeter variable, like ${__fifoPop(queue-name,some-variable)}; the variable can then be visualized using a Debug Sampler. The size of the queue can be checked using the __fifoSize() function.
Alternatively, a Groovy expert such as yourself shouldn't have any problems printing queue items in Groovy code:
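For example, a minimal JSR223 sketch (messageData and fifoSize are placeholder variable names; fifoSize assumes a preceding ${__fifoSize(queue-name,fifoSize)} call):

    // Log what this consumer thread got from the FIFO, plus the current queue depth.
    String item = vars.get("messageData"); // filled by the Inter-Thread Communication PreProcessor
    String size = vars.get("fifoSize");    // filled by a preceding __fifoSize() call
    log.info("Thread " + ctx.getThreadNum() + " popped: " + item + " (queue size: " + size + ")");
    if (item == null) {
        log.warn("FIFO returned null - nothing was put on the queue within the 60-second timeout");
    }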
I am new to the event sourcing concept, so there are a couple of things I don't understand. One of them is how to handle the following scenario:
I've got two instances of a service. Both of them listen to an event queue. There are two messages: CreateUser and UpdateUser. The first instance picks up CreateUser and the second instance picks up UpdateUser. For some reason the second instance handles its command more quickly, but there is no User to update yet, since it has not been created.
What am I getting wrong here?
What am I getting wrong here?
Review Udi Dahan's essay "Race Conditions Don't Exist":
A microsecond difference in timing shouldn’t make a difference to core business behaviors.
In other words, what you want is logic such that the order of the messages doesn't change the final result, plus a first-writer-wins policy (aka compare-and-swap), so that when two processes try to update the same resource, the loser of the data race has to start over.
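As an illustrative sketch (in-memory only, not tied to any particular event store API), a first-writer-wins append can look like this: the writer states the stream version its decision was based on, and the append fails if another writer got there first:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal in-memory sketch of a first-writer-wins (compare-and-swap) append.
    public class EventStore {
        private final Map<String, List<Object>> streams = new ConcurrentHashMap<>();

        // The caller passes the stream version it observed when making its decision.
        // If the stream has moved on since then, the caller lost the race and must
        // re-read the stream, re-apply its command, and try again.
        public synchronized boolean append(String streamId, int expectedVersion, Object event) {
            List<Object> stream = streams.computeIfAbsent(streamId, id -> new ArrayList<>());
            if (stream.size() != expectedVersion) {
                return false; // first writer won; the loser starts over
            }
            stream.add(event);
            return true;
        }
    }

With that in place, the arrival order of CreateUser and UpdateUser stops mattering: an UpdateUser processed before the user's stream exists simply fails its version check and is retried once CreateUser has been applied.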
As a general rule, events should be understood to support multiple observers - all subscribers get to see all events. So a queue with competing consumers isn't the usual approach unless you are trying to distribute a specific subscriber across multiple processes.
You do not have a concurrency issue you can solve. This comes down to either using the wrong tools or not reading the documentation.
Both of them listen to an event queue.
And the queue should support that. An example is Azure queues, where I can receive a message AND TELL THE QUEUE not to show it to anyone else for X seconds (which is enough for me to decide whether I handled it or not). If I do not confirm in time, the message is reinserted after that period. If I kill it first, there is no concurrency.
So, you need a backend queue that can handle this.
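To make the pattern concrete, here is a sketch of that consuming loop (QueueClient, Message, and the 30-second window are hypothetical stand-ins, not a real SDK):

    import java.time.Duration;

    // Hypothetical minimal queue API, for illustration only.
    interface Message {}
    interface QueueClient {
        Message receive(Duration visibilityTimeout); // hide the message from others for the given window
        void delete(Message msg);                    // permanently remove a handled message
    }

    // Lease / visibility-timeout pattern: a received message is invisible to other
    // consumers while we work on it; deleting it confirms handling, while doing
    // nothing lets it reappear and be redelivered.
    class Consumer {
        void consume(QueueClient queue) {
            while (true) {
                Message msg = queue.receive(Duration.ofSeconds(30)); // hidden from others for 30s
                if (msg == null) {
                    continue; // nothing available yet
                }
                try {
                    handle(msg);
                    queue.delete(msg); // handled: no other consumer will ever see it
                } catch (Exception e) {
                    // no delete: after 30s the message becomes visible again and is retried
                }
            }
        }

        void handle(Message msg) {
            // business logic goes here
        }
    }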
I have two inbound-channel-adapters which collect files from two distinct sources.
I'd like to process the incoming files one at a time, by the same instance of a service-activator and in the same thread. At the moment, since there are two distinct pollers, the files are actually processed by two different threads concurrently.
I thought that feeding my service-activator through a QueueChannel would solve the problem, but I don't want to introduce another poller (and hence another delay).
Any idea?
Use an ExecutorChannel with an Executors.newSingleThreadExecutor().
You can also use a QueueChannel with a fixedDelay of 0; the poller blocks on the queue for 1 second by default (which can be increased via receiveTimeout), so with a 0 delay between polls, no additional latency will be added.
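A minimal sketch of the first option, assuming a recent Spring Integration version with Java config (the channel and class names are placeholders); point both inbound adapters at this channel:

    import java.util.concurrent.Executors;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.channel.ExecutorChannel;
    import org.springframework.messaging.MessageChannel;

    @Configuration
    public class ChannelConfig {

        // Single-threaded executor: messages from both adapters are handed off to
        // one worker thread, so the service-activator processes files one at a
        // time, with no extra poller and no polling delay.
        @Bean
        public MessageChannel filesChannel() {
            return new ExecutorChannel(Executors.newSingleThreadExecutor());
        }
    }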
I've been meddling with Cassandra's (v2.2.4) thread-pool executors (namely the SEPExecutor.java module), trying to change the queues used for storing pending reads (those that have no immediately available workers to serve them). By default, Cassandra uses a ConcurrentLinkedQueue (which is a non-blocking queue variant). I'm currently trying to override this with a MultiQueue setup in order to schedule requests in non-FIFO order.
Let's assume for simplicity that my MultiQueue implementation is an extension of AbstractQueue that simply overrides the offer and poll methods and randomly (de)queues requests to/from any of the enclosed ConcurrentLinkedQueues. For polling, if one queue returns null, we basically keep going through all the queues until we find a non-null element (otherwise we return null). There is no locking mechanism in place, since my intention is to exploit the properties of the enclosed ConcurrentLinkedQueues (which are non-blocking).
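A minimal sketch of that description (illustrative only, not the actual code; the comment in poll() marks where I suspect the trouble lies):

    import java.util.AbstractQueue;
    import java.util.Iterator;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.ThreadLocalRandom;

    // Illustrative sketch of the MultiQueue described above.
    public class MultiQueue<E> extends AbstractQueue<E> {
        private final ConcurrentLinkedQueue<E>[] queues;

        @SuppressWarnings("unchecked")
        public MultiQueue(int numQueues) {
            queues = new ConcurrentLinkedQueue[numQueues];
            for (int i = 0; i < numQueues; i++) {
                queues[i] = new ConcurrentLinkedQueue<>();
            }
        }

        @Override
        public boolean offer(E e) {
            // Enqueue into a randomly chosen sub-queue.
            return queues[ThreadLocalRandom.current().nextInt(queues.length)].offer(e);
        }

        @Override
        public E poll() {
            // Sweep the sub-queues until a non-null element is found. The sweep is
            // not atomic: an offer() that lands in a sub-queue this call has already
            // passed is invisible to it, so poll() can return null even though the
            // MultiQueue was never observably empty - the race I seem to be hitting.
            for (ConcurrentLinkedQueue<E> queue : queues) {
                E element = queue.poll();
                if (element != null) {
                    return element;
                }
            }
            return null;
        }

        @Override
        public E peek() {
            for (ConcurrentLinkedQueue<E> queue : queues) {
                E element = queue.peek();
                if (element != null) {
                    return element;
                }
            }
            return null;
        }

        @Override
        public int size() {
            int total = 0;
            for (ConcurrentLinkedQueue<E> queue : queues) {
                total += queue.size();
            }
            return total;
        }

        @Override
        public Iterator<E> iterator() {
            throw new UnsupportedOperationException("not needed for this sketch");
        }
    }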
The main problem is that it seems I'm running into some sort of race condition, where some of the assigned workers can't poll an item that supposedly exists in the queue. In other words, the MultiQueue structure appears to be non-linearizable. More specifically, I'm encountering a NullPointerException on this line: SEPWorker.java [line 105]
Any clue as to what could be causing this, or how should I go about maintaining the properties of a single ConcurrentLinkedQueue in a MultiQueue setup?
There is a scenario in one of our continuous WebJobs that has us a little puzzled. We have very thorough logging, and the picture it paints seems to indicate that two messages were dequeued by the same invocation of a WebJob. The timestamps seem to support this, but more compelling was the exception.
At one point in our code we add a key to a dictionary, and the exception we observed was an attempt to add a duplicate key to that dictionary. If the two messages were dequeued at the same time by the same instance of the WebJob method, that is the only thing that makes sense, because the dictionary is created on each invocation using the new keyword, i.e. each invocation of the dequeue method creates a separate object in memory.
In short, my question is: can two messages be dequeued simultaneously by the same instance/method of a continuously running WebJob that is observing the queue?
By default there is parallel execution, so messages are dequeued and processed concurrently and their ordering is not guaranteed. You can set the batch size to 1 (JobHostConfiguration.Queues.BatchSize = 1): https://azure.microsoft.com/en-us/documentation/articles/websites-dotnet-webjobs-sdk-storage-queues-how-to/#config