Why is the Hazelcast Near Cache out of sync with the EntryUpdatedListener even though they're in the same process? - hazelcast

I understand that Near Caches are not guaranteed to be synchronized in real time when a value is updated elsewhere on another node.
However, I do expect it to be in sync with an EntryUpdatedListener that is on the same node and therefore in the same process - or am I missing something?
Sequence of events:
A cluster of one node modifies the same key, flipping its value from X to Y and back to X on a fixed interval (every second in the reproducer below).
A client connects to this cluster node and adds an EntryUpdatedListener to observe the flipping value.
Client receives the EntryUpdatedEvent and prints the value given - as expected, it gives the value recently set.
Client immediately does a map.get for the same key (which should hit the near cache), and it prints a STALE value.
I find this strange - it means that two "channels" within the same client process are showing inconsistent versions of data. I would only expect this between different processes.
Below is my reproducer code:
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class ClusterTest {

    private static final int OLD_VALUE = 10000;
    private static final int NEW_VALUE = 88888;
    private static final int KEY = 5;
    private static final int NUMBER_OF_ENTRIES = 10;

    public static void main(String[] args) throws Exception {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IMap<Integer, Integer> map = instance.getMap("test");
        for (int i = 0; i < NUMBER_OF_ENTRIES; i++) {
            map.put(i, 0);
        }
        System.out.println("Size of map = " + map.size());
        boolean flag = false;
        while (true) {
            int value = flag ? OLD_VALUE : NEW_VALUE;
            flag = !flag;
            map.put(KEY, value);
            System.out.println("Set a value of [" + value + "]");
            Thread.sleep(1000);
        }
    }
}
import java.util.concurrent.CountDownLatch;

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.config.NearCacheConfig;
import com.hazelcast.core.EntryEvent;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.listener.EntryAddedListener;
import com.hazelcast.map.listener.EntryRemovedListener;
import com.hazelcast.map.listener.EntryUpdatedListener;

public class ClientTest {

    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance instance = HazelcastClient.newHazelcastClient(
                new ClientConfig().addNearCacheConfig(new NearCacheConfig("test")));
        IMap<Integer, Integer> map = instance.getMap("test");
        System.out.println("Size of map = " + map.size());
        map.addEntryListener(new MyEntryListener(instance), true);
        new CountDownLatch(1).await();
    }

    static class MyEntryListener implements
            EntryAddedListener<Integer, Integer>,
            EntryUpdatedListener<Integer, Integer>,
            EntryRemovedListener<Integer, Integer> {

        private final HazelcastInstance instance;

        public MyEntryListener(HazelcastInstance instance) {
            this.instance = instance;
        }

        @Override
        public void entryAdded(EntryEvent<Integer, Integer> event) {
            System.out.println("Entry Added:" + event);
        }

        @Override
        public void entryRemoved(EntryEvent<Integer, Integer> event) {
            System.out.println("Entry Removed:" + event);
        }

        @Override
        public void entryUpdated(EntryEvent<Integer, Integer> event) {
            Object o = instance.getMap("test").get(event.getKey());
            boolean equals = o.equals(event.getValue());
            String s = "Event matches what has been fetched = " + equals;
            if (!equals) {
                s += ", EntryEvent value has delivered: " + event.getValue()
                        + ", and an explicit GET has delivered: " + o;
            }
            System.out.println(s);
        }
    }
}
The output from the client:
INFO: hz.client_0 [dev] [3.11.1] HazelcastClient 3.11.1 (20181218 - d294f31) is CLIENT_CONNECTED
Jun 20, 2019 4:58:15 PM com.hazelcast.internal.diagnostics.Diagnostics
INFO: hz.client_0 [dev] [3.11.1] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
Size of map = 10
Event matches what has been fetched = true
Event matches what has been fetched = false, EntryEvent value has delivered: 88888, and an explicit GET has delivered:10000
Event matches what has been fetched = true
Event matches what has been fetched = true
Event matches what has been fetched = false, EntryEvent value has delivered: 10000, and an explicit GET has delivered:88888

The Near Cache has an eventual consistency guarantee, while listeners work in a fire-and-forget fashion. That is why there are two different mechanisms for the two. Also, batching of Near Cache invalidation events reduces network traffic and keeps the eventing system less busy (this helps when there are many invalidations or clients); as a trade-off, it may increase the delay of individual invalidations. If you are confident that your system can handle each invalidation event, you can disable batching.
You need to configure the property on the member side, as invalidation events are generated on cluster members and sent to clients.
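As a rough sketch, disabling the batching on the member might look like this (the property names are taken from the Hazelcast 3.x reference manual; verify them for your version):
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class MemberWithoutInvalidationBatching {

    public static void main(String[] args) {
        Config config = new Config();
        // Assumed property name per Hazelcast 3.x docs: disables invalidation batching entirely.
        config.setProperty("hazelcast.map.invalidation.batch.enabled", "false");
        // Alternatively, keep batching but flush smaller batches more often (also assumed names):
        // config.setProperty("hazelcast.map.invalidation.batch.size", "10");
        // config.setProperty("hazelcast.map.invalidation.batchfrequency.seconds", "1");
        HazelcastInstance member = Hazelcast.newHazelcastInstance(config);
    }
}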

Related

Bulk load in hazelcast map using ttl based on map entry

I need to load around 10 million records from a flat file into a Hazelcast map. The TTL also needs to be set per map entry.
What is the most efficient way to do this?
Currently I am using IMap.putAll(). Is there a way to set a per-entry TTL when using putAll?
There isn't an API that allows you to do bulk put with individual expiry.
The code below would be a way to do it with Hazelcast Jet writing into Hazelcast's IMap.
The client submits this job, and the grid servers process it, reading a single file of input server-side. The .groupingKey() stage partitions the input stream by the entry key, so each server does a map.put() where the key is local, enriched with a different TTL for each entry.
This is an alternative to iterating across your input file and inserting each key individually (a sketch of that simpler approach follows the Jet code below). Whether it is faster will depend on factors such as network speed, the number of servers, and so on. It is certainly more complicated than simple iteration, so the speed gain would need to justify the complexity.
public class MyClient implements EntryExpiredListener<Long, Long> {

    private static final String INPUT_DIRECTORY = System.getProperty("user.home") + "/input_data";
    private static final String MAP_NAME = "test";

    public static void main(String[] args) {
        new MyClient().go();
    }

    public void go() {
        JetInstance jetInstance = Jet.newJetClient();
        jetInstance.getMap(MAP_NAME).addEntryListener(this, false);

        Pipeline pipeline = MyClient.buildPipeline();

        JobConfig jobConfig = new JobConfig();
        jobConfig.addClass(MyClient.class);

        try {
            jetInstance.newJob(pipeline, jobConfig).join();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    /**
     * Process a file that looks like <pre>
     * % cat test/input
     * 1
     * 2
     * 3
     * 4
     * 5
     * </pre>
     *
     * @return the pipeline to execute
     */
    private static Pipeline buildPipeline() {
        ComparatorEx<Tuple3<Long, Long, Long>> comparatorEx = ComparatorEx.comparingLong(Tuple3::f0);

        Pipeline pipeline = Pipeline.create();

        BatchStage<String> input = pipeline.readFrom(MyClient.mySource(INPUT_DIRECTORY));

        // Convert to trios of key, value, expiry
        BatchStage<Tuple3<Long, Long, Long>> tuples
            = input
                .map(line -> {
                    long l = Long.parseLong(line);
                    return Tuple3.<Long, Long, Long>tuple3(100 * l, 200 * l, 300 * l);
                });

        // Route per JVM based on entry key
        BatchStage<Entry<Long, Tuple3<Long, Long, Long>>> routedEntries
            = tuples
                .groupingKey(Tuple3::f0)
                .rollingAggregate(AggregateOperations.maxBy(comparatorEx));

        // Custom map save using expiry
        routedEntries.writeTo(MyClient.mySink(MAP_NAME));

        // [Optional] log all entries to stdout
        routedEntries.writeTo(Sinks.logger());

        return pipeline;
    }

    private static BatchSource<String> mySource(String directory) {
        return Sources.filesBuilder(directory)
                .sharedFileSystem(true)
                .build();
    }

    private static Sink<? super Entry<Long, Tuple3<Long, Long, Long>>> mySink(String mapName) {
        return SinkBuilder.sinkBuilder("mySink",
                processorContext -> processorContext.jetInstance().<Long, Long>getMap(mapName))
                .receiveFn((IMap<Long, Long> map, Entry<Long, Tuple3<Long, Long, Long>> entry) -> {
                    map.put(entry.getKey(), entry.getValue().f1(), entry.getValue().f2(), TimeUnit.SECONDS);
                })
                .build();
    }

    @Override
    public void entryExpired(EntryEvent<Long, Long> entryEvent) {
        System.out.println(entryEvent.getEventType() + " for " + entryEvent.getKey());
    }
}
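For comparison, here is a minimal sketch of the simple client-side iteration mentioned above, using the IMap.put(key, value, ttl, timeunit) overload to set a per-entry TTL. The class and file names are hypothetical; the file location and the key/value/TTL derivation mirror the pipeline, and the imports assume the Hazelcast 4.x package layout used with Jet 4:
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.concurrent.TimeUnit;
import java.util.stream.Stream;

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class SimpleLoader {

    public static void main(String[] args) throws Exception {
        String inputFile = System.getProperty("user.home") + "/input_data/input";
        HazelcastInstance client = HazelcastClient.newHazelcastClient();
        IMap<Long, Long> map = client.getMap("test");
        try (Stream<String> lines = Files.lines(Paths.get(inputFile))) {
            // Same (key, value, expiry) derivation as the pipeline: 100*l, 200*l, 300*l seconds.
            lines.map(Long::parseLong)
                 .forEach(l -> map.put(100 * l, 200 * l, 300 * l, TimeUnit.SECONDS));
        }
        client.shutdown();
    }
}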

Mockito (How to correctly mock nested objects)

I have the following class:
@Service
public class BusinessService {

    @Autowired
    private RedisService redisService;

    private void count() {
        String redisKey = "MyKey";
        AtomicInteger counter = null;
        if (!redisService.isExist(redisKey))
            counter = new AtomicInteger(0);
        else
            counter = redisService.get(redisKey, AtomicInteger.class);
        try {
            counter.incrementAndGet();
            redisService.set(redisKey, counter, false);
            logger.info(String.format("Counter incremented by one. Current counter = %s", counter.get()));
        } catch (JsonProcessingException e) {
            logger.severe("Failed to increment counter.");
        }
    }
    // Remaining code
}
and this is my RedisService.java class
@Service
public class RedisService {

    private Logger logger = LoggerFactory.getLogger(RedisService.class);

    @Autowired
    private RedisConfig redisConfig;

    // Lettuce synchronous command API, initialized in postConstruct()
    private RedisCommands<String, String> syncCommands;

    @PostConstruct
    public void postConstruct() {
        try {
            String redisURL = redisConfig.getUrl();
            logger.info("Connecting to Redis at " + redisURL);
            syncCommands = RedisClient.create(redisURL).connect().sync();
        } catch (Exception e) {
            logger.error("Exception connecting to Redis: " + e.getMessage(), e);
        }
    }

    public boolean isExist(String redisKey) {
        return syncCommands.exists(new String[] { redisKey }) == 1;
    }

    public <T extends Serializable> void set(String key, T object, boolean convertObjectToJson) throws JsonProcessingException {
        if (convertObjectToJson)
            syncCommands.set(key, writeValueAsString(object));
        else
            syncCommands.set(key, String.valueOf(object));
    }
    // Remaining code
}
and this is my test class
@Mock
private RedisService redisService;

@Spy
@InjectMocks
BusinessService businessService = new BusinessService();

@Before
public void setup() {
    MockitoAnnotations.initMocks(this);
}

@Test
public void myTest() throws Exception {
    for (int i = 0; i < 50; i++)
        Whitebox.invokeMethod(businessService, "count");
    // Remaining code
}
My problem is that the counter always equals one in the logs when running the tests:
Counter incremented by one. Current counter = 1 (printed 50 times)
and it should print:
Counter incremented by one. Current counter = 1
Counter incremented by one. Current counter = 2
...
...
Counter incremented by one. Current counter = 50
To me this means the Redis mock is passed to BusinessService as a new instance on each method call inside the loop, so how can I force only one instance to be used for Redis throughout the test method?
Note: the above code is just a sample to explain my problem; it's not the complete code.
Your conclusion that a new RedisService is somehow created in each iteration is wrong.
The problem is that it is a mock object for which you haven't set any behaviours, so it responds with default values to each method call (null for objects, false for booleans, 0 for ints, etc.).
You need to use Mockito.when to set behaviour on your mocks.
There is some additional complexity caused by the fact that:
you run the loop multiple times, and the behaviour of the mocks differs between the first and subsequent iterations
you create the cached object in the method under test. I used doAnswer to capture it.
You need to use doAnswer().when() instead of when().thenAnswer(), as the set method returns void
and finally, the atomicInt variable is modified from within the lambda, so I made it a field of the test class.
As atomicInt is modified each time, I again used thenAnswer instead of thenReturn for the get method.
class BusinessServiceTest {

    @Mock
    private RedisService redisService;

    @InjectMocks
    BusinessService businessService = new BusinessService();

    AtomicInteger atomicInt = null;

    @BeforeEach
    public void setup() {
        MockitoAnnotations.initMocks(this);
    }

    @Test
    public void myTest() throws Exception {
        // given
        Mockito.when(redisService.isExist("MyKey"))
                .thenReturn(false)
                .thenReturn(true);
        Mockito.doAnswer((Answer<Void>) invocation -> {
            atomicInt = invocation.getArgument(1);
            return null;
        }).when(redisService).set(eq("MyKey"), any(AtomicInteger.class), eq(false));
        Mockito.when(redisService.get("MyKey", AtomicInteger.class))
                .thenAnswer(invocation -> atomicInt);

        // when
        for (int i = 0; i < 50; i++) {
            Whitebox.invokeMethod(businessService, "count");
        }
        // Remaining code
    }
}
Having said that, I still find your code questionable.
You store an AtomicInteger in the Redis cache (by serializing it to a String). This class is designed to be used by multiple threads in one process, and the threads using the same counter need to share the same instance. By serializing it and deserializing it on get, you get multiple instances of the (conceptually) same counter, which, to my eyes, looks like a bug.
A smaller issue: you shouldn't normally test private methods.
Two small ones: there is no need to instantiate the field annotated with @InjectMocks, and you don't need @Spy either.
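A minimal sketch of that simplification (Mockito instantiates the service and injects the mock itself):
@Mock
private RedisService redisService;

@InjectMocks // no 'new BusinessService()' and no @Spy needed
private BusinessService businessService;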

EntryProcessor without locking entries

In my application, I'm trying to process data in IMap, the scenario is as follows:
the application receives a request (REST, for example) with a set of keys to be processed
the application processes the entries with the given keys and returns the result - a map where each key is the original entry key and each value is the calculated result
For this scenario IMap.executeOnKeys is almost perfect, with one problem - the entry is locked while being processed - and that really hurts throughput. The IMap is populated on startup and never modified.
Is it possible to process entries without locking them? If possible, without sending entries to another node and without causing network overhead (such as sending 1000 tasks to a single node in a for-loop)?
Here is a reference implementation to demonstrate what I'm trying to achieve:
public class Main {

    public static void main(String[] args) throws Exception {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IMap<String, String> map = instance.getMap("the-map");
        // populated once on startup, never modified
        for (int i = 1; i <= 10; i++) {
            map.put("key-" + i, "value-" + i);
        }

        Set<String> keys = new HashSet<>();
        keys.add("key-1"); // every request may have a different key set; they may overlap

        System.out.println(" ---- processing ----");
        ForkJoinPool pool = new ForkJoinPool();
        // to simulate parallel requests on the same entry
        pool.execute(() -> map.executeOnKeys(keys, new MyEntryProcessor("first")));
        pool.execute(() -> map.executeOnKeys(keys, new MyEntryProcessor("second")));

        System.out.println(" ---- pool is waiting ----");
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
        System.out.println(" ------ DONE -------");
    }

    static class MyEntryProcessor implements EntryProcessor<String, String> {

        private String name;

        MyEntryProcessor(String name) {
            this.name = name;
        }

        @Override
        public Object process(Map.Entry<String, String> entry) {
            System.out.println(name + " is processing " + entry);
            return calculate(entry); // may take some time, doesn't modify the entry
        }

        @Override
        public EntryBackupProcessor<String, String> getBackupProcessor() {
            return null;
        }
    }
}
Thanks in advance
In executeOnKeys the entries are not locked. Maybe you mean that the processing happens on partition threads, so that there can be no other processing for that particular key at the same time? Anyhow, here's the solution:
Your EntryProcessor should implement:
the Offloadable interface -> this means the partition thread will be used only for reading the value. The calculation will be done in the offloading thread pool.
the ReadOnly interface -> in this case the EP won't hop onto the partition thread again to save a modification you might have made to the entry. Since your EP does not modify entries, this will improve performance.
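A minimal sketch of the processor from the question with both interfaces applied (assumes Hazelcast 3.8+, where Offloadable and ReadOnly live in com.hazelcast.core):
static class MyEntryProcessor implements EntryProcessor<String, String>, Offloadable, ReadOnly {

    private String name;

    MyEntryProcessor(String name) {
        this.name = name;
    }

    @Override
    public Object process(Map.Entry<String, String> entry) {
        System.out.println(name + " is processing " + entry);
        return calculate(entry); // read-only work; never calls entry.setValue()
    }

    @Override
    public String getExecutorName() {
        // Run process() on the offloading executor instead of the partition thread
        return Offloadable.OFFLOADABLE_EXECUTOR;
    }

    @Override
    public EntryBackupProcessor<String, String> getBackupProcessor() {
        return null; // nothing to write to backups for a read-only processor
    }
}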

How to re-queue message when spring integration configuration includes a priority channel

I have a Spring Integration configuration that utilizes a priority channel. When an item is read from that channel, local resources are checked at that point in time, and if the resources are not available to process the item, I would like to requeue the message so that another machine picks it up. Originally, I wrongly threw an exception thinking that a requeue would occur, but as was answered in my other question, this is not going to work since the priority channel executes in a different thread from the listener container.
I thought about placing a filter right after the inbound channel adapter and throwing an exception if resources are not available at that time, but at that instant an accurate assessment of resources cannot be made, because resource availability at that time does not match what will be available when the message is selected based upon priority.
My next thought is to place a filter after the priority channel and before the service activator, and direct messages that cannot be handled by current resources to the discard-channel, which is defined as an outbound channel adapter that sends the message back to the original queue. Are there pitfalls to this approach?
EDIT 20150917:
Per Gary's advice, I have moved to RabbitMQ 3.5.x in order to take advantage of the built-in priority queues. I now have a problem tracking the number of attempts, as it appears my original message is placed back on the queue rather than my modified message. I have updated the code blocks to reflect the current setup.
EDIT 20150922:
I am updating this post to reflect the final proof-of-concept code base that I created. I am not a Spring Integration expert by any means, so please keep that in mind, as well as the fact that this test code is not production ready. My original intent was to have messages resubmitted and retried a certain number of times if a particular exception was thrown. This can be accomplished using the StatefulRetryOperationsInterceptor. But to experiment further, I wanted to be able to set/increment a header on failure and then have something in my flow that could react to that value. That was accomplished by using an extension of RepublishMessageRecoverer that overrides additionalHeaders(). This object is then used to configure the RetryOperationsInterceptor.
One other minor thing: I wanted to reduce some of the default Spring Integration logging when my signal exception was thrown, so I needed to name my error channel "errorChannel" in order to replace the Spring Integration default. I also needed to create a custom ErrorHandler to assign to the ListenerContainer, replacing the default, which logs everything at ERROR level.
Here is my current setup:
Spring Integration 4.2.0.RELEASE
Spring AMQP 1.5.0.RELEASE
RabbitMQ 3.5.x
Configuration
@Autowired
public void setSpringIntegrationConfigHelper(SpringIntegrationHelper springIntegrationConfigHelper) {
    this.springIntegrationConfigHelper = springIntegrationConfigHelper;
}

@Bean
public String priorityPOCQueueName() {
    return "poc.priority";
}

@Bean
public Queue priorityPOCQueue(RabbitAdmin rabbitAdmin) {
    boolean durable = true;
    boolean exclusive = false;
    boolean autoDelete = false;
    // Adding the x-max-priority argument is what signals RabbitMQ that this is a
    // priority queue. Must be Rabbit 3.5.x
    Map<String, Object> arguments = new HashMap<String, Object>();
    arguments.put("x-max-priority", 5);
    Queue queue = new Queue(priorityPOCQueueName(),
            durable,
            exclusive,
            autoDelete,
            arguments);
    rabbitAdmin.declareQueue(queue);
    return queue;
}

@Bean
public Binding priorityPOCQueueBinding(RabbitAdmin rabbitAdmin) {
    Binding binding = new Binding(priorityPOCQueueName(),
            DestinationType.QUEUE,
            "amq.direct",
            priorityPOCQueue(rabbitAdmin).getName(),
            null);
    rabbitAdmin.declareBinding(binding);
    return binding;
}

@Bean
public AmqpTemplate priorityPOCMessageTemplate(ConnectionFactory amqpConnectionFactory,
        @Qualifier("priorityPOCQueueName") String queueName,
        @Qualifier("jsonMessageConverter") MessageConverter messageConverter) {
    RabbitTemplate template = new RabbitTemplate(amqpConnectionFactory);
    template.setChannelTransacted(false);
    template.setExchange("amq.direct");
    template.setQueue(queueName);
    template.setRoutingKey(queueName);
    template.setMessageConverter(messageConverter);
    return template;
}

@Autowired
@Qualifier("priorityPOCQueue")
public void setPriorityPOCQueue(Queue priorityPOCQueue) {
    this.priorityPOCQueue = priorityPOCQueue;
}

@Bean
public MessageRecoverer miTestMessageRecoverer(final AmqpTemplate priorityPOCMessageTemplate) {
    return new MessageRecoverer() {
        @Override
        public void recover(org.springframework.amqp.core.Message msg, Throwable t) {
            StringBuilder sb = new StringBuilder();
            sb.append("Firing Test Recoverer: ").append(t.getClass().getName()).append(" Message Count: ")
                    .append(msg.getMessageProperties().getMessageCount())
                    .append(" ID: ").append(msg.getMessageProperties().getMessageId())
                    .append(" DeliveryTag: ").append(msg.getMessageProperties().getDeliveryTag())
                    .append(" Redelivered: ").append(msg.getMessageProperties().isRedelivered());
            logger.debug(sb.toString());
            PriorityMessage m = new PriorityMessage(5);
            m.setId(randomGenerator.nextLong(10L, 1000000L));
            priorityPOCMessageTemplate.convertAndSend(m, new SimulateErrorHeaderPostProcessor(Boolean.FALSE, m.getPriority()));
        }
    };
}
@Bean
public RepublishMessageRecoverer miRepublishRecoverer(final AmqpTemplate priorityPOCMessageTemplate) {
    class MiRecoverer extends RepublishMessageRecoverer {

        public MiRecoverer(AmqpTemplate errorTemplate) {
            super(errorTemplate);
            this.setErrorRoutingKeyPrefix("");
        }

        @Override
        protected Map<? extends String, ? extends Object> additionalHeaders(
                org.springframework.amqp.core.Message message, Throwable cause) {
            Map<String, Object> map = new HashMap<>();
            if (message.getMessageProperties().getHeaders().containsKey("jmattempts") == false) {
                map.put("jmattempts", 0);
            } else {
                Integer count = Integer.valueOf(message.getMessageProperties().getHeaders().get("jmattempts").toString());
                map.put("jmattempts", ++count);
            }
            return map;
        }
    }

    return new MiRecoverer(priorityPOCMessageTemplate);
}
@Bean
public StatefulRetryOperationsInterceptor inadequateResourceInterceptor(
        @Qualifier("priorityPOCMessageTemplate") AmqpTemplate priorityPOCMessageTemplate,
        @Qualifier("priorityMessageKeyGenerator") PriorityMessageKeyGenerator priorityMessageKeyGenerator,
        @Qualifier("miTestMessageRecoverer") MessageRecoverer messageRecoverer,
        @Qualifier("miRepublishRecoverer") RepublishMessageRecoverer miRepublishRecoverer) {
    StatefulRetryInterceptorBuilder b = RetryInterceptorBuilder.stateful();
    return b.maxAttempts(2)
            .backOffOptions(2000L, 1.0D, 4000L)
            .messageKeyGenerator(priorityMessageKeyGenerator)
            .recoverer(miRepublishRecoverer)
            .build();
}

@Bean(name = "exec.priorityPOC")
TaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor e = new ThreadPoolTaskExecutor();
    e.setCorePoolSize(1);
    e.setQueueCapacity(1);
    return e;
}
/*
@Bean(name = "poc.priorityChannel")
public MessageChannel pocPriorityChannel() {
    PriorityChannel c = new PriorityChannel(new PriorityComparator());
    c.setComponentName("poc.priorityChannel");
    c.setBeanName("poc.priorityChannel");
    return c;
}
*/

@Bean(name = "poc.inputChannel")
public MessageChannel pocPriorityChannel() {
    DirectChannel c = new DirectChannel();
    c.setComponentName("poc.inputChannel");
    c.setBeanName("poc.inputChannel");
    return c;
}

@Bean(name = "poc.inboundChannelAdapter") // make this a unique name
public AmqpInboundChannelAdapter amqpInboundChannelAdapter(
        @Qualifier("exec.priorityPOC") TaskExecutor taskExecutor,
        @Qualifier("errorChannel") MessageChannel pocErrorChannel,
        @Qualifier("inadequateResourceInterceptor") StatefulRetryOperationsInterceptor inadequateResourceInterceptor) {
    org.aopalliance.aop.Advice[] adviceChain = new org.aopalliance.aop.Advice[] { inadequateResourceInterceptor };
    int concurrentConsumers = 1;
    AmqpInboundChannelAdapter a = springIntegrationConfigHelper.createInboundChannelAdapter(taskExecutor,
            pocPriorityChannel(), new Queue[] { priorityPOCQueue }, concurrentConsumers, adviceChain,
            new PocErrorHandler());
    a.setErrorChannel(pocErrorChannel);
    return a;
}
@Transformer(inputChannel = "poc.inputChannel", outputChannel = "poc.procesPoc")
public Message<PriorityMessage> incrementAttempts(Message<PriorityMessage> msg) {
    // I stopped using this in the POC.
    return msg;
}

@ServiceActivator(inputChannel = "poc.procesPoc")
public void procesPoc(@Header(SimulateErrorHeaderPostProcessor.ERROR_SIMULATE_HEADER_KEY) Boolean simulateError,
        @Headers Map<String, Object> headerMap,
        PriorityMessage priorityMessage) throws InterruptedException {
    if (isFirstMessageReceived == false) {
        //Thread.sleep(15000); // Cause a bit of a backup so we can see prioritizing in action.
        isFirstMessageReceived = true;
    }
    Integer retryAttempts = 0;
    if (headerMap.containsKey("jmattempts")) {
        retryAttempts = Integer.valueOf(headerMap.get("jmattempts").toString());
    }
    logger.debug("Received message with priority: " + priorityMessage.getPriority() + ", simulateError: " + simulateError
            + ", Current attempts count is " + retryAttempts);
    if (simulateError && retryAttempts < PriorityMessage.MAX_MESSAGE_RETRY_COUNT) {
        logger.debug("  Simulating an error and re-queueing. Current attempt count is " + retryAttempts);
        throw new AnalyzerNonAdequateResourceException();
    } else if (simulateError && retryAttempts > PriorityMessage.MAX_MESSAGE_RETRY_COUNT) {
        logger.debug("  Max attempt count exceeded");
    }
}
/**************************************************************************************************
 *
 * Error Channel
 *
 **************************************************************************************************/

// Note that we want to override the default Spring error channel, so the bean must be named errorChannel
@Bean(name = "errorChannel")
public MessageChannel pocErrorChannel() {
    DirectChannel c = new DirectChannel();
    c.setComponentName("errorChannel");
    c.setBeanName("errorChannel");
    return c;
}

@ServiceActivator(inputChannel = "errorChannel")
public void pocHandleError(Message<MessagingException> message) throws Throwable {
    MessagingException me = message.getPayload();
    logger.error("pocHandleError: error encountered: " + me.getCause().getClass().getName());
    SortedMap<String, Object> sorted = new TreeMap<>();
    sorted.putAll(me.getFailedMessage().getHeaders());
    if (me.getCause() instanceof AnalyzerNonAdequateResourceException) {
        logger.debug("Headers: " + sorted.toString());
        // Let this message get requeued
        throw me.getCause();
    }
    Message<?> failedMsg = me.getFailedMessage();
    Object o = failedMsg.getPayload();
    StringBuilder sb = new StringBuilder();
    if (o != null) {
        sb.append("AnalyzerErrorHandler: Failed Message Type: ")
                .append(o.getClass().getCanonicalName()).append(". toString: ").append(o.toString());
        logger.error(sb.toString());
    }
    // The first level sometimes brings back either MessagingHandlingException or
    // MessagingTransformationException, which may contain a sub-cause
    Exception e = (Exception) me.getCause();
    Exception cursor = e;
    int i = 0;
    sb.delete(0, sb.length());
    sb.append("AnalyzerErrorHandler nested messages: ");
    while (cursor != null && i++ < 10) {
        sb.append(System.lineSeparator()).append("  ")
                .append(cursor.getClass().getCanonicalName()).append(": ")
                .append(cursor.getMessage());
        cursor = (Exception) cursor.getCause(); // walk the cause chain; the original loop never advanced
    }
    if (i > 0) {
        logger.error(sb.toString());
    }
    // Don't want a message to recycle
    throw new AmqpRejectAndDontRequeueException(e);
}
/**
 * This gets set on the ListenerContainer. The default handler on the listener
 * container logs everything with a full stack trace. We don't want to do that
 * for our known resource exception.
 */
public static class PocErrorHandler implements ErrorHandler {

    @Override
    public void handleError(Throwable t) {
        Throwable cause = t.getCause();
        if (cause != null) {
            while (cause.getCause() != null) {
                cause = cause.getCause();
            }
        } else {
            cause = t;
        }
        if (cause instanceof AnalyzerNonAdequateResourceException) {
            logger.info(AnalyzerNonAdequateResourceException.class.getName() + ": not enough resources to process the item.");
            return;
        } else {
            logger.error("POC Listener Exception", t);
        }
    }
}
SpringIntegrationHelper
protected ConnectionFactory connectionFactory;
protected MessageConverter messageConverter;

@Autowired
public void setConnectionFactory(ConnectionFactory connectionFactory) {
    this.connectionFactory = connectionFactory;
}

@Autowired
public void setMessageConverter(@Qualifier("jsonMessageConverter") MessageConverter messageConverter) {
    this.messageConverter = messageConverter;
}

public AmqpInboundChannelAdapter createInboundChannelAdapter(TaskExecutor taskExecutor,
        MessageChannel outputChannel, Queue[] queues, int concurrentConsumers,
        org.aopalliance.aop.Advice[] adviceChain,
        ErrorHandler errorHandler) {
    SimpleMessageListenerContainer listenerContainer =
            new SimpleMessageListenerContainer(connectionFactory);
    // AUTO is the default, but setting it anyhow.
    listenerContainer.setAcknowledgeMode(AcknowledgeMode.AUTO);
    listenerContainer.setAutoStartup(true);
    listenerContainer.setConcurrentConsumers(concurrentConsumers);
    listenerContainer.setMessageConverter(messageConverter);
    listenerContainer.setQueues(queues);
    //listenerContainer.setChannelTransacted(false);
    listenerContainer.setErrorHandler(errorHandler);
    listenerContainer.setPrefetchCount(1);
    listenerContainer.setTaskExecutor(taskExecutor);
    listenerContainer.setDefaultRequeueRejected(true);
    if (adviceChain != null && adviceChain.length > 0) {
        listenerContainer.setAdviceChain(adviceChain);
    }

    AmqpInboundChannelAdapter a = new AmqpInboundChannelAdapter(listenerContainer);
    a.setMessageConverter(messageConverter);
    a.setAutoStartup(true);
    a.setHeaderMapper(MyAmqpHeaderMapper.createPassAllHeaders());
    a.setOutputChannel(outputChannel);
    return a;
}
It's not clear why you want to use a PriorityChannel in this context; why not use a priority queue in RabbitMQ? That way, you can run your flow on the container thread.
Sending the message to the back of the queue yourself would work, but there is a risk of message loss.
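For reference, publishing with a per-message priority against such a queue might look like this sketch (the helper class is hypothetical; it assumes the x-max-priority queue declared in the question and a configured template such as the priorityPOCMessageTemplate bean):
import org.springframework.amqp.core.AmqpTemplate;

public class PrioritySender {

    public static void sendWithPriority(AmqpTemplate template, Object payload, int priority) {
        template.convertAndSend("amq.direct", "poc.priority", payload, message -> {
            // RabbitMQ 3.5+ orders deliveries by this value on queues declared
            // with the x-max-priority argument.
            message.getMessageProperties().setPriority(priority);
            return message;
        });
    }
}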

Jackrabbit and concurrent modification

After doing some performance testing of our application, which uses Jackrabbit, we ran into a huge problem with concurrent modification of Jackrabbit's repository. The problem appears when we add or edit nodes from multiple threads. I then wrote a very simple test which shows that the problem is not in our environment.
Here it is:
Simple Stateless Bean
@Stateless
@Local(TestFacadeLocal.class)
@Remote(TestFacadeRemote.class)
public class TestFacadeBean implements TestFacadeRemote, TestFacadeLocal {

    public void doAction(int name) throws Exception {
        new TestSynch().doAction(name);
    }
}
Simple class
public class TestSynch {

    public void doAction(int name) throws Exception {
        Session session = ((Repository) new InitialContext().
                lookup("java:jcr/local")).login(
                new SimpleCredentials("username", "pwd".toCharArray()));
        List<Node> added = new ArrayList<Node>();
        Node folder = session.getRootNode().getNode("test");
        for (int i = 0; i <= 100; i++) {
            Node child = folder.addNode("" + System.currentTimeMillis(),
                    "nt:folder");
            child.addMixin("mix:versionable");
            added.add(child);
        }
        // saving batch changes
        session.save();
        // checking in all created nodes
        for (Node node : added) {
            session.getWorkspace().getVersionManager().checkin(node.getPath());
        }
    }
}
And the Test class
public class Test {

    private int c = 0;
    private int countAll = 50;
    private ExecutorService executor = Executors.newFixedThreadPool(5);

    public ExecutorService getExecutor() {
        return executor;
    }

    public static void main(String[] args) {
        Test test = new Test();
        try {
            test.start();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void start() throws Exception {
        long time = System.currentTimeMillis();
        TestFacadeRemote testBean = (TestFacadeRemote) getContext().
                lookup("test/TestFacadeBean/remote");
        for (int i = 0; i < countAll; i++) {
            getExecutor().execute(new TestInstallerThread(i, testBean));
        }
        getExecutor().shutdown();
        while (!getExecutor().isTerminated()) {
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        System.out.println(c + " shutdown " +
                (System.currentTimeMillis() - time));
    }

    class TestInstallerThread implements Runnable {

        private int number = 0;
        TestFacadeRemote testBean;

        public TestInstallerThread(int number, TestFacadeRemote testBean) {
            this.number = number;
            this.testBean = testBean;
        }

        @Override
        public void run() {
            try {
                System.out.println("Installing data " + number);
                testBean.doAction(number);
                System.out.println("STOP" + number);
            } catch (Exception e) {
                e.printStackTrace();
                c++;
            }
        }
    }

    public Context getContext() throws NamingException {
        Properties properties = new Properties();
        // init props
        ..............
        return new InitialContext(properties);
    }
}
If I initialize the executor with 1 thread in the pool, everything completes without any error. If I initialize the executor with 5 threads, I sometimes get errors:
on the client
java.lang.RuntimeException: javax.transaction.RollbackException: [com.arjuna.ats.internal.jta.transaction.arjunacore.commitwhenaborted] [com.arjuna.ats.internal.jta.transaction.arjunacore.commitwhenaborted] Can't commit because the transaction is in aborted state
at org.jboss.aspects.tx.TxPolicy.handleEndTransactionException(TxPolicy.java:198)
on the server, first a warning
ItemStateReferenceCache [ItemStateReferenceCache.java:176] overwriting cached entry 187554a7-4c41-404b-b6ee-3ce2a9796a70
and then
javax.jcr.RepositoryException: org.apache.jackrabbit.core.state.ItemStateException: there's already a property state instance with id 52fb4b2c-3ef4-4fc5-9b79-f20a6b2e9ea3/{http://www.jcp.org/jcr/1.0}created
at org.apache.jackrabbit.core.PropertyImpl.restoreTransient(PropertyImpl.java:195) ~[jackrabbit-core-2.2.7.jar:2.2.7]
at org.apache.jackrabbit.core.ItemSaveOperation.restoreTransientItems(ItemSaveOperation.java:879) [jackrabbit-core-2.2.7.jar:2.2.7]
We have tried synchronizing this method and other workarounds to handle the multithreaded calls as a single thread. Nothing helps.
And one more thing - when we ran a similar test without the EJB layer, everything worked fine.
It looks like the container wraps the calls in its own transactions, and then everything crashes.
Maybe somebody has faced such a problem.
Thanks in advance.
From the Jackrabbit Wiki:
The JCR specification explicitly states that a Session is not thread-safe (JCR 1.0 section 7.5 and JCR 2.0 section 4.1.2). Hence, Jackrabbit does not support multiple threads concurrently reading from or writing to the same session. Each session should only ever be accessed from one thread.
...
If you need to write to the same node concurrently, then you need to use multiple sessions, and use JCR locking to ensure there is no conflict.
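A minimal sketch of that pattern (JCR 2.0 LockManager API; it assumes the /test node has had the mix:lockable mixin added):
import javax.jcr.Node;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import javax.jcr.lock.LockManager;

public class LockedWriter {

    // One session per thread; guard the shared subtree with a JCR lock so
    // concurrent writers cannot conflict.
    public static void writeWithLock(Repository repository) throws Exception {
        Session session = repository.login(
                new SimpleCredentials("username", "pwd".toCharArray()));
        try {
            LockManager lockManager = session.getWorkspace().getLockManager();
            // Deep, session-scoped lock on /test (released automatically on logout).
            lockManager.lock("/test", true, true, Long.MAX_VALUE, null);
            try {
                Node folder = session.getRootNode().getNode("test");
                folder.addNode("" + System.currentTimeMillis(), "nt:folder");
                session.save();
            } finally {
                lockManager.unlock("/test");
            }
        } finally {
            session.logout();
        }
    }
}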
