Eviction when using HQL queries - Hazelcast

When I use an HQL statement like "delete from Setting where id=?", the second-level cache entry for the deleted entity is not evicted from Hazelcast.
Does anyone know about this issue? Is this a configuration problem?
Greetings,
Martin
Cache configuration:
<map name="default">
<in-memory-format>BINARY</in-memory-format>
<backup-count>1</backup-count>
<read-backup-data>false</read-backup-data>
<async-backup-count>0</async-backup-count>
<time-to-live-seconds>0</time-to-live-seconds>
<max-idle-seconds>0</max-idle-seconds>
<eviction-policy>NONE</eviction-policy>
<max-size policy="PER_NODE">0</max-size>
<eviction-percentage>25</eviction-percentage>
<merge-policy>com.hazelcast.map.merge.PassThroughMergePolicy</merge-policy>
</map>
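One common workaround (a sketch, not necessarily the fix for the configuration above) is to evict the stale entry explicitly after the bulk delete, since bulk HQL statements are not guaranteed to maintain the second-level cache entry by entry. This assumes the Hibernate 4.x/5.x Cache API; Setting and the id value are the entity and identifier from the question:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class SettingCacheEviction {

    // Delete via bulk HQL, then evict the stale second-level cache entry
    // explicitly so Hazelcast does not keep serving the removed Setting.
    public static void deleteSetting(SessionFactory sessionFactory, Long settingId) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            session.createQuery("delete from Setting where id = :id")
                   .setParameter("id", settingId)
                   .executeUpdate();
            tx.commit();
        } finally {
            session.close();
        }

        // Evict the single entry (Hibernate Cache API) ...
        sessionFactory.getCache().evictEntity(Setting.class, settingId);
        // ... or, more coarsely, the whole entity region:
        // sessionFactory.getCache().evictEntityRegion(Setting.class);
    }
}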

Related

hazelcast client not getting the full dataset from a cluster

I am using Hazelcast v3.6 (client-server mode). The relevant parts of the client and server configs are copied below. When I call IMap.size(), I get the count for the full set of data that I have inserted into the test cluster (2 nodes). However, when I get the keySet or entrySet, I only get half of the keys/objects. I tried calling localKeySet from the client, but that throws the following exception:
Exception in thread "main" java.lang.UnsupportedOperationException: Locality is ambiguous for client!!!
at com.hazelcast.client.proxy.ClientMapProxy.localKeySet(ClientMapProxy.java:1172)
I think localKeySet is not available on the client, but I need to confirm that as well.
Client config - relevant parts, not the full config:
<hazelcast-client xsi:schemaLocation="http://www.hazelcast.com/schema/client-config hazelcast-client-config-3.6.xsd"
xmlns="http://www.hazelcast.com/schema/client-config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<properties>
<property name="hazelcast.client.shuffle.member.list">true</property>
<property name="hazelcast.client.heartbeat.timeout">60000</property>
<property name="hazelcast.client.heartbeat.interval">5000</property>
<property name="hazelcast.client.event.thread.count">5</property>
<property name="hazelcast.client.event.queue.capacity">1000000</property>
<property name="hazelcast.client.invocation.timeout.seconds">120</property>
</properties>
<network>
<cluster-members>
<!-- ip addresses replaced -->
<address>101.102.103.104</address>
<address>101.102.103.105</address>
</cluster-members>
<smart-routing>true</smart-routing>
<redo-operation>true</redo-operation>
<connection-timeout>60000</connection-timeout>
<connection-attempt-period>3000</connection-attempt-period>
<connection-attempt-limit>2</connection-attempt-limit>
<socket-options>
<tcp-no-delay>false</tcp-no-delay>
<keep-alive>true</keep-alive>
<reuse-address>true</reuse-address>
<linger-seconds>3</linger-seconds>
<timeout>-1</timeout>
<buffer-size>32</buffer-size>
</socket-options>
</network>
Server Config - relevant parts
<network>
<port auto-increment="true" port-count="100">5701</port>
<outbound-ports>
<ports>0-5900</ports>
</outbound-ports>
<join>
<multicast enabled="false">
</multicast>
<tcp-ip enabled="true">
<member>101.102.103.104</member>
<member>101.102.103.105</member>
</tcp-ip>
</join>
<interfaces enabled="true">
<interface>101.102.103.104</interface> <!-- this is modified for the second server node -->
</interfaces>
<ssl enabled="false" />
<socket-interceptor enabled="false" />
<symmetric-encryption enabled="false">
<algorithm>PBEWithMD5AndDES</algorithm>
<!-- salt value to use when generating the secret key -->
<salt>test</salt>
<!-- pass phrase to use when generating the secret key -->
<password>testpass</password>
<!-- iteration count to use when generating the secret key -->
<iteration-count>19</iteration-count>
</symmetric-encryption>
</network>
Any thoughts on what might be causing the issue and ways to fix it?
Update:
I have tried replicating the setup on another two-node cluster but have not been able to reproduce the issue. When I call keySet on the new cluster, I get the full set of keys, not half (unlike on the original cluster). I am aware that I can use Predicates instead of fetching the whole keySet.
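For illustration, a minimal client-side sketch of that approach, assuming the Hazelcast 3.6 client API; the map name and the predicate field are placeholders, not taken from the original setup:

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.query.SqlPredicate;

public class KeySetCheck {
    public static void main(String[] args) {
        ClientConfig config = new ClientConfig();
        config.getNetworkConfig().addAddress("101.102.103.104:5701", "101.102.103.105:5701");
        HazelcastInstance client = HazelcastClient.newHazelcastClient(config);

        IMap<Object, Object> map = client.getMap("testMap"); // placeholder map name

        // On a healthy cluster these three counts should agree.
        System.out.println("size()            = " + map.size());
        System.out.println("keySet().size()   = " + map.keySet().size());
        System.out.println("entrySet().size() = " + map.entrySet().size());

        // Server-side filtering instead of pulling the whole key set back to the client.
        System.out.println("filtered keys     = " + map.keySet(new SqlPredicate("active = true")).size());

        client.shutdown();
    }
}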
Update
I will try to do that in the near future - I have to dig through the logs. The issue was resolved when I brought down one node and the second node became the primary for all the data (it was a two-node test cluster with an async replication factor of 1). When I brought the second node back up, the data was redistributed and I no longer saw any issues with the two-node cluster. So it appears that something happened after the cluster was originally brought up that left the Hazelcast client able to get the map values from both nodes, while any call to keySet/entrySet only returned keys/values from one node. I tried switching the order of the member addresses in the Hazelcast client XML file to see if that would change the keys/entries the client was receiving, but that did not solve the issue.

Infinispan 7.1 Cassandra cache-store

I have a very simple cache of type String, Long in Infinispan. I would like to persist this cache to Cassandra.
I have installed Infinispan 7.1 Server and I have a Cassandra instance running.
I've looked at http://infinispan.org/docs/cachestores/cassandra/, which lists two XML excerpts. Since I am completely new to Infinispan, I have no idea where to add the listed XML.
Is there an example Infinispan 7.1 server installation somewhere where this is set up and working?
EDIT 1:
I am able to use file-based persistence by defining myCache as follows:
<subsystem xmlns="urn:infinispan:server:core:7.1" default-cache-container="clustered">
<cache-container name="clustered" default-cache="default" statistics="true">
<transport executor="infinispan-transport" lock-timeout="60000"/>
<distributed-cache name="myCache" mode="SYNC" start="EAGER">
<file-store
shared="false" preload="true"
fetch-state="true"
read-only="false"
purge="false"
path="${java.io.tmpdir}">
<write-behind flush-lock-timeout="15000" thread-pool-size="5" />
</file-store>
</distributed-cache>
</cache-container>
</subsystem>
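As a quick way to verify the store, a minimal Hot Rod client sketch that puts and gets String/Long entries in myCache; the host and port are assumptions, and this uses Infinispan's standard Java Hot Rod client rather than anything specific to the server distribution:

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class MyCacheSmokeTest {
    public static void main(String[] args) {
        // Connect to the Infinispan server over Hot Rod (host/port assumed).
        RemoteCacheManager manager = new RemoteCacheManager(
                new ConfigurationBuilder().addServer().host("localhost").port(11222).build());

        RemoteCache<String, Long> cache = manager.getCache("myCache");
        cache.put("visits", 42L);
        System.out.println("visits = " + cache.get("visits"));

        manager.stop();
    }
}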
The schema for the XML defines a "store" element that extends "custom-store":
<xs:element name="store" type="tns:custom-store">
Sooo... I guess I need to study its schema to see how I can create a Cassandra store.
I have a constant feeling I'm doing something wrong, because I can find no examples of anyone using this XML...
EDIT 2:
I replaced the file-store with the following:
<store
name="cassandraStore"
class="org.infinispan.loaders.cassandra.CassandraCacheStore"
shared="true" preload="false" passivation="false"
fetch-state="true">
<property name="host">localhost</property>
<property name="keySpace">mykeyspace</property>
<property name="entryColumnFamily">mytable</property>
<property name="expirationColumnFamily">mytableExpiration</property>
<property name="sharedKeyspace">false</property>
<property name="readConsistencyLevel">ONE</property>
<property name="writeConsistencyLevel">ONE</property>
<property name="configurationPropertiesFile">cassandrapool.properties</property>
<property name="keyMapper">org.infinispan.loaders.keymappers.DefaultTwoWayKey2StringMapper</property>
</store>
When I start the Infinispan server, I get:
"JBAS010292: org.infinispan.loaders.cassandra.CassandraCacheStore is not a valid cache store"
Looking at http://mvnrepository.com/artifact/org.infinispan/infinispan-cachestore-cassandra, I see that the latest version is 6.0.0.Alpha1, from July 2013.
I think the conclusion here is: Cassandra is not supported by Infinispan.

GridGain: how to redistribute a partitioned-mode cache when one node is brought down

I am new to GridGain and we are doing a POC with it. We ran some simple examples using a partitioned cache, and it works well. However, we found that when we bring a node down, the cache entries from that node are gone. So my question is: if we keep using partitioned mode, is there any way to redistribute the cache when a node (or several nodes) is undeployed? If not, is there a good way to handle this? Thanks!
Configuration code:
<context:component-scan base-package="com.test" />
<bean id="hostGrid" class="org.gridgain.grid.GridSpringBean">
<property name="configuration">
<bean class="org.gridgain.grid.GridConfiguration">
<property name="localHost" value="127.0.0.1"/>
<property name="peerClassLoadingEnabled" value="false"/>
<property name="marshaller">
<bean class="org.gridgain.grid.marshaller.optimized.GridOptimizedMarshaller">
<property name="requireSerializable" value="false"/>
</bean>
</property>
<property name="cacheConfiguration">
<list>
<bean class="org.gridgain.grid.cache.GridCacheConfiguration">
<property name="name" value="CACHE"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="store" >
<bean class="com.test.CacheJdbcPOCStore"></bean>
</property>
</bean>
</list>
</property>
</bean>
</property>
</bean>
We deployed the same WAR (using the above configuration) to 3 Tomcat 7 servers. We did not specify the number of backups, so we assumed it would be 1 by default.
Follow-up
I solved this problem by setting backups=1 in the configuration. It looks like no backup copy was being created before, even though I thought one would be made by default. Also, when I tried to bring down 2 nodes at the same time, I saw that part of the cache was gone, so I set backups=2 and saw no cache loss that time. So it looks like, in the worst case where all nodes except the main node crash, we need (number of nodes - 1) backups to prevent data loss. But if I do that, it is effectively the same as replicated mode, and replicated mode has fewer restrictions on queries and transactions. So my question is: if we want to take advantage of parallel computation and at the same time prevent data loss when nodes crash, what is the best practice?
Thanks!
Number of backups is 0 by default. The documentation has been fixed.
You are right about REPLICATED mode. If you are worried about any data loss, REPLICATED mode is the only way to guarantee against it. The disadvantage is that writes get slower, as every node in the cluster has to be updated. The advantage is that the data is available on every node, so you can easily access it from your computations without worrying about which node to send them to.
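For illustration, a minimal programmatic sketch of the backups setting (GridGain 6.x API assumed; in the Spring XML above this would correspond to a backups property on GridCacheConfiguration):

import org.gridgain.grid.Grid;
import org.gridgain.grid.GridConfiguration;
import org.gridgain.grid.GridGain;
import org.gridgain.grid.cache.GridCacheConfiguration;
import org.gridgain.grid.cache.GridCacheMode;

public class PartitionedCacheWithBackups {
    public static void main(String[] args) throws Exception {
        GridCacheConfiguration cacheCfg = new GridCacheConfiguration();
        cacheCfg.setName("CACHE");
        cacheCfg.setCacheMode(GridCacheMode.PARTITIONED);
        // One backup copy per partition: the cache survives the loss of a single node.
        cacheCfg.setBackups(1);

        GridConfiguration cfg = new GridConfiguration();
        cfg.setCacheConfiguration(cacheCfg);

        // The started node now holds both primary and backup partitions for "CACHE".
        Grid grid = GridGain.start(cfg);
    }
}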

Memory leak in Spring library: SimpleMessageStore

Three of the web services I am working on use Spring's SimpleMessageStore for storing messages. For some reason it is causing a memory leak in the production environment, and I am unable to reproduce it in the lower environments. I am new to Spring Integration and need help understanding what might be causing this.
The Spring config looks like this:
<!-- MESSAGE STORES -->
<bean id="monitoringHeaderRequestMsgStore" class="org.springframework.integration.store.SimpleMessageStore"/>
<bean id="gbqHeaderRequestMsgStore" class="org.springframework.integration.store.SimpleMessageStore"/>
<bean id="bondAgreementResponseMsgStore" class="org.springframework.integration.store.SimpleMessageStore"/>
<bean id="bondWIthRulesRequestMsgStore" class="org.springframework.integration.store.SimpleMessageStore"/>
<bean id="ProcessVariableMessageStores" class="com.aviva.uklife.investment.impl.ProcessVariableMessageStores">
<property name="_monitoringHeaderRequestMsgStore" ref="monitoringHeaderRequestMsgStore"/>
<property name="_gbqHeaderRequestMsgStore" ref="gbqHeaderRequestMsgStore"/>
<property name="_bondWIthRulesRequestMsgStore" ref="bondWIthRulesRequestMsgStore"/>
<property name="_bondAgreementResponseMsgStore" ref="bondAgreementResponseMsgStore"/>
</bean>
<!-- Retrieve stored MonitoringHeaderRequest -->
<int:transformer expression="headers.get('#{T(.....Constants).MONITORING_HEADER_REQUEST_CLAIM_CHECK_ID}')"/>
<int:claim-check-out message-store="monitoringHeaderRequestMsgStore" remove-message="false"/>
<!-- Store HeaderRequest -->
<int:gateway request-channel="header-req-store-channel"/>
<!-- PROCESS VARIABLES STORAGE IN STORE CHANNELS WITH KEY OR CLAIMCHECK ID -->
<int:chain input-channel="monitoring-header-req-store-channel">
<int:claim-check-in message-store="monitoringHeaderRequestMsgStore"/>
<int:header-enricher>
<int:header name="#{T(....Constants).MONITORING_HEADER_REQUEST_CLAIM_CHECK_ID}" expression="payload"/>
</int:header-enricher>
<int:claim-check-out message-store="monitoringHeaderRequestMsgStore" remove-message="false"/>
</int:chain>
Thank you
To be honest, using SimpleMessageStore in a production environment isn't recommended, precisely because of the memory leak you noticed: it holds messages in memory, so it grows without bound if you don't clear the MessageStore periodically.
That said, there may be cases when you need to keep messages in the MessageStore for a long time. In that case, consider replacing SimpleMessageStore with a persistent MessageStore.
On the other hand, we would need more information on the matter to provide better help.
Maybe you just have several aggregators and don't use expire-groups-upon-completion="true"...
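For example, a persistent alternative might look like the following sketch, assuming the spring-integration-jdbc module and an existing DataSource bean (JdbcMessageStore needs Spring Integration's standard INT_* tables, and its package has moved between Spring Integration versions):

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.jdbc.store.JdbcMessageStore;

@Configuration
public class MessageStoreConfig {

    // Persistent replacement for one of the SimpleMessageStore beans:
    // claim-checked messages are kept in the database instead of the JVM heap,
    // so they cannot accumulate in memory between check-in and check-out.
    @Bean
    public JdbcMessageStore monitoringHeaderRequestMsgStore(DataSource dataSource) {
        return new JdbcMessageStore(dataSource);
    }
}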

Global Event Configuration in Hazelcast

The Hazelcast documentation mentions three configurable parameters related to global event configuration:
hazelcast.event.queue.capacity: default value is 1000000
hazelcast.event.queue.timeout.millis: default value is 250
hazelcast.event.thread.count: default value is 5
I would like to ask how to configure them in XML. Is it correct to set them like the following?
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.2.xsd"
xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<properties>
<property name="hazelcast.event.queue.capacity">10000000</property>
<property name="hazelcast.event.queue.timeout.millis">1000</property>
<property name="hazelcast.event.thread.count">10</property>
</properties>
</hazelcast>
And can I set the above parameters per event type (i.e. map events use one set of parameters and ITopic uses another)? Would it be correct to set them like the following?
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.2.xsd"
xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<map name="*">
<properties>
<property name="hazelcast.event.queue.capacity">10000000</property>
<property name="hazelcast.event.queue.timeout.millis">1000</property>
<property name="hazelcast.event.thread.count">10</property>
</properties>
</map>
</hazelcast>
Thanks for the help :)
You can set the properties like you did in your first example, or via system properties using -Dfoo=bar.
Unfortunately, it is a global configuration, because the same event system is shared by everything. Isolating these systems has been on my list for a long time. So your second example will not work.
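For completeness, the programmatic equivalent of the first (working) example, using the standard Config API; the values mirror the XML above and apply to the whole instance, not to individual maps or topics:

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class EventSystemTuning {
    public static void main(String[] args) {
        Config config = new Config();
        // Instance-wide event system settings; they cannot be scoped per map or per topic.
        config.setProperty("hazelcast.event.queue.capacity", "10000000");
        config.setProperty("hazelcast.event.queue.timeout.millis", "1000");
        config.setProperty("hazelcast.event.thread.count", "10");

        HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
        System.out.println("Started instance: " + instance.getName());
    }
}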
