I am using Hazelcast v3.6 (client-server mode). The relevant parts of the client and server configs are copied below. When I call IMap.size(), I get the count for the full set of data that I have inserted into the test cluster (2 nodes). However, when I get the keySet or entrySet, I only get the set for half of the keys/objects. I tried calling localKeySet from the client, but that throws the following exception:
Exception in thread "main" java.lang.UnsupportedOperationException: Locality is ambiguous for client!!!
at com.hazelcast.client.proxy.ClientMapProxy.localKeySet(ClientMapProxy.java:1172)
I think localKeySet is not available on the client, but I need to reconfirm that as well.
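For reference, this is roughly the client-side check that produces the mismatch - a minimal sketch against the standard 3.6 client API; the map name "test" is a placeholder, and the addresses are just the (already anonymized) ones from the config below:

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class MapCountCheck {
    public static void main(String[] args) {
        // In the real setup the client is configured from the XML below.
        ClientConfig config = new ClientConfig();
        config.getNetworkConfig().addAddress("101.102.103.104", "101.102.103.105");

        HazelcastInstance client = HazelcastClient.newHazelcastClient(config);
        IMap<String, Object> map = client.getMap("test"); // placeholder map name

        // size() is computed across the whole cluster; keySet()/entrySet() should be too.
        System.out.println("size()            = " + map.size());
        System.out.println("keySet().size()   = " + map.keySet().size());
        System.out.println("entrySet().size() = " + map.entrySet().size());

        client.shutdown();
    }
}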
Client config - relevant parts, not the full config:
<hazelcast-client xsi:schemaLocation="http://www.hazelcast.com/schema/client-config hazelcast-client-config-3.6.xsd"
                  xmlns="http://www.hazelcast.com/schema/client-config"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <properties>
        <property name="hazelcast.client.shuffle.member.list">true</property>
        <property name="hazelcast.client.heartbeat.timeout">60000</property>
        <property name="hazelcast.client.heartbeat.interval">5000</property>
        <property name="hazelcast.client.event.thread.count">5</property>
        <property name="hazelcast.client.event.queue.capacity">1000000</property>
        <property name="hazelcast.client.invocation.timeout.seconds">120</property>
    </properties>
    <network>
        <cluster-members>
            <!-- IP addresses replaced -->
            <address>101.102.103.104</address>
            <address>101.102.103.105</address>
        </cluster-members>
        <smart-routing>true</smart-routing>
        <redo-operation>true</redo-operation>
        <connection-timeout>60000</connection-timeout>
        <connection-attempt-period>3000</connection-attempt-period>
        <connection-attempt-limit>2</connection-attempt-limit>
        <socket-options>
            <tcp-no-delay>false</tcp-no-delay>
            <keep-alive>true</keep-alive>
            <reuse-address>true</reuse-address>
            <linger-seconds>3</linger-seconds>
            <timeout>-1</timeout>
            <buffer-size>32</buffer-size>
        </socket-options>
    </network>
Server config - relevant parts:
<network>
    <port auto-increment="true" port-count="100">5701</port>
    <outbound-ports>
        <ports>0-5900</ports>
    </outbound-ports>
    <join>
        <multicast enabled="false"/>
        <tcp-ip enabled="true">
            <member>101.102.103.104</member>
            <member>101.102.103.105</member>
        </tcp-ip>
    </join>
    <interfaces enabled="true">
        <interface>101.102.103.104</interface> <!-- this is modified for the second server node -->
    </interfaces>
    <ssl enabled="false" />
    <socket-interceptor enabled="false" />
    <symmetric-encryption enabled="false">
        <algorithm>PBEWithMD5AndDES</algorithm>
        <!-- salt value to use when generating the secret key -->
        <salt>test</salt>
        <!-- pass phrase to use when generating the secret key -->
        <password>testpass</password>
        <!-- iteration count to use when generating the secret key -->
        <iteration-count>19</iteration-count>
    </symmetric-encryption>
</network>
Any thoughts on what might be causing the issue and ways to fix it?
Update:
I have tried replicating the setup on another two-node cluster but have not been able to reproduce the same issue. When I call keySet on the new cluster, I get the full set of keys, not half (unlike the original cluster). I am aware that I can use Predicates instead of getting the whole keySet.
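For example, a predicate-based fetch instead of the full keySet could look like the sketch below; the field name "active" is made up for illustration, and any Predicate (including a PagingPredicate, if the entries are Comparable or a comparator is supplied) would work:

import java.util.Set;

import com.hazelcast.core.IMap;
import com.hazelcast.query.SqlPredicate;

public class PredicateFetchExample {
    // Fetch only the keys matching a predicate instead of the whole keySet.
    // "active = true" refers to a hypothetical field on the cached value objects.
    public static Set<Object> activeKeys(IMap<Object, Object> map) {
        return map.keySet(new SqlPredicate("active = true"));
    }
}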
Update:
I will try to do that in the near future - I have to dig through the logs. The issue was resolved when I brought down one node and the second node became the primary for all the data (it was a two-node test cluster with an async replication factor of 1). When I brought the second node back up, the data was redistributed and I did not see any further issues with the two-node cluster. So it appears that something happened when the cluster was originally brought up that resulted in the Hazelcast client getting the map values from both nodes, while any call to keySet/entrySet was only returning keys/values from one node. I also tried switching the order of the members in the Hazelcast client XML file to see if that would change the keys/entries the client was receiving, but that did not solve the issue.
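If it happens again, one way to check from the client whether key ownership really is split across both members is the public PartitionService API - a rough sketch (the class and method names here are made up for illustration):

import java.util.HashMap;
import java.util.Map;

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.Member;

public class KeyOwnershipCheck {
    // Counts how many of the given keys (e.g. the keys originally inserted) each member owns,
    // according to the client's partition table.
    public static Map<Member, Integer> ownerCounts(HazelcastInstance client, Iterable<?> keys) {
        Map<Member, Integer> counts = new HashMap<Member, Integer>();
        for (Object key : keys) {
            // getOwner() can briefly return null while the client is still fetching the partition table
            Member owner = client.getPartitionService().getPartition(key).getOwner();
            Integer current = counts.get(owner);
            counts.put(owner, current == null ? 1 : current + 1);
        }
        return counts;
    }
}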
We are implementing a flow where a <int-sftp:inbound-streaming-channel-adapter/> polls a directory for a file and when found it passes the stream to a service activator.
The issue is we will have multiple instances of the app running and we would like to lock the process so that only one instance can pick up the file.
Looking at the documentation, the Redis Lock Registry looks to be the solution. Is there an example of this being used in XML?
All I can find is a few references to it and the source code for it.
http://docs.spring.io/spring-integration/reference/html/redis.html point 24.1
Added info:
I've added the RedisMetadataStore and SftpSimplePatternFileListFilter. It does work, but with one oddity: when sftpInboundAdapter is activated by the poller, it adds an entry for each file to the metadata store. Say there are 10 files - there would be 10 entries in the store, but it does not process all 10 files in one go; only 1 file is processed per poll from the adapter. That would be fine, except that in a multi-instance environment, if the server which picked up the files goes down after processing 5 of them, another server doesn't seem to pick up the remaining 5 files unless the files are "touched".
Is the behaviour of picking up 1 file per poll correct, or should it process all valid files during one poll?
Below is my XML
<int:channel id="sftpInbound"/> <!-- To Java -->
<int:channel id="sftpOutbound"/>
<int:channel id="sftpStreamTransformer"/>
<int-sftp:inbound-streaming-channel-adapter id="sftpInboundAdapter"
channel="sftpInbound"
session-factory="sftpSessionFactory"
filter="compositeFilter"
remote-file-separator="/"
remote-directory="${sftp.directory}">
<int:poller cron="${sftp.cron}"/>
</int-sftp:inbound-streaming-channel-adapter>
<int:stream-transformer input-channel="sftpStreamTransformer" output-channel="sftpOutbound"/>
<bean id="compositeFilter"
class="org.springframework.integration.file.filters.CompositeFileListFilter">
<constructor-arg>
<list>
<bean
class="org.springframework.integration.sftp.filters.SftpSimplePatternFileListFilter">
<constructor-arg value="Receipt*.txt" />
</bean>
<bean id="SftpPersistentAcceptOnceFileListFilter" class="org.springframework.integration.sftp.filters.SftpPersistentAcceptOnceFileListFilter">
<constructor-arg ref="metadataStore" />
<constructor-arg value="ReceiptLock_" />
</bean>
</list>
</constructor-arg>
</bean>
<bean id="redisConnectionFactory"
class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory">
<property name="port" value="${redis.port}" />
<property name="password" value="${redis.password}" />
<property name="hostName" value="${redis.host}" />
</bean>
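For completeness, the metadataStore bean referenced by compositeFilter is not shown in the excerpt above; assuming it is the Redis-backed one, a rough Java-config equivalent would be the following (the key name "sftp-metadata" is arbitrary):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.integration.redis.metadata.RedisMetadataStore;

@Configuration
public class MetadataStoreConfig {

    // Backed by the redisConnectionFactory declared in the XML above.
    @Bean
    public RedisMetadataStore metadataStore(RedisConnectionFactory redisConnectionFactory) {
        return new RedisMetadataStore(redisConnectionFactory, "sftp-metadata");
    }
}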
No; you need to use a SftpPersistentAcceptOnceFileListFilter (docs here) with a Redis (or some other) metadata store, not a lock registry.
EDIT
Regarding your comment below.
Yes, it's a known issue; in the next release we've added a max-fetch-size for exactly this reason - so the instances can each retrieve some of the files rather than the first instance grabbing them all.
(The inbound adapter works by first copying files found, that are not already in the store, to the local disk, and then emits them one at a time).
5.0 is only available as a milestone right now (M2 at the time of writing), but the current version and the milestone repo can be found here; it won't be released for a few more months.
Another alternative would be to use outbound gateways - one to LS the files and one to GET individual files; your app would have to use the metadata store itself, though, to determine which file(s) can be fetched.
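A sketch of the "use the metadata store itself" part, assuming the Redis metadata store from the question; the class and method names below are made up for illustration, while ConcurrentMetadataStore and its putIfAbsent come from Spring Integration:

import java.util.ArrayList;
import java.util.List;

import org.springframework.integration.metadata.ConcurrentMetadataStore;

public class FileClaimer {

    private final ConcurrentMetadataStore metadataStore;

    public FileClaimer(ConcurrentMetadataStore metadataStore) {
        this.metadataStore = metadataStore;
    }

    // Given the file names returned by the LS gateway, keep only those this instance
    // claims first; only the claimed ones should then be passed on to the GET gateway.
    public List<String> claim(List<String> lsResults) {
        List<String> claimed = new ArrayList<String>();
        for (String fileName : lsResults) {
            // putIfAbsent returns null only for the instance that wins the claim
            if (metadataStore.putIfAbsent("ReceiptLock_" + fileName, "claimed") == null) {
                claimed.add(fileName);
            }
        }
        return claimed;
    }
}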
When I use an HQL statement like "delete from Setting where id=?", the second-level cache in Hazelcast is not evicted for the deleted entity.
Does anyone know about this issue? Is this a configuration problem?
Greetings,
Martin
Cache configuration:
<map name="default">
<in-memory-format>BINARY</in-memory-format>
<backup-count>1</backup-count>
<read-backup-data>false</read-backup-data>
<async-backup-count>0</async-backup-count>
<time-to-live-seconds>0</time-to-live-seconds>
<max-idle-seconds>0</max-idle-seconds>
<eviction-policy>NONE</eviction-policy>
<max-size policy="PER_NODE">0</max-size>
<eviction-percentage>25</eviction-percentage>
<merge-policy>com.hazelcast.map.merge.PassThroughMergePolicy</merge-policy>
</map>
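One workaround, if explicit eviction after the bulk delete is acceptable, would be something along these lines - a sketch assuming Hibernate's SessionFactory cache API and the Setting entity from the HQL above:

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class SettingDao {

    private final SessionFactory sessionFactory;

    public SettingDao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void delete(Session session, long id) {
        session.createQuery("delete from Setting where id = :id")
               .setParameter("id", id)
               .executeUpdate();
        // Explicitly evict the deleted entity from the second-level cache,
        // since the bulk delete apparently does not do it here.
        sessionFactory.getCache().evictEntity(Setting.class, id);
    }
}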
I have a very simple cache of type String, Long in Infinispan. I would like to persist this cache to Cassandra.
I have installed Infinispan 7.1 Server and I have a Cassandra instance running.
I've looked at http://infinispan.org/docs/cachestores/cassandra/, which lists two XML excerpts. Since I am completely new to Infinispan, I have no idea where to add the listed XML.
Is there an example Infinispan 7.1 server installation somewhere where this is set up and working?
EDIT 1:
I am able to use file based persistence by defining myCache as follows:
<subsystem xmlns="urn:infinispan:server:core:7.1" default-cache-container="clustered">
    <cache-container name="clustered" default-cache="default" statistics="true">
        <transport executor="infinispan-transport" lock-timeout="60000"/>
        <distributed-cache name="myCache" mode="SYNC" start="EAGER">
            <file-store
                    shared="false"
                    preload="true"
                    fetch-state="true"
                    read-only="false"
                    purge="false"
                    path="${java.io.tmpdir}">
                <write-behind flush-lock-timeout="15000" thread-pool-size="5" />
            </file-store>
        </distributed-cache>
    </cache-container>
</subsystem>
The schema for the xml defines a "store" element that extends "custom-store".
<xs:element name="store" type="tns:custom-store">
Sooo... I guess I need to study its schema to see how I can create a Cassandra store.
I have a constant feeling I'm doing something wrong because I can find no examples of anyone using this xml...
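As a point of comparison, in embedded (library) mode the same kind of file-store persistence can be set up programmatically; this is only a sketch of the 7.x ConfigurationBuilder API, not the standalone-server subsystem XML being discussed above:

import org.infinispan.Cache;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class EmbeddedFileStoreExample {
    public static void main(String[] args) {
        // Programmatic (embedded-mode) analogue of the <file-store> element above.
        Configuration config = new ConfigurationBuilder()
                .persistence()
                    .addSingleFileStore()
                        .location(System.getProperty("java.io.tmpdir"))
                        .preload(true)
                        .shared(false)
                        .fetchPersistentState(true)
                .build();

        DefaultCacheManager manager = new DefaultCacheManager();
        manager.defineConfiguration("myCache", config);

        Cache<String, Long> cache = manager.getCache("myCache");
        cache.put("example", 1L);
        manager.stop();
    }
}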
EDIT 2
Replaced file-store with the following:
<store name="cassandraStore"
       class="org.infinispan.loaders.cassandra.CassandraCacheStore"
       shared="true" preload="false" passivation="false"
       fetch-state="true">
    <property name="host">localhost</property>
    <property name="keySpace">mykeyspace</property>
    <property name="entryColumnFamily">mytable</property>
    <property name="expirationColumnFamily">mytableExpiration</property>
    <property name="sharedKeyspace">false</property>
    <property name="readConsistencyLevel">ONE</property>
    <property name="writeConsistencyLevel">ONE</property>
    <property name="configurationPropertiesFile">cassandrapool.properties</property>
    <property name="keyMapper">org.infinispan.loaders.keymappers.DefaultTwoWayKey2StringMapper</property>
</store>
When I start the Infinispan server I get:
"JBAS010292: org.infinispan.loaders.cassandra.CassandraCacheStore is not a valid cache store"
Looking at http://mvnrepository.com/artifact/org.infinispan/infinispan-cachestore-cassandra, I see the latest version is 6.0.0.Alpha1, from July 2013.
I think the conclusion here is: Cassandra is not supported by Infinispan.
I am new to GridGain and we are doing a POC using it. We did some simple examples using a partitioned cache, and it works well; however, we found that when we bring a node down, the cache from that node is gone. So my question is: if we keep using partitioned mode, is there any way to re-distribute the cache when a node (or several nodes) is undeployed? If not, is there any good way to do it? Thanks!
configuration Code:
<context:component-scan base-package="com.test" />

<bean id="hostGrid" class="org.gridgain.grid.GridSpringBean">
    <property name="configuration">
        <bean class="org.gridgain.grid.GridConfiguration">
            <property name="localHost" value="127.0.0.1"/>
            <property name="peerClassLoadingEnabled" value="false"/>
            <property name="marshaller">
                <bean class="org.gridgain.grid.marshaller.optimized.GridOptimizedMarshaller">
                    <property name="requireSerializable" value="false"/>
                </bean>
            </property>
            <property name="cacheConfiguration">
                <list>
                    <bean class="org.gridgain.grid.cache.GridCacheConfiguration">
                        <property name="name" value="CACHE"/>
                        <property name="cacheMode" value="PARTITIONED"/>
                        <property name="store">
                            <bean class="com.test.CacheJdbcPOCStore"/>
                        </property>
                    </bean>
                </list>
            </property>
        </bean>
    </property>
</bean>
We deployed the same WAR (using the above configuration) to 3 Tomcat 7 servers. We did not specify the number of backups, so it should be 1 by default.
follow up
I solved this problem by setting backups=1 in the configuration. It looks like previously no backup copy was created, even though I expected 1 by default. Also, when I tried to bring down 2 nodes at one time, I saw that part of the cache was gone, so I set backups=2 and found no cache loss this time. So it looks like, in the worst case where all nodes except the main node crash, we need (number of nodes - 1) backups to prevent data loss. But if I do that, it is just like REPLICATED mode, and REPLICATED mode has fewer restrictions on queries and transactions. So my question is: if we want to take advantage of parallel computation and at the same time prevent data loss when nodes crash, what is the best practice?
Thanks!
Number of backups is 0 by default. The documentation has been fixed.
You are right about REPLICATED mode. If you are worried about any data loss, REPLICATED mode is the only way to guard against it. The disadvantage is that writes get slower, as all the nodes in the cluster have to be updated. The advantage is that the data is available on every node, so you can easily access it from your computations without worrying about which node to send them to.
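A minimal sketch of the cache configuration with backups set explicitly, using the same GridCacheConfiguration class as in the Spring XML above (the setter names are assumed to mirror the XML property names):

import org.gridgain.grid.cache.GridCacheConfiguration;
import org.gridgain.grid.cache.GridCacheMode;

public class CacheConfigFactory {

    // Programmatic analogue of the GridCacheConfiguration bean above,
    // with backups set explicitly (the default is 0, not 1).
    public static GridCacheConfiguration partitionedCache() {
        GridCacheConfiguration cacheCfg = new GridCacheConfiguration();
        cacheCfg.setName("CACHE");
        cacheCfg.setCacheMode(GridCacheMode.PARTITIONED);
        cacheCfg.setBackups(1); // one backup copy per primary partition
        return cacheCfg;
    }
}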
Three of the web services that I am working on use Spring Integration's SimpleMessageStore for storing messages. For some reason it is causing a memory leak in the production environment, and I am unable to reproduce it in the lower environments. I am new to Spring Integration and need help understanding what might be causing this.
The Spring config looks like this:
<!-- MESSAGE STORES -->
<bean id="monitoringHeaderRequestMsgStore" class="org.springframework.integration.store.SimpleMessageStore"/>
<bean id="gbqHeaderRequestMsgStore" class="org.springframework.integration.store.SimpleMessageStore"/>
<bean id="bondAgreementResponseMsgStore" class="org.springframework.integration.store.SimpleMessageStore"/>
<bean id="bondWIthRulesRequestMsgStore" class="org.springframework.integration.store.SimpleMessageStore"/>
<bean id="ProcessVariableMessageStores" class="com.aviva.uklife.investment.impl.ProcessVariableMessageStores">
<property name="_monitoringHeaderRequestMsgStore" ref="monitoringHeaderRequestMsgStore"/>
<property name="_gbqHeaderRequestMsgStore" ref="gbqHeaderRequestMsgStore"/>
<property name="_bondWIthRulesRequestMsgStore" ref="bondWIthRulesRequestMsgStore"/>
<property name="_bondAgreementResponseMsgStore" ref="bondAgreementResponseMsgStore"/>
</bean>
<!-- Retrieve stored MonitoringHeaderRequest -->
<int:transformer expression="headers.get('#{T(.....Constants).MONITORING_HEADER_REQUEST_CLAIM_CHECK_ID}')"/>
<int:claim-check-out message-store="monitoringHeaderRequestMsgStore" remove-message="false"/>
<!-- Store HeaderRequest -->
<int:gateway request-channel="header-req-store-channel"/>
<!-- PROCESS VARIABLES STORAGE IN STORE CHANNELS WITH KEY OR CLAIMCHECK ID -->
<int:chain input-channel="monitoring-header-req-store-channel">
<int:claim-check-in message-store="monitoringHeaderRequestMsgStore"/>
<int:header-enricher>
<int:header name="#{T(....Constants).MONITORING_HEADER_REQUEST_CLAIM_CHECK_ID}" expression="payload"/>
</int:header-enricher>
<int:claim-check-out message-store="monitoringHeaderRequestMsgStore" remove-message="false"/>
</int:chain>
thank you
To be honest, it isn't recommended to use SimpleMessageStore in a production environment - exactly because of the memory leak you have noticed, if you don't clear the MessageStore periodically.
Right, there might be cases when you need to keep messages in the MessageStore for a long time, so consider replacing the SimpleMessageStore with a persistent MessageStore.
On the other hand, we would need more info on the matter to provide better help.
Maybe you just have several aggregators and don't use expire-groups-upon-completion="true"...
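A sketch of swapping in a persistent store, using the JDBC-backed JdbcMessageStore as one example; the bean name mirrors the XML above, the DataSource bean is assumed to exist, and the package location is the one used by recent Spring Integration versions:

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.jdbc.store.JdbcMessageStore;

@Configuration
public class MessageStoreConfig {

    // Persistent replacement for one of the SimpleMessageStore beans above;
    // messages are kept in the database rather than accumulating on the heap.
    @Bean
    public JdbcMessageStore monitoringHeaderRequestMsgStore(DataSource dataSource) {
        return new JdbcMessageStore(dataSource);
    }
}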