Hazelcast EntryProcessor executes on all nodes

I am using an embedded version of Hazelcast 5.2.1 and I noticed that my EntryProcessor executes on all cluster members.
In my entry processor, I have logic that fetches some data from a database (based on the data definition in the distributed map) and sends the result to a bus. The problem is that I receive the same notification twice, since both cluster members execute the task.
Why does the entry processor execute on both cluster members?
Is there a way to force execution onto only the key owner, or at least to ensure that the entry processor is executed only once?
My entry processor definition:
public class myEntryProcessor implements EntryProcessor<String, SerializableSession, Void>, Offloadable, IdentifiedDataSerializable
Thank you for any help
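For context: in Hazelcast 4.x and 5.x an EntryProcessor is also applied to backup replicas by default, because getBackupProcessor() returns this; with the default backup count of 1 and two members, both members therefore run the processor. A minimal sketch of restricting execution to the key owner by returning null, keeping the declaration from the question (the process body and the serialization IDs are illustrative):

import java.util.Map;
import com.hazelcast.core.Offloadable;
import com.hazelcast.map.EntryProcessor;
import com.hazelcast.nio.ObjectDataInput;
import com.hazelcast.nio.ObjectDataOutput;
import com.hazelcast.nio.serialization.IdentifiedDataSerializable;

public class myEntryProcessor
        implements EntryProcessor<String, SerializableSession, Void>,
                   Offloadable, IdentifiedDataSerializable {

    @Override
    public Void process(Map.Entry<String, SerializableSession> entry) {
        // fetch data from the database and publish to the bus;
        // the entry itself is only read, not modified
        return null;
    }

    @Override
    public EntryProcessor<String, SerializableSession, Void> getBackupProcessor() {
        // The default implementation returns "this", which makes the processor
        // run on backup replicas too. Returning null restricts execution to the
        // key owner; this is safe here because the entry is not modified, so
        // backups stay consistent.
        return null;
    }

    @Override
    public String getExecutorName() {
        return Offloadable.OFFLOADABLE_EXECUTOR;
    }

    @Override
    public int getFactoryId() { return 1; }   // illustrative IDs

    @Override
    public int getClassId() { return 1; }

    @Override
    public void writeData(ObjectDataOutput out) { }

    @Override
    public void readData(ObjectDataInput in) { }
}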

Related

Hazelcast local entry listener on a Map

I've been looking into using a local entry listener instead of a normal entry listener so that an event is only processed by a single listener.
I've found various posts on this topic, and it seems that a local entry listener is indeed the way to go for handling an event only once in a multi-node cluster.
However, I'm not sure how such a local entry listener would function under failure conditions. For instance, what happens to an evicted event if the node that is the master for that entry is unavailable? Will the backup pick this up in time? Or could the event be missed because Hazelcast needs some time to figure out that the master is down and a new master should be elected? Is this different between the older AP system and the new CP subsystem?
We've refrained from using a local entry listener. Instead, we are now using the executor service from Hazelcast to schedule a named task. In this way, we can correctly respond to changes in the cluster. It does seem like Hazelcast has a preferred member on which a task is executed, but that isn't an issue for us.
From Hazelcast docs:
Note that entries in the distributed map are partitioned across the cluster members; each member owns and manages some portion of the entries. Owned entries are called local entries. This listener will be listening for the events of local entries. Let's say your cluster has member1 and member2. On member2 you added a local listener, and from member1 you call map.put(key2, value2). If key2 is owned by member2, then the local listener will be notified of the add/update event. Also note that entries can migrate to other nodes for load balancing and/or membership change.
The key part is: "Also note that entries can migrate to other nodes for load balancing and/or membership change."
I guess that if an original partition owner fails, some other node will become the new owner of those entries (or part of them, depending on the cluster state after repartitioning is done), and then the new owner will run the local entry listener.
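For illustration, a local entry listener is registered with IMap.addLocalEntryListener; a minimal sketch, assuming an existing HazelcastInstance hz (the map name and listener body are illustrative):

IMap<String, Object> map = hz.getMap("sessions");
map.addLocalEntryListener((EntryAddedListener<String, Object>) event -> {
    // Fires only on the member that currently owns the key's partition,
    // so the event is handled once cluster-wide. After a failover, the
    // new owner's listener takes over, as discussed above.
    System.out.println("added: " + event.getKey());
});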

Hazelcast - Ensure entry event is processed by a single handler

I have a Hazelcast cluster with multiple nodes, each consisting of identical instances of a "Daemon" server process. These daemons are Java applications with embedded Hazelcast caches, as well as logic that forms the core of my platform. I need to distribute certain events on the platform to listeners across the cluster, which can reside in any (or all) of the connected nodes. From my reading of the documentation, it seems that if I attach an EntryEventListener to the maps on daemon startup, then whenever the event happens in that map, my callback will be called in every running instance of the daemon.
What I would like is for the callback to be called once (on any single node) across the cluster for an event. So if I have 10 nodes in the cluster, and each node registers an EntryEventListener on a map when it joins, I would like any single one of those listener instances (on any of the nodes) to be triggered when that event happens, and not all of them. I don't care which node's listener handles the event, as long as it is only a single instance of the listener and not every registered listener. How can I do this?
I saw this old question which sounds like the same question, but I'm not certain and the answer doesn't make sense to me.
hazelcast entry listener on a multinode cluster
In the Hazelcast documentation there is this:
There is also another attribute called local, which is not shown in the above examples. It is also a boolean attribute that is optional, and if you set it to true, you can listen to the items on the local member. Its default value is false.
Does that "local" attribute mean that the event would be triggered only on the node that is the primary owner of the key?
Thanks,
Troy
Yes, setting local to true will make the listener fire events only if the member is the primary owner of the key. You can achieve what you want using local listeners.
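For reference, the same local flag the documentation describes for declarative configuration can also be set programmatically; a small sketch (the map name and listener class name are illustrative):

Config config = new Config();
config.getMapConfig("events")
      .addEntryListenerConfig(new EntryListenerConfig(
              "com.example.MyEntryListener",
              true,    // local = true: notify only on the key owner
              true));  // include-value
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);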

How can a Hazelcast client be notified of another added or removed client

There's an application stack consisting of
2 embedded Hazelcast apps (app A);
2 apps using Hazelcast clients (app B).
App B needs to coordinate task execution among the nodes, so only one node executes a particular task.
With app A it's rather easy to implement by creating a gatekeeper as a library, which needs to be queried for a task-execution permit. The gatekeeper would keep track of Hazelcast members in the cluster and assign the permit to only a single node. It would register a MembershipListener in order to track changes in the cluster.
However, app B, being a Hazelcast client, can't make use of such a gatekeeper: clients can't access ClientService (via hazelcastInstance.getClientService()), so they are unable to register a ClientListener (similar to a MembershipListener, but for client nodes) to be notified of added or removed clients.
How could such a coordination gatekeeper be implemented for applications that join the cluster as HazelcastClients?
You would probably have to use a listener on a member (take the oldest member in the cluster and update the listener when the "master" changes) and use an ITopic to inform other clients.
Can't think of another way right now.
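A rough sketch of that idea, on both sides; the topic name and message payloads are illustrative:

// On a member (e.g. the current oldest member): watch client connections
// and broadcast changes over an ITopic that the clients subscribe to.
HazelcastInstance member = Hazelcast.newHazelcastInstance();
ITopic<String> changes = member.getTopic("client-changes");
member.getClientService().addClientListener(new ClientListener() {
    @Override
    public void clientConnected(Client client) {
        changes.publish("connected:" + client.getUuid());
    }
    @Override
    public void clientDisconnected(Client client) {
        changes.publish("disconnected:" + client.getUuid());
    }
});

// On each client: subscribe to the topic instead of a ClientListener.
HazelcastInstance client = HazelcastClient.newHazelcastClient();
client.getTopic("client-changes")
      .addMessageListener(message -> System.out.println(message.getMessageObject()));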

Hazelcast - OperationTimeoutException

I am using Hazelcast version 3.3.1.
I have a 9-node cluster running on AWS using c3.2xlarge servers.
I am using a distributed executor service and a distributed map.
The distributed executor service uses a single thread.
The distributed map is configured with no replication and no near cache, and it stores about 1 million objects of 1-2 KB each, using the Kryo serializer.
My use case goes as follows:
All 9 nodes constantly execute a synchronous remote operation on the distributed executor service and generate about 20k hits per second (~2k per node).
Invocations are executed using the Hazelcast API com.hazelcast.core.IExecutorService#executeOnKeyOwner.
Each operation accesses the distributed map on the node owning the partition, does some calculation using the stored object, and stores the object back into the map (for that I use the get and set APIs of the IMap object).
Every once in a while Hazelcast encounters a timeout exceptions such as:
com.hazelcast.core.OperationTimeoutException: No response for 120000 ms. Aborting invocation! BasicInvocationFuture{invocation=BasicInvocation{ serviceName='hz:impl:mapService', op=GetOperation{}, partitionId=212, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=60000, target=Address[172.31.44.2]:5701, backupsExpected=0, backupsCompleted=0}, response=null, done=false} No response has been received! backups-expected:0 backups-completed: 0
In some cases I see map partitions start to migrate, which makes things even worse; nodes constantly leave and re-join the cluster, and the only way I can overcome the problem is by restarting the entire cluster.
I am wondering what may cause Hazelcast to block a map-get operation for 120 seconds?
I am pretty sure it's not network related since other services on the same servers operate just fine.
Also note that the servers are mostly idle (~70%).
Any feedback on my use case will be highly appreciated.
Why don't you make use of an entry processor? It is also sent to the right machine owning the partition, and the load, modify, store is done automatically and atomically, so there are no race problems. It will probably outperform the current approach significantly since there is less remoting involved.
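A sketch of that suggestion, shown here with the current (5.x) entry-processor API (the 3.x interface differs slightly); "member", "key", and the Counter type with its computation are illustrative:

IMap<String, Counter> map = member.getMap("objects");
map.executeOnKey(key, (EntryProcessor<String, Counter, Void>) entry -> {
    Counter value = entry.getValue();   // Counter must be serializable
    value.increment();                  // the per-key calculation, run on the key owner
    entry.setValue(value);              // load-modify-store happens atomically, no race
    return null;
});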
The fact that the map.get is not returning for 120 seconds is indeed very confusing. If you switch to Hazelcast 3.5, we added some logging/debugging stuff for this using the slow operation detector (executing side) and the slow invocation detector (caller side), which should give you some insights into what is happening.
Do you see any Health monitor logs being printed?

Hazelcast: Multiple Hazelcast Nodes are created in response to a single newHazelcastInstance call

I have a small Hazelcast cluster which is under a medium-sized and constant load. When I scale the cluster by adding a new server, I get an interesting and unexplained result. As part of the creation of a new server, I call Hazelcast.newHazelcastInstance(hzConfig);. This call normally creates a single Hazelcast node in the cluster (as verified using their management console). In some of my test cases, this call creates many Hazelcast nodes in the cluster (testing has shown as many as 7 new nodes being created). Has anyone else seen this behavior? Is there a way to control this? Why is this happening? Can the number of nodes that will be spawned be predicted?
Does the logging of members by the members themselves show a different result? Because what you see there is the truth.
It could be that there is a bug in the management center.
So can you post the logging of the members?
It should look something like this:
Members [2] {
Member [192.168.1.104]:5701 this
Member [192.168.1.104]:5702
}
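If the management center disagrees with the members' own logs, each member's view can also be checked programmatically; a tiny sketch, assuming an existing HazelcastInstance hz:

// Each member prints the cluster as it sees it; compare across members.
for (Member m : hz.getCluster().getMembers()) {
    System.out.println(m.getAddress());
}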
