Hazelcast - Ensure entry event is processed by a single handler

I have a Hazelcast cluster with multiple nodes, each consisting of identical instances of a "Daemon" server process. These daemons are Java applications with embedded Hazelcast caches as well as logic that forms the core of my platform. I need to distribute certain events on the platform to listeners across the cluster which can reside in any (or all) of the connected nodes. From my reading of the documentation it seems to me that if I attach an EntryEventListener to the maps on daemon startup then whenever the event happens in that map my callback will be called in every running instance of the daemon.
What I would like is for the callback to be called once (on any single node) across the cluster for an event. So if I have 10 nodes in the cluster, and each node registers an EntryEventListener on a map when it joins I would like any single one of those listener instances (on any of the nodes) to be triggered when that event happens and not all of them... I don't care which node listener handles the event, as long as it is only a single instance of the listener and not every registered listener. How can I do this?
I saw this old question which sounds like the same question, but I'm not certain and the answer doesn't make sense to me.
hazelcast entry listener on a multinode cluster
In the Hazelcast documentation there is this:
There is also another attribute called local, which is not shown in the above examples. It is also a boolean attribute that is optional, and if you set it to true, you can listen to the items on the local member. Its default value is false.
Does that "local" attribute mean that the event would be triggered only on the node that is the primary owner of the key?
Thanks,
Troy

Yes, setting local to true will make the listener fire events only if the member is the primary owner of the key. You can achieve what you want using local listeners.
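A minimal sketch of that approach, with the same listener registered on every daemon (assuming Hazelcast 3.x package names; the map name "events" is hypothetical):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.listener.EntryAddedListener;

public class LocalListenerDaemon {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> map = hz.getMap("events");

        // Every daemon registers this listener, but the callback only fires on the
        // member that owns the affected key, so each event is handled exactly once.
        map.addLocalEntryListener((EntryAddedListener<String, String>) event ->
                System.out.println("Handled " + event.getKey() + " on " + hz.getName()));
    }
}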

Related

When clustering a Vert.x service, do clustered EventBus handlers propagate to new joining nodes?

This is something that I haven't been able to find in the official documentation nor anywhere else yet; the situation I propose is basically this:
I have a cluster of N Vert.x instances of the same service, same codebase.
At some point in time I register an EventBus consumer C with an address A cluster-wide. I subscribe a completion handler to get notified when the registration completes on all nodes of the cluster.
Everything is working fine, but now I add a new node to the cluster.
My question is actually two-fold:
Will the C consumer be propagated to the new-joiner? That is, if I do an eventBus().publish(A, ...) from the new-joiner, will the handler get executed?
Will the completion handler be called again (My guess is no, but just in case)?
When you add a new node to the cluster, the app will be started again on this node (if I understood the situation you described correctly).
So on the new node, you'll register an EventBus consumer for address A cluster-wide.
The new node will be aware of all registrations created previously on the cluster. The previous nodes will be aware of the new registration.
When you do eventBus().publish(A, ...) from the new-joiner, all nodes, including the new one, will invoke the consumer registered for this address.
On the new-joiner, the completion handler will be called when the registration has been persisted. There could be a (very small) delay before the new registration is visible from other nodes because the process is asynchronous.
The completion handler on previous nodes will not be invoked again (because the registration of the corresponding consumer already happened).
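A small sketch of the registration as it would run on the new joiner (assuming Vert.x 3.x with the Hazelcast cluster manager on the classpath; the address "A" is taken from the question):

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class NewJoiner {
    public static void main(String[] args) {
        Vertx.clusteredVertx(new VertxOptions(), res -> {
            if (res.failed()) {
                res.cause().printStackTrace();
                return;
            }
            Vertx vertx = res.result();
            // Each node, including a new joiner, registers its own consumer for "A".
            vertx.eventBus().<String>consumer("A", msg -> System.out.println("Got: " + msg.body()))
                 .completionHandler(ar ->
                         // Fires once on this node when its registration has propagated;
                         // completion handlers on the existing nodes are not invoked again.
                         System.out.println("Registration complete: " + ar.succeeded()));
        });
    }
}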

Hazelcast local entry listener on a Map

I've been looking into using a local entry listener instead of a normal entry listener so that an event is only processed by a single listener.
I've found various posts on this topic such as this, this, this, this and this. It seems that a local entry listener is indeed the way to go for handling an event only once in a multi node cluster.
However, I'm not sure how such a local entry listener would function under failure conditions. For instance, what happens to an evicted-event if the node which is the master for that entry is unavailable. Will the backup pick this up in time? Or could the event be missed due to hazelcast needing some time to figure out the master is down and a new master should be elected? Is this different between the older AP-system and the new CP-subsystem?
We've refrained from using a local entry listener. Instead we are now using the ExecutorService from Hazelcast to schedule a named task. In this way, we can correctly respond to changes in the cluster. It does seem like Hazelcast has a preferred member on which a task is executed, but that isn't an issue for us.
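A rough sketch of that scheduled, named-task approach (assuming Hazelcast 3.8+ where IScheduledExecutorService and TaskUtils.named are available; the "scheduler" and "cleanup" names are made up for the example):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.scheduledexecutor.IScheduledExecutorService;
import com.hazelcast.scheduledexecutor.TaskUtils;
import java.io.Serializable;
import java.util.concurrent.TimeUnit;

public class NamedTaskScheduler {
    // The task must be serializable so it can be sent to, and recovered on, other members.
    static class CleanupTask implements Runnable, Serializable {
        @Override
        public void run() {
            System.out.println("Running cleanup on one member of the cluster");
        }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IScheduledExecutorService scheduler = hz.getScheduledExecutorService("scheduler");
        // Naming the task gives it a cluster-wide identity, so scheduling the same
        // name again from another member is rejected rather than duplicated.
        scheduler.scheduleAtFixedRate(TaskUtils.named("cleanup", new CleanupTask()),
                0, 1, TimeUnit.MINUTES);
    }
}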
From Hazelcast docs:
Note that entries in distributed map are partitioned across the cluster members; each member owns and manages the some portion of the entries. Owned entries are called local entries. This listener will be listening for the events of local entries. Let's say your cluster has member1 and member2. On member2 you added a local listener and from member1, you call map.put(key2, value2). If the key2 is owned by member2 then the local listener will be notified for the add/update event. Also note that entries can migrate to other nodes for load balancing and/or membership change.
The key part is: "Also note that entries can migrate to other nodes for load balancing and/or membership change."
I guess that if an original partition owner fails, then some other node will become the new owner of those entries (or part of them, depending on the cluster state after repartitioning is done) and then it, the new owner, will run the local entry listener.

How can a Hazelcast client be notified of another added or removed client

There's an application stack consisting of:
2 embedded Hazelcast apps (app A);
2 apps using Hazelcast clients (app B).
App B needs to coordinate task execution among the nodes, so only one node executes a particular task.
With app A it's rather easy to implement by creating a gatekeeper as a library, which needs to be queried for a task execution permit. The gatekeeper would keep track of hazelcast members in the cluster, and assign permit to only a single node. It would register a MembershipListener in order to track changes in the cluster.
However, app B, being a Hazelcast client, can't make use of such a gatekeeper, as clients can't access the ClientService (via hazelcastInstance.getClientService()) and thus can't register a ClientListener (similar to MembershipListener, but for client nodes) to be notified of added or removed clients.
How could such coordination gatekeeper be implemented for applications that join the cluster as HazelcastClients?
You would probably have to use a listener on a member (take the oldest member in the cluster and update the listener when the "master" changes) and use an ITopic to inform other clients.
Can't think of another way right now.
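A rough sketch of that idea, on the member side (assuming Hazelcast 3.x; the topic name is hypothetical, and the "run this only on the oldest member" check is left out):

import com.hazelcast.core.Client;
import com.hazelcast.core.ClientListener;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ITopic;

public class ClientGatekeeper {
    public static void main(String[] args) {
        HazelcastInstance member = Hazelcast.newHazelcastInstance();
        ITopic<String> clientEvents = member.getTopic("client-events");

        // Members can observe client connections; clients themselves cannot.
        // The member relays them over an ITopic that the clients subscribe to.
        member.getClientService().addClientListener(new ClientListener() {
            @Override
            public void clientConnected(Client client) {
                clientEvents.publish("connected:" + client.getUuid());
            }

            @Override
            public void clientDisconnected(Client client) {
                clientEvents.publish("disconnected:" + client.getUuid());
            }
        });
    }
}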

Does a remove/evicted event get fired on Hazelcast #addLocalEntryListener if a member joins a cluster?

From the docs:
Note that entries in distributed map are partitioned across the cluster members; each member owns and manages the some portion of the entries. Owned entries are called local entries. This listener will be listening for the events of local entries. Let's say your cluster has member1 and member2. On member2 you added a local listener and from member1, you call map.put(key2, value2). If the key2 is owned by member2 then the local listener will be notified for the add/update event. Also note that entries can migrate to other nodes for load balancing and/or membership change.
Does the last sentence mean that if a member joins a cluster and a key is moved to a new node, an "EntryRemoved" event is fired on the local node (meaning the local node is no longer the owner node)? And can I rely on this behavior?
No, an EntryRemoved event is fired only when you explicitly remove the entry.

Hazelcast ITopic and listener crash

I have a multi-node cluster Hazelcast application that uses ITopic's. I'm trying to understand whether, in order for things to be "cleaned up" properly when a node crashes, my application should detect the node crash and remove that node's registration IDs - or whether Hazelcast automatically takes care of that.
By "node crash" I mean that an app that is part of a Hazelcast cluster terminates ungracefully, without calling ITopic.removeMessageListener or HazelcastInstance.shutdown. This could be due to the app crashing or being killed or the host crashing.
Here's the long story, in case it helps. I don't know the internals of Hazelcast and couldn't find anything relevant in the documentation. However, I can think of a couple of ways this "automatic" cleanup could work:
1. On each node, Hazelcast keeps a list of all subscribers, both local and remote. When it detects that another node is unavailable, Hazelcast automatically removes that other node's listeners from the list of ITopic subscribers.
2. On each node, Hazelcast only keeps a list of local subscribers. When a publisher calls ITopic.publish, Hazelcast sends the message to all nodes. Upon receiving the message, Hazelcast on each node calls onMessage on all local subscribers.
Here's a sample scenario. Let's suppose I have a Hazelcast cluster with 2 nodes, A and B. Both node A and node B register listeners to the same ITopic via ITopic.addMessageListener.
Let's suppose that node B crashes without calling ITopic.removeMessageListener or HazelcastInstance.shutdown
Eventually, Hazelcast on node A detects that node B is unavailable.
Now let's suppose that a publisher on node A calls ITopic.publish. Does Hazelcast on A still try to send the message to the subscriber on B? And let's suppose that after some time node B is restarted, and a publisher on A calls ITopic.publish. Does Hazelcast on A still try to send the message to the old subscriber on B?
Thank you in advance.
Hazelcast will remove listeners for dead nodes automatically on death-detection. If this doesn't happen (I guess there might be a reason for you to ask) this is a bug.
Hazelcast will also not try to send events to the dead node after it has been recognized as dead; that said, events sent in the absence of node B won't be redelivered when the node comes back. There is no correlation between the old, dead node B and the newly connected one.
Does that answer the question? :)
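For reference, a minimal sketch of the register/unregister pattern being discussed (assuming Hazelcast 3.x, where registration IDs are Strings; the topic name is hypothetical):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ITopic;

public class TopicListenerLifecycle {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ITopic<String> topic = hz.getTopic("notifications");

        // addMessageListener returns a registration ID for later removal.
        String registrationId = topic.addMessageListener(message ->
                System.out.println("Received: " + message.getMessageObject()));

        topic.publish("hello");

        // On a graceful shutdown the listener is removed explicitly; after an
        // ungraceful crash, Hazelcast drops the dead member's listeners itself.
        topic.removeMessageListener(registrationId);
        hz.shutdown();
    }
}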
