Changes needed in a web application to use Hazelcast

In my project, the web application runs on several servers. On each server, the application caches data during startup in Ehcache and in maps. I want to migrate to Hazelcast. What changes do I need to make to use Hazelcast?

Three steps will get you started:
1. Put Hazelcast on the classpath. For example, include http://repo1.maven.org/maven2/com/hazelcast/hazelcast/3.7.4/hazelcast-3.7.4.jar in your webapp's lib folder.
2. Create a Hazelcast instance in that webapp. For example: HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
3. Obtain a map reference and use it as if it were a local map. For example: java.util.Map<?, ?> map = hazelcastInstance.getMap("name");
Step 2 is where you'll likely have the most difficulty: you want the Hazelcast instances to find each other. By default they do this using multicast, but if multicast is blocked in your environment you will need to be more explicit in the config and specify host addresses, as in the sketch below.
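For example, a minimal sketch of a programmatic TCP/IP join, assuming Hazelcast 3.x package names and placeholder member addresses:

    import com.hazelcast.config.Config;
    import com.hazelcast.config.JoinConfig;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class HazelcastBootstrap {
        public static HazelcastInstance start() {
            Config config = new Config();
            JoinConfig join = config.getNetworkConfig().getJoin();
            join.getMulticastConfig().setEnabled(false);   // multicast is blocked in this environment
            join.getTcpIpConfig().setEnabled(true)
                .addMember("10.0.0.1")                     // placeholder host addresses
                .addMember("10.0.0.2");
            return Hazelcast.newHazelcastInstance(config);
        }
    }

Start each webapp with this config so the members discover each other over TCP/IP instead of multicast.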
Send me a DM if you need help.

Related

Sharing a Hazelcast cache between multiple applications while using write-behind and read-through

Question - Can I share the same Hazelcast cluster (cache) between multiple applications while using the write-behind and read-through functionality via MapStore and MapLoader?
Details
I have an enterprise environment with multiple applications and want to use a single cache.
I have multiple applications (microservices), i.e. APP_A, APP_B and APP_C, independent of each other.
I am running one instance of each application, and each instance is a member node of the cluster.
APP_A has MAP_A, APP_B has MAP_B and APP_C has MAP_C. Each application has a MapStore for its respective map.
If a client sends a command such as instance.getMap("MAP_A").put("Key", "Value"), the behavior is inconsistent: sometimes the data is persisted to the database and sometimes it is not.
Note - I want to use the same Hazelcast instance across all applications, so that APP_A can access data from APP_B and vice versa.
I assume this depends on which node handles the request. If the request is handled by node A it works fine, but it fails if it is handled by node B or C, presumably because the Mapstore_A implementation is not available on nodes B and C.
Am I doing something wrong? Is there something we can do to overcome this issue?
Thanks in advance.
Hazelcast is a clustered solution. If you have multiple nodes in the cluster, the data in each may get moved from place to place when data rebalancing occurs.
As a consequence, MapStore and MapLoader operations can occur from any node, so all nodes in the cluster need the same MapStore configuration and the same ability to connect to the database.
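For example, a sketch of the map configuration every member of the shared cluster would need for MAP_A; the class name com.example.MapStoreA is a hypothetical MapStore implementation backed by the shared database:

    import com.hazelcast.config.Config;
    import com.hazelcast.config.MapConfig;
    import com.hazelcast.config.MapStoreConfig;
    import com.hazelcast.core.Hazelcast;

    public class SharedClusterMember {
        public static void main(String[] args) {
            // Every member registers the same MapStore for MAP_A, so whichever
            // node owns a given key can persist and load it.
            MapStoreConfig mapStoreConfig = new MapStoreConfig()
                    .setEnabled(true)
                    .setClassName("com.example.MapStoreA") // hypothetical class; must be on every member's classpath
                    .setWriteDelaySeconds(5);              // > 0 = write-behind, 0 = write-through

            Config config = new Config();
            config.addMapConfig(new MapConfig("MAP_A").setMapStoreConfig(mapStoreConfig));
            Hazelcast.newHazelcastInstance(config);
        }
    }

The same applies to MAP_B and MAP_C: their MapStore classes and configuration would need to be present on all members, for example via a shared hazelcast.xml.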

Hazelcast - Ensure entry event is processed by a single handler

I have a Hazelcast cluster with multiple nodes, each consisting of identical instances of a "Daemon" server process. These daemons are Java applications with embedded Hazelcast caches as well as logic that forms the core of my platform. I need to distribute certain events on the platform to listeners across the cluster which can reside in any (or all) of the connected nodes. From my reading of the documentation it seems to me that if I attach an EntryEventListener to the maps on daemon startup then whenever the event happens in that map my callback will be called in every running instance of the daemon.
What I would like is for the callback to be called once (on any single node) across the cluster for an event. So if I have 10 nodes in the cluster and each node registers an EntryEventListener on a map when it joins, I would like any single one of those listener instances (on any of the nodes) to be triggered when that event happens, and not all of them. I don't care which node's listener handles the event, as long as it is only a single instance of the listener and not every registered listener. How can I do this?
I saw this old question which sounds like the same question, but I'm not certain and the answer doesn't make sense to me.
hazelcast entry listener on a multinode cluster
In the Hazelcast documentation there is this: "There is also another attribute called local, which is not shown in the above examples. It is also a boolean attribute that is optional, and if you set it to true, you can listen to the items on the local member. Its default value is false."
Does that "local" attribute mean that the event would be triggered only on the node that is the primary owner of the key?
Thanks,
Troy
Yes, setting local to true will make the listener fire events only if the member is the primary owner of the key. You can achieve what you want using local listeners.
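A sketch of such a per-member registration using IMap.addLocalEntryListener; the map name "events" and the handler body are illustrative:

    import com.hazelcast.core.EntryEvent;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;
    import com.hazelcast.map.listener.EntryAddedListener;

    public class DaemonListenerSetup {
        public static void register(HazelcastInstance hz) {
            IMap<String, String> map = hz.getMap("events");
            // A local listener fires only on the member that owns the key's
            // partition, so each event is handled by exactly one daemon.
            map.addLocalEntryListener(new EntryAddedListener<String, String>() {
                @Override
                public void entryAdded(EntryEvent<String, String> event) {
                    System.out.println("Handling " + event.getKey());
                }
            });
        }
    }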

Hazelcast and the need for custom serializers; works when creating the server but not when connecting to existing

We are using Hazelcast to store stuff in distributed maps. We are having a problem with remote servers and I need some feedback on what we can do to resolve the issue.
We create the server - WORKS
We create a new server (Hazelcast.newHazelcastInstance) inside our application's JVM. The hazelcast Config object we pass in has a bunch of custom serializers defined for all the types we are going to put in the maps. Our objects are a mixture of Protobufs, plain java objects, and a combination of the two. The server starts, we can put objects in the map and get objects back out later. We recently decided to start running Hazelcast in its own dedicated server so we tried the scenario below.
Server already exists externally, we connect as a client - DOESN'T WORK
Rather than creating our Hazelcast instance we connect to a remote instance that is already running. We pass in a config with all the same serializers we used before. We successfully connect to Hazelcast and we can put stuff in the map (works as far as I can tell) but we don't get anything back out. No events get fired letting our listeners know objects were added to a map.
I want to be able to connect to a Hazelcast instance that is already running outside of our JVM. It is not working for our use case and I am not sure how it is supposed to work.
Does the JVM running Hazelcast externally need in its class loader all of the class types we might put into the map? It seems like that might be where the problem is but wouldn't that make it very limiting to use Hazelcast?
How do you typically manage those class loader issues?
Assuming the above is true, is there a way to tell Hazelcast we will serialize the objects before even putting them in the map? Basically we would give Hazelcast an ID and byte array and that is all we would expect back in return. If so that would avoid the entire class loader issue I think we are running into. We do not need to be able to search on objects based on their fields. We just need to know as objects come and go and what their ID is.
@Jonathan, when using a client-server architecture, unless you use queries or other operations that require data to be deserialized on the cluster, members don't need to know anything about serialization. They just store already serialized data and serve it. If the listeners you mentioned are on the client app, it should work fine.
Hazelcast has a feature called User Code Deployment, https://docs.hazelcast.org/docs/3.11/manual/html-single/index.html#member-user-code-deployment-beta, but it's mainly for user classes. Serialization-related config should be present on the members, or you should add it later and do a rolling restart.
If you can share some of the exceptions/setup etc, I can give specific answers as well.
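For example, a sketch of registering a custom serializer on the client side only; Foo and FooSerializer are hypothetical stand-ins for your own types, and the address is a placeholder:

    import com.hazelcast.client.HazelcastClient;
    import com.hazelcast.client.config.ClientConfig;
    import com.hazelcast.config.SerializerConfig;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.nio.ObjectDataInput;
    import com.hazelcast.nio.ObjectDataOutput;
    import com.hazelcast.nio.serialization.StreamSerializer;

    import java.io.IOException;

    public class ClientBootstrap {

        // Hypothetical domain class, purely for illustration.
        public static class Foo {
            String name;
        }

        // Hypothetical custom serializer; the type id just needs to be unique
        // and identical in every process that registers it.
        public static class FooSerializer implements StreamSerializer<Foo> {
            public int getTypeId() { return 1000; }
            public void write(ObjectDataOutput out, Foo foo) throws IOException { out.writeUTF(foo.name); }
            public Foo read(ObjectDataInput in) throws IOException {
                Foo foo = new Foo();
                foo.name = in.readUTF();
                return foo;
            }
            public void destroy() { }
        }

        public static HazelcastInstance connect() {
            ClientConfig clientConfig = new ClientConfig();
            clientConfig.getNetworkConfig().addAddress("127.0.0.1:5701"); // placeholder member address
            clientConfig.getSerializationConfig().addSerializerConfig(
                    new SerializerConfig()
                            .setTypeClass(Foo.class)
                            .setImplementation(new FooSerializer()));
            return HazelcastClient.newHazelcastClient(clientConfig);
        }
    }

The cluster members never need to deserialize Foo unless you run queries, entry processors or similar server-side operations against it.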

How can a Hazelcast client be notified of another added or removed client

There's an application stack consisting of:
2 embedded Hazelcast apps (app A);
2 apps using Hazelcast clients (app B).
App B needs to coordinate task execution among the nodes, so only one node executes a particular task.
With app A it's rather easy to implement by creating a gatekeeper as a library, which needs to be queried for a task execution permit. The gatekeeper would keep track of hazelcast members in the cluster, and assign permit to only a single node. It would register a MembershipListener in order to track changes in the cluster.
However, app B, being a Hazelcast client, can't make use of such a gatekeeper, because clients can't access the ClientService (via hazelcastInstance.getClientService()) and are therefore unable to register a ClientListener (similar to a MembershipListener, but for client nodes) to be notified of added or removed clients.
How could such a coordination gatekeeper be implemented for applications that join the cluster as Hazelcast clients?
You would probably have to use a listener on a member (take the oldest member in the cluster and update the listener when the "master" changes) and use an ITopic to inform other clients.
Can't think of another way right now.
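A sketch of the member-side half of that idea; the topic name "client-membership" and the message format are assumptions:

    import com.hazelcast.core.Client;
    import com.hazelcast.core.ClientListener;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.ITopic;

    public class ClientMembershipRelay {
        // Install on one member (e.g. the oldest); it republishes client
        // connect/disconnect events on a topic that Hazelcast clients can read.
        public static void install(HazelcastInstance member) {
            final ITopic<String> topic = member.getTopic("client-membership");
            member.getClientService().addClientListener(new ClientListener() {
                @Override
                public void clientConnected(Client client) {
                    topic.publish("CONNECTED " + client.getUuid());
                }

                @Override
                public void clientDisconnected(Client client) {
                    topic.publish("DISCONNECTED " + client.getUuid());
                }
            });
        }
    }

The app B clients would then subscribe with client.getTopic("client-membership").addMessageListener(...) and drive the permit logic from those messages.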

How to get which hazelcast instance is running on the node

I am running many instances of Hazelcast with different group names (i.e. different clusters) on different nodes. Now I want to write a program which runs on a given node, finds out which HazelcastInstance is running on that node, and accesses its config. I don't want this program to create any new Hazelcast instance. How can this be done?
It depends.
You can always look up the HazelcastInstance(s) using Hazelcast.getHazelcastInstanceByName if you know the name, or get them all using Hazelcast.getAllHazelcastInstances.
In some cases you want to get the HazelcastInstance after deserialization (e.g. you send a task to an instance using an IExecutorService). In this case you can implement the HazelcastInstanceAware interface to get the instance injected, as in the sketch below.
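For example, a sketch of a task that gets the local instance injected when executed via an IExecutorService; the task itself is illustrative:

    import java.io.Serializable;
    import java.util.concurrent.Callable;

    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.HazelcastInstanceAware;

    public class InstanceNameTask
            implements Callable<String>, HazelcastInstanceAware, Serializable {

        private transient HazelcastInstance hazelcastInstance;

        @Override
        public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
            // Injected by Hazelcast on the member that executes the task.
            this.hazelcastInstance = hazelcastInstance;
        }

        @Override
        public String call() {
            return hazelcastInstance.getName();
        }
    }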
So it depends a bit on your setup.
You can load the config object using HazelcastInstance.getConfig(). The instance doesn't know whether the config was made from an XML file or built programmatically.
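For example, a sketch that enumerates the instances already running in the same JVM without creating a new one; note that these static lookups only see instances created inside that JVM:

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class InstanceInspector {
        public static void main(String[] args) {
            for (HazelcastInstance hz : Hazelcast.getAllHazelcastInstances()) {
                Config config = hz.getConfig();
                System.out.println(hz.getName() + " belongs to group "
                        + config.getGroupConfig().getName());
            }
        }
    }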
