How to find which Hazelcast instance is running on a node - hazelcast

I am running many instances of Hazelcast with different group names (i.e. different clusters) on different nodes. Now I want to write a program that runs on a given node and needs to know which HazelcastInstance is running on that node, and to access its config file. I don't want this program to create any new Hazelcast instance. How can this be done?

It depends.
You can always look up the HazelcastInstance(s) using Hazelcast.getHazelcastInstanceByName if you know the name, or get all of them using getAllHazelcastInstances.
In some cases you want to get the HazelcastInstance after deserialization (e.g. when you send a task to an instance using an IExecutorService). In that case you can implement the HazelcastInstanceAware interface to get the instance injected.
So it depends a bit on your setup.
You can load the config object using HazelcastInstance.getConfig. The instance doesn't know whether the config was built from an XML file or programmatically.
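A minimal sketch of that lookup approach, assuming the Hazelcast jar is on the classpath (the instance name "my-instance" is just an example):

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class InstanceLookup {
    public static void main(String[] args) {
        // Look up a specific instance by name; returns null if no such
        // instance exists in this JVM. "my-instance" is a placeholder.
        HazelcastInstance byName = Hazelcast.getHazelcastInstanceByName("my-instance");

        // Or enumerate every instance running in this JVM and inspect its config.
        for (HazelcastInstance hz : Hazelcast.getAllHazelcastInstances()) {
            Config config = hz.getConfig();
            System.out.println(hz.getName() + " -> group " + config.getGroupConfig().getName());
        }
    }
}
```

Note that both lookups only see instances created inside the same JVM; a separate process cannot enumerate another process's instances this way.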

Related

Sharing hazelcast cache between multiple application and using write behind and read through

Question - Can I share the same Hazelcast cluster (cache) between multiple applications while using the write-behind and read-through functionality via MapStore and MapLoader?
Details
I have an enterprise environment with multiple applications and want to use a single cache.
I have multiple applications (microservices), i.e. APP_A, APP_B and APP_C, independent of each other.
I am running one instance of each application, and each node will be a member node of the cluster.
APP_A has MAP_A, APP_B has MAP_B and APP_C has MAP_C. Each application has a MapStore for its respective map.
If a client sends the command instance.getMap("MAP_A").put("Key","Value"), the behavior is inconsistent: sometimes I see the data persisted in the database, sometimes not.
Note - I want to use the same Hazelcast instance across all applications, so that app A can access data from app B and vice versa.
I am assuming this depends on which node handles the request. If the request is handled by node A it works fine, but it fails if it is handled by node B or C, presumably because the MapStore_A implementation is not available on nodes B and C.
Am I doing something wrong? Is there something we can do to overcome this issue?
Thanks in advance.
Hazelcast is a clustered solution. If you have multiple nodes in the cluster, the data in each may get moved from place to place when data rebalancing occurs.
As a consequence of this, map store and map loader operations can occur from any node.
So all nodes in the cluster need the same ability to connect to the database.
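A sketch of the wiring implied above: every member that may own MAP_A's partitions needs the same MapStore configuration on its classpath. The class name and delay value here are illustrative:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.MapStoreConfig;

public class SharedMapStoreConfig {
    public static Config build() {
        Config config = new Config();

        MapStoreConfig storeConfig = new MapStoreConfig()
                .setEnabled(true)
                // com.example.MapStoreA is a placeholder; this class must be on
                // the classpath of EVERY member, because data rebalancing can
                // move MAP_A's partitions to any node.
                .setClassName("com.example.MapStoreA")
                // Write-behind: persist asynchronously after 5 seconds.
                .setWriteDelaySeconds(5);

        MapConfig mapConfig = new MapConfig("MAP_A")
                .setMapStoreConfig(storeConfig);
        config.addMapConfig(mapConfig);
        return config;
    }
}
```

Each member node would start with the same Config, so the MapStore behaves identically no matter which member handles the put.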

Hazelcast and the need for custom serializers; works when creating the server but not when connecting to existing

We are using Hazelcast to store stuff in distributed maps. We are having a problem with remote servers and I need some feedback on what we can do to resolve the issue.
We create the server - WORKS
We create a new server (Hazelcast.newHazelcastInstance) inside our application's JVM. The hazelcast Config object we pass in has a bunch of custom serializers defined for all the types we are going to put in the maps. Our objects are a mixture of Protobufs, plain java objects, and a combination of the two. The server starts, we can put objects in the map and get objects back out later. We recently decided to start running Hazelcast in its own dedicated server so we tried the scenario below.
Server already exists externally, we connect as a client - DOESN'T WORK
Rather than creating our Hazelcast instance we connect to a remote instance that is already running. We pass in a config with all the same serializers we used before. We successfully connect to Hazelcast and we can put stuff in the map (works as far as I can tell) but we don't get anything back out. No events get fired letting our listeners know objects were added to a map.
I want to be able to connect to a Hazelcast instance that is already running outside of our JVM. It is not working for our use case and I am not sure how it is supposed to work.
Does the JVM running Hazelcast externally need in its class loader all of the class types we might put into the map? It seems like that might be where the problem is but wouldn't that make it very limiting to use Hazelcast?
How do you typically manage those class loader issues?
Assuming the above is true, is there a way to tell Hazelcast we will serialize the objects before even putting them in the map? Basically we would give Hazelcast an ID and byte array and that is all we would expect back in return. If so that would avoid the entire class loader issue I think we are running into. We do not need to be able to search on objects based on their fields. We just need to know as objects come and go and what their ID is.
@Jonathan, when using a client-server architecture, unless you use queries or other operations that require data to be deserialized on the cluster, members don't need to know anything about your serialization. They just store the already serialized data and serve it. If the listeners you mentioned are on the client app, it should be working fine.
Hazelcast has a feature called User Code Deployment, https://docs.hazelcast.org/docs/3.11/manual/html-single/index.html#member-user-code-deployment-beta, but it's mainly for user classes. Serialization-related config should be present on the members, or you should add it later and do a rolling restart.
If you can share some of the exceptions/setup etc., I can give more specific answers as well.
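For example, registering the same custom serializers on the client side might look like the sketch below; MyType and MyProtobufSerializer are placeholders for your own classes, and the address is an example:

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.config.SerializerConfig;
import com.hazelcast.core.HazelcastInstance;

public class ClientWithSerializers {
    public static HazelcastInstance connect() {
        ClientConfig clientConfig = new ClientConfig();
        clientConfig.getNetworkConfig().addAddress("127.0.0.1:5701");

        // Register the same custom serializer the embedded server used.
        // MyType / MyProtobufSerializer stand in for your own classes.
        SerializerConfig serializerConfig = new SerializerConfig()
                .setTypeClass(MyType.class)
                .setImplementation(new MyProtobufSerializer());
        clientConfig.getSerializationConfig().addSerializerConfig(serializerConfig);

        return HazelcastClient.newHazelcastClient(clientConfig);
    }
}
```

With this in place the client serializes and deserializes everything itself; the members only ever see the byte arrays.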

Changes need to make in web application to use hazelcast

In my project, the web application runs on different servers. On each server, the application caches data during startup in Ehcache and in maps. I want to migrate to Hazelcast. What changes do I need to make to use Hazelcast?
Three steps will get you started:
You need Hazelcast on the classpath.
For example, include http://repo1.maven.org/maven2/com/hazelcast/hazelcast/3.7.4/hazelcast-3.7.4.jar in your webapp's lib folder.
Create a Hazelcast instance in that webapp
For example, HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
Obtain a map reference, to use as if it was a local map
For example, java.util.Map<?, ?> map = hazelcastInstance.getMap("name");
Step 2 is where you'll likely have the most difficulty. You want the Hazelcast instances to find each other. By default they do this using multicast, but if multicast is blocked in your environment you will need to be more explicit in the config and specify host addresses.
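If multicast is blocked, an explicit TCP/IP join config along these lines usually works (the member addresses are examples):

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class ExplicitJoin {
    public static HazelcastInstance start() {
        Config config = new Config();
        JoinConfig join = config.getNetworkConfig().getJoin();
        // Turn multicast off and list the known member addresses instead.
        join.getMulticastConfig().setEnabled(false);
        join.getTcpIpConfig().setEnabled(true)
            .addMember("192.168.1.104")
            .addMember("192.168.1.105");
        return Hazelcast.newHazelcastInstance(config);
    }
}
```

The same settings can be expressed in hazelcast.xml if you prefer declarative config.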
Send me a DM if you need help.

Ignite Persistent Store Loading mechanism

I need the cache store to be loaded at startup according to its configuration, without any additional code like:
CacheStore.load()
But in https://apacheignite.readme.io/docs/persistent-store , I could not come across any statement that it loads itself at startup automatically.
Am I missing something here, or is there really no way to do this at boot time without coding?
Thanks
I see the following way that should work in your case:
implement the org.apache.ignite.lifecycle.LifecycleBean interface and handle the org.apache.ignite.lifecycle.LifecycleEventType#AFTER_NODE_START event in the implementation;
when the event fires, call cache("cache_name").localLoadCache() in the bean's implementation. Entries for which the started node is either primary or backup will be stored on that node.
register your LifecycleBean implementation with IgniteConfiguration.setLifecycleBeans(lifeCycleBean), or the same way in Spring XML.
As a result, when a node is started with such a configuration, the pre-loading will start automatically because of the registered LifecycleBean.
Here you can find an example on how to work with LifecycleBean in Ignite.
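The steps above might be sketched like this, assuming the Ignite jars are on the classpath (the cache name "cache_name" is an example):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.lifecycle.LifecycleBean;
import org.apache.ignite.lifecycle.LifecycleEventType;
import org.apache.ignite.resources.IgniteInstanceResource;

public class CachePreloadBean implements LifecycleBean {
    // Ignite injects the local node's instance into this field.
    @IgniteInstanceResource
    private Ignite ignite;

    @Override
    public void onLifecycleEvent(LifecycleEventType evt) {
        if (evt == LifecycleEventType.AFTER_NODE_START) {
            // Load from the CacheStore the entries this node is
            // primary or backup for; null means "no filter".
            ignite.cache("cache_name").localLoadCache(null);
        }
    }

    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setLifecycleBeans(new CachePreloadBean());
        Ignition.start(cfg);
    }
}
```

In Spring XML the same bean would go into the lifecycleBeans property of the IgniteConfiguration bean.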

Hazelcast: Multiple Hazelcast Nodes are created in response to a single newHazelcastInstance call

I have a small Hazelcast cluster which is under a medium-sized and constant load. When I scale the cluster by adding a new server, I get an interesting and unexplained result. As part of the creation of a new server I call Hazelcast.newHazelcastInstance(hzConfig);. This call normally creates a single Hazelcast node in the cluster (as verified using their management console). In some of my test cases, this call creates many Hazelcast nodes in the cluster (testing has shown as many as 7 new nodes being created). Has anyone else seen this behavior? Is there a way to control this? Why is this happening? Can the number of nodes that will be spawned be predicted?
Does the member list logged by the members themselves show a different result? What you see there is the truth.
It could be that there is a bug in the Management Center.
So can you post the logging from the members?
It should look something like this:
Members [2] {
    Member [192.168.1.104]:5701 this
    Member [192.168.1.104]:5702
}
