I am using a Hazelcast Java client (on node1) and creating Hazelcast maps on a different node (a different laptop, node2).
My setup:
on node2 - Hazelcast is running.
on node1 - a stand-alone Java program which acts as a Hazelcast Java client.
ClientConfig config = new ClientConfig();
config.getGroupConfig().setName("dev").setPassword("dev-pass");
config.addAddress("<node2-ip>:5701");
HazelcastInstance inst = HazelcastClient.newHazelcastClient(config);
//Creating a mapconfig
MapConfig mcfg = new MapConfig();
mcfg.setName("democache");
//creating a mapstore config
MapStoreConfig mapStoreCfg = new MapStoreConfig();
mapStoreCfg.setClassName("com.main.MyMapStore").setEnabled(true);
MyMapStore is my implementation of the Hazelcast MapStore interface. This class resides on
mcfg.setMapStoreConfig(mapStoreCfg);
inst.getConfig().addMapConfig(mcfg);
I am getting an UnsupportedOperationException when I run this code; it is thrown by the inst.getConfig() call. Can anyone please let me know what the workaround for this is?
The stack trace is:
Exception in thread "main" java.lang.UnsupportedOperationException
at com.hazelcast.client.HazelcastClient.getConfig(HazelcastClient.java:144)
at ClientClass.main(ClientClass.java:34)
Hazelcast clients cannot access the cluster members' configuration; this operation is unsupported.
Also, you should not update or change the configuration after the cluster is up.
UnsupportedOperationException, when doing HazelcastInstance.getConfig() from hazelcast client
Clients do not store data, so they do not use a MapStore. You should therefore configure the MapStore not on the client but on the Hazelcast server instances, like this:
Config config = new Config();
MapConfig mapConfig = new MapConfig();
mapConfig.setName("democache");
mapConfig.setMapStoreConfig(new MapStoreConfig().setClassName("com.main.MyMapStore").setEnabled(true));
config.addMapConfig(mapConfig);
HazelcastInstance node1 = Hazelcast.newHazelcastInstance(config);
Related
I am using the Hazelcast native Java client to connect to a remote Hazelcast cluster.
Below is the code. I want to configure the TTL, max size, and eviction policy for the IMap tranMap from the Java client.
Can anyone suggest how to set these parameters from the Hazelcast client for each IMap?
I know how to configure them at the cluster level in hazelcast.xml, but for my application's use case I have to configure them programmatically via the ClientConfig object used when creating the HazelcastClient instance.
ClientConfig config = new ClientConfig();
String[] addresses = { "192.178.11.01:5701", "192.178.30.18:5702" };
config.getNetworkConfig().addAddress(addresses);
HazelcastInstance hazelcastInstance = HazelcastClient.newHazelcastClient(config);
IMap<Integer, Transaction> map = hazelcastInstance.getMap("tranMap");
If you don't want to reconfigure an existing map but just add a new configuration for a map you're going to use, it is doable:
HazelcastInstance client = HazelcastClient.newHazelcastClient();
Config config = client.getConfig();
config.addMapConfig(new MapConfig()
.setName("foo")
.setTimeToLiveSeconds(10));
Remember not to create the map before configuring it; the getMap call should come after you add the config.
This has already been requested (https://github.com/hazelcast/hazelcast/issues/14750) but is not yet implemented.
I have a Spring Boot application that uses Spring Data for Cassandra. One of the requirements is that the application must start even if the Cassandra cluster is unavailable. The application logs the situation, and its endpoints will not work properly, but the application does not shut down. It should keep retrying the connection during this time, and when the cluster becomes available the application should start operating normally.
If I am able to connect during application start and the cluster becomes unavailable after that, the Cassandra Java driver is capable of managing the retries.
How can I manage the retries during application start and still use Cassandra repositories from Spring Data?
Thanks
It is possible to start a Spring Boot application when Apache Cassandra is not available, but you need to define the Session and CassandraTemplate beans on your own with @Lazy. These beans are provided out of the box by CassandraAutoConfiguration, but they are initialized eagerly (the default behavior), which creates a Session. The Session requires a connection to Cassandra, which prevents startup if the beans are not initialized lazily.
The following code initializes the resources lazily:
@Configuration
public class MyCassandraConfiguration {

    @Bean
    @Lazy
    public CassandraTemplate cassandraTemplate(@Lazy Session session, CassandraConverter converter) throws Exception {
        return new CassandraTemplate(session, converter);
    }

    @Bean
    @Lazy
    public Session session(CassandraConverter converter, Cluster cluster,
            CassandraProperties cassandraProperties) throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster);
        session.setConverter(converter);
        session.setKeyspaceName(cassandraProperties.getKeyspaceName());
        session.setSchemaAction(SchemaAction.NONE);
        // Initialize the factory bean before obtaining the Session from it.
        session.afterPropertiesSet();
        return session.getObject();
    }
}
One of the requirements is that the application will start even if the Cassandra Cluster is unavailable
I think you should read this section of the Java driver docs: http://datastax.github.io/java-driver/manual/#cluster-initialization
The Cluster object does not connect automatically until certain calls are executed.
Since you're using Spring Data Cassandra (which I do not recommend, since it has fewer features than the plain mapper module of the Java driver ...), I don't know whether the Cluster or Session object is exposed directly to users ...
For the retry, you can put the cluster.init() call in a try/catch block; if the cluster is still unavailable you'll catch a NoHostAvailableException, according to the docs. Upon the exception, you can schedule a retry of cluster.init() for later.
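The retry-on-startup idea above can be sketched as a small generic helper. This is a minimal illustration with hypothetical names (StartupRetry, retry); in the real application the Callable body would wrap cluster.init() and the catch clause would target NoHostAvailableException specifically, ideally with a longer delay or backoff between attempts.

```java
import java.util.concurrent.Callable;

public class StartupRetry {

    // Runs the given initialization task until it succeeds or the attempt
    // budget is exhausted, sleeping between failed attempts. Rethrows the
    // last exception if every attempt fails.
    static <T> T retry(Callable<T> task, int maxAttempts, long delayMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) { // in the real case: NoHostAvailableException
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delayMillis);
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for cluster.init(): fails twice, then succeeds.
        int[] calls = {0};
        String result = retry(() -> {
            if (++calls[0] < 3) {
                throw new IllegalStateException("no host available yet");
            }
            return "connected";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

With a lazily initialized Session (as in the other answer), such a loop can run in a background task at startup so the application comes up regardless of cluster availability.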
We are currently using Vert.x with Hazelcast as its clustering implementation. For this to work as per the docs, Hazelcast is embedded inside our application, meaning it joins as a member of the cluster.

We would like our application to be independent of Hazelcast. The reason is that whenever the Hazelcast cache becomes inconsistent, we have to bring down all our servers and restart them. Instead, we would like to keep Hazelcast on its own servers and connect Vert.x as a client, so we can restart Hazelcast independently of our application servers. The ZooKeeper cluster implementation does exactly what we want, but we don't want to maintain another cluster just for this purpose, because we also use Hazelcast for other caching internal to our application.

Currently we are doing something like this to make Vert.x work:
Config hazelcastConfig = new Config();
//Group
GroupConfig groupConfig = new GroupConfig();
groupConfig.setName(hzGroupName);
groupConfig.setPassword(groupPassword);
hazelcastConfig.setGroupConfig(groupConfig);
//Properties
Properties properties = new Properties();
properties.setProperty("hazelcast.mancenter.enabled", "false");
properties.setProperty("hazelcast.memcache.enabled", "false");
properties.setProperty("hazelcast.rest.enabled", "false");
properties.setProperty("hazelcast.wait.seconds.before.join", "0");
properties.setProperty("hazelcast.logging.type", "jdk");
hazelcastConfig.setProperties(properties);
//Network
NetworkConfig networkConfig = new NetworkConfig();
networkConfig.setPort(networkPort);
networkConfig.setPortAutoIncrement(networkPortAutoincrement);
//Interfaces
InterfacesConfig interfacesConfig = new InterfacesConfig();
interfacesConfig.setEnabled(true);
interfacesConfig.setInterfaces(interfaces);
networkConfig.setInterfaces(interfacesConfig);
//Join
JoinConfig joinConfig = new JoinConfig();
MulticastConfig multicastConfig = new MulticastConfig();
multicastConfig.setEnabled(false);
joinConfig.setMulticastConfig(multicastConfig);
TcpIpConfig tcpIpConfig = new TcpIpConfig();
tcpIpConfig.setEnabled(true);
List<String> members = Arrays.asList(hzNetworkMembers.split(","));
tcpIpConfig.setMembers(members);
joinConfig.setTcpIpConfig(tcpIpConfig);
networkConfig.setJoin(joinConfig);
//Finish Network
hazelcastConfig.setNetworkConfig(networkConfig);
clusterManager = new HazelcastClusterManager(hazelcastConfig);
VertxOptions options = new VertxOptions().setClusterManager(clusterManager);
options.setClusterHost(interfaces.get(0));
options.setMaxWorkerExecuteTime(VertxOptions.DEFAULT_MAX_WORKER_EXECUTE_TIME * workerVerticleMaxExecutionTime);
options.setBlockedThreadCheckInterval(1000 * 60 * 60);
Vertx.clusteredVertx(options, res -> {
if (res.succeeded()) {
vertx = res.result();
} else {
throw new RuntimeException("Unable to launch Vert.x");
}
});
Alternate Solution
We actually changed our distributed caching implementation from Hazelcast to Redis (Amazon ElastiCache).
We couldn't rely on Hazelcast for three reasons:
1) its inconsistency during server restarts;
2) we were using embedded Hazelcast, so we ended up restarting our app whenever the Hazelcast data was inconsistent, and we want our app to be independent of other services;
3) memory allocation (for the Hazelcast data) is now independent of the application server.
Vert.x 3.2.0 now supports handing it a preconfigured Hazelcast instance with which to build a cluster, so you have complete control over the Hazelcast configuration, including how and where you want data stored. But you also need a bug fix from the Vert.x 3.2.1 release to really use this.
See updated documentation at https://github.com/vert-x3/vertx-hazelcast/blob/master/src/main/asciidoc/index.adoc#using-an-existing-hazelcast-cluster
Note: when you create your own cluster, you need the extra Hazelcast settings required by Vert.x; those are noted in the documentation above.
The Vert.x 3.2.1 release fixes an issue that blocks the use of client connections. Be aware that if you use distributed locks with Hazelcast clients, the default timeout for a lock to be released is 60 seconds when the network connection stalls in a way that isn't obvious to the server nodes (all other JVM exits immediately clear a lock).
You can lower this amount using:
// This is checked every 10 seconds, so any value < 10 will be treated the same
System.setProperty("hazelcast.client.max.no.heartbeat.seconds", "9");
Also be aware that with Hazelcast clients you may want to use near caching for some maps and look at other advanced configuration options for performance tuning a client which will behave differently than a full data node.
Since version 3.2.1 you can run separate full Hazelcast nodes, configured with the map settings required by Vert.x, and then create custom Hazelcast clients when starting Vert.x (taken from a new unit test case):
ClientConfig clientConfig = new ClientConfig().setGroupConfig(new GroupConfig("dev", "dev-pass"));
HazelcastInstance clientNode1 = HazelcastClient.newHazelcastClient(clientConfig);
HazelcastClusterManager mgr1 = new HazelcastClusterManager(clientNode1);
VertxOptions options1 = new VertxOptions().setClusterManager(mgr1).setClustered(true).setClusterHost("127.0.0.1");
Vertx.clusteredVertx(options1, ...)
Obviously your client configuration and needs will differ. Consult the Hazelcast documentation for Client configuration: http://docs.hazelcast.org/docs/3.5/manual/html-single/index.html
Using
HazelcastInstance client = HazelcastClient.newHazelcastClient();
Ringbuffer<String> mybuffer = client.getRingbuffer("rb");
and connecting as a client to a multicast joiner gives this error:
Exception in thread "main" java.lang.UnsupportedOperationException
at com.hazelcast.client.impl.HazelcastClientProxy.getRingbuffer(HazelcastClientProxy.java:73)
but using a full Hazelcast member instance works fine:
HazelcastInstance client = Hazelcast.newHazelcastInstance();
Ringbuffer<String> mybuffer = client.getRingbuffer("rb");
The thing is, it may be preferable to connect as a client to the already-existing multicast instance rather than starting another member instance. Is this by design, or am I doing something wrong?
Thanks
The client-side implementation of Ringbuffer is missing in versions 3.5 and 3.5.1; it is available in 3.5.2+, though.
I am using Cassandra 1.2.3, and I made the following change in the Cassandra config to enable user/password authentication:
authenticator: org.apache.cassandra.auth.PasswordAuthenticator
I am able to access the existing keyspace using cassandra-cli, but I am having issues querying through Hector.
(Adding the credentials at HFactory.getOrCreateCluster did help move things forward, but now I get the same error while running queries against the keyspace.)
Map<String, String> AccessMap = new HashMap<String, String>();
AccessMap.put("username", "cassandra");
AccessMap.put("password", "cassandra");
cluster = HFactory.getOrCreateCluster(clusterName,
new CassandraHostConfigurator(hostport), AccessMap);
ConfigurableConsistencyLevel ccl = new ConfigurableConsistencyLevel();
ccl.setDefaultReadConsistencyLevel(HConsistencyLevel.ONE);
keyspace = HFactory.createKeyspace(keyspaceName, cluster, new AllOneConsistencyLevelPolicy(), FailoverPolicy.ON_FAIL_TRY_ALL_AVAILABLE, AccessMap);
The following exception is thrown when running:
QueryResult<...> result = sliceQuery.execute();
Caused by: me.prettyprint.hector.api.exceptions.HInvalidRequestException: InvalidRequestException(why:You have not logged in)
Without the credentials being added, the SliceQuery was working fine.
I was using Hector 1.1.4; this is taken care of in Hector 1.1.5.