Using
HazelcastInstance client = HazelcastClient.newHazelcastClient();
Ringbuffer<String> mybuffer = client.getRingbuffer("rb");
and connecting as a client to a cluster that uses the multicast joiner gives this error:
Exception in thread "main" java.lang.UnsupportedOperationException
at com.hazelcast.client.impl.HazelcastClientProxy.getRingbuffer(HazelcastClientProxy.java:73)
but using another hazelcast instance works fine:
HazelcastInstance client = Hazelcast.newHazelcastInstance();
Ringbuffer<String> mybuffer = client.getRingbuffer("rb");
Thing is, it may be preferable to connect as a client to the already existing multicast instance rather than starting another instance. Is this by design, or what am I doing wrong?
THANKS
The client-side implementation of Ringbuffer is missing in versions 3.5 and 3.5.1. It is available in 3.5.2 and later, though.
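In other words, once the client library (and the cluster) is on 3.5.2 or later, the original client-side code works as written; a minimal sketch:
HazelcastInstance client = HazelcastClient.newHazelcastClient();
Ringbuffer<String> mybuffer = client.getRingbuffer("rb");
long seq = mybuffer.add("hello");      // append an item; returns its sequence number
String item = mybuffer.readOne(seq);   // blocking read of that item (may throw InterruptedException)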
I've updated my Spring Boot to v3.0.0 and spring-data-cassandra to v4.0.0, which has left me unable to connect to a Cassandra cluster that is deployed in our staging environment, runs on an IPv6 address, and uses a datacenter other than DC1.
I've added a configuration class that sets the local DC programmatically:
@Bean(destroyMethod = "close")
public CqlSession session() {
    CqlSession session = CqlSession.builder()
            .addContactPoint(InetSocketAddress.createUnresolved("[240b:c0e0:1xx:xxx8:xxxx:x:x:x]", port))
            .withConfigLoader(
                    DriverConfigLoader.programmaticBuilder()
                            .withString(DefaultDriverOption.LOAD_BALANCING_LOCAL_DATACENTER, localDatacenter)
                            .withString(DefaultDriverOption.AUTH_PROVIDER_PASSWORD, password)
                            .withString(DefaultDriverOption.CONNECTION_INIT_QUERY_TIMEOUT, "10s")
                            .withString(DefaultDriverOption.CONNECTION_CONNECT_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.REQUEST_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.CONTROL_CONNECTION_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.SESSION_KEYSPACE, keyspace)
                            .build())
            //.addContactPoint(InetSocketAddress.createUnresolved(InetAddress.getByName(contactPoints).getHostName(), port))
            .build();
    return session;
}
and this is my application.yml file
spring:
  data:
    cassandra:
      keyspace-name: xxx
      contact-points: [xxxx:xxxx:xxxx:xxx:xxx:xxx]
      port: xxx
      local-datacenter: xxxx
      use-dc-aware: true
      username: xxxxx
      password: xxxxx
      ssl: true
      SchemaAction: CREATE_IF_NOT_EXISTS
So locally I was able to connect to Cassandra (by default it points to localhost), but in the staging environment my application is not able to connect to that cluster.
Logs from my staging environment:
caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node (endPoint=/[240b:c0e0:102:xxxx:xxxx:x:x:x]:3xxx, hostId=null, hashCode=4e9ba6a8): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [s0|control|id: 0x984419ed, L:/[240b:c0e0:102:5dd7:xxxx:x:x:xxx]:4xxx - R:/[240b:c0e0:102:xxxx:xxxx:x:x:x]:3xxx] Protocol initialization request, step 1 (OPTIONS): unexpected failure (com.datastax.oss.driver.api.core.connection.ClosedConnectionException: Lost connection to remote peer)]
Network
You appear to have a networking issue. The driver can't connect to any of the nodes because they are unreachable from a network perspective as it states in the error message:
... AllNodesFailedException: Could not reach any contact point ...
You need to check that:
you have configured the correct IP addresses,
you have configured the correct CQL port, and
there is network connectivity between your app and the cluster.
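For the last point, a quick TCP-level probe run from the application host can rule out basic reachability problems. This is only a minimal sketch; the host and port below are placeholders for your actual contact point and CQL port (9042 by default):
import java.net.InetSocketAddress;
import java.net.Socket;

public class ContactPointProbe {
    public static void main(String[] args) throws Exception {
        String host = "<your-contact-point>"; // placeholder: the node's IP address
        int port = 9042;                      // placeholder: the CQL native transport port
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 5_000);
            System.out.println("TCP connection to " + host + ":" + port + " succeeded");
        } // an exception here means the node is unreachable at the network level
    }
}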
Security
I also noted that you configured the driver to use SSL:
ssl: true
but I don't see where you've configured the certificate credentials, and this could explain why the driver can't initiate connections.
Check that the cluster has client-to-node encryption enabled. If it does then you need to prepare the client certificates and configure SSL on the driver.
Driver build
This post appears to be a duplicate of another question you posted, which has since been closed due to lack of clarity and details.
In that question it appears you are running a version of the Java driver not produced by DataStax, as pointed out by @absurdface:
Specifically I note that java-driver-core-4.11.4-yb-1-RC1.jar isn't a Java driver artifact released by DataStax (there isn't even a 4.11.4 Java driver release). This could be relevant for reasons we'll get into ...
We are not aware of where this build came from and without knowing much about it, it could be the reason you are not able to connect to the cluster.
We recommend that you switch to one of the supported builds of the Java driver. Cheers!
A hearty +1 to everything @erick-ramirez mentioned above. I would also expand on his answers with an observation or two.
Normally spring-data-cassandra is used to automatically configure a CqlSession and make it available for injection (or for use in CqlTemplate etc.). That's what you'd normally be configuring with your application.yml file. But you're apparently creating the CqlSession directly in code, which means that spring-data-cassandra isn't involved... and therefore what's in your application.yml likely isn't being used.
This analysis strongly suggests that your CqlSession is not being configured to use SSL. My understanding is that your testing sequence went as follows:
Tested app locally on a local server, everything worked
Tested app against test environment, observed the errors above
If this sequence is correct and you have SSL enabled in your test environment but not on your local Cassandra instance, that could very easily explain the behaviour you're describing.
This would also account for the specific error in the message you cite. "Lost connection to remote peer" indicates that something is unexpectedly killing your socket connection before any protocol messages are exchanged... and an SSL issue would cause almost exactly that behaviour.
I would recommend checking the SSL configuration for both servers involved in your testing. I would also suggest consulting the SSL-related documentation referenced by Erick above and confirm that you have all the relevant materials when building your CqlSession.
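One alternative worth considering, given the point above about spring-data-cassandra not being involved: let Spring Boot build the CqlSession from the Cassandra settings in application.yml and only customize the builder, for example to supply an SSLContext. This is a minimal sketch under that assumption; the CqlSessionBuilderCustomizer bean name and the injected SSLContext bean are illustrative, not taken from your project:
import javax.net.ssl.SSLContext;
import org.springframework.boot.autoconfigure.cassandra.CqlSessionBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CassandraSslCustomizer {

    // Spring Boot's Cassandra auto-configuration applies this customizer to the
    // CqlSessionBuilder it creates from the Cassandra properties in application.yml,
    // so contact points, credentials, local DC, etc. still come from there.
    @Bean
    public CqlSessionBuilderCustomizer sslCustomizer(SSLContext sslContext) {
        // the SSLContext bean is assumed to be built elsewhere (e.g. from root.crt, as shown later)
        return builder -> builder.withSslContext(sslContext);
    }
}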
I added the certificate in my Spring application:
public CqlSession session() throws IOException, CertificateException, NoSuchAlgorithmException, KeyStoreException, KeyManagementException {
    Resource resource = new ClassPathResource("root.crt");
    InputStream inputStream = resource.getInputStream();
    CertificateFactory cf = CertificateFactory.getInstance("X.509");
    Certificate cert = cf.generateCertificate(inputStream);
    TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
    KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
    keyStore.load(null);
    keyStore.setCertificateEntry("ca", cert);
    trustManagerFactory.init(keyStore);
    SSLContext sslContext = SSLContext.getInstance("TLSv1.3");
    sslContext.init(null, trustManagerFactory.getTrustManagers(), null);
    return CqlSession.builder()
            .withSslContext(sslContext)
            .addContactPoint(new InetSocketAddress(contactPoints, port))
            .withAuthCredentials(username, password)
            .withLocalDatacenter(localDatacenter)
            .withKeyspace(keyspace)
            .build();
}
So I added the cert file to the CqlSession builder configuration, and this helped me connect to the remote Cassandra cluster.
We had enabled the diagnostics feature on our Azure Batch account to stream events to an Event Hub, which we capture in our application to take action based on Batch task states. However, we are noticing that the connection gets closed automatically (probably because no events occur overnight), and hence we have to bounce the server every once in a while to start receiving events/messages again.
We still rely on Java 7, and here are the dependencies that we added for batch processing:
//azure dependency
compile('com.microsoft.azure:azure-storage:7.0.0')
compile('com.microsoft.azure:azure-batch:5.0.1') {
    //do not pull in the transitive dependency com.nimbusds:nimbus-jose-jwt because Spring Security still relies on an old version of it
    exclude group: 'com.nimbusds', module: 'nimbus-jose-jwt'
}
compile('com.fasterxml.jackson.core:jackson-core:2.9.8')
compile('org.apache.qpid:qpid-amqp-1-0-common:0.32')
compile('org.apache.qpid:qpid-amqp-1-0-client:0.32')
compile('org.apache.qpid:qpid-amqp-1-0-client-jms:0.32')
compile('org.apache.qpid:qpid-jms-client:0.40.0')
compile('org.apache.geronimo.specs:geronimo-jms_1.1_spec:1.1.1')
//end of azure dependency
And here is the code snippet that makes the connection. We actually used the code example given here: http://theitjourney.blogspot.com/2015/12/sendreceive-messages-using-amqp-in-java.html since we couldn't find any working example for Java 7 in the Azure docs themselves.
/**
* Set up connection to the service bus using AMQP mechanism.
* NOTE: Messages received from the message bus are not guaranteed to follow order.
* */
MessageConsumer initiateConsumer(MessageListener messageListener, Integer partitionInx, BatchEventHubConfig batchEventHubConfig) {
// set up JNDI context
String queueName = "EventHub"
String connectionFactoryName = "SBCFR"
Hashtable<String, String> hashtable = new Hashtable<>()
hashtable.put("connectionfactory.${connectionFactoryName}", batchEventHubConfig.getAMQPConnectionURI())
hashtable.put("queue.${queueName}", "${batchEventHubConfig.name}/ConsumerGroups/${batchEventHubConfig.consumerGroup}/Partitions/${partitionInx}")
hashtable.put(Context.INITIAL_CONTEXT_FACTORY, "org.apache.qpid.amqp_1_0.jms.jndi.PropertiesFileInitialContextFactory")
Context context = new InitialContext(hashtable)
ConnectionFactory factory = (ConnectionFactory) context.lookup(connectionFactoryName)
Destination queue = (Destination) context.lookup(queueName)
Connection connection = factory.createConnection(batchEventHubConfig.sasPolicyName, batchEventHubConfig.sasPolicyKey)
connection.setExceptionListener(new BatchExceptionListener())
connection.start()
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)
MessageConsumer messageConsumer = session.createConsumer(queue)
messageConsumer.setMessageListener(messageListener)
messageConsumer
}
So is there a way to track if a connection was closed, and if so re-start the connection again?
Any information to further diagnose this issue will be appreciated as well.
I think I found the issue: I had used "SBCFR" as the connectionFactoryName, but looking closely at the example in the link, I should have used "SBCF". I also updated the lib "org.apache.qpid:qpid-jms-client" from version "0.40.0" to "0.41.0".
Also, in the above code I shouldn't have used AUTO_ACKNOWLEDGE. For the longest time I thought something was wrong because I was never receiving the events in my local setup; it turned out other machines were also connected to the same consumer group and had already ack'ed the messages.
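For reference, the acknowledgement change amounts to something like the sketch below. It reuses the connection and queue objects from the snippet above and switches the session to CLIENT_ACKNOWLEDGE, so a message is only acknowledged after it has actually been processed:
Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
MessageConsumer messageConsumer = session.createConsumer(queue);
messageConsumer.setMessageListener(new MessageListener() {
    @Override
    public void onMessage(Message message) {
        try {
            // handle the batch event here ...
            message.acknowledge(); // acknowledge only after successful processing
        } catch (JMSException e) {
            // leave the message unacknowledged so it can be redelivered
        }
    }
});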
We are currently using Vert.x with Hazelcast as its clustering implementation. For it to work as per the docs, Hazelcast is embedded inside our application, meaning it joins as a member of the cluster. We would like our application to be independent of Hazelcast: whenever the Hazelcast cache becomes inconsistent, we currently bring down all our servers and restart them. Instead, we would like to keep Hazelcast on its own server and connect Vert.x as a client, so we can restart Hazelcast independently of our application server. The Zookeeper cluster implementation does exactly what we would like, but we don't want to maintain another cluster just for this purpose because we are also using Hazelcast for other caching internal to our application. Currently we are doing something like this to make Vert.x work:
Config hazelcastConfig = new Config();
//Group
GroupConfig groupConfig = new GroupConfig();
groupConfig.setName(hzGroupName);
groupConfig.setPassword(groupPassword);
hazelcastConfig.setGroupConfig(groupConfig);
//Properties
Properties properties = new Properties();
properties.setProperty("hazelcast.mancenter.enabled", "false");
properties.setProperty("hazelcast.memcache.enabled", "false");
properties.setProperty("hazelcast.rest.enabled", "false");
properties.setProperty("hazelcast.wait.seconds.before.join", "0");
properties.setProperty("hazelcast.logging.type", "jdk");
hazelcastConfig.setProperties(properties);
//Network
NetworkConfig networkConfig = new NetworkConfig();
networkConfig.setPort(networkPort);
networkConfig.setPortAutoIncrement(networkPortAutoincrement);
//Interfaces
InterfacesConfig interfacesConfig = new InterfacesConfig();
interfacesConfig.setEnabled(true);
interfacesConfig.setInterfaces(interfaces);
networkConfig.setInterfaces(interfacesConfig);
//Join
JoinConfig joinConfig = new JoinConfig();
MulticastConfig multicastConfig = new MulticastConfig();
multicastConfig.setEnabled(false);
joinConfig.setMulticastConfig(multicastConfig);
TcpIpConfig tcpIpConfig = new TcpIpConfig();
tcpIpConfig.setEnabled(true);
List<String> members = Arrays.asList(hzNetworkMembers.split(","));
tcpIpConfig.setMembers(members);
joinConfig.setTcpIpConfig(tcpIpConfig);
networkConfig.setJoin(joinConfig);
//Finish Network
hazelcastConfig.setNetworkConfig(networkConfig);
clusterManager = new HazelcastClusterManager(hazelcastConfig);
VertxOptions options = new VertxOptions().setClusterManager(clusterManager);
options.setClusterHost(interfaces.get(0));
options.setMaxWorkerExecuteTime(VertxOptions.DEFAULT_MAX_WORKER_EXECUTE_TIME * workerVerticleMaxExecutionTime);
options.setBlockedThreadCheckInterval(1000 * 60 * 60);
Vertx.clusteredVertx(options, res -> {
if (res.succeeded()) {
vertx = res.result();
} else {
throw new RuntimeException("Unable to launch Vert.x");
}
});
********* Alternate Solution **********
We actually changed our distributed caching implementation from Hazelcast to Redis (Amazon ElastiCache).
We couldn't rely on Hazelcast for three reasons:
1) its inconsistency during server restarts;
2) we were using embedded Hazelcast, so we ended up restarting our app whenever Hazelcast data became inconsistent, and we want our app to be independent of other services;
3) memory allocation (Hazelcast data) is now independent of the application server.
Vert.x 3.2.0 now supports handing it a preconfigured Hazelcast instance with which to build a cluster. Therefore you have complete control over the Hazelcast configuration, including how and where you want data stored. But you also need a bug fix from the Vert.x 3.2.1 release to really use this.
See updated documentation at https://github.com/vert-x3/vertx-hazelcast/blob/master/src/main/asciidoc/index.adoc#using-an-existing-hazelcast-cluster
Note: When you create your own cluster, you need to have the extra Hazelcast settings required by Vertx. And those are noted in the documentation above.
Vert.x 3.2.1 release fixes an issue that blocks the use of client connections. Be aware that if you do distributed locks with Hazelcast clients, the default timeout is 60 seconds for the lock to go away if the network connection is stalled in a way that isn't obvious to the server nodes (all other JVM exits should immediately clear a lock).
You can lower this amount using:
// This is checked every 10 seconds, so any value < 10 will be treated the same
System.setProperty("hazelcast.client.max.no.heartbeat.seconds", "9");
Also be aware that with Hazelcast clients you may want to use near caching for some maps and look at other advanced configuration options for performance tuning a client which will behave differently than a full data node.
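For example, a near cache for a specific map can be enabled in the client configuration. This is a minimal sketch against the Hazelcast 3.x client API; the map name and tuning values are illustrative:
ClientConfig clientConfig = new ClientConfig();

// Cache entries of the map "hot-map" locally on the client.
NearCacheConfig nearCacheConfig = new NearCacheConfig("hot-map");
nearCacheConfig.setTimeToLiveSeconds(60);     // drop locally cached entries after 60s
nearCacheConfig.setInvalidateOnChange(true);  // evict local copies when the cluster value changes
clientConfig.addNearCacheConfig(nearCacheConfig);

HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);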
Since version 3.2.1 you can run other full Hazelcast nodes configured correctly with the map settings required by Vertx. And then create custom Hazelcast clients when starting Vertx (taken from a new unit test case):
ClientConfig clientConfig = new ClientConfig().setGroupConfig(new GroupConfig("dev", "dev-pass"));
HazelcastInstance clientNode1 = HazelcastClient.newHazelcastClient(clientConfig);
HazelcastClusterManager mgr1 = new HazelcastClusterManager(clientNode1);
VertxOptions options1 = new VertxOptions().setClusterManager(mgr1).setClustered(true).setClusterHost("127.0.0.1");
Vertx.clusteredVertx(options1, ...)
Obviously your client configuration and needs will differ. Consult the Hazelcast documentation for Client configuration: http://docs.hazelcast.org/docs/3.5/manual/html-single/index.html
I cannot find this in the docs or javadocs: do I need to create one client per thread or is a client created by:
client = HazelcastClient.newHazelcastClient(cfg);
thread safe?
The client is thread-safe. Also, when you get e.g. an IMap from it, that is thread-safe as well.
HazelcastInstance client = HazelcastClient.newHazelcastClient(cfg);
IMap map = client.getMap("map");
So you can share this client instance with all your threads in the JVM.
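For instance, a single client and the proxies obtained from it can be shared across a thread pool. Just an illustrative sketch (ExecutorService and Executors come from java.util.concurrent):
final HazelcastInstance client = HazelcastClient.newHazelcastClient(cfg);
final IMap<String, Integer> map = client.getMap("map");

ExecutorService pool = Executors.newFixedThreadPool(4);
for (int i = 0; i < 4; i++) {
    final int id = i;
    pool.submit(new Runnable() {
        @Override
        public void run() {
            map.put("key-" + id, id); // safe: the client and its proxies are thread-safe
        }
    });
}
pool.shutdown();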
I am using a Hazelcast Java client (on node1) and creating Hazelcast maps on a different node (a different laptop, node2).
My setup:
on node2 - Hazelcast is running.
on node1 - a stand-alone Java program which acts as a Hazelcast Java client.
ClientConfig config = new ClientConfig();
config.getGroupConfig().setName("dev").setPassword("dev-pass");
config.addAddress("<node2-ip>:5701");
HazelcastInstance inst = HazelcastClient.newHazelcastClient(config);
//Creating a mapconfig
MapConfig mcfg = new MapConfig();
mcfg.setName("democache");
//creating a mapstore config
MapStoreConfig mapStoreCfg = new MapStoreConfig();
mapStoreCfg.setClassName("com.main.MyMapStore").setEnabled(true);
MyMapStore is my implementation of Hazelcast MapStore. This class resides on
mcfg.setMapStoreConfig(mapStoreCfg);
inst.getConfig().addMapConfig(mcfg);
I am getting an "UnsupportedOperationException" when I run this code: when I call inst.getConfig(), I get this exception. Can anyone please let me know what the workaround for this is?
Stacktrace is:
Exception in thread "main" java.lang.UnsupportedOperationException
at com.hazelcast.client.HazelcastClient.getConfig(HazelcastClient.java:144)
at ClientClass.main(ClientClass.java:34)
Hazelcast clients cannot access the cluster nodes' configuration; this operation is unsupported.
Also, you should not update/change the configuration after the cluster is up.
Clients do not store data, so they do not use a MapStore; you should configure the MapStore not on the client but on the Hazelcast server instances, like this:
MapConfig mapConfig = new MapConfig("democache");
mapConfig.setMapStoreConfig(new MapStoreConfig().setClassName("com.main.MyMapStore").setEnabled(true));
Config config = new Config();
config.addMapConfig(mapConfig);
HazelcastInstance node1 = Hazelcast.newHazelcastInstance(config);