How to configure multiple Cassandra contact-points for Lagom?

In Lagom, it appears the contact points get loaded from the service locator, which accepts only a single URI. How can we specify multiple Cassandra contact points?
lagom.services {
  cas_native = "tcp://10.0.0.120:9042"
}
I have tried setting just the contact points in the Akka Persistence config, but that doesn't seem to override the service locator config.

All I was missing was the session provider to override the service-locator lookup:
session-provider = akka.persistence.cassandra.ConfigSessionProvider
contact-points = ["10.0.0.120", "10.0.3.114", "10.0.4.168"]
was needed in the Lagom Cassandra config.
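For reference, a fuller sketch of that override (assuming akka-persistence-cassandra's ConfigSessionProvider, reusing the contact points above, and assuming the read-side accepts the same keys as the journal):

cassandra-journal {
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
  contact-points = ["10.0.0.120", "10.0.3.114", "10.0.4.168"]
  port = 9042
}

cassandra-snapshot-store {
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
  contact-points = ["10.0.0.120", "10.0.3.114", "10.0.4.168"]
  port = 9042
}

lagom.persistence.read-side.cassandra {
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
  contact-points = ["10.0.0.120", "10.0.3.114", "10.0.4.168"]
  port = 9042
}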

Related

What load balancing policies are available in Cassandra Java driver 4.x?

We are upgrading the DataStax Cassandra Java driver from 3.2 to 4.x to support DSE 6.8.
The load-balancing policies our application currently supports are RoundRobinPolicy and DCAwareRoundRobinPolicy.
These policies aren't available in java-driver-core 4.12.
How can we support the above policies? Please help.
Current code in our application using cassandra-driver-core-3.1.0.jar:
public static LoadBalancingPolicy getLoadBalancingPolicy(String loadBalanceStr, boolean isTokenAware) {
    LoadBalancingPolicy loadBalance = null;
    if (isTokenAware) {
        loadBalance = new TokenAwarePolicy(loadBalanceDataConvert(loadBalanceStr));
    } else {
        loadBalance = loadBalanceDataConvert(loadBalanceStr);
    }
    return loadBalance;
}

private static LoadBalancingPolicy loadBalanceDataConvert(String loadBalanceStr) {
    if (CassandraConstants.CASSANDRACONNECTION_LOADBALANCEPOLICY_DC.equals(loadBalanceStr)) {
        return new DCAwareRoundRobinPolicy.Builder().build();
    } else if (CassandraConstants.CASSANDRACONNECTION_LOADBALANCEPOLICY_ROUND.equals(loadBalanceStr)) {
        return new RoundRobinPolicy();
    }
    return null;
}
The load balancing has been heavily simplified in version 4.x of the Cassandra Java driver. You no longer need to nest multiple policies within each other to achieve high availability.
In our opinion, the best policy is the DefaultLoadBalancingPolicy, which is enabled by default and combines the best attributes of the policies in older versions.
The DefaultLoadBalancingPolicy generates a query plan that is token-aware by default, so replicas which own the data appear first and are prioritised over other nodes in the local DC. For token-awareness to work, you must provide routing information, either by keyspace (with getRoutingKeyspace()) or by routing key (with getRoutingKey()).
If routing information is not provided, the DefaultLoadBalancingPolicy generates a query plan that is a simple round-robin shuffle of available nodes in the local DC.
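As an illustration, a minimal 4.x sketch of attaching routing information to a statement (the keyspace and query are hypothetical; with prepared statements the driver can usually compute the routing key itself once the partition key is bound):

import com.datastax.oss.driver.api.core.CqlIdentifier;
import com.datastax.oss.driver.api.core.ProtocolVersion;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;
import com.datastax.oss.driver.api.core.type.codec.TypeCodecs;

// Hypothetical keyspace/table; routing keyspace + routing key let the policy
// put the replicas that own this partition first in the query plan.
SimpleStatement stmt =
    SimpleStatement.newInstance("SELECT * FROM users WHERE id = 42")
        .setRoutingKeyspace(CqlIdentifier.fromCql("my_keyspace"))
        .setRoutingKey(TypeCodecs.INT.encodePrimitive(42, ProtocolVersion.DEFAULT));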
We understand that developers who are used to configuring DCAwareRoundRobinPolicy in older versions would like to continue using it but we do not recommend it. It is our opinion that failover should take place at the infrastructure layer, not the application layer.
Our opinion is that the DefaultLoadBalancingPolicy is the right choice in all cases. If you prefer to configure DC-failover, make sure you fully understand the implications and know that we think it is the wrong choice.
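As a concrete starting point, a minimal sketch of building a 4.x session that relies on the DefaultLoadBalancingPolicy (the address and datacenter name are placeholders):

import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.CqlSession;

// DefaultLoadBalancingPolicy is used unless overridden in driver configuration.
// When contact points are set programmatically, the local DC must be supplied.
CqlSession session = CqlSession.builder()
    .addContactPoint(new InetSocketAddress("10.0.0.1", 9042))
    .withLocalDatacenter("dc1")
    .build();

The same can also be done in the driver's application.conf via datastax-java-driver.basic.load-balancing-policy.local-datacenter.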
For details, see the following documents:
Java driver v4 Upgrade Guide
Load Balancing in Java driver v4

Hazelcast Eureka Cloud Discovery Plugin not working

We have implemented Hazelcast as an embedded cache in our Spring Boot app and need a way for Hazelcast members within a "cluster group" to discover each other dynamically, so that we don't have to provide the possible IP addresses/ports where Hazelcast might be running.
We came across this Hazelcast plugin on GitHub:
https://github.com/hazelcast/hazelcast-eureka which seems to provide exactly this feature, using Eureka as the discovery/registration tool.
As described in the GitHub documentation, the hazelcast-eureka-one library is included in our Boot app's classpath; we also disabled TCP-IP and multicast discovery and added the discovery strategy below in hazelcast.xml:
<discovery-strategies>
  <discovery-strategy class="com.hazelcast.eureka.one.EurekaOneDiscoveryStrategy" enabled="true">
    <properties>
      <property name="self-registration">true</property>
      <property name="namespace">hazelcast</property>
    </properties>
  </discovery-strategy>
</discovery-strategies>
Our application also provides a configured EurekaClient, which we autowire and inject into the plugin implementation:

Config hazelcastConfig = new FileSystemXmlConfig(hazelcastConfigFilePath);
EurekaOneDiscoveryStrategyFactory.setEurekaClient(eurekaClient);
hazelcastInstance = Hazelcast.newHazelcastInstance(hazelcastConfig);
Problem:
We are able to start two instances of our Spring Boot app on the same machine, and each app starts its embedded Hazelcast instance on a separate port (5701, 5702). But the instances don't seem to recognize each other as part of one cluster; this is what we see in the app logs when the second instance starts:
Members [1] {
Member [10.41.70.143]:5702 - 7c42eb24-3fa0-45cb-9394-17175cc92b9c this
}
17-12-13 12:22:44.480 WARN [main] c.h.i.Node.log(LoggingServiceImpl.java:168) - [10.41.70.143]:5702 [domain-services] [3.8.2] Config seed port is 5701 and cluster size is 1. Some of the ports seem occupied!
which seems to indicate that both Hazelcast instances are running independently and don't recognize each other as members of a cluster/group.
Also, immediately after a restart we see this exception thrown frequently on both nodes:
java.lang.ClassCastException: com.hazelcast.nio.tcp.MemberWriteHandler cannot be cast to com.hazelcast.nio.ascii.TextWriteHandler
    at com.hazelcast.nio.ascii.TextReadHandler.<init>(TextReadHandler.java:109) ~[hazelcast-3.8.2.jar:3.8.2]
    at com.hazelcast.nio.tcp.SocketReaderInitializerImpl.init(SocketReaderInitializerImpl.java:89) ~[hazelcast-3.8.2.jar:3.8.2]
which seems to indicate an incompatibility between Hazelcast libraries on the classpath?
It seems like your Eureka service returns the wrong ports. Hazelcast tries to connect to 8080 and other ports in the same range, whereas Hazelcast itself listens on 5701. I'm not exactly sure why this happens, but it looks like you are requesting the wrong service name from Eureka, which ends up returning the HTTP (Tomcat?) ports instead of the separate Hazelcast service that should be registered.
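One thing worth checking, as a sketch: some versions of the hazelcast-eureka-one plugin document a property for publishing Hazelcast's own host and port through Eureka metadata, so that peers don't resolve the application's HTTP port. Whether it is available depends on your plugin version (check the hazelcast-eureka README):

<discovery-strategy class="com.hazelcast.eureka.one.EurekaOneDiscoveryStrategy" enabled="true">
  <properties>
    <property name="self-registration">true</property>
    <property name="namespace">hazelcast</property>
    <!-- version-dependent: register Hazelcast's own host/port in Eureka metadata
         instead of reusing the application's registration -->
    <property name="use-metadata-for-host-and-port">true</property>
  </properties>
</discovery-strategy>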

Lagom external Cassandra authentication

I have been trying to set up an external Cassandra for my Lagom setup.
In the root pom I have written:
<configuration>
  <unmanagedServices>
    <cas_native>http://ip:9042</cas_native>
  </unmanagedServices>
  <cassandraEnabled>false</cassandraEnabled>
</configuration>
In my impl's application.conf:
akka {
  persistent {
    journal {
      akka.persistence.journal.plugin = "this-cassandra-journal"

      this-cassandra-journal {
        contact-points = ["10.15.2.179"]
        port = 9042
        cluster-id = "cas_native"
        keyspace = "hello"
        authentication.username = "cassandra"
        authentication.password = "rodney"
        # Parameter indicating whether the journal keyspace should be auto created
        keyspace-autocreate = true
        # Parameter indicating whether the journal tables should be auto created
        tables-autocreate = true
      }
    }

    snapshot-store {
      akka.persistence.snapshot-store.plugin = "this-cassandra-snapshot-store"

      this-cassandra-snapshot-store {
        contact-points = ["10.15.2.179"]
        port = 9042
        cluster-id = "cas_native"
        keyspace = "hello_snap"
        authentication.username = "cassandra"
        authentication.password = "rodney"
        # Parameter indicating whether the journal keyspace should be auto created
        keyspace-autocreate = true
        # Parameter indicating whether the journal tables should be auto created
        tables-autocreate = true
      }
    }
  }
}
But I get this error:
[warn] a.p.c.j.CassandraJournal - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: Authentication error on host /10.15.2.179:9042: Host /10.15.2.179:9042 requires authentication, but no authenticator found in Cluster configuration
[warn] a.p.c.s.CassandraSnapshotStore - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: Authentication error on host /10.15.2.179:9042: Host /10.15.2.179:9042 requires authentication, but no authenticator found in Cluster configuration
[warn] a.p.c.j.CassandraJournal - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: Authentication error on host /10.15.2.179:9042: Host /10.15.2.179:9042 requires authentication, but no authenticator found in Cluster configuration
[error] a.c.s.PersistentShardCoordinator - Persistence failure when replaying events for persistenceId [/sharding/ProductCoordinator]. Last known sequence number [0]
com.datastax.driver.core.exceptions.AuthenticationException: Authentication error on host /10.15.2.179:9042: Host /10.15.2.179:9042 requires authentication, but no authenticator found in Cluster configuration
    at com.datastax.driver.core.AuthProvider$1.newAuthenticator(AuthProvider.java:40)
    at com.datastax.driver.core.Connection$5.apply(Connection.java:250)
    at com.datastax.driver.core.Connection$5.apply(Connection.java:234)
    at com.google.common.util.concurrent.Futures$AsyncChainingFuture.doTransform(Futures.java:1442)
    at com.google.common.util.concurrent.Futures$AsyncChainingFuture.doTransform(Futures.java:1433)
    at com.google.common.util.concurrent.Futures$AbstractChainingFuture.run(Futures.java:1408)
    at com.google.common.util.concurrent.Futures$2$1.run(Futures.java:1177)
    at com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:310)
    at com.google.common.util.concurrent.Futures$2.execute(Futures.java:1174)
I also tried providing this config:
lagom.persistence.read-side {
  cassandra {
  }
}
How can I make this work by providing credentials for Cassandra?
In Lagom you can use the akka-persistence-cassandra settings directly for your journal and snapshot store (see reference.conf in the source code, and scroll down to cassandra-snapshot-store.authentication.*). There's no need to declare the plugins yourself, because Lagom's support for Cassandra persistence already declares akka-persistence-cassandra as the Akka Persistence implementation:
akka.persistence.journal.plugin = cassandra-journal
akka.persistence.snapshot-store.plugin = cassandra-snapshot-store
See https://github.com/lagom/lagom/blob/c63383c343b02bd0c267ff176bfb4e48c7202d7d/persistence-cassandra/core/src/main/resources/play/reference-overrides.conf#L5-L6
The third and last piece to configure when connecting Lagom to Cassandra is Lagom's read-side. That is also doable via application.conf if you override the defaults.
Note how each storage may use a different Cassandra Ring/Keyspace/credentials/... so you can tune them separately.
See extra info in the Lagom docs.
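Concretely, something along these lines in application.conf should be enough. This is a sketch based on akka-persistence-cassandra's reference.conf; the read-side keys are assumed to mirror the journal's, and the credentials are the ones from the question:

cassandra-journal {
  authentication.username = "cassandra"
  authentication.password = "rodney"
}

cassandra-snapshot-store {
  authentication.username = "cassandra"
  authentication.password = "rodney"
}

lagom.persistence.read-side.cassandra {
  authentication.username = "cassandra"
  authentication.password = "rodney"
}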

Does Hazelcast honor a default cache configuration

In the hazelcast documentation there are a few brief references to a cache named "default" - for instance, here:
http://docs.hazelcast.org/docs/3.6/manual/html-single/index.html#jcache-declarative-configuration
Later, there is another mention of cache default configuration here:
http://docs.hazelcast.org/docs/3.6/manual/html-single/index.html#icache-configuration
What I would like is to be able to configure "default" settings that are inherited when caches are created. For instance, given the following configuration snippet:
<cache name="default">
  <statistics-enabled>true</statistics-enabled>
  <management-enabled>true</management-enabled>
  <expiry-policy-factory>
    <timed-expiry-policy-factory expiry-policy-type="ACCESSED" time-unit="MINUTES" duration-amount="2"/>
  </expiry-policy-factory>
</cache>
I'd like for the following test to pass:
@Test
public void defaultCacheSettingsTest() throws Exception {
    CacheManager cacheManager = underTest.get();
    Cache cache = cacheManager.createCache("foo", new MutableConfiguration<>());
    CompleteConfiguration cacheConfig = (CompleteConfiguration) cache.getConfiguration(CompleteConfiguration.class);
    assertThat(cacheConfig.isManagementEnabled(), is(true));
    assertThat(cacheConfig.isStatisticsEnabled(), is(true));
    assertThat(cacheConfig.getExpiryPolicyFactory(),
        is(AccessedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 2L)))
    );
}
Ehcache has a "templating" mechanism and I am hoping that I can get a similar behavior.
Hazelcast supports configuration with wildcards. You can use <cache name="*"> for all Caches to share the same configuration, or apply other patterns to group Cache configurations as you wish.
Note that since you already use Hazelcast declarative configuration to configure your Caches, you should use CacheManager.getCache instead of createCache to obtain the Cache instance: Caches created with CacheManager.createCache(..., Configuration) disregard the declarative configuration since they are configured explicitly with the Configuration passed as argument.
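Putting both together, a sketch (the expiry settings are the ones from the question, and "foo" is the cache name from the test):

<cache name="*">
  <statistics-enabled>true</statistics-enabled>
  <management-enabled>true</management-enabled>
  <expiry-policy-factory>
    <timed-expiry-policy-factory expiry-policy-type="ACCESSED" time-unit="MINUTES" duration-amount="2"/>
  </expiry-policy-factory>
</cache>

and in the test, obtain the cache with cacheManager.getCache("foo") so the declarative configuration applies.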

Can Hazelcast connect as a client to an existing Hazelcast cluster, instead of joining as a member of the cluster, to implement Vert.x clustering?

We are currently using Vert.x with Hazelcast as its clustering implementation. For this to work as per the docs, Hazelcast is embedded inside our application, meaning it joins the cluster as a member. We would like our application to be independent of Hazelcast: whenever the Hazelcast cache becomes inconsistent, we have to bring down all our servers and restart them. Instead, we would like to run Hazelcast on its own servers and connect Vert.x as a client, so we can restart Hazelcast independently of our application servers. The Zookeeper cluster implementation does exactly what we want, but we don't want to maintain another cluster just for this purpose, because we also use Hazelcast for other caching internal to our application. Currently we are doing something like this to make Vert.x work:
Config hazelcastConfig = new Config();

// Group
GroupConfig groupConfig = new GroupConfig();
groupConfig.setName(hzGroupName);
groupConfig.setPassword(groupPassword);
hazelcastConfig.setGroupConfig(groupConfig);

// Properties
Properties properties = new Properties();
properties.setProperty("hazelcast.mancenter.enabled", "false");
properties.setProperty("hazelcast.memcache.enabled", "false");
properties.setProperty("hazelcast.rest.enabled", "false");
properties.setProperty("hazelcast.wait.seconds.before.join", "0");
properties.setProperty("hazelcast.logging.type", "jdk");
hazelcastConfig.setProperties(properties);

// Network
NetworkConfig networkConfig = new NetworkConfig();
networkConfig.setPort(networkPort);
networkConfig.setPortAutoIncrement(networkPortAutoincrement);

// Interfaces
InterfacesConfig interfacesConfig = new InterfacesConfig();
interfacesConfig.setEnabled(true);
interfacesConfig.setInterfaces(interfaces);
networkConfig.setInterfaces(interfacesConfig);

// Join
JoinConfig joinConfig = new JoinConfig();
MulticastConfig multicastConfig = new MulticastConfig();
multicastConfig.setEnabled(false);
joinConfig.setMulticastConfig(multicastConfig);

TcpIpConfig tcpIpConfig = new TcpIpConfig();
tcpIpConfig.setEnabled(true);
List<String> members = Arrays.asList(hzNetworkMembers.split(","));
tcpIpConfig.setMembers(members);
joinConfig.setTcpIpConfig(tcpIpConfig);
networkConfig.setJoin(joinConfig);

// Finish network
hazelcastConfig.setNetworkConfig(networkConfig);

clusterManager = new HazelcastClusterManager(hazelcastConfig);
VertxOptions options = new VertxOptions().setClusterManager(clusterManager);
options.setClusterHost(interfaces.get(0));
options.setMaxWorkerExecuteTime(VertxOptions.DEFAULT_MAX_WORKER_EXECUTE_TIME * workerVerticleMaxExecutionTime);
options.setBlockedThreadCheckInterval(1000 * 60 * 60);

Vertx.clusteredVertx(options, res -> {
    if (res.succeeded()) {
        vertx = res.result();
    } else {
        throw new RuntimeException("Unable to launch Vert.x");
    }
});
Alternate Solution:
We actually changed our distributed caching implementation from Hazelcast to Redis (Amazon ElastiCache).
We couldn't rely on Hazelcast, for three reasons:
1) its inconsistency during server restarts;
2) we were using embedded Hazelcast, so we ended up restarting our app whenever Hazelcast data became inconsistent, and we want our app to be independent of other services;
3) memory allocation (Hazelcast data) is now independent of the application server.
Vert.x 3.2.0 now supports handing it a preconfigured Hazelcast instance with which to build a cluster, so you have complete control over the Hazelcast configuration, including how and where you want data stored. But you also need a bug fix from the Vert.x 3.2.1 release to really use this.
See updated documentation at https://github.com/vert-x3/vertx-hazelcast/blob/master/src/main/asciidoc/index.adoc#using-an-existing-hazelcast-cluster
Note: when you create your own cluster, you need the extra Hazelcast settings required by Vert.x; those are noted in the documentation above.
The Vert.x 3.2.1 release fixes an issue that blocked the use of client connections. Be aware that if you use distributed locks with Hazelcast clients, the default timeout is 60 seconds for a lock to go away when the network connection stalls in a way that isn't obvious to the server nodes (any other JVM exit immediately clears its locks).
You can lower this amount using:
// This is checked every 10 seconds, so any value < 10 will be treated the same
System.setProperty("hazelcast.client.max.no.heartbeat.seconds", "9");
Also be aware that with Hazelcast clients you may want to use near caching for some maps, and look at other advanced configuration options for tuning client performance, since a client behaves differently from a full data node.
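For example, a minimal near-cache sketch on the client side (Hazelcast 3.x API; "my-map" is a placeholder map name):

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.config.NearCacheConfig;
import com.hazelcast.core.HazelcastInstance;

ClientConfig clientConfig = new ClientConfig();

// Keep a local, invalidation-aware copy of "my-map" entries on the client
NearCacheConfig nearCacheConfig = new NearCacheConfig("my-map");
nearCacheConfig.setInvalidateOnChange(true);
clientConfig.addNearCacheConfig(nearCacheConfig);

HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);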
Since version 3.2.1 you can run separate full Hazelcast nodes configured with the map settings required by Vert.x, and then create custom Hazelcast clients when starting Vert.x (taken from a new unit test case):
ClientConfig clientConfig = new ClientConfig().setGroupConfig(new GroupConfig("dev", "dev-pass"));
HazelcastInstance clientNode1 = HazelcastClient.newHazelcastClient(clientConfig);
HazelcastClusterManager mgr1 = new HazelcastClusterManager(clientNode1);
VertxOptions options1 = new VertxOptions().setClusterManager(mgr1).setClustered(true).setClusterHost("127.0.0.1");
Vertx.clusteredVertx(options1, ...)
Obviously your client configuration and needs will differ. Consult the Hazelcast documentation for Client configuration: http://docs.hazelcast.org/docs/3.5/manual/html-single/index.html
