HikariCP poolName configuration for Slick 3.0

I am using the following Typesafe configuration in application.conf for Slick 3.0 (HikariCP is Slick 3.0's default connection pool), and I set the poolName to "primaryPool":
slick.dbs.primary = {
  driver = "com.typesafe.slick.driver.ms.SQLServerDriver$"
  db {
    url = "DB URL"
    driver = com.microsoft.sqlserver.jdbc.SQLServerDriver
    user = "myUser"
    password = "myPassword"
    poolName = "primaryPool"
  }
}
From the HikariCP log, I saw
Before cleanup pool stats db (total=21, inUse=0, avail=21, waiting=0)
The default connection pool name "db" is used rather than the "primaryPool" I expected, so I suspect the configuration format is not correct.
So my question is: how do I configure poolName in application.conf using Typesafe configuration?
Note: Because I will have several connection pools in my application, I want each pool's name to appear in the logs so I can tell the pools apart.

I found a workaround by setting poolName in my own code:
val dbConfig = dbConfigProvider.get[JdbcProfile]
val poolName = dbConfig.config.getConfig("db").getString("poolName")
// Reach into the underlying HikariCP data source and set the pool name at runtime
dbConfig.db.source.asInstanceOf[HikariCPJdbcDataSource].ds.setPoolName(poolName)
It is not a good solution since it hard-codes HikariCPJdbcDataSource, but it meets my requirement for now.
I still hope to learn how to configure poolName correctly in application.conf.

Related

Spark on Glue is not able to connect to AWS/ElasticSearch

I am running Spark inside Glue to write to AWS ElasticSearch with the following Spark configuration:
conf.set("es.nodes", s"$nodes/$indexName")
conf.set("es.port", "443")
conf.set("es.batch.write.retry.count", "200")
conf.set("es.batch.size.bytes", "512kb")
conf.set("es.batch.size.entries", "500")
conf.set("es.index.auto.create", "false")
conf.set("es.nodes.wan.only", "true")
conf.set("es.net.ssl", "true")
However, what I get is the following error:
diagnostics: User class threw exception: org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverClusterInfo(InitializationUtils.java:340)
at org.elasticsearch.spark.rdd.EsSpark$.doSaveToEs(EsSpark.scala:104)
....
I know which VPC my ElasticSearch instance is running in, but I am not sure how to set that for Glue/Spark, or whether it is a different problem. Any ideas?
I have also tried to add a Glue JDBC connection, which should use the proper VPC, but I am not sure how to set it up properly:
import scala.reflect.runtime.universe._

def saveToEs[T <: Product : TypeTag](index: String, data: RDD[T]) =
  SparkProvider.glueContext.getJDBCSink(
    catalogConnection = "my-elasticsearch-connection",
    options = JsonOptions(
      "WHAT HERE?"
    ),
    transformationContext = "SinkToElasticSearch"
  ).writeDynamicFrame(DynamicFrame(
    SparkProvider.sqlContext.createDataFrame[T](data),
    SparkProvider.glueContext))
Try creating a dummy JDBC connection. The dummy connection will tell Glue the ES VPC, subnet, and security group. A test connection might not work, but when you run your job with the connection attached, Glue will use the connection metadata to launch an elastic network interface in your VPC to facilitate this communication. More on connections can be found here:
[1] https://docs.aws.amazon.com/glue/latest/dg/start-connecting.html
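Once the dummy connection gives the job network access to the cluster, the write itself can stay on the elasticsearch-hadoop connector. A minimal sketch, assuming the connector is on the Glue job's classpath; esNodes, indexName and the "doc" type are placeholders, not values from the question:

import org.apache.spark.rdd.RDD
import org.elasticsearch.spark.rdd.EsSpark

// Sketch only: write an RDD of case classes to an index reachable through the VPC.
def writeToEs[T <: Product](data: RDD[T], esNodes: String, indexName: String): Unit =
  EsSpark.saveToEs(data, s"$indexName/doc", Map(
    "es.nodes"          -> esNodes,
    "es.port"           -> "443",
    "es.net.ssl"        -> "true",
    "es.nodes.wan.only" -> "true"
  ))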

Cassandra Connection with Groovy Script In SoapUI

Thanks for your time. I am trying to access a remote Cassandra DB in order to complete my assertions. I can see that the server is running:
Cassandra V 3.0.8.1293
Driver Type: Cassandra CQL
Datastax Java Driver for Apache Cassandra - Core [3.0.5]
So I am trying to access the DB with the following simple code:
import com.datastax.driver.core.*

Cluster cluster = null;
try {
    cluster = Cluster.builder()
        .addContactPoint("x.x.x.x")
        .withCredentials("xxxxxxx", "xxxxxx")
        .withPort(9042)
        .build()
    Session session = cluster.connect();
    ResultSet rs = session.execute("select * from TABLE");
    Row row = rs.one();
} finally {
    if (cluster != null) cluster.close();
}
When I use cassandra-driver-core-2.0.1.jar I get the error:
ERROR:com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /x.x.x.x(null))
I read the documentation and many posts here and on other blogs, and saw that there may be an incompatibility with the driver version, so I tried upgrading the driver to several versions (cassandra-driver-core-2.5, cassandra-driver-core-3, cassandra-driver-core-3.2), but with those I get the following:
ERROR:java.lang.ExceptionInInitializerError
I have also tried to connect using JDBC, to no avail, using the configuration proposed in this thread:
SoapUI JDBC connection with Apache Cassandra
I am running out of ideas. Can anyone propose a direction on how to achieve this, or point me to a tutorial?
Thank you very much
I think you haven't enabled remote access to Cassandra. Try enabling remote access using the configuration below.
File path: /etc/cassandra/default.conf/cassandra.yaml
rpc_address: 0.0.0.0
broadcast_rpc_address: <serverIPAddress>
After that, restart the Cassandra service.
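Once remote access is enabled and Cassandra has restarted, the same builder API from the question should reach the node. A minimal connectivity check, sketched with the question's placeholder host and credentials:

import com.datastax.driver.core.Cluster

// Sketch only: verify the node now accepts remote CQL connections on 9042.
val cluster = Cluster.builder()
  .addContactPoint("x.x.x.x")
  .withCredentials("xxxxxxx", "xxxxxx")
  .withPort(9042)
  .build()
try {
  val session = cluster.connect()
  println(session.execute("SELECT release_version FROM system.local").one())
} finally {
  cluster.close()
}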

Lagom external Cassandra authentication

I have been trying to set up an external Cassandra for my Lagom setup.
In the root pom I have written:
<configuration>
  <unmanagedServices>
    <cas_native>http://ip:9042</cas_native>
  </unmanagedServices>
  <cassandraEnabled>false</cassandraEnabled>
</configuration>
In my impl's application.conf:
akka {
  persistent {
    journal {
      akka.persistence.journal.plugin = "this-cassandra-journal"
      this-cassandra-journal {
        contact-points = ["10.15.2.179"]
        port = 9042
        cluster-id = "cas_native"
        keyspace = "hello"
        authentication.username = "cassandra"
        authentication.password = "rodney"
        # Parameter indicating whether the journal keyspace should be auto created
        keyspace-autocreate = true
        # Parameter indicating whether the journal tables should be auto created
        tables-autocreate = true
      }
    }
    snapshot-store {
      akka.persistence.snapshot-store.plugin = "this-cassandra-snapshot-store"
      this-cassandra-snapshot-store {
        contact-points = ["10.15.2.179"]
        port = 9042
        cluster-id = "cas_native"
        keyspace = "hello_snap"
        authentication.username = "cassandra"
        authentication.password = "rodney"
        # Parameter indicating whether the journal keyspace should be auto created
        keyspace-autocreate = true
        # Parameter indicating whether the journal tables should be auto created
        tables-autocreate = true
      }
    }
  }
}
But I get the error:
[warn] a.p.c.j.CassandraJournal - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: Authentication error on host /10.15.2.179:9042: Host /10.15.2.179:9042 requires authentication, but no authenticator found in Cluster configuration
[warn] a.p.c.s.CassandraSnapshotStore - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: Authentication error on host /10.15.2.179:9042: Host /10.15.2.179:9042 requires authentication, but no authenticator found in Cluster configuration
[warn] a.p.c.j.CassandraJournal - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: Authentication error on host /10.15.2.179:9042: Host /10.15.2.179:9042 requires authentication, but no authenticator found in Cluster configuration
[error] a.c.s.PersistentShardCoordinator - Persistence failure when replaying events for persistenceId [/sharding/ProductCoordinator]. Last known sequence number [0]
com.datastax.driver.core.exceptions.AuthenticationException: Authentication error on host /10.15.2.179:9042: Host /10.15.2.179:9042 requires authentication, but no authenticator found in Cluster configuration
    at com.datastax.driver.core.AuthProvider$1.newAuthenticator(AuthProvider.java:40)
    at com.datastax.driver.core.Connection$5.apply(Connection.java:250)
    at com.datastax.driver.core.Connection$5.apply(Connection.java:234)
    at com.google.common.util.concurrent.Futures$AsyncChainingFuture.doTransform(Futures.java:1442)
    at com.google.common.util.concurrent.Futures$AsyncChainingFuture.doTransform(Futures.java:1433)
    at com.google.common.util.concurrent.Futures$AbstractChainingFuture.run(Futures.java:1408)
    at com.google.common.util.concurrent.Futures$2$1.run(Futures.java:1177)
    at com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:310)
    at com.google.common.util.concurrent.Futures$2.execute(Futures.java:1174)
I also tried providing this config
lagom.persistence.read-side {
  cassandra {
  }
}
How can I make it work by providing credentials for Cassandra?
In Lagom, you may already use akka-persistence-cassandra settings for your journal and snapshot-store (see reference.conf in the source code, and scroll down for cassandra-snapshot-store.authentication.*). There's no need to configure it because Lagom's support for Cassandra persistence already declares akka-persistence-cassandra as the Akka Persistence implementation:
akka.persistence.journal.plugin = cassandra-journal
akka.persistence.snapshot-store.plugin = cassandra-snapshot-store
See https://github.com/lagom/lagom/blob/c63383c343b02bd0c267ff176bfb4e48c7202d7d/persistence-cassandra/core/src/main/resources/play/reference-overrides.conf#L5-L6
The third and last bit to configure when connecting Lagom to Cassandra is Lagom's read-side. That is also doable via application.conf if you override the defaults.
Note how each store may use a different Cassandra ring, keyspace, credentials, and so on, so you can tune them separately.
See extra info in the Lagom docs.
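For example, a minimal sketch of supplying the credentials in application.conf, assuming the default cassandra-journal and cassandra-snapshot-store plugin sections mentioned above (the credentials are the placeholders from the question):

cassandra-journal {
  authentication.username = "cassandra"
  authentication.password = "rodney"
}

cassandra-snapshot-store {
  authentication.username = "cassandra"
  authentication.password = "rodney"
}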

How to configure multiple cassandra contact-points for lagom?

In Lagom it appears the contact points get loaded from the service locator, which accepts only a single URI. How can we specify multiple Cassandra contact points?
lagom.services {
  cas_native = "tcp://10.0.0.120:9042"
}
I have tried setting just the contact points in the akka persistence config but that doesn't seem to override the service locator config.
All that I was missing was the session provider to override service lookup:
session-provider = akka.persistence.cassandra.ConfigSessionProvider
contact-points = ["10.0.0.120", "10.0.3.114", "10.0.4.168"]
was needed in the Lagom Cassandra config.
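For reference, a sketch of where those keys could sit in application.conf, assuming the standard cassandra-journal and cassandra-snapshot-store plugin sections (IP addresses taken from the answer, port from the question):

cassandra-journal {
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
  contact-points = ["10.0.0.120", "10.0.3.114", "10.0.4.168"]
  port = 9042
}

cassandra-snapshot-store {
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
  contact-points = ["10.0.0.120", "10.0.3.114", "10.0.4.168"]
  port = 9042
}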

Can Hazelcast connect as a client to existing Hazelcast cluster instead of joining as a member of the cluster to implement vertx clustering

We are currently using Vert.x with Hazelcast as its clustering implementation. For this to work as per the docs, Hazelcast is embedded inside our application, meaning it joins as a member of the cluster. We would like our application to be independent of Hazelcast: whenever the Hazelcast cache becomes inconsistent, we have to bring down all our servers and restart them. Instead, we would like to run Hazelcast on its own servers and connect Vert.x as a client, so we can restart Hazelcast independently of our application servers. The Zookeeper cluster implementation does exactly what we want, but we don't want to maintain another cluster just for this purpose, because we also use Hazelcast for other caching internal to our application. Currently we are doing something like this to make Vert.x work:
Config hazelcastConfig = new Config();
//Group
GroupConfig groupConfig = new GroupConfig();
groupConfig.setName(hzGroupName);
groupConfig.setPassword(groupPassword);
hazelcastConfig.setGroupConfig(groupConfig);
//Properties
Properties properties = new Properties();
properties.setProperty("hazelcast.mancenter.enabled", "false");
properties.setProperty("hazelcast.memcache.enabled", "false");
properties.setProperty("hazelcast.rest.enabled", "false");
properties.setProperty("hazelcast.wait.seconds.before.join", "0");
properties.setProperty("hazelcast.logging.type", "jdk");
hazelcastConfig.setProperties(properties);
//Network
NetworkConfig networkConfig = new NetworkConfig();
networkConfig.setPort(networkPort);
networkConfig.setPortAutoIncrement(networkPortAutoincrement);
//Interfaces
InterfacesConfig interfacesConfig = new InterfacesConfig();
interfacesConfig.setEnabled(true);
interfacesConfig.setInterfaces(interfaces);
networkConfig.setInterfaces(interfacesConfig);
//Join
JoinConfig joinConfig = new JoinConfig();
MulticastConfig multicastConfig = new MulticastConfig();
multicastConfig.setEnabled(false);
joinConfig.setMulticastConfig(multicastConfig);
TcpIpConfig tcpIpConfig = new TcpIpConfig();
tcpIpConfig.setEnabled(true);
List<String> members = Arrays.asList(hzNetworkMembers.split(","));
tcpIpConfig.setMembers(members);
joinConfig.setTcpIpConfig(tcpIpConfig);
networkConfig.setJoin(joinConfig);
//Finish Network
hazelcastConfig.setNetworkConfig(networkConfig);
clusterManager = new HazelcastClusterManager(hazelcastConfig);
VertxOptions options = new VertxOptions().setClusterManager(clusterManager);
options.setClusterHost(interfaces.get(0));
options.setMaxWorkerExecuteTime(VertxOptions.DEFAULT_MAX_WORKER_EXECUTE_TIME * workerVerticleMaxExecutionTime);
options.setBlockedThreadCheckInterval(1000 * 60 * 60);
Vertx.clusteredVertx(options, res -> {
    if (res.succeeded()) {
        vertx = res.result();
    } else {
        throw new RuntimeException("Unable to launch Vert.x");
    }
});
Alternate solution:
We actually changed our distributed caching implementation from Hazelcast to Redis (Amazon ElastiCache).
We couldn't rely on Hazelcast for three reasons:
1) its inconsistency during server restarts; 2) we were using embedded Hazelcast, so we ended up restarting our app whenever the Hazelcast data was inconsistent, and we want our app to be independent of other services; 3) memory allocation (the Hazelcast data) is now independent of the application server.
Vert.x 3.2.0 now supports handing it a preconfigured Hazelcast instance on which to build a cluster, so you have complete control over the Hazelcast configuration, including how and where you want data stored. But you also need a bug fix from the Vert.x 3.2.1 release to really use this.
See updated documentation at https://github.com/vert-x3/vertx-hazelcast/blob/master/src/main/asciidoc/index.adoc#using-an-existing-hazelcast-cluster
Note: when you create your own cluster, you need the extra Hazelcast settings required by Vert.x; those are noted in the documentation above.
The Vert.x 3.2.1 release fixes an issue that blocked the use of client connections. Be aware that if you use distributed locks with Hazelcast clients, the default timeout for a lock to be released is 60 seconds when the network connection stalls in a way that isn't obvious to the server nodes (all other JVM exits should immediately clear a lock).
You can lower this amount using:
// This is checked every 10 seconds, so any value < 10 will be treated the same
System.setProperty("hazelcast.client.max.no.heartbeat.seconds", "9");
Also be aware that with Hazelcast clients you may want to use near caching for some maps, and look at other advanced configuration options for tuning client performance, since a client behaves differently than a full data node (see the sketch at the end of this answer).
Since version 3.2.1 you can run separate full Hazelcast nodes configured with the map settings required by Vert.x, and then create custom Hazelcast clients when starting Vert.x (taken from a new unit test case):
ClientConfig clientConfig = new ClientConfig().setGroupConfig(new GroupConfig("dev", "dev-pass"));
HazelcastInstance clientNode1 = HazelcastClient.newHazelcastClient(clientConfig);
HazelcastClusterManager mgr1 = new HazelcastClusterManager(clientNode1);
VertxOptions options1 = new VertxOptions().setClusterManager(mgr1).setClustered(true).setClusterHost("127.0.0.1");
Vertx.clusteredVertx(options1, ...)
Obviously your client configuration and needs will differ. Consult the Hazelcast documentation for Client configuration: http://docs.hazelcast.org/docs/3.5/manual/html-single/index.html
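As a rough illustration of the near-caching note above, a near cache can be declared on the ClientConfig before the client is created. This is only a sketch; "some-frequently-read-map" is a placeholder name, not a map Vert.x requires, and the group settings mirror the test snippet above:

import com.hazelcast.client.HazelcastClient
import com.hazelcast.client.config.ClientConfig
import com.hazelcast.config.{GroupConfig, NearCacheConfig}

// Sketch only: add a client-side near cache for a map the client reads frequently.
val clientConfig = new ClientConfig().setGroupConfig(new GroupConfig("dev", "dev-pass"))
clientConfig.addNearCacheConfig(new NearCacheConfig("some-frequently-read-map"))
val clientNode = HazelcastClient.newHazelcastClient(clientConfig)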
