I'm working on an API service that uses a Cassandra database connection. Is there a way to connect to a Cassandra database from the Karate framework?
I'm familiar with the ways to connect Oracle and PostgreSQL databases with Karate, but I've been unable to find a method for Cassandra.
The simplest way is to build a Java app that connects to Cassandra using the Java driver. You can then call your Java code from Karate (see Calling Java in Karate for an example, and the sketch after the dependency snippet below).
Here's some minimal code to help you get started with the Java driver:

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.ResultSet;
import com.datastax.oss.driver.api.core.cql.Row;

public class HelloCassandra {
    public static void main(String[] args) {
        // With no explicit contact point, the driver connects to 127.0.0.1:9042.
        try (CqlSession session = CqlSession.builder()
                .withKeyspace("keyspace_name")
                .build()) {
            // Select the release_version from the system.local table:
            ResultSet rs = session.execute("SELECT release_version FROM system.local");
            Row row = rs.one();
            // Print the result of the CQL query to the console:
            if (row != null) {
                System.out.println(row.getString("release_version"));
            } else {
                System.out.println("An error occurred.");
            }
        }
        System.exit(0);
    }
}
You'll need to add this dependency to your pom.xml:
<dependency>
<groupId>com.datastax.oss</groupId>
<artifactId>java-driver-core</artifactId>
<version>4.13.0</version>
</dependency>
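For reference, here's a hedged sketch of how the query could be wrapped in a static helper that a Karate feature can call. The class name CassandraClient and the method getVersion are illustrative, not part of Karate or the driver:

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.Row;

// Hypothetical helper: wraps the CQL query in a static method for Karate.
public class CassandraClient {
    public static String getVersion() {
        // Connects to the default contact point (127.0.0.1:9042).
        try (CqlSession session = CqlSession.builder().build()) {
            Row row = session.execute("SELECT release_version FROM system.local").one();
            return row == null ? null : row.getString("release_version");
        }
    }
}

And in a Karate feature file:

* def CassandraClient = Java.type('CassandraClient')
* def version = CassandraClient.getVersion()
* print version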
You can get a full example pom.xml here. Cheers!
I have tried pushing custom metrics to Splunk APM using the dependency below, setting the properties in a Spring Boot application:
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-signalfx</artifactId>
</dependency>
Properties
management.metrics.export.signalfx.access-token= <token>
management.metrics.export.signalfx.enabled=true
management.metrics.export.signalfx.uri=<uri>
management.metrics.export.signalfx.source=testservice
Now there is a requirement to push the same metrics from a Spring MVC application. I have added the same dependency to the pom and created the custom metric code below, but I get an error when the Spring application is deployed:
MeterRegistry registry = new SignalFxMeterRegistry(new SignalFxConfig() {
    @Override
    public String get(String key) {
        // TODO Auto-generated method stub
        return null;
    }
}, Clock.SYSTEM);

Timer myTimer = Timer.builder("surya_timer").register(Metrics.globalRegistry);
Timer timer = Timer.builder("test").register(registry);
timer.record(Duration.ofMillis(123));
myTimer.record(Duration.ofMillis(567));
Error
io.micrometer.core.instrument.config.validate.ValidationException: signalfx.accessToken was 'null' but it is required
io.micrometer.core.instrument.config.validate.Validated$Either.orThrow(Validated.java:375)
io.micrometer.core.instrument.config.MeterRegistryConfig.requireValid(MeterRegistryConfig.java:49)
io.micrometer.core.instrument.push.PushMeterRegistry.<init>(PushMeterRegistry.java:42)
io.micrometer.core.instrument.step.StepMeterRegistry.<init>(StepMeterRegistry.java:43)
io.micrometer.signalfx.SignalFxMeterRegistry.<init>(SignalFxMeterRegistry.java:78)
io.micrometer.signalfx.SignalFxMeterRegistry.<init>(SignalFxMeterRegistry.java:74)
Please help me understand how to set this access token in the Spring application.
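The validation error occurs because the anonymous SignalFxConfig returns null for every key, including signalfx.accessToken, which the registry requires. Outside Spring Boot's auto-configuration, you must supply those values yourself. A hedged sketch of one way to do that, assuming SignalFxConfig resolves properties under its documented "signalfx" prefix (the token and URI values are placeholders):

SignalFxConfig config = new SignalFxConfig() {
    @Override
    public String get(String key) {
        // Keys arrive as "signalfx.<property>"; serve them from wherever
        // your MVC application keeps its configuration.
        if ("signalfx.accessToken".equals(key)) {
            return "<token>"; // placeholder
        }
        if ("signalfx.uri".equals(key)) {
            return "<uri>"; // placeholder
        }
        if ("signalfx.source".equals(key)) {
            return "testservice";
        }
        return null; // fall back to defaults for everything else
    }
};
MeterRegistry registry = new SignalFxMeterRegistry(config, Clock.SYSTEM);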
Our application currently uses cassandra-driver-core-3.1.0 and implements a token-aware load balancing policy. We are upgrading the driver to java-driver-core-4.13.0, where TokenAwarePolicy isn't available. The DataStax docs mention that token awareness is the default. Do we have to write some code for it, or if we use the default load balancing policy (DefaultLoadBalancingPolicy), is token awareness taken care of? I'm new to Cassandra. Can anyone please help?
import com.datastax.driver.core.policies.RoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.LoadBalancingPolicy;

public static LoadBalancingPolicy getLoadBalancingPolicy(String loadBalanceStr, boolean isTokenAware) {
    LoadBalancingPolicy loadBalance = null;
    if (isTokenAware) {
        loadBalance = new TokenAwarePolicy(loadBalanceDataConvert(loadBalanceStr));
    } else {
        loadBalance = loadBalanceDataConvert(loadBalanceStr);
    }
    return loadBalance;
}

private static LoadBalancingPolicy loadBalanceDataConvert(String loadBalanceStr) {
    if (CassandraConstants.CASSANDRACONNECTION_LOADBALANCEPOLICY_DC.equals(loadBalanceStr)) {
        return new DCAwareRoundRobinPolicy.Builder().build();
    } else if (CassandraConstants.CASSANDRACONNECTION_LOADBALANCEPOLICY_ROUND.equals(loadBalanceStr)) {
        return new RoundRobinPolicy();
    }
    return null;
}
https://docs.datastax.com/en/developer/java-driver/4.2/manual/core/load_balancing/
You don't need to implement anything yourself; just use the default load balancing policy. It is token-aware by default, as described in the documentation.
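For example, here's a minimal sketch with driver 4.x; the host and datacenter name are placeholders, and no policy wiring is needed because the built-in default is already token-aware:

import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.CqlSession;

// DefaultLoadBalancingPolicy is used when nothing else is configured,
// and it routes requests to replicas (token-aware) out of the box.
CqlSession session = CqlSession.builder()
    .addContactPoint(new InetSocketAddress("127.0.0.1", 9042))
    .withLocalDatacenter("datacenter1") // required when contact points are explicit
    .build();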
Our application implements retry policies such as FallthroughRetryPolicy and LoggingRetryPolicy using the driver cassandra-driver-core-3.1.0.jar. To support keyspace metadata for DSE 6.8, we are upgrading the DataStax driver to java-driver-core-4.13.0.
For the DefaultRetryPolicy and ConsistencyDowngradingRetryPolicy policies we can programmatically use the class name in DriverConfigLoader. But how can we implement FallthroughRetryPolicy and LoggingRetryPolicy, as these policies aren't available in java-driver-core-4.13.0? We are trying to use DriverConfigLoader as below:
DriverConfigLoader loader = DriverConfigLoader.programmaticBuilder()
    .withClass(DefaultDriverOption.RETRY_POLICY, DefaultRetryPolicy.class)
    .build();
LoggingRetryPolicy in our application is as below:
public static RetryPolicy getRetryPolicy(String retryPolicyStr, boolean isLogingPolicy) {
    RetryPolicy retryPolicy = null;
    if (isLogingPolicy) {
        retryPolicy = new LoggingRetryPolicy(retryPolicyDataConvert(retryPolicyStr));
    } else {
        retryPolicy = retryPolicyDataConvert(retryPolicyStr);
    }
    return retryPolicy;
}

private static RetryPolicy retryPolicyDataConvert(String retryPolicyStr) {
    if (CassandraConstants.CASSANDRACONNECTION_RETRYPOLICY_DEFAULT.equals(retryPolicyStr)) {
        return DefaultRetryPolicy.INSTANCE;
    } else if (CassandraConstants.CASSANDRACONNECTION_RETRYPOLICY_DOWNGRADING.equals(retryPolicyStr)) {
        return DowngradingConsistencyRetryPolicy.INSTANCE;
    } else if (CassandraConstants.CASSANDRACONNECTION_RETRYPOLICY_FALLTHROUGH.equals(retryPolicyStr)) {
        return FallthroughRetryPolicy.INSTANCE;
    }
    return null;
}

Cluster.Builder builder = Cluster.builder();
builder.withRetryPolicy(CassandraPolicyDataTypeConvertUtil.getRetryPolicy(connectionInfo.getRetryPolicy(), connectionInfo.isLoggingRetryPolicy()));
Can anyone please help on this?
The FallthroughRetryPolicy and LoggingRetryPolicy do not exist in version 4.x of the Cassandra Java driver, so you will not be able to use them.
Java driver v4 only comes with two built-in retry policies:
DefaultRetryPolicy
ConsistencyDowngradingRetryPolicy (do NOT use)
We do not recommend using ConsistencyDowngradingRetryPolicy. If you decide to use it, make sure you fully understand the consequences. In almost all cases, the correct retry policy to use is DefaultRetryPolicy, unless you have a genuine edge case and want to implement your own retry policy.
For more info, see Retries in Java driver v4. Cheers!
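That said, if you must preserve the old fallthrough behavior, you can implement the v4 RetryPolicy interface yourself. Here's a hedged sketch of a fallthrough-style policy: every callback rethrows, and the two-argument constructor is the signature the driver expects when it instantiates a policy declared in configuration:

import com.datastax.oss.driver.api.core.ConsistencyLevel;
import com.datastax.oss.driver.api.core.context.DriverContext;
import com.datastax.oss.driver.api.core.retry.RetryDecision;
import com.datastax.oss.driver.api.core.retry.RetryPolicy;
import com.datastax.oss.driver.api.core.servererrors.CoordinatorException;
import com.datastax.oss.driver.api.core.servererrors.WriteType;
import com.datastax.oss.driver.api.core.session.Request;

// Sketch: equivalent of the old FallthroughRetryPolicy for driver 4.x.
// Every callback rethrows, so errors surface directly to the application.
public class FallthroughRetryPolicy implements RetryPolicy {

    // The driver instantiates the policy reflectively with this constructor.
    public FallthroughRetryPolicy(DriverContext context, String profileName) {}

    @Override
    public RetryDecision onReadTimeout(Request request, ConsistencyLevel cl,
            int blockFor, int received, boolean dataPresent, int retryCount) {
        return RetryDecision.RETHROW;
    }

    @Override
    public RetryDecision onWriteTimeout(Request request, ConsistencyLevel cl,
            WriteType writeType, int blockFor, int received, int retryCount) {
        return RetryDecision.RETHROW;
    }

    @Override
    public RetryDecision onUnavailable(Request request, ConsistencyLevel cl,
            int required, int alive, int retryCount) {
        return RetryDecision.RETHROW;
    }

    @Override
    public RetryDecision onRequestAborted(Request request, Throwable error, int retryCount) {
        return RetryDecision.RETHROW;
    }

    @Override
    public RetryDecision onErrorResponse(Request request, CoordinatorException error, int retryCount) {
        return RetryDecision.RETHROW;
    }

    @Override
    public void close() {}
}

You could then register it with, for example, DriverConfigLoader.programmaticBuilder().withClass(DefaultDriverOption.RETRY_POLICY_CLASS, FallthroughRetryPolicy.class).build(). A logging variant could follow the same shape, logging each decision before returning it.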
I have installed 3 nodes of Elasticsearch from the Azure marketplace, and any node can act as the master node. Now how do I connect to the cluster? If I had one node, I could simply use its IP with the port (9200), but here I have 3 nodes, so how do I get the cluster endpoint? Thanks
This is how I did it, and it worked well for me:
import java.util.Collections;
import java.util.Vector;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class ElasticsearchConfig {
    private Vector<String> hosts;

    public void setHosts(String hostString) {
        if (hostString == null || hostString.trim().isEmpty()) {
            return;
        }
        String[] hostParts = hostString.split(",");
        this.hosts = new Vector<>();
        Collections.addAll(this.hosts, hostParts);
    }

    public Vector<String> getHosts() {
        return this.hosts;
    }
}

public class ElasticClient {
    private final ElasticsearchConfig config;
    private RestHighLevelClient client;

    public ElasticClient(ElasticsearchConfig config) {
        this.config = config;
    }

    public void start() throws Exception {
        // Build one HttpHost per "host:port" entry.
        HttpHost[] httpHosts = config.getHosts()
            .stream()
            .map(host -> new HttpHost(host.split(":")[0], Integer.valueOf(host.split(":")[1])))
            .toArray(HttpHost[]::new);
        client = new RestHighLevelClient(RestClient.builder(httpHosts));
        System.out.println("Started Elasticsearch client");
    }

    public void stop() throws Exception {
        if (client != null) {
            client.close();
        }
        client = null;
    }
}
Set the ElasticsearchConfig as below:
ElasticsearchConfig config = new ElasticsearchConfig();
config.setHosts("ip1:port,ip2:port,ip3:port");
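Then start the client against that config; a brief hedged usage sketch:

ElasticClient elastic = new ElasticClient(config);
elastic.start(); // connects to all three nodes; any of them can coordinate requests
// ... issue requests through the client ...
elastic.stop();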
If all three nodes are part of the same cluster, then there is no need to specify all of them; even a single node's IP is enough to connect to the cluster.
But that approach has a disadvantage. In a small cluster with a light workload it's fine, but the one node you configure in your Elasticsearch client will act as the coordinating node for every request and can become a hot spot in your cluster. It's better to configure all the nodes in your client, so that any one node can act as the coordinating node for each request. If you have a huge workload, you might also consider dedicated coordinating nodes for even better performance.
Hope this answers your question. I didn't provide a code snippet, as I don't know which language and client you are using, and from your question I felt code wasn't the issue; you want to understand the concept in detail.
I appreciate everyone's time and all those who replied. It turns out that by default, the Azure marketplace copy of self-managed ES sets up only an "Internal Load Balancer". I was able to get the cluster endpoint as soon as I configured an "External Load Balancer". All set now.
Using a small test Java program like the one below, I can log in to a single Cassandra 2.0.1 node and execute a query with no errors:
$ java -classpath .:cassandra-driver-core-1.0.4.jar DriverTester 127.0.0.1 myusername mypassword
Got test query result 1ba51260-5b6e-11e3-8f1c-7dd8c1a15c4b
Supplying a wrong password fails with a "Username and/or password are incorrect" error, as expected.
But when trying the same program against a node that belongs to a three-node cluster, using the right credentials, I get a somewhat confusing "Required key 'username' is missing" message:
$ java -classpath .:cassandra-driver-core-1.0.4.jar DriverTester cluster-node1 myusername mypassword
com.datastax.driver.core.exceptions.AuthenticationException: Authentication error on host /62.142.90.104: Required key 'username' is missing
at com.datastax.driver.core.Connection.initializeTransport(Connection.java:171)
at com.datastax.driver.core.Connection.<init>(Connection.java:132)
at com.datastax.driver.core.Connection.<init>(Connection.java:60)
at com.datastax.driver.core.Connection$Factory.open(Connection.java:419)
at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:205)
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:168)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:81)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:773)
at com.datastax.driver.core.Cluster$Manager.access$100(Cluster.java:706)
at com.datastax.driver.core.Cluster.<init>(Cluster.java:79)
at com.datastax.driver.core.Cluster.<init>(Cluster.java:66)
at com.datastax.driver.core.Cluster$Builder.build(Cluster.java:687)
at fi.elisa.viihde.stats.DriverTester.openDatastaxSession(DriverTester.java:40)
at fi.elisa.viihde.stats.DriverTester.doTest(DriverTester.java:28)
at fi.elisa.viihde.stats.DriverTester.main(DriverTester.java:15)
Why is this happening?
The credentials are correct, since supplying the same ones to cqlsh works without problems:
$ cqlsh cluster-node1 -u myusername -p mypassword
Connected to cluster-node1 at xx.xx.xx.xx:9160.
[cqlsh 4.0.1 | Cassandra 2.0.1 | CQL spec 3.1.1 | Thrift protocol 19.37.0]
Use HELP for help.
cqlsh>
Test program code:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class DriverTester {
    public static void main(String... args) {
        try {
            doTest(args);
        } catch (Throwable t) {
            t.printStackTrace();
            System.exit(1);
        }
        System.exit(0);
    }

    public static void doTest(String... args) {
        String[] hostnames = args[0].split(",");
        String username = args[1];
        String password = args[2];
        Cluster.Builder clusterBuilder = Cluster.builder()
            .addContactPoints(hostnames).withPort(9042)
            .withCredentials(username, password);
        Session session = clusterBuilder.build().connect();
        System.out.println("Got test query result " +
            session.execute("select now() from system.local").one().getUUID(0));
    }
}
Upgrading to DataStax driver 2.0.0-rc2 fixes this problem, as does upgrading Cassandra from 2.0.1 to 2.0.3 (or possibly 2.0.2). For the latter, see: https://issues.apache.org/jira/browse/CASSANDRA-6233