Connect with Cassandra Cluster using Thrift

I am using Cassandra 1.2.8 with the Thrift API. Currently I am working only with my local server, but I need to connect to a cluster on my production servers. I can easily connect to a cluster using the Hector API, but I have done most of my development using Thrift, so I don't want to switch to Hector.
With Thrift I connect to a single server with this code:
// Thrift listens on port 9160 by default; the framed transport is required.
TTransport tr = new TFramedTransport(new TSocket("10.11.11.111", 9160));
TProtocol proto = new TBinaryProtocol(tr, true, true);
Cassandra.Client client = new Cassandra.Client(proto);
tr.open();
Please guide me on how to connect to a cluster this way. Thanks.

When you connect to one machine in the cluster, you are automatically connected to the whole cluster. The cluster manages the distribution of data transparently to your application; you need just one server to talk to, and it can be any of them. It's really that simple.
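For example, here is a rough sketch using the same Thrift setup as above (the extra seed addresses are hypothetical) that simply tries each known node until one accepts the connection; any live node is an equally valid entry point:
// Hypothetical seed list; any live node can serve as the entry point.
String[] seeds = {"10.11.11.111", "10.11.11.112", "10.11.11.113"};
Cassandra.Client client = null;
for (String host : seeds) {
    try {
        TTransport tr = new TFramedTransport(new TSocket(host, 9160));
        TProtocol proto = new TBinaryProtocol(tr, true, true);
        tr.open(); // throws TTransportException if the node is unreachable
        client = new Cassandra.Client(proto);
        break;
    } catch (TTransportException e) {
        // This node is down; try the next seed.
    }
}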

Related

How do I enable cross-datacenter failover programmatically in the Cassandra Java driver v4?

In DataStax driver 3.x we have the DCAwareRoundRobinPolicy, which tries to connect to nodes in a remote datacenter if the nodes in the local datacenter fail. In DataStax driver 4.x we do not have that policy, and the default load balancing is confined to the local datacenter. But in the DataStax docs it is mentioned that:
Cross-datacenter failover is enabled with the following configuration option:
datastax-java-driver.advanced.load-balancing-policy.dc-failover {
  max-nodes-per-remote-dc = 2
}
The driver will then attempt to open connections to nodes in remote datacenters. But in the driver we specify only a single datacenter to connect to, as below:
CqlSession session = CqlSession.builder()
    .addContactPoint(new InetSocketAddress("1.2.3.4", 9042))
    .addContactPoint(new InetSocketAddress("5.6.7.8", 9042))
    .withLocalDatacenter("datacenter1")
    .build();
How is the connection to a remote datacenter handled? Please help.
The quick answer is you don't do it programmatically -- the Java driver does it for you after you've enabled advanced.load-balancing-policy.dc-failover.
The contact points are just the initial hosts the driver "contacts" to discover the cluster topology. After it has collected metadata about the cluster, it knows about nodes in remote DCs.
Since you've configured max-nodes-per-remote-dc = 2, the driver will add 2 nodes from each remote DC to the end of the query plan. The nodes in the local DC will be listed first then followed by the remote nodes. If the driver can't contact the nodes in the local DC, then it will start contacting the remote nodes in the query plan, one node at a time until it runs out of nodes to contact.
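For reference, a minimal sketch of what that part of the driver's application.conf could look like in 4.x (the datacenter name is an assumption):
datastax-java-driver {
  basic.load-balancing-policy.local-datacenter = "datacenter1"
  advanced.load-balancing-policy.dc-failover {
    max-nodes-per-remote-dc = 2
  }
}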
But I have to reiterate what I said in your other question: we do not recommend enabling DC-failover. For anyone else coming across this answer, you've been warned. Cheers!

Connecting to documentDB using mongodb 4.x node driver with port forwarding not working

I have locally set up port forwarding to DocumentDB that works successfully on mongodb driver versions 3.x. When I update the mongodb package to 4.x, I get a timeout error with the reason ReplicaSetNoPrimary.
The code is very simple:
const MongoClient = require('mongodb').MongoClient;
const client = new MongoClient('mongodb://xxxx:xxxx@localhost:27017');
client.connect(function(err) {
  if (err) {
    console.log(err);
    return;
  }
  const db = client.db('testdb');
  console.log("Connected successfully to server");
  client.close();
});
Has anyone been able to connect to DocumentDB locally using port forwarding with the 4.x driver? Am I missing some config options? (Keep in mind I have disabled all TLS and everything else to make it simpler to connect, and as previously stated, I connect successfully when using the mongodb 3.x packages.)
When connecting to a replica set, the driver:
1. uses the host in the connection string as a seed to make an initial connection
2. runs the isMaster or hello command on that initial connection to get the full list of host:port replica set members and their current status
3. drops the initial connection
4. connects to each of the members discovered in step 2
5. during operations, automatically monitors all of the members, sending operations to the current primary even if a different node becomes primary
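To make the failure mode concrete, a hypothetical hello/isMaster reply from the seed connection might look like this; after step 2 the driver tries to reach these internal addresses directly, which your forwarded localhost port cannot satisfy:
{
  "setName" : "rs0",
  "primary" : "docdb-node1.internal:27017",
  "hosts" : [ "docdb-node1.internal:27017", "docdb-node2.internal:27017", "docdb-node3.internal:27017" ],
  "ismaster" : true
}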
In your scenario, even though you are connecting to localhost, the initial connection returns the host:port pairs that are included in the replica set configuration.
The reason this only became a problem now is that the MongoDB driver specifications changed to use the unified topology by default.
Unified topology permits the driver to automatically detect if it is connecting to a standalone instance, replica set, or sharded cluster, which simplifies the connection process and reduces the administrative overhead required when changing how the database is deployed.
Since your connection is failing, I assume the hostname:port pairs listed in the replica set config are either not resolvable or not reachable from the test host.
To resolve this situation, either:
1. make it so this machine can resolve the hostnames via DNS or a hosts file, and permit connections to those ports through any firewalls, or
2. use the directConnection=true connection option to disable topology discovery
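With the second option, and assuming the same local port forward as in the question, the connection string would look something like:
mongodb://xxxx:xxxx@localhost:27017/?directConnection=true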

Cassandra create pool with hosts on different ports

I have 10 Cassandra nodes running on Kubernetes on my server and 1 contact point that exposes the service on port 10023.
However, when the DataStax driver tries to establish a connection with the other nodes of the cluster, it uses the exposed port instead of the default one, and I get the following error:
com.datastax.driver.core.ConnectionException: [/10.210.1.53:10023] Pool was closed during initialization
Is there a way to expose one single contact point and have it to communicate with the other nodes on the standard port (9042)?
I checked the DataStax documentation for anything related to this, but I didn't find much.
This is how I connect to the cluster:
Cluster.Builder builder = Cluster.builder()
    .addContactPoints(address)
    .withPort(10023)
    .withCredentials(user, password)
    .withMaxSchemaAgreementWaitSeconds(600)
    .withSocketOptions(new SocketOptions()
        .setConnectTimeoutMillis(Integer.valueOf(timeout))
        .setReadTimeoutMillis(Integer.valueOf(timeout)));
Cluster cluster = builder.withoutJMXReporting().build();
Session session = cluster.connect();
After the driver contacts the first node, it fetches information about the cluster and uses it; this information includes which ports Cassandra listens on.
To implement what you want, you need Cassandra itself to listen on the corresponding port - this is configured via the native_transport_port parameter in cassandra.yaml.
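For example, each node's cassandra.yaml would need something like the following, so that the port the driver discovers matches the exposed one (a sketch; whether 10023 is right for every node depends on your Kubernetes setup):
native_transport_port: 10023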
Also, by default the Cassandra driver will try to connect to all nodes in the cluster, because it uses the DCAware/TokenAware load balancing policy. If you want to use only one node, then you need to use WhiteListPolicy instead of the default policy, but this is not optimal from a performance point of view.
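A minimal sketch of that WhiteListPolicy approach with the 3.x driver, reusing the address and port from your snippet (treat this as an illustration, not a recommendation):
// Restrict the query plan to the single exposed contact point (driver 3.x API).
Cluster cluster = Cluster.builder()
    .addContactPoint(address)
    .withPort(10023)
    .withLoadBalancingPolicy(new WhiteListPolicy(
        DCAwareRoundRobinPolicy.builder().build(),
        Collections.singletonList(new InetSocketAddress(address, 10023))))
    .build();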
I would suggest re-thinking how you expose Cassandra to clients.

Apache Cassandra Server and Datastax Client - Changing IP Addresses

We are using the latest Apache Cassandra database server, and the Datastax Node.js client, running in the cloud.
When our Cassandra servers are rebuilt, they get new IP addresses. Then any running service clients can't find the new servers; the client driver apparently caches the IP addresses instead of re-resolving them via DNS.
Is there some way around this problem, other than shutting down the client and creating a new one in our services whenever we encounter an error accessing the database?
If you only have 1 server, there is nothing you can do.
Otherwise, when a node is rebuilt (if it is a single node in a cluster of many), it will advertise its new IP to the cluster, and the cluster topology is updated. So the system.peers table will be updated and the driver can register this event (AFAIK).
But why not use private static addresses for your Cassandra nodes?
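If restarting the client is the concern, the driver can also surface those topology events to the application. Here is a sketch using the Java driver 3.x Host.StateListener hook (the Node.js driver exposes similar host state events on its Client, so the same idea should carry over):
// Log host state changes as the driver observes them (Java driver 3.x).
cluster.register(new Host.StateListener() {
    public void onAdd(Host host)    { System.out.println("Host added: " + host); }
    public void onUp(Host host)     { System.out.println("Host up: " + host); }
    public void onDown(Host host)   { System.out.println("Host down: " + host); }
    public void onRemove(Host host) { System.out.println("Host removed: " + host); }
    public void onRegister(Cluster cluster) { }
    public void onUnregister(Cluster cluster) { }
});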

How do I connect to local cassandra db

I have a Cassandra DB running locally. I can see it working in OpsCenter. However, when I open DevCenter and try to connect, I get a cryptic "unable to connect" error.
How can I get the exact name / connection string that I need to use to connect to this local Cassandra DB via DevCenter?
The hostname/IP to connect to is specified in the listen_address property of your cassandra.yaml. If you are connecting to Cassandra from your localhost only (a sandbox machine), then you can set the listen_address in your cassandra.yaml accordingly:
listen_address: localhost
When you start Cassandra, you should see lines similar to this either in STDOUT or in your system.log (timestamps removed for brevity):
Starting listening for CQL clients on localhost/127.0.0.1:9042...
Binding thrift service to localhost/127.0.0.1:9160
Listening for thrift clients...
These lines indicate which address you should use to connect to your cluster. The first way to test your connection is with cqlsh. Note that cqlsh connects to "localhost" by default. If you are connecting to a host/IP other than localhost, then you will need to specify it on the command line.
$ cqlsh
Connected to Test Cluster at localhost:9042.
[cqlsh 5.0.1 | Cassandra 2.1.0-rc5-SNAPSHOT | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh>
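If your node is bound to an address other than localhost, pass the host (and optionally the port) as arguments, for example with the address used earlier in this thread:
$ cqlsh 10.11.11.111 9042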
If this works, then you should also be able to connect (and test) from DataStax DevCenter (also on your local machine) by defining a connection to localhost.
At this point, you should be able to connect via your application code (Java CQL3 driver shown):
Cluster cluster = Cluster.builder().addContactPoint("localhost").build();
Metadata metadata = cluster.getMetadata();
System.out.println("Connected to cluster: " + metadata.getClusterName());
Session session = cluster.connect();
