My configuration is:
1 Spark machine on EC2: c3.2xlarge.
Communicating with 4 nodes of Cassandra on EC2.
I am getting the following error:
16/08/03 22:41:10 ERROR Session: Error creating pool to /XX.XX.XXX.XX:9042
com.datastax.driver.core.TransportException: [/XX.XX.XXX.XX:9042] Cannot connect
The XX are the public IP of the EC2 Cassandra node.
However, in my Spark configuration I'm telling Spark to use the seed node's internal IP; the Spark connector driver then receives the public IP back from Cassandra.
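For reference, this is roughly how I point the connector at the seed node's internal IP (a sketch in Java; the app name and the address 10.0.0.10 are placeholders, not my real values, and the master URL is supplied by spark-submit):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class CassandraConnectivityCheck {
    public static void main(String[] args) {
        // spark.cassandra.connection.host is the connector property that picks
        // the Cassandra contact point; here it is the seed node's internal IP.
        SparkConf conf = new SparkConf()
                .setAppName("cassandra-connectivity-check")
                .set("spark.cassandra.connection.host", "10.0.0.10");
        JavaSparkContext sc = new JavaSparkContext(conf);
        System.out.println("Spark version: " + sc.version());
        sc.stop();
    }
}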
Given how my IT team set the clusters up, I am assuming the following:
ERROR Session: Error creating pool to /127.0.0.1:9042
However, I don't want my clusters to connect via the public IP and have to open up the firewall. I would like them to stay on the cluster's internal IPs.
Is there a way to do this at the Spark code level or in the cassandra.yaml configuration?
I have 10 Cassandra nodes running on Kubernetes on my server and 1 contact point that exposes the service on port 10023.
However, when the DataStax driver tries to establish a connection with the other nodes of the cluster, it uses the exposed port instead of the default one and I get the following error:
com.datastax.driver.core.ConnectionException: [/10.210.1.53:10023] Pool was closed during initialization
Is there a way to expose one single contact point and have it communicate with the other nodes on the standard port (9042)?
I checked the DataStax documentation for anything related to this, but I didn't find much.
This is how I connect to the cluster:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SocketOptions;

// Configure contact point, credentials, and socket timeouts, then build the cluster once.
Cluster.Builder builder = Cluster.builder()
        .addContactPoints(address)
        .withPort(10023)
        .withCredentials(user, password)
        .withMaxSchemaAgreementWaitSeconds(600)
        .withSocketOptions(new SocketOptions()
                .setConnectTimeoutMillis(Integer.valueOf(timeout))
                .setReadTimeoutMillis(Integer.valueOf(timeout)));
Cluster cluster = builder.withoutJMXReporting().build();
Session session = cluster.connect();
After the driver contacts the first node, it fetches information about the cluster and uses it; this information includes the ports on which Cassandra listens.
To implement what you want, Cassandra itself would need to listen on the corresponding port; this is configured via the native_transport_port parameter in cassandra.yaml.
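For example, a sketch of the relevant setting, assuming you really did want each node to listen on the exposed port:

# cassandra.yaml on each node: native protocol port the driver connects to
native_transport_port: 10023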
Also, by default the Cassandra driver will try to connect to all nodes in the cluster because it uses the DCAware/TokenAware load balancing policy. If you want to use only one node, you need to use WhiteListPolicy instead of the default policy, but this is not optimal from a performance point of view.
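A minimal sketch of the WhiteListPolicy approach, assuming Java driver 3.x (the address is a placeholder for your exposed contact point; port 10023 is the exposed port from your code):

import java.net.InetSocketAddress;
import java.util.Collections;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.Policies;
import com.datastax.driver.core.policies.WhiteListPolicy;

public class SingleContactPoint {
    public static void main(String[] args) {
        String address = "contact-point.example.com"; // placeholder: your exposed contact point
        int port = 10023;                              // the exposed port

        // Wrap the default policy so the driver only talks to the whitelisted
        // endpoint instead of every node it discovers from the cluster metadata.
        Cluster cluster = Cluster.builder()
                .addContactPoints(address)
                .withPort(port)
                .withLoadBalancingPolicy(new WhiteListPolicy(
                        Policies.defaultLoadBalancingPolicy(),
                        Collections.singletonList(new InetSocketAddress(address, port))))
                .build();
        Session session = cluster.connect();
        System.out.println("Connected to " + session.getCluster().getClusterName());
        cluster.close();
    }
}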
I would suggest re-thinking how you expose Cassandra to clients.
We are using the latest Apache Cassandra database server, and the Datastax Node.js client, running in the cloud.
When our Cassandra servers are rebuilt, they get new IP addresses. Any running service clients then can't find the new servers; the client driver evidently caches the IP addresses instead of using DNS.
Is there some way around this problem, other than shutting down the client and creating a new one in our services whenever we encounter an error accessing the database?
If you only have 1 server, there is nothing you can do.
Otherwise, when a node is rebuilt (if it is a single node in a cluster of many) it will advertise its new IP to the cluster and the cluster topology is updated. The peers table will be updated, and the driver can register this event (AFAIK).
But why not use static private addresses for your Cassandra nodes?
My objective is to access my HBase cluster on Azure from Squirrel, with a Phoenix driver running on my local computer.
My HBase cluster on Azure is operational. I can see it in the Ambari dashboard and I can access it using SSH. I can start Phoenix with the sqlline.py command pointing to one of the zookeeper nodes. The !tables command returns four lines.
My HBase cluster is included in an Azure VNet. From my local computer (running Windows 10) I can connect to this VNet. I can ping the IP address (10.254.x.x) of the zookeeper node successfully, but pinging the FQDN of the zookeeper node results in an error message:
"Ping request could not find host zk1-.......ax.internal.cloudapp.net.
Please check the name and try again."
When I start Squirrel on my local computer with the URL pointing to the FQDN of the zookeeper node I get an error message:
"Unexpected Error occurred attempting to open an SQL connection". The
stack trace points to a java.util.concurrent.RuntimeException: "Unable
to establish connection"
When I start Squirrel on my local computer with the URL pointing to the IP address of the zookeeper node I get a different error:
"Unexpected Error occurred attempting to open an SQL connection". The
stack trace points to a java.util.concurrent.TimeoutException.
I suspect this has something to do with the Domain Name resolution problem as described here [https://superuser.com/questions/966832/windows-10-dns-resolution-via-vpn-connection-not-working]. I applied the resolution as described by LikeARock47 on Feb 23. This did not improve the situation however.
Does this indeed have to do with the Domain Name resolution issue or is the problem somewhere else?
Is there a better solution to the Domain Name resolution issue?
A JDBC connection from Squirrel on my local Windows 10 computer has successfully been established to the HBase cluster by using the zookeeper IP address, the port, and "/hbase-unsecure":
jdbc:phoenix:10.254.x.x:2181:/hbase-unsecure
I can manage my HBase cluster with a local Squirrel now!
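For anyone who wants to use the same URL outside Squirrel, here is a minimal JDBC sketch (assuming the Phoenix client jar, which registers the JDBC driver, is on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class PhoenixJdbcCheck {
    public static void main(String[] args) throws SQLException {
        // Same URL as above; the x.x part is redacted, fill in the real zookeeper IP.
        String url = "jdbc:phoenix:10.254.x.x:2181:/hbase-unsecure";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Connected to Phoenix: " + !conn.isClosed());
        }
    }
}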
I'd still be interested to find out how I can get the zookeeper FQDN resolved locally.
I'm trying to run the simplest Spark standalone cluster on an Azure VM. I'm running a single master, with a single worker running on the same machine. I can access the Web UI perfectly, and can see that the worker is registered with the master.
But I can't connect to this cluster using spark-shell on my laptop. When I looked in the logs, I see
15/09/27 12:03:33 ERROR ErrorMonitor: dropping message [class akka.actor.ActorSelectionMessage]
for non-local recipient [Actor[akka.tcp://sparkMaster@40.113.XXX.YYY:7077/]]
arriving at [akka.tcp://sparkMaster@40.113.XXX.YYY:7077] inbound addresses
are [akka.tcp://sparkMaster@somehostname:7077]
akka.event.Logging$Error$NoCause$
Now I think the reason why this is happening is that on Azure, every virtual machine sits behind a type of firewall/load balancer. I'm trying to connect using the Public IP that Azure tells me (40.113.XXX.YYY), but Spark refuses to accept connections because this is not the IP of an interface.
Since this IP does not belong to the machine, I can't bind to it either.
How can I get Spark to accept these packets as well?
Thanks!
You can try setting SPARK_MASTER_IP in spark-env.sh to the IP address instead of the hostname.
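For example, a sketch of the setting (the address is a placeholder for the interface IP your master actually binds to):

# conf/spark-env.sh on the master node
export SPARK_MASTER_IP=10.0.0.4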
I had the same problem and was able to resolve it by finding the configured --ip parameter on the command line that runs Spark:
$ ps aux | grep spark
[bla bla...] org.apache.spark.deploy.master.Master --ip YOUR_CONFIGURED_IP [bla bla...]
Then I was able to connect to my cluster by using exactly the same value as YOUR_CONFIGURED_IP:
spark-shell --master spark://YOUR_CONFIGURED_IP:7077
I am presently trying to set up a Cassandra cluster (4 nodes with 2 seeds) in our production environment.
When I connect with comma-separated host names and ports, it works fine.
cluster = HFactory.getOrCreateCluster("Test Cluster", "host1:9160,host2:9160,host3:9160,host4:9160");
But when I configured a load-balancer name in front of the individual nodes and used that name in the Hector Thrift client, I got the exception below:
cluster = HFactory.getOrCreateCluster("Test Cluster", "lbname");
SEVERE: me.prettyprint.hector.api.exceptions.HectorException: All host pools marked down. Retry burden pushed out to client.
me.prettyprint.hector.api.exceptions.HectorException: All host pools marked down. Retry burden pushed out to client.
at me.prettyprint.cassandra.connection.HConnectionManager.getClientFromLBPolicy(HConnectionManager.java:393)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:249)
at me.prettyprint.cassandra.service.AbstractCluster.describeKeyspace(AbstractCluster.java:199)
at com.july.storage.cassandra.util.CassandraDBUtil.getDb(CassandraDBUtil.java:107)
at com.july.storage.cassandra.util.CassandraDBUtil.hasTable(CassandraDBUtil.java:91)
at com.july.storage.cassandra.action.CassandraHandler.getCall(CassandraHandler.java:65)
at com.july.storage.service.StorageService.GET(StorageService.java:58)
at com.july.storage.cassandra.action.CassandraHandler.main(CassandraHandler.java:571)
Don't use a load balancer in front of Cassandra. Let your client connect to all of the nodes. A load balancer will just be a single point of failure, and add unnecessary latency.
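With Hector that can look something like the sketch below, which reuses the host list from the working example above (setAutoDiscoverHosts is optional and just lets the client pick up nodes that were not listed explicitly):

import me.prettyprint.cassandra.service.CassandraHostConfigurator;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.factory.HFactory;

public class HectorDirectHosts {
    public static void main(String[] args) {
        // Point the client at every node directly; no load balancer in between.
        CassandraHostConfigurator hostConfig =
                new CassandraHostConfigurator("host1:9160,host2:9160,host3:9160,host4:9160");
        hostConfig.setAutoDiscoverHosts(true); // optionally discover the rest of the ring
        Cluster cluster = HFactory.getOrCreateCluster("Test Cluster", hostConfig);
        System.out.println("Cluster name: " + cluster.describeClusterName());
    }
}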