This question applies to Cassandra as well.
I am using the cassandra-driver package in Node to connect to my ScyllaDB node. The node can be reached via cqlsh from other Linux machines outside of its network. However, when I connect to it from a Node application on Windows, the driver is unable to reach the host. I have tried ports 9042, 9160, and a few others as well.
Previously, I was using Docker to host multiple ScyllaDB nodes in the same Linux VM, with Docker ultimately exposing them on port 80, which I was able to connect to from the Node application.
Where do you think the problem is? Does Windows have a problem connecting to Scylla/Cassandra nodes?
P.S.: The Scylla node is hosted on an Ubuntu 18.04 VM on Azure.
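For reference, a minimal connection sketch with the Node cassandra-driver looks like the following; the contact point and data center name are placeholders, not values from the question:

const cassandra = require('cassandra-driver');

// Placeholder values: substitute the VM's public IP and the data center
// name reported by `nodetool status` on the Scylla node.
const client = new cassandra.Client({
  contactPoints: ['<azure-vm-public-ip>:9042'],
  localDataCenter: 'datacenter1',
});

client.connect()
  .then(() => console.log('connected'))
  .catch((err) => console.error('connection failed:', err));

Note that recent (4.x) versions of the driver require localDataCenter when using the default load-balancing policy; a mismatch there produces its own error rather than an unreachable-host one.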
Related
I installed Cassandra on Windows 10. When I try to run cqlsh from bin/, I get the following error:
Connection error: ('Unable to connect to any servers', {'127.0.0.1': error(10061, "Tried connecting to [('127.0.0.1', 9042)]. Last error: No connection could be made because the target machine actively refused it")})
I installed Cassandra from the official apache.org site, and I followed the guide at https://phoenixnap.com/kb/install-cassandra-on-windows - everything looks right according to that reference.
Can anyone help me solve this? Thanks in advance.
The error states that cqlsh can't connect to the local Cassandra instance. The default configuration in conf/cassandra.yaml is for Cassandra to listen for CQL clients on localhost (127.0.0.1) and CQL port 9042:
native_transport_port: 9042
rpc_address: localhost
Since you're getting a "connection refused" error, the most likely issue is that Cassandra is not running on your Windows machine. Check the Cassandra logs (usually in logs/system.log) for errors which would provide clues as to why Cassandra couldn't start.
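A quick way to confirm, assuming a default install, is to check whether anything is listening on the CQL port at all; on Windows:

netstat -an | findstr 9042

If nothing shows up as LISTENING on 127.0.0.1:9042, Cassandra is not up, and the logs are the place to look.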
As a side note, there is very limited Windows support in Cassandra 3.x and there are several known issues that will not be fixed due to limitations in the operating system.
Furthermore, Windows support has been completely dropped in Cassandra 4.0 due to lack of maintainers and testing (CASSANDRA-16171).
As a workaround, we recommend the following:
Deploy Cassandra in Docker (see the one-line sketch after this list)
Deploy Cassandra in a VM using software like VirtualBox
Deploy K8ssandra.io
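For the Docker option, a minimal single-node setup might look like this; the container name and image tag are arbitrary choices, not part of the original recommendation:

docker run --name cassandra -p 9042:9042 -d cassandra:4.1

Mapping port 9042 lets clients on the host connect to localhost:9042 just as they would to a native install.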
If you just want to build apps with Cassandra as a backend, Astra DB has a free tier that lets you launch a Cassandra cluster in a few clicks with no credit card required. Cheers!
Do you keep that terminal open so that Cassandra is still running when you try to connect? Note that you have to launch cqlsh from a different terminal window.
Please check the steps again; most probably Cassandra simply isn't running. Pay particular attention to step 4.
I have a client that wants to use Tableau on their EMR Spark cluster.
The documentation seems straightforward but I'm getting errors when I try to connect.
Here is the setup:
The EMR cluster's master doesn't have a public IP, but from the Tableau Desktop EC2 instance I am able to ping it and telnet to port 10001, where Thrift is running
I am able to test Thrift with beeline and it connects fine (example after this list)
I am not using SSL or authentication, given the limited access the cluster has
I have installed both DataDirect 8.0 and the Simba ODBC driver
I'm using emr-5.13.0, the Hadoop distribution is Amazon 2.8.3 and the Spark version is 2.3.0.
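For reference, the beeline test mentioned above would look something like this; the host is a placeholder for the master's private IP:

beeline -u jdbc:hive2://<master-private-ip>:10001/default

A successful connection here shows the Thrift server itself is reachable, which narrows the problem down to the ODBC layer.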
The error is:
Unable to connect to the ODBC Data Source. Check that the necessary drivers are installed and that the connection properties are valid.
[Simba][ThriftExtension] (5) Error occurred while contacting server: No more data to read.. This could be because you are trying to establish a non-SSL connection to an SSL-enabled server.
Unable to connect to the server "IP". Check that the server is running and that you have access privileges to the requested database.
I simply followed the documentation provided by Tableau, which says to install the driver only (not to configure ODBC), then use it in Tableau. I verified that I had set no SSL and no authentication before trying to connect. I also verified connectivity by running DataGrip on the Tableau EC2 instance and executing a query, which works as expected.
I resolved the issue by ignoring the documentation and just setting up the ODBC driver (as a DSN), then choosing it instead of Spark SQL as a source.
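For anyone reproducing this, a DSN for the Simba Spark ODBC driver on Linux might look roughly like the following; the key names come from Simba's documentation, but the driver path, host, and transport values here are assumptions to adapt, not settings from this answer:

[SparkThrift]
# Path to the Simba driver library; adjust to your install location
Driver=/opt/simba/spark/lib/64/libsparkodbc_sb64.so
Host=<master-private-ip>
Port=10001
# 3 = connect to a Spark Thrift Server
SparkServerType=3
# 0 = no authentication
AuthMech=0
# Transport must match the server: try 1 (SASL) for a default
# Spark Thrift Server, 0 (Binary) otherwise
ThriftTransport=1
SSL=0

On Windows the same properties are set through the ODBC Data Source Administrator dialog rather than an ini file.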
I am trying to connect to Cassandra, which is inside a Docker container, from a Node.js application which is in another Docker container.
My question is: what is the best way to do it?
So far I am able to create both containers using docker-compose.
There are many tutorials on connecting Docker containers. See:
https://deis.com/blog/2016/connecting-docker-containers-1/
https://docs.docker.com/engine/userguide/networking/default_network/container-communication/
https://docs.docker.com/engine/userguide/networking/
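With docker-compose specifically, services on the same Compose network can reach each other by service name, so a minimal sketch might look like this; the service names and image tag are illustrative, not from the question:

version: '3'
services:
  cassandra:
    image: cassandra:4.1
    ports:
      - "9042:9042"  # only needed if the host itself must reach Cassandra
  app:
    build: .
    depends_on:
      - cassandra

The Node application then uses the service name as its contact point (contactPoints: ['cassandra']) instead of localhost. Keep in mind depends_on only orders startup; Cassandra takes a while to boot, so the app should retry its initial connection.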
I have deployed a 3-node AWS ElasticMapReduce cluster bootstrapped with Apache Spark. From my local machine, I can access the master node by SSH:
ssh -i <key> hadoop@ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com
Once ssh'd into the master node, I can access PySpark via pyspark.
Additionally, (although insecure) I have configured my master node's security group to accept TCP traffic from my local machine's IP address specifically on port 7077.
However, I am still unable to connect my local PySpark instance to my cluster:
MASTER=spark://ec2-master-node-public-address:7077 ./bin/pyspark
The above command results in a number of exceptions and prevents PySpark from initializing a SparkContext object.
Does anyone know how to successfully create a remote connection like the one I am describing above?
Unless your local machine is the master node for your cluster, you cannot do that; connecting a remote PySpark shell to the master this way won't work with AWS EMR.
I am new to Cassandra.
I'm trying to deploy a test environment.
Win server 2012 (192.168.128.71) -> seed node
Win server 2008 (192.168.128.70) -> simple node
Win server 2008 (192.168.128.69) -> simple node
On all nodes, I installed the same version of Cassandra (2.0.9 from DataStax).
Disabled the Windows firewall.
The cluster ring formed, but on each node I see
Test Cluster (Cassandra 2.0.9) 1 of 3 agents connected
The nodes do not see the remote agent, even though the agent service is running on each PC.
In the file datastax_opscenter_agent-stderr, I see the following lines:
log4j:ERROR Could not read configuration file [log4j.properties].
log4j:ERROR Ignoring configuration file [log4j.properties].
Please tell me the possible cause, and how I can diagnose it.
Thanks in advance!
The problem is that you have OpsCenter server running on all machines in the cluster. Agents connect to the local OpsCenter servers, so when you open the UI for one of them, you only see one agent connected.
To fix this, stop the server processes (DataStax_OpsCenter_Community) on all machines except for one, and add stomp_interface: <server-ip> to the address.yaml for the agents on all machines, then restart the agents.
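For example, if the OpsCenter server is kept on the seed node from this setup, each agent's address.yaml would contain a line like the following (using the seed node's IP from the question):

stomp_interface: 192.168.128.71

After restarting the agents, all three should report to the single remaining OpsCenter server, and the UI should show 3 of 3 agents connected.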