I'm running a single node at the moment. I'm trying to enable password authentication for Cassandra.
I'm following this guide: http://cassandra.apache.org/doc/latest/operating/security.html#password-authentication
I'll note that I didn't alter system_auth's replication as it's a single node cluster.
I edited cassandra.yaml to use authenticator: PasswordAuthenticator.
I then restarted Cassandra and ran cqlsh -u cassandra -p cassandra, but that gives me this error:
Connection error: ('Unable to connect to any servers',
{'127.0.0.1': AuthenticationFailed(u'Failed to authenticate to 127.0.0.1:
code=0100 [Bad credentials] message="org.apache.cassandra.exceptions.
UnavailableException: Cannot achieve consistency level QUORUM"',)})
I've tried running nodetool repair but it says: Replication factor is 1. No repair is needed for keyspace 'system_auth'
How do I solve this?
I managed to solve the problem.
I had to run ALTER KEYSPACE system_auth WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 }; as it was set to {'class': 'NetworkTopologyStrategy', 'DC1': '1', 'DC2': '1'} previously, even though it was a single node cluster.
This is why it couldn't achieve a QUORUM.
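The arithmetic behind this can be sketched in a few lines of Python (the formula QUORUM = floor(RF / 2) + 1, with RF summed across all datacenters, matches the Cassandra documentation; the variable names are just illustrative):

```python
def quorum(total_replicas: int) -> int:
    """QUORUM = floor(RF / 2) + 1, where RF is the sum of the
    replication factors across all datacenters."""
    return total_replicas // 2 + 1

# Old setting: {'class': 'NetworkTopologyStrategy', 'DC1': '1', 'DC2': '1'}
print(quorum(1 + 1))  # -> 2 replicas required, but only 1 node exists

# After: {'class': 'SimpleStrategy', 'replication_factor': 1}
print(quorum(1))      # -> 1 replica required, satisfiable on a single node
```

With the old setting, authentication needs 2 replicas to respond, which a single node can never provide; after the ALTER, 1 replica suffices.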
Follow the steps below:
1. Set the authenticator in the /etc/cassandra/cassandra.yaml file to AllowAllAuthenticator
2. Restart Cassandra:
sudo service cassandra restart
3. Run the following commands:
cqlsh
ALTER KEYSPACE system_auth WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
4. Change the authenticator back to PasswordAuthenticator
5. Restart Cassandra:
sudo service cassandra restart
6. Now you will be able to log in using the following command:
cqlsh -u cassandra -p cassandra
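After running the ALTER KEYSPACE statement, you can verify the change from cqlsh; on Cassandra 3.0+ the replication settings live in the system_schema tables:

```sql
-- confirm system_auth now uses SimpleStrategy with RF 1
SELECT keyspace_name, replication
FROM system_schema.keyspaces
WHERE keyspace_name = 'system_auth';
```

On older versions, DESCRIBE KEYSPACE system_auth shows the same information.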
The issue happens because the system_auth keyspace was previously set to {'class': 'NetworkTopologyStrategy', 'DC1': '1', 'DC2': '1'}, even though it is a single node cluster.
http://docs.datastax.com/en/datastax_enterprise/4.8/datastax_enterprise/sec/secConfiguringInternalAuthentication.html
The default 'cassandra' user always authenticates at QUORUM against system_auth. Try creating a different superuser and logging in as that user; your problem should be gone.
The reason is that you cannot achieve QUORUM on a single-node cluster; see Igor's answer.
With those replication settings, QUORUM is impossible in a 1-node (or 2-node) configuration, and repair is not needed (it is used to fix data inconsistencies between nodes).
In the cassandra.yaml file, switch the authenticator back to authenticator: AllowAllAuthenticator and make sure authorizer: AllowAllAuthorizer is set as well. This will allow you to use cqlsh again, since it will no longer check for authentication before connecting. Once in cqlsh, follow the other answers to lower the required replication.
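For reference, the temporarily relaxed settings in cassandra.yaml would look like this (both keys already exist in the default file):

```yaml
# temporary settings while fixing system_auth replication
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
```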
In case you are using docker-desktop, do the following:
Open a CLI inside the Cassandra container.
cd /etc/cassandra
Edit cassandra.yaml (for example with vim) and change the authenticator to AllowAllAuthenticator.
Restart the container:
docker-compose up -d image_name
From cqlsh, lower the system_auth replication factor as shown in the other answers.
Now switch the authenticator back to PasswordAuthenticator in cassandra.yaml.
Restart the container and log in to cqlsh with:
cqlsh -u cassandra -p cassandra
What fixed it for me was to go into the cqlsh shell, use the keyspace you're having issues with, and run the command CONSISTENCY QUORUM.
Related
I'm trying to build a Cassandra cluster with 3 nodes.
On node 1 I get an error when I try to use cqlsh:
Connection error: ('Unable to connect to any servers', {'127.0.0.1': error(111, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})
In cassandra.yaml I don't see any lines with the IP 127.0.0.1; I wrote what I believe are the correct parameters:
cluster_name: 'cassandra_cluster'
seeds: "node_public_ip_1,node_public_ip_2,node_public_ip_3"
listen_address: node_public_ip_1
rpc_adress: node_public_ip_1
in cassandra-topology.properties:
`# Cassandra Node IP=Data Center:Rack`
node_public_ip_1=dc1:rac1
node_public_ip_2=dc1:rac1
node_public_ip_3=dc1:rac1
What am I doing wrong?
Thanks!
When you run cqlsh without specifying the host, it defaults to localhost (127.0.0.1). You need to specify the client IP address (rpc_address) of the node you want to connect to when running cqlsh.
Also, the standard recommendation for multi-homed servers is to use the private IP for internal cluster comms and the public IP for client connections. This means that in cassandra.yaml you would configure:
listen_address: private_ip
rpc_address: public_ip
Since the seeds are used for seeding cluster communication, you would also configure the seeds list with the nodes' private IP addresses.
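Putting that together, a sketch of the relevant cassandra.yaml settings for one node (the bracketed names are placeholders, not literal values):

```yaml
cluster_name: 'cassandra_cluster'
listen_address: <this_node_private_ip>   # internal cluster comms
rpc_address: <this_node_public_ip>       # client connections
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "<node1_private_ip>,<node2_private_ip>,<node3_private_ip>"
```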
Finally, if you're using the old PropertyFileSnitch, you should also configure cassandra-topology.properties with the nodes' private IP addresses.
However, PFS is really old and in almost all cases our recommendation is to use GossipingPropertyFileSnitch since it has all the benefits of being able to expand the cluster in the future without any downsides as I've explained in this post -- https://community.datastax.com/questions/8887/.
It is also a lot simpler to manage GPFS since you only need to set a single node's DC and rack configuration in cassandra-rackdc.properties without having to reconfigure all the nodes whenever you add/remove nodes from the cluster.
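With GPFS each node only declares its own location; for the topology in the question, cassandra-rackdc.properties on every node would simply contain:

```properties
# cassandra-rackdc.properties (one file per node)
dc=dc1
rack=rac1
```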
When you do switch to GPFS, you should delete cassandra-topology.properties on the nodes to prevent any gossip issues in the future as I've explained in this post -- https://community.datastax.com/questions/4621/. Cheers!
Try running "nodetool status" and "nodetool describecluster" on the nodes of this cluster to identify the IPs used by the nodes,
then try connecting by passing the host/IP, like:
cqlsh <host> <port> -u <user> -p <password>
I have set up a 2-node Cassandra cluster in GCP.
But the issue is I am not able to connect with cqlsh. I am getting the below error:
$ cqlsh
Connection error: ('Unable to connect to any servers', {'127.0.0.1':
First, as I mentioned to you in this answer, using cqlsh to connect to 127.0.0.1 simply will not work in a multi-node cluster. You will need to specify the IP address shown in the result of your nodetool status command.
Next, the second part of the error message should give a big clue on this one:
AuthenticationFailed('Remote end requires authentication'),)})
With authentication enabled, you will need to provide a valid username and password to log in. If you have not created any new users, then the username and password will both be defaulted to "cassandra."
$ cqlsh 10.138.0.3 -u cassandra -p cassandra
Try this:
cqlsh 10.138.0.3 9042
We're setting up a 3-node Cassandra cluster in AWS.
We performed the steps below:
1) On all 3 nodes, installed the latest version of Oracle JDK 1.8.
On all 3 nodes, installed Cassandra 2.1.8.
On all 3 nodes, located cassandra.yaml and set the following properties:
cluster_name: 'ABC'
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
write_request_timeout_in_ms: 5000
2) In the same file, set both "listen_address" and "rpc_address" to a permanent address of a host (the one which is not changed after Amazon VM restart).
3) In the same file, under the "seed_provider" property, set the "seeds" property to a permanent address of the host (one which is not changed after an Amazon VM restart) which is chosen to be the seed.
4) Save all changes and close the file. Also open the required ports in firewall.
5) Started Cassandra on all nodes one by one and confirmed that all nodes are up.
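As a sketch, the cassandra.yaml fragment implied by the steps above might look like this (the bracketed values are placeholders for the permanent host addresses):

```yaml
# cassandra.yaml (sketch)
cluster_name: 'ABC'
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
write_request_timeout_in_ms: 5000
listen_address: <permanent_host_address>
rpc_address: <permanent_host_address>
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "<permanent_seed_address>"
```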
Problem :
Connected to node 1 and executed the below query
root#ip-10-181-119-112:/etc/cassandra/bin# ./cqlsh -u cassandra -p cassandra 10-181-119-112
Connected to Dev Cluster at 10-181-119-112:9042.
[cqlsh 5.0.1 | Cassandra 2.1.8 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cassandra#cqlsh> ALTER KEYSPACE system_auth WITH REPLICATION = { 'class': 'NetworkTopologyStrategy', 'us-east': 3 };
I tried to change the same on the second node, but I am not able to connect to it; I'm facing the below error.
root#ip-10-181-133-155:/etc/cassandra/bin# ./cqlsh -u cassandra -p cassandra 10-181-133-155
Connection error: ('Unable to connect to any servers', {'10-181-133-155': AuthenticationFailed(u'Failed to authenticate to 10-181-133-155: code=0100 [Bad credentials] message="org.apache.cassandra.exceptions.UnavailableException: Cannot achieve consistency level QUORUM"',)})
Please let me know what I am doing wrong here and help me in resolving this issue.
It seems like it can't read the authentication data for the default user. Try running nodetool repair on the system_auth keyspace on all nodes. Also, make sure the datacenter name used in your replication settings ("us-east") matches what you see in nodetool status.
I have set up Cassandra, and I've created a keyspace ('mykeyspace') and a table in it. I started Cassandra as a service and added the cassandra.properties file in the Presto installation files like this:
connector.name=cassandra
cassandra.contact-points=localhost
cassandra.native-protocol-port=9142
cassandra.thrift-port=9160
After this I issued the following command in Presto, but I'm not sure if it is connecting to the Cassandra data:
./presto --server localhost:8080 --catalog cassandra --schema mykeyspace
Now, when I run the command 'show tables', I get this exception message:
All host(s) tried for query failed (tried: localhost/127.0.0.1 (com.datastax.driver.core.TransportException: [localhost/127.0.0.1] Cannot connect))
I have used cqlsh to view a table I created in 'mykeyspace', so I am sure that Cassandra is running.
I would really appreciate any help clearing up this error.
If you have a default Cassandra installation, the default native protocol port is 9042. If that is the case, you can remove the cassandra.native-protocol-port and cassandra.thrift-port properties.
If you want to keep these ports, you can change the native_transport_port property in the cassandra.yaml configuration file.
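If you go with the default port, the catalog file reduces to a minimal sketch like this (assuming Cassandra is reachable on localhost:9042):

```properties
# Presto catalog file: cassandra.properties
connector.name=cassandra
cassandra.contact-points=localhost
```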
I hope this helps.
When I try to set up a connection, I get the error
Unable to connect to 'Test Cluster': All host(s) tried for query failed
Unexpected error during transport initialization ... (host ip adresses) Channel has been closed
The remote database is on port 9161, which I entered in the "Native protocol port" field.
Additionally it has a username and password, which I also entered during setup. This is all on a 64-bit Windows machine.
I had the same error!
You have to open cmd first.
Type "cd "
Type "cassandra" to run the Cassandra server.
Then try again in DevCenter with localhost and port 9042.
I hope it helps! ^^
There is another common scenario where this might happen, in case someone stumbles upon this thread in the future.
This type of thing typically happens when the host crashes unexpectedly, resulting in corruption of the SSTables or commitlog files.
This is why it is really important to use replication: when you get into this situation, you can run nodetool repair to rebuild the corrupted tables and data from the other nodes.
If you are not fortunate enough to have replication configured, then you are in for some data loss. Clear the suspect file from \data\commitlogs, cry a little, and restart the node.