I have full access to the Cassandra installation files, and a PasswordAuthenticator is configured in cassandra.yaml. What do I have to do to reset the admin user's password, which has been lost, while keeping the existing databases intact?
The hash has changed for Cassandra 2.1:
Switch to authenticator: AllowAllAuthenticator
Restart Cassandra
UPDATE system_auth.credentials SET salted_hash = '$2a$10$H46haNkcbxlbamyj0OYZr.v4e5L08WTiQ1scrTs9Q3NYy.6B..x4O' WHERE username='cassandra';
Switch back to authenticator: PasswordAuthenticator
Restart Cassandra
Login as cassandra/cassandra
CREATE USER and ALTER USER to your heart's content.
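For reference, the salted_hash above is simply a bcrypt hash of the default password 'cassandra'. If you would rather reset straight to a different password, here is a minimal sketch for generating a compatible hash (it assumes the Python bcrypt package is installed; the password is a placeholder):
import bcrypt
# PasswordAuthenticator stores bcrypt hashes with the $2a$ prefix and a work factor of 10
print(bcrypt.hashpw(b"my_new_password", bcrypt.gensalt(rounds=10, prefix=b"2a")).decode())
Use the printed value as the salted_hash in the UPDATE statement above.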
Solved with the following steps:
Change authenticator in cassandra.yaml to AllowAllAuthenticator and restart Cassandra
cqlsh
update system_auth.credentials set salted_hash='$2a$10$vbfmLdkQdUz3Rmw.fF7Ygu6GuphqHndpJKTvElqAciUJ4SZ3pwquu' where username='cassandra';
Exit cqlsh
Change authenticator back to PasswordAuthenticator and restart Cassandra
Now you can log in with
cqlsh -u cassandra -p cassandra
and change the password to something else.
As of Cassandra 2.0:
ALTER USER cassandra WITH PASSWORD 'password';
If you want to add a user:
CREATE USER uname WITH PASSWORD 'password';   -- add a new user
GRANT ALL ON ALL KEYSPACES TO uname;          -- grant permissions to the new user
Verify your existing users with LIST USERS;
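Depending on your cqlsh version, you may also be able to run the change non-interactively with -e/--execute; a quick sketch (the new password is a placeholder):
cqlsh -u cassandra -p cassandra -e "ALTER USER cassandra WITH PASSWORD 'new_password';"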
EDIT
Oh boy, this is gonna be fun! So, I found one hacktastic way, but it requires changing the source code.
First a high level overview:
Edit source so you can make changes to the system_auth.credentials column family
Change the authenticator to AllowAllAuthenticator
Start C*
Log in with cqlsh without needing a password
Update the cassandra user's password hash
Undo the source changes and change back to PasswordAuthenticator.
Step 1 - Edit the source
Open the C* source and go to the class org.apache.cassandra.service.ClientState.
Find the validateLogin() and ensureNotAnonymous() functions and comment out all the contained code so you end up with:
public void validateLogin() throws UnauthorizedException
{
// if (user == null)
// throw new UnauthorizedException("You have not logged in");
}
public void ensureNotAnonymous() throws UnauthorizedException
{
validateLogin();
// if (user.isAnonymous())
// throw new UnauthorizedException("You have to be logged in and not anonymous to perform this request");
}
Step 2 - Change to AllowAllAuthenticator in cassandra.yaml
Steps 3 & 4 - Simple!
Step 5 - Execute this insert statement from cqlsh:
insert into system_auth.credentials (username, options, salted_hash)
VALUES ('cassandra', null, '$2a$10$vbfmLdkQdUz3Rmw.fF7Ygu6GuphqHndpJKTvElqAciUJ4SZ3pwquu');
Note: step 5 will work assuming the user named 'cassandra' has already been created. If you have another user created, just switch the username you are inserting (this procedure resets a password, it doesn't add a new user).
Step 6 - Fix the source by uncommenting validateLogin() and ensureNotAnonymous(), and switch back to the PasswordAuthenticator in cassandra.yaml. You should now have access to cqlsh via ./cqlsh -u cassandra -p cassandra
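Once you are back in, it's a good idea to set a real password immediately and double-check the accounts; for example (the new password is a placeholder):
ALTER USER cassandra WITH PASSWORD 'a_strong_password';
LIST USERS;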
Currently we have several databases in an HADR configuration where the primary databases are on Linux Server "A", with the standbys all on Linux Server "B". The DB2 version is 9.7.
We are attempting to relocate the primary database of one of these databases (with the intent of moving all of them later) to a new Linux Server "C". Efforts to find something similar have only brought back results about HADR takeover, which is not what we are aiming to do.
Let's call this database MYDB.
I have taken the steps below, and while HADR will start between the two by issuing the relevant start HADR commands first on the standby and then on the primary, issuing 'db2pd -db MYDB -hadr' shows them as disconnected, with 'S0000000.log' as the log file on the opposite end. The correct log is displayed locally.
STEPS TAKEN
Quiesced the database and then stopped HADR on the primary; confirmed on the standby that there was now a log gap; stopped HADR on the standby and deactivated it.
Took an offline backup of the current primary database and sent it to the new server "C", where an identical version of DB2 is already set up.
Created a new database 'MYDB' and restored it from the backup sent over.
Updated the relevant database configurations:
On new server C :
db2 update db cfg for mydb using HADR_LOCAL_HOST C
db2 update db cfg for mydb using HADR_LOCAL_SVC hadr_mydb_c
On the existing standby B:
db2 update db cfg for mydb using HADR_REMOTE_HOST C
db2 update db cfg for mydb using HADR_REMOTE_SVC hadr_mydb_c
db2 update alternate server for database mydb using hostname c port 3700
'hadr_mydb_c' has been added to /etc/services on both 'B' and 'C' with a defined port of 3734
'C' has been added to both 'B' and 'C' hosts files. Log locations etc. have been created to match the existing server 'A'.
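(For comparison, the matching remote-side settings on the new server "C" would point back at the standby "B" in the same way; 'hadr_mydb_b' below is only an assumed example for the existing service name on "B":
db2 update db cfg for mydb using HADR_REMOTE_HOST B
db2 update db cfg for mydb using HADR_REMOTE_SVC hadr_mydb_b)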
At this stage we have done a db2stop and db2start on the backup; then, when we issue 'db2 start hadr on db mydb as standby', we get confirmation that HADR has started.
On the new primary we issue 'db2 start hadr on db mydb as primary' and again get confirmation that HADR has started.
db2pd shows HADR as active but the databases not connected.
When I issue the following command on the backup instance to see HADR details, I can't see the MYDB database listed along with the other databases:
db2 "SELECT SUBSTR(DB_NAME, 1, 8) AS DBNAME, HADR_ROLE, HADR_STATE,HADR_SYNCMODE, HADR_CONNECT_STATUS,HADR_HEARTBEAT,HADR_TIMEOUT,HADR_LOG_GAP FROM TABLE (SNAP_GET_HADR (CAST (NULL as VARCHAR(128)), 0)) as T"
Luckily, by reverting the HADR config back to what it was previously, we can reconnect HADR between the old primary and the backup. Any ideas on how best to proceed?
I'm facing some issues when I try to run a simple SELECT query on influxdb via the Python library.
I'm trying to run the following query:
influx_client.query('SELECT * FROM "measurements" LIMIT 10;')
Of course I switched to the corresponding database (and connected to the server) before executing the query. I also tried these variants of the query:
influx_client.query("SELECT * FROM \"measurements\" LIMIT 10;")
influx_client.query("SELECT * FROM 'measurements' LIMIT 10;")
influx_client.query('SELECT * FROM \'measurements\' LIMIT 10;')
influx_client.query('SELECT * FROM {0} LIMIT 10;'.format("measurements"))
influx_client.query("SELECT * FROM {0} LIMIT 10;".format("measurements"))
however, they all lead to the same issue.
The result (or rather, the error) that I get is the following:
influxdb.exceptions.InfluxDBClientError: 403: {"error":"error authorizing query: myuser not authorized to execute statement 'SELECT * FROM \"measurements\" LIMIT 10', requires READ on True"}
I know that my user has the required permissions because when connecting to the DB with the CLI I can execute the query. On top of that, I checked the permissions with SHOW GRANTS and could see that all requirements are satisfied (the user actually does have all privileges).
I have seen some similar issues already (for instance in this issue); however, that does not fit my case since I'm quoting the query.
Information about the environment:
InfluxDB version: 1.8.0
InfluxDB-python version: 5.3.1
Python version: 3.6.8
Operating system version: CentOS 7
Any ideas?
There are two things you need to check for the authentication issue:
HTTPS configuration with the given private key and certificate (see the link)
Passing the user credentials for the InfluxDB connection (check the case sensitivity as well)
I have used Influx, and these are the key configuration points that lead to authentication issues.
Using the CLI, you need to grant the user permission to the given database:
USE <your-database>
GRANT ALL PRIVILEGES TO <username>
(See: Grant Permission To User)
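For what it's worth, a minimal sketch of passing the credentials and the target database explicitly from Python (host, credentials and database name are placeholders):
from influxdb import InfluxDBClient

# credentials and database are passed explicitly; add ssl=True/verify_ssl=True if HTTPS is configured
client = InfluxDBClient(host='localhost', port=8086,
                        username='myuser', password='mypassword',
                        database='mydb')
result = client.query('SELECT * FROM "measurements" LIMIT 10;', database='mydb')
print(list(result.get_points()))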
In Fauxton, I've setup a replication rule from a CouchDB v1.7.1 database to a new CouchDB v2.3.0 database.
The source does not have any authentication configured. The target does. I've added the username and password to the Job Configuration.
It looks like the replication got stuck somewhere in the process. 283.8 KB (433 documents) are present in the new database. The source contains about 18.7 MB (7215 docs) of data.
When restarting the database, I'm always getting the following error:
[error] 2019-02-17T17:29:45.959000Z nonode@nohost <0.602.0> --------
throw:{unauthorized,<<"unauthorized to access or create database
http://my-website.com/target-database-name/">>}:
Replication 5b4ee9ddc57bcad01e549ce43f5e31bc+continuous failed to
start "https://my-website.com/source-database-name/ "
-> "http://my-website.com/target-database-name/ " doc
<<"shards/00000000-1fffffff/_replicator.1550593615">>:<<"1e498a86ba8e3349692cc1c51a00037a">>
stack:[{couch_replicator_api_wrap,db_open,4,[{file,"src/couch_replicator_api_wrap.erl"},{line,114}]},{couch_replicator_scheduler_job,init_state,1,[{file,"src/couch_replicator_scheduler_job.erl"},{line,584}]}]
I'm not sure what is going on here. From the logs I understand there's an authorization issue. But the database is already present (hence, it has been replicated partially already).
What does this error mean and how can it be resolved?
The reason for this error is that the CouchDB v2.3.0 instance was being re-initialized on reboot. It required me to fill in the cluster configuration again.
Therefore, the replication could not continue until I had the configuration re-applied.
The issue with having to re-apply the cluster configuration has been solved in another SO question.
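For reference, on a single-node install the cluster configuration can be re-applied through the _cluster_setup endpoint; a minimal sketch (admin credentials, host and bind address are placeholders):
curl -X POST http://admin:password@localhost:5984/_cluster_setup \
  -H "Content-Type: application/json" \
  -d '{"action": "enable_single_node", "bind_address": "0.0.0.0", "username": "admin", "password": "password"}'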
I'm trying to set up a continuous replication job on a single-server deployment of CouchDB 2.2, from a local db to a local db, without spreading a user's password around.
I can get replication working by creating a document in the _replicator db of this form:
{
"_id": "my-replication-job-id",
"_rev": "1-5dd6ea5ad8479bb30f84dac02aaba59c",
"source": "http://username:password#localhost:5984/source_db_name",
"target": "http://username:password#localhost:5984/target_db_name",
"continuous": true,
"owner": "user-that-created-this-job"
}
Is there any way to do this without inserting the user's password in plaintext (or a base64-encoded version of it) in the source and target fields of my replication job? This is all running on the same server... both databases and the replication job itself.
I see that CouchDB 2.2 added the ability for replication jobs to maintain their authenticated state using a cookie, but my understanding is that the username and password are still necessary in order to initiate an authenticated session with the source and target DBs.
I should add that I have require_valid_user = true configured.
Thanks in advance.
It may be this issue:
https://github.com/apache/couchdb/issues/1550
which has now been fixed, and I believe it will appear in 2.2.1.
I'm trying to test memsql-spark-connector, and for this I created a single-node MemSQL cluster on AWS (https://docs.memsql.com/docs/quick-start-with-amazon-webservices).
On my laptop I want to run a Spark application in local mode. This application should simply create a DataFrame for a table and collect all rows. Here is the code:
import org.apache.spark.{SparkConf, SparkContext}
// (plus the MemSQLContext import provided by the memsql-spark-connector library)
val conf = new SparkConf()
.setAppName("Test App")
.setMaster("local[*]")
.set("memsql.host", "x.x.x.x")
.set("memsql.port", "3306")
.set("memsql.user", "root")
.set("memsql.password", "1234")
.set("memsql.defaultDatabase", "dataframes_test")
val sc = new SparkContext(conf)
val memsql = new MemSQLContext(sc)
val df = memsql.table("person")
df.collect().foreach(println(_))
where x.x.x.x is the address of my AWS instance.
The problem is that although I can connect to the MemSQL server from my laptop, memsql-spark-connector tries to access the leaf node directly (i.e. connect to port 3307 instead of 3306). And when this happens I get the following error:
java.sql.SQLException: Access denied for user 'root'@'108.208.196.149' (using password: YES)
But the root user actually does have all permissions:
memsql> show grants for 'root'@'%';
+--------------------------------------------------------------------------------------------------------------------------------+
| Grants for root@% |
+--------------------------------------------------------------------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY PASSWORD '*A49656EC00D74D3524072F3452C1FBA7A1F3B561' WITH GRANT OPTION |
+--------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
Is it possible to grant permissions to the leaf nodes so that this connection to x.x.x.x:3307 succeeds as well?
I realize that it's probably not the way it's designed to be used, but I want to do it this way only for testing. It's convenient to debug when everything is in a single JVM, and I don't want to bother with a Spark installation for now. I could install MemSQL locally to solve my problem, but I can't do that on a Mac (is this right, BTW?).
Any help appreciated!
UPDATE: I just tried to connect locally on the server, and it still doesn't work:
ubuntu#ip-x-x-x-x:~$ memsql -P 3307 -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
The password I'm providing is correct; on AWS it's the instance ID, so it's very hard to make a mistake.
This means it wouldn't work even if I had a Spark executor on the same instance as the leaf node. It feels like something is wrong with my setup, but I didn't actually change any settings; everything is at its default.
Are the master node and leaf nodes supposed to use the same credentials? Is there a way to set them up separately for the leaf?
That error means that the login was denied, i.e. incorrect username/password (not that the user doesn't have enough permissions). Make sure the password you are using in the Spark connector matches the password on all the nodes.
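If the leaf's root password really has drifted, one way to realign it, assuming you can still connect to that leaf with its current credentials, is to re-issue the grant on the leaf itself, mirroring the grant shown above (the password is a placeholder):
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'your_password' WITH GRANT OPTION;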