I created a TiDB cluster with Docker (not Docker Compose). While testing TiDB accounts, I changed the root password, and now I have no way to connect to my cluster database.
We all know that MySQL has the mysqld_safe mode to skip the grant tables for passwordless login. What method should be used in a TiDB cluster for the same or a similar operation?
The way I found in the TiDB FAQ is to stop the TiDB server and run it with the parameter '-skip-grant-table=true'. Unfortunately, in a Docker deployment that means I can only delete the TiDB container and then run a new one, and the rerun TiDB container won't even start.
I don't know how to do it; I look forward to your answer!
Modify the tidb-server configuration file, add the following parameters, and then restart tidb-server:
[security]
skip-grant-table = true
Please refer to the documentation for modifying the user password:
https://pingcap.com/docs-cn/sql/user-account-management/
After the modification, you need to flush the privileges:
FLUSH PRIVILEGES;
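For example, a minimal sketch in Python with PyMySQL (any MySQL-protocol client works, since TiDB speaks the MySQL protocol; the port, account host part, and new password below are assumptions):

import pymysql  # any MySQL client works; TiDB speaks the MySQL protocol

# With skip-grant-table = true, tidb-server accepts connections without a password.
conn = pymysql.connect(host="127.0.0.1", port=4000, user="root")  # 4000 is TiDB's default port
try:
    with conn.cursor() as cur:
        # Placeholder password; adjust the host part ('%') to match your account.
        cur.execute("SET PASSWORD FOR 'root'@'%' = 'new-password'")
        cur.execute("FLUSH PRIVILEGES")
    conn.commit()
finally:
    conn.close()

Once the password is reset, remove skip-grant-table from the configuration and restart tidb-server again.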
I have created a Cassandra database in DataStax Astra and am trying to load a CSV file with DSBulk on Windows. However, when I run the dsbulk load command, the operation never completes and never fails. I receive no error message at all, and I have to terminate the operation manually after several minutes. I have tried to wait it out, letting the operation run for 30 minutes or more, with no success.
I know that a free tier of Astra might run slower, but wouldn't I see at least some indication that it is attempting to load data, even if slowly?
When I run the command, this is the output that is displayed and nothing further:
C:\Users\JT\Desktop\dsbulk-1.8.0\bin>dsbulk load -url test1.csv -k my_keyspace -t test_table -b "secure-connect-path.zip" -u my_user -p my_password -header true
Username and password provided but auth provider not specified, inferring PlainTextAuthProvider
A cloud secure connect bundle was provided: ignoring all explicit contact points.
A cloud secure connect bundle was provided and selected operation performs writes: changing default consistency level to LOCAL_QUORUM.
Operation directory: C:\Users\JT\Desktop\dsbulk-1.8.0\bin\logs\LOAD_20210407-143635-875000
I know that DataStax recently changed Astra so that you need credentials from a generated Token to connect DSBulk, but I have a classic DB instance that won't accept those token credentials when entered in the dsbulk load command. So, I use my regular user/password.
When I check the DSBulk logs, the only text is the same output displayed in the console, which I have shown in the code block above.
If it means anything, I have the exact same issue when trying to run the dsbulk count operation.
I have the most recent JDK and have set both the JAVA_HOME and PATH variables.
I have also tried adding dsbulk/bin directory to my PATH variable and had no success with that either.
Do I need to adjust any settings in my Astra instance?
Lastly, is it possible that my basic laptop is simply not powerful enough for this operation, or is just running it extremely slowly?
Any ideas or help is much appreciated!
I would like to refresh the Redis server password. The issue is that some external services are using it, so until I propagate this change, things will eventually stop working.
From my research I have only seen the requirepass directive plus a server restart, but this has downtime.
With other databases like Postgres, I would create a new user and password, migrate permissions, change them at the application level, and then invalidate the previous access.
How can I do this process in Redis?
You can change the password without downtime by issuing:
redis> CONFIG SET requirepass <your new password>
To persist the change for the next restart, edit your .conf file or issue a CONFIG REWRITE.
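As a sketch in Python with the redis-py client (host, port, and passwords are placeholders), the whole rotation is one CONFIG SET against the live server; connections that are already authenticated keep working, and only new connections need the new password:

import redis  # assumes the redis-py client

r = redis.Redis(host="localhost", port=6379, password="old-password")

# Rotate the password on the running server: no restart, no downtime.
r.config_set("requirepass", "new-password")

# Persist the change for the next restart (equivalent to CONFIG REWRITE;
# requires the server to have been started with a config file).
r.config_rewrite()

# Connections opened from now on must authenticate with the new password.
r_new = redis.Redis(host="localhost", port=6379, password="new-password")
assert r_new.ping()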
Here's the example I have modeled after.
In the Readme's "Delete our manual pod" section:
The redis sentinels themselves, realize that the master has disappeared from the cluster, and begin the election procedure for selecting a new master. They perform this election and selection, and chose one of the existing redis server replicas to be the new master.
How do I select the new master? All 3 Redis server pods controlled by the redis replication controller from redis-controller.yaml still have the same
labels:
name: redis
which is what I currently use in my Service to select them. How will the 3 pods be distinguishable so that from Kubernetes I know which one is the master?
How will the 3 pods be distinguishable so that from Kubernetes I know
which one is the master?
Kubernetes isn't aware of which Redis server is the master. You can find the pod manually by connecting to it and using:
redis-cli info
You will get lots of information about the server, but we need the role for our purpose:
redis-cli info | grep ^role
Output:
role:master
Please note that ReplicationControllers have been replaced by Deployments for stateless services. For stateful services, use StatefulSets.
Your Redis client library can actually handle this. For example, with ioredis:
ioredis guarantees that the node you connected to is always a master even after a failover.
So you actually connect to a Redis Sentinel instead of a Redis server directly.
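The same pattern exists in other clients. A sketch in Python with redis-py (the Sentinel address redis-sentinel:26379 and the master group name mymaster are assumptions):

from redis.sentinel import Sentinel

# Connect to Sentinel, not to an individual Redis server.
sentinel = Sentinel([("redis-sentinel", 26379)], socket_timeout=0.5)

# master_for() always routes commands to the current master and
# rediscovers it transparently after a failover.
master = sentinel.master_for("mymaster", socket_timeout=0.5)
master.set("greeting", "hello")

# Reads can be spread over the replicas the same way.
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)
print(replica.get("greeting"))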
We needed to do the same thing and tried different approaches, like modifying the chart. Finally, we just created a simple Python Docker image that does the labeling, and a chart that exposes the master Redis as a Service. It periodically checks the pods created for redis-ha and labels them according to their role (master/slave).
It uses the same Sentinel commands to find the master/slave.
helm chart redis-pod-labeler here
source repo
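The core of such a labeler is small. Here is a sketch of the idea in Python, using the kubernetes and redis clients (the namespace, label selector, label key, Sentinel address, and master group name are all assumptions, not taken from the chart):

import time

from kubernetes import client, config
from redis.sentinel import Sentinel

config.load_incluster_config()  # assumes this runs as a pod inside the cluster
v1 = client.CoreV1Api()
sentinel = Sentinel([("redis-sentinel", 26379)], socket_timeout=0.5)

while True:
    # Ask Sentinel which address currently belongs to the master.
    master_ip, _ = sentinel.discover_master("mymaster")

    # Label every Redis pod by its role, so a Service with the selector
    # {redis-role: master} always points at the current master.
    for pod in v1.list_namespaced_pod("default", label_selector="name=redis").items:
        role = "master" if pod.status.pod_ip == master_ip else "slave"
        v1.patch_namespaced_pod(
            pod.metadata.name,
            "default",
            {"metadata": {"labels": {"redis-role": role}}},
        )
    time.sleep(10)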
I added nodes to a cluster which initially used the wrong network interface as listen_address. I fixed it by changing the listen_address to the correct IP. The cluster is running well with that configuration, but clients trying to connect to it still receive the wrong IPs as metadata from the cluster. Is there any way to refresh the metadata of a cluster without decommissioning the nodes and setting up new ones again?
First of all, you may try to follow this advice: http://www.datastax.com/documentation/cassandra/2.1/cassandra/operations/ops_gossip_purge.html
You will need to restart the entire cluster on a rolling basis - one node at a time
If this does not work, try this on each node:
USE system;
SELECT * FROM peers;
Then delete the bad records from peers and restart the node; then go to the next node and do it again.
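A sketch in Python with the cassandra-driver (node addresses are placeholders) to inspect a node's view of its peers and remove a stale row:

from cassandra.cluster import Cluster

# Connect to one node directly and look at its view of the cluster.
cluster = Cluster(["10.0.0.1"])
session = cluster.connect("system")

for row in session.execute("SELECT peer, rpc_address FROM peers"):
    print(row.peer, row.rpc_address)

# Delete a stale entry (placeholder IP), then restart this node
# before moving on to the next one.
session.execute("DELETE FROM peers WHERE peer = %s", ("10.0.0.99",))
cluster.shutdown()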
How does one create the first user in a cassandra database?
I tried:
CREATE USER username WITH PASSWORD "";
and it says:
Bad Request: Only superusers are allowed to perform CREATE USER queries
But I have never created a user before this attempt, so how do you create the first user in a cassandra database?
This seems a little strange because it's a chicken-and-egg problem, but people use Cassandra, so I am sure there must be a solution somewhere.
Once you have enabled Authentication and Authorization, you can log-in (to your local Cassandra instance) as the default Cassandra admin user like this:
./cqlsh localhost -u cassandra -p cassandra
If you are running Cassandra on a Windows Server, I believe you need to invoke it with Python:
python cqlsh localhost -u cassandra -p cassandra
Once you get in, your first task should be to create another super user account.
CREATE USER dba WITH PASSWORD 'bacon' SUPERUSER;
Next, it is a really good idea to set the current Cassandra super user's password to something else...preferably something long and incomprehensible. With your new super user, you shouldn't need the default Cassandra account again.
ALTER USER cassandra WITH PASSWORD 'dfsso67347mething54747long67a7ndincom4574prehensi562ble';
For more information, check out this DataStax article: A Quick Tour of Internal Authentication and Authorization Security in DataStax Enterprise and Apache Cassandra
Change
authenticator: AllowAllAuthenticator
To
authenticator: PasswordAuthenticator
in the cassandra.yaml configuration file and restart Cassandra.
This will create a superuser cassandra for you upon restart. Make sure you have Python 2.7, Thrift 0.9.1, Cassandra (DataStax Community Edition 2.0.9), etc. installed. Now when you log in to Cassandra, it will let you enter as the superuser. You can then create a new superuser and change the existing superuser's password as well.
python cqlsh localhost -u cassandra -p cassandra
Connected to Test Cluster at localhost:9160.
[cqlsh 4.1.1 | Cassandra 2.0.9 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Use HELP for help.
cqlsh> create user abc with password 'xyz' superuser;
cqlsh> alter user cassandra with password 'gaurav';
cqlsh> exit
To start using authentication, the default superuser username/password pair is cassandra/cassandra. This should fix the chicken-and-egg problem.
Source:
http://www.datastax.com/docs/datastax_enterprise3.0/security/native_authentication
Re: "Once you have enabled Authentication and Authorization" (from the Mar 6 at 14:41 comment by BryceAtNetwork23)
First, is changing authorization required in order to set up authentication? I'm guessing not.
Second, setting this up is not exactly trivial if you have a data-center-style replication setup. I set up authentication using the following steps:
In conf/cassandra.yaml, changed authenticator from AllowAllAuthenticator to PasswordAuthenticator for all nodes
Rebooted all nodes
Changed the default 'cassandra' password as described above and added other superusers
Altered the system_auth keyspace to be redundant (as per instructions in the cassandra.yaml file) by running: ALTER KEYSPACE system_auth WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'MY_DATACENTER_NAME': N};
I set N to the number of nodes in my datacenter (i.e., fully redundant)
Ran bin/nodetool repair on each node serially
Does this sound reasonable to people who know what they're doing?