GlusterFS Geo Replication: gsync peer_gsec_create command not found - glusterfs

My situation right now:
I have 4 servers, all of them CentOS 8 Minimal based.
I have created a volume named gv0 and replicated it to 2 other servers (3 nodes in total: GFS-1 | GFS-2 | GFS-3). It works fine; I can store/read files from another client node.
I want to set up geo-replication for gv0 from the GFS-1 node to another node named GFS-4, which is on a different LAN.
I saw this tutorial and followed it until executing this command on the GFS-1 node:
gluster system:: execute gsec_create
It gives me an error saying: gsync peer_gsec_create command not found.
What can I do about this? I haven't found any solution to this on Google, please help.

Please try yum install glusterfs-geo-replication; the peer_gsec_create helper that gsec_create looks for should ship with that package.
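In case it helps, here is a rough sequence of commands on GFS-1 once the package is in place, assuming the passwordless SSH setup from the tutorial is already done; the slave volume name gv0-slave is only an example:

# install the missing geo-replication tooling
yum install -y glusterfs-geo-replication
# re-run the pem key generation that failed before
gluster system:: execute gsec_create
# create and start the session towards GFS-4 (slave volume name is illustrative)
gluster volume geo-replication gv0 GFS-4::gv0-slave create push-pem
gluster volume geo-replication gv0 GFS-4::gv0-slave start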
Thanks

Related

Can't connect to database from snapshot. Ubuntu 18.04 postgres 9.5

I'm trying to connect to a database restored from a disk snapshot. I have a new VM with Ubuntu 18.04. I installed postgres, postgres-client and postgres-contrib from *.deb files. Then I mounted the managed disk with the database into the machine and tried to connect, with no success.
The Postgres server runs fine: cluster ok, status ok, pg_isready ok. That is, until I change data_directory in postgresql.conf from the default to the mounted path, /snapshot_data_path/. After that the cluster is down and I cannot use psql as the postgres user (an error about .s.PGSQL missing; the socket file disappears from the default directory).
chown -R on /snapshot_data_path is done and looks ok.
I think my database runs when I start it manually with:
/usr/lib/postgresql/9.5/bin/postgres -d 3 -D /database/postgresql/9.5/main -c config_file=/etc/postgresql/9.5/main/postgresql.conf with that new path. EDIT: my mistake, I can connect but there is no data inside.
I have searched the internet for similar problems with no luck.
Is it possible to take a database from one VM and connect it to another, fresh VM? Or must I use pg_dump to achieve that?
Here is the official documentation on this topic: https://www.postgresql.org/docs/9.0/backup-file.html
"The database server must be shut down in order to get a usable backup."
Did you shut down the server while the disk snapshot was taken?
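If a file-system-level copy is what you want, a rough sequence could look like the one below; it assumes both VMs run the same major version (9.5), uses the Ubuntu/Debian cluster tools, and reuses the /snapshot_data_path from the question:

# on the source VM: stop the cluster so the files on disk are consistent
sudo pg_ctlcluster 9.5 main stop
# ... take the disk snapshot of the data volume here, then start it again ...
sudo pg_ctlcluster 9.5 main start

# on the new VM: stop the fresh cluster and point it at the mounted copy
sudo pg_ctlcluster 9.5 main stop
sudo chown -R postgres:postgres /snapshot_data_path
# in /etc/postgresql/9.5/main/postgresql.conf set:
#   data_directory = '/snapshot_data_path'
sudo pg_ctlcluster 9.5 main start
pg_isready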

Cannot reinstall Cassandra - "Node with address <IP> already exists" + "Could not find tokens for <IP> to replace"

I am trying to re-install Cassandra on a machine that had it previously installed. The service fails to start with a message of:
A node with address /192.168.11.16 already exists, cancelling join. Use cassandra.replace_address if you want to replace this node.
I followed up by adding the following:
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=192.168.11.16"
to the end of my
cassandra-env.sh
and the service continues to fail to start with the following message:
Could not find tokens for /192.168.11.16 to replace
I have even tried starting it with 127.0.0.1 and localhost, and using nodetool removenode and nodetool assassinate 192.168.11.16, but nothing seems to clear out whatever is making it persist. Any guidance on how to get it up and running properly on this machine again is greatly appreciated.
TIA
JH
If your node has already joined the cluster and is no longer running, then you should follow the node replacement procedure described here:
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsReplaceNode.html
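If instead you are just rebuilding this machine and do not need its old data, the "already exists" message is often cleared by wiping the state left over from the previous install before the first start; this is only a sketch, assuming the packaged install's default paths:

sudo service cassandra stop
# remove the on-disk state from the previous install (destroys local data)
sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*
sudo service cassandra start

# if the rest of the cluster still lists the old node, clear it from a live node
nodetool status            # note the Host ID of the down/old entry
nodetool removenode <Host ID>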

Unable to run cqlsh(connection refused)

I'm getting a connection error, "unable to connect to any server", when I run the ./cqlsh command from the bin directory of my node.
I'm using an edited yaml file containing only the following (all other values present in the default yaml have been omitted):
cluster name, num tokens, partitioner, data file directories, commitlog directory, commitlog sync, commitlog sync period, saved caches directory, seed provider info, listen address and endpoint snitch.
Is this error because I've not included some important parameter in the yaml, like rpc address? Please help.
OS: RHEL 6.9
Cassandra: 3.0.14
The cassandra.yaml file can have modified values, but you should not delete rows and build your own yaml file from scratch. And yes, rpc_address is needed in the yaml file.
When writing the directories, such as data_file_directories, you should follow the same indentation as:
data_file_directories:
    - /path/to/access
Cassandra is very strict about the indentation in its yaml file. I once faced an issue due to wrong indentation in data_file_directories.
Finally, run ./cqlsh, providing the ip_address if it is a remote server.
Check nodetool status and confirm that the node is up and normal.
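For illustration only, when those entries are kept in the stock cassandra.yaml of a single test node they usually end up looking roughly like this (the addresses are placeholders):

cluster_name: 'Test Cluster'
listen_address: 192.168.1.10
rpc_address: 192.168.1.10
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.1.10"
data_file_directories:
    - /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
saved_caches_directory: /var/lib/cassandra/saved_caches
endpoint_snitch: SimpleSnitch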
Check the following:
Cassandra is running: nodetool status / ps -elf | grep cassa
Port 9042 (default for CQL) is not used by something else: netstat -an | grep 9042
Try running cqlsh `hostname -i`
Good luck.

Setting up Cassandra on Cloud9 IDE

I've followed these instructions to install Cassandra: http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installDeb_t.html
When I run $ cqlsh, the terminal replies with:
Connection error: Could not connect to localhost:9160
I read that the issue might be with the configuration file cassandra.yaml.
However, it turned out I can't access it. My etc/cassandra folder is empty.
How do I access cassandra.yaml?
Where is Cassandra stored in my project?
Is there a way to check if Cassandra is actually set up in the project?
The image you have attached is showing the ~/.cassandra directory off of your home dir. That's not the same as /etc/cassandra. You should be able to confirm this with the following command:
$ ls -al /etc/cassandra/cassandra.yaml
-rw-r--r-- 1 cassandra cassandra 43985 Mar 11 12:46 /etc/cassandra/cassandra.yaml
To verify if Cassandra is even running, this should work for you if you have successfully completed the packaged install:
$ sudo service cassandra status
Otherwise, simply running this should work, too:
$ ps -ef | grep cassandra
When you set up Cassandra, you'll want to set listen_address and rpc_address to the machine's hostname or IP. They're set to localhost by default, so if Cassandra is running, cqlsh should connect to it automatically.
My guess is that Cassandra is not starting for you. Check the system.log file, which (for the packaged install) is stored in /var/log/cassandra:
$ cat /var/log/cassandra/system.log
Check out that file, and you might find some clues as to what is happening here.
Also, did you really install Cassandra 2.0? That version has been deprecated, so for a new install you shouldn't go any lower than Cassandra 2.1.

Could not connect to cassandra with cqlsh

I want to connect to Cassandra but I get this error:
$ bin/cqlsh
Connection error: ('Unable to connect to any servers', {'192.168.1.200': error(10061, "Tried connecting to [('192.168.1.200', 9042)]. Last error: No connection could be made because the target machine actively refused it")})
Pretty simple.
The machine is actively refusing the connection because your system does not have Cassandra running on it. Follow these steps to get rid of this trouble:
Install Cassandra from DataStax (DataStax-DDC; Cassandra version 3).
Go to ~\installation\path\DataStax-DDC\apache-cassandra\bin.
Open up cmd there. (Use Alt+F+P to open it if you are on Windows 8 or later.)
Type cassandra -f. This will generate a lot of output in the window, and the last line should be something like INFO 11:32:31 Created default superuser role 'cassandra'.
Now open another cmd window in the same folder.
Type cqlsh.
This should give you a prompt, without any error.
I also discovered that this error doesn't pop up if I use Cassandra v2.x, found here: Archived versions of Cassandra. I don't know why :( (If you find out, please comment.)
So, if the above steps do not work, you can always go back to Cassandra v2.x.
Cheers.
Check whether you have started the Cassandra server, then provide the host and port as arguments:
$ bin/cqlsh 127.0.0.1 9042
I ran into the same problem. This worked for me.
Go to any directory, for example E:\ (it doesn't have to be on the same disk as the Cassandra installation).
Create the following directories:
E:\cassandra\storage\commitlogs
E:\cassandra\storage\data
E:\cassandra\storage\savedcaches
Then go to your Cassandra installation's conf path. In my case:
D:\DataStax-DDC\apache-cassandra\conf
Open cassandra.yaml. Edit the lines containing data_file_directories, commitlog_directory and saved_caches_directory to look like the code below (change the paths according to where you created the folders):
data_file_directories:
- E:\cassandra\storage\data
commitlog_directory: E:\cassandra\storage\commitlogs
saved_caches_directory: E:\cassandra\storage\savedcaches
Then open a cmd window (I did it as administrator, but didn't check whether that is necessary) in your Cassandra installation's bin path. In my case:
D:\DataStax-DDC\apache-cassandra\bin
Run cassandra -f.
Lots of stuff will be logged to your screen.
You should now be able to run cqlsh and everything else without problems.
Edit: The operating system was Windows 10 64-bit.
Edit 2: If it stops working after a while, check whether the service is still running using nodetool status. If it isn't, follow this instruction.
I also faced the same problem, on a 32-bit Windows 7 machine.
Check that you have Java installed correctly and the JAVA_HOME variable set.
Once you have checked the Java installation and set JAVA_HOME, uninstall Cassandra and install it again.
Hopefully this solves the problem. Mine was solved after applying the above two steps.
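To check those two points quickly in a cmd window, something like the following works; the JDK/JRE path below is only an example:

java -version
echo %JAVA_HOME%
rem if JAVA_HOME is empty, point it at your Java install directory, e.g.:
setx JAVA_HOME "C:\Program Files\Java\jre1.8.0_241"
rem setx only affects new cmd windows, so open a fresh one afterwards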
You need to provide the host, user and password for the cqlsh connection. The default cassandra user is cassandra and the password is cassandra.
$ bin/cqlsh <host> -u cassandra -p cassandra
I also had the same problem. I tried many methods found on Google and YouTube, but none of them worked in my case. Finally, I applied the following 3 steps and it worked for me:
Create a folder without any spaces in its path on C or D, whichever is your system drive, e.g. C:\cassandra
Install Cassandra in this folder instead of installing it in "Program Files". After installation it will look like this: C:\cassandra\apache-cassandra-3.11.6
Copy the installed Python 2.7 into the bin folder, i.e. C:\cassandra\apache-cassandra-3.11.6\bin
Now your setup is ready to work.
There is no special method to connect cqlsh; it is as simple as:
$ bin/cqlsh 127.0.0.1 9042 (with 127.0.0.1 replaced by the host IP), or $ bin/cqlsh 127.0.0.1 9160 for older versions of Cassandra
Don't forget to check port connectivity if you are connecting cqlsh to a remote host. You can also use a username/password if you have enabled authentication; by default it is disabled.
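For that port check, from the client machine something along these lines is enough; the IP is the one from the question above, and 9042 is the default native protocol port (9160 only matters for the old Thrift interface):

telnet 192.168.1.200 9042

or, from PowerShell:

Test-NetConnection 192.168.1.200 -Port 9042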
