I want to take a backup of a keyspace in Cassandra using a command.
Use the nodetool command. Something like:
nodetool -h localhost -p 7199 snapshot mykeyspace
Take a look at the documentation here:
http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_backup_restore_c.html
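If you want to give the snapshot a name that is easy to find and clear later, nodetool snapshot also accepts a -t tag. A minimal sketch (the tag name "mybackup" is just an illustration; the host and port assume a local node with default JMX settings):
nodetool -h localhost -p 7199 snapshot -t mybackup mykeyspace
The snapshot is written as hard links under each table's snapshots directory on that node, and in later versions you can list it with nodetool listsnapshots.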
Hi all, I am new to Cassandra and got an assignment where I need to list all the available keyspaces in Cassandra and save them to a .txt file.
I have tried every command I could find and searched many sites, but I am still unable to succeed.
I have tried the commands below in order to save the available keyspaces to a .txt file.
cqlsh -e 'DESCRIBE KEYSPACE firstkeyspace' > test.txt;
cqlsh -e "DESCRIBE KEYSPACE firstkeyspace" > pathtosomekeyspace.txt
cqlsh -e "DESC KEYSPACE firstkeyspace" > firstkeyspace_schema.txt;
cqlsh -e "DESC KEYSPACES" > firstkeyspace_schema.txt
I am getting this error and am unable to fix it:
SyntaxException: line 1:0 no viable alternative at input 'cqlsh' ([cqlsh]...)
I have also tried single quotes, but it is still not working.
I request you all to help me solve this problem.
Thanks in advance.
This error indicates that you're running the commands within cqlsh itself:
SyntaxException: line 1:0 no viable alternative at input 'cqlsh' ([cqlsh]...)
For example:
cqlsh> cqlsh -e "DESCRIBE KEYSPACE ks" > ks.txt ;
SyntaxException: line 1:0 no viable alternative at input 'cqlsh' ([cqlsh]...)
You need to exit out of cqlsh and run the commands at the Linux command line. For example:
$ cqlsh -e "DESCRIBE KEYSPACES" > keyspaces.txt
Don't confuse running commands in cqlsh with running them in the Linux shell: cqlsh -e is a shell command, while DESCRIBE KEYSPACES is CQL. Cheers!
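If the assignment needs one schema file per keyspace, a rough sketch along those lines (assuming cqlsh is on your PATH and connects to a local node; the whitespace handling of the DESCRIBE KEYSPACES output is my assumption):
# Dump the keyspace names, one per line, then dump each schema to its own file.
cqlsh -e "DESCRIBE KEYSPACES" | tr -s ' ' '\n' | sed '/^$/d' > keyspaces.txt
while read ks; do
    cqlsh -e "DESCRIBE KEYSPACE $ks" > "${ks}_schema.txt"
done < keyspaces.txt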
So I see that error when I try to run cqlsh from within cqlsh.
aploetz@cqlsh> cqlsh -u aploetz -p xxxxxxxx -e 'DESCRIBE KEYSPACE stackoverflow' ;
SyntaxException: line 1:0 no viable alternative at input 'cqlsh' ([cqlsh]...)
That's not going to work. Exit out, and run it from your command line, instead.
aploetz@cqlsh> exit
% bin/cqlsh -u aploetz -p xxxxxxxx -e 'DESCRIBE KEYSPACE stackoverflow' > stackoverflow.txt
% head -n 5 stackoverflow.txt
CREATE KEYSPACE stackoverflow WITH replication = {'class': 'NetworkTopologyStrategy', 'SnakesAndArrows': '1'} AND durable_writes = true;
CREATE TABLE stackoverflow.customer_info_by_date (
billing_due_date date,
If you're referring to a HackerRank prompt, or even otherwise, here is what I did to solve it!
HackerRank gave me the example of trying:
cqlsh -e "command" > filename
However, this didn't work for me, just as it didn't for you. Instead, do:
COPY system_schema.keyspaces TO 'keyspace.txt';
Here, system_schema.keyspaces is a built-in system table that is the same on every cluster and holds metadata for all keyspaces (rather than being one of my named keyspaces).
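If all you need in the .txt file is the list of keyspace names, a plain query works too; a minimal sketch, assuming Cassandra 3.0+ (on 2.x the table is system.schema_keyspaces instead):
cqlsh -e "SELECT keyspace_name FROM system_schema.keyspaces" > keyspace.txt
Note that COPY ... TO writes the rows out as CSV, so the file produced by the COPY command above contains the full rows of that table, not just the names.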
I have a cluster on which I am considering enabling incremental repair. If anything goes wrong, I'd like to be able to disable incremental repair on every node. How do I do that?
Turn the node off and use sstablerepairedset to clear the repaired-at time for each SSTable so that they will all be candidates for future compactions again.
find '/path/cassandra/data/keyspace/table/' -iname "*Data.db*" > sstables.txt
sudo -u cassandra sstablerepairedset --is-unrepaired -f sstables.txt
Then just go back to running repair without -inc, or in later versions use the -full flag.
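To check that the SSTables really were marked unrepaired before you bring the node back, a rough sketch using sstablemetadata (the exact output label may differ slightly between versions; "Repaired at: 0" means the SSTable is unrepaired):
for f in $(cat sstables.txt); do
    sstablemetadata "$f" | grep "Repaired at"
done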
We run repair -pr on every DSC 2.1.15 node within gc_grace_seconds, like this:
nodetool -h localhost repair -pr -par mykeyspc
But in the log it says full=true:
[2017-02-12 00:00:01,683] Starting repair command #11, repairing 256 ranges for keyspace mykeyspc (parallelism=PARALLEL, full=true)
I would have expected that -pr would not run a full repair, or how should I read this log?
It means full as in "not incremental". You can think of it as fully repairing the data in those ranges, not just the unrepaired data. It is confusing argument naming. The -pr option means it is only repairing the primary ranges, though, so you still need to run it on each node.
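For example (a sketch against 2.1, where full repair is the default and incremental has to be requested explicitly; in 2.2+ the defaults change and -full asks for a full repair):
# Full repair of this node's primary ranges only (run the same command on every node):
nodetool -h localhost repair -pr -par mykeyspc
# Incremental repair of the primary ranges instead:
nodetool -h localhost repair -pr -par -inc mykeyspc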
When I run nodetool clearsnapshot I get the normal "Requested clearing snapshot(s)" message, but the snapshot is never removed. What can I do to troubleshoot why this is occurring? Is it acceptable for me to just manually remove the snapshot directories from the tablespace directories as a workaround for this?
nodetool clearsnapshot 1472489985541
Requested clearing snapshot(s) for [1472489985541]
nodetool listsnapshots | awk '{print $1}' | grep ^1 | sort -u
1472489985541
1473165734236
1473690660090
1474296554367
Is it acceptable for me to just manually remove the snapshot directories from the tablespace directories as a workaround for this?
Yes, you can always safely remove the snapshot directories manually. They are just hard links to the actual SSTables.
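If you do remove them by hand, a rough sketch, assuming the default data directory layout (<data_dir>/<keyspace>/<table-id>/snapshots/<snapshot_name>/):
# List the directories for one snapshot tag first, review them, then re-run with the delete:
find /var/lib/cassandra/data -type d -path '*/snapshots/1472489985541'
find /var/lib/cassandra/data -type d -path '*/snapshots/1472489985541' -exec rm -rf {} +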
In order to delete a snapshot from all keyspaces using the snapshot name, you must specify the -t flag in your clearsnapshot command.
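For example, to clear the oldest snapshot shown above by name on a recent Cassandra version:
nodetool clearsnapshot -t 1472489985541
If I recall correctly, a bare positional argument is treated as a keyspace name rather than a snapshot name, which would explain why nothing gets removed in your case.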
I am using JConsole to monitor Cassandra. I can get values such as how much load each keyspace has.
I want to find out the disk space usage for each node in a cluster remotely.
Is there any way to do so?
A shell script can do the trick:
for i in node1_ip node2_ip ... nodeN_ip
do
    ssh user@$i "du -sh /var/lib/cassandra/data" >> /tmp/disk_usage.txt
done
Replace /var/lib/cassandra/data if your data folder is located somewhere else.
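If you only need Cassandra's own view of the data size per node (rather than OS-level disk usage), nodetool status run on any single node already reports a Load column for every node in the cluster, with no SSH required. A rough sketch; the awk field positions assume the default tabular output and may need adjusting for your version:
nodetool status | awk '/^[UD][NLJM] / {print $2, $3, $4}' > /tmp/disk_usage.txt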