I have Apache Cassandra on a Linux OS.
What is the best solution for backing up data from one environment to another without losing data?
Medusa is a very good open-source tool for backing up Cassandra.
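For example, with Medusa installed and configured on each node, a backup and a cluster-wide restore look roughly like this (the backup name is just an illustration; command names follow the cassandra-medusa project's documentation):

    # Take a backup on this node (run on every node, e.g. in parallel via pssh)
    medusa backup --backup-name=pre_migration_2024

    # List the backups Medusa knows about
    medusa list-backups

    # Restore the whole cluster from that backup
    medusa restore-cluster --backup-name=pre_migration_2024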
I am not sure if my Windows-based Cassandra installation will work as-is on Linux-based Cassandra nodes.
My data resides in a Cassandra DB on Windows, and I plan to move it to a Linux server in order to use Elassandra.
Can the same data files be copied from the Windows OS to the Linux OS, into the same Cassandra directories?
Since the two use different file systems, I have doubts whether that will ever work.
If not, what is the workaround to migrate all the data?
The issue with the files has more to do with the version of Cassandra than with the OS. Cassandra's implementation in Java makes the underlying OS somewhat (albeit not completely) irrelevant.
Each version of Cassandra has a specific format for writing its SSTable files. As long as the version of Cassandra is the same between each server, copying the files should work.
Otherwise, if the Windows and Linux servers can see each other on the network, the easiest way to migrate would be to join the Linux server to the "cluster" on Windows. Just give the Linux server the IP of the Windows machine as its seed, set the cluster_name to be the same, and it should join. Then adjust the keyspace replication and run a repair.
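As a sketch, the relevant settings on the Linux node and the follow-up commands could look like this (the IP address and keyspace name are placeholders):

    # conf/cassandra.yaml on the Linux node:
    # cluster_name: 'Test Cluster'        # must match the Windows node
    # seed_provider:
    #   - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    #     parameters:
    #       - seeds: "10.0.0.5"           # IP of the Windows machine

    # Once joined, raise the replication factor so the new node owns a copy:
    cqlsh -e "ALTER KEYSPACE my_keyspace WITH replication =
      {'class': 'SimpleStrategy', 'replication_factor': 2};"

    # Stream the data onto the Linux node:
    nodetool repair my_keyspace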
You should not repair, but rather stream data from your existing DC with nodetool rebuild -- [source_dc_name], as sketched below:
1. Start all nodes in your new DC with auto_bootstrap: false in conf/cassandra.yaml.
2. Run nodetool rebuild on these nodes.
3. Remove auto_bootstrap: false from conf/cassandra.yaml.
If you start Elassandra in your new DC, Elasticsearch indices will be rebuilt while streaming or repairing, so have fun with it!
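A minimal sketch of that sequence on each node in the new datacenter, assuming the existing datacenter is named DC1:

    # 1. Before the first start, set in conf/cassandra.yaml:
    #      auto_bootstrap: false
    sudo service cassandra start

    # 2. Stream the data from the existing datacenter:
    nodetool rebuild -- DC1

    # 3. Afterwards, remove auto_bootstrap: false from conf/cassandra.yaml
    #    so that future restarts behave normally.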
I have installed the Cassandra CQL client on Windows 10, but I want to take a Cassandra node's backup file from CData ADO.NET code, to be stored in a specific directory.
I need help with the code.
There is no way to take a node's backup from the Cassandra ADO.NET drivers. You can achieve this scenario with Cassandra snapshots for node backups.
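For example, a snapshot can be taken with nodetool and the resulting files copied to a directory of your choice (the names and paths below are placeholders); if it must happen from .NET, the command could be invoked via Process.Start:

    # Take a named snapshot of one keyspace on this node
    nodetool snapshot -t my_backup my_keyspace

    # Snapshot files land under each table's directory, for example:
    #   /var/lib/cassandra/data/my_keyspace/<table>/snapshots/my_backup/
    # Copy them to the target directory:
    cp -r /var/lib/cassandra/data/my_keyspace/*/snapshots/my_backup /backups/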
I am new to big data and Spark (PySpark).
Recently I set up a Spark cluster and wanted to use the Cassandra File System (CFS) on it to help upload files.
Can anyone tell me how to set it up and briefly introduce how to use CFS (e.g., how to upload files, and from where)?
By the way, I don't even know how to use HDFS (I downloaded the pre-built spark-bin-hadoop, but I can't find Hadoop on my system).
Thanks in advance!
CFS only exists in DataStax Enterprise and isn't appropriate for most distributed-file applications. It is primarily focused as a substitute for HDFS for map/reduce jobs and for small, temporary, but distributed files.
To use it, just use the cfs:// URI and make sure you launch your application with dse spark-submit.
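A rough sketch of what that looks like in practice (file names and paths are illustrative), using DSE's Hadoop-compatible file commands to upload data and then referencing it via the cfs:// URI:

    # Upload a local file into CFS using DSE's Hadoop-compatible commands
    dse hadoop fs -mkdir cfs:///data
    dse hadoop fs -put input.csv cfs:///data/input.csv
    dse hadoop fs -ls cfs:///data

    # Launch the Spark job through DSE so it can resolve cfs:// paths
    dse spark-submit my_job.py cfs:///data/input.csv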
I am trying to back up the whole cluster consistently. What are the different ways to back up and restore a Cassandra cluster?
If you are using the DataStax Enterprise version, then the easiest way is to perform the backups and restore using OpsCenter.
If you are using the DataStax Community or open-sourced version of Cassandra, then use nodetool snapshot to create backups of tables and/or keyspaces.
Please bear in mind that SSTables are immutable, i.e. they never change once they are written to disk. So unlike RDBMS data files, SSTables are not updated.
To perform a snapshot cluster-wide, use SSH tools such as pssh to perform parallel snapshots on all nodes.
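For instance, with a file listing every node's hostname (hosts.txt here is an assumption), a cluster-wide snapshot can be taken in parallel:

    # Run the same snapshot command on all nodes at once
    pssh -h hosts.txt -i "nodetool snapshot -t nightly_backup my_keyspace"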
More information on the snapshot utility is available here.
There are several ways to restore from snapshots. One way is to re-load the data using the sstableloader tool, where the data is read back into the cluster. Another way is to copy the SSTable directory from the snapshot and run nodetool refresh. Finally, you can replace the existing data with the snapshot and restart the node.
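As an illustration of the nodetool refresh approach (the keyspace, table, and snapshot names are placeholders):

    # Copy the snapshot's SSTables back into the live table directory
    cp /var/lib/cassandra/data/my_keyspace/my_table-*/snapshots/my_backup/* \
       /var/lib/cassandra/data/my_keyspace/my_table-*/

    # Make Cassandra pick up the newly placed SSTables without a restart
    nodetool refresh my_keyspace my_table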
More information on backups and restores are available here.
It's a backup/restore question. We have a single node running Cassandra. We have also rsync'ed the /var/lib/cassandra folder to another (backup) server.
In case of emergency, we can quickly boot a new server and transfer the files there. But the question is: will it work then? Let's assume we have the same Cassandra version on both the old and the new server, and the same OS version. Is it enough to simply transfer the whole /var/lib/cassandra folder? As you understand, backups are a critical thing, so I want to be sure everything will be OK.
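For reference, a minimal sketch of the kind of rsync job described above (the backup host name and paths are illustrative); running nodetool flush first writes out the memtables so the copied directory is more complete:

    # Flush memtables to SSTables before copying
    nodetool flush

    # Mirror the data directory to the backup server
    rsync -a --delete /var/lib/cassandra/ backup-server:/backups/cassandra/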
(We are currently using the dsc2.0 package from Ubuntu's repos.)
Yes, I know running a 'normal' cluster with 2-3 nodes would be a better choice; both performance and reliability would increase. But for now we have what we have - it's a single node. And for various reasons we will not switch to a multi-node cluster right now.
Thanks!