Hi everybody,
I'm running Cassandra 2.2.8 and have configured commitlog archiving to run automatically.
My commitlog_archiving.properties contains:
archive_command=/bin/cp %path /data1/backup/%name
But I noticed it only ever copies segments that have already rotated, never the commitlog that is currently active. For instance, CommitLog-5-1533697321883.log is the active file right now; once it rotates to CommitLog-5-1533697321884.log, the old CommitLog-5-1533697321883.log gets archived. All writes are now going to CommitLog-5-1533697321884.log, which is not backed up at all and would be lost in a disaster recovery.
My question is: is this the designed behaviour? What can I do to improve the situation? Or has this improved in Cassandra 3?
Yes, this is the designed behaviour. The currently active commit log segment is incomplete by definition; that's why you only get access to archived (rotated) commit logs. (AFAIK, this is the same for most databases.)
If your data is critical, you may want to consider tuning your consistency levels as well.
Related
Since data in Cassandra is only physically removed during compaction, is it possible to access recently deleted data in any way? I'm looking for something similar to Oracle's Flashback feature (AS OF TIMESTAMP).
Also, I can see pieces of the deleted data in the relevant commit log file, but it is obviously not human-readable. Is it possible to convert this file to a more readable format?
You will want to execute a restore from your commitlog.
The safest approach is to copy the commitlog to a new cluster (with the same schema) and restore following the instructions (comments) in the commitlog_archiving.properties file. In your case, you will want to set restore_point_in_time to a time between your insert and your delete.
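For reference, a minimal sketch of the restore side of commitlog_archiving.properties (the backup path and timestamp below are illustrative, not from your setup):

# Illustrative values; match restore_directories to wherever your archive_command copies segments.
restore_command=/bin/cp -f %from %to
restore_directories=/data1/backup
restore_point_in_time=2018:08:08 12:00:00

Replay stops at the first client-supplied timestamp greater than restore_point_in_time, so pick a time after the insert and before the delete.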
I started using Cassandra 3.7 and I always have problems with the commitlog. When the machine goes down unexpectedly, for example due to a power outage, the Cassandra service doesn't restart. I try to start it from the command line, but the error 'could not read commit log descriptor in file' always appears.
I have to delete all the commit logs to get the Cassandra service to start, and the problem is that I lose a lot of data. I tried increasing the replication factor to 3, but the result is the same.
What can I do to decrease the amount of lost data?
PS: I only have one machine to run the Cassandra database on; it is not possible to add more.
I think your only option here is to work around the issue, since it's unlikely there is a guaranteed way to prevent commit log files from being corrupted by a sudden power outage. Having only a single node also makes it harder to recover the data: increasing the replication factor to 3 on a single-node cluster is not going to help.
One thing you can try is to shorten the interval at which the memtables are flushed. When a memtable is flushed, its entries in the commit log are discarded, so flushing more often reduces the amount of data that can be lost. This will, however, not resolve the root issue.
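One per-table knob for this is memtable_flush_period_in_ms, which forces a periodic flush. A minimal sketch in cqlsh (the keyspace and table names are hypothetical):

-- Force a memtable flush at least once a minute for this table (names are examples).
ALTER TABLE my_keyspace.my_table WITH memtable_flush_period_in_ms = 60000;

Note the trade-off: more frequent flushes mean more, smaller sstables and more compaction work.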
Is there a way to back up Cassandra directly to tape (a streaming device)?
Or to perform real snapshots?
The snapshot Cassandra is referring to is not what I would call a snapshot; it is more a consistent copy of the database files to a directory.
Regards, Tomas
First, let's clarify the Cassandra write path, so we know what we need to back up. Writes come in and are first journaled in the commitlog, then written to the memtable, then eventually flushed to sstables. When sstables flush, the relevant commitlog segments are deleted.
If you want a consistent backup of Cassandra, you need at the very least the sstables, but ideally the sstables + commitlog, so you can replay any writes that arrived after the most recent flush.
If you're using tape backup, you can treat the files on disk (both commitlog and sstables) as typical data files - you can tar them, rsync them, or copy them as needed, or point Amanda or whatever tape system you're using at the data file directory + commitlog directory, and it should just work - there's not a lot of magic there, just grab them and back them up. One of the more common backup processes involves tablesnap, which watches for new sstables and uploads them to S3.
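As a concrete sketch, assuming default directory locations (check data_file_directories and commitlog_directory in your cassandra.yaml for the real paths):

# Flush memtables first so the sstables are as current as possible.
nodetool flush
# Archive both directories; point your tape tooling at the resulting file if preferred.
tar czf /backup/cassandra-files.tar.gz /var/lib/cassandra/data /var/lib/cassandra/commitlog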
You can back up Cassandra directly to tape using SPFS.
SPFS is a file system for Spectrum Protect.
Just mount the SPFS file system where you want the backups to land.
For example:
mount -t spfs /backup
And backup Cassandra to this path.
All operations that go via this mountpoint (/backup) are automatically translated into Spectrum Protect client API calls.
On the Spectrum Protect backup server, you can use any type of supported media.
For instance: CD, tape, VTL, SAS, SATA, SSD, cloud, etc.
This way, you can easily back up Cassandra directly to a backup server.
When restarting a Cassandra node, a lot of time is spent replaying the commitlog to achieve consistency. In our application, it is more important to bring the node back up quickly than to achieve consistency, so we have set “durable_writes = false” on all our manually created keyspaces to disable the commitlog (we have not touched the system keyspaces). Nevertheless, when we restart a node, it still spends about one hour replaying the commitlog.
What is left in my commitlog?
Can I in any way investigate the content of the commitlog?
How can the commitlog be turned off (if not durable_writes = false)?
durable_writes is set per keyspace, so if any keyspace still has it enabled, there will still be mutations in the commitlogs to replay on startup. You may want to walk through the output of DESCRIBE SCHEMA.
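On Cassandra 3.x you can check this quickly in cqlsh (on 2.x the equivalent table is system.schema_keyspaces):

-- Lists every keyspace and whether its writes still go through the commitlog.
SELECT keyspace_name, durable_writes FROM system_schema.keyspaces;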
There are some tables (i.e. the system ones) that you want to keep durable, but they shouldn't carry enough volume to impact startup. When starting up, Cassandra logs which keyspaces/tables it is reading, so you can check which ones it is replaying.
One hour is a very long time and has a certain smell to it; there may be something else going on here, and it probably warrants additional investigation. Some ideas: check the logs to make sure it really is the commitlog replay that's taking the time (not rebuilding index summaries or something), and check that there are no old commit logs sticking around that C* doesn't have permission to delete.
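For example, assuming the default log location (the path may differ on your install):

# Replayed commitlog segments are logged at INFO level during startup.
grep -i replaying /var/log/cassandra/system.log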
Do 'nodetool drain' before shutting down the node. This flushes all memtables to sstables, so there is nothing left in the commitlog to replay at the next startup.
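For example, assuming a systemd-managed service:

# Flush memtables and stop accepting writes, then stop the daemon.
nodetool drain
sudo systemctl stop cassandra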
I've got a mostly idle (till now) 4-node Cassandra cluster that's hooked up with OpsCenter. There's a table that's had very few writes in the last week or so (it's a test cluster), running 2.1.0. I happened to ssh in and, out of curiosity, ran du -sh * on the data directory. Here's what I get:
4.2G commitlog
851M data
188K saved_caches
There are 136 files in the commit log directory. I flushed and then drained Cassandra, stopped and started the service, and those files are still there. What's the best way to get rid of them? Most of the content is OpsCenter-related, and I'm inclined to just blow them away since I don't need the test data, but I'm wondering what to do in case this pops up again. Appreciate any tips.
The files in the commit log directory have a fixed size determined by your settings in cassandra.yaml. Segment files are pre-allocated, so you cannot shrink them with flush, drain, or other operations on the cluster; you have to change the configuration if you want to reduce the space they occupy.
Look at the configuration settings commitlog_segment_size_in_mb and commitlog_total_space_in_mb to control the size of each file and the total space occupied by all of them.
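For example, in cassandra.yaml (the values below are illustrative, not recommendations):

# Size of each pre-allocated segment file, in MB.
commitlog_segment_size_in_mb: 32
# Total cap; flushed segments are recycled or removed to stay under it.
commitlog_total_space_in_mb: 1024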