Does your data size reduce after upgrading DataStax Enterprise? - cassandra

We have upgraded from DSE 4.5 to DSE 4.8. After running upgradesstables on 4 nodes of my 10-node cluster, I see the size of my cluster shown in OpsCenter has dropped from 3.2 TB to 2.5 TB. However, there has been no impact in production.
Does the data really get compressed or reduced in size after upgrading SSTables?

Answer posted by markc in a comment:
Compaction might play a part here: you may have had pending compactions prior to the upgrade which have now completed. Snapshots will be under the data directory too, and upgradesstables will upgrade those as well, IIRC. – markc Feb 23 at 12:19
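A quick way to check both possibilities on a node; a minimal sketch, assuming the default data directory and nodetool access (adjust paths to your install):

    # Are compactions still running or queued on this node?
    nodetool compactionstats

    # How much space do snapshots hold under the data directory?
    du -sh /var/lib/cassandra/data/*/*/snapshots 2>/dev/null

    # Reclaim space from snapshots you no longer need
    # (this removes all snapshots on the node; pass -t <tag> to be selective)
    nodetool clearsnapshot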

Related

Skipping "nodetool upgradesstables" for time series ttl expiry data in Cassandra upgrade from 2.1 to 3.11

In a Cassandra 2.1 cluster the SSTable format is ka, and after upgrading to Cassandra 3.11 I see the new SSTables being written in md format. For time-series data that is going to expire within 3 months, can I skip running nodetool upgradesstables?
I validated that reads work fine from the older ka-format SSTables after the upgrade. The reason I want to skip the upgrade is that, from other threads, I know the format conversion is going to take a lot of time, and this data is going to expire in 3 months anyway.
I don't think it's mandatory to run nodetool upgradesstables; Cassandra 3 will be able to work with old SSTables, but you lose a lot of the advantages of Cassandra 3 (for instance, space consumption is significantly reduced). Also, DataStax has a warning in its upgrade documentation:
WARNING: Failure to upgrade SSTables when required results in a significant performance impact and increased disk usage and possible data loss. Upgrading is not complete until the SSTables are upgraded.
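If you do decide to rewrite the SSTables in place, a minimal sketch, assuming the default data path and hypothetical keyspace/table names:

    # SSTable data files still in the old 2.1 format keep 'ka' in their file names
    find /var/lib/cassandra/data -name '*-ka-*-Data.db'

    # Rewrite one table at a time to limit the I/O impact; without -a, SSTables
    # already on the current format are skipped
    nodetool upgradesstables my_keyspace my_table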

removing a node from the cluster and tables in twcs

I have a cluster (tested with both 2.1.14 and 3.0.17) containing a table that uses TWCS (time window compaction). All SSTables are kept in the correct windows just fine until I remove a node from the cluster (in the same DC); at that moment it seems all SSTables are treated as one pool for normal size-tiered compaction, causing SSTables from different time periods to be joined. Seeing as my cluster is 400 nodes spread over 6 datacenters, node removal is quite common.
I did not find any bug report about this. Is this expected behavior? Having all the SSTables handled together causes a major problem space-wise, since it means new and old data end up in the same SSTable, causing the old data to remain on disk much longer.
(On 2.1, TWCS is achieved using a jar from jeffjirsa's GitHub.)
Have you disabled read repair on the TWCS tables? It can inject out-of-sequence timestamps. TWCS itself will do size-tiered compaction, but only within the current window, and only if it falls behind on compaction.
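For reference, a minimal sketch of turning read repair off on such a table; the keyspace and table names (metrics.sensor_data) are hypothetical, and the two options apply to 2.1/3.x:

    # Disable both foreground read repair probabilities on the TWCS table
    cqlsh -e "ALTER TABLE metrics.sensor_data
      WITH read_repair_chance = 0
      AND dclocal_read_repair_chance = 0;"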

Cassandra compaction on a few nodes is running slowly

We want Cassandra flushes to happen less frequently. In the latest version of Apache Cassandra, memtable_cleanup_threshold has been deprecated. Is there still a way to keep more data in the memtable? We also have concurrent_compactors set to 4 and compaction throughput set to 64, but when we run a major compaction it still runs only one compaction.
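A minimal sketch of the knobs involved; the yaml path is an assumption and the values shown are the ones from the question, not recommendations:

    # Memtable sizing is set by explicit limits in cassandra.yaml;
    # larger memtables generally mean fewer, bigger flushes
    grep -E 'memtable_heap_space_in_mb|memtable_offheap_space_in_mb|memtable_flush_writers' \
        /etc/cassandra/cassandra.yaml

    # Compaction throughput (MB/s) can be changed at runtime, and
    # compactionstats shows how many compactions actually run in parallel
    nodetool setcompactionthroughput 64
    nodetool compactionstats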

why does repairing a node take a long time in cassandra

I added a new node to an existing Cassandra cluster and waited for it to join the cluster. After that, I updated the replication factor and repaired each affected node according to "Updating the replication factor". But why does repairing a node take such a long time?
Repair time depends on the amount of data you have. Repairing 100 GB of data usually takes around 1 hour, depending on your instances or servers and the load on your cluster; this is a very vague rule of thumb. If you have large amounts of data, please take into account that it may take hours before the repair is actually finished. It also depends on the Cassandra version that you are using: some versions simply hang the repair process, so please check system.log for more information. If you notice that the repair failed, you might want to consider upgrading Cassandra.
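A minimal sketch of running and watching a repair; the keyspace name and log path are assumptions:

    # Repair only this node's primary ranges so the work is not duplicated
    # when the same command is run on every node
    nodetool repair -pr my_keyspace

    # Watch progress: streaming activity, and repair messages in the log
    nodetool netstats
    grep -i repair /var/log/cassandra/system.log | tail -n 20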

cassandra nodetool repair - what does it really do?

Hi all (cassandra noob here),
I'm trying to understand exactly what is going on with repair, and trying to get to the point where we can run our repairs on a schedule.
I have set up a 4-DC (3 nodes per DC) Cassandra 2.1 cluster, 4 GB RAM (1 GB heap), HDD.
Due to various issues (repair taking too long, crashing nodes with OOM), I decided to start fresh and nuke everything: I deleted /var/lib/cassandra/data and /opt/cassandra/data.
I recreated my keyspaces (no data) and ran nodetool repair -par -inc -local.
I was surprised to see it took ~5 minutes to run; watching the logs, I saw Merkle trees being generated.
I guess my question is: if my keyspaces have no data, what is it generating Merkle trees against?
After running a local-DC repair against each node in that DC, I decided to run a cross-DC repair, again with no data.
This time it took 4+ hours, with no data? This really feels wrong to me; what am I missing?
Thanks
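For reference, a minimal sketch of the run described above and of where the Merkle-tree work shows up; the keyspace name and log path are assumptions:

    # Parallel, incremental, local-DC repair of one keyspace, as in the question
    nodetool repair -par -inc -local my_keyspace

    # Merkle trees are built by 'Validation' tasks, visible while repair runs
    nodetool compactionstats

    # Repair sessions and Merkle-tree requests are also logged per token range
    grep -i merkle /var/log/cassandra/system.log | tail -n 20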
