Optimizing YugabyteDB for tables with frequent deletes

[Question posted by a user on YugabyteDB Community Slack]
Wanted to check if we should do any optimization on the db side for tables that have frequent inserts and deletes, like re-indexing or vacuuming, etc.
Workload:
300,000 row inserts per hour.
Out of these, most of the time 90% will get deleted within the hour, and the remaining rows will be cleaned up at the end of the day.

YugabyteDB uses RocksDB, which is an LSM-tree implementation. Any change, including a delete, is an addition to the memtable.
Unlike PostgreSQL, where changes introduce row versions that must be cleaned up afterwards, YugabyteDB performs this cleanup automatically.
When a memtable reaches a certain size, it is persisted as an SST file, and once the number of SST files reaches a certain amount, a background thread reads the SST files and merges them. Any non-current versions that are old enough (older than 15 minutes by default) are removed because they have expired. This mechanism resembles PostgreSQL vacuuming.
If you do batch DML, and especially deletions on time-series data, partitioning allows you to work with what is logically a single table, whilst heavy operations such as deleting a day of data can be performed by dropping a daily partition instead of deleting row by row, as in the sketch below.
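As an illustration (the table and column names here are hypothetical; YugabyteDB's YSQL follows PostgreSQL's declarative partitioning syntax), daily partitions could look like this:

-- Logically a single table, partitioned by day (YSQL / PostgreSQL syntax)
CREATE TABLE events (
    id         bigint,
    created_at timestamptz NOT NULL,
    payload    text,
    PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);

-- One partition per day
CREATE TABLE events_2023_01_01 PARTITION OF events
    FOR VALUES FROM ('2023-01-01') TO ('2023-01-02');
CREATE TABLE events_2023_01_02 PARTITION OF events
    FOR VALUES FROM ('2023-01-02') TO ('2023-01-03');

-- End-of-day cleanup: drop the whole partition instead of deleting row by row
DROP TABLE events_2023_01_01;

Dropping a partition is a metadata operation, so it avoids generating one delete marker per row in the LSM tree.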

Related

How does YugabyteDB guarantee snapshot consistency during garbage collection?

For example:
There are two items, k1 and k2, with time t1.
Then a read transaction (A) gets a snapshot whose time is t1, and transaction (A) reads k1 at t1 successfully.
At the same time, another transaction (B) writes k2 with time t2 (t2 > t1).
YugabyteDB does garbage collection in some way, so the version of k2 at t1 will be deleted.
If transaction (A) then reads k2 at time t1, it can't find any version of k2 with a timestamp less than or equal to t1.
I am confused about how YugabyteDB maintains a consistent snapshot.
I have searched through YugabyteDB's transaction documentation, but I didn't find anything related to garbage collection.
I have seen some descriptions of Google Spanner's garbage collection, which keeps old versions for one hour. But YugabyteDB uses HLC instead of TrueTime.
Can anyone explain YugabyteDB's garbage collection mechanism? Is it the same as Spanner's?
The general mechanism that YugabyteDB uses for consistency is Hybrid Logical Clocks (HLC). See this presentation: HLC. Using HLC, transactions can pick a version of a row consistent with their transaction time. I do believe this is already known.
Because we use LSM-tree storage, no data is overwritten. An updated row means another entry for the same row, with a different HLC timestamp. This way, when a row is requested, a transaction can pick a row version consistent with its HLC.
Garbage collection, that is, the purging of old, non-current versions of a row, happens during major compaction. Major compaction is the process of merging SST files. If lots of data is changed, major compactions could happen after a short amount of time, therefore we implemented a parameter/gflag --timestamp_history_retention_interval_sec to guarantee a minimum amount of time that a change remains available.
Obviously, if the amount of changes is low, a non-current version could remain available for a long time.
Compaction happens per RocksDB database, which is per tablet of a table or secondary index.
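As a configuration sketch (the value shown is illustrative; the documented default corresponds to the 15 minutes mentioned above, i.e. 900 seconds), the retention window is a flag on the tablet servers:

yb-tserver ... --timestamp_history_retention_interval_sec=1800

A larger value keeps non-current versions readable longer before compactions are allowed to purge them; a smaller value lets compactions remove them sooner.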

Time window compaction strategy on data with TTLed inserts followed by TTLed updates

I am facing a problem with Cassandra compaction on a table that stores event data. These events are generated by sensors and have an associated TTL. By default each event has a TTL of 1 day. A few events have a different TTL, like 7/10/30 days, which is a business requirement. A few events can have a TTL of 5 years if the event needs to be stored. More than 98% of rows have a TTL of 1 day.
Although minor compaction is triggered from time to time, disk usage is constantly increasing. This is because of how the SizeTiered compaction strategy works, i.e. it chooses SSTables of similar size for compaction. This creates a few huge SSTables which aren't compacted for a long time. The presence of a few large SSTables increases the average SSTable size, and compaction runs less frequently. It looks like STCS is not the right choice. In a load-test environment, I added data to the tables and switched to the Leveled compaction strategy. With LCS, disk space was reclaimed up to a certain point and then disk usage was constant. CPU was also lower compared to STCS. However, the Time Window compaction strategy looks more promising, as it works well for time-series TTLed data. I am going to test TWCS with my dataset. Meanwhile, I am trying to find answers to a few questions for which I didn't find an answer, or whatever I found was not clear to me.
In my use case, an event is added to the table with an associated TTL. Then there are 5 more updates on the same event within the next minute. Updates are not made on a single column; instead, the complete row is re-written with a new TTL (which is the same for all columns). This new TTL is likely to be slightly less than the previous TTL. For example, an event is created with a TTL of 86400 seconds. If it is updated after 5 seconds, the new TTL would be 86395. A further update would have a new TTL slightly less than 86395. After 4-5 updates, no update would be made to more than 99% of rows. 1% of rows would be re-written with a TTL of 5 years.
From what I read, TWCS is for data inserted with an immutable TTL. Does this mean I should not use TWCS?
Also, out-of-order writes are not handled well by TWCS. If an event is created at 10 AM on 5th Sep with a 1-day TTL and the same event row is re-written with a TTL of 5 years on 10th or 12th Sep, would that be an out-of-order write? I suppose out of order would be when I am setting the timestamp on data while adding data to the DB, or something that would be caused by read repair.
Any guidance/suggestion will be appreciated!
NOTE: I am using Cassandra 2.2.8, so I'll be building the jar for TWCS and then using it.
TWCS is a great option under certain circumstances. Here are the things to keep in mind:
1) One of the big benefits of TWCS is that merging/reconciliation among SSTables does not occur. The oldest one is simply "lopped" off. Because of that, you don't want to have rows/cells span multiple "buckets/windows".
For example, suppose you insert a single column during one window and then in the next window you insert a different column (i.e. an update of the same row but a different column at a later period of time). Instead of compaction creating a single row with both columns, TWCS would lop one of the columns off (the oldest). Actually, I am not sure TWCS will even allow this to occur, but I was giving you an example of what would happen if it did. In this example, I believe TWCS will disallow the removal of either SSTable until both windows expire. Not 100% sure though. Either way, avoid this scenario.
2) TWCS has similar problems when out-of-order writes occur (overlap). There is a great article by The Last Pickle that explains this:
https://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
Overlap can occur by repair or from an old compaction (i.e. if you were using STCS and then switched to TWCS, some of the sstables may overlap).
If there is overlap between, say, 2 SSTables, you have to wait for both SSTables to completely expire before TWCS can remove either of them, and when it does, both will be removed.
If you avoid both scenarios described above, TWCS is very efficient due to the nature of how it cleans things up - no more merging sstables. Simply remove the oldest window.
When you do set up TWCS, you have to remember that the oldest window gets removed only after the TTLs expire and GC grace passes as well - don't forget to account for that part. Having varying TTL values among rows, as you have described, may delay windows from getting removed. If you want to see what is blocking TWCS from removing an SSTable, or what the SSTables look like, you can use sstableexpiredblockers or the script in the above-mentioned URL (which is essentially sstablemetadata with some fancy scripting).
Hopefully that helps.
-Jim
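As a rough CQL sketch of the setup Jim describes (the table name, window size, TTL, and GC grace values are illustrative; on 2.2.8 with the custom TWCS jar, the class name may need to be the fully qualified class from that jar):

ALTER TABLE sensor_events WITH
    compaction = {'class': 'TimeWindowCompactionStrategy',
                  'compaction_window_unit': 'DAYS',
                  'compaction_window_size': '1'}
    AND default_time_to_live = 86400
    AND gc_grace_seconds = 10800;

To see which SSTable is blocking others from being dropped, the tool mentioned above can be run as sstableexpiredblockers <keyspace> <table>.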

Is there a way to force Cassandra to select rows even if it exceeds the maximum tombstone setting?

I'm using Cassandra 3.11, and I have a table which contains a lot of tombstones. This is normal, as periodically (once per day) I need to update big sets of data. Some data will never change, but there are updates. So it happens once in a while that many tombstones are read during some maintenance periods. One of my select statements will then often return with a failure status; the reason is that the maximum number of tombstones has been reached.
Is there a way to force Cassandra to execute one specific select statement, even though the tombstone maximum is reached? For my maintenance programs that would be very useful. I guess something similar to ALLOW FILTERING...

How do we track the impact expired entries have on a time series table?

We are processing the oldest data as it comes into the time-series table. I am taking care to make sure that the oldest entries expire as soon as they are processed. The expectation is to have all the deletes at the bottom part of the TimeUUID clustering column, so a query will always read a time slot without any deleted entries.
Will this scheme work? Are there any impacts of the expired columns that I should be aware of?
Keeping the TimeUUID as part of the clustering key guarantees the sort order, so the most recent data is provided first.
If Cassandra 3.1 (DSE 5.x) and above:
Now regarding the deletes, avoid manual deletes and use TWCS. Here is how.
Let's say your job processes the data every X minutes, say X = 5 min (hopefully less than 24 hours). Set the compaction to TWCS (Time Window Compaction Strategy), and let's assume a TTL of 24 hours.
WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'HOURS',
    'compaction_window_size': '1'
};
Now there are 24 buckets created in a day, each with one hour of data. These 24 buckets simply correspond to 24 SSTables (after compaction) in your Cassandra data directory. During the 25th hour, the entire 1st bucket/SSTable automatically gets dropped by TTL. Hence, instead of coding deletes, let Cassandra take care of the cleanup. The beauty of TWCS is that the entire data within an SSTable expires together.
Now the reads from your application always go to the most recent bucket, the 24th SSTable in this case. So the reads never have to scan through the tombstones (caused by TTL).
If Cassandra 2.x or DSE 4.x, where TWCS isn't available yet:
A way out until you upgrade to Cassandra 3.1 or above is to use artificial buckets. Say you introduce a time-bucket variable as part of the partition key and keep the bucket value as the date and hour, as sketched below. This way each partition is different, and you can adjust the bucket size to match the job processing interval.
So when you delete, only the processed partition gets deleted and will not get in the way while reading unprocessed ones. So scanning of tombstones can be avoided.
It's an additional effort on the application side to start writing to the correct partition based on the current date/time bucket, but it's worth it in a production scenario to avoid tombstone scans.
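A minimal sketch of such an artificial bucket (all names are made up): the partition key carries a date/hour bucket, so each processing interval lands in its own partition and can be removed with a single partition-level delete:

CREATE TABLE sensor_events_by_bucket (
    bucket     text,       -- e.g. '2016-09-05:10' (date plus hour)
    event_time timeuuid,
    payload    text,
    PRIMARY KEY ((bucket), event_time)
) WITH CLUSTERING ORDER BY (event_time DESC);

-- Delete a whole processed bucket in one statement instead of row-by-row deletes
DELETE FROM sensor_events_by_bucket WHERE bucket = '2016-09-05:10';

Unprocessed reads target other buckets, so they never scan the tombstones left behind by the partition delete.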
You can use TWCS to easily manage expired data, and perform filtering by some timestamp column at query time, to ensure that your query always gets the latest results.
How do you "take care" of the oldest entries' expiry? Cassandra will not show records with an expired TTL, but they will persist in SSTables until the next compaction of that SSTable. If you are deleting the rows yourself, you can't make sure that your query will always read the latest records, since Cassandra is eventually consistent, and theoretically there can be moments when you will read stale data (or many such moments, depending on your consistency settings).

gc_grace_seconds to remove tombstone rows in cassandra

I am using the awesome Cassandra DB (3.7.0) and I have a question about tombstones.
I have a table called raw_data. This table has a default TTL of 1 hour and gets new data every second. Another processor then reads a row and removes it.
It seems like this raw_data table becomes slow at reading and writing after several days of running.
Is this because deleted rows are staying around as tombstones? This table already has a TTL of 1 hour. Should I set gc_grace_seconds to something less than 10 days (the default value) to remove tombstones quickly? (By the way, I am running a single-node DB.)
Thank you in advance.
Deleting your data is one way to get tombstone problems; TTL expiry is the other.
It is pretty normal for a Cassandra cluster to become slower and slower as deletes accumulate, and your cluster will eventually refuse to read data from this table.
Setting gc_grace_seconds to less than the default 10 days is only one part of the equation. The other part is the compaction strategy you use: indeed, in order to remove tombstones, a compaction is needed.
I'd change my mind about my single-node cluster and I'd go with the minimum standard 3 nodes with RF=3. Then I'd design my project around something that doesn't explicitly delete data. If you absolutely need to delete data, make sure that C* runs compaction periodically and removes tombstones (or force C* to run compactions), and make sure to have plenty of IOPS, because compaction is very IO intensive.
In short, tombstones are used by Cassandra to mark data as deleted and to replicate that fact to other nodes so the deleted data doesn't reappear. These tombstones are stored in Cassandra until gc_grace_seconds has passed. Creating more tombstones might slow down your table. As you are using a single-node Cassandra, you don't have to replicate anything to other nodes, hence you can lower gc_grace_seconds to 1 day, which will not cause problems, as shown below. In the future, if you are planning to add new nodes or data centers, change gc_grace_seconds accordingly.
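For the single-node case described here, that is a one-line change (raw_data is the table name from the question):

ALTER TABLE raw_data WITH gc_grace_seconds = 86400;  -- 1 day

Note that tombstones older than this are only actually purged once the SSTables containing them are compacted, as mentioned above.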

Resources