Tombstone warning with SELECT LIMIT 1 - cassandra

CREATE TABLE test (
    ck INT,
    pk INT,
    PRIMARY KEY (ck, pk)
);
for (int i = 1; i < 10000; i++) {
    sessionRW.execute(QueryBuilder.insertInto("test").value("ck", 1).value("pk", i));
}
root#cqlsh:ks> select * from test limit 5;
ck | pk
----+----
1 | 1
1 | 2
1 | 3
1 | 4
1 | 5
(5 rows)
root#cqlsh:ks> delete from test where ck = 1;
root#cqlsh:ks> insert into test(ck,pk) values (1, 0); -- new minimal value
root#cqlsh:ks> select * from test limit 1;
ck | pk
----+-------
1 | 0
(1 rows)
WARN 11:37:39 Read 1 live and 9999 tombstoned cells in ks.test (see tombstone_warn_threshold). 1 columns was reque
Why do I get the tombstone warning when I do a SELECT with LIMIT 1?
The rows are ordered by pk ASC, and the lowest pk value in this table (0) is the first row and is not deleted.
I don't understand why Cassandra keeps scanning my table for other results (hence fetching a lot of tombstones) when the first row matches and I said I only want one row.
I could have understood the warning if I hadn't specified LIMIT, but what's the point of scanning the whole table when the first row matches with LIMIT 1?

This is because of the way Cassandra stores data. The data is stored as a single wide row of columns, even though it looks like multiple rows through CQL. Therefore, in order to get to the last-inserted "row", it needs to read all of the tombstoned columns as well.
Below is an illustration:
     | 1 | 2 | 3 |...|9999| 0 |
-----+---+---+---+---+----+---+
ck=1 | T | T | T | T |  T |   |
As you can see, it is one row under partition key 1. I marked the tombstoned columns (or rows, if you prefer) with "T". Cassandra reads the entire row, and in order to find the first non-tombstoned column, it needs to cycle through all 9999 tombstoned ones.

When you do "select * from test limit 1;", Cassandra has to go to all the nodes and filter the entire table to find the first live row. It needs to stream the tombstones to the coordinator since other nodes may be out of sync and the limit 1 would match a row that had been deleted. You should be able to avoid this by specifying the query such that the tombstones wouldn't matter, such as "select * from test where ck=1 and pk < 1;"

OK, so I think I found the answer: Cassandra does one more lookup after LIMIT 1 (as if you had specified LIMIT 2).
Just insert one more row:
insert into test(ck,pk) values (1, 1);
and now select * from test limit 1; won't trigger the tombstone warning.
However, if you do LIMIT 2, it will trigger the tombstone warning even though we have 2 valid rows that come first in table order.
Why Cassandra does a (limit+1) lookup is the question.

Cassandra find records where list is empty [duplicate]

How do I query in Cassandra for != null columns?
Select * from tableA where id != null;
Select * from tableA where name != null;
Then I want to take these values and insert them into a different table.
I don't think this is possible with Cassandra. First of all, Cassandra CQL doesn't support the use of NOT or not equal to operators in the WHERE clause. Secondly, your WHERE clause can only contain primary key columns, and primary key columns will not allow null values to be inserted. I wasn't sure about secondary indexes though, so I ran this quick test:
create table nullTest (id text PRIMARY KEY, name text);
INSERT INTO nullTest (id,name) VALUES ('1','bob');
INSERT INTO nullTest (id,name) VALUES ('2',null);
I now have a table and two rows (one with null data):
SELECT * FROM nullTest;
id | name
----+------
2 | null
1 | bob
(2 rows)
I then try to create a secondary index on name, which I know contains null values.
CREATE INDEX nullTestIdx ON nullTest(name);
It lets me do it. Now, I'll run a query on that index.
SELECT * FROM nullTest WHERE name=null;
Bad Request: Unsupported null value for indexed column name
And again, this reinforces the premise: you can't query for "not null" if you can't even query for column values that actually are null.
So, I'm thinking this can't be done. Also, if null values are a possibility in your primary key, then you may want to re-evaluate your data model. Again, I know the OP's question is about querying where data is not null. But as I mentioned before, Cassandra CQL doesn't have a NOT or != operator, so that's going to be a problem right there.
Another option is to insert an empty string instead of a null. You would then be able to query on the empty string. But that still doesn't get you past the fundamental design flaw of having a null in a primary key field. Perhaps it could work if you had a composite primary key, and only part of it (the clustering columns) had the possibility of being empty (certainly not part of the partition key). But you'd still be stuck with the problem of not being able to query for rows that are "not empty" (instead of not null).
NOTE: Inserting null values was done here for demonstration purposes only. It is something you should do your best to avoid, as inserting a null column value WILL create a tombstone. Likewise, inserting lots of null values will create lots of tombstones.
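To make the empty-string option above concrete, here is a hedged sketch using the Java driver (the nullTest table and its secondary index are from the test above; the contact point and keyspace name are placeholders):

// Sketch: use '' as a sentinel instead of null so no tombstone is written and
// the secondary index on name can be queried for "empty" values.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class EmptyStringSentinel {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("my_keyspace")) {   // placeholder keyspace
            // Insert an empty string where we would otherwise have written null
            session.execute("INSERT INTO nullTest (id, name) VALUES ('3', '')");
            // The secondary index on name can now find the "empty" rows
            ResultSet rs = session.execute("SELECT id FROM nullTest WHERE name = ''");
            for (Row row : rs) {
                System.out.println("row with empty name: " + row.getString("id"));
            }
        }
    }
}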
1) select * from test;
name | id | address
------------------+----+------------------
bangalore | 3 | ramyam_lab
bangalore | 4 | bangalore_ramyam
bangalore | 5 | jasgdjgkj
prasad | 11 | null
prasad | 12 | null
india | 6 | karnata
india | 7 | karnata
ramyam-bangalore | 3 | jasgdjgkj
ramyam-bangalore | 5 | jasgdjgkj
2) Cassandra doesn't support selecting on null values. It shows null only for our understanding.
3) For handling null values, use placeholder strings like "not-available" or "null"; then we can select the data.

How to scale a range sharded index on a timestamp column in YugabyteDB?

Is there any performance tuning to do for a write-bound workload in YugabyteDB? We thought that by simply adding additional nodes to our YugabyteDB cluster, without further tuning, we would see a noticeable increase in write throughput; however, this is not the case. The schema can be found below.
   Column   |            Type             | Collation | Nullable |           Default            | Storage  | Stats target | Description
------------+-----------------------------+-----------+----------+------------------------------+----------+--------------+-------------
 update_id  | character varying(255)      |           | not null |                              | extended |              |
 node_id    | character varying(255)      |           | not null |                              | extended |              |
 data       | character varying           |           | not null |                              | extended |              |
 created_at | timestamp without time zone |           |          | timezone('utc'::text, now()) | plain    |              |
Indexes:
"test_pkey" PRIMARY KEY, lsm (update_id HASH)
"test_crat" lsm (created_at DESC)
This table has tablets spread across all tservers, with RF=3. created_at is a timestamp that changes all the time. At this point the table holds no more than two days of data, and all new inserts acquire a new timestamp.
In the case of the schema called out above, the test_crat index is limited to 1 tablet because it is range-sharded. Since created_at has only recent values, they will all end up going to 1 shard/tablet even with tablet splitting, meaning that all inserts will go to 1 shard. As explained in this Google Spanner documentation, whose sharding, replication, and transaction architecture YugabyteDB is based on, this is an antipattern for scalability. As mentioned in that documentation:
If you need a global (cross node) timestamp ordered table, and you need to support higher write rates to that table than a single node is capable of, use application-level sharding. Sharding a table means partitioning it into some number N of roughly equal divisions called shards. This is typically done by prefixing the original primary key with an additional ShardId column holding integer values between [0, N). The ShardId for a given write is typically selected either at random, or by hashing a part of the base key. Hashing is often preferred because it can be used to ensure all records of a given type go into the same shard, improving performance of retrieval. Either way, the goal is to ensure that, over time, writes are distributed across all shards equally. This approach sometimes means that reads need to scan all shards to reconstruct the original total ordering of writes.
What that would mean is: to get recent changes, you would have to query each of the shards. Suppose you have 32 shards:
select * from raw3 where shard_id = 0 and created_at > now() - INTERVAL 'xxx';
..
select * from raw3 where shard_id = 31 and created_at > now() - INTERVAL 'xxx';
On the insert, every row could just be given a random value for your shard_id column from 0..31. And your index would change from:
(created_at DESC)
to
(shard_id HASH, created_at DESC)
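As a rough sketch of what that could look like on the application side over JDBC (the raw3 table, shard_id column, and 32-shard count come from the discussion above; the connection handling, the exact column list, and the interval parameter are assumptions made for illustration):

// Sketch: application-level sharding for a timestamp-ordered table.
// Writes pick a random shard_id in [0, 32); reads fan out over all shards
// and merge, since no single shard holds the global time order.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class ShardedWrites {
    static final int SHARDS = 32;

    public static void insert(Connection conn, String updateId, String nodeId, String data) throws Exception {
        int shardId = ThreadLocalRandom.current().nextInt(SHARDS); // spread writes across shards
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO raw3 (shard_id, update_id, node_id, data) VALUES (?, ?, ?, ?)")) {
            ps.setInt(1, shardId);
            ps.setString(2, updateId);
            ps.setString(3, nodeId);
            ps.setString(4, data);
            ps.executeUpdate();                      // created_at filled by its column default
        }
    }

    public static List<String> recentUpdates(Connection conn, String interval) throws Exception {
        List<String> ids = new ArrayList<>();
        for (int shard = 0; shard < SHARDS; shard++) {   // one query per shard, results merged here
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT update_id FROM raw3 WHERE shard_id = ? AND created_at > now() - ?::interval")) {
                ps.setInt(1, shard);
                ps.setString(2, interval);               // e.g. "1 hour"
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        ids.add(rs.getString("update_id"));
                    }
                }
            }
        }
        return ids;
    }
}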
Another approach you could use that may not be as intuitive, but may be more effective, would be to use a partial index for each shard_id that you would want.
Here is a simple example using 4 shards:
create index partial_0 ON raw3(created_at DESC) where (extract(epoch from timezone('utc',created_at)) * 1000)::bigint % 4=0;
The partial index above only includes rows where the modulus of the epoch in milliseconds of created_at timestamp is 0. And you repeat for the other 3 shards:
create index partial_1 ON raw3(created_at DESC) where (extract(epoch from timezone('utc',created_at)) * 1000)::bigint % 4 = 1;
create index partial_2 ON raw3(created_at DESC) where (extract(epoch from timezone('utc',created_at)) * 1000)::bigint % 4 = 2;
create index partial_3 ON raw3(created_at DESC) where (extract(epoch from timezone('utc',created_at)) * 1000)::bigint % 4 = 3;
And then when you query, PostgreSQL is smart enough to pick the right index:
yugabyte=# explain analyze select * from raw3 where (extract(epoch from timezone('utc',created_at)) * 1000)::bigint % 4 = 3 AND created_at < now();
QUERY PLAN
------------------------------------------------------------------------------------------------------------------
Index Scan using partial_3 on raw3 (cost=0.00..5.10 rows=10 width=16) (actual time=1.429..1.429 rows=0 loops=1)
Index Cond: (created_at < now())
Planning Time: 0.210 ms
Execution Time: 1.502 ms
(4 rows)
No need for a new shard_id column in the base table or in the index. If you want to reshard down the road, you can recreate new partial indexes with different shards and drop the old indexes.
More information about the DocDB sharding layer within YugabyteDB can be found here. If you are interested in the different sharding strategies we evaluated, and why we decided on consistent hash sharding as the default sharding strategy, take a look at this blog written by our Co-Founder and CTO Karthik Ranganathan.

How to find range in Cassandra Primary key?

Use case: Find maximum counter value in a specific id range
I want to create a table with these columns: time_epoch int, t_counter counter
The frequent query is:
select time_epoch, max(t_counter) from ... where time_epoch >= ... and time_epoch < ...
This is to find the counter in a specific time range. I'm planning to make time_epoch the primary key, but I am not able to query the data: it always asks for ALLOW FILTERING. Since that is very costly, we don't want to use it.
How should I design the table and query for this use case?
Let's assume that we can "bucket" (partition) your data by day, assuming that not enough writes will happen in a day to make the partitions too large. Then we can cluster by time_epoch in DESCending order. With time-based data, storing data in descending order often makes the most sense (as business requirements usually care more about the most recent data).
Therefore, I'd build a table like this:
CREATE TABLE event_counter (
day bigint,
time_epoch timestamp,
t_counter counter,
PRIMARY KEY(day,time_epoch))
WITH CLUSTERING ORDER BY (time_epoch DESC);
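One thing to keep in mind is that counter columns are written with UPDATE rather than INSERT, so populating this table from the Java driver might look something like the sketch below (the Session object is assumed; the day bucket is just a yyyymmdd value as in the sample data):

// Sketch: incrementing the counter in the day-bucketed table above.
// Counter columns can only be modified with UPDATE ... SET c = c + n.
import com.datastax.driver.core.Session;
import java.util.Date;

public class CounterBucket {
    public static void increment(Session session, long day, Date timeEpoch, long delta) {
        session.execute(
                "UPDATE event_counter SET t_counter = t_counter + ? WHERE day = ? AND time_epoch = ?",
                delta, day, timeEpoch);
    }
}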
After inserting a few rows, the clustering order becomes evident:
> SELECT * FROM event_counter
WHERE day=20210219
AND time_epoch>='2021-02-18 18:00'
AND time_epoch<'2021-02-19 8:00';
day | time_epoch | t_counter
----------+---------------------------------+-----------
20210219 | 2021-02-19 14:09:21.625000+0000 | 1
20210219 | 2021-02-19 14:08:32.913000+0000 | 2
20210219 | 2021-02-19 14:08:28.985000+0000 | 1
20210219 | 2021-02-19 14:08:05.389000+0000 | 1
(4 rows)
Now SELECTing the MAX t_counter in that range should work:
> SELECT day,max(t_counter) as max
FROM event_counter
WHERE day=20210219
AND time_epoch>='2021-02-18 18:00'
AND time_epoch<'2021-02-19 09:00';
day | max
----------+-----
20210219 | 2
Unfortunately, there is no better way. Think about it.
If you know Cassandra's architecture, then you know that your data is spread across multiple nodes based on the partition key. The only way to filter on a range of partition key values is to traverse each node, which is essentially what ALLOW FILTERING does.

#memsql - REPLACE does not work on columnstore tables

Does REPLACE work with columnstore tables in #memsql?
The MemSQL documentation does not mention this, but I am not able to replace a record in a columnstore. Is there any way to implement REPLACE on a columnstore?
Thanks
zebb
REPLACE is only useful with unique keys, otherwise it is just equivalent to INSERT, and columnstore tables do not support unique keys.
There isn't a good way to implement it efficiently in the columnstore, since columnstores are generally not intended to perform well for single-row updates. See http://docs.memsql.com/docs/columnstore.
One way you can implement it (not very efficiently) is with multi-statement transactions: run a SELECT to see whether a matching row is already present; if so, run an UPDATE, otherwise run an INSERT.
E.g. say we have
create table c(i int, a int, key using clustered columnstore(i));
We can do
memsql> begin;
Query OK, 0 rows affected (0.00 sec)
memsql> select count(*) from c where i = 4;
+----------+
| count(*) |
+----------+
| 0 |
+----------+
1 row in set (0.00 sec)
memsql> insert into c values (4, 4);
Query OK, 1 row affected (0.00 sec)
memsql> commit;
Query OK, 0 rows affected (0.00 sec)
in the case where there is no match, and
memsql> begin;
Query OK, 0 rows affected (0.00 sec)
memsql> select count(*) from c where i = 4;
+----------+
| count(*) |
+----------+
| 1 |
+----------+
1 row in set (0.01 sec)
memsql> update c set a = 4 where i = 4;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 0
memsql> commit;
Query OK, 0 rows affected (0.00 sec)
in the case where there is a match.

Cassandra - Overlapping Data Ranges

I have the following 'Tasks' table in Cassandra.
Task_ID UUID - Partition Key
Starts_On TIMESTAMP - Clustering Column
Ends_On TIMESTAMP - Clustering Column
I want to run a CQL query to get the overlapping tasks for a given date range. For example, if I pass in two timestamps (T1 and T2) as parameters to the query, I want to get the all tasks that are applicable with in that range (that is, overlapping records).
What is the best way to do this in Cassandra? I cannot just use two ranges on Starts_On and Ends_On here, because to add a range query on Ends_On, I would have to have an equality check on Starts_On.
In CQL you can only range query on one clustering column at a time, so you'll probably need to do some kind of client side filtering in your application. So you could range query on starts_on, and as rows are returned, check ends_on in your application and discard rows that you don't want.
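A hedged sketch of that client-side filter with the Java driver (3.x API assumed; table and column names follow the question, and the query stays within a single partition as this schema requires, although as a later answer notes you would want a coarser partition key in practice):

// Sketch: range-query on starts_on server-side, then discard rows whose
// ends_on falls outside the window [t1, t2) on the client.
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.UUID;

public class OverlapFilter {
    public static List<Row> tasksOverlapping(Session session, UUID taskId, Date t1, Date t2) {
        // Server side: only the range on the first clustering column (starts_on)
        ResultSet rs = session.execute(
                "SELECT task_id, starts_on, ends_on FROM tasks WHERE task_id = ? AND starts_on < ?",
                taskId, t2);
        List<Row> overlapping = new ArrayList<>();
        for (Row row : rs) {
            // Client side: keep only rows that end at or after the window start
            if (!row.getTimestamp("ends_on").before(t1)) {
                overlapping.add(row);
            }
        }
        return overlapping;
    }
}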
Here's another idea (somewhat unconventional). You could create a user defined function to implement the second range filter (in Cassandra 2.2 and newer).
Suppose you define your table like this (shown with ints instead of timestamps to keep the example simple):
CREATE TABLE tasks (
p int,
task_id timeuuid,
start int,
end int,
end_range int static,
PRIMARY KEY(p, start));
Now we create a user defined function to check returned rows based on the end time, and return the task_id of matching rows, like this:
CREATE FUNCTION my_end_range(task_id timeuuid, end int, end_range int)
CALLED ON NULL INPUT RETURNS timeuuid LANGUAGE java AS
'if (end <= end_range) return task_id; else return null;';
Now I'm using a trick there with the third parameter. In an apparent (major?) oversight, it appears you can't pass a constant to a user defined function. So to work around that, we pass a static column (end_range) as our constant.
So first we have to set the end_range we want:
UPDATE tasks SET end_range=15 where p=1;
And let's say we have this data:
SELECT * FROM tasks;
p | start | end_range | end | task_id
---+-------+-----------+-----+--------------------------------------
1 | 1 | 15 | 5 | 2c6e9340-4a88-11e5-a180-433e07a8bafb
1 | 2 | 15 | 7 | 3233a040-4a88-11e5-a180-433e07a8bafb
1 | 4 | 15 | 22 | f98fd9b0-4a88-11e5-a180-433e07a8bafb
1 | 8 | 15 | 15 | 37ec7840-4a88-11e5-a180-433e07a8bafb
Now let's get the task_id's that have start >= 2 and end <= 15:
SELECT start, end, my_end_range(task_id, end, end_range) FROM tasks
WHERE p=1 AND start >= 2;
start | end | test.my_end_range(task_id, end, end_range)
-------+-----+--------------------------------------------
2 | 7 | 3233a040-4a88-11e5-a180-433e07a8bafb
4 | 22 | null
8 | 15 | 37ec7840-4a88-11e5-a180-433e07a8bafb
So that gives you the matching task_id's and you have to ignore the null rows (I haven't figured out a way to drop rows using UDF's). You'll note that the filter of start >= 2 dropped one row before passing it to the UDF.
Anyway not a perfect method obviously, but it might be something you can work with. :)
A while ago I wrote an application that faced a similar problem, in querying events that had both start and end times. For our scenario, I was able to partition on a userID (as queries were for events of a specific user), set a clustering column for type of event, and also for event date. The table structure looked something like this:
CREATE TABLE userEvents (
userid UUID,
eventTime TIMEUUID,
eventType TEXT,
eventDesc TEXT,
PRIMARY KEY ((userid),eventTime,eventType));
With this structure, I can query by userid and eventtime:
SELECT userid,dateof(eventtime),eventtype,eventdesc FROM userevents
WHERE userid=dd95c5a7-e98d-4f79-88de-565fab8e9a68
AND eventtime >= mintimeuuid('2015-08-24 00:00:00-0500');
userid | system.dateof(eventtime) | eventtype | eventdesc
--------------------------------------+--------------------------+-----------+-----------
dd95c5a7-e98d-4f79-88de-565fab8e9a68 | 2015-08-24 08:22:53-0500 | End | event1
dd95c5a7-e98d-4f79-88de-565fab8e9a68 | 2015-08-24 11:45:00-0500 | Begin | lunch
dd95c5a7-e98d-4f79-88de-565fab8e9a68 | 2015-08-24 12:45:00-0500 | End | lunch
(3 rows)
That query will give me all event rows for a particular user for today.
NOTES:
If you need to query by whether or not an event is starting or ending (I did not) you will want to order eventType ahead of eventTime in the primary key.
You will store each event twice (once for the beginning, and once for the end); a small sketch of that double write follows these notes. Duplication of data usually isn't much of a concern in Cassandra, but I did want to explicitly point that out.
In your case, you will want to find a good key to partition on, as Task_ID will be too unique (high cardinality). This is a must in Cassandra, as you cannot range query on a partition key (only a clustering key).
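Here is a small sketch of that double write against the userEvents table above (the Session object is assumed; UUIDs.timeBased() stamps each boundary at write time):

// Sketch: write one row when the event starts and one when it ends, so both
// boundaries are range-queryable on eventtime.
import com.datastax.driver.core.Session;
import com.datastax.driver.core.utils.UUIDs;
import java.util.UUID;

public class DualEventWrite {
    private static final String CQL =
            "INSERT INTO userEvents (userid, eventtime, eventtype, eventdesc) VALUES (?, ?, ?, ?)";

    // Call when the event starts
    public static void recordBegin(Session session, UUID userId, String desc) {
        session.execute(CQL, userId, UUIDs.timeBased(), "Begin", desc);
    }

    // Call when the event ends
    public static void recordEnd(Session session, UUID userId, String desc) {
        session.execute(CQL, userId, UUIDs.timeBased(), "End", desc);
    }
}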
There doesn't seem to be a completely satisfactory way to do this in Cassandra but the following method seems to work well:
I cluster the table on the Starts_On timestamp in descending order. (Ends_On is just a regular column.) Then I constrain the query with Starts_On<? where the parameter is the end of the period of interest - i.e. filter out events that start after our period of interest has finished.
I then iterate through the results until the row Ends_On is earlier than the start of the period of interest and throw away the rest of the results rows. (Note that this assumes events don't overlap - there are no subsequent results with a later Ends_On.)
Throwing away the rest of the result rows might seem wasteful, but here's the crucial bit: You can set the paging size sufficiently small that the number of rows to throw away is relatively small, even if the total number of rows is very large.
Ideally you want the paging size just a little bigger than the total number of relevant rows that you expect to receive back. If the paging size is too small, the driver ends up retrieving multiple pages, which could hurt performance. If it is too large, you end up throwing away a lot of rows, and again this could hurt performance by transferring more data than is necessary. In practice you can probably find a good compromise.
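A sketch of that approach with the Java driver (3.x API assumed; the pkey partition column is a placeholder, since this answer leaves the choice of partition key out of scope):

// Sketch: small fetch size + early termination. We stop iterating as soon as a
// row's ends_on falls before the window start, so later pages are never fetched.
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class PagedOverlapQuery {
    public static List<Row> tasksOverlapping(Session session, Object partitionKey, Date t1, Date t2) {
        SimpleStatement stmt = new SimpleStatement(
                // Clustered on starts_on DESC: newest tasks come back first
                "SELECT task_id, starts_on, ends_on FROM tasks WHERE pkey = ? AND starts_on < ?",
                partitionKey, t2);
        stmt.setFetchSize(100); // a little larger than the expected number of matching rows

        List<Row> overlapping = new ArrayList<>();
        ResultSet rs = session.execute(stmt);
        for (Row row : rs) {
            if (row.getTimestamp("ends_on").before(t1)) {
                break; // assuming non-overlapping events, everything after this ended even earlier
            }
            overlapping.add(row);
        }
        return overlapping;
    }
}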
