I have a use case where I need to delete from three tables in Cassandra. The partition key is the same for all the tables, for example:
Db_name/table1/1111
Db_name/table2/1111
Db_name/table3/1111
Which operation should I use (put or batch) to maintain atomicity? I want either all the keys to be deleted in one go, or none at all.
I need to delete a huge number of such keys. Let's say there are 10k keys that I want to delete from all three tables. It would be something like:
Loop over all the keys, then delete each key from the three tables in one go.
You need to use CQL batches to group updates to denormalised tables so they are executed as an atomic operation.
In cqlsh, the batched deletes would look like:
BEGIN BATCH
DELETE FROM table1 WHERE pk = 1111;
DELETE FROM table2 WHERE pk = 1111;
DELETE FROM table3 WHERE pk = 1111;
APPLY BATCH;
You'll need one batch statement for each partition key you are deleting. It's important that you don't group unrelated partitions together in a single batch, since CQL batches are NOT an optimisation the way batches are in an RDBMS.
I've explained this in a bit more detail in this article -- How to keep data in denormalized tables in sync. Cheers!
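To scale this to many keys, the loop from the question becomes one logged batch per key. Here is a minimal sketch in Python that only builds the CQL text you would then execute through cqlsh or a driver; the table names `table1`..`table3` and the column name `pk` are assumptions taken from the example above:

```python
# Build one atomic CQL batch per partition key. Table and column names
# (table1..table3, pk) are placeholders from the example above.
TABLES = ("table1", "table2", "table3")

def delete_batch_cql(key):
    """Return a logged batch that deletes one key from all three tables."""
    deletes = "\n".join(f"  DELETE FROM {t} WHERE pk = {key};" for t in TABLES)
    return f"BEGIN BATCH\n{deletes}\nAPPLY BATCH;"

def delete_batches(keys):
    """One batch per key -- never lump unrelated partitions together."""
    return [delete_batch_cql(k) for k in keys]
```

With an actual driver you would prepare the three DELETE statements once and execute one logged batch per key, which keeps every batch confined to related partitions as recommended above.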
On my Cassandra cluster with 4 nodes, I want to execute an UPDATE statement like this:
UPDATE table_todo SET todoUserKey = '123' WHERE todoUserKey = "000";
I get an exception
InvalidRequest: Error from server: code=2200 [Invalid query] message="Some partition key parts are missing: id"
As I understand it, because Cassandra runs on multiple nodes, I need to specify exactly on which node my UPDATE operation should be executed.
But I don't have any information about the id. How can I perform the update statement?
You must identify the row: Cassandra requires the exact primary key for all write operations. From the CQL docs for the UPDATE command:
The WHERE clause is used to select the row to update and must include
all columns of the PRIMARY KEY
It isn't possible to execute your query because, if it were allowed, Cassandra would have to perform a full table scan, checking every single row of every single partition on every single node in the cluster.
Allowing such an update simply does not scale. If you had a table with billions of partitions, each with hundreds or thousands of rows, in a cluster with hundreds of nodes, it's not difficult to see that allowing the query to run would be very expensive and would not perform well.
For this exact reason, you need to specify not just the partition key but the clustering column (where appropriate) so Cassandra can update the specific row (or rows) within the partition without having to perform a full table scan. Cheers!
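In practice the workaround is to first look up the primary keys of the matching rows (via the application, a secondary index, or a separate lookup table), and then issue one targeted UPDATE per row. A hedged sketch that only builds the statements, assuming the table's primary key column is named `id`:

```python
def update_by_pk_cql(table, column, new_value, row_id):
    """Build a single-row UPDATE that names the full primary key."""
    return f"UPDATE {table} SET {column} = '{new_value}' WHERE id = {row_id};"

# ids obtained from a prior lookup step; one targeted update per row:
stmts = [update_by_pk_cql("table_todo", "todoUserKey", "123", i) for i in (1, 2)]
```

Each statement names the complete primary key, so every update is routed straight to the owning replicas instead of scanning the cluster.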
I understand that two tables with the same partition columns and values generate the same token. Does that mean that all the cells of this partition in both tables are actually in the same partition? How does Cassandra store data internally?
E.g.:
CREATE TABLE table1 (emp_id int PRIMARY KEY, name text, role text);
CREATE TABLE table2 (emp_id int PRIMARY KEY, name text, role text);
INSERT INTO table1 (emp_id, name, role) VALUES (1, 'sahil', 'MTS');
INSERT INTO table2 (emp_id, name, role) VALUES (1, 'sahil', 'MTS');
SELECT token(emp_id) FROM table1 WHERE token(emp_id) = token(1);
 system.token(emp_id)
----------------------
  7447223576279188802
SELECT token(emp_id) FROM table2 WHERE token(emp_id) = token(1);
 system.token(emp_id)
----------------------
  7447223576279188802
For your example, because both tables have the same partition key, identical inserted values will map to the same token. On insert, the hash function is applied to the partition key to determine which replicas will get the data. If you use the Murmur3 partitioner (the default), you get a consistent token value, i.e. the same partition key and value always produce the same token. You can reference this page for more detail:
https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/architecture/archDataDistributeHashing.html
Rows (items of data) that have the same table and the same partition key are said to be in the same partition. The most important consequence of being in the same partition is that its data is guaranteed to be co-located: handled by the same replica nodes and, in ScyllaDB, even by the same CPU. This allows a partition to be scanned efficiently: all of the partition's data can be read from a single node, and Cassandra doesn't need to go back and forth between replicas to read the various pieces of the partition and combine them. This is also what allows a node holding the partition's full data to keep it sorted by the clustering key: a process called compaction merges different pieces of a sorted partition (these are SSTables, or sorted string tables) into a bigger sorted partition.
When you have two different tables in the same keyspace, and use the same partition key in both, they are not stored physically on disk together - because each table has its own set of sstables (files on disk), so in that sense they are not "in the same partition". However, the co-location property which I mentioned earlier still holds (if the two tables are in the same keyspace): Two identically-keyed partitions in the two tables will be stored on exactly the same node. Why is this important/useful? Usually it isn't. One place where this knowledge can become useful is that it can be used in some situations to achieve atomic batch write to both tables at once, utilizing the fact that all replicas will see both writes together, whereas usually two writes to two tables go to different nodes at different times.
What are the side effects of DELETE and INSERT operations executed in a batch to modify data in a column that is part of the primary key in Cassandra?
Is there a better approach when a query with WHERE and an update of the same column are both needed?
Thanks in advance for your reply.
There is no way to UPDATE a primary key column. You have to DELETE the old key and INSERT the new one (in a BATCH if atomicity is needed). If atomicity is not required (one statement does not impact the other), then you can execute them as individual requests. Since you are updating one row, an INSERT and a DELETE of one row in a batch is fine; it will not have a great impact on performance. Deleting a large partition (one with too many rows) would have an impact, but I don't think that is your requirement either. However, if you need to update your primary key quite frequently, it is better to reconsider your data model.
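The delete-then-insert pattern looks like this as a single logged batch; the table and column names (`emp_id`, `name`) are assumptions for illustration:

```python
def rekey_batch_cql(table, old_id, new_id, name):
    """'Change' a primary key atomically: delete the old row and insert the
    new one inside a single logged batch."""
    return (
        "BEGIN BATCH\n"
        f"  DELETE FROM {table} WHERE emp_id = {old_id};\n"
        f"  INSERT INTO {table} (emp_id, name) VALUES ({new_id}, '{name}');\n"
        "APPLY BATCH;"
    )
```

The logged batch guarantees both mutations are eventually applied together (atomicity); note, though, that Cassandra batches are not isolated, so a concurrent reader may briefly observe an intermediate state.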
As per this datastax doc Atomicity in Cassandra is:
In Cassandra, a write is atomic at the partition-level, meaning inserting or updating columns in a row is treated as one write operation.
Whereas according to this datastax doc Atomicity in Cassandra is:
In Cassandra, a write operation is atomic at the partition level, meaning the insertions or updates of two or more rows in the same partition are treated as one write operation.
My confusion is whether atomicity applies on a single-row basis, or whether it can cover multiple rows of a table within a partition. I assume it is a combination of both, depending on the type of query we are executing in Cassandra.
For Example :
If I have an INSERT query, it always inserts one row into a partition, so Cassandra ensures that this row is inserted successfully at the partition level.
But if I have an UPDATE query whose WHERE clause qualifies multiple rows, then the update operation is atomic at the partition level, meaning either all the qualified rows are updated or none are.
Is my understanding correct?
"row" and "partition" get conflated since previously row meant partition and now row means a part of a partition.
They are atomic to the partition. Keep in mind thats in reference to a single replica, so a or multiple rows in a batch containing 5 columns are all updated in a single operation on the one replica (no cross node isolation). If your setting (key, value) VALUES ('abc', 'def') you will never see just the key and not the value set. However you might make a read and only 1 replica has it set while other does not. Meaning depending on your replication factor and consistency level requested you will either see the whole thing or nothing. This can apply to multiple rows within a partition as well but you cannot update 2 rows with a single update statement without a batch (logged or unlogged).
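To make that last point concrete: a single UPDATE can only target one row, so touching two clustering rows of the same partition atomically requires a batch. A sketch assuming a hypothetical table events(pk, ck, value) with PRIMARY KEY (pk, ck):

```python
# Two rows of the SAME partition (pk = 1) updated as one atomic write.
batch_two_rows = (
    "BEGIN BATCH\n"
    "  UPDATE events SET value = 'a' WHERE pk = 1 AND ck = 1;\n"
    "  UPDATE events SET value = 'b' WHERE pk = 1 AND ck = 2;\n"
    "APPLY BATCH;"
)
```

Because both rows share pk = 1, each replica applies the batch as one local operation; if the rows came from different partitions, the batch would still be atomic overall but would span multiple replica sets.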
I wanted to know if there's a way to join two or more result sets into one.
I actually need to execute more than one query and return just one result set. I can't use the UNION or the JOIN operators because I'm working with Cassandra (CQL)
Thanks in advance !
Frameworks like PlayOrm provide support for JOIN queries (INNER and LEFT JOINs) in Cassandra.
http://buffalosw.com/wiki/Command-Line-Tool/
You may see more examples at:
https://github.com/deanhiller/playorm/blob/master/src/test/java/com/alvazan/test/TestJoins.java
If you want to query multiple rows within the same column family, you can use the IN keyword:
SELECT * FROM testCF WHERE key IN ('rowKeyA', 'rowKeyB', 'rowKeyZ') LIMIT 10;
This will get you back the matching results; note that LIMIT caps the total number of rows returned, not the number per key.
If you need to join results from different CFs, or query with differing WHERE clauses, then you have to run multiple queries and merge the results in code; Cassandra doesn't cater for that kind of thing.
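A sketch of the merge-in-code approach, with rows modelled as dicts: `merge_result_sets` concatenates independent queries (UNION-style), and `inner_join` pairs rows on a shared column. Both helpers are illustrative, not part of any driver:

```python
def merge_result_sets(*result_sets):
    """Concatenate the rows of several independent queries (UNION-style)."""
    merged = []
    for rows in result_sets:
        merged.extend(rows)
    return merged

def inner_join(left, right, key):
    """Pair rows from two result sets on a shared column (INNER JOIN-style)."""
    index = {row[key]: row for row in right}
    return [{**l, **index[l[key]]} for l in left if l[key] in index]

# Example: results of two separate CQL queries, joined client-side.
accounts = [{"acc_id": 1, "owner": "alice"}]
users = [{"acc_id": 1, "name": "sahil"}, {"acc_id": 2, "name": "bob"}]
joined = inner_join(accounts, users, "acc_id")
```

For large result sets, build the lookup index on the smaller side, and remember that each underlying query should still hit a single partition to stay efficient.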
PlayOrm can do joins, but you may need to have PlayOrm partitioning enabled so that you still scale (i.e. you don't want to join 1 billion rows with 1 billion rows). Typically you instead join one partition with another partition, or a partition on the Account table with a partition on the Users table; i.e. make sure you still design for scale.