Cassandra WHERE clause on one field from a UDT

How can I query based on only one field of a UDT in Cassandra?
I have a UDT which contains marital status, and I need the data for all the married people,
but when I query based on that one field I get an empty result.
How can I do this?

Short answer: no, at least not in stock Cassandra. It's possible with DSE Search, but that adds its own constraints. Maybe when SAI is implemented some day it will support indexing of UDT fields. I don't remember whether SASI supports this, but it's really not recommended for use anyway. Even if such indexing were supported, this would still be a bad case for Cassandra (except maybe with SAI), because it would create big partitions to represent each status.
The general rule is that you always need to query by partition key, with the possibility of using secondary indexes when you need to search on another field, but only inside a partition. If you need to query by multiple fields that aren't primary keys, I would suggest using another database, or Elasticsearch/Solr (although they aren't very good in a geo-distributed environment).
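If you can change the data model, the usual workaround is to denormalize into a query-specific table rather than filter on the UDT field. A minimal sketch of that idea, where the type, table, and column names are assumptions for illustration (and the big-partition caveat above still applies):

CREATE TYPE IF NOT EXISTS personal_info (
    marital_status text,
    age int
);

CREATE TABLE IF NOT EXISTS users (
    id uuid PRIMARY KEY,
    name text,
    info frozen<personal_info>
);

-- query-specific copy, partitioned by the field you want to filter on;
-- note that every user with the same status lands in one partition
CREATE TABLE IF NOT EXISTS users_by_marital_status (
    marital_status text,
    id uuid,
    name text,
    PRIMARY KEY (marital_status, id)
);

-- the application writes to both tables, after which this query works:
SELECT id, name FROM users_by_marital_status WHERE marital_status = 'married';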

Related

How can you query all the data with an empty set in a Cassandra DB?

As the title says, I'm trying to query all the rows that have no value stored in a column. I've been searching for a while, and the only allowed operation I've found is CONTAINS, which doesn't fit my need.
Consider the following table:
CREATE TABLE environment (
    id uuid,
    name varchar,
    message text,
    public boolean,
    participants set<varchar>,
    PRIMARY KEY (id)
);
How can I get all entries in the table with an empty set? E.g. participants = {} or null?
Unfortunately, you really can't. Cassandra makes queries like this difficult by design, because there's no way it can be done without doing a full table scan (scanning each and every node). This is why a big part of Cassandra data modeling is understanding all the ways that table will be queried, and building it to support those queries.
The other issue that you'll have to deal with, is that (generally speaking) Cassandra does not allow filtering by nulls. Again, it's a design choice...it's much easier to query for data that exists, rather than data that does not exist. Although, when writing with lightweight transactions, there are ways around this one (using the IF clause).
If you knew all of the ids ahead of time, you could write something to iterate through them, SELECT each one, and check for null on the app side. That approach will be slow, although it won't stress the cluster. Probably the better approach is to use a distributed OLAP layer like Apache Spark. It still wouldn't be fast, but this is probably the best way to handle a situation like this.
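For what it's worth, the per-id check described above looks something like this from the application side (the id is a bind placeholder supplied by the client; this is a sketch of the loop body, not a single statement that solves the problem):

-- run once per known id; the application inspects the returned set
-- and keeps the rows where participants came back null or empty
SELECT id, name, participants
FROM environment
WHERE id = ?;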

Should I set foreign keys in Cassandra tables?

I am new to Cassandra and coming from a relational background. I learned that Cassandra does not support JOINs, hence there is no concept of foreign keys. Suppose I have two tables:
Users
id
name
Cities
id
name
In the RDBMS world I would put a city_id column in the users table. Since there is no concept of joins and you are allowed to duplicate data, does it still make sense to put city_id in the users table, or should I create a table users_by_cities instead?
The main Cassandra concept is that you design tables based off of your queries (as writes to the table have no restrictions). The design is based off of the query filters. An application that queries a table by some ID is somewhat unnatural, as the CITY_ID could be any value and is typically unknown (unless you ran a prior query to get it). Something more natural may be CITY_NAME.
Anyway, assuming there are no indexes on the table (which are mere tables themselves), there are rules in Cassandra regarding the filters you provide and the table design: mainly that, at a minimum, one of the filters MUST be the partition key. The partition key helps direct Cassandra to the correct node for the data (which is how the reads are optimized). If none of your filters are the partition key, you'll get an error (unless you use ALLOW FILTERING, which is a no-no). The other filters, if there are any, must be the clustering columns (you can't have a filter that is neither the partition key nor a clustering column - again, unless you use ALLOW FILTERING).
These restrictions, coming from the RDBMS world, are unnatural and hard to adjust to, and because of them, you may have to duplicate data into very similar structures (maybe the only difference is the partition keys and clustering columns). For the most part, it is up to the application to manipulate each structure when changes occur, and the application must know which table to query based off of the filters provided. All of these are considered painful coming from a relational world (where you can do whatever you want to one structure). These "constraints" need to be weighed against the reasons why you chose Cassandra for your storage engine.
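As a concrete illustration of the query-first approach (the column choices below are assumptions based on the question, not a prescribed schema):

-- one table per query; here the query is "users in a given city"
CREATE TABLE users_by_city (
    city_name text,
    user_id uuid,
    user_name text,
    PRIMARY KEY (city_name, user_id)
);

-- the application duplicates the user into every table that needs it
INSERT INTO users_by_city (city_name, user_id, user_name)
VALUES ('Berlin', uuid(), 'Alice');

-- the filter is the partition key, so no ALLOW FILTERING is needed
SELECT user_id, user_name FROM users_by_city WHERE city_name = 'Berlin';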
Hope this helps.
-Jim

How do you UPDATE a Cassandra column without directly knowing the primary key?

Given a scenario where you have a User table, with id as PRIMARY KEY.
You have a column called email, and a column called name.
You want to UPDATE User.name based on User.email
I realized that the UPDATE command requires you to pass in a PRIMARY KEY. Does this mean I can't use a pure CQL migration, and would need to first query for the User.id primary key before I can UPDATE?
In this case, I DO know the PRIMARY KEY because the UUIDs are the same for dev and prod, but it feels dirty.
Yes, you're correct - you need to know the primary key of the record to update it, or to delete a specific record. There are several options here, depending on your data model:
1. Perform a full scan of the table using an efficient token range scan (see this answer for more details);
2. If this is required very often, you can create a materialized view with User.email as the partition key, and fetch the IDs of the users you need to update (you'll need to do this from your application, as there is no nested query support in CQL) - a sketch is shown after this list. But be aware that materialized views are an "experimental" feature in Cassandra and may not work all the time (they're more stable in DataStax Enterprise). Also, if some email value is shared by hundreds of thousands of users, this may create big partitions;
3. Do the same as option 2 yourself, by maintaining an additional lookup table keyed by email that your application keeps in sync with the main table.
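A rough sketch of option 2, assuming a base table users (id uuid PRIMARY KEY, email text, name text) matching the question; note that a materialized view must include every primary key column of the base table in its own key:

CREATE MATERIALIZED VIEW users_by_email AS
    SELECT email, id, name
    FROM users
    WHERE email IS NOT NULL AND id IS NOT NULL
    PRIMARY KEY (email, id);

-- step 1: look up the primary key from the application
SELECT id FROM users_by_email WHERE email = 'someone@example.com';

-- step 2: update the base table with the id returned above
UPDATE users SET name = 'New Name' WHERE id = ?;  -- bind the id from step 1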
I think Alex's answer covers your question -- "how can I find a value in a PK column working backwards from a non-PK column's value?".
However, I think it's worth noting that asking this question indicates you should reconsider your data model. A rule of thumb in C* data model design is that you begin by considering the queries you need, and you've missed the UPDATE query use case. You can probably make things work without changing your model for now, but if you find you need to make other queries you're unprepared for, you'll run into operational issues with lots of indexes and/or MVs.
More generally, search around for articles and other resources about Cassandra data modeling. It sounds like you're basically using C* for a relational use case so you'll want to look into that.

Why do we need secondary indexes in cassandra and how do they really work?

I was trying to understand why secondary indexes were even necessary on Cassandra.
I know that secondary indexes are used because:
"Secondary indexes allow for efficient querying by specific values using equality predicates (where column x = value y). Also, queries on indexed values can apply additional filters to perform operations such as range queries."
from: http://www.datastax.com/docs/0.7/data_model/secondary_indexes
But what I did not understand is why a query like:
get users where birth_date = 1973;
required that birth_date have a secondary index. Why is it necessary for secondary indexes to even exist? Can't Cassandra just go through the table and return the values when the constraint is matched? Why do we need to treat columns that we might want to query in that way in any special way?
I am assuming that because Cassandra is distributed, going through the whole table might not be easy, since each row key is allocated to a different node, which makes it a little complicated. But I didn't really understand how being distributed complicates the problem and how secondary indexes resolve it (i.e. how does Cassandra resolve this issue?).
Related to this question, is it true that secondary indexes and primary keys are the only things that can be queried in the form of SELECT * FROM column_family_table WHERE col_x = constraint? Why is the primary key special?
With the amount of data these NoSQL databases are meant to deal with, a full table scan or region scan is not an option. That's why Cassandra restricts queries on non-row-key columns and only allows them if a secondary index is enabled on that column. That way the index and the data it points to are co-located on the same data node.
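For illustration, the CQL involved looks like this (the table definition is an assumption based on the question):

-- assumed schema for the example query
CREATE TABLE users (
    id uuid PRIMARY KEY,
    name text,
    birth_date int
);

-- without this index, the query below is rejected
-- (unless you resort to ALLOW FILTERING, i.e. a full scan)
CREATE INDEX users_birth_date_idx ON users (birth_date);

-- an equality predicate on the indexed column is now allowed
SELECT * FROM users WHERE birth_date = 1973;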
Hope it helps.
-Vivek

Design approach in Cassandra for row keys and already-exists checks

I am new to Cassandra, and my cluster is giving a lot of read timeout errors. I have tweaked the timeouts, but the problem persists, so it may be a problem with the design (for my application Cassandra is expected to store trillions of records):
Question 1: In all my Cassandra tables I use a UUID as the row key, but for a few tables, just for maintenance, I break that rule - for example in the user table I make the email ID the row key, so that looking at the tables I can understand the data stored. Is using a UUID the right approach at this scale, and is the second approach for the user table right or not?
Question 2: I have one relations table with startNodeId, relationTypeId, endNodeId. The row key for it is a UUID, which is the relationId. I define secondary indexes on startNode, relationType, and endNode, as the business case requires lookups by any of them. Because of that, for each new row I have to do a get to check whether the relation ALREADY exists or not. One approach to avoid the existence check is: I take startNodeId, relationTypeId, endNodeId, SORT them, create a HASH CODE, and use that as the row key, so my explicit existence check is avoided. Is this the right approach?
Please guide me, I am stuck on these thoughts; any guidance will really help me.
Answering your first question: as long as you are comfortable handling a non-UUID value as the row key, that's great and also easier to track; otherwise go for the UUID.
Regarding your second question, why don't you try a compound key? You don't have to maintain hash codes and similar machinery; leave that to Cassandra.
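A minimal sketch of that compound-key idea (column names and types are assumptions based on the question):

CREATE TABLE relations (
    start_node_id uuid,
    relation_type_id uuid,
    end_node_id uuid,
    created_at timestamp,
    PRIMARY KEY ((start_node_id, relation_type_id, end_node_id))
);

-- inserting the same (start, type, end) combination again simply overwrites the row,
-- so no read-before-write "already exists" check is needed
INSERT INTO relations (start_node_id, relation_type_id, end_node_id, created_at)
VALUES (?, ?, ?, toTimestamp(now()));  -- bind the three IDs from the application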
1) Better to use natural keys, not UUIDs: email, timestamp, composite primary keys, and so on. Using a UUID is an approach from the RDBMS world; you should avoid it in Cassandra.
2) Read-modify-update is the wrong pattern for Cassandra. Try rewriting the data instead, if your business case allows this. Or just use a timestamp and get the row with the latest timestamp (don't forget about TTL).
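A sketch of that timestamp-plus-TTL idea (again, the names, types, and TTL value are assumptions, not a recommendation):

CREATE TABLE relations_history (
    start_node_id uuid,
    relation_type_id uuid,
    end_node_id uuid,
    updated_at timestamp,
    PRIMARY KEY ((start_node_id, relation_type_id, end_node_id), updated_at)
) WITH CLUSTERING ORDER BY (updated_at DESC)
  AND default_time_to_live = 2592000;  -- rows expire after 30 days, purely illustrative

-- latest version of a given relation, no read-before-write needed
SELECT * FROM relations_history
WHERE start_node_id = ? AND relation_type_id = ? AND end_node_id = ?
LIMIT 1;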
