How to model data using Cassandra and Ignite together?

I'm researching how to model data when using Cassandra and Ignite together. So far the basic recommendation for data modeling in Cassandra (coming from this article) is clear: "model data around your queries". The author gives an example of a "user lookup": we want to look up users by their username or their email, and according to him the best approach would be having two tables:
CREATE TABLE users_by_username (
  username text PRIMARY KEY,
  email text,
  age int
);

CREATE TABLE users_by_email (
  email text PRIMARY KEY,
  username text,
  age int
);
However, things get confusing with Ignite on top of Cassandra. Unfortunately I could not find any helpful examples or answers to the following questions:
Does having multiple tables that store user information mean having an Ignite cache for each of these tables?
Does having a compound primary key mean introducing a new type for each key and using it as the Ignite cache key?
Having Ignite means not having direct reads from Cassandra. Does it even make sense to bother modeling data following NoSQL best practices? Would it be OK to just have one user table and let Ignite take care of queries by username or email?
CREATE TABLE users (
  id uuid PRIMARY KEY,
  username text,
  email text,
  age int
);

You should probably have one cache per Cassandra table.
If your original key is compound, so should the Ignite key be.
You will need to use secondary indexes in Ignite to query by more than one field, and this means you will have to hold all of the data in Ignite (which is NOT necessary for a pure caching scenario). This means enabling readThrough and writeThrough, doing loadCache, and always doing all updates through Ignite. You will have to choose between "Ignite as a cache for Cassandra" (stick to Cassandra's data layout, can hold partial data) and "Ignite as a DB backed by Cassandra" (you can use a layout optimal for Ignite, with secondary indexes).

Related

How do I design a table in Cassandra for a TinyURL use case?

Recently I came across a well-known design problem: 'Tiny URL'.
What I found was people vouching for NoSQL DBs such as DynamoDB or Cassandra. I've been reading about Cassandra for a couple of days, and I want to design my solution around this DB for this specific problem.
What would be the table definition? If I choose the following table definition:
Create table UrlMap(tiny_url text PRIMARY KEY, url text);
Wouldn't this result in a lot of partitions, since my partition key can take on around 68B values (using 6-char base64 strings)?
Would that somehow affect the overall read/write performance? If so, what would be a better model for the table?
Lots of partitions is fine; think of it as using C* as a key-value store.
The primary principle of data modelling in Cassandra is to design one table for each application query.
For a URL shortening service, the main application query is to retrieve the equivalent full URL for a given tiny URI. In pseudo-code, the query looks like:
GET long url FROM datastore WHERE uri = ?
Note that for the purposes of the service, we won't store the web domain name, to make the app reusable for any domain. The filter (WHERE clause) is the URI, so this is what you want as the partition key, and we would design the table accordingly:
CREATE TABLE urls_by_uri (
  uri text,
  long_url text,
  PRIMARY KEY (uri)
);
If we want to retrieve the URL for http://tinyu.rl/abc123, the CQL query is:
SELECT long_url FROM urls_by_uri WHERE uri = 'abc123'
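For completeness, writing a new mapping is also a single-partition operation (a sketch using the same table; the literal values and the IF NOT EXISTS guard are my own additions, the guard just avoids silently overwriting an existing short code):
INSERT INTO urls_by_uri (uri, long_url)
VALUES ('abc123', 'https://www.example.com/some/very/long/path')
IF NOT EXISTS;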
As Phact and Andrew pointed out, there is no need to worry about the number of partitions (records) you'll be storing in the table: the default partitioner spreads partitions over a token range of 2^64 values, which for practical purposes is limitless.
In Cassandra, each partition gets hashed into a token value using the Murmur3 hash algorithm (default partitioner). This implementation distributes each partition randomly across all nodes in the cluster. The same hash algorithm is used to determine which node "owns" the partition making retrieval (reads) very fast in Cassandra.
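If you want to see this for yourself, CQL exposes the token directly (a quick illustration against the table above; 'abc123' is just the example value from earlier):
-- Show the Murmur3 token computed for the partition key 'abc123'
SELECT token(uri), long_url FROM urls_by_uri WHERE uri = 'abc123';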
As long as you limit the SELECT queries to a single partition, retrieving the data is extremely fast. In fact, I work with hundreds of companies whose read SLA is for 95% of reads to complete within 6-9 milliseconds. This is achievable in Cassandra when you model your data correctly and size your cluster correctly. Cheers!

Cassandra - Shall I have to do so many writes?

I have 5 Tables:
users_by_id
users_by_username
users_by_email
users_by_likes
users_by_followers
I have to write 5 statements every time a user registers. Is that not expensive or bad?
INSERT INTO users_by_id (...) values (..)
INSERT INTO users_by_email (...) values (..)
INSERT INTO users_by_username (...) values (..)
INSERT INTO users_by_likes (...) values (..)
INSERT INTO users_by_followers (...) values (..)
The second question: if I update users_by_id, I have to write 5 UPDATE statements. Is there another solution? Or is that not so bad?
Cassandra advocates denormalizing your data and creating the data model according to your queries. You will have to design your data model so that it satisfies all the queries with good performance. For performance (due to its architecture and design), Cassandra expects you to write and read using the partition key.
It is not expensive to write 5 insertions of the same set of data into 5 different tables. Your reads will perform better, and as the data size increases to web scale, you will thank your decision of creating 5 tables and writing to them.
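If you want the five writes to succeed or fail together, one common option is a logged batch (a sketch only; the column lists and literal values are assumptions, since the original INSERT statements are elided, and logged batches trade some write latency for that atomicity):
BEGIN BATCH
  INSERT INTO users_by_id (id, username, email)
  VALUES (123e4567-e89b-12d3-a456-426614174000, 'jdoe', 'jdoe@example.com');
  INSERT INTO users_by_username (username, user_id, email)
  VALUES ('jdoe', 123e4567-e89b-12d3-a456-426614174000, 'jdoe@example.com');
  INSERT INTO users_by_email (email, user_id, username)
  VALUES ('jdoe@example.com', 123e4567-e89b-12d3-a456-426614174000, 'jdoe');
APPLY BATCH;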
You can explore materialized views (see Materialized View and the DataStax link for materialized views), but remember it is an experimental feature, so you have to understand it properly and also identify the open issues with materialized views.
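As an illustration, a materialized view lets Cassandra maintain one of the denormalized tables for you (a sketch only; it assumes the base table users_by_id has id as its primary key plus an email column, and the view name is mine):
CREATE MATERIALIZED VIEW users_by_email_mv AS
  SELECT * FROM users_by_id
  WHERE email IS NOT NULL AND id IS NOT NULL
  PRIMARY KEY (email, id);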
I would recommend you study the Cassandra data model; that will make things easier to grasp.
Cassandra is designed to be a write-intensive database, so do not hesitate to duplicate your data. One should always design tables for the read queries. If one table satisfies one query, it is a fine design.
As for your second question: you should design your tables in such a way that you do not have to update them. Always think about inserting new values.
For example, consider the table design below:
CREATE TABLE user_by_email (
  email text,
  timestamp timestamp,
  name text,
  fullname text,
  userId text,
  PRIMARY KEY (email, timestamp)
) WITH CLUSTERING ORDER BY (timestamp DESC);
INSERT INTO user_by_email (email, timestamp, name, fullname, userId)
VALUES ('jdoe@example.com', toTimestamp(now()), 'John', 'John Doe', 'u123');  -- placeholder values
With this design, you always read the latest inserted value first. Additionally, this design keeps the change history for that key.
Think about it: how many times do we have to update values like user ID, email, or username? Rarely.
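For example, fetching the current record for a user then becomes (a sketch against the table above; the email literal is a placeholder):
-- Clustering order is timestamp DESC, so the first row is the most recent
SELECT name, fullname, userId
FROM user_by_email
WHERE email = 'jdoe@example.com'
LIMIT 1;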

Am I violating the data modelling rule in Cassandra?

I understand that we should not create an 'N' number of partitions under a single table, because in this case a query tries to read from the N nodes where the partitions are available.
(Modifying the example for understanding and security)
If I have a table like 'user'
CREATE TABLE user(
user_id int PRIMARY KEY,
user_name text,
user_phone varint
);
where user_id is unique.
Example - To get all the users from the table, I use the query :
select * from user;
Which means it goes to all the nodes where the partitions for 'user_id' are available. Since I used user_id as the partition/primary key here, the data will be scattered across all the nodes based on the partition key.
Is it fine? Or Is there a better way to design this in Cassandra?
Edited:
Keeping a single partition with the key 'uniquekey' and sorting by user_name would have the advantage that everything ends up in a single partition. Is that a better design compared to the above one?
CREATE TABLE user(
user_id int,
user_name text,
user_phone varint,
primary key ('uniquekey', user_name));
select * from user where user_id = 'uniquekey';
A fundamental table design rule in Cassandra is to be query-driven, which means you should understand what you are trying to query before you make the table schema.
If you just want to return all the rows (SELECT *) in the table (which is not a common use case for Cassandra, since Cassandra aims to store very, very large amounts of data), whatever you designed is fine. But Cassandra might not be the best choice in this case.
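To make the contrast concrete (a sketch against the user table in the question; the literal 42 is a placeholder):
-- Single-partition read: routed straight to the replicas that own token(42); fast
SELECT user_name, user_phone FROM user WHERE user_id = 42;

-- Full-table scan: has to walk every token range on every node; avoid in hot paths
SELECT * FROM user;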
How to ensure a good table design in Cassandra?
Ref: Basic Rules of Cassandra Data Modeling

Updates in cassandra

Cassandra data modeling accepts that "denormalization and duplication of data is a fact of life with Cassandra". But one of the cons of denormalized data is that it makes updates hard. For example, if I have three tables catering for different queries, selecting is fine. However, if in my app I want to update a username, I need to update all three tables. The update on the first table looks OK. How about the latter two? Are those updates going to be very expensive? How should I handle this case?
CREATE TABLE users_by_username (
  username text PRIMARY KEY,
  email text,
  age int
);

CREATE TABLE users_by_email (
  email text PRIMARY KEY,
  username text,
  age int
);

CREATE TABLE groups (
  groupname text,
  username text,
  email text,
  age int,
  hash_prefix int,
  PRIMARY KEY ((groupname, hash_prefix), username)
);
This is a typical problem I see when people try to put a relational model into Cassandra and then update it over time. Cassandra is a great database and for what it does, it works wonders. There are many features that enable all kinds of different data models and you can cover almost all use cases. When you look at your use case, the question is: why would you use Cassandra for a relational model?
If you really want to make Cassandra cover your use case, you will have to do a lot of different operations at the application level just to execute updates and keep your data in a consistent state.
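One common application-level approach is to group the denormalized writes into a logged batch so they are applied atomically (a sketch against the tables in the question; the literal values are placeholders). Note that in users_by_username the username is the partition key, so "updating" it really means deleting the old row and inserting a new one:
BEGIN BATCH
  -- username is the primary key here: delete + re-insert
  DELETE FROM users_by_username WHERE username = 'old_name';
  INSERT INTO users_by_username (username, email, age)
  VALUES ('new_name', 'jdoe@example.com', 30);
  -- username is a regular column here: a plain UPDATE works
  UPDATE users_by_email SET username = 'new_name'
  WHERE email = 'jdoe@example.com';
APPLY BATCH;
The groups table, where username is a clustering column, would need the same delete-and-insert treatment.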
After watching a few YouTube clips, it looks like Cassandra's update is a simple write that appends a record to the commit log in the file system. The data is then put into a memtable on the Cassandra server and an acknowledgement is sent to the client straight away, so the update call finishes. This makes updates fast from the client's point of view.
The flushing to disk and the whole compaction process (sequential writes and merging based on timestamps) happen afterwards.

An Approach to Cassandra Data Model

Please note that this is my first time using NoSQL; pretty much every concept in this NoSQL world is new to me, having come from the RDBMS world for a long time!
In one of my heavily used applications, I want to use NoSQL for some part of the data and move away from MySQL where transactions/the relational model don't make sense. What I would gain is the A and P of CAP [Availability and Partition tolerance].
The present data model is as simple as this:
ID (integer) | ENTITY_ID (integer) | ENTITY_TYPE (String) | ENTITY_DATA (Text) | CREATED_ON (Date) | VERSION (integer)
We can safely assume that this part of the application is similar to activity logging!
I would like to move this to NoSQL as per my requirements and separate it from the performance-oriented MySQL DB.
Cassandra says everything in it is a simple Map<Key,Value> type! Thinking at the Map level,
I can use ENTITY_ID|ENTITY_TYPE|ENTITY_APP as the key and store the rest of the data as values!
After reading through User Defined Types in Cassandra: can I use a UserDefinedType as the value, which essentially gives me one key and multiple values? Or should I otherwise use normal columns without a UserDefinedType? One idea is to use the same model for different applications across systems, where simple logging/activity data can be pushed to the same table, since the key varies from application to application and within an application each entity is unique!
There is no application/business function that accesses this data without the key; in simple terms, there is no requirement to get data randomly!
References: http://www.ebaytechblog.com/2012/07/16/cassandra-data-modeling-best-practices-part-1/
Let me explain the Cassandra data model a bit (or at least part of it). You create tables like so:
create table event(
  id uuid,
  timestamp timeuuid,
  some_column text,
  some_column2 list<text>,
  some_column3 map<text, text>,
  some_column4 map<text, text>,
  primary key (id, timestamp)   -- further clustering keys could follow timestamp
);
Note the primary key. There are multiple columns specified. The first column is the partition key. All "rows" in a partition are stored together. Inside a partition, data is ordered by the second, then third, then fourth... keys in the primary key. These are called clustering keys. To query, you almost always hit a partition (by specifying equality in the WHERE clause). Any further filters in your query are then applied within the selected partition. If you don't specify a partition key, you make a cluster-wide query, which may be slow or, most likely, time out. After hitting the partition, you can filter with matches on subsequent clustering keys in order, with a range query allowed on the last clustering key specified in your query. Anyway, that's all about querying.
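For instance, a typical query against the event table above hits one partition and then ranges over the clustering key (a sketch; the uuid and dates are placeholders):
-- One partition, filtered by a range on the timeuuid clustering key
SELECT some_column, some_column2
FROM event
WHERE id = 123e4567-e89b-12d3-a456-426614174000
  AND timestamp > maxTimeuuid('2016-01-01')
  AND timestamp < minTimeuuid('2016-02-01');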
In terms of structure, you have a few column types. Some primitives like text, int, etc., but also three collections: sets, lists and maps. Yes, maps. UDTs are typically more useful when used in collections, e.g. a Person may have a map of addresses: map<text, address>. You would typically store info in columns if you need to query on it, or index on it, or you know each row will have those columns. You're also free to use a map column, which would let you store "arbitrary" key-value data; which is what it seems you're looking to do.
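A minimal sketch of that pattern (the type and column names are illustrative, not from the original post; UDTs inside collections must be frozen):
-- A user-defined type used as the value of a map column
CREATE TYPE address (
  street text,
  city text,
  zip text
);

CREATE TABLE person (
  id uuid PRIMARY KEY,
  name text,
  addresses map<text, frozen<address>>   -- e.g. 'home' -> {...}, 'work' -> {...}
);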
One thing to watch out for... your primary key is unique per record. If you do another insert with the same PK, you won't get an error; it'll simply overwrite the existing data. Everything in Cassandra is an upsert. And you won't be able to change the value of any column that's in the primary key for any row.
You mentioned querying is not a factor. However, if you do find yourself needing to do aggregations, you should check out Apache Spark, which works very well with Cassandra (and also supports relational data sources, so you should be able to aggregate data across MySQL and Cassandra for analytics).
Lastly, if your data is time-series log data, Cassandra is a very, very good choice.
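For what it's worth, here is a minimal sketch of how the logging model described in the question could look in CQL (the column choices follow the MySQL schema above; the table name and the per-day bucket in the partition key are my own assumptions, added to keep partitions bounded):
CREATE TABLE entity_activity (
  entity_id int,
  entity_type text,
  day date,                    -- partition bucket (assumption) to bound partition size
  created_on timeuuid,         -- event time, also gives a unique clustering key
  entity_data text,
  version int,
  PRIMARY KEY ((entity_id, entity_type, day), created_on)
) WITH CLUSTERING ORDER BY (created_on DESC);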
