Is there any way to select TTL value for an element in a map in Cassandra with CQL3?
I've tried this, but it doesn't work:
SELECT TTL (mapname['element']) FROM columnfamily
Sadly, I'm pretty sure the answer is that it is not possible as of Cassandra 1.2 and CQL3. You can't query individual elements of a collection. As this blog entry says, "You can only retrieve a collection in its entirety". I'd really love to have the capability to query for collection elements, too, though.
You can still set the TTL for individual elements in a collection. So if you want to ensure your collection elements carry a particular TTL, you could read the entire collection and then update it (the whole thing, or just a chosen few elements) with your desired TTL. Or, if you absolutely need to know the TTL of individual data, you might just need to change your schema from collections back to good old dynamic columns, for which the TTL query definitely works.
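For example, here is a minimal sketch of writing one map element with a TTL. The question doesn't show the table's key column, so the key name and the values below are assumptions:
UPDATE columnfamily USING TTL 86400
SET mapname['element'] = 'some value'   -- only this map entry gets the 86400s TTL
WHERE key = 'some_key';                 -- 'key' is a placeholder for your partition key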
Or, a third possibility could be that you add another column to your schema that holds the TTL of your collection. For example:
CREATE TABLE test (
key text PRIMARY KEY,
data map<text, text>,
data_ttl text
) WITH ...
You could then keep track of the TTL of the entire map column 'data' by always updating column 'data_ttl' whenever you update 'data'. Then, you can query 'data_ttl' just like any other column:
SELECT ttl(data_ttl) FROM test;
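A sketch of keeping the two columns in sync, assuming you always write them together with the same TTL (the key and values are placeholders):
-- write the map entry and the tracking column in one statement, with the same TTL
UPDATE test USING TTL 86400
SET data['k1'] = 'v1', data_ttl = 'updated'
WHERE key = 'some_key';
-- ttl(data_ttl) will now report the remaining lifetime of that write
SELECT ttl(data_ttl) FROM test WHERE key = 'some_key';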
I realize none of these solutions are perfect... I'm still trying to figure out what will work best for me, too.
Related
I want to have a table in Cassandra with a partition key, say column 'A', and a column, say 'B', of 'set' type that can hold up to 10,000 elements.
But when I retrieve a row from this table, the whole set is fetched at once, and because of that the JVM heap grows rapidly. So should I stick with this schema, or go with another schema where 'A' is the partition key and each element of the set becomes its own dynamic column, say 'B1', 'B2' ..... 'B10,000', where each of these columns is a clustering key?
Which schema is best suited and will give optimal performance? Please recommend.
NOTE: cqlsh 5.0.1
Based on what you've described and the documentation I've read, I would not create a collection with 10k elements. Instead I would have two tables: one with everything but the collection, and a second that uses the primary key values of the first table as its partition key columns, adding the element name (or whatever you can use to identify an individual element) as a clustering column.
So for a given query, if you wanted everything for a particular primary key value (including all elements), you'd query the first table with the primary key, grab whatever you need, then hit the second table as well, looping/fetching through all elements.
If the query only filters on the partition key (not the full primary key, i.e. it retrieves multiple rows), the first query would have to fetch all of the columns that make up the primary key for each row, and then you'd query the second table in a nested loop: one loop over each primary key record retrieved from the first table, and an inner loop to grab all elements for each of those records.
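A minimal sketch of that two-table layout, with hypothetical names since the question only calls the columns 'A' and 'B':
-- first table: everything except the big collection
CREATE TABLE things (
a int PRIMARY KEY,
other_data text
);
-- second table: one row per former set element, keyed by the first table's key
CREATE TABLE thing_elements (
a int,
element text,
PRIMARY KEY (a, element)
);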
That's probably the best way to go with this, and it's how I would tackle it.
Does that make sense?
-Jim
I'm looking into Cassandra for a potential upcoming project that I think it could be a good fit for. The one place where it is stumping me is a requirement around data retention. Basically we have a schema like this:
CREATE TABLE Things (
user_id int,
thing_id int,
a text static,
b text static,
.... more static fields
updated_at timestamp static,
type text,
subthing_id int,
PRIMARY KEY (user_id, thing_id, subthing_id)
)
In relational database terms I would say that a Thing belongs to a User and a Thing has many Subthings.
A Thing has various subthings associated with it that come in at later times; each one does a new insert, which in turn updates the appropriate static fields. We need to store each Thing for 30 days after the last time a subthing was inserted for that Thing. So for example, Thing A and Thing B get inserted. A subthing for Thing B is inserted a week later. Thing A is deleted 30 days after its initial insertion. Thing B (and all associated subthings) are deleted 7 days later.
As far as I can tell, I can't just insert with a TTL since I need to update the TTL of the other Thing rows sharing the same user_id and thing_id. I'm also not entirely sure how I would just run a DELETE command here since I'm not deleting by any of the keys. I believe the primary key is correct here since ALL queries will be based on the user_id (except the deletion which is determined by the updated_at).
My other concern is the idea of the tombstones. I have only read about them but the concern here is that I would be deleting potentially millions of these Things each day. Is that going to require daily compaction after the daily deletes are performed?
Update:
An alternative I have thought of since the original posting was having a second table that gets inserted to each time a subthing is added. It would look like:
CREATE TABLE Expirations (
expiry date,
user_id int,
thing_id int,
PRIMARY KEY (expiry, user_id, thing_id)
)
Here, expiry is the date on which the given user_id and thing_id should be deleted. This table would have to be updated as things are inserted into the Things table, and then I would run something each day that queries for rows where expiry is today and iterates over them, deleting the corresponding things from the Things table. I am not sure if this is considered the "Cassandra way", but it seems like it could work.
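A rough sketch of what that daily job's queries might look like (the date and the ids are just example values):
-- find everything that expires today
SELECT user_id, thing_id FROM Expirations WHERE expiry = '2016-01-15';
-- then, for each (user_id, thing_id) returned, delete the Thing and its subthings
DELETE FROM Things WHERE user_id = 42 AND thing_id = 7;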
This is an interesting challenge. I would use a map data type to map each thing_id to all its subthing_ids. I'd go for something like:
CREATE TABLE Things (
partition_date timestamp,
insertion_date timestamp,
user_id int,
thing_map map<int,int>,
a text static,
b text static,
.... more static fields
updated_at timestamp static,
type text,
PRIMARY KEY (partition_date, insertion_date, user_id)
) WITH CLUSTERING ORDER BY (insertion_date DESC)
Here I added a new field insertion_date that should hold exactly the insertion date, and a new field partition_date that becomes the new (and only) PARTITION KEY; it should store a truncation of the insertion_date field, just to avoid some hotspots (I'm assuming you can simply query based on a day field due to your TTL requirements; if you need to query on the user_id field things are a bit different). I recently answered similar questions about this modeling problem here and here, so have a look at those for more information about the technique used (it's called bucketing).
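For illustration, a write under this schema might look like the following, where partition_date is just insertion_date truncated to the day (all values are made up):
INSERT INTO Things (partition_date, insertion_date, user_id, thing_map)
VALUES ('2016-01-15', '2016-01-15 10:23:54', 42, {1001: 1})
USING TTL 2592000;   -- 30 days
-- and a whole day's bucket can be read back, newest first:
SELECT * FROM Things WHERE partition_date = '2016-01-15';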
Then there's the thing_map, which is the core of your problem. Pushing a new object into the map should reset the TTL for that map entirely, so that could give you exactly the desired behavior. Note that the TTL will remove the field only, not the entire row; you'll simply need to test whether it's null or not.
Finally, the tombstone behavior is a problem you're going to have to face. If you can afford a complete row rewrite, so that instead of updating only the map field you upsert the whole row at once, you'd get a delete at the partition level, and the "reverse time series" I've modeled with the clustering key should take care of that without too many problems.
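To make the two write styles concrete, here is a sketch under the schema above (all values are placeholders): appending just a new map entry versus rewriting the whole row with a fresh TTL.
-- appending only the new map entry:
UPDATE Things USING TTL 2592000
SET thing_map = thing_map + {1002: 7}
WHERE partition_date = '2016-01-15'
AND insertion_date = '2016-01-15 10:23:54'
AND user_id = 42;
-- rewriting the whole row at once instead:
INSERT INTO Things (partition_date, insertion_date, user_id, thing_map)
VALUES ('2016-01-15', '2016-01-15 10:23:54', 42, {1001: 1, 1002: 7})
USING TTL 2592000;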
How can I update an entire table and set a TTL for every entry?
Current Scenario (Cassandra 2.0.11):
table:
CREATE TABLE external_users (
external_id text,
type int,
user_id text,
PRIMARY KEY (external_id, type)
)
Currently there are ~40 million entries in this table and I want to add a TTL of, let's say, 86,400 seconds (1 day).
It's no problem to write new entries with USING TTL 86400, or to UPDATE current entries, but how do I apply a TTL to every already existing entry?
My idea was to select all the data and update every single row with a little script. I was just wondering if there is an easier way to achieve this (because even with batch updates this is going to take a while and be a big effort).
Thanks in advance
There is no way to alter the TTL of existing data in C*. The TTL is just an internal column attribute which is written, together with all the other column data, into an immutable SSTable. A quote from the docs:
If you want to change the TTL of expiring data, you have to re-insert the data with a new TTL. In Cassandra, the insertion of data is actually an insertion or update operation, depending on whether or not a previous version of the data exists.
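So the approach you described is the way to go: read each row back and re-insert it with the desired TTL. A minimal sketch of the per-row re-insert (the values are placeholders):
INSERT INTO external_users (external_id, type, user_id)
VALUES ('some_external_id', 1, 'some_user_id')
USING TTL 86400;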
I need to store the latest updates that need to be pushed to users' newsfeed pages in a Cassandra table for later retrieval, and my table's schema is as follows:
CREATE TABLE newsfeed (user_name text,
post_id bigint,
post_type text,
favorited boolean,
shared boolean,
own boolean,
date timestamp,
PRIMARY KEY (user_name,date,post_id,post_type) );
The first three columns (user_name, post_id, and post_type) in combination build the actual primary key of the table; however, since I wanted to ORDER the SELECT queries on this table based on the "date" of the rows, I placed the date column into the primary key fields as the "second" entry (did I have to do this?).
When I want to delete a row by giving only "user_name, post_id, and post_type" as follow:
DELETE FROM newsfeed WHERE user_name='pooria' and post_id=36 and post_type='p';
I will get the following error:
Bad Request: Missing PRIMARY KEY part date since post_id is set
I need the date column to be part of the primary key since I want to use it in my ORDER BY clauses, but on the other hand I have to delete some rows without knowing their "date" values!
So how are such problems tackled in Cassandra? Should I fix my data model and use a different schema for this job?
DataStax's Chief Evangelist Patrick McFadden posted an article demonstrating a few time series modeling patterns. Definitely makes for a good read, and should be of some help to you: Getting Started with Time Series Data Modeling.
I think your table is just fine. However, with the way that composite primary keys work in Cassandra, you cannot skip primary key components in a query. So if you do end up needing to query data by user_name, post_id, and/or post_type (without date), you should create a table specifically for that query, one which does not include date in the primary key.
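For example, a hypothetical second table keyed the way you want to delete (the table name is made up, and date is kept as a regular column so you can look it up and then delete from the main newsfeed table):
CREATE TABLE newsfeed_by_post (
user_name text,
post_id bigint,
post_type text,
date timestamp,
PRIMARY KEY (user_name, post_id, post_type)
);
-- look up the missing date, then issue the delete against the main newsfeed table
SELECT date FROM newsfeed_by_post
WHERE user_name = 'pooria' AND post_id = 36 AND post_type = 'p';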
I will, however, say that in general, creating a table which will process regular delete operations is not a good idea. In fact, I'm pretty sure that has been classified as a Cassandra "anti-pattern." Data isn't really deleted from Cassandra; it is tombstoned. Tombstones are reconciled at compaction time (assuming the tombstone threshold time has been met), and having too many of them has been known to cause performance issues.
If you read the article I linked above, go down to the section named "Time Series Pattern 3." You will notice that the INSERT statements are run with the USING TTL clause. This gives the data a time-to-live in seconds, after which it will "quietly disappear." For instance, if you wanted to keep your data around for 24 hours (86400 seconds) you could do something like this:
INSERT INTO newsfeed (...) VALUES (...) USING TTL 86400
Using the TTL feature is a preferable alternative to regular cleansing by DELETE.
I was reading the following article about Cassandra:
http://www.ebaytechblog.com/2012/07/16/cassandra-data-modeling-best-practices-part-1/#.UzIcL-ddVRw
and it seemed to imply you can have varying column keys in Cassandra for a given row key. Is that true? And if it's true, how do you allow for varying column keys?
The reason I think this might be true is that, say, we have a user who can like many items, and we simply want the userId to be the row key. We let this row key (userId) map to all the items that specific user might like. Each user might like a different number of items. Therefore, if we could have multiple column keys, one for each itemId a user likes, then we could solve the problem that way.
So, is it possible to have a varying number of Cassandra column keys for a specific row key? (And how do you do it?)
Providing an example and/or some cql code would be awesome!
The thing that is confusing me is that I have seen some .cql files, and they define the keyspace and tables beforehand; it seems pretty inflexible as to how to make it dynamic, i.e. allow it to have additional columns as we please. For example:
CREATE TABLE IF NOT EXISTS results (
test blob,
tid timeuuid,
result text,
PRIMARY KEY(test, tid)
);
How can this even allow growing columns? Don't we need to specify the names beforehand anyway? Or add custom columns as the application desires?
Yes, you can have a varying number of columns per row_key. From a relational perspective it's not obvious, but tid is not a fixed column in the storage sense: it acts as a placeholder for the variable column key. Every distinct tid value written under the same test value becomes, at the storage level, a new column in that row. Note in the INSERT statements below how the row_key stays the same while the column_key (tid) varies freely.
CREATE TABLE IF NOT EXISTS results (
test blob,
tid timeuuid,
result text,
PRIMARY KEY(test, tid)
);
So in your example, you need to identify the row_key, column_key, and payload of the table.
The primary key contains both the row_key and column_key.
test is your row_key.
tid is your column_key.
result is your payload.
The following inserts are all valid:
INSERT INTO your_keyspace.results (test, tid, result)
VALUES (textAsBlob('row_key_1'), a4a70900-24e1-11df-8924-001ff3591711, 'blob_1');
INSERT INTO your_keyspace.results (test, tid, result)
VALUES (textAsBlob('row_key_1'), a4a70900-24e1-11df-8924-001ff3591712, 'blob_2');
-- notice that the column_key (tid) changed but the row_key (test) remained the same
INSERT INTO your_keyspace.results (test, tid, result)
VALUES (textAsBlob('row_key_2'), a4a70900-24e1-11df-8924-001ff3591711, 'blob_3');
See here
Have you thought of exploring collection support in Cassandra for handling such relations in a colocated way (e.g. on the same data node)?
Not sure if it helps, but what about keeping the user id as the row key and a map containing the item id as the key and some value?
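A rough, hypothetical sketch of that map-based idea (all names and values are made up for illustration):
CREATE TABLE user_likes (
user_id int PRIMARY KEY,
liked_items map<int, timestamp>   -- item id -> when the user liked it
);
UPDATE user_likes SET liked_items[42] = '2016-01-15 10:23:54' WHERE user_id = 1;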
-Vivel