Cassandra: Searching for NULL values

I have a table MACRecord in Cassandra as follows:
CREATE TABLE has.macrecord (
macadd text PRIMARY KEY,
position int,
record int,
rssi1 float,
rssi2 float,
rssi3 float,
rssi4 float,
rssi5 float,
timestamp timestamp
)
I have 5 different nodes, each updating a single column based on its number, i.e. node 1 updates only rssi1, node 2 updates only rssi2, etc. This evidently creates null values for the other columns.
I cannot seem to find a query which will give me only those rows that are not null. Specifically, I have referred to this post.
I want to be able to query, for example, SELECT * FROM MACRecord WHERE RSSI1 != NULL as in MySQL. However, it seems that neither null values nor comparison operators such as != are supported in CQL.
Is there an alternative to putting NULL values, or a special flag? I am inserting floats, so unlike strings I cannot insert something like ''. What is a possible workaround for this problem?
Edit:
My data model in MySQL was like this:
+-----------+--------------+------+-----+-------------------+-----------------------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+--------------+------+-----+-------------------+-----------------------------+
| MACAdd | varchar(17) | YES | UNI | NULL | |
| Timestamp | timestamp | NO | | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
| Record | smallint(6) | YES | | NULL | |
| RSSI1 | decimal(5,2) | YES | | NULL | |
| RSSI2 | decimal(5,2) | YES | | NULL | |
| RSSI3 | decimal(5,2) | YES | | NULL | |
| RSSI4 | decimal(5,2) | YES | | NULL | |
| RSSI5 | decimal(5,2) | YES | | NULL | |
| Position | smallint(6) | YES | | NULL | |
+-----------+--------------+------+-----+-------------------+-----------------------------+
Each node (1-5) was querying MySQL based on its number, for example node 1: "SELECT * FROM MACRecord WHERE RSSI1 IS NOT NULL"
I updated my data model in Cassandra as follows, so that rssi1-rssi5 are now text (varchar) types.
CREATE TABLE has.macrecord (
macadd text PRIMARY KEY,
position int,
record int,
rssi1 text,
rssi2 text,
rssi3 text,
rssi4 text,
rssi5 text,
timestamp timestamp
)
I was thinking that each node would initially insert the string 'NULL' for a record, and when actual rssi data arrives it would simply replace the 'NULL' string. This would avoid having tombstones, and to the user the values would more or less appear not to be valid pieces of data since they are flagged 'NULL'.
However, I am still puzzled as to how I will retrieve results like I did in MySQL. There is no != operator in Cassandra. How can I write a query which will give me a result set like "SELECT * FROM has.macrecord WHERE rssi1 != 'NULL'"?

You can only select rows in CQL based on the PRIMARY KEY fields, which by definition cannot be null. This also applies to secondary indexes. So I don't think Cassandra will be able to do the filtering you want on the data fields. You could select on some other criteria and then write your client to ignore rows that had null values.
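For example, a minimal sketch of that client-side filtering using the DataStax Python driver (the contact point is an assumption; adjust it to your cluster):

from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect('has')
rows = session.execute('SELECT * FROM macrecord')
# Unset CQL columns come back as None, so drop those rows client-side
with_rssi1 = [row for row in rows if row.rssi1 is not None]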
Or you could create a different table for each rssiX value, so that none of them would be null.
If you are only interested in some kind of aggregation, then the null values are treated as zero. So you could do something like this:
SELECT sum(rssi1) FROM macrecord WHERE macadd='someadd';
The sum() function is available in Cassandra 2.2.
You might also be able to do some kind of trick with a user defined function/aggregate, but I think it would be simpler to have multiple tables.

Related

Efficiently update rows of a postgres table from another table in another database based on a condition in a common column

I have two pandas DataFrames:
df1 from database A with connection parameters {"host":"hostname_a","port": "5432", "dbname":"database_a", "user": "user_a", "password": "secret_a"}. The column key is the primary key.
df1:
| | key | create_date | update_date |
|---:|------:|:-------------|:--------------|
| 0 | 57247 | 1976-07-29 | 2018-01-21 |
| 1 | 57248 | | 2018-01-21 |
| 2 | 57249 | 1992-12-22 | 2016-01-31 |
| 3 | 57250 | | 2015-01-21 |
| 4 | 57251 | 1991-12-23 | 2015-01-21 |
| 5 | 57262 | | 2015-01-21 |
| 6 | 57263 | | 2014-01-21 |
df2 from database B with connection parameters {"host": "hostname_b","port": "5433", "dbname":"database_b", "user": "user_b", "password": "secret_b"}. The column id is the primary key (these values are originally the same as the ones in the column key in df1; it's only a renaming of the primary key column of df1).
df2:
| | id | create_date | update_date | user |
|---:|------:|:-------------|:--------------|:------|
| 0 | 57247 | 1976-07-29 | 2018-01-21 | |
| 1 | 57248 | | 2018-01-21 | |
| 2 | 57249 | 1992-12-24 | 2020-10-11 | klm |
| 3 | 57250 | 2001-07-14 | 2019-21-11 | ptl |
| 4 | 57251 | 1991-12-23 | 2015-01-21 | |
| 5 | 57262 | | 2015-01-21 | |
| 6 | 57263 | | 2014-01-21 | |
Notice that row[2] and row[3] in df2 have more recent update_date values (2020-10-11 and 2019-21-11 respectively) than their counterparts in df1 (where id = key), because their create_date has been modified (by the given users).
I would like to update the rows of df1 (in concrete terms, the create_date and update_date values) where update_date in df2 is more recent than its original value in df1 (for the same primary keys).
This is how I'm tackling this for the moment, using sqlalchemy and psycopg2 + the .to_sql() method of pandas' DataFrame:
import psycopg2
from sqlalchemy import create_engine

# creator must be a callable that returns a new DBAPI connection
engine = create_engine(
    'postgresql+psycopg2://',
    creator=lambda: psycopg2.connect(**database_parameters_dictionary),
)

df1.update(df2)  # 1) maybe there is something better to do here?

with engine.connect() as connection:
    df1.to_sql(
        name="database_table_name",
        con=connection,
        schema="public",
        if_exists="replace",  # 2) maybe there is also something better to do here?
        index=True,
    )
The problem I have is that, according to the documentation, the if_exists argument can only do three things:
if_exists{‘fail’, ‘replace’, ‘append’}, default ‘fail’
Therefore, to update these two rows, I have to:
1) use the .update() method on df1 with df2 as an argument, together with
2) replace the whole table inside the .to_sql() method, which means "drop+recreate".
As the tables are really large (more than 500'000 entries), I have the feeling that this will need a lot of unnecessary work!
How could I efficiently update only those two newly updated rows? Do I have to generate custom SQL queries to compare the dates for each row and only take the ones that have really changed? But here again, I have the intuition that looping through all rows to compare the update dates will take "a lot" of time. What is the most efficient way to do that? (It would have been easier in pure SQL if the two tables were on the same host/database, but unfortunately that's not the case.)
Pandas can't do partial updates of a table, no. There is a longstanding open bug for supporting sub-whole-table-granularity updates in .to_sql(), but you can see from the discussion there that it's a very complex feature to support in the general case.
However, limiting it to just your situation, I think there's a reasonable approach you could take.
Instead of using df1.update(df2), put together an expression that yields only the changed records with their new values (I don't use pandas often, so I don't know this offhand); then iterate over the resulting dataframe and build the UPDATE statements yourself (or with the SQLAlchemy expression layer, if you're using that). Then use the connection to DB A to issue all the UPDATEs as one transaction. With an indexed PK, it should be about as fast as this could ever be expected to be.
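For instance, a hedged sketch of that approach (the column names follow the question's frames, and engine is the one already connected to DB A; the merge/filter step may need adjusting to your exact schema):

import pandas as pd
from sqlalchemy import text

# Align df2's primary key name with df1's and join the two frames
merged = df1.merge(df2.rename(columns={"id": "key"}),
                   on="key", suffixes=("_old", "_new"))

# Keep only the rows where df2 is strictly more recent
changed = merged[pd.to_datetime(merged["update_date_new"], errors="coerce") >
                 pd.to_datetime(merged["update_date_old"], errors="coerce")]

# Issue all UPDATEs as one transaction against database A
with engine.begin() as conn:
    for row in changed.itertuples(index=False):
        conn.execute(
            text("UPDATE database_table_name "
                 "SET create_date = :cd, update_date = :ud "
                 "WHERE key = :pk"),
            {"cd": row.create_date_new, "ud": row.update_date_new,
             "pk": row.key},
        )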
BTW, I don't think df1.update(df2) is exactly correct: from my reading, that would update all rows with any differing fields, not just those where update_date is more recent. But it's a moot point if update_date in df2 is only ever more recent than in df1.

Order by in materialized view doesn't sort the results

I have a table with a structure like this:
CREATE TABLE kaefko.se_vi_f55dfeebae00d2b3 (
value text PRIMARY KEY,
id text,
popularity bigint);
With data that looks like this:
value | id | popularity
--------+------------------+------------
rally | 4eff16cb91f96cd6 | 2
reddit | 11aa39686ed66ba5 | 3
red | 552d7e95af481415 | 1
really | 756bfa499965863c | 1
right | c5850c6b08f7966b | 1
redis | 7f1d251f399442d7 | 1
And I've created a materialized view that should sort these values by popularity, from biggest to smallest:
CREATE MATERIALIZED VIEW kaefko.se_vi_f55dfeebae00d2b3_by_popularity AS
SELECT *
FROM kaefko.se_vi_f55dfeebae00d2b3
WHERE popularity IS NOT null
PRIMARY KEY (value, popularity)
WITH CLUSTERING ORDER BY (popularity DESC);
But the data in the materialized view looks like this:
value | popularity | id
--------+------------+------------------
rally | 2 | 4eff16cb91f96cd6
reddit | 3 | 11aa39686ed66ba5
really | 1 | 756bfa499965863c
right | 1 | c5850c6b08f7966b
redis | 1 | 7f1d251f399442d7
As you can see there are two main issues:
Data is not sorted as defined in the materialized view
Only part of the data appears in the materialized view
I'm not very experienced in Cassandra and I've already spent hours trying to find the reason why this happens, to no avail. Could somebody please help me? Thank you <3
I'm using ScyllaDB 4.1.9-0 and cqlsh shows this:
[cqlsh 5.0.1 | Cassandra 3.0.8 | CQL spec 3.3.1 | Native protocol v4]
Alex's comment is 100% correct: the order is within the partition.
PRIMARY KEY (value, popularity)
WITH CLUSTERING ORDER BY (popularity DESC);
This means that the ordering by popularity is descending only for rows where the 'value' field is the same. If I were to alter the data you used, to show what this would look like as an example, you would get the following:
value | popularity | id
--------+------------+------------------
rally | 3 | 4eff16cb91f96cd6
rally | 2 | 11aa39686ed66ba5
really | 3 | 756bfa499965863c
really | 2 | c5850c6b08f7966b
really | 1 | 7f1d251f399442d7
The order is on a per partition key basis, not globally ordered.
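If you need a global ordering, a hedged workaround (assuming the DataStax Python driver and a table small enough to fetch whole; the contact point is an assumption) is to sort on the client:

from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect('kaefko')
rows = session.execute('SELECT * FROM se_vi_f55dfeebae00d2b3_by_popularity')
# Cassandra/Scylla only order within a partition, so impose the
# global order here; fine for small tables, costly for large ones
by_popularity = sorted(rows, key=lambda r: r.popularity, reverse=True)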

How to retrieve data from Cassandra using an IN query with a given order?

I'm selecting data from a Cassandra database using a query. It works fine, but how do I get the data back in the same order as given in the IN query?
I have created a table with this data:
id | n | p | q
----+---+---+------
5 | 1 | 2 | 4
10 | 2 | 4 | 3
11 | 1 | 2 | null
I am trying to select data using
SELECT *
FROM malleshdmy
WHERE id IN ( 11,10,5)
But it produces the data in the same order as it is stored:
id | n | p | q
----+---+---+------
5 | 1 | 2 | 4
10 | 2 | 4 | 3
11 | 1 | 2 | null
Please help me with this issue.
I want the data ordered as 11, 10 and 5.
If id is the partition key, then it's impossible: rows are sorted only by the clustering columns, and data for different partition keys can be returned in arbitrary order (though sorted within each partition).
You need to sort the data yourself.
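For example, a short sketch with the DataStax Python driver (the contact point and keyspace are assumptions) that re-orders the result set to match the IN list:

from cassandra.cluster import Cluster

ids = [11, 10, 5]  # the order you want the rows back in
session = Cluster(['127.0.0.1']).connect('mykeyspace')
rows = session.execute('SELECT * FROM malleshdmy WHERE id IN (11, 10, 5)')
# Cassandra returns partitions in token order, so re-order here
ordered = sorted(rows, key=lambda row: ids.index(row.id))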
Since id is your partition key, your data is actually being sorted by the token of id, not the values themselves:
cqlsh:testid> SELECT id,n,p,q,token(id) FROM table;
id | n | p | q | system.token(id)
----+---+---+------+----------------------
5 | 1 | 2 | 4 | -7509452495886106294
10 | 2 | 4 | 3 | -6715243485458697746
11 | 1 | 2 | null | -4156302194539278891
Because of this, you don't have any control over how the partition key is sorted.
In order to sort your data by id, you need to make id a clustering column rather than a partition key. Your data will still need a partition key, however, and this will always be sorted by token.
If you decide to make id a clustering column, you will need to specify that you want descending order in the CLUSTERING ORDER BY clause:
CREATE TABLE clusterTable (
partition type, //partition key with a type to be specified
id INT,
n INT,
p INT,
q INT,
PRIMARY KEY((partition), id))
WITH CLUSTERING ORDER BY (id DESC);
This link is very helpful in discussing how ordering works in Cassandra: https://www.datastax.com/dev/blog/we-shall-have-order

Create Cassandra CQL with IN and ORDER BY

I need a CQL query to get all rows from the table based on a set of the current user's friends (I'm using IN for that) and sort them by created date.
I've been trying to play with the partition key and clustering key, but have no ideas.
Here is my Cassandra table:
CREATE TABLE chat.news_feed(
id_news_feed uuid,
id_user_sent uuid,
first_name text,
last_name text,
security int,
news_feed text,
image blob,
image_preview text,
image_name text,
image_length int,
image_resolution text,
is_image int,
created_date timestamp,
PRIMARY KEY ((id_news_feed, id_user_sent), created_date))
WITH CLUSTERING ORDER BY (created_date DESC) AND comment = 'List of all news feed by link id';
and here is my CQL (formed in Java):
SELECT JSON id_news_feed, first_name, last_name, id_user_sent, news_feed, image_name, image_preview, image_length, created_date, is_image, image_resolution FROM chat.news_feed WHERE id_user_sent in (b3306e3f-1f1d-4a87-8a64-e22d46148316,b3306e3f-1f1d-4a87-8a64-e22d46148316) ALLOW FILTERING;
I could not run it because there is no partition key in the WHERE part of the CQL.
Is there any way I could get all rows created by a set of users, ordered by date (I have tried creating the table in different ways, but no results yet)?
Thank you!
Unlike in relational databases, here you will probably need to denormalize your tables. First of all, you cannot efficiently query everything from a single table, and Cassandra does not support joins natively. I suggest splitting your table into several.
Let's start with the friends: the current user id should be part of the primary key and the friends should go as a clustering column.
CREATE TABLE chat.user_friends (
user_id uuid,
friend_id uuid,
first_name text,
last_name text,
security int,
PRIMARY KEY ((user_id), friend_id));
Now you can find the friends of each particular user by querying as follows:
SELECT * FROM chat.user_friends WHERE user_id = 'a001-...';
or
SELECT * FROM chat.user_friends WHERE user_id = 'a001-...' and friend_id in ('a121-...', 'a156-...', 'a344-...');
Next let's take care of the news feed: before putting the remaining columns into this table, I'd think about the desired query against it. The news feed needs to be filtered by the user ids with an IN listing and at the same time be sortable by time. So we put the created_date timestamp as a clustering key and the friends' user_id as the partitioning key. Note that the timestamps will be sorted per user_id, not globally (you can re-sort those on the client side, as sketched at the end of this answer). What's really important is to keep news_feed_id out of the primary key. This column may still contain a uuid, which is unique, but only as long as we don't want to query this table for a particular news feed by id. For that purpose we'd require a separate table (denormalization of the data) or a materialized view (which I will not cover in this answer, but which is quite a nice solution for some types of denormalization, introduced in Cassandra 3.0).
Here is the updated table:
CREATE TABLE chat.news_feed(
id_user_sent uuid,
first_name text,
last_name text,
security int,
id_news_feed uuid,
news_feed text,
image blob,
image_preview text,
image_name text,
image_length int,
image_resolution text,
is_image int,
created_date timestamp,
PRIMARY KEY ((id_user_sent), created_date))
WITH CLUSTERING ORDER BY (created_date DESC) AND comment = 'List of all news feed by link id';
Some example dataset:
cqlsh:ks_test> select * from news_feed ;
id_user_sent | created_date | first_name | id_news_feed | image | image_length | image_name | image_preview | image_resolution | is_image | last_name | news_feed | security
--------------------------------------+---------------------------------+------------+--------------------------------------+-------+--------------+------------+---------------+------------------+----------+-----------+-----------+----------
01b9b9e8-519c-4578-b747-77c8d9c4636b | 2017-02-23 00:00:00.000000+0000 | null | fd25699c-78f1-4aee-913a-00263912fe18 | null | null | null | null | null | null | null | null | null
9bd23d16-3be3-4e27-9a47-075b92203006 | 2017-02-21 00:00:00.000000+0000 | null | e5d394d3-b67f-4def-8f1e-df781130ea22 | null | null | null | null | null | null | null | null | null
6e05257d-9278-4353-b580-711e62ade8d4 | 2017-02-25 00:00:00.000000+0000 | null | ec34c655-7251-4af8-9718-3475cad18b29 | null | null | null | null | null | null | null | null | null
6e05257d-9278-4353-b580-711e62ade8d4 | 2017-02-22 00:00:00.000000+0000 | null | 5342bbad-0b55-4f44-a2e9-9f285d16868f | null | null | null | null | null | null | null | null | null
6e05257d-9278-4353-b580-711e62ade8d4 | 2017-02-20 00:00:00.000000+0000 | null | beea0c24-f9d6-487c-a968-c9e088180e73 | null | null | null | null | null | null | null | null | null
63003200-91c0-47ba-9096-6ec1e35dc7a0 | 2017-02-21 00:00:00.000000+0000 | null | a0fba627-d6a7-463c-a00c-dd0472ad10c5 | null | null | null | null | null | null | null | null | null
And the filtered one:
cqlsh:ks_test> select * from news_feed where id_user_sent in (01b9b9e8-519c-4578-b747-77c8d9c4636b, 6e05257d-9278-4353-b580-711e62ade8d4) and created_date >= '2017-02-22';
id_user_sent | created_date | first_name | id_news_feed | image | image_length | image_name | image_preview | image_resolution | is_image | last_name | news_feed | security
--------------------------------------+---------------------------------+------------+--------------------------------------+-------+--------------+------------+---------------+------------------+----------+-----------+-----------+----------
01b9b9e8-519c-4578-b747-77c8d9c4636b | 2017-02-25 00:00:00.000000+0000 | null | 26dc0952-0636-438f-8a26-6a3fef4fb808 | null | null | null | null | null | null | null | null | null
01b9b9e8-519c-4578-b747-77c8d9c4636b | 2017-02-23 00:00:00.000000+0000 | null | fd25699c-78f1-4aee-913a-00263912fe18 | null | null | null | null | null | null | null | null | null
6e05257d-9278-4353-b580-711e62ade8d4 | 2017-02-25 00:00:00.000000+0000 | null | ec34c655-7251-4af8-9718-3475cad18b29 | null | null | null | null | null | null | null | null | null
6e05257d-9278-4353-b580-711e62ade8d4 | 2017-02-22 00:00:00.000000+0000 | null | 5342bbad-0b55-4f44-a2e9-9f285d16868f | null | null | null | null | null | null | null | null | null
P.S. As you might notice, we got rid of the ALLOW FILTERING clause. Don't use ALLOW FILTERING in any application, as it carries a significant performance penalty. It is only suitable for looking up some small chunk of data scattered around different partitions.
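As a hedged illustration of the client-side re-sort mentioned above (using the DataStax Python driver; the contact point is an assumption, and the friend ids are taken from the example data):

import heapq
import uuid
from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect('chat')
friend_ids = [uuid.UUID('01b9b9e8-519c-4578-b747-77c8d9c4636b'),
              uuid.UUID('6e05257d-9278-4353-b580-711e62ade8d4')]

# Each partition is already sorted by created_date DESC, so a k-way
# merge yields a single globally descending feed
per_friend = [list(session.execute(
    'SELECT * FROM news_feed WHERE id_user_sent = %s', (fid,)))
    for fid in friend_ids]
feed = list(heapq.merge(*per_friend,
                        key=lambda r: r.created_date, reverse=True))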

Range query - Data modeling for time series in CQL Cassandra

I have a table like this:
CREATE TABLE test (
partitionkey text,
rowkey text,
date timestamp,
policyid text,
policyname text,
PRIMARY KEY (partitionkey, rowkey));
with some data:
partitionkey | rowkey | policyid | policyname | date
p1 | r1 | pl1 | plicy1 | 2007-01-02 00:00:00+0000
p1 | r2 | pl2 | plicy2 | 2007-01-03 00:00:00+0000
p2 | r3 | pl3 | plicy3 | 2008-01-03 00:00:00+0000
I want to be able to find:
1/ data from a particular partition key
2/ data from a particular partition key & rowkey
3/ Range query on date given a partitionkey
1/ and 2/ are trivial:
select * from test where partitionkey='p1';
partitionkey | rowkey | policyid | policyname | date
p1 | r1 | pl1 | plicy1 | 2007-01-02 00:00:00+0000
p1 | r2 | pl2 | plicy2 | 2007-01-03 00:00:00+0000
but what about 3/?
Even with an index it doesn't work:
create index i1 on test (date);
select * from test where partitionkey='p1' and date =
'2007-01-02';
partitionkey | rowkey | policyid | policyname | date
p1 | r1 | pl1 | plicy1 | 2007-01-02 00:00:00+0000
but
select * from test where partitionkey='p1' and
date > '2007-01-02';
Bad Request: No indexed columns present in
by-columns clause with Equal operator
Any idea?
thanks,
Matt
CREATE TABLE test ( partitionkey text, rowkey text, date timestamp,
policyid text, policyname text, primary key (partitionkey, rowkey));
First of all, you really should use more descriptive column names instead of partitionkey and rowkey (and even date, for that matter). By looking at those column names, I really can't tell what kind of data this table is supposed to be indexed by.
select * from test where partitionkey='p1' and date > '2007-01-02';
Bad Request: No indexed columns present in by-columns clause with Equal operator
As for this issue, try making your "date" column a part of your primary key:
primary key (partitionkey, rowkey, date)
Once you do that, I think your date range queries will function appropriately. One caveat: a range restriction is only allowed on a clustering column when the clustering columns before it are restricted by equality, so with this key you would also have to pin rowkey in the WHERE clause. If you want to query by partitionkey and a date range alone, put date first: primary key (partitionkey, date, rowkey).
For more information on this, check out DataStax Academy's (free) course called Java Development With Apache Cassandra. Session 5, Module 104 discusses how to model time series data and that should help you out.
