Will this QLDB SELECT query using an index be optimized or not? - node.js

I have a table with several columns and a few indexes. I am running this query to get one user's data, not all users' data.
The user field is UID.
The time field is TimeValue (I store time as a number so I can compare values).
The data I want is identified by the Cause column.
This is my query. Will it do a table scan, or will it only read the rows where UID matches the given value?
UID, TimeValue and Cause are indexed as well.
SELECT Cause, Cause_Amount, UID FROM Contribution WHERE UID = 'u5JvslEo9DbQ7hcq4vkM74dWlxr2' AND TimeValue > 1620414948000 AND ( Cause = 'cleanAir' OR Cause = 'safeWater')

One way to check what QLDB is doing is to look at the ReadIOs value that comes back in your query stats. If the ReadIOs matches the total number of documents in the table, the query is likely doing a scan; if it is lower, you are using the index.
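For reference, here is a minimal Node.js sketch of reading the consumed ReadIOs with the amazon-qldb-driver-nodejs driver (the ledger name is a placeholder, and region/credentials are assumed to come from the environment; check getConsumedIOs() against your driver version):

const { QldbDriver } = require("amazon-qldb-driver-nodejs");

// "myLedger" is a placeholder ledger name.
const driver = new QldbDriver("myLedger");

async function checkReadIOs() {
  await driver.executeLambda(async (txn) => {
    const result = await txn.execute(
      "SELECT Cause, Cause_Amount, UID FROM Contribution " +
      "WHERE UID = ? AND TimeValue > ? AND (Cause = 'cleanAir' OR Cause = 'safeWater')",
      "u5JvslEo9DbQ7hcq4vkM74dWlxr2",
      1620414948000
    );
    // getConsumedIOs() may return null if stats are unavailable.
    const ioUsage = result.getConsumedIOs();
    console.log("ReadIOs:", ioUsage ? ioUsage.getReadIOs() : "unknown");
  });
  driver.close();
}

checkReadIOs();

If the printed ReadIOs is close to the document count of Contribution, the query is scanning rather than using the index.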

Related

Amazon QLDB SELECT query to search specific rows with very few ReadIOs

I have a table with the following columns
UserID (index),
SID (index),
Cause (index),
TimeValue (index),
Amount
I want to query the Cause for a specific user. That is, I want rows where Cause equals, say, 'water', but only for the one user whose ID is 'someId'.
The query that I have right now is doing a full table scan
SELECT Cause, Cause_Amount, UserID FROM Contribution WHERE UserID = 'u5JvslEo9DbQ7hcq4vkM74dWlxr2' AND TimeValue > 1620414948000 AND ( Cause = 'cleanAir' OR Cause = 'safeWater')
So the approach should be: target the given UserID first, then check TimeValue, then Cause, and return the result.
I hope that makes sense
Generally speaking, you should put an index on your highest-cardinality document property, i.e., the property that is most unique among your data. If you have 10 users in your system but 1,000,000 documents in your Contribution table, a query filtering on UserID still has to scan roughly 100,000 documents (1,000,000 / 10) on average.
If you are seeing high read-IOs, it is likely because you have a small number of users but a large number of Contributions. It seems unlikely that Cause is high-cardinality, and QLDB does not yet (as of 2021-09-08) support indexed range queries, so the inequality on TimeValue probably won't help either.
Are there other properties on your document that you could create an index on with higher cardinality?

Cassandra Allow filtering

I have a table as below
CREATE TABLE test (
    day int,
    id varchar,
    start int,
    action varchar,
    PRIMARY KEY ((day), start, id)
);
I want to run this query
Select * from test where day=1 and start > 1475485412 and start < 1485785654
and action='accept' ALLOW FILTERING
Is this ALLOW FILTERING efficient?
I expect Cassandra to filter in this order:
1. By the partition key column (day)
2. By the range column (start), on the result of step 1
3. By the action column, on the result of step 2
So ALLOW FILTERING would not be a bad choice for this query.
When there are multiple filter parameters in the WHERE clause and the non-indexed column is the last one, how will the filtering work?
Please explain.
Is this ALLOW FILTERING efficient?
When you write "this" you mean in the context of your query and your model; however, the efficiency of an ALLOW FILTERING query depends mostly on the data it has to filter. Unless you show some real data, this is a hard question to answer.
I am expecting that cassandra will filter in this order...
Yeah, this is what will happen. However, the inclusion of an ALLOW FILTERING clause in the query usually indicates a poor table design, i.e., you're not following the Cassandra modeling guidelines (specifically the "one query <--> one table" rule).
As a solution, I can suggest including the action field in the clustering key, just before the start field, modifying your table definition:
CREATE TABLE test (
    day int,
    id varchar,
    start int,
    action varchar,
    PRIMARY KEY ((day), action, start, id)
);
You then would rewrite your query without any ALLOW FILTERING clause:
SELECT * FROM test WHERE day=1 AND action='accept' AND start > 1475485412 AND start < 1485785654
The only minor issue is that if one record "switches" action values, you cannot update the action field alone (because it is now part of the clustering key); you need to perform a delete with the old action value and an insert with the new one. But if you have Cassandra 3.0+, all this can be done with the help of the new Materialized View implementation. Have a look at the documentation for further information.
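As a sketch, the materialized view alternative would look something like this (the view name test_by_action is mine; the base table keeps its original primary key, so action stays updatable there):

CREATE MATERIALIZED VIEW test_by_action AS
    SELECT * FROM test
    WHERE day IS NOT NULL AND action IS NOT NULL
      AND start IS NOT NULL AND id IS NOT NULL
    PRIMARY KEY ((day), action, start, id);

You would then run the range query against test_by_action without ALLOW FILTERING, while updates go to the base table.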
In general ALLOW FILTERING is not efficient.
But in the end it depends on the size of the data you are fetching (the part for which Cassandra has to use ALLOW FILTERING) and the size of the data it is fetched from.
In your case, Cassandra does not need filtering up to:
By the range column (start) on the result of step 1
as you mentioned. But after that, it has to rely on filtering to find the data, which you allow in the query itself.
Now, keep the following in mind:
If your table contains, for example, 1 million rows and 95% of them have the requested value, the query will still be relatively efficient and you should use ALLOW FILTERING.
On the other hand, if your table contains 1 million rows and only 2 rows contain the requested value, your query is extremely inefficient: Cassandra will load 999,998 rows for nothing. If the query is used often, it is probably better to add an index on the action column.
So check this first. If it works in your favour, use ALLOW FILTERING.
Otherwise, it would be wise to add a secondary index on action.
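For reference, a secondary index in CQL is created like this (a sketch; whether a given query can then drop ALLOW FILTERING depends on which other restrictions remain in the WHERE clause):

CREATE INDEX ON test (action);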

How to do a query with a Cassandra counter table

I have a table in Cassandra as shown below:
CREATE TABLE remaining (
    owner varchar,
    buddy varchar,
    remain counter,
    PRIMARY KEY (owner, buddy)
);
Generally I do some increment/decrement operations on the remain field, using CQL like this:
update remaining set remain=remain + 1 where owner='userA' and buddy='userB';
update remaining set remain=remain + 1 where owner='userA' and buddy='userC';
....
Now I need to find all buddies for userA whose remain field is greater than zero. When I use:
select buddy,remain from remaining where owner='userA' and remain > 0;
gives me an error:
No indexed columns present in by-columns clause with Equal operator
How do I do this the Cassandra way?
The short answer to this is that you cannot do queries with conditionals on counter columns in Cassandra.
The reason behind this is that all Cassandra queries need to be modeled around the primary key of that particular table. Counter columns are not allowed as part of the primary key (their changing values would cause constant reorganization of the data on disk). Counter columns are instead used for tracking the state of a known piece of data, for example the number of times a photo has been up-voted. That count can be recalled quickly as long as we know which photo we are interested in. To actually sort photos by number of votes, you would need to perform an analytics-style query using Spark or Hadoop.
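A minimal CQL sketch of that counter pattern (the photo_votes table and its column names are mine):

CREATE TABLE photo_votes (
    photo_id uuid PRIMARY KEY,
    votes counter
);

-- The key is known, so increments and point reads are cheap:
UPDATE photo_votes SET votes = votes + 1
WHERE photo_id = 123e4567-e89b-12d3-a456-426614174000;

SELECT votes FROM photo_votes
WHERE photo_id = 123e4567-e89b-12d3-a456-426614174000;

What you cannot write is WHERE votes > 0, for the reasons above.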

get_range_slices and CQL query handling, need for ALLOW FILTERING

I have a following CQL table (a bit simplified for clarity):
CREATE TABLE test_table (
    user uuid,
    app_id ascii,
    domain_id ascii,
    props map<ascii,blob>,
    PRIMARY KEY ((user), app_id, domain_id)
)
The idea is that this table would contain many users (i.e. rows, say, dozens of millions). For each user there would be a few domains of interest and there would be a few apps per domain. And for each user/domain/app there would be a small set of properties.
I need to scan this entire table and load its contents in chunks for given app_id and domain_id. My idea was to use the TOKEN function to be able to read the whole data set in several iterations. So, something like this:
SELECT props FROM test_table WHERE app_id='myapp1'
AND domain_id='mydomain1'
AND TOKEN(user) > -9223372036854775808
AND TOKEN(user) < 9223372036854775807;
I was assuming that this query would be efficient because I specify the range of the row keys and by specifying the values of the clustering keys I effectively specify the column range. But when I try to run this query I get the error message "Bad Request: Cannot execute this query as it might involve data filtering and thus may have unpredictable performance. If you want to execute this query despite the performance unpredictability, use ALLOW FILTERING".
I have limited experience with Cassandra, and I assumed that this sort of query would map to a get_range_slices() call, which accepts a slice predicate (i.e., the range of columns defined by my app_id/domain_id values) and the key range defined by my token range. It seems either I misunderstand how this sort of query is handled, or I misunderstand the efficiency of the get_range_slices() call.
To be more specific, my questions are:
- if this data model does make sense for the kind of query I have in mind
- if this query is expected to be efficient
- if it is efficient, then why am I getting this error message asking me to ALLOW FILTERING
My only guess about the last one was that the rows that do not have the given combination of app_id/domain_id would need to be skipped from the result.
--- update ----
Thanks for all the comments. I have been doing more research on this and there is still something I do not fully understand.
In the given structure, what I am trying to get is like a rectangular area of my data set (assuming all rows have the same columns), where the top and bottom of the rectangle are determined by the token range and the left/right sides are defined by the column range (slice). So this should naturally transform into a get_range_slices request. My understanding (correct me if I am wrong) is that the reason CQL requires the ALLOW FILTERING clause is that there will be rows that do not contain the columns I am looking for, so they will have to be skipped. And since nobody knows whether it will have to skip every second row or the first million rows before finding one that fits my criteria (in the given range), this is what causes the unpredictable latency and possibly even a timeout. Am I right? I have tried to write a test that does the same kind of query but using the low-level Astyanax API (over the same table; I had to read the data generated with CQL, which turned out to be quite simple), and this test does work, except that it returns keys with no columns where the row does not contain the slice of columns I am asking for. Of course I had to implement some kind of simple paging based on the starting token and a limit to fetch the data in small chunks.
Now I am wondering - again, considering that I would need to deal with dozens of millions of users: would it be better to partially "rotate" this table and organize it in something like this:
Row key: domain_id + app_id + partition no (something like hash(user) mod X)
Clustering key: column partition no (something like hash(user) >> 16 mod Y) + user
For the "column partition no"...I am not sure if it is really needed. I assume that if I go with this model I will have relatively small number of rows (X=1000..10000) for each domain + app combination. This will allow me to query the individual partitions, even in parallel if I want to. But (assuming the user is random UUID) for 100M users it will result in dozens or hundreds of thousands of columns per row. Is it a good idea to read one such a row in one request? It should created some memory pressure for Cassandra, I am sure. So maybe reading them in groups (say, Y=10..100) would be better?
I realize that what I am trying to do is not what Cassandra does well - reading "all" or large subset of CF data in chunks that can be pre-calculated (like token range or partition keys) for parallel fetching from different hosts. But I am trying to find a pattern that is the most efficient for such a use case.
By the way, the query like "select * from ... where TOKEN(user)>X and TOKEN(user)
Short answer
This warning means that Cassandra would have to read non-indexed data and filter out the rows that don't satisfy the criteria. If you add ALLOW FILTERING to the end of the query, it will work; however, it will scan a lot of data:
SELECT props FROM test_table
WHERE app_id='myapp1'
AND domain_id='mydomain1'
AND TOKEN(user) > -9223372036854775808
AND TOKEN(user) < 9223372036854775807
ALLOW FILTERING;
Longer explanation
In your example, the primary key consists of two parts: user is used as the partition key, and <app_id, domain_id> forms the remaining part. Rows for different users are distributed across the cluster, with each node responsible for a specific range of the token ring.
Rows on a single node are ordered by the hash of the partition key (token(user) in your example). The different rows for a single user are stored on a single node, sorted by the <app_id, domain_id> tuple.
So the primary key forms a tree-like structure: the partition key adds one level of hierarchy, and each remaining field of the primary key adds another. By default, Cassandra processes only queries that return all rows from a continuous range of the tree (or several ranges if you use the key IN (...) construct). If Cassandra would have to filter out some rows, ALLOW FILTERING must be specified.
Example queries that don't require ALLOW FILTERING:
SELECT * FROM test_table
WHERE user = 'user1';
//OK, returns all rows for a single partition key
SELECT * FROM test_table
WHERE TOKEN(user) > -9223372036854775808
AND TOKEN(user) < 9223372036854775807;
//OK, returns all rows for a continuous range of the token ring
SELECT * FROM test_table
WHERE user = 'user1'
AND app_id='myapp1';
//OK, the rows for specific user/app combination
//are stored together, sorted by domain_id field
SELECT * FROM test_table
WHERE user = 'user1'
AND app_id > 'abc' AND app_id < 'xyz';
//OK, since rows for a single user are sorted by app
Example queries that do require ALLOW FILTERING:
SELECT props FROM test_table
WHERE app_id='myapp1';
//Must scan all the cluster for rows,
//but return only those with specific app_id
SELECT props FROM test_table
WHERE user='user1'
AND domain_id='mydomain1';
//Must scan all rows having user='user1' (all app_ids),
//but return only those having specific domain
SELECT props FROM test_table
WHERE user='user1'
AND app_id > 'abc' AND app_id < 'xyz'
AND domain_id='mydomain1';
//Must scan the range of rows satisfying <user, app_id> condition,
//but return only those having specific domain
What to do?
In Cassandra it's not possible to create a secondary index on part of the primary key. There are a few options, each with its pros and cons:
Add a separate table that has primary key ((app_id), domain_id, user) and duplicate the necessary data in the two tables. It will allow you to query the necessary data for a specific app_id or <app_id, domain_id> combination (see the sketch after this list). If you need to query a specific domain across all apps, a third table is necessary. This approach is called materialized views.
Use some sort of parallel processing (Hadoop, Spark, etc.) to perform the necessary calculations for all app/domain combinations. Since Cassandra needs to read all the data anyway, there probably won't be much difference from computing a single pair. If the results for other pairs can be cached for later use, that will probably save some time.
Just use ALLOW FILTERING if the query performance is acceptable for your needs. Dozens of millions of partition keys is probably not too much for Cassandra.
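A minimal sketch of the first option (the table name test_table_by_app is mine):

CREATE TABLE test_table_by_app (
    app_id ascii,
    domain_id ascii,
    user uuid,
    props map<ascii,blob>,
    PRIMARY KEY ((app_id), domain_id, user)
);

SELECT props FROM test_table_by_app
WHERE app_id = 'myapp1' AND domain_id = 'mydomain1';

This query hits a single partition, so it needs neither a token-range scan nor ALLOW FILTERING (though a very large partition per app brings its own problems).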
Presuming you are using the Murmur3Partitioner (which is the right choice), you do not want to run range queries on the row key. This key is hashed to determine which node holds the row, and is therefore not stored in sorted order. Doing this kind of range query would therefore require a full scan.
If you want to do this query, you should store some known value as a sentinel for your row key, such that you can query for equality rather than range. From your data it appears that either app_id or domain_id would be a good choice, since it sounds like you always know these values when performing your query.

Counting rows in a sqlite db

I have a sqlite db on an ARM embedded platform running Linux with somewhat limited resources. Storage device is a microSD card. Sqlite version is 3.7.7.1. The application accessing sqlite is written in C++.
I want to know the number of rows in several tables in regular intervals. I currently use
select count(*) from TABLENAME;
to get this information. I'm having trouble with the performance: when the table sizes reach a certain point (~200K rows), I get a lot of system and iowait load every time I check the table sizes.
When I wrote this, I thought looking up the number of rows in a table would be fast, as the count is probably stored somewhere. But now I suspect that sqlite actually looks through all rows, and once I pass the point where the data no longer fits into the disk cache, I get a lot of IO load. This would roughly fit, given the db size and the available memory.
Can anyone tell me if sqlite behaves in the way I suspect?
Is there any way to get the number of table rows without producing this amount of load?
EDIT: plaes has asked about the table layout:
CREATE TABLE %s (timestamp INTEGER PRIMARY KEY, offset INTEGER, value NUMERIC);
Does this table have an integer index? If not, add one; without it, SQLite has to scan the whole table to count the items.
This is an excerpt of comments from SQLite code that implements COUNT() parsing and execution:
/* If isSimpleCount() returns a pointer to a Table structure, then
** the SQL statement is of the form:
**
** SELECT count(*) FROM <tbl>
**
** where the Table structure returned represents table <tbl>.
**
** This statement is so common that it is optimized specially. The
** OP_Count instruction is executed either on the intkey table that
** contains the data for table <tbl> or on one of its indexes. It
** is better to execute the op on an index, as indexes are almost
** always spread across less pages than their corresponding tables.
*/
[...]
/* Search for the index that has the least amount of columns. If
** there is such an index, and it has less columns than the table
** does, then we can assume that it consumes less space on disk and
** will therefore be cheaper to scan to determine the query result.
** In this case set iRoot to the root page number of the index b-tree
** and pKeyInfo to the KeyInfo structure required to navigate the
** index.
**
** (2011-04-15) Do not do a full scan of an unordered index.
*/
Also, you can get more information about your query with EXPLAIN QUERY PLAN.
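For example (a sketch; the exact plan output differs between SQLite versions):

sqlite> EXPLAIN QUERY PLAN SELECT count(*) FROM TABLENAME;
0|0|0|SCAN TABLE TABLENAME

A SCAN line over the table (or one of its indexes) confirms that a full traversal is happening.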
From all the information I gathered, count() apparently really does need to scan the table. As plaes pointed out, this is faster if the count is done on an integer-indexed column, but scanning that index is still needed.
What I do now is store the row count somewhere and increment / decrement it manually in the same transactions I use to do inserts and deletes to keep it consistent.
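The same bookkeeping can also be pushed into the database with triggers; here is a minimal sketch, assuming a data table named samples (the row_counts table and all names are mine):

CREATE TABLE row_counts (table_name TEXT PRIMARY KEY, n INTEGER NOT NULL);
INSERT INTO row_counts VALUES ('samples', 0);

CREATE TRIGGER samples_ins AFTER INSERT ON samples
BEGIN
    UPDATE row_counts SET n = n + 1 WHERE table_name = 'samples';
END;

CREATE TRIGGER samples_del AFTER DELETE ON samples
BEGIN
    UPDATE row_counts SET n = n - 1 WHERE table_name = 'samples';
END;

-- The count is now a single-row lookup instead of a scan:
SELECT n FROM row_counts WHERE table_name = 'samples';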
Here are 2 possible table row count workarounds (with caveats) that do not cause a table / index scan:
Note: for tables where you can use INTEGER PRIMARY KEY AUTOINCREMENT as the primary key, you can grab the count from the sqlite_sequence meta-table:
select name,seq from sqlite_sequence
seq contains the largest rowid the table has ever held, i.e., the last id rather than the next one.
"select max(pkid) from table", which will probably do an index search instead of a scan (and will also only be accurate for tables with no deletions).
Knowing this, if your use case includes deletions on tables where you can use AUTOINCREMENT, you could do a hybrid of the trigger-based solution that counts only deleted rows (arguably less bookkeeping than counting inserts in most scenarios) and subtract that count from seq. However, if you insert and delete the same row twice, this also won't work.
