BigQuery - Batch run ARRAY_AGG(STRUCT()) to avoid exceeding resource limit

I am trying to create some nested fields in BigQuery using the ARRAY_AGG(STRUCT()) method, but I am exceeding my resource limit whilst doing so. Is there a way of breaking the query down into batches to overcome this problem?
Example query:
SELECT
cust_id,
ARRAY_AGG(
STRUCT(status_s_date,status_e_date,status_desc,current_status_flag,active_flag, price, payment_freq, product_group)
) as status
FROM table1
GROUP BY cust_id
I need all of these fields in the STRUCT, but trying to do so all at once for all of the data does not work. Is there a way of doing any of the following? If so, which method is best?
a) Creating multiple structs and then joining them under a common name?
E.g. run the following script, creating structs 'status1' and 'status2'...
SELECT
cust_id,
ARRAY_AGG(
STRUCT(status_s_date,status_e_date,status_desc,current_status_flag,active_flag)
) as status1,
ARRAY_AGG(
STRUCT(status_s_date,status_e_date, price, payment_freq, product_group)
) as status2
FROM table1
GROUP BY cust_id
...and then join the two structs on status_s_date and status_e_date to create a single struct called 'Status'
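i.e. something like the following sketch, assuming the query above is saved as 'staged' (a hypothetical name) and that the (status_s_date, status_e_date) pair is unique per customer:
-- Hypothetical recombination: unnest both arrays, join on the shared
-- dates, and re-aggregate into a single struct per customer.
SELECT
  cust_id,
  ARRAY_AGG(STRUCT(s1.status_s_date, s1.status_e_date, s1.status_desc,
                   s1.current_status_flag, s1.active_flag,
                   s2.price, s2.payment_freq, s2.product_group)) AS status
FROM staged,
     UNNEST(status1) AS s1,
     UNNEST(status2) AS s2
WHERE s1.status_s_date = s2.status_s_date
  AND s1.status_e_date = s2.status_e_date
GROUP BY cust_id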
b) Partitioning the original table on status_s_date and then running the ARRAY_AGG(STRUCT()) step in batches over the partitioned field.
c) Finally, some of the fields I want to put into the struct are string fields, which I understand take up more resources when nested. Can I nest their numeric-code equivalents instead and then join to a lookup table afterwards to get their plain-English values?
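For (c), i.e. something like the following sketch, where status_code is a hypothetical code column nested in place of status_desc, 'nested' is the table holding the ARRAY_AGG output, and status_lookup is a hypothetical lookup table mapping codes to descriptions:
-- Hypothetical: translate nested codes back to plain-English values.
SELECT
  n.cust_id,
  ARRAY_AGG(STRUCT(s.status_s_date, s.status_e_date, l.status_desc)) AS status
FROM nested n, UNNEST(n.status) AS s
JOIN status_lookup l
  ON s.status_code = l.status_code
GROUP BY n.cust_id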
I am very new to this process so I appreciate some of the above may not make sense. Any help gratefully received.
Regards

An update from me, which I would appreciate if someone could confirm.
My original unnested table was built with an 'ORDER BY' clause, which I believe increases the resource requirements of the subsequent nesting step. I have removed the 'ORDER BY' (I'm not sure why it was there in the first place) and the ARRAY_AGG(STRUCT()) process now runs fine.
Is it the case that querying ORDERED tables is more resource intensive?

Related

Update child table with different where condition in Jooq

I have two tables (A & B). A.Id is a foreign key in table B (and a matching record in table B may or may not exist).
I want to update table A based on some condition, irrespective of whether a record exists in table B.
I also want to update table B if it contains a record for A.Id.
How do we do this multiple-table update with different conditions in a single execute in jOOQ?
In most database products, you can't update multiple tables in a single statement, so you'll have to run separate statements in jOOQ. By "single execute", you probably mean you want to have the experience of a single operation in the jOOQ API (irrespective of how many statements are being generated), but that also doesn't exist.
So, you just run multiple statements:
ctx.update(A)
.set(A.X, someValue)
.where(someCondition)
.execute();
ctx.update(B)
.set(B.Y, someValue)
.where(someCondition)
.execute();
If, for some reason, the single round trip to the server is important to you, you can use the procedural language API in jOOQ to wrap the above in an anonymous block:
ctx.begin(
update(A)
.set(A.X, someValue)
.where(someCondition),
update(B)
.set(B.Y, someValue)
.where(someCondition)
).execute();
Or you can batch the statements, but they will still be executed as separate statements.

Cassandra delete/update a row and get its previous value

How can I delete a row from Cassandra and get the value it had just before the deletion?
I could execute a SELECT and DELETE query in series, but how can I be sure that the data was not altered concurrently between the execution of those two queries?
I've tried to execute the SELECT and DELETE queries in a batch, but that does not seem to be allowed.
cqlsh:foo> BEGIN BATCH
... SELECT * FROM data_by_user WHERE user = 'foo';
... DELETE FROM data_by_user WHERE user = 'foo';
... APPLY BATCH;
SyntaxException: line 2:4 mismatched input 'SELECT' expecting K_APPLY (BEGIN BATCH [SELECT]...)
In my use case I have one main table that stores data for items, and I've built several tables that allow items to be looked up based on that information.
If I delete an item from the main table, I must also remove it from the other tables.
CREATE TABLE items (id text PRIMARY KEY, owner text, liking_users set<text>, ...);
CREATE TABLE owned_items_by_user (user text, item_id text, PRIMARY KEY ((user), item_id));
CREATE TABLE liked_items_by_user (user text, item_id text, PRIMARY KEY ((user), item_id));
...
I'm afraid the tables might contain wrong data if I delete an item and at the same time someone e.g. hits the like button of that same item.
1. The deleteItem method executes a SELECT query to fetch the current row of the item from the main table.
2. The likeItem method, which runs at the same time, executes an UPDATE query and inserts the item into the owned_items_by_user, liked_items_by_user, ... tables. This happens after the SELECT statement has run, and the UPDATE completes before the DELETE query.
3. The deleteItem method deletes the item from the owned_items_by_user, liked_items_by_user, ... tables based on the data just retrieved via the SELECT statement. That data does not yet contain the just-added like, so the item is deleted but the like remains in the liked_items_by_user table.
You can do a select beforehand, then do a lightweight transaction on the delete to ensure that the data still looks exactly like it did when you selected. If it does, you know the latest state before you deleted. If it does not, keep retrying the whole procedure until it sticks.
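A minimal sketch of that pattern, assuming a hypothetical non-key column named data:
SELECT data FROM data_by_user WHERE user = 'foo';
-- Suppose the read returned 'bar'. Delete only if the row is unchanged:
DELETE FROM data_by_user WHERE user = 'foo' IF data = 'bar';
-- [applied] = False in the result means a concurrent write won the race;
-- re-read and retry the whole procedure.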
Unfortunately you cannot do a SELECT query inside a batch statement. If you read the docs here, only insert, update, and delete statements can be used.
What you're looking for is atomicity on the execution, but batch statements are not going to be the way forward. If the data has been altered, your worst case situation is zombies, or data that could reappear.
Cassandra uses a grace period mechanism to deal with this; you can find the details here. If, for whatever reason, this is critical to your business logic, the "best" thing you can do in this situation is to increase the consistency level, or restructure the read pattern at the application level so it does not rely on perfect atomicity, whichever trade-off is right for you. So either you give up some of the performance, or you tune down the requirement.
In practice, QUORUM should be more than enough to satisfy most situations most of the time. Alternatively, you can do an ALL, and you pay the performance penalty, but that means all replicas for the given foo partition key will have to acknowledge the write both in the commitlog and the memtable. Note, this still means a flush from the commitlog will need to happen before the delete is complete, but you can tune the consistency to the level you require.
You don't have atomicity in the SQL sense, but depending on throughput it's unlikely that you will need it (touch wood).
TLDR:
USE CONSISTENCY ALL;
DELETE FROM data_by_user WHERE user = 'foo';
That should do the trick. The error you're seeing now comes from the ANTLR3 grammar parser for CQL 3, which is not designed to accept SELECT queries inside batches, simply because they are not supported; you can see that here.

Cassandra changing Primary Key vs Firing multiple select queries

I have a table that stores a list of products that a user has. The table looks like this.
create table my_keyspace.userproducts (
  userid text,       -- column types assumed; the original omitted them
  username text,
  productid text,
  productname text,
  producttype text,
  PRIMARY KEY (userid)
);
All users belong to a group; there could be from 1 to at most 100 users in a group:
userid | groupid | groupname
1      | g1      | grp1
2      | g2      | grp2
3      | g3      | grp3
We have a new requirement to display all products for all users in a single group.
So do I change userproducts so that my partition key is now groupid and userid becomes my clustering key, so that I get all my results in one single query?
Or do I keep my table design as it is, select all users in a group from the second table, fire one select query per user, consolidate the data in my code, and then return it to the users?
Thanks.
Even before getting to your question, your data modelling as you presented it has a problem: you say that you want to store "a list of products that a user has", but this is not what the table you presented holds - your table has a single product for each userid. The userid is the key of your table, and each entry in the table, i.e. each unique userid, has one combination of the other fields.
If you really want each user to have a list of products, you need the primary key to be (userid, productid). This means that each record is indexed by both a userid and a productid, or in other words, a userid has a list of records, each with its own productid. Cassandra allows you to efficiently fetch all the productid records for a single userid because it implements the first part of the key as a "partition key" and the second part as a "clustering key".
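Applied to the table from the question, that could look like this (column types are assumed, since the original definition omitted them):
create table my_keyspace.userproducts (
  userid text,       -- partition key
  username text,
  productid text,    -- clustering key: one row per product per user
  productname text,
  producttype text,
  PRIMARY KEY (userid, productid)
);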
Regarding your actual question, you indeed have two options: Either do multiple queries on your original tables, or do so-called denormalization, i.e., create a second table with exactly what you want searchable immediately. For the second option you can either do it manually (update both tables every time you have new data), or let Cassandra update the second table for you automatically, using a feature called Materialized Views.
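A sketch of the materialized-view route, assuming the (userid, productid) key above plus a groupid column added to the base table (Cassandra requires every base key column, plus at most one other column, in the view's key):
CREATE MATERIALIZED VIEW my_keyspace.userproducts_by_group AS
  SELECT groupid, userid, productid, productname, producttype
  FROM my_keyspace.userproducts
  WHERE groupid IS NOT NULL AND userid IS NOT NULL AND productid IS NOT NULL
  PRIMARY KEY (groupid, userid, productid);

-- All products for every user in a group, in one query:
SELECT * FROM my_keyspace.userproducts_by_group WHERE groupid = 'g1';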
Which of the two options - multiple queries or multiple updates - to use really depends on your workload. If it has many updates and rare queries, it is better to keep updates quick and make queries slower. If, on the other hand, it has few updates but many queries, it is better to make updates slower (each update needs to update both tables) but queries faster. Another important issue is how much query latency matters to you: the multiple-queries option not only increases the load on the cluster (which you can solve by throwing more hardware at the problem) but also increases the latency - a cost which does not go away with more hardware and which for some use cases may become a real problem.
You can also achieve a similar goal in Cassandra by using the Secondary Index feature, which has its own performance characteristics (in some respects it is similar to the "multiple queries" solution).

Cassandra get latest entry for each element contained within IN clause

So, I have a Cassandra CQL statement that looks like this:
SELECT * FROM DATA WHERE APPLICATION_ID = ? AND PARTNER_ID = ? AND LOCATION_ID = ? AND DEVICE_ID = ? AND DATA_SCHEMA = ?
This table is sorted by a timestamp column.
The functionality is fronted by a REST API, and one of the filter parameters callers can specify is to get only the most recent row, in which case I append "LIMIT 1" to the end of the CQL statement, since it's ordered by the timestamp column in descending order. What I would like to do is allow them to specify multiple device ids and get back the latest entry for each. So, my question is, is there any way to do something like this in Cassandra:
SELECT * FROM DATA WHERE APPLICATION_ID = ? AND PARTNER_ID = ? AND LOCATION_ID = ? AND DEVICE_ID IN ? AND DATA_SCHEMA = ?
and still use something like "LIMIT 1" to get back only the latest row for each device id? Or will I simply have to execute a separate CQL statement for each device?
FWIW, the table's composite key looks like this:
PRIMARY KEY ((application_id, partner_id, location_id, device_id, data_schema), activity_timestamp)
) WITH CLUSTERING ORDER BY (activity_timestamp DESC);
IN is not recommended when there are a lot of parameters for it: under the hood the coordinator makes requests to multiple partitions anyway, which puts pressure on the coordinator node.
Not that you can't do it - it is perfectly legal, but most of the time it's not performant and is not suggested. If you specify LIMIT, it applies to the whole statement; you can't pick out just the first row from each partition. The simplest option would be to issue multiple queries to the cluster (every element in IN becomes one query) and put LIMIT 1 on every one of them.
To be honest this was my solution in a lot of projects and it works pretty much fine. The coordinator would go to multiple nodes under the hood anyway, but it would also have to do more work to gather all the responses for you, and might run into timeouts, etc.
In short, it's far better for the cluster, and more performant, if the client asks multiple times (using multiple coordinators with smaller requests) than to make a single coordinator do all the work.
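A sketch of each per-device query; the client can issue them asynchronously and merge the results:
SELECT * FROM data
WHERE application_id = ? AND partner_id = ? AND location_id = ?
  AND device_id = ?       -- one query per device id from the IN list
  AND data_schema = ?
LIMIT 1;                  -- newest row first, per CLUSTERING ORDER BY ... DESC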
This is all for the case where you can't afford more disk space for your cluster.
Usual Cassandra solution
Data in Cassandra should be modelled so it is ready for the query (query-first design). So basically you would need one additional table with the same partition key you have now, but with the clustering column activity_timestamp dropped, i.e.
PRIMARY KEY ((application_id, partner_id, location_id, device_id, data_schema))
The double parentheses are intentional: everything is in the partition key.
Every time you write to your main table, you would also write the data to latest_entry (the table without activity_timestamp). Then you can run the query you need with IN, and because this table contains only the latest entry per partition key, you don't need LIMIT 1 - there is only one row per partition. That would be the usual solution in Cassandra.
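A sketch of that second table (payload columns elided); device_id is deliberately placed last in the partition key, deviating from the order shown above, so that a device_id IN (...) restriction is accepted even on versions that only allow IN on the last partition key column:
CREATE TABLE latest_entry (
  application_id text,
  partner_id text,
  location_id text,
  data_schema text,
  device_id text,
  -- ... payload columns from the original table ...
  PRIMARY KEY ((application_id, partner_id, location_id, data_schema, device_id))
);

-- Every write to the main table also upserts here, so each partition
-- holds exactly one row (the latest entry) and no LIMIT 1 is needed:
SELECT * FROM latest_entry
WHERE application_id = ? AND partner_id = ? AND location_id = ?
  AND data_schema = ? AND device_id IN ?;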
If you are afraid of the additional writes, don't worry - they are inexpensive and CPU-bound. With Cassandra it's always "bring on the writes", I guess :)
Basically it's up to you:
multiple queries - a bit of refactoring, no additional space cost
new schema - additional inserts when writing, additional space cost
Your table definition is not suitable for such use of the IN clause. It is only supported on the last column of the partition key or the last column of the clustering key. So you can:
swap the last two columns of the partition key (see the sketch below), or
use one query for each device id
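A sketch of the first option: with device_id moved to the end of the partition key, the "device_id IN ?" restriction becomes legal:
PRIMARY KEY ((application_id, partner_id, location_id, data_schema, device_id), activity_timestamp)
) WITH CLUSTERING ORDER BY (activity_timestamp DESC);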

Real time complex queries on Cassandra

We're looking for a tool (preferably open source) which helps us perform complex queries (advanced filtering and joins; full SQL is not needed) in real time.
Assume that all the data needed fits in memory, and we want to avoid, if possible, the overhead of map reduce tools.
To be more specific, we need to load n partitions of a single table, and join them by clustering column.
Variables Table:
Variable ID: Partition key
Person ID: Clustering key
Variable Value
Desired output columns:
Person ID, Variable 1 Value, Variable 2 Value, ..., Variable N Value
We can achieve it by an in-memory load-filter-join process, but we were wondering if there's any tool out there with this use case covered out of the box and with a fair performance.
We've tested Spark, but the partitioning of the Spark C* connector is based on the partition key, so each Variable ID would be loaded on a different Spark node and the join process would be really slow (all the data would travel across the Spark cluster).
Any tips? Known tools?
I believe that you have a number of options to perform this task:
Rethink your database schema and denormalize it. var_id:person_id:value rows are not the best table schema if you want to query by person_id (and it smells really bad, like the entity-attribute-value DB antipattern):
EAV gives a flexibility to the developer to define the schema as needed and this is good in some circumstances. On the other hand it performs very poorly in the case of an ill-defined query and can support other bad practices. In other words, EAV gives you enough rope to hang yourself and in this industry, things should be designed to the lowest level of complexity because the guy replacing you on the project will likely be an idiot.
You can use a schema with multiple columns (Cassandra can handle a lot of them):
create table person_data (
person_id int primary key,
var1 text,
var2 text,
var3 text,
var4 text,
....
);
If you don't have a predefined set of variables, you can use cql3 collections like map for storing the data in a more flexible way.
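A sketch of the map-based variant (text types assumed):
create table person_data (
  person_id int primary key,
  vars map<text, text>    -- variable name -> variable value
);

-- Set or update a single variable for a person:
UPDATE person_data SET vars['var1'] = 'value1' WHERE person_id = 1;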
Create a secondary index on person_id (even though it's already a clustering key). You can then query for all the data for a specific user without using joins, but with some issues (see the sketch after these caveats):
As your query will hit multiple partitions, it will require not a single disk seek but a series of them, so your query latency may be higher than you expect.
Secondary indexes are not free: C* must perform more work under the hood when you insert a row into a table with indexed columns.
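A sketch of the secondary-index option, with assumed table and column names since the question gives only a description:
create table variables (
  var_id int,       -- partition key
  person_id int,    -- clustering key
  value text,
  primary key (var_id, person_id)
);

create index on variables (person_id);

-- All variable values for one person, across partitions:
SELECT * FROM variables WHERE person_id = 42;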
Use an external index like ElasticSearch/Solr if you plan to have a lot of complex queries which do not fit well into CQL3.
