How to query count for each column in DynamoDB - node.js

I have a DynamoDB table with 50 different columns labeled question1 through question50. Each of these columns has either a, b, c, or d as the answer to a multiple-choice question. What is the most efficient way of getting the count of how many people answered 'a' for question1?
I'm trying to return the count of a, b, c, and d for ALL questions, so I want to see how many answered a for question1, how many answered b for question1, etc. In the end I should have a count for each question and each of its answers.
Currently I have this, but typing everything out doesn't feel efficient. Is there a simplified way of doing this?
exports.handler = async function(event, ctx, callback) {
    const params = {
        ScanFilter: {
            'question1': {
                ComparisonOperator: 'EQ',
                AttributeValueList: [{
                    S: 'a'
                }]
            }
        },
        TableName: 'app',
        Select: 'COUNT'
    };
    try {
        const data = await dynamoDb.scan(params).promise();
        console.log(data);
    }
    catch (err) {
        console.log(err);
    }
};

You haven't mentioned two things: is this a one-time operation or do you need to do this regularly, and how many records do you have?
If this is a one time operation:
Since you have 50 questions and 4 options for each (200 combinations), and assuming you have a lot of data, the easiest solution is to export the entire table to a CSV and build a pivot table there. This is easier than scanning the entire table and doing aggregation operations in memory. Alternatively, you can export the table to S3 as JSON and use Athena to run queries on the data.
If you need to do this regularly, you can do one of the following:
Save your aggregate counts in the same table (for example behind a GSI), in a new table, or somewhere else entirely. Enable DynamoDB Streams and send them to a Lambda function that increments these counts as new data comes in (see the sketch after this list).
Use Elasticsearch: enable streams on your DynamoDB table and have a Lambda function send them to an Elasticsearch index. Index the existing data as well, then run aggregate queries on that index.
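A minimal sketch of the streams-plus-Lambda option, assuming a separate aggregate table keyed by question and answer (the table name answerCounts and the loop over question1-question50 are illustrative, not taken from the original post):

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

// Triggered by the DynamoDB stream of the 'app' table.
exports.handler = async function(event) {
    for (const record of event.Records) {
        if (record.eventName !== 'INSERT') continue; // only count newly submitted answers here
        const newImage = record.dynamodb.NewImage;   // attributes arrive in DynamoDB JSON, e.g. { S: 'a' }
        for (let i = 1; i <= 50; i++) {
            const attr = newImage['question' + i];
            if (!attr || !attr.S) continue;
            // Atomically increment the counter for this question/answer pair.
            await docClient.update({
                TableName: 'answerCounts', // hypothetical aggregate table
                Key: { question: 'question' + i, answer: attr.S },
                UpdateExpression: 'ADD #count :one',
                ExpressionAttributeNames: { '#count': 'count' },
                ExpressionAttributeValues: { ':one': 1 }
            }).promise();
        }
    }
};

Reading the totals afterwards is then a cheap query on answerCounts by question, instead of a full table scan.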

RDBMSs aggregate quite easily; DynamoDB, not so much.
The usual answer with DynamoDB is to enable streams and have a Lambda attached to the stream that calculates the needed aggregations and stores them in a separate record in DynamoDB.
Read through the Using Global Secondary Indexes for Materialized Aggregation Queries section of the docs.

Related

Iceberg: How to quickly traverse a very large table

I'm new to Iceberg, and I have a question about querying a big table.
We have a Hive table with a total of 3.6 million records and 120 fields per record, and we want to transfer all the records in this table to other stores, such as PostgreSQL, Kafka, etc.
Currently we do like this:
Dataset<Row> dataset = connection.client.read().format("iceberg").load("default.table");
// it gets stuck here for a very long time
dataset.foreachPartition(par -> {
    par.forEachRemaining(row -> {
        // ... process row
    });
});
It can get stuck for a long time in the foreach process.
I also tried the following method; the process does not stay stuck for long, but the traversal speed is very slow, about 50 records/second.
HiveCatalog hiveCatalog = createHiveCatalog(props);
Table table = hiveCatalog.loadTable(TableIdentifier.of("default.table"));
CloseableIterable<Record> records = IcebergGenerics.read(table).build();
records.forEach(record -> {
    // ... process record
});
Neither of these two approaches meets our needs. Does my code need to be modified, or is there a better way to traverse all records? Thanks!
In addition to reading row by row, here is another idea.
If your target database can import files directly, try retrieving the data files from Iceberg and importing them directly into the database.
Example code is as follows:
Iterable<DataFile> files = FindFiles.in(table)
    .inPartition(table.spec(), StaticDataTask.Row.of(1))
    .inPartition(table.spec(), StaticDataTask.Row.of(2))
    .collect();
You can get the file path and file format from each DataFile.

TypeORM count grouping with different left joins each time

I am using NestJS with TypeORM and PostgreSQL. I have a queryBuilder which joins other tables based on the provided array of relations.
const query = this.createQueryBuilder('user');
if (relations.includes('relation1')) {
  query.leftJoinAndSelect('user.relation1', 'r1');
}
if (relations.includes('relation2')) {
  query.leftJoinAndSelect('user.relation2', 'r2');
}
if (relations.includes('relation3')) {
  query.leftJoinAndSelect('user.relation3', 'r3');
}
// 6 more relations
Following that I select a count on another table.
query
  .leftJoin('user.relation4', 'r4')
  .addSelect('COUNT(case when r4.value > 10 then r4.id end)', 'user_moreThan')
  .addSelect('COUNT(case when r4.value < 10 then r4.id end)', 'user_lessThan')
  .groupBy('user.id, r1.id, r2.id, r3.id ...')
And lastly I use one of the counts (depending on the request) for ordering the result with orderBy.
Now, of course, based on the relations parameter, the requirements for the groupBy query change. If I join all tables, TypeORM expects all of them to be present in groupBy.
I initially had the count query separated, but that was before I wanted to use the result for ordering.
Right now I planned to just dynamically create the groupBy string, but this approach somehow feels wrong and I am wondering if it is in fact the way to go or if there is a better approach to achieving what I want.
You can add the group by clause conditionally:
if (relations.includes('relation1')) {
  query.addGroupBy('r1.id');
}
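A hedged sketch of how the joins, the group-by, and the ordering can be driven from the same place so they never get out of sync (the relationAliases map and the orderByCount parameter are illustrative, not from the original post):

// Hypothetical lookup of the alias used for each joinable relation.
const relationAliases = { relation1: 'r1', relation2: 'r2', relation3: 'r3' /* , ... */ };

const query = this.createQueryBuilder('user');

// Join and group by each requested relation in one pass,
// so the GROUP BY always matches the joins that were actually added.
for (const [relation, alias] of Object.entries(relationAliases)) {
  if (relations.includes(relation)) {
    query.leftJoinAndSelect(`user.${relation}`, alias);
    query.addGroupBy(`${alias}.id`);
  }
}

query
  .leftJoin('user.relation4', 'r4')
  .addSelect('COUNT(case when r4.value > 10 then r4.id end)', 'user_moreThan')
  .addSelect('COUNT(case when r4.value < 10 then r4.id end)', 'user_lessThan')
  .addGroupBy('user.id')
  // Order by whichever count the request asked for (PostgreSQL accepts select aliases in ORDER BY).
  .orderBy(orderByCount === 'moreThan' ? 'user_moreThan' : 'user_lessThan', 'DESC');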

How to sort data by rating with Aws Lambda using nodeJS

I have a database on DynamoDB, and I'm writing some user scores to it. I also have a Lambda function, written in Node.js, that writes them. I want to get the first 10 users who have the most points. How could I scan for these users?
Thanks a lot.
Max() in NoSQL is much trickier than in SQL, and it doesn't really scale. If you want very high scalability for this, let me know, but let's get back to the question.
Assuming your table looks like:
User
----------
userId - hashKey
score
...
Add a dummy category attribute to your table, which will be constant (for example value "A"). Create the index:
category - hash key
score - sort key
Query this index by hash key "A" in reverse order to get results much faster than a scan. But this scales only to 10 GB (the max partition size, since all the data is in the same partition). Also make sure you project only the needed attributes into this index, in order to save space.
You can go up to 30 GB, for example, by setting 3 categories ("A", "B", "C"), executing 3 queries and merging the results programmatically. This will affect performance a bit, but it is still better than a full scan.
EDIT
var params = {
    TableName: 'MyTableName',
    IndexName: 'category-score-index', // whatever name you gave the GSI described above
    Limit: 10,
    // Set ScanIndexForward to false to return the highest scores first
    ScanIndexForward: false,
    KeyConditionExpression: 'category = :category',
    ExpressionAttributeValues: {
        ':category': {
            S: 'A', // the constant dummy category value
        },
    },
};
dynamo.query(params, function(err, data) {
    // handle data
});
source: https://www.debassociates.com/blog/query-dynamodb-table-from-a-lambda-function-with-nodejs-and-apex-up/
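And a hedged sketch of the multi-category variant mentioned above: run one query per dummy category and merge the partial top-10 lists in code (the table name, the index name category-score-index, and the score attribute are illustrative):

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

async function top10Users() {
    const categories = ['A', 'B', 'C'];
    // One query per category, each returning its own top 10 by score.
    const results = await Promise.all(categories.map(category =>
        docClient.query({
            TableName: 'MyTableName',
            IndexName: 'category-score-index', // the GSI with category as hash key and score as sort key
            Limit: 10,
            ScanIndexForward: false,           // highest scores first
            KeyConditionExpression: 'category = :category',
            ExpressionAttributeValues: { ':category': category }
        }).promise()
    ));
    // Merge the partial lists and keep the overall top 10.
    return results
        .flatMap(r => r.Items)
        .sort((a, b) => b.score - a.score)
        .slice(0, 10);
}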

Pass column name as argument - Postgres and Node JS

I have a query (Update statement) wrapped in a function and will need to perform the same statement on multiple columns during the course of my script
async function update_percentage_value(value, id) {
    (async () => {
        const client = await pool.connect();
        try {
            const res = await client.query('UPDATE fixtures SET column_1_percentage = ($1) WHERE id = ($2) RETURNING *', [value, id]);
        } finally {
            client.release();
        }
    })().catch(e => console.log(e.stack));
}
I then call this function
update_percentage_value(50, 2);
I have many columns to update at various points in my script, and each one needs to be updated at that point. I would like to be able to call just the one function, passing the column name, value and id.
My table looks like below
CREATE TABLE fixtures (
    ID SERIAL PRIMARY KEY,
    home_team VARCHAR,
    away_team VARCHAR,
    column_1_percentage INTEGER,
    column_2_percentage INTEGER,
    column_3_percentage INTEGER,
    column_4_percentage INTEGER
);
Is it at all possible to do this?
I'm going to post the solution that was advised by Sehrope Sarkuni via the node-postgres GitHub repo. This helped me a lot and works for what I require:
No, column names are identifiers and they can't be specified as parameters; they have to be included in the text of the SQL command.
It is possible but you have to build the SQL text with the column names. If you're going to dynamically build SQL you should make sure to escape the components using something like pg-format or use an ORM that handles this type of thing.
So something like:
const format = require('pg-format');

async function updateFixtures(id, column, value) {
    const sql = format('UPDATE fixtures SET %I = $1 WHERE id = $2', column);
    await pool.query(sql, [value, id]);
}
Also if you're doing multiple updates to the same row back-to-back then you're likely better off with a single UPDATE statement that modifies all the columns rather than separate statements as they'd be both slower and generate more WAL on the server.
To get the column names of the table, you can query the information_schema.columns view, which stores the details of the column structure of your table. This helps in framing a dynamic query for updating a specific column based on a specific result.
You can get the column names of the table with the following query:
select column_name from information_schema.columns where table_name='fixtures' and table_schema='public';
The above query gives you the list of columns in the table.
Now, to update each one for a specific purpose, you can store the result set of column names in a variable and pass that variable to the function that performs the required action.
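A minimal sketch combining that idea with the pg-format helper above (pool and updateFixtures are the names already used in this answer; filtering on the _percentage suffix is an assumption for illustration):

async function updateAllPercentageColumns(id, value) {
    // Fetch the column names of the fixtures table.
    const { rows } = await pool.query(
        "select column_name from information_schema.columns where table_name = 'fixtures' and table_schema = 'public'"
    );
    // Assumption: only the percentage columns should be updated.
    const columns = rows
        .map(r => r.column_name)
        .filter(name => name.endsWith('_percentage'));
    for (const column of columns) {
        await updateFixtures(id, column, value); // pg-format based helper shown above
    }
}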

How to keep a Firebase database sync with BigQuery?

We are working on a project where a lot of data is involved. We recently read about Google BigQuery, but how can we export our data to this platform? We have seen the sample of importing logs into Google BigQuery, but it does not contain information about updating and deleting data (only inserting).
Our objects are able to update their data, and we have a limited amount of queries on the BigQuery tables. How can we synchronize our data without exceeding the BigQuery quota limits?
Our current function code:
'use strict';

// Default imports.
const functions = require('firebase-functions');
const bigQuery = require('@google-cloud/bigquery')();

// If you want to change the nodes to listen to REMEMBER TO change the constants below.
// The 'id' field is AUTOMATICALLY added to the values, so you CANNOT add it.
const ROOT_NODE = 'categories';
const VALUES = [
    'name'
];

// This function listens to the supplied root node.
// When the root node is completely empty all of the Google BigQuery rows will be removed.
// This function should only activate when the root node is deleted.
exports.root = functions.database.ref(ROOT_NODE).onWrite(event => {
    if (event.data.exists()) {
        return;
    }
    return bigQuery.query({
        query: [
            'DELETE FROM `stampwallet.' + ROOT_NODE + '`',
            'WHERE true'
        ].join(' '),
        params: []
    });
});

// This function listens to the supplied root node, but on child added/removed/changed.
// When an object is inserted/deleted/updated the appropriate action will be taken.
exports.children = functions.database.ref(ROOT_NODE + '/{id}').onWrite(event => {
    const id = event.params.id;
    if (!event.data.exists()) {
        return bigQuery.query({
            query: [
                'DELETE FROM `stampwallet.' + ROOT_NODE + '`',
                'WHERE id = ?'
            ].join(' '),
            params: [
                id
            ]
        });
    }
    const item = event.data.val();
    if (event.data.previous.exists()) {
        let update = [];
        for (let index = 0; index < VALUES.length; index++) {
            const value = VALUES[index];
            update.push(item[value]);
        }
        update.push(id);
        return bigQuery.query({
            query: [
                'UPDATE `stampwallet.' + ROOT_NODE + '`',
                'SET ' + VALUES.join(' = ?, ') + ' = ?',
                'WHERE id = ?'
            ].join(' '),
            params: update
        });
    }
    let template = [];
    for (let index = 0; index < VALUES.length; index++) {
        template.push('?');
    }
    let create = [];
    create.push(id);
    for (let index = 0; index < VALUES.length; index++) {
        const value = VALUES[index];
        create.push(item[value]);
    }
    return bigQuery.query({
        query: [
            'INSERT INTO `stampwallet.' + ROOT_NODE + '` (id, ' + VALUES.join(', ') + ')',
            'VALUES (?, ' + template.join(', ') + ')'
        ].join(' '),
        params: create
    });
});
What would be the best way to sync firebase to bigquery?
BigQuery supports UPDATE and DELETE, but not frequent ones - BigQuery is an analytical database, not a transactional one.
To synchronize a transactional database with BigQuery you can use approaches like:
Export a daily dump, and import it into BigQuery.
Treat updates and deletes as new events, and keep appending events to your BigQuery event log.
Use a tool like https://github.com/MemedDev/mysql-to-google-bigquery.
Approaches like "BigQuery at WePay part III: Automating MySQL exports every 15 minutes with Airflow, and dealing with updates"
With Firebase you could schedule a daily load to BigQuery from their daily backups:
https://firebase.googleblog.com/2016/10/announcing-automated-daily-backups-for-the-firebase-database.html
... way to sync firebase to bigquery?
I recommend considering streaming all your data into BigQuery as historical data. You can mark entries as new (insert), update, or delete. Then, on the BigQuery side, you can write a query that resolves the most recent values for a specific record based on whatever logic you have.
So your code can be reused almost 100%: just change the UPDATE/DELETE logic so that it issues an INSERT instead.
// When an object is inserted/deleted/updated the appropriate action will be taken.
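A hedged sketch of such a "resolve the latest version" query, reusing the bigQuery client and table naming from the Cloud Function above (the change_type and changed_at columns are assumptions; you would append them with every streamed insert):

// Assumption: every streamed row also carries 'change_type' ('insert' | 'update' | 'delete')
// and a 'changed_at' timestamp, written by the Cloud Function.
return bigQuery.query({
    query: [
        'SELECT * EXCEPT(rn) FROM (',
        '  SELECT *, ROW_NUMBER() OVER (PARTITION BY id ORDER BY changed_at DESC) AS rn',
        '  FROM `stampwallet.' + ROOT_NODE + '`',
        ') WHERE rn = 1 AND change_type != "delete"'
    ].join(' '),
    params: []
});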
So our objects are able to update their data. And we have a limited amount of queries on the BigQuery tables. How can we synchronize our data without exceeding the BigQuery quota limits?
Yes, BigQuery supports UPDATE, DELETE and INSERT as part of its Data Manipulation Language.
General availability was announced for BigQuery Standard SQL on March 8, 2017.
Before considering this feature for syncing BigQuery with transactional data, please take a look at Quotas, Pricing and Known Issues.
Below are some excerpts.
Quotas (excerpts)
DML statements are significantly more expensive to process than SELECT statements.
• Maximum UPDATE/DELETE statements per day per table: 96
• Maximum UPDATE/DELETE statements per day per project: 1,000
Pricing (excerpts, extra highlighting + comment added)
BigQuery charges for DML queries based on the number of bytes processed by the query.
The number of bytes processed is calculated as follows:
UPDATE Bytes processed = sum of bytes in referenced fields in the scanned tables + the sum of bytes for all fields in the updated table at the time the UPDATE starts.
DELETE Bytes processed = sum of bytes of referenced fields in the scanned tables + sum of bytes for all fields in the modified table at the time the DELETE starts.
Comment by post author: As you can see, you will be charged for a whole table scan even though you update just one row! I think this is key for decision making!
Known Issues (excerpts)
• DML statements cannot be used to modify tables with REQUIRED fields in their schema.
• Each DML statement initiates an implicit transaction, which means that changes made by the statement are automatically committed at the end of each successful DML statement. There is no support for multi-statement transactions.
• The following combinations of DML statements are allowed to run concurrently on a table:
UPDATE and INSERT
DELETE and INSERT
INSERT and INSERT
Otherwise one of the DML statements will be aborted.
For example, if two UPDATE statements execute simultaneously against the table then only one of them will succeed.
• Tables that have been written to recently via BigQuery Streaming (tabledata.insertall) cannot be modified using UPDATE or DELETE statements. To check if the table has a streaming buffer, check the tables.get response for a section named streamingBuffer. If it is absent, the table can be modified using UPDATE or DELETE statements.
The reason why you didn't find update and delete functions in BigQuery is that they are not supported by BigQuery. BigQuery has only append and truncate operations. If you want to update or delete a row in BigQuery, you'll need to recreate the whole table with the modified row, or without it. That is not a good idea.
BigQuery is used to store big amounts of data and to have quick access to it; for example, it is good for collecting data from different sensors. But for your customer database you need to use MySQL or a NoSQL database.
