pg-promise doesn't delete but doesn't throw error either - node.js

I have the following query, which successfully deletes records when I run it in the psql shell:
delete from schedule_set USING schedule, event WHERE (schedule.schedule_set_id = schedule_set.id) and (schedule_set.id = ${id}) and (select count(*) from event where (complete IS TRUE)) = 0 RETURNING *
I'm generating the query with a JS function - that's why I have ${id} in the query - then I call the following:
return db.any(schedule.deleteSchedulesByScheduleSetId(params), params)
    .catch(err => {
        console.log(err);
    });
When I have a syntax error in the query I get an error - this tells me that the query is being run against the database. When I fix the syntax I don't get any errors, but nothing is deleted either.
Any ideas why, or how to debug this? Thanks!

Method any, as documented, only rejects when executing the query results in an error. If your DELETE query doesn't delete anything, it is not an error.
If you want your DELETE query to result in an error when it doesn't delete anything, you can add RETURNING * at the end of the query, and execute it with either many or one, depending on how many records it is expected to delete (and thus return) normally.
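For example, a minimal sketch of that approach, assuming the query is expected to delete exactly one row (the query here is a simplified placeholder):

// db.one rejects when the query returns no row, i.e. when nothing was deleted:
db.one('DELETE FROM schedule_set WHERE id = ${id} RETURNING *', {id: 123})
    .then(row => {
        // row = the deleted record
    })
    .catch(error => {
        // error - including the case where no row was deleted/returned
    });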
The most practical approach, however, is not to use either of the above, because returning the deleted rows is an extra operation that is typically useless. Instead, you should rely on the number of rows actually deleted, and then act accordingly.
For that you would use method result, converting the output into the number of rows affected (deleted, in your case):
db.result(query, values, a => a.rowCount)
    .then(count => {
        // count = number of rows deleted
    })
    .catch(error => {
        // error
    });
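As an illustration of acting on the count, a small sketch (the check and the error message are just examples) that treats zero deleted rows as a failure:

db.result(query, values, a => a.rowCount)
    .then(count => {
        if (count === 0) {
            // nothing was deleted - handle it however your app requires
            throw new Error('No matching schedule sets were deleted');
        }
        // count > 0: continue normally
    })
    .catch(error => {
        // error
    });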

Related

Most performant way to Insert or Read(if record already exists) in Google Cloud Spanner

Assuming I have a cars table where vin is the primary key.
I want to insert a record (in a transaction) or read the record (if one already exists with the same PK).
What's the most performant way to insert the record or read it if one already exists with the same PK?
This is my current approach:
Case A: Record does not exist
1. Insert record
2. Return record
Case B: Record already exists
1. Insert record
2. Check if error is due to the record already existing
3. Read the record
4. Return record
const car = { vin: '123', make: 'honda', model: 'accord' };
// carsTable is assumed to be spannerDatabase.table('cars')
spannerDatabase.runTransactionAsync(async (databaseTransaction) => {
    try {
        // Try to insert car
        await databaseTransaction.insert('cars', car);
        await databaseTransaction.commit();
        return car;
    } catch (error) {
        await databaseTransaction.end();
        // Spanner "row already exists" error. Insert failed because there is
        // already a record with the same vin (PK).
        if (error.code === 6) {
            // Since the record already exists, I want to read it and return it.
            // What's the most performant way to do this?
            const existingRecord = await carsTable.read({
                columns: ['vin', 'make', 'model'],
                keys: [car.vin],
                json: true,
            });
            return existingRecord;
        }
    }
});
As @skuruppu mentioned in the comment above, your current example is mostly fine for what you are describing. It does, however, implicitly assume a couple of things, as you are not executing the read and the insert in the same transaction. That means that the two operations together are not atomic, and other transactions might update or delete the record between your two operations.
Also, your approach assumes that scenario A (record does not exist) is the most probable. If that is not the case, and it is just as probable that the record does exist, then you should execute the read in the transaction before the write.
You should also do that if there are other processes that might delete the record. Otherwise, another process might delete the record after you tried to insert the record, but before you try to read it (outside the transaction).
The above is only really a problem if there are other processes that might delete or alter the record. If that is not the case, and also won't be in the future, this is only a theoretical problem.
So to summarize:
1. Your example is fine if scenario A is the most probable and no other process will ever delete any records in the cars table.
2. You should execute the read before the write using the same read/write transaction for both operations if any of the conditions in 1 are not true.
3. The read operation that you are using in your example is the most efficient way to read a single row from a table.
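A minimal sketch of point 2 (not from the original answer; it reuses the car object and column names from the question, and omits error handling):

spannerDatabase.runTransactionAsync(async (databaseTransaction) => {
    // Read first, inside the same read/write transaction as the insert:
    const [rows] = await databaseTransaction.read('cars', {
        columns: ['vin', 'make', 'model'],
        keys: [car.vin],
    });
    if (rows.length > 0) {
        await databaseTransaction.end();
        return rows[0].toJSON(); // record already exists
    }
    await databaseTransaction.insert('cars', car);
    await databaseTransaction.commit();
    return car;
});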

How to perform multiple inserts In knex transaction with dependent validity checks between inserts?

I'm writing a multi-step knex transaction that needs to insert multiple records into a table. Each of these records needs to pass a validity check written in Node before it is inserted.
I would batch perform the validity checks then insert all at once except that two records might invalidate each other. I.e. if I insert record 1, record 2 might no longer be valid.
Unfortunately, I can't seem to query the database in a transaction-aware fashion. During the validity check of the second record, my queries (used for the validity checks) do not show that the first insert exists.
I'm using the trx (transaction) connection object rather than the base knex object. I expected this would fix it since the transaction connection object is supposed to be promise aware, but alas it did not.
await knex.transaction(async trx => {
    /* perform validity checks and insert if valid */
    /* Check the record slated to be created
     * against the rules to ensure validity. */
    const relationshipValidityChecks = await Promise.all(
        relationshipsToCreate.map(async r => {
            const obj = {
                fromId: r.fromId,
                toId: r.toId,
                type: r.type,
                ...(await relationshipIsValid( // returns an object with a validity boolean
                    r.fromId,
                    r.toId,
                    r.type,
                    trx // I can specify which connection object to use (trx/knex)
                ))
            };
            if (obj.valid) {
                await trx.raw(
                    `
                    insert into t1.relationship
                    (from_id, to_id, type) values
                    (?, ?, ?)
                    returning relationship_key;
                    `,
                    [r.fromId, r.toId, r.type]
                );
            }
            return obj;
        })
    );
});
When I feed in two records that are valid by themselves but invalidate each other, the first record should be inserted and the second record should return an invalid error. The relationshipIsValid function is somewhat complicated, so I left it out, but I'm certain it works as expected, because if I feed the aforementioned two records in separately (i.e. in two different endpoint calls), the second one returns the invalid error.
Any help would be greatly appreciated. Thanks!
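Not from the original thread - just a sketch of one way to make each validity check see the earlier inserts: process the records sequentially (a for...of loop instead of Promise.all), so that each relationshipIsValid call, still using trx, runs only after the previous insert has completed.

// Sketch only: sequential processing inside the same transaction, reusing
// relationshipsToCreate and relationshipIsValid from the question above.
await knex.transaction(async trx => {
    const results = [];
    for (const r of relationshipsToCreate) {
        const obj = {
            fromId: r.fromId,
            toId: r.toId,
            type: r.type,
            ...(await relationshipIsValid(r.fromId, r.toId, r.type, trx))
        };
        if (obj.valid) {
            await trx.raw(
                `insert into t1.relationship (from_id, to_id, type)
                 values (?, ?, ?) returning relationship_key;`,
                [r.fromId, r.toId, r.type]
            );
        }
        results.push(obj);
    }
    return results;
});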

Massive inserts with pg-promise

I'm using pg-promise and I want to make multiple inserts to one table. I've seen some solutions like Multi-row insert with pg-promise and How do I properly insert multiple rows into PG with node-postgres?, and I could use pgp.helpers.concat in order to concatenate multiple selects.
But now I need to insert a lot of measurements into a table, with more than 10,000 records, and in https://github.com/vitaly-t/pg-promise/wiki/Performance-Boost it says:
"How many records you can concatenate like this - depends on the size of the records, but I would never go over 10,000 records with this approach. So if you have to insert many more records, you would want to split them into such concatenated batches and then execute them one by one."
I read the whole article, but I can't figure out how to "split" my inserts into batches and then execute them one by one.
Thanks!
UPDATE
Best is to read the following article: Data Imports.
As the author of pg-promise I was compelled to finally provide the right answer to the question, as the one published earlier didn't really do it justice.
In order to insert a massive/infinite number of records, your approach should be based on method sequence, which is available within tasks and transactions.
var cs = new pgp.helpers.ColumnSet(['col_a', 'col_b'], {table: 'tableName'});

// returns a promise with the next array of data objects,
// while there is data, or an empty array when no more data left
function getData(index) {
    if (/*still have data for the index*/) {
        // - resolve with the next array of data
    } else {
        // - resolve with an empty array, if no more data left
        // - reject, if something went wrong
    }
}

function source(index) {
    var t = this;
    return getData(index)
        .then(data => {
            if (data.length) {
                // while there is still data, insert the next bunch:
                var insert = pgp.helpers.insert(data, cs);
                return t.none(insert);
            }
            // returning nothing/undefined ends the sequence
        });
}

db.tx(t => t.sequence(source))
    .then(data => {
        // success
    })
    .catch(error => {
        // error
    });
This is the best approach to inserting a massive number of rows into the database, both from a performance point of view and for load throttling.
All you have to do is implement your function getData according to the logic of your app, i.e. where your large data is coming from, based on the index of the sequence, to return some 1,000 - 10,000 objects at a time, depending on the size of objects and data availability.
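For instance, a minimal sketch of getData, assuming (purely for illustration) that the source data already sits in memory as an array named allRows:

const PAGE_SIZE = 10000; // some 1,000 - 10,000 objects per batch

function getData(index) {
    // resolve with the next page of rows; an empty array ends the sequence
    const page = allRows.slice(index * PAGE_SIZE, (index + 1) * PAGE_SIZE);
    return Promise.resolve(page);
}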
See also some API examples:
spex -> sequence
Linked and Detached Sequencing
Streaming and Paging
Related question: node-postgres with massive amount of queries.
And in cases where you need to acquire generated id-s of all the inserted records, you would change the two lines as follows:
// return t.none(insert);
return t.map(insert + 'RETURNING id', [], a => +a.id);
and
// db.tx(t => t.sequence(source))
db.tx(t => t.sequence(source, {track: true}))
Just be careful, as keeping too many record id-s in memory can create an overload.
I think the naive approach would work.
Try to split your data into multiple pieces of 10,000 records or less.
I would try splitting the array using the solution from this post.
Then, multi-row insert each array with pg-promise and execute them one by one in a transaction.
Edit: Thanks to @vitaly-t for the wonderful library and for improving my answer.
Also don't forget to wrap your queries in a transaction, or else it will deplete the connections.
To do this, use the batch function from pg-promise to resolve all queries asynchronously:
// split your array here to get splittedData
var cs = new pgp.helpers.ColumnSet(['col_a', 'col_b'], {table: 'tmp'})
// values = [..,[{col_a: 'a1', col_b: 'b1'}, {col_a: 'a2', col_b: 'b2'}]]
let queries = []
for (var i = 0; i < splittedData.length; i++) {
    var query = pgp.helpers.insert(splittedData[i], cs)
    queries.push(query)
}
db.tx(function () {
    return this.batch(queries)
})
    .then(function (data) {
        // all records inserted successfully!
    })
    .catch(function (error) {
        // error
    });
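As a purely illustrative sketch of the "split your array here" step (chunk and allRows are made-up names, not from the original answer):

// Split a flat array of row objects into batches of up to 10,000 each:
function chunk(rows, size) {
    const batches = []
    for (let i = 0; i < rows.length; i += size) {
        batches.push(rows.slice(i, i + size))
    }
    return batches
}

const splittedData = chunk(allRows, 10000)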

Inserting multiple records with pg-promise

I have a scenario in which I need to insert multiple records. I have a table structure like id (it's a FK from another table), key (char), value (char). The input that needs to be saved would be an array of the above data. For example, I have some array objects like:
let lst = [];
let obj = {};
obj.id = 123;
obj.key = 'somekey';
obj.value = '1234';
lst.push(obj);

obj = {};
obj.id = 123;
obj.key = 'somekey1';
obj.value = '12345';
lst.push(obj);
In MS SQL, I would have created a TVP and passed it in. I don't know how to achieve this in Postgres.
So now what I want to do is save all the items from the list in a single query in Postgres, using the pg-promise library. I'm not able to find this in the documentation, or work it out from the documentation. Any help appreciated. Thanks.
I am the author of pg-promise.
There are two ways to insert multiple records. The first, and most typical, way is via a transaction, to make sure that either all records are inserted correctly, or none of them are.
With pg-promise it is done in the following way:
db.tx(t => {
    const queries = lst.map(l => {
        return t.none('INSERT INTO table(id, key, value) VALUES(${id}, ${key}, ${value})', l);
    });
    return t.batch(queries);
})
    .then(data => {
        // SUCCESS
        // data = array of null-s
    })
    .catch(error => {
        // ERROR
    });
You initiate a transaction with method tx, then create all INSERT query promises, and then resolve them all as a batch.
The second approach is by concatenating all insert values into a single INSERT query, which I explain in detail in Performance Boost. See also: Multi-row insert with pg-promise.
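As a rough sketch of that second approach (not part of the original answer; the table name table is kept from the example above purely as a placeholder):

// One multi-row INSERT generated by the helpers, from the same lst array:
const cs = new pgp.helpers.ColumnSet(['id', 'key', 'value'], {table: 'table'});
const query = pgp.helpers.insert(lst, cs); // a single INSERT with all the rows
db.none(query)
    .then(() => {
        // SUCCESS
    })
    .catch(error => {
        // ERROR
    });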
For more examples see Tasks and Transactions.
Addition
It is worth pointing out that in most cases we do not insert a record id, but rather have it generated automatically. Sometimes we want to get the new id-s back, and in other cases we don't care.
The examples above resolve with an array of null-s, because batch resolves with an array of individual results, and method none resolves with null, according to its API.
Let's assume that we want to generate the new id-s, and that we want to get them all back. To accomplish this we would change the code to the following:
db.tx(t => {
    const queries = lst.map(l => {
        return t.one('INSERT INTO table(key, value) VALUES(${key}, ${value}) RETURNING id',
            l, a => +a.id);
    });
    return t.batch(queries);
})
    .then(data => {
        // SUCCESS
        // data = array of new id-s;
    })
    .catch(error => {
        // ERROR
    });
i.e. the changes are:
we do not insert the id values
we replace method none with one, to get one row/object from each insert
we append RETURNING id to the query to get the value
we add a => +a.id to do the automatic row transformation. See also pg-promise returns integers as strings to understand what that + is for.
UPDATE-1
For a high-performance approach via a single INSERT query see Multi-row insert with pg-promise.
UPDATE-2
A must-read article: Data Imports.

How do I see output of SQL query in node-sqlite3?

I read all the documentation and this seemingly simple operation seems completely ignored throughout the entire README.
Currently, I am trying to run a SELECT query and console.log the results, but it is simply returning a database object. How do I view the results from my query in Node console?
exports.runDB = function() {
    db.serialize(function() {
        console.log(db.run('SELECT * FROM archive'));
    });
    db.close();
}
run does not have retrieval capabilities. You need to use all, each, or get.
According to the documentation for all:
Note that it first retrieves all result rows and stores them in
memory. For queries that have potentially large result sets, use the
Database#each function to retrieve all rows or Database#prepare
followed by multiple Statement#get calls to retrieve a previously
unknown amount of rows.
As an illustration:
db.all('SELECT url, rowid FROM archive', function(err, table) {
    console.log(table);
});
That will return all entries in the archive table as an array of objects.
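For the large-result-set case mentioned in the quoted documentation, a small sketch using Database#each instead (same query, processed row by row without buffering everything in memory):

db.each('SELECT url, rowid FROM archive', function(err, row) {
    if (err) {
        return console.error(err);
    }
    console.log(row); // called once per row
});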
