UPDATE prepared statement with Object - node.js

I have an Object that maps column names to values. The columns to be updated are not known beforehand and are decided at run-time.
e.g. map = {col1: "value1", col2: "value2"}.
I want to execute an UPDATE query, updating a table with those columns to the corresponding values. Can I do the following? If not, is there an elegant way of doing it without building the query manually?
db.none('UPDATE mytable SET $1 WHERE id = 99', map)

is there an elegant way of doing it without building the query manually?
Yes, there is, by using the helpers for SQL generation.
You can pre-declare a static object like this:
const cs = new pgp.helpers.ColumnSet(['col1', 'col2'], {table: 'mytable'});
And then use it like this, via helpers.update:
const condition = pgp.as.format(' WHERE id = $1', [99]); // WHERE clause with the condition
const sql = pgp.helpers.update(data, cs) + condition;
// and then execute it:
db.none(sql).then(data => {}).catch(error => {})
This approach will work with both a single object and an array of objects, and you will just append the update condition accordingly.
See also: PostgreSQL multi-row updates in Node.js
What if the column names are not known beforehand?
For that see: Dynamic named parameters in pg-promise, and note that a proper answer would depend on how you intend to cast types of such columns.
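If the keys really are only known at run-time, one option (a sketch, not the only approach) is to build the ColumnSet from the object itself; all values then use pg-promise's default type formatting:

const map = {col1: 'value1', col2: 'value2'};

// Derive the column list from whatever keys the object happens to have:
const cs = new pgp.helpers.ColumnSet(Object.keys(map), {table: 'mytable'});
const sql = pgp.helpers.update(map, cs) + pgp.as.format(' WHERE id = $1', [99]);

db.none(sql)
    .then(() => { /* success */ })
    .catch(error => { /* handle error */ });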

Something like this:
map = {col1: "value1", col2: "value2", id: "existingId"}
db.none("UPDATE mytable SET col1=${col1}, col2=${col2} WHERE id=${id}", map)

Related

Using parameter with table alias in postgres

I'm new to node-postgres and trying to pass in parameters connected to a table alias, but it's not working. How can I parameterize the simple query below? Take something like
const pgQuery = 'SELECT t1.YEAR as year, t1.CODE as code FROM data t1;';
and instead do something like
const pgQuery = 'SELECT t1.$1 as year, t1.$2 as code FROM data t1;';
const values = ['YEAR', 'CODE'];
Per the docs here:
https://node-postgres.com/features/queries
PostgreSQL does not support parameters for identifiers. If you need to have dynamic database, schema, table, or column names (e.g. in DDL statements) use pg-format package for handling escaping these values to ensure you do not have SQL injection!
Which takes you here:
https://www.npmjs.com/package/pg-format
Repo:
https://github.com/datalanche/node-pg-format
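For the aliased-column query above, pg-format usage could look roughly like this (a sketch; %I is substituted with an escaped SQL identifier, and the table and column names are the ones from the question):

const format = require('pg-format');

// %I placeholders are replaced with escaped identifiers, so the column
// names cannot be used for SQL injection:
const pgQuery = format('SELECT t1.%I AS year, t1.%I AS code FROM data t1;', 'YEAR', 'CODE');
// -> SELECT t1."YEAR" AS year, t1."CODE" AS code FROM data t1;
// pass pgQuery to your client/pool as usual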

Pass column name as argument - Postgres and Node JS

I have a query (UPDATE statement) wrapped in a function and will need to perform the same statement on multiple columns during the course of my script:
async function update_percentage_value(value, id){
    (async () => {
        const client = await pool.connect();
        try {
            const res = await client.query('UPDATE fixtures SET column_1_percentage = ($1) WHERE id = ($2) RETURNING *', [value, id]);
        } finally {
            client.release();
        }
    })().catch(e => console.log(e.stack))
}
I then call this function
update_percentage_value(50, 2);
I have many columns to update at various points in my script, and each needs to be done at that point. I would like to be able to just call the one function, passing the column name, value and id.
My table looks like below
CREATE TABLE fixtures (
    ID SERIAL PRIMARY KEY,
    home_team VARCHAR,
    away_team VARCHAR,
    column_1_percentage INTEGER,
    column_2_percentage INTEGER,
    column_3_percentage INTEGER,
    column_4_percentage INTEGER
);
Is it at all possible to do this?
I'm going to post the solution that was advised by Sehrope Sarkuni via the node-postgres GitHub repo. This helped me a lot and works for what I require:
No, column names are identifiers and they can't be specified as parameters. They have to be included in the text of the SQL command.
It is possible but you have to build the SQL text with the column names. If you're going to dynamically build SQL you should make sure to escape the components using something like pg-format or use an ORM that handles this type of thing.
So something like:
const format = require('pg-format');

async function updateFixtures(id, column, value) {
    const sql = format('UPDATE fixtures SET %I = $1 WHERE id = $2', column);
    await pool.query(sql, [value, id]);
}
Also if you're doing multiple updates to the same row back-to-back then you're likely better off with a single UPDATE statement that modifies all the columns rather than separate statements as they'd be both slower and generate more WAL on the server.
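For example, a single statement touching two of the percentage columns at once might look like this (a sketch; the values are made up, format is the pg-format import from above, and the call must sit inside an async function):

// One round trip instead of two; each %I becomes an escaped column name:
const sql = format(
    'UPDATE fixtures SET %I = $1, %I = $2 WHERE id = $3',
    'column_1_percentage', 'column_2_percentage'
);
await pool.query(sql, [50, 75, 2]);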
To get the column names of the table, you can query the information_schema.columns view, which stores the details of your table's column structure. This helps in framing a dynamic query for updating a specific column based on a specific result.
You can get the column names of the table with the help of following query:
select column_name from information_schema.columns where table_name='fixtures' and table_schema='public';
The above query would give you the list of columns in the table.
Now, to update each one for a specific purpose, you can store the result set of column names in a variable and pass that variable to the function to perform the required action.
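As a sketch (the LIKE pattern and function name are assumptions; updateFixtures is the helper from the accepted answer above):

async function updateAllPercentages(id, value) {
    // Fetch the matching column names once; '%_percentage' is an assumed pattern:
    const res = await pool.query(
        `select column_name from information_schema.columns
         where table_name = 'fixtures' and table_schema = 'public'
           and column_name like '%_percentage'`
    );
    for (const row of res.rows) {
        await updateFixtures(id, row.column_name, value);
    }
}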

Inserting a value on a frozen set in cassandra 3

I am currently working on a Cassandra 3 database in which one of the tables has a column defined like this:
column_name map<int, frozen <set<int>>>
When I have to change the value of a complete set given a map key x I just have to do this:
UPDATE keyspace.table SET column_name[x] = {1,2,3,4,5} WHERE ...
The thing is that I need to insert a value into a set given a key. I tried this:
UPDATE keyspace.table SET column_name[x] = column_name[x] + {1} WHERE ...
But it returns:
SyntaxException: line 1:41 no viable alternative at input '[' (... SET column_name[x] = [column_name][...)
What am I doing wrong? Does anyone know how to insert data the way I need?
Since the map's values are frozen, you can't update them in place like this.
A frozen value serializes multiple components into a single value. Non-frozen types allow updates to individual fields. Cassandra treats the value of a frozen type as a blob. The entire value must be overwritten.
You have to read the full map, get the value for the key, append the new item, and then reinsert it.
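With the Node.js driver, that read-modify-write could look roughly like this (a sketch: keyspace.table and column_name are the placeholders from the question, and the pk partition-key column is a made-up stand-in for the elided WHERE clause):

const cassandra = require('cassandra-driver');
const client = new cassandra.Client({
    contactPoints: ['127.0.0.1'],
    localDataCenter: 'datacenter1'
});

async function appendToFrozenSet(pkValue, x, newItem) {
    // By default the driver returns a CQL map as a plain object (keys
    // stringified) and a set as an array:
    const res = await client.execute(
        'SELECT column_name FROM keyspace.table WHERE pk = ?',
        [pkValue], {prepare: true});
    const current = (res.rows[0].column_name || {})[x] || [];
    // The map itself is not frozen, so a single entry can be overwritten
    // with the complete new set value:
    await client.execute(
        'UPDATE keyspace.table SET column_name[?] = ? WHERE pk = ?',
        [x, [...current, newItem], pkValue], {prepare: true});
}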

Scala slick 2.0 updateAll equivalent to insertALL?

Looking for a way to do a batch update using Slick. Is there an updateAll equivalent to insertAll? Google research has failed me thus far.
I have a list of case classes with varying status, each one having a different numeric value, so I cannot run the typical update query. At the same time, I want to avoid issuing separate update requests, as there could be thousands of records I want to update at the same time.
Sorry to answer my own question, but what I ended up doing is just dropping down to JDBC and doing a batchUpdate.
private def batchUpdateQuery = "update table set value = ? where id = ?"

/**
 * Dropping to JDBC b/c Slick doesn't support this batched update
 */
def batchUpdate(batch: List[MyCaseClass])(implicit subject: Subject, session: Session) = {
  val pstmt = session.conn.prepareStatement(batchUpdateQuery)
  try {
    batch foreach { myCaseClass =>
      pstmt.setString(1, myCaseClass.value)
      pstmt.setString(2, myCaseClass.id)
      pstmt.addBatch()
    }
    session.withTransaction {
      pstmt.executeBatch()
    }
  } finally {
    pstmt.close() // avoid leaking the statement
  }
}
It's not clear to me what you are trying to achieve; insert and update are two different operations. For insert it makes sense to have a bulk function; for update, in my opinion, it doesn't. In fact, in SQL you can just write something like this:
UPDATE SomeTable
SET SomeColumn = SomeValue
WHERE AnotherColumn = AnotherValue
This updates SomeColumn with the value SomeValue for all the rows that have AnotherColumn equal to AnotherValue.
In Slick this is a simple filter combined with map and update:
table
  .filter(_.someColumn === someValue)
  .map(_.fieldToUpdate)
  .update(newValue)
If instead you want to update the whole row, just drop the map and pass a row object to the update function.
Edit:
If you want to update different case classes, I'm led to think that these case classes are rows defined in your schema, and if that's the case you can pass them directly to the update function, since it's defined as:
def update(value: T)(implicit session: Backend#Session): Int
For the second problem I can't suggest a solution; looking at the JdbcInvokerComponent trait, it looks like the update function invokes the execute method immediately:
def update(value: T)(implicit session: Backend#Session): Int = session.withPreparedStatement(updateStatement) { st =>
  st.clearParameters
  val pp = new PositionedParameters(st)
  converter.set(value, pp, true)
  sres.setter(pp, param)
  st.executeUpdate
}
Probably because you can actually run only one update query at a time per table, and not multiple updates on multiple tables, as stated also in this SO question; but you could of course update multiple rows on the same table.

Subsonic 3 Simple Query inner join sql syntax

I want to perform a simple join on two tables (BusinessUnit and UserBusinessUnit), so I can get a list of all BusinessUnits allocated to a given user.
The first attempt works, but there's no override of Select which allows me to restrict the columns returned (I get all columns from both tables):
var db = new KensDB();
SqlQuery query = db.Select
    .From<BusinessUnit>()
    .InnerJoin<UserBusinessUnit>( BusinessUnitTable.IdColumn, UserBusinessUnitTable.BusinessUnitIdColumn )
    .Where( BusinessUnitTable.RecordStatusColumn ).IsEqualTo( 1 )
    .And( UserBusinessUnitTable.UserIdColumn ).IsEqualTo( userId );
The second attempt allows the column restriction, but the generated SQL contains pluralised table names (?):
SqlQuery query = new Select( new string[] { BusinessUnitTable.IdColumn, BusinessUnitTable.NameColumn } )
    .From<BusinessUnit>()
    .InnerJoin<UserBusinessUnit>( BusinessUnitTable.IdColumn, UserBusinessUnitTable.BusinessUnitIdColumn )
    .Where( BusinessUnitTable.RecordStatusColumn ).IsEqualTo( 1 )
    .And( UserBusinessUnitTable.UserIdColumn ).IsEqualTo( userId );
Produces...
SELECT [BusinessUnits].[Id], [BusinessUnits].[Name]
FROM [BusinessUnits]
INNER JOIN [UserBusinessUnits]
    ON [BusinessUnits].[Id] = [UserBusinessUnits].[BusinessUnitId]
WHERE [BusinessUnits].[RecordStatus] = #0
    AND [UserBusinessUnits].[UserId] = #1
So, two questions:
- How do I restrict the columns returned in method 1?
- Why does method 2 pluralise the table names in the generated SQL (and can I get round this?)
I'm using 3.0.0.3...
So far my experience with 3.0.0.3 suggests that this is not possible yet with the query tool, although it is with version 2.
I think the preferred method (so far) with version 3 is to use a linq query with something like:
var busUnits = from b in BusinessUnit.All()
               join u in UserBusinessUnit.All() on b.Id equals u.BusinessUnitId
               select b;
I ran into the pluralized table names myself, but it was because I'd only re-run one template after making schema changes.
Once I re-ran all the templates, the plural table names went away.
Try re-running all 4 templates and see if that solves it for you.
