How to prevent injection for my postgres query in node?

Below is my code base with the query:
export const getQuery = (idList) => {
  return `SELECT * from glacier_restore_progress where id in ${idList}`;
}
const query = getQuery('(1,2)');
dbResponse = await pool.query(query)
...
It works fine, but a SQL injection issue is being flagged by my Sonar server.
So I tried the code change below, and it didn't work:
...
dbResponse = await pool.query('SELECT * from glacier_restore_progress where id in $1', ['(1,2)']);
What am I missing here?

The best solution uses ANY with Postgres array coercion. Your parameterized attempt fails because $1 is bound as a single string value ('(1,2)'), not expanded into a list. ANY lets you match a column against an arbitrary array of values, as if you had written out col IN (v1, v2, v3). This is the approach in pero's answer.
SELECT * FROM glacier_restore_progress WHERE id = ANY($1::int[])
dbResponse = await pool.query('SELECT * FROM glacier_restore_progress WHERE id = ANY($1::int[])', [[1,2, ...]]);
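As a minimal sketch, the helper from the question could be reworked to pass the ids as a real JavaScript array (the getQueryConfig name is an assumption; pool and the table come from the question):
// Pass the ids as a JS array, not as a pre-formatted string like '(1,2)'.
export const getQueryConfig = (idList) => ({
  text: 'SELECT * FROM glacier_restore_progress WHERE id = ANY($1::int[])',
  values: [idList], // e.g. idList = [1, 2]
});

const dbResponse = await pool.query(getQueryConfig([1, 2]));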

Related

Conditional statements in my BigQuery queries are being ignored

I have a simple BigQuery table with a few columns. One of the columns is named my_id (of type STRING). I'm querying my BigQuery datasets like this:
import * as bq from "@google-cloud/bigquery";
const bqdb = new bq.BigQuery();
// ...
const projectId = 'my_project';
const datasetId = "my_dataset";
const tableId = "my_table";
const dbId = [projectId, datasetId, tableId].join('.');
// myIds is an array of strings
const stringedArray = myIds.map((id) => '\'' + id + '\'');
const sql_select_query = `
  SELECT my_id
  FROM \`${dbId}\`
  WHERE my_id IN (${String(stringedArray)})
  LIMIT 1;
`;
const dataset = bqdb.dataset(datasetId);
const destinationTable = dataset.table(tableId);
console.log("Querying database...");
const queryOptions = {
  query: sql_select_query,
  destination: destinationTable,
  write_disposition: "WRITE_APPEND",
  priority: 'BATCH',
};
// Run the query as a job
const [job] = await bqdb.createQueryJob(queryOptions);
// Wait for the job to finish.
const results = await job.getQueryResults({maxResults: 500});
const resultsArray = results[0];
This query brings back the ENTIRE table (all rows, all columns). In other words, the result of this query is the same as if I'd written:
const sql_select_query = `
  SELECT *
  FROM \`${dbId}\`;
`;
The output is formatted like a successful query: there are no error messages or warnings. But all my conditionals are being ignored, even the LIMIT.
Why is BigQuery dumping the entire table into the response?
If your query is configured to write results to a destination table with write_disposition: "WRITE_APPEND", job.getQueryResults() returns the data of the destination table along with the newly appended rows, which is expected BigQuery behavior.
job.getQueryResults() will only return the initially selected result if a destination table is configured and the write disposition is either 'write if empty' or 'overwrite table'.
As a workaround, you can run the query twice: first against a temporary table to return the filtered results, then again appending to the destination table.
Using your code, you can create two query options: the first has no destination, and the second has the destination along with write_disposition. Then create two jobs that use the first and second query options.
Code snippet:
const queryOptions = {
  query: sql_select_query,
  priority: 'BATCH',
};
const queryOptionsWrite = {
  query: sql_select_query,
  destination: destinationTable,
  write_disposition: "WRITE_APPEND",
  priority: 'BATCH',
};
const [queryJob] = await bqdb.createQueryJob(queryOptions);
const queryResults = await queryJob.getQueryResults();
console.log("Query result:");
console.log(queryResults[0]);
const [writeJob] = await bqdb.createQueryJob(queryOptionsWrite);
const writeResults = await writeJob.getQueryResults();
console.log("\nUpdated table values:");
console.log(writeResults[0]);

Is there a way I can use Group By and Count with Type Orm Repository

I recently joined, and I'm new to TypeORM. Here is the code I am trying.
FIRST QUERY: I would like to use this approach, but I'm not sure how to group by with a count on the column and then order by that column descending:
const result = await this.jobViewsRepository.find({
  relations: ["jobs"],
  loadEagerRelations: true,
  order: { id: "DESC" },
  skip: offset,
  take: limit,
});
I am trying to see if I can use that in the query above.
SECOND QUERY: this works perfectly for me and returns the result I am looking for:
const res = await this.jobViewsRepository.createQueryBuilder('jobViews')
  .addSelect("COUNT(jobViews.user_id) AS jobViews_total_count")
  .leftJoinAndSelect(Jobs, "jobs", "jobs.id = jobViews.job_id")
  .where("jobs.user_id != :id", { id: user_id })
  .groupBy("jobViews.job_id")
  .orderBy('jobViews_total_count', 'DESC')
  .limit(limit)
  .offset(offset)
  .getRawMany();
If anyone can help me out with this, it would be really appreciated.
Thanks
At least in the current version there is no built-in way to do this (nothing in the documentation, nothing on the web).
I believe you can use .query to write your own query.
For now, the only way is to use the query builder with .groupBy("user.id") and .addGroupBy("user.id"):
https://orkhan.gitbook.io/typeorm/docs/select-query-builder#adding-group-by-expression
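For example, here is a minimal sketch of that query-builder approach applied to the repository from the question (the jobId and totalCount aliases are assumptions):
const views = await this.jobViewsRepository
  .createQueryBuilder('jobViews')
  .select('jobViews.job_id', 'jobId')
  .addSelect('COUNT(jobViews.user_id)', 'totalCount')
  .groupBy('jobViews.job_id')
  .orderBy('totalCount', 'DESC')
  .getRawMany();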
Or write a raw query:
import { getManager } from 'typeorm';

const entityManager = getManager();
const someQuery = await entityManager.query(`
  SELECT
    fw."X",
    fw."Y",
    ew.*
  FROM "table1" AS fw
  JOIN "table2" AS ew
    ON fw."X" = $1 AND ew.id = fw."Y";
`, [param1]);
You can use findAndCount to get the result set together with its size:
result = await this.jobViewsRepository.findAndCount({ ... })
The result is [data, count].
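For instance, a short sketch using the options from the first query (variable names are illustrative):
const [jobViews, total] = await this.jobViewsRepository.findAndCount({
  relations: ["jobs"],
  order: { id: "DESC" },
  skip: offset,
  take: limit,
});
console.log(`Fetched ${jobViews.length} of ${total} job views`);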

knex use count() result as a value for inserting a new row

I'm trying to use a count() result as a value when inserting a new row.
The issue is that under concurrent access I get a wrong value for count(), as the current code doesn't work properly inside a transaction.
I've tried many ways to achieve locking, with and without an explicit knex transaction, but wasn't able to get the right count() value.
const now = knex.fn.now();
const [{ count }] = await knex
  .count()
  .from(STUDENTS)
  .where(CLASS_ID_COL, classId)
  .then(daoUtils.normalize);
const [id] = await knex
  .insert(
    {
      [CREATED_AT_COL]: now,
      [UPDATED_AT_COL]: now,
      [CLASS_ID_COL]: classId,
      [ORDER_COL]: Number(count)
    },
    ID_COL
  )
  .into(STUDENTS);
Thanks in advance.
I have found a solution using .forUpdate(). Inside a transaction, the row is inserted with a placeholder order, the rows of the class are locked with SELECT ... FOR UPDATE (so concurrent transactions block until commit), and the order is then set from the locked row count:
const now = knex.fn.now();
return knex.transaction(async trx => {
  // Insert through trx so the insert is part of the transaction;
  // the order value is a placeholder that gets corrected below.
  const [id] = await trx
    .insert(
      {
        [CREATED_AT_COL]: now,
        [UPDATED_AT_COL]: now,
        [CLASS_ID_COL]: classId,
        [ORDER_COL]: 0
      },
      ID_COL
    )
    .into(STUDENTS);
  // Lock every row of this class; a concurrent transaction waits here.
  const result = await trx
    .select("*")
    .forUpdate()
    .from(STUDENTS)
    .where(CLASS_ID_COL, classId);
  // result.length includes the freshly inserted row, hence the -1.
  await trx.table(STUDENTS)
    .update(ORDER_COL, Number(result.length) - 1)
    .where(ID_COL, id);
  return id;
});

How to pass query statement to bigquery in node.js environment

When running a BigQuery query, I want to insert the function's parameters into the SQL statement as @variable_name placeholders and get the result of the statement.
However, I couldn't find a method that supports this in node.js.
For Python, there are methods like the following example, where you can use the function's parameters as @variable names:
query = "" "
SELECT word, word_count
FROM `bigquery-public-data.samples.shakespeare`
WHERE corpus = # corpus
AND word_count> = #min_word_count
ORDER BY word_count DESC;"" "
query_params = [
bigquery.ScalarQueryParameter ('corpus', 'STRING', 'romeoandjuliet'),
bigquery.ScalarQueryParameter ('min_word_count', 'INT64', 250)]
job_config = bigquery.QueryJobConfig ()
job_config.query_parameters = query_params
Related document:
https://cloud.google.com/bigquery/docs/parameterized-queries#bigquery-query-params-python
I would like to ask for advice.
The BigQuery node.js client supports parameterized queries when you pass them with the params key in options. I just updated the docs to show this. Hope this helps!
Example:
const sqlQuery = `SELECT word, word_count
  FROM \`bigquery-public-data.samples.shakespeare\`
  WHERE corpus = @corpus
  AND word_count >= @min_word_count
  ORDER BY word_count DESC`;
const options = {
  query: sqlQuery,
  // Location must match that of the dataset(s) referenced in the query.
  location: 'US',
  params: {corpus: 'romeoandjuliet', min_word_count: 250},
};
// Run the query
const [rows] = await bigquery.query(options);
let ip_chunk = "'1.2.3.4', '2.3.4.5', '10.20.30.40'"
let query = `
  SELECT
    ip_address.ip as ip,
    instance.zone as zone,
    instance.name as vmName,
    instance.p_name as projectName
  FROM
    \`${projectId}.${datasetId}.${tableId}\` instance,
    UNNEST(field_x.DATA.some_info) ip_address
  WHERE ip_address.networkIP IN (${ip_chunk})`
Use WHERE ip_address.networkIP IN (${ip_chunk}) instead of WHERE ip IN (${ip_chunk}).
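Note that interpolating ip_chunk into the query string is still open to injection. As a hedged sketch, the same IN list can be passed as an array parameter and matched with UNNEST instead (the ips parameter name is an assumption; bigquery is a BigQuery client instance as in the accepted answer):
const query = `
  SELECT ip_address.ip as ip
  FROM
    \`${projectId}.${datasetId}.${tableId}\` instance,
    UNNEST(field_x.DATA.some_info) ip_address
  WHERE ip_address.networkIP IN UNNEST(@ips)`;
const options = {
  query,
  params: { ips: ['1.2.3.4', '2.3.4.5', '10.20.30.40'] },
};
const [rows] = await bigquery.query(options);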
It is worth adding that you can create a stored procedure and pass parameters the same way as the accepted answer shows.
const { BigQuery } = require('@google-cloud/bigquery');

async function testProc() {
  const bigquery = new BigQuery();
  // Named parameters are passed to a CALL the same way as to a plain query.
  const sql = 'CALL `my-project.my-dataset.getWeather`(@dt);';
  const options = {
    query: sql,
    params: { dt: '2022-09-01' },
    location: 'US',
  };
  // Run the query
  const [rows] = await bigquery.query(options);
  console.log(rows);
  return rows;
}

testProc().catch((err) => { console.error(err.message); });

How to select a UUID from a prepared statement in Postgres?

I'm trying to select a user based on a UUID:
const client = await this.pg.connect()
const { rowsAct } = await client.query(`SELECT * FROM users WHERE uid=$1`, [
  userUUID
])
I also tried without the variable:
const client = await this.pg.connect()
const { rowsAct } = await client.query(`SELECT * FROM users WHERE uid=$1`, [
  '4fcf0ca3-4e26-40a9-bbe5-78ff8fdb6e0f'
])
I tried using ::uuid casting but maybe I did it wrong. The returned rowsAct is always undefined.
I verified the userUUID variable was populated and was a valid uuid:
console.log(typeof userUUID) // string
What am I doing wrong? How can I properly select a row from its UUID?
Thanks!
You'll need to wrap your argument in parentheses before applying the ::uuid type cast so that the prepared statement can properly interpolate the argument:
SELECT * FROM users WHERE uid = ($1)::uuid
You can also cast the column itself to text, but it's less performant, since the query will have to cast each row:
SELECT * FROM users WHERE uid::text = $1
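Putting the cast together with node-postgres, a minimal sketch (note that the client returns the matched rows on the rows property, so destructure rows rather than rowsAct; userUUID comes from the question):
const client = await this.pg.connect()
try {
  const { rows } = await client.query(
    'SELECT * FROM users WHERE uid = ($1)::uuid',
    [userUUID]
  )
  console.log(rows) // the matching user rows, if any
} finally {
  client.release()
}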
