I am using the pg-promise package with Node.js to execute PostgreSQL queries, and I want to see the queries being executed. Only specific queries, say, just one query that I want to debug.
I can see that one recommended way is to use pg-monitor to catch the events and log them, as mentioned here in the examples documentation.
Without using pg-monitor, is there a simple way to just print the prepared query that is executed? I can't see it in the docs.
Example:
db.query("SELECT * FROM table WHERE id = $/id/", {id: 2})
How do I print this query so that it yields:
SELECT * FROM table WHERE id = 2
is there a simple way to just print the prepared query that is executed...
A query in general - yes, see below. A Prepared Query - no, those are by definition formatted on the server-side.
const query = pgp.as.format('SELECT * FROM table WHERE id = $/id/', {id: 2});
console.log(query);
await db.any(query);
And if you want to print all queries executed by your module, without using pg-monitor, simply add a query event handler when initializing the library:
const initOptions = {
    query(e) {
        console.log(e.query);
    }
};
const pgp = require('pg-promise')(initOptions);
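And if you only want to log that one specific query you are debugging, you can filter inside the same handler. A minimal sketch, where the substring check is purely illustrative and should be adjusted to match the query in question:

const initOptions = {
    query(e) {
        // Hypothetical filter: only log the one query being debugged
        if (e.query.includes('FROM table')) {
            console.log(e.query);
        }
    }
};
const pgp = require('pg-promise')(initOptions);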
I'm using Sequelize and want to know how to send case-sensitive queries to my DB.
I have a table users with a column Login.
When I send a request (data.login = '0xwraith'), Sequelize finds the user with login 0xWraith. But I want the logins 0xWraith and 0xwraith to be distinct and separate.
This is the code I use:
let db_user = await db.users.findOne({
    where: {
        Login: data.login
    }
});
MySQL's string comparison is case-insensitive by default.
If you would like a case-sensitive comparison just for this one query, you can add the BINARY keyword before the comparison to enable it.
SELECT * FROM users WHERE BINARY(`Login`) = '0xWraith';
You can write the query in Sequelize as follows.
db.users.findOne({
    where: sequelize.where(sequelize.fn('BINARY', sequelize.col('Login')), data.login)
})
If you want to enable case sensitivity at the table level, read more about it here: https://dev.mysql.com/doc/refman/8.0/en/case-sensitivity.html.
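For example, a minimal sketch of the table-level approach from within Sequelize itself, using a binary collation (the collation and model options here are illustrative assumptions, not taken from the original answer):

// Define the model with a binary collation at the table level, so every
// string comparison against Login is case-sensitive without per-query BINARY.
const users = sequelize.define('users', {
    Login: Sequelize.STRING
}, {
    charset: 'utf8mb4',
    collate: 'utf8mb4_bin'
});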
First, simple BigQuery SQL:
We're trying to take the following runnable BigQuery SQL query and convert it to a parameterized query to execute in Node.js:
SELECT * FROM UNNEST([
STRUCT(
ST_GEOGFROMTEXT('POINT(1 2)') AS lnglat,
TIMESTAMP('2020-01-01') AS stamp
)
])
The query simply builds a pseudo-table from an array of STRUCTs. Most notably, the output types match what you'd expect: the stamp column is a BigQuery TIMESTAMP type, and lnglat is a BigQuery GEOGRAPHY type.
Now, let's try in Node.js.
Let's substitute the array of BigQuery STRUCTs above for @points, and pass a JavaScript array of object literals as params:
// this is version 5.3.0
const { BigQuery, Geography } = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

(async () => {
    const query = 'SELECT * FROM UNNEST(@points)';
    const params = {
        points: [
            {
                lnglat: new Geography('POINT(1 2)'),
                stamp: BigQuery.timestamp('2020-01-01')
            }
        ]
    };
    const [job] = await bigquery.createQueryJob({ query, params });
    // Wait for the query to finish
    const [rows] = await job.getQueryResults();
    // Print the results
    console.log('Rows:');
    console.log(rows);
})();
This returns the following result on my CLI:
> node index.js
Rows:
[
    {
        lnglat: { value: 'POINT(1 2)' },
        stamp: BigQueryTimestamp { value: '2020-01-01T00:00:00.000Z' }
    }
]
The problem is that, despite the NodeJS SDK containing docs around "Geography" here, here, and here, none of these methods seems to actually force BigQuery to construct a native BigQuery GEOGRAPHY type.
It seems, instead, that BigQuery interprets the new Geography() as a RECORD type with a value field, as indicated in the response above and also verified by inspecting the temporary (anon) table that is created in the BigQuery UI.
We've tried different variants of geography functions/classes: Geography, BigQuery.Geography, and bigquery.Geography; they all return the same RECORD type.
Strangely, if we instead query an existing table (as opposed to constructing a pseudo-table at runtime), the result is more consistent with what I would expect:
Rows:
[ { lnglat: Geography { value: 'POINT(-118.43356046 45.97057312)' } } ]
Note the Geography type in the response!
We are aware that we can fall back to specifying lnglat as a JavaScript string literal, and the following SQL will convert it into a native GEOGRAPHY by wrapping it in a CTE:
WITH points AS (
    SELECT * FROM UNNEST(@points)
)
SELECT * EXCEPT(lnglat), ST_GEOGFROMTEXT(lnglat) AS lnglat FROM points
But unfortunately, we want to use this pseudotable as a filter against a much larger on-disk table, and using this CTE-wrapper eliminates the ability for that query (not illustrated here) to leverage clustering. Clustering is very important for cost savings and execution performance. I can elaborate more on this if you request.
At the end of the day, it still doesn't explain why native GEOGRAPHYs are not materializing in the pseudo-table.
Question:
How do we use the BigQuery NodeJS SDK to construct a native BigQuery GEOGRAPHY type, similar to what we can do with BigQuery.timestamp() (above), without CTEs?
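One avenue that may be worth trying, offered here purely as an assumption (it is not confirmed anywhere in the question): newer releases of the Node.js client accept an explicit types option alongside params, which might allow declaring the struct fields as GEOGRAPHY/TIMESTAMP while passing plain WKT strings as values:

// Hypothetical sketch: declare parameter types explicitly instead of relying
// on type inference from the JavaScript values (run inside the async IIFE above).
const query = 'SELECT * FROM UNNEST(@points)';
const params = {
    points: [{ lnglat: 'POINT(1 2)', stamp: '2020-01-01' }]
};
const types = {
    points: [{ lnglat: 'GEOGRAPHY', stamp: 'TIMESTAMP' }]
};
const [job] = await bigquery.createQueryJob({ query, params, types });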
I have an API method where the user can pass in their own query. The field in the collection is simply ns, so the user might pass something like:
v.search = function (query: Object) {
    // query => {ns: {$in: ['foo', 'bar', 'baz']}} // valid!
    // query => {ns: {$in: {}}}                    // invalid!
    // query => {ns: /foo/}                        // valid!
};
Is there some way to do this, like a smoke test that can fail queries that are obviously wrong?
I am hoping that some MongoDB libraries export this functionality... but in all likelihood they validate a query only by sending it to the database, which is, in fact, the real arbiter of which queries are valid or invalid.
But I am looking to validate the query before sending it to the DB.
Some modules that are part of MongoDB Compass have been made open source.
There are two modules that may be of use for your use case:
mongodb-language-model
mongodb-query-parser
Although they may not fit your use case 100%, they should give you very close validation. For example, npm install mongodb-language-model, then:
var accepts = require('mongodb-language-model').accepts;
console.log(accepts('{"ns":{"$in":["foo", "bar", "baz"]}}')); // true
console.log(accepts('{"ns":{"$in":{}}}')); // false
console.log(accepts('{"ns":{"$regex": "foo"}}')); // true
Also possibly of interest: npm install mongodb-query-parser, to parse a string value into a JSON query. For example:
var parse = require('mongodb-query-parser');
var query = '{"ns":{"$in":["foo", "bar", "baz"]}}';
console.log(parse.parseFilter(query)); // {ns:{'$in':['foo','bar','baz']}}
I don't think it's possible to do this otherwise than by reflecting on the query.ns object and checking each of its properties and associated values; a rough sketch of that approach follows.
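A minimal sketch of such a manual smoke test, assuming only the ns shapes shown in the question need to pass (the helper name and the set of accepted shapes are illustrative):

// Hypothetical validator: accept a string, a RegExp, or {$in: [strings]} for ns.
function isValidNsQuery(query) {
    if (!query || typeof query.ns === 'undefined') return false;
    const ns = query.ns;
    if (typeof ns === 'string' || ns instanceof RegExp) return true;
    if (typeof ns === 'object' && ns !== null && Array.isArray(ns.$in)) {
        return ns.$in.every(v => typeof v === 'string');
    }
    return false;
}

console.log(isValidNsQuery({ns: {$in: ['foo', 'bar', 'baz']}})); // true
console.log(isValidNsQuery({ns: {$in: {}}}));                    // false
console.log(isValidNsQuery({ns: /foo/}));                        // true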
I'm writing a raw SQL query to implement Postgres full text search in my node backend. I've looked through the official docs, which state:
plainto_tsquery transforms unformatted text querytext to tsquery. The text is parsed and normalized much as for to_tsvector, then the & (AND) Boolean operator is inserted between surviving words.
but I'm not familiar enough with all the different SQL injection techniques to know for certain whether the following will be properly escaped:
'SELECT * FROM "Products" WHERE "catalog_ts_vector" @@ plainto_tsquery(\'english\', ' + search_term + ')'
The user will be able to enter whatever search_term they want via the URI.
Do I need to do further escaping/manipulation, or is this functionality fully baked into plainto_tsquery() and other Postgres safeguards?
Edit
As a side note, I plan to strip out most non-alphanumeric characters (including parentheses) with .replace(/[^\w-_ .\&]|\(\)/g, ' '); that should go a long way, but I'm still curious if this is even necessary.
Most likely you're using the pg module as your PostgreSQL client for node.js. In this case you don't need to worry about SQL injection; pg prevents it for you. Just don't use string concatenation to create the query; use parameterized queries (or prepared statements) instead:
var sql = 'SELECT * FROM "Products" WHERE "catalog_ts_vector" @@ plainto_tsquery(\'english\', $1)';
var params = [search_term];
client.query(sql, params, function (err, result) {
    // handle error and result here
});
Also look at the Prepared Statements part of the pg wiki and the PostgreSQL PREPARE statement; a sketch of a named prepared statement follows.
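For instance, a minimal sketch of a named prepared statement with pg (the statement name is just an illustration): pg prepares the statement on first use and reuses it on subsequent calls with the same name:

client.query({
    name: 'search-products', // hypothetical statement name
    text: 'SELECT * FROM "Products" WHERE "catalog_ts_vector" @@ plainto_tsquery(\'english\', $1)',
    values: [search_term]
}, function (err, result) {
    // handle error and result here
});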
UPD: What about Sequelize? It uses the pg module by default, but you can specify your preferred pg client in the dialectModulePath config parameter (see here). You can use parameterized queries in Sequelize too. Even better, you can use named parameters. So your code will be:
var sql = 'SELECT * FROM "Products" WHERE "catalog_ts_vector" @@ plainto_tsquery(\'english\', :search_term)';
var params = { search_term: search_term };
sequelize.query(sql, Product, null, params).then(function (products) {
    // handle your products here
});
Where Product is your sequelize product model.
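For reference, newer Sequelize releases (v4 and later) replace that positional signature with an options object; here is a sketch of the equivalent call, stated as an assumption about the current API rather than part of the original answer:

sequelize.query(sql, {
    replacements: { search_term: search_term }, // bound as :search_term
    model: Product,   // map result rows to the Product model
    mapToModel: true
}).then(function (products) {
    // handle your products here
});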
I am implementing a module that automatically generates Mongoose queries from the requested params, so to simplify the testing process I need to be able to get a text representation of the final query. How can I do that?
Say we have something like this:
var q = AppModel.find({id:777}).sort({date:-1})
I need to get something like this:
"db.appmodels.where({id:777}).sort({date: -1})"
You can enable debug mode for Mongoose, which by default sends the queries to the console, with a custom handler like the following:
mongoose.set('debug', function (collectionName, method, query, doc) {
    // Here `query` is what you are looking for,
    // so whatever you want to do with the query
    // would be done in here.
});
Given a query object q, you can rebuild the query using its fields, namely q._conditions and q._update. This is undocumented though, and could easily break between versions of Mongoose (tested on Mongoose 4.0.4); a rough sketch follows.
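A rough sketch of rebuilding the text from those internals, assuming Mongoose 4.x (both _conditions and options are undocumented, so treat this as illustrative only):

var q = AppModel.find({id: 777}).sort({date: -1});

// _conditions holds the filter; the sort clause is kept in q.options.
var text = 'db.' + AppModel.collection.name +
    '.find(' + JSON.stringify(q._conditions) + ')' +
    '.sort(' + JSON.stringify(q.options.sort) + ')';

console.log(text); //=> db.appmodels.find({"id":777}).sort({"date":-1})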