AND occasionally produces a wrong result on the shell and the node native driver - node.js

I've built a dynamic query generator that creates the queries I need based on many factors; however, in rare cases it behaved oddly. After a day of reading logs I found a situation that can be simplified to this:
db.users.find({att: 'a', att: 'b'})
What I expected is that MongoDB uses AND by default, so the above query's result should be an empty array. However, it's not!
But when I use $and explicitly, the result is an empty array:
db.users.find({$and: [{att: 'a'}, {att: 'b'}]})

In JavaScript, object keys must be unique; if a key is repeated, the later value replaces the earlier one (the mongodb shell is JavaScript-based, so it follows the same rules):
const t = {att: 'a', att: 'b'};
console.log(t); // logs { att: 'b' }; the first value was overwritten
So in your case, the query actually behaves like this:
db.users.find({att: 'b'})
You have to handle this situation in your code if you want the result to be empty in that case.
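If the generator can emit the same field twice, one way to guard against it (a rough sketch with a made-up buildFilter helper, not code from the question) is to collapse the field/value pairs into an explicit $and before calling find:

// Hypothetical helper: turns [['att', 'a'], ['att', 'b']] into an explicit $and,
// so repeated fields are not silently collapsed by the object literal.
function buildFilter(pairs) {
  return { $and: pairs.map(([field, value]) => ({ [field]: value })) };
}

// db.collection('users').find(buildFilter([['att', 'a'], ['att', 'b']]))
// matches nothing, since no document can have att equal to both 'a' and 'b'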

Related

flux query: filter out all records related to one matching the condition

I'm trying to filter an InfluxDB query (using the Node.js influxdb-client library).
As far as I can tell, it only works with Flux queries.
I would like to filter out all records that share a specific attribute with any record that matches a particular condition. I'm filtering using the filter() function, but I'm not sure how I can continue from there. Is this possible in a single query?
My filter looks something like this:
|> filter(fn: (r) => r["_value"] == 1 and r["button"] == "1")
and I would like to leave out all the records that have the same r["session"] as any that match this filter.
Do I need two queries; one to get those r["session"]s and one to filter on those, or is it possible in one?
Update:
Trying the two-step process. I got the list of r["session"]s into an array called sessionsExclude, and I'm now attempting to use the contains() Flux function to filter out values included in that array.
Flux query section:
|> filter(fn: (r) => contains(value: r["session"], set: ${sessionsExclude}))
I'm getting an error: unexpected token for property key: INT ("102"). Not sure why. It looks like Flux tries to turn the values into integers? r["session"] is a string (and the example in the docs also uses an array of strings)...
I ended up doing it in two queries. I'm still confused about the strings vs. integers, but casting the value to an int and interpolating the array of r["session"] values into the query works, like this:
`|> filter(fn: (r) => not contains(value: int(v: r["session"]), set: [${sessionsExclude.join(",")}]))`
Added the "not" to exclude instead of retain the values matching the array...

Node-Postgres/Knex returning CITEXT[] as a string in JS, instead of an array of strings

I'm using Knex, which itself uses package "pg" (aka "node-postgres").
If you SELECT some rows from a table with a TEXT[] column, all is well... in JS you get an array of strings.
But if you're using a CITEXT[] column, instead you just get back a string in JS like:
"{First-element,Second-element}"
Normally when you want to instruct the pg package on how to return specific postgres types, you can do something like this:
import {types} from 'pg';
types.setTypeParser(types.builtins.TIMESTAMPTZ, 'text');
types.setTypeParser(types.builtins.TIMESTAMP, 'text');
types.setTypeParser(types.builtins.DATE, 'text');
types.setTypeParser(types.builtins.TIME, 'text');
types.setTypeParser(types.builtins.TIMETZ, 'text');
The types.builtins.* constants have values that are hardcoded OID numbers for the known built-in types in postgres. Those OID numbers are the same across all postgres installations.
However, because CITEXT is an extension, the OIDs for the CITEXT and CITEXT[] types will be different on every server. For example, with the following SQL query:
SELECT typname, oid, typarray FROM pg_type WHERE typname like '%citext%';
On my development server I get:
typname | oid   | typarray
--------|-------|---------
citext  | 17459 | 17464
_citext | 17464 | 0
But on my production server I get:
typname | oid   | typarray
--------|-------|---------
citext  | 18618 | 18623
_citext | 18623 | 0
How can I solve this?
Some hacky options that I really don't want to do are:
Find out all the different OID values on all my servers and hard-code them in - very hacky and I really don't want to do this.
Write code specifically for every table/column that manually converts the strings to arrays - also hacky and repetitive.
When the Node process initializes, query the server for its OID values and then call types.setTypeParser() with those dynamic values - also not very good.
How can I solve this for all tables/columns without these hacky solutions?
I don't believe there is any way to do it without querying the DB.
I would probably query the correct OID before starting the Node app, store it in an environment variable, and then initialize the pg type parser with the value from process.env.
That is also a bit hacky, but at least the hack is mostly kept out of the application code.
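A rough sketch of that approach (the CITEXT_ARRAY_OID variable name is made up; postgres-array is the array parser that pg-types itself depends on):

// Look up the OID once, e.g. before the app starts:
//   SELECT typarray FROM pg_type WHERE typname = 'citext';
// and export it, e.g. CITEXT_ARRAY_OID=17464 (the value differs per server).
import { types } from 'pg';
import { parse as parsePgArray } from 'postgres-array';

const citextArrayOid = Number(process.env.CITEXT_ARRAY_OID);
if (citextArrayOid) {
  // Turns "{First-element,Second-element}" into ['First-element', 'Second-element']
  types.setTypeParser(citextArrayOid, (value) => parsePgArray(value, (v) => v));
}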

Is there a way to pass a parameter to google bigquery to be used in their "IN" function

I'm currently writing an app that accesses Google BigQuery via their "@google-cloud/bigquery": "^2.0.6" library. In one of my queries I have a WHERE clause where I need to pass a list of ids. If I use UNNEST like in their example and pass an array of strings, it works fine.
https://cloud.google.com/bigquery/docs/parameterized-queries
However, I have found that UNNEST can be really slow, and I just want to use IN on its own and pass in a string list of ids. No matter what format of string list I send, the query returns null results. I think this is because of the way they convert parameters in order to avoid SQL injection. I have to use a parameter because I also want to avoid SQL injection attacks on my app. If I pass just one id it works fine, but if I pass a list it blows up, so I figure it has something to do with formatting; I know my format is correct in terms of what IN would normally expect, i.e. IN ('', '').
Has anyone been able to just pass a param to IN and have it work? i.e. IN (@idParam)?
We declare params like this at the beginning of the script:
DECLARE var_country_ids ARRAY<INT64> DEFAULT [1,2,3];
and use like this:
WHERE IF(var_country_ids IS NOT NULL, p.country_id IN UNNEST(var_country_ids), TRUE) AND ...
As you can see, this allows both NULL and the array notation. We don't see issues with speed.
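For completeness, a sketch of the client-side equivalent with the Node library, following the array-parameter pattern from the docs linked above (the table and parameter names are illustrative):

// Bind the id array to a named parameter and expand it with UNNEST.
const { BigQuery } = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function findByIds(ids) {
  const query =
    'SELECT * FROM `my_dataset.my_table` WHERE id IN UNNEST(@idParam)';
  const [rows] = await bigquery.query({ query, params: { idParam: ids } });
  return rows;
}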

Knex + SQL Server whereIn query 8-12s -- raw version returns NO results but if I input the .toQuery() result directly I get results

The database is in the Azure cloud and is not currently used in production. There are 80,000 rows, and a UPRN is a VARCHAR(100).
I'm already using Joi to validate each UPRN as well.
I'm using Knex with a SQL Server database and the following whereIn query:
knex(LOCATIONS.table).whereIn(LOCATIONS.uprn, req.body.uprns)
but this takes 8-12 s to complete and sometimes times out. If I take the output of .toQuery() for the same query, SSMS returns the result within 1-2 s.
If I build a raw query, the resulting .toQuery() or .toString() output works in SSMS and returns results. But if I run the raw query directly through Knex, it returns 0 results.
I'm looking to either fix what's making whereIn so slow or get the raw query working.
EDIT 1:
After much debugging and trying, it seems the bug is due to how Knex deals with arrays, so I made a for-of loop that adds a ? placeholder for each array element and then passed the array as the bindings.
This led me to realize that the performance issue is due to SQL Server's way of parameterising.
I ended up building a raw query string with all of the parameters and validating the input with a Joi string/regex config:
Joi.string()
.min(1)
.max(35)
.regex(/^[a-z\d\-_\s]+$/i)
allowing only alphanumeric characters, dashes, underscores, and whitespace, which should prevent SQL injection.
I'm going to look deeper into the security implications of this, and I might create a separate login that can only SELECT data from that table and nothing more, to run these queries with.
In the end I just needed to handle it raw and validate separately.
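Roughly what that looks like (table and column names here are illustrative): one bound placeholder per UPRN, so the values are still parameterised rather than concatenated into the SQL string:

// One '?' placeholder per UPRN; knex.raw binds the array values positionally.
const placeholders = req.body.uprns.map(() => '?').join(', ');
const rows = await knex.raw(
  `SELECT * FROM locations WHERE uprn IN (${placeholders})`,
  req.body.uprns
);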

Best practice to pass query conditions in ajax request

I'm writing a REST API in Node.js that will execute a SQL query and send the results;
in the request I need to send the WHERE conditions, e.g.:
GET 127.0.0.1:5007/users                       //gets the list of users
GET 127.0.0.1:5007/users  (condition: id = 1)  //gets the user with id 1
Right now the conditions are passed from the client to the rest api in the request's headers.
In the API I'm using Sequelize, an ORM that needs to receive the WHERE conditions in a particular form (an object). For example, the condition:
(x=1 AND (y=2 OR z=3)) OR (x=3 AND y=1)
this needs to be formatted as a nested object:
OR -+- AND -+- x=1
    |       +- OR -+- y=2
    |              +- z=3
    |
    +- AND -+- x=3
            +- y=1
so the object would be:
Sequelize.or(
  Sequelize.and(
    { x: 1 },
    Sequelize.or(
      { y: 2 },
      { z: 3 }
    )
  ),
  Sequelize.and(
    { x: 3 },
    { y: 1 }
  )
)
Now I'm trying to pass a simple string (like "(x=1 AND (y=2 OR z=3)) OR (x=3 AND y=1)"), but then I need a function on the server that can convert the string into the needed object. In my opinion this method has the advantage that the developer writing the client can pass the WHERE conditions in a simple, SQL-like way; it is also independent of the ORM used, so there is no need to change the client if we change the server or switch to a different ORM.
The function to read and convert the conditions string into an object is giving me a headache (I've been trying to write one without success, so if you have examples of how to do something like this...).
What I would like to get is a route capable of executing almost any kind of sql query and give the results:
now I have a different route for everything:
127.0.0.1:5007/users //to get all users
127.0.0.1:5007/users/1 //to get a single user
127.0.0.1:5007/lastusers //to get users registered in the last month
and so on for the other tables I need to query (one route for every kind of request I need in the client);
instead I would like to have only one route, something like:
127.0.0.1:5007/request
(when calling this route I will pass the table name and the conditions' string)
Do you think this solution would be a good solution or you generally use other ways to handle this kind of things?
Do you have any idea on how to write a function to convert the conditions' string into the desired object?
Any suggestion would be appreciated ;)
I would strongly advise you not to expose any part of your database model to your clients. Doing so means you can't change anything you expose without the risk of breaking the clients. Based on what you've supplied, one suggestion is that you can and should use query parameters to cut down on the number of endpoints you've got:
GET /users //to get all users
GET /users?registeredInPastDays=30 //to get user registered in the last month
GET /users/1 //to get a single user
Obviously "registeredInPastDays" should be renamed to something less clumsy .. it's just an example.
As for the conditions string, there ought to be plenty of parsers available online; the grammar looks very straightforward.
IMHO the main disadvantage of your solution is that you are creating yet another API for querying data. Why create something from scratch when it already exists? You should use an existing, mature query API and focus on your business logic rather than inventing something new.
For example, you can take the query syntax from OData. Many people have been developing that standard for a long time, and they have already considered different use cases and obstacles for a query API.
Resources are located with a URI. You can use or mix three ways to address them:
Hierarchically with a sequence of path segments:
/users/john/posts/4711
Non-hierarchically with query parameters:
/users/john/posts?minVotes=10&minViews=1000&tags=java
With matrix parameters, which affect only one path segment:
/users;country=ukraine/posts
This is normally sufficient, but it has limitations such as the maximum URI length. In your case the problem is that you can't easily describe AND and OR conjunctions with query parameters. But you can use a custom or standard query syntax. For instance, if you want to find all cars or vehicles from Ford except the Capri with a price between $10000 and $20000, Google uses the search parameter
q=cars+OR+vehicles+%22ford%22+-capri+%2410000..%2420000
(the %22 is an escaped ", the %24 an escaped $).
If this does not work for your case and you want to pass data outside of the URI, the format is just a matter of taste. Adding a custom header like X-Filter may be a valid approach. I would tend to use a POST. Although you just want to query data, this is still RESTful if you treat your request as the creation of a search-result resource:
POST /search HTTP/1.1
your query-data
Your server should return the newly created resource in the Location header:
HTTP/1.1 201 Created
Location: /search/3
The result can still be cached and you can bookmark it or send the link. The downside is that you need an additional POST.
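A bare-bones sketch of that search-resource pattern in Express (the in-memory store and the runQuery executor are placeholders, not part of the answer):

const searches = new Map(); // illustrative in-memory store of saved searches
let nextId = 1;

// POST /search creates a search-result resource and points to it via Location.
app.post('/search', (req, res) => {
  const id = nextId++;
  searches.set(id, req.body); // req.body carries the query conditions
  res.status(201).location(`/search/${id}`).end();
});

// GET /search/:id returns the result of a previously created search.
app.get('/search/:id', (req, res) => {
  const search = searches.get(Number(req.params.id));
  if (!search) return res.sendStatus(404);
  res.json(runQuery(search)); // runQuery: hypothetical function executing the stored conditions
});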
