So I have some variables that contain strings. I would like to use these strings to access fields in my MongoDB documents, but of course it won't work if I just write it like this:
...some db connection code
var x = "name";
...find all data, then loop
function (err, docs) {
  docs[i].x; // looks for a field literally named "x", not the value stored in x
}
The question is: how can I access the property of my documents whose name is stored in x?
You can use bracket notation:
database.mydata[i][x];
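A minimal illustration of the difference, with a plain object standing in for a document (the field names here are made up):

```javascript
var x = 'name';
var doc = { name: 'Ada', age: 36 };

doc.x;   // undefined: looks for a field literally called "x"
doc[x];  // 'Ada': looks up the field whose name is stored in x
```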
I'm using Knex, which itself uses package "pg" (aka "node-postgres").
If you SELECT some rows from a table with a TEXT[] column, all is well... in JS you get an array of strings.
But if you're using a CITEXT[] column, instead you just get back a string in JS like:
"{First-element,Second-element}"
Normally when you want to instruct the pg package on how to return specific postgres types, you can do something like this:
import {types} from 'pg';
types.setTypeParser(types.builtins.TIMESTAMPTZ, 'text');
types.setTypeParser(types.builtins.TIMESTAMP, 'text');
types.setTypeParser(types.builtins.DATE, 'text');
types.setTypeParser(types.builtins.TIME, 'text');
types.setTypeParser(types.builtins.TIMETZ, 'text');
The types.builtins.* constants have values that are hardcoded OID numbers for the known built-in types in postgres. Those OID numbers are the same across all postgres installations.
However, because CITEXT comes from an extension, the OID numbers for the CITEXT and CITEXT[] types will be different on every server. For example, with the following SQL query:
SELECT typname, oid, typarray FROM pg_type WHERE typname like '%citext%';
On my development server I get:
typname | oid   | typarray
--------|-------|---------
citext  | 17459 | 17464
_citext | 17464 | 0
But on my production server I get:
typname | oid   | typarray
--------|-------|---------
citext  | 18618 | 18623
_citext | 18623 | 0
How can I solve this?
Some hacky options that I really don't want to do are:
Find out the OID values on each of my servers and hard-code them, which is very hacky and I really don't want to do it.
Write code for every table/column that manually converts the strings to arrays, which is also hacky and repetitive.
When the node process initializes, query the server for its OID values and then call types.setTypeParser() with those dynamic values, which is also not great.
How can I solve this for all tables/columns without these hacky solutions?
I don't believe there is any way to do it without querying the DB.
I would probably query the correct OID number before starting the node app and store it to an environment variable and then initialize pg types with the value from process.env.
That is also a bit hacky, but at least the hack is mostly encapsulated out of the application code.
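As a sketch of that startup approach: look up the OID once and register a parser for it. The parser below is my own minimal one and only handles simple array literals; for the general case you would use the parsing that the pg-types/postgres-array packages already provide.

```javascript
// Minimal parser for simple postgres array literals like "{a,b,c}".
// Handles only the common case (no commas inside quoted elements).
function parsePgTextArray(value) {
  if (value === null) return null;
  const inner = value.slice(1, -1); // strip the surrounding { }
  if (inner === '') return [];
  return inner.split(',').map((el) => {
    // Unquote double-quoted elements and unescape \" and \\
    if (el.startsWith('"') && el.endsWith('"')) {
      return el.slice(1, -1).replace(/\\(.)/g, '$1');
    }
    return el;
  });
}

// Hypothetical startup wiring (assumes a connected pg `pool`):
// const { types } = require('pg');
// const { rows } = await pool.query(
//   "SELECT typarray FROM pg_type WHERE typname = 'citext'"
// );
// types.setTypeParser(rows[0].typarray, parsePgTextArray);
```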
I'm currently writing an app that accesses Google BigQuery via their "@google-cloud/bigquery": "^2.0.6" library. In one of my queries I have a WHERE clause where I need to pass a list of ids. If I use UNNEST as in their example and pass an array of strings, it works fine.
https://cloud.google.com/bigquery/docs/parameterized-queries
However, I have found that UNNEST can be really slow, and I just want to use IN on its own and pass in a string list of ids. No matter what format of string list I send, the query returns null results. I think this is because of the way the library converts parameters in order to avoid SQL injection. I have to use a parameter because I myself want to avoid SQL injection attacks on my app. If I pass just one id it works fine, but if I pass a list it blows up, so I figure it has something to do with formatting, even though I know my format is correct in terms of what IN would normally expect, i.e. IN ('', '').
Has anyone been able to just pass a param to IN and have it work, i.e. IN (@idParam)?
We declare params like this at the beginning of the script:
DECLARE var_country_ids ARRAY<INT64> DEFAULT [1,2,3];
and use like this:
WHERE if(var_country_ids is not null, p.country_id IN UNNEST(var_country_ids), true) AND ...
As you can see, we allow NULL as well as the array notation. We don't see issues with speed.
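For reference, the documented way to pass a list from the Node client is still an array parameter expanded with IN UNNEST(@param). A sketch of building the query options (the dataset, table, and column names here are hypothetical):

```javascript
// Sketch: an array query parameter for @google-cloud/bigquery.
// Dataset/table/column names are made up for illustration.
function buildCountryQuery(countryIds) {
  return {
    query:
      'SELECT * FROM `my_dataset.people` ' +
      'WHERE country_id IN UNNEST(@country_ids)',
    params: { country_ids: countryIds },
  };
}

// Usage (needs credentials, so not run here):
// const { BigQuery } = require('@google-cloud/bigquery');
// const [rows] = await new BigQuery().query(buildCountryQuery([1, 2, 3]));
```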
I've read through every Stack Overflow answer I can find and I don't understand why this still isn't working.
I'm trying to construct a NodeJS Mongo find query and very simply want to use a variable as the values, the key does not need to be dynamic.
This is the code I was working with initially:
collection.find({project_id : project_id_val})
but this simply returns:
Found the following records
[]
I've also tried constructing my own javascript object and passing that in e.g.
query = {}
query["project_id"] = project_id_val
collection.find(query)
But that doesn't work either. I know the key/value pair is correct because
project_id: "12345" works absolutely fine and returns exactly what I want. I feel like this should be very simple, so if someone could let me know where I'm going wrong, that would be great.
Thanks.
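For reference, a minimal sketch of the usual patterns for building such a filter object (plain JS, no database; the names mirror the question's). One thing worth checking in cases like this is that the value's type matches what is stored: a string "12345" will not match a numeric 12345 (or an ObjectId) in Mongo.

```javascript
const project_id_val = '12345';

// Static key, dynamic value:
const query1 = { project_id: project_id_val };

// Dynamic key as well, via an ES6 computed property name:
const field = 'project_id';
const query2 = { [field]: project_id_val };

// Both produce the same filter object to pass to collection.find(...)
```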
So, I have one collection called UserInfo that I use in different functions. Currently, I just create a local variable in every function that needs it. Is this normal practice? How expensive is it to create this collection variable?
ex: var collection = myDb.collection('UsersInfo');
I am assuming you are sharing myDb (the MongoClient) across functions, probably passing it as a function parameter or using a global.
If you have just one collection, why not do the same with the collection object as you do with the myDb object?
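A minimal sketch of that suggestion (the helper name is mine): create the collection handle once and reuse it. In the Node driver, db.collection('name') is a cheap local operation with no server round-trip, so this is more about tidiness than cost.

```javascript
// Cache the collection handle at module scope instead of
// recreating it inside every function.
let usersInfo = null;

function getUsersInfo(myDb) {
  if (!usersInfo) {
    usersInfo = myDb.collection('UsersInfo');
  }
  return usersInfo;
}
```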
I want to create a "prepared statement" in postgres using the node-postgres module. I want to create it without binding it to parameters because the binding will take place in a loop.
In the documentation I read:
query(object config, optional function callback) : Query
If _text_ and _name_ are provided within the config, the query will result in the creation of a prepared statement.
I tried
client.query({"name":"mystatement", "text":"select id from mytable where id=$1"});
but when I try passing only the text and name keys in the config object, I get an exception whose (translated) message is: binding 0 parameters but the prepared statement expects 1.
Is there something I am missing? How do you create/prepare a statement without binding it to specific values, in order to avoid re-preparing the statement in every iteration of a loop?
I just found an answer to this issue from the author of node-postgres:
With node-postgres the first time you issue a named query it is
parsed, bound, and executed all at once. Every subsequent query issued
on the same connection with the same name will automatically skip the
"parse" step and only rebind and execute the already planned query.
Currently node-postgres does not support a way to create a named,
prepared query and not execute the query. This feature is supported
within libpq and the client/server protocol (used by the pure
javascript bindings), but I've not directly exposed it in the API. I
thought it would add complexity to the API without any real benefit.
Since named statements are bound to the client in which they are
created, if the client is disconnected and reconnected or a different
client is returned from the client pool, the named statement will no
longer work (it requires a re-parsing).
You can use pg-prepared for that:
var prep = require('pg-prepared')
// First, prepare the statement without binding parameters
var item = prep('select id from mytable where id=${id}')
// Then execute the query, binding parameters inside the loop
for (const id of [1, 2, 3]) {
  client.query(item({id: id}), function(err, result) { /* ... */ })
}
Update: Reading your question again, here's what I believe you need to do: pass a "values" array as well.
Just to clarify: where you would normally "prepare" your query, just prepare the config object without the values array. Then, where you would normally "execute" the query, set the values array on the object and pass it to query(). The first time around the driver will do the actual prepare for you, and it will simply do the binding and execution on the remaining iterations.
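To make that concrete, a small sketch (the statement name and SQL are illustrative): keep name and text fixed and vary only values. On the same connection, node-postgres parses and plans the statement on the first call and only binds and executes on subsequent calls with the same name.

```javascript
// Builds the query config for a reusable named statement.
function namedFindById(id) {
  return {
    name: 'fetch-by-id',                          // same name => statement reused
    text: 'select id from mytable where id = $1',
    values: [id],                                 // only this changes per call
  };
}

// Usage (assumes a connected `client`):
// for (const id of [1, 2, 3]) {
//   const result = await client.query(namedFindById(id));
// }
```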