I want to create a "prepared statement" in Postgres using the node-postgres module, without binding it to parameters, because the binding will take place in a loop.
In the documentation I read:
query(object config, optional function callback) : Query
If _text_ and _name_ are provided within the config, the query will result in the creation of a prepared statement.
I tried
client.query({"name":"mystatement", "text":"select id from mytable where id=$1"});
but when I pass only the text and name keys in the config object, I get an exception:
(translated) binding 0 parameters but the prepared statement expects 1
Is there something I am missing? How do you create/prepare a statement without binding it to specific values, in order to avoid re-preparing the statement on every step of a loop?
I just found an answer on this issue by the author of node-postgres.
With node-postgres the first time you issue a named query it is
parsed, bound, and executed all at once. Every subsequent query issued
on the same connection with the same name will automatically skip the
"parse" step and only rebind and execute the already planned query.
Currently node-postgres does not support a way to create a named,
prepared query and not execute the query. This feature is supported
within libpq and the client/server protocol (used by the pure
javascript bindings), but I've not directly exposed it in the API. I
thought it would add complexity to the API without any real benefit.
Since named statements are bound to the client in which they are
created, if the client is disconnected and reconnected or a different
client is returned from the client pool, the named statement will no
longer work (it requires a re-parsing).
You can use pg-prepared for that:
var prep = require('pg-prepared')
// First, prepare the statement once without binding parameters
var item = prep('select id from mytable where id=${id}')
// Then bind the parameters and execute the query in the loop
for (const id of [1, 2, 3]) {
  client.query(item({id: id}), function(err, result) {...})
}
Update: Reading your question again, here's what I believe you need to do. You need to pass a "values" array as well.
Just to clarify: where you would normally "prepare" your query, just prepare the object you pass to it, without the values array. Then where you would normally "execute" your query, set the values array on the object and pass it to query. The driver will do the actual prepare for you the first time around, and simply do the binding and execution on every following iteration.
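A minimal sketch of that pattern, reusing the statement name and query from the question:

const query = {
  name: 'mystatement',
  text: 'select id from mytable where id=$1'
}
for (const id of [1, 2, 3]) {
  // Set the values just before executing; the query is only parsed the
  // first time this name is seen on the connection and is reused after.
  query.values = [id]
  client.query(query, function(err, result) { /* use result.rows */ })
}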
I haven't used Logic Apps a lot. My boss is having trouble stuffing the results of one query into a variable and then using that variable in another query.
Basically, all he wants to do is get the list of IDs returned from the first query and use that list in the second.
Here is a picture of what his logic app looks like:
You can see that at the end of the second query he wants to check whether the ID is in the list or not. He's out for the day and I'm not sure the variable is even receiving the list of IDs successfully, but is there anything in the picture that you can tell needs to be corrected? Or any suggestions he could try to achieve what he's after?
According to the image, no data is getting stored in the AppId variable. In the query you can just use c.EntityId directly. The query below checks whether c.id is present in c.EntityId:
SELECT c.Vechicle.GrossVechicleWeight as GVW, c.EntityId as ApplicationId FROM c where c.RiskTypeId = 1 and c.Discriminator = 'RiskEntity' and c.EntityTypeId = 4500 and c.id in (c.EntityId)
If you are trying to store c.EntityId in the AppId variable, you can query SELECT c.EntityId FROM c, extract only the EntityId values with a Parse JSON action, and then store them in the variable with an Append to array variable action.
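As a rough sketch, assuming the query action returns an array of documents that each carry an EntityId string (the exact shape depends on the connector's output), the Parse JSON schema could look like:

{
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "EntityId": { "type": "string" }
        }
    }
}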
Here is my logic app
RESULT:
I'm currently writing an app that accesses Google BigQuery via their "@google-cloud/bigquery": "^2.0.6" library. In one of my queries I have a where clause where I need to pass a list of IDs. If I use UNNEST like in their example and pass an array of strings, it works fine.
https://cloud.google.com/bigquery/docs/parameterized-queries
However, I have found that UNNEST can be really slow, and I just want to use IN on its own and pass in a string list of IDs. No matter what format of string list I send, the query returns null results. I think this is because of the way the library converts parameters in order to avoid SQL injection; I have to use a parameter because I also want to avoid SQL injection attacks on my app. If I pass just one ID it works fine, but if I pass a list it blows up, so I figure it has something to do with formatting. I know my format is correct in terms of what IN would normally expect, i.e. IN ('', '').
Has anyone been able to just pass a param to IN and have it work, i.e. IN (@idParam)?
We declare params like this at the beginning of the script:
DECLARE var_country_ids ARRAY<INT64> DEFAULT [1,2,3];
and use them like this:
WHERE if(var_country_ids is not null,p.country_id IN UNNEST(var_country_ids),true) AND ...
As you see, this allows both a NULL value and the array notation. We don't see issues with speed.
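For completeness, the same idea from the Node.js client: the library accepts a plain array as a named parameter value, and UNNEST expands it inside the query. This is a minimal sketch; the dataset, table, and column names are made up for illustration.

const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function queryByIds(ids) {
  // ids is a plain array, e.g. ['id1', 'id2', 'id3']; it is sent as a
  // single array parameter, not interpolated into the SQL string.
  const [rows] = await bigquery.query({
    query: 'SELECT * FROM `my_dataset.my_table` WHERE id IN UNNEST(@ids)',
    params: {ids: ids},
  });
  return rows;
}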
I am using the Python Bolt driver to create nodes in a Neo4j database. These nodes get altered by apoc.trigger functions, and I want the returned BoltStatementResult to contain the altered version of these nodes.
This is what I have tested so far:
My triggers are working as expected; the stored nodes are altered correctly.
I tried the 'before' and 'after' phases.
I set the trigger functions to return the altered version.
I did write a second query to get the new, updated node from the database. But this option is quite unsafe, as there is no unique identifier to match on.
My trigger function:
CALL apoc.trigger.add(
'onCreateNodeAddMetadata',
'UNWIND {createdNodes} AS n
SET n.uid = apoc.create.uuid(), n.timestamp = timestamp() RETURN n',
{phase: 'before'}
)
I expect the return value of my session.write_transaction to contain the added properties.
As a safe workaround (but see caveat below), the Cypher query in your write_transaction can return the native ID of the created node (e.g., RETURN ID(n)).
Then, as long as you know the node was not deleted, you can perform a query for it with that ID (in this example, myID contains the ID value and is passed as a parameter):
MATCH (n) WHERE ID(n) = $myId
...
If the node could be deleted before you search for it by native ID, then this technique is not safe, since Neo4j can re-assign the native ID of a deleted node to another newly-created node.
I am trying to use Firebase Realtime Database multi-path updates.
However, trying to set a parent node to null as below results in an error.
const firebaseUpdate = {}
firebaseUpdate[`user/${uid}`] = null
db.ref().update(firebaseUpdate)
Error: Reference.update failed: First argument contains a path /user/USER_ID
that is ancestor of another path /user/USER_ID/creationTime
I was wondering if there is a way to use multi-path updates in order to set a parent node with multiple children to null.
I assume I could use remove or set function but I'd rather use the multi-path update.
The error message indicates that you're trying to apply two conflicting updates to the database in one operation. As the message says, your update tries to:
write to /user/USER_ID
write to /user/USER_ID/creationTime
The second write targets a child of the first one. Since the order of writes in a multi-location update is unspecified, it's impossible to say what the outcome of the write operation would be.
If you want to replace any data that currently exists at /user/USER_ID with the creationTime, you should update it like this:
db.ref().update({
"/user/USER_ID": { creationTime: Date.now() }
})
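To answer the question as asked: a multi-path update can set a parent node with children to null; the conflict only arises when the same update also writes to a descendant of that parent. A minimal sketch (the index/${uid} path is hypothetical):

const firebaseUpdate = {}
// Deleting the whole parent node is fine on its own: writing null
// removes the node and everything under it.
firebaseUpdate[`user/${uid}`] = null
// Other paths in the same update are fine too, as long as none of them
// is a descendant of user/${uid}.
firebaseUpdate[`index/${uid}`] = null
db.ref().update(firebaseUpdate)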
The database is in the Azure cloud and not currently used in production. There are 80,000 rows, and uprn is a VARCHAR(100).
I'm already using Joi to validate each UPRN as well.
I'm using Knex with a SQL Server database, with the following whereIn query:
knex(LOCATIONS.table).whereIn(LOCATIONS.uprn, req.body.uprns)
but this takes 8-12s to complete and sometimes times out. If I use .toQuery() on the same thing, SSMS will return the result within 1-2 seconds.
If I build a raw query, the output of .toQuery() or .toString() works in SSMS and returns results. But if I try to run the raw query directly, it returns 0 results.
I'm looking to either fix what's making whereIn so slow or get the raw query working.
EDIT 1:
After much debugging and trying, it seems the problem comes from how Knex deals with arrays, so I wrote a for-of loop that adds a ? placeholder for each array element and then passed the array as the bindings for all of the params.
This led me to realize that the performance issue is due to SQL Server's way of parameterising queries.
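A rough sketch of that intermediate placeholder approach, reusing the LOCATIONS identifiers from the query above (assumed to hold the table and column names; this runs inside an async function):

const placeholders = req.body.uprns.map(() => '?').join(', ');
// One '?' per array element, with the array passed as the bindings
const result = await knex.raw(
  `select * from ${LOCATIONS.table} where ${LOCATIONS.uprn} in (${placeholders})`,
  req.body.uprns
);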
I ended up building a raw query string with all of the parameters and validating the input with Joi string/regex config:
Joi.string()
.min(1)
.max(35)
.regex(/^[a-z\d\-_\s]+$/i)
allowing only alphanumerics, dashes, underscores and spaces, which should prevent SQL injection.
I'm going to look deeper into the security implications of this, and might create a separate login that can only SELECT data from that table and nothing more, to run these queries with.
Needed to just handle it raw and validate separately.
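Putting the pieces together, a hedged sketch of that final approach (inside an async function; uprnSchema is just a name for the Joi config above, and the LOCATIONS identifiers are reused from the original query):

const uprnSchema = Joi.string().min(1).max(35).regex(/^[a-z\d\-_\s]+$/i);

// Joi.attempt throws if a value fails validation, otherwise returns it
const validated = req.body.uprns.map((uprn) => Joi.attempt(uprn, uprnSchema));
// Only validated values are inlined into the raw query string
const inList = validated.map((uprn) => `'${uprn}'`).join(', ');
const result = await knex.raw(
  `select * from ${LOCATIONS.table} where ${LOCATIONS.uprn} in (${inList})`
);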