Array to function parameters - node.js

I'm using the node.js Redis library and I'm attempting to bulk-subscribe to many keys. I've got a dynamic array, i.e.
var keys {'key1','key2',...,'keyN'}
and I want to feed each element in as a parameter to subscribe in the Redis library, which takes one or more strings. I've tried JS's apply function:
redisClient.subscribe.apply(this,keys);
but it doesn't cause a subscription. Any suggestions on how I can get over this issue?

Your example data is totally invalid JS, but I'm assuming you have it correct in your code.
You need to set the proper function context:
redisClient.subscribe.apply(redisClient, keys);
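As a minimal sketch (assuming the classic node_redis API, where subscribe accepts multiple channel names):
const redis = require('redis');
const redisClient = redis.createClient();

const keys = ['key1', 'key2', 'keyN'];

// apply's first argument becomes `this` inside subscribe,
// so it must be the client itself, not the surrounding scope:
redisClient.subscribe.apply(redisClient, keys);

// In modern JS, spread syntax does the same thing more readably:
redisClient.subscribe(...keys);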

Related

Azure Functions Orchestration using Logic App

I have multiple Azure Functions which carry out small tasks. I would like to orchestrate those tasks together using Logic Apps, as you can see here:
[Image: Logic App Flow]
I am taking the output of Function 1 and inputting parts of it into Function 2. As I was creating the Logic App, I realized I have to parse the response of Function 1 as JSON in order to access the specific parameters I need. However, the Parse JSON action requires me to provide an example schema, and I need to be able to parse the response as JSON without this manual step.
One solution I thought would work was to register Function 1 with APIM and provide a response schema. This doesn't seem to be any different from calling the Function directly.
Does anyone have any suggestions for how to get the response of a Function as JSON/XML?
You can run JavaScript snippets and dynamically parse the response from Function 1 without providing a sample schema, e.g.
// Collect the property names on the Function 1 response body
var data = Object.keys(workflowContext.trigger.outputs.body.Body);
// Pick out the key whose name contains 'Property' (the dynamic content)
var key = data.filter(s => s.includes('Property')).toString();
return workflowContext.trigger.outputs.body.Body[key];
https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-add-run-inline-code?tabs=consumption

Problem accessing a dictionary value on Dialogflow fulfillment

I am using the fulfillment section on Dialogflow in a fairly basic program I have started, to show myself I can do a bigger project on Dialogflow.
I have an object set up as a dictionary.
I can make the keys a constant through
const KEYS = Object.keys(overflow);
I am going through the values using
if (KEYS.length > 0) {
var dictionary = overflow[KEYS[i]];
If I stringify the value using
JSON.stringify(item);
I get:
{"stack":"overflow","stack2":"overflowtoo", "stack3":3}
This leads me to believe I am actually dealing with a dictionary, hence the name of the variable.
I am trying to access a string property such as stack (as opposed to the number stack3).
Everything I have read online tells me
dictionary.stack
should work, since
JSON.stringify(item);
Shows me:
{"stack":"overflow","stack2":"overflowtoo","stack3":3}
Whenever I:
Try to add the variable to the response string, or
Append it to a string using output += `${item.tipo}`;
I get an error that the function crashed. I can replace it with the stringify line and it works, giving me the JSON above, so the issue isn't there.
The dictionary values are created like this before being accessed in another function:
dictionary[request.body.responseId] = {
  "stack": "overflow",
  "stack2": "overflowtoo",
  "stack3": 3
};
Based on the suggestion here, I saw that the properties were being accessed properly but their type was undefined. Going over things repeatedly, I saw that the properties were defined as lists rather than single values.
Dot notation works when they stop being lists.
Thanks for guiding me towards the problem.
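For anyone hitting the same thing, a minimal sketch of the difference (property names are illustrative):
// When a property is stored as a list, dot notation returns the array itself:
const asList = { stack: ['overflow'] };
console.log(asList.stack); // [ 'overflow' ] - an array, not the string

// Stored as a single value, dot notation returns the string directly:
const asValue = { stack: 'overflow' };
console.log(asValue.stack); // 'overflow'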

Node-Postgres/Knex returning CITEXT[] as a string in JS, instead of an array of strings

I'm using Knex, which itself uses package "pg" (aka "node-postgres").
If you SELECT some rows from a table with a TEXT[] column, all is well... in JS you get an array of strings.
But if you're using a CITEXT[] column, instead you just get back a string in JS like:
"{First-element,Second-element}"
Normally when you want to instruct the pg package on how to return specific postgres types, you can do something like this:
import {types} from 'pg';
types.setTypeParser(types.builtins.TIMESTAMPTZ, 'text');
types.setTypeParser(types.builtins.TIMESTAMP, 'text');
types.setTypeParser(types.builtins.DATE, 'text');
types.setTypeParser(types.builtins.TIME, 'text');
types.setTypeParser(types.builtins.TIMETZ, 'text');
The types.builtins.* constants have values that are hardcoded OID numbers for the known built-in types in postgres. Those OID numbers are the same across all postgres installations.
However, because CITEXT is an extension, the OIDs for the CITEXT and CITEXT[] types will be different on every server. For example, with the following SQL query:
SELECT typname, oid, typarray FROM pg_type WHERE typname like '%citext%';
On my development server I get:
typname | oid   | typarray
--------+-------+---------
citext  | 17459 | 17464
_citext | 17464 | 0
But on my production server I get:
typname | oid   | typarray
--------+-------+---------
citext  | 18618 | 18623
_citext | 18623 | 0
How can I solve this?
Some hacky options that I really don't want to use:
Find out all the different OID values for all my servers and hard-code them in - very hacky.
Write code for every table/column that manually converts the strings to arrays - also hacky and repetitive.
When the node process initializes, query the server for its OID values and then call types.setTypeParser() with those dynamic values - also not ideal.
How can I solve this for all tables/columns without these hacky solutions?
I don't believe there is any way to do it without querying the DB.
I would probably query for the correct OID before starting the node app, store it in an environment variable, and then initialize the pg types with the value from process.env.
That is also a bit hacky, but at least the hack is mostly encapsulated away from the application code.
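A minimal sketch of that setup, assuming the OID has been looked up ahead of time and exported as an environment variable named CITEXT_ARRAY_OID (the name is illustrative):
const { types } = require('pg');

// CITEXT_ARRAY_OID is assumed to have been fetched before the app starts, e.g.
//   SELECT typarray FROM pg_type WHERE typname = 'citext';
const citextArrayOid = Number(process.env.CITEXT_ARRAY_OID);

if (citextArrayOid) {
  // Reuse pg's existing TEXT[] parser (OID 1009) so CITEXT[] columns
  // come back as arrays of strings instead of "{a,b}" strings.
  types.setTypeParser(citextArrayOid, types.getTypeParser(1009, 'text'));
}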

Passing sets of properties and nodes as a POST statement with KOA-NEO4J or BOLT

I am building a REST API which connects to a NEO4J instance. I am using the koa-neo4j library as the basis (https://github.com/assister-ai/koa-neo4j-starter-kit). I am a beginner at all these technologies, but thanks to some help from this forum I have the basic functionality working. For example, the code below allows me to create a new node with the label "metric" and set the name and dateAdded properties.
URL:
/metric?metricName=Test&dateAdded=2/21/2017
index.js
app.defineAPI({
  method: 'POST',
  route: '/api/v1/imm/metric',
  cypherQueryFile: './src/api/v1/imm/metric/createMetric.cyp'
});
createMetric.cyp"
CREATE (n:metric {
name: $metricName,
dateAdded: $dateAdded
})
RETURN ID(n) AS id
However, I am struggling to see how I can approach more complicated examples. How can I handle situations where I don't know beforehand how many properties will be set when creating a new node, or where I want to create multiple nodes in a single POST? Ideally I would like to be able to pass something like JSON as part of the POST body containing all of the nodes, labels and properties that I want to create. Is something like this possible? I tried using the Cypher query below and passing a JSON string in the POST body, but it didn't work.
UNWIND $props AS properties
CREATE (n:metric)
SET n = properties
RETURN n
Would I be better off switching to the Neo4j REST API instead of the BOLT protocol and the KOA-NEO4J framework? From my research I thought it was better to use BOLT, but I want to have a REST API as the middle layer between my front end and back end, so I am willing to change over if this will be easier in the longer term.
Thanks for the help!
Your Cypher syntax is bad in a couple of ways.
UNWIND only accepts a collection as its argument, not a string.
SET n = properties is only legal if properties is a map, not a string.
This query should work for creating a single node (assuming that $props is a map containing all the properties you want to store with the newly created node):
CREATE (n:metric $props)
RETURN n
If you want to create multiple nodes, then this query (essentially the same as yours) should work (but only if $prop_collection is a collection of maps):
UNWIND $prop_collection AS props
CREATE (n:metric)
SET n = props
RETURN n
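For what it's worth, a sketch of what the multi-node version looks like when called through the official neo4j-driver directly (current driver API; connection details are placeholders), showing the parameter shape UNWIND expects, namely a JS array of plain objects:
const neo4j = require('neo4j-driver');

// Connection details are placeholders
const driver = neo4j.driver('bolt://localhost:7687',
  neo4j.auth.basic('neo4j', 'password'));

async function createMetrics(propCollection) {
  // propCollection must be an array of plain objects (maps), e.g.
  // [{ name: 'Test', dateAdded: '2/21/2017' }]
  const session = driver.session();
  try {
    const result = await session.run(
      'UNWIND $prop_collection AS props CREATE (n:metric) SET n = props RETURN n',
      { prop_collection: propCollection }
    );
    return result.records.map(r => r.get('n').properties);
  } finally {
    await session.close();
  }
}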
I too have faced difficulties when trying to pass complex types as arguments to Neo4j. This has to do with type conversions between JS and Cypher over Bolt, and there is not much one can do except file an issue in the official Neo4j JavaScript driver repo. koa-neo4j uses the official driver under the hood.
One way to go about such scenarios in koa-neo4j is to use JavaScript to manipulate the arguments before they are sent to Cypher:
https://github.com/assister-ai/koa-neo4j#preprocess-lifecycle
It is also possible to further manipulate the results of a Cypher query using the postProcess lifecycle hook:
https://github.com/assister-ai/koa-neo4j#postprocess-lifecycle

node-postgres: how to prepare a statement without executing the query?

I want to create a "prepared statement" in postgres using the node-postgres module. I want to create it without binding it to parameters because the binding will take place in a loop.
In the documentation I read:
query(object config, optional function callback) : Query
If _text_ and _name_ are provided within the config, the query will result in the creation of a prepared statement.
I tried
client.query({"name":"mystatement", "text":"select id from mytable where id=$1"});
but when I try passing only the text and name keys in the config object, I get an exception:
(translated) the message is: binding 0 parameters but the prepared statement expects 1
Is there something I am missing? How do you create/prepare a statement without binding it to specific values, in order to avoid re-preparing the statement in every step of a loop?
I just found an answer on this issue from the author of node-postgres:
With node-postgres the first time you issue a named query it is parsed, bound, and executed all at once. Every subsequent query issued on the same connection with the same name will automatically skip the "parse" step and only rebind and execute the already planned query.
Currently node-postgres does not support a way to create a named, prepared query and not execute the query. This feature is supported within libpq and the client/server protocol (used by the pure javascript bindings), but I've not directly exposed it in the API. I thought it would add complexity to the API without any real benefit.
Since named statements are bound to the client in which they are created, if the client is disconnected and reconnected or a different client is returned from the client pool, the named statement will no longer work (it requires a re-parsing).
You can use pg-prepared for that:
var prep = require('pg-prepared');
// First, prepare the statement without binding parameters
var item = prep('select id from mytable where id=${id}');
// Then execute the query, binding parameters inside the loop
for (const i of [1, 2, 3]) {
  client.query(item({id: i}), function(err, result) { /* ... */ });
}
Update: Reading your question again, here's what I believe you need to do. You need to pass a "values" array as well.
Just to clarify: where you would normally "prepare" your query, just prepare the config object you pass to it, without the values array. Then where you would normally "execute" your query, set the values array in the object and pass it to query(). The first time around, the driver will do the actual prepare for you, and simply do binding and execution for the rest of the iterations.
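To make the flow concrete, here is a minimal sketch using node-postgres's current promise-based API (table and statement names are illustrative):
const { Client } = require('pg');

async function run() {
  const client = new Client(); // connection settings come from the environment
  await client.connect();

  for (const id of [1, 2, 3]) {
    // Same name + text every iteration: parsed once, then only re-bound
    const result = await client.query({
      name: 'fetch-by-id',
      text: 'select id from mytable where id=$1',
      values: [id], // only the bind values change per iteration
    });
    console.log(result.rows);
  }

  await client.end();
}

run().catch(console.error);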
