How can I enter a JSON value into my PostgreSQL database? - node.js

So, I can insert an email address, password and date using this code in Node:
client.query("INSERT INTO manlalaro (emailaddr,pwd,premiumexpiry) values ('tomatopie#coldmail.com','123absurdcodes', DATE '2009-09-19') ",(err,res)=>{
console.log(err, res)
client.end()
})
But how do I insert a JSON value successfully without getting errors? I have a column playersaved whose data type is JSON.

The best way is to pass the data to be inserted as a separate parameter, so that the library or driver applies the right treatment to each data type.
In most cases it will be something like this:
client.query("INSERT INTO x (a, b, c) VALUES (?, ?, ?)", [1, "text", { "json": "data" }]);
Or this:
client.query("INSERT INTO x (a, b, c) VALUES ($1, $2, $3)", [1, "text", { "json": "data" }]);
The way to know the right thing to do is to read the documentation of your library; if you are using pg (node-postgres), see https://node-postgres.com/.
Note: As @Aedric pointed out, in some cases your object must be previously "stringified" (JSON.stringify()). But node-postgres claims it does this automatically (https://node-postgres.com/features/types#uuid%20+%20json%20/%20jsonb).
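Applied to the original question, a minimal sketch using node-postgres (the playersaved object's contents are made-up placeholder values):

const { Client } = require('pg')

const client = new Client() // connection settings are read from the PG* environment variables

async function insertPlayer() {
  await client.connect()
  // Passing the object as a query parameter lets node-postgres serialize it
  // for the JSON column automatically.
  await client.query(
    'INSERT INTO manlalaro (emailaddr, pwd, premiumexpiry, playersaved) VALUES ($1, $2, $3, $4)',
    ['tomatopie@coldmail.com', '123absurdcodes', '2009-09-19', { level: 3, score: 100 }]
  )
  await client.end()
}

insertPlayer().catch(console.error)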

You can insert JSON data into PostgreSQL by converting it into a string using JSON.stringify(Object):
`insert into tableName (id,json_data) values(1,'{"test":1}')`
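The same insert in parameterized form (a sketch; tableName and json_data echo the example above):

const payload = JSON.stringify({ test: 1 })
client.query('insert into tableName (id, json_data) values ($1, $2)', [1, payload])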


Is it possible for CQL to parse a JSON object to insert data?

From what I've seen so far, it seems impossible with Cassandra. But I thought I'd give it a shot:
How can I select a value of a json property, parsed from a json object string, and use it as part of an update / insert statement in Cassandra?
For example, I'm given the json object:
{
  "id": 123,
  "some_string": "hello there",
  "mytimestamp": "2019-09-02T22:02:24.355Z"
}
And this is the table definition:
CREATE TABLE IF NOT EXISTS myspace.mytable (
    id text,
    data blob,
    PRIMARY KEY (id)
);
Now, the thing to know at this point is that, for reasons of its own, the data field will be set to the JSON string. In other words, there is no 1:1 mapping between the given JSON and the table columns; the data field contains the whole JSON object as a kind of blob value.
... Is it possible to parse the timestamp value of the given json object as part of an insert statement?
Pseudo-code example of what I mean, which obviously doesn't work ($myJson is a placeholder for the JSON object string above):
INSERT INTO myspace.mytable (id, data)
VALUES (123, $myJson)
USING timestamp toTimeStamp($myJson.mytimestamp)
The quick answer is no, it's not possible to do that with CQL.
The norm is to parse the elements of the JSON object within your application, extract the corresponding values, and construct the CQL statement from them.
As a side note, I would discourage using the CQL blob type due to possible performance issues should the blob size exceed 1 MB. If it's JSON, consider storing it as the CQL text type instead. Cheers!
Worth mentioning: CQL can do a limited amount of JSON parsing on its own, albeit not as detailed as what you're asking for here (e.g. USING timestamp).
But something like this works:
> CREATE TABLE myjsontable (
... id TEXT,
... some_string TEXT,
... PRIMARY KEY (id));
> INSERT INTO myjsontable JSON '{"id":"123","some_string":"hello there"}';
> SELECT * FROM myjsontable WHERE id='123';
id | some_string
-----+-------------
123 | hello there
(1 rows)
In your case you'd either have to redesign the table or the JSON payload so that they match. But as Erick and Cédrick have mentioned, the USING timestamp part would have to happen client-side.
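For completeness, a sketch of the client-side approach with the Node.js cassandra-driver (the contact point and data center names are placeholders), computing the USING TIMESTAMP value from the parsed JSON:

const cassandra = require('cassandra-driver')

const client = new cassandra.Client({
  contactPoints: ['127.0.0.1'],
  localDataCenter: 'datacenter1'
})

async function insertWithJsonTimestamp(myJson) {
  const obj = JSON.parse(myJson)
  // USING TIMESTAMP expects microseconds since the epoch, computed client-side.
  const writeTimeMicros = Date.parse(obj.mytimestamp) * 1000
  await client.execute(
    'INSERT INTO myspace.mytable (id, data) VALUES (?, ?) USING TIMESTAMP ?',
    [String(obj.id), Buffer.from(myJson, 'utf8'), writeTimeMicros],
    { prepare: true }
  )
}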
What you described is doable with Cassandra.
Timestamp
To insert a timestamp in a query, it should be formatted as an ISO 8601 string; sample examples can be found in the documentation. In your code, you might have to convert the incoming value to the expected type and format.
Blob
A blob stores binary data, so it cannot be put ad hoc as a string in a CQL query. (You can use the TEXT type instead if you are willing to base64-encode the data.)
When you need to insert binary data, you also need to provide the proper type. For instance, if you are working with JavaScript, you need to provide a Buffer, as described in the driver documentation. Then, when you execute your query, you externalize your parameters:
const sampleId = '123'; // id is a text column in the schema above
const sampleData = Buffer.from('hello world', 'utf8');
const sampleTimeStamp = new Date();
client.execute(
  'INSERT INTO myspace.mytable (id, data) VALUES (?, ?) USING timestamp toTimeStamp(?)',
  [sampleId, sampleData, sampleTimeStamp]
);

IN clause in Node.js Snowflake not getting result set

I am working on Snowflake with Node.js, using the snowflake-sdk.
My raw query is:
select * from xyz where x in ('1','2','3')
For this, in Node.js, I had written the query as connection.execute({ sqlText: "select * from xyz where x in (:1)", binds: [] }).
What should I pass in binds, and in which format? I can't figure it out.
Please review the node.js driver documentation which provides a sample for the bind operations : https://docs.snowflake.com/en/user-guide/nodejs-driver-use.html#binding-statement-parameters
Note: I have not compiled or tested the code below, but it's based on a technique we've used.
It isn't possible to directly bind the array of values, but the following works:
var params = ['1', '2', '3'];
var statement = `select * from xyz where id in (${params.map(x => '?').join(', ')})`;
// statement will now be:
// select * from xyz where id in (?, ?, ?)
connection.execute({ sqlText: statement, binds: params });
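To actually receive the result set, a minimal sketch using the snowflake-sdk complete callback (table and column names echo the question):

var params = ['1', '2', '3'];

connection.execute({
  sqlText: `select * from xyz where x in (${params.map(() => '?').join(', ')})`,
  binds: params,
  complete: function (err, stmt, rows) {
    if (err) {
      console.error('Failed to execute statement:', err.message);
    } else {
      console.log('Rows:', rows); // the result set, as an array of row objects
    }
  }
});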

What are the returned data types from a Knex select() statement?

Hi everyone,
I am currently using Knex.js for a project, and a question arose when I made a knex('table').select() call.
What are the returned types from the query? In particular, if I have a datetime column in my table, what is the return value for that field?
I believe the query will return a value of type string for this column. But is that the case for every database (I use SQLite3)? Is it possible for the query to return a Date value?
EXAMPLE:
the user table has this schema :
knex.schema.createTable('user', function (table) {
  table.increments('id');
  table.string('username', 256).notNullable().unique();
  table.timestamps(true, true);
})
Since I use SQLite3, table.timestamps(true, true) produces two datetime columns: created_at & modified_at.
When I make the query knex('user').select(), it returns an array of objects with the attributes id, username, created_at and modified_at:
id is of type number
username is of type string
What will be the types of created_at & modified_at?
Will they always be of type string? If I use another database such as PostgreSQL, these columns will have the timestamptz SQL type. Will Knex's return type also be a string there?
This is not in fact something that Knex is responsible for, but rather the underlying database library. So if you're using SQLite, it would be sqlite3. If you're using Postgres, pg is responsible and you could find more documentation here. Broadly, most libraries take the approach that types which have a direct JavaScript equivalent (booleans, strings, null, integers, etc.) are returned as those types; anything else is converted to a string.
Knex's job is to construct the SQL that the other libraries use to talk to the database, and receives the response that they return.
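As an illustration of the driver being in charge, pg lets you override how a given PostgreSQL type is parsed (a sketch; 1114 and 1184 are the type OIDs for timestamp and timestamptz):

const types = require('pg').types;

// Return raw strings for timestamp columns instead of the driver's default parsing.
types.setTypeParser(1114, (val) => val); // timestamp without time zone
types.setTypeParser(1184, (val) => val); // timestamp with time zone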
As far as I can tell, it will be an object of strings or numbers.

pg-promise reads integers as string, even after being parsed

I'm running into the following error:
I have a POST form that inserts some values into a Postgres database using pg-promise. Those values are converted to integers on the server, but when I try to insert them into Postgres, it says:
invalid input syntax for integer: ""
What I'm trying to do:
Convert every field that is left blank to NULL and insert it into the database. Here's the code:
var tier_1 = parseInt(req.body.tier_1);
if (isNaN(tier_1)) {
  console.log("Not a Number");
  tier_1 = null;
}
and the query:
"insert into products(tier_1) values (${tier_1})"
but Postgres still reads tier_1 as a string.
Any ideas?
Thank you!
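A sketch of the blank-to-null conversion described above; a common cause of this error is passing the raw req.body to pg-promise instead of the parsed values, so make sure the parsed object is what the query receives (toIntOrNull is a hypothetical helper):

// Hypothetical helper: convert a form field to an integer, or null when blank/invalid.
function toIntOrNull(value) {
  const n = parseInt(value, 10);
  return Number.isNaN(n) ? null : n;
}

const values = { tier_1: toIntOrNull(req.body.tier_1) };

// Pass the parsed values object (not req.body) to the query:
db.none('insert into products(tier_1) values (${tier_1})', values);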

Pass column name as argument - Postgres and Node JS

I have a query (UPDATE statement) wrapped in a function, and I will need to perform the same statement on multiple columns during the course of my script:
async function update_percentage_value(value, id){
  (async () => {
    const client = await pool.connect();
    try {
      const res = await client.query('UPDATE fixtures SET column_1_percentage = ($1) WHERE id = ($2) RETURNING *', [value, id]);
    } finally {
      client.release();
    }
  })().catch(e => console.log(e.stack))
}
I then call this function:
update_percentage_value(50, 2);
I have many columns to update at various points in my script, each at the appropriate time. I would like to be able to call just the one function, passing the column name, value and id.
My table looks like below
CREATE TABLE fixtures (
  ID SERIAL PRIMARY KEY,
  home_team VARCHAR,
  away_team VARCHAR,
  column_1_percentage INTEGER,
  column_2_percentage INTEGER,
  column_3_percentage INTEGER,
  column_4_percentage INTEGER
);
Is it at all possible to do this?
I'm going to post the solution that was advised by Sehrope Sarkuni via the node-postgres GitHub repo. This helped me a lot and works for what I require:
No, column names are identifiers, and they can't be specified as parameters. They have to be included in the text of the SQL command.
It is possible but you have to build the SQL text with the column names. If you're going to dynamically build SQL you should make sure to escape the components using something like pg-format or use an ORM that handles this type of thing.
So something like:
const format = require('pg-format');

async function updateFixtures(id, column, value) {
  const sql = format('UPDATE fixtures SET %I = $1 WHERE id = $2', column);
  await pool.query(sql, [value, id]);
}
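For example, inside an async function (the values are hypothetical):
await updateFixtures(2, 'column_1_percentage', 50);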
Also, if you're doing multiple updates to the same row back-to-back, you're likely better off with a single UPDATE statement that modifies all the columns rather than separate statements, as those would both be slower and generate more WAL on the server.
To get the column names of the table, you can query the information_schema.columns view, which stores the details of the column structure of your table. This can help you frame a dynamic query for updating a specific column based on a specific result.
You can get the column names of the table with the help of following query:
select column_name from information_schema.columns where table_name='fixtures' and table_schema='public';
The above query would give you the list of columns in the table.
Now, to update each one for a specific purpose, you can store the result set of column names in a variable and pass it to the function that performs the required action, as sketched below.
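A sketch of that approach (to run inside an async function), reusing the updateFixtures() helper above; the _percentage filter and the values are placeholders for illustration:

const { rows } = await pool.query(
  "select column_name from information_schema.columns where table_name = 'fixtures' and table_schema = 'public'"
);

// Update every percentage column of fixture 2 with a placeholder value.
for (const { column_name } of rows) {
  if (column_name.endsWith('_percentage')) {
    await updateFixtures(2, column_name, 50);
  }
}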
