How do I use pg-format (or any other means) to build a query that inserts multiple rows?
pg and pg-format are being used in the project.
rows = [
  [200, "http://localhost:3000/product", null, [{url: "www.someurl.com", name: "Test"}, {url: "www.someurl1.com", name: "Test1"}]],
  [400, "http://localhost:3000/user", null, [{url: "www.someurl3.com", name: "1Test"}, {url: "www.someurl2.com", name: "1Test1"}]]
]
columns = ['code', 'url', 'additional', 'response']
const query1 = format(`INSERT INTO ${table_name} (${columns.join(', ')}) VALUES %L returning id`, rows);
query1 is getting formatted to
INSERT INTO testtable (code, url, additional, response) VALUES ('200','http://localhost:3000/product',NULL, ('{"url":"www.someurl.com","name": "Test"}}}'::jsonb, '{"url":"www.someurl.com","name":"customer"}}'::jsonb),'200','http://localhost:3000/user',NULL,'{"id":"61e541b9700bb8c4cbe008b8","status":"queued"}'::jsonb) returning id
The values are getting mangled. Two value tuples should have been created:
(code, url, additional, response), (code, url, additional, response)
But instead the grouping came out as:
(code, url, additional), (response, code, url, additional)
Not sure what went wrong.
You can write your rows like this:
rows = [
[200,"http://localhost:3000/product", null, `${JSON.stringify([{url:"www.someurl.com", name: "Test"},{url:"www.someurl1.com", name: "Test1"}])}::jsonb`],
[400,"http://localhost:3000/user", null, `${JSON.stringify([{url:"www.someurl3.com", name: "1Test"},{url:"www.someurl2.com", name: "1Test1"}])}::jsonb`]
]
This explicitly tells pg-format that they're jsonb values.
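For completeness, the full format call would then look something like this (a sketch; %I and %s are pg-format's identifier and plain-string placeholders, reusing the variables from the question):
const format = require('pg-format');
const query1 = format(
  'INSERT INTO %I (%s) VALUES %L returning id',
  table_name,
  columns.join(', '),
  rows
);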
A single row can be inserted like this:
client.query("insert into tableName (name, email) values ($1, $2) ", ['john', 'john#gmail.com'], callBack)
This approach automatically comments out any special characters.
How do i insert multiple rows at once?
I need to implement this:
"insert into tableName (name, email) values ('john', 'john#gmail.com'), ('jane', 'jane#gmail.com')"
I can just use js string operators to compile such rows manually, but then i need to add special characters escape somehow.
Use pg-format like below.
var format = require('pg-format');
var values = [
  [7, 'john22', 'john22@gmail.com', '9999999922'],
  [6, 'testvk', 'testvk@gmail.com', '88888888888']
];
client.query(format('INSERT INTO users (id, name, email, phone) VALUES %L', values), [], (err, result) => {
  console.log(err);
  console.log(result);
});
Another way, using PostgreSQL JSON functions:
client.query(
  'INSERT INTO table (columns) ' +
  'SELECT m.* FROM json_populate_recordset(null::your_custom_type, $1) AS m',
  [JSON.stringify(your_json_object_array)],
  function (err, result) {
    if (err) {
      console.log(err);
    } else {
      console.log(result);
    }
  }
);
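For a more concrete sketch, assuming a users table (every PostgreSQL table doubles as a composite type, so null::users casts to the table's own row type):
const users = [
  { name: 'john', email: 'john@gmail.com' },
  { name: 'jane', email: 'jane@gmail.com' }
];
// The JSON keys are matched against the columns of the users row type.
client.query(
  'INSERT INTO users (name, email) ' +
  'SELECT m.name, m.email FROM json_populate_recordset(null::users, $1) AS m',
  [JSON.stringify(users)],
  function (err, result) {
    if (err) console.log(err);
    else console.log('inserted ' + result.rowCount + ' rows');
  }
);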
Following the article Performance Boost from the pg-promise library, and its suggested approach:
// Concatenates an array of objects or arrays of values, according to the template,
// to use with insert queries. Can be used either as a class type or as a function.
//
// template = formatting template string
// data = array of either objects or arrays of values
function Inserts(template, data) {
  if (!(this instanceof Inserts)) {
    return new Inserts(template, data);
  }
  this.rawType = true;
  this.toPostgres = function () {
    return data.map(d => '(' + pgp.as.format(template, d) + ')').join(',');
  };
}
An example of using it, exactly as in your case:
var users = [['John', 23], ['Mike', 30], ['David', 18]];
db.none('INSERT INTO Users(name, age) VALUES $1', Inserts('$1, $2', users))
  .then(data => {
    // OK, all records have been inserted
  })
  .catch(error => {
    // Error, no records inserted
  });
And it will work with an array of objects as well:
var users = [{name: 'John', age: 23}, {name: 'Mike', age: 30}, {name: 'David', age: 18}];
db.none('INSERT INTO Users(name, age) VALUES $1', Inserts('${name}, ${age}', users))
  .then(data => {
    // OK, all records have been inserted
  })
  .catch(error => {
    // Error, no records inserted
  });
UPDATE-1
For a high-performance approach via a single INSERT query see Multi-row insert with pg-promise.
UPDATE-2
The information here is quite old now; see the latest syntax for Custom Type Formatting. What used to be _rawDBType is now rawType, and formatDBType was renamed to toPostgres.
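Under the current API, the multi-row insert mentioned in UPDATE-1 boils down to the helpers namespace. A minimal sketch (the connection string is a placeholder):
const pgp = require('pg-promise')();
const db = pgp('postgres://user:pass@localhost:5432/db'); // placeholder

const cs = new pgp.helpers.ColumnSet(['name', 'age'], {table: 'users'});
const users = [{name: 'John', age: 23}, {name: 'Mike', age: 30}, {name: 'David', age: 18}];

// helpers.insert generates one multi-row INSERT with all values escaped
db.none(pgp.helpers.insert(users, cs))
  .then(() => {
    // OK, all records have been inserted
  })
  .catch(error => {
    // Error, no records inserted
  });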
You are going to have to generate the query dynamically. Although possible, this is risky and could easily lead to SQL injection vulnerabilities if you do it wrong. It's also easy to end up with off-by-one errors between the index of your parameters in the query and the parameters you're passing in.
That being said, here is an example of how you could write this, assuming you have an array of users that looks like {name: string, email: string}:
client.query(
  // node-postgres uses $1-style placeholders, so number them per row
  `INSERT INTO table_name (name, email) VALUES ${users
    .map((_, i) => `($${2 * i + 1}, $${2 * i + 2})`)
    .join(',')}`,
  users.reduce((params, u) => params.concat([u.name, u.email]), []),
  callBack,
)
An alternative approach is to use a library like #databases/pg (which I wrote):
await db.query(sql`
INSERT INTO table_name (name, email)
VALUES ${sql.join(users.map(u => sql`(${u.name}, ${u.email})`), ',')}
`)
#databases requires the query to be tagged with sql and uses that to ensure any user data you pass is always automatically escaped. This also lets you write the parameters inline, which I think makes the code much more readable.
Using the npm module postgres (porsager/postgres), which has tagged template strings at its core:
https://github.com/porsager/postgres#multiple-inserts-in-one-query
const users = [{
  name: 'Murray',
  age: 68,
  garbage: 'ignore'
}, {
  name: 'Walter',
  age: 80,
  garbage: 'ignore'
}]
sql`insert into users ${ sql(users, 'name', 'age') }`
// Is translated to:
insert into users ("name", "age") values ($1, $2), ($3, $4)
// Here you can also omit column names which will use all object keys as columns
sql`insert into users ${ sql(users) }`
// Which results in:
insert into users ("name", "age", "garbage") values ($1, $2, $3), ($4, $5, $6)
Just thought I'd post this since it's brand new out of beta, and I've found it to have a better philosophy as a SQL library. IMHO it would be preferable to the other Postgres/Node libraries posted in the other answers.
I know I'm late to the party, but what worked for me was a simple map. I hope this helps someone looking for the same thing.
let sampleQuery = array.map(myRow =>
  // NB: the values are interpolated directly into the SQL string,
  // so no escaping is applied; only use this with trusted input.
  `('${myRow.column_a}','${myRow.column_b}')`
)
let res = await pool.query(`INSERT INTO public.table(column_a, column_b) VALUES ${sampleQuery}`)
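A parameterised variant of the same map idea might look like this (a sketch, reusing the hypothetical table and columns above), so the driver escapes the values:
// Build ($1, $2), ($3, $4), ... and a flat parameter array.
const placeholders = array
  .map((_, i) => `($${2 * i + 1}, $${2 * i + 2})`)
  .join(', ');
const params = array.flatMap(myRow => [myRow.column_a, myRow.column_b]);
let res = await pool.query(
  `INSERT INTO public.table(column_a, column_b) VALUES ${placeholders}`,
  params
);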
client.query("insert into tableName (name, email) values ($1, $2),($3, $4) ", ['john', 'john#gmail.com','john', 'john#gmail.com'], callBack)
doesn't help?
Furthermore, you can manually generate a string for the query:
"insert into tableName (name, email) values ('" + var1 + "','" + var2 + "'),('" + var3 + "','" + var4 + "')"
If you read https://github.com/brianc/node-postgres/issues/530, you can see the same implementation.
I'm having trouble with Node & Knex.js.
I'm trying to build a mini blog with posts, plus functionality to add multiple tags to a post.
I have a Post model with the following properties:
id SERIAL PRIMARY KEY NOT NULL,
name TEXT,
Second, I have a Tags model used for storing tags:
id SERIAL PRIMARY KEY NOT NULL,
name TEXT
And I have a many-to-many table, post_tags, that references posts & tags:
id SERIAL PRIMARY KEY NOT NULL,
post_id INTEGER NOT NULL REFERENCES posts ON DELETE CASCADE,
tag_id INTEGER NOT NULL REFERENCES tags ON DELETE CASCADE
I have managed to insert tags and create a post with tags,
but when I want to fetch the post data with the tags attached to that post, I'm having trouble.
Here is the problem:
const data = await knex.select('posts.name as postName', 'tags.name as tagName')
  .from('posts')
  .leftJoin('post_tags', 'posts.id', 'post_tags.post_id')
  .leftJoin('tags', 'tags.id', 'post_tags.tag_id')
  .where('posts.id', id)
The query above returns this result:
[
{
postName: 'Post 1',
tagName: 'Youtube',
},
{
postName: 'Post 1',
tagName: 'Funny',
}
]
But I want the result to be formatted & returned like this:
{
postName: 'Post 1',
tagName: ['Youtube', 'Funny'],
}
Is that even possible with a query, or do I have to format the data manually?
One way of doing this is to use some kind of aggregate function. If you're using PostgreSQL:
const data = await knex.select('posts.name as postName', knex.raw('ARRAY_AGG (tags.name) tags'))
.from('posts')
.innerJoin('post_tags', 'posts.id', 'post_tags.post_id')
.innerJoin('tags', 'tags.id', 'post_tags.tag_id')
.where('posts.id', id)
.groupBy("postName")
.orderBy("postName")
.first();
->
{ postName: 'post1', tags: [ 'tag1', 'tag2', 'tag3' ] }
For MySQL:
const data = await knex.select('posts.name as postName', knex.raw('GROUP_CONCAT (tags.name) as tags'))
.from('posts')
.innerJoin('post_tags', 'posts.id', 'post_tags.post_id')
.innerJoin('tags', 'tags.id', 'post_tags.tag_id')
.where('posts.id', id)
.groupBy("postName")
.orderBy("postName")
.first()
.then(res => Object.assign(res, { tags: res.tags.split(',')}))
There are no arrays in MySQL, and GROUP_CONCAT will just concatenate all tags into a string, so we need to split them manually.
->
RowDataPacket { postName: 'post1', tags: [ 'tag1', 'tag2', 'tag3' ] }
The result is correct, as that is how SQL works: it returns rows of data. SQL has no concept of returning anything other than a table (think CSV data or an Excel spreadsheet).
There are some interesting things you can do in SQL to concatenate the tags into strings, but that is not really what you want. Either way, you will need to add a post-processing step.
With your current query you can simply do something like this:
function formatter (result) {
let set = {};
result.forEach(row => {
if (set[row.postName] === undefined) {
set[row.postName] = row;
set[row.postName].tagName = [set[row.postName].tagName];
}
else {
set[row.postName].tagName.push(row.tagName);
}
});
return Object.values(set);
}
// ...
query.then(formatter);
This shouldn't be slow as you're only looping through the results once.
I'm trying to update a column in the users table. The column type is json, and the column name is test.
The column holds an object whose default value is, for example,
{a: "text", b: 0}
How do I update, say, the object key b without changing the whole column?
The code I'm using is:
knexDb('users').where({
email: email
})
.update({
test: { b: 1 }
})
Second solution:
knexDb('users').where({
email: email
})
.update({
test: knexDb.raw(`jsonb_set(??, '{b}', ?)`, ['test', 1])
})
The first solution replaces the whole column cell, so test becomes just { b: 1 }.
The second solution doesn't work; it gives the error:
function jsonb_set(json, unknown, unknown) does not exist
The expected result is to update only a certain key's value in the object without changing the whole object.
PS:
I also want to update an array that consists of objects like the one above, for example:
[{a: "text", b: 0}, {c: "another-text", d: 0}]
If I use the code above in knex, it'll replace the whole array with just {b: 1}.
PS: After searching a lot, I found that the column type has to be jsonb for the above jsonb_set() to work; a conversion sketch follows.
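A sketch of that conversion (assuming the column was plain json before; the USING clause casts the existing values):
await knexDb.schema.raw(
  'ALTER TABLE users ALTER COLUMN test TYPE jsonb USING test::jsonb'
);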
But now I'm facing another issue: how do I update multiple keys using jsonb_set?
knexDb('users').where({
email: email
})
.update({
test: knexDb.raw(`jsonb_set(??, '{b}', ?)`, ['test', 1]),
test: knexDb.raw(`jsonb_set(??, '{a}', ?)`, ['test', "another-text"]),
})
The first query (key b) is not updating; in fact, none of the updates work except the last one (key a). Can someone explain why?
Your issue is that you're overwriting test. What you're passing into update is a JS object (docs), and an object cannot have two entries with the same key (docs), so the second test silently replaces the first. You'll have to do something like the following, nesting the jsonb_set calls into a single raw expression as the value of test.
knexDb('users').where({
  email: email
})
.update({
  // nest the calls: the inner jsonb_set's result becomes the outer one's target
  test: knexDb.raw(
    `jsonb_set(jsonb_set(??, '{a}', ?::jsonb), '{b}', ?::jsonb)`,
    ['test', JSON.stringify('another-text'), JSON.stringify(1)]
  )
})
A probably better option, and one that is much more readable if you have to do this for several keys, is something like what I have included below. In this example, the column containing the jsonb is called json.
const updateUser = async (email, a, b) => {
const user = await knexDb('users')
.where({ email })
.first();
user.json.a = a;
user.json.b = b;
const updatedUser = await knexDb('users')
.where({ email })
.update(user)
.returning('*');
return updatedUser;
}
Update/insert a single field in a JSON column:
knex('table')
.update( {
your_json_col: knex.jsonSet('your_json_col','$.field', 'new value')
})
.where(...)
Update/insert multiple fields
Option 1 (nested)
knex('table')
  .update({
    your_json_col: knex.jsonSet(
      knex.jsonSet('your_json_col', '$.field1', 'val1'),
      '$.field2', 'val2')
  })
  .where(...)
Option 2 (chained)
knex('table')
.update( {
your_json_col: knex.jsonSet('your_json_col','$.field1', 'val1')
})
.update( {
your_json_col: knex.jsonSet('your_json_col','$.field2', 'val2')
})
.where(...)
I am trying to create a table for my website and for some reason it is only showing the first row of data.
This is how I am formatting the columns of the data:
const { items } = this.props.item;
// console.log({ items });
// react-bootstrap-table-next
const columns = [{
dataField: 'team',
text: 'Team',
sort: true,
formatter: (cellContent, row, rowIndex) => (
Object.values(row.team)[rowIndex]
)
}, {
dataField: 'current_Rank',
text: 'Current Rank',
sort: true,
formatter: (cellContent, row, rowIndex) => (
Object.values(row.current_Rank)[rowIndex]
)
}, {
dataField: 'new_Rank',
text: '321 Rank',
sort: true,
formatter: (cellContent, row, rowIndex) => (
Object.values(row.new_Rank)[rowIndex]
)
}];
This is how I return the component so that the table renders:
return (
<BootstrapTable
keyField="team"
data={items}
columns={columns}
striped
hover />
)
}
}
The data:
Picture from the console
Live site: https://nhl-321-pointsystem.herokuapp.com/
I looked up your network response for the /api/items API call and found that the data contains only one item. This is one reason you're seeing a single row when the table is rendered.
Please note the other reason for the issue: the react-bootstrap-table-next data prop accepts a single array of row objects, not an array containing a single object.
You should rearrange your data so that the key 'team' is present for every item in the array, and the rest of the column values (e.g. current_Rank) are available for each item as well.
Something like the reformat function I created in the sandbox linked below.
Plus point: after you apply the reformat function, you won't need a formatter for each column, unlike in your previous solution.
An alternative, but recommended, solution would be to send the formatted response from the API endpoint itself, instead of re-parsing and creating a new object to fit the needs of the UI.
Sandbox link - https://codesandbox.io/embed/32vl4x4oj6
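For illustration, a rough guess at what such a reformat step could look like, inferred from the formatter code in the question (the exact shape of the API data is an assumption):
// Turn one item of index-keyed objects, e.g. { team: {0: 'A', 1: 'B'}, ... },
// into an array of flat row objects, one per index.
function reformat(item) {
  return Object.keys(item.team).map(key => ({
    team: item.team[key],
    current_Rank: item.current_Rank[key],
    new_Rank: item.new_Rank[key]
  }));
}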
I'm trying to run two parameterised insert queries using node-postgres: the first one specifies the primary key column, the second doesn't.
The second query, even though doesn't specify the primary key column, fails saying there's a duplicate primary key.
My pg table:
CREATE TABLE teams (
id serial PRIMARY KEY,
created_by int REFERENCES users,
name text,
logo text
);
Code that reproduces this issue:
var pg = require('pg');
var insertWithId = 'INSERT INTO teams(id, name, created_by) VALUES($1, $2, $3) RETURNING id';
var insertWithoutId = 'INSERT INTO teams(name, created_by) VALUES($1, $2) RETURNING id';
pg.connect(process.env.POSTGRES_URI, function (err, client, releaseClient) {
client.query(insertWithId, [1, 'First Team', 1], function (err, result) {
releaseClient();
if (err) {
throw err;
}
console.log('first team created');
});
});
pg.connect(process.env.POSTGRES_URI, function (err, client, releaseClient) {
client.query(insertWithoutId, ['Second Team', 1], function (err, result) {
releaseClient();
if (err) {
console.log(err);
}
});
});
And output of running this:
first team created
{ [error: duplicate key value violates unique constraint "teams_pkey"]
name: 'error',
length: 173,
severity: 'ERROR',
code: '23505',
detail: 'Key (id)=(1) already exists.',
hint: undefined,
position: undefined,
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: 'public',
table: 'teams',
column: undefined,
dataType: undefined,
constraint: 'teams_pkey',
file: 'nbtinsert.c',
line: '406',
routine: '_bt_check_unique' }
From what I gather reading the node-postgres source, parameterised queries are treated as prepared queries, which get cached if they reuse a name parameter; though from digging around its source, it doesn't seem to think that my queries have a name property.
Does anyone have any ideas on how this could be avoided?
The first insert supplies a value for id, so the sequence is not incremented; it is still at 1 after the first insert. The second insert does not supply a value for id, so the sequence value (= 1) is used, which is a duplicate. The best solution is to use only the second statement and let the application use the returned id, if needed.
In short: don't interfere with serials.
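In code, that would mean always using the second statement and reading the generated id back:
// Let the sequence assign the id; RETURNING id hands it back to the application.
client.query(
  'INSERT INTO teams(name, created_by) VALUES($1, $2) RETURNING id',
  ['First Team', 1],
  function (err, result) {
    if (err) throw err;
    console.log('created team with id ' + result.rows[0].id);
  }
);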
If you need to correct the next value for a sequence, you can use something like the below statement.
SELECT setval('teams_id_seq', (SELECT MAX(id) FROM teams));