I have one field that contains comma-separated IDs, and I want to find the rows that contain a selected ID. Here is my code:
.get(function(req, res) {
  knex.select('*')
    .from('exam')
    .whereRaw('? = any(regexp_split_to_array(student_id))', [req.params.id])
    .then(function(rows) {
      //return res.send(rows);
      console.log(rows);
    })
    .catch(function(error) {
      console.log(error);
    });
});
===> While I am using Knex, it gives an error like this:
{ error: function regexp_split_to_array(text) does not exist
name: 'error',
length: 220,
severity: 'ERROR',
code: '42883',
detail: undefined,
hint: 'No function matches the given name and argument types. You might need to add explicit type casts.',
position: '37',
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'parse_func.c',
line: '523',
routine: 'ParseFuncOrColumn'
}
In the student_id column I have IDs like this: 33,34,35,36.
In req.params.id I get only one single ID, e.g. 35.
So I want the rows that include the ID 35, in the same table.
===> So I want only two rows (2, 3), because they include ID = 35.
Assuming you are using a PostgreSQL database (I saw you use phpPgAdmin in the screenshot): you can use the regexp_split_to_array function to convert your string to an array (obviously :), and then perform the search over the resulting array using any.
In SQL terms, it can be written like this:
select '35' = any(regexp_split_to_array('33,34,35,36', E','));
In your query, you can replace the .whereRaw call with
.whereRaw("? = any(regexp_split_to_array(student_id, E','))", [req.params.id])
But keep in mind, this can be a performance-heavy request, as a string-split operation runs for every row. A better way of doing this (assuming your project really needs to keep multiple values in one row) is to store student_id as an array type, add a GIN index on the student_id column, and perform search operations like this:
select * from exam where student_id @> '{35}';
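A minimal sketch of that conversion with knex (the index name and the int[] assumption are mine), e.g. inside an async migration:
// one-off conversion: comma-separated text -> native int[] (assumes every value parses as an integer)
await knex.raw(
  `ALTER TABLE exam ALTER COLUMN student_id TYPE int[]
   USING regexp_split_to_array(student_id, ',')::int[]`
);
// GIN index so the containment operator @> can use an index scan
await knex.raw('CREATE INDEX exam_student_id_gin ON exam USING gin (student_id)');

// later, in the route handler: rows whose array contains the requested id
const rows = await knex.select('*')
  .from('exam')
  .whereRaw('student_id @> ?::int[]', [`{${req.params.id}}`]);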
I'm trying to update a column in the users table. The column type is json and its name is test.
The column holds an object whose default value is, for example,
{a: "text", b: 0}
How do I update, let's say, the object key b without changing the whole column?
The code I'm using is:
knexDb('users').where({
  email: email
})
.update({
  test: { b: 1 }
})
Second solution:
knexDb('users').where({
  email: email
})
.update({
  test: knexDb.raw(`jsonb_set(??, '{b}', ?)`, ['test', 1])
})
The first solution changes the whole column cell, so test ends up as only { b: 1 }.
The second solution doesn't work; it gives an error:
function jsonb_set(json, unknown, unknown) does not exist
The expected result is to update only a certain key's value in the object, without replacing the whole object.
PS
I also want to update an array that consists of objects like the one above, for example:
[{a: "text", b: 0}, {c: "another-text", d: 0}]
If I use the code above in knexjs, it updates the whole array to only {b: 1}.
PS: after searching a lot, I found that to make the above jsonb_set() work, I need to set the column type to jsonb.
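A minimal sketch of that conversion, assuming the existing values are valid JSON:
knexDb.raw('ALTER TABLE users ALTER COLUMN test TYPE jsonb USING test::jsonb');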
But now I'm facing another issue: how to update multiple keys using jsonb_set?
knexDb('users').where({
  email: email
})
.update({
  test: knexDb.raw(`jsonb_set(??, '{b}', ?)`, ['test', 1]),
  test: knexDb.raw(`jsonb_set(??, '{a}', ?)`, ['test', "another-text"]),
})
The first query (key b) now doesn't update; in fact, none of the updates work except the last one (key a). Can someone explain why?
Your issue is that you're overwriting test. What you're passing into update is a JS object (docs), and an object cannot contain two properties with the same key (docs); the last one wins. You'll have to combine everything into a single raw expression as the value of test, for example by nesting the jsonb_set calls:
knexDb('users').where({
  email: email
})
.update({
  // nest the calls: the inner jsonb_set's result is the target of the outer one
  test: knexDb.raw(
    `jsonb_set(jsonb_set(??, '{a}', ?::jsonb), '{b}', ?::jsonb)`,
    ['test', JSON.stringify('another-text'), JSON.stringify(1)]
  )
})
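A note on the bindings, if you go this route: jsonb_set expects its third argument to be jsonb, so plain JS strings and numbers need to be JSON-encoded and cast (hence the JSON.stringify and ::jsonb above); bound as-is, Postgres cannot resolve the jsonb_set overload and you get another "function jsonb_set(...) does not exist" error.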
Probably a better option exists. One that would be much more readable, if you have to do this for several keys, is to read the row, modify the object in JS, and write it back, like the example below. In this example, the column containing the jsonb is called json.
const updateUser = async (email, a, b) => {
  const user = await knexDb('users')
    .where({ email })
    .first();
  user.json.a = a;
  user.json.b = b;
  const updatedUser = await knexDb('users')
    .where({ email })
    .update(user)
    .returning('*');
  return updatedUser;
};
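One trade-off worth noting: this read-modify-write round trip is not atomic, so two concurrent updates to the same row can overwrite each other's changes, whereas the jsonb_set approach performs the merge inside a single UPDATE statement.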
Update/insert a single field in a JSON column:
knex('table')
  .update({
    your_json_col: knex.jsonSet('your_json_col', '$.field', 'new value')
  })
  .where(...)
Update/insert multiple fields
Option 1 (nested)
knex('table')
  .update({
    your_json_col: knex.jsonSet(
      knex.jsonSet('your_json_col', '$.field1', 'val1'),
      '$.field2', 'val2'
    )
  })
  .where(...)
Option 2 (chained)
knex('table')
  .update({
    your_json_col: knex.jsonSet('your_json_col', '$.field1', 'val1')
  })
  .update({
    your_json_col: knex.jsonSet('your_json_col', '$.field2', 'val2')
  })
  .where(...)
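Note: the jsonSet/jsonExtract/jsonInsert/jsonRemove helpers only exist in more recent knex releases; on older versions you would fall back to the raw jsonb_set approach shown earlier.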
I'm working on a web app using React, Node, Express, Massive, and PostgreSQL, and having trouble performing one specific query:
SELECT
COUNT(DISTINCT cu.user_id) AS ucount,
COUNT(DISTINCT p.project_id) AS pcount,
COUNT(DISTINCT t.task_id) AS tcount,
c.clique_id, c.clique_name, c.admin_id, c.created_on
FROM cliques c
FULL OUTER JOIN cliques_users cu ON cu.clique_id = c.clique_id
FULL OUTER JOIN users u ON u.user_id = cu.user_id
FULL OUTER JOIN projects p ON p.clique_id = cu.clique_id
FULL OUTER JOIN tasks t ON t.clique_id = cu.clique_id
WHERE c.clique_id IN (
SELECT cu.clique_id FROM cliques_users cu WHERE cu.user_id = 3
)
GROUP BY c.clique_id;
Note: I'm using 3 in the subquery just to test.
I'm using Postico to test my SQL statements, and this query returns the results I expect. But when the app itself requests the data by hitting an endpoint, the server throws an error:
{ error: function count(integer) does not exist at ...
name: 'error',
length: 225,
severity: 'ERROR',
code: '42883',
detail: undefined,
hint: 'No function matches the given name and argument types. You might need to add explicit type casts.',
position: '55',
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'parse_func.c',
line: '528',
routine: 'ParseFuncOrColumn' }
The callback function that runs when the endpoint is hit looks like this:
(req, res, next) => {
  req.app.get('db')
    .clique.getCliqueSummaryQuery(req.params.user_id)
    .then(response => {
      res.status(200).json(response);
    })
    .catch(err => {
      console.log('getMyCliquesInfo failed: ', err);
      res.status(500).json(err);
    });
}
getCliqueSummaryQuery() runs the query, passing a URL parameter as a variable that replaces the 3 I've hardcoded for testing. The error occurs with both the variable and the hardcoded value. I've copied the query straight from Postico to my SQL file.
Anyone know why it works one way and not the other?
I have a working parameterized query for an insert statement, but I now want to add a 'WHERE NOT EXISTS' clause. The working insert looks like this:
pgClient.query("INSERT INTO social_posts (username, user_image, message, image_url, post_id, post_url, location, network) VALUES ($1,$2,$3,$4,$5,$6,$7,$8)", postArray,
function(err, result) {...}
What I'd like to implement is:
pgClient.query("INSERT INTO social_posts(username, user_image, message, image_url, post_id, post_url, location, network) SELECT ($1,$2,$3,$4,$5,$6,$7,$8) WHERE NOT EXISTS (SELECT 1 FROM social_posts WHERE post_id = $9)", postArray,
function(err, result) {...}
In this case the $9 would actually be equal to postArray[4].
I've tried pushing the value into the array again; that didn't work:
error running query { [error: operator does not exist: character varying = bigint]
name: 'error',
length: 207,
severity: 'ERROR',
code: '42883',
detail: undefined,
hint: 'No operator matches the given name and argument type(s). You might need to add explicit type casts.',
position: '198',
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'parse_oper.c',
line: '722',
routine: 'op_error' }
I tried interpolating the value; that didn't work either:
error running query { [error: there is no parameter $1]
name: 'error',
length: 88,
severity: 'ERROR',
code: '42P02',
detail: undefined,
hint: undefined,
position: '114',
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'parse_expr.c',
line: '823',
routine: 'transformParamRef' }
Does anyone have any ideas? Thanks in advance!
After doing some research, I came across something helpful here. The reason I couldn't get the $9 parameter (or interpolation) to work had to do with an extra set of parentheses around the SELECT items. The correct syntax is:
var postId = postArray[4].toString(); // per joop's comment above
pgClient.query("INSERT INTO social_posts(username, user_image, message, image_url, post_id, post_url, location, network) SELECT $1,$2,$3,$4,$5,$6,$7,$8 WHERE NOT EXISTS (SELECT 1 FROM social_posts WHERE post_id = '" + postId + "')", postArray,
function(err, result) {...}
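With the extra parentheses gone, the $9 placeholder should also work, avoiding string concatenation altogether. A sketch, assuming post_id is a varchar column (as the earlier cast error suggested):
var params = postArray.concat([postArray[4].toString()]); // $9: post_id as a string, matching the varchar column
pgClient.query("INSERT INTO social_posts(username, user_image, message, image_url, post_id, post_url, location, network) SELECT $1,$2,$3,$4,$5,$6,$7,$8 WHERE NOT EXISTS (SELECT 1 FROM social_posts WHERE post_id = $9)", params,
function(err, result) {...}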
Any insight on how to get an auto-increment id working? From my understanding, an id column is added by default; however, because I'm using Redshift, the default "serial" type won't work, as it is not supported.
{ [error: Column "probe.id" has unsupported type "serial".]
name: 'error',
length: 165,
severity: 'ERROR',
code: '0A000',
detail: undefined,
hint: undefined,
position: undefined,
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: '/home/awsrsqa/padb/src/pg/src/backend/parser/parser_analyze.c',
line: '3600',
routine: 'transformColumnDefinition',
model: 'probe' }
No such thing is supported.
You can only get auto-increment behaviour for an integer column:
IDENTITY(seed, step)
Clause that specifies that the column is an IDENTITY column. An IDENTITY column contains unique auto-generated values. These values start with the value specified as seed and increment by the number specified as step. The data type for an IDENTITY column must be either INT or BIGINT.
For a GUID you will have to generate one and insert it yourself.
Example:
CREATE TABLE your_table(
id INT IDENTITY(1, 1)
);
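A sketch of using such a column from node, assuming a pg-style client (the table and column names here are my own illustration):
pgClient.query(
  "CREATE TABLE probe (id INT IDENTITY(1, 1), payload VARCHAR(256))",
  function (err) {
    if (err) throw err;
    // omit id from the insert; Redshift generates it from the IDENTITY definition
    pgClient.query("INSERT INTO probe (payload) VALUES ($1)", ['first row'],
      function (err, result) { if (err) throw err; });
  }
);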
I'm trying to run two parameterised insert queries using node-postgres: the first one specifies the primary key column, the second doesn't.
The second query, even though it doesn't specify the primary key column, fails saying there's a duplicate primary key.
My pg table:
CREATE TABLE teams (
id serial PRIMARY KEY,
created_by int REFERENCES users,
name text,
logo text
);
Code that reproduces this issue:
var pg = require('pg');

var insertWithId = 'INSERT INTO teams(id, name, created_by) VALUES($1, $2, $3) RETURNING id';
var insertWithoutId = 'INSERT INTO teams(name, created_by) VALUES($1, $2) RETURNING id';

pg.connect(process.env.POSTGRES_URI, function (err, client, releaseClient) {
  client.query(insertWithId, [1, 'First Team', 1], function (err, result) {
    releaseClient();
    if (err) {
      throw err;
    }
    console.log('first team created');
  });
});

pg.connect(process.env.POSTGRES_URI, function (err, client, releaseClient) {
  client.query(insertWithoutId, ['Second Team', 1], function (err, result) {
    releaseClient();
    if (err) {
      console.log(err);
    }
  });
});
And output of running this:
first team created
{ [error: duplicate key value violates unique constraint "teams_pkey"]
name: 'error',
length: 173,
severity: 'ERROR',
code: '23505',
detail: 'Key (id)=(1) already exists.',
hint: undefined,
position: undefined,
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: 'public',
table: 'teams',
column: undefined,
dataType: undefined,
constraint: 'teams_pkey',
file: 'nbtinsert.c',
line: '406',
routine: '_bt_check_unique' }
From what I gather reading the node-postgres source, parameterised queries are treated as prepared queries, which get cached if they reuse a name parameter; though from digging around its source, my queries don't appear to have a name property.
Does anyone have any ideas on how this could be avoided?
The first insert supplies a value for id, so the sequence is not advanced; its next value is still 1 after the first insert. The second insert does not supply a value for id, so the sequence's next value (= 1) is used, which is a duplicate. The best solution is to use only the second statement and let the application use the returned id, if needed.
In short: don't interfere with serials.
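Since both statements end in RETURNING id, the generated key is already available in the callback. A sketch, reusing insertWithoutId from the snippet above:
pg.connect(process.env.POSTGRES_URI, function (err, client, releaseClient) {
  client.query(insertWithoutId, ['Second Team', 1], function (err, result) {
    releaseClient();
    if (err) { throw err; }
    // RETURNING id surfaces the generated value here
    console.log('new team id:', result.rows[0].id);
  });
});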
If you need to correct the next value for a sequence, you can use something like the statement below.
SELECT setval('teams_id_seq', (SELECT MAX(id) FROM teams));