I have one table in Postgres and the table structure is:
ID
Name
Details
Context
CreatedDate
where Context is a JSONB field and CreatedDate is a timestamp.
I am saving data in Context this way: {"trade": {"id": 102}, "trader": {"id": 100}}
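For reference, a Sequelize model for a table like this might look roughly like the sketch below (just an illustration; the model name Record and the attribute mapping are assumptions).
// Illustrative sketch only; model name and attribute names are assumed
const { Sequelize, DataTypes } = require('sequelize');

const sequelize = new Sequelize(process.env.DATABASE_URL, { dialect: 'postgres' });

const Record = sequelize.define('Record', {
  id: { type: DataTypes.INTEGER, primaryKey: true, autoIncrement: true },
  name: DataTypes.STRING,
  details: DataTypes.TEXT,
  context: DataTypes.JSONB,   // e.g. {"trade": {"id": 102}, "trader": {"id": 100}}
  createdDate: DataTypes.DATE,
});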
I am trying to select records from Context based on the trader id, and this is my query:
this.findAll({
  where: {
    context: {
      $contains: {
        trader: [{ id: '100' }]
      }
    }
  }
})
I tried nested keys as well, but no results are yielded:
this.findAll({
  where: {
    'context.trader.id': {
      $eq: '100'
    }
  }
})
Can you please suggest how I can select the records based on my structure?
Following on from that, how can I get records based on two conditions, for example adding CreatedDate to this where clause?
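For what it's worth, a sketch of how this might be written with Sequelize's Op.contains operator, assuming the Record model sketched above, and noting that the trader id is stored as a number rather than the string '100'; the createdDate attribute name and the date value below are placeholders:
const { Op } = require('sequelize');

// Sketch: rows whose context JSONB contains a trader with id 100
const byTrader = await Record.findAll({
  where: {
    context: { [Op.contains]: { trader: { id: 100 } } }
  }
});

// Sketch: the same JSONB condition combined with a CreatedDate condition
const byTraderAndDate = await Record.findAll({
  where: {
    context: { [Op.contains]: { trader: { id: 100 } } },
    createdDate: { [Op.gte]: new Date('2022-01-01') }
  }
});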
Related
Is there a way to use include (which actually performs a table join) with another model, where the key is inside a JSONB field? For example:
Item { id: INTEGER, someJsonbField: JSONB }
(item example: { id: 1, someJsonbField: { storeId: 2 } })
Then, for getting all of the items of store with id 2, you write something like this:
Item.findAll({ include: { model: 'Store', key: 'someJsonbField.storeId', ... } })
Of course, in a real-world scenario storeId should be inside Item directly, but just for the purpose of this question: how could it be done?
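One hedged possibility, if a real association is not strictly required, is to fall back to a raw query and join on the value extracted from the JSONB column (the items and stores table names below are assumptions):
// Sketch: raw join from items to stores via the storeId stored inside someJsonbField
const [rows] = await sequelize.query(
  `SELECT i.*, s.*
     FROM items i
     JOIN stores s ON s.id = (i."someJsonbField" ->> 'storeId')::int
    WHERE s.id = :storeId`,
  { replacements: { storeId: 2 } }
);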
On a daily basis, I'm pushing data (time series) to Elasticsearch. I created an index pattern, and my indices have the name myindex_*, where * is today's date (an index pattern has been set up). Thus after a week, I have: myindex_2022-06-20, myindex_2022-06-21, ... myindex_2022-06-27.
Let's assume my index stores products' prices. Thus inside each myindex_* I have documents like these:
myindex_2022-06-26 includes many product prices, like this:
{
  "reference_code": "123456789",
  "price": 10.00
},
...
myindex_2022-06-27:
{
  "reference_code": "123456789",
  "price": 12.00
},
I'm using this query to get the reference code and the corresponding prices, and it works great:
const data = await elasticClient.search({
  index: 'myindex_2022-06-27',
  body: {
    query: {
      match: {
        "reference_code": "123456789"
      }
    }
  }
});
But I would like a query such that, if there is no data in the index for 2022-06-27, it checks the previous index, 2022-06-26, and so on (up to e.g. 10 indices back).
It seems to do this when I replace myindex_2022-06-27 with myindex_*, though I'm not sure whether that's the default behaviour.
The issue is that when I query this way, I do get prices from other indices, but it seems to use the oldest one. I would like to get the newest one instead.
How should I proceed?
If you query with an index wildcard, it returns a list of documents, where every document includes meta fields such as _index and _id.
You can sort by _index to make Elasticsearch return the latest document at position [0] in your list:
const data = await elasticClient.search({
  index: 'myindex_2022-*',
  body: {
    query: {
      match: {
        "reference_code": "123456789"
      }
    },
    sort: { "_index": "desc" }
  }
});
Here is the raw query I'm trying to convert to Sequelize, where response is a JSONB column of the posts table that may or may not have an error attribute.
SELECT post_id,
COUNT(response -> 'error' IS NOT NULL) as "response_errors"
FROM posts
WHERE post_id in (<list of post ids>)
GROUP BY post_id
I expect this query to return an array where each entry has a post_id and a response_errors attribute counting, for that post_id, the number of rows in posts whose response contains an error.
Where I'm having trouble in my findAll options definition is the attributes array. I'm not sure how to implement the aggregation on the nested value of a jsonb column.
{
  where: {
    postId: postIds,
  },
  attributes: [
    'postId',
    [Sequelize.fn('COUNT', { response: { error: { $ne: null } } }), 'responseErrors'],
  ],
  group: ['postId'],
  order: [Sequelize.col('post_id')],
  raw: true,
}
Any pointers would be appreciated. Thanks!
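One possible direction, sketched under the assumption that the model is named Post and that only rows where response -> 'error' is present should be counted, is to put the JSONB expression inside a Sequelize.literal and pass that to COUNT:
const { Sequelize, Op } = require('sequelize');

// Sketch: per post_id, count the rows whose response JSONB has an "error" key
const rows = await Post.findAll({
  where: { postId: { [Op.in]: postIds } },
  attributes: [
    'postId',
    [Sequelize.fn('COUNT', Sequelize.literal("response -> 'error'")), 'responseErrors'],
  ],
  group: ['postId'],
  order: [Sequelize.col('post_id')],
  raw: true,
});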
I'm having trouble with Node & Knex.js.
I'm trying to build a mini blog with posts, and I'm adding functionality to attach multiple tags to a post.
I have a Post model with the following properties:
id SERIAL PRIMARY KEY NOT NULL,
name TEXT,
Second, I have a Tags model that is used for storing tags:
id SERIAL PRIMARY KEY NOT NULL,
name TEXT
And I have a many-to-many table, post_tags, that references posts & tags:
id SERIAL PRIMARY KEY NOT NULL,
post_id INTEGER NOT NULL REFERENCES posts ON DELETE CASCADE,
tag_id INTEGER NOT NULL REFERENCES tags ON DELETE CASCADE
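For context, a rough knex migration for these three tables might look like the sketch below (the migration wrapper is assumed; only the columns above come from my schema).
// Rough sketch of the three tables above as a knex migration
exports.up = async function (knex) {
  await knex.schema.createTable('posts', (t) => {
    t.increments('id');
    t.text('name');
  });
  await knex.schema.createTable('tags', (t) => {
    t.increments('id');
    t.text('name');
  });
  await knex.schema.createTable('post_tags', (t) => {
    t.increments('id');
    t.integer('post_id').notNullable().references('posts.id').onDelete('CASCADE');
    t.integer('tag_id').notNullable().references('tags.id').onDelete('CASCADE');
  });
};

exports.down = async function (knex) {
  await knex.schema.dropTableIfExists('post_tags');
  await knex.schema.dropTableIfExists('tags');
  await knex.schema.dropTableIfExists('posts');
};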
I have managed to insert tags and create a post with tags, but when I want to fetch the post data with the tags attached to that post, I run into trouble.
Here is the problem:
const data = await knex.select('posts.name as postName', 'tags.name as tagName')
  .from('posts')
  .leftJoin('post_tags', 'posts.id', 'post_tags.post_id')
  .leftJoin('tags', 'tags.id', 'post_tags.tag_id')
  .where('posts.id', id);
That query returns this result:
[
  {
    postName: 'Post 1',
    tagName: 'Youtube',
  },
  {
    postName: 'Post 1',
    tagName: 'Funny',
  }
]
But I want the result to be formatted & returned like this:
{
  postName: 'Post 1',
  tagName: ['Youtube', 'Funny'],
}
Is that even possible with a query, or do I have to format the data manually?
One way of doing this is to use some kind of aggregate function. If you're using PostgreSQL:
const data = await knex.select('posts.name as postName', knex.raw('ARRAY_AGG(tags.name) AS tags'))
  .from('posts')
  .innerJoin('post_tags', 'posts.id', 'post_tags.post_id')
  .innerJoin('tags', 'tags.id', 'post_tags.tag_id')
  .where('posts.id', id)
  .groupBy("postName")
  .orderBy("postName")
  .first();
->
{ postName: 'post1', tags: [ 'tag1', 'tag2', 'tag3' ] }
For MySQL:
const data = await knex.select('posts.name as postName', knex.raw('GROUP_CONCAT(tags.name) as tags'))
  .from('posts')
  .innerJoin('post_tags', 'posts.id', 'post_tags.post_id')
  .innerJoin('tags', 'tags.id', 'post_tags.tag_id')
  .where('posts.id', id)
  .groupBy("postName")
  .orderBy("postName")
  .first()
  .then(res => Object.assign(res, { tags: res.tags.split(',') }));
There are no arrays in MySQL, and GROUP_CONCAT will just concatenate all the tags into a string, so we need to split them manually.
->
RowDataPacket { postName: 'post1', tags: [ 'tag1', 'tag2', 'tag3' ] }
The result is correct as that is how SQL works - it returns rows of data. SQL has no concept of returning anything other than a table (think CSV data or Excel spreadsheet).
There are some interesting things you can do in SQL to concatenate the tags into strings, but that is not really what you want. Either way, you will need to add a post-processing step.
With your current query you can simply do something like this:
function formatter (result) {
  let set = {};
  result.forEach(row => {
    if (set[row.postName] === undefined) {
      set[row.postName] = row;
      set[row.postName].tagName = [set[row.postName].tagName];
    }
    else {
      set[row.postName].tagName.push(row.tagName);
    }
  });
  return Object.values(set);
}
// ...
query.then(formatter);
This shouldn't be slow as you're only looping through the results once.
How do we combine distinct with selecting the documents where the value of a field is not equal to a specified value, in a Mongo query using Node.js (the Keystone framework), or just in Mongo generally? I am receiving the error field selection and slice cannot be used with distinct. I did try the syntax { field: { $ne: value } }, and that is what produces the error. Also, how can we include a limit, given that limit cannot be used with distinct (Error: limit cannot be used with distinct)?
query
keystone.list('Customer').model.find({ customer_id: { $in: locals.data.customers } }, { vin: { $ne: vin } }).distinct('vin').limit(4) ....
You can pass a query to distinct, but not skip and limit:
https://docs.mongodb.com/manual/reference/method/db.collection.distinct/#specify-query-with-distinct
Instead, you can use the aggregation pipeline:
db.customer.aggregate([
  { $match: { customer_id: { $in: locals.data.customers } } },
  { $group: { _id: "$vin" } },
  { $skip: skip },
  { $limit: limit },
  { $group: { _id: null, vin: { $push: "$_id" } } }
]);
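In Node with the Keystone model, the same idea could be run roughly like this (a sketch: skip, limit, vin and locals.data.customers are assumed to be in scope, and the $ne condition from the question is folded into the $match stage):
// Sketch: distinct vins with a $ne filter, plus skip and limit, via the aggregation pipeline
const result = await keystone.list('Customer').model.aggregate([
  { $match: { customer_id: { $in: locals.data.customers }, vin: { $ne: vin } } },
  { $group: { _id: '$vin' } },                        // distinct vin values
  { $skip: skip },
  { $limit: limit },
  { $group: { _id: null, vin: { $push: '$_id' } } }   // collect them into one array
]);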