In Strapi v3 the following code would return random records:
strapi.query(table).model.query(qb => {
  qb.limit(count);         // with limit
  qb.orderByRaw("RAND()"); // with random ordering
}).fetchAll()
How can I achieve the same in v4?
For reference, here is how I solved it in v4:
const qb = strapi.db.entityManager
  .createQueryBuilder("table")
  .init({ select: ["id"] })
  .getKnexQuery()
  .orderByRaw(randomSort());
const ids = (await qb).map(r => r.id);
const filters = { id: { $in: ids } };
return await strapi.entityService.findMany(table, { filters });
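Note that randomSort() is not defined in the snippet above. A minimal sketch of what it could look like (this helper is my assumption, not part of the original solution) is a function that returns the database's random-ordering expression:

// Hypothetical helper: returns the raw SQL expression used to order rows randomly.
// "RAND()" is for MySQL/MariaDB; Postgres and SQLite use "RANDOM()" instead.
const randomSort = () => "RAND()";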
I want to compare two ids, one in a relation and the other given by me in a query, and get all the information. For example:
async getAllPhoto(id: string) {
  const photo = await this._photoRepository.find({
    relations: {
      catalogue: true,
    },
    where: { catalogue: { id: Not(id) } },
  });
  return photo;
}
I tried this but got an empty array.
const ids = 2; // the id you receive from the front-end
const photo = await this.repository.find({
  relations: ['catalogue'],
  where: {
    catalogue: {
      id: Not(ids)
    }
  }
});
When you develop a project in NestJS, it's a good idea to enable "logging": true in your database config. You will then see all the raw SQL generated by the ORM.
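For example, with TypeORM the option is called logging. A minimal sketch of a data source config with it enabled (the connection details are placeholders, adjust them to your project):

// Hypothetical TypeORM config with SQL logging enabled.
// Every query the ORM generates (including the Not(id) filter above) is printed to the console.
const { DataSource } = require('typeorm');

const dataSource = new DataSource({
  type: 'postgres',        // placeholder: use your actual database type
  host: 'localhost',
  port: 5432,
  username: 'YOUR_USERNAME',
  password: 'YOUR_PASS',
  database: 'YOUR_DB_NAME',
  entities: [/* your entities */],
  logging: true,           // log all raw SQL statements
});

In NestJS the same logging option can be passed to TypeOrmModule.forRoot().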
This is my first time using bulkWrite to carry out updates via Mongoose. I am building a blog application, which I am using to learn the MERN stack. I have a Post model with a field whose value is an array. This is an example of it:
const PostSchema = new mongoose.Schema(
  {
    postLikes: {
      type: Array,
      default: []
    }
  }
)
The postLikes array contains the MongoDB ObjectIds of users who liked a post.
I have logic for deleting selected users, or all users, as an admin. The like system does not come with a Like model of its own; I simply used an array inside the Post model. After deleting a user, I would like to update all Post documents that contain likes from the selected users. Some users may have multiple likes across different posts.
In my Node code, I created a variable like this:
const {selectedIds} = req.body;
The selectedIds come from React like this:
const [selectedUsers, setSelectedUsers] = useState([]);

const arrayOfSelectedUserId = (userId) => {
  setSelectedUsers(prevArray => [...prevArray, userId]);
};
For the request, I did it like this:
const response = await axiosPrivate.post(`/v1/users/deleteSelected`, selectedIds, {
  withCredentials: true,
  headers: { authorization: `Bearer ${auth.token}` }
});
In Node.js, the selected user ids were received into this variable:
const {selectedIds} = req.body;
I created the logic this way:
const findIntersection = (array1, array2) => {
  return array1.filter((elem) => {
    return array2.indexOf(elem) !== -1;
  });
};
const filteredPost = posts.filter((singleFilter) => {
  const intersection = findIntersection(selectedIds, singleFilter.postLikes);
  return singleFilter.postLikes.length !== 0 && intersection.length !== 0;
});
const updatedPosts = filteredPost.map((obj) => {
  const intersection = findIntersection(selectedIds, obj.postLikes);
  console.log(intersection);
  return {
    updateOne: {
      filter: { _id: obj._id },
      update: { $pull: { postLikes: { $in: intersection } } },
    },
  };
});

Post.bulkWrite(updatedPosts).then((res) => {
  console.log("Documents Updated", res.modifiedCount);
});
The console.log shows the "Documents Updated" text and the number of documents updated. However, when I check my database, the update is not reflected; the selected users' ids are still in the postLikes arrays.
Is there a better method? What am I doing wrong?
I have this query to display data in a table on the frontend, so I used paginate, which works fine:
tableSchema.statics.getn = (query, options) => {
  return mongoose.model(MODEL_NAME).paginate(query, options);
};
But when I try to perform a search query, I am unable to paginate the results. Is there a way to return a paginated response for all search queries?
I tried the following code:
tableSchema.statics.search = query => {
  const Id = Number(query);
  const isNumeric = value => /^\d+$/.test(value);
  if (!isNumeric(query)) {
    if (query.includes("#")) {
      const regex = new RegExp(query, "i");
      return mongoose.model(MODEL_NAME).find({ "recipies.to": regex }).paginate(query);
    }
    return mongoose.model(MODEL_NAME).find({ "temp.name": query });
  }
  return mongoose.model(MODEL_NAME).find({ recipies: { Id } });
};
It throws an error saying paginate is not a function. I also tried storing the find query result in an object and then calling paginate on it, but that did not work either.
I am using "mongoose-paginate-v2" for pagination.
Hi, I think you missed adding the pagination plugin in the model section.
const mongoose = require('mongoose');
const mongoosePaginate = require('mongoose-paginate-v2');

const mySchema = new mongoose.Schema({
  /* your schema definition */
});

mySchema.plugin(mongoosePaginate);

const myModel = mongoose.model('SampleModel', mySchema);

myModel.paginate().then(result => { /* usage: result.docs, result.totalDocs, ... */ });
You need to add mongoosePaginate to the model as a plugin. You can then paginate a query like this:
let options = {
  sort: { createdOn: 1 },
  page: 1,
  limit: 10
};

ModelName.paginate({ 'recipies.to': 'value' }, options, function (err, result) {
  if (err) {
    console.log(err);
  } else {
    // result holds the paginated array; console.log it and check
    console.log(result);
  }
});
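With the plugin registered, the search static from the question could call paginate directly instead of chaining it onto find(). Here is a sketch under that assumption, keeping the original filters from the question:

// Sketch: assumes mongoose-paginate-v2 has been added to tableSchema as a plugin (see above).
tableSchema.statics.search = (query, options = { page: 1, limit: 10 }) => {
  const Id = Number(query);
  const isNumeric = value => /^\d+$/.test(value);
  if (!isNumeric(query)) {
    if (query.includes("#")) {
      const regex = new RegExp(query, "i");
      // paginate takes the filter object as its first argument, not the raw query string
      return mongoose.model(MODEL_NAME).paginate({ "recipies.to": regex }, options);
    }
    return mongoose.model(MODEL_NAME).paginate({ "temp.name": query }, options);
  }
  return mongoose.model(MODEL_NAME).paginate({ recipies: { Id } }, options);
};

Each branch then resolves to the same paginated shape (result.docs, result.totalDocs, result.page, and so on) that the getn static already returns.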
Let's say I have the following schema:
const mySchema = mongoose.Schema({
  _id: mongoose.Schema.Types.ObjectId,
  date: Number,
  data: {
    field1: Number,
    field2: Number
  }
});
And I want to update field2 with myAwesomeValue for the document whose date is myAwesomeDate. My current code, inside an async/await function, is:
// V1
var myAwesomeDocument = await myModel.findOneAndUpdate(
  { date: myAwesomeDate },   // selector
  {
    $set: {                  // update
      'data.field2': myAwesomeValue
    }
  },
  {                          // options
    upsert: false,
    new: true
  }
).exec();
This code allows me to work with the updated document.
If I'm not mistaken, the following code has the same behavior, but should be avoided, since it loads the document to the client side first and is therefore less efficient (see "Mongoose difference between .save() and using update()"):
// V2
var myAwesomeDocument = await myModel.findOne({ date: myAwesomeDate }).exec();
myAwesomeDocument.data.field2 = myAwesomeValue;
myAwesomeDocument = await myAwesomeDocument.save();
Now, I would like to make my code more readable using a .doSomething() chaining style:
// V3 (any code mistake here?)
var myAwesomeDocument = await myModel.findOne({ date: myAwesomeDate })
  .set(
    { 'data.field2': myAwesomeValue },
    { upsert: false, new: true }
  )
  .exec();
My question is about efficiency first, and then about code readability:
Is there more efficient code than V1?
Does V3 perform the same update? Is it as efficient as V1?
Is there a more efficient and readable way to write V3?
Thanks for any kind of answer!
From the examples you provided, the most efficient is V1.
V1: Under the hood it triggers only one query, MongoDB's findAndModify.
V2: Needs two queries to complete the intended update: findOne, then updateOne.
V3: Does not do what was intended; it just runs findOne, without issuing any update operation on the found document.
A correct version of V3 would be the one below:
instance = await model
  .findOneAndUpdate({ date })
  .set({ 'data.f2': f1 })
  .setOptions({ new: true })
  .exec();
Explanation:
Mongoose's findOneAndUpdate returns a Query (check the examples). We then use the Query's methods to set the update operation and the options.
To conclude, you can go with either V1, or the updated V3 I provided, as they are using the same database calls under the hood.
You can always use mongoose.set('debug', true) to analyze which queries are actually sent to the database.
To back up the claims above, here is the code snippet I used to run the tests. You can invoke it like this:
node index.js [repeats?=number] [debug?=true/false]
const uri = 'mongodb://localhost:27017/test-sav';
const mongoose = require('mongoose');

const bombardCount = process.argv[2] ? parseInt(process.argv[2]) : 1;
const debug = process.argv[3] === 'true';

const date = 1234567;
const f1 = 1;
const f2 = 2;

let model;

(async function () {
  await mongoose.connect(uri, { useNewUrlParser: true, useUnifiedTopology: true });

  const schema = new mongoose.Schema({
    date: Number,
    data: {
      f1: Number,
      f2: Number,
    }
  });
  model = mongoose.model('model', schema);

  console.log('### START ###');

  const doc1 = await bombard(v1, bombardCount);
  console.log(doc1);
  const doc2 = await bombard(v2, bombardCount);
  console.log(doc2);
  const doc3 = await bombard(v3, bombardCount);
  console.log(doc3);
  const doc4 = await bombard(v4, bombardCount);
  console.log(doc4);

  console.log('### END ###');
})().catch(error => console.error(error)).then(() => process.exit(1));

async function v1() {
  console.log('### V1 ###\n');
  await beforeEach();
  console.time('### V1 ###');
  let instance = await model.findOneAndUpdate(
    { date },
    {
      $set: {
        'data.f2': f1,
      },
    },
    {
      upsert: false,
      new: true
    }
  ).exec();
  console.timeEnd('### V1 ###');
  await afterEach();
  return instance;
}

async function v2() {
  console.log('### V2 ###\n');
  await beforeEach();
  console.time('### V2 ###');
  let instance = await model.findOne({ date }).exec();
  instance.data.f2 = f1;
  instance = await instance.save();
  console.timeEnd('### V2 ###');
  await afterEach();
  return instance;
}

async function v3() {
  console.log('### V3 ###\n');
  console.time('### V3 ###');
  await beforeEach();
  let instance = await model
    .findOne({ date })
    .set(
      { 'data.f2': f1 },
      { upsert: false, new: true }
    ).exec();
  console.timeEnd('### V3 ###');
  await afterEach();
  return instance;
}

async function v4() {
  console.log('### V4 ###\n');
  console.time('### V4 ###');
  await beforeEach();
  let instance = await model
    .findOneAndUpdate({ date })
    .set({ 'data.f2': f1 })
    .setOptions({ new: true })
    .exec();
  console.timeEnd('### V4 ###');
  await afterEach();
  return instance;
}

async function beforeEach() {
  await new model({
    date,
    data: {
      f1,
      f2,
    },
  }).save();
  mongoose.set('debug', debug);
}

async function afterEach() {
  mongoose.set('debug', debug);
  await model.deleteMany({});
}

async function bombard(f, times) {
  let x;
  for (let i = 0; i < times; i++) {
    x = await f();
  }
  return x;
}
I was hoping to get some help. I just started using Postgres with my Node applications and am curious how to go about dealing with models and model methods. What is the best practice when working with Node and Postgres with regard to models and methods? I looked around and all I could find was something called Objection, but is it absolutely necessary that I take that route?
Ideally I would like to have a model.js file for each component, but I have not seen that done when dealing with Postgres + Node.
Any help is greatly appreciated. Thanks guys, hope you all had a great Thanksgiving!
Assuming the reader understands basic JavaScript modules, the code below should be mostly self-explanatory.
This is my model.js file:
module.exports = ({
  knex = require('./connection'),
  name = '',
  tableName = '',
  selectableProps = [],
  timeout = 1000
}) => {
  const query = knex.from(tableName);

  const create = props => {
    delete props.id;
    return knex.insert(props)
      .returning(selectableProps)
      .into(tableName)
      .timeout(timeout);
  };

  const findAll = () => {
    return knex.select(selectableProps)
      .from(tableName)
      .timeout(timeout);
  };

  const find = filters => {
    return knex.select(selectableProps)
      .from(tableName)
      .where(filters)
      .timeout(timeout);
  };

  const update = (id, props) => {
    delete props.id;
    return knex.update(props)
      .from(tableName)
      .where({ id })
      .returning(selectableProps)
      .timeout(timeout);
  };

  const destroy = id => {
    return knex.del()
      .from(tableName)
      .where({ id })
      .timeout(timeout);
  };

  return {
    query,
    name,
    tableName,
    selectableProps,
    timeout,
    create,
    findAll,
    find,
    update,
    destroy
  };
};
This is my controller.js file:
const model = require('./model');

const user = model({
  name: "users",
  tableName: "tbl_users",
});

const getAllUsers = async (req, res, next) => {
  let result = await user.findAll();
  res.send(result);
};

module.exports = { getAllUsers };
And lastly, the connection.js file:
const knex = require('knex')({
  client: 'pg',
  connection: {
    host: 'YOUR_HOST_ADDR',
    user: 'YOUR_USERNAME',
    password: 'YOUR_PASS',
    database: 'YOUR_DB_NAME'
  },
  pool: {
    min: 0,
    max: 7
  }
});

module.exports = knex;
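To round this out, here is a minimal sketch of how the controller could be wired into an Express route (the file name and route path are my assumptions, not part of the original setup):

// routes.js - hypothetical wiring for the controller above
const express = require('express');
const { getAllUsers } = require('./controller');

const router = express.Router();
router.get('/users', getAllUsers); // GET /users returns every row from tbl_users

module.exports = router;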