My collection has about 1,000 records. I have a Flutter app that fetches data from a Heroku API whose backend is built on Node.js. While fetching, it sorts the entire collection on a numeric "Vacant" field in descending order, and it takes nearly 15 seconds to return the data. I really don't think 1000 documents is a lot of data, so what would be the optimal way to approach this problem?
UPDATE 1: This is the code where I'm fetching the data and sorting on the 'Vacant' field.
UPDATE 2: My Heroku region was set to the US, which is why there was a huge delay in the response time. After shifting to an AWS server close to me, the response now arrives in milliseconds.
const dataSchema = new mongoose.Schema({
  District: String,
  Name: String,
  Vacant: Number,
  Address: String,
  PhoneNumber: String
})

app.get('/:state', (req, res) => {
  const State = mongoose.model(req.params.state, dataSchema)
  State.find().sort([['Vacant', -1]]).exec((err, foundData) => {
    if (err) {
      console.log(err)
    } else {
      res.send(foundData)
    }
  })
})
app.delete("/api/persons/:id", (req, res, next) => {
  Person.findByIdAndDelete(req.params.id)
    .then((result) => {
      res.status(204).end();
    })
    .catch((error) => next(error));
});
Not sure how to explain this properly, but that is my delete method. It works fine for objects that are already in the database, but if I add a new one and don't refresh the site, I get this error: Cast to ObjectId failed for value "undefined" (type string) at path "_id" for model "Person"
Below is my mongoose schema if that helps:
const personSchema = new mongoose.Schema({
  name: { type: String, required: true },
  number: { type: Number, required: true },
});

personSchema.set("toJSON", {
  transform: (document, returnedObject) => {
    returnedObject.id = returnedObject._id.toString();
    delete returnedObject._id;
    delete returnedObject.__v;
  },
});
My guess is you're optimistically updating your frontend with the new Person without waiting for a successful DB response with the new data. That is a valid technique, but it gets you into trouble if you're not careful.
My suggestion would be to also send the new value from the database back to your app right away, so it can stay in sync. You likely have no _id value on the front end if you're optimistically updating the app before a DB response.
Something like this:
app.post("/api/person/new", async (req, res) => {
  try {
    const person = new Person(req.body)
    await person.save()
    res.status(201).send({ person })
  } catch (err) {
    res.status(500).send({ err })
  }
})
And then, more importantly, your API handler on the frontend needs to take that returned person value and use it to update your front-end state, so it has access to the _id property for immediate deletion. And if there's an error creating the person for any reason, you can remove the person from the front end, or handle it however you wish.
I don't know what your app is built with, so I can't write a sample bit of code for it.
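As a framework-agnostic sketch (every name here is illustrative, and `postJson` stands in for whatever HTTP helper your app uses, e.g. a fetch wrapper), the idea is to swap the optimistic placeholder for the copy the server returns:

```javascript
// Post the new person, then replace the optimistic placeholder entry
// (keyed by a temporary client-side id) with the saved copy from the
// server, so the front end now holds the real id for later deletes.
async function savePerson(personFields, tempId, people, postJson) {
  const { person } = await postJson('/api/person/new', personFields);
  return people.map(p => (p.tempId === tempId ? person : p));
}
```

On failure you would instead filter the placeholder back out of the list and surface the error however your UI prefers.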
I am trying to delete documents in my MongoDB collection using its TTL feature, and it works as expected, but not fully: it deletes documents later than the specified time.
I specified a delete time of 10 seconds, but sometimes it takes 20 seconds to delete a document, sometimes 50. Maybe I am making some mistake. I have used the UTC date format, and tried my local date format too, but the result is the same. How do I resolve this problem?
One more thing I want to ask: let's say it works, and I have thousands of documents in my collection; will that decrease the performance of my database for request and response handling?
Because I am not deleting the whole collection but individual documents, keeping track of them might decrease performance. Am I right?
Here is what I tried.
This is my schema:
const mongoose = require('mongoose');

const tokens = mongoose.Schema({
  token: String,
  createdAt: {
    type: Date,
    expires: 10,
  }
});

module.exports = {
  authTokens: mongoose.model('Authtoken', tokens)
}
This is how I am creating a document in the collection:
app.post('/createToken', async (req, res) => {
  try {
    // create document in authTokens collection
    await authTokens.create({
      token: req.body.token,
      createdAt: new Date().toUTCString() // getting the UTC value of the date
    });
  } catch (err) {
    res.send("Could not send token!");
    console.error(err);
    return;
  }
  res.send("Document Created Successfully");
  return;
});
Can anyone help, please? Thanks.
I have a MERN app which uses Mongoose to connect to MongoDB Atlas. While the average response time from Atlas is <300ms, every once in a while this becomes >30 seconds and everything becomes unusable. I am wondering if something might be wrong with the way I handle connections to the database?
Currently in my app.js file:
mongoose.connect(`mongodb+srv://<db name>:${process.env.DB_PASSWORD}@<...>.kea2z.mongodb.net/<...>?retryWrites=true&w=majority`, { useNewUrlParser: true, useUnifiedTopology: true })
In my routers.js file, I handle routes like the following:
import { Post } from './models.js'
...
const postRouter = express.Router()
postRouter.get('/', async (req, res) => {
  try {
    const posts = await Post.find()
    return res.json(posts)
  } catch (e) {
    console.log(`Error while indexing posts: ${e}`)
    return res.status(404).json('An error has occurred!')
  }
})
...
For instance, the above Post collection has 50 documents and 2MB total size, but a simple Post.find() query takes longer than 30 seconds to complete. I have four other collections similar to this, including a collection of images with a size of 65MB. Is there a problem in the way I am querying the database?
UPDATE 1:
I have moved all the images to the cloud, so now my database only stores their URLs. However, it still takes ~15s to query the Post collection, which has a size of 1.3MB and contains 50 documents. In case a faulty schema definition may be causing it, here is its model:
const Post = mongoose.model('Post', new mongoose.Schema({
  slug: String,
  title: String,
  authors: [String],
  date: Date,
  categories: [String],
  content: String
}))
It's not good practice to store images in a MongoDB database.
A better approach is to store the images in some object storage (such as AWS S3) and save the image URLs in the database as strings.
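A minimal sketch of that flow (here `uploadToS3` is a hypothetical stand-in for your storage SDK's upload call, not a real library function):

```javascript
// Upload the raw image bytes to external storage, then persist only the
// returned URL string in the post document, keeping the document small.
async function createPostDoc(postFields, imageBuffer, uploadToS3) {
  const imageUrl = await uploadToS3(imageBuffer);
  return { ...postFields, imageUrl }; // save this object with your Post model
}
```

This keeps each document a few kilobytes of text instead of megabytes of binary data, which is what was making the earlier queries slow to transfer.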
This query may also be faster:

await Post.find().lean();

Note that lean() is faster because you get back plain JavaScript objects instead of full Mongoose documents, but that also means you cannot modify and save a document, e.g. this will not work:

posts[0].name = "jack";
await posts[0].save()
I am developing an application using MongoDB as the database with Node.js + Express in the application layer. I have two collections, namely:
users
transactions
Here I have to update the wallets of thousands of users with some amount and, if successful, create a new document with the related info for each transaction. This is my code:
userModel.update({ _id: ObjectId(userId) }, { $inc: { wallet: 500 } }, function (err, creditInfo) {
  if (err) {
    console.log(err);
  }
  if (creditInfo.nModified > 0) {
    newTransModel = new transModel({
      usersId: ObjectId(userId),
      amount: winAmt,
      type: 'credit',
    });
    newTransModel.save(function (err, doc) {
      if (err) {
        Cb(err);
      }
    });
  }
});
But this solution is not atomic: there is always a possibility of the user's wallet being updated with the amount but the related transaction not being created in the transactions collection, resulting in financial loss.
I have heard that MongoDB recently added transaction support in version 4.0. I have read the MongoDB docs but couldn't implement it successfully with Mongoose in Node.js. Can anyone tell me how the above code could be reimplemented using the latest transactions feature of MongoDB, which has these functions:
Session.startTransaction()
Session.abortTransaction()
Session.commitTransaction()
MongoDB Docs : Click Here
with mongoose in Node.js, can anyone tell me how this above code be reimplemented using the latest Transactions feature
To use MongoDB's multi-document transaction support in Mongoose you need version 5.2 or greater. For example:
npm install mongoose@5.2
Mongoose transactional methods return a promise rather than a session, which requires the use of await. See:
Transactions in Mongoose
Blog: A Node.JS Perspective on MongoDB 4.0: Transactions
For example, adapting the example from the resources above to your code, you can try:
const { ObjectId } = mongoose.Schema.Types;

const User = mongoose.model('Users', new mongoose.Schema({
  userId: String, wallet: Number
}));
const Transaction = mongoose.model('Transactions', new mongoose.Schema({
  userId: ObjectId, amount: Number, type: String
}));

async function updateWallet(userId, amount) {
  const session = await User.startSession();
  session.startTransaction();
  try {
    const opts = { session };
    const A = await User.findOneAndUpdate(
      { _id: userId }, { $inc: { wallet: amount } }, opts);
    const B = await new Transaction(
      { userId: userId, amount: amount, type: "credit" })
      .save(opts);
    await session.commitTransaction();
    session.endSession();
    return true;
  } catch (error) {
    // If an error occurred, abort the whole transaction and
    // undo any changes that might have happened
    await session.abortTransaction();
    session.endSession();
    throw error;
  }
}

await updateWallet(userId, 500);
is not atomic there is always a possibility of user wallet updated with amount but related transaction not created in transactions collection resulting in financial loss
You should also consider changing your MongoDB data models, especially if the two collections are naturally linked. See also Model Data for Atomic Operations for more information.
One model you could try is event sourcing: create a transaction entry first as an event, then recalculate the user's wallet balance using aggregation.
For example:
{tranId: 1001, fromUser:800, toUser:99, amount:300, time: Date(..)}
{tranId: 1002, fromUser:77, toUser:99, amount:100, time: Date(..)}
Then introduce a process to calculate the amount for each user per period as a cache, depending on requirements (e.g. every 6 hours). You can display a user's current wallet balance by adding:
The last cached amount for the user
Any transactions for the user that occurred since the last cached amount, i.e. 0-6 hours ago.
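As a plain JavaScript sketch of that read side (field names follow the example events above; in MongoDB itself you would express the same sum with an aggregation pipeline):

```javascript
// Current balance = last cached snapshot + every event since the snapshot
// that touches this user (credit when toUser, debit when fromUser).
function walletBalance(userId, snapshot, events) {
  return events
    .filter(e => e.time > snapshot.time &&
                 (e.fromUser === userId || e.toUser === userId))
    .reduce((bal, e) =>
      e.toUser === userId ? bal + e.amount : bal - e.amount,
      snapshot.amount);
}
```

Because events are only ever appended and the balance is derived, there is no moment where the wallet and the transaction log can disagree.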
This question already has an answer here:
mongoDB array pagination
(1 answer)
Closed 5 years ago.
Currently I am confused about how to paginate an array of items in a schema.
Technically, every user has a cart, and I want to paginate the cart's items. For example, let's say the cart has 18 items stored; I want to paginate the cart to only 5 items per page.
Here's the schema
const UserSchema = new Schema({
  email: { type: String, unique: true, lowercase: true },
  cart: [{
    type: Schema.Types.ObjectId, ref: 'Product'
  }],
});
Here's the route with my current approach:
router.get('/cart', (req, res, next) => {
  User.findOne({ _id: req.user._id })
    .populate({
      path: 'cart',
      options: { limit: 5 }
    })
    .exec((err, user) => {
      res.render('order/cart', { user: user });
    });
});
My current approach will definitely limit the items shown from 18 to only 5, but this creates a new problem: now I can't calculate the total price of the cart, because each request is limited to 5 items.
You can do the pagination on the client side. This way you get all the products from the server but display only five on the client.
That allows you to calculate the total (or any other aggregate) over all the data, and also to filter and sort all of it.
If you have a large amount of data, it will take a long time for the client to load it from the server the first time, so for large amounts of data this may not be the right way to go. In that case you probably want to load the data in chunks, maybe in the background, and perform the calculations on the server, possibly also in the background, for optimization.
You can find the pros and cons of server- vs client-side pagination at https://dzone.com/articles/pagination-server-side-or-clie
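A small sketch of the client-side approach (names are illustrative): slice one page for display while totalling over the full cart that was fetched once.

```javascript
// Keep the full cart in memory; show one page but total every item.
function paginateCart(cart, page, pageSize = 5) {
  const start = (page - 1) * pageSize;
  return {
    items: cart.slice(start, start + pageSize),
    totalPrice: cart.reduce((sum, item) => sum + item.price, 0),
    pageCount: Math.ceil(cart.length / pageSize),
  };
}
```

With the 18-item cart from the question and a page size of 5, pages 1-3 show 5 items each and page 4 shows the remaining 3, while totalPrice always covers all 18 items.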