How to find multiple MongoDB objects at once in Node.js

I am trying to run a search. It works fine with findOne and I receive data, but when I use find it just returns some headers and prototypes (a cursor object) instead of the documents. Here is my code:
let data = await client
  .db("Movies")
  .collection("movies")
  .find();
console.log(data);
Please tell me where I am going wrong.

Based on the documentation:
The toArray() method returns an array that contains all the documents from a cursor. The method iterates completely the cursor, loading all the documents into RAM and exhausting the cursor.
So just try:
let data = await client
  .db("Movies")
  .collection("movies")
  .find({})
  .toArray();
console.log(data);
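Note that, as the quoted documentation says, toArray() loads every matching document into RAM and exhausts the cursor. For large collections you can instead iterate the cursor lazily; a minimal sketch, assuming a driver version (3.x or newer) that supports async iteration:

const cursor = client.db("Movies").collection("movies").find({});
for await (const movie of cursor) {
  // Documents arrive in driver-managed batches, so only one batch
  // at a time is held in memory.
  console.log(movie);
}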

Related

Cursor.forEach in Express hanging when document not found

I'm using the MongoDB and Node.js stack trying to do some simple conditional logic. Unfortunately, I am using the MongoDB native driver for Node.js, and the owner of the project didn't choose to use Mongoose.
The code below looks through my documents, filters on { props: 'value' }, and sends a response via Express.
let cursor = db.collection('collection').find({ props: 'value' })
cursor.forEach((doc) => {
  if (!doc) {
    return res.send('No document found with that property assigned!')
  }
  res.json(doc)
})
The method works fine when the property is found, but Express hangs when the value isn't found. Does anyone have any fixes?
It looks like the No document found ... condition will never be triggered: the if block sits inside a callback that forEach only invokes when a doc actually exists, so it never runs when there are no results.
Try making the 'no results' case the default, or check whether the cursor contains any documents before responding, as in the sketch below.
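A minimal sketch of the second option, assuming the same db and res variables are in scope and that only the first match should be returned (res.json can only be called once per request):

const docs = await db.collection('collection').find({ props: 'value' }).toArray()

if (docs.length === 0) {
  // Default case: no match, so respond explicitly instead of hanging.
  return res.send('No document found with that property assigned!')
}

// Send a single document; the original forEach would call res.json
// once per matching document.
res.json(docs[0])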

Firestore: get document back after adding it / updating it without additional network calls

Is it possible to get document back after adding it / updating it without additional network calls with Firestore, similar to MongoDB?
I find it stupid to first make a call to add / update a document and then make an additional call to get it.
As you have probably seen in the documentation of the Node.js (and JavaScript) SDKs, this is not possible, neither with the methods of a DocumentReference nor with those of a CollectionReference.
More precisely, the set() and update() methods of a DocumentReference both return a Promise containing void, while the CollectionReference's add() method returns a Promise containing a DocumentReference.
Side note (in line with darrinm's answer below): it is interesting that with the Firestore REST API, when you create a document, you get a Document object back in the API endpoint response.
When you add a document to Cloud Firestore, the server can affect the data that is stored. A few ways this may happen:
If your data contains a marker for a server-side timestamp, the server will expand that marker into the actual timestamp.
If your data is not permitted by your server-side security rules, the server will reject the write operation.
Since the server affects the contents of the Document, the client can't simply return the data that it already has as the new document. If you just want to show the data that you sent to the server in your client, you can of course do so by simply reusing the object you passed into setData(...)/addDocument(data: ...).
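For illustration, here is what the server-timestamp marker mentioned above looks like in the Node.js Admin SDK (a minimal sketch; the collection and field names are assumptions):

const admin = require('firebase-admin');
admin.initializeApp();

async function createPost() {
  // The client sends a marker, not a concrete time; the server expands
  // it into the actual timestamp when the write is applied, which is why
  // the final document contents are only known server-side.
  const ref = await admin.firestore().collection('posts').add({
    title: 'Hello',
    createdAt: admin.firestore.FieldValue.serverTimestamp(),
  });
  console.log('Created document', ref.id); // add() resolves to a DocumentReference
}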
This appears to be an arbitrary limitation of the Firestore JavaScript API. The Firestore REST API returns the updated document on the same call.
https://firebase.google.com/docs/firestore/reference/rest/v1beta1/projects.databases.documents/patch
I did this to get the ID of a newly created document, and then used it for something else.
Future<DocumentReference<Object>> addNewData() async {
  final FirebaseFirestore _firestore = FirebaseFirestore.instance;
  final CollectionReference _userCollection = _firestore.collection('users');

  return await _userCollection
      .add({'data': 'value'})
      .whenComplete(() {
        // Show good notification
      })
      .catchError((e) {
        // Show bad notification
      });
}
And here I obtain the ID:
await addNewData().then((document) async {
  // Get the ID of the document that was just created.
  print('ID Document Created ${document.id}');
});
I hope it helps.

How do MongoDB cursors work when used with Node.js?

I am using Node.js with the npm package mongodb. When I use findOne(...), I get a result which is directly the item I searched for. When I use find(...) instead, I don't get an array of elements; I get a cursor, which looks very weird if you console.log it.
My question is: why does it return a cursor instead of an array of elements? And is the cursor.forEach(...) call asynchronous, or how else can the client get the data out of the cursor?
It returns a cursor instead of an array to provide flexibility to the client to access the results in whatever way is optimal for its needs.
To get an array of all results, you can call the async toArray method on the cursor:
collection.find({...}).toArray((err, docs) => {...});
Same thing for aggregate:
collection.aggregate([{$match: {...}}]).toArray((err, docs) => {...});
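As for the second part of the question: yes, reading from the cursor is asynchronous. With driver versions 3.x and newer the cursor methods also return Promises, so the same calls can be written with async/await; a minimal sketch:

// toArray() with await instead of a callback:
const docs = await collection.find({ /* query */ }).toArray();

// cursor.forEach is asynchronous too; when no completion callback is
// passed it returns a Promise that resolves once the cursor is exhausted:
await collection.find({ /* query */ }).forEach((doc) => {
  console.log(doc);
});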

In firestore, does using .get() with the nodejs admin sdk on a collection reference read all data into memory?

I am using the Admin SDK for Node.js, working with Firestore.
I am trying to handle many documents, and I therefore wonder how the following function behaves.
export async function test() {
  const collection = firestore.afs.collection(<some-path>);
  const items = await collection.get();
  // This method takes a rather long time on a big collection.
}
test();
Does the get() method read all the documents up into memory?
Reading all documents in a collection into a variable, as your code does, loads them all into memory. There really isn't anywhere else to put them.
If you don't want to load all documents at once, you'll want to use queries to determine the specific documents to load, as in the pagination sketch below.
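A minimal pagination sketch using limit() and startAfter(); the collection path, ordering field, and batch size are assumptions:

async function readInBatches(batchSize) {
  const collection = firestore.afs.collection('items');
  let query = collection.orderBy('name').limit(batchSize);

  while (true) {
    const snapshot = await query.get();
    if (snapshot.empty) break; // no more documents

    for (const doc of snapshot.docs) {
      // Process each document here instead of holding the whole
      // collection in memory at once.
      console.log(doc.id);
    }

    // Continue after the last document of this batch.
    const last = snapshot.docs[snapshot.docs.length - 1];
    query = collection.orderBy('name').startAfter(last).limit(batchSize);
  }
}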

Mongoose js batch find

I'm using Mongoose 3.8. I need to fetch 100 documents, execute the callback function, then fetch the next 100 documents and do the same thing.
I thought .batchSize() would do the same thing, but I'm getting all the data at once.
Do I have to use limit or offset? If so, can someone give a proper example of how to do it?
If it can be done with batchSize, why is it not working for me?
MySchema.find({}).batchSize(20).exec(function (err, docs) {
  console.log(docs.length);
});
I thought it would print 20 each time, but it's printing the whole count.
This link has the information you need.
You can do this,
var pagesize = 100;
MySchema.find().skip(pagesize * (n - 1)).limit(pagesize);
where n is the parameter you receive in the request: the page number the client wants.
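For example, wired into a hypothetical Express route (the route path and parameter name are assumptions, not part of the original answer):

app.get('/items/:page', function (req, res) {
  var pagesize = 100;
  var n = parseInt(req.params.page, 10) || 1; // page number from the client

  MySchema.find()
    .skip(pagesize * (n - 1))
    .limit(pagesize)
    .exec(function (err, docs) {
      if (err) return res.status(500).send(err);
      res.json(docs);
    });
});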
The docs say:
In most cases, modifying the batch size will not affect the user or the application, as the mongo shell and most drivers return results as if MongoDB returned a single batch.
You may want to take a look at streams and perhaps try to accumulate subresults:
var stream = Dummy.find({}).stream();
stream.on('data', function (dummy) {
  callback(dummy);
});
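A sketch of what accumulating subresults into batches of 100 could look like with the Mongoose 3.x stream API; processBatch stands in for your own batch callback and is an assumption:

var batch = [];
var stream = MySchema.find({}).stream();

stream.on('data', function (doc) {
  batch.push(doc);
  if (batch.length === 100) {
    stream.pause(); // stop reading while this batch is processed
    processBatch(batch, function () {
      batch = [];
      stream.resume(); // continue with the next batch
    });
  }
});

stream.on('close', function () {
  if (batch.length > 0) {
    processBatch(batch, function () {}); // flush the final partial batch
  }
});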
