I am working on a Node project and I installed the firebase package.
This is the structure of my database.
I want to extract all the data of the client aaa#gmail.com in only one operation, I mean
the name, email, age and all addresses...
const docClient = doc(db, "Client", "aaa#gmail.com");
const collectionAddresses = collection(docClient, "addresses");

const DataClient = await getDoc(docClient);
const querySnapshot = await getDocs(collectionAddresses);

console.log(DataClient.data());
querySnapshot.forEach((doc) => {
  console.log(doc.id, " => ", doc.data());
});
I think these are too many operations just to extract the customer data that lives under the same document.
ChatGPT told me that if I use transactions, both operations are performed as if they were one, and it gave me code that is not working.
But this code is wrong, because I think transactions only work with documents, not collections.
try {
  await runTransaction(db, async (transaccion) => {
    var collectionAddresses = collection(doc(db, "Client", "aaaa#gmail.com"), "addresses");
    var a = await transaccion.get(collectionAddresses);
  });
} catch (errors) {
  console.log(errors);
}
In fact, Node prints an error to the console.
So my question is: how can I get all the data in only one operation?
I understand that by "one operation" you mean "one query". With your data model this is not possible. You'll have to query the client doc and all the address docs.
So for your example (with 3 addresses) you'll pay for 4 document reads.
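If your main concern is latency rather than billed reads, the two queries can at least be issued in parallel; it is still 1 + N document reads. A minimal sketch using the same v9 modular SDK calls as in your question:

// Not fewer reads, but both requests are in flight at the same time.
const [clientSnap, addressesSnap] = await Promise.all([
  getDoc(doc(db, "Client", "aaa#gmail.com")),
  getDocs(collection(db, "Client", "aaa#gmail.com", "addresses")),
]);
console.log(clientSnap.data());
addressesSnap.forEach((d) => console.log(d.id, " => ", d.data()));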
Note that using a transaction will not change anything from a number-of-reads perspective. The ChatGPT answer is not very good... The "one operation" that this AI refers to is the fact that changes in a transaction are grouped in an atomic operation, not that there is only one read or one write from/to the Firestore DB.
One possible approach would be to store the addresses in an Array in the client doc, but you have to be sure that you'll never reach the 1MiB maximum size limit for a document.
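A minimal sketch of that embedded-array model, assuming the same v9 modular SDK; the address fields used here are made up for illustration:

import { doc, getDoc, updateDoc, arrayUnion } from "firebase/firestore";

// Append an address to the client's embedded array (hypothetical fields):
await updateDoc(doc(db, "Client", "aaa#gmail.com"), {
  addresses: arrayUnion({ street: "Example St. 1", city: "Example City" })
});

// A single read now returns name, email, age and all addresses:
const snap = await getDoc(doc(db, "Client", "aaa#gmail.com"));
console.log(snap.data()); // { name, email, age, addresses: [...] }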
Hello and thanks in advance!
MERN stack student here. I'm just getting to know the MVC design pattern as well.
So, here's my question:
I'm trying to get some docs from a Mongo collection (working with Mongoose) and I get a limit query with my request (let's say I need only the first 5 docs from a collection of 30 docs). What is considered best practice in general? Would you use one or the other in different cases (e.g. depending on how big the DB is, as an example that comes to mind)?
Something like this:
Controller:
async getProducts(req, res) {
  const { limit } = req.query;
  const products = await productManagerDB.getProducts();
  res.status(200).json({ success: true, limitedProductsList: products.slice(0, Number(limit)) });
}
Or
Like this:
Controller:
async getProducts(req, res) {
  const { limit } = req.query;
  const products = await productManagerDB.getProducts(limit);
  res.status(200).json({ success: true, limitedProductsList: products });
}
Service:
async getProducts(query) {
  try {
    const limit = query ? Number(query) : 0;
    const products = await ProductsModel.find().limit(limit);
    return products;
  } catch (error) {
    throw new Error(error.message);
  }
}
Tried both ways with the same outcome. I expect the second to be more efficient, since it's not loading all the data that I'm not using, but I'm curious whether in some cases it would be better to fetch the whole collection...
As you have already stated, the second query is far more efficient - especially when you have a large collection.
The two differences would be:
- The MongoDB engine will have to fetch all the documents from disk. The way MongoDB is designed (the internal WiredTiger storage engine), it caches the frequently used data either in the application cache or the OS cache; in any case, it's quite possible that the whole collection will not fit in memory, and therefore a large number of disk operations will happen (which are very slow comparatively, even with the latest NVMe disks).
- A large amount of data will have to flow over the network from the database server to the application server, which is a waste of bandwidth and will be slower.
Where you might need the full collection obviously depends on the use case.
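To make the second approach a bit more robust, you might also parse and cap the limit on the server. A small sketch reusing the names from the question; the default of 10 and the cap of 100 are just assumptions:

async function getProducts(req, res) {
  // Default to 10 and cap at 100 so a client can't request the whole collection.
  const limit = Math.min(Number(req.query.limit) || 10, 100);
  const products = await ProductsModel.find().limit(limit);
  res.status(200).json({ success: true, limitedProductsList: products });
}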
Okay, so I have a Node.js/Express app that has an endpoint which allows users to receive notifications by opening up a connection to said endpoint:
var practitionerStreams = [] // this is a list of all the streams opened by pract users to the backend

async function notificationEventsHandler(req, res){
  const headers = {
    'Content-Type': 'text/event-stream',
    'Connection': 'keep-alive',
    'Cache-Control': 'no-cache'
  }
  const practEmail = req.headers.practemail
  console.log("PRACT EMAIL", practEmail)
  const data = await ApptNotificationData.findAll({
    where: {
      practEmail: practEmail
    }
  })
  //console.log("DATA", data)
  res.writeHead(200, headers)
  await res.write(`data:${JSON.stringify(data)}\n\n`)
  // create a new stream
  const newPractStream = {
    practEmail: practEmail,
    res
  }
  // add the new stream to list of streams
  practitionerStreams.push(newPractStream)
  req.on('close', () => {
    console.log(`${practEmail} Connection closed`);
    // remove only this practitioner's stream from the list
    practitionerStreams = practitionerStreams.filter(pract => pract.practEmail !== practEmail);
  });
  return res
}
async function sendApptNotification(newNotification, practEmail){
  // iterate through the array and find the stream that contains the pract email we want
  // then write the new notification to that stream
  var updatedPractitionerStream = practitionerStreams.map((stream) => {
    if (stream["practEmail"] == practEmail){
      console.log("IF")
      stream.res.write(`data:${JSON.stringify(newNotification)}\n\n`)
      return stream
    }
    else {
      // if it doesnt contain the stream we want leave it unchanged
      console.log("ELSE")
      return stream
    }
  })
  practitionerStreams = updatedPractitionerStream
}
Basically, when the user connects, it takes the response object (which will stay open), puts it in an object along with a unique email, and writes to it later in sendApptNotification.
But obviously this is slow for a full app. How exactly do I replace this with Redis? Would I still have a Response object that I write to? Or would that be replaced with a Redis stream that I can subscribe to on the frontend? I also assume I would store all my streams in Redis as well.
Edit: from the examples I've seen, people are writing events from Redis to the response object.
Thank you in advance
If you want to use Redis Streams as a notification system, you can follow this official guide:
https://redis.com/blog/how-to-create-notification-services-with-redis-websockets-and-vue-js/
To get this data in real time you need to create a WebSocket connection. I prefer to point you to the official guide instead of writing it out for you because of the quality of that guide. It's perfect for anyone to understand how to build it, but you will need to adapt it to your own situation, of course.
However, like I've said in the comments, I believe it's simpler to hit an API endpoint like /api/v1/notifications with setInterval in your frontend code and do requests every 5 seconds, for example. If you prefer a real-time notification system, I think you need to understand why you need it, so that in the future you can change your system if you decide you need to. Basically it's a trade-off you have to make!
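A minimal sketch of that polling alternative on the frontend; the /api/v1/notifications endpoint and the renderNotifications function are assumptions, not part of your code:

// Poll the API every 5 seconds instead of holding a long-lived connection.
setInterval(async () => {
  const res = await fetch('/api/v1/notifications');
  const notifications = await res.json();
  renderNotifications(notifications); // hypothetical function that updates the UI
}, 5000);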
For my example, imagine two tables in a relational database, one called Users and the other Notifications.
The tables of this example:

UsersTable
id  name
1   andrew
2   mark

NotificationTable
id  message   userId  isRead
1   message1  1       true
2   message2  1       false
3   message3  2       false
The endpoint in this example will return all cached notifications that haven't been read by the user. If the cache doesn't exist, it will fetch the data from the database, put it in the cache and return it to the user. On the next API call, you'll get the result from the cache. There are some points left to complete in this example, for example the query on the database to get the notifications and the configuration of the cache expiration time. Another important thing: if you want to keep the notifications in the cache up to date, you need to create a middleware and trigger it in the parts of your code that notify the user; in that case you'll update both the database and the cache. But I think you can complete these points.
const express = require('express');
const redis = require('redis');

const app = express();
const redisClient = redis.createClient();
redisClient.connect(); // node-redis v4: connect once at startup

app.get('/notifications', async (request, response) => {
  const userId = request.user.id;
  // try the cache first
  const cacheResult = await redisClient.get(`user:${userId}:notifications`);
  if (cacheResult) return response.send(JSON.parse(cacheResult));
  // cache miss: read from the database, then populate the cache
  const notifications = await getUserNotificationsFromDatabase(userId);
  await redisClient.set(`user:${userId}:notifications`, JSON.stringify(notifications));
  response.send(notifications);
})
Besides that, there's another way: you can simply use only Redis or only the database to manage these notifications. Your relational database with the correct index will return the results as fast as you expect. You only have to think about how many notifications you'll end up having.
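For the "only Redis" variant, one sketch is to keep each user's unread notifications in a per-user list; the key name and the node-redis v4 client are assumptions here:

// When a new notification is created, push it onto the user's list...
await redisClient.lPush(`user:${userId}:notifications`, JSON.stringify(newNotification));

// ...and read the whole list back when the endpoint is called.
const raw = await redisClient.lRange(`user:${userId}:notifications`, 0, -1);
const notifications = raw.map((item) => JSON.parse(item));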
In a reservation system, only 5 different users can create bookings. If 100 users call the booking API at the same time, how do I handle concurrency with locking? I am using Node.js with MongoDB. I went through the MongoDB concurrency article and transactions in MongoDB, but I cannot find any sample code solution with locking.
I have achieved a solution with optimistic concurrency control (when there is low contention for the resource; this can easily be implemented using a versionNumber or timeStamp field).
Thank you in advance for suggesting a solution with locking.
Now the algorithm is:
Step 1: Get userAllowedNumber from userSettings collection.
//Query
db.getCollection('userSettings').find({})
//Response Data
{ "userAllowedNumber": 5 }
Step 2, Get current bookedCount from bookings collection.
//Query
db.getCollection('bookings').count({ })
//Response Data
2
Step 3, if bookedCount <= userAllowedNumber then insert in bookings.
//Query
db.getCollection('bookings').insertOne({ user_id: "usr_1" })
I had an in-depth discussion about locking with transactions in the MongoDB community. In conclusion, I learned about the limitations of transactions: there is no lock we can use to handle concurrent requests for this task.
You can see the full MongoDB community conversation at this link:
https://www.mongodb.com/community/forums/t/implementing-locking-in-transaction-for-read-and-write/127845
The GitHub demo code with JMeter testing shows the limitation and that it is not able to handle concurrent requests for this task:
https://github.com/naisargparmar/concurrencyMongo
New suggestions are still welcome and appreciated.
To solve your problem, you can also try using Redlock (https://redis.com/redis-best-practices/communication-patterns/redlock/) for distributed locking, or a mutex lock for locking within a single instance.
For a simple example of a transaction, first you need a client connected to MongoDB:
const { MongoClient } = require('mongodb');

const client = new MongoClient(uri);
await client.connect();
Once you have the client, you can create a session from which you can make transactions:
const session = await client.startSession();
const transactionResults = await session.withTransaction(async () => {
  // pass the session so the insert runs inside the transaction
  await client.db().collection("example").insertOne(example, { session });
}, transactionOptions);
With transactionOptions being:
const transactionOptions = {
  readPreference: 'primary',
  readConcern: { level: 'majority' },
  writeConcern: { w: 'majority' }
};
You can read about read/write concerns in the MongoDB documentation.
Depending on your use case, you may also consider findAndModify, which locks the document while it is being modified.
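As a sketch of that idea, here is a variant using a single atomic updateOne (rather than findAndModify) on a hypothetical bookingCounters document, instead of the read-then-insert sequence from the question, so two concurrent requests cannot both pass the check:

// Only increments while the counter is still below the limit; the filter and
// the $inc are applied atomically inside MongoDB.
const result = await client.db().collection('bookingCounters').updateOne(
  { _id: 'bookings', count: { $lt: userAllowedNumber } },
  { $inc: { count: 1 } }
);
if (result.modifiedCount === 1) {
  // We "won" one of the available slots, so the booking can be created.
  await client.db().collection('bookings').insertOne({ user_id: 'usr_1' });
} else {
  throw new Error('Booking limit reached'); // or return an error response
}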
I have to insert into a table data regarding sent emails, after each email is sent.
Inside a loop I'm filling an array of promises to be resolved by Promise.all().
insertData is a function that inserts data, given two arguments: connector, the connection pool, and dataToInsert, an object with the data to be inserted.
async function sendAndInsert(payload) {
  for (const data of payload) {
    const array = [];
    const dataToInsert = {
      id: data.id,
      campaign: data.campaign,
    }
    for (const email of data) {
      array.push(insertData(connector, dataToInsert));
    }
    await Promise.all(array);
  }
}
Afterwards, the function is invoked:
async invoke () {
  await sendAndInsert(toInsertdata);
}
To insert 5000 records, it takes about 10 minutes, which is nuts.
Using:
- Node.js v10
- pg-txclient as the DB connector to PostgreSQL
What I've done, and what can be discarded as a possible source of error:
- Inserted random stuff into the table using the same connection.
I'm sure there is no issue with the DB server or the connection. The issue must be in the Promise.all() / await stuff.
It looks like each record is being inserted through a separate call to insertData. Each call is likely to include overhead such as network latency, and 5000 requests cannot all be handled simultaneously. One call to insertData has to send the data to the database and wait for a response, before the next call can even start sending its data. 5000 requests over 10 minutes corresponds to 1.2 seconds latency per request, which is not unreasonable if the database is on another machine.
A better strategy is to insert all of the objects in one network request. You should modify insertData to allow it to accept an array of objects to insert instead of just one at a time. Then, all data can be sent at once to the database and you only suffer through the latency a single time.
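A sketch of that batched approach: pg-txclient's API isn't shown in the question, so this assumes the plain pg library and a hypothetical sent_emails(id, campaign) table, building one parameterized multi-row INSERT:

const { Pool } = require('pg');
const pool = new Pool();

async function insertMany(rows) {
  // Build one statement: INSERT ... VALUES ($1, $2), ($3, $4), ...
  const values = [];
  const placeholders = rows.map((row, i) => {
    values.push(row.id, row.campaign);
    return `($${2 * i + 1}, $${2 * i + 2})`;
  });
  await pool.query(
    `INSERT INTO sent_emails (id, campaign) VALUES ${placeholders.join(', ')}`,
    values
  );
}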
I have this simple app that I created for iOS; it is a questionnaire app, and whenever the user clicks play, it will invoke a request to a Node.js/Express server.
Technically, after a user clicks an answer it will go to the next question.
I'm confused about which method to use to fetch the question(s):
1. Fetch all the data at once and present it to the user - which is an array
2. Fetch the data one by one as the user progresses to the next question - which is one item per call
API examples
// Fetch all the data at once
app.get('/api/questions', (req, res, next) => {
  Question.find({}, (err, questions) => {
    res.json(questions);
  });
});

// Fetch the data one by one
app.get('/api/questions/:id', (req, res, next) => {
  Question.findOne({ _id: req.params.id }, (err, question) => {
    res.json(question);
  });
});
The problem with approach number 1 is that, let's say there are 200 questions: wouldn't it be slow for MongoDB to fetch them all at once, and possibly slow as a network request?
The problem with approach number 2 is that I just can't imagine how to do it, because every question is independent and triggering the next API call is just weird, unless there is a counter or a level field on the question in MongoDB.
Just for the sake of clarity, this is the question database design in Mongoose
const mongoose = require('mongoose');
const Schema = mongoose.Schema;
const QuestionSchema = new Schema({
  question: String,
  choice_1: String,
  choice_2: String,
  choice_3: String,
  choice_4: String,
  answer: String
});
I'd use Dave's approach, but I'll go a bit more into detail here.
In your app, create an array that will contain the questions. Then also store a value which question the user currently is on, call it index for example. You then have the following pseudocode:
index = 0
questions = []
Now that you have this, as soon as the user starts up the app, load 10 questions (see Dave's answer; use MongoDB's skip and limit for this), then add them to the array. Serve questions[index] to your user. As soon as the index reaches 8 (= 9th question), load 10 more questions via your API and add them to the array. This way, you will always have questions available for the user.
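Sticking with JavaScript-style pseudocode, the idea looks roughly like this; the endpoint name, the page size of 10, and the use of fetch are assumptions:

let index = 0;
let questions = [];

async function loadMoreQuestions() {
  const pageNumber = Math.floor(questions.length / 10);
  const res = await fetch(`/api/questions?pageNumber=${pageNumber}&pageSize=10`);
  questions = questions.concat(await res.json());
}

async function nextQuestion() {
  // Prefetch the next page shortly before the current batch runs out.
  if (index >= questions.length - 2) {
    await loadMoreQuestions();
  }
  return questions[index++];
}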
Very good question. I guess the answer depends on your future plans for this app.
If you are planning to have 500 questions, then getting them one by one will require 500 API calls, which is not always the best option. On the other hand, if you fetch all of them at once, it will delay the response depending on the size of each object.
So my suggestion would be to use pagination: bring 10 results, and when the user reaches the 8th question, update the list with the next 10 questions.
This is a common practice among mobile developers, and it also gives you the flexibility to adapt the next questions based on the user's previous responses, like adaptive tests do.
EDIT
You can add pageNumber & pageSize query parameters to your request for fetching questions from the server, something like this:
myquestionbank.com/api/questions?pageNumber=10&pageSize=2
Receive these parameters on the server:
var pageOptions = {
  pageNumber: req.query.pageNumber || 0,
  pageSize: req.query.pageSize || 10
}
and while querying from your database provide these additional parameters.
Question.find()
  .skip(pageOptions.pageNumber * pageOptions.pageSize)
  .limit(pageOptions.pageSize)
  .exec(function (err, questions) {
    if (err) {
      res.status(500).json(err);
      return;
    }
    res.status(200).json(questions);
  })
Note: start your pageNumber at zero (0). It's not mandatory, but that's the convention.
The skip() method allows you to skip the first n results. Consider the first request: pageNumber will be zero, so the product (pageOptions.pageNumber * pageOptions.pageSize) will be zero and it will not skip any records.
But for the next request (pageNumber=1) the product will be 10, so it will skip the first 10 results, which were already processed.
limit() limits the number of records that will be returned in the result.
Remember that you'll need to update the pageNumber variable with each request (you can vary the limit as well, but it is advised to keep it the same across requests).
So, all you have to do is, as soon as the user reaches the second-to-last question, request 10 (pageSize) more questions from the server and put them in your array.
code reference : here.
You're right, the first option is a never-to-use option in my opinion too. Fetching that much data is useless if it is not being used, or might not be used, in that context.
What you can do is that you can expose a new api call:
app.get('/api/questions/getOneRandom', (req, res, next) => {
  Question.count({}, function (err, count) {
    console.log("Number of questions:", count);
    // pick a random offset between 0 and count - 1
    var random = Math.floor(Math.random() * count);
    // now you can do
    Question.find({}, {}, { limit: 1, skip: random }, (err, questions) => {
      res.json(questions);
    });
  })
});
The skip: random will make sure that a random question is fetched each time. This is just a basic idea of how to fetch a random question from all of your questions. You can add further logic to make sure that the user doesn't get any question which he has already solved in the previous steps.
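A sketch of that extra logic, where answeredIds is a hypothetical array of the _ids the user has already solved (kept wherever your user state lives):

const filter = { _id: { $nin: answeredIds } };
Question.count(filter, (err, count) => {
  // pick a random index among the questions the user has not answered yet
  const random = Math.floor(Math.random() * count);
  Question.find(filter, {}, { limit: 1, skip: random }, (err, questions) => {
    res.json(questions);
  });
});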
Hope this helps :)
You can use the concept of limit and skip in MongoDB.
When you hit the API for the first time, use limit=20 and skip=0, and increase your skip count every time you hit that API again:
1st time => limit=20, skip=0
when you click next => limit=20, skip=20, and so on
app.get('/api/questions', (req, res, next) => {
  Question.find({}, {}, { limit: 20, skip: 0 }, (err, questions) => {
    res.json(questions);
  });
});