I am using the Firebase Admin SDK with Node.js to get data by phone number from Firestore. Once the query succeeds, I want to fetch the document only once and reuse it everywhere afterwards. Is that possible?
First I get the document by phone number, like this.
Example:
const snapshot = await admin.firestore().collection('users').where('phone_number', '==', phoneNumber).get()
const phone = snapshot.docs[0] // first matching user document
After that, I want to use the phone document everywhere in my code, like this:
await handleLogic(phone)
Then
async function handleLogic(phone) {
  // inside here, do I need to call admin.firestore().collection('users').where('phone_number', '==', somePhone).get() again, or can I reuse the phone parameter?
  phone.ref.collection("subcollection").get()
    .then(() => {
      let data = {
        created_at: timeNowFirebase(),
      };
      phone.ref.collection("subcollection").doc(someId)
        .set(data);
    });
}
My question: inside handleLogic(phone), do I need to call admin.firestore().collection('users').where('phone_number', '==', phone).get() again, or can I just use the phone parameter and call phone.ref.collection(subcollection).doc(someId).set(data)? Will that set the subcollection on my phone document correctly?
Yes, you can store the document ID (or even the DocumentReference, as you are doing now) in memory or a cache instead of querying by phone number every time. The document ID never changes, so this is a good way to avoid additional requests to the database.
// storing in memory for example
const phoneToUserId = {};
async function handleLogic(phone) {
if (!phoneToUserId[phone]) {
// run a query to get userID from phone number
const user = await admin.firestore().collection('users').where('phone_number', '==', phone).get()
phoneToUserId[phone] = user.docs[0].id
}
// get a reference to sub-collection
const subCol = admin.firestore().collection(`users/${phoneToUserId[phone]}/subcollection`)
// query data
}
However, do note that you'll have to update that object whenever a user updates their phone number or deletes their document.
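For completeness, here is a minimal sketch of how the cached ID could then be reused for the subcollection write from the question (subcollection, someId, and timeNowFirebase() are placeholders taken from the question, not real APIs):
// Hypothetical helper reusing the in-memory cache from above
async function writeCallback(phone, someId) {
  const userId = phoneToUserId[phone]; // assumes handleLogic(phone) has already populated the cache
  await admin
    .firestore()
    .doc(`users/${userId}/subcollection/${someId}`)
    .set({ created_at: timeNowFirebase() });
}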
I'm looking to add the Firestore ID to the DocumentData, so that I can easily use the ID when referring to rows in a table, without having to write document.data().property every time I access a property of a document. Instead, I want to be able to call document.id..., document.property..., and so on.
Is there an easy way to do this? Possibly with a Cloud Function that adds the auto-generated ID to the document data?
Thanks!
Example:
export const getSpaces = async () => {
const spaceDocs = await getDocs(spacesCollection)
spaceDocs.docs.forEach((spaceDoc) => {
const spaceID = spaceDoc.id
const spaceData = spaceDoc.data()
console.log(spaceID)
spaces.value.push(spaceData)
})
}
Now, the spaces array has objects containing the data of the documents. But I lose the ability to reference the ID of a document.
Alternatively, I can add the entire document to the array, but then I have to access the properties by always including data() in between, i.e. space.data().name.
I'm certain there's a better way.
You don't need Cloud Functions to add the document ID to the data of that document. If you look at the third code snippet in the documentation on adding a document, you can see how to get the ID before writing the document.
In some cases, it can be useful to create a document reference with an auto-generated ID, then use the reference later. For this use case, you can call doc() [without any arguments]:
const newCityRef = db.collection('cities').doc(); // 👈 Generates a reference, but doesn't write yet
// Later...
const res = await newCityRef.set({
  id: newCityRef.id, // 👈 Writes the document ID
// ...
});
As others have commented, you don't need to store the ID in the document. You can also add it to your data when you read the documents, with:
spaceDocs.docs.forEach((spaceDoc) => {
const spaceID = spaceDoc.id
const spaceData = spaceDoc.data()
console.log(spaceID)
spaces.value.push({ id: spaceID, ...spaceData })
})
With this change, your spaces array contains both the document ID and the data of each document.
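For example, you can then read both directly (name is a hypothetical field on the space documents, used only for illustration):
// Access the ID and document fields without calling data() again
spaces.value.forEach((space) => {
  console.log(space.id, space.name);
});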
A DocumentSnapshot has the id property that you are looking for. For anything stored within the document, you'll need to access the data first with doc.data().field_name.
Working on another problem, I came across the solution. Quite simple, actually:
When declaring the new object, you can spread ...doc.data() to add all the properties of the DocumentData to the newly created object, after initialising it with an id. This works for me anyway. Case closed.
onSnapshot(profilesCollectionRef, (querySnapshot) => {
const profilesHolder = [];
querySnapshot.forEach((doc) => {
const profile = {
id: doc.id,
...doc.data(),
}
profilesHolder.push(profile);
console.log(profile);
});
profiles.value = profilesHolder;
});
onSnapshot(doc(db, "profiles", userId), (doc) => {
const newProfile = {
id: doc.id,
...doc.data(),
}
myProfile.value = newProfile as Profile;
});
I need to perform a query to get the oldest document in a subcollection.
I want to perform this query with as few reads as possible.
DB description:
Based on Firebase.
A collection of devices. Each device holds a collection of call-backs. For a specific device I need to fetch the oldest call-back (each call-back has a timestamp).
I think I know how to perform this query using the device's unique ID, but I want to do it by filtering on some field of the device; this field is also unique.
I was able to do it by querying the device with all of its call-backs, but that would charge me for more reads than actually needed.
Query that works using ID:
admin
.firestore()
.collection("devices/{device_id}/callbacks")
.{order_by_timestamp}
.limit(1)
.get()
.then((data) => {
let callbacks = [];
data.forEach((doc) => {
callbacks.push(doc.data());
});
return res.json(callbacks);
})
.catch((err) => console.error(err));
If that field in the devices collection is unique, then you can fetch the ID of that device first and then proceed with your existing logic, as shown below:
async function getOldestCallback(thatFieldValue) {
const device = await admin.firestore().collection("devices").where("thatField", "==", thatFieldValue).get()
if (device.empty) return false;
const deviceId = device.docs[0].id;
// existing function
}
This should incur 2 reads (1 for the device document and 1 for the oldest callback, if it exists).
Additionally, since you are limiting the number of docs returned to 1, you can use docs[0] instead of a forEach loop:
const callbacks = [ data.docs[0].data() ]
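Putting both pieces together, a minimal sketch (assuming the subcollection is named callbacks, the unique device field is thatField, and each call-back stores its creation time in a timestamp field, as implied by the question):
async function getOldestCallback(thatFieldValue) {
  // 1 read: locate the device by its unique field
  const devices = await admin
    .firestore()
    .collection("devices")
    .where("thatField", "==", thatFieldValue)
    .limit(1)
    .get();
  if (devices.empty) return null;

  // 1 read: oldest call-back for that device
  const callbacks = await devices.docs[0].ref
    .collection("callbacks")
    .orderBy("timestamp", "asc")
    .limit(1)
    .get();
  return callbacks.empty ? null : callbacks.docs[0].data();
}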
I have 2 models, Driver and User. Both of them rate each other. While creating the API, how can I check whether a certain userId exists in the db, and if it exists, add a new object like this to my driver's ratings array?
{userId:xyz/*Already Checked in the Db that it exists*/,rating:4}
It sounds like you need to use two requests.
const user = await Users.findOne({_id: userId});
if (user){
const driver = await Drivers.findOne({_id: driverId});
driver.ratings.push({userId, rating: 4})
await driver.save();
}
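If you prefer, the push can also be done as a single atomic update after the existence check; a sketch assuming the same Mongoose models and a ratings array field on Driver:
// Cheaper existence check plus an atomic $push, instead of load-modify-save
const userExists = await Users.exists({ _id: userId });
if (userExists) {
  await Drivers.updateOne(
    { _id: driverId },
    { $push: { ratings: { userId, rating: 4 } } }
  );
}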
I am fetching the id column value from the database for a particular email. In this case I am passing an email and want to get the primary key, i.e. id. This operation is successful: I get an object that contains the right and expected result. However, I am not able to access that object.
I am receiving an object like this:
[ UserInfo { id: 21 } ]
And I am not able to access the id part of it.
I am using Node.js, Postgres for the database, and the TypeORM library to connect to it.
const id = await userRepo.find({
select:["id"],
where: {
email:email
}
});
console.log(id)
This prints the above object.
The id I am getting is right, but I am not able to retrieve the id part of the object. I tried various ways, e.g.
id['UserInfo'].id, id.UserInfo.
Please help me with accessing the object I am receiving.
Typeorm's .find() returns an array of objects matching your filters; in your case, all entries whose email field matches the email you specified.
Because the result is an array, you can access it this way:
const records = await userRepo.find({
select: ['id'],
where: {
email,
},
})
console.log(records[0].id)
You could also use the .findOne() method, which returns a single element and might be a better solution in your case :)
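For example, a sketch of the findOne variant (same repository and fields as above):
const record = await userRepo.findOne({
  select: ['id'],
  where: { email },
});
console.log(record?.id); // undefined when no user matches that email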
When you put a field in the select part, select:["id"], you only retrieve that part of the record.
It is as if your query were: select id from userRepo where email = email
To retrieve all the columns, simply omit the select option:
const user = await userRepo.find({
  where: {
    email: email
  }
});
I am running an iOS app where I display a list of users that are currently online.
I have an API endpoint that returns 10 (or N) users randomly, so that you can keep scrolling and always see new users. Therefore I want to make sure I don't return a user that I have already returned before.
I cannot use a cursor or normal pagination, as the users have to be returned randomly.
I tried 2 things, but I am sure there is a better way:
At first, I sent the IDs of the users that had already been seen in the parameters of the request. But if the user keeps scrolling and has gone through 200 profiles, then the list gets long and it doesn't look clean.
Then, in the database, I tried adding a field "online_profiles_already_sent" to each user, where I would store an array of the IDs that were already sent to that user (I am using MongoDB).
I can't figure out how to do it in a better/cleaner way
EDIT:
I found a way to do it with MySQL, using RAND(seed), but I can't figure out if there is a way to do the same thing with Mongo:
PHP MySQL pagination with random ordering
Thank you :)
I think the only way to guarantee that users see unique users every time is to store the list of users that have already been seen. Even in the RAND example that you linked to, there is a possibility of intersection with a previous user list, because RAND won't necessarily exclude previously returned users.
Random Sampling
If you do want to go with random sampling, consider Random record from MongoDB, which suggests using an Aggregation and the $sample operator. The implementation would look something like this:
const {
MongoClient
} = require("mongodb");
const
DB_NAME = "weather",
COLLECTION_NAME = "readings",
MONGO_DOMAIN = "localhost",
MONGO_PORT = "32768",
MONGO_URL = `mongodb://${MONGO_DOMAIN}:${MONGO_PORT}`;
(async function () {
const client = await MongoClient.connect(MONGO_URL),
db = await client.db(DB_NAME),
collection = await db.collection(COLLECTION_NAME);
const randomDocs = await collection
.aggregate([{
$sample: {
size: 5
}
}])
.map(doc => {
return {
id: doc._id,
temperature: doc.main.temp
}
})
.toArray(); // resolve the cursor to an array, as in the second example below
randomDocs.forEach(doc => console.log(`ID: ${doc.id} | Temperature: ${doc.temperature}`));
client.close();
}());
Cache of Previous Users
If you go with maintaining a list of previously viewed users, you could write an implementation using the $nin filter and store the _id of previously viewed users.
Here is an example using a weather database that I have returning entries 5 at a time until all have been printed:
const {
MongoClient
} = require("mongodb");
const
DB_NAME = "weather",
COLLECTION_NAME = "readings",
MONGO_DOMAIN = "localhost",
MONGO_PORT = "32768",
MONGO_URL = `mongodb://${MONGO_DOMAIN}:${MONGO_PORT}`;
(async function () {
const client = await MongoClient.connect(MONGO_URL),
db = await client.db(DB_NAME),
collection = await db.collection(COLLECTION_NAME);
let previousEntries = [], // Track ids of things we have seen
empty = false;
while (!empty) {
const findFilter = {};
if (previousEntries.length) {
findFilter._id = {
$nin: previousEntries
}
}
// Get items 5 at a time
const docs = await collection
.find(findFilter, {
limit: 5,
projection: {
main: 1
}
})
.map(doc => {
return {
id: doc._id,
temperature: doc.main.temp
}
})
.toArray();
// Keep track of already seen items
previousEntries = previousEntries.concat(docs.map(doc => doc.id));
// Are we still getting items?
console.log(docs.length);
empty = !docs.length;
// Print out the docs
docs.forEach(doc => console.log(`ID: ${doc.id} | Temperature: ${doc.temperature}`));
}
client.close();
}());
I have encountered the same issue and can suggest an alternate solution.
TL;DR: Grab all Object IDs in the collection on first landing, randomize them using NodeJS, and use them later on.
Disadvantage: slow first landing if you have millions of records
Advantage: subsequent executions are probably quicker than with the other solution
Let's get to the detailed explanation :)
To explain better, I will make the following assumptions
Assumptions:
Assume the programming language used is NodeJS
The solution works for other programming languages as well
Assume you have 4 total objects in your collection
Assume the pagination limit is 2
Steps:
On first execution:
Grab all Object IDs
Note: I have considered performance; this execution takes a split second for a 10,000-document collection. If you are solving a million-record issue, then maybe use some form of partition logic first / use the other solution listed
db.getCollection('my_collection').find({}, {_id:1}).map(function(item){ return item._id; });
OR
db.getCollection('my_collection').find({}, {_id:1}).map(function(item){ return item._id.valueOf(); });
Result:
ObjectId("FirstObjectID"),
ObjectId("SecondObjectID"),
ObjectId("ThirdObjectID"),
ObjectId("ForthObjectID"),
Randomize the retrieved array using NodeJS (a minimal shuffle sketch follows the result below)
Result:
ObjectId("ThirdObjectID"),
ObjectId("SecondObjectID"),
ObjectId("ForthObjectID"),
ObjectId("FirstObjectID"),
Store this randomized array:
If this is a server-side script that randomizes pagination for each user, consider storing it in a Cookie / Session
I suggest a Cookie (with an expiry tied to browser close) for scaling purposes
On each retrieval:
Retrieve the stored array
Grab the pagination items (e.g. the first 2 items)
Find the objects for those items using find with $in
db.getCollection('my_collection')
.find({"_id" : {"$in" : [ObjectId("ThirdObjectID"), ObjectId("SecondObjectID")]}});
Using NodeJS, sort the retrieved objects back into the order of the pagination items, since $in does not preserve order (see the sketch below)
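A sketch of that re-sorting step (assuming fetchedDocs is the array returned by the $in query above and randomizedIds comes from the shuffle sketch earlier):
// Re-order the fetched documents to match the randomized page order
const pageIds = randomizedIds.slice(0, 2); // e.g. first 2 items for page 1
const docsById = new Map(fetchedDocs.map(doc => [String(doc._id), doc]));
const orderedDocs = pageIds.map(id => docsById.get(String(id)));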
There you go! A randomized MongoDB query for pagination :)