I have an onUpdate Firestore trigger function that does multiple things:
functions.firestore.document('document').onUpdate((change, context) => {
  const updatedObject = change.after.data()
  if (updatedObject.first) {
    doFirst()
  }
  if (updatedObject.second) {
    doSecond()
  }
})
I am thinking of splitting this trigger into 2 smaller triggers to keep my functions more concise.
functions.firestore.document('document').onUpdate((change, context) => {
  const updatedObject = change.after.data()
  if (!updatedObject.first) {
    return
  }
  doFirst()
})

functions.firestore.document('document').onUpdate((change, context) => {
  const updatedObject = change.after.data()
  if (!updatedObject.second) {
    return
  }
  doSecond()
})
The Firestore pricing docs mention the following:
When you listen to the results of a query, you are charged for a read each time a document in the result set is added or updated. You are also charged for a read when a document is removed from the result set because the document has changed. (In contrast, when a document is deleted, you are not charged for a read.)
Would this increase the number of reads from 1 to 2?
The docs do not clearly state the behavior when there are multiple functions listening to the same event.
A more general question I have is: would increasing the number of functions listening to the same event increase the number of reads, and hence increase my bill?
Is there a best practice for this issue?
firebaser here
The document data passed to Cloud Functions as part of the trigger (so change.before and change.after) comes out of the existing flow, and is not a charged read. Only additional reads that you perform inside your Cloud Functions code would be charged.
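For example, here is a minimal sketch of the distinction (db is assumed to be an initialized admin.firestore() instance, and the 'other' collection and relatedId field are hypothetical, for illustration only):

exports.onDocUpdate = functions.firestore
  .document('document')
  .onUpdate((change, context) => {
    // change.after arrived with the trigger event: not a charged read
    const updatedObject = change.after.data();
    // this explicit get() is an additional read and IS charged
    // ('other' and relatedId are hypothetical names)
    return db.collection('other').doc(updatedObject.relatedId).get()
      .then((snap) => {
        // ...
        return null;
      });
  });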
Related
I'm trying to get the number of documents in a collection before and after a document has been added, using Cloud Functions. The code I wrote in Node.js is this:
exports.onShowcaseCreated = functions.firestore
  .document("Show/{document}")
  .onCreate((snapshot, context) => {
    const showcaseDict = snapshot.data();
    const uid = showcaseDict.uid;
    return db.collection("Showcases").where("uid", "==", uid).get()
      .then((showsnap) => {
        const numberOfShowcaseBefore = showsnap.size.before;
        const numberOfShowcaseAfter = showsnap.size.after;
        console.log(numberOfShowcaseBefore, numberOfShowcaseAfter);
        if (numberOfShowcaseBefore == 0 && numberOfShowcaseAfter == 1) {
          return db.collection("Users").doc(uid).update({
            timestamp: admin.firestore.Timestamp.now(),
          });
        }
      });
  });
But the console output is "undefined undefined". It seems this is not the right approach for getting the number of documents before and after a document has been added.
The before and after properties are only defined on the Change object that is passed to onUpdate and onWrite triggers, as defined here. The argument passed to your onCreate trigger is a DocumentSnapshot, which does not have them.
Reading data from the database gives you a QuerySnapshot object as defined here. As you can see, the size property on that QuerySnapshot is just a number and does not have before or after properties.
So there's no way to determine the size before the event triggered with your approach. Any query you run in the code runs after the event was triggered, so it will give you the size at that moment.
To implement this use-case I'd recommend storing a count of the number of relevant documents in the database itself, and then triggering a Cloud Function when that document changes. Inside the Cloud Function code you can then read the previous and new value of the size from the change document that is passed in.
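A minimal sketch of that recommendation, assuming a hypothetical counters/showcases document whose count field your app (or another function) increments on every create:

exports.onShowcaseCountChange = functions.firestore
  .document("counters/showcases")
  .onUpdate((change, context) => {
    // the previous and new values come straight from the Change object, no query needed
    const numberOfShowcaseBefore = change.before.data().count;
    const numberOfShowcaseAfter = change.after.data().count;
    console.log(numberOfShowcaseBefore, numberOfShowcaseAfter);
    if (numberOfShowcaseBefore === 0 && numberOfShowcaseAfter === 1) {
      // e.g. update the user's timestamp, as in the original code
    }
    return null;
  });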
Sometimes we use Firebase functions triggered by the Realtime Database (onCreate/onDelete/onUpdate, ...) to do some logic (like counting, etc).
My question: would it be possible to avoid this trigger in some cases, mainly when I would like to allow a user to import a huge JSON into Firebase?
Example:
A function E is triggered on the creation of a new child in /examples. Normally, users add examples one by one to /examples, and function E runs to do some logic. However, I would like to allow a user (from the front-end) to import 2000 children into /examples, and the logic that function E performs can be done at import time, without the need for E. So I do not need E to be triggered in such a case, where a high number of function executions would occur. (Note: I am aware of the 1000 limit.)
Update:
Based on the accepted answer, I submitted my own answer below.
As far as I know, there is no way to disable a Cloud Function programmatically without just deleting it. However, that introduces an edge case where data is added to the database while the import is taking place.
A compromise would be to signal that the data you are uploading should be post-processed. Let's say you were uploading to /examples/{pushId}: instead of attaching the database trigger to /examples/{pushId}, attach it to /examples/{pushId}/needsProcessing (or something similar). Unfortunately, this has the trade-off of not being able to make use of Change objects for onUpdate() and onWrite().
const result = await firebase.database().ref('/examples').push({
  title: "Example 1A",
  desc: "This is an example",
  attachments: { /* ... */ },
  class: "-MTjzAKMcJzhhtxwUbFw",
  author: "johndoe1970",
  needsProcessing: true
});
async function handleExampleProcessing(snapshot, context) {
  // only post-process if needsProcessing is truthy
  if (!snapshot.exists() || !snapshot.val()) {
    console.log('No processing needed, exiting.');
    return;
  }
  const exampleRef = snapshot.ref.parent; // /examples/{pushId}, with admin privileges
  const data = (await exampleRef.once('value')).val();
  // do something with data, like mutate it
  // commit changes
  return exampleRef.update({
    ...data,
    needsProcessing: null /* delete the needsProcessing value */
  });
}
const functionsExampleProcessingRef = functions.database.ref("examples/{pushId}/needsProcessing");
export const handleExampleNeedingProcessingOnCreate = functionsExampleProcessingRef.onCreate(handleExampleProcessing);
// this is only needed if you ever intend on writing `needsProcessing = /* some falsy value */`, I recommend just creating and deleting it, then you can use just the above trigger.
export const handleExampleNeedingProcessingOnUpdate = functionsExampleProcessingRef.onUpdate((change, context) => handleExampleProcessing(change.after, context));
An alternative to Sam's approach is to use feature flags to determine if a Cloud Function performs its main function. I often have this in my code:
exports.onUpload = functions.database
  .ref("/uploads/{uploadId}")
  .onWrite((change, context) => {
    return ifEnabled("transcribe").then(() => {
      console.log("transcription is enabled: calling Cloud Speech");
      ...
    });
  });
ifEnabled is a simple helper function that checks (also in the Realtime Database) if the feature is enabled:
function ifEnabled(feature) {
  console.log("Checking if feature '" + feature + "' is enabled");
  return admin.database().ref("/config/features")
    .child(feature)
    .once('value')
    .then((snapshot) => {
      if (snapshot.val()) {
        return snapshot.val();
      }
      return Promise.reject("No value or 'falsy' value found");
    });
}
Most of my usage of this is during talks at conferences, to enable the Cloud Functions at the right time (as a deploy takes a bit longer than we'd like for a demo). But the same approach should work to temporarily disable features during, for example, a data import.
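For example, a minimal sketch of toggling the flag around a bulk import, run from an admin script inside an async context (the path matches the ifEnabled helper above):

// turn the feature off before the import
await admin.database().ref("/config/features/transcribe").set(false);
// ... run the bulk import here ...
// then turn it back on afterwards
await admin.database().ref("/config/features/transcribe").set(true);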
Okay, another solution would be:
A: Add a new table in Firebase, like /triggers-queue, where all CRUD operations that should fire a background function are recorded. In this table, we add a key for each table that should have triggers (in our example, the /examples table). Any key that represents a table should also have /created, /updated, and /deleted keys, as follows:
/examples
    /example-id-1
/triggers-queue
    /examples
        /created
            /example-id
        /updated
            /example-id
                old-value
        /deleted
            /example-id
                old-value
Note that the old-value should be written by the app (front-end, etc.).
We always set onCreate triggers on:
/triggers-queue/examples/created/{exampleID} (simulates onCreate)
/triggers-queue/examples/updated/{exampleID} (simulates onUpdate)
/triggers-queue/examples/deleted/{exampleID} (simulates onDelete)
The fired function can derive all the necessary info to handle the logic as follows (see the sketch after this list):
Operation type: from the path (either created, updated, or deleted)
Key of the object: from the path
Current data: by reading the corresponding table (i.e., /examples/id)
Old data: from the triggers table
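Here is a hedged sketch of the simulated onUpdate trigger; the paths come from the structure above, while the handler body and the queue cleanup are assumptions:

exports.onExampleUpdated = functions.database
  .ref('/triggers-queue/examples/updated/{exampleID}')
  .onCreate(async (snapshot, context) => {
    const exampleID = context.params.exampleID;         // key of the object, from the path
    const oldValue = snapshot.child('old-value').val(); // old data, from the triggers table
    const currentSnap = await admin.database()
      .ref(`/examples/${exampleID}`).once('value');     // current data, from the main table
    // ... run the same logic the direct onUpdate trigger would have run ...
    return snapshot.ref.remove();                       // assumption: remove the queue entry when done
  });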
Good points:
You can import huge data into the /examples table without firing any function, since we do not add anything to /triggers-queue.
You can fan out functions to get past the 1000/sec limit, by setting triggers on (as an example, to fan out on-create):
/triggers-queue/examples/created0/{exampleID} and
/triggers-queue/examples/created1/{exampleID}
Bad points:
More difficult to implement.
Need to write more data to Firebase (like old-data) from the app.
B: Another way (although not an answer to this question) is to move the logic in the background function to an HTTP function and call it on every CRUD operation.
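A minimal sketch of option B, where doExampleLogic is a hypothetical helper holding the shared logic:

exports.processExample = functions.https.onRequest(async (req, res) => {
  // the client calls this endpoint on every CRUD operation instead of relying on a trigger
  await doExampleLogic(req.body); // hypothetical helper containing the shared logic
  res.status(200).send('ok');
});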
I have a master collection in Firestore with a couple of hundred documents (which will grow to a few thousand in a couple of months).
I have a use case, where every time a new user document is created in /users/ collection, I want all the documents from the master to be copied over to /users/{userId}/.
To achieve this, I have created a firebase cloud function as below:
// setup for new user
exports.setupCollectionForUser = functions.firestore
  .document('users/{userId}')
  .onCreate((snap, context) => {
    const userId = context.params.userId;
    db.collection('master').get().then(snapshot => {
      if (snapshot.empty) {
        console.log('no docs found');
        return;
      }
      snapshot.forEach(function(doc) {
        return db.collection('users').doc(userId).collection('slave').doc(doc.get('uid')).set(doc.data());
      });
    });
  });
This works; the only problem is that it takes forever (~3-5 minutes) for only about 200 documents. This has been such a bummer, because a lot depends on how fast these documents get copied over. I was hoping it would take no more than a few seconds at most. Also, the documents show up all at once and not as they are written, or at least they seem to.
Am I doing anything wrong? Why should it take so long?
Is there a way I can break this operation into multiple reads and writes, so that I can guarantee a minimum number of documents within a few seconds rather than waiting until all of them are copied over?
Please advise.
If I am not mistaken, correctly managing the parallel writes with Promise.all() and returning the Promise chain should normally improve the speed.
Try to adapt your code as follows:
exports.setupCollectionForUser = functions.firestore
  .document('users/{userId}')
  .onCreate((snap, context) => {
    const userId = context.params.userId;
    return db.collection('master').get().then(snapshot => {
      if (snapshot.empty) {
        console.log('no docs found');
        return null;
      } else {
        const promises = [];
        const slaveRef = db.collection('users').doc(userId).collection('slave');
        snapshot.forEach(doc => {
          promises.push(slaveRef.doc(doc.get('uid')).set(doc.data()));
        });
        return Promise.all(promises);
      }
    });
  });
I would suggest you watch the three videos about "JavaScript Promises" from the Firebase video series, which explain why it is key to return a Promise or a value in a background-triggered Cloud Function.
Note also that if you are sure you have fewer than 500 documents to save in the slave collection, you could use a batched write. (You could use it for more than 500 docs too, but then you would have to manage several batched writes...)
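Here is a sketch of that batched-write variant, under the assumption that the master collection stays below the 500-writes-per-batch limit:

exports.setupCollectionForUser = functions.firestore
  .document('users/{userId}')
  .onCreate((snap, context) => {
    const userId = context.params.userId;
    return db.collection('master').get().then(snapshot => {
      const batch = db.batch();
      snapshot.forEach(doc => {
        const slaveDocRef = db.collection('users').doc(userId)
          .collection('slave').doc(doc.get('uid'));
        batch.set(slaveDocRef, doc.data());
      });
      // the batch commits atomically, so all copies become visible at once
      return batch.commit();
    });
  });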
I am trying to display to a user those of his phone contacts that are also users of my app.
I am storing the user's phone contacts in the Firestore "contacts" collection.
In each document I create on Android, I leave the "fuid" (friend UID) field as null. Using Cloud Functions, each time a new document is created, I check if its "paPho" (parsed phone number, e.g. +972123455) matches an existing user's phone number. If yes, I place that user's uid in the "fuid" field of the matching document.
On Android, I will display every user contact whose fuid is not null and matches a uid.
Since each user might have more than 500 contacts (all added in a very short time), I am using the Blaze plan.
It is working quite nicely, but although no error appears in the log, it seems the onCreate trigger sometimes misses documents.
The reason I think it is missing documents is that if I re-run the Cloud Function on the same contact list a couple of times, the missed document sometimes appears.
It might be relevant that these sometimes-missing contacts have similar names and the same phone number.
const functions = require('firebase-functions');
const admin = require('firebase-admin'); // also required, since admin is used below
admin.initializeApp();

exports.attachUserToNewContact = functions.firestore
  .document('contacts/{contactId}').onCreate((snap, context) => {
    admin.auth().getUserByPhoneNumber(snap.data().paPho)
      .then(userRecord => {
        if (userRecord.uid) {
          console.log(`userRecord phone ${snap.data().paPho} matching contact ${userRecord.uid}`);
          admin.firestore().collection('contacts')
            .doc(snap.data().id).update({fuid: userRecord.uid});
        }
        return 0;
      })
      .catch(error => {
        // There is no user record corresponding to the provided identifier
      });
    return 0;
  });
You are not returning the promises returned by the asynchronous methods (getUserByPhoneNumber() and update()), potentially generating some "erratic" behavior of the Cloud Function.
As you will see in the three videos about "JavaScript Promises" from the official Firebase video series (https://firebase.google.com/docs/functions/video-series/), you MUST return a Promise or a value in a background-triggered Cloud Function, to indicate to the platform that it has completed and to avoid having it terminated before the asynchronous operations are done.
Concretely, it sometimes happens that your Cloud Function is terminated before the asynchronous operations are completed, because the return 0; at the end indicates to the Cloud Functions platform that it can terminate the function. At other times, the platform does not terminate the function immediately, and the asynchronous operations are able to complete.
By modifying your code as follows, you will avoid this "erratic" behavior:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.attachUserToNewContact = functions.firestore
  .document('contacts/{contactId}').onCreate((snap, context) => {
    return admin.auth().getUserByPhoneNumber(snap.data().paPho)
      .then(userRecord => {
        if (userRecord.uid) {
          console.log(`userRecord phone ${snap.data().paPho} matching contact ${userRecord.uid}`);
          return admin.firestore().collection('contacts')
            .doc(snap.data().id).update({fuid: userRecord.uid});
        } else {
          return null;
        }
      })
      .catch(error => {
        // There is no user record corresponding to the provided identifier
        return null;
      });
  });
BTW, if you don't do anything other than return null in the catch block, you can remove it entirely.
Firestore Cloud Functions priority
I currently deploy two Cloud Functions for the Firestore database.
They are triggered by the same document changes.
Is it possible to specify the execution order of the functions, or the trigger sequence? For example, I want the updateCommentNum function to trigger first, then the writeUserLog function. How could I achieve this goal?
exports.updateCommentNum = functions.firestore
  .document('post/{postId}/comments/{commentsID}')
  .onWrite((change, context) => {
    // update the comment numbers in the post/{postId}/
  });

exports.writeUserLog = functions.firestore
  .document('post/{postId}/comments/{commentsID}')
  .onWrite((change, context) => {
    // write the comment name, text, ID, timestamp etc. in the collection "commentlog"
  });
There is no way to indicate relative priority between functions.
If you have a defined order you want them invoked in, use a single Cloud Function and just call two regular functions from there:
exports.onCommentWritten = functions.firestore
  .document('post/{postId}/comments/{commentsID}')
  .onWrite((change, context) => {
    // chain the calls so writeUserLog only runs after updateCommentNum is done
    return updateCommentNum(change, context)
      .then(() => writeUserLog(change, context));
  });

function updateCommentNum(change, context) {
  // update the comment numbers in the post/{postId}/ (must return a Promise)
}

function writeUserLog(change, context) {
  // write the comment name, text, ID, timestamp etc. in the collection "commentlog" (must return a Promise)
}
That will also reduce the number of invocations, and thus reduce the cost of operating them.