DataSnapshot.ref in Functions Emulators only points to default database - node.js

Let's say I have a node in a secondary realtime database called "test" with a value of "foobar".
I want to set up a function that prevents it from being deleted. More realistically, this node would have several child nodes, and the function would first check whether it can be deleted or not. Here, however, we never allow it to be deleted, to keep the code as short as possible.
So I add a function that triggers onDelete and just rewrites the value.
In short:
Secondary database has: {"test":"foobar"}
onDelete function:
exports.testDelete = functions.database
  .instance("secondary")
  .ref("test")
  .onDelete(async (snap, context) => {
    await snap.ref.set(snap.val());
  });
When running this with the emulators, I would expect that when I delete the node, it would just reappear in the secondary database, which is what happens when deployed to production. In the emulators, the node does reappear, but in the default database instead of the secondary database. The only fix I can see is to replace snap.ref.set(snap.val()) with admin.app().database("https://{secondarydatabasedomain}.firebasedatabase.app").ref().child("test").set(snap.val()), which looks a little cumbersome just to get the emulators to work.
Am I doing something wrong here?
I am using Node 14 and Firebase CLI version 9.23.0.

To specify the instance and path, you have followed the correct syntax:
For an instance named "my-app-db-2": functions.database.instance('my-app-db-2').ref('/foo/bar')
You have specified the instance name; otherwise the trigger would point at the default database. So the syntax seems correct.
To handle the event data, the handler follows this signature:
onDelete(handler: (snapshot: DataSnapshot, context: EventContext) => any): CloudFunction
For example, you can refer to the documentation:
// Listens for new messages added to /messages/:pushId/original and creates an
// uppercase version of the message at /messages/:pushId/uppercase
exports.makeUppercase = functions.database.ref('/messages/{pushId}/original')
  .onCreate((snapshot, context) => {
    // Grab the current value of what was written to the Realtime Database.
    const original = snapshot.val();
    functions.logger.log('Uppercasing', context.params.pushId, original);
    const uppercase = original.toUpperCase();
    // You must return a Promise when performing asynchronous tasks inside a
    // function, such as writing to the Firebase Realtime Database.
    // Setting an "uppercase" sibling in the Realtime Database returns a Promise.
    return snapshot.ref.parent.child('uppercase').set(uppercase);
  });
If all of the above syntax has been followed correctly, then I recommend reporting a bug with a minimal repro on the GitHub repo, including the entire Cloud Function, as Frank suggested in a similar scenario.

Related

How to stop firebase functions from propagating triggers

I have a Firebase function to decrease the commentCount when a comment is deleted, like this
export const onArticleCommentDeleted = functions.firestore
  .document('articles/{articleId}/comments/{uid}')
  .onDelete((snapshot, context) => {
    return db.collection('articles').doc(context.params.articleId).update({
      commentCount: admin.firestore.FieldValue.increment(-1)
    });
  });
I also have a Firebase function to recursively delete the comments of an article when it is deleted:
export const onArticleDeleted = functions.firestore
  .document('articles/{id}')
  .onDelete((snapshot, context) => {
    const commentsRef = db.collection('articles').doc(snapshot.id).collection('comments');
    return db.recursiveDelete(commentsRef); // this triggers onArticleCommentDeleted multiple times
  });
When I delete an article, onArticleCommentDeleted is triggered and tries to update the article that has already been deleted. Of course I can check whether the article exists before updating it, but that's really cumbersome and a waste of resources.
Are there any ways to avoid from propagating further triggers?
I think the problem arises from the way I make use of the trigger. In general, it's not a good idea for an onDelete trigger on a child document to update its own parent; this will surely cause conflicts.
Instead, at the client side, I use a transaction:
...
runTransaction(async (trans) => {
  trans.delete(commentRef);
  trans.update(articleRef, {
    commentCount: admin.firestore.FieldValue.increment(-1)
  });
});
This ensures that if one of the operations fails, they both fail, and it eliminates the triggers. Relying on the client side is not the best idea, but I think the trade-off is worth considering.
There is no way to prevent the Cloud Function on the comments from triggering when you delete an article's comments. You will have to check for that condition in the function code itself, as you already said.
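The existence check the answer describes can be sketched with a plain-JavaScript stand-in (no Firestore involved; the in-memory `articles` map and the `decrementCommentCount` helper are hypothetical names for illustration):

```javascript
// In-memory stand-in for the articles collection.
const articles = new Map([['a1', { commentCount: 3 }]]);

// Guard: only decrement if the parent article still exists,
// mirroring the "check before update" approach from the answer.
function decrementCommentCount(articleId) {
  const article = articles.get(articleId);
  if (!article) return false; // parent already deleted: skip the update
  article.commentCount -= 1;
  return true;
}

console.log(decrementCommentCount('a1')); // true: count is now 2
console.log(decrementCommentCount('gone')); // false: nothing to update
```

In a real onDelete trigger the same shape applies: fetch the parent document, bail out if it no longer exists, and only then issue the update.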

How to handle Firebase Cloud Functions infinite loops?

I have a Firebase Cloud functions which is triggered by an update to some data in a Firebase Realtime Database. When the data is updated, I want to read the data, perform some calculations on that data, and then save the results of the calculations back to the Realtime Database. It looks like this:
exports.onUpdate = functions.database.ref("/some/path").onUpdate((change) => {
  const values = change.after.val();
  const newValues = performCalculations(values);
  return change.after.ref.update(newValues);
});
My concern is that this may create an indefinite loop of updates. I saw a note on the Cloud Firestore Triggers that says:
"Any time you write to the same document that triggered a function,
you are at risk of creating an infinite loop. Use caution and ensure
that you safely exit the function when no change is needed."
So my first question is: Does this same problem apply to the Firebase Realtime Database?
If it does, what is the best way to prevent the infinite looping?
Should I be comparing before/after snapshots, the key/value pairs, etc.?
My idea so far:
exports.onUpdate = functions.database.ref("/some/path").onUpdate((change) => {
  // Get old values
  const beforeValues = change.before.val();
  // Get current values
  const afterValues = change.after.val();
  // Something like this???
  if (beforeValues === afterValues) return null;
  const newValues = performCalculations(afterValues);
  return change.after.ref.update(newValues);
});
Thanks
Does this same problem apply to the Firebase Realtime Database?
Yes, the risk of an infinite loop arises whenever you write back to the same location that triggered your Cloud Function to run, no matter what trigger type was used.
To prevent an infinite loop, you have to detect its condition in the code. You can:
either flag the node/document after processing it by writing a value into it, and check for that flag at the start of the Cloud Function.
or you can detect whether the Cloud Function code made any effective change/improvement to the data, and not write it back to the database when there was no change/improvement.
Either of these can work, and which one to use depends on your use case. Your if (beforeValues === afterValues) return null is a form of the second approach, and can indeed work. Note, however, that === only behaves as expected when the value is a primitive (string, number, boolean); when the snapshot value is an object, === compares references and will always be false, so you need a deep comparison instead. Whether that applies depends on details about the data that you haven't shared.
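As a minimal sketch of the second approach in plain Node.js (no Firebase SDK required), a deep comparison via JSON serialization can serve as the guard, assuming the values are JSON-serializable and come from the same write path so key order is stable:

```javascript
// Deep-compare two snapshot values via JSON serialization.
// Caveat: this assumes both values serialize with the same key order,
// which generally holds when both come from the same database location.
function valuesUnchanged(before, after) {
  return JSON.stringify(before) === JSON.stringify(after);
}

// Hypothetical usage inside an onUpdate handler:
// if (valuesUnchanged(change.before.val(), change.after.val())) return null;

// Standalone demonstration with plain objects:
const before = { score: 10, tags: ['a', 'b'] };
const after = { score: 10, tags: ['a', 'b'] };
console.log(valuesUnchanged(before, after)); // true: skip the write, breaking the loop
console.log(valuesUnchanged(before, { score: 11, tags: ['a', 'b'] })); // false: proceed
```

For large or irregularly-shaped data, a dedicated deep-equality helper (e.g. lodash's isEqual) is more robust than string comparison, but the guard's role is the same: exit early when the function's own write would change nothing.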

Update field when document is added on Cloud Firestore [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 2 years ago.
I have a question regarding the use of Cloud Functions with Cloud Firestore in my Android app (I'm writing in Kotlin using Android Studio).
Having read some documentation, I think it's possible to run a custom method via Cloud Functions when a new document is created in my database. All I need to do is update a field.
The thing is that these functions for Cloud Firestore need to be written in JavaScript running on Node.js, and I have zero knowledge of this.
To any developer with Cloud Firestore knowledge, any guides or hints on this issue?
The thing is that these functions on Cloud Firestore need to be written in JavaScript and I need to use Node.js, and I have 0 knowledge on this.
Don't know if it helps, but the Cloud Functions can also be written in Python or Go. You can check out more complete information about the current runtimes and languages here.
But let's try to answer your question, shall we? I'll use the Node.js 8 Runtime in the examples below.
Introduction
Google Cloud Functions currently support 2 types of functions:
HTTP Functions, which are triggered by simple HTTP requests, and
Background Functions, which are triggered by events from Google Cloud's infrastructure, such as Cloud Firestore events. This is what you need, so let's focus on that.
Setup
Since you're using Cloud Firestore, I assume you already have a Firebase project set up. So the first step, if you haven't done it yet, is to install the Firebase CLI and follow its instructions to set up your project locally. When asked, select the "Functions: Configure and deploy Cloud Functions" option to enable it, and also "Use an existing project" to select your project.
$ firebase login
$ firebase init
Once you finish the setup, you'll essentially have the following structure in your directory:
firebase.json
.firebaserc
functions/
    index.js
    package.json
Now before you start coding, there's something you should know about Cloud Firestore events. There are 4 of them (the list is here):
onCreate: Triggered when a document is written to for the first time.
onUpdate: Triggered when a document already exists and has any value changed.
onDelete: Triggered when a document with data is deleted.
onWrite: Triggered when onCreate, onUpdate or onDelete is triggered.
Since you only need to capture a creation event, you'll write an onCreate event.
Coding
To do that, open the functions/index.js file and type the following piece of code:
const functions = require('firebase-functions');

// this function is triggered by "onCreate" Cloud Firestore events
// the "userId" is a wildcard that represents the id of the document created inside the "users" collection
// it will read the "email" field and insert the "lowercaseEmail" field
exports.onCreateUserInsertEmailLowercase = functions.firestore
  .document('users/{userId}')
  .onCreate((snapshot, context) => {
    // "context" has info about the event
    // reference: https://firebase.google.com/docs/reference/functions/cloud_functions_.eventcontext
    const { userId } = context.params;
    // "snapshot" is a representation of the document that was inserted
    // reference: https://googleapis.dev/nodejs/firestore/latest/DocumentSnapshot.html
    const email = snapshot.get('email');
    console.log(`User ${userId} was inserted, with email ${email}`);
    return null;
  });
As you can probably guess, this is a really simple Cloud Function that only logs the document's id and its "email" field. So now we get to the second part of your question: how can we edit this newly created document? There are two options here: (1) update the document you just created, or (2) update another document, so I'll separate this into 2 sections:
(1) Update the document you have just created
The answer lies in the "snapshot" parameter. Although it's just a representation of the document you inserted, it carries inside it a DocumentReference, which is a different type of object with read, write and change-listening capabilities. Let's use its set method to insert the new field, changing our current function to do that:
const functions = require('firebase-functions');

// this function is triggered by "onCreate" Cloud Firestore events
// the "userId" is a wildcard that represents the id of the document created inside the "users" collection
// it will read the "email" field and insert the "lowercaseEmail" field
exports.onCreateUserInsertEmailLowercase = functions.firestore
  .document('users/{userId}')
  .onCreate((snapshot, context) => {
    // "context" has info about the event
    // reference: https://firebase.google.com/docs/reference/functions/cloud_functions_.eventcontext
    const { userId } = context.params;
    // "snapshot" is a representation of the document that was inserted
    // reference: https://googleapis.dev/nodejs/firestore/latest/DocumentSnapshot.html
    const email = snapshot.get('email');
    console.log(`User ${userId} was inserted, with email ${email}`);
    // converts the email to lowercase
    const lowercaseEmail = email.toLowerCase();
    // get the DocumentReference, with write powers
    const documentReference = snapshot.ref;
    // insert the new field
    // the { merge: true } parameter is so that the whole document isn't overwritten
    // that way, only the new field is added without changing its current content
    return documentReference.set({ lowercaseEmail }, { merge: true });
  });
(2) Update a document from another collection
For that, you're gonna need to add the firebase-admin to your project. It has all the admin privileges so you'll be able to write to any Cloud Firestore document inside your project.
Inside the functions directory, run:
$ npm install --save firebase-admin
And since you're already inside Google Cloud's infrastructure, initializing it is as simple as adding the following couple of lines to the index.js file:
const admin = require('firebase-admin');
admin.initializeApp();
Now all you have to do is use the Admin SDK to get a DocumentReference of the document you wish to update, and use it to update one of its fields.
For this example, I'll consider you have a collection called stats which contains a users document with a counter inside it that tracks the number of documents in the users collection:
// this updates the user count whenever a document is created inside the "users" collection
exports.onCreateUpdateUsersCounter = functions.firestore
  .document('users/{userId}')
  .onCreate(async (snapshot, context) => {
    const statsDocumentReference = admin.firestore().doc('stats/users');
    // a DocumentReference "get" returns a Promise containing a DocumentSnapshot
    // that's why I'm using async/await
    const statsDocumentSnapshot = await statsDocumentReference.get();
    const currentCounter = statsDocumentSnapshot.get('counter');
    // increased counter
    const newCounter = currentCounter + 1;
    // update the "counter" field with the increased value
    return statsDocumentReference.update({ counter: newCounter });
  });
And that's it!
Deploying
But now that you've got the coding part, how can you deploy it to make it run in your project, right? Let us use the Firebase CLI once more to deploy the new Cloud Functions.
Inside your project's root directory, run:
$ firebase deploy --only functions:onCreateUserInsertEmailLowercase
$ firebase deploy --only functions:onCreateUpdateUsersCounter
And that's pretty much the basics, but if you'd like, you can check its documentation for more info about deploying Cloud Functions.
Debugging
Ok, right, but how can we know it worked? Go to https://console.firebase.google.com and try it out! Insert a couple of documents and watch the magic happen. And if you need a little debugging, click the "Functions" menu on the left-hand side and you'll be able to access your functions' logs.
That's pretty much it for your use-case scenario, but if you'd like to go deeper into Cloud Functions, I really recommend its documentation. It's pretty complete, concise and organized. I left some links as a reference so you'll know where to look.
Cheers!

Firestore: get document back after adding it / updating it without additional network calls

Is it possible to get document back after adding it / updating it without additional network calls with Firestore, similar to MongoDB?
I find it stupid to first make a call to add / update a document and then make an additional call to get it.
As you have probably seen in the documentation of the Node.js (and JavaScript) SDKs, this is not possible, neither with the methods of a DocumentReference nor with those of a CollectionReference.
More precisely, the set() and update() methods of a DocumentReference both return a Promise containing void, while the CollectionReference's add() method returns a Promise containing a DocumentReference.
Side Note (in line with answer from darrinm below): It is interesting to note that with the Firestore REST API, when you create a document, you get back (i.e. through the API endpoint response) a Document object.
When you add a document to Cloud Firestore, the server can affect the data that is stored. A few ways this may happen:
If your data contains a marker for a server-side timestamp, the server will expand that marker into the actual timestamp.
If your data is not permitted by your server-side security rules, the server will reject the write operation.
Since the server affects the contents of the Document, the client can't simply return the data that it already has as the new document. If you just want to show the data that you sent to the server in your client, you can of course do so by simply reusing the object you passed into setData(...)/addDocument(data: ...).
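The "reuse the object you already have" point can be illustrated with a plain-JavaScript stand-in (not the Firestore API; `FakeStore` and its methods are hypothetical, chosen only to mirror the void-resolving behavior of DocumentReference.set()):

```javascript
// A tiny in-memory stand-in for a document store. set() returns nothing,
// mirroring the Promise<void> that Firestore's DocumentReference.set() resolves to.
class FakeStore {
  constructor() {
    this.docs = new Map();
  }
  async set(id, data) {
    this.docs.set(id, { ...data }); // resolves to undefined, like Firestore's set()
  }
}

async function main() {
  const store = new FakeStore();
  const user = { name: 'Ada', email: 'ada@example.com' };

  const result = await store.set('user-1', user);
  console.log(result); // undefined: nothing useful comes back from the write

  // Instead of a follow-up read, reuse the object we already have locally:
  console.log(`Saved ${user.name}`);
}

main();
```

The design point is the same as in the answer: since only the server knows the final stored contents (expanded timestamps, rules rejections), the SDK can't hand you the "new document" for free, but your local copy is usually good enough for the UI.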
This appears to be an arbitrary limitation of the Firestore JavaScript API. The Firestore REST API returns the updated document on the same call.
https://firebase.google.com/docs/firestore/reference/rest/v1beta1/projects.databases.documents/patch
I did this to get the ID of a new Document created, and then use it in something else.
Future<DocumentReference<Object>> addNewData() async {
  final FirebaseFirestore _firestore = FirebaseFirestore.instance;
  final CollectionReference _userCollection = _firestore.collection('users');
  return await _userCollection
      .add({'data': 'value'})
      .whenComplete(() => {
        // Show good notification
      })
      .catchError((e) {
        // Show bad notification
      });
}
And here I obtain the ID:
await addNewData().then((document) async {
  // Get ID
  print('ID Document Created ${document.id}');
});
I hope it helps.

Azure function run code on startup for Node

I am developing a chatbot using Azure Functions. I want to load some of the conversations for the chatbot from a file, and I am looking for a way to load this conversation data, via some callback, before the function app starts. Is there a way to load the conversation data only once, when the function app starts?
This question is essentially a duplicate of Azure Function run code on startup, but that question was asked for C# and I wanted a way to do the same thing in Node.js.
After like a week of messing around I got a working solution.
First some context:
The question at hand: running custom code at app start for Node.js Azure Functions.
The issue is currently being discussed here and has been open for almost 5 years, and doesn't seem to be going anywhere.
As of now there is an Azure Functions "warmup" trigger feature, found here: AZ Funcs Warm Up Trigger. However, this trigger only runs on scale-out, so the first instance of your app won't run the "warmup" code.
Solution:
I created a start.js file and put the following code in there
const ErrorHandler = require('./Classes/ErrorHandler');
const Validator = require('./Classes/Validator');
const delay = require('delay');
let flag = false;
module.exports = async () =>
{
console.log('Initializing Globals')
global.ErrorHandler = ErrorHandler;
global.Validator = Validator;
//this is just to test if it will work with async funcs
const wait = await delay(5000)
//add additional logic...
//await db.connect(); etc // initialize a db connection
console.log('Done Waiting')
}
To run this code I just have to do
require('../start')();
in any of my functions. Just one function is fine: since all of the function dependencies are loaded when you deploy your code, as long as this line is in one of the functions, start.js will run and initialize all of your global/singleton variables (or do whatever else you want on function start). I made a literal function called "startWarmUp", which is just a timer-triggered function that runs once a day.
My use case is that almost every function relies on ErrorHandler and Validator class. And though generally making something a global variable is bad practice, in this case I didn't see any harm in making these 2 classes global so they're available in all of the functions.
Side Note: when developing locally you will have to include that function in your func start --functions <function requiring start.js> <other funcs> in order to have that startup code actually run.
Additionally, there is a feature request for this functionality that can be voted on here: Azure Feedback
I have a similar use case that I am also stuck on.
Based on this resource I have found a good way to approach the structure of my code. It is simple enough: you just need to run your initialization code before you declare your module.exports.
https://github.com/rcarmo/azure-functions-bot/blob/master/bot/index.js
I also read this thread, but it does not look like there is a recommended solution.
https://github.com/Azure/azure-functions-host/issues/586
However, in my case I have an additional complication: I need to use promises, as I am waiting on external services to come back. These promises run within bot.initialise(), and initialise() only seems to run when the first call to the bot occurs. That would be fine, but since it runs a promise my code doesn't block, which means that when it calls 'listener(req, context.res)', the listener doesn't exist yet.
The next thing I will try is to restructure my code so that bot.initialise returns a promise, but the code would be much simpler if there were an initialisation webhook that guaranteed the code within it was executed at startup, before everything else.
Has anyone found a good workaround?
My code looks something like this:
var listener = null;

if (process.env.FUNCTIONS_EXTENSION_VERSION) {
    // If we are inside Azure Functions, export the standard handler.
    listener = bot.initialise(true);
    module.exports = function (context, req) {
        context.log("Passing body", req.body);
        listener(req, context.res);
    };
} else {
    // Local server for testing
    listener = bot.initialise(false);
}
You can use a global variable to load data before function execution:
var data = [1, 2, 3];

module.exports = function (context, req) {
    context.log(data[0]);
    context.done();
};
The data variable is initialized only once and is reused across function calls.
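Combining this with lazy loading, a module-scope cache ensures an expensive load (e.g. reading a conversations file) happens only on the first invocation within an instance. A plain Node.js sketch (loadConversations() and its return data are hypothetical):

```javascript
let loadCount = 0; // tracks how many times the expensive load actually ran

// Hypothetical expensive load, e.g. parsing a conversations file from disk.
function loadConversations() {
  loadCount += 1;
  return [{ id: 1, text: 'hello' }, { id: 2, text: 'goodbye' }];
}

// Module-scope cache: survives across invocations within the same instance.
let conversations = null;
function getConversations() {
  if (conversations === null) {
    conversations = loadConversations();
  }
  return conversations;
}

// First call loads, later calls hit the cache.
console.log(getConversations().length); // 2
console.log(getConversations().length); // 2
console.log(loadCount); // 1
```

Note the caveat that applies to all module-scope state in Azure Functions: it is per-instance, so each scaled-out instance performs its own load, and the cache disappears when the instance is recycled.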
