I have a question regarding the use of Cloud Functions with Cloud Firestore in my Android app (I'm writing in Kotlin using Android Studio).
Having read some documentation, I think it's possible to run a custom method through Cloud Functions when a new document is created in my database. All I need to do is update a field.
The thing is that these functions for Cloud Firestore need to be written in JavaScript and run on Node.js, and I have zero knowledge of either.
To any developer with Cloud Firestore experience: any guides or hints on this issue?
Don't know if it helps, but Cloud Functions can also be written in Python or Go. You can check out more complete information about the current runtimes and languages here.
But let's try to answer your question, shall we? I'll use the Node.js 8 Runtime in the examples below.
Introduction
Google Cloud Functions currently support 2 types of functions:
HTTP Functions, which are triggered by simple HTTP requests, and
Background Functions, which are triggered by events from Google Cloud's infrastructure, such as Cloud Firestore events. This is what you need, so let's focus on that.
Setup
Since you're using Cloud Firestore, I assume you already have a Firebase project set up. So the first step, if you haven't done it yet, is to install the Firebase CLI and follow its instructions to set up your project locally. When asked, select the "Functions: Configure and deploy Cloud Functions" option to enable it, and also "Use an existing project" to select your project.
$ firebase login
$ firebase init
Once you finish the setup, you'll essentially have the following structure in your directory:
firebase.json
.firebaserc
functions/
index.js
package.json
Now before you start coding, there's something you should know about Cloud Firestore events. There are 4 of them (the list is here):
onCreate: Triggered when a document is written to for the first time.
onUpdate: Triggered when a document already exists and has any value changed.
onDelete: Triggered when a document with data is deleted.
onWrite: Triggered when onCreate, onUpdate or onDelete is triggered.
Since you only need to capture a creation event, you'll write an onCreate event.
Coding
To do that, open the functions/index.js file and type the following piece of code:
const functions = require('firebase-functions');

// this function is triggered by "onCreate" Cloud Firestore events
// the "userId" is a wildcard that represents the id of the document created inside the "users" collection
// it will read the "email" field and insert the "lowercaseEmail" field
exports.onCreateUserInsertEmailLowercase = functions.firestore
  .document('users/{userId}')
  .onCreate((snapshot, context) => {
    // "context" has info about the event
    // reference: https://firebase.google.com/docs/reference/functions/cloud_functions_.eventcontext
    const { userId } = context.params;
    // "snapshot" is a representation of the document that was inserted
    // reference: https://googleapis.dev/nodejs/firestore/latest/DocumentSnapshot.html
    const email = snapshot.get('email');
    console.log(`User ${userId} was inserted, with email ${email}`);
    return null;
  });
As you can probably guess, this is a really simple Cloud Function that only logs the document's id and its "email" field. So now we get to the second part of your question: how can we edit this newly created document? There are two options: (1) update the document you just created, or (2) update another document. I'll cover them in two sections:
(1) Update the document you have just created
The answer lies in the "snapshot" parameter. Although it's just a representation of the document you inserted, it carries a DocumentReference inside it, which is a different type of object that can read, write, and listen for changes. Let's use its set method to insert the new field. So let's change our current Function to do that:
const functions = require('firebase-functions');

// this function is triggered by "onCreate" Cloud Firestore events
// the "userId" is a wildcard that represents the id of the document created inside the "users" collection
// it will read the "email" field and insert the "lowercaseEmail" field
exports.onCreateUserInsertEmailLowercase = functions.firestore
  .document('users/{userId}')
  .onCreate((snapshot, context) => {
    // "context" has info about the event
    // reference: https://firebase.google.com/docs/reference/functions/cloud_functions_.eventcontext
    const { userId } = context.params;
    // "snapshot" is a representation of the document that was inserted
    // reference: https://googleapis.dev/nodejs/firestore/latest/DocumentSnapshot.html
    const email = snapshot.get('email');
    console.log(`User ${userId} was inserted, with email ${email}`);
    // converts the email to lowercase
    const lowercaseEmail = email.toLowerCase();
    // get the DocumentReference, with write powers
    const documentReference = snapshot.ref;
    // insert the new field
    // the { merge: true } parameter is so that the whole document isn't overwritten
    // that way, only the new field is added without changing its current content
    return documentReference.set({ lowercaseEmail }, { merge: true });
  });
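A quick way to see what { merge: true } buys you is to model it locally with plain objects. This is only an illustration (the real server-side merge also understands nested field paths and delete sentinels), but the top-level behavior is the same:

```javascript
// Simplified local model of set(data, { merge: true }) vs plain set(data).
function setWithMerge(existingDoc, data) {
  return { ...existingDoc, ...data }; // unspecified fields survive
}

function setWithoutMerge(existingDoc, data) {
  return { ...data }; // whole document is replaced
}

const doc = { email: 'Jane@Example.com' };

const merged = setWithMerge(doc, { lowercaseEmail: 'jane@example.com' });
// keeps "email" and adds "lowercaseEmail"

const replaced = setWithoutMerge(doc, { lowercaseEmail: 'jane@example.com' });
// "email" is gone: only "lowercaseEmail" remains
```

This is exactly why the Function above passes { merge: true }: without it, the write would wipe the user's "email" field.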
(2) Update a document from another collection
For that, you'll need to add firebase-admin to your project. It has admin privileges, so you'll be able to write to any Cloud Firestore document inside your project.
Inside the functions directory, run:
$ npm install --save firebase-admin
And since you're already inside Google Cloud's infrastructure, initializing it is as simple as adding the following couple of lines to the index.js file:
const admin = require('firebase-admin');
admin.initializeApp();
Now all you have to do is use the Admin SDK to get a DocumentReference of the document you wish to update, and use it to update one of its fields.
For this example, I'll assume you have a collection called stats which contains a users document with a counter inside it that tracks the number of documents in the users collection:
// this updates the user count whenever a document is created inside the "users" collection
exports.onCreateUpdateUsersCounter = functions.firestore
  .document('users/{userId}')
  .onCreate(async (snapshot, context) => {
    const statsDocumentReference = admin.firestore().doc('stats/users');
    // a DocumentReference "get" returns a Promise containing a DocumentSnapshot
    // that's why I'm using async/await
    const statsDocumentSnapshot = await statsDocumentReference.get();
    const currentCounter = statsDocumentSnapshot.get('counter');
    // increased counter
    const newCounter = currentCounter + 1;
    // update the "counter" field with the increased value
    return statsDocumentReference.update({ counter: newCounter });
  });
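One caveat with this read-then-increment pattern: two documents created at nearly the same time can both read the same counter value and write the same result. Firestore's atomic admin.firestore.FieldValue.increment() avoids the read entirely, because the addition happens on the server. As a runnable illustration, here is a toy in-memory model of how an increment sentinel is applied during update() (the FieldValue object below is a stand-in, not the real SDK):

```javascript
// Stand-in for admin.firestore.FieldValue: increment() returns a sentinel
// object instead of a number.
const FieldValue = {
  increment: (n) => ({ __op: 'increment', by: n }),
};

// Toy document store applying update() patches, resolving increment
// sentinels against the current value (the server does this atomically).
function applyUpdate(doc, patch) {
  const next = { ...doc };
  for (const [key, value] of Object.entries(patch)) {
    if (value && value.__op === 'increment') {
      next[key] = (next[key] || 0) + value.by;
    } else {
      next[key] = value;
    }
  }
  return next;
}

let stats = { counter: 41 };
// the caller never reads the old counter; it just sends the sentinel
stats = applyUpdate(stats, { counter: FieldValue.increment(1) });
// stats.counter is now 42
```

In the real Function you would simply return statsDocumentReference.update({ counter: admin.firestore.FieldValue.increment(1) }) and drop the get() call.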
And that's it!
Deploying
Now that the coding part is done, how do you deploy it so it runs in your project? Let's use the Firebase CLI once more to deploy the new Cloud Functions.
Inside your project's root directory, run:
$ firebase deploy --only functions:onCreateUserInsertEmailLowercase
$ firebase deploy --only functions:onCreateUpdateUsersCounter
And that's pretty much the basics, but if you'd like, you can check its documentation for more info about deploying Cloud Functions.
Debugging
Ok, but how can we know it worked? Go to https://console.firebase.google.com and try it out! Insert a couple of documents and see the magic happen. And if you need a little debugging, click the "Functions" menu on the left-hand side and you'll be able to access your functions' logs.
That's pretty much it for your use-case scenario, but if you'd like to go deeper into Cloud Functions, I really recommend its documentation. It's pretty complete, concise and organized. I left some links as a reference so you'll know where to look.
Cheers!
Related
In Google's code lab, it says:
Adding this function to the special exports object is Node's way of making the function accessible outside of the current file and is required for Cloud Functions.
What exactly does that mean? When do I need to add functions to the exports object?
And I'm not exactly sure what role index.js plays. Do I need to put all functions in there? What if I have, say, three different topics like Posts, Messages, and Profiles, each with multiple Cloud Functions and many simple helper functions (string manipulation, for example)?
Should you really put all these functions in the same file then?
I think I don't really understand what index.js does.
By default, the Firebase CLI only deploys Cloud Functions exported from index.js. That does not mean you have to write all the code in the same file. You can create any directory structure of your own; just ensure the Cloud Functions are exported from index.js. For example:
// index.js
const processNewUser = require('./some/other/file.js');

exports.addWelcomeMessages = functions.auth.user().onCreate(async (user) => {
  await processNewUser();
  return null;
});
Here the function processNewUser() is defined somewhere else but is called within the Cloud Function.
You can also define the Cloud Function itself in a separate file and re-export it from index.js, for example:
// auth.js
const processNewUser = require('./some/other/file.js');

exports.addWelcomeMessages = functions.auth.user().onCreate(async (user) => {
  await processNewUser();
  return null;
});

// index.js
const { addWelcomeMessages } = require('./auth.js');
exports.addWelcomeMessages = addWelcomeMessages;
If you do not add exports.addWelcomeMessages = addWelcomeMessages; (i.e. re-export the function from index.js), the CLI won't deploy it.
Also check out organize multiple functions to learn about more possible setups.
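The mechanics of "only what index.js exports gets deployed" can be modeled with plain objects: each topic module is just an object of functions, and index.js spreads them onto its own exports. The module and function names below are made up for illustration:

```javascript
// Toy model of per-topic modules. In a real project these would be
// require('./posts') and require('./messages').
const postsModule = { onPostCreated: () => 'post handler' };
const messagesModule = { onMessageSent: () => 'message handler' };

// index.js equivalent: copy every exported function onto this module's
// exports object, which is what the CLI scans for deployable Functions.
const indexExports = {};
Object.assign(indexExports, postsModule, messagesModule);
// indexExports now exposes both onPostCreated and onMessageSent
```

In real code the same one-liner is Object.assign(exports, require('./posts'), require('./messages')); any function not reachable from exports is invisible to the CLI.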
I have a Firebase function to decrease the commentCount when a comment is deleted, like this
export const onArticleCommentDeleted = functions.firestore
  .document('articles/{articleId}/comments/{uid}')
  .onDelete((snapshot, context) => {
    return db.collection('articles').doc(context.params.articleId).update({
      commentCount: admin.firestore.FieldValue.increment(-1)
    });
  });
I also have firebase functions to recursively delete comments of an article when it's deleted
export const onArticleDeleted = functions.firestore
  .document('articles/{id}')
  .onDelete((snapshot, context) => {
    const commentsRef = db.collection('articles').doc(snapshot.id).collection('comments');
    db.recursiveDelete(commentsRef); // this triggers onArticleCommentDeleted multiple times
  });
When I delete an article, onArticleCommentDeleted is triggered and tries to update the article that has already been deleted. Of course I can check whether the article exists before updating it, but that's cumbersome and a waste of resources.
Are there any ways to avoid from propagating further triggers?
I think the problem arises from the way I use the trigger. In general, it's not a good idea for an onDelete trigger on a child document to update its own parent; that's bound to cause conflicts like this one.
Instead, on the client side, I use a transaction:
...
runTransaction(async (trans) => {
  trans.delete(commentRef);
  trans.update(articleRef, {
    commentCount: admin.firestore.FieldValue.increment(-1)
  });
});
This makes sure that if one of the operations fails, they both fail, and it eliminates the cascading triggers. Relying on the client side is not ideal, but I think the trade-off is acceptable.
There is no way to prevent triggering the Cloud Function on the comments when you delete the comments for that article. You will have to check for that condition in the function code itself, as you already said.
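A runnable sketch of that existence check, with a tiny in-memory Map standing in for Firestore (collection names are taken from the question; the real trigger would await articleRef.get() and test snapshot.exists before calling update()):

```javascript
// path -> document data; stand-in for Firestore
const store = new Map();
store.set('articles/a1', { commentCount: 3 });

// Model of the onDelete comment trigger with the guard added.
function onCommentDeleted(articleId) {
  const article = store.get(`articles/${articleId}`);
  if (article === undefined) return null; // parent already deleted: skip
  article.commentCount -= 1;              // otherwise decrement as before
  return article;
}

const first = onCommentDeleted('a1');  // decrements the counter to 2
store.delete('articles/a1');           // simulate the recursive article delete
const second = onCommentDeleted('a1'); // parent is gone: returns null, no error
```

The guard costs one extra read per comment deletion, which is the waste the question complains about, but it keeps the trigger safe when the parent disappears first.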
Let's say I have a node in a secondary realtime database called "test" with a value of "foobar".
I want to set up a function that prevents it from being deleted. More realistically this node would have several child nodes, where the function first checks if it can be deleted or not. However, here we never allow it to be deleted to keep the code as short as possible.
So I add a function that triggers onDelete and just rewrites the value.
In short:
Secondary database has: {"test":"foobar"}
onDelete function:
exports.testDelete = functions.database
  .instance("secondary")
  .ref("test")
  .onDelete(async (snap, context) => {
    await snap.ref.set(snap.val());
  });
When running this with emulators, I would expect that when I delete the node, it would just reappear in the secondary database, which is what happens when deployed to production. In the emulators, the node does reappear, but in the main database instead of the secondary one. The only way I see to fix this is to replace snap.ref.set(snap.val()) with admin.app().database("https://{secondarydatabasedomain}.firebasedatabase.app").ref().child("test").set(snap.val()), which looks a little cumbersome just to get the emulators to work.
Am I doing something wrong here?
I am using node 14, and firebase CLI version 9.23.0
To specify an instance and path, you have followed the documented syntax:
Instance named "my-app-db-2": functions.database.instance('my-app-db-2').ref('/foo/bar')
You have specified the instance name (otherwise the trigger points at the default database), so the syntax looks correct.
The handler signature for the delete event is:
onDelete(handler: (snapshot: DataSnapshot, context: EventContext) => any): CloudFunction
For example, you can refer to the documentation:
// Listens for new messages added to /messages/:pushId/original and creates an
// uppercase version of the message to /messages/:pushId/uppercase
exports.makeUppercase = functions.database.ref('/messages/{pushId}/original')
  .onCreate((snapshot, context) => {
    // Grab the current value of what was written to the Realtime Database.
    const original = snapshot.val();
    functions.logger.log('Uppercasing', context.params.pushId, original);
    const uppercase = original.toUpperCase();
    // You must return a Promise when performing asynchronous tasks inside a Function, such as
    // writing to the Firebase Realtime Database.
    // Setting an "uppercase" sibling in the Realtime Database returns a Promise.
    return snapshot.ref.parent.child('uppercase').set(uppercase);
  });
If all of the above syntax has been followed correctly, then I would recommend reporting a bug with a minimal repro on the repo, including the entire Cloud Function, as mentioned by Frank in a similar scenario.
Is it possible to get a document back after adding it or updating it, without additional network calls, with Firestore, similar to MongoDB?
I find it wasteful to first make a call to add or update a document and then make an additional call to get it.
As you have probably seen in the documentation of the Node.js (and JavaScript) SDKs, this is not possible, neither with the methods of a DocumentReference nor with those of a CollectionReference.
More precisely, the set() and update() methods of a DocumentReference both return a Promise containing void, while the CollectionReference's add() method returns a Promise containing a DocumentReference.
Side Note (in line with answer from darrinm below): It is interesting to note that with the Firestore REST API, when you create a document, you get back (i.e. through the API endpoint response) a Document object.
When you add a document to Cloud Firestore, the server can affect the data that is stored. A few ways this may happen:
If your data contains a marker for a server-side timestamp, the server will expand that marker into the actual timestamp.
If your data is not permitted by your server-side security rules, the server will reject the write operation.
Since the server affects the contents of the Document, the client can't simply return the data that it already has as the new document. If you just want to show the data that you sent to the server in your client, you can of course do so by simply reusing the object you passed into setData(...)/addDocument(data: ...).
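A small sketch of that "reuse what you already sent" approach: combine the data object you passed to add() with the id from the returned DocumentReference, instead of reading the document back. The reference object below is a stand-in for illustration; server-resolved fields such as server timestamps are the one thing this local echo cannot know:

```javascript
// Build a local representation of the new document from the two pieces the
// client already has: the data it sent and the id the server returned.
function localEcho(documentId, dataSent) {
  return { id: documentId, ...dataSent };
}

const sent = { email: 'jane@example.com' };
const fakeRef = { id: 'abc123' }; // stand-in for the returned DocumentReference
const echoed = localEcho(fakeRef.id, sent);
// echoed combines the id with the fields that were written
```

This avoids the second network call for display purposes, at the cost of not reflecting any server-side transformations.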
This appears to be an arbitrary limitation of the Firestore JavaScript API. The Firestore REST API returns the updated document on the same call.
https://firebase.google.com/docs/firestore/reference/rest/v1beta1/projects.databases.documents/patch
I did this to get the ID of a new Document created, and then use it in something else.
Future<DocumentReference<Object>> addNewData() async {
  final FirebaseFirestore _firestore = FirebaseFirestore.instance;
  final CollectionReference _userCollection = _firestore.collection('users');
  return await _userCollection
      .add({ 'data': 'value' })
      .whenComplete(() => {
        // Show good notification
      })
      .catchError((e) {
        // Show bad notification
      });
}
And here I obtain the ID:
await addNewData().then((document) async {
  // Get ID
  print('ID Document Created ${document.id}');
});
I hope it helps.
I just started with Meteor.js, and I'm struggling with its publish method. Below is one publish method.
// Server side
Meteor.publish('topPostsWithTopComments', function() {
  var topPostsCursor = Posts.find({}, {sort: {score: -1}, limit: 30});
  var userIds = topPostsCursor.map(function(p) { return p.userId });
  return [
    topPostsCursor,
    Meteor.users.find({'_id': {$in: userIds}})
  ];
});

// Client side
Meteor.subscribe('topPostsWithTopComments');
Now I don't understand how I can use the published data on the client. I mean, I want to use the data provided by topPostsWithTopComments.
Problem is detailed below
When a new post enters the top 30 list, two things need to happen:
The server needs to send the new post to the client.
The server needs to send that post’s author to the client.
Meteor is observing the Posts cursor returned by the publication, and so will send the new post down as soon as it's added, ensuring the client receives it straight away.
However, consider the Meteor.users cursor. Even though the cursor itself is reactive, it's now using an outdated value for the userIds array (which is a plain old non-reactive variable), which means its result set will be out of date as well.
This is why, as far as that cursor is concerned, there is no need to re-run the query, and Meteor will happily continue to publish the same 30 authors for the original 30 top posts ad infinitum.
So unless the whole code of the publication runs again (to construct a new list of userIds), the cursor is no longer going to return the correct information.
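The staleness can be reproduced without Meteor at all: userIds is computed once when the publish function runs, so later changes to the posts never flow into the users query. A plain-JavaScript model:

```javascript
// Runs once, like the body of Meteor.publish: take a snapshot of the ids.
let posts = [{ userId: 'u1' }, { userId: 'u2' }];
const userIds = posts.map((p) => p.userId); // non-reactive snapshot

// Later, a new post by a new author enters the top list...
posts = posts.concat([{ userId: 'u3' }]);

// ...but the "users cursor" still filters by the old snapshot of ids,
// so the new author is never published.
const publishedUserIds = userIds; // 'u3' is missing
```

Fixing this requires either re-running the publication body when Posts changes (e.g. by observing the cursor) or a package that composes reactive publications.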
Basically what I need is:
if anything changes in Posts, the publication should have the updated users list, without querying the users collection again. I found some useful mrt modules:
link1 |
link2 |
link3
Please share your views!
-Neelesh
When you publish data on the server you're just publishing what the client is allowed to query. This is for security. After you subscribe to your publication you still need to query what the publication returned.
if (Meteor.isClient) {
  Meteor.subscribe('topPostsWithTopComments');
  // This returns all the records published with topPostsWithTopComments from the Posts Collection
  var posts = Posts.find({});
}
If you wanted to only publish posts that the current user owns you would want to filter them out in the publish method on the server and not on the client.
I think @Will Brock already answered your question, but maybe it becomes clearer with an abstract example.
Let's construct two collections named collectiona and collectionb.
// server and client
CollectionA = new Meteor.Collection('collectiona');
CollectionB = new Meteor.Collection('collectionb');
On the server you could now call Meteor.publish with 'collectiona' and 'collectionb' separately to publish both record sets to the client. This way the client could then also separately subscribe to them.
But instead you can also publish multiple record sets in a single call to Meteor.publish by returning multiple cursors in an array. Just like in the standard publishing procedure you can of course define what is being sent down to the client. Like so:
if (Meteor.isServer) {
  Meteor.publish('collectionAandB', function() {
    // constrain records from 'collectiona': limit number of documents to one
    var onlyOneFromCollectionA = CollectionA.find({}, {limit: 1});
    // all cursors in the array are published
    return [
      onlyOneFromCollectionA,
      CollectionB.find()
    ];
  });
}
Now on the client there is no need to subscribe to 'collectiona' and 'collectionb' separately. Instead you can simply subscribe to 'collectionAandB':
if (Meteor.isClient) {
  Meteor.subscribe('collectionAandB', function () {
    // callback to use collection A and B on the client once
    // they are ready
    // only one document of collection A will be available here
    console.log(CollectionA.find().fetch());
    // all documents from collection B will be available here
    console.log(CollectionB.find().fetch());
  });
}
So I think what you need to understand is that no array containing the two cursors is ever sent to the client. Returning an array of cursors from the function passed to Meteor.publish merely tells Meteor to publish all the cursors in that array. You still need to query the individual records using your collection handles on the client (see @Will Brock's answer).