firebase.database() multiple instances - node.js

I am using the Firebase Realtime Database and I was wondering which is the better pattern regarding
firebase.database()
Is it considered bad practice to have multiple instances of this? Is it better to have a single database instance that is exported within the Node app, or is it basically the same thing to create a new instance in every single action-creator file?
import * as firebase from 'firebase';

// Initialize the default Firebase app once
firebase.initializeApp(config);

// Shared auth helpers, exported so every module uses the same objects
export const provider = new firebase.auth.GoogleAuthProvider();
export const auth = firebase.auth();

export default firebase;
I have this approach for the Firebase app instance, and I am unsure whether a similar pattern is needed for the database instance as well. I couldn't find anything about this in the Firebase docs.

Every time you call one of the product methods on the firebase object that you get from the import, it will give you exactly the same object in return. So, every time you call firebase.auth(), you'll get the same thing back, and every time you call firebase.database(), you'll get the same thing. How you want to manage those instances is completely your preference.
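If you prefer a single import point for the database as well, one option is to export it from the same module. This is just a sketch of the pattern above, not something the docs require, and it assumes the same namespaced 'firebase' import as the snippet in the question:
// firebase.js – one shared entry point for the whole app (sketch)
import * as firebase from 'firebase';
firebase.initializeApp(config);
export const provider = new firebase.auth.GoogleAuthProvider();
export const auth = firebase.auth();
export const database = firebase.database(); // same underlying instance on every import
export default firebase;
// elsewhere, e.g. in an action creator:
// import { database } from './firebase';
// database.ref('items').once('value').then(snapshot => { /* ... */ });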

Related

Firebase Functions: How to maintain 'app-global' API client?

How can I achieve an 'app-wide' global variable that is shared across Cloud Function instances and function invocations? I want to create a truly 'global' object that is initialized only once for the lifetime of all my functions.
Context:
My app's entire backend is Firestore + Firebase Cloud Functions. That is, I use a mix of background (Firestore) triggers and HTTP functions to implement backend logic. Additionally, I rely on a 3rd-party location service to continually listen to location updates from sensors. I want just a single instance of the client on which to subscribe to these updates.
The problem is that Firebase/Google Cloud Functions are stateless, meaning that function instances don't share memory/objects/state. If I call functionA, functionB, and functionC, there will be at least three locationService client instances created, each listening separately to the 3rd-party service, so we end up with duplicate invocations of the location API callback.
Sample code:
// index.js
const functions = require("firebase-functions");
exports.locationService = require('./location_service');
this.locationService.initClient();
// define callable/HTTP functions & Firestore triggers
...
and
// location_service.js
var tracker = require("third-party-tracker-js");

const self = (module.exports = {
  initClient: function () {
    tracker.initialize('apiKey')
      .then((client) => {
        client.setCallback(async function (payload) {
          console.log("received location update: ", payload);
          // process the payload ...
          // with multiple function instances running at once, we receive as many callbacks for each location update
        });
        client.subscribeProject()
          .then((subscription) => {
            subscription.subscribe()
              .then((subscribeMsg) => {
                console.log("subscribed to project with message: ", subscribeMsg); // success
              });
            // subscription.unsubscribe(); // ??? at what point should we unsubscribe?
          })
          .catch((err) => {
            throw (err);
          });
      })
      .catch((err) => {
        throw (err);
      });
  },
});
I realize what I'm trying to do is roughly equivalent to implementing a daemon in a single-process environment, and it appears that serverless environments like Firebase/Google Cloud Functions aren't designed to support this need because each instance runs as its own process. But I'd love to hear any contrary ideas and possible workarounds.
Another idea...
Inspired by this related SO post and the official GCF docs on stateless functions, I thought about using Firestore to persist a tracker value that allows us to conditionally initialize the API client. Roughly like this:
// read value from db; only initialize the client if there's no valid subscription
let locSubscriberActive = await getSubscribeStatusFromDb();
if (!locSubscriberActive) {
  this.locationService.initClient();
}
// in `location_service.js`, do setSubscribeStatusToDb(); // set flag to true when we call subscribe(). reset when we get terminated
The problem: at what point do I unset/reset that value? Intuitively, I would do so the moment the function instance that initialized the client gets recycled/killed. However, it appears that there is no way to know when a Firebase Cloud Function instance is terminated. I searched everywhere but couldn't find docs on how to detect such an event...
What you're trying to do is not at all supported in Cloud Functions. It's important to realize that there may be any number of server instances allocated for each deployed function. That's how Cloud Functions scales up and down to match the load on the function in a cost-effective way. These instances might be terminated at any time for any reason. You have no indication when an instance terminates.
Also, instances are not capable of performing any computation when they are idle. CPU resources are clamped down after a function terminates, and are spun up again when the next function is invoked on that instance. You can't have any "daemon" code running when a function is not actively being invoked. I don't know what your locationService does, but it is certainly doing nothing at all after a function terminates, regardless of how it terminated.
For any sort of long-running or daemon-like code, Cloud Functions is not a suitable product. You should instead consider also using another product that lets you run code 24/7 without disruption. App Engine and Compute Engine are viable alternatives, but you will have to think carefully about whether and how you want their server instances to scale with load.
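As a rough illustration of the 'always-on product' suggestion: a standalone Node.js process, deployable to Compute Engine or the App Engine flexible environment, could hold the single tracker client and write updates into Firestore, which your existing Firestore-triggered functions could then react to. The tracker calls below mirror the hypothetical third-party-tracker-js API from the question, and the collection name and environment variable are assumptions:
// tracker_daemon.js – long-running listener outside Cloud Functions (sketch)
const admin = require("firebase-admin");
const tracker = require("third-party-tracker-js"); // hypothetical module from the question

admin.initializeApp(); // uses default credentials when running on GCP

async function main() {
  const client = await tracker.initialize(process.env.TRACKER_API_KEY);

  // exactly one callback for the whole backend
  client.setCallback(async (payload) => {
    // persist the update; Firestore-triggered functions can react to this write
    await admin.firestore().collection("locationUpdates").add(payload);
  });

  const subscription = await client.subscribeProject();
  await subscription.subscribe();
  console.log("listening for location updates...");
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});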

When I use firebase.database().goOnline() I get an error

This is my code:
admin.initializeApp({...});

var db = admin.database();
let ref = db.ref(...);

await ref.once("value", function () {...});

firebase.database().goOffline();
firebase.database().goOnline();

ref.on("value", function () {...});
When I use firebase.database().goOnline(); I get this error:
Firebase: No Firebase App '[DEFAULT]' has been created - call Firebase App.initializeApp() (app/no-app).
at app
You're mixing two ways of addressing Firebase.
You can access Firebase through:
Its client-side JavaScript SDK, in which case the entry point is usually called firebase.
Its server-side Node.js SDK, in which case the entry point is usually called admin.
Now you initialize admin, but then try to access firebase.database(). Since you only called admin.initializeApp() and never firebase.initializeApp(), that's where the error comes from.
My guess is that you're looking for admin.database().goOnline() or db.goOnline().
Also note that there is no reason to toggle goOffline()/goOnline() in the way you do now, and you're typically better off letting Firebase manage that on its own.
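For illustration, a version of the snippet above that stays entirely on the Admin SDK might look roughly like this (a sketch; the elided parts are kept as placeholders):
admin.initializeApp({...});

var db = admin.database();
let ref = db.ref(...);

await ref.once("value", function () {...});

// if you really need to toggle connectivity, do it on the same SDK instance
db.goOffline();
db.goOnline();

ref.on("value", function () {...});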

Is it possible to have a Firebase Function that is triggered by changes to a Firestore database that lives in a separate Firebase project from the Function?

Let's say I have a Firebase project named "A". Within this project, I have a Cloud Firestore triggered Firebase function that needs to run when a document within Firestore changes. By default, the Firebase Function will listen to changes within Firestore on project A.
However, let's say I have a particular use case where there is a second Firebase project named "B". I need the Firebase Function within Project A to be triggered on Firestore changes that happen to Firestore within project B.
Is this possible? The Firebase docs do show initializing multiple projects, which would allow me to connect to multiple databases like this:
const admin = require("firebase-admin");
const serviceAccount = require("path/to/serviceAccountKey.json");

const secondaryAppConfig = {
  credential: admin.credential.cert(serviceAccount),
  databaseURL: "https://<DATABASE_NAME>.firebaseio.com"
};

// Initialize another app with a different config
const secondary = admin.initializeApp(secondaryAppConfig, "secondary");

// Retrieve the database.
const secondaryDatabase = secondary.database();
But this doesn't allow me to trigger a Firestore-triggered Firebase Function on my secondary project. Trigger definitions call the firebase-functions methods directly, whereas database calls go through the app you initialized.
const functions = require('firebase-functions');

exports.myFunction = functions.firestore
  .document('...')
  .onWrite((change, context) => { /* ... */ });
Is what I would like to do possible? Or does anyone have a workaround (other than creating this Firebase Function within project B)?
It's not possible. Cloud Functions triggers can only fire in response to changes in the resources of the project where they are deployed. This is true for all types of triggers, including Firestore.
If you want code to run in response to changes in another project, the function will have to be deployed to that project.
Currently it is only possible for writes to Cloud Firestore to trigger Cloud Functions that are part of the same project. It is not possible to trigger Cloud Functions that are defined in another project.
The typical solution is, for example, to have a trigger in one project call an HTTP function in the other project, since for an HTTP function you can configure the complete URL yourself.
I'm not sure it can be done all in one codebase; that's from a lack of experience, though. I'd say that, given your setup, your calling function can trigger your callee function via an HTTP call.
This might require a paid Firebase plan, but I'm not certain of it.
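To make the HTTP-call workaround concrete, here is a rough sketch of a lightweight trigger deployed to project B that forwards each write to an HTTPS function in project A. The document path, function name, region, and payload shape are all assumptions for illustration:
// Deployed in project B – forwards Firestore writes to an HTTPS function in project A
const functions = require('firebase-functions');
const https = require('https');

exports.forwardToProjectA = functions.firestore
  .document('items/{itemId}') // hypothetical collection
  .onWrite((change, context) => {
    const body = JSON.stringify({
      itemId: context.params.itemId,
      after: change.after.exists ? change.after.data() : null,
    });
    // complete URL of an HTTPS function deployed in project A (assumed name and region)
    return new Promise((resolve, reject) => {
      const req = https.request(
        'https://us-central1-project-a.cloudfunctions.net/handleItemChange',
        { method: 'POST', headers: { 'Content-Type': 'application/json' } },
        (res) => {
          res.resume();           // drain the response
          res.on('end', resolve); // complete the function when the call finishes
        }
      );
      req.on('error', reject);
      req.end(body);
    });
  });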

AWS Lambda Dynamic DB Switching Singleton (Node)

I'm trying to take advantage of db connection reuse in Lambda, by keeping the code outside of the handler.
For example - something like:
import dbconnection from './connection'

const handler = (event, context, callback) => {
  // use dbconnection
}
The issue is that I don't decide which database to connect to until I do a lookup to see where they should be connecting. In my specific case I have 'customer=foo' as a query param, and from that I can look up that foo should connect to database1.
So what I need to do is something like this:
const dbconnection = require('./connection')('database1')
The way it is now, I need to do this in every handler method, which is expensive.
Is there some way I can pull the query parameter, look up my database and set it / switch it globally within the Lambda execution context?
I've tried this:
import dbconnection from './connection'

const handler = (event, context, callback) => {
  const client = dbconnection.setDatabase('database1')
}
....
./connection.js
setDatabase(database) {
  if (this.currentDatabase !== database) {
    // connect to different database
    this.currentDatabase = database;
  }
}
Everything works locally with sls offline but doesn't work through the AWS Lambda execution context. Thoughts?
You can either hardcode the database (or provide it via an environment variable) or you can't. If you can, then pull the connection out of the handler and it will not be re-created on each invocation. If you can't, as you have mentioned, then what you are trying to do is make Lambda stateful. Lambda was designed to be stateless, and AWS intentionally doesn't expose specifics about the underlying containers precisely so that you don't start doing what you are trying to do now, which is introducing state to it.
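As a sketch of the 'pull it out of the handler' case, assuming the ./connection module from the question and a hypothetical DB_NAME environment variable:
// handler.js – connection created once per container, reused across warm invocations (sketch)
import dbconnection from './connection'

// module scope: runs once when the container starts, not on every invocation
const client = dbconnection.setDatabase(process.env.DB_NAME)

export const handler = async (event) => {
  // use `client` here; warm invocations reuse the connection created above
}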

Creating a global Datastore client

What are the conventions around when and where to create datastore client objects?
datastore = new Datastore({});
In the docs a new Datastore instance seems to be created in every single file. Would there be any benefit in creating a singleton that initialises the Datastore connection and returns the same instance to each part of the application that requires it?
It depends on the underlying code whether new Datastore({}) actually creates a new instance or returns a singleton; you'd have to check that.
What you could do is move the creation of the datastore instance to a separate file and require that instance in every file where you need access to Datastore. Since modules you require are cached, you will always get the same instance.
Pseudo code:
datastore.js
const Datastore = require('@google-cloud/datastore');
const datastore = new Datastore({});
module.exports = datastore;
foo.js
const datastore = require('./datastore');
// do something with datastore
In reply to your follow-up question.
If you look at the source code of the nodejs/Datastore module you will see the same pattern:
src/index.js
* @example <caption>Import the client library</caption>
* const Datastore = require('@google-cloud/datastore');
// ...
module.exports = Datastore;
No matter where you require the client library:
const Datastore = require('@google-cloud/datastore');
It will always return the same instance. Datastore will handle scaling and connections (pooling) for you.
In conclusion: there's no functional difference between requiring the client library in each file or wrapping it in a separate file and requiring that in the files where you need a connection.
Personally, I prefer wrapping the connection in a separate file and requiring that in my data access files. Benefits of this are:
* You abstract away the actual implementation. If you ever need to change Datastore or the way you connect to it, it will only ever be in one place.
* In case you need to supply connection parameters (like a password), you only have to do that once. It saves you from writing the same code over and over again (see the sketch below).
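A minimal sketch of such a wrapper, assuming the project ID and credentials come from environment variables (the names below are placeholders; the require style matches the older snippets above, while newer releases of the library export { Datastore } as a named export):
// datastore.js – the only file that knows how to connect (sketch)
const Datastore = require('@google-cloud/datastore');

const datastore = new Datastore({
  projectId: process.env.GCLOUD_PROJECT,                    // assumed env var
  keyFilename: process.env.GOOGLE_APPLICATION_CREDENTIALS,  // assumed env var
});

module.exports = datastore;

// elsewhere, e.g. users.js:
// const datastore = require('./datastore');
// const key = datastore.key(['User', 'alice']);
// datastore.get(key).then(([user]) => console.log(user));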
