I'm currently learning to use Cloud Functions for Firebase and have the following problem:
In my database, the structure I'll be referring to looks like this:
fruits
  RandomFruitID
    fruitID: RandomFruitID
In my index.js I want to create the function:
exports.newFruit = functions.database.ref("fruits").onWrite(event => {
(...)
// INSIDE HERE I WANT TO ACCESS THE "fruitID" VALUE, MEANING THE "RandomFruitID"
});
How can I achieve that?
Best wishes
Your current function will trigger on any change under /fruits, so there is no single current fruitID value.
If you want to trigger when a specific fruit gets written, you'll want to change the trigger to fruits/{fruitId}. This also makes the value of fruitId available in your code:
exports.newFruit = functions.database.ref("fruits/{fruitId}").onWrite(event => {
  if (!event.data.previous.exists()) {
    var newFruitKey = event.params.fruitId;
    ...
  }
});
I recommend reading the Firebase documentation for Database triggered functions, which covers a lot of such cases.
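Note that the answer above uses the pre-v1 event API. In firebase-functions v1.0 and later, onWrite receives a (change, context) pair instead of a single event. A sketch of the same handler in that style; the SDK wiring is shown as a comment so the snippet runs standalone, and the mock invocation at the end only imitates how the runtime would call it:

```javascript
// Wiring for firebase-functions v1.0+ (shown as a comment so this
// sketch runs without the SDK installed):
//
//   const functions = require("firebase-functions");
//   exports.newFruit = functions.database
//     .ref("fruits/{fruitId}")
//     .onWrite(handleFruitWrite);

function handleFruitWrite(change, context) {
  // change.before / change.after replace event.data.previous / event.data
  if (!change.before.exists()) {
    const newFruitKey = context.params.fruitId; // the RandomFruitID
    console.log("New fruit:", newFruitKey);
    return newFruitKey;
  }
  return null;
}

// Mock invocation, roughly how the runtime calls it on a create:
const result = handleFruitWrite(
  { before: { exists: () => false } },
  { params: { fruitId: "RandomFruitID" } }
);
console.log(result); // → "RandomFruitID"
```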
Sometimes we use Firebase functions triggered by the Realtime Database (onCreate/onDelete/onUpdate, ...) to run some logic (counting, etc.).
My question: would it be possible to avoid this trigger in some cases, mainly when I would like to allow a user to import a huge JSON into Firebase?
Example:
a function E is triggered on the creation of a new child in /examples. Normally, users add examples one by one to /examples and function E runs its logic. However, I would like to allow a user (from the front-end) to import 2000 children into /examples, with the logic normally done by E performed at import time, so E does not need to be triggered. I do not need E to fire in such a case, where a high number of function executions could occur. (Note: I am aware of the 1000 limit.)
Update:
Based on the accepted answer, I submitted my own answer below.
As far as I know, there is no way to disable a Cloud Function programmatically without just deleting it, and deleting it introduces an edge case where data could be added to the database while the import is taking place.
A compromise would be to signal that the data you are uploading should be post-processed. Say you were uploading to /examples/{pushId}; instead of attaching the database trigger to /examples/{pushId}, attach it to /examples/{pushId}/needsProcessing (or something similar). Unfortunately, this has the trade-off of not being able to make use of change objects for onUpdate() and onWrite().
const result = await firebase.database.ref('/examples').push({
  title: "Example 1A",
  desc: "This is an example",
  attachments: { /* ... */ },
  class: "-MTjzAKMcJzhhtxwUbFw",
  author: "johndoe1970",
  needsProcessing: true
});
async function handleExampleProcessing(snapshot, context) {
  // do post-processing only if needsProcessing is truthy
  if (!snapshot.exists() || !snapshot.val()) {
    console.log('No processing needed, exiting.');
    return;
  }
  const exampleRef = snapshot.ref.parent; // /examples/{pushId}, as admin
  const dataSnapshot = await exampleRef.once('value');
  const data = dataSnapshot.val();
  // do something with data, like mutate it
  // commit changes
  return exampleRef.update({
    ...data,
    needsProcessing: null /* delete the needsProcessing value */
  });
}
const functionsExampleProcessingRef = functions.database.ref("examples/{pushId}/needsProcessing");
export const handleExampleNeedingProcessingOnCreate = functionsExampleProcessingRef.onCreate(handleExampleProcessing);
// this is only needed if you ever intend on writing `needsProcessing = /* some falsy value */`, I recommend just creating and deleting it, then you can use just the above trigger.
export const handleExampleNeedingProcessingOnUpdate = functionsExampleProcessingRef.onUpdate((change, context) => handleExampleProcessing(change.after, context));
An alternative to Sam's approach is to use feature flags to determine if a Cloud Function performs its main function. I often have this in my code:
exports.onUpload = functions.database
  .ref("/uploads/{uploadId}")
  .onWrite((event) => {
    return ifEnabled("transcribe").then(() => {
      console.log("transcription is enabled: calling Cloud Speech");
      ...
    })
  });
ifEnabled is a simple helper function that checks (also in the Realtime Database) whether the feature is enabled:
function ifEnabled(feature) {
  console.log("Checking if feature '" + feature + "' is enabled");
  return new Promise((resolve, reject) => {
    admin.database().ref("/config/features")
      .child(feature)
      .once('value')
      .then(snapshot => {
        if (snapshot.val()) {
          resolve(snapshot.val());
        }
        else {
          reject("No value or 'falsy' value found");
        }
      });
  });
}
Most of my usage of this is during talks at conferences, to enable the Cloud Functions at the right time (since a deploy takes a bit longer than we'd like for a demo). But the same approach should work to temporarily disable features during, for example, a data import.
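The gate logic itself can be exercised without the Admin SDK by injecting the database read. A minimal sketch; the readFeature parameter is my stand-in for the admin.database().ref(...).once('value') call and is not part of the original helper:

```javascript
// Testable variant of ifEnabled: the database read is injected via
// readFeature, a stand-in for admin.database().ref(...).once('value').
function ifEnabled(feature, readFeature) {
  return readFeature(feature).then(function (value) {
    if (value) return value;
    return Promise.reject("No value or 'falsy' value found");
  });
}

// Usage with a stubbed flag store:
var flags = { transcribe: true };
ifEnabled("transcribe", function (f) { return Promise.resolve(flags[f]); })
  .then(function () { console.log("transcription is enabled"); })
  .catch(function (err) { console.log(err); });
```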
Okay, another solution would be:
A: Add a new node in Firebase, like /triggers-queue, where all CRUD operations that should fire a background function are recorded. In this node, we add a key for each table that should have triggers; in our example, the /examples table. Any key that represents a table should also have /created, /updated, and /deleted keys, as follows:
/examples
  /example-id-1
/triggers-queue
  /examples
    /created
      /example-id
    /updated
      /example-id
        old-value
    /deleted
      /example-id
        old-value
Note that the old-value should be added from the app (front-end, etc.).
We always set onCreate triggers on:
/triggers-queue/examples/created/{exampleID} (simulates onCreate)
/triggers-queue/examples/updated/{exampleID} (simulates onUpdate)
/triggers-queue/examples/deleted/{exampleID} (simulates onDelete)
The fired function can derive all the information it needs to handle the logic as follows:
Operation type: from the path (either created, updated, or deleted)
Key of the object: from the path
Current data: by reading the corresponding table (i.e., /examples/id)
Old data: from the triggers node
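In code, recovering the operation type and key from the trigger path might look like this. This is a hypothetical helper of my own, assuming the /triggers-queue layout sketched above:

```javascript
// Hypothetical helper: derive the operation type and object key from
// the path a /triggers-queue trigger fired on, per the layout above.
function parseTriggerPath(path) {
  // e.g. "/triggers-queue/examples/updated/example-id-1"
  var parts = path.split("/").filter(Boolean);
  return {
    table: parts[1],     // e.g. "examples"
    operation: parts[2], // "created" | "updated" | "deleted"
    key: parts[3]        // the object id, e.g. "example-id-1"
  };
}

console.log(parseTriggerPath("/triggers-queue/examples/updated/example-id-1"));
// → { table: 'examples', operation: 'updated', key: 'example-id-1' }
```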
Good points:
You can import huge data into the /examples table without firing any function, as we do not write to /triggers-queue.
You can fan out functions to get past the 1000/sec limit, by setting triggers on (as an example, to fan out on create)
/triggers-queue/examples/created0/{exampleID} and
/triggers-queue/examples/created1/{exampleID}
Bad points:
More difficult to implement.
Need to write more data to Firebase (like the old data) from the app.
B: Another way (although not an answer to this question) is to move the logic in the background function to an HTTP function and call it on every CRUD operation.
I have a Firebase project which has one default database and 3 other databases which I have created. I understand that each of these databases has a different URL which can be used to trigger Cloud Functions.
All 3 databases have a path /ref/user/message. Now, in my index.js file, I want to create a function which runs onWrite if an update is performed on the above path in any of the databases. I would also like to know which database instance has been updated.
You will have to write one function for each of the database instances, but you can pretty easily share common code between them by having them all call a common function.
exports.db1 = functions.database.instance('db1').ref('/your/path').onWrite((change, context) => {
  return onWrite('db1', change, context)
})

exports.db2 = functions.database.instance('db2').ref('/your/path').onWrite((change, context) => {
  return onWrite('db2', change, context)
})

function onWrite(instance, change, context) {
  // your code here
  return some_promise
}
So I spent too long trying to figure out how to manipulate a returned database document (using mongoose) with transforms and virtuals, but for my purposes, those aren't options. The behaviour I want is very similar to a transform (in which I delete a property), but I only want to delete the property from the returned document if and only if it satisfies a requirement calculated using the req.session.user/req.user object (I'm using PassportJS, but any equivalent session user suffices). Obviously, there is no access to the request object in a virtual or transform, so I can't do the calculation.
Then it dawned on me that I could just query normally and manipulate the returned object in the callback before I send it to the client. I could put it in a middleware function that looks nice, but something tells me this is a hacky thing to do. I'm presenting an API to the client that does not reflect the data stored in or retrieved directly from the database. It may also clutter up my route configuration if I have middleware like this all over, making the code harder to maintain. Below is an example of what the manipulation looks like:
app.route('/api/items/:id').get(manipulateItem, sendItem);
app.param('id', findUniqueItem);
function findUniqueItem(req, res, next, id) {
  Item.findUniqueById(id, function(err, item) {
    if (!err) { req.itemFound = item; }
    next();
  });
}
function manipulateItem(req, res, next) {
  if (req.itemFound.people.indexOf(req.user) === -1) {
    req.itemFound.userIsInPeopleArray = false;
  } else {
    req.itemFound.userIsInPeopleArray = true;
  }
  delete req.itemFound.people;
  next(); // without this, the request never reaches sendItem
}
function sendItem(req, res, next) {
  res.json(req.itemFound);
}
I feel like this is a workaround to a problem with a simpler solution, but I'm not sure what that solution is.
There's nothing hacky about the act of modifying it.
It's all a matter of when you modify it.
For toy servers, and learning projects, the answer is whenever you want.
In production environments, you want to do your transform on your way out of your system, and into the next system (the next system might be the end user; it might be another server; it might be another big block of functionality in your own server that shouldn't have access to more information than it needs to do its job).
getItemsFromSomewhere()
  .then(transformToTypeICanUse)
  .then(filterBasedOnMyExpectations)
  .then(doOperations)
  .then(transformToTypeIPromisedYou)
  .then(outputToNextSystem);
That example might not be super-helpful in terms of an actual how, but that's sort of the point.
As you can see, you could link that system of events up to another system of events (that does its own transform to its own data-structure, does its own filtering/mapping, transforms that data into whatever its API promises, and passes it along to the next system, and eventually out to the end user).
I think part of the sense of "hacking" comes from bolting the result of the async process onto req, where req gets injected from step to step, through the middleware.
That said:
function eq (a) {
  return function (b) { return a === b; };
}

function makeOutputObject (inputObject, personWasFound) {
  // return whatever you want
}

var personFound = req.itemFound.people.some(eq(req.user));
var outputObject = makeOutputObject(req.itemFound, personFound);
Now you aren't using the actual delete keyword, or modifying the call-to-call state of that itemFound object.
You're separating your view-based logic from your app-based logic, but without the formal barriers (can always be added later, if they're needed).
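Filled in with a concrete output shape, the whole non-mutating version might look like the sketch below. The output shape and the sample itemFound/user values are hypothetical stand-ins for req.itemFound and req.user:

```javascript
function eq(a) {
  return function (b) { return a === b; };
}

// Hypothetical output shape: copy everything except `people`,
// then add the computed flag. The input object is left untouched.
function makeOutputObject(inputObject, personWasFound) {
  var out = {};
  Object.keys(inputObject).forEach(function (key) {
    if (key !== "people") out[key] = inputObject[key];
  });
  out.userIsInPeopleArray = personWasFound;
  return out;
}

var itemFound = { title: "Widget", people: ["alice", "bob"] }; // stands in for req.itemFound
var user = "alice";                                            // stands in for req.user

var personFound = itemFound.people.some(eq(user));
var outputObject = makeOutputObject(itemFound, personFound);
console.log(outputObject); // → { title: 'Widget', userIsInPeopleArray: true }
```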
Mainly out of curiosity, but also for a better understanding of Meteor security, what is the reason(ing) behind Meteor.user() not working inside publish functions?
The reason is in this piece of code (from the Meteor source code):
Meteor.user = function () {
  var userId = Meteor.userId();
  if (!userId)
    return null;
  return Meteor.users.findOne(userId);
};
Meteor.userId = function () {
  // This function only works if called inside a method. In theory, it
  // could also be called from publish statements, since they also
  // have a userId associated with them. However, given that publish
  // functions aren't reactive, using any of the information from
  // Meteor.user() in a publish function will always use the value
  // from when the function first runs. This is likely not what the
  // user expects. The way to make this work in a publish is to do
  // Meteor.find(this.userId()).observe and recompute when the user
  // record changes.
  var currentInvocation = DDP._CurrentInvocation.get();
  if (!currentInvocation)
    throw new Error("Meteor.userId can only be invoked in method calls. Use this.userId in publish functions.");
  return currentInvocation.userId;
};
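As the error message says, the way to get the user inside a publish function is this.userId, which DDP binds per connection. A sketch, with tiny stubs standing in for Meteor and a Messages collection so it runs outside a Meteor app; in a real app both come from Meteor itself:

```javascript
// Stubs for illustration only; in a real app these come from Meteor.
var Meteor = {
  publications: {},
  publish: function (name, fn) { this.publications[name] = fn; }
};
var Messages = { find: function (sel) { return { selector: sel }; } };

Meteor.publish("myMessages", function () {
  if (!this.userId) return [];            // not logged in: publish nothing
  return Messages.find({ owner: this.userId });
});

// The server invokes the handler with userId bound per connection:
var cursor = Meteor.publications["myMessages"].call({ userId: "u123" });
console.log(cursor.selector); // → { owner: 'u123' }
```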
I am trying to get all the variables that have been defined. I tried using the global object,
but it seems to be missing the ones defined as var token='44'; and only includes the ones defined as token='44';. What I am looking for, ideally, is something like PHP's get_defined_vars() function. I need to access the variables because I need to stop the Node process and then restart it at the same point without having to recalculate all the variables, so I want to dump them somewhere and access them later.
It's impossible within the language itself.
However:
1. If you have access to the entire source code, you can use a library to get a list of global variables, like this:
var ast = require('uglify-js').parse(source);
ast.figure_out_scope();
console.log(ast.globals.map(function (node, name) {
  return name;
}));
2. If you can connect to the node.js/v8 debugger, you can get a list of local variables as well; see the _debugger.js source code in the node.js project.
As you stated
I want to dump them somewhere and access them later.
It seems like you should work towards a database (as Jonathan mentioned in the comments), but if this is a one-off thing, you can use a JSON file to store values. You can then require the JSON file back into your script and Node will handle the rest.
I wouldn't recommend this, but you could basically create a variable that holds all the data/variables you define. Some might call this a God Object. Just make sure that before you exit the script, you export the values to a JSON file. If you're worried about your application crashing, perform backups to that file more frequently.
Here is a demo you can play around with:
var fs = require('fs');

var globalData = loadData();

function loadData() {
  try { return require('./globals.json'); } catch (e) {}
  return {};
}

function dumpGlobalData(callback) {
  fs.writeFile(
    __dirname + '/globals.json', JSON.stringify(globalData), callback);
}

function randomToken() {
  globalData.token = parseInt(Math.random() * 1000, 10);
}

console.log('token was', globalData.token);
randomToken();
console.log('token is now', globalData.token);

dumpGlobalData(function(error) {
  process.exit(error ? 1 : 0);
});