Mainly out of curiosity, but also for a better understanding of Meteor security, what is the reason(ing) behind Meteor.user() not working inside publish functions?
The reason is in this piece of code (from the Meteor source code):
Meteor.user = function () {
  var userId = Meteor.userId();
  if (!userId)
    return null;
  return Meteor.users.findOne(userId);
};
Meteor.userId = function () {
  // This function only works if called inside a method. In theory, it
  // could also be called from publish statements, since they also
  // have a userId associated with them. However, given that publish
  // functions aren't reactive, using any of the information from
  // Meteor.user() in a publish function will always use the value
  // from when the function first runs. This is likely not what the
  // user expects. The way to make this work in a publish is to do
  // Meteor.find(this.userId()).observe and recompute when the user
  // record changes.
  var currentInvocation = DDP._CurrentInvocation.get();
  if (!currentInvocation)
    throw new Error("Meteor.userId can only be invoked in method calls. Use this.userId in publish functions.");
  return currentInvocation.userId;
};
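As the error message suggests, inside a publish function you use this.userId instead. A minimal sketch (not from the question; Docs is a hypothetical collection used only for illustration):

Meteor.publish('myDocs', function () {
  // this.userId is available on the publish context without Meteor.userId()
  if (!this.userId) {
    return this.ready();
  }
  return Docs.find({ owner: this.userId });
});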
Related
I have a client-side form that can create a document upon submission. I want to see if one of the input fields already exists on a Document in the DB though. This would then alert the user and ask them if they want to continue creating the record.
Client-side event
Template.createDoc.events({
  'click button[type=submit]'(e, template) {
    // This particular example is checking to see if a Doc with its `name` property set to `value` already exists
    const value = $('#name').val();
    const fieldName = 'name';
    const exists = Meteor.call('checkIfFieldExistsOnDoc', fieldName, value);
    if (exists) {
      if (confirm(`Doc with ${value} as its ${fieldName} already exists. Are you sure you want to continue creating Doc?`)) {
        //db.Docs.insert....
      }
    }
  }
});
Server-side Meteor Method
'checkIfFieldExistsOnDoc'(field, val) {
  if (Meteor.isServer) {
    this.unblock();
    check(field, String);
    check(val, String);
    if (!this.userId) {
      throw new Meteor.Error('not-authorized', 'You are not authorized.');
    }
    const findObj = {};
    findObj[field] = val;
    const fieldsObj = {};
    fieldsObj[field] = 1;
    const doc = Docs.findOne(findObj, {fields: fieldsObj});
    return doc;
  }
},
My issue is that the client-side code always gets undefined back when calling the server method. I now understand why; however, I'm not keen on wrapping all of my subsequent client code in a callback yet.
So - any other ideas on how I can attempt to do this simple feature?
Also - I was thinking of having the client-side page's onCreated do a one-time server call to get ALL names for all Docs, storing this in memory, and then doing the check upon form submission using this. Obviously, this is inefficient and not scalable, although it would work.
Meteor.call on the client side is always an async call, so you need to implement a callback.
See docs: https://docs.meteor.com/api/methods.html#Meteor-call
Meteor.call('checkIfFieldExistsOnDoc', fieldName, value, function(error, result) {
  if (result) {
    if (confirm(`Doc with ${value} as its ${fieldName} already exists. Are you sure you want to continue creating Doc?`)) {
      //db.Docs.insert....
    }
  }
});
On the client, you can wrap any Meteor.call with a Promise and then use it with async/await. There are some packages on Atmosphere that do this for you, too.
I've used this package for years: https://atmospherejs.com/deanius/promise
On the client I often just use await Meteor.callPromise() which returns a response nicely.
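If you'd rather not pull in a package, wrapping the call yourself is straightforward. A minimal sketch (callMethodAsPromise is just an illustrative name, not part of any package):

function callMethodAsPromise(name, ...args) {
  return new Promise((resolve, reject) => {
    Meteor.call(name, ...args, (error, result) => {
      if (error) {
        reject(error);
      } else {
        resolve(result);
      }
    });
  });
}

// Usage inside an async event handler:
// const exists = await callMethodAsPromise('checkIfFieldExistsOnDoc', fieldName, value);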
Here are a couple of the best write-ups on the many options available to you:
https://blog.meteor.com/using-promises-on-the-client-in-meteor-fb4f1c155f84
https://forums.meteor.com/t/meteor-methods-return-values-via-promise-async/42060
https://dev.to/jankapunkt/async-meteor-method-calls-24f9
Sometimes we use Firebase Cloud Functions triggered by the Realtime Database (onCreate/onDelete/onUpdate, ...) to do some logic (like counting, etc.).
My question: would it be possible to avoid these triggers in some cases, mainly when I would like to allow a user to import a huge JSON into Firebase?
Example:
A function E is triggered on the creation of a new child in /examples. Normally, users add examples one by one to /examples and function E runs to do some logic. However, I would like to allow a user (from the front-end) to import 2000 children to /examples, where the logic normally done by E is already applied at import time, so E is not needed. In that case I do not want E to be triggered, since a high number of function executions could result. (Note: I am aware of the 1000 limit.)
Update:
Based on the accepted answer, I submitted my own answer below.
As far as I know, there is no way to disable a Cloud Function programmatically without just deleting it. Deleting it, however, introduces an edge case where data could be added to the database while the import is taking place.
A compromise would be to signal that the data you are uploading should be post-processed. Let's say you were uploading to /examples/{pushId}, instead of attaching the database trigger to /examples/{pushId}, attach it to /examples/{pushId}/needsProcessing (or something similar). Unfortunately this has the trade-off of not being able to make use of change objects for onUpdate() and onWrite().
const result = await firebase.database().ref('/examples').push({
  title: "Example 1A",
  desc: "This is an example",
  attachments: { /* ... */ },
  class: "-MTjzAKMcJzhhtxwUbFw",
  author: "johndoe1970",
  needsProcessing: true
});
async function handleExampleProcessing(snapshot, context) {
  // do post processing if needsProcessing is truthy
  if (!snapshot.exists() || !snapshot.val()) {
    console.log('No processing needed, exiting.');
    return;
  }
  const exampleRef = snapshot.ref.parent; // /examples/{pushId}, as admin
  const dataSnapshot = await exampleRef.once('value');
  const data = dataSnapshot.val();
  // do something with data, like mutate it
  // commit changes
  return exampleRef.update({
    ...data,
    needsProcessing: null /* delete needsProcessing value */
  });
}
const functionsExampleProcessingRef = functions.database.ref("examples/{pushId}/needsProcessing");
export const handleExampleNeedingProcessingOnCreate = functionsExampleProcessingRef.onCreate(handleExampleProcessing);
// this is only needed if you ever intend on writing `needsProcessing = /* some falsy value */`, I recommend just creating and deleting it, then you can use just the above trigger.
export const handleExampleNeedingProcessingOnUpdate = functionsExampleProcessingRef.onUpdate((change, context) => handleExampleProcessing(change.after, context));
An alternative to Sam's approach is to use feature flags to determine if a Cloud Function performs its main function. I often have this in my code:
exports.onUpload = functions.database
  .ref("/uploads/{uploadId}")
  .onWrite((event) => {
    return ifEnabled("transcribe").then(() => {
      console.log("transcription is enabled: calling Cloud Speech");
      // ...
    })
  });
The ifEnabled function is a simple helper that checks (also in the Realtime Database) whether the feature is enabled:
function ifEnabled(feature) {
  console.log("Checking if feature '"+feature+"' is enabled");
  return new Promise((resolve, reject) => {
    admin.database().ref("/config/features")
      .child(feature)
      .once('value')
      .then(snapshot => {
        if (snapshot.val()) {
          resolve(snapshot.val());
        }
        else {
          reject("No value or 'falsy' value found");
        }
      });
  });
}
Most of my usage of this is during talks at conferences, to enable the Cloud Functions at the right time (as a deploy takes a bit longer than we'd like for a demo). But the same approach should work to temporarily disable features during, for example, a data import.
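Toggling such a flag around a bulk import could then look something like the following sketch (the /config/features/transcribe path and the importFn callback are assumptions, matching the helper above):

// Flip the feature flag off around a bulk import, then restore it.
async function importWithFeatureDisabled(importFn) {
  const featureRef = admin.database().ref('/config/features/transcribe');
  await featureRef.set(false);   // disable the feature so onUpload exits early
  try {
    await importFn();            // run the bulk import
  } finally {
    await featureRef.set(true);  // re-enable afterwards
  }
}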
Okay, another solution would be:
A: Add a new node in Firebase, like /triggers-queue, where all CRUD operations that should fire a background function are recorded. In this node, we add a key for each table that should have triggers (in our example, the /examples table). Each key that represents a table should also have /created, /updated, and /deleted children, as follows:
/examples
  /example-id-1
/triggers-queue
  /examples
    /created
      /example-id
    /updated
      /example-id
        old-value
    /deleted
      /example-id
        old-value
Note that the old-value should be added from the app (front-end, etc.).
We always set onCreate triggers on:
/triggers-queue/examples/created/{exampleID} (simulates onCreate)
/triggers-queue/examples/updated/{exampleID} (simulates onUpdate)
/triggers-queue/examples/deleted/{exampleID} (simulates onDelete)
The fired function can get all the information necessary to handle the logic as follows:
Operation type: from the path (either created, updated, or deleted)
Key of the object: from the path
Current data: by reading the corresponding table (i.e., /examples/id)
Old data: from the triggers table
Good points:
You can import huge amounts of data into the /examples table without firing any functions, because we do not add anything to /triggers-queue.
You can fan out functions to get past the 1000/sec limit, for example by setting on-create triggers on
/triggers-queue/examples/created0/{exampleID} and
/triggers-queue/examples/created1/{exampleID}
Bad points:
More difficult to implement.
Need to write more data to Firebase (like the old data) from the app.
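For illustration, a hedged sketch of what the created trigger in approach A might look like, assuming the Firebase Admin SDK and the node layout above (names are only illustrative):

const functions = require('firebase-functions');
const admin = require('firebase-admin');

exports.onExampleCreatedViaQueue = functions.database
  .ref('/triggers-queue/examples/created/{exampleID}')
  .onCreate(async (snapshot, context) => {
    const exampleID = context.params.exampleID;

    // Current data lives in the real table, not in the queue node.
    const currentSnap = await admin.database()
      .ref(`/examples/${exampleID}`)
      .once('value');

    // ... do the counting / other logic with currentSnap.val() here ...

    // Clean up the queue entry once it has been processed.
    return snapshot.ref.remove();
  });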
B: Another way (although not an answer to this question) is to move the logic from the background function into an HTTP function and call it on every CRUD operation.
Setting up a Google Assistant App via the NodeJS Google Actions SDK is done this way:
It seems that these are synchronous functions (per the documentation given at https://developers.google.com/actions/reference/nodejs/ActionsSdkApp#ActionsSdkApp).
const app = new App({request: req, response: res});

function pickOption (app) {
  /*A bunch of steps here*/
}

function optionPicked (app) {
  /*Another bunch of steps here*/
}

const actionMap = new Map();
actionMap.set(app.StandardIntents.TEXT, pickOption);
actionMap.set(app.StandardIntents.OPTION, optionPicked);
app.handleRequest(actionMap);
Is it possible for pickOption and optionPicked to be asynchronous functions? i.e., would it be correct to have pickOption be implemented as
function pickOption() {
  var pickOptionPromise = Q.defer();
  pickOptionPromise.resolve({
    /*Some results here*/
  });
  return pickOptionPromise.promise;
}
Yes. There are two functions you'll use most of the time to return a response to the Assistant: app.ask and app.tell. These wrap your response in a JSON object and return it to Google.
Your pickOption and optionPicked functions can run asynchronously and call app.tell once your logic has completed.
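For example, a minimal sketch of an asynchronous handler (fetchOptions is a hypothetical async helper, not part of the SDK):

function pickOption (app) {
  // Do the asynchronous work first, then respond when it finishes.
  return fetchOptions().then(function (options) {
    app.ask('Which option would you like? ' + options.join(', '));
  });
}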
For starters, there is no requirement that you use app.handleRequest() at all. (You don't even need to use app, but that's not your question.) It is there as a convenience method to map intents to functions, but you're free to do that task via other means. I tend to do things like
function handler1( app ){
  // Does the logic asynchronously, but no voice response for some situation.
  // Returns a Promise that resolves to an Object about what we did
}
function handler2( app ){
  // Does the logic asynchronously, but no voice response for some situation.
  // Returns a Promise that resolves to an Object about what we did
}
function chooseHandler( app ){
  // Some logic which may be async, but which returns
  // a Promise that will resolve to either handler1 or 2
}
function sendResponse( app, handlerResult ){
  // Uses the handlerResult object to determine what to
  // send back to the user. This may involve async database calls.
  // Calls either app.ask(), or app.tell(), or a related call.
  // Returns a Promise indicating success
}

chooseHandler( app )
  .then( handler => handler( app ) )
  .then( result => sendResponse( app, result ) )
  .catch( err => {
    // Log the error or something
  } );
(This is meant to be illustrative. It isn't actually tested, nor guaranteed to be the highest quality or best design for a particular purpose.)
In this, the chooseHandler() is usually synchronous, but could be async if we need to do things like check if the user has been with us before and choose a different handler based on that. The handlers are almost always async, and the important bit is that they don't actually pick the phrases that are sent back to the user - that is done in sendResponse() and also may involve a database call which is async.
In Meteor (a NodeJS framework), there is a function called Meteor.userId() that always returns the userId belonging to the current session, as long as I am in a function that was originally called from a Meteor method.
The Meteor.userId() function utilizes Meteor's DDP?._CurrentInvocation?.get()?.connection. So somehow this "magic line" gets my current DDP connection. This also works when buried deep inside of callbacks.
So somehow Meteor sets a context that it refers to. I also want to do this kind of trick for another API that doesn't utilize Meteor's DDP but is a plain HTTP API.
What I want to do:
doActualStuff = function(param1, param2, param3) {
  // here, I am buried deep inside of calls to functions
  // but the function at the top of the stack trace was
  // `answerRequest`.
  // I want to access its `context` here but without
  // passing it through all the function calls.
  // What I want is something like this:
  context = Framework.getRequestContext()
}

answerRequest = function(context) {
  //do some stuff
  someFancyFunctionWithCallback(someArray, function(arrayPosition) {
    aFuncCallingDoActualStuff(arrayPosition);
  })
}
I can wrap the call to answerRequest if this is necessary.
I don't know how Meteor does it, but it doesn't look like magic. It looks like Meteor is a global object (window.Meteor in the browser or global.Meteor in Node.js) that has some functions that refer to some stateful object that exists in the context where they were defined.
Your example could be achieved by having answerRequest (or whatever function calls answerRequest, or whatever you want) call a setRequestContext function that sets the state that will be returned by getRequestContext. If you wanted, you could have an additional function, clearRequestContext, that cleans up after the request is over. (Of course, if you have async code you'll need to take care that the latter isn't called until any code that needs that data has finished running.)
This is rudimentary, but it might look something like the below snippet. window.Framework does not need to be defined in the same file as the rest of the code; it just needs to be initialized before answerRequest is called.
let _requestContext = null;

window.Framework = {
  setRequestContext(obj) {
    _requestContext = obj;
  },
  getRequestContext() {
    return _requestContext;
  },
  clearRequestContext() {
    _requestContext = null;
  },
};

const doActualStuff = function(param1, param2, param3) {
  const context = Framework.getRequestContext();
  console.log('Request context is', context);
}

const answerRequest = function(context) {
  Framework.setRequestContext(context);
  setTimeout(() => {
    try {
      doActualStuff();
    } finally {
      Framework.clearRequestContext();
    }
  }, 100);
}

answerRequest({ hello: 'context' });
Rather than copying and pasting my code onto here, I have uploaded it to GitHub. The RequireJS module has a dependency on jquery.signalr, which in turn has a dependency on jquery, but it also has a dependency on the JavaScript held in /signalr/hubs. There is a bit of config to do with require.config.
Basically, what is happening is that the first time you load the page, the connection is made to the hubs within SignalR and the "server-side" code is executed and does the desired thing. When you refresh the page, it does not. All client-side code is called, so for example:
var myViewModel = new MyViewMode();
myViewModel.init();
and within your init method you have
var connection = $.connection.myHub;
this.init = function() {
  connection.server.myMethod();
}
this would then go off to
public class MyHub : Hub
{
    public void MyMethod()
    {
        Client.Request.populateSomeInformation(); // I think it's request but I'm doing this from memory!
    }
}
and then call
connection.client.populateSomeInformation = function () { ... };
but this doesn't get called :(
It looks like a connection has been made (using good old console.log() to see what it outputs), and indeed when debugging the project it executes the code within the hub, but there is no response made back to the JavaScript.
So wonderful people of the internet, where am I going wrong? Do I need to check the state of $.connection.hub.start(); before attempting to start it again?
Time for beer :)
I believe it should be
connection.client.populateSomeInformation = function () { ... };
(not connection.server)
http://www.asp.net/signalr/overview/hubs-api/hubs-api-guide-javascript-client#callclient
(observations on the code you have on github right now)
var isLoaded = false;
// ... some code that doesn't change isLoaded ...
if (isLoaded == false) {
  scrollIntervalId = window.setInterval(function () {
    signalRLoaded();
  }, 30);
}
I think isLoaded will always be false at this point. Not sure what you intended this to accomplish.
var connection = $.connection.hub.start();
I don't think you're supposed to open the connection before defining any client functions. I don't see any client functions being defined here, so maybe you're doing that somewhere else? I don't know if it really matters other than if the server attempts to call a client function that hasn't yet been defined...
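For reference, the usual ordering looks something like this sketch (the hub and method names are just placeholders):

// Define client callbacks first, then start the connection.
var hub = $.connection.myHub;

hub.client.populateSomeInformation = function (data) {
    console.log('populateSomeInformation called with', data);
};

$.connection.hub.start().done(function () {
    // Safe to call server methods once connected.
    hub.server.myMethod();
});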
function SignalRReady(callback) {
  if (isLoaded) {
    callback(connection);
  } else {
    readyCalls = callback;
  }
  return SignalRReady;
}

SignalRReady.version = "1.0.0";

SignalRReady.load = function(name, request, onLoad, config) {
  if (config.isBuild) {
    onLoad();
  } else {
    SignalRReady(onLoad);
  }
};

return SignalRReady;
I'm confused by this bit of code, probably because I don't see how it's being used. Is this an attempt at a kind of singleton? I see that SignalRReady is the "class" being returned for the module. You're not really returning an object; you're returning a constructor, which implies that you're instantiating it in other places, something like
define(['SignalRReady'], function(sigR)
{
  var srr = new sigR();
});
But then you have that load function defined that calls the constructor and makes this look all weird. How are you using this?
Anyways, I'm thinking you might be hitting some kind of race condition where the client function may not always be available at the time the server is trying to call it.
(additional comments/code 2013-09-06)
Your connection object is actually a jQuery promise ( http://api.jquery.com/category/deferred-object/ ).
If you're unfamiliar with promises, think of them generically as a queue of callbacks to be executed later. In this case, when connected, all the callbacks will be executed (in the order they were added). If a callback is added after being connected, it will get executed immediately. This is how your code is working now. You add the callback to the .done queue after the connection is made, and it is executed immediately.
If you insist on creating the connection object yourself, then you do not need to use the stateChanged event. You just add the callback to the .done queue:
define(function()
{
  function signalRReady(callback)
  {
    if (window.connection == undefined) {
      window.connection = $.connection.hub.start();
    }
    window.connection.done(callback);
  }

  signalRReady.version = "1.0.0";

  return signalRReady;
});
However, I believe it's not a good idea to initiate the connection yourself. Because your module isn't a complete wrapper around SignalR such that people would only use your module to do SignalR stuff, you are not guaranteed (and cannot expect) that other code will not initiate the connection. Especially if someone is adding your module to an existing codebase.
Your module is simply adding a new event, so keep it simple. Take the callback and execute it yourself when appropriate:
define(function()
{
  function signalRReady(callback)
  {
    $.connection.hub.stateChanged(function (state)
    {
      if (state.newState === $.signalR.connectionState.connected)
      {
        callback();
      }
    });
  }

  signalRReady.version = "1.0.0";

  return signalRReady;
});
Nowadays, promises are pretty popular. You might want to implement a promise-based module like:
define(function()
{
  var deferred = $.Deferred();

  $.connection.hub.stateChanged(function (state)
  {
    if (state.newState === $.signalR.connectionState.connected)
    {
      // executes all callbacks attached by the "ready" function below
      deferred.resolve();
    }
  });

  return {
    ready: function(callback)
    {
      deferred.done(callback);
    },
    version: "1.0.0"
  };
});
If callbacks are attached after the connection has been made, they are executed immediately.
Also, notice this example module's factory function returns an object instead of a function. Since RequireJS will pass the same instance around to any module that requires it, state is maintained, so we can use local variables instead of globals.
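For completeness, consuming the promise-based module from another RequireJS module might look like this sketch (the module name 'signalRReady' and myHub are assumptions):

define(['signalRReady'], function(signalRReady)
{
  signalRReady.ready(function()
  {
    // Safe to call hub server methods here; the connection is established.
    $.connection.myHub.server.myMethod();
  });
});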