When to reject/resolve a promise - node.js

I am thinking about when exactly I need to reject a promise.
I found a couple of questions regarding this topic, but could not find a proper answer.
When should I reject a promise?
This article
http://howtonode.org/6666a4b74d7434144cff717c828be2c3953d46e7/promises
says:
Resolve: A successful Promise is 'resolved' which invokes the success listeners that are waiting and remembers the value that was resolved for future success listeners that are attached. Resolution correlates to a returned value.
Reject: When an error condition is encountered, a Promise is 'rejected' which invokes the error listeners that are waiting and remembers the value that was rejected for future error listeners that are attached. Rejection correlates to a thrown exception.
Is this the guiding principle?
That one should only reject a promise if an exception occurred?
But in case of a function like
findUserByEmail()
I'd expect the function to return a user, so that I can continue the chain without verifying the result:
findUserByEmail()
  .then(sendWelcomeBackEmail)
  .then(doSomeNiceStuff)
  .then(etc..)
What are the best / common practices?

In general you can think of rejecting as being analogous to a synchronous throw and fulfilling as being analogous to a synchronous return. You should reject whenever the function is unsuccessful in some way. That could be a timeout, a network error, incorrect input etc. etc.
Rejecting a promise, just like throwing an exception, is useful for control flow. It doesn't have to represent a truly unexpected error; it can represent a problem that you fully anticipate and handle:
function getProfile(email) {
  return getProfileOverNetwork(email)
    .then(null, function (err) {
      // something went wrong getting the profile
      if (err.code === 'NonExistantUser') {
        return defaultUser;
      } else if (profileCached(email)) {
        return getProfileFromCache(email); // fall back to cached profile
      } else {
        throw err; // sometimes we don't have a nice way of handling it
      }
    });
}
The rejection lets us jump over the normal success behavior until we get to a method that knows how to handle it. As another example, we might have some function that's deeply nested at the bottom of the application's stack, which rejects. That rejection might not be handled until the very top of the stack, where we could log it. The point is that rejections travel up the stack just like exceptions do in synchronous code.
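As a rough sketch of that bubbling behavior (loadConfig, readFile and runServer are made-up names; assume readFile returns a promise):

function loadConfig() {
  // Deep in the stack: reject on failure instead of returning a sentinel value.
  return readFile('config.json')
    .then(function (text) { return JSON.parse(text); }); // a parse error here also rejects
}

function startApp() {
  return loadConfig().then(runServer); // no error handling at this level
}

// Top of the stack: the rejection has skipped every intermediate .then()
startApp().then(null, function (err) {
  console.error('failed to start:', err);
});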
In general, wherever possible, if you are struggling to write some asynchronous code, you should think "what would I write if this were synchronous?" It's usually a fairly simple transformation to get from that to the promised equivalent.
A nice example of where rejected promises might be used is in an exists method:
function exists(filePath) {
  return stat(filePath) // where stat gets last updated time etc. of the file
    .then(function () { return true; }, function () { return false; });
}
Notice how in this case, the rejection is totally expected and just means the file does not exist. Notice also how it parallels the synchronous function though:
function existsSync(filePath) {
  try {
    statSync(filePath);
    return true;
  } catch (ex) {
    return false;
  }
}
Your Example
Returning to your example:
I would normally choose to reject the promise resulting from findUserByEmail if no user was found. It's something you fully expect to happen sometimes, but it's the exception to the norm, and should probably be handled pretty similarly to all other errors. Similarly, if I were writing a synchronous function I would have it throw an exception.
Sometimes it might be useful to just return null instead, but this depends on your application's logic and is probably not the best way to go.
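A sketch of what that could look like (queryDatabase is a made-up async lookup; the other names come from the question):

function findUserByEmail(email) {
  return queryDatabase({ email: email }) // hypothetical async lookup returning a promise
    .then(function (user) {
      if (!user) {
        throw new Error('UserNotFound'); // expected sometimes, but still the exception
      }
      return user; // the normal case: resolve with the user
    });
}

findUserByEmail('someone@example.com')
  .then(sendWelcomeBackEmail)
  .then(doSomeNiceStuff)
  .then(null, function (err) {
    // one place to handle "user not found" and any other failure in the chain
  });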

I know where you're coming from. Q and its documentation can quite easily have you believe that deferred/promise rejection is all about exception handling.
This is not necessarily the case.
A deferred can be rejected for whatever reason your application requires.
Deferreds/promises are all about handling responses from asynchronous processes, and every asynchronous process can result in a variety of outcomes, some of which are "successful" and some "unsuccessful". You may choose to reject your deferred for whatever reason, regardless of whether the outcome was nominally successful or unsuccessful, and without an exception ever having been thrown, either in JavaScript or in the asynchronous process.
You may also implement a timeout on an asynchronous process, in which case you might reject the deferred without any response (successful or unsuccessful) having been received. In fact, for timeouts, this is what you would typically do.
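For example, a minimal timeout sketch with a Q deferred (the one-second limit and getRemoteData are made up for illustration):

var Q = require('q');

function getWithTimeout() {
  var deferred = Q.defer();

  // Reject if no response (successful or unsuccessful) arrives within one second.
  var timer = setTimeout(function () {
    deferred.reject(new Error('timed out'));
  }, 1000);

  getRemoteData(function (err, data) { // hypothetical asynchronous process
    clearTimeout(timer);
    if (err) {
      deferred.reject(err);
    } else {
      deferred.resolve(data);
    }
  });

  return deferred.promise;
}

Because a deferred can only settle once, a response that arrives after the timeout is simply ignored.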

Related

Node.js/NestJS: force an HTTP call to go out (making it a hotter Observable)

I'm developing a library for some end-users that makes calls to an API via HttpService; the end-user may or may not use the result.
end-user consumer <==> my library <=(HTTP)=> remote API
Now HttpService uses cold Observables, meaning that it will trigger the HTTP call only if we subscribe to the generated Observable. If I return this Observable to my end-user and they don't subscribe to it, because in their particular use case they don't need the result, the API will never be called.
For that reason, I tend to subscribe to the Observable myself before returning it to the end-user, but I'm not sure it's an elegant or safe solution, hence this post. My code looks like this (simplified for the post, and using HttpService from @nestjs/common):
ServiceToAccessAPIData.ts
export class ServiceToAccessAPIData {
  ...
  callToEndPoint(): Observable<any> {
    const obs = httpService.get(someUrl)
      .pipe(
        shareReplay(1) // see A
      );
    obs.subscribe(() => {}, () => {}); // see B
    return obs;
  }
  ...
}
With:
A: because I want to avoid each subscription triggering its own HTTP call, I need to share the call result between subscribers; and because I subscribe now (in B) while the end-user might subscribe later (even after the API has sent its result) and obviously shouldn't miss it, I use the shareReplay(1) operator, which stores the last result and emits it to any new subscriber.
B: I subscribe now to force the HTTP call to go out immediately. We provide a success handler and, especially, an error handler, because an unhandled exception would crash the Node.js server.
Now, this seems a bit elaborate for such a trivial use case: 'provide a safe way for the end-user to make a call, whether they use the result or not'.
Is there a better, simpler, more elegant way to do so?
Also, performance-wise, would there be any downside to this technique?
HttpService Observables emit only once and then complete, so the subscriptions will be automatically unsubscribed. But I'm wondering what becomes of the underlying source observed by shareReplay. I suppose it will be garbage-collected once nothing references the Observable I return anymore, but I'm not sure.
Using Promises (as suggested by MoazzamArif)
The questions above still remain (especially if one needs to stick with Observables), but a solution to the initial problem could be to return the result of .toPromise() to the end-user.
return obs.toPromise();
.toPromise() subscribes to the Observable (making it perform the HTTP call immediately) and wraps it in a Promise, which resolves to the last value emitted by the Observable once it completes (in the case of HttpService, as soon as the API returns a result).
=> This actually answers the need 'provide a safe way for the end-user to make a call, whether they use the result or not'.
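For illustration, a minimal sketch of what the library method and the end-user code might then look like (reusing the simplified names from above; the consumer-side handlers are hypothetical):

// Library side: toPromise() subscribes, so the HTTP call goes out immediately.
callToEndPoint(): Promise<any> {
  return httpService.get(someUrl).toPromise();
}

// Consumer side: use the result or ignore it; attaching .catch() avoids
// an unhandled rejection if the call fails.
service.callToEndPoint()
  .then(result => { /* use result */ })
  .catch(err => { /* handle or log the error */ });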
Now let's dig further. I suppose this is an exotic and strange scenario to consider, but how could we handle such situation:
an error occurs during the HTTP call
the end-user chains .then() and .catch(), but only after the error occurred
=> because .toPromise() subscribes to the Observable directly but provides no error handler, the error will bubble up and crash the app
=> if we directly chain .catch(err => anyValue) to the returned Promise, the error will be caught, but the end-user will receive anyValue in .then() and won't be able to handle the original error
=> if we directly chain .catch(err => { someHandling(); return Promise.reject(err); }) to the returned Promise, then because the user has not chained their .catch() yet, the new error coming from the rejected Promise will bubble up and crash the app
Therefore, I suppose there's no solution, and that the above scenario is simply impossible to handle (it was possible to handle with the Observable .shareReplay(1) solution).

lambda trigger callback vs context.done

I was following the guide here for setting up a presignup trigger.
However, when I used callback(null, event) my Lambda function would never actually return and I would end up getting an error:
{ code: 'UnexpectedLambdaException',
  name: 'UnexpectedLambdaException',
  message: 'arn:aws:lambda:us-east-2:642684845958:function:proj-dev-confirm-1OP5DB3KK5WTA failed with error Socket timeout while invoking Lambda function.' }
I found a similar link here that says to use context.done().
After switching it works perfectly fine.
What's the difference?
exports.confirm = (event, context, callback) => {
  event.response.autoConfirmUser = true;
  context.done(null, event);
  //callback(null, event); does not work
}
Back in the original Lambda runtime environment for Node.js 0.10, Lambda provided helper functions in the context object: context.done(err, res), context.succeed(res), and context.fail(err).
This was formerly documented, but has been removed.
Using the Earlier Node.js Runtime v0.10.42 is an archived copy of a page that no longer exists in the Lambda documentation, that explains how these methods were used.
When the Node.js 4.3 runtime for Lambda was launched, these remained for backwards compatibility (and remain available but undocumented), and callback(err, res) was introduced.
Here's the nature of your problem, and why the two solutions you found actually seem to solve it.
Context.succeed, context.done, and context.fail however, are more than just bookkeeping – they cause the request to return after the current task completes and freeze the process immediately, even if other tasks remain in the Node.js event loop. Generally that’s not what you want if those tasks represent incomplete callbacks.
https://aws.amazon.com/blogs/compute/node-js-4-3-2-runtime-now-available-on-lambda/
So with callback, Lambda functions now behave in a more paradigmatically correct way, but this is a problem if you intend for certain objects to remain on the event loop during the freeze that occurs between invocations -- unlike the old (deprecated) done fail succeed methods, using the callback doesn't suspend things immediately. Instead, it waits for the event loop to be empty.
context.callbackWaitsForEmptyEventLoop -- default true -- was introduced so that you can set it to false for those cases where you want the Lambda function to return immediately after you call the callback, regardless of what's happening in the event loop. The default is true because false can mask bugs in your function and can cause very erratic/unexpected behavior if you fail to consider the implications of container reuse -- so you shouldn't set this to false unless and until you understand why it is needed.
A common reason false is needed would be a database connection made by your function. If you create a database connection object in a global variable, it will have an open socket, and potentially other things like timers, sitting on the event loop. This prevents the callback from causing Lambda to return a response, until these operations are also finished or the invocation timeout timer fires.
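A minimal sketch of that situation, assuming the mysql package and a connection reused across invocations:

const mysql = require('mysql');

// Created once per container and reused across invocations; its open socket
// keeps the event loop non-empty between invocations.
const connection = mysql.createConnection({ host: 'db.example.com', user: 'app' });

exports.handler = (event, context, callback) => {
  // Return as soon as the callback is called, even though the connection's
  // socket is still sitting on the event loop.
  context.callbackWaitsForEmptyEventLoop = false;

  connection.query('SELECT 1', (err, results) => {
    callback(err, results);
  });
};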
Identify why you need to set this to false, and if it's a valid reason, then it is correct to use it.
Otherwise, your code may have a bug that you need to understand and fix, such as leaving requests in flight or other work unfinished, when calling the callback.
So, how do we parse the Cognito error? At first, it seemed pretty unusual, but now it's clear that it is not.
When executing a function, Lambda will throw an error indicating that the task timed out after the configured number of seconds. You should find this to be what happens when you test your function in the Lambda console.
Unfortunately, Cognito appears to have taken an internal design shortcut when invoking a Lambda function: instead of waiting for Lambda to time out the invocation (which could tie up resources inside Cognito) or imposing its own explicit timer on the maximum duration Cognito will wait for a Lambda response, it relies on a lower-layer socket timer to constrain this wait... thus an "unexpected" error is thrown while invoking the Lambda function.
Further complicating interpreting the error message, there are missing quotes in the error, where the lower layer exception is interpolated.
To me, the problem would be much more clear if the error read like this:
'arn:aws:lambda:...' failed with error 'Socket timeout' while invoking Lambda function
This format would more clearly indicate that while Cognito was invoking the function, it threw an internal Socket timeout error (as opposed to Lambda encountering an unexpected internal error, which was my original -- and incorrect -- assumption).
It's quite reasonable for Cognito to impose some kind of response time limit on the Lambda function, but I don't see this documented. I suspect a short timeout on your Lambda function itself (making it fail more promptly) would cause Cognito to throw a somewhat more useful error, but in my mind, Cognito should have been designed to include logic to make this an expected, defined error, rather than categorizing it as "unexpected."
As an update, the Node.js 10.x runtime supports an async handler function, which uses return and throw statements to return a success or error response, respectively. Additionally, if your function performs asynchronous tasks, you can return a Promise, where you would then use resolve or reject to signal success or error, respectively. Either approach simplifies things by not requiring context or callback to signal completion to the invoker, so your Lambda function could look something like this:
exports.handler = async (event) => {
  // perform tasking...
  const data = doStuffWith(event)
  // later encounter an error situation
  throw new Error('tell invoker you encountered an error')
  // finished tasking with no errors
  return { data }
}
Of course you can still use context, but it's not required to signal completion.

Is it safe to skip calling a callback if no action is needed in Node.js?

scenario 1
function a(callback) {
  console.log("not calling callback");
}

a(function (callback_res) {
  console.log("callback_res", callback_res);
});
scenario 2
function a(callback) {
  console.log("calling callback");
  callback(true);
}

a(function (callback_res) {
  console.log("callback_res", callback_res);
});
Will function a wait for the callback and not terminate in scenario 1? However, the program terminates in both scenarios.
The problem is not safety but intention. If a function accepts a callback, it's expected that it will be called at some point. If it ignores the argument it accepts, the signature is misleading.
This is a bad practice because the function signature gives a false impression of how the function works.
It may also cause a "parameter is unused" warning in linters.
Will function a wait for the callback and not terminate in scenario 1?
The function doesn't contain asynchronous code and won't wait for anything. The fact that callbacks are commonly used in asynchronous control flow doesn't mean that they are asynchronous per se.
Will function a wait for the callback and not terminate in scenario 1?
No. There is nothing in the code you show that waits for a callback to be called.
Passing a callback to a function is just like passing an integer to a function. The function is free to use it or not, and it doesn't mean anything more than that to the interpreter. The JS interpreter has no special logic to "wait for a passed callback to get called". It has no effect one way or the other on when the program terminates. It's just a function argument that the called function can decide whether to use or ignore.
As another example, it used to be common to pass two callbacks to a function, one was called upon success and one was called upon error:
function someFunc(successFn, errorFn) {
  // do some operation and then call either successFn or errorFn
}
In this case, it was pretty clear that one of these was going to get called and the other was not. There's no need (from the JS interpreter's point of view) to call a passed callback. That's purely the prerogative of the logic of your code.
Now, it would not be a good practice to design a function that shows a callback in the calling signature and then never, ever calls that callback. That's just plain wasteful and a misleading design. There are many cases of callbacks that are sometimes called and sometimes not, depending upon circumstances. Array.prototype.forEach is one such example. If you call array.forEach(fn) on an empty array, the callback is never called. But, of course, if you call it on a non-empty array, it is called.
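For example:

[].forEach(function (item) {
  console.log('never runs'); // empty array: the callback is never called
});

[1, 2].forEach(function (item) {
  console.log('runs for', item); // non-empty array: called once per element
});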
If your function carries out asynchronous operations and the point of the callback is to communicate when the asynchronous operation is done and whether it concluded with an error or a value, then it would generally be bad form to have code paths that never call the callback, because it would be natural for a caller to assume the callback is going to get called eventually. I can imagine there might be some exceptions to this, but they had better be documented really well in the doc/comments for the function.
For asynchronous operations, your question reminds me somewhat of this: Do never resolved promises cause memory leak? which might be useful to read.

Promise is neither resolved nor rejected. What could be the reason for this?

I am running into a strange problem. I am using a module to look up the geo location from the IP address. The lookup method is synchronous by default.
I converted the method to async using bluebird, but its promise never gets resolved or rejected!
Here is the snippet:
var Promise = require('bluebird');
var geoip = Promise.promisifyAll(require('geoip-lite'));

geoip.lookupAsync('52.39.138.72').then((r) => {
  console.log(r);
}).catch((err) => {
  console.log(err);
});

console.log(geoip.lookup('52.39.138.72').country + '^^^^');
In the above snippet, the last console.log always gets printed but neither of the statement inside then or catch gets executed. What could be the reason for this?
In the above snippet, the last console.log always gets printed but neither of the statement inside then or catch gets executed. What could be the reason for this?
The function you are trying to promisify does not follow the required asynchronous calling convention so promisifying it this way will not work.
For Bluebird's promisify to work properly, the function you promisify must follow the node.js async calling convention. That means the function must take a callback as its last argument and that callback must be called with two arguments err and result when the operation completes. If the function does not follow this convention, then promisifying it will not work.
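For contrast, here is a sketch of a function that does follow that convention and can therefore be wrapped with Bluebird's promisify (lookupViaService is a made-up example):

var Promise = require('bluebird');

// The last argument is a callback invoked as callback(err, result).
function lookupViaService(ip, callback) {
  setImmediate(function () {
    callback(null, { country: 'US' }); // placeholder result
  });
}

var lookupPromised = Promise.promisify(lookupViaService);

lookupPromised('52.39.138.72')
  .then(function (r) { console.log(r.country); })
  .catch(function (err) { console.log(err); });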
And there is really no reason to take a synchronous operation and promisify it either. Promisifying it won't suddenly make its functionality asynchronous.
So, your promise is never getting resolved or rejected because the underlying function doesn't use a callback that gets called with the right calling convention.
So, if geoip.lookup('52.39.138.72') is completely synchronous (as it appears to be) and gets called this way, then the underlying operation isn't asynchronous so there is no reason to even try to promisify it.
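If all you want is a promise-shaped interface around the synchronous lookup, a plain wrapper is enough; note that nothing actually becomes asynchronous (this is just a sketch of that idea, reusing geoip from the question):

function lookupAsPromise(ip) {
  try {
    return Promise.resolve(geoip.lookup(ip)); // still runs synchronously
  } catch (err) {
    return Promise.reject(err);
  }
}

lookupAsPromise('52.39.138.72').then(function (r) {
  console.log(r && r.country);
});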
If you explain what problem you're actually trying to solve by promisifying it, we could likely offer another way (perhaps in a new question). One thing to keep in mind about Stack Overflow: if you describe your actual problem and show us the relevant code, rather than asking about issues with one attempted solution, we are much more likely to be able to help you and to offer the best solution.

NodeJS: process.nextTick vs Instant Callbacking

I write lots of modules which look like this:
function get(index, callback) {
  if (cache[index] === null) {
    request(index, callback); // Queries database to get data.
  } else {
    callback(cache[index]);
  }
}
Note: it's a bit simplified version of my actual code.
That callback is either called during the same execution or some time later. This means users of the module can't be sure which code runs first.
My observation is that such a module reintroduces some of the problems of multi-threading that the JavaScript engine otherwise shields us from.
Question: should I use process.nextTick or ensure it's safe for the callback to be called outside the module?
It depends entirely on what you do in the callback function. If you need to be sure the callback hasn't fired yet when get returns, you will need the process.nextTick flow; in many cases you don't care when the callback fires, so you don't need to delay its execution. It is impossible to give a definitive answer that will apply in all situations; it should be safe to always defer the callback to the next tick, but it will probably be a bit less efficient that way, so it is a tradeoff.
The only situation I can think of where you would need to defer the callback to the next tick is if you actually need to set something up for it after the call to get but before the call to callback. This is perhaps a rare situation that might also indicate a need for improvement in the actual control flow; you should not rely at all on when exactly your callback is called, so whatever environment it uses should already be set up at the point where get is called.
There are situations in event-based control flow (as opposed to callback-based), where you might need to defer the actual event firing. For example:
function doSomething() {
  var emitter = new EventEmitter();
  var cached = findCachedResultSomehow();
  if (cached) {
    process.nextTick(function () {
      emitter.emit('done', cached);
    });
  } else {
    asyncGetResult(function (result) {
      emitter.emit('done', result);
    });
  }
  return emitter;
}
In this case, you will need to defer the emit in the case of a cached value, because otherwise the event will be emitted before the caller of doSomething has had the chance to attach a listener. You don't generally have this consideration when using callbacks.
http://blog.izs.me/post/59142742143/designing-apis-for-asynchrony
If you're doing callbacks internally, do whichever is suitable.
If you're creating a module used by other people, asynchronous callbacks should always be asynchronous.
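Applied to the get function from the question, that guideline means deferring the cached branch so the callback is always asynchronous (a sketch; setImmediate would also work here):

function get(index, callback) {
  if (cache[index] === null) {
    request(index, callback); // already asynchronous
  } else {
    var value = cache[index];
    process.nextTick(function () {
      callback(value); // now asynchronous in the cached case too
    });
  }
}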
