I am writing a library that retries failed promise "chain parts": I collect methods to be called and queue the next phase only after the previous one has succeeded.
Conceptually it's all worked out, but my problem turned out to be more fundamental. This is where I arrived while debugging:
this.runningPromise
.then(function() {
return Promise.reject();
})
//;
//this.runningPromise
.then(this.promiseResolver.bind(this))
.catch(this.promiseRejector.bind(this))
;
This works: promiseRejector kicks in. When I uncomment those two lines, it doesn't work; promiseResolver gets called instead.
I can't find anything about this anywhere. Node.js 6.10.3 with Browserify on Windows, Chrome.
If you uncomment those two rows, you are calling .then() on this.runningPromise twice, so you create two separate chains, each with its own callbacks.
If you keep the rows commented, everything stays in one chain (one promise with its associated callbacks).
It is better to assign the new promise to a variable, and then you can use it multiple times:
let newPromise = this.runningPromise
.then(function() {
return Promise.reject();
});
newPromise
.then(this.promiseResolver.bind(this))
.catch(this.promiseRejector.bind(this));
With the above code you can use newPromise multiple times.
this.runningPromise does not change when you chain other callbacks onto it, i.e. the promise referenced by this.runningPromise was never rejected. So you have to assign the new promise returned by .then() to a new reference:
let something = this.runningPromise
.then(function() {
return Promise.reject();
});
something
.then(this.promiseResolver.bind(this))
.catch(this.promiseRejector.bind(this));
Assume an Express route that makes a call to Mongoose and has to be async so it can await the Mongoose find(). Also assume we are receiving XML but have to change it to JSON, and that also needs to be async so I can call await inside of it.
If I do this:
app.post('/ams', async (req, res) => {
try {
xml2js.parseString(xml, async (err, json) => {
if (err) {
throw new XMLException();
}
// assume many more clauses here that can throw exceptions
res.status(200);
res.send("Data saved")
});
} catch(err) {
if (err instanceof XML2JSException) {
res.status(400);
message = "Malformed XML error: " + err;
res.send(message);
}
}
});
The server hangs forever. I'm assuming the async/await means that the server hits a timeout before something concludes.
If I put this:
res.status(200);
res.send("Data saved")
on the line before the catch(), then that is returned, but it is the only thing ever returned. The client gets a 200, even if an XMLException is thrown.
I can see the XMLException thrown in the console, but I cannot get a 400 sent back. I cannot get anything in that catch block to execute in a way that communicates the response to the client.
Is there a way to do this?
In a nutshell, there is no way to propagate an error from the xml2js.parseString() callback up to the higher code because that parent function has already exited and returned. This is how plain callbacks work with asynchronous code.
To understand the problem here, you have to follow the code flow for xml2js.parseString() in your function. If you instrumented it like this:
app.post('/ams', async (req, res) => {
try {
console.log("1");
xml2js.parseString(xml, async (err, json) => {
console.log("2");
if (err) {
throw new XMLException();
}
// assume many more clauses here that can throw exceptions
res.status(200);
res.send("Data saved")
});
console.log("3");
} catch (err) {
if (err instanceof XML2JSException) {
res.status(400);
message = "Malformed XML error: " + err;
res.send(message);
}
}
console.log("4");
});
Then, you would see this in the logs:
1 // about to call xml2js.parseString()
3 // after the call to xml2js.parseString()
4 // function about to exit
2 // callback called last after function returned
The outer function has finished and returned BEFORE your callback has been called. This is because xml2js.parseString() is asynchronous and non-blocking. That means that calling it just initiates the operation and then it immediately returns and the rest of your function continues to execute. It works in the background and some time later, it posts an event to the Javascript event queue and when the interpreter is done with whatever else it was doing, it will pick up that event and call the callback.
The callback will get called with an almost empty call stack. So, you can't use traditional try/catch exceptions with these plain, asynchronous callbacks. Instead, you must either handle the error inside the callback or call some function from within the callback to handle the error for you.
When you try to throw inside that plain, asynchronous callback, the exception just goes back into the event handler that triggered the completion of the asynchronous operation and no further, because there's nothing else on the call stack. The try/catch you show in your code cannot catch that exception. In fact, no other code can catch that exception - only code within the callback.
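For illustration, handling the error entirely inside the callback (the plain-callback approach described above) would look roughly like this sketch, reusing the xml variable and route from the question:
app.post('/ams', (req, res) => {
    xml2js.parseString(xml, (err, json) => {
        if (err) {
            // handle the error right here, inside the callback
            res.status(400).send("Malformed XML error: " + err);
            return;
        }
        // do something with json here
        res.status(200);
        res.send("Data saved");
    });
});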
This is not a great way to write code, but nodejs survived with it for many years (by not using throw in these circumstances). However, this is why promises were invented and when used with the newer language features async/await, they provide a cleaner way to do things.
And, fortunately in this circumstance xml2js.parseString() has a promise interface already.
So, you can do this:
app.post('/ams', async (req, res) => {
try {
// get the xml data from somewhere
const json = await xml2js.parseString(xml);
// do something with json here
res.send("Data saved");
} catch (err) {
console.log(err);
res.status(400).send("Malformed XML error: " + err.message);
}
});
With the xml2js.parseString() interface, if you do NOT pass it a callback, it will return a promise instead that resolves to the final value or rejects with an error. This is not something all asynchronous interfaces can do, but is fairly common these days if the interface had the older style callback originally and then they want to now support promises. Newer interfaces are generally just built with only promise-based interfaces. Anyway, per the doc, this interface will return a promise if you don't pass a callback.
You can then use await with the promise that the function returns. If the promise resolves, the await will retrieve the resolved value of the promise. If the promise rejects, then because you are awaiting it, the rejection will be caught by the surrounding try/catch. FYI, you can also use .then() and .catch() with the promise (a sketch of that form is shown below), but in many cases async and await are simpler, so that's what I've shown here.
So, in this code, if there is invalid XML, then the promise that xml2js.parseString() returns will reject and control flow will go to the catch block where you can handle the error.
If you want to capture only the xml2js.parseString() error separately from other exceptions that could occur elsewhere in your code, you can put a try/catch around just it (though this code didn't show anything else that would likely throw an exception so I didn't add another try/catch). In fact, this form of try/catch can be used pretty much like you would normally use it with synchronous code. You can throw up to a higher level of try/catch too.
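For completeness, here is roughly what the .then()/.catch() form mentioned above would look like, under the same assumption that parseString() returns a promise when no callback is passed:
app.post('/ams', (req, res) => {
    xml2js.parseString(xml)
        .then((json) => {
            // do something with json here
            res.send("Data saved");
        })
        .catch((err) => {
            console.log(err);
            res.status(400).send("Malformed XML error: " + err.message);
        });
});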
A few other notes, many people who first start programming with asynchronous operations try to just put await in front of anything asynchronous and hope that it solves their problem. await only does anything useful when you await a promise so your asynchronous function must return a promise that resolves/rejects when the asynchronous operation is complete for the await to do anything useful.
It is also possible to take a plain callback asynchronous function that does not have a promise interface and wrap a promise interface around it. You pretty much never want to mix promise interface functions with plain callback asynchronous operations because error handling and propagation is a nightmare with a mixed model. So, sometimes you have to "promisify" an older interface so you can use promises with it. In most cases, you can do that with util.promisify() built into the util library in nodejs. Fortunately, since promises and async/await are the modern and easier way to do asynchronous things, most newer asynchronous interfaces in the nodejs world come with promise interfaces already.
You are throwing exceptions inside the callback function, so you can't expect the catch block of the router to catch them.
One way to handle this is by using util.promisify.
const util = require('util');
const parseString = util.promisify(xml2js.parseString);

try {
    const json = await parseString(xml);
    // use json here
} catch (err) {
    // ...
}
I'm writing an API where I'm having a bit of trouble with the error handling. What I'm unsure about is whether the first code snippet is sufficient or if I should mix it with promises as in the second code snippet. Any help would be much appreciated!
try {
var decoded = jwt.verify(req.params.token, config.keys.secret);
var user = await models.user.findById(decoded.userId);
user.active = true;
await user.save();
res.status(201).json({user, 'stuff': decoded.jti});
} catch (error) {
next(error);
}
Second code snippet:
try {
var decoded = jwt.verify(req.params.token, config.keys.secret);
var user = models.user.findById(decoded.userId).then(() => {
}).catch((error) => {
});
user.active = true;
await user.save().then(() => {
}).catch((error) => {
})
res.status(201).json({user, 'stuff': decoded.jti});
} catch (error) {
next(error);
}
The answer is: it depends.
Catch every error
Makes sense if you want to react differently to every error.
e.g.:
try {
let decoded;
try {
decoded = jwt.verify(req.params.token, config.keys.secret);
} catch (error) {
return response
.status(401)
.json({ error: 'Unauthorized..' });
}
...
However, the code can get quite messy, and you'd want to split the error handling a bit differently (e.g. do the JWT validation in some pre-request hook and let only valid requests reach the handlers, and/or do the findById and save part in a service, and throw once per operation); a sketch of that split follows below.
You might want to throw a 404 if no entity was found with the given ID.
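A rough sketch of that split (the middleware name, the 404 response shape, and the route path are illustrative, not taken from the original code):
// Pre-request hook: only requests with a valid token reach the handlers
function requireValidToken(req, res, next) {
    try {
        req.decoded = jwt.verify(req.params.token, config.keys.secret);
        next();
    } catch (error) {
        res.status(401).json({ error: 'Unauthorized..' });
    }
}

// Handler: reports a missing entity as a 404 and defers other errors to next()
async function activateUser(req, res, next) {
    try {
        const user = await models.user.findById(req.decoded.userId);
        if (!user) {
            return res.status(404).json({ error: 'User not found' });
        }
        user.active = true;
        await user.save();
        res.status(201).json({ user, 'stuff': req.decoded.jti });
    } catch (error) {
        next(error);
    }
}

// app.post('/activate/:token', requireValidToken, activateUser);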
Catch all at once
If you want to react in the same way if a) or b) or c) goes wrong, then the first example looks just fine.
a) var decoded = jwt.verify(req.params.token, config.keys.secret);
b) var user = await models.user.findById(decoded.userId);
user.active = true;
c) await user.save();
res.status(201).json({user, 'stuff': decoded.jti});
I read some articles that suggested the need for a try/catch block for each request. Is there any truth to that?
No, that is not required. try/catch with await works conceptually like try/catch works with regular synchronous exceptions. If you just want to handle all errors in one place, want all your code to abort to one error handler no matter where the error occurs, and don't need to catch one specific error so you can do something special for that particular error, then a single try/catch is all you need.
But, if you need to handle one particular error specifically, perhaps even allowing the rest of the code to continue, then you may need a more local error handler which can be either a local try/catch or a .catch() on the local asynchronous operation that returns a promise.
or if I should mix it with promises as in the second code snippet.
The phrasing of this suggests that you may not quite understand what is going on with await because promises are involved in both your code blocks.
In both your code blocks models.user.findById(decoded.userId); returns a promise. You have two ways you can use that promise.
You can use await with it to "pause" the internal execution of the function until that promise resolves or rejects.
You can use .then() or .catch() to see when the promise resolves or rejects.
Both use the promise returned from your models.user.findById(decoded.userId) function call. So, your phrasing would have been better as "or if I should use a local .catch() handler on a specific promise rather than catching all the rejections in one place".
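Concretely, both forms consume the same promise from findById - a minimal sketch (the surrounding async function and error handling are illustrative):
// Option 1: await the promise (inside an async function)
const user = await models.user.findById(decoded.userId);

// Option 2: use .then()/.catch() on the same promise
models.user.findById(decoded.userId)
    .then((user) => { /* use user here */ })
    .catch((error) => next(error));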
Doing this:
// skip second async operation if there's an error in the first one
async function someFunc() {
try {
let a = await someFunc1();
let b = await someFunc2(a);
return b + something;
} catch(e) {
return "";
}
}
This is analogous to chaining your promise with one .catch() handler at the end:
// skip second async operation if there's an error in the first one
function someFunc() {
return someFunc1().then(someFunc2).catch(e => "");
}
No matter which async function rejects, the same error handler is applied. If the first one rejects, the second one is not executed as flow goes directly to the error handler. This is perfectly fine IF that's how you want the flow to go when there's an error in the first asynchronous operation.
But, suppose you wanted an error in the first function to be turned into a default value so that the second asynchronous operation is always executed. Then, this flow of control would not be able to accomplish that. Instead, you'd have to capture the first error right at the source so you could supply the default value and continue processing with the second asynchronous operation:
// always run second async operation, supply default value if error in the first
async function someFunc() {
let a;
try {
a = await someFunc1();
} catch(e) {
a = myDefaultValue;
}
try {
let b = await someFunc2(a);
return b + something;
} catch(e) {
return "";
}
}
This is analogous to chaining your promises with an extra .catch() handler right after the first operation, in addition to the one at the end:
// always run second async operation, supply default value if error in the first
function someFunc() {
return someFunc1()
.catch(err => myDefaultValue)
.then(someFunc2)
.catch(e => "");
}
Note: This is an example that never rejects the promise that someFunc() returns, but rather supplies a default value (empty string in this example) instead of rejecting, to show you the different ways of handling errors in this function. That is certainly not required. In many cases, just returning the rejected promise is the right thing, and the caller can then decide what to do with the rejection error.
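If you do want the caller to decide, a sketch of that variant simply lets any rejection propagate out of the async function:
// let rejections propagate; the caller decides what to do with them
async function someFunc() {
    const a = await someFunc1();   // a rejection here rejects someFunc()'s promise
    const b = await someFunc2(a);
    return b + something;
}

someFunc()
    .then((result) => console.log(result))
    .catch((err) => console.error(err)); // caller handles the error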
I have a chain of 4 promises, and 1 function at the end. The final function is executing before the previous promises in the chain have resolved.
Can someone explain to me why this might be happening?
Here is the promise chain:
updateGdax(db)
.then(updateBitstamp(db))
.then(updateBitfinex(db))
.then(updatePoloniex(db))
.then(coinMarketData.updateCoinMarketData(db))
.then(addRates(db)); //this function is executing after the first promise in the chain.
I would like each function to execute after the one listed before it, so addRates(db) should be executed last.
I can post the code from the promise functions if needed for further analyses, but I really just want to understand WHY this would happen, as my understanding is that functions in a promise chain won't execute unless the previous promise in the chain has resolved.
Unless those update functions in the then calls are partially applied (i.e. unless calling them returns a function), they are executed immediately, while the chain is being built, rather than when the previous promise resolves. You need to wrap them in an anonymous function to have them executed in order. Do what the other answer says, or use fat arrows:
updateGdax(db)
.then(()=>updateBitstamp(db))
.then(()=>updateBitfinex(db))
.then(()=>updatePoloniex(db))
.then(()=>coinMarketData.updateCoinMarketData(db))
.then(()=>addRates(db));
If your update functions could be rewritten to return the db after completing, then you could rewrite the calls like so, point free style:
updateGdax(db)
.then(updateBitstamp)
.then(updateBitfinex)
.then(updatePoloniex)
.then(coinMarketData.updateCoinMarketData)
.then(addRates);
Each function would then look something like this:
function updateGdax(db) {
return db.doSomething().then(()=> db)
}
Follow that pattern, and you have yourself some nice-looking JavaScript.
And have a look at the new async/await, included in Node.js 8. It is much more intuitive:
async function main() {
await updateGdax(db)
await updateBitstamp(db)
await updateBitfinex(db)
await updatePoloniex(db)
await coinMarketData.updateCoinMarketData(db)
await addRates(db)
}
main().catch(e => console.error(e))
Try the approach below:
updateGdax(db)
.then(function(){
return updateBitstamp(db)
}).then(function (){
return updateBitfinex(db);
}).then(function() {
return updatePoloniex(db);
}).then(function(){
return coinMarketData.updateCoinMarketData(db)
}).then(function(){
return addRates(db);
}).catch(function(err){
console.log(err);
});
Hope this works. If any of the functions returns a value and you want to use it in a subsequent function, pass that value to the function() used inside the following then(); see the example below. Refer: https://strongloop.com/strongblog/promises-in-node-js-an-alternative-to-callbacks/
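For example, a value returned (or resolved) by one step arrives as the argument of the next then() callback. In this sketch the extra parameters passed to updateBitstamp and addRates are hypothetical, just to show the value being carried forward:
updateGdax(db)
    .then(function (gdaxResult) {
        // whatever updateGdax resolved with arrives here
        return updateBitstamp(db, gdaxResult); // hypothetical extra argument
    })
    .then(function (bitstampResult) {
        return addRates(db, bitstampResult);   // hypothetical extra argument
    })
    .catch(function (err) {
        console.log(err);
    });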
function saveToTheDb(value) {
return new Promise(function(resolve, reject) {
db.values.insert(value, function(err, user) { // remember error first ;)
if (err) {
return reject(err); // don't forget to return here
}
resolve(user);
})
});
}
Here is the code, which I saw here.
I am confused about the return keyword.
For resolve(user);, do I need return?
For reject(err);, do I need return?
There is no need to use a return statement inside a new Promise() callback. The Promise constructor is not expecting any sort of return value from the callback.
So, the reason to use a return statement inside that callback is only to control the flow of execution in that function. If you want execution inside your callback to finish and not execute any more code within that callback, you can issue a return; at that point.
For example, you could have written your code like this with no return statement:
function saveToTheDb(value) {
return new Promise(function(resolve, reject) {
db.values.insert(value, function(err, user) {
if (err) {
reject(err);
} else {
resolve(user);
}
});
});
}
In this case, you used the if/else clause to make sure the flow of control in your function takes the correct path and no return was needed or used.
A common shortcut when promisifying async functions like this is:
function saveToTheDb(value) {
return new Promise(function(resolve, reject) {
db.values.insert(value, function(err, user) {
if (err) return reject(err);
resolve(user);
});
});
}
This is not functionally different than the previous code block, but it is less typing and more compact. The return statement in front of reject(err); is only for flow of control reasons to prevent from executing the resolve(user); statement in case of error since the desired flow of control is to call reject(err) and then not execute anything else in the callback.
In fact, the return statement in this last block is not actually even needed in this specific case, because executing a resolve() after a reject() will not do anything - promises are latched to whichever happens first, resolve or reject. But it is generally considered poor practice to execute unnecessary code, so many would argue that it is better to use flow-of-control structures such as if/else or return to execute only the code that is needed.
So, this would technically work too, but is not considered a best practice because it executes unnecessary code and isn't as clearly structured:
function saveToTheDb(value) {
return new Promise(function(resolve, reject) {
db.values.insert(value, function(err, user) {
if (err) reject(err);
resolve(user);
});
});
}
FYI, what you are doing here is called "promisifying" which makes a regular async function that works with a callback into a function that returns a promise. There are libraries and functions that will "promisify" a function or a whole object of functions (e.g. a whole API) for you in one function call so you don't have to do this manually. For example, I regularly use Bluebird which offers Promise.promisify() for promisifying a single function or Promise.promisifyAll() which will promisify all the methods on an object or prototype. This is very useful. For example, you could get promisified versions of the entire fs module with just this:
var Promise = require('bluebird');
var fs = Promise.promisifyAll(require('fs'));
Then, you can use methods that return a promise such as:
fs.readFileAsync("file.txt").then(function(data) {
// do something with file.txt data here
});
Generally, in NodeJS, you shouldn't use the promise constructor very much.
The promise constructor is for converting APIs that don't return promises to promises. You should consider using a library that provides promisification (even if you use native promises all-around) since it provides a safe alternative that does not have subtle errors with error-handling logic.
Automatic promisification is also considerably faster.
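For example, with util.promisify built into Node.js, and assuming db.values.insert follows the usual error-first callback convention as in the question, the whole constructor can be replaced:
const util = require('util');

// bind so `this` inside insert still refers to db.values (an assumption about this API)
const insertAsync = util.promisify(db.values.insert.bind(db.values));

function saveToTheDb(value) {
    return insertAsync(value); // resolves with user, rejects with err
}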
That said, the answer to your question is "Yes".
It is perfectly safe to do so; there is nothing special about promise constructors - they are just plain JavaScript. Domenic discusses the design of the promise constructor in his blog.
It is perfectly safe (just like any other function) to return early - it is actually quite common in regular asynchronous functions.
(Also, in your example code you should just use Promise.resolve, but I assume it was that simple only because it is an example).
Copied this answer from a duplicate question.
As @JaromandaX said, in this case the return statement does not make any difference.
From the docs:
In all cases where a promise is resolved (i.e. either fulfilled or rejected), the resolution is permanent and cannot be reset. Attempting to call resolve, reject, or notify if promise is already resolved will be a no-op.
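A minimal demonstration of that behavior - calling resolve() after reject() has no effect, because the first settlement wins:
new Promise(function (resolve, reject) {
    reject(new Error('first settlement wins'));
    resolve('ignored'); // no-op: the promise is already rejected
}).catch(function (err) {
    console.log(err.message); // "first settlement wins"
});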
I'm new to node.js and using it for a backend that takes data from syslog messages and stores it to a database.
I've run into the following type of serial operations:
1. Query the database
2. If the query value is X do A. otherwise do B.
A. 1. Store "this" in DB
2. Query the database again
3. If the query value is Y do P. otherwise do Q.
P. Store "something"
Q. Store "something else"
B. 1. Store "that" in the DB
2. Store "the other thing" in the DB
The gist here is that I have some operations that need to happen in order, but there is branching logic in the order.
I end up in callback hell (I didn't know what that was when I came to Node; I do now).
I have used the async library for things that are more straightforward - like doing things in order with async.forEachOfSeries or async.queue. But I don't think there's a way to use that when things have to happen in order but there is branching.
Is there a way to handle this that doesn't lead to callback hell?
Any sort of complicated logic like this is really, really going to benefit from using promises for every async operation, especially when you get to handling errors, but also just for structuring the logic flow.
Since you haven't provided any actual code, I will make up an example. Suppose you have two core async operations that both return a promise: query(...) and store(...).
Then, you could implement your above logic like this:
query(...).then(function(value) {
if (value === X) {
return store(value).then(function() {
return query(...).then(function(newValue) {
if (newValue === Y) {
return store("something");
} else {
return store("something else");
}
})
});
} else {
return store("that").then(function() {
return store("the other thing");
});
}
}).then(function() {
// everything succeeded here
}, function(err) {
// error occurred in anyone of the async operations
});
I won't pretend this is simple. Implementing logic flow among seven different async operations is just going to be a fair bit of code no matter how you do it. But promises can make it less painful than otherwise, and massively easier to make robust error handling work and to interface this with other async operations.
The main keys of promises used here are:
Make all your async operations return promises that resolve with the value or reject with the error as the reason.
Then, you can attach a .then() handler to any promise to see when the async operation is finished successfully or with error.
The first callback to .then() is the success handler, the second callback is the error handler.
If you return a promise from within a .then() handler, then that promise gets "chained" to the previous one, so success and error states become linked and propagate back to the original promise chain. That's why you see the above code always returning the nested promises. This automates propagating errors all the way back to the caller and allows you to return a value all the way back to the caller, even from deeply nested promises.
If any promise in the above code rejects, it will stop that chain of promises until an actual reject handler is found and propagate that error back to that reject handler. Since the only reject handler in the above code is at the top level, all errors can be caught and seen there, no matter how deep into the nested async mess they occurred (try doing that reliably with plain callbacks - it's quite difficult).
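For comparison, the same branching flow reads more linearly with async/await - a sketch using the same hypothetical query() and store() functions (the /* ... */ placeholders stand for your actual query arguments):
async function run() {
    const value = await query(/* ... */);
    if (value === X) {
        await store(value);
        const newValue = await query(/* ... */);
        await store(newValue === Y ? "something" : "something else");
    } else {
        await store("that");
        await store("the other thing");
    }
}

run().then(function() {
    // everything succeeded here
}, function(err) {
    // error occurred in any one of the async operations
});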
Native Promises
Node.js has promises built in now so you can use native promises for this. But, the promise specification that node.js implements is not particularly feature rich so many folks use a promise library (I use Bluebird for all my node.js development) to get additional features. One of the more prominent features is the ability to "promisify" an existing non-promise API. This allows you to take a function that works only with a callback and create a new function that works via a promise. I find this particularly useful since many APIs that have been around awhile do not natively return promises.
Promisifying a Non-Promise Interface
Suppose you had an async operation that used a traditional callback and you wanted to "promisify it". Here's an example of how you could do that manually. Suppose you had db.query(whatToSearchFor, callback). You can manually promisify it like this:
function queryAsync(whatToSearchFor) {
return new Promise(function(resolve, reject) {
db.query(whatToSearchFor, function(err, data) {
if (err) {
reject(err);
} else {
resolve(data);
}
});
});
}
Then, you can just call queryAsync(whatToSearchFor) and use the returned promise.
queryAsync("foo").then(function(data) {
// data here
}, function(err) {
// error here
});
Or, if you use something like the Bluebird promise library, it has a single function for promisifying any async function that communicates its result via a node.js-style callback passed as the last argument:
var queryAsync = Promise.promisify(db.query, db);