While working with mongoose, I have the following two async operations
const user = await userModel.findOneAndDelete(filter, options);
await anotherAsyncFunctionToDeleteSomething(user); // not a database operation
But I need to ensure that both deletions succeed or fail together. I've thought about wrapping the second function in a try/catch and recreating the user document if an error is caught (in the catch block). However, this doesn't seem like an efficient (or at least appealing) solution.
If the second operation is totally unrelated, there isn't much you can do. You can verify later that both deletions actually happened, if that's strictly necessary. You could use Promise.all() to run both promises simultaneously, but it will throw if either fails, in which case you can retry.
But since, as you mentioned, you need the result of the first operation before running the second, you'll have to run them individually.
Yes, you can catch any errors thrown by the second promise and recreate a document with the same ID, as shown below:
const user = await userModel.findOneAndDelete(filter, options);
try {
  await anotherAsyncFunctionToDeleteSomething(user);
} catch (e) {
  console.log(e);
  // Recreate the document that was deleted above, reusing its original _id
  await collection.insertOne({ _id: new ObjectId("62....54"), ...otherData });
}
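Since findOneAndDelete resolves with the deleted document itself, an alternative (just a sketch, assuming the filter matched and user is the removed document) is to re-insert that exact document instead of rebuilding it by hand:
const user = await userModel.findOneAndDelete(filter, options);
try {
  await anotherAsyncFunctionToDeleteSomething(user);
} catch (e) {
  console.log(e);
  // Put back exactly the document that was just removed; it still carries its original _id
  await userModel.collection.insertOne(user.toObject());
}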
I am trying to build a financial system in NodeJS. I am refactoring my code from using callbacks to using Promises to minimize nesting. But I am struggling to understand what the proper meaning of a Resolve or a Rejection should be. Let's say, in a situation where I am checking the balance before debiting an account, would the result where the balance is insufficient be a resolve or a reject? Does Resolve/Reject pertain to success or failure of code execution, or of the logical process?
There is lots of design flexibility for what you decide is a resolved promise and what is a rejected promise, so there is no precise answer that covers all situations. It often takes some code design judgement to decide what best fits your situation. Here are some thoughts:
If there's a true error and any sequence of operations of which this is one should be stopped, then it's clear you want a rejection. For example, if you're making a database query and can't even connect to the database, you would clearly reject.
If there's no actual error, but a query doesn't find a match, then that would typically not be a rejection, but rather a null or empty sort of resolve. The query found an empty set of results.
Now, to your question about debiting an account. I would think this would generally be an error. For concurrency reasons, you shouldn't have separate functions that check the balance and then separately debit it. You need an atomic database operation that attempts to debit and will succeed or fail based on the current balance. So, if the operation is to debit the account by X amount and that does not succeed, I would think you would probably reject. The reason I'd select that design option is that if debiting the account is one of several operations that are part of some larger operation, then you want a chain of operations to be stopped when the debit fails. If you don't reject, then everyone who calls this asynchronous debit function will have to have separate "did it really succeed" test code which doesn't seem proper here. It should reject with a specific error that allows the caller to see exactly why it rejected.
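For illustration only, here is a minimal sketch of such an atomic debit with a hypothetical mongoose Account model (the model, field names and error shape are assumptions, not code from the question):
// Sketch only: Account, balance and the error code are hypothetical.
async function debitAccount(accountId, amount) {
  // One atomic operation: match only if funds are sufficient, and decrement in the same step.
  const updated = await Account.findOneAndUpdate(
    { _id: accountId, balance: { $gte: amount } },
    { $inc: { balance: -amount } },
    { new: true }
  );
  if (!updated) {
    // No matching document means insufficient funds (or no such account): reject.
    const err = new Error(`Could not debit ${amount} from account ${accountId}`);
    err.code = 'INSUFFICIENT_FUNDS';
    throw err;
  }
  return updated;
}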
Let's say in a situation where I am checking the balance before debiting an account, would the result where the balance is insufficient be a resolve or a reject?
There's room to go either way, but as I said above, I'd probably make it a reject because if this was part of a sequence of operations (debit account, then place order, send copy of order to user), I'd want the code flow to take a different path if this failed and that's easiest to code with a rejection.
In general, you want the successful path through your code to be able to be coded like this (if possible):
try {
  await op1();
  await op2();
  await op3();
} catch(e) {
  // handle errors here
}
And, not something like:
try {
  let val = await op1();
  if (val === "success") {
    await op2();
    ....
  } else {
    // some other error here
  }
} catch(e) {
  // handle some errors here
}
Does Resolve/Reject pertain to success or failure of code execution or that of the logical process?
There is no black and white here. Ultimately, it's success or failure of the intended operation of the function and interpreting that is up to you. For example, fetch() in the browser resolves if any response is received from the server, even if it is a 404 or 500. The designers of fetch() decided that it would only reject if the underlying networking operation failed to reach the server and they would leave the interpretation of individual status codes to the calling application. But, often times, our applications want 4xx and 5xx statuses to be rejections (an unsuccessful operation). So, I sometimes use a wrapper around fetch that converts those statuses to rejections. So, hopefully you can see, it's really up to the designer of the code to decide which is most useful.
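As an illustration, such a wrapper might look roughly like this (fetchOk is an invented name, not a standard API):
// Sketch: convert HTTP error statuses into rejections.
// fetch() itself only rejects on network failure.
async function fetchOk(url, options) {
  const response = await fetch(url, options);
  if (!response.ok) {
    // response.ok is false for any 4xx/5xx status
    throw new Error(`HTTP ${response.status} ${response.statusText} for ${url}`);
  }
  return response;
}
// usage: fetchOk('/api/users').then(r => r.json()).catch(handleError);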
If you had a function called debitAccount(), it would make sense to me that it would be considered to have failed if there are insufficient funds in the account and thus that function would reject. But, if you had a function called checkBalance() that checks to see if a particular balance is present or not, then that would resolve true or false and only actual database errors would be rejections.
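To make that contrast concrete, a rough sketch of the checkBalance() side (Account is again a hypothetical model):
// Sketch: checkBalance() treats "not enough" as a normal answer, not an error.
async function checkBalance(accountId, amount) {
  const account = await Account.findById(accountId);
  if (!account) {
    throw new Error(`No account ${accountId}`); // an actual failure still rejects
  }
  return account.balance >= amount; // resolves to true or false
}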
Whether to use resolve or reject depends on your implementation.
Basic Promise upgrade from callback:
return new Promise(function (resolve, reject) {
  myFunction(param, function callback(err, data) {
    if (err)
      reject(err);
    else
      resolve(data);
  })
})
and a shortcut for this is util.promisify(myFunction):
const util = require('util');
function myFunction(param, callback) { ... }
const myFunctionAsync = util.promisify(myFunction);
myFunctionAsync(param).then(actionSuccess).catch(actionFailure);
As you see, reject is used to pass the error, and resolve to pass the result.
You can use it with resolve/reject:
function financialOperationAsync() {
  return new Promise((resolve, reject) => {
    const balance = checkBalance();
    if (balance < 0)
      reject(new Error('Not enough balance'));
    else
      financialOperation((err, res) => {
        if (err)
          reject(err);
        else
          resolve(res);
      });
  });
}
financialOperationAsync().then(actionOnSuccess).catch(actionOnFailure);
Or you can use true/false and async/await:
function checkBalanceAsync() {
  return new Promise(resolve => {
    resolve(checkBalance() >= 0);
  })
}

async function financialOperationAsync() {
  if (await checkBalanceAsync()) {
    // reuse util.promisify from above to call the callback-style operation
    const res = await util.promisify(financialOperation)();
    return actionOnSuccess(res);
  } else {
    return actionOnFailure();
  }
}
All err variables in callback-style code should be mapped to rejections in promises. That is how the promisify decorator converts callbacks to promises, so you should take the same approach to maintain parity.
When working with a big application that has several tables and several DB operations, it's very difficult to keep track of which transactions are occurring. To work around this, we started by passing around a trx object.
This has proven to be very messy.
For example:
async getOrderById(id: string, trx?: Knex.Transaction) { ... }
Depending on the function calling getOrderById, it will either pass a trx object or not. The above function will use trx if it is not null.
This seems simple at first, but it leads to mistakes: if you're in the middle of a transaction in one function and call another function that does NOT use that transaction, knex will hang with the famous Knex: Timeout acquiring a connection. The pool is probably full. error.
async getAllPurchasesForUser(userId: string) {
  ..
  const trx = await knex.transaction();
  try {
    ..
    getPurchaseForUserId(userId); // Forgot to make this consume trx, hence Knex times out acquiring a connection.
    ..
  }
  ..
}
Based on that, I'm assuming this is not a best practice, but I would love it if someone from the Knex developer team could comment.
To improve this, we're considering instead using knex.transactionProvider(), accessed throughout the app wherever we perform DB operations.
The example on the website seems incomplete:
// Does not start a transaction yet
const trxProvider = knex.transactionProvider();

const books = [
  {title: 'Canterbury Tales'},
  {title: 'Moby Dick'},
  {title: 'Hamlet'}
];

// Starts a transaction
const trx = await trxProvider();
const ids = await trx('catalogues')
  .insert({name: 'Old Books'}, 'id');
books.forEach((book) => book.catalogue_id = ids[0]);
await trx('books').insert(books);

// Reuses same transaction
const sameTrx = await trxProvider();
const ids2 = await sameTrx('catalogues')
  .insert({name: 'New Books'}, 'id');
books.forEach((book) => book.catalogue_id = ids2[0]);
await sameTrx('books').insert(books);
In practice here's how I'm thinking about using this:
SingletonDBClass.ts:
const trxProvider = knex.transactionProvider();
export default trxProvider;
Orders.ts:
import trx from '../SingletonDBClass';
..
async getOrderById(id: string) {
  const trxInst = await trx;
  try {
    const order = await trxInst<Order>('orders').where({id});
    trxInst.commit();
    return order;
  } catch (e) {
    trxInst.rollback();
    throw new Error(`Failed to fetch order, error: ${e}`);
  }
}
..
Am I understanding this correctly?
Another example function where a transaction is actually needed:
async cancelOrder(id: string) {
  const trxInst = await trx;
  try {
    trxInst('orders').update({ status: 'CANCELED' }).where({ id });
    trxInst('active_orders').delete().where({ orderId: id });
    trxInst.commit();
  } catch (e) {
    trxInst.rollback();
    throw new Error(`Failed to cancel order, error: ${e}`);
  }
}
Can someone confirm if I'm understanding this correctly? And more importantly if this is a good way to do this. Or is there a best practice I'm missing?
Appreciate your help knex team!
No. You cannot have a global singleton class returning the transaction for all of your internal functions. Otherwise you would always be trying to use the same transaction for all the concurrent users doing different things in the application.
Also, once you commit / rollback the transaction returned by the provider, it will not work anymore for other queries. A transaction provider can give you only a single transaction.
A transaction provider is useful in a case where you have, for example, middleware which provides a transaction for request handlers, but the transaction should not be started up front, since it might not be needed and you don't want to allocate a connection for it from the pool yet.
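For illustration, hypothetical Express-style middleware along those lines might look like this (the app, routes and cancelOrder signature are assumptions, not part of the Knex API):
// Sketch: each request gets its own lazy transaction provider.
// No connection is taken from the pool until a handler actually calls req.trxProvider().
app.use((req, res, next) => {
  req.trxProvider = knex.transactionProvider();
  next();
});

app.post('/orders/:id/cancel', async (req, res, next) => {
  try {
    const trx = await req.trxProvider(); // the transaction starts here, on first use
    await cancelOrder(trx, req.params.id);
    res.sendStatus(204);
  } catch (err) {
    next(err);
  }
});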
A good way to do this is to pass a transaction or some request context or user session around, so that each concurrent user can have their own separate transactions.
For example:
async cancelOrder(trxInst, id: string) {
  try {
    // each query is awaited so a failure is caught here and rolls the transaction back
    await trxInst('orders').update({ status: 'CANCELED' }).where({ id });
    await trxInst('active_orders').delete().where({ orderId: id });
    await trxInst.commit();
  } catch (e) {
    await trxInst.rollback();
    throw new Error(`Failed to cancel order, error: ${e}`);
  }
}
Depending on the function calling getOrderById, it will either pass a trx object or not. The above function will use trx if it is not null.
This seems simple at first, but it leads to mistakes: if you're in the middle of a transaction in one function and call another function that does NOT use that transaction, knex will hang with the famous Knex: Timeout acquiring a connection. The pool is probably full. error.
We usually do it in a way that if trx is null the query throws an error, so you need to explicitly pass either knex or a trx to be able to execute the method, and in some methods a trx is actually required to be passed.
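In code, that convention could be as simple as this sketch:
// Sketch: refuse to run without an explicit knex instance or transaction.
async function getOrderById(trx, id) {
  if (!trx) {
    throw new Error('getOrderById requires a knex instance or a transaction');
  }
  return trx('orders').where({ id }).first();
}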
Anyhow, if you really want to force everything to go through a single transaction in a session by default, you could create your API modules in a way that for each user session you create an API instance which is initialized with a transaction:
const dbForSession = new DbService(trxProvider);
const users = await dbForSession.allUsers();
and .allUsers() does something like return this.trx('users');
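A rough sketch of what such a session-scoped API instance could look like (DbService here is illustrative, not an existing Knex class):
// Sketch: one DbService per user session; all of its queries share that session's transaction.
class DbService {
  constructor(trxProvider) {
    this.trxProvider = trxProvider; // lazy: no connection is taken yet
  }

  trx() {
    return this.trxProvider(); // first call starts the transaction, later calls reuse it
  }

  async allUsers() {
    const trx = await this.trx();
    return trx('users');
  }
}

// Per user session:
const dbForSession = new DbService(knex.transactionProvider());
const users = await dbForSession.allUsers();
// ...commit or roll back that session's transaction exactly once when the work is done:
const sessionTrx = await dbForSession.trx();
await sessionTrx.commit();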
I am getting a mongoose error when I attempt to update a user field multiple times.
What I want to achieve is to update that user based on some conditions after making an API call to an external resource.
From what I observe, I am hitting both conditions at the same time in the processUser() function; hence user.save() is getting called almost concurrently, and mongoose is not happy about that, throwing me this error:
MongooseError [ParallelSaveError]: Can't save() the same doc multiple times in parallel. Document: 5ea1c634c5d4455d76fa4996
I know I am guilty and my code is the culprit here because I am a novice. But is there any way I can achieve my desired result without hitting this error? Thanks.
function getLikes() {
  var users = [user1, user2, ...userN]
  users.forEach((user) => {
    processUser(user)
  })
}

async function processUser(user) {
  var result = await makeAPICall(user.url)
  // I want to update the user based on the returned value from this call
  // I am updating the user using `mongoose save()`
  if (result === someCondition) {
    user.meta.likes += 1
    user.markModified("meta.likes")
    try {
      await user.save()
      return
    } catch (error) {
      console.log(error)
      return
    }
  } else {
    user.meta.likes -= 1
    user.markModified("meta.likes")
    try {
      await user.save()
      return
    } catch (error) {
      console.log(error)
      return
    }
  }
}
setInterval(getLikes, 2000)
There are some issues that need to be addressed in your code.
1) processUser is an asynchronous function. Array.prototype.forEach doesn't respect asynchronous functions, as documented on MDN.
2) setInterval doesn't respect the return value of your function, as documented on MDN, therefore passing a function that returns a promise (async/await) will not behave as intended.
3) setInterval shouldn't be used with functions that could potentially take longer to run than your interval, as documented on MDN in the usage notes near the bottom of the page.
Hitting an external api for every user every 2 seconds and reacting to the result is going to be problematic under the best of circumstances. I would start by asking myself if this is absolutely the only way to achieve my overall goal.
If it is the only way, you'll probably want to implement your solution using the recursive setTimeout() mentioned at the link in #2 above, or perhaps an async version of setInterval() (there is one on npm), as sketched below.
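A hedged sketch of that combination, reusing getLikes/processUser from the question and keeping the original 2000 ms delay (getLikesLoop is an invented name):
async function getLikes() {
  const users = [user1, user2 /* , ...userN */];
  // for...of waits for each processUser() call, unlike forEach
  for (const user of users) {
    await processUser(user);
  }
}

// Recursive setTimeout: the next run is scheduled only after the previous one has finished,
// so runs can never overlap even when the external API is slow.
async function getLikesLoop() {
  try {
    await getLikes();
  } finally {
    setTimeout(getLikesLoop, 2000);
  }
}

getLikesLoop();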
I'm writing an API where I'm having a bit of trouble with the error handling. What I'm unsure about is whether the first code snippet is sufficient or if I should mix it with promises as in the second code snippet. Any help would be much appreciated!
try {
  var decoded = jwt.verify(req.params.token, config.keys.secret);
  var user = await models.user.findById(decoded.userId);
  user.active = true;
  await user.save();
  res.status(201).json({user, 'stuff': decoded.jti});
} catch (error) {
  next(error);
}
Second code snippet:
try {
  var decoded = jwt.verify(req.params.token, config.keys.secret);
  var user = models.user.findById(decoded.userId).then(() => {
  }).catch((error) => {
  });
  user.active = true;
  await user.save().then(() => {
  }).catch((error) => {
  })
  res.status(201).json({user, 'stuff': decoded.jti});
} catch (error) {
  next(error);
}
The answer is: it depends.
Catch every error
Makes sense if you want to react differently on every error.
e.g.:
try {
  let decoded;
  try {
    decoded = jwt.verify(req.params.token, config.keys.secret);
  } catch (error) {
    return response
      .status(401)
      .json({ error: 'Unauthorized..' });
  }
  ...
However, the code can get quite messy, and you'd want to split the error handling a bit differently (e.g. do the JWT validation in some pre-request hook and only allow valid requests through to the handlers, and/or do the findById and save parts in a service, and throw once per operation).
You might want to throw a 404 if no entity was found with the given ID.
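For instance, the service part of such a split might look roughly like this (activateUser and the err.status convention are made up for illustration):
// Sketch: the service throws once per failed step; the route handler just calls next(error).
async function activateUser(userId) {
  const user = await models.user.findById(userId);
  if (!user) {
    const err = new Error(`No user with id ${userId}`);
    err.status = 404; // let error-handling middleware map this onto the response
    throw err;
  }
  user.active = true;
  await user.save();
  return user;
}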
Catch all at once
If you want to react in the same way if a) or b) or c) goes wrong, then the first example looks just fine.
a) var decoded = jwt.verify(req.params.token, config.keys.secret);
b) var user = await models.user.findById(decoded.userId);
user.active = true;
c) await user.save();
res.status(201).json({user, 'stuff': decoded.jti});
I read some articles that suggested the need of a try/catch block for each request. Is there any truth to that?
No, that is not required. try/catch with await works conceptually like try/catch works with regular synchronous exceptions. If you just want to handle all errors in one place and want all your code to just abort to one error handler no matter where the error occurs and don't need to catch one specific error so you can do something special for that particular error, then a single try/catch is all you need.
But, if you need to handle one particular error specifically, perhaps even allowing the rest of the code to continue, then you may need a more local error handler which can be either a local try/catch or a .catch() on the local asynchronous operation that returns a promise.
or if I should mix it with promises as in the second code snippet.
The phrasing of this suggests that you may not quite understand what is going on with await because promises are involved in both your code blocks.
In both your code blocks models.user.findById(decoded.userId); returns a promise. You have two ways you can use that promise.
You can use await with it to "pause" the internal execution of the function until that promise resolves or rejects.
You can use .then() or .catch() to see when the promise resolves or rejects.
Both are using the promise returned from your models.user.findById(decoded.userId) function call. So, your phrasing would have been better as "or if I should use a local .catch() handler on a specific promise rather than catching all the rejections in one place".
Doing this:
// skip second async operation if there's an error in the first one
async function someFunc() {
  try {
    let a = await someFunc1();
    let b = await someFunc2(a);
    return b + something;
  } catch(e) {
    return "";
  }
}
Is analogous to chaining your promise with one .catch() handler at the end:
// skip second async operation if there's an error in the first one
function someFunc() {
  return someFunc1().then(someFunc2).catch(e => "");
}
No matter which async function rejects, the same error handler is applied. If the first one rejects, the second one is not executed as flow goes directly to the error handler. This is perfectly fine IF that's how you want the flow to go when there's an error in the first asynchronous operation.
But, suppose you wanted an error in the first function to be turned into a default value so that the second asynchronous operation is always executed. Then, this flow of control would not be able to accomplish that. Instead, you'd have to capture the first error right at the source so you could supply the default value and continue processing with the second asynchronous operation:
// always run second async operation, supply default value if error in the first
async function someFunc() {
  let a;
  try {
    a = await someFunc1();
  } catch(e) {
    a = myDefaultValue;
  }
  try {
    let b = await someFunc2(a);
    return b + something;
  } catch(e) {
    return "";
  }
}
Is analogous to this promise chain, with a .catch() handler after the first operation and another at the end:
// always run second async operation, supply default value if error in the first
function someFunc() {
  return someFunc1()
    .catch(err => myDefaultValue)
    .then(someFunc2)
    .catch(e => "");
}
Note: This is an example that never rejects the promise that someFunc() returns, but rather supplies a default value (empty string in this example) rather than reject to show you the different ways of handling errors in this function. That is certainly not required. In many cases, just returning the rejected promise is the right thing and that caller can then decide what to do with the rejection error.
I am making myself a library which retries failed promise "chain-parts" - I collect methods to be called and queue the next phase only after the previous one succeeded.
Conceptually it is all worked out - my problems are more fundamental. This is where I arrived while debugging:
this.runningPromise
  .then(function() {
    return Promise.reject();
  })
  //;
  //this.runningPromise
  .then(this.promiseResolver.bind(this))
  .catch(this.promiseRejector.bind(this))
  ;
This works; promiseRejector kicks in. When I uncomment the two lines, it no longer works: promiseResolver gets called instead.
I can't find anything about this anywhere. Node.js 6.10.3 with browserify, on Windows, Chrome.
If you uncomment those two rows, you are calling .then() on this.runningPromise twice, and each chain has its own callbacks.
If you keep the rows commented, it all acts as one promise chain (with its associated callbacks).
It is better to assign the promise to a variable; then you can use it multiple times.
let newPromise = this.runningPromise
  .then(function() {
    return Promise.reject();
  });

newPromise
  .then(this.promiseResolver.bind(this))
  .catch(this.promiseRejector.bind(this));
With the above code you can use newPromise multiple times.
this.runningPromise will not change when you chain other callbacks, i.e. the promise referenced by this.runningPromise was never rejected. So you have to assign the new promise to a new reference:
let something = this.runningPromise
  .then(function() {
    return Promise.reject();
  });

something
  .then(this.promiseResolver.bind(this))
  .catch(this.promiseRejector.bind(this));