gcloud Datastore transaction issue using nodejs - node.js

I am using Node.js to talk to Google Datastore. This is my code:
dataset.runInTransaction(function(transaction, done, err) {
  // From the `transaction` object, execute dataset methods as usual.
  // Call `done` when you're ready to commit all of the changes.
  transaction.save(entities, function(err) {
    if (err) {
      console.log("ERROR TRANSACTION");
      transaction.rollback(done);
      return;
    } else {
      console.log("TRANSACTION SUCCESS!");
      done();
    }
  });
});
If the save is not successful, I would like the transaction to roll back; if it is, I want it to commit. The problem I am facing is that neither seems to run: there is just no output on the console. Nothing is being sent to my database, so I would assume at least the if(err) branch would run, but it doesn't. I am new to gcloud, so I am not sure if I am doing something wrong here. I am following the doc at https://googlecloudplatform.github.io/gcloud-node/#/docs/v0.26.0/datastore/transaction?method=save.

In the context of a transaction, the save method doesn't actually need a callback. The things you wish to save are queued up until you call done(). Calling done will commit the transaction.
You can then handle errors from the commit operation in a second function passed to runInTransaction. See https://googlecloudplatform.github.io/gcloud-node/#/docs/v0.26.0/datastore/dataset?method=runInTransaction for examples of that.
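Under that model, the question's code can be reshaped like the sketch below. This is hedged: `dataset` here is a local stub that mimics the v0.26 gcloud-node shape (save queues, done commits, commit errors go to the second callback) so the snippet runs standalone; in practice you would use a real gcloud dataset as documented at the link above.

```javascript
// A minimal sketch of the flow described above. `dataset` is a stub standing
// in for a real gcloud-node dataset so this example is self-contained.
const dataset = {
  runInTransaction(work, onCompleted) {
    const queued = [];
    const transaction = {
      save(entity) { queued.push(entity); } // no callback: save just queues
    };
    work(transaction, function done(err) {
      // done() "commits": errors from the commit surface in the second callback
      onCompleted(err || null, queued);
    });
  }
};

dataset.runInTransaction(function (transaction, done) {
  transaction.save({ key: 'k', data: { value: 1 } }); // queued, not yet saved
  done(); // commit everything queued so far
}, function (err, committed) {
  if (err) console.log('TRANSACTION FAILED', err);
  else console.log('TRANSACTION SUCCESS!', committed.length, 'entities');
});
```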
--
Just mentioning this since it relates to the rollback part: https://github.com/GoogleCloudPlatform/gcloud-node/issues/633 -- we're waiting on an upgrade to Datastore's API before tackling that issue.

Related

fsPromises.writeFile() Writes Empty File on process.exit()

I've been looking all over, but I can't seem to find the answer why I'm getting nothing in the file when exiting.
For context, I'm writing a Discord bot. The bot stores its data once an hour. Sometimes, between stores, I want to store the data in case I decide to update the bot. When I manually store the data with a command and then kill the process, things work fine.
Now I want to be able to just kill the process without having to manually send the command. So I have a handler for SIGINT that stores the data the same way I was doing manually and, after the promise is fulfilled, exits. For some reason, the file contains nothing after the process ends. Here's the code (trimmed).
app.ts
function exit() {
  client.users.fetch(OWNER)
    .then(owner => owner.send('Rewards stored. Bot shutting down'))
    .then(() => process.exit());
}
process.once('SIGINT', () => {
  currencyService.storeRewards().then(exit);
});
process.once('exit', () => {
  currencyService.storeRewards().then(exit);
});
currency.service.ts
private guildCurrencies: Map<string, Map<string, number>> = new Map<string, Map<string, number>>();

storeRewards(): Promise<void[]> {
  const promises = new Array<Promise<void>>();
  this.guildCurrencies.forEach((memberCurrencies, guildId) => {
    promises.push(this.storageService.store(guildId, memberCurrencies));
  });
  return Promise.all(promises);
}
storage.service.ts
store(guild: string, currencies: Map<string, number>): Promise<void> {
  return writeFile(`${this.storageLocation}/${guild}.json`, JSON.stringify([...currencies]))
    .catch(err => {
      console.error('could not store currencies', err);
    });
}
So, as you can see, when SIGINT is received, I get the currency service to store its data, which maps guilds to guild member currencies which is a map of guild members to their rewards. It stores the data in different files (each guild gets its own file) using the storage service. The storage service returns a promise from writeFile (should be a promise of undefined when the file is finished writing). The currency service accumulates all the promises and returns a promise that resolves when all of the store promises resolve. Then, after all of the promises are resolved, a message is sent to the bot owner (me), which returns a promise. After that promise resolves, then we exit the process. It should be a clean exit with all the data written and the bot letting me know that it's shutting down, but when I read the file later, it's empty.
I've tried logging in all sorts of different places to make sure the steps are being done in the right order and I'm not getting weird async stuff, and everything seems to be proceeding as expected, but I'm still getting an empty file. I'm not sure what's going on, and I'd really appreciate some guidance.
EDIT: I remembered something else. As another debugging step, I tried reading the files after the currency service's storeRewards() promise resolved, and the contents of the files were valid (they contained valid data, but it was probably old data, as the data doesn't change often). So one of my thoughts is that the promise from writeFile resolves before the file is fully written, but that isn't indicated in the documentation.
EDIT 2: The answer was that I was writing twice. None of the code shown in the post or the first edit would have made it clear that I was having a double write issue, so I am adding the code causing the issue so that future readers can get the same conclusion.
Thanks to @leitning for their help finding the answer in the comments on my question. After writing a random UUID into the file name, I found the file was being written twice. I had assumed, when asking the question, that I had shared all the relevant info, but I had missed something: process.once('exit', ...) was being called after process.exit() (more details here). The callback function for the exit event does not handle asynchronous calls; when the callback returns, the process exits. Since I had duplicated the SIGINT logic in the exit callback, the file was being written a second time, and the process was exiting before the file could be written, resulting in an empty file. Removing the process.once('exit', ...) logic fixed the issue.
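A hedged sketch of the shape that fix leaves behind: do the async work only in the SIGINT handler, guard against it running twice, and register nothing asynchronous on 'exit' (its callbacks must be synchronous). `storeRewards` and `notifyOwner` here are stand-ins for the real services, and the injectable `exit` parameter is only there so the sketch can be exercised without killing the process.

```javascript
let stores = 0;
async function storeRewards() { stores += 1; }   // stand-in for currencyService
async function notifyOwner() { /* owner.send('Bot shutting down') */ }

let shuttingDown = false;
async function shutdown(exit = () => process.exit(0)) {
  if (shuttingDown) return;   // idempotent: a second signal won't write twice
  shuttingDown = true;
  await storeRewards();       // async work is safe here...
  await notifyOwner();
  exit();                     // ...because we exit only after it has finished
}

process.once('SIGINT', () => { shutdown(); });
// Note: no process.once('exit', ...) duplicate; 'exit' cannot await anything.
```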

Mongoose Node - Request Multiple updates to multi documents with success all or cancel all?

Good question, which I expect to be shot down quickly.
Sometimes we want to update many documents when an action is performed at the front end.
Example React Code
this.props.submitRecord(newRecord, (err, record) => {
  if (err) actions.showSnackBar(err);
  else {
    actions.showSnackBar("Record Submitted Successfully ...");
    this.props.validateClub(this.props.club._id, (err, message) => {
      if (err) actions.showSnackBar(err);
      else {
        obj.setState({
          player: {},
          open: false
        });
        actions.showSnackBar(message);
      }
    });
  }
});
As you can see, I first submit the first request and, on success, submit the second. If the first fails, the second won't happen. But if the first one passes and the second one fails for whatever reason, we have a data mismatch.
Ideally, we want to send them all together so that either they all pass or none do. Is there a simple way of doing this with React, Node and Mongoose, or do I have to do it the hard way (which is also error-prone: store the old values until all requests are satisfied, or make some revert request on the Node server, lol)?
Thanks
Transactions are part of MongoDB 4.0; there was no transaction support in earlier versions. The other way is to perform the rollback on failure in code, and there are npm packages such as mongoose-transaction that help with that.
https://www.mongodb.com/transactions
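If upgrading to MongoDB 4.0 is not an option, the "rollback on failure through code" approach the answer mentions can be sketched like this. This is a hedged, self-contained illustration, not a real Mongoose API: each `updates` entry bundles a hypothetical `apply` operation with its inverse `revert`.

```javascript
// All-or-nothing updates by hand: apply each update, remember how to undo it,
// and revert everything already applied if any step fails.
async function runAtomically(updates) {
  const undo = [];
  try {
    for (const u of updates) {
      await u.apply();        // perform the update (e.g. a mongoose save)
      undo.push(u.revert);    // remember its inverse
    }
  } catch (err) {
    // Roll back in reverse order; this is best-effort, which is exactly why
    // real MongoDB 4.0 transactions are preferable when available.
    for (const revert of undo.reverse()) {
      await revert();
    }
    throw err; // surface the original failure to the caller
  }
}
```

With MongoDB >= 4.0 on a replica set, prefer real transactions via Mongoose sessions instead of hand-rolled reverts like this.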

How to build a transaction system into a .then() chain?

I have multiple sequential, chained requests in my code. I am using the NodeJS package request-promise.
Here is some pseudocode to show how it is formatted:
initRequest.then(function(response){
  return request2;
}).then(function(response2){
  return request3;
}).then(function(response3){
  return requestN;
}).catch(function(err){
  log(err);
});
If, for example, request3 fails, what happens? Does the chain continue, or does it break out of the loop completely?
And if request2 was a POST, and request3 failed, is there a way to systematically roll back the data that request2 changed?
Thanks.
Does the chain continue, or does it break out of the loop completely?
It breaks out of the chain and proceeds to catch (or finally, which is available in recent Node.js versions and polyfillable in older ones), similarly to how try..catch..finally works for synchronous code. This is also how plain promise chains translate to async functions.
And if request2 was a POST, and request3 failed, is there a way to systematically roll back the data that request2 changed?
This has to be handled by the developer. If there is a possibility that data will need to be rolled back, the necessary information (e.g. the ids of the created entries) should be saved to variables and the rollback performed in catch.
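Both points can be seen in a self-contained sketch. The request names are stand-ins for real HTTP calls (stubbed promises behave the same way): the rejection skips every remaining then handler and lands in catch, where the saved ids drive the rollback.

```javascript
// Stubbed "requests": request3 rejects, so the chain jumps straight to catch.
const log = [];
const createdIds = [];

const request2 = () => Promise.resolve('id-42').then(id => {
  createdIds.push(id);            // remember what the POST created...
  return id;
});
const request3 = () => Promise.reject(new Error('request3 failed'));
const requestN = () => Promise.resolve('never runs');

request2()
  .then(() => request3())
  .then(() => requestN())         // skipped: the chain is already rejected
  .then(() => log.push('requestN done'))
  .catch(err => {
    // ...so the rollback in catch knows exactly which entries to delete
    createdIds.forEach(id => log.push(`rolling back ${id}`));
    log.push(err.message);
  });
```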
If request3 fails, it will stop executing the rest of your request chain, and there is no way to systematically roll back what request2 changed; you have to implement that yourself. To handle request3 failing specifically, attach a catch to request3 itself.
Here is a simple/minimal way to handle request3 failing:
initRequest.then(function(response){
  return request2;
}).then(function(response2){
  return request3.catch(function(err2){
    // if something goes wrong, roll back and re-throw so the outer catch runs
    return request2Rollback.then(function(rollbackRes){
      throw new Error("request3 failed! rolled back request2!");
    }).catch(function(err){
      // the rollback itself has failed, so do something serious here
      throw err;
    });
  });
}).then(function(response3){
  return requestN;
}).catch(function(err){
  log(err);
});

PouchDb Replicate of single document causes huge memory usage, then crash

I have a situation where live sync is refusing to fetch some documents on its own, making PouchDB.get return saying the document is not found (despite it being present in the CouchDB it is replicating from).
Reading through the documentation, it suggests doing a manual replicate first, then a sync. So I changed my code to first replicate:
docId = 'testdoc';
return new Promise(function (resolve, reject) {
  var toReplicate = [docId];
  console.log("replicate new ", toReplicate);
  var quickReplicate = self.db.replicate.from(self.url, {
    doc_ids: toReplicate,
    // timeout: 100000, // makes no difference
    checkpoint: false, // attempt to get around bad checkpoints, but I purged all checkpoints and still have the issue
    batch_size: 10, // attempt to get around huge memory usage
    batches_limit: 1
  }).on('denied', function (err) {
    // a document failed to replicate (e.g. due to permissions)
    console.log("replicate denied", err);
    reject(err);
  }).on('complete', function (info) {
    // handle complete
    console.log("replicate complete", info, toReplicate);
    resolve(info);
  }).on('error', function (err) {
    // handle error
    console.log("replicate error", err);
    reject(err);
  }).on('change', function (change) {
    console.log("replicate change", change);
  }).on('pause', function (err) {
    console.log("replicate pause", err);
  });
});
Then get the doc
return self.db.get(docId).catch(function (err) {
  console.error(err);
  throw err;
});
This function is called multiple times (about 8 times on average), each time requesting a single doc. They may all run at almost the exact same time.
To simplify this, I commented out nearly every single time this function was used, one at a time, until I found the exact document causing the problem. I reduced it down to a very simple command directly calling the problem document
db.replicate.from("https://server/db", {
  doc_ids: ['profile.bf778cd1c7b4e5ea9a3eced7049725a1']
}).then(function(result){
  console.log("Done", result);
});
This will never finish, the browser will rapidly use up memory and crash.
It is probably related to the database rollback issues described in this question: Is it possible to get the latest seq number of PouchDB?
When you attempt to replicate this document, no event is ever fired in the code above. Chrome/Firefox will just sit there, gradually using more RAM and maxing the CPU, until the browser crashes.
This started happening after we re-created our test system like this:
1: A live Couchdb is replicated to a test system.
2: The test Couchdb is modified and becomes ahead of the live system. Causing replication conflicts.
3: The test CouchDb is deleted, and the replication rerun from start, creating a fresh test system.
Certain documents now have this problem, despite never being in PouchDb before, and there should be no existing replication checkpoints for PouchDb since the database is a fresh replication of live. Even destroying the PouchDb doesn't work. Even removing the indexDb pouch doesn't solve it. I am not sure what else to try.
Edit: I've narrowed down the problem a little. The document has a ton of deleted revisions from conflicts, and replication seems to get stuck looping through them.

node async call return data in response

I am new to Node.js, so I have a basic question. This is my scenario:
I have a javascript client which is making a http request to a node server to read a value from the database.
Once the node server receives the request it makes a simple db call and returns the data to the client in the response, and this is where the problem is.
router.get('/state', function(req, res){
  // Before this call returns the result, the next line has already executed
  var result = dbServer.makeDBCall();
  res.send(result);
});
The database call from the Node server is asynchronous, so before the result is returned, the Node server has already sent a blank response to the client. What is the standard/accepted way of achieving this? I know I could block the Node thread while waiting, but then the whole purpose of Node is gone, right?
It depends on what kind of database Node module you are using.
Other than the standard callback approach, there is also the promise way; the pg-promise library is one of those.
See sample code:
this.databaseConnection.makeDBCall('your query...')
  .then(function(dbResponse) {
    // Parse the response to the format you want, then...
    res.send(result);
  })
  .catch(function(error) {
    // Handle error
    res.send(error.message);
  });
@spdev: I saw one of your comments about being worried how Node actually knows which caller to send the response to, especially when there are multiple requests. This is a very good question and, to be honest, I don't know much about it either.
In short, the answer is yes: Node handles this by creating a corresponding ServerResponse object when an HTTP request comes in. That object tells the Node.js network stack how to route itself back to the caller when it gets serialized into data packets.
I tried Googling a bit for an answer but didn't get too far. I hope the ServerResponse documentation can provide more insight for you. Share with me if you get an answer, thanks!
https://nodejs.org/api/all.html#http_class_http_serverresponse
Try the code below:
router.get('/state', function(req, res){
  dbServer.makeDBCall(function(err, result){
    if (!err) {
      res.send(result);
    }
  });
});
Hope this helps.
dbServer.makeDBCall() must be given a callback that runs when the statement finishes executing.
Something like -
dbServer.makeDBCall({query: 'args'}, function(err, result){
  if (err) {
    // handle error
    return;
  }
  res.send(result);
});
You return the response from the db inside that callback function.
Learn more about callback from here-
nodeJs callbacks simple example
https://docs.nodejitsu.com/articles/getting-started/control-flow/what-are-callbacks/
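To see concretely why the original `res.send(result)` ran too early, here is a runnable sketch with a stubbed makeDBCall (the names mirror the question; `setTimeout` stands in for real database latency, and the `res` object is a minimal stand-in for Express's):

```javascript
// A stubbed async DB call: the result only exists once the callback fires.
function makeDBCall(callback) {
  setTimeout(function () {
    callback(null, { state: 'ok' }); // node-style (err, result)
  }, 10);
}

// Wrong: the call returns immediately with undefined, which is what got sent.
const tooEarly = makeDBCall(function () {});
console.log(tooEarly); // undefined

// Right: send only inside the callback, once the data is actually there.
function handler(req, res) {
  makeDBCall(function (err, result) {
    if (err) return res.send({ error: err.message });
    res.send(result);
  });
}
```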
