Node.js Oracle driver - multiple updates

So I've set up a Node.js backend that is used to move physical items in our warehouse. The database behind our software is Oracle, and the older version of this web application is written in PHP, which works fine but has some weird glitches and is slow as all hell.
The Node.js backend works fine for moving single items, but once I try moving a box (which will then move anywhere from 20-100 items), the entire backend stops at the .commit() part.
Anyone have any clue as to why this happens, and what I can do to remedy it? Suggestions for troubleshooting would be most welcome as well!
Code:
function move(barcode, location) {
    var p = new Promise(function(resolve, reject) {
        console.log("Started");
        exports.findOwner(barcode).then(function(data) {
            console.log("Got data");
            // console.log(barcode);
            var type = data[0];
            var info = data[1];
            var sql;
            sql = "update pitems set location = '"+location+"' where barcode = '"+barcode+"' and status = 0"; // status = 0 is goods in store.
            ora.norway.getConnection(function(e, conn) {
                if(e) {
                    reject({"status": 0, "msg": "Failed to get connection", "error": e});
                }
                else {
                    console.log("Got connection");
                    conn.execute(sql, [], {}, function(err, results) {
                        console.log("Executed");
                        if(err) {
                            conn.release();
                            reject({"status": 0, "msg": "Failed to execute sql"+sql, "error": err});
                        }
                        else {
                            console.log("Execute was successful"); // This is the last message logged to the console.
                            conn.commit(function(e) {
                                conn.release(function(err) {
                                    console.log("Failed to release");
                                })
                                if(e) {
                                    console.log("Failed to commit!");
                                    reject({"status": 0, "msg": "Failed to commit sql"+sql, "error": e});
                                }
                                else {
                                    console.log("derp6");
                                    resolve({"status": 1, "msg": "Relocated "+results.rowsAffected+" items."});
                                }
                            });
                        }
                    });
                }
            });
        });
    });
    return p;
}

Please be aware that your code is open to SQL injection vulnerabilities. Even more so since you posted it online. ;)
I recommend updating your statement to something like this:
update pitems
set location = :location
where barcode = :barcode
and status = 0
Then update your conn.execute as follows:
conn.execute(
    sql,
    {
        location: location,
        barcode: barcode
    },
    {},
    function(err, results) {...}
);
Bind variables are passed to Oracle separately from the SQL text, so they can't inject SQL. There's another advantage too: you'll avoid hard parses when the values of the bind variables change.
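For what it's worth, here's a minimal sketch of how the whole execute call could look with binds plus the driver's autoCommit option, which commits in the same round trip and avoids nesting a separate conn.commit() callback. This assumes the node-oracledb driver and keeps the resolve/reject logic from the question; error handling is reduced to the essentials:
// Sketch only: binds + autoCommit, assuming node-oracledb.
var sql = "update pitems set location = :location " +
          "where barcode = :barcode and status = 0";

conn.execute(
    sql,
    { location: location, barcode: barcode },  // bind variables, sent separately from the SQL text
    { autoCommit: true },                      // commit as part of this execute call
    function(err, results) {
        if (err) {
            conn.release(function() {});
            return reject({ "status": 0, "msg": "Failed to execute sql " + sql, "error": err });
        }
        conn.release(function() {});
        resolve({ "status": 1, "msg": "Relocated " + results.rowsAffected + " items." });
    }
);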
Also, I'm happy to explore the commit issue you're encountering further, but it would really help if you could provide a reproducible test case that I could run on my end.

I think this is an issue on the database level; an update on multiple items without providing an ID is maybe not allowed.
You should do two things:
1) For debugging purposes, add console.log(JSON.stringify(error)) wherever you expect an error. Then you'll find the error that your database provides back.
2) At the line that says
conn.release(function(err) {
    console.log("Failed to release");
})
Check if err is defined:
conn.release(function(err) {
    if(err) {
        console.log("Failed to release");
    }
    else {
        console.log("conn released");
    }
})

This sounds similar to an issue I'm having. Node.js hangs while updating an Oracle DB using the oracledb library. It works fine when there are 167 updates to make; the program hangs when I have 168 updates. The structure of the program goes like this:
With 168 records from a local sqlite DB, for each record returned in a callback from sqlite: 1) get an Oracle connection; 2) do two updates to two tables (one update to each table, with autoCommit on the latter execute). All of them complete the 1st update, but none can start the second update; they just hang there. With 167 records, they run to completion.
The strange thing is that none of the 168 could get started on the 2nd update (they all finished the 1st), so none had a chance to go forward and commit. It looks like they are all queued up in some way.
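For anyone hitting the same wall: this pattern, where everything stalls once the number of in-flight records crosses a threshold, is often a symptom of an exhausted connection pool — every pooled connection is held by a record that is itself waiting on another callback, so the remaining getConnection calls queue forever. A sketch of one workaround, processing records strictly one at a time so each connection is released before the next is acquired. This assumes the node-oracledb driver; pool, allRecords, sql1/sql2, and the bind helpers are placeholder names, not from the original post:
// Placeholder names: pool is an existing node-oracledb connection pool,
// sql1/sql2 and bindsFor1/bindsFor2 stand in for the two table updates.
function processRecords(records, i, done) {
    if (i >= records.length) return done(null);
    pool.getConnection(function(err, conn) {
        if (err) return done(err);
        conn.execute(sql1, bindsFor1(records[i]), {}, function(err) {
            if (err) return conn.release(function() { done(err); });
            conn.execute(sql2, bindsFor2(records[i]), { autoCommit: true }, function(err) {
                // release before moving on, so the pool slot is free again
                conn.release(function() {
                    if (err) return done(err);
                    processRecords(records, i + 1, done);
                });
            });
        });
    });
}

processRecords(allRecords, 0, function(err) {
    if (err) console.error("stopped at error:", err);
});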

Related

Got an error message in console when I try to update my field value in MongoDB through Express

My Node app crashes when I send a request to update a field value in my MongoDB. The data is updated successfully, but the message I provided in the callback never shows up:
(err) => {
    if (err) {
        res.status(500).json({
            error: "There was an error in server side!",
        });
    } else {
        res.status(200).json({
            message: "value updated successfully!",
        });
    }
}
Instead of showing the above message, Mongoose throws (const err = new MongooseError('Query was already executed: ' + str);), i.e. this message and more:
MongooseError: Query was already executed: Todo.updateOne({ _id: new ObjectId("6243ed2b5e0bdc9ab780b4d9...
But I use a different id every time and update with a different message.
When I check in the DB whether the old value was updated or not, it was updated fine, but no message or anything appears in the Postman response; my console just shows the error above.
Whatever happens, I want to see my predefined error or success messages.
Finally, I got and understood the answer. I am a beginner programmer and developer, which is why I ran into this problem, and it took a lot of time to solve. I fixed it within 3-5 minutes by removing async/await, but it took much longer to dig out why it works after removing it. The basic concept of asynchronous programming here is: if we use async/await, we don't need a callback as well, and if we use a callback, we don't need async/await, because one makes the other redundant. Mongoose sees both, runs the query once for the callback and again for the await — hence "Query was already executed". So, instead of getting data from a callback like:
(err, data) => {
    res.send(data);
}
we can instead await the query and assign the result to a variable, like:
try {
    const data = await (our code); // the same query, with no callback passed
    res.send(data);
} catch(err) {
    res.send(err.message);
}
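Putting the pieces together with the handler from the question, a sketch (the route and field names are assumptions on my part; only the Todo model and updateOne call come from the error message above):
// Hypothetical route and field names; Todo is the model from the error above.
app.put("/todos/:id", async (req, res) => {
    try {
        // await only, no callback -- otherwise Mongoose runs the query twice
        await Todo.updateOne(
            { _id: req.params.id },
            { $set: { message: req.body.message } }
        );
        res.status(200).json({ message: "value updated successfully!" });
    } catch (err) {
        res.status(500).json({ error: "There was an error in server side!" });
    }
});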

Tile38 nearby query node callback function is not working

I'm building a small geo application using http://tile38.com/ and the https://www.npmjs.com/package/tile38 node module. Everything works fine, but I'm unable to get results from the NEARBY query through the node module. It looks like the callback function isn't firing; I've spent a lot of time on this but couldn't find a way out. What I want is to get the result from the nearby query and assign it to a variable.
Here is the code:
var Tile38 = require('tile38');
var client = new Tile38({host: 'localhost', port: 9851, debug: true });

// set a simple lat/lng coordinate
client.set('fleet', 'truck1', [33.5123, -112.2693])

// query nearby points
client.nearbyQuery('fleet').distance().point(33.5123, -112.2693, 6000).execute((err, results) => {
    console.log("########");
    // this callback will be called multiple times
    if (err) {
        console.error("something went wrong! " + err);
    } else {
        console.log(results + "##########");
    }
});
But when I try the following simple query, it works fine:
client.get('fleet', 'truck1').then(data => {
    console.log(data); // prints coordinates in geoJSON format
}).catch(err => {
    console.log(err); // id not found
});
Also, when I try the raw query in the tile38-cli, it works fine:
NEARBY fleet POINT 33.5123 -112.2693 6000
Any help would be appreciated.
Thanks in advance.
EDIT
I also tried the following, but it didn't work:
let query = client.nearbyQuery('fleet').distance().point(33.5123, -112.2693, 6000)
query.execute(function(results).then(results => {
    console.dir(results); // results is an object.
}))
I receive the following error:
query.execute(function(results).then(results => {
^
SyntaxError: Unexpected token .
I'm the author of the node library for Tile38. Sorry for the trouble getting this working. I noticed a typo in the readme which may have thrown you off; I will correct this.
The execute() method returns a Promise, and (as you already figured out) the example should have stated
query.execute().then(results => {
    console.dir(results);
});
instead of
query.execute(function(results).then(results => {
    console.dir(results);
});
After a long time debugging, I found that the following code works:
let query = client.nearbyQuery('fleet').distance().point(33.5123, -112.2693, 6000)
query.execute().then(data => {
    console.dir(data); // data is an object containing the results.
});

NodeJS writes to MongoDB only once

I have a NodeJS app that is supposed to generate a lot of data sets in a synchronous manner (multiple nested for-loops). Those data sets are supposed to be saved to my MongoDB database so I can look them up more efficiently later on.
I use the mongodb driver for NodeJS and have a daemon running. The connection to the DB works fine, and according to the daemon window the first group of datasets is stored successfully. Every ~400-600ms there is another group to store, but after the first dataset there is no output in the MongoDB console anymore (not even an error), and since the file sizes don't increase I assume those write operations don't work (I can't wait for it to finish, as it would take multiple days to fully run).
If I restart the NodeJS script, it won't even save the first key anymore, possibly because of duplicates? If I delete the db folder contents, the first one is saved again.
This is the essential part of my script, and I wasn't able to find anything I did wrong. I assume the problem is more in the inner logic (weird duplicate checks / things not running concurrently, etc.).
var MongoClient = require('mongodb').MongoClient, dbBuffer = [];
MongoClient.connect('mongodb://127.0.0.1/loremipsum', function(err, db) {
    if(err) return console.log("Cant connect to MongoDB");
    var collection = db.collection('ipsum');
    console.log("Connected to DB");
    for(var q=startI; q<endI; q++) {
        for(var w=0; w<words.length; w++) {
            dbBuffer.push({a:a, b:b});
        }
        if(dbBuffer.length) {
            console.log("saving "+dbBuffer.length+" items");
            collection.insert(dbBuffer, {w:1}, function(err, result) {
                if(err) {
                    console.log("Error on db write", err);
                    db.close();
                    process.exit();
                }
            });
        }
        dbBuffer = [];
    }
    db.close();
});
Update
db.close is never called and the connection doesn't drop
Changing to bulk insert doesn't change anything
The callback for the insert is never called - this could be the problem! The MongoDB console does tell me that the insert process was successful but it looks like the communication between driver and MongoDB isn't working properly for insertion.
I "solved" it myself. One misconception that i had was that every insert transaction is confirmed in the MongoDB console while it actually only confirms the first one or if there is some time between the commands. To check if the insert process really works one needs to run the script for some time and wait for MongoDB to dump it in the local file (approx. 30-60s).
In addition, the insert processes were too quick after each other and MongoDB appears to not handle this correctly under Win10 x64. I changed from the Array-Buffer to the internal buffer (see comments) and only continued with the process after the previous data was inserted.
This is the simplified resulting code:
db.collection('seedlist', function(err, collection) {
    syncLoop(0, 0, collection);
    //...
});

function syncLoop(q, w, collection) {
    var batch = collection.initializeUnorderedBulkOp({useLegacyOps: true});
    for(var e=0; e<words.length; e++) {
        batch.insert({a:a, b:b});
    }
    batch.execute(function(err, result) {
        if(err) throw err;
        //...
        return setTimeout(function() {
            syncLoop(qNew, wNew, collection);
        }, 0); // Timer to prevent memory leak
    });
}

Mongo Bulk Updates - which succeeded (matched and modified) and which did not?

In order to improve the performance of many single Mongo document updates in Node, I'm considering using Mongo's bulk operations to update as many as 1000 documents in each iteration.
In this use case, each individual update operation may or may not occur: an update will only occur if the document version has not changed since it was last read by the updater. If a document was not updated, the application needs to retry and/or do other stuff to handle the situation.
Currently the Node code looks like this:
col.update(
    {_id: someid, version: someversion},
    {$set: {stuf: toupdate, version: (someversion+1)}},
    function(err, res) {
        if (err) {
            console.log('wow, something is seriously wrong!');
            // do something about it...
            return;
        }
        if (!res || !res.result || !res.result.nModified) { // no update
            console.log('oops, seems like someone updated doc before me');
            // do something about it...
            return;
        }
        // Great! - Document was updated, continue as usual...
    });
Using Mongo's Bulk unordered operations, is there a way to know which of the group of (1000) updates had succeeded and which had not been performed (in this case due to wrong version)?
The code I started playing with looks like:
var bulk = col.initializeUnorderedBulkOp();
bulk.find({_id: someid1, version: someversion1}).updateOne(
    {$set: {stuf: toupdate1, version: (someversion1+1)}});
bulk.find({_id: someid2, version: someversion2}).updateOne(
    {$set: {stuf: toupdate2, version: (someversion2+1)}});
...
bulk.find({_id: someid1000, version: someversion1000}).updateOne(
    {$set: {stuf: toupdate1000, version: (someversion1000+1)}});
bulk.execute(function(err, result) {
    if (err) {
        console.log('wow, something is seriously wrong!');
        // do something about it...
        return;
    }
    if (result.nMatched < 1000) { // not all got updated
        console.log('oops, seems like someone updated at least one doc before me');
        // But which of the 1000 got updated OK and which had not!!!!
        return;
    }
    // Great! - All 1000 documents got updated, continue as usual...
});
I was unable to find a Mongo solution for that. The solution I used was to fall back to per-document operations when the bulk operation failed... This gives reasonable performance in most cases.
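A sketch of that fallback, for illustration: when the bulk result reports fewer matches than expected, re-issue each update individually and record which documents failed their version check. The retryIndividually helper and the updates array (one {_id, version, stuf} entry per document) are names I'm introducing here, not from the original code:
// Hypothetical fallback: re-run each update individually so we learn,
// per document, which version check failed and needs handling.
function retryIndividually(col, updates, done) {
    var failed = [];
    var pending = updates.length;
    updates.forEach(function(u) {
        col.update(
            {_id: u._id, version: u.version},
            {$set: {stuf: u.stuf, version: u.version + 1}},
            function(err, res) {
                if (err || !res || !res.result || !res.result.nModified) {
                    failed.push(u._id); // updated by someone else, or errored
                }
                if (--pending === 0) done(failed);
            });
    });
}

bulk.execute(function(err, result) {
    if (!err && result.nMatched < updates.length) {
        retryIndividually(col, updates, function(failed) {
            console.log('docs that were updated before me:', failed);
        });
    }
});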

Azure mobile service back-end SQL insert

My question is: on the Azure mobile service back-end, when I run a SQL insert via mssql.query like the one below,
var sql = "INSERT INTO Customers (CustomerName, ContactName) VALUES (?, ?);";
mssql.query(sql, [item.CustomerName, item.ContactName], {
    success: function(results) {
        request.execute();
    },
    error: function(err) {
        console.log("error is: " + err);
    }
});
the data won't show up on the Azure portal website anymore. I know I can use the built-in
todoItemTable.insert()
to insert, but sometimes the business logic is so complicated that it can only be done in SQL. Is it the __version field that is causing the problem? If so, what should I put in when I insert?
Thanks!
Check your logs to see what might be going wrong. You do not need to worry about __version or other system columns when inserting a new record.
Is this in a table insert script? If so, you may not want request.execute() in your callback. That would insert the original record in addition to the record inserted in your mssql statement.
You may also have an issue because mssql.query() can call its callback function multiple times, depending on how many result messages the SQL produces. Define a variable like requestExecuted alongside your sql variable, and in the mssql success callback, check it before making the request.execute() call:
var requestExecuted = false;
mssql.query(sql, [item.CustomerName, item.ContactName], {
    success: function(results) {
        if (requestExecuted === false) {
            requestExecuted = true;
            request.execute();
        }
    },
    error: function(err) {
        console.log("error is: " + err);
    }
});
If this doesn't get you going, try adding console.log statements in the callback to see if it is getting called and how many times. Update your question if you have any more details from errors in your log.
