My question is: on the Azure Mobile Services back-end, when I run a SQL insert through mssql.query like the one below,
var sql = "INSERT INTO Customers (CustomerName, ContactName) VALUES (?, ?);";
mssql.query(sql, [item.CustomerName, item.ContactName], {
    success: function(results) {
        request.execute();
    },
    error: function(err) {
        console.log("error is: " + err);
    }
});
the data no longer shows up on the Azure portal website. I know I can use the built-in
todoItemTable.insert()
to insert, but sometimes the business logic is complicated enough that it can only be done in SQL. Is it the __version field that is causing the problem? If so, what should I pass in when I insert?
Thanks!
Check your logs to see what might be going wrong. You do not need to worry about __version or other system columns when inserting a new record.
Is this in a table insert script? If so, you may not want request.execute() in your callback. That would insert the original record in addition to the record inserted in your mssql statement.
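If your SQL statement is meant to replace the default insert entirely, a sketch along these lines may be closer to what you want; it responds directly instead of executing the default operation (request.respond and statusCodes come from the Mobile Services script environment, but treat the exact status codes here as assumptions):

mssql.query(sql, [item.CustomerName, item.ContactName], {
    success: function(results) {
        // reply to the client without running the default insert
        request.respond(statusCodes.OK, item);
    },
    error: function(err) {
        console.log("error is: " + err);
        request.respond(statusCodes.INTERNAL_SERVER_ERROR, err);
    }
});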
You may also have an issue because mssql.query() can call its callback function multiple times depending on how many result messages the SQL produces. Define a variable like requestExecuted next to your sql variable, and in the mssql success callback, check it before calling request.execute():
var requestExecuted = false;
mssql.query(sql, [item.CustomerName, item.ContactName], {
    success: function(results) {
        if (requestExecuted === false) {
            requestExecuted = true;
            request.execute();
        }
    },
    error: function(err) {
        console.log("error is: " + err);
    }
});
If this doesn't get you going, try adding console.log statements in the callback to see if it is getting called and how many times. Update your question if you have any more details from errors in your log.
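For example, a throwaway counter makes repeated invocations obvious (a minimal sketch; the variable name is arbitrary):

var successCalls = 0;
mssql.query(sql, [item.CustomerName, item.ContactName], {
    success: function(results) {
        successCalls++;
        console.log("success callback called, invocation #" + successCalls);
    },
    error: function(err) {
        console.log("error is: " + err);
    }
});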
I ran into an annoyance when trying to use simple functions to query data for my web app. The idea is to use one function to list the contents of one table, and another function to use the user-selected record_id from that table to query the detailed contents in a second table.
When running, the app executed both functions without any error, but no data came back. I checked the console and found that the second function's query result is null (I console.logged the input to the second function and confirmed it received and used the correct query keys). I am sure the database has the data being queried.
I tried:
using the psql command line to query data with the same query keys; I got the results without a problem;
running the two functions from a node command line, providing the query keys; that also gave me the correct results.
So the functions should work. My question is: why, when they are put in the app and run on their own, do they not get the query results?
I am using pg = require("pg") and const pool = new pg.Pool(config) for the database connection.
Any sharing of your experience would be much appreciated.
(UPDATE) The functions are as below:
function listItemDB(callback) {
    pool.connect(function(err, client, done) {
        if (err) {
            return console.error('error fetching client from pool', err);
        }
        // use the client for executing the query
        client.query(`SELECT * FROM basicdb.items`,
            function(err, result) {
                // call `done(err)` to release the client back to the pool
                // (or destroy it if there is an error)
                done(err);
                if (err) {
                    return console.error('error running query', err);
                }
                // console.log(result.rows);
                callback(result.rows);
            });
    });
}
The function above only retrieves "item1" and "dataset1" for later use, passing them as arguments to the function below. It does its job perfectly.
function getFileName(itemName, datasetName, callback) {
    let fileName;
    console.log(itemName, datasetName);
    pool.connect(function(err, client, done) {
        if (err) {
            return console.error('error fetching client from pool', err);
        }
        client.query("SELECT * " +
            "FROM basicdb.dataset " +
            "INNER JOIN basicdb.items " +
            "ON basicdb.dataset.item_id = basicdb.items.item_id " +
            "WHERE (basicdb.items.item_name = ($1)) " +
            "AND (basicdb.dataset.datasetname = ($2))", [itemName, datasetName],
            function(err, result) {
                done();
                if (err) {
                    return console.error('error running query', err);
                }
                let records = result.rows;
                fileName = records[records.length - 1].filename;
                callback(fileName);
            });
    });
}
The function above retrieves the filename so the main app can use it. The code that calls it in my main app.js looks like this:
db.getFileName("item1", "dataset1", function(fileName) {
    // do something with the fileName....
});
("db" is the module that contains the functions.)
I finally found the problem: a low-level mistake that has nothing to do with the database or the queries.
The item names coming from the dropdown list in the app, which were fed into the function arguments, had one " " (space) attached to the end of the name (I don't know why), which never equals ("!=") the record in the database, so the app always got no query result. For the function tests, I had hardcoded the item name, which correctly matched ("==") the record in the database. Since the culprit is a " ", even when I console.log(itemName) I did not spot the space at the end.
It turned out to be 'a mistake of space'.
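For anyone hitting the same thing, a defensive trim on values coming from the UI would have avoided it (a minimal sketch; getFileName is the function from the question):

// strip stray leading/trailing whitespace from the dropdown values
var cleanItemName = itemName.trim();
var cleanDatasetName = datasetName.trim();
db.getFileName(cleanItemName, cleanDatasetName, function(fileName) {
    // ...use fileName as before
});

Also, console.log(JSON.stringify(itemName)) makes a trailing space visible, since it prints the quotes around the value.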
Hi, can anyone give an example of how to use an insert statement in Node.js? I am able to use a select query, but for an insert query I get the result [] — no error can be seen, but the values are not added to the original table. I am using DB2, ibm_db, Express, Node.js, and AngularJS.
I wrote a blog entry on using DB2 and node.js on Bluemix a while ago. It includes code for an INSERT statement.
As part of the insert you:
first prepare the statement,
then bind the values to be inserted, and
finally execute the statement.
Here is the relevant code snippet, the full context is in the blog:
exports.insertIP = function(ibmdb, connString, ipinfo) {
    console.log("insertIP called", ipinfo);
    ibmdb.open(connString, function(err, conn) {
        if (err) {
            // res comes from the enclosing request handler (see the blog for the full context)
            res.send("error occurred " + err.message);
        }
        else {
            // prepare the SQL statement
            conn.prepare("INSERT INTO IP.VISITORS(vtime,ip,country_code,country,region_code,region,city,zip,latitude,longitude,metro,area) VALUES (current timestamp,?,?,?,?,?,?,?,?,?,?,?)", function(err, stmt) {
                if (err) {
                    // could not prepare for some reason
                    console.log(err);
                    return conn.closeSync();
                }
                // bind and execute the statement asynchronously
                stmt.execute([ipinfo["ip"], ipinfo["country_code"], ipinfo["country_name"], ipinfo["region_code"], ipinfo["region_name"], ipinfo["city"], ipinfo["zipcode"], ipinfo["latitude"], ipinfo["longitude"], ipinfo["metro_code"], ipinfo["area_code"]], function(err, result) {
                    console.log(err);
                    // close the connection to the database
                    conn.close(function() {
                        console.log("Connection Closed");
                    });
                });
            });
        }
    });
};
As one of the members of node-ibm_db, I would suggest following the node-ibm_db GitHub repository (https://github.com/ibmdb/node-ibm_db); we have updated the README document as well as the list of APIs for particular tasks.
For your query you can use the ".prepare(sql, callback)" or ".prepareSync(sql)" API (async or sync, as your requirements dictate). Below is a code snippet and the URL for this API's documentation.
var ibmdb = require("ibm_db"),
    cn = "DATABASE=dbname;HOSTNAME=hostname;PORT=port;PROTOCOL=TCPIP;UID=dbuser;PWD=xxx";

ibmdb.open(cn, function(err, conn) {
    conn.prepare("insert into hits (col1, col2) VALUES (?, ?)",
        function(err, stmt) {
            if (err) {
                // could not prepare for some reason
                console.log(err);
                return conn.closeSync();
            }
            // bind and execute the statement asynchronously
            stmt.execute(['something', 42], function(err, result) {
                if (err) console.log(err);
                else result.closeSync();
                // close the connection
                conn.close(function(err) {});
            });
        });
});
API documentation (GitHub URL): https://github.com/ibmdb/node-ibm_db#-8-preparesql-callback
Try installing jt400 using the command below:
npm install node-jt400 --save
Use the code below to insert data into the table foo.
Follow the link https://www.npmjs.com/package/node-jt400 for details.
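Note that the snippet below assumes a connection pool has already been created, along these lines (a sketch; the connection details are placeholders):

var jt400 = require('node-jt400');

// connection settings are placeholders -- substitute your own system
var pool = jt400.pool({
    host: 'myhost',
    user: 'myuser',
    password: 'mypassword'
});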
pool
    .insertAndGetId('INSERT INTO foo (bar, baz) VALUES(?,?)', [2, 'b'])
    .then(id => {
        console.log('Inserted new row with id ' + id);
    });
So I've set up a node.js backend that is to be used to move physical items in our warehouse. The database hosting our software is oracle, and our older version of this web application is written in PHP which works fine, but has some weird glitches and is slow as all hell.
The node.js backend works fine for moving single items, but once I try moving a box (which will then move anything from 20-100 items), the entire backend stops at the .commit() part.
Anyone have any clue as to why this happens, and what I can do to remedy it? Suggestions for troubleshooting would be most welcome as well!
Code:
function move(barcode, location) {
    var p = new Promise(function(resolve, reject) {
        console.log("Started");
        exports.findOwner(barcode).then(function(data) {
            console.log("Got data");
            // console.log(barcode);
            var type = data[0];
            var info = data[1];
            var sql;
            sql = "update pitems set location = '" + location + "' where barcode = '" + barcode + "' and status = 0"; // status = 0 is goods in store.
            ora.norway.getConnection(function(e, conn) {
                if (e) {
                    reject({"status": 0, "msg": "Failed to get connection", "error": e});
                }
                else {
                    console.log("Got connection");
                    conn.execute(sql, [], {}, function(err, results) {
                        console.log("Executed");
                        if (err) {
                            conn.release();
                            reject({"status": 0, "msg": "Failed to execute sql" + sql, "error": err});
                        }
                        else {
                            console.log("Execute was successful"); // This is the last message logged to the console.
                            conn.commit(function(e) {
                                conn.release(function(err) {
                                    console.log("Failed to release");
                                })
                                if (e) {
                                    console.log("Failed to commit!");
                                    reject({"status": 0, "msg": "Failed to commit sql" + sql, "error": e});
                                }
                                else {
                                    console.log("derp6");
                                    resolve({"status": 1, "msg": "Relocated " + results.rowsAffected + " items."});
                                }
                            });
                        }
                    });
                }
            });
        });
    });
    return p;
}
Please be aware that your code is open to SQL injection vulnerabilities. Even more so since you posted it online. ;)
I recommend updating your statement to something like this:
update pitems
set location = :location
where barcode = :barcode
and status = 0
Then update your conn.execute as follows:
conn.execute(
    sql,
    {
        location: location,
        barcode: barcode
    },
    {},
    function(err, results) {...}
);
Oracle automatically escapes bind variables. But there's another advantage in that you'll avoid hard parses when the values of the bind variables change.
Also, I'm happy to explore the issue you're encountering more with commit. But it would really help if you could provide a reproducible test case that I could run on my end.
I think this is an issue at the database level; an update on multiple items without providing an ID may not be allowed.
You should do two things:
1) For debugging purposes, add console.log(JSON.stringify(error)) where you expect an error. Then you'll see the error that your database sends back.
2) at the line that says
conn.release(function(err) {
    console.log("Failed to release");
})
Check if err is defined:
conn.release(function(err) {
    if (err) {
        console.log("Failed to release");
    }
    else {
        console.log("conn released");
    }
})
This sounds similar to an issue I'm having. Node.js hangs while updating an Oracle DB using the oracledb library. With 167 updates to make it works fine; the program hangs when I have 168 updates. The structure of the program goes like this:
For each of the 168 records returned in callbacks from a local sqlite db: 1) get an Oracle connection; 2) do two updates to two tables (one update per table, with autoCommit on the latter execute). All of the first updates complete, but none can start the second update; they just hang there. With 167 records they run to completion.
The strange thing is that none of the 168 could even start the second update (after finishing the first), which would have let some of them go forward to commit. It looks like they are all queued up in some way.
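For what it's worth, the per-record pattern described above looks roughly like this (a sketch with placeholder SQL and bind values, using node-oracledb's callback API). One thing worth checking with this shape is pool exhaustion: if every record grabs a pooled connection up front and holds it across both updates, and the pool's poolMax is smaller than the number of concurrent records, everything can end up queued waiting for connections.

// placeholder SQL and binds, not the actual statements
pool.getConnection(function(err, conn) {
    if (err) { return console.error(err); }
    conn.execute(updateFirstTableSql, binds1, { autoCommit: false }, function(err1) {
        if (err1) { return conn.release(function() { console.error(err1); }); }
        conn.execute(updateSecondTableSql, binds2, { autoCommit: true }, function(err2) {
            conn.release(function() {}); // always hand the connection back to the pool
            if (err2) { console.error(err2); }
        });
    });
});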
I am wondering what the best practice would be for saving multiple records in an action that makes changes, particularly add and remove, to multiple records. The end goal is to have one function have access to all of the changed data in the action. Currently I am nesting the saves in order for the innermost saves to have access to the data in all the updated records. Here is an example of how I am saving:
record1.save(function(error, firstRecord) {
    record2.save(function(erro, secondRecord) {
        record3.save(function(err, thirdRecord) {
            res.send({recordOne: firstRecord, recordTwo: secondRecord, recordThree: thirdRecord});
        });
    });
});
With this structure of saving, recordOne, recordTwo, and recordThree display the expected values on the server. However, checking localhost/1337/modelName reveals that the models did not properly update and have incorrect data.
You could use the bundled promise engine, Bluebird, to do that.
var Promise = require('bluebird');

Promise.all([record1.save(), record2.save(), record3.save()])
    .spread(function(record1_saved, record2_saved, record3_saved) {
        res.send({recordOne: record1_saved, recordTwo: record2_saved, recordThree: record3_saved});
    })
    .catch(function(err) {
        res.badRequest({
            error: err.message
        });
    });

Promise.all starts the three saves in parallel, Bluebird's .spread passes the resolved values to the callback as separate arguments, and .catch handles a failure from any of the saves.
Hi, I'm developing an app with Node.js, Express, and MongoDB. I need to take user data from a CSV file and upload it to my database; the DB has a schema designed with Mongoose.
But I don't know how to do this. What is the best approach to read the CSV file, check for duplicates against the DB, and insert the user (one column in the CSV) if it is not there?
Are there modules to do this, or do I need to build it from scratch? I'm pretty new to Node.js and could use a few pieces of advice here.
Thanks!
The app has an Angular frontend so the user can upload the file. Maybe I should read the CSV on the front end and transform it into an array for Node, then insert it?
Use one of the several node.js csv libraries like this one, and then you can probably just run an upsert on the user name.
An upsert is an update query with the upsert flag set to true: {upsert: true}. This will insert a new record only if the search returns zero results. So you query may look something like this:
db.collection.update({username: userName}, newDocumentObj, {upsert: true})
Where userName is the current username you're working with and newDocumentObj is the json document that may need to be inserted.
However, if the query does return a result, it performs an update on those records.
EDIT:
I've decided that an upsert is not appropriate for this but I'm going to leave the description.
You're probably going to need to do two queries here, a find and a conditional insert. For this find query I'd use the toArray() function (instead of a stream) since you are expecting 0 or 1 results. Check if you got a result on the username and if not insert the data.
Read about node's mongodb library here.
EDIT in response to your comment:
It looks like you're reading data from a local csv file, so you should be able to structure your program like:
function connect(callback) {
    connStr = 'mongodb://' + host + ':' + port + '/' + schema; // command line args, may or may not be needed, hard code if not I guess
    MongoClient.connect(connStr, function(err, db) {
        if (err) {
            callback(err, null);
        } else {
            colObj = db.collection(collection); // command line arg, hard code if not needed
            callback(null, colObj);
        }
    });
}

connect(function(err, colObj) {
    if (err) {
        console.log('Error:', err.stack);
        process.exit(0);
    } else {
        console.log('Connected');
        doWork(colObj, function(err) {
            if (err) {
                console.log(err.stack);
                process.exit(0);
            }
        });
    }
});
function doWork(colObj, callback) {
    csv().from('/path/to/file.csv').on('data', function(data) {
        // mongo query (colObj.find) for data.username, or however the data is structured
        // inside the callback for colObj.find, check for results; if there are none, insert data with colObj.insert;
        // invoke doWork's callback inside the insert callback, or in the else branch of the find check
    });
}
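For concreteness, the find-then-insert step inside the 'data' handler might look roughly like this (a sketch; it assumes the CSV rows expose a username field, which is a placeholder for however your data is structured, and uses the find/insert calls mentioned above):

colObj.find({username: data.username}).toArray(function(err, docs) {
    if (err) { return callback(err); }
    if (docs.length === 0) {
        // not in the db yet -- insert the new user record
        colObj.insert(data, function(err, result) {
            if (err) { return callback(err); }
        });
    }
});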