I'm creating a Discord bot (discord.js) with Node.js and using sqlite3 to store two sets of data.
I open the DB with:
let db = new sqlite3.Database('./db/games.db', (err) => {
    if (err) {
        console.error(err.message);
        return;
    }
    ...
});
Later on, I store a certain message in a table called promotedGames (id INCREMENTAL, server INT, channel INT, messageID INT)
And when I query the db with
db.all(`SELECT * FROM promotedGames WHERE gameID = ${row.id}`, (err, promotedGamesResults) => {
...
It always returns the first game ever stored, and I have no idea why this is happening.
Steps taken so far:
- Restarted the Node App
- Deleted the sqlite3 database and recreated it
- Deleted the node_modules folder and reinstalled all the packages
Any hints of what could be that I'm doing wrong?
Thanks in advance
You are searching for your gameID with row.id, but rowid in SQLite is a built-in counter for the rows you have inserted into a table; it is NOT the column gameID.
I think you intend to return the records matching ${row.id}.
db.each() runs the query passed as an argument and, for each row returned from the database, it runs the callback.
Therefore it is better to use db.each() when you intend to retrieve several items from the database.
db.each(`SELECT * FROM promotedGames WHERE gameID = ${row.id}`, (err, promotedGamesResults) => {
...
PS: db.each() is used when you need to go through all the returned rows and work on the selected data set row by row.
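As a side note, interpolating ${row.id} directly into the SQL string works but is fragile. Here is a minimal sketch of the same lookup using a parameter placeholder with db.each (it assumes promotedGames really has a gameID column, which the schema shown above doesn't include, so adjust names to your actual table):

const sqlite3 = require('sqlite3').verbose();
const db = new sqlite3.Database('./db/games.db');

// Hypothetical helper: iterate over all promoted messages for one game.
// Assumes a gameID column exists on promotedGames; adjust to the real schema.
function forEachPromotedGame(gameId, onRow) {
    db.each(
        'SELECT * FROM promotedGames WHERE gameID = ?', // placeholder instead of string interpolation
        [gameId],
        (err, promotedGame) => {
            if (err) return console.error(err.message);
            onRow(promotedGame); // called once per matching row
        }
    );
}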
I have a basic NodeJS Couchbase script straight from their documentation. It just inserts a document and immediately N1QL queries the inserted document.
var couchbase = require('couchbase');
var cluster = new couchbase.Cluster('couchbase://localhost/');
cluster.authenticate('admin', 'admini');
var bucket = cluster.openBucket('application');
var N1qlQuery = couchbase.N1qlQuery;

bucket.manager().createPrimaryIndex(function () {
    bucket.upsert('user:king_arthur', {
        'email': 'kingarthur#couchbase.com',
        'interests': ['Holy Grail', 'African Swallows']
    },
    function (err, result) {
        bucket.get('user:king_arthur', function (err, result) {
            console.log('Got result: %j', result.value);
            bucket.query(
                N1qlQuery.fromString('SELECT * FROM application WHERE $1 in interests LIMIT 1'),
                ['African Swallows'],
                function (err, rows) {
                    console.log("Got rows: %j", rows);
                });
        });
    });
});
This returns:
bash-3.2$ node nodejsTest.js
Got result: {"email":"kingarthur#couchbase.com","interests":["Holy Grail","African Swallows"]}
Got rows: []
I was expecting the inserted document in the "rows" array.
Any idea why this very basic nodeJS starter script is not working?
Key/value read writes are always consistent (that is, if you write a document, then retrieve it by ID, you will always get back what you just wrote). However, updating an index for N1QL queries takes time and can affect performance.
As of version 5.0, you can control your consistency requirements to balance the trade-off between performance and consistency. By default, Couchbase uses the Not_bounded mode. In this mode, queries are executed immediately, without waiting for indexing to catch up. This causes the issue you're seeing. Your query is executing before the index has been updated with the mutation you made.
You can read about this further here: https://developer.couchbase.com/documentation/server/current/indexes/performance-consistency.html
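As a rough sketch of how you could request stronger consistency for this one query (assuming the 2.x Node SDK used in the snippet above and reusing its bucket variable; check your SDK version's docs for the exact method and enum names):

var couchbase = require('couchbase');
var N1qlQuery = couchbase.N1qlQuery;

// Sketch: ask the query service to wait until the index has caught up with
// all mutations made before the query was issued (REQUEST_PLUS).
// Slower than the default NOT_BOUNDED, but gives read-your-own-writes.
var query = N1qlQuery
    .fromString('SELECT * FROM application WHERE $1 in interests LIMIT 1')
    .consistency(N1qlQuery.Consistency.REQUEST_PLUS);

bucket.query(query, ['African Swallows'], function (err, rows) {
    console.log('Got rows: %j', rows);
});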
This may be a very simple question, but it took me a long time and I still failed to figure it out. I need two tables in PostgreSQL that are updated dynamically in my Node.js app. table1 has two columns (obj_id, obj_name); table2 has more columns, including an "obj_id" column that is a foreign key referring to table1's "obj_id".
I insert one record into table1, do some processing, and then insert the corresponding record into table2. I need the "obj_id" value returned from the first insert and use it, together with other values, for the second insert.
I am trying to do this with the following code, but it always throws an error saying (insert or update on table "table2" violates foreign key constraint "rel_table2_table1").
function uploadTable1(name, callback) {
    const client = new pg.Client(conString);
    client.connect();
    client.on("drain", function () {
        client.end();
    });
    let insertTable1 = client.query(`INSERT INTO table1 (obj_name) VALUES ($1) RETURNING obj_id`, [name]);
    insertTable1.on("row", function (row, result) {
        callback(row.obj_id);
    });
}
function uploadTable2(kk, ff) {
    return function (obj_id) {
        const client = new pg.Client(conString);
        client.connect();
        client.on("drain", function () {
            client.end();
        });
        let uploadQuery = client.query("INSERT INTO table2 (kk,ff,obj_id) VALUES ($1,$2,$3)", [kk, ff, obj_id]);
    };
}
I can run these two functions manually, one after the other, and insert records without a problem, but when I run them together in the main program as shown in the code below, I get the error mentioned above.
uploadTable1("obj_name1",uploadTable2("kk1","ff1"));
I checked that the table1 record was inserted successfully and that the new record's "obj_id" value was passed to the callback function that does the table2 insert. The strange thing is that it still throws the error, seemingly complaining that the foreign key does not exist yet when inserting records into table2, at least at the moment the callback runs.
What are your suggestions to solve this conflict?
I assume you are making two separate connections in the two functions. Maybe the uploadTable1 transaction is not committed yet when you try uploadTable2("kk1","ff1")?
With that assumption, try changing uploadTable1 to:
function uploadTable1(name, callback) {
    const client = new pg.Client(conString);
    client.connect();
    client.on("drain", function () {
        client.end();
        let insertTable1 = client.query(`INSERT INTO table1 (obj_name) VALUES ($1) RETURNING obj_id`, [name]);
        insertTable1.on("row", function (row, result) {
            callback(row.obj_id);
        });
    });
}
The idea is that client.end() will terminate the connection, "forcing" a commit...
Or consider rewriting the code to run the two statements in one session and one transaction instead...
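For reference, here is a minimal sketch of that single-session, single-transaction approach (it assumes a pg version where client.query returns a promise when no callback is given, and reuses the conString and the table/column names from the question):

const { Client } = require('pg');

// Sketch: both inserts on one connection, inside one transaction,
// so the second insert always sees the row created by the first.
async function uploadBoth(name, kk, ff) {
    const client = new Client(conString); // conString as in the question
    await client.connect();
    try {
        await client.query('BEGIN');
        const res = await client.query(
            'INSERT INTO table1 (obj_name) VALUES ($1) RETURNING obj_id',
            [name]
        );
        const objId = res.rows[0].obj_id;
        await client.query(
            'INSERT INTO table2 (kk, ff, obj_id) VALUES ($1, $2, $3)',
            [kk, ff, objId]
        );
        await client.query('COMMIT');
        return objId;
    } catch (err) {
        await client.query('ROLLBACK');
        throw err;
    } finally {
        await client.end();
    }
}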
After reading https://stackoverflow.com/a/14797359/4158593 about Node.js being single-threaded, and how it takes the first parameter of an async function, processes it, and then uses the callback to respond when everything is ready: what confused me is what happens if I have multiple queries that need to be executed all at once, and whether I can tell Node.js to block other requests by adding them to a queue.
To do that, I realised that I need to wrap my queries in another callback, and promises do that pretty well.
const psqlClient = psqlPool.connect();
return psqlClient.query(`SELECT username FROM usernames WHERE username=$1`, ['me'])
    .then((data) => {
        if (!data.rows[0].username) {
            psqlClient.query(`INSERT INTO usernames (username) VALUES ('me')`);
        } else { ... }
    });
This code is used during sign-up, to check that a username isn't taken before inserting it. So it is very important that Node.js puts other requests into a queue and runs the select and the insert together, because otherwise two people submitting the same username at the same time could each see it as available, and the same username would be inserted twice.
Questions
Does the code above execute the queries all at once?
If 1 is correct: if I were to change the code like this
const psqlClient = psqlPool.connect();
return psqlClient.query(`SELECT username FROM usernames WHERE username=$1`, ['me'], function (err, reply) {
    if (!reply.rows[0].username) {
        psqlClient.query(`INSERT INTO usernames (username) VALUES ('me')`);
    }
});
would that affect the behaviour?
If 1 is wrong, how should this be solved? I am going to need this pattern (mainly a select followed by an insert/update) for things like making sure that my XML sitemaps don't contain more than 50,000 URLs, by dynamically storing the count for each file in my DB.
The only thing that can guarantee data integrity in your case is a single SELECT->INSERT query, which was discussed here many times.
Some examples:
Is SELECT or INSERT in a function prone to race conditions?
Get Id from a conditional INSERT
You should be able to find more of that here ;)
I also touched on this subject in a SELECT ⇒ INSERT example within pg-promise.
There is, however, an alternative: make any repeated insert generate a conflict, in which case you can re-run your select to get the existing record. But it is not always a suitable solution.
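As a rough sketch of that conflict-based alternative (assuming PostgreSQL 9.5+ and a unique constraint on usernames.username; this is plain pg, not the pg-promise example referenced above):

// Sketch: let the database enforce uniqueness instead of SELECT-then-INSERT.
// Assumes: CREATE UNIQUE INDEX ON usernames (username);
async function claimUsername(client, username) {
    const res = await client.query(
        `INSERT INTO usernames (username)
         VALUES ($1)
         ON CONFLICT (username) DO NOTHING
         RETURNING username`,
        [username]
    );
    // One row back => we inserted it; zero rows => it already existed.
    return res.rowCount === 1;
}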
Here's a reference from the creator of node-postgres: https://github.com/brianc/node-postgres/issues/83#issuecomment-212657287. Basically queries are queued, but don't rely on that in production where you have many requests...
However, you can use BEGIN and COMMIT:
var Client = require('pg').Client;
var client = new Client(/* your connection info goes here */);
client.connect();

var rollback = function (client) {
    // terminating a client connection will
    // automatically rollback any uncommitted transactions,
    // so while it's not technically mandatory to call
    // ROLLBACK, it is cleaner and more correct
    client.query('ROLLBACK', function () {
        client.end();
    });
};

client.query('BEGIN', function (err, result) {
    if (err) return rollback(client);
    client.query('UPDATE account SET money = money + 100 WHERE id = $1', [1], function (err, result) {
        if (err) return rollback(client);
        client.query('UPDATE account SET money = money - 100 WHERE id = $1', [2], function (err, result) {
            if (err) return rollback(client);
            // disconnect after successful commit
            client.query('COMMIT', client.end.bind(client));
        });
    });
});
Check out: https://github.com/brianc/node-postgres/wiki/Transactions
However, this doesn't lock the table. Here's a list of solutions: Update where race conditions Postgres (read committed)
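As a small illustration of the row-locking option from that list (a sketch only, reusing the account table from the transaction example above and assuming a pg version where client.query returns promises): lock the row you are about to change with SELECT ... FOR UPDATE, so a concurrent transaction under read committed has to wait for the first one to commit.

// Sketch: read-modify-write on one row without a lost update.
// Assumes the `account` table from the example above; client is a connected pg client.
async function addMoney(client, accountId, amount) {
    await client.query('BEGIN');
    try {
        // FOR UPDATE locks the row; concurrent writers block here until we commit.
        const res = await client.query(
            'SELECT money FROM account WHERE id = $1 FOR UPDATE',
            [accountId]
        );
        console.log('balance before update:', res.rows[0].money);
        await client.query(
            'UPDATE account SET money = money + $1 WHERE id = $2',
            [amount, accountId]
        );
        await client.query('COMMIT');
    } catch (err) {
        await client.query('ROLLBACK');
        throw err;
    }
}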
I use the Node.js sqlite3 module to manipulate data. I use this code to insert data into the database and get the inserted id:
db.run("INSERT INTO myTable (name) VALUES ('test')");
db.get("SELECT last_insert_rowid() as id", function (err, row) {
console.log('Last inserted id is: ' + row['id']);
});
I think this is not stable. My DB connection is always open. When my server runs this code for multiple simultaneous client connections, does SELECT last_insert_rowid() return the right id?
Is sqlite last_insert_rowid atomic?
thanks.
According to the sqlite3 documentation for Database#run(sql, [param, ...], [callback]),
you can retrieve lastID from the callback.
try {
    db.run("INSERT INTO TABLE_NAME VALUES (NULL,?,?,?,?)", data1, data2, data3, data4, function (err) {
        if (err) {
            callback({ "status": false, "val": err });
        } else {
            // `this.lastID` is only set on a regular function callback (not an arrow function)
            console.log("val " + this.lastID);
            callback({ "status": true, "val": "" });
        }
    });
} catch (ex) {
    callback({ "status": false, "val": ex });
}
this.lastID returns the last inserted row id, as documented for sqlite3's Database#run(sql, [param, ...], [callback]).
For the Node.js sqlite3 last inserted id you can use lastID.
Here is a simple example:
db.run("INSERT INTO foo ...", function(err) {
if(null == err){
// row inserted successfully
console.log(this.lastID);
} else {
//Oops something went wrong
console.log(err);
}
});
last_insert_rowid() returns the ROWID for the last insert operation on this connection.
The result is unpredictable if the function is called from multiple threads on the same database connection.
Documentation (for the C API):
https://www.sqlite.org/c3ref/last_insert_rowid.html
If you don't share your database connection (session) between multiple threads for concurrent inserts, this is safe. If multiple threads insert on the same connection, this is unsafe, i.e. you might get either ID or a completely invalid ID.
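If you want to avoid the separate SELECT last_insert_rowid() round-trip entirely, a minimal sketch (assuming the node-sqlite3 module used above) is to wrap db.run in a Promise and read this.lastID from that same statement's callback:

// Sketch: capture the inserted id from the statement's own callback,
// so it cannot be mixed up with inserts made elsewhere on the connection.
function insertName(db, name) {
    return new Promise((resolve, reject) => {
        // Must be a regular function, not an arrow, so `this` is the Statement.
        db.run("INSERT INTO myTable (name) VALUES (?)", [name], function (err) {
            if (err) return reject(err);
            resolve(this.lastID);
        });
    });
}

// Usage:
// insertName(db, 'test').then(id => console.log('Last inserted id is: ' + id));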
Inside the db.run callback, this.lastID will give you the last inserted ID.
Hi, I'm developing an app with Node.js, Express and MongoDB. I need to take user data from a CSV file and upload it to my database; the DB has a schema designed with Mongoose.
But I don't know how to do this. What is the best approach to read the CSV file, check for duplicates against the DB, and insert the user (one column in the CSV) if it is not already there?
Are there modules to do this, or do I need to build it from scratch? I'm pretty new to Node.js,
so I need a few pieces of advice here.
Thanks
This app has an Angular frontend so the user can upload the file. Maybe I should read the CSV on the frontend, transform it into an array for Node, and then insert it?
Use one of the several node.js csv libraries like this one, and then you can probably just run an upsert on the user name.
An upsert is an update query with the upsert flag set to true: {upsert: true}. This will insert a new record only if the search returns zero results. So your query may look something like this:
db.collection.update({username: userName}, newDocumentObj, {upsert: true})
Where userName is the current username you're working with and newDocumentObj is the json document that may need to be inserted.
However, if the query does return a result, it performs an update on those records.
EDIT:
I've decided that an upsert is not appropriate for this but I'm going to leave the description.
You're probably going to need to do two queries here, a find and a conditional insert. For this find query I'd use the toArray() function (instead of a stream) since you are expecting 0 or 1 results. Check if you got a result on the username and if not insert the data.
Read about node's mongodb library here.
EDIT in response to your comment:
It looks like you're reading data from a local csv file, so you should be able to structure your program like:
var MongoClient = require('mongodb').MongoClient;

function connect(callback) {
    connStr = 'mongodb://' + host + ':' + port + '/' + schema; // command line args, may or may not be needed, hard code if not I guess
    MongoClient.connect(connStr, function (err, db) {
        if (err) {
            callback(err, null);
        } else {
            colObj = db.collection(collection); // command line arg, hard code if not needed
            callback(null, colObj);
        }
    });
}
connect(function (err, colObj) {
    if (err) {
        console.log('Error:', err.stack);
        process.exit(0);
    } else {
        console.log('Connected');
        doWork(colObj, function (err) {
            if (err) {
                console.log(err.stack);
                process.exit(0);
            }
        });
    }
});
var csv = require('csv'); // the older node-csv streaming API is assumed here

function doWork(colObj, callback) {
    csv().from('/path/to/file.csv').on('data', function (data) {
        // mongo query (colObj.find) for data.username, or however the data is structured
        // inside the callback for colObj.find, check for results; if there are none, insert the data with colObj.insert,
        // then invoke doWork's callback inside the insert callback (or in the else branch of the find check)
    });
}
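To make the placeholder comments above concrete, here's a minimal sketch of the per-row check-then-insert (assuming each CSV row has a username field and the callback-style mongodb driver API used above; on newer drivers you'd likely prefer insertOne, and adjust field names to your actual schema):

// Sketch: for one parsed CSV row, insert the user only if the username is not present yet.
function insertIfMissing(colObj, row, done) {
    colObj.find({ username: row.username }).toArray(function (err, docs) {
        if (err) return done(err);
        if (docs.length > 0) return done(null, false); // already there, skip
        colObj.insert(row, function (err) {
            if (err) return done(err);
            done(null, true); // inserted
        });
    });
}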