knex issue whereNotExists - node.js

I am having some issues with a Knex route for PostgreSQL. I am trying to insert a row into the database, but only when the item is not already there. I am trying to use whereNotExists, but it doesn't seem to do what I want. I appreciate any help you can give me.
Thank you!
app.post('/addcart', (req, res) => {
  const { customer_id, product_id, item_quantity } = req.body;
  db('shopping_carts')
    .insert({
      customer_id: customer_id,
      product_id: product_id,
      item_quantity: item_quantity
    })
    .whereNotExists(db.select('*').from('shopping_carts').where('product_id', product_id))
    .then(item => {
      console.log(item);
      res.json(item);
    })
    .catch((err) => {
      if (err.column === 'customer_id') {
        res.status(400).json({ message: err });
        console.log('test');
      } else {
        res.status(500).json({ message: err });
        // console.log(err.name);
      }
    });
});

You can't combine a whereNotExists query with an insert query; Knex doesn't support this because of its complexity (and, per @mikael, most databases don't support it either). So Knex simply ignores the whereNotExists call that follows the insert in your method chain.
You need to check for existence first, and then do the insert, via separate calls.
You could also write a raw query. Here's an example, it's not pretty:
https://github.com/tgriesser/knex/commit/e74f43cfe57ab27b02250948f8706d16c5d821b8
However, you will run into concurrency/lock issues when trying to do this. You're much better off making use of a unique key and letting the DB reject the insert. Then you can catch it (note that node-postgres reports the error code as a string, not a number):

.catch((err) => {
  if (err.code === '23505') {
    res.status(500).json({ message: 'duplicate' });
  }
});
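To keep that comparison in one place, a small guard helper can be factored out. This is a sketch under the assumption above (node-postgres exposes the SQLSTATE as the string `err.code`); the helper name is made up:

```javascript
// PostgreSQL signals a unique-key violation with SQLSTATE '23505'.
// node-postgres surfaces it as the string err.code, so compare strings.
function isUniqueViolation(err) {
  return err != null && err.code === '23505';
}

// Hypothetical usage inside the route's .catch():
// .catch((err) => {
//   if (isUniqueViolation(err)) {
//     res.status(409).json({ message: 'duplicate' }); // 409 arguably fits better than 500
//   } else {
//     res.status(500).json({ message: err });
//   }
// });
```

As a further option, newer Knex releases (0.21+) ship `.onConflict([...]).ignore()`, which compiles to `INSERT ... ON CONFLICT DO NOTHING` on PostgreSQL and expresses the same "insert only if absent" intent in one statement.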
Edit, more info if you're curious. There's a very long thread on the topic (including @mikael's notes on databases and insert-where) here:
https://github.com/tgriesser/knex/issues/871

Related

Not able to return posts by user id

When trying to fetch all the posts by a user id, Cosmos DB only returns an empty array, but when using MongoDB through Atlas, it returns all the posts by that user. What am I doing wrong?
exports.postsByUser = (req, res) => {
  Post.find({ postedBy: req.profile._id })
    .populate("postedBy", "_id name")
    .select("_id title body created likes")
    .sort("_created")
    .exec((err, posts) => {
      if (err) {
        return res.status(400).json({ error: err });
      }
      res.json(posts);
    });
};
I receive an HTTP status of 200, but with just an empty array. And when I try fetching all the posts by all users, it returns them.
I know this might not be "the answer", but it is a suggestion that will lead others to the correct answer in their case.
For those who run into bugs like these, the easiest way to find out what is happening, and which step is producing the unexpected result, is to comment out all the pipeline steps: in this case populate, select, and sort. Then debug from there: if there is no result, start with your match condition (find); if it does return something, re-enable the next pipeline step, and so on.
Taking some time to debug it yourself will make you understand your code better, and you will find the answer faster than waiting for someone on Stack Overflow to give you suggestions.
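The same elimination idea can be mechanised with plain functions. A sketch with made-up names, where each pipeline stage is a plain array transform standing in for populate/select/sort, and the helper reports how many documents survive each stage so the step that empties the result is easy to spot:

```javascript
// runStepwise applies each named step cumulatively and records the
// surviving document count after each one.
function runStepwise(docs, steps) {
  const report = [{ step: 'find', count: docs.length }];
  let current = docs;
  for (const [name, fn] of steps) {
    current = fn(current);
    report.push({ step: name, count: current.length });
  }
  return report;
}

// Hypothetical stand-ins for the mongoose pipeline stages:
const docs = [{ title: 'b', created: 2 }, { title: 'a', created: 1 }];
const report = runStepwise(docs, [
  ['select', ds => ds.map(d => ({ title: d.title }))],
  ['sort',   ds => [...ds].sort((x, y) => (x.title < y.title ? -1 : 1))],
]);
// The first entry whose count drops to 0 points at the faulty stage.
```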

Cassandra read in loop

I am facing some difficulty deciding how to implement a read operation using Cassandra.
The case is that I have an array of ids; let's call it idArray.
After making each read, I push the result into a result array (resultArray).
Now my problem is: would such code be efficient at all?
for (let i = 0; i < idArray.length; i++) {
  const query = 'SELECT * FROM table WHERE "id" = ?';
  client.execute(query, [idArray[i]])
    .then(result => resultArray.push(result));
}
If running the queries in parallel is an option, how exactly would I do that?
Thanks in advance !
If you provide a callback to the execute call, the code will be asynchronous, and you can issue multiple requests in parallel:

client.execute(query, ['someone'], function (err, result) {
  assert.ifError(err);
  console.log('User with email %s', result.rows[0].email);
});

Depending on the number of queries you need to execute, you may need to tune connection pooling to allow more in-flight requests per connection. It's also recommended to prepare your query.
More information is in documentation.
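To run the reads in parallel while keeping results in the same order as idArray (push-on-resolve does not guarantee ordering), you can map the ids to promises and await them all. A sketch: `client.execute(query, params, { prepare: true })` is the real cassandra-driver call, everything else here is illustrative:

```javascript
// readAll fires one query per id concurrently and resolves to an array of
// results in the same order as idArray.
function readAll(execute, idArray) {
  const query = 'SELECT * FROM table WHERE "id" = ?';
  return Promise.all(idArray.map(id => execute(query, [id])));
}

// With the real driver you would pass something like:
//   (q, params) => client.execute(q, params, { prepare: true })
```

For a very large idArray you would still want to cap concurrency (chunk the array, or use a limiter) so you don't exhaust the connections' in-flight request slots.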

Monk - Where goes the data after a query?

I started using Monk today, and there are a few things that I don't really get; the documentation is too light.
First here is the code:
const movieToProcess = movieCollection.findOne({ link: videoURL }).then((doc) => {
  console.log(doc)
  console.log("BLABLA")
});
console.log("CURSOR", typeof(movieToProcess))
First thing, I don't understand why the two console.log calls inside the promise's .then() are not displaying anything. Is that normal? If so, why?
And if it is not normal that the console.logs don't work, why is that?
And finally, how can I get the return value of findOne()?
Bonus: Is there another function than findOne() to check if the value exist in the database?
I apologise for these questions, but there is not much documentation for Monk.
A few things:
In your example you are setting movieToProcess to the value of movieCollection.findOne() while also calling .then() on it.
in your .then, doc is the return value of findOne()
ALSO, referring to @Geert-Jan's comment, the promise is probably being rejected and you aren't catching it.
Try this:
movieCollection.findOne({ link: videoURL })
  .then((doc) => {
    console.log(doc)
    console.log("BLABLA")
  })
  .catch((err) => {
    console.log(err)
  })
I'll also add that findOne() does not return a cursor, it returns a document.
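For the bonus question: Monk collections also expose `count`, so existence can be checked without fetching the whole document. A small sketch; the wrapper name is made up, and `collection` is anything with a promise-returning `count()` (a Monk collection qualifies):

```javascript
// exists() resolves to true when at least one document matches the query.
// With Monk you'd call it as: exists(movieCollection, { link: videoURL })
function exists(collection, query) {
  return collection.count(query).then(n => n > 0);
}
```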

How do you save multiple records in SailsJS in one action?

I am wondering what the best practice would be for saving multiple records in an action that makes changes, particularly adds and removes, to multiple records. The end goal is for one function to have access to all of the changed data from the action. Currently I am nesting the saves so that the innermost save has access to the data in all of the updated records. Here is an example of how I am saving:
record1.save(function (error, firstRecord) {
  record2.save(function (erro, secondRecord) {
    record3.save(function (err, thirdRecord) {
      res.send({ recordOne: firstRecord, recordTwo: secondRecord, recordThree: thirdRecord });
    });
  });
});
With this structure of saving, recordOne, recordTwo, and recordThree display the expected values on the server. However, checking localhost/1337/modelName reveals that the models did not properly update and have incorrect data.
You could use the built-in promise engine, Bluebird, to do that:

var Promise = require('bluebird');

Promise.all([
  record1.save(),
  record2.save(),
  record3.save()
]).spread(function (record1_saved, record2_saved, record3_saved) {
  res.send({ recordOne: record1_saved, recordTwo: record2_saved, recordThree: record3_saved });
}).catch(function (err) {
  res.badRequest({
    error: err.message
  });
});
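If you'd rather not depend on Bluebird's .spread, the same shape works with plain Promise.all and array destructuring (Node 6+). A sketch, assuming the records expose promise-returning save() methods:

```javascript
// saveAll resolves to an array of saved records, in the same order as
// the input array, once every save has completed.
function saveAll(records) {
  return Promise.all(records.map(r => r.save()));
}

// Hypothetical usage in the action:
// saveAll([record1, record2, record3]).then(([one, two, three]) => {
//   res.send({ recordOne: one, recordTwo: two, recordThree: three });
// }).catch(err => res.badRequest({ error: err.message }));
```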

Saving reference to a mongoose document, after findOneAndUpdate

I feel like I'm encountering something completely simple, yet I cannot figure it out. I will be glad if you can help me.
I'm using mongoose + socket.io as CRUD between client and server. Since I'm using sockets, there is a private scope individual to each client's socket, in which, for future use without making db find calls, I would like to store a reference to the mongoose document that I once found for this user.
One client is creating a Room:
var currentroom;
client.on('roomcreate', function (data) {
  currentroom = new Room({
    Roomid: id,
    UsersMeta: [UserMeta],
    ///other stuff///
  });
  currentroom.save(function (err, room) {
    if (err) console.log(err);
    else console.log('success');
  });
});
Then, whenever I want, on another call from the creator I can simply do:

currentroom.Roomid = data;
currentroom.save();

And it's working fine. The problem is that I do not understand how I can get the same reference not on creation, but on a Room search. For the moment I'm using this for the search:
Room.findOneAndUpdate({ Roomid: roomid }, { $push: { UsersMeta: UserMeta } }, { new: false }, function (err, room) {
  if (err) console.log(err);
  console.log('room output:');
  console.log(room);
  client.emit('others', room);
});
The thing is that in one call I want to:
1. find a doc in the db,
2. send it to the user (in its pre-updated state),
3. update the found document,
4. save a reference (to the current, updated doc).
With findOneAndUpdate I can do all of that, except saving a reference to the current document.
So how do I need to approach it, then?
Like this:

Room.findOne({ Roomid: roomid }, function (err, oldRoom) {
  // make changes to oldRoom
  // then save it like this
  oldRoom.save(function (err, newRoom) {
    // newRoom is the updated document
    // now you have a reference to both the old and new docs
  });
});
At the end of the road I found out that I was trying to do it wrong. The idea of storing a reference (instance) to a doc that is accessed by different users is obviously bad, as it leads to data conflicts between those instances. Also, the approach with a separate find and save can cause race conditions and conflicts, since it is not an atomic operation. It is far better to use findOneAnd*** and let mongoose/MongoDB handle the query itself, making sure that no conflicts occur.
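The race described above is easy to reproduce without a database. In this sketch (all names are illustrative), two "clients" perform a non-atomic read-modify-write against a shared value and one update is lost, while an atomic update function, playing the role of findOneAndUpdate, keeps both:

```javascript
// A toy store: read/write are async, like a database round trip.
function makeStore(initial) {
  let value = initial;
  return {
    read: () => Promise.resolve(value),
    write: v => { value = v; return Promise.resolve(); },
    // Atomic variant: the whole read-modify-write happens "inside the store".
    update: fn => { value = fn(value); return Promise.resolve(value); },
  };
}

async function demo() {
  const lossy = makeStore(0);
  // Both clients read 0, then both write 1: one increment is lost.
  const [a, b] = await Promise.all([lossy.read(), lossy.read()]);
  await lossy.write(a + 1);
  await lossy.write(b + 1);
  const lost = await lossy.read();   // 1, not 2

  const atomic = makeStore(0);
  await Promise.all([atomic.update(v => v + 1), atomic.update(v => v + 1)]);
  const safe = await atomic.read();  // 2

  return { lost, safe };
}
```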
