Functions with and without Sequelize transactions - Node.js

I'm developing an app using Sequelize, and I just started to use transaction() because I want my queries to be able to roll back on errors. So I had my regular queries like
const customer = await Customer.findOne({ where: { email: { [Op.eq]: body.email } } });
and now I also have
const newCustomer = await Customer.create(body, { transaction: t });
Everything is working fine for the moment, but I can't help wondering whether it's a good idea in terms of performance to use both in the same operation (looking for an existing customer and, if it doesn't exist, creating a new one based on the email address). I think they each use a different transaction, but I'm not sure how that can affect, for example, my pool max number.
PS: I'm facing some issues where a query seems to block my Node process, and I have to restart the server to make everything work again.
Thanks in advance for your help!

If you forget to indicate a transaction in one or more queries, it can lead to deadlocks, because you will have two transactions changing/reading the same or linked records.
You should indicate a transaction even in read operations like findOne:
const customer = await Customer.findOne({ where: { email: { [Op.eq]: body.email } }, transaction: t });
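If both queries belong to the same logical operation, one way to keep them consistent is a single managed transaction. A minimal sketch, assuming the sequelize instance, Customer model, and body from the question (the managed form commits when the callback resolves and rolls back automatically if it throws):

// Find-or-create inside ONE managed transaction; only one pooled
// connection is held for the whole operation instead of one per query.
const customer = await sequelize.transaction(async (t) => {
  const existing = await Customer.findOne({
    where: { email: { [Op.eq]: body.email } },
    transaction: t, // the read runs inside the same transaction
  });
  if (existing) return existing;
  return Customer.create(body, { transaction: t });
});

This also relates to the pool question: each open transaction holds one connection from the pool until it commits or rolls back, so mixing transactional and non-transactional queries in one request can hold two connections at once, which can exhaust a small pool and produce the kind of blocking described in the question.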

Related

Proper way to fetch some docs from DB (Mongo)

Hello and thanks in advance!
MERN stack student here. I'm just getting to know the MVC design pattern as well.
So, here's my question:
I'm trying to get some docs from a Mongo collection (working with Mongoose), and the request comes with a limit query param (let's say I need only the first 5 docs from a collection of 30). What is considered best practice in general? Would you use one or the other in different cases (e.g. depending on how big the collection is)?
Something like this:
Controller:
async getProducts(req, res) {
  const { limit } = req.query;
  const products = await productManagerDB.getProducts();
  res.status(200).json({ success: true, limitedProductsList: products.slice(0, Number(limit)) });
}
Or
Like this:
Controller:
async getProducts(req, res) {
  const { limit } = req.query;
  const products = await productManagerDB.getProducts(limit);
  res.status(200).json({ success: true, limitedProductsList: products });
}
Service:
async getProducts(query) {
  try {
    // limit(0) in MongoDB means "no limit", so 0 is a safe default
    const limit = query ? Number(query) : 0;
    const products = await ProductsModel.find().limit(limit);
    return products;
  } catch (error) {
    throw new Error(error.message);
  }
}
Tried both ways with the same outcome. I expect the second to be more efficient, since it doesn't load data I'm not using, but I'm curious whether in some cases it would be better to fetch the whole collection...
As you have already stated, the second query is far more efficient, especially when you have a large collection.
The two differences would be:
1. The MongoDB engine will have to fetch all the documents from disk. The way MongoDB is designed (the internal WiredTiger storage engine), frequently used data is cached in the application cache or the OS cache; even so, it's quite possible that the whole collection will not fit in memory, and therefore a large number of disk operations will happen (comparatively very slow, even with the latest NVMe disks).
2. A large amount of data will have to flow over the network from the database server to the application server, which is a waste of bandwidth and will be slower.
Where you might need the full collection obviously depends on the use case.
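If you want to verify this yourself, here is a quick sketch (inside an async function) using Mongoose's explain support and the ProductsModel from the question; executionStats is MongoDB's built-in query profiler and shows how many documents the server actually examines in each approach:

// totalDocsExamined should be the whole collection without limit()
// and at most 5 with it.
const full = await ProductsModel.find().explain('executionStats');
const limited = await ProductsModel.find().limit(5).explain('executionStats');
console.log(full.executionStats.totalDocsExamined);    // e.g. 30
console.log(limited.executionStats.totalDocsExamined); // 5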

Document read and insert with locking/transaction in Node.js with MongoDB

In a reservation system, only 5 different users can create bookings. If 100 users call the booking API at the same time, how do I handle the concurrency with locking? I am using Node.js with MongoDB. I went through the Mongo concurrency article and transactions in MongoDB, but cannot find any sample code solving this with locking.
I have achieved a solution with optimistic concurrency control (suitable when there is low contention for the resource; it can be easily implemented using a versionNumber or timeStamp field).
Thank you in advance for suggesting a solution with locking.
Now the algorithm is:
Step 1: Get userAllowedNumber from the userSettings collection.
//Query
db.getCollection('userSettings').find({})
//Response Data
{ "userAllowedNumber": 5 }
Step 2: Get the current bookedCount from the bookings collection.
//Query
db.getCollection('bookings').count({ })
//Response Data
2
Step 3: If bookedCount < userAllowedNumber, insert into bookings (with <=, a 6th booking would slip through when the count already equals the limit).
//Query
db.getCollection('bookings').insertOne({ user_id: "usr_1" })
I had an in-depth discussion about locking with transactions in the MongoDB community. In conclusion, I learned about the limitations of transactions: there is no lock we can use to handle concurrent requests for this task.
You can see the full MongoDB community conversation at this link:
https://www.mongodb.com/community/forums/t/implementing-locking-in-transaction-for-read-and-write/127845
GitHub demo code with JMeter testing shows the limitation (it is not able to handle concurrent requests for this task):
https://github.com/naisargparmar/concurrencyMongo
New suggestions are still welcome and appreciated.
To solve your problem, you can also try using Redlock (https://redis.com/redis-best-practices/communication-patterns/redlock/) for distributed locking, or a mutex for instance-level locking, as in the sketch below.
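For the single-instance case, a rough sketch with the async-mutex package (the package choice is my assumption; note it only serializes requests within one Node process, so it does not help across multiple instances):

const { Mutex } = require('async-mutex');
const bookingMutex = new Mutex();

// Serializes the check-then-insert so two requests cannot both pass
// the count check before either insert happens.
async function book(db, userId) {
  return bookingMutex.runExclusive(async () => {
    const settings = await db.collection('userSettings').findOne({});
    const booked = await db.collection('bookings').countDocuments({});
    if (booked >= settings.userAllowedNumber) return false; // limit reached
    await db.collection('bookings').insertOne({ user_id: userId });
    return true;
  });
}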
For a simple example of a transaction, first you need a client connected to MongoDB.
const client = new MongoClient(uri);
await client.connect();
Once you have the client, you can create a session from which you can make transactions:
const session = await client.startSession();
const transactionResults = await session.withTransaction(async () => {
  // The session must be passed to each operation for it to be part
  // of the transaction
  await client.db().collection("example").insertOne(example, { session });
}, transactionOptions);
With transactionOptions being:
const transactionOptions = {
  readPreference: 'primary',
  readConcern: { level: 'majority' },
  writeConcern: { w: 'majority' }
};
You can find out about read and write concerns in the MongoDB documentation.
Depending on your use case, you may also consider findAndModify, which locks the document while it is being changed.
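Since findAndModify updates a single document atomically, one hedged alternative for the booking limit is a counter document that is incremented conditionally, avoiding the read-then-insert race entirely (collection and field names here are illustrative, and the limit of 5 is assumed):

// The filter and $inc run as one atomic operation, so the counter
// can never exceed the limit, no matter how many concurrent callers.
async function tryBook(db, userId) {
  const slot = await db.collection('counters').findOneAndUpdate(
    { _id: 'bookings', count: { $lt: 5 } },
    { $inc: { count: 1 } },
    { returnDocument: 'after' }
  );
  // Node driver >= 6 returns the document or null; on 4.x/5.x
  // check slot.value instead.
  if (!slot) return false; // no free slot, reject the booking
  await db.collection('bookings').insertOne({ user_id: userId });
  return true;
}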

How do I determine if MongoDB or Node.js is the bottleneck?

I'm very new to systems design in general, so let me try and explain my question to the best of my ability!
I have two EC2 t2.micro instances up and running: one is housing my MongoDB, which is storing 10,000,000 primary records, and the other has my express server on it.
The structure of my MongoDB documents is as follows:
{
_id: 1,
images: ["url_1.jpg", "url_2.jpg", "url_3.jpg"],
}
This is what my mongo connection looks like:
const { MongoClient } = require('mongodb');
const { username, password, ip } = require('./config.js');
const client = new MongoClient(`mongodb://${username}:${password}@${ip}`,
{ useUnifiedTopology: true, poolSize: 10 });
client.connect();
const Images = client.db('imagecarousel').collection('images');
module.exports = Images;
I am using loader.io to run a 1000 RPS stress test against my server's GET API endpoint. The first test uses a .findOne() query, the second a .find().limit(1) query, like so:
const query = { _id: +req.params.id };
Images.findOne(query).then((data) => res.status(200).send(data))
  .catch((err) => {
    console.log(err);
    res.status(500).send(errorMessage);
  });
//////////////////////////////////////////
const query = { _id: +req.params.id };
Images.find(query).limit(1).toArray().then((data) => res.status(200).send(data[0]))
  .catch((err) => {
    console.log(err);
    res.status(500).send(errorMessage);
  });
When I looked at the results in New Relic, I was a little perplexed by what I saw (see the New Relic results screenshot).
After some research, I figured this has something to do with .findOne() returning a document, and .find() returning a cursor?
So my question is: how do I determine whether the bottleneck is Node.js or MongoDB, and do the queries I use determine that for me (in this specific case)?
I would suggest that you start with the MongoDB console and explore your queries in detail. This way you will isolate MongoDB's behavior from the driver's behavior.
A good way to analyse your queries is:
cursor.explain() - https://docs.mongodb.com/manual/reference/method/cursor.explain/
$explain - https://docs.mongodb.com/manual/reference/operator/meta/explain/
If you aim at pitch-perfect database performance tuning, you need to understand every detail of the execution of your queries. It will take some time to get a grip on it, but it's totally worth it!
Another detail of interest is the real-world performance monitoring and profiling in production, which reveals the true picture of the bottlenecks in your application, as opposed to the more "sterile" non-production stress-testing. Here is a good profiler, which allows you to insert custom profiling points in your app and to easily turn profiling on and off without restarting the app:
https://www.npmjs.com/package/mc-profiler
A good practice would be to first let the application run in production as a beta, inspect profiling data and optimize the slow code. Otherwise you could waste swathes of time going after some optimizations, which have little to no impact to the general app performance.
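As a lightweight first step before a full profiler, you can also attribute latency directly in the route handler; a sketch inside the async GET handler, using the Images collection from the question:

// If dbMs accounts for most of the total request time, MongoDB (or
// the network between the two instances) is the bottleneck; if not,
// look at Node/Express.
const t0 = process.hrtime.bigint();
const data = await Images.findOne({ _id: +req.params.id });
const dbMs = Number(process.hrtime.bigint() - t0) / 1e6;
console.log(`db time: ${dbMs.toFixed(1)} ms`);
res.status(200).send(data);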

Do I have a local Firestore database?

I want to understand what kind of Firestore database is installed on my box.
The code is running with Node.js 9.
If I remove the internet for X minutes and put it back, I can see all the cached transactions going to Firestore (adds, updates, deletes).
If I add the firebase.firestore().enablePersistence() line after firebase.initializeApp(fbconfig), I get this error:
Error enabling offline persistence. Falling back to persistence disabled: FirebaseError: [code=unimplemented]: This platform is either missing IndexedDB or is known to have an incomplete implementation. Offline persistence has been disabled.
Now, my question is: if I don't have persistence enabled, or can't have it, how come when disconnecting my device from the internet I still see internal transactions going on? Am I really seeing it the proper way?
To me, not seeing the console.log() inside the then() of batch.commit or transaction.update right away (only when the internet comes back) suggests that I have some kind of internal database persistence, don't you think?
Thanks in advance for your help.
UPDATE
When sendUpdate is called, it looks like the batch.commit is executed because I can see something going on in listenMyDocs(), but the console.log "Commit successfully!" is not shown until the internet comes back
function sendUpdate(response) {
  const db = firebase.firestore();
  let batch = db.batch();
  let ref = db.collection('my-collection')
    .doc('my-doc')
    .collection('my-doc-collection')
    .doc('my-new-doc');
  batch.update(ref, { "variable": response.state });
  batch.commit().then(() => {
    console.log("Commit successfully!");
  }).catch((error) => {
    console.error("Commit error: ", error);
  });
}
function listenMyDocs() {
  const firebase = connector.getFirebase();
  const db = firebase.firestore()
    .collection('my-collection')
    .doc('my-doc')
    .collection('my-doc-collection');
  const query = db.where('var1', '==', "true")
    .where('var2', '==', false);
  query.onSnapshot(snapshot => {
    snapshot.docChanges().forEach(change => {
      if (change.type === 'added') {
        console.log('ADDED');
      }
      if (change.type === 'modified') {
        console.log('MODIFIED');
      }
      if (change.type === 'removed') {
        console.log('DELETED');
      }
    });
  });
}
the console.log "Commit successfully!" is not shown until the internet comes back
This is the expected behavior. Completion listeners fire once the data is committed on the server.
Local events may fire before completion, in an effort to allow your UI to update optimistically. If the server changes the behavior that the client raised events for (for example, if the server rejects a write), the client will fire reconciliatory events (so if an add was rejected, it will fire a change.type === 'removed' event once that is detected).
I am not entirely sure if this applies to batch updates though, and it might be tricky to test that from a Node.js script as those usually bypass the security rules.
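One hedged way to make this visible in the listener from the question: snapshot metadata tells you whether an event came from the local cache or has been acknowledged by the server:

// includeMetadataChanges also fires an event when a pending local
// write is later confirmed by the server.
query.onSnapshot({ includeMetadataChanges: true }, snapshot => {
  snapshot.docChanges().forEach(change => {
    const source = change.doc.metadata.hasPendingWrites ? 'local' : 'server';
    console.log(`${change.type} (${source})`);
  });
});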

Azure mobile apps CRUD operations on SQL table (node.js backend)

This is my first post here, so please don't get mad if my formatting is a bit off ;-)
I'm trying to develop a backend solution using Azure Mobile Apps and Node.js for the server-side scripts. It is a steep curve, as I am new to JavaScript and Node.js, coming from the embedded world. What I have made is a custom API that can add users to an MSSQL table, which is working fine using the tables object. However, I also need to be able to delete users from the same table. My code for adding a user is:
var userTable = req.azureMobile.tables('MyfUserInfo');
item.id = uuid.v4();
userTable.insert(item).then(function () {
  console.log("inserted data");
  res.status(200).send(item);
});
It works. The Azure Node.js documentation is really not in good shape, and I keep searching for good examples of how to do simple things. Pretty annoying and time consuming.
The SDK documentation on delete operations says it works the same way as read, but that is not true. Or I am dumb as a wet door. My code for deleting looks like this (it results in an exception):
query = queries.create('MyfUserInfo')
  .where({ id: results[i].id });
userTable.delete(query).then(function (delet) {
  console.log("deleted id ", delet);
});
I have also tried this, with no success either:
userTable.where({ id: item.id }).read()
  .then(function (results) {
    if (results.length > 0) {
      for (var i = 0; i < results.length; i++) {
        userTable.delete(results[i].id);
      }
    }
  });
Can somebody please point me in the right direction on the correct syntax for this, and explain why it has to be so difficult to do basic stuff here ;-) It seems like there are many ways of doing the exact same thing, which really confuses me.
Thanks a lot
Martin
You could issue SQL in your API:
var api = {
  get: (request, response, next) => {
    var query = {
      sql: 'UPDATE TodoItem SET complete=@completed',
      parameters: [
        { name: 'completed', value: request.params.completed }
      ]
    };
    request.azureMobile.data.execute(query)
      .then(function (results) {
        response.json(results);
      });
  }
};
module.exports = api;
That is from their sample on GitHub
Here is the full list of samples to take a look at
Also: why are you doing a custom API for a table? Just define the table within the tables directory and add any custom authorization/authentication, as in the sketch below.
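For reference, a minimal sketch of such a table definition with the azure-mobile-apps SDK (the file name and the access setting are assumptions; the SDK then wires up the standard insert/update/delete/read endpoints for the table automatically):

// tables/MyfUserInfo.js
var azureMobileApps = require('azure-mobile-apps');
var table = azureMobileApps.table();

// Optional: require an authenticated user for destructive operations.
table.delete.access = 'authenticated';

module.exports = table;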
