Avoiding race conditions and starvation with MongoDB + NodeJS

I have a proxy management pool that is responsible for storing, checking, and retrieving proxies so that they can be used with web requests.
async getNextAvailableProxy() {
    while (true) {
        var sleepTime = global.Settings.ProxyPool.ProxySleepTimeMS;
        var availableProxies = await this.data.find(PROXY_COLLECTION, {
            $query: {
                Enabled: true,
                InUse: false,
                LastUsed: { $lte: new Date(new Date() - sleepTime) }
            },
            $orderby: { ResponseTime: 1 }
        });

        if (availableProxies.length <= 0) {
            var nextAvailable = await this.data.findOne(PROXY_COLLECTION, {
                $query: { Enabled: true, InUse: false },
                $orderby: { LastUsed: -1 }
            });

            if (!nextAvailable) {
                await Utils.sleep(100);
                console.log('No proxies available, sleeping');
                continue;
            }

            sleepTime = sleepTime - (new Date() - nextAvailable.LastUsed);
            if (sleepTime > 0)
                await Utils.sleep(sleepTime);
            continue;
        }

        var selectedProxy = availableProxies[0];
        selectedProxy.InUse = true;
        await this.data.save(PROXY_COLLECTION, selectedProxy);
        return selectedProxy;
    }
}
It is worth noting that my versions of find and save are wrappers around the MongoDB driver for NodeJS.
It is also worth noting that Utils.sleep() is a promise that uses a setTimeout to perform an async sleep.
Now, I understand that since NodeJS is single-threaded, race conditions supposedly cannot occur. However, with multiple isolated objects querying the database in rapid succession, that is simply not the case.
If I have, say, five instances of ProxyPool and they all call getNextAvailableProxy() within a short time of each other, they will all fetch the same proxy from the database, because one instance starts its query before another instance has saved the InUse flag, leaving me with n instances of ProxyPool all holding the exact same proxy.
How can I get around this in an asynchronous manner?

Honestly, it's hard to tell from your post why this is a problem. While collisions can happen, they should be rare enough not to matter in my opinion, unless using a proxy is a really long-running operation (and so a given proxy is tied up a lot).
That said, I also would not look up a proxy on every request. Instead, I'd probably have each worker fetch a pool of proxies either on startup or at intervals (maybe once an hour or something), and then internally manage (in-memory) the proxies it has available.
Your algorithm for figuring out which proxies to give a given worker can then be pretty flexible and a lot less likely to have collisions; since each node instance is single-threaded, it won't allocate the same proxy twice.
The risk is that you may hit a place where a given worker has run out of proxies. That's something you'll need to handle as well, but since you will (in theory) have your workers load balanced in some fashion, if you hit that spot you're probably running out of proxies anyway and will have to issue a Too Busy response soon.
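For illustration, a rough sketch of that per-worker in-memory pool could look like this (the class and method names are mine, not from the question; fetching and refreshing the batch from the DB is left to the worker):
class LocalProxyPool {
    constructor(proxies) {
        this.available = [...proxies]; // proxies this worker currently owns
        this.inUse = new Set();
    }

    acquire() {
        const proxy = this.available.shift();
        if (!proxy) return null;       // out of proxies: caller decides how to back off
        this.inUse.add(proxy);
        return proxy;
    }

    release(proxy) {
        this.inUse.delete(proxy);
        this.available.push(proxy);    // simple rotation; sort by ResponseTime if desired
    }
}
The worker would rebuild this pool on startup or on an interval, as described above.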
Finally, when you do hit the DB for a list of available proxies, you should be using findAndModify() or similar to fetch and update the documents in one shot, so that as you pull one out of the DB you tell the DB it's not available, rather than waiting on processing on your web server first.
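As a sketch of that last point with the plain MongoDB Node driver (the question uses custom wrappers, so the exact call site is an assumption; on a 4.x-era driver the updated document comes back under result.value):
// Atomically claim one proxy: the filter, sort, and InUse flag update happen
// in a single findOneAndUpdate, so two callers can never grab the same document.
async function claimProxy(collection, sleepTimeMs) {
    const result = await collection.findOneAndUpdate(
        {
            Enabled: true,
            InUse: false,
            LastUsed: { $lte: new Date(Date.now() - sleepTimeMs) },
        },
        { $set: { InUse: true, LastUsed: new Date() } },
        { sort: { ResponseTime: 1 }, returnDocument: 'after' }
    );
    return result.value; // null when no proxy currently qualifies
}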

Related

Synchronize multiple requests to database in NestJS

In our NestJS application we are using TypeORM as the ORM for working with DB tables, together with the typeorm-transactional-cls-hooked library.
We now have a problem with synchronizing requests that read and modify the database at the same time.
Sample:
@Transactional()
async doMagicAndIncreaseCount(id) {
    const { currentCount } = await this.fooRepository.findOne(id);
    // do some stuff where I receive a new count which I need to add to the current one, for instance 10
    const newCount = currentCount + 10;
    await this.fooRepository.update(id, { currentCount: newCount });
}
When we execute this operation from the frontend multiple times at the same time, the final count is wrong: the first transaction reads currentCount and starts the computation; during the computation the second transaction starts and reads the same currentCount; the first transaction finishes and saves its new currentCount; then the second transaction finishes and overwrites the result of the first.
Our goal is to execute this operation on the foo table only once at a time; other requests should wait until the current one finishes.
I tried setting the SERIALIZABLE isolation level like this:
@Transactional({ isolationLevel: IsolationLevel.SERIALIZABLE })
which ensures that only one request is executed at a time, but the other requests fail with an error. Can you please give me some advice on how to solve this?
I have never used TypeORM, and you don't mention which DB engine you are using.
In any case, to achieve this you need write locks.
The doMagicAndIncreaseCount pseudocode should be something like
BEGIN TRANSACTION
ACQUIRE WRITE LOCK ON id
READ id RECORD
do computation
SAVE RECORD
CLOSE TRANSACTION
Alternatively, you can use an operation that is natively atomic on the DB engine, e.g. the INCR operation in Redis.
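On a SQL engine the equivalent is to do the arithmetic inside a single UPDATE. TypeORM's repositories expose this via increment(); a sketch, assuming the change really is a plain addition of 10 as in the example:
// Push the addition into the database itself so concurrent requests cannot
// overwrite each other's result; no prior SELECT is needed. This maps to a
// single UPDATE ... SET "currentCount" = "currentCount" + 10 on the server.
async doMagicAndIncreaseCount(id) {
    await this.fooRepository.increment({ id }, 'currentCount', 10);
}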
Edit:
Reading the TypeORM find documentation, I can suggest something like:
this.fooRepository.findOne({
    where: { id },
    lock: { mode: "pessimistic_write" },
})
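For the pessimistic lock to actually block concurrent writers, it has to run inside a transaction. A fuller sketch using TypeORM's DataSource API (Foo stands for the entity behind fooRepository, which the post doesn't show; the original code uses the @Transactional() decorator instead):
// Read-compute-write inside one transaction while holding a row-level write
// lock (SELECT ... FOR UPDATE on PostgreSQL), so concurrent calls queue up
// instead of overwriting each other.
async function doMagicAndIncreaseCount(dataSource, id) {
    await dataSource.transaction(async (manager) => {
        const foo = await manager.findOne(Foo, {
            where: { id },
            lock: { mode: 'pessimistic_write' },
        });
        const newCount = foo.currentCount + 10; // "do some stuff" from the question
        await manager.update(Foo, id, { currentCount: newCount });
    });
}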
P.S. Looking at the tags of the question I would guess the used DB engine is PostgreSQL.

How to manage massive calls to Postgresql in Node

I have a question regarding massive calls to PostgreSQL.
This is the scenario:
I have a simple Nodejs app that makes queries to PostgreSQL in a short period of time.
Everything is fine, but sometimes these calls get rejected because of PostgreSQL's maximum pool connections setting, which is 100.
My idea is to do it queue-consumer style: add every query to a queue and consume one element every second, so PostgreSQL gets one query per second.
But my problem is that I don't know where to start. This is the part I'm having trouble with: at some point I have a lot of calls and get lots of "ERROR IN QUERY EXECUTION" for the reason explained above.
const pool3 = new Pool(credentialsPostGres);
let res = [];
let sql_call = "select colum1 from table2 where x = y"; // the real query is a bit more complex, but you get the idea.
pool3.query(sql_call, (err, results) => {
    if (err) {
        pool3.end();
        console.log(err + " ERROR IN QUERY EXECUTION");
    } else {
        res.push({ data: Object.values(JSON.parse(JSON.stringify(results.rows))) });
        pool3.end();
        return callback(res, data);
    }
});
How should I manage this part with a queue? I am a bit lost.
Help!
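One way to start on the queue idea described above is a shared array drained by a timer. This is only a sketch (runQueued and the one-second interval are illustrative, and credentialsPostGres is the same object as in the snippet); it also reuses a single Pool instead of ending it after every query:
// Queries are pushed into a queue and a timer drains one per second, so
// PostgreSQL never sees more than one query at a time from this process.
const { Pool } = require('pg');

const pool = new Pool(credentialsPostGres); // one shared pool, never ended per query
const queue = [];

function runQueued(sql, params) {
    return new Promise((resolve, reject) => {
        queue.push({ sql, params, resolve, reject });
    });
}

setInterval(() => {
    const job = queue.shift();
    if (!job) return; // nothing waiting this tick
    pool.query(job.sql, job.params)
        .then((results) => job.resolve(results.rows))
        .catch(job.reject);
}, 1000); // one query per second, as described

// Usage: runQueued("select colum1 from table2 where x = y").then((rows) => { ... });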

Cron job failed without a reason

I am in a situation where I have a CRON task on Google App Engine (flex environment) that just dies after some time, and I have no trace of WHY (I checked the GAE logs: nothing; I tried try/catch and logging explicitly: no error).
I have explicitly verified that if I create a cron task that runs for 8 minutes (but doesn't do much, just sleeps and updates the database every second), it runs successfully. This proves that CRON jobs can run for at least 8 minutes, if not more, and that I have set up the Express & NodeJS combo correctly.
This is all fine, but seems that my other cron job dies in 2-3 minutes, so quite fast. It is hitting some kind of limit, but I have no idea how to control for it, or even what limit it is, so all I can do is speculate.
I will tell more about my CRON task. It is basically rapidly querying MongoDB database where every query is quite fast. I've tried the same code locally, and there are no problems.
My speculation is that I am somehow creating too many MongoDB requests at once, and potentially running out of something?
Here's some pseudocode (just to describe what kind of scale of data we're talking about; the numbers and flow are exactly the same):
async function q1() {
    return await mongoExecute(async (db) => {
        const [l1, l2] = await Promise.all([
            db.collection('Obj1').count({ uid1: c1, u2action: 'L' }),
            db.collection('Obj1').count({ uid2: c2, u1action: 'L' }),
        ]);
        return l1 + l2;
    });
}

for (let i = 0; i < 8000; i++) {
    const allImportantInformation = Promise.all([
        q1(),
        q2(),
        q3(),
        .....
        q10()
    ]);
    await mongoDb.saveToServer(document);
}
It is getting somewhere around i=1600 before the CRON job just dies without any explanation. The GA Cron Job panel clearly says the JOB has failed.
Here is also my mongoExecute (which is just a separate module that caches the db object, which hopefully is the correct practice in order to ensure that mongodb pooling works correctly.)
import { MongoClient, Db } from 'mongodb';

let db = null;
let promiseInProgress = null;

export async function mongoExecute<T>(executor: (instance: Db) => T): Promise<T | null> {
    if (!db) {
        if (!promiseInProgress) {
            promiseInProgress = new Promise(async (resolve, reject) => {
                const tempDb = await MongoClient.connect(process.env.MONGODB_URL);
                resolve(tempDb);
            });
        }
        db = await promiseInProgress;
    }
    try {
        const value = await executor(db);
        return value;
    } catch (error) {
        console.log(error);
        return null;
    }
}
What would be the solution? My idea is basically to ensure fewer requests are made at once (make the promises sequential, and potentially add a sleep between each cycle of the FOR loop).
I don't understand it, because it works fine up until some fairly large point, and the amount definitely varies: sometimes it is 800, sometimes 1200, etc.
Is there some "running out of TCP connections" scenario happening? Theoretically we shouldn't run out of anything, because we don't have much open at any given point.
It does seem to work if I throw a 200 ms wait between each cycle, and I suspect I could live with that, since all the items don't have to be updated in the same CRON execution, but it is a bit annoying, and I would like to know what's going on.
Is the garbage collector not keeping up, and why exactly is GAE silently failing my cron task?
I discovered what the bug is, and fixed it accordingly.
Let me rephrase it; I have no idea what the bug was, and having no errors at any point was discouraging, however I managed to fix (lucky guess) whatever was happening by updating my nodejs mongodb driver to the latest version (from 2.xx -> 3.1.10).
No sleeps needed in my code anymore.

Request rate is large

I'm using Azure DocumentDB and accessing it through Node.js on an Express server. When I query in a loop at a low volume of a few hundred, there is no issue.
But when I query in a loop at a slightly larger volume, say around a thousand plus,
I get partial results (inconsistent: every time I run it the result values are not the same, maybe because of the asynchronous nature of Node.js),
and after a few results it crashes with this error:
body: '{"code":"429","message":"Message: {\"Errors\":[\"Request rate is large\"]}\r\nActivityId: 1fecee65-0bb7-4991-a984-292c0d06693d, Request URI: /apps/cce94097-e5b2-42ab-9232-6abd12f53528/services/70926718-b021-45ee-ba2f-46c4669d952e/partitions/dd46d670-ab6f-4dca-bbbb-937647b03d97/replicas/130845018837894542p"}' }
Does that mean DocumentDB fails to handle 1000+ requests per second?
Altogether this gives me a bad impression of NoSQL techniques.. is it a shortcoming of DocumentDB?
As Gaurav suggests, you may be able to avoid the problem by bumping up the pricing tier, but even if you go to the highest tier, you should be able to handle 429 errors. When you get a 429 error, the response will include a 'x-ms-retry-after-ms' header. This will contain a number representing the number of milliseconds that you should wait before retrying the request that caused the error.
I wrote logic to handle this in my documentdb-utils node.js package. You can either try to use documentdb-utils or you can duplicate it yourself. Here is a snippet example.
createDocument = function() {
    client.createDocument(colLink, document, function(err, response, header) {
        if (err != null) {
            if (err.code === 429) {
                var retryAfterHeader = header['x-ms-retry-after-ms'] || 1;
                var retryAfter = Number(retryAfterHeader);
                return setTimeout(createDocument, retryAfter); // retry the same call after the suggested delay
            } else {
                throw new Error(JSON.stringify(err));
            }
        } else {
            log('document saved successfully');
        }
    });
};
Note, in the above example document is within the scope of createDocument. This makes the retry logic a bit simpler, but if you don't like using widely scoped variables, then you can pass document in to createDocument and then pass it into a lambda function in the setTimeout call.
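A sketch of that second variant, where document is passed in and the retry goes through a lambda in setTimeout (same client and colLink as the snippet above; the function name is mine):
// Same 429 handling as above, but document is a parameter, so the retry
// closure simply re-invokes the function with the same document.
var createDocumentWithRetry = function(document) {
    client.createDocument(colLink, document, function(err, response, header) {
        if (err != null) {
            if (err.code === 429) {
                var retryAfter = Number(header['x-ms-retry-after-ms'] || 1);
                return setTimeout(function() {
                    createDocumentWithRetry(document); // lambda closes over document
                }, retryAfter);
            }
            throw new Error(JSON.stringify(err));
        }
        log('document saved successfully');
    });
};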

Node.js Synchronous Library Code Blocking Async Execution

Suppose you've got a 3rd-party library that's got a synchronous API. Naturally, attempting to use it in an async fashion yields undesirable results in the sense that you get blocked when trying to do multiple things in "parallel".
Are there any common patterns that allow us to use such libraries in an async fashion?
Consider the following example (using the async library from NPM for brevity):
var async = require('async');

function ts() {
    return new Date().getTime();
}

var startTs = ts();

process.on('exit', function() {
    console.log('Total Time: ~' + (ts() - startTs) + ' ms');
});

// This is a dummy function that simulates some 3rd-party synchronous code.
function vendorSyncCode() {
    var future = ts() + 50;  // ~50 ms in the future.
    while(ts() <= future) {} // Spin to simulate blocking work.
}

// My code that handles the workload and uses `vendorSyncCode`.
function myTaskRunner(task, callback) {
    // Do async stuff with `task`...
    vendorSyncCode(task);
    // Do more async stuff...
    callback();
}

// Dummy workload.
var work = (function() {
    var result = [];
    for(var i = 0; i < 100; ++i) result.push(i);
    return result;
})();

// Problem:
// -------
// The following two calls will take roughly the same amount of time to complete.
// In this case, ~6 seconds each.
async.each(work, myTaskRunner, function(err) {});
async.eachLimit(work, 10, myTaskRunner, function(err) {});

// Desired:
// --------
// The latter call with 10 "workers" should complete roughly an order of magnitude
// faster than the former.
Are fork/join or spawning worker processes manually my only options?
Yes, it is your only option.
If you need to use 50ms of CPU time to do something, and need to do it 10 times, then you'll need 500ms of CPU time in total. If you want it done in less than 500ms of wall-clock time, you need to use more CPUs. That means multiple node instances (or a C++ addon that pushes the work out onto the thread pool). How you get multiple instances depends on your app structure: a child process that you feed the work to using child_process.send() is one way, running multiple servers with cluster is another. Breaking up your server is another way. Say it's an image store application that mostly processes requests quickly, unless someone asks to convert an image into another format, which is CPU-intensive. You could push the image processing portion into a different app, access it through a REST API, and leave the main app server responsive.
If you aren't concerned that the request takes 50ms of CPU, but instead are concerned that you can't interleave handling of other requests with the processing of the CPU-intensive request, then you could break the work up into small chunks and schedule the next chunk with setInterval(). That's usually a horrid hack, though. Better to restructure the app.
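As a minimal sketch of the child_process.send() route (the file name, worker count, and the vendorSyncCode stand-in are all illustrative, not from the answer):
// offload.js: the same file acts as parent and child. The parent forks a few
// children and feeds them tasks; each child runs the blocking vendor call and
// sends the result back, so the parent's event loop stays free.
var fork = require('child_process').fork;

function vendorSyncCode(task) {          // stand-in for the blocking 3rd-party call
    var future = Date.now() + 50;
    while (Date.now() <= future) {}      // burn ~50 ms of CPU
    return task * 2;
}

if (process.send) {
    // Child: receive a task, do the blocking work, report back.
    process.on('message', function(task) {
        process.send({ task: task, result: vendorSyncCode(task) });
    });
} else {
    // Parent: spread tasks across a few children.
    var children = [0, 1, 2, 3].map(function() { return fork(__filename); });
    var done = 0;
    children.forEach(function(child, i) {
        child.on('message', function(msg) {
            console.log('result', msg);
            if (++done === children.length) children.forEach(function(c) { c.kill(); });
        });
        child.send(i); // one task per child for brevity
    });
}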
