Firestore performance issues when compared to Firestore emulator - node.js

My app can have a large number of writes, reads and updates (it can even go above 10,000) under certain circumstances.
While developing the application locally, these operations usually take a few seconds at most (great!). However, they can easily take minutes when the application runs on Google Cloud, to the point that the Firebase Function times out.
I developed a controlled test in a separate project, whose sole purpose is to write, get and delete thousands of items for benchmarking. These were the results (averaged over several runs):
Local Emulator:
5000 items, 4.2s write, 2.2s delete
5000 items, batch mode ON, 0.75s write, 0.11s delete
Cloud Firestore:
100 items, 15.8s write, 14.5s delete
1000 items, batch mode ON, 4.8s write, 3.0s delete
5000 items, async mode ON, 10.2s write, 8.0s delete
5000 items, batch & async ON, 4.5s write, 3.9s delete
NOTE: My local emulator crashes whenever I try to perform DB operations asynchronously (a problem for another day), which is why I was unable to test the write/delete speeds asynchronously locally. Also, write and read values usually vary ±25% between runs.
However, as you can see, the fact that my local emulator in its slowest mode is faster than the fastest test in the cloud definitely raises some questions.
Could it be that I have some sort of configuration issue? Or is it just that these numbers are standard for Firestore? Here is the (summarised) TypeScript code if you wish to try it:
functions.runWith({ timeoutSeconds: 540, memory: "2GB" }).https.onRequest(async (req, res) => {
    // getting the settings from the request
    var data = req.body;
    var numWrites: number = data.numWrites;
    var syncMode: boolean = !data.asyncMode;
    var batchMode: boolean = data.batchMode;
    var batchLimit: number = data.batchLimit;
    // pre-run setup
    var dbObj = {
        number: 123,
        string: "abc",
        boolean: true,
        object: { var1: "var1", num1: 1 },
        array: [1, 2, 3, 4]
    };
    var collection = db.collection("testCollection");
    var startTime = moment();
    // insert the requested number of items, using the requested settings
    var allInserts: Promise<any>[] = [];
    if (!batchMode) { // sequential writes
        for (var i = 0; i < numWrites; i++) {
            var set = collection.doc().set(dbObj);
            allInserts.push(set);
            if (syncMode) await set;
        }
    } else { // batch writes
        var batch = db.batch();
        for (var i = 1; i <= numWrites; i++) {
            batch.set(collection.doc(), dbObj);
            if (i % batchLimit === 0) {
                var commit = batch.commit();
                allInserts.push(commit);
                batch = db.batch();
                if (syncMode) await commit;
            }
        }
    }
    // some logging information; getting the items to delete
    var numInserts = allInserts.length;
    await Promise.all(allInserts);
    var insertTime = moment();
    var alldocs = (await collection.get()).docs;
    var numDocs = alldocs.length;
    var getTime = moment();
    // delete all of the items in the collection
    var allDeletes: Promise<any>[] = [];
    if (!batchMode) { // sequential deletes
        for (var doc of alldocs) {
            var del = doc.ref.delete();
            allDeletes.push(del);
            if (syncMode) await del;
        }
    } else { // batch deletes
        var batch = db.batch();
        for (var i = 1; i <= numDocs; i++) {
            var doc = alldocs[i - 1];
            batch.delete(doc.ref);
            if (i % batchLimit === 0) {
                var commit = batch.commit();
                allDeletes.push(commit);
                batch = db.batch();
                if (syncMode) await commit;
            }
        }
    }
    var numDeletes = allDeletes.length;
    await Promise.all(allDeletes);
    var deleteTime = moment();
    res.status(200).send(/* a whole bunch of metrics for analysis */);
});
EDIT: just to clarify, the UI does not perform these write operations, so latency between the end-user machine and the cloud servers should (in theory) not cause any major latency issues. The communication with the database is handled entirely by Firebase Functions.
EDIT 2: I have run this test on two deployments, one in Europe and another in the US. Both took around the same amount of time to run, even though my ping to these two servers is vastly different.

It is normal to see faster responses from the local emulator than from Cloud Firestore, as the remote environment adds network traffic that takes time.
For large numbers of operations from a single source, the recommendation is to use batch operations, as these reduce the number of transactions and, with them, the round trips.
The reason async mode is faster is that the caller does not wait for each transaction to complete before sending the next one, so it makes sense that the calls finish sooner with it.
The times in your table look normal to me.
As an additional optimization, make sure that the region where your Firestore database is located is the closest one to your location.
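To illustrate the batch-plus-async combination, here is a minimal sketch of a helper that chunks the writes and commits the batches in parallel. It assumes db is an initialized Admin SDK Firestore instance; the writeAll helper and its parameters are made up for illustration, and note that a Firestore batch only accepts a limited number of operations per commit (historically 500), so chunk accordingly:
import * as admin from "firebase-admin";

// Sketch only: chunk the documents into batches and commit the batches in
// parallel, which is essentially the "batch & async ON" mode from the question.
async function writeAll(
    db: admin.firestore.Firestore,
    collectionPath: string,
    docs: object[],
    batchLimit = 500 // assumed per-batch operation limit
): Promise<void> {
    const collection = db.collection(collectionPath);
    const commits: Promise<unknown>[] = [];
    for (let i = 0; i < docs.length; i += batchLimit) {
        const batch = db.batch();
        for (const data of docs.slice(i, i + batchLimit)) {
            batch.set(collection.doc(), data);
        }
        // Do not await each commit individually; let them overlap.
        commits.push(batch.commit());
    }
    await Promise.all(commits);
}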

Related

How to handle multiple Node.js requests reading, modifying and updating the same document in MongoDB at the same time?

I have the following code where I am using MongoDB and Node.js. I read data from a collection, perform some arithmetic operations on the data and then update the document. My issue is that when multiple requests come in at the same time, some data is lost. How can I avoid this?
// Read the document
const commissionRecord = await CommissionTable.getCommissionRecord(
    publicKey
);
// Check if a record exists or not
if (commissionRecord.responseData.exists === true) {
    // Assigning values to variables
    commissionLimit = commissionRecord.responseData.data.commissionLimit;
    commission = commissionRecord.responseData.data.commission;
}
// Perform arithmetic operations
commissionLimit = parseInt(commissionLimit) + parseInt(amount);
if (commissionLimit >= 20) {
    remainder = commissionLimit % 20;
    commission = parseInt(commission) + Math.floor(commissionLimit / 20);
    commissionLimit = parseInt(remainder);
}
if (commissionRecord.responseData.exists === true) {
    // Update the document
    const result = await CommissionTable.updateCommissionNormal(
        publicKey,
        commission,
        commissionLimit
    );
    if (result.success) {
        return result;
    }
}
The problem is that when requests come in at the same time, they all read the data together, and the updates are then made based on what each one read. How do I solve this?
To avoid race conditions you can use a mutex; Node.js has a module for this, async-mutex:
https://www.npmjs.com/package/async-mutex
The idea is to lock so that the critical function cannot run in several requests at the same time. Everything else keeps working asynchronously; calls to the locked function are queued and executed one at a time.
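A minimal sketch of that idea using runExclusive, reusing the CommissionTable helpers from the question (the applyCommission wrapper and its import path are hypothetical):
import { Mutex } from "async-mutex";
import { CommissionTable } from "./commissionTable"; // assumed path to the helper from the question

// One mutex per process; every commission update is funnelled through it.
const commissionMutex = new Mutex();

export async function applyCommission(publicKey: string, amount: number) {
    // runExclusive queues callers, so only one read-modify-write runs at a time.
    return commissionMutex.runExclusive(async () => {
        const record = await CommissionTable.getCommissionRecord(publicKey);
        if (!record.responseData.exists) return null;

        let commissionLimit = Number(record.responseData.data.commissionLimit) + amount;
        let commission = Number(record.responseData.data.commission);
        if (commissionLimit >= 20) {
            commission += Math.floor(commissionLimit / 20);
            commissionLimit = commissionLimit % 20;
        }
        return CommissionTable.updateCommissionNormal(publicKey, commission, commissionLimit);
    });
}
Keep in mind that an in-process mutex only serializes requests handled by the same Node.js process; if the app runs on several instances you would need a database-level mechanism instead.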

Activity Function Http Call Issue Nodejs

I have an Orchestration that takes 100 search terms. It batches these search terms into groups of 10 and fans out to start the search activities (each activity takes 10 names).
A search activity processes each name sequentially. For each name, it makes 2 search requests to Azure Search: one with spaces and punctuation and another without. To make the search request I call the Azure Search REST API.
The orchestration waits for all the search activities to resolve and returns the result.
The issue I am facing is that the round trip for the Azure Search HTTP request is taking too long in the function app when deployed to Azure.
At the start of the search, it takes 3-4 seconds for each request, but after a few requests the time for a single request goes up to 17-20 seconds.
Locally, when I run this orchestration with the same input and requests to the same Azure Search index, it does not take more than 1.5-2 seconds per request, and the orchestration completes in 1.0-1.2 minutes. The deployed app takes 7-8 minutes for the same input and requests to the same Azure Search index.
The following is how I make the request (code for the search activity function):
const request = require('request');
const requestDefault = request.defaults({
    method: 'GET',
    gzip: true,
    json: true,
    timeout: `some value`,
    time: true,
    pool: { maxSockets: 100 }
});

module.exports = async function (context, names) {
    let results = [];
    for (let i = 0; i < names.length; i++) {
        results.push(await search(context, names[i]));
        results.push(await search(context, withOutSpaceAndPunctuations(names[i])));
    }
    return results;
}

function search(context, name) {
    let url = createAzureSearchUrl(name);
    return (new Promise((resolve, reject) => {
        requestDefault({
            uri: url,
            headers: { 'api-key': `key` }
        }, function (error, response, body) {
            if (!error) {
                context.log(`round trip time => ${response.elapsedTime / 1000} sec`);
                context.log(`elapsed-time for search => ${response.headers['elapsed-time']} ms`);
                resolve(body.value);
            } else {
                reject(new Error(error));
            }
        });
    }));
}

function createAzureSearchUrl(name) {
    return `azure search url`;
}
The Orchestration
const df = require("durable-functions");

module.exports = df.orchestrator(function* (context) {
    let names = context.bindings.context.input;
    let chunk = 10;
    let batches = [];
    for (let i = 0; i < names.length; i += chunk) {
        let slice = names.slice(i, i + chunk);
        let batch = [];
        for (let j = 0; j < slice.length; j++) {
            batch.push(slice[j]);
        }
        batches.push(batch);
    }
    const tasks = [];
    for (let i = 0; i < batches.length; i++) {
        tasks.push(context.df.callActivity("Search", batches[i]));
    }
    let searchResults = yield context.df.Task.all(tasks);
    return searchResults;
});
The elapsed-time for search is always less than 500 milliseconds.
According to this documentation I removed the request module and used the native https module, but it brought no improvement.
const https = require('https');
const { performance } = require('perf_hooks');

https.globalAgent.maxSockets = 100;

function searchV2(context, name) {
    let url = createAzureSearchUrl(name);
    const t0 = performance.now();
    return (new Promise((resolve, reject) => {
        let options = { headers: { 'api-key': 'key' } };
        https.get(url, options, (res) => {
            const t1 = performance.now();
            context.log(`round trip time => ${(t1 - t0) / 1000} sec`);
            context.log(`elapsed-time => ${res.headers['elapsed-time']}`);
            res.on('data', (d) => {
                resolve(d);
            });
        });
    }));
}
For testing, I changed the batch size from 10 to 100 so that a single search activity processes all 100 search terms sequentially. Here, all requests to Azure Search took 3.0-3.5 seconds. But 3.5 s * 200 requests ≈ 11.7 minutes, so not fanning out is not an option.
The deployed app had an instance count of 1. I updated it to 6 instances; with 6 instances a single request now takes 3.5-7.5 seconds, and 100 search terms take 4.0-4.3 minutes in total. Increasing the instances to 6 brought quite a lot of improvement, but a lot of requests still take 7.5 seconds. The maxConcurrentActivityFunctions parameter was 6 in the host file.
I then updated the instance count to 10 and maxConcurrentActivityFunctions to 10 as well, but it still takes 4.0-4.3 minutes for 100 search terms. No improvement; I saw a lot of requests taking up to 10 seconds.
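For reference, this is the shape of the host.json setting I was changing; a sketch assuming the Functions v2 schema (under the v1 schema, durableTask sits at the top level instead of under extensions):
{
    "version": "2.0",
    "extensions": {
        "durableTask": {
            "maxConcurrentActivityFunctions": 10
        }
    }
}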
I do not think it is a code-level issue. It has something to do with fanning out and making multiple concurrent requests for the same function.
Why is this happening to the deployed app and not locally? What should I do to decrease the request latency? Any suggestion will be appreciated.
My function app runs on the Azure Functions App Service plan.
My DurableTask version is 1.7.1
Latency increases when indexing is also happening in parallel; is that the case for you? The elapsed-time reported for the query may not take that latency into account.
On the Azure portal, when you navigate to your search resource and go to the monitoring tab, you should be able to see the latency, the number of queries, and the percentage of throttled queries. That should provide some direction. What tier is your search service on, and how many partitions and replicas have you provisioned for it?
As a test, you can increase the number of replicas and partitions to see if that helps with your performance. It did for me.

Nodejs - Fire multiple API calls while limiting the rate and wait until they are all done

My issues
Launch 1000+ calls to an online API that limits the number of API calls to 10 calls/sec.
Wait for all the API calls to give back a result (or retry); it can take 5 seconds before the API sends its data.
Use the combined data in the rest of my app.
What I have tried, while looking at a lot of different questions and answers here on the site:
Use a promise to wait for one API request:
const https = require("https");

function myRequest(param) {
    const options = {
        host: "api.xxx.io",
        port: 443,
        path: "/custom/path/" + param,
        method: "GET"
    };
    return new Promise(function (resolve, reject) {
        https.request(options, function (result) {
            let str = "";
            result.on('data', function (chunk) { str += chunk; });
            result.on('end', function () { resolve(JSON.parse(str)); });
            result.on('error', function (err) { console.log("Error: ", err); });
        }).end();
    });
}
Use Promise.all to do all the requests and wait for them to finish:
const params = [{item: "param0"}, ... , {item: "param1000+"}]; // imagine 1000+ items
const promises = [];
params.map(function (params) {
    promises.push(myRequest(params.item));
});
const result = Promise.all(promises).then(function (data) {
    // doing some funky stuff with data
});
So far so good, sort of.
It works when I limit the number of API requests to a maximum of 10; beyond that, the rate limiter kicks in. When I console.log(promises), it gives back an array of 'request'.
I have tried to add setTimeout in different places, like:
...
params.map(function (params) {
    promises.push(setTimeout(function () {
        myRequest(params.item);
    }, 100));
});
...
But that does not seem to work. When I console.log(promises), it gives back an array of 'function'
My questions
Now I am stuck ... any ideas?
How do I build in retries when the API gives an error?
Thank you for reading up to here, you are already a hero in my book!
When you have complicated control flow, using async/await helps a lot to clarify the logic of the flow.
Let's start with the following simple algorithm to limit everything to 10 requests per second:
make 10 requests
wait 1 second
repeat until no more requests
For this the following simple implementation will work:
async function rateLimitedRequests(params) {
    let results = [];
    while (params.length > 0) {
        let batch = [];
        for (let i = 0; i < 10; i++) {
            let thisParam = params.pop();              // use shift() instead of pop()
            if (thisParam) {                           // if you want to process in the
                batch.push(myRequest(thisParam.item)); // original order
            }
        }
        results = results.concat(await Promise.all(batch));
        await delayOneSecond();
    }
    return results;
}
Now we just need to implement the one second delay. We can simply promisify setTimeout for this:
function delayOneSecond() {
    return new Promise(ok => setTimeout(ok, 1000));
}
This will definitely give you a rate limiter of just 10 requests each second. In fact it performs somewhat slower than that, because each batch executes in request time + one second. This is perfectly fine and already meets your original intent, but we can improve it to squeeze in a few more requests and get as close as possible to exactly 10 requests per second.
We can try the following algorithm:
remember the start time
make 10 requests
compare end time with start time
delay one second minus request time
repeat until no more requests
Again, we can use almost exactly the same logic as the simple code above but just tweak it to do time calculations:
const ONE_SECOND = 1000;

async function rateLimitedRequests(params) {
    let results = [];
    while (params.length > 0) {
        let batch = [];
        let startTime = Date.now();
        for (let i = 0; i < 10; i++) {
            let thisParam = params.pop();
            if (thisParam) {
                batch.push(myRequest(thisParam.item));
            }
        }
        results = results.concat(await Promise.all(batch));
        let endTime = Date.now();
        let requestTime = endTime - startTime;
        let delayTime = ONE_SECOND - requestTime;
        if (delayTime > 0) {
            await delay(delayTime);
        }
    }
    return results;
}
Now instead of hardcoding the one second delay function we can write one that accept a delay period:
function delay(milliseconds) {
    return new Promise(ok => setTimeout(ok, milliseconds));
}
We have here a simple, easy-to-understand function that will rate limit as close as possible to 10 requests per second. It is rather bursty, in that it makes 10 parallel requests at the beginning of each one-second period, but it works. We can of course keep implementing more complicated algorithms to smooth out the request pattern and so on, but I leave that to your creativity and as homework for the reader.
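To answer the retry part of the question, one simple (and deliberately naive) option is to wrap myRequest in a retry helper and call that from inside rateLimitedRequests. The helper below is only a sketch, and it assumes myRequest actually rejects on failure (as posted, its error handler only logs, so you would also want to call reject(err) there):
// Hypothetical retry wrapper: tries a call up to `attempts` times,
// pausing briefly between attempts (reuses the delay() helper above).
async function myRequestWithRetry(param, attempts = 3) {
    for (let attempt = 1; attempt <= attempts; attempt++) {
        try {
            return await myRequest(param);
        } catch (err) {
            if (attempt === attempts) throw err;
            await delay(500);
        }
    }
}

// Usage: swap myRequest(thisParam.item) for myRequestWithRetry(thisParam.item)
// inside rateLimitedRequests, then call it with a copy of the params array
// (rateLimitedRequests consumes the array it is given).
rateLimitedRequests(params.slice())
    .then(data => {
        // doing some funky stuff with data
    })
    .catch(err => console.error("A request kept failing:", err));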

Google Cloud Platform - Optimise Cloud Function using puppeteer (Node.js)

I have written a function in Node.js that works well when I run it locally (~10 s to run).
As I want to run it every hour, I have deployed it on Google Cloud Platform, but there I always get a timeout error.
Therefore, do you have any advice on:
what I should change in my function to make it more efficient?
an alternative way to automate my function so it runs every hour?
FYI my Cloud Function has the following characteristics:
Node.js 8
Memory: 2 GB
Timeout: 540 seconds
and the following form:
exports.launchSearch = (req, res) => {
    const puppeteer = require('puppeteer');
    const url = require('./pageInformation').url;
    const pageLocation = require('./pageInformation').location;
    const userInformation = require('./userInformation').information;
    (async () => {
        const browser = await puppeteer.launch({ args: ['--no-sandbox'] });
        const page = await browser.newPage();
        await page.goto(url);
        // Part 1
        await page.click(pageLocation['...']);
        await page.type(pageLocation['...'], userInformation['...']);
        await page.waitFor(pageLocation['...']);
        await page.click(pageLocation['...']);
        // ... ~20 other "page.click" or "page.select"
        // Part 2
        var continueLoop = true;
        while (continueLoop) {
            var list = await page.$x(pageLocation['...']);
            if (list.length > 0) {
                await list[0].click();
                var found = true;
                var continueLoop = false;
            } else {
                var afficher = await page.$x(pageLocation['...']);
                if (afficher.length > 0) {
                    await afficher[0].click();
                } else {
                    var continueLoop = false;
                    var found = false;
                };
            };
        };
        // Part 3
        if (found) {
            await page.waitForXPath(pageLocation['...']);
            const xxx = await page.$x(pageLocation['...']);
            await xxx[0].click();
            // ... 5 other blocks with exactly the same 3 lines, but with other elements to click
        };
        await browser.close();
    })();
};
I have tried to run it part by part; sometimes it times out at the end of Part 1, sometimes at the end of Part 2. But the whole script never entirely completed.
Without much context on what your code does, it is hard to point out the root cause, but what I can tell you is to continue debugging your code as Horatio suggested, or to use a more sophisticated tool like Stackdriver to monitor the performance of your Cloud Functions. Evaluate its pricing if you are interested.
If Stackdriver is overkill, simply make use of inline function wrapping to find out the exact place in your routine that is consuming all that time. Here is an example:
var start = process.hrtime();
yourfunction();
var elapsed = process.hrtime(start);
// hrtime returns [seconds, nanoseconds]; combine both into milliseconds
var elapsedMs = elapsed[0] * 1000 + elapsed[1] / 1e6;
console.log("Elapsed: " + elapsedMs.toFixed(3) + " ms");
Once you have identified the exact piece of code that is affecting the execution, you will probably have to optimize it. Additionally, since it worked perfectly locally, consider that processes running in a cloud environment are sometimes affected by latency due to the 'proximity' of the other resources they consume.
Regarding your second question, about running your function every hour: you can take advantage of Cloud Scheduler. It can make scheduled calls to HTTP/HTTPS endpoints, which is exactly what an HTTP-triggered Cloud Function exposes. Make sure to check its pricing as well.
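For example, a job along these lines would call your HTTP function once an hour (the job name and URI are placeholders; use the actual URL of your deployed function):
gcloud scheduler jobs create http launch-search-hourly \
    --schedule="0 * * * *" \
    --uri="https://REGION-PROJECT_ID.cloudfunctions.net/launchSearch" \
    --http-method=GET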

Firebase high volume of queries: Maximum call stack size exceeded

In reference to this answer (large number of promises) on how to handle a high number of queries, I have regrouped the queries as shown using the 'lodash' library. This works for a low number of queries; however, Firebase returns an error:
#firebase/database: FIREBASE WARNING: Exception was thrown by user callback. RangeError: Maximum call stack size exceeded
I know this means the arrays have grown too large; however, when I try running pure JavaScript Promises with a 10 ms timer, the code seems to hold up to 1,000,000 as shown in that answer. I am not sure if this is a Firebase or a Node.js issue, but given that the Firebase Realtime Database can store millions of records in a JSON tree, there must be a better way to process so many promises. I have largely based the approach on these three questions: the original problem, Find element nodes contained in another node; this approach for checking the database, which requires so many reads, check if data exists in firebase; and this approach for speeding up the requests, Speed up fetching posts for my social network app by using query instead of observing a single event repeatedly.
I am not sure if I am performing all of these reads correctly, especially since it is such a high volume. Thank you.
exports.postMadeByFriend = functions.https.onCall(async (data, context) => {
    const mainUserID = "hJwyTHpoxuMmcJvyR6ULbiVkqzH3";
    const follwerID = "Rr3ePJc41CTytOB18puGl4LRN1R2";
    const otherUserID = "q2f7RFwZFoMRjsvxx8k5ryNY3Pk2";
    var promises = [];
    console.log("start");
    var refs = [];
    for (var x = 0; x < 100000; x += 1) {
        if (x === 999) {
            const ref = admin.database().ref(`Followers`).child(mainUserID).child(follwerID);
            refs.push(ref);
            continue;
        }
        const ref = admin.database().ref(`Followers`).child(mainUserID).child(otherUserID);
        refs.push(ref);
    }
    function runQuery(ref) {
        return ref.once('value');
    }
    const batches = _.chunk(refs, 10000);
    refs = [];
    const results = [];
    while (batches.length) {
        const batch = batches.shift();
        const result = await Promise.all(batch.map(runQuery));
        results.push(result);
    }
    _.flatten(results);
    console.log("results: " + JSON.stringify(results));
})
