How to know how many requests to make without knowing the amount of data on the server - node.js

I have a NodeJS application where I need to fetch data from another server (3rd-party, I have no control over it). The server requires you to specify a maximum number of entries to return, along with an offset. So, for example, if there are 100 entries on the server, I could request a pageSize of 100 with offset 0, or use a pageSize of 10 and make 10 requests with offsets 0, 1, 2, etc. under a Promise.all (timing shows that multiple concurrent smaller requests are faster).
var pageSize = 100;
var offsets = [...Array(Math.ceil(totalItems / pageSize)).keys()];
await Promise.all(offsets.map(async offset => {
  // make request with pageSize and offset
}));
The only problem is that the number of entries changes, and there is no property returned by the server indicating the total number of items. I could do something like this and loop until the server comes back empty:
var offset = 0;
var pageSize = 100;
var data = [];
var response = await makeRequest(pageSize, offset); // stand-in for the real request
while (response.length > 0) {
  data.push(...response);
  offset++;
  response = await makeRequest(pageSize, offset); // send another request
}
But that isn't as efficient/quick as sending multiple concurrent requests like above.
Is there any good way around this that can deal with the dynamic length of the data on the server?

Without the server giving you some hint about how many items there are, there's not a lot you can do to parallelize the requests: you don't want to send more requests than are needed, and you don't want to artificially request small numbers of items just so you can run more requests in parallel.
You could run some tests and find some practical limits. What is the maximum number of items the server and your client seem to be OK with you requesting (100? 1,000? 10,000? 100,000?)? Request that many to start with, and if the response indicates there are more items after that, send another request of a similar size.
The main idea is to minimize the number of separate requests and maximize the amount of data you get in a single call. That should be more efficient than more parallel requests that each ask for fewer items, because it's ultimately the same server and the same data store on the other end that has to provide all the data, so the fewest roundtrips in the fewest separate requests is probably best.
But some of this depends on the scale and architecture of the target host, so experiments will be required to see what works best in practice.
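If you do go parallel anyway, one compromise that copes with the unknown total is to fire a small wave of large page requests at a time and stop as soon as any page comes back short. A sketch (offsets are page indices, as in the question's code, and fetchPage is a stand-in for the real request):

```javascript
// Fetch pages in concurrent "waves" until one comes back short or empty.
// fetchPage(offset, pageSize) stands in for the real request and resolves
// to an array of items.
async function fetchAll(fetchPage, pageSize, concurrency) {
  const items = [];
  let offset = 0;
  while (true) {
    // Launch `concurrency` page requests in parallel.
    const offsets = Array.from({ length: concurrency }, (_, i) => offset + i);
    const pages = await Promise.all(offsets.map(o => fetchPage(o, pageSize)));
    for (const page of pages) items.push(...page);
    // A short or empty page means the end of the data was reached.
    if (pages.some(page => page.length < pageSize)) return items;
    offset += concurrency;
  }
}
```

The last wave may ask for a few pages past the end, so keep the wave small relative to the page size to limit the wasted requests.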

Related

Batch requests and concurrent processing

I have a service in NodeJS which fetches user details from the DB and sends them to another application via HTTP. There can be millions of user records, so processing them one by one is very slow. I have implemented concurrent processing like this:
const userIds = [1, 2, 3 /* ... */];
const users$ = from(this.getUsersFromDB(userIds));
const concurrency = 150;

users$.pipe(
  switchMap((users) =>
    from(users).pipe(
      mergeMap((user) => from(this.publishUser(user)), concurrency),
      toArray()
    )
  )
).subscribe(
  (partialResults: any) => {
    // Do something with partial results.
  },
  (err: any) => {
    // Error
  },
  () => {
    // done.
  }
);
This works perfectly fine for thousands of user records: it processes 150 user records concurrently at a time, which is much faster than publishing users one by one.
But a problem occurs when processing millions of user records: getting those from the database is slow, as the result set size can reach gigabytes (with correspondingly high memory usage).
I am looking for a solution that gets user records from the DB in batches, while continuing to publish those records concurrently.
I am thinking of a solution like this: maintain a queue (of size N) of user records fetched from the DB, and whenever the queue size drops below N, fetch the next N results from the DB and add them to the queue.
The current solution I have would then keep getting records from this queue and processing them with the defined concurrency. But I am not quite able to put this into code. Is there a way to do this using RxJS?
I think your solution is the right one, i.e. using the concurrent parameter of mergeMap.
The point that I do not understand is why you are adding toArray at the end of the pipe.
toArray buffers all the notifications coming from upstream and will emit only when the upstream completes.
This means that, in your case, the subscribe does not process partial results but processes all of the results you have obtained executing publishUser for all users.
On the contrary, if you remove toArray and leave mergeMap with its concurrent parameter, what you will see is a continuous flow of results into the subscribe due to the concurrency of the process.
That covers the RxJS side. Then you can look at the specific DB you are using to see whether it supports batch reads. If it does, you can create buffers of user ids with the bufferCount operator and query the DB with those buffers.

will I hit maximum writes per second per database if I make a document using Promise.all like this?

I am developing an app, and I want to send a message to all my users' inboxes. The code in my Cloud Functions looks like this:
const query = db.collection(`users`)
  .where("lastActivity", "<=", now)
  .where("lastActivity", ">=", last30Days)
const usersQuerySnapshot = await query.get()
const promises = []
usersQuerySnapshot.docs.forEach(userSnapshot => {
  const user = userSnapshot.data()
  const userID = user.userID
  // set promise to create data in user inbox
  const p1 = db.doc(`users/${userID}/inbox/${notificationID}`).set(notificationData)
  promises.push(p1)
})
return await Promise.all(promises)
there is a limit in Firebase:
Maximum writes per second per database: 10,000 (up to 10 MiB per second)
Say I send a message to 25K users (create a document for 25K users):
how long will that await Promise.all(promises) take? I am worried the operations will complete in under 1 second; I don't know whether this code will hit that limit or not, and I am not sure about its operation rate.
If I do hit that limit, how do I spread the writes out over time? Could you please give a clue? Sorry, I am a newbie.
If you want to throttle the rate at which document writes happen, you should probably not blindly kick off very large batches of writes in a loop. While there is no guarantee how fast they will occur, it's possible that you could exceed the 10K/second/database limit (depending on how good the client's network connection is, and how fast Firestore responds in general). Over a mobile or web client, I doubt that you'll exceed the limit, but on a backend that's in the same region as your Firestore database, who knows - you would have to benchmark it.
Your client code could simply throttle itself with some simple logic that measures its progress.
If you have a lot of documents to write as fast as possible, and you don't want to throttle your client code, consider throttling them as individual items of work using a Cloud Tasks queue. The queue can be configured to manage the rate at which the queue of tasks will be executed. This will drastically increase the amount of work you have to do to implement all these writes, but it should always stay in a safe range.
You could use e.g. p-limit to reduce promise concurrency in the general case, or preferably use batched writes.
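A minimal self-contained sketch of what a limiter like p-limit does (this is not the library itself, and the cap of 500 in the usage comment is an arbitrary assumption):

```javascript
// Minimal bounded-concurrency helper in the spirit of p-limit:
// at most `max` tasks run at once; the rest wait in a queue.
function limiter(max) {
  let active = 0;
  const waiting = [];
  const next = () => {
    if (active >= max || waiting.length === 0) return;
    active++;
    const { task, resolve, reject } = waiting.shift();
    task().then(resolve, reject).finally(() => { active--; next(); });
  };
  return task => new Promise((resolve, reject) => {
    waiting.push({ task, resolve, reject });
    next();
  });
}

// Usage with the loop above: wrap each write instead of firing it directly.
// const limit = limiter(500);
// promises.push(limit(() => db.doc(`users/${userID}/inbox/${notificationID}`).set(notificationData)));
```

This caps how many writes are in flight at any moment, which indirectly bounds the write rate, but for a hard guarantee against the quota a Cloud Tasks queue as described above is still the safer option.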

How to scale S3 to thousands of requests per second?

AWS S3 documentation states
(https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html):
Amazon S3 automatically scales to high request rates. For example, your application can achieve at least 3,500 PUT/POST/DELETE and 5,500 GET requests per second per prefix in a bucket.
To test this I have the following NodeJS code (using aws-sdk) which asynchronously initiates 1000 uploads of zero bytes (hence, simply adding empty entries to the bucket). There is a timer to measure the throughput:
var t0 = new Date().getTime()
for (var i = 0; i < 1000; i++) {
  const s3 = new AWS.S3()
  const id = uuid() // const, so each callback closes over its own id
  console.log('Uploading ' + id)
  s3.upload({
    Bucket: bucket,
    Body: '',
    Key: "test/" + id
  }, function (err, data) {
    if (data) console.log('Uploaded ' + id + ' ' + (new Date().getTime() - t0))
    else console.log('Error')
  })
}
It takes approximately 25 seconds to complete all the upload requests. This is obviously nowhere near the purported 3,500 requests per second; it is approximately 40 requests per second.
I have approximately 1 MB/s of upload bandwidth, and network stats show that most of the time the bandwidth is only about 25% saturated. CPU utilisation is also low.
So the question is:
How can I scale S3 upload throughput to achieve something near the 3500 requests per second that can apparently be achieved?
EDIT:
I modified the code like this:
var t0 = new Date().getTime()
for (var i = 0; i < 1000; i++) {
  const s3 = new AWS.S3()
  const id = String.fromCharCode('a'.charCodeAt(0) + (i % 26)) + uuid()
  console.log('Uploading ' + id)
  s3.upload({
    Bucket: bucket,
    Body: '',
    Key: id
  }, function (err, data) {
    if (data) console.log('Uploaded ' + id + ' ' + (new Date().getTime() - t0))
    else console.log('Error')
  })
}
This uses 26 different prefixes, which the AWS documentation claims should scale the throughput by a factor of 26.
"It is simple to increase your read or write performance exponentially. For example, if you create 10 prefixes in an Amazon S3 bucket to parallelize reads, you could scale your read performance to 55,000 read requests per second."
However, no difference in the throughput is apparent. There is some kind of difference in the behaviour such that the requests appear to complete in a more parallel, rather than sequential fashion - but the completion time is just about the same.
Finally, I tried running the application in 4 separate processes launched from bash (4 processes, 4 cores, 4x1000 requests). Despite the added parallelism from using multiple cores, the total execution time is about 80 seconds, so it did not scale.
for i in {0..3}; do node index.js & done
I wonder if S3 rate-limits individual clients/IPs (although this does not appear to be documented)?
I have a few things to mention before I give a straight answer to your question.
First, I did an experiment at one point, and I achieved 200000 PUT/DELETE requests in about 25 minutes, which is a little over 130 requests per second. The objects I was uploading were about 10 kB each. (I also had ~125000 GET requests in the same time span, so I’m sure that if I had only been doing PUTs, I could have achieved even higher PUT throughput.) I achieved this on a m4.4xlarge instance, which has 16 vCPUs and 64GB of RAM, that was running in the same AWS region as the S3 bucket.
To get more throughput, use more powerful hardware and minimize the number of network hops and potential bottlenecks between you and S3.
S3 is a distributed system. (Their documentation says the data is replicated to multiple AZs.) It is designed to serve requests from many clients simultaneously (which is why it’s great for hosting static web assets).
Realistically, if you want to test the limits of S3, you need to go distributed too by spinning up a fleet of EC2 instances or running your tests as a Lambda Function.
Edit: S3 does not make a guarantee for the latency to serve your requests. One reason for this might be because each request could have a different payload size. (A GET request for a 10 B object will be much faster than a 10 MB object.)
You keep mentioning the time to serve a request, but that doesn’t necessarily correlate to the number of requests per second. S3 can handle thousands of requests per second, but no single consumer laptop or commodity server that I know of can issue thousands of separate network requests per second.
Furthermore, the total execution time is not necessarily indicative of performance because when you are sending stuff over a network, there is always the risk of network delays and packet loss. You could have one unlucky request that has a slower path through the network or that request might just experience more packet loss than the others.
You need to carefully define what you want to find out and then carefully determine how to test it correctly.
Another thing you should look at is the HTTPS agent used.
It used to be the case (and probably still is) that the AWS SDK uses the global agent. If you're using an agent that will reuse connections, it's probably HTTP/1.1 and probably has pipelining disabled for compatibility reasons.
Take a look with a packet sniffer like Wireshark to check whether or not multiple connections outward are being made. If only one connection is being made, you can specify the agent in httpOptions.

Concurrent requests overriding data in Redis

Scenario: whenever a request comes in, I need to connect to the Redis instance, open the connection, fetch the count, update the count, and close the connection (this is the flow for every request). When the requests come in sequential order, i.e. 1 user sending 100 requests one after the other, the count in Redis is 100.
Issue: the issue is with concurrent requests, i.e. 10 users sending 100 requests (each user 10 requests) concurrently; then the count is not 100, it's around 50.
Example: assume the count in Redis is 0. If 10 requests come in at the same time, then 10 connections are opened, and all 10 connections fetch the count value as 0 and update it to 1.
Analysis: I found out that, as the requests come in concurrently, multiple connections fetch the same count value and update it, so the count value gets overridden. Can anyone suggest the best way to avoid this problem if you have already encountered it?
Here we are using Hapijs, Redis 3.0, ioredis
I would recommend queueing each task so that each request finishes before the next one starts.
Queue.js is a good library I have used before but you can check out others if you want.
Here is an example basically from the docs but adapted slightly for your use case:
var queue = require('queue')
var q = queue()
var results = []
var rateLimited = false

q.push(function (cb) {
  if (!rateLimited) {
    // get data and push into results
    results.push('two')
  }
  cb()
})

q.start(function (err) {
  if (err) throw err
  console.log('all done:', results)
})
This is a very loose example as I just wrote it quickly and without seeing your code base but I hope you get the idea.
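Separately, the read-modify-write race from the question can be reproduced in miniature (a plain object stands in for Redis, and an async gap plays the role of the network round-trip):

```javascript
// Miniature reproduction of the read-modify-write race:
// `store` stands in for Redis; the await plays the role of the network.
const store = { count: 0 };
const tick = () => new Promise(resolve => setImmediate(resolve));

async function racyIncrement() {
  const value = store.count; // GET
  await tick();              // another request can run in this gap
  store.count = value + 1;   // SET -- overwrites concurrent updates
}

const raceDemo = Promise.all(
  Array.from({ length: 100 }, () => racyIncrement())
).then(() => {
  console.log(store.count);  // 1, not 100: 99 updates were lost
});
```

On the Redis side, the atomic INCR command performs the whole read-modify-write in one server-side step, so with ioredis a single `await redis.incr('count')` in place of the GET/SET pair avoids the lost updates without any queueing.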

Threading: Division of Labor

I seek a little nudge in the right direction to understand Node workers. I currently have Node code that reads data from a file and performs a bunch of subsequent actions with network requests. All of the actions I do with the data currently take place in the callback of the read function.
What I struggle to wrap my head around is how best to take this single read function (which almost certainly is not slowing my application down -- I'm fairly certain it's the later requests I'd like to branch), and divide the manipulation into multiple child processes. Of course, I don't want to perform my battery of actions multiple times on the same row of data, but rather I want to give each worker a slice of the pie. Is my best bet to, in the read-callback, create several arrays with part of the data, and then feed one array into each worker, outside the callback? Are there other options? My end goal is to reduce the time it takes my script to run through x amount of data.
var request = require('request');
request = request.defaults({
  jar: true
});
var yacsv = require('ya-csv');

// Post log-in form information to the appropriate URL -- occurs only once
// per script run -- log-in cookies are saved for subsequent requests
request.post({
  url: 'xxxxx.com',
  body: "login_info",
}, function (error, res, body) {
  // Instantiate CSV reader
  var reader = yacsv.createCsvFileReader("somefile.csv");
  // Read data from the CSV, row by row -- runs once per CSV row
  // THIS IS WHAT I -THINK- I CAN SPLIT AMONG MULTIPLE WORKERS
  reader.addListener('data', function (data) {
    // Bind each field from a CSV row to a corresponding variable for ease of use
    // [Variables here]
    // Second request for the search form -- uses information from a single row
    // to query more information from a database
    request.post({
      url: 'xxxxx.com/form',
      body: variable_with_csv_data,
    }, function (error, res, body) {
      // Parse the resulting page, then page elements to variables for output
    });
  });
});
The cluster module is not an alternative to threads. The cluster module allows you to balance HTTP requests to the same application logic over multiple processes, without the option of delegating specific pieces of work.
What is it exactly that you are trying to optimize?
Is the overall process taking too long?
Is the separate processing of the data events too slow?
Are your database calls too slow?
Are the HTTP requests too slow?
Also, I would do away with the ya-csv module; it seems somewhat outdated to me.
