Why does DynamoDB performance decrease with parallel reads? - node.js

With AWS X-Ray tracing enabled on my Lambda function, I've found that as the number of parallel requests to DynamoDB increases, read performance decreases.
Here is an example of the X-Ray traces:
Above you can see that the first set of GetItem requests execute in under 300ms. This set has only 6 async read requests running in parallel. The next set of read requests all take at least 1.5 seconds on average - with 57 async read requests running in parallel.
Thoughts on what this could be due to:
This may be due to some "cold start" behaviour as DynamoDB adds capacity to deal with the parallel reads? (This DynamoDB table is pay-per-request, not provisioned.)
Additionally, I recognize that this may not be related to parallel requests at all, but it seemed like a good place to start asking questions. Wondering if anyone knows what could be causing such a dramatic performance decrease.
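To illustrate, the read pattern is roughly the following (a simplified sketch, not the actual handler; the table name and key are placeholders):

```typescript
import { DynamoDB } from "aws-sdk";

const ddb = new DynamoDB.DocumentClient();

// Issue one GetItem per id and wait for all of them to resolve.
async function readItems(ids: string[]): Promise<(DynamoDB.DocumentClient.AttributeMap | undefined)[]> {
  return Promise.all(
    ids.map((id) =>
      ddb
        .get({ TableName: "my-table", Key: { id } }) // placeholder table/key
        .promise()
        .then((res) => res.Item)
    )
  );
}
```

With 6 ids the calls come back in under 300ms each; with ~57 ids in flight the per-call latency climbs to ~1.5s.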

Related

Invisible Delays between Spark Jobs

There are 4 major actions (JDBC writes) in the application, plus a few counts, which in total take around 4-5 minutes to complete.
But the total uptime of the application is around 12-13 minutes.
I see there are certain jobs named "run at ThreadPoolExecutor.java:1149". The long, invisible delays occur just before these jobs show up in the Spark UI.
I want to know the possible causes of these delays.
My application reads 8-10 CSV files and 5-6 views from tables. There are around 59 joins, a few groupBy with agg(sum), and 3 unions.
I am not able to reproduce the issue in the DEV/UAT environments since the data there is not that large.
It only happens in production, where the application is run for me by my manager.
If anyone has come across such delays in their jobs, please share what the potential cause could be. Currently I am working around the unions, i.e. caching the associated dataframes and calling count, so that the cache benefits the subsequent union (yet to test whether the unions are the reason for the delays).
Similarly, I tried to break the long chain of transformations with cache and count calls in between, to cut the long lineage.
The runtime dropped from the initial 18 minutes to 12 minutes, but the issue with the invisible delays still persists.
Thanks in advance
I assume you don't have CPU- or IO-heavy code running between your Spark jobs.
So if it really is Spark, 99% of the time it is query-planning delay.
You can use
spark.listenerManager.register(QueryExecutionListener) to check different metrics of query-planning performance.

Would SQS batch size max limit result in slower processing through Lambdas?

I'm aware that AWS has allowed SQS to be one of the event source mappings for Lambdas. I'm glad that this is possible now as I would then not have to poll from the queue every few seconds through a cron job. However, it appears that the maximum possible value for batchSize is limited to 10. From my understanding, the batchSize is the number of messages a single Lambda invocation will receive from the queue.
This sounds like it could be an issue for me because, in my case, I may have a few hundred thousand messages in the queue at a time. Those messages don't need any heavy processing; they just need to be parsed and saved to the database as records. It's pretty simple.
If the batchSize is limited to only 10 messages per retrieval, I foresee a few issues that I may have:
It may actually take a long time to finish processing the messages on the queue.
Not only is 10 messages per retrieval slow; given how simple the processing is, processing only 10 messages in a single Lambda invocation also sounds a little wasteful - I'm pretty sure a single invocation could handle at least a few thousand messages.
Having only 10 messages per retrieval may also mean that I need to make more write operations to my database, because each of these messages needs to be inserted as a record in the database.
Are my concerns valid in this case? If so, is there anything else I can do with SQS and Lambdas to overcome those concerns?
Your assumption about a limit of 10 is correct.
Lambda will spin up more instances to run in parallel, if there are more messages available. See Scaling and Processing. This means that if there are 1000 messages available, Lambda might spin up 100 concurrent executions to quickly process all the messages.
Once a Lambda function has processed the 10 messages of a batch, it continues with other batches. As Lambda bills in 100ms intervals, the wasted time is minimal.
As for the database writes you could pre-process the messages before inserting them into the queue.
In that case you need to let your Lambda function fetch the messages from the queue and process them itself, rather than having the Lambda triggered via SQS. You could probably have a CloudWatch Events rule trigger the Lambda for you, depending on your use case.
Please note that SQS returns at most 10 messages per receive call, but you can write the code to make this much more efficient.
One package which is very efficient at this is squiss-ts.
In this case you could let your Lambda function run for 15 minutes (the maximum) and let it process as many messages as possible. Idempotency is the key when you are designing these kinds of applications, so if a message wasn't processed in this run, it will be processed in the next run.
The downside of this approach is that you need to scale your Lambdas manually, depending on how many messages you are anticipating.
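A rough sketch of that polling approach, using the plain AWS SDK rather than squiss-ts (the queue URL and the processing logic are placeholders, and a real implementation would need error handling):

```typescript
import { SQS } from "aws-sdk";

const sqs = new SQS();
const QUEUE_URL = process.env.QUEUE_URL!; // assumed to be set on the function

// Placeholder for the real work: parse each body and save it as a DB record.
// Must be idempotent, since a message can be delivered more than once.
async function processBatch(messages: SQS.Message[]): Promise<void> {
  // ...
}

export async function handler(): Promise<void> {
  const deadline = Date.now() + 14 * 60 * 1000; // stop safely before the 15-minute limit

  while (Date.now() < deadline) {
    const { Messages } = await sqs
      .receiveMessage({
        QueueUrl: QUEUE_URL,
        MaxNumberOfMessages: 10, // SQS hard limit per receive call
        WaitTimeSeconds: 5,      // long polling
      })
      .promise();

    if (!Messages || Messages.length === 0) continue;

    await processBatch(Messages);

    await sqs
      .deleteMessageBatch({
        QueueUrl: QUEUE_URL,
        Entries: Messages.map((m) => ({
          Id: m.MessageId!,
          ReceiptHandle: m.ReceiptHandle!,
        })),
      })
      .promise();
  }
}
```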
You're right that a larger batch size seems appropriate for your use case.
As of late 2020, if you specify a batch window in seconds, you can then specify a batch size of up to 10,000 messages.
So with this new option you can now configure your lambda to wait and receive much larger batches per invocation.
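For example, an event source mapping with a batching window can be created via the SDK (the function name and queue ARN below are placeholders; the same settings are available in the console, CLI and CloudFormation):

```typescript
import { Lambda } from "aws-sdk";

const lambda = new Lambda();

async function configureMapping(): Promise<void> {
  await lambda
    .createEventSourceMapping({
      FunctionName: "my-consumer-function", // placeholder
      EventSourceArn: "arn:aws:sqs:us-east-1:123456789012:my-queue", // placeholder
      MaximumBatchingWindowInSeconds: 30, // wait up to 30s to fill a batch
      BatchSize: 10000, // allowed for SQS once a batching window is set
    })
    .promise();
}
```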

Spark and 100000k of sequential HTTP calls: driver vs workers

I have to make 100,000 sequential HTTP requests with Spark and store the responses in S3. I say sequential because each request returns around 50KB of data, and I have to wait 1 second between requests in order not to exceed API rate limits.
Where to make HTTP calls: from Spark Job's code (executed on driver/master node) or from dataset transformation (executed on worker node)?
Workarounds
Make the HTTP requests from my Spark job (on the driver/master node), create a dataset from each HTTP response (each contains 5000 JSON items), and save each dataset to S3 with the help of Spark. The dataset does not need to be kept after it has been saved.
Create a dataset from all 100,000 URLs (moving all further computation to the workers), make the HTTP requests inside map or mapPartitions, and save a single dataset to S3.
The first option
It's simpler and it reflects the nature of my computation - it's sequential because of the 1-second delay. But:
Is it bad to make 100_000 HTTP calls from the driver/master node?
Is it more efficient to create/save one 100_000 * 5_000 dataset than to create/save 100_000 small datasets of size 5_000?
Each time I create a dataset from an HTTP response, I'll move the response to a worker and then save it to S3, right? Double shuffling, then...
Second option
Actually it won't benefit from parallel processing, since I have to keep a 1-second interval between requests. The only bonus is moving the computation (even if it isn't heavy) off the driver. But:
Is it worth moving the computation to the workers?
Is it a good idea to make API calls inside a transformation?
Saving a file <32MB (or whatever fs.s3a.block.size is) to S3 is ~2xGET, 1xLIST and a PUT; you get billed a bit by AWS for each of these calls, plus storage costs.
For larger files, there's a POST to initiate the multipart upload after that first block, one POST per 32 MB block, and a final POST of a JSON file to complete it. So: slightly more efficient.
Where small S3 sizes matter is in the bills from AWS and in follow-up Spark queries: for anything you use in Spark, PySpark, SQL, etc., many small files are slower. There's a high cost in listing files in S3, and every task pushed out to a Spark worker has some setup/commit/complete cost.
Regarding doing HTTP API calls inside a worker, well, you can do fun things there. If the result isn't replicable then task failures & retries can give bad answers, but for a GET it should be OK. What is hard is throttling the work; I'll leave you to come up with a strategy there.
Here's an example of uploading files to S3 or another object store from the workers; first the RDD of the copy src/dest operations is built up, then they are pushed out to the workers. The result of the worker code includes upload duration info, if someone ever wanted to try and aggregate the stats (though you'd probably need timestamps for a time-series view).
Given you have to serialize the work to one request/second, 100K requests is going to take over a day. If each request takes <1 second, you may as well run it on a single machine. What's important is to save the work incrementally so that if your job fails partway through you can restart from the last checkpoint. I'd personally focus on that problem: how you could do this operation such that every 15-20 minutes of work was saved, and on a restart you can carry on from there (a rough sketch of the idea follows after the lists below).
Spark does not handle recovery of a failed job, only task failures. Lose the driver and you get to restart your last query. Break things up.
Something which comes to mind could be
* first RDD takes list of queries and some summary info about any existing checkpointed data, calculates the next 15 minutes of work,
* building up a list of GET calls to delegate to 1+ worker. Either 1 URL/row, or have multiple URLs in a single row
* run that job, save the results
* test recovery works with a smaller window and killing things.
* once happy: do the full run
Maybe also: recognise & react to any throttle events coming off the far end by
1. Sleeping in the worker
2. returning a count of throttle events in the results, so that the driver can initially collect aggregate stats and maybe later tune the sleep window for subsequent tasks.
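To illustrate the incremental-save idea rather than the Spark plan itself, here is a minimal single-process sketch of a rate-limited, checkpointed loop (the bucket, key names and the use of node-fetch are assumptions; a real run would batch the checkpoint writes rather than write one per request):

```typescript
import { S3 } from "aws-sdk";
import fetch from "node-fetch";

const s3 = new S3();
const BUCKET = "my-results-bucket";                 // placeholder
const CHECKPOINT_KEY = "checkpoints/progress.json"; // placeholder

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Read how far a previous run got; start from scratch if there is no checkpoint.
async function loadCheckpoint(): Promise<number> {
  try {
    const obj = await s3.getObject({ Bucket: BUCKET, Key: CHECKPOINT_KEY }).promise();
    return JSON.parse(obj.Body!.toString()).nextIndex;
  } catch {
    return 0;
  }
}

async function run(urls: string[]): Promise<void> {
  for (let i = await loadCheckpoint(); i < urls.length; i++) {
    const res = await fetch(urls[i]);
    const body = await res.text();

    // Save the response, then record progress so a crash only repeats one request.
    await s3.putObject({ Bucket: BUCKET, Key: `responses/${i}.json`, Body: body }).promise();
    await s3.putObject({
      Bucket: BUCKET,
      Key: CHECKPOINT_KEY,
      Body: JSON.stringify({ nextIndex: i + 1 }),
    }).promise();

    await sleep(1000); // stay under the 1 request/second rate limit
  }
}
```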

How does Cassandra stress test determine threadcount?

I ran a Cassandra stress test and the output reported thread counts ranging between 4 and 913. What causes Cassandra to increase the thread count, and what makes it stop?
When you use cassandra-stress, it runs a series of tests. First, it starts with a small number of threads and displays the result, then it keeps raising the thread count until it ends the test (the maximum seems to depend on the cluster that cassandra-stress is attached to and allowed to connect to; the thread counts are derived from that).
At the end, it reports the results of all the tests together with the number of threads used for each.
In my case, the system I tested on was able to run 32 threads; the test completed with that amount and the results were displayed.

Mongodb count performance issues with Node js

I am having issues with count performance on a single collection with up to 1 million records. I am running my test on a box with 32 cores and 244 GB of RAM, so hardware should not be an issue.
I have indexes set up for all of the queries I use to perform counts. I have set Node's max_old_space_size to 15 GB.
The process I am following is basically: loop through a huge array, create 1000 promises, perform 12 counts within each promise, wait for all the promises to resolve, and then continue with the next batch of one thousand.
As part of my test, I am doing inserts, updates, and reads as well. All of those show great performance, up to 20,000/sec each. However, when I get to the portion of my code doing the count() calls, I can see via mongostat that only 20-30 commands are being executed per second. I have not determined at this point whether my Node code is only sending that many or whether Mongo is queuing them up.
Meanwhile, in my node.js code, all 1000 promises are started and waiting to resolve. I know this is a lot of info, so please let me know what more granular details I should provide to get some more insight into why the count performance is so slow.
So basically, for a batch of 1000 records, doing let's say 12 counts each, for a total of 12,000 counts, it is taking close to 10 minutes against a collection of 1 million records.
MongoDB Native Client v2.2.1
Node v4.2.1
What I'd like to add is that I have tried changing the maxPoolSize on the driver from 100 to 1000 with no change in performance. I've tried changing the queries I perform from yield/generator/promise to callbacks wrapped in promises, which has helped somewhat.
The strange thing is that when my program starts, even if I use just the default number of connections (which I see as 7 when running mongostat), I can get around 2500 count() queries per second. However, after a few seconds this drops back down to about 300-400. This leads me to believe that Mongo can handle that many all the time, but my code is not able to send that many requests, even though I set maxPoolSize to 300 and start 10,000 simultaneous promises resolving in parallel. So what gives - any ideas from anyone?
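In rough form, the batching loop looks like this (a simplified sketch, not the actual code; the collection, field, and filter names are placeholders, and it is written against a recent driver where countDocuments() replaces the count() call used with v2.2.1):

```typescript
import { MongoClient } from "mongodb";

async function runCounts(recordsToCheck: { userId: string }[]): Promise<void> {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const col = client.db("test").collection("events");

  const BATCH_SIZE = 1000;
  for (let start = 0; start < recordsToCheck.length; start += BATCH_SIZE) {
    const batch = recordsToCheck.slice(start, start + BATCH_SIZE);

    // One promise per record, 12 counts inside each, awaited per batch.
    await Promise.all(
      batch.map((rec) =>
        Promise.all(
          Array.from({ length: 12 }, (_, i) =>
            col.countDocuments({ userId: rec.userId, type: i }) // placeholder for the 12 criteria
          )
        )
      )
    );
  }

  await client.close();
}
```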
