What is a reasonable amount of time to wait when making concurrent requests? - multithreading

I'm working on a crawler and I've noticed that waiting one minute between requests has made the application more reliable: I now get fewer connection resets. Can you recommend a reasonable amount of time to wait? One minute feels like a belt-and-braces approach, and I would ideally like to reduce it.
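One common alternative to a fixed one-minute wait is exponential backoff with jitter: keep the delay short while the connection is healthy and only back off after resets. A minimal sketch, assuming the requests library; the URL, timeout, and 60-second cap are placeholders, not from the question:

import random
import time

import requests  # assumed third-party dependency

def fetch_with_backoff(url: str, max_retries: int = 5) -> requests.Response:
    """Retry with exponential backoff plus jitter instead of a fixed delay."""
    delay = 1.0                      # start with a short wait
    for attempt in range(max_retries):
        try:
            return requests.get(url, timeout=10)
        except requests.ConnectionError:
            if attempt == max_retries - 1:
                raise
            # Double the wait each time, capped at 60 s, with random jitter
            # so many workers do not retry in lockstep.
            time.sleep(delay + random.uniform(0, delay))
            delay = min(delay * 2, 60.0)
    raise RuntimeError("unreachable")

# Example: response = fetch_with_backoff("https://example.com/page")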

Related

Why is Python consistently struggling to keep up with constant generation of asyncio tasks?

I have a Python project with a server that distributes work to one or more clients. Each client is given a number of assignments which contain parameters for querying a target API. This includes a maximum number of requests per second they can make with a given API key. The clients process the response and send the results back to the server to store into a database.
Both the server and clients use Tornado for asynchronous networking. My initial implementation for the clients relied on Tornado's PeriodicCallback to ensure that a fixed number of calls to the API would occur each second. I thought this was working properly, as my tests only lasted 1-2 minutes.
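For reference, a minimal sketch of driving a fixed call rate with Tornado's PeriodicCallback (this is not the question's actual code; the 50 ms interval and batch function are placeholders):

from tornado.ioloop import IOLoop, PeriodicCallback

def fire_batch() -> None:
    # Placeholder: kick off a batch of API requests here.
    print("firing a batch of API calls")

# callback_time is in milliseconds: 50 ms => up to 20 batches per second.
PeriodicCallback(fire_batch, callback_time=50).start()
IOLoop.current().start()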
I added some telemetry to collect performance statistics and noticed that the clients were actually running into issues after almost exactly 2 minutes of runtime. I had set the API request rate to 20 per second (the maximum allowed by the API itself), which the clients could reliably hit. After 2 minutes, however, performance would fluctuate between 12 and 18 requests per second. The number of active tasks steadily increased until it hit the maximum number of active assignments (100) given by the server, and the HTTP request time to the API, as reported by Tornado, went from 0.2-0.5 seconds to 6-10 seconds. Performance is steady if I only make 14 requests per second; anything higher than 15 requests per second runs into issues 2-3 minutes after starting. In the logs, notice how the "Active Queries" column is steady until 01:19:26; I've truncated the log to demonstrate this.
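A hedged sketch of one way to collect such telemetry with asyncio (the question doesn't show its instrumentation; the interval and label are illustrative):

import asyncio

async def report_active_tasks(interval: float = 1.0) -> None:
    # Periodically log the number of live asyncio tasks, similar to the
    # "Active Queries" column mentioned above.
    while True:
        print(f"active tasks: {len(asyncio.all_tasks())}")
        await asyncio.sleep(interval)

# Inside the client, schedule it alongside the real work:
# asyncio.create_task(report_active_tasks())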
I believed the issue was the use of a single process on the client to handle both communication to the server and to the API. I proceeded to split the primary process into several different processes: one handles all communication to the server, one (or more) handles queries to the API, another processes API responses into a flattened class, and a multiprocessing Manager handles the queues. The performance issues were still present.
I thought that, perhaps, Tornado was the bottleneck and decided to refactor. I chose aiohttp and uvloop and split the primary process in the same manner as the previous attempt. Unfortunately, the performance issues were unchanged.
I took both refactors and enabled them to split work into several querying processes. However, no matter how much you split the work, you still encounter problems after 2-3 minutes.
I am using both Python 3.7 and 3.8 on macOS and Linux.
At this point, it does not appear to be a limitation of a single package. I've thought about the following:
Python's asyncio library cannot handle more than 15 coroutines/tasks being generated per second
I doubt that this is true, given that various libraries claim to handle several thousand messages per second. Also, we can hit 20 requests per second just fine at the start, with very consistent results.
The API is unable to handle more than 15 requests from a single client IP
This is unlikely, as I am not the only user of the API, and I can request 20 times per second fairly consistently over an extended period if I over-subscribe processes to query the API.
There is a system configuration causing the limitation
I've tried both macOS and Debian, which yield the same results. It's possible that it's a *nix problem.
Variations in responses cause a backlog which grows linearly until it cannot be tackled fast enough
Response times from the API do sometimes grow and shrink between 0.2 and 1.2 seconds, but the number of active tasks returned by asyncio.all_tasks remains consistent in the telemetry data. And if this were the cause, we wouldn't be hitting the issue at the same point in time on every run.
We're overtaxing the hardware with the number of tasks generated per second and causing thermal throttling
Although CPU temperatures spike, neither macOS nor Linux reports any thermal throttling in the logs, and we never exceed 80% CPU utilization on a single core.
At this point, I'm not sure what's causing it and have considered refactoring the clients into a different language (perhaps C++ with Boost libraries). Before I dive into something so foolish, I wanted to ask if I'm missing something simple.
Conclusion
Performance appears to vary wildly depending on the time of day. It's most likely the API.
How this conclusion was made
I created a new project to demonstrate the capabilities of asyncio and to determine whether it's the bottleneck. The project takes two websites, one acting as the baseline and the other as the target API, and runs through different methods of testing:
Spawn one process per core, pass a semaphore, and query up to n-times per second
Create a single event loop and spawn n tasks per second (see the sketch after this list)
Create multiple processes, each with its own event loop, and distribute the work so that each loop performs (n / number of processes) tasks per second
(Note that spawning processes is incredibly slow, so this mode is often commented out unless running on high-end desktop processors with 12 or more cores.)
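For reference, a minimal sketch of the second method: one event loop spawning n tasks per second. aiohttp is used because the question mentions it; the URL and rates are placeholders:

import asyncio
import time

import aiohttp  # mentioned in the question; assumed installed

async def timed_get(session: aiohttp.ClientSession, url: str) -> float:
    start = time.monotonic()
    async with session.get(url) as resp:
        await resp.read()
    return time.monotonic() - start

async def run(url: str, per_second: int, seconds: int) -> None:
    async with aiohttp.ClientSession() as session:
        tasks = []
        for _ in range(seconds):
            # Spawn a batch of tasks, then wait for the next one-second tick.
            tasks += [asyncio.create_task(timed_get(session, url))
                      for _ in range(per_second)]
            await asyncio.sleep(1)
        durations = await asyncio.gather(*tasks)
        print(f"{len(durations)} requests, max latency {max(durations):.2f}s")

# asyncio.run(run("https://example.com/api", per_second=20, seconds=120))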
The baseline website would be queried up to 50 times per second. asyncio could reliably complete 30 tasks per second for an extended period, with each task finishing in 0.01 to 0.02 seconds. Responses were very consistent.
The target website would be queried up to 20 times per second. Sometimes asyncio would struggle even though the circumstances were identical (JSON handling, dumping response data to a queue, returning immediately, no CPU-bound processing), and results varied between tests and could not always be reproduced. Responses would initially come back in under 0.4 seconds but quickly climb to 4-10 seconds per request, with 10-20 requests completing per second.
As an alternative, I chose a parent URI on the target website. This URI wouldn't trigger a large database query; instead, it would be answered with a static JSON response. Response times bounced between 0.06 seconds and 2.5-4.5 seconds, but 30-40 responses would complete per second.
Splitting requests across processes, each with its own event loop, cut the upper-bound response times almost in half, but each request still took more than a second to complete.
The inability to reproduce consistent results from the target website indicates that it's a performance issue on their end.

IIS slows down when there are too many long-running requests in .NET

I have an API service that sometimes takes a while to produce an answer, maybe 120 to 200 seconds. Most of the time the answer is produced in 1 to 5 seconds.
When there are, say, 20 requests to IIS whose answers take 120-200 seconds, all the other incoming requests seem to take much longer to process, and everything gets really slow until those 20 requests produce results.
I have the application pool set to 50,000 concurrent requests, but I see this behaviour with as few as 100 x 200 requests, and then the queue keeps growing until the long-running ones complete.
I was under the impression that all requests run normally and independently of one another, and that a few shouldn't slow down the rest. The CPU on the machine never goes above 1%.
Is there anything I might be overlooking, configuration-wise?
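For what it's worth, the symptom matches classic thread-pool starvation: long synchronous requests hold worker threads while short requests queue behind them, even though the CPU is idle. A language-agnostic illustration of the effect (sketched in Python rather than .NET; the pool size and durations are made up):

import time
from concurrent.futures import ThreadPoolExecutor

def handle(request_id: int, duration: float) -> str:
    time.sleep(duration)   # simulate synchronous work on a pool thread
    return f"request {request_id} done after {duration}s"

# A small pool stands in for the worker-thread limit.
with ThreadPoolExecutor(max_workers=4) as pool:
    # Four long-running requests occupy every worker thread...
    long_jobs = [pool.submit(handle, i, 5.0) for i in range(4)]
    # ...so this fast request must wait ~5 s in the queue even though
    # it only needs 0.1 s of work and the CPU is nearly idle.
    start = time.monotonic()
    fast = pool.submit(handle, 99, 0.1)
    print(fast.result())
    print(f"fast request waited {time.monotonic() - start:.1f}s total")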
After reading Lex Li's answer, it sounds like setting the max worker threads might help. But is this value per CPU, and how many worker threads will the following enable?
<processModel autoConfig="false" requestQueueLimit="100000" maxWorkerThreads="200" maxIoThreads="200" minWorkerThreads="50" minIoThreads="50" />
And how many under this scenario?
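For what it's worth, my understanding of the classic ASP.NET processModel settings (not stated in this thread, so verify against the documentation) is that maxWorkerThreads and maxIoThreads are per logical CPU. Under that assumption, maxWorkerThreads="200" on a 4-core machine would allow up to 200 x 4 = 800 worker threads.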

Occasional duplicate request using JMeter

I'm using JMeter 4.0 to create a stress test. The purpose is to emulate the kinds of requests we receive in production: generally an array of requests of different types at a certain frequency, with occasional (1 in 1000) duplicate requests of the same type arriving within milliseconds of each other.
I've managed to create a thread group emulating frequent requests of different types, and a second thread group emulating duplicate requests (using a synchronizing timer to ensure the requests fire off together).
I'm almost finished. My only problem is that there is no relationship between the thread groups whatsoever. If I wanted to perform a duplicate request once every 1000 requests, I'd need to know how long an average request takes (complicated by the fact that there are several request types), calculate the time required for roughly 1000 requests, and add an appropriate constant timer to the other thread group.
This isn't ideal. I'll settle for it if I must, but I was hoping the bright minds of Stack Overflow could shed some light on my issue.
Some ideas I've had:
Add a run counter which cycles every 1000 normal requests; once the counter hits 1000, perform a second request (though it would be under the same thread and after I've received the response to the first). Could this be made to work using a synchronizing timer?
Use a constant throughput timer with "all active threads (shared)" set and its samples per minute set to 1000.
Is there a better way still? The actual requests are HTTP requests, though there are several preparation steps before the message is sent. I'm already using a constant throughput timer in the first thread group (random service requests) to maintain a specific number of requests per minute, so I'm not sure whether adding a second constant throughput timer in the other thread group would cause issues.
Thank you for your time.
You can add an If Controller with a condition that matches 1 in every 1000 threads:
${__jexl3(${__threadNum} % 1000 == 0)}
and inside If Controller execute your duplicate HTTP Request
__threadNum returns the current thread/user number.

Node.js request limit per second while using cluster module

I'm using the cluster module to run multiple workers that fetch data from an API, process it, and write an aggregate to the DB. The problem is that the API limits the number of requests per second, and I'm searching for a way to enforce that limit across all workers.
I'd be thankful for any hint that helps solve this.
If you have a limit on the number of requests per second, you could keep track of how many requests are left in the master process. Each child would ask the master for permission before sending a request, and the master would only grant it while requests remain available for the current second. (Here is another answer showing how master -> slave communication works.)
At the end of each second, the master resets its counter to the number of requests available.
This approach gets you closest to the maximum. A much simpler approach is to start N threads and allow each to make K requests per second, where K * N is just under the number of requests allowed per second. The safest variant, and the least likely to hit the limit, is to use a setTimeout between the end of one request and the start of the next, though the effective rate then also includes each request's processing time, so you stay below the limit at the cost of throughput. The next best option is for each thread to fire its K requests at the start of the second and not fire again until the next second.
Your safest solution is to not go close to the limit at all and instead stick to at most half the allowed requests per second.
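To make the first approach concrete, here is a minimal sketch of the master-grants-tokens idea. It's written in Python with multiprocessing rather than Node's cluster module, purely to illustrate the pattern; the limit, worker count, and request body are placeholders:

import multiprocessing as mp
import time

def worker(tokens, wid):
    # Each worker blocks until the master grants a token, then "sends"
    # one request per token (the real API call would go here).
    for _ in range(5):
        tokens.acquire()
        print(f"worker {wid} sends a request at {time.monotonic():.2f}")

if __name__ == "__main__":
    limit_per_second = 4                 # hypothetical API limit
    tokens = mp.Semaphore(0)             # starts empty; the master refills it
    workers = [mp.Process(target=worker, args=(tokens, i)) for i in range(2)]
    for p in workers:
        p.start()
    # The master releases exactly `limit_per_second` tokens each second,
    # so all workers combined cannot exceed the limit. (Unused tokens
    # carry over here; a production limiter would reset them instead.)
    for _ in range(5):
        for _ in range(limit_per_second):
            tokens.release()
        time.sleep(1)
    for p in workers:
        p.join()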

Difference between think time and pacing time in performance testing

Pacing is used to achieve X iterations in Y minutes, but I'm able to achieve a given number of iterations in a given amount of time by specifying only think time, without using pacing.
What is the actual difference between think time and pacing time? Is it necessary to specify a pacing time between iterations, and what does pacing time actually do?
Think time is a delay added after an iteration completes and before the next one starts. The iteration rate therefore depends on the sum of the response time and the think time; because the response time varies with the load level, the iteration rate varies as well.
For a constant iteration rate, you need pacing. Unlike think time, pacing adds a dynamically determined delay that keeps the iteration rate constant even as the response time changes.
For example, to achieve 3 iterations in 2 minutes, the pacing time should be 2 x 60 / 3 = 40 seconds. Here's an example of how to use pacing in our tool: http://support.stresstimulus.com/display/doc46/Delay+after+the+Test+Case
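As a rough illustration of the difference (a sketch in Python, not tied to any load-testing tool; the timings are placeholders):

import random
import time

def fake_request():
    # Stand-in for a real request; response time varies with load.
    time.sleep(random.uniform(0.2, 1.2))

def iterate_with_think_time(iterations, think_time):
    # Think time: a fixed pause after each response, so the iteration
    # rate drifts as response times change.
    for _ in range(iterations):
        fake_request()
        time.sleep(think_time)

def iterate_with_pacing(iterations, pacing):
    # Pacing: each iteration starts on a fixed schedule; the pause is
    # whatever is left of the pacing interval after the response arrives.
    for _ in range(iterations):
        start = time.monotonic()
        fake_request()
        remaining = pacing - (time.monotonic() - start)
        time.sleep(max(0.0, remaining))

if __name__ == "__main__":
    iterate_with_think_time(3, think_time=1.0)
    iterate_with_pacing(3, pacing=2.0)  # pacing=40.0 gives 3 iterations in 2 minutes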
Think Time
It introduces an element of realism into the test execution.
With think time removed, as is often the case in stress testing, execution speed and throughput can increase tenfold, rapidly bringing an application infrastructure that can comfortably deal with a thousand real users to its knees.
Always include think time in a load test.
Think time influences the rate of transaction execution.
Pacing
Pacing is another way of affecting the execution of a performance test.
It affects transaction throughput.
