I'm working on a stateless WCF REST web service that has an operation composed of 3 independent tasks, each of which can run on its own. Each task consists of a web service call to an external API followed by a local DB read that takes less than 0.25 sec.
The first thing that comes to mind is that I should spawn 3 separate threads, then join them and return the result. Using the thread pool would probably not be a good idea here, as it's limited to 250 threads max.
Performance is of concern, but not at the expense of scalability.
Should I be concerned about the overhead of starting and joining 3 separate threads for every web service call?
Wrap the calls to the external service in async Task methods, then call them from your WCF method. This will use the thread pool and will queue your web service calls nicely if the pool is exhausted.
You can use async IO to perform the web service calls. Async IO does not occupy any thread at all while it is running. You can do the same thing for the database calls. This alleviates any threading concern that you might have.
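A minimal sketch of what that could look like, assuming .NET 4.5+ with HttpClient for the external call and SqlClient for the local read; the URLs, query, connection string and Result type are placeholders, not the poster's actual service:

using System.Data.SqlClient;
using System.Net.Http;
using System.Threading.Tasks;

public class Result { public string TaskName; public object DbValue; }

public class CompositeService
{
    private static readonly HttpClient http = new HttpClient();
    private const string connectionString = "..."; // placeholder

    // One of the three independent tasks: external API call, then a local DB read.
    private async Task<Result> RunTaskAsync(string name, string externalUrl)
    {
        string payload = await http.GetStringAsync(externalUrl);   // async network IO, no thread blocked

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Value FROM SomeTable WHERE ApiKey = @k", conn))
        {
            cmd.Parameters.AddWithValue("@k", payload);
            await conn.OpenAsync();
            object value = await cmd.ExecuteScalarAsync();         // async DB IO (< 0.25 s per the question)
            return new Result { TaskName = name, DbValue = value };
        }
    }

    // The WCF operation kicks off all three tasks and awaits them together.
    public async Task<Result[]> GetCompositeAsync()
    {
        return await Task.WhenAll(
            RunTaskAsync("a", "https://external.example/api/a"),
            RunTaskAsync("b", "https://external.example/api/b"),
            RunTaskAsync("c", "https://external.example/api/c"));
    }
}

Because every await is on IO, the three calls overlap and no worker thread is held while the external API responses are in flight; WCF (.NET 4.5+) can expose the Task-returning method directly as the operation.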
Alternatively, you can rely on the thread pool and increase its limits. You can calculate how many threads you need: if 100 requests arrive per second and each one takes 2 seconds to complete, you need 200 threads. This can easily be served by the built-in thread pool, assuming you configure appropriate limits.
If the external service is down and takes 30 seconds to time out, this number shoots up to 3,000 threads, which I consider unsafe. So you either need a low timeout, a circuit breaker, or async IO.
So in order to decide you need to forecast load and latency.
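If you do go the thread-pool route, the limits can be raised explicitly; a rough sketch, with the figures taken from the example above rather than as a recommendation:

using System;
using System.Threading;

int workers, iocp;
ThreadPool.GetMaxThreads(out workers, out iocp);
Console.WriteLine("current max: " + workers + " worker threads, " + iocp + " IOCP threads");

// Pre-warm enough threads for ~100 requests/sec * 2 sec latency = 200 concurrent calls.
ThreadPool.SetMinThreads(200, 200);

// Hard ceiling, kept well below the 3,000-thread failure scenario described above.
ThreadPool.SetMaxThreads(1000, 1000);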
I'll link to some discussion for why and when to use async IO:
https://stackoverflow.com/a/25087273/122718 Why does the EF 6 tutorial use asychronous calls?
https://stackoverflow.com/a/12796711/122718 Should we switch to use async I/O by default?
We are using Ktor and Kotlin 1.5 to implement a REST service backed by Netty. A couple of things about this service:
"Work" takes non-trivial amount of time to complete.
A unique client endpoint sends multiple requests in parallel to this service.
There are only a handful of unique client endpoints.
The service is not scaling as expected. We ran a load test with parallel requests coming from a single client and we noticed that we only have two threads on the server actually processing the requests. It's not a resource starvation problem - there is plenty of network, memory, CPU, etc. and it doesn't matter how many requests we fire up in parallel - it's always two threads keeping busy, while the others are sitting idle.
Is there a parameter we can configure to increase the number of threads available to process requests for specific endpoints?
Netty uses what is called a non-blocking IO model (http://tutorials.jenkov.com/java-concurrency/single-threaded-concurrency.html).
In this model you have only a single thread, and it can handle a lot of sub-processes in parallel, as long as you follow best practices (never blocking the main event loop thread).
You might need to check the following Netty engine configuration options: https://ktor.io/docs/engines.html#configure-engine
connectionGroupSize = x  // threads used to accept new connections and start call processing
workerGroupSize = y      // threads used to process connections, parse messages and do the engine's internal work
callGroupSize = z        // minimum size of the thread pool used to process application calls
The default values are usually set rather low, and tweaking them can be useful for time-consuming 'work'. The exact values will vary depending on the available resources.
I have been using Node.js for a while and wonder how it handles things when multiple clients trigger blocking / time-consuming work in their responses.
Consider the following situation:
1- There are many endpoints, and one of them is time-consuming, taking a few seconds to respond.
2- Suppose 100 clients simultaneously make requests to my endpoints, one of which takes a considerable amount of time.
Does that endpoint block the whole event loop and make the other requests wait?
Or, in general, do requests block each other in Node.js?
If not, why? It is single-threaded, so why don't they block each other?
Node.js does use threads behind the scenes to perform I/O operations. To be more specific to your question: there is a limit where a client will have to wait for an idle thread to perform a new I/O task.
You can build an easy toy example: run several I/O tasks concurrently (using Promise.all, for instance) and measure how long each takes to finish. Then add another task and repeat.
At some point you'll notice two groups, for example 4 requests that took 250 ms and 2 others that took 350 ms (and there you have requests blocking each other).
Node.js is commonly referred to as single-threaded because of how it executes CPU-bound work by default (in contrast to its non-blocking I/O architecture). It is therefore not a wise choice for CPU-intensive operations, but it is very efficient when it comes to I/O.
DISCLAIMER: If this post is off-topic to this site, please recommend a site where this post would be appropriate.
On Ubuntu 18.04, in bash, I am writing a network-based, threaded application that requires multiple servers. It receives files through the network and processes them, ultimately making an API call that finishes the processing and logs the results to a database for later retrieval and reporting.
So far I have written the application using non-threaded programming models and concepts. That means the files are processed one at a time in real time. This works great if there is no sudden burst of files and/or a backlog of files to process. The main bottleneck has been the way I sequentially send files to the API one after another, waiting until the entire operation has completed for one file and the API returns the results. The API has a rate limit of 8 calls per second. But since each call takes 0.75 to 1 second, my program waits until the operation is done and only processes about 1 file per second through the API. In short, I did not have to worry about scheduling API calls because I could barely do one call per second.
Since the capacity is there to process 8 files per second, and I need more speed, I have been converting my single-threaded, sequential application into a parallel, scalable, multi-threaded application. This new version can spawn enough threads to send 8 files per second to the REST API and much more. So now I have the opposite problem. I am sending too many requests per second to the REST API and am in danger of triggering penalties, etc. Ultimately, when my traffic is higher, I will upgrade my subscription to the API and get more calls per second, but this current dilemma has got me thinking about how to schedule the API calls with different threads.
The purpose of this post is to discuss an idea about how to schedule these REST API calls across various threads. Specifically, I want to discuss how to coordinate timing and usage of the API while maintaining efficiency and yet not overloading the API. In short, I want to coordinate a group of threads so that the API is properly used. Not too fast and not too slow.
Independent of my application, this idea could be useful in a number of generically similar scenarios.
My idea is to create an "air traffic controller" ("ATC") so that the threads of the application have a centralized timing authority to check when they are ready to submit files to the REST API. The ATC would know how many time slots/calls per time period (in this case, calls per second) the API can schedule. The ATC would listen for threads requesting a time slot ("launch code") and would hand back a slot in the future at which to perform the API call, deciding based on the schedule of launch codes it has already handed out.
In my case, from the start of the upload of the file to the API, it could take 0.75 to 1 second to complete the processing and receive a response from the API. This does not affect the count of new API calls that can be performed. It is just a consideration of how long the threads will be waiting once they call the API. It may not be relevant to this overall discussion.
Each thread would obviously have to do some error handling. If the API timed out or threw an error, then the thread would have to handle it, get back in line with the ATC if appropriate, and ask for a new launch code. Maybe it should report the error to the ATC for centralized logging?
In situations where the file processing needs burst above 8 files per second, there would be a scheduling backlog where the threads should wait their turn as assigned by the ATC.
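The question is framed around bash, but the idea is language-agnostic; here is a minimal sketch of just the slot-reservation core in C#, leaving out the TCP listener, auth, and persistence. The 8-calls-per-second figure is from above; everything else is illustrative:

using System;

public class LaunchScheduler
{
    private readonly TimeSpan slotInterval;
    private DateTime nextFreeSlot = DateTime.UtcNow;
    private readonly object gate = new object();

    public LaunchScheduler(int callsPerSecond)
    {
        // e.g. 8 calls per second => one slot every 125 ms
        slotInterval = TimeSpan.FromSeconds(1.0 / callsPerSecond);
    }

    // Returns the UTC time at which the caller may hit the API (the "launch code").
    public DateTime ReserveSlot()
    {
        lock (gate)   // one shared schedule, so access is serialized
        {
            DateTime now = DateTime.UtcNow;
            if (nextFreeSlot < now)
                nextFreeSlot = now;                     // no backlog: launch immediately

            DateTime granted = nextFreeSlot;
            nextFreeSlot = nextFreeSlot + slotInterval; // push the schedule forward one slot
            return granted;                             // during bursts this lands in the future
        }
    }
}

Each worker would ask for a slot, sleep until the granted time, make its API call, and on failure simply ask for a new slot; the ATC described in the question would wrap this core in a small network daemon.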
Here are some other considerations:
Function
The ATC would be a lightweight daemon that does the following:
- listens on some TCP port
- receives a request
  (security token (?), thread id, priority)
- authenticates the request (?)
- examines the schedule
- reserves the next available time slot
- returns the launch code
  (security token (?), current time, launch timing offset from the current time, URL and auth token for the API)
- expunges expired launch codes
The ATC would need the following:
- to know what port it is supposed to run on
- to know how many slots per time period it was set to schedule
(e.g. 8 per second)
- to have super-fast read/write access to the schedule (associative array?)
- to know the URL and corresponding auth token for the thread to use
- maybe to know multiple URLs and auth tokens for load balancing
Here are more things to consider:
Security
How could we keep the ATC secure while ensuring high performance?
Network-level security (e.g. firewalls allowing only the IP addresses of the file-processing servers?)
Auth tokens or logins and passwords?
Performance
What would the requirements be for this ATC server? Would this be taxing on CPU and memory?
Timing
How often would an NTP call be needed? By the ATC server? By the servers which call the API?
Scalability
Being able to provide different URLs and auth tokens would allow the ATC to load balance with different API providers.
Threading of the ATC itself
Would the ATC need to spawn threads to be able to handle each new request?
How does a web server handle requests?
How would the various threads share a common schedule?
In a non-threaded environment, the ATC would possibly keep an associative array in memory to keep performance as high as possible. How would the various threads of the ATC have access to the same schedule?
So here is my question. Does this exist? If not, what are some best practices in trying to build the above?
It seems like a beanstalkd kind of network service, except it only provides permission/scheduling and is extremely dependent on timing.
What is the best way to scale a Worker Role that is processing many long-running Azure Service Bus messages using the QueueClient message pump?
If I use QueueClient.OnMessage with OnMessageOptions.MaxConcurrentCalls = 6, does that mean I can only process a maximum of 6 messages at a time?
Is it bad form for the OnMessage callback to spawn a new Task to do the long-running processing?
Should I be using QueueClient.OnMessageAsync instead?
Thanks for any help.
By “long running” do you mean IO-bound or CPU-bound?
Assuming IO-bound, I wouldn't spawn a new Task in the OnMessage callback; that creates thread-management overhead which can slow processing down at scale.
Consider using OnMessageAsync if you are using IO-bound operations and make sure that you await the asynchronous implementations of any of these operations. This uses your existing threads much more efficiently.
If your operations are CPU-bound then Task creation may do more for you. The mechanics of this are discussed in a series of excellent posts by Stephen Cleary:
http://blog.stephencleary.com/2013/10/taskrun-etiquette-and-proper-usage.html
The MaxConcurrentCalls property controls how many concurrent calls to your callback the message pump will make, i.e. how many messages are processed at once. Increasing this number has a limited impact if you're IO-bound and limited by available bandwidth. I would recommend doing a bit of performance testing with the Azure client-side performance counters to get the optimum value for your environment.
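A rough sketch of the IO-bound shape using the classic Microsoft.ServiceBus.Messaging QueueClient; the connection string, queue name and handler body are placeholders:

using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

var client = QueueClient.CreateFromConnectionString(connectionString, "my-queue");

var options = new OnMessageOptions
{
    MaxConcurrentCalls = 6,   // up to 6 handlers in flight at once; tune via performance testing
    AutoComplete = false      // complete explicitly only after the work succeeds
};

client.OnMessageAsync(async message =>
{
    // Everything long-running is awaited IO, so no thread sits blocked while it runs.
    await DoLongRunningIoAsync(message);   // placeholder for the actual work
    await message.CompleteAsync();
}, options);

static Task DoLongRunningIoAsync(BrokeredMessage msg) => Task.Delay(TimeSpan.FromSeconds(30)); // stand-in for the real IO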
I have a service which polls a queue very quickly to check for more 'work' which needs to be done. There is always more work in the queue than a single worker can handle. I want to make sure a single worker doesn't grab too much work when the service is already at max capacity.
Let's say my worker grabs 10 messages from the queue every N ms and uses the Task Parallel Library to process each message in parallel on different threads. The work itself is very IO-heavy: many SQL Server queries and even Azure Table Storage calls (HTTP requests) are made for a single unit of work.
Is using ThreadPool.GetAvailableThreads() the proper way to throttle how much work the service is allowed to grab?
I see that I have access to available WorkerThreads and CompletionPortThreads. For an IO-heavy process, is it more appropriate to look at how many CompletionPortThreads are available? I believe 1000 is the number made available per process regardless of CPU count.
Update: it might be important to know that the queue I'm working with is an Azure Queue, so each request to check for messages is made as an async HTTP request which returns the next 10 messages (and costs money).
I don't think using IO completion ports is a good way to work out how much to grab.
I assume that the ideal situation is where you run out of work just as the next set arrives, so you've never got more backlog than you can reasonably handle.
Why not keep track of how long it takes to process a job and how long it takes to fetch jobs, and adjust the amount of work fetched each time based on that, with suitable minimum/maximum values to stop things going crazy if you have a few really cheap or really expensive jobs?
You'll also want to work out a reasonable optimum degree of parallelization - it's not clear to me whether it's really IO-heavy, or whether it's just "asynchronous request heavy", i.e. you spend a lot of time just waiting for the responses to complicated queries which in themselves are cheap for the resources of your service.
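A hedged sketch of that adaptive idea; the "processing time ≈ 10x fetch time" target, the bounds, and the queue/ProcessMessage names are all illustrative, not from the question:

using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

int batchSize = 10;
const int MinBatch = 1, MaxBatch = 32;

while (true)
{
    var fetchTimer = Stopwatch.StartNew();
    var messages = queue.GetMessages(batchSize).ToList();   // placeholder queue client call
    fetchTimer.Stop();

    var workTimer = Stopwatch.StartNew();
    Parallel.ForEach(messages, ProcessMessage);              // ProcessMessage is a placeholder
    workTimer.Stop();

    if (messages.Count > 0)
    {
        double msPerItem = workTimer.Elapsed.TotalMilliseconds / messages.Count;
        // Aim for each batch's processing time to be roughly 10x the fetch time, so fetch
        // overhead stays around 10%, clamped so a few unusually cheap or expensive jobs
        // don't swing the batch size wildly.
        int target = (int)Math.Ceiling((10 * fetchTimer.Elapsed.TotalMilliseconds) / Math.Max(msPerItem, 1));
        batchSize = Math.Max(MinBatch, Math.Min(MaxBatch, target));
    }
}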
I've been working on virtually the same problem in the same environment. I ended up giving each WorkerRole an internal work queue, implemented as a BlockingCollection<>. There's a single thread that monitors that queue - when the number of items gets low, it requests more items from the Azure queue. It always requests the maximum number of items, 32, to cut down on costs. It also backs off automatically when the queue is empty.
Then I have a set of worker threads that I started myself. They sit in a loop, pulling items off the internal work queue. The number of worker threads is my main way to optimize the load, so I've got that set up as an option in the .cscfg file. I'm currently running 35 threads/worker, but that number will depend on your situation.
I tried using the TPL to manage the work, but I found it more difficult to manage the load. Sometimes the TPL would under-parallelize and the machine would be bored; other times it would over-parallelize and the Azure queue message visibility would expire while the item was still being worked on.
This may not be the optimal solution, but it seems to be working OK for me.
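A rough sketch of that shape, assuming the WindowsAzure.Storage CloudQueue client; the thresholds, backoff, thread count and Process method are placeholders:

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using Microsoft.WindowsAzure.Storage.Queue;

public class QueueWorker
{
    private readonly CloudQueue queue;
    private readonly BlockingCollection<CloudQueueMessage> work =
        new BlockingCollection<CloudQueueMessage>(boundedCapacity: 64);

    public QueueWorker(CloudQueue queue) { this.queue = queue; }

    public void Start(int workerThreadCount)   // e.g. the 35 threads/role from the .cscfg setting above
    {
        // Single monitor thread: tops the internal queue up from the Azure queue.
        new Thread(() =>
        {
            while (true)
            {
                if (work.Count < 16)
                {
                    var batch = queue.GetMessages(32).ToList();   // always ask for the max to cut request costs
                    foreach (var msg in batch) work.Add(msg);
                    if (batch.Count == 0) Thread.Sleep(TimeSpan.FromSeconds(5));  // back off on an empty queue
                }
                else
                {
                    Thread.Sleep(100);
                }
            }
        }) { IsBackground = true }.Start();

        // Fixed set of worker threads pulling items off the internal queue.
        for (int i = 0; i < workerThreadCount; i++)
        {
            new Thread(() =>
            {
                foreach (var msg in work.GetConsumingEnumerable())
                {
                    Process(msg);               // placeholder for the actual work
                    queue.DeleteMessage(msg);   // remove from the Azure queue once done
                }
            }) { IsBackground = true }.Start();
        }
    }

    private void Process(CloudQueueMessage msg) { /* placeholder */ }
}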
I decided to keep an internal counter of how many messages are currently being processed. I used Interlocked.Increment/Decrement to manage the counter in a thread-safe manner.
I would have used the Semaphore class, since each message is tied to its own Thread, but I wasn't able to due to the async nature of the queue poller and the code that spawned the threads.
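A minimal sketch of that counter approach; MaxInFlight is illustrative, and Task.Run stands in for however the worker threads are actually spawned:

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Queue;

public class ThrottledPoller
{
    private int inFlight = 0;
    private const int MaxInFlight = 24;   // illustrative capacity, not from the question

    // Called from the poller loop: only grab as many messages as there is spare capacity for.
    public void PollOnce(CloudQueue queue)
    {
        int spare = MaxInFlight - Volatile.Read(ref inFlight);
        if (spare <= 0) return;

        foreach (var msg in queue.GetMessages(Math.Min(spare, 10)))
        {
            Interlocked.Increment(ref inFlight);
            Task.Run(async () =>
            {
                try { await ProcessAsync(msg); }                   // placeholder for the IO-heavy work
                finally { Interlocked.Decrement(ref inFlight); }
            });
        }
    }

    private Task ProcessAsync(CloudQueueMessage msg) => Task.CompletedTask;  // stand-in
}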