I am using camel-netty:jar:2.10.0.redhat-60024.
Below is my configuration of the Netty listener:
netty:tcp://10.1.33.204:9001?textline=true&autoAppendDelimiter=true&delimiter=LINE&keepAlive=true&synchronous=false&orderedThreadPoolExecutor=false&sendBufferSize=2000&receiveBufferSize=2000&decoderMaxLineLength=2000&workerCount=20
Here I see, based on the debug log, that Netty is creating only one worker thread, so incoming messages are blocked until the existing message is processed. For example:
2014-08-23 12:36:48,394 | DEBUG | w I/O worker #5 | NettyConsumer | ty.handlers.ServerChannelHandler 85 | 126 - org.apache.camel.camel-netty - 2.10.0.redhat-60024
The process runs for up to 5 minutes, but I see only this one thread active. Only when this thread sends its response does it accept the next request.
For TCP, Netty creates a number of worker threads, and assigns each connection to a specific worker thread. All events for that channel are handled by that single thread (note it can be more complex, but that's sufficient for this answer).
It sounds like you're processing your message in the Netty worker thread. Therefore you're blocking processing of any further events on that connection, and all other connections assigned to the worker thread, until your process returns.
Netty is actually creating multiple worker threads. You can see in the debug message that your channel is being handled by I/O worker 5. Netty will create 2 * Runtime.availableProcessors() worker threads by default, but each connection is handled by a single thread unless you intervene.
It's not clear whether you can process requests concurrently and out of order, or whether ordering is important. If ordering is important you can tell Camel to use the ordered thread pool executor. This will process the request in a separate thread pool, but subsequent requests on the same connection will still be blocked by the first request.
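For example, a minimal route sketch reusing the endpoint from your question (trimmed for brevity; the myProcessor bean is hypothetical):

// Inside a RouteBuilder's configure() method. With orderedThreadPoolExecutor=true,
// Camel hands each message off to a separate pool while preserving per-connection order.
from("netty:tcp://10.1.33.204:9001?textline=true&delimiter=LINE"
        + "&orderedThreadPoolExecutor=true")
    .to("bean:myProcessor");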
If ordering is not important you have a few options. Given that Camel appears to be using Netty 3, and allows you to create a custom pipeline, you could use Netty's MemoryAwareThreadPoolExecutor to process requests concurrently. Perhaps take a look at the question "What happens when shared MemoryAwareThreadPoolExecutor's threshold is reached?" if you do this.
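A minimal sketch of that approach, assuming you can install handlers in the pipeline Camel builds (the pool sizes and MyBusinessHandler are illustrative):

import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.MemoryAwareThreadPoolExecutor;

// Netty 3: ExecutionHandler moves everything downstream of it off the I/O worker
// thread into the given pool. MemoryAwareThreadPoolExecutor does not preserve
// per-channel ordering; use OrderedMemoryAwareThreadPoolExecutor if it must.
ExecutionHandler executionHandler = new ExecutionHandler(
        new MemoryAwareThreadPoolExecutor(
                16,          // core pool size (example)
                1048576,     // max queued memory per channel: 1 MiB (example)
                16777216));  // max total queued memory: 16 MiB (example)

pipeline.addLast("executor", executionHandler);
pipeline.addLast("handler", new MyBusinessHandler()); // runs in the pool, not the I/O thread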
Camel may offer other mechanisms to help but I'm not overly familiar with Camel. The SEDA component might be a good place to start.
Related
I'm dealing with a legacy synchronous server that has operations running for up to a minute, and it exposes 3 ports to overcome this problem: a "light-requests" port, a "heavy-but-important" port, and a "heavy" port.
They all expose the same service, but since they run on separate ports, they end up with dedicated thread pools.
Now this approach is running into a problem with load balancing, as Envoy can't handle a single service exposing the same proto on 3 different ports.
I'm trying to come up with a single thread pool configuration that would work (probably an extremely overprovisioned one), but I can't find any documentation on what the thread pool settings actually do:
NUM_CQS: Number of completion queues.
MIN_POLLERS: Minimum number of polling threads.
MAX_POLLERS: Maximum number of polling threads.
CQ_TIMEOUT_MSEC: Completion queue timeout in milliseconds.
Is there some reason why you need the requests split into three different thread pools? By default, there is no limit to the number of request handler threads. The sync server will spawn a new thread for each request, so the number of threads will be determined by the number of concurrent requests -- the only real limit is what your server machine can handle. (If you actually want to bound the number of threads, I think you can do so via ResourceQuota::SetMaxThreads(), although that's a global limit, not one per class of requests.)
Note that the request handler threads are independent from the number of polling threads set via MIN_POLLERS and MAX_POLLERS, so those settings probably aren't relevant here.
UPDATE: Actually, I just learned that my description above, while correct in a practical sense, got some of the internal details wrong. There is actually just one thread pool for both polling and request handlers. When a request comes in, an existing polling thread basically becomes a request handler thread, and when the request handler completes, that thread is available to become a polling thread again. The MIN_POLLERS and MAX_POLLERS options allow tuning the number of threads that are used for polling: when a polling thread becomes a request handler thread, if there are not enough polling threads remaining, a new one will be spawned, and when a request handler finishes, if there are too many polling threads, the thread will terminate. But none of this affects the number of threads used for request handlers -- that is still unbounded by default.
I have a Spring Integration context with multiple inbound channel adapters, each with its own poller (currently all the pollers are configured with fixed-delay, but they may use fixed-rate in the future). All the inbound adapters output their messages to the same processing chain. The question is: what is the behaviour of polling and message consumption in such a situation? Imagine that poller #1 has produced 1000 messages and they are handed to my processing chain. Since processing can take significant time, it is possible that it becomes time for poller #2 to do its job and possibly produce messages. But remember: my processing chain is still handling the messages passed by poller #1. What happens?
1. Poller #2 is not run at all until all the poller #1 messages are processed.
2. Poller #2 is run (but how could it be run if we have only one thread?), and its messages are stored for later, until all the poller #1 messages are processed.
3. Processing initiated by poller #1 is interrupted, poller #2 is run, and the messages it produces are passed to the processing chain immediately.
4. Some other answer.
Note that all my channels are direct channels and there are no task executors used.
Pollers are independent tasks handled by the common taskScheduler bean; as long as the task scheduler has sufficient threads, there is no coordination across pollers.
If the pool is exhausted, pollers will run "late".
By default the taskScheduler has 10 threads; but you can reconfigure it.
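For example, a minimal sketch in Java config, assuming you want to override the default bean (the pool size of 20 is illustrative; XML such as <task:scheduler id="taskScheduler" pool-size="20"/> works too):

import org.springframework.context.annotation.Bean;
import org.springframework.integration.context.IntegrationContextUtils;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

// Redefine Spring Integration's default "taskScheduler" bean with a larger pool,
// so pollers don't run late when many of them fire at once.
@Bean(name = IntegrationContextUtils.TASK_SCHEDULER_BEAN_NAME)
public ThreadPoolTaskScheduler taskScheduler() {
    ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
    scheduler.setPoolSize(20); // default is 10
    return scheduler;
}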
I have almost the same case; however, the behaviour differs a bit.
My case is:
- I have 4 pollers which request data independently from 4 different blocking queues (I have set a 1-second timeout for each of them).
- I have 4 inbound channel adapters configured to use fixed-delay (100 ms) and the pollers above (one to one).
- I have a thread pool with 4 threads core/max, configured to handle the inbound channel adapters (all adapters use this pool).
And now I see in the logs that each thread executes all the pollers sequentially, and if a queue is empty (I am using blocking queues) then all threads are delayed for 1 second. This means that even if you have enough threads, you may still get a delay across all of them if at least one poller is slow. For instance, if I did not use a timeout for the queue reads at all, then all threads would block on an empty queue and nothing would be read from the other, non-empty queues.
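Roughly what each poll looks like (a sketch; the queue and names are illustrative), which shows why one empty queue pins a shared pool thread for the full timeout:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

BlockingQueue<String> queue = new LinkedBlockingQueue<>();
// Blocks the calling pool thread for up to 1 second when the queue is empty,
// then returns null; during that second the thread can serve no other poller.
String message = queue.poll(1, TimeUnit.SECONDS);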
To solve this issue, I guess we need to configure a separate thread pool for each poller <-> inbound channel adapter pair.
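A minimal sketch of that idea in Java config, assuming one dedicated single-thread executor per adapter (the bean name, trigger period, and executor choice are illustrative):

import java.util.concurrent.Executors;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.scheduling.PollerMetadata;
import org.springframework.scheduling.support.PeriodicTrigger;

// One poller per adapter, each with its own executor: a slow or empty queue
// then only delays its own poller, not the other three.
@Bean
public PollerMetadata queue1Poller() {
    PollerMetadata poller = new PollerMetadata();
    poller.setTrigger(new PeriodicTrigger(100)); // fixed-delay of 100 ms
    poller.setTaskExecutor(Executors.newSingleThreadExecutor());
    return poller;
}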
I understand that the power of Node.js is that it processes all user requests on a single thread working off a queue of requests. The idea is that there is no context switching of this thread and no system calls.
input thread ---> | request queue | ---> output thread (processes a task itself if it causes no system call, else delegates it to the thread pool)
The thread pool will:
- execute tasks involving system calls (usually somewhat long-running ones, e.g. IO tasks),
- put the results back in the queue as another request task,
- which will then be processed by the single thread working on the queue.
My question is: inevitably, Node.js code will need to put data into an RDBMS or a JMS system. This is most definitely synchronous (even a JMS put is synchronous, although the producer and consumer are not synchronous with each other). So the thread pool processing these IO tasks will not only make system calls but will also be blocked during this period. JDBC in any case does not support asynchronous calls (I guess due to the need to be transactional, and maybe security issues, since transaction and security context are attached to threads).
So how do we actually put data into an RDBMS efficiently from a Node.js server?
I am new to Netty. I would like to develop a server which aims at receiving requests from possibly few (say at most 2) clients. But each client will send many requests to the server continuously. The server has to process these requests and respond to the client. So here I assume that even if I configure multiple worker threads, it may not be useful, as there are only 2 active connections. The worker thread again blocks until it processes and responds to the client. So please let me know how to handle this type of problem.
If I use a ThreadPoolExecutor in the worker thread to process both clients' requests in a multi-threaded manner, will it be efficient? Or if it can be achieved through the Netty framework, please let me know how to do this.
Thanks in advance...
If I understand correctly: your clients (2) will send many messages, each of them implying an answer as quickly as possible from the server.
Two options can be seen:
1. The answer process is short enough not to be an issue for the rate you want to reach (meaning 1 thread is able to answer as fast as you need for 1 client): then you can stay with the standard threads from Netty (1 worker thread for 1 client at a time), set up in the server bootstrap. This is the shortest path.
2. The answer process is not short enough (the rate will be terrible, for instance because there is a "long-time" process such as a blocking call, database access, or file writing): then you can add a thread pool (a group) in the Netty pipeline for your ChannelHandler that does such blocking/long processing.
Here is an extract of the API documentation taken from ChannelPipeline:
http://netty.io/4.0/api/io/netty/channel/ChannelPipeline.html
static final EventExecutorGroup group = new DefaultEventExecutorGroup(16);
...
// Tell the pipeline to run MyBusinessLogicHandler's event handler methods
// in a different thread than an I/O thread so that the I/O thread is not blocked by
// a time-consuming task.
// If your business logic is fully asynchronous or finished very quickly, you don't
// need to specify a group.
pipeline.addLast(group, "handler", new MyBusinessLogicHandler());
Just add a ChannelHandler bound to a special EventExecutorGroup to the ChannelPipeline, for example an UnorderedThreadPoolEventExecutor.
Something like this:
// The pool size of 10 is just an example; pick one for your workload.
UnorderedThreadPoolEventExecutor executorGroup = new UnorderedThreadPoolEventExecutor(10);
pipeline.addLast(executorGroup, new MyChannelHandler());
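Note that, as the name implies, UnorderedThreadPoolEventExecutor makes no guarantees about the order in which events are executed; if your protocol needs per-channel ordering, a DefaultEventExecutorGroup (as in the javadoc extract above) is the safer choice.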
We have a Mule app with an HTTP inbound endpoint, and I'm trying to figure out how to control the thread count under load. As an experiment I have added the following configuration:
<core:configuration>
<core:default-threading-profile doThreading="false" maxThreadsActive="500" poolExhaustedAction="RUN"/>
</core:configuration>
Under load I'm seeing the thread count peak at over 1000 threads. I'm not sure why this is the case, given the maxThreadsActive setting and doThreading="false". Reading about poolExhaustedAction="RUN", I would expect the listener thread to block while processing inbound requests rather than spawn new ones, and finally to reject connections once its backlog queue is full. I never see rejected client connections.
Does Mule maintain a separate thread pool for each inbound endpoint in the app (sorry if this is in the documentation)? Even if so, I don't think it helps explain what I'm seeing.
Any help appreciated. We are running a number of mule apps in one container and I'd like to control the total number of threads.
Thanks, Alfie.
Clearly the doThreading attribute on default-threading-profile is not enough to control Mule threading as a whole, nor to impose a global cap on the threading behaviour of individual transports. I reckon you're getting 500 threads for the HTTP message receiver pool and 500 for the VM message dispatcher pool.
I strongly suggest reading about tuning Mule: http://www.mulesoft.org/documentation/display/current/Tuning+Performance
My gut feel is that you need to:
- configure threading on each transport (VM, HTTP), strictly specifying the pool size for receivers and dispatchers,
- select flow processing strategies that prevent Mule from spawning new threads (i.e. use synchronous to hog the receiver threads),
- select exchange patterns that also prevent Mule from spawning new threads (i.e. use request-response to piggyback the current execution thread).