RedisServerEvents: local.NotifySubscription running in a single thread? - servicestack

I see that the OnMessage handler implemented by RedisServerEvents is responsible for notifying heartbeats to long-running clients through the Local.NotifySubscription method:
https://github.com/ServiceStack/ServiceStack/blob/a29892b764309239203fd9281da9f323750cdd4f/ServiceStack/src/ServiceStack.Server/RedisServerEvents.cs#L582
Is this code supposed to be running in the main RunLoop thread? Or is there a way to handle heartbeat responses in different threads and, if not, are there any reasons for not doing so?
Thanks in advance!

Related

Node.js: how does the libuv thread pool work?

I am learning Node.js. I understand the heart of Node.js is the reactor pattern, which is based on the event loop.
When any event occurs it goes to the event queue and then gets picked up by the stack once its running tasks are over. This happens if the event is a non-blocking one, but if it is a blocking request then the event loop passes it to a thread from libuv's thread pool.
Now my doubts are:
Once the execution is over, does the libuv thread pass the request back to the event queue or the event loop? Different tutorials describe different scenarios.
The thread pool in libuv has only three threads. Now suppose 10 users try to log in, everyone at the same time (some app like Facebook or so), and the threads are blocked as they want to connect to the DB, so how will only three threads handle so much load?
I am really confused and have not found a good explanation of these doubts anywhere; any help will be appreciated.
When any event occurs it goes to the event queue and then gets picked up by the stack
The event does not get picked up by the stack; it is passed to the call stack by the event loop.
if it is a blocking request then event loop passes it to a thread from a thread pool of libuv.
There are ONLY FOUR things that use the thread pool: DNS lookups, fs, crypto and zlib. Everything else executes in the main thread, blocking or not.
So logging in is a network request, and the thread pool does not handle it. Neither libuv nor Node has any code to handle the low-level operations involved in a network request. Instead, libuv delegates making the request to the underlying operating system and then just waits for the OS to signal that some response has come back. The OS keeps track of connections in its network stack, but network I/O is handled by your network hardware and your ISP.
Once the execution is over does libuv thread passes the request back to event queue or event loop?
From the Node.js point of view, does it make a difference?
The thread pool in libuv has only three threads. Now suppose 10 users try to log in, everyone at the same time (some app like Facebook or so), and the threads are blocked as they want to connect to the DB, so how will only three threads handle so much load?
libuv uses a thread pool, but not in a "naive" way. Most asynchronous requests are filesystem/TCP interactions, which are handled via select(). You have to worry about the thread pool only when you create a custom C++ module and manually dispatch a CPU- or I/O-blocking task.
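The division of labor described above, a single dispatch loop that hands blocking work to a small pool, can be sketched in plain Java (class and variable names here are illustrative, not libuv's API; libuv's default pool actually has four threads):

```java
import java.util.concurrent.*;

public class LoopWithPool {
    public static void main(String[] args) throws Exception {
        // A small fixed pool for blocking work, like libuv's thread pool.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // Completed results come back through a queue, much like libuv
        // posting completions back to the event loop.
        BlockingQueue<String> completions = new LinkedBlockingQueue<>();

        // Submit ten "blocking" tasks; only four ever run at once,
        // the rest wait in the pool's queue.
        for (int i = 0; i < 10; i++) {
            final int id = i;
            pool.submit(() -> {
                try { Thread.sleep(50); } catch (InterruptedException e) { }
                completions.add("task-" + id + " done");
            });
        }

        // The single main thread drains completions, like the event loop
        // invoking callbacks one at a time.
        for (int i = 0; i < 10; i++) {
            System.out.println(completions.take());
        }
        pool.shutdown();
    }
}
```

This is also the answer to the "only three threads, ten users" worry: the excess tasks queue up rather than spawning more threads, and the pool works through them.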

Netty multi threading per connection

I am new to Netty. I would like to develop a server that receives requests from possibly a few (say at most 2) clients, but each client will send many requests to the server continuously. The server has to process such requests and respond to the client. So here I assume that even if I configure multiple worker threads, it may not be useful, as there are only 2 active connections, and a worker thread blocks until it processes and responds to the client. So please let me know how to handle this type of problem.
If I use a ThreadPoolExecutor in the worker thread to process both clients' requests in a multithreaded manner, will it be efficient? Or if this can be achieved through the Netty framework, please let me know how to do it.
Thanks in advance...
If I understand correctly: your clients (2) will send many messages, each of them implying an answer from the server as quickly as possible.
Two options can be seen:
The answer process is short (short enough not to be an issue for the rate you want to reach, meaning 1 thread is able to answer as fast as you need for 1 client): then you can stay with the standard threads from Netty (1 worker thread for 1 client at a time) set up in the server bootstrap. This is the shortest path.
The answer process is not short enough (the rate will be terrible, for instance because there is a "long time" process such as a blocking call, database access, or file writing): then you can add a thread pool (a group) in the Netty pipeline for the ChannelHandler doing such blocking/long processing.
Here is an extract of the API documentation taken from ChannelPipeline:
http://netty.io/4.0/api/io/netty/channel/ChannelPipeline.html
// Tell the pipeline to run MyBusinessLogicHandler's event handler methods
// in a different thread than an I/O thread so that the I/O thread is not blocked by
// a time-consuming task.
// If your business logic is fully asynchronous or finished very quickly, you don't
// need to specify a group.
pipeline.addLast(group, "handler", new MyBusinessLogicHandler());
Just add a ChannelHandler with a special EventExecutorGroup to the ChannelPipeline, for example UnorderedThreadPoolEventExecutor (src).
Something like this:
UnorderedThreadPoolEventExecutor executorGroup = ...;
pipeline.addLast(executorGroup, new MyChannelHandler());

Safe to spawn a new thread inside request thread in spring

I have a Spring controller. The request thread from the controller is passed to the @Service-annotated service class. Now I want to do some background work: the request thread must somehow trigger the background thread and continue with its own work, and it should not wait for the background thread to complete.
My first question: is this safe to do?
Second question: how to do it?
Is this safe
Not really. If you have many concurrent users, you'll spawn a thread for every one of them, and the high number of threads could bring your server to its knees. The app server uses a pool of threads precisely to avoid this problem.
How to do this
I would do this by using the asynchronous capabilities of Spring. Call a service method annotated with @Async, and the service method will be executed by another thread from a configurable pool.
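Roughly, what @Async gives you is fire-and-forget submission to a bounded pool. A minimal JDK-only sketch of that pattern (the names `handleRequest` and the pool size are illustrative, not Spring's API):

```java
import java.util.concurrent.*;

public class FireAndForget {
    // A bounded pool, similar in spirit to the executor Spring's @Async
    // dispatches to; the size here is an arbitrary example.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(2);

    static void handleRequest(CountDownLatch done) {
        // The "request thread" submits background work and returns immediately,
        // without waiting for the work to complete.
        POOL.submit(() -> {
            try { Thread.sleep(100); } catch (InterruptedException e) { }
            done.countDown();
        });
    }

    public static void main(String[] args) throws Exception {
        CountDownLatch done = new CountDownLatch(1);
        handleRequest(done);
        // The caller reaches this point long before the background task finishes.
        System.out.println("request thread continues");
        done.await();
        System.out.println("background work finished");
        POOL.shutdown();
    }
}
```

Because the pool is bounded, a burst of requests queues background tasks instead of spawning a thread per user, which is exactly the safety concern raised above.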

Multithreaded socket server using libev

I'm implementing a socket server.
All clients (up to 10k) are supposed to stay connected.
Here's my current design:
The main thread creates an event loop (use epoll by default) and a watcher for accepting clients.
The accept callback
Accept fd and set it to non-blocking mode.
Add watcher for the fd to monitor read events.
The read callback
Read data and add a task to thread pool to send response.
Is it OK to move the read part to the thread pool, or is there a better idea?
Thanks.
Hard to say. You don't want 10k threads running in the background. You should keep the read part in the main thread: this way, if suddenly all clients start asking for things, you pile up those requests only in the thread-pool queue (you don't end up with 10k threads running at the same time). You might also get better performance this way, because you avoid some unnecessary context switches (between your own threads).
On the other hand if your clients are unlikely to send requests at the same time, or if the replies are very simple, it might be simpler to just have one thread per client, and avoid the context switch between the main thread and the thread pool.
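The design in the question (accept and read on the event-loop thread, process in a pool) can be sketched with java.nio instead of libev; it is the same epoll-backed readiness pattern. Everything here is a self-contained demo that connects a client to itself:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.*;

public class SelectorPlusPool {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Selector selector = Selector.open();

        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        // A client that connects and sends one request.
        SocketChannel client = SocketChannel.open(new InetSocketAddress("127.0.0.1", port));
        client.write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));

        BlockingQueue<String> responses = new LinkedBlockingQueue<>();
        boolean running = true;
        while (running) {
            selector.select(); // the event loop: block until the OS reports readiness
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel ch = server.accept();  // accept callback
                    ch.configureBlocking(false);         // non-blocking mode
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // Read on the main thread, then hand processing to the pool.
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    ((SocketChannel) key.channel()).read(buf);
                    buf.flip();
                    String req = StandardCharsets.UTF_8.decode(buf).toString();
                    pool.submit(() -> responses.add("response to " + req));
                    running = false; // one request is enough for this demo
                }
            }
            selector.selectedKeys().clear();
        }
        System.out.println(responses.take());
        pool.shutdown();
    }
}
```

Note that only the (cheap) read happens on the loop thread; the potentially expensive response computation is queued, so a burst of 10k readable sockets queues work rather than spawning 10k threads.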

How does NodeJs handle so many incoming requests, does it use a thread pool?

When a request comes into a nodejs server, how does it handle the request?
I understand it has a different way of handling requests, as it doesn't spawn a new thread for each request (or I guess it doesn't use a traditional thread pool either).
Can someone explain to me what is going on under the hood, and does the flavour of linux matter here?
No, it does async I/O. There's only one thread that blocks until something happens somewhere, and then it handles that event. This means that one thread in one process can serve many concurrent connections. Somewhat like:
endless loop {
event = next_event()
dispatch_event(event)
}
The only exception is filesystem stuff; it uses a thread pool for that under the hood.
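The pseudocode above can be made concrete as a single-threaded dispatch loop in Java (the event strings are hypothetical; in Node they would be readiness notifications from the OS):

```java
import java.util.concurrent.*;

public class SingleThreadLoop {
    public static void main(String[] args) throws Exception {
        // The "event queue": the OS, timers, or other sources post events here.
        BlockingQueue<String> events = new LinkedBlockingQueue<>();
        events.add("connection from client-1");
        events.add("data from client-2");
        events.add("shutdown");

        // endless loop { event = next_event(); dispatch_event(event) }
        while (true) {
            String event = events.take(); // blocks until something happens somewhere
            if (event.equals("shutdown")) break;
            // One event at a time, one thread; callbacks must not block here.
            System.out.println("dispatched: " + event);
        }
    }
}
```

The key property is that `take()` is the only place the thread waits; as long as each handler finishes quickly, one thread can interleave events from many connections.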
Node tells the operating system (through epoll, kqueue, /dev/poll, or select) that it should be notified when a new connection is made, and then it goes to sleep. If someone new connects, it executes the callback. Each connection is only a small heap allocation.
It is "event driven": it handles I/O in an async fashion (non-blocking I/O). It internally does the threading needed for epoll, kqueue, /dev/poll, or select handling, but for you as a user/client it is absolutely transparent.
E.g. epoll is not really a thread pool but an OS I/O event notification facility that Node.js sits on top of.
