I am learning Linux internals and came across the poll system call. As far as I understand, it is implemented by drivers to provide notification when data is ready to be read from a device, and when the device is ready to accept data for writing.
If the device has no data to read, the process is put to sleep and woken up when data becomes available, and vice versa for the write case.
Can someone give me a concrete understanding of the poll system call with a real example?
The poll and select system calls (the latter is very similar to poll, with a few differences) are used in the so-called asynchronous event-driven approach to handling clients' requests.
Basically, in network programming there are two major strategies for handling many connections from network clients by the server:
1) The more traditional threaded or process-oriented approach. In this situation the network server has a main process which listens on one specific network port (port 80 in the case of web servers) for incoming connections, and when a connection arrives it spawns a new thread/process to handle that new connection. The Apache HTTP server took this approach.
2) The aforementioned asynchronous event-driven approach, where (in the simplest case) the network server (for example a web server) is an application with only one process; it accepts connections (creating a socket for each new client) and then monitors those sockets with poll/select for incoming data. The Nginx web server took this approach.
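To make that concrete, here is a minimal sketch of the event-driven style in C++ using poll() on a POSIX system. It is illustrative only: error handling is omitted, and the port number and buffer size are arbitrary choices for the example.

    // Minimal poll()-based event loop (sketch; error handling omitted).
    #include <netinet/in.h>
    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <vector>

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);            // arbitrary port for the example
        bind(listener, (sockaddr*)&addr, sizeof(addr));
        listen(listener, SOMAXCONN);

        std::vector<pollfd> fds = {{listener, POLLIN, 0}};
        while (true) {
            poll(fds.data(), fds.size(), -1);   // sleep until some fd is ready
            for (size_t i = 0; i < fds.size(); ++i) {
                if (!(fds[i].revents & POLLIN))
                    continue;
                if (fds[i].fd == listener) {
                    // Readable listener => new connection: accept and watch it too.
                    fds.push_back({accept(listener, nullptr, nullptr), POLLIN, 0});
                } else {
                    char buf[4096];
                    ssize_t n = read(fds[i].fd, buf, sizeof(buf));
                    if (n <= 0) {               // client closed (or error)
                        close(fds[i].fd);
                        fds.erase(fds.begin() + i);
                        --i;
                    }
                    // else: process the n bytes in buf ...
                }
            }
        }
    }

One process, one loop: every blocking wait is concentrated in the single poll() call, which is exactly the structure the event-driven servers mentioned above are built around.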
Related
Had a conversation with a co-worker that went down a rabbit hole about threads, and I was wondering whether something like expressjs, which uses the nodejs built-in https module, uses workers to listen for connections for each network request, or some other design.
Would anyone know how HTTP servers normally wait for connections? Threads? Workers?
Does the nodejs http/https module use worker threads to listen for requests?
No, it does not. Nodejs is an event driven system. Wherever possible, it uses an underlying asynchronous capability in the OS and that is what it does for all TCP/UDP networking, including http.
So, it uses native asynchronous features in the OS. When some networking event happens (such as an incoming connection or an incoming packet), the OS notifies nodejs and an event is inserted into the event queue. When nodejs is not doing something else, it goes back to the event queue, pulls the next event out, and runs the code associated with that event.
This is all managed for nodejs by the libuv library which provides the OS compatibility layer upon which nodejs runs (on several different platforms). Networking is part of what libuv provides.
This does not involve having a separate thread for every TCP socket. libuv itself uses a few system threads in order to do its job, but it does not create a new thread for each TCP socket you have connected and it's important to note that it is using the native asynchronous networking interfaces in the operating system (not the blocking interfaces).
The libuv project (including the source) is at https://github.com/libuv/libuv if you want to learn more about that specific library.
libuv does use a thread pool for some operations that don't have a consistent, reliable asynchronous interface in the OS (like file system access), but that thread pool is not used or needed for networking.
There are also a zillion articles on the web about how various aspects of the nodejs event loop work, including some in the nodejs documentation itself.
I have a C++ client binary that uses epoll() to block on various OS descriptors - for file I/O, socket I/O, OS timers; and now it also needs to be a gRPC client (including streaming replies).
Reading answers to related questions across the web (e.g. about servers) it appears that there is no easy way from C/C++ to ask gRPC to give an fd that can be incorporated into the existing epoll set and then trigger gRPC to do the read & processing for the incoming response. Is that correct?
Alternatively, is the reverse possible: to use files, sockets and timers via the gRPC core iomgr framework that are otherwise unrelated to the gRPC service? (For reading local files, communicating with external network equipment and managing the client's internal high-frequency timer needs.)
The client in question is a single thread with RT priority (on an embedded (soft) real-time system using PREEMPT_RT). Given that gRPC creates other threads, could that be a problem?
Unfortunately, this isn't possible today, but it will be in the future once we finish the EventEngine effort. Once ready, that will allow you to inject your own event loop into gRPC. However, that probably won't be ready for public use until sometime next year.
For now, the only suggestion I can offer is that if you're using an insecure channel and don't need any name resolution or load balancing functionality, you may be able to use CreateInsecureChannelFromFd() to do the reverse (provide your own fd to use as a gRPC connection).
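As a rough sketch of that reverse direction (my reading of the API, not an official recipe): you connect() the socket yourself and then hand the fd to gRPC. MyService and channel_from_fd below are placeholders for illustration, and "localhost" is just the authority string.

    #include <grpcpp/create_channel_posix.h>
    #include <grpcpp/grpcpp.h>

    // 'fd' must already be a connected socket that gRPC may take over.
    std::shared_ptr<grpc::Channel> channel_from_fd(int fd) {
        return grpc::CreateInsecureChannelFromFd("localhost", fd);
    }
    // Then e.g.: auto stub = MyService::NewStub(channel_from_fd(fd));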
I know that a server normally opens one port and listens on it.
Today I learnt that there is a function select on Unix-like systems. With select we can listen on multiple sockets.
I just can't imagine a case where we need to use select. If we have two sockets, it means that we are listening on two ports, right? So I have a question:
What kind of server would open more than one port but receive and process the same type of requests?
Using select helps with handling reads and writes on multiple sockets. It doesn't have to be multiple server sockets. The most typical use is for multiplexing a large number of client sockets.
You have a server with one listening socket. Each time you accept a connection, you add the new client socket to the multiplexing pool. select then returns any time any of those sockets has data available to read. The big win is that you're doing all this with one thread.
You also get a socket for each connection that you've accepted on the listening (server) socket.
Selecting among these (client) sockets and the server socket (readable => new connection) allows you to write apps such as chat servers efficiently.
Ummm... remember the difference between ports and sockets.
A "port" is like a telephone-number. But a single phone-number could be handling any number of "calls!"
A "socket," then, represents a single telephone-call: a currently active connection between this server and a particular client. Each connection, by definition, "takes place over a particular port," but any number of connections might exist at the same time.
(The "accept" operation corresponds to: picking up the phone.)
So, then, what select() buys you is the ability to monitor any number of sockets at one time. It examines all the sockets, waits (if necessary) for something to happen on any one of them, and returns one message to you. Now, the design of your server becomes "a simple loop." No matter how many sockets you're listening to, and no matter how many of them have messages waiting, select() will return messages to you one at a time.
It's basically the case that "every server out there will use a select() loop at its heart, unless there's an exceptionally wonderful reason not to."
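For illustration, a stripped-down version of such a select() loop in C++ might look like this (error handling omitted; the port is arbitrary; note that select() is limited to FD_SETSIZE descriptors, one reason large servers prefer poll/epoll):

    #include <netinet/in.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <set>

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);                  // arbitrary example port
        bind(listener, (sockaddr*)&addr, sizeof(addr));
        listen(listener, SOMAXCONN);

        std::set<int> clients;                        // the "multiplexing pool"
        while (true) {
            // Rebuild the read set each pass: the listener plus every client.
            fd_set readable;
            FD_ZERO(&readable);
            FD_SET(listener, &readable);
            int maxfd = listener;
            for (int fd : clients) {
                FD_SET(fd, &readable);
                if (fd > maxfd) maxfd = fd;
            }

            select(maxfd + 1, &readable, nullptr, nullptr, nullptr); // one blocking call

            if (FD_ISSET(listener, &readable))        // readable listener => new connection
                clients.insert(accept(listener, nullptr, nullptr));

            for (auto it = clients.begin(); it != clients.end(); ) {
                if (FD_ISSET(*it, &readable)) {
                    char buf[4096];
                    ssize_t n = read(*it, buf, sizeof(buf));
                    if (n <= 0) {                     // client hung up
                        close(*it);
                        it = clients.erase(it);
                        continue;
                    }
                    // ... handle the n bytes, e.g. echo or broadcast them
                }
                ++it;
            }
        }
    }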
Take a look here:
One traditional way to write network servers is to have the main server block on accept(), waiting for a connection. Once a connection comes in, the server fork()s, the child process handles the connection and the main server is able to service new incoming requests.

With select(), instead of having a process for each request, there is usually only one process that "multi-plexes" all requests, servicing each request as much as it can.

So one main advantage of using select() is that your server will only require a single process to handle all requests. Thus, your server will not need shared memory or synchronization primitives for different 'tasks' to communicate.

One major disadvantage of using select(), is that your server cannot act like there's only one client, like with a fork()'ing solution. For example, with a fork()'ing solution, after the server fork()s, the child process works with the client as if there was only one client in the universe -- the child does not have to worry about new incoming connections or the existence of other sockets. With select(), the programming isn't as transparent.
http://www.lowtek.com/sockets/select.html
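For contrast, here is a trimmed sketch of the fork()'ing pattern described above. serve_forking is just an illustrative helper name, and 'listener' is assumed to be an already bound and listening TCP socket:

    #include <csignal>
    #include <sys/socket.h>
    #include <unistd.h>

    // Traditional fork()-per-connection accept loop (sketch; error checks omitted).
    void serve_forking(int listener) {
        signal(SIGCHLD, SIG_IGN);       // have the kernel reap exited children
        while (true) {
            int client = accept(listener, nullptr, nullptr); // block for a connection
            if (fork() == 0) {          // child: sees exactly one client
                close(listener);        // the child never accepts new connections
                // ... talk to 'client' as if it were the only client in the universe
                close(client);
                _exit(0);
            }
            close(client);              // parent: drop its copy, loop back to accept()
        }
    }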
This applies to non-user-facing backend applications communicating with each other through HTTP. I'm wondering if there is a guideline for a maximum timeout for a synchronous HTTP request. For example, let's say a request can take up to 10 minutes to complete. Can I simply create a worker thread on the client and, in the worker thread, invoke the request synchronously? Or should I implement the request asynchronously: return HTTP 202 Accepted, spin off a worker thread on the server side to complete the request, and figure out a way to send the results back, presumably through a messaging framework?
One of my concerns: is it safe to keep a socket open for an extended period of time?
How long a socket connection can remain open (without activity) depends on the (quality of the) network infrastructure.
A client HTTP request waiting for an answer from a server results in an open socket connection without any data going through that connection for a while. A proxy server might decide to close such inactive connections after 5 minutes. Similarly, a firewall can decide to close connections that are open for more than 30 minutes, active or not.
But since you are in the backend, these cases can be tested (just let the server thread handling the request sleep for a certain time before giving an answer). Once it is verified that socket connections are not closed by different network components, it is safe to rely on socket connections to remain open. Keep in mind though that network cables can be unplugged and servers can crash - you will always need a strategy to handle disruptions.
As for synchronous versus asynchronous: both are feasible and both have advantages and disadvantages. But what is right for you depends on a whole lot more than just the reliability of socket connections.
I should state that I'm not asking about specific implementation details (yet), but just a general overview of what's going on. I understand the basic concept behind a socket, and need clarification on the process as a whole. My (probably very wrong) understanding is currently this:
A socket is constantly listening for clients that want to connect (in its own thread). When a connection occurs, an event is raised that spawns another thread to perform the connection process. During the connection process the client is assigned its own socket with which to communicate with the server. The server then waits for data from the client, and when data arrives an event is raised which spawns a thread to read the data from a stream into a buffer.
My questions are:
How off is my understanding?
Does each client socket require its own thread to listen for data on?
How is data routed to the correct client socket? Is this something taken care of by the guts of TCP/UDP/kernel?
In this threaded environment, what kind of data is typically being shared, and what are the points of contention?
Any clarifications and additional explanation would be greatly appreciated.
EDIT:
Regarding the question about what data is typically shared and points of contention, I realize this is more of an implementation detail than it is a question regarding general process of accepting connections and sending/receiving data. I had looked at a couple implementations (SuperSocket and Kayak) and noticed some synchronization for things like session cache and reusable buffer pools. Feel free to ignore this question. I've appreciated all your feedback.
One thread per connection is bad design (not scalable, overly complex) but unfortunately way too common.
A socket server works more or less like this:
1) A listening socket is set up to accept connections and added to a socket set
2) The socket set is checked for events
3) If the listening socket has pending connections, new sockets are created by accepting the connections and then added to the socket set
4) If a connected socket has events, the relevant IO functions are called
5) The socket set is checked for events again
This happens in one thread; you can easily handle thousands of connected sockets in a single thread, and there are few valid reasons for making this more complex by introducing threads.
    while running
        select on socketset
        for each socket with events
            if socket is listener
                accept new connected socket
                add new socket to socketset
            else if socket is connection
                if event is readable
                    read data
                    process data
                else if event is writable
                    write queued data
                else if event is closed connection
                    remove socket from socketset
                end
            end
        done
    done
The IP stack takes care of all the details of which packets go to which "socket" in what order. Seen from the application's point of view, a socket represents a reliable ordered byte stream (TCP) or an unreliable unordered sequence of packets (UDP).
EDIT: In response to the updated question.
I don't know either of the libraries you mention, but as for the concepts:
A session cache typically keeps data associated with a client, and can reuse this data for multiple connections. This makes sense when your application logic requires state information, but it's a layer higher than the actual networking end. In the above sample, the session cache would be used by the "process data" part.
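As a toy illustration of the idea (my own sketch, not taken from either library):

    #include <string>
    #include <unordered_map>

    // Minimal session cache: per-client state that can outlive one connection.
    struct Session {
        std::string user;
        int requests_seen = 0;
    };

    std::unordered_map<std::string, Session> sessions; // keyed by e.g. a session token

    // Called from the "process data" step of the loop above.
    Session& session_for(const std::string& token) {
        return sessions[token]; // creates a fresh Session on first use
    }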
Buffer pools are also an easy and often effective optimization for a high-traffic server. The concept is simple to implement: instead of allocating/deallocating space for the data you read/write, you fetch a preallocated buffer from a pool, use it, then return it to the pool. This avoids the (sometimes relatively expensive) allocation/deallocation machinery. It is not directly related to networking; you can just as well use buffer pools for, e.g., something that reads chunks of files and processes them.
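A minimal single-threaded sketch of the concept (a real pool in a threaded server would need locking, and BufferPool is just an illustrative name):

    #include <cstddef>
    #include <utility>
    #include <vector>

    // Toy buffer pool: recycle fixed-size buffers instead of re-allocating.
    class BufferPool {
    public:
        BufferPool(size_t count, size_t size)
            : size_(size), free_(count, std::vector<char>(size)) {}

        std::vector<char> acquire() {
            if (free_.empty())
                return std::vector<char>(size_);  // pool exhausted: fall back to a fresh buffer
            std::vector<char> buf = std::move(free_.back());
            free_.pop_back();
            return buf;
        }

        void release(std::vector<char> buf) {
            free_.push_back(std::move(buf));      // hand the buffer back for reuse
        }

    private:
        size_t size_;
        std::vector<std::vector<char>> free_;
    };

In the loop above, "read data" would call acquire() and the code would call release() once it is done processing, so steady-state traffic causes no allocations at all.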
How off is my understanding?
Pretty far.
Does each client socket require its own thread to listen for data on?
No.
How is data routed to the correct client socket? Is this something taken care of by the guts of TCP/UDP/kernel?
TCP/IP is a number of layers of protocol, implemented as separate pieces, each with its own API to the other pieces. And yes, the demultiplexing is taken care of for you by the OS's network stack.
The IP address is handled in one place.
The port # is handled in another place.
The IP addresses are matched up with MAC addresses to identify a particular host. The port # is what ties a TCP (or UDP) socket to a particular piece of application software.
In this threaded environment, what kind of data is typically being shared, and what are the points of contention?
What threaded environment?
Data sharing? What?
Contention? The physical channel is the number one point of contention. (Ethernet, for example, depends on collision detection.) After that, well, every part of the computer system is a scarce resource shared by multiple applications and is a point of contention.