D-Bus threading model - Linux

I am starting to use D-Bus as the IPC mechanism for a new project in Linux/KDE. And I've discovered that the documentation does not really address concurrency at all. How are D-Bus services expected to deal with multiple concurrent calls coming in from different clients? What's the threading model? Can a service assume that it is single-threaded and D-Bus will queue up requests on its own?

As a protocol, D-Bus doesn't address threading.
D-Bus connections receive messages serially. At the protocol level, replies to messages are asynchronous: the sender doesn't have to wait for a reply before sending more messages.
While in principle a D-Bus implementation could dispatch messages to service implementations concurrently, I don't know of any that do this.
Typically, a D-Bus implementation (or "binding", if you will) allows the service to decide for each method (or even for each method call) whether to respond to incoming method calls synchronously or asynchronously. The details of this are up to the particular implementation you're using.
If you're responding to method calls asynchronously, your service implementation is responsible for making sure that any state is kept consistent while multiple responses are pending. If you always respond synchronously, then you know you're only dealing with one method call at a time.
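The distinction can be sketched in Python; this is a conceptual model of a binding's dispatcher, not a real D-Bus API (the names handle_sync, begin_async, and finish_async are invented for illustration):

```python
class Service:
    """Conceptual sketch of how a typical D-Bus binding dispatches method
    calls: messages arrive one at a time, and each handler either replies
    synchronously or defers the reply.  (Illustrative only -- this is not
    a real D-Bus binding API.)"""

    def __init__(self):
        self.pending = {}   # message serial -> state for deferred replies

    def handle_sync(self, serial, payload):
        # Synchronous handler: the reply is produced before the dispatcher
        # moves on, so only one call is ever in flight.
        return ("reply", serial, payload.upper())

    def begin_async(self, serial, payload):
        # Asynchronous handler: record the call and reply later.  While
        # replies are pending, more calls may be dispatched, so the service
        # must keep its own shared state consistent.
        self.pending[serial] = payload

    def finish_async(self, serial):
        payload = self.pending.pop(serial)
        return ("reply", serial, payload.upper())
```

The point of the sketch is the second case: between begin_async and finish_async, `pending` holds state for several in-flight calls at once, which is exactly the consistency burden the answer describes.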

Related

Does nodejs http/s module use worker threads to listen for a request?

I had a conversation with a co-worker that went down a rabbit hole of threads, and I was wondering whether something like Express.js, which uses the Node.js built-in http/https module, uses workers to listen for connections for each network request, or some other design.
Would anyone know how HTTP-type servers normally wait for connections? Threads? Workers?
Does nodejs http/s module use worker threads to listen for a request?
No, it does not. Nodejs is an event driven system. Wherever possible, it uses an underlying asynchronous capability in the OS and that is what it does for all TCP/UDP networking, including http.
So, it uses native asynchronous features in the OS. When some networking event happens (such as an incoming connection or an incoming packet), the OS notifies nodejs and an event is inserted into the event queue. When nodejs is not doing something else, it goes back to the event queue, pulls the next event out, and runs the code associated with that event.
This is all managed for nodejs by the libuv library which provides the OS compatibility layer upon which nodejs runs (on several different platforms). Networking is part of what libuv provides.
This does not involve having a separate thread for every TCP socket. libuv itself uses a few system threads in order to do its job, but it does not create a new thread for each TCP socket you have connected and it's important to note that it is using the native asynchronous networking interfaces in the operating system (not the blocking interfaces).
The libuv project (source included) is worth looking at if you want to learn more about that specific library.
libuv does use a thread pool for some operations that don't have a consistent, reliable asynchronous interface in the OS (like file system access), but that thread pool is not used or needed for networking.
There are also a zillion articles on the web about how various aspects of the nodejs event loop work, including some in the nodejs documentation itself.
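The readiness-notification pattern libuv relies on can be sketched with Python's standard selectors module (which likewise wraps epoll/kqueue/select under the hood); this is an illustrative stand-in, not libuv itself:

```python
import selectors
import socket

# One thread, one selector: the OS tells us which sockets are ready and we
# run the handler registered for each -- no thread per connection.
sel = selectors.DefaultSelector()

def accept(server):
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, read)

def read(conn):
    data = conn.recv(1024)
    if data:
        conn.sendall(data)          # echo the data back
    else:
        sel.unregister(conn)        # peer closed the connection
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))       # ephemeral port for the example
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

def run_once(timeout=1.0):
    # One turn of the event loop: block until something is ready,
    # then dispatch to the registered handler.
    for key, _ in sel.select(timeout):
        key.data(key.fileobj)
```

Calling run_once in a loop is, in miniature, what the nodejs event loop does between callbacks.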

Is it safe in ZeroMQ to zmq_poll() a REP socket + send() from multiple threads?

I am wondering if a ZeroMQ REP socket is allowed to be poll()-ed for incoming data in one thread and used to send data from another thread.
The idea I am trying to follow is the following:
A REP socket is not going to receive anything, as long as it did not send a reply to the incoming request. Thus if a zmq_poll() was called for such a socket, it'd just block (until timeout or forever).
Now, while this socket is part of the zmq_poll() call for incoming data, what happens if another thread prepares a reply and uses this socket to send it?
Is it safe to do so, or are race conditions possible then?
ZeroMQ has been built on a set of a few maxims.
Zero-sharing is one of these core maxims.
While a user can experiment with sharing at their own risk, ZeroMQ best practice is to avoid it, except in a few very specific cases and never at the socket level. Sockets are deliberately not thread-safe, for the sake of higher overall performance and lower latency.
This is why a question like "What happens if another thread ..." may sound legitimate, but falls outside the domain of ZeroMQ best practices.
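The safe alternative these best practices imply is to confine the socket to a single owner thread and have other threads hand it finished replies through a queue. A minimal standard-library sketch of that ownership pattern (a plain list and a Queue stand in for the actual REP socket and zmq_send):

```python
import queue
import threading

# Worker threads never touch the "socket"; they hand replies to the
# owner thread through a thread-safe queue.
replies = queue.Queue()
sent = []   # stands in for calls to zmq_send() on the REP socket

def worker(request):
    # Prepare the reply off-thread; do NOT touch the socket here.
    replies.put(request.upper())

def owner_loop(n):
    # Only this thread "uses the socket": drain the queue and send.
    for _ in range(n):
        sent.append(replies.get())

threads = [threading.Thread(target=worker, args=(r,)) for r in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
owner_loop(2)
```

With real pyzmq the structure is the same: owner_loop would alternate zmq_poll/recv/send on the REP socket, and nothing else would ever hold a reference to it.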

winsock application and multithreading - listening for socket events from another thread

Assume we have an application which uses Winsock to implement TCP communication.
For each socket we create a thread and block on receive.
When data arrives, we would like to notify the other threads (listening threads).
I was wondering what the best way to implement this is:
Move away from this design and use a non-blocking socket; the listening thread will then have to iterate constantly and call a non-blocking receive, making it thread-safe (no extra threads for the sockets).
Use asynchronous procedure calls to notify listening threads - which, again, will have to do an alertable wait for an APC to be queued to them.
Implement some thread-safe message queue, where each socket thread posts messages to it, and the listener, again, goes over it every interval and pulls data from it.
Also, I read about WSAAsyncSelect, but I saw that it is used to send messages to a window. Isn't there something similar for other threads? (Well, I guess APCs are...)
Thanks!
Use I/O completion ports. See the CreateIoCompletionPort() and the GetQueuedCompletionStatus() functions of the Win32 API (under File Management functions). In this instance, the socket descriptors are used in place of file handles.
You'll always be better off abstracting the mechanics of socket API (listening, accepting, reading & writing) in a separate layer from the application logic. Have an object that captures the state of a connection, which is created during an incoming connection and you can maintain buffers in this object for the incoming and outgoing traffic. This will allow your network interface layer to be independent of the application code. This will also make the code cleaner by separating the application functionality from the underlying communication mechanism.
The blocking or non-blocking socket decision depends on the level of scalability your application needs to achieve. If your application needs to support hundreds of incoming connections, a thread-per-socket approach is not going to be very wise; you'll be better off with an I/O completion port based implementation, which will make your app immensely scalable at the cost of added code complexity. However, if you only foresee a few tens of connections at any point in time, you can go for an asynchronous sockets model using Win32 events or messages. The Win32 events based approach doesn't scale very well beyond a certain limit, as you would have to manage multiple threads if the number of concurrent sockets exceeds 63 (WaitForMultipleObjects can only wait on a maximum of 64 handles). The Windows message based mechanism doesn't have this limitation. OTOH, the Win32 event based approach does not require a GUI window to work.
Check out WSAEventSelect along with WSAAsyncSelect API documentation in MSDN.
You might want to take a look at boost::asio package as well. It provides a neat (though a little complex) C++ abstraction over sockets API.
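For what it's worth, the thread-safe queue option from the question can also be built on a blocking queue, which removes the need for the listener to poll at intervals. A sketch (Python's standard library stands in for the Win32 primitives; socketpair stands in for two connected TCP sockets):

```python
import queue
import socket
import threading

# Each blocking-receive thread posts incoming data to one thread-safe
# queue; the listening thread simply blocks on the queue (Queue.get
# blocks, so no polling interval is needed).
events = queue.Queue()

def reader(sock, tag):
    while True:
        data = sock.recv(1024)
        if not data:
            break
        events.put((tag, data))   # notify the listener
    events.put((tag, None))       # signal that this connection closed

a1, a2 = socket.socketpair()
b1, b2 = socket.socketpair()
threading.Thread(target=reader, args=(a2, "A"), daemon=True).start()
threading.Thread(target=reader, args=(b2, "B"), daemon=True).start()

a1.sendall(b"hello")
b1.sendall(b"world")
```

The listener side is then just `tag, data = events.get()` in a loop; in the Win32 version the queue would be guarded by a critical section and paired with an event or condition variable to get the same blocking behaviour.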

Creating a simple Linux API

I have a simple application on an OpenWrt-style router. It's currently written in C++. The router (embedded Linux) has very limited disk space and RAM; for example, there is not enough space to install Python.
So, I want to control this daemon app via the network. I have read some tutorials on creating sockets, and listening to the port for activity. But I haven't been able to integrate the flow into a C++ class. And I haven't been able to figure out how to decode the information received, or how to send a response.
All the tutorials I've read are dead ends: they show you how to make a server that basically just blocks until it receives something, and then returns a message when it got something.
Is there something a little higher-level that can be used for this sort of thing?
Sounds like what you are asking is "how do I build a simple network service that will accept requests from clients and do something in response?" There are a bunch of parts to this -- how do you build a service framework, how do you encode and decode the requests, how do you process the requests and how do you tie it all together?
It sounds like you're having problems with the first and last parts. There are two basic ways of organizing a simple service like this -- the thread approach and the event approach.
In the thread approach, you create a thread for each incoming connection. That thread reads the messages (requests) from that connection (file descriptor), processes them, and writes back responses. When a connection goes away, the thread exits. You have a main 'listening' thread that accepts incoming connections and creates new threads to handle each one.
In the event approach, each incoming request becomes an event. You then have event handlers that process these events, sending back responses. It's important that the event handlers do NOT block and that they complete promptly, otherwise the service may appear to lock up. Your program has a main event loop that waits for incoming events (generally blocking on a single poll or select call) and reads and dispatches each event as appropriate.
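The thread approach fits in a few lines (Python used here for brevity; the structure is identical in C++ with pthreads): a listening thread accepts connections and spawns one handler thread per connection:

```python
import socket
import threading

def handle(conn):
    # One thread per connection: block-read requests, write responses,
    # exit when the peer hangs up.
    with conn:
        while True:
            req = conn.recv(1024)
            if not req:
                break                      # client closed: thread exits
            conn.sendall(b"ok:" + req)     # trivial "process + respond"

def serve(server):
    # The main listening thread: accept and hand off.
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

server = socket.socket()
server.bind(("127.0.0.1", 0))              # ephemeral port for the example
server.listen()
threading.Thread(target=serve, args=(server,), daemon=True).start()
```

Replacing the `b"ok:" + req` line with real request decoding and processing gives you the whole service; the framing and encoding of requests is the part you still have to design.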
I installed python-mini package with opkg, which has socket and thread support.
Works like a charm on a WRT160NL with backfire/10.03.1.

How can a connection be represented only by a small space in a HTTP server running on node.js?

I have read that an HTTP server created in node.js does not create new threads for each incoming connection (request). Instead, it executes a function that has been registered as a callback for the event of receiving a request.
It is said that each connection is represented by some small space in the heap. I cannot figure this out. Are connections not represented by sockets? Shouldn't a socket be opened for every connection made to the node.js server, which would mean each connection cannot be represented by just a space allocation in the JavaScript heap?
It is described on the nodejs.org website that instead of spawning threads (2 MB of overhead per thread!) per connection, the server uses select(), epoll, kqueue or /dev/poll to wait until a socket is ready to read or write. It is this method that allows node to avoid spawning a thread per connection; the per-connection overhead is just the heap allocation associated with the socket descriptor. This implementation detail is largely hidden from developers, and the net.Socket API exposed by the runtime provides everything you need to take advantage of that feature without even thinking about it.
Node also exposes its own event API through events.EventEmitter. Many node objects implement events to provide asynchronous (non-blocking) event notification, which is perfect for I/O operations, which in other languages - such as PHP - are synchronous (blocking) by default. In the case of the node net.socket API, events are triggered for several API methods dealing with socket I/O, and the callbacks that are passed by parameter to these methods are triggered when an event occurs. Events can have callback functions bound to them in a variety of different ways, accepting a callback function as a parameter is only a convenience for the developer.
Finally, do not confuse OS events with nodejs events. In the case of the net API, OS events are passed to the nodejs runtime, but nodejs events are javascript.
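The EventEmitter pattern itself is tiny; a minimal Python stand-in (not node's full events API) shows how callbacks are bound to named events and invoked when the event is emitted:

```python
class EventEmitter:
    """Minimal stand-in for node's events.EventEmitter: callbacks are
    registered per event name and all of them run on emit."""

    def __init__(self):
        self._listeners = {}

    def on(self, event, callback):
        # Bind a callback to a named event (several may be bound).
        self._listeners.setdefault(event, []).append(callback)

    def emit(self, event, *args):
        # Invoke every callback bound to this event, in order.
        for callback in self._listeners.get(event, []):
            callback(*args)

# Usage: mirrors `socket.on("data", cb)` in node's net API.
log = []
sock = EventEmitter()
sock.on("data", lambda chunk: log.append(chunk))
sock.emit("data", b"hello")
```

In node, the runtime calls the equivalent of emit when the underlying OS event arrives; your code only ever sees the JavaScript-level callback.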
I hope this helps.
