Golang - do libraries need to be non-blocking?

My understanding is that non-blocking web servers (node.js, eventmachine, tornado) can grind to a halt if they make a call to a blocking library. Is this true for Golang as well? If one goroutine gets blocked, is another one automatically given access to the CPU, or do they have to wait for the blocked goroutine to 'yield'? If it is the former, then libraries don't need to be non-blocking, do they? I ask because I haven't seen any Redis/Mongo libraries that explicitly state that they're non-blocking.

My understanding is that non-blocking web servers (node.js,
eventmachine, tornado) can grind to a halt if they make a call to a
blocking library. Is this true for Golang as well?
No it isn't. Goroutines yield on IO, and the runtime will create new OS threads as necessary.
If one goroutine gets blocked, is another one automatically given
access to the CPU
Yes it is - goroutines yield on any sort of IO or channel communication.
or do they have to wait for the blocked goroutine to 'yield'?
No they don't.
If it is the former, then libraries don't need to be non-blocking, do
they? I ask because I haven't seen any Redis/Mongo libraries that
explicitly state that they're non-blocking.
No, libraries (and Go code in general) don't need to be non-blocking, which makes them much easier to write and maintain. This is a major plus point of Go in my opinion. The runtime does the clever bit of running thousands of goroutines, and you just write simple imperative code.

Related

How do worker threads work in Node.js?

Node.js cannot have a built-in thread API like Java and .NET do. If threads are added, the nature of the language itself will change. It's not possible to add threads as a new set of available classes or functions.
Node.js 10.x added worker threads as an experiment, and they have been stable since 12.x. I have gone through a few blogs but did not understand much, maybe due to a lack of knowledge. How are they different from threads in languages like Java?
Worker threads in Javascript are somewhat analogous to WebWorkers in the browser. They do not share direct access to any variables with the main thread or with each other and the only way they communicate with the main thread is via messaging. This messaging is synchronized through the event loop. This avoids all the classic race conditions that multiple threads have trying to access the same variables because two separate threads can't access the same variables in node.js. Each thread has its own set of variables and the only way to influence another thread's variables is to send it a message and ask it to modify its own variables. Since that message is synchronized through that thread's event queue, there's no risk of classic race conditions in accessing variables.
Java threads, on the other hand, are similar to C++ or native threads in that they share access to the same variables and the threads are freely timesliced so right in the middle of functionA running in threadA, execution could be interrupted and functionB running in threadB could run. Since both can freely access the same variables, there are all sorts of race conditions possible unless one manually uses thread synchronization tools (such as mutexes) to coordinate and protect all access to shared variables. This type of programming is often the source of very hard to find and next-to-impossible to reliably reproduce concurrency bugs. While powerful and useful for some system-level things or more real-time-ish code, it's very easy for anyone but a very senior and experienced developer to make costly concurrency mistakes. And, it's very hard to devise a test that will tell you if it's really stable under all types of load or not.
node.js attempts to avoid the classic concurrency bugs by separating the threads into their own variable space and forcing all communication between them to be synchronized via the event queue. This means that threadA/functionA is never arbitrarily interrupted while some other code in your process changes shared variables it was accessing while it wasn't looking.
node.js also has a backstop that it can run a child_process that can be written in any language and can use native threads if needed or one can actually hook native code and real system level threads right into node.js using the add-on SDK (and it communicates with node.js Javascript through the SDK interface). And, in fact, a number of node.js built-in libraries do exactly this to surface functionality that requires that level of access to the nodejs environment. For example, the implementation of file access uses a pool of native threads to carry out file operations.
So, with all that said, there are still some types of race conditions that can occur and this has to do with access to outside resources. For example if two threads or processes are both trying to do their own thing and write to the same file, they can clearly conflict with each other and create problems.
So, using Workers in node.js still has to be aware of concurrency issues when accessing outside resources. node.js protects the local variable environment for each Worker, but can't do anything about contention among outside resources. In that regard, node.js Workers have the same issues as Java threads and the programmer has to code for that (exclusive file access, file locks, separate files for each Worker, using a database to manage the concurrency for storage, etc...).
This comes down to the Node.js architecture. Whenever a request reaches Node, it is passed to the event queue and then to the event loop. The event loop checks whether the request is blocking or non-blocking I/O (blocking I/O meaning operations that take time to complete, e.g. fetching data from somewhere else). The event loop passes blocking I/O to the thread pool, which is a collection of worker threads. The blocking operation gets attached to one of the worker threads, which performs it (e.g. fetching data from a database); on completion, the result is sent back to the event loop and from there on to execution.

QSerialPort - Is it possible to read() and write() on separate threads?

We have a DLL that provides an API for a USB device we make that can appear as a USB CDC com port. We actually use a custom driver on windows for best performance along with async i/o, but we have also used serial port async file i/o in the past with reasonable success as well.
Latency is very important in this API when it is communicating with our device, so we have structured our library so that when applications make API calls to execute commands on the device, those commands turn directly into writes on the API caller's thread so that there is no waiting for a context switch. The library also maintains a listening thread which is always waiting using wait objects on an async read for new responses. These responses get parsed and inserted into thread-safe queues for the API user to read at their convenience.
So basically, we do most of our writing in the API caller's thread, and all of our reading in a listening thread. I have tried porting a version of our code over to using QSerialPort instead of native serial file i/o for Windows and OSX, but I am running into an error whenever I try to write() from the caller's thread (the QSerialPort is created in the listening thread):
QObject: Cannot create children for a parent that is in a different thread.
which seems to be due to the creation of another QObject-based WriteOverlappedCompletionNotifier for the notifiers pool used by QSerialPortPrivate::startAsyncWrite().
Is the current 5.2 version of QSerialPort limited to only doing reads and writes on the same thread? This seems very unfortunate as the underlying operating systems do not have any such thread limitations for serial port file i/o. As far as I can tell, the issue mainly has to do with the fact that all of QSerialPort's notifier classes are based on QObject.
Does anyone have a good workaround for this? I might try building my own QSerialPort that uses notifiers not based on QObject to see how far that gets me. The only real advantage QObject seems to be giving here is in the destruction of the notifiers when the port closes.
Minimal Impact Solution
You're free to inspect the QSerialPort and QIODevice code and see what would need to change to make the write method(s) thread-safe for access from one thread only. The notifiers don't need to be children of the QSerialPort at all, they could be added to a list of pointers that's cleaned up upon destruction.
My guess is that perhaps no other changes are necessary to the mainline code, and only mutex protection is needed for access to error state, but you'd need to confirm that. This would have lowest impact on your code.
If you care about release integrity, you should be compiling Qt yourself anyway, and you should have it as part of your own source code repository, too. So none of this should be any problem at all.
On the Performance
"those commands turn directly into writes on the API caller's thread so that there is no waiting for a context switch" Modern machines are multicore and multiple threads can certainly run in parallel without any context switching. The underlying issue is, though: why bother? If you need hard-realtime guarantees, you need a hard-realtime system. Otherwise, nothing in your system should care about such minuscule latency. If you're doing this only to make the GUI feel responsive, there's really no point to such overcomplication.
A Comms Thread Approach
What I do, with plenty of success, and excellent performance, is to have the communications protocol and the communications port in the same, dedicated thread, and the users in either the GUI thread, or yet other thread(s). The communications port is generally a QIODevice, like QTcpSocket, QSerialPort, QLocalSocket, etc. Since the communications protocol object is "just" a QObject, it can also live, with the port, in the GUI thread for demonstration purposes - it's designed fully asynchronously anyway, and doesn't block for anything but the most trivial of computations.
The communications protocol is queuing multiple requests for execution. Even on a single-core machine, once the GUI thread is done submitting all of the requests, the further execution is all in the communications thread.
The QSerialPort implementation uses asynchronous OS APIs. There's little to no benefit to further processing those async replies on separate threads. Those operations have very low overhead and you will not gain anything measurable in your latency by trying to do so. Remember: this is not your code, but merely code that pushes bytes between buffers. Yes, the context switch overhead may be there on heavily loaded or single-core systems, but unless you can measure the difference between its presence and absence, you're fighting imaginary problems.
It is possible to use any QObject from multiple threads, of course, as long as you serialize the access to it via the event queue mutex. This is done for you whenever you use the QMetaObject::invokeMethod or signal-slot connections.
So, add a trivial wrapper around QSerialPort that exposes the write as a thread-safe method. Internally, it should use a signal-slot connection. You can call this thread-safe write from any thread. The overhead in such a call is a mutex lock and 2+n malloc/free calls, where n is the non-zero number of arguments.
In your wrapper, you can also process the readyRead signal, and emit a signal with received data. That signal can be processed by a QObject living in another thread.
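For illustration, here is a minimal sketch of such a wrapper; the class and member names are mine, not Qt API. The proxy lives in the port's thread, so the queued writeRequested signal marshals every write through that thread's event loop, and write() becomes safe to call from any thread:

#include <QtSerialPort/QSerialPort>

class SerialPortProxy : public QObject {
    Q_OBJECT
public:
    explicit SerialPortProxy(QSerialPort *port) : m_port(port) {
        // Queued connection: doWrite() always executes on the proxy's thread.
        connect(this, &SerialPortProxy::writeRequested,
                this, &SerialPortProxy::doWrite, Qt::QueuedConnection);
        connect(m_port, &QSerialPort::readyRead,
                this, &SerialPortProxy::onReadyRead);
        moveToThread(m_port->thread()); // live with the port so slots run there
    }
    // Thread-safe: callable from any thread, costs one queued event per call.
    void write(const QByteArray &data) { emit writeRequested(data); }
signals:
    void writeRequested(const QByteArray &data);
    void dataReceived(const QByteArray &data); // connect to this from any thread
private slots:
    void doWrite(const QByteArray &data) { m_port->write(data); }
    void onReadyRead() { emit dataReceived(m_port->readAll()); }
private:
    QSerialPort *m_port;
};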
Overall, if you do the measurements correctly, and if your port thread's implementation is correct, you should find no benefit whatsoever to all this complication.
If your communications protocol does heavy data processing, this should be factored out. It could go into a separate QObject that can then run on its own thread. Or, it can be simply done using dedicated functors that are executed by QtConcurrent::run.
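If the processing does need to be factored out, here is a sketch of the QtConcurrent::run variant; parseFrame and processAsync are hypothetical names, not part of any library. The parse runs on Qt's global thread pool, and a QFutureWatcher delivers the result back through the event loop:

#include <QtConcurrent/QtConcurrent>
#include <QFutureWatcher>

QByteArray parseFrame(const QByteArray &frame); // hypothetical heavy parser

void processAsync(const QByteArray &frame) {
    auto *watcher = new QFutureWatcher<QByteArray>;
    QObject::connect(watcher, &QFutureWatcher<QByteArray>::finished, [watcher] {
        QByteArray parsed = watcher->result(); // delivered via the event loop
        // ... hand the parsed data to its consumers ...
        watcher->deleteLater();
    });
    watcher->setFuture(QtConcurrent::run(parseFrame, frame));
}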
What if you use QSerialPort to open and configure the serial port, and QSocketNotifier to monitor for read activity (and other QSocketNotifier instances for write completion and error handling, if necessary)?
QSerialPort::handle should give you the file descriptor you need. On Windows, if that function returns a Windows HANDLE, you can use _open_osfhandle to get a file descriptor.
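A minimal sketch of that approach on POSIX (the device path is a placeholder): QSerialPort only opens and configures the port, while a QSocketNotifier watches the raw descriptor and reads go through plain ::read(), bypassing QSerialPort's own buffering:

#include <QCoreApplication>
#include <QSocketNotifier>
#include <QtSerialPort/QSerialPort>
#include <unistd.h>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    QSerialPort port;
    port.setPortName("/dev/ttyUSB0"); // placeholder device name
    if (!port.open(QIODevice::ReadWrite))
        return 1;

    // Watch the raw descriptor ourselves instead of relying on
    // QSerialPort's read notifications.
    QSocketNotifier notifier(port.handle(), QSocketNotifier::Read);
    QObject::connect(&notifier, &QSocketNotifier::activated, [&](int fd) {
        char buf[256];
        ssize_t n = ::read(fd, buf, sizeof buf);
        if (n > 0) { /* hand the bytes to the protocol layer */ }
    });

    return app.exec();
}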
As a follow up, shortly after this discussion I did implement my own thread-safe serial port code for POSIX systems using select() and the like and it is working well on multiple threads in conjunction with Qt and non-Qt applications alike. Basically, I have abandoned using QtSerialPort at all.

libuv uses blocking file system calls internally – Why? How?

I just learned that libuv, Node.js's crown jewel, uses blocking system calls for file operations. The asynchronous behavior is implemented with threads! That raises two questions (I only care about Unix):
Why is it not using the non-blocking filesystem calls like it does for networking?
If there are one million outstanding file reads, it probably does not launch one million threads... What does libuv do??
Most likely to support synchronous operations such as fs.renameSync() vs fs.rename().
It uses a thread pool, as explained in the "Note" at the link you provided.
[...] but invoke these functions in a thread pool and notify watchers registered with the event loop when application interaction is required.
So, it creates a limited number of threads and reuses them as they become available.
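As a rough sketch of what this looks like at the C API level (the file path and buffer size here are arbitrary): uv_fs_read() returns immediately, the blocking read() runs on one of the pool's worker threads, and the callback fires back on the loop thread. The pool defaults to 4 threads and can be resized with the UV_THREADPOOL_SIZE environment variable.

#include <uv.h>
#include <fcntl.h>
#include <stdio.h>

static char buffer[1024];
static uv_buf_t iov;
static uv_fs_t open_req, read_req;

static void on_read(uv_fs_t *req) {
    // Runs on the loop thread after a pool worker finished the blocking read().
    printf("read %zd bytes\n", (ssize_t)req->result);
    uv_fs_req_cleanup(req);
}

static void on_open(uv_fs_t *req) {
    iov = uv_buf_init(buffer, sizeof buffer);
    // Queued to the thread pool; this call itself does not block.
    uv_fs_read(uv_default_loop(), &read_req, (uv_file)req->result,
               &iov, 1, 0, on_read);
    uv_fs_req_cleanup(req);
}

int main(void) {
    uv_fs_open(uv_default_loop(), &open_req, "/etc/hostname",
               O_RDONLY, 0, on_open);
    return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}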
Also, regarding the quip of "crown jewel:" Node.js and libuv aren't magic. They're good tools to have at your disposal, but certainly have their limitations.
Though, the hyperbole of "one million file reads" would be a stretch for any platform to manage without constraint.
The same non-blocking API cannot be used, as O_NONBLOCK and friends don't work on regular files! On Linux, AIO is available, but it has its own quirks (i.e. it depends on the filesystem and turns silently blocking for some operations). A quick demonstration follows below.
I have no idea.
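To make the first point concrete, here is a minimal demonstration (the file path is arbitrary): opening a regular file with O_NONBLOCK succeeds, but read() never returns EAGAIN; it simply blocks until the disk I/O completes.

#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // O_NONBLOCK is accepted but silently ignored for regular files:
    int fd = open("/etc/hostname", O_RDONLY | O_NONBLOCK);
    if (fd < 0)
        return 1;
    char buf[256];
    // Never fails with EAGAIN; the kernel blocks us until the data arrives.
    ssize_t n = read(fd, buf, sizeof buf);
    printf("read %zd bytes\n", n);
    close(fd);
    return 0;
}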

multithread boost-asio server (vs boost async server tutorial)

I'm following the boost-asio tutorial and don't know how to make a multi-threaded server using boost. I've compiled and tested the daytime client and daytime synchronous server and improved the communication (the server asks the client for a command, processes it and then returns the result to the client). But this server can handle only one client at a time.
I would like to use boost to make a multi-threaded server. There is also daytime asynchronous server which executes
boost::asio::io_service io_service;
tcp_server server(io_service);
io_service.run();
in the main program function. The question is: is boost creating a thread for each client somewhere inside? Is this a multi-threaded solution? If not, how do I make a multi-threaded server with boost? Thanks for any advice.
Have a look at this tutorial. In short:
io_service.run() called from multiple threads gives you a thread pool (see the sketch below)
multiple io_services give you completely separated threads
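Here is a minimal sketch of the first pattern, assuming four worker threads (the count is arbitrary); once several threads call run() on the same io_service, any posted handler or completion handler may execute on any of them:

#include <boost/asio.hpp>
#include <thread>
#include <vector>

int main() {
    boost::asio::io_service io_service;

    // Post some work first; without a work guard, run() returns once idle.
    // (A long-lived server would keep an io_service::work object alive
    // so the pool threads never run out of work and exit.)
    io_service.post([] { /* handle one unit of work */ });

    // Four threads calling run() on the same io_service form a thread pool:
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([&io_service] { io_service.run(); });

    for (auto &t : pool)
        t.join();
}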
You don't need to explicitly work with threads when you want to support multiple clients. But for that you should use asynchronous calls (as opposed to the synchronous ones used in the tutorials you listed). Have a look at the asynchronous echo TCP server example; it serves multiple clients without using threads.
is boost creating a thread for each client somewhere inside?
When working with asynchronous calls, Boost.Asio does these things behind the scenes. It could use threads, but it usually doesn't, because there are other, preferred mechanisms for working with multiple sockets at once. For example, on Linux you have epoll, select and poll (in order of preference). I'm not sure what the situation is on Windows; there might be other mechanisms or the preference order might be different. But in any case, Boost.Asio takes care of this, chooses the best mechanism there is for your platform and hides it behind those asynchronous calls.

Is there a use case for non-blocking receive when I have threads?

I know non-blocking receive is not used as much in message passing, but some intuition still tells me it is needed. Take, for example, GUI event-driven applications: you need some way to wait for a message in a non-blocking way so your program can execute some computations in the meantime. One of the ways to solve this is to have a special thread with a message queue. Is there some use case where you would really need non-blocking receive even if you have threads?
Threads work differently than non-blocking asynchronous operations, although you can usually achieve the same effect with threads that do synchronous operations. In the end, however, it boils down to handling things more efficiently.
Threads are limited resources, and should be used to process long running, active operations. If you have something that is not really active doing things, but need to wait idly for some time for the result (think some I/O operation over the network like calling web services or database servers), then it is better to use the provided asynchronous alternative for it instead of wasting threads unnecessarily by putting the synchronous call on another thread.
You can have a good read on this issue here for more understanding.
One thread per connection is often not a good idea (wasted memory; not all OSes handle huge thread counts well; etc.).
How do you interrupt the blocking receive call? On Linux, for example (and probably on some other POSIX OSes), pthreads + signals = disaster. With a non-blocking receive you can multiplex your wait on the receiving socket and some kind of IPC socket used to communicate between your threads (see the self-pipe sketch after this answer). This also maps to the Windows world relatively easily.
If you need to replace your regular socket with something more complex (e.g. OpenSSL), relying on the blocking behavior can get you in trouble. OpenSSL, for example, can get deadlocked on a blocking socket, because the SSL protocol has sender/receiver inversion scenarios where a receive cannot proceed before some sending is done.
My experience has been -- "when in doubt use non-blocking sockets".
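To illustrate the multiplexed wait mentioned above, here is a minimal self-pipe sketch for POSIX (the function names are mine): select() waits on both the data socket and a pipe whose only job is to let another thread interrupt the wait.

#include <sys/select.h>
#include <unistd.h>
#include <algorithm>

// Call pipe(interrupt_pipe) once at startup.
int interrupt_pipe[2]; // [0] is watched by select(), [1] is written by other threads

// Returns true when sock_fd has data, false when interrupted or on error.
bool wait_readable(int sock_fd) {
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(sock_fd, &readfds);
    FD_SET(interrupt_pipe[0], &readfds);

    int maxfd = std::max(sock_fd, interrupt_pipe[0]);
    if (select(maxfd + 1, &readfds, nullptr, nullptr, nullptr) <= 0)
        return false;

    if (FD_ISSET(interrupt_pipe[0], &readfds)) {
        char c;
        read(interrupt_pipe[0], &c, 1); // drain the wake-up byte
        return false;                   // interrupted: caller can shut down cleanly
    }
    return true; // the data socket is readable
}

// Any thread can break the blocking wait:
void interrupt_wait() { write(interrupt_pipe[1], "x", 1); }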
With blocking IO, it's challenging on many platforms to get your application to do a best-effort orderly shutdown in the face of slow, hung, or disconnected clients/services.
With non-blocking IO, you can kill the in-flight operation as soon as the system call returns, which is immediately. If your code is written with premature termination in mind - which is comparatively simple with non-blocking IO - this can allow you to clean up your saved state gracefully.
I can't think of any, but sometimes the non-blocking APIs are designed in a way that makes them easier/more intuitive to use than an explicitly multi-threaded implementation.
Here is a real situation I faced recently. Formerly I had a script that would run every hour, managed by crontab, but sometimes users would log in to the machine and run the script manually. This had some problems: for example, concurrent execution by crontab and a user could cause conflicts, and sometimes users would log in as root - I know, bad pattern, not under my control - and run the script with the wrong permissions. So we decided to have the routine run as a daemon, with proper permissions, and the command users were used to running would now just trigger the daemon.
So, this user-executed command would basically do two things: trigger the daemon and wait for it to finish the task. But it also needed a timeout, and it had to keep dumping the daemon's logs to the user while waiting.
If I understand the situation you proposed, I had the case you want: I needed to keep listening to the daemon while still interacting with the user independently. The solution was asynchronous reads.
Lucky for me, I didn't think about using threads. I probably would have thought so if I were coding in Java, but this was Python code.
My point is that, even if we consider threads and messaging to be perfect, the real trade-off is between writing a scheduler to plan the non-blocking receive operations and writing synchronization code for threads with shared state (locks, etc.). Both can sometimes be easy and sometimes hard. So one use case would be when there are many asynchronous messages to be received and much data to be operated on based on those messages. This would be quite easy in one thread using non-blocking receive, but would require a lot of synchronization with many threads and shared state. I am also thinking about a real-life example; I will probably include it later.
