Twisted: use of multiple threads and processes together

The Twisted documentation led me to believe that it was OK to combine techniques such as reactor.spawnProcess() and threads.deferToThread() in the same application, that the reactor would handle this elegantly under the covers. Upon actually trying it, I found that my application deadlocks. Using multiple threads by themselves, or child processes by themselves, everything is fine.
Looking into the reactor source, I find that the SelectReactor.spawnProcess() method simply calls os.fork() without any consideration for multiple threads that might be running. This explains the deadlocks, because starting with the call to os.fork() you will have two processes with multiple concurrent threads running and doing who knows what with the same file descriptors.
My question for SO is, what is the best strategy for solving this problem?
What I have in mind is to subclass SelectReactor, so that it is a singleton and calls os.fork() only once, immediately when instantiated. The child process will run in the background and act as a server for the parent (using object serialization over pipes to communicate back and forth). The parent continues to run the application and may use threads as desired. Calls to spawnProcess() in the parent will be delegated to the child process, which will be guaranteed to have only one thread running and can therefore call os.fork() safely.
Has anyone done this before? Is there a faster way?

What is the best strategy for solving this problem?
File a ticket (perhaps after registering) describing the issue, preferably with a reproducible test case (for maximum accuracy). Then there can be some discussion about what the best way (or ways; different platforms may demand different solutions) to implement it might be.
The idea of immediately creating a child process to help with further child process creation has been raised before, to solve the performance issue surrounding child process reaping. If that approach now resolves two issues, it starts to look a little more attractive. One potential difficulty with this approach is that spawnProcess synchronously returns an object which supplies the child's PID and allows signals to be sent to it. This is a little more work to implement if there is an intermediate process in the way, since the PID will need to be communicated back to the main process before spawnProcess returns. A similar challenge will be supporting the childFDs argument, since it will no longer be possible to merely inherit the file descriptors in the child process.
An alternate solution (which may be somewhat more hackish, but which may also have fewer implementation challenges) might be to call sys.setcheckinterval with a very large number before calling os.fork, and then restore the original check interval in the parent process only. This should suffice to avoid any thread switching in the process until the os.execvpe takes place, destroying all the extra threads. This isn't entirely correct, since it will leave certain resources (such as mutexes and conditions) in a bad state, but your use of these with deferToThread isn't very common, so maybe that doesn't affect your case.
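A minimal sketch of that check-interval idea, assuming the Python 2-era sys.setcheckinterval/getcheckinterval API described above (Python 3.2+ replaced it with sys.setswitchinterval); the function name and run_child callback here are made up for illustration:

import os
import sys

def fork_without_thread_switches(run_child):
    # Suppress GIL thread switches so no other thread gets to run between
    # fork() and exec() in this process.
    old = sys.getcheckinterval()
    sys.setcheckinterval(10 ** 9)       # effectively disable thread switching
    pid = os.fork()
    if pid == 0:
        run_child()                     # expected to call os.execvpe(...)
        os._exit(1)                     # only reached if exec failed
    sys.setcheckinterval(old)           # restore in the parent only
    return pid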

The advice Jean-Paul gives in his answer is good, but this should work (and does in most cases).
First, Twisted uses threads for hostname resolution as well, and I've definitely used subprocesses in Twisted processes that also make client connections. So this can work in practice.
Second, fork() does not create multiple threads in the child process. According to the standard describing fork(),
A process shall be created with a single thread. If a multi-threaded process calls fork(), the new process shall contain a replica of the calling thread ...
Now, that's not to say that there are no potential multithreading issues with spawnProcess; the standard also says:
... to avoid errors, the child process may only execute async-signal-safe operations until such time as one of the exec functions is called ...
and I don't think there's anything to ensure that only async-signal-safe operations are used.
So, please be more specific about your exact problem, since it isn't caused by threads being cloned into the subprocess.

Returning to this issue after some time, I found that if I do this:
reactor.callFromThread(reactor.spawnProcess, *spawnargs)
instead of this:
reactor.spawnProcess(*spawnargs)
then the problem goes away in my small test case. There is a remark in the Twisted documentation "Using Processes" that led me to try this: "Most code in Twisted is not thread-safe. For example, writing data to a transport from a protocol is not thread-safe."
I suspect that the other people Jean-Paul mentioned who were having this problem may be making a similar mistake. The responsibility is on the application to ensure that reactor and other API calls are made from the correct thread, and with very narrow exceptions the "correct thread" is the main reactor thread.
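A small self-contained sketch of that pattern (the protocol class, the worker function, and the /bin/sleep command are made up for illustration): code running in a thread-pool thread never touches the reactor directly, it only schedules the spawnProcess call back onto the reactor thread.

from twisted.internet import protocol, reactor, threads

class SleepProtocol(protocol.ProcessProtocol):
    def processEnded(self, reason):
        print("child process finished:", reason.value)

def blockingWork():
    # Runs in a thread-pool thread; reactor APIs are off-limits here,
    # so hand the spawnProcess call back to the reactor thread.
    reactor.callFromThread(
        reactor.spawnProcess, SleepProtocol(), "/bin/sleep", ["sleep", "1"])

threads.deferToThread(blockingWork)
reactor.callLater(3, reactor.stop)
reactor.run()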

fork() on Linux definitely leaves the child process with only one thread.
I assume you are aware that, when using threads in Twisted, the ONLY Twisted API that threads are permitted to call is callFromThread? All other Twisted APIs must only be called from the main, reactor thread.

Related

Multi-threaded fork()

In a multi-threaded application, if a thread calls fork(), it will copy the state of only that thread. So the child process created would be a single-threaded process. If some other thread were holding a lock required by the thread that called fork(), that lock would never be released in the child process. This is a problem.
To counter this, we can modify fork() in two ways. Either we can copy all the threads instead of only that single one. Or we can make sure that any lock held by the (other) non-copied threads will be released. So what would the modified fork() system call look like in each of these cases? And which of these two would be better, or what would be the advantages and disadvantages of either option?
This is a thorny question.
POSIX has pthread_atfork() to work through the mess of mixing forks and thread creation. The NOTES section of that man page discusses mutexes etc. However, it acknowledges that getting it right is hard.
The function isn't so much an alternative to fork() as it is a way to explain to the pthread library how your program needs to be prepared for the use of fork().
In general, not launching threads in the child of fork(), and instead either exiting that child or calling exec() as soon as possible, will minimize problems.
This post has a good discussion of pthread_atfork().
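For what it's worth, Python grew a direct analogue of pthread_atfork() in 3.7, os.register_at_fork(). A minimal sketch, where the lock is a made-up example of shared state you want kept consistent across fork():

import os
import threading

state_lock = threading.Lock()   # hypothetical lock guarding shared state

os.register_at_fork(
    before=state_lock.acquire,           # runs in the parent, just before fork()
    after_in_parent=state_lock.release,  # parent resumes normally
    after_in_child=state_lock.release,   # child starts with the lock released
)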
...Or we can make sure that any lock held by the (other) non-copied threads will be released.
That's going to be harder than you realize because a program can implement "locks" entirely in user-mode code, in which case, the OS would have no knowledge of them.
Even if you were careful only to use locks that were known to the OS you still have a more general problem: Creating a new process with just the one thread would effectively be no different from creating a new process with all of the threads and then immediately killing all but one of them.
Read about why we don't kill threads. In a nutshell: locks aren't the only state that needs to be cleaned up. Any of the threads that existed in the parent but not in the child could, at the moment of the fork call, have been in the middle of making a mess that needs to be cleaned up. If that thread doesn't exist in the child, then you've lost the knowledge of what needs to be cleaned up.
we can copy all the threads instead of only that single one...
That also is a potential problem. The one thread that calls fork() would know when and why fork() was called, and it would be prepared for the fork call. None of the other threads would have any warning. And, if any of those threads is interacting with something outside of the process (e.g., talking to a remote service) then, where you previously had one client talking to the service, you suddenly have two clients talking to the same service, and they both think they are the only one. That's not going to end well.
Don't call fork() from multi-threaded programs.
In one project I worked on: We had a big multi-threaded program that needed to spawn other processes. How we did it is, we had it spawn a simple, single-threaded "helper" program before it created any new threads. Then, whenever it needed to spawn another process, it sent a message to the helper, and the helper did it.

(Python3) Can I spawn a single/many child process(es) inside a thread of a multithreaded program?

I have a use case, where a program spawns multiple threads, viz. one for network communication, one for modifying a couple of JSON files, another for querying and writing to a database. These are spawned in multiple threads because all of them are I/O bound tasks.
The code for the network-comm thread, the JSON file handler, and the database handler will be written by me. The database handling can be significantly optimized if I use multiple processes, as I have a multi-core machine.
I want to understand, from a Python perspective, how spawning multiple processes inside a thread will work (if it works at all)?
After some more searching, I found a page which closely answers my own question.
As described in this post, it is not a good idea to start a process from a thread: mutexes acquired in other threads will be duplicated in the child process with no way to free them, and many race conditions can occur.
However, I like Solomon's idea, posted in the comments to my question, and I will try to go ahead with it, or maybe change my architecture.
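One way to sidestep the fork-from-thread hazards in Python 3, offered as a sketch rather than as what the poster ultimately did, is to use the multiprocessing "spawn" start method, so worker processes are started fresh instead of being forked from an already-threaded parent (db_task and the query strings below are placeholders):

import multiprocessing as mp
import threading

def db_task(query):
    return "result of " + query          # placeholder for the real DB work

def db_thread():
    ctx = mp.get_context("spawn")        # avoids fork(), so parent threads don't matter
    with ctx.Pool(processes=4) as pool:
        print(pool.map(db_task, ["q1", "q2", "q3"]))

if __name__ == "__main__":
    worker = threading.Thread(target=db_thread)
    worker.start()
    worker.join()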

Correct way of calling fork() after parent has created threads?

I'm implementing a complex application that takes third-party plug-ins, and I want to run the plug-in code in child processes for isolation. The parent process needs to be multithreaded, but I have read that fork may be unsafe in multithreaded processes, particularly if you do not immediately call execve, and that pthread_atfork is not a complete solution.
What do other complex applications do about this? I know Chrome uses both subprocesses and multithreading simultaneously, so it must be possible.
The behavior of fork() in a multithreaded program is well-defined. On success, the child process has exactly one thread -- the same one that called fork() in the parent program. Although this can be a problem, whether it actually is a problem depends on the circumstances.
When is fork()ing a problem for a multithreaded program?
The main reason for fork()ing to present a problem in a multithreaded program is that the child process depends on mutexes, condition variables, etc. that other threads can no longer be relied upon to manipulate. For example, if the child needs to acquire a process-private mutex that it does not already hold, then it may be that that mutex was held by a different thread at the time of the fork. In that case, it will never be released in the child process, because no thread that could release it exists in the child.
When is fork()ing not a problem for a multithreaded program?
One of the common idioms involving fork() is to immediately follow it up by execing another program. That's no problem, regardless of the threadedness of the parent.
Alternatively, if the child process does not depend on any problematic resources, then nothing special need be done. Note that process-shared interthread objects are not "problematic" in this sense. This situation is fairly common, and it sounds like it might be your case.
Otherwise, it's not a problem if the parent's forking thread can and does acquire all the process-private interthread resources that the child will need before it forks. Handlers registered by pthread_atfork() can help with this under some circumstances, but under others, it makes more sense for that to be done in the immediate environs of the fork call.
Overall
You've presented the question as if fork()ing were a deep and troublesome problem for multithreaded programs. It is certainly a problem that should be considered, and it is typically best to avoid using both multiple threads and multiple processes. Therefore, inasmuch as you want multiple processes so as to have separate address spaces (and perhaps name spaces) into which to load plugins, perhaps you should consider using separate processes wherever you now use threads. On the other hand, if you exercise some thought and care, you can probably make it work just fine for your multi-threaded process to fork children and interact with them.
If you cannot ensure that fork is only used under safe circumstances, as described in John Bollinger's answer, a general workaround is to use a "fork server". Before creating any threads, the original process forks once. The child process is the fork server; it remains single-threaded. The parent process now goes ahead and creates its threads. Whenever the parent would want to call fork, it instead sends a message to the fork server asking it to do so.
If the (ultimate) child processes also need to communicate with the parent, the easiest way to accomplish this is to have the parent create pipes for each child's stdin and stdout, and then transfer the child sides of those pipes to the fork server using an SCM_RIGHTS special message. You can send file descriptors and data simultaneously. The communication protocol between the fork server and the parent might need to get pretty fancy; look at the posix_spawn API for a more-or-less complete list of all the knobs you might want. (Note: posix_spawn is just a library wrapper around fork; using it will not avoid the original problem.)
The fork server is also responsible for calling waitpid and relaying exit statuses back to the parent. This is trickier than it ought to be, because the standard APIs for waiting for the next of several possible events (select and poll) do not accept a process ID as one of the things to wait for. (BSD's kqueue does, but you're probably not on a BSD.) You have to do a messy dance with SIGCHLD and a pipe-to-self instead.
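A stripped-down sketch of such a fork server in Python (the message format is invented, and the SCM_RIGHTS descriptor passing and exit-status relaying described above are omitted to keep it short):

import os
import pickle
import socket
import struct
import subprocess

def _serve(conn):
    # Runs in the single-threaded fork-server process.
    while True:
        header = conn.recv(4, socket.MSG_WAITALL)
        if not header:
            break                                   # parent closed the socket: exit
        (length,) = struct.unpack("!I", header)
        argv = pickle.loads(conn.recv(length, socket.MSG_WAITALL))
        proc = subprocess.Popen(argv)               # safe: no threads in this process
        conn.sendall(struct.pack("!I", proc.pid))   # report the child's PID

def start_fork_server():
    # Call this before the main process creates any threads.
    parent_side, server_side = socket.socketpair()
    if os.fork() == 0:
        parent_side.close()
        _serve(server_side)
        os._exit(0)
    server_side.close()
    return parent_side

def spawn_via_server(sock, argv):
    # Called from the (possibly multithreaded) parent process.
    payload = pickle.dumps(argv)
    sock.sendall(struct.pack("!I", len(payload)) + payload)
    (pid,) = struct.unpack("!I", sock.recv(4, socket.MSG_WAITALL))
    return pid

Usage would look like: sock = start_fork_server(), then create your threads, then pid = spawn_via_server(sock, ["sleep", "1"]) whenever a new process is needed.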

Linux/POSIX: Why doesn't fork() fork *all* threads

It is well-known that the default way to create a new process under POSIX is to use fork() (under Linux this internally maps to clone(...))
What I want to know is the following: It is well-known that when one calls fork() "The child process is created with a single thread--the one that called fork()"
(cf. https://linux.die.net/man/2/fork). This can of course cause problems if, for example, some other thread currently holds a lock. To me, not forking all the threads that exist in the process intuitively feels like a "leaky abstraction".
So I would like to know: What is the reason why only the thread calling fork() will exist in the child process instead of all threads of the process? Is there a good technical reason for this?
I know that on Multithreaded fork there is a related question, but the answers given there don't answer mine.
Of these two possibilities:
only the thread calling fork() continues running in the child process
Downside: if another thread was holding on to an internal resource such as a lock, it will not be released.
after fork(), all threads are duplicated into the child process
Downside: threads that were interacting with external resources continue running in parallel. If a thread was appending data to a file: now it happens twice.
Both are bad, but the first choice only deadlocks the new child process, while the second choice results in corruption outside of the process. This could be described as "bad".
POSIX did standardize pthread_atfork to try to allow automatic cleanup in the first case, but it cannot possibly work.
tl;dr Don't use both threads and forks. Use posix_spawn if you have to.
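From Python, that advice maps onto os.posix_spawn (available since Python 3.8); a minimal sketch, with /bin/echo standing in for whatever you actually need to run:

import os

# Spawn a child directly, without calling fork() yourself.
pid = os.posix_spawn("/bin/echo", ["echo", "hello from the child"], os.environ)
_, status = os.waitpid(pid, 0)
print("child exited with status", os.WEXITSTATUS(status))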

QProcess, QEventLoop - of any use for parallel-processing

I wonder whether I could use QEventLoop (or QProcess?) to parallelize multiple calls to the same function with Qt. What precisely is the difference from QtConcurrent or QThread? What exactly are a process and an event loop? I read that QCoreApplication must exec() as early as possible in main(), so I wonder how that differs from the main thread.
Could you point me to a good reference on processes and threads with Qt? I have gone through the official docs, but these things remain unclear.
Thanks and regards.
Process and thread are not Qt-specific concepts. You can search for "process vs. thread" anywhere for that distinction to be explained. For instance: What resources are shared between threads?
Though related concepts, spawning a new process is a more "heavyweight" form of parallelism than spawning a new thread within your existing process. Processes are protected from each other by default, while threads of execution within a process can read and write each other's memory directly. The protection you get from spawning processes comes at a greater run-time cost...and since independent processes can't read each other's memory, you have to share data between them using methods of inter-process communication.
Odds are that you want threads, because they're simpler to use in a case where one is writing all the code in a program. Given all the complexities in multithreaded programming, I'd suggest looking at a good book or reading some websites to start with. See: What are some good resources for learning threaded programming?
But if you want to dive in and just get a feel for how threading in Qt looks, you can spend time looking at the examples:
http://qt-project.org/doc/qt-4.8/examples-threadandconcurrent.html
QtConcurrent is an abstraction library that makes it easier to implement some kinds of parallel programming patterns. It's built on top of the QThread abstractions, and there's nothing it can do that you couldn't code yourself by writing to QThread directly. But it might make your code easier to write and less prone to errors.
As for an event loop...that is merely a generic term for how any given thread of execution in your program waits for work items to process, processes them, and can decide when it is no longer needed. If a thread's job were merely to start up, do some math, and exit...then it wouldn't need an event loop. But starting and stopping a thread takes time and churns resources. So typically threads live for longer periods of time, and have an event loop that knows how to wait for events it needs to respond to.
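This isn't Qt-specific; as a rough illustration of the idea in plain Python, a long-lived worker thread's "event loop" can be as simple as blocking on a queue of work items (the job strings and the None sentinel below are made up):

import queue
import threading

def worker(inbox):
    while True:                    # the thread's event loop
        item = inbox.get()         # wait for the next work item
        if item is None:           # sentinel: the thread is no longer needed
            break
        print("processing", item)  # handle the work item

inbox = queue.Queue()
t = threading.Thread(target=worker, args=(inbox,))
t.start()
for job in ("resize image", "compute sum"):
    inbox.put(job)
inbox.put(None)                    # tell the loop it can exit
t.join()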
If you build on top of QtConcurrent, you won't have to worry about an event loop in your worker threads because they are managed automatically in a thread pool. The word count example is pretty simple to see:
http://qt-project.org/doc/qt-4.8/qtconcurrent-wordcount-main-cpp.html

Resources