eventlet.monkey_patch() is ignoring child processes - multithreading

I'm using the flask-socketio library in my Flask project, and the eventlet library is necessary to allow emits from child processes through monkey patching. The problem is that when calling eventlet.monkey_patch(), the grandchild processes are being ignored. Can anyone help clear things up?

I replaced all the processes with threads, and even the multiprocessing I replaced with thread pools, so that everything shares the same memory in RAM and to avoid the collision between eventlet and multiprocessing.

Related

Node.js multithreading and asynchronous

I'm a little confused by multithreading and asynchronous execution in Node.js. What is the difference between a cluster, a stream, a child process, and a worker thread?
The first thing to remember about multithreading in Node.js is that in user-space, there exists no concept of threading, and as such you cannot write any code making use of threads. Any node program is always a single threaded program (in user-space).
Since a node program is a single thread, and runs as a single process, it uses only a single CPU. Most modern processors have multiple CPUs, and in order to make use of all of these CPUs and provide better throughput, you can start the same node program as a cluster.
The cluster module of node allows you to start a node program, with the first instance launched as the master instance. The master allows you to spawn new workers as separate processes (not threads) using the cluster.fork() method. The actual work that is to be done by the node program is done by the workers. The example in the node docs demonstrates this perfectly.
A child process is a process that is spawned from the current process and has an established IPC channel between them to communicate with each other. The master and workers I described in cluster are an example of child processes. The child_process module in node allows you to spawn custom child processes as you require.
Streams are something that is not at all related to multi-threading or multiple processes. Streams are just a way to handle large amounts of data without loading all of it into working memory at the same time. For example, say you want to read a 10GB log file and your server only has 4GB of memory. Trying to load the file using fs.readFile will crash your process. Instead, you use fs.createReadStream and process the file in smaller chunks that can be loaded into memory.
Hope this explains. For further details you really should read the node docs.
This is a little vague, so I'm just gonna give an overview.
Streams are really just data streams like in any other language, similar to iostreams in C++ where you get user input or other types of data. They're usually masked by another class, so you don't know you're using a stream. You won't mess with these unless you're building a new type, usually.
Child processes, worker threads, and clusters are all ways of utilizing multi-core processing in Node applications.
Worker threads are basic multithreading the Node way, with each thread having a way to communicate with the parent, and shared memory possible between each thread. You pass in a function and data, and can provide a callback for when the thread is done processing.
Clusters are more for network sharing. Often used behind a master listener port, a master app will listen for connections, then assign them in a round-robin manner to each cluster thread for use. They share the server port(s) across multiple processors to even out the load.
Child processes are a way to create a new process, similar to popen. These can be asynchronous or synchronous (non-blocking or blocking the Node event loop), and can send to and receive from the parent process via stdout/stderr and stdin, respectively. The parent can register listeners on each child process for updates. You can pass a file, a function, or a module to a child process. They generally do not share memory.
I'd suggest reading the documentation yourself and coming back with any specific questions you have, you won't get much with vague questions like this, makes it seem like you didn't do your own part of the work beforehand.
Documentation:
Streams
Worker Threads
Clusters
Child Processes

Understanding uWSGI threads

I'm very new to Python and uWSGI. I'm trying to understand how uWSGI and threads work. I got confused by some of the statements in uWSGI documents. Example:
By default the Python plugin does not initialize the GIL. This means your app-generated threads will not run. If you need threads, remember to enable them with enable-threads. Running uWSGI in multithreading mode (with the threads options) will automatically enable threading support. This “strange” default behavior is for performance reasons, no shame in that.
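In ini form, the options that quote refers to look roughly like this (the module name is a placeholder):

```ini
[uwsgi]
module = app:application   ; placeholder module
processes = 4

; either of these makes app-generated threads run:
enable-threads = true
; threads = 2   ; multithreading mode implies enable-threads
```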
I created a test project to see this in action: a very simple app that uses a ThreadPoolExecutor while threads are not allowed by the uWSGI ini file.
When you run the test, uWSGI logs that it starts with multiple processes and one core (I assume "core" means thread in their lingo?), yet the threads seem to execute just fine.
My question is why is this working even when threads are NOT allowed explicitly in the uWSGI configuration? What's the downside of using threads in such context where threads are not allowed?

Node kill thread but not process

I was wondering if you could terminate just the thread on which the node application is executing but NOT the process.
Now, I know this sounds strange, because node IS single-threaded, BUT I'm working on a .NET application that hosts node in-process. And when I terminate node with process.exit() the whole application gets closed, which is behavior I don't want. I just want the node thread to be terminated.
Honestly, I'm so desperate that I even tried getting a list of all application threads prior to creating the thread on which node executes, then another list after it was created, removing all threads that were present before starting node, and keeping a reference to the remaining thread, thinking it was the node thread. As you might expect, this did not turn out so well.
I'm using EdgeJS to host node, if that makes any difference. It does not have a built in functionality to terminate itself, unfortunately.
Oh and server.close() doesn't work for me, for some reason.
I'm hosting a video streaming server in my node code, if that info can help you in any way.
Thanks for all the help in advance!
The node thread needs to co-operate by clearing everything that is running on the event loop and then it terminates naturally by returning. So just stop listening for requests, close handles etc. You can use process._getActiveRequests and process._getActiveHandles to see what is keeping the event loop alive.
You can also abruptly interrupt the node thread just by calling OS APIs, but this will leak a lot of garbage until you exit the process, so you cannot start/restart node many times before you need to exit the process anyway to free the leaked resources.

How to fork/clone an identical Node child process in the same sense as fork() of Linux system call?

So I was developing a server farm on Node which requires multiple processes per machine to handle the load. Since Windows doesn't quite get along with the Node cluster module, I had to work it out manually.
The real problem is when I was forking Node processes, a JS module path was required as the first argument to the child_process.fork() function and once forked, the child process wouldn't inherit anything from its parent. In my case, I want a function that does similar thing as fork() system call in Linux, which clones the parent process, inherits everything and continues execution from exactly where the fork() is done. Can this be achieved on the Node platform?
I don't think node.js is ever going to support fork(2)
The comment from the node github page on the subject
https://github.com/joyent/node/issues/2334#issuecomment-3153822
We're not (ever) going to support fork.
not portable to windows
difficult conceptually for users
entire heap will be quickly copied with a compacting VM; no benefits from copy-on-write
not necessary
difficult for us to do
child_process.fork()
This is a special case of the spawn() functionality for spawning Node
processes. In addition to having all the methods in a normal
ChildProcess instance, the returned object has a communication channel
built-in. See child.send(message, [sendHandle]) for details.

Can a multithreaded program still be running after killing it in system monitor?

Is it possible that a program which does not kill its threads properly before exiting is still running some piece of code somewhere, even though it has been killed in system monitor? I am running Ubuntu in a non-virtual environment. My application is made with Qt; it contains QThreads, a main thread, and concurrent functions.
If you kill the process then you kill all its threads. The only cause for concern would be if your application had spawned multiple processes - if that is the case then you may still have code executing on the machine.
This is all very speculative though, as I don't know what operating system your code is running on, whether or not your application runs in a virtual environment, etc. Environment-specific factors are very relevant to the discussion; can you share a bit more about your application?
It is not possible; all modern, heavily used operating systems manage these resources quite tightly. Threads cannot run without a process... they are all branches from the original thread.
I don't know of any OS that doesn't fully terminate all its threads when you kill the process. It's possible to spawn child processes that live on after the main process has exited, but in the case of threads I'd say it's not possible.
