How to count the threads and processes of WSGI? - multithreading

I have deployed a WSGI application on Apache and I have configured it like this:
WSGIDaemonProcess wsgi-pcapi user= group= processes=2 threads=15
After I restart Apache I count the number of threads:
ps -efL | grep | grep -c httpd
The local Apache is running only one WSGI app, but the number I get back is 36 and I cannot understand why. I know that there are 2 processes and 15 threads each, which means:
15*2+2=32
So why do I have 4 more?

You mean why do you have 3 extra threads per mod_wsgi daemon process: each process runs 15 + 3 = 18 threads, so 2 × 18 = 36, whereas your count of 32 = 2 × (15 + 1) only allowed for the main thread of each process.
For your configuration, 15 new threads will be created in each daemon process for handling requests. The other 3 threads in a process are due to:
The main thread which the process was started as. It will wait until the appropriate signal is received to shut down the process.
A monitor thread which checks for certain events to occur and which will signal the process to shut down.
A deadlock thread which checks whether a deadlock has occurred in the Python interpreter. If one does occur, it will send an event which thread (2) will detect. Thread (2) would then send a signal to the process to quit. That signal would be detected by thread (1), which would then gracefully exit the process and try to clean up properly.
So the extra threads are all about ensuring that the whole system stays robust in the face of the various things that can go wrong, plus ensuring that when the process is being shut down the Python sub-interpreters are destroyed properly, allowing atexit-registered Python code to run and do its own cleanup.

Related

when will /proc/<pid> be removed?

Process A opens and mmaps thousands of files while running. Then kill -9 <pid of process A> is issued. I have a question about the ordering of the two events below.
a) /proc/<pid of process A> cannot be accessed.
b) all files opened by process A are closed.
More background about the question:
Process A is a multi-threaded background service. It is started by the command ./process_A args1 arg2 arg3.
There is also a watchdog process which checks periodically (every second) whether process A is still alive. If process A is dead, the watchdog restarts it. The watchdog checks process A as follows.
1) collect all numeric subdirectories under /proc/
2) compare each /proc/<pid>/cmdline with the cmdline of process A. If some /proc/<some-pid>/cmdline matches, then process A is alive and nothing is done; otherwise restart process A.
Process A does the following during initialization.
1) open fileA
2) flock fileA
3) mmap fileA into memory
4) close fileA
Process A mmaps thousands of files after initialization.
After several minutes, kill -9 <pid of process A> is issued.
The watchdog detects the death of process A and restarts it. But sometimes the new process A gets stuck at step 2 (flock fileA). After some debugging, we found that fileA is unlocked when process A is killed, but sometimes this unlock happens after the new process has already performed step 2 (flock fileA).
So we suspect that checking whether process A is alive by monitoring /proc/<pid of process A> is not correct.
then kill -9 is issued
This is a bad habit. You had better send a SIGTERM first, because well-behaved processes and well-designed programs can catch it (and exit nicely and properly when getting a SIGTERM). In some cases I even recommend: send SIGTERM, wait two or three seconds, send SIGQUIT, wait two seconds, and at last send a SIGKILL signal (for those bad programs which have not been written properly or are misbehaving). Read signal(7) and signal-safety(7). In multi-threaded, but Linux-specific, programs, you might use signalfd(2) or the pipe(7)-to-self trick (well explained in the Qt documentation, but not Qt-specific).
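As a rough sketch of that escalation (a hypothetical standalone helper, not a fixed recipe; the pid is taken from the command line and the 3- and 2-second delays are arbitrary, tune them to your program):

/* sketch: ask the process to exit, escalate only if it does not */
#include <sys/types.h>
#include <signal.h>
#include <unistd.h>
#include <stdlib.h>
#include <errno.h>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    pid_t pid = (pid_t)atoi(argv[1]);     /* pid of the process to stop */

    kill(pid, SIGTERM);                   /* polite request first */
    sleep(3);
    if (kill(pid, 0) == -1 && errno == ESRCH)
        return 0;                         /* it is already gone */

    kill(pid, SIGQUIT);                   /* a bit more insistent */
    sleep(2);
    if (kill(pid, 0) == -1 && errno == ESRCH)
        return 0;

    kill(pid, SIGKILL);                   /* last resort: cannot be caught */
    return 0;
}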
If your Linux system is systemd based, you could start your process A with systemd facilities, and then use those same facilities to "communicate" with it. In some ways (I don't know the details), systemd makes signals almost obsolete. Notice that signals are not multi-thread friendly; they were designed, in the previous century, for single-threaded processes.
we suspect the way to check whether process A is alive by monitoring /proc/ is not correct.
The usual (and faster, and "atomic" enough) way to detect the existence of a process (on which you have enough privileges, e.g. which runs with your uid/gid) is to use kill(2) with a signal number (the second argument to kill) of 0. To quote that manpage:
If sig is 0, then no signal is sent, but existence and permission
checks are still performed; this can be used to check for the
existence of a process ID or process group ID that the caller is
permitted to signal.
Of course, that other process can still terminate before any further interaction with it, because Linux has preemptive scheduling.
Your watchdog process should rather use kill(pid-of-process-A, 0) to check the existence and liveness of that process A. Using /proc/pid-of-process-A/ is not the correct way to do that.
And whatever you code, that process A could disappear asynchronously (in particular, if it has some bug that gives a segmentation fault). When a process terminates (even with a segmentation fault), the kernel acts on its file locks (and "releases" them).
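A minimal sketch of that check (process_is_alive and pid_of_A are made-up names; note that EPERM still means the process exists, you just cannot signal it):

/* sketch: probe a pid with signal 0; no signal is actually delivered */
#include <sys/types.h>
#include <signal.h>
#include <errno.h>

static int process_is_alive(pid_t pid_of_A)
{
    if (kill(pid_of_A, 0) == 0)
        return 1;          /* it exists and we are allowed to signal it */
    if (errno == EPERM)
        return 1;          /* it exists, but belongs to another user */
    return 0;              /* ESRCH: no such process */
}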
Don't scan /proc/PID to find out if a specific process has terminated. There are lots of better ways to do that, such as having your watchdog program actually launch the server program and wait for it to terminate.
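For instance, a watchdog that starts process A itself and restarts it whenever it terminates could look roughly like this (a sketch only; ./process_A and its arguments are taken from the question, error handling is omitted):

/* sketch: supervise a child via fork/exec and waitpid, restart on exit */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    for (;;) {
        pid_t child = fork();
        if (child == 0) {
            execl("./process_A", "./process_A", "args1", "arg2", "arg3",
                  (char *)NULL);
            _exit(127);                 /* exec failed */
        }
        int status;
        waitpid(child, &status, 0);     /* blocks until the child terminates */
        fprintf(stderr, "process_A died (status %d), restarting\n", status);
        sleep(1);                       /* avoid a tight restart loop */
    }
}

Because the watchdog is the parent, waitpid() gives a race-free notification of the child's death; there is nothing to scan in /proc.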
Or, have the watchdog listen on a TCP socket, and have the server process connect to it and send its PID. If either end dies, the other can notice that the connection was closed (hint: send a heartbeat packet every so often, to detect a frozen peer). If the watchdog receives a connection from another server while the first is still running, it can decide to allow it or tell one of the instances to shut down (via TCP or kill()).

Do I need to check for my threads exiting?

I have an embedded application, running as a single process on Linux.
I use sigaction() to catch problems, such as segmentation fault, etc.
The process has a few threads, all of which, like the app, should run forever.
My question is whether (and how) I should detect if one of the threads dies.
Would a seg fault in a thread be caught by the application’s sigaction() handler?
I was thinking of using pthread_cleanup_push/pop, but this page says “If any thread within a process calls exit, _Exit, or _exit, then the entire process terminates”, so I wonder if a thread dying would be caught at the process level …
You do not necessarily need to check whether a child thread has completed.
If you need to do something after a child thread completes its processing, you can call pthread_join() from the main thread, so that it waits until the child thread completes execution, and you can do the rest after that. If you call pthread_exit() in the main thread, the main thread terminates once it is done, leaving the spawned threads to continue executing; the process is terminated only after all the threads have completed execution.
If you want to check the status of the spawned threads, you can use a flag to record whether each one is still running. Check this link for more details:
How do you query a pthread to see if it is still running?
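A small sketch of both approaches mentioned above (pthread_join() from the main thread, plus an atomic flag the worker clears just before returning; the names are made up, compile with -pthread):

/* sketch: join a worker thread and track its state with an atomic flag */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int worker_running = 1;

static void *worker(void *arg)
{
    (void)arg;
    sleep(1);                            /* stands in for the real work */
    atomic_store(&worker_running, 0);    /* about to return */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

    printf("worker running? %d\n", atomic_load(&worker_running));
    pthread_join(tid, NULL);             /* waits until the worker returns */
    printf("worker running? %d\n", atomic_load(&worker_running));
    return 0;
}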

Understanding process pool : how does a process pool use wait() to reap child process?

If a process pool is created with 10 processes, but my program only uses 4 of them, that means there are 6 idle processes.
To use a process pool, the pseudo code is generally like:
pool=create_process_pool(M)
for i in 1:N:
    pool.run(task i)
pool.wait()
pool.close()
how does the pool decide when to call pool.wait()?
there are some cases:
1. If M > N, for example M = 10 and N = 6, then there are 4 idle processes. When the 6 used processes finish running and exit, they can inform pool.wait(); but the 4 idle processes never ran, so how can they inform pool.wait() that they are finished?
2. If M < N, a process that finishes a task may be reused for another task. So how can this process know that it will have no more tasks, and so inform pool.wait()?
Can anyone explain a bit how a process pool works in this regard?
thanks!
You could implement a process pool (e.g. in C++) with
some Process class (in particular, knowing the pid of each fork-ed process). It would have some empty instance (whose pid would be 0).
some global array of Process-es
a Command class representing a command to be started (when possible) in the process pool.
a std::deque<Command> of commands; when possible, a Command would fire some Process
an event loop taking account of SIGCHLD; when a SIGCHLD occurs, you would waitpid with WNOHANG, get the pid of the ended Process, find the actual Process instance and do whatever is needed; that event loop would probably pop Command-s to run (and so start non-idle Process-es), manage pipes, etc.
Then idle processes would just be represented by a Process slot with a zero pid; no need to fork them explicitly. So they won't be Unix processes, just some internal representation in the process pool software.
My point is that a process pool mechanism doesn't (necessarily) have to start idle processes (with the fork system call). It could maintain a pool of process descriptors, and mark the descriptors of idle slots specially. That process descriptor could actually be a pid_t, with empty slots holding (pid_t)0, which is never the pid of any real Unix process. So there is no need to create processes in advance (only lazily, as necessary). Hence, no need for idle processes.
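A self-contained sketch of the reaping part of such an event loop (it forks three short-lived children in place of real pool workers; SIGCHLD is blocked and sigsuspend() is used so the wake-up is race-free):

/* sketch: reap finished children with waitpid(-1, ..., WNOHANG) on SIGCHLD */
#include <sys/types.h>
#include <sys/wait.h>
#include <signal.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

static void on_sigchld(int sig) { (void)sig; /* just interrupt sigsuspend */ }

int main(void)
{
    struct sigaction sa;
    sa.sa_handler = on_sigchld;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGCHLD, &sa, NULL);

    sigset_t block, old;
    sigemptyset(&block);
    sigaddset(&block, SIGCHLD);
    sigprocmask(SIG_BLOCK, &block, &old);    /* avoid missing a wake-up */

    for (int i = 0; i < 3; i++)              /* stand-ins for pool workers */
        if (fork() == 0) { usleep(100000 * (i + 1)); _exit(i); }

    int live = 3;
    while (live > 0) {
        sigsuspend(&old);                    /* sleep until a SIGCHLD arrives */
        pid_t pid;
        int status;
        while ((pid = waitpid(-1, &status, WNOHANG)) > 0) {
            /* a real pool would mark this Process slot idle (pid = 0)
               here and start the next queued Command, if any */
            printf("reaped child %d, exit status %d\n",
                   (int)pid, WEXITSTATUS(status));
            live--;
        }
    }
    return 0;
}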
I strongly suggest taking some hours to read Advanced Linux Programming. It will teach you better than I could in a few minutes.
As an example, look at the Unix (or GNU) batch (and at) command. It does not use any idle processes, and it does manage a pool of process queues. It is free software, so you can study (and improve) its source code.

How to kill thread spawned using CLONE_THREAD and blocked on a shared resource in kernel space?

I have a test case where threads are spawned using the CLONE_THREAD option of clone(). If I want to kill a particular thread, I suppose I should use SYS_tgkill via syscall(). But will the kill actually affect a thread if it is waiting in kernel space (say, in futex_wait)?
I tried killing a thread created in the above manner, but when SIGKILL is sent to it, the whole process gets killed. Am I missing something in using syscall(SYS_tgkill, pid, tid, 9)?
SIGKILL always kills the target process. There is no way around this; it's unblockable, unignorable, and uncatchable.
You could try sending another signal (like SIGUSR1 or SIGHUP or SIGRTMIN) and having a signal handler installed that calls pthread_exit (but note that this function is not async-signal-safe, so you must ensure that the signal handler did not interrupt another async-signal-unsafe function) or use cancellation (pthread_cancel) to stop the blocked thread.
This should work for normal blocking operations (like waiting for data from a pipe or socket), but it will not help you if the thread is in an uninterruptible sleep state (like trying to read from a badly scratched CD or a failing hard disk).
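For example, cancelling a thread that is blocked reading from a pipe could look like this (a minimal sketch, assuming the threads are POSIX threads as the answer suggests; read(2) is a cancellation point, so pthread_cancel() wakes that one thread without touching the rest of the process; compile with -pthread):

/* sketch: stop a single blocked thread with pthread_cancel, not SIGKILL */
#include <pthread.h>
#include <unistd.h>
#include <stdio.h>

static void *blocked_worker(void *arg)
{
    int *read_end = arg;
    char buf;
    read(*read_end, &buf, 1);   /* blocks forever; also a cancellation point */
    return NULL;
}

int main(void)
{
    int fds[2];
    pipe(fds);                  /* nothing is ever written to this pipe */

    pthread_t tid;
    pthread_create(&tid, NULL, blocked_worker, &fds[0]);

    sleep(1);                   /* let the worker block in read() */
    pthread_cancel(tid);        /* affects only this thread */

    void *res;
    pthread_join(tid, &res);
    printf("worker cancelled: %s\n", res == PTHREAD_CANCELED ? "yes" : "no");
    return 0;
}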

Terminating a process -- Transition from allproc list to zombieproc list

How does a process get terminated? Let's say a process has three threads A, B & C. Now we send a SIGKILL signal to the process. All is fine so far. Now, each process has an exit status field in its structure. So, when a process is sent a kill signal, my understanding is that it is sent to all the threads. A thread gets killed when it traps into the kernel; if it is already in the kernel, it quits when it exits from the kernel; if it is sleeping, it exits when it wakes up. Is my understanding right, or am I misunderstanding/missing something?
If my understanding is correct, when is a process put into the zombie list? When all the threads have exited, or as soon as it receives a kill signal?
Let's say a process has three threads A, B & C.
OK. I assume modern Linux, with a kernel supporting threads (Linux 2.6 + glibc > 2.3).
Then the process (or thread group) consists of 3 threads (that is, there are 3 threads with different tids and the same tgid = PID).
Now we send a SIGKILL signal to the process.
So, you use a tgid (PID) here. Ok.
Now, each process has an exit status field in its structure.
Wwwhat? Yes, but killing and exiting from a thread group have special code to get the right exit code to the waiter. For killing, the exit status is taken from the signal; for exiting (the sys_group_exit syscall), it is the argument of the syscall.
So, when a process is sent a kill signal,... it is sent to all the threads.
No.
Basically there can be two kinds of signals:
process-wide - it will be delivered to ANY thread in the process
thread-directed (I can't name it precisely) - which is delivered by tid to one particular thread and not another.
So, SIGKILL is process-wide; it will kill the entire process. It is delivered to some thread.
When the kernel delivers this signal, it calls the do_group_exit() function (http://lxr.linux.no/linux+v2.6.28/kernel/exit.c#L1156, called from http://lxr.linux.no/linux+v2.6.28/kernel/signal.c#L1870) to kill all threads in the thread group (in the process).
There is a zap_other_threads() function that iterates over all threads and kills them (by re-sending a thread-directed SIGKILL): http://lxr.linux.no/linux+v2.6.28/kernel/signal.c#L966
when is a process put into the zombie list?
After the do_exit() kernel function call. It has a tsk->state = TASK_DEAD; line at the end.
when all the threads have exited or as soon as it receives a kill signal?
The moment when the task sets its state to TASK_DEAD comes after receiving SIGKILL. At that moment the signal has already been re-delivered to all threads of the process. I can't find the actual exit time of the threads, but all threads have a pending fatal signal flag set, so they will be killed at the next reschedule.
UPDATE: all threads of the process must be killed (must receive the KILL signal and do a cleanup), as they have accounting information to be accumulated into the first thread (here "first" means not the first-started thread, but the thread which got the original process-wide SIGKILL, or the thread which called the exit_group syscall). The first thread must wait for all the other threads, and it changes its status only after that.
In FreeBSD, a zombie process cannot execute any code. Therefore, everything that needs the moribund process to do something is performed before that point. If you see a process in this state in ps(1) (usually only if it gets stuck), it has a usual state such as D, S, R or I, with E (trying to exit) appended to it.
A signal is delivered to one thread (either a particular thread or any thread, depending on how the signal was generated). The act of terminating the process (default action of various signals) has a process-global effect. One of the things that happens is that the thread that was chosen to deliver the signal (or that called _exit(2)) requests all other threads to exit.
A thread does not have an exit status at the kernel level; the value available via pthread_join() is a userland feature.

Resources