I am running three processes simultaneously by executing them through an IPython notebook. I want to know the best way to kill any or all of them whenever I want. I see that interrupting the kernel in the notebook stops just process 1.
From the Julia documentation:
interrupt([pids...])
Interrupt the current executing task on the specified workers. This is equivalent to pressing Ctrl-C on the local machine. If no arguments are given, all workers are interrupted.
Related
In Linux & C, will not waiting (waitpid) for a fork-execve launched process create zombies?
What is the correct way to launch a new program (many times) without waiting and without resource leaks?
It would also be launched from a 2nd worker thread.
Can the first program terminate first cleanly if launched programs have not completed?
Additional: In my case I have several threads that can fork-execve processes at ANY TIME and THE SAME TIME -
1) Some I need to wait for completion and want to report any errors codes with waitpid
2) Some I do not want to block the thread on, but would like to report errors
3) Some I don't want to wait and don't care about the outcome and could run after the program terminates
For #2, should I have to create an additional thread to do waitpid ?
For #3, should I do a fork-fork-execve and would ending the 1st fork cause the 2nd process to get cleaned up (no zombie) separately via init ?
Additional: I've read briefly (and am not sure I understand it all) about using nohup, double fork, setpgid(0,0), and signal(SIGCHLD, SIG_IGN).
Doesn't a global signal(SIGCHLD, SIG_IGN) have too many side effects, like getting inherited (or maybe not) and preventing you from waiting on the other processes you do care about?
Wouldn't relying on init to clean up leak resources while the program continues to run (weeks, in my case)?
In Linux & C, will not waiting (waitpid) for a fork-execve launched process create zombies?
Yes, they become zombies after death.
What is the correct way to launch a new program (many times) without waiting and without resource leaks? It would also be launched from a 2nd worker thread.
Set SIGCHLD to SIG_IGN.
Can the first program terminate first cleanly if launched programs have not completed?
Yes, orphaned processes will be adopted by init.
I ended up keeping an array of just the fork-exec'd pids I did not wait for (the other fork-exec'd pids do get waited on) and periodically scanned the list using
waitpid( pids[xx], &status, WNOHANG ) != 0
which gives me a chance to report the outcome and avoid zombies.
I avoided using global things like signal handlers that might affect other code elsewhere.
It seemed a bit messy.
I suppose that fork-fork-exec would be an alternative way to monitor the other program's completion asynchronously (the first fork does the monitoring), but then the first fork itself needs cleanup.
In Windows, you just keep a handle to the process open if you want to check status without worry of pid reuse, or close the handle if you don't care what the other process does.
(In Linux, there seems no way for multiple threads or processes to monitor the status of the same process safely, only the parent process-thread can, but not my issue here.)
Is there any command that will list all background threads in a GHCi session? And the next question: how do I kill one (or all) of them?
Related:
Is there a way to kill all forked threads in a GHCi session without restarting it?
How to be certain that all threads have been killed upon pressing Ctrl+C
No. If you want the ThreadIds of running threads, it is your responsibility to keep track of them when you forkIO.
I want to start a thread from a process, detach it, and terminate the process, with the thread continuing to run in the background. Can I achieve this with C++11?
I have detached my thread like this
std::thread(&thread_func, param1, param2).detach();
But it gets terminated once the process is terminated.
Detaching is not the same as running in the background. If you detach a thread, you simply tell the OS "I don't want to join this thread manually after it exits; please clean it up for me." But all of a process's threads, detached or not, are destroyed when the process exits.
So what you want is to run a daemon. However, turning a process into a daemon (note that you can't daemonize a thread) is OS dependent. On Linux you would call the daemon function:
http://man7.org/linux/man-pages/man3/daemon.3.html
I don't know how to do that on Windows or other OSes. Also, you may want to read this:
Creating a daemon in Linux
I'm developing code for Linux, and cannot seem to kill processes when running in a Jenkins environment.
I have a test script that spawns processes and cleans them up as it goes through the tests. One of the processes also spawns and cleans up a subprocess of its own. All of the cleanup is done by sending a SIGINT, followed by a wait. Everything works fine when run from a terminal — except when running through Jenkins.
When the same exact thing is run in Jenkins, processes killed with SIGINT do not die, and the call to wait blocks forever. This wreaks havoc on my test. I could update the logic to not do a blocking wait, but I don't feel I should have to change my production code to accommodate Jenkins.
Any ideas?
Process tree killer may be your answer - https://wiki.jenkins-ci.org/display/JENKINS/ProcessTreeKiller
In testing, this would usually work when I ran the tests from the command line, but would almost always fail when that unit test script was called from another script. Frankly, it was bizarre....
Then I realized that when I had stray processes, they would indeed go away when I killed them with SIGTERM. But WHY?????
I didn't find a 100%-definitive answer. But thinking about it logically, if the process is not attached to a terminal, then maybe the "terminal interrupt" signal (SIGINT), wouldn't work...?
In doing some reading, what I learned is that, basically, when a shell executes a process, the SIGINT action may be set to 'ignore'. That makes sense (to me, anyway), because you wouldn't want CTRL-C at the command line to kill all of your background processes:
When the shell executes a process “in the background” (or when another background process executes another process), the newly executed process should ignore the interrupt and quit characters. Thus, before a shell executes a background process, it should set SIGINT and SIGQUIT to SIG_IGN.
Our production code isn't a shell, but it is started from a shell, and Jenkins uses /bin/sh to run stuff. So, this would add up.
So, since there is an implied association between SIGINT and the existence of a TTY, SIGTERM is a better option for killing your own background processes:
It should be noted that SIGINT is nearly identical to SIGTERM. (1)
I've changed the code that kills the proxyserver processes, and the Python unit test code, to use the SIGTERM signal. Now everything runs at the terminal and in Jenkins.
I have a big process running. It spawns two threads. I want to debug those two threads separately, but there is only one gdb prompt. How do I do this? That is, I want to watch the execution of the threads in parallel.
You cannot run just some of the threads under the debugger. They will all run and they will all stop. One thread may progress more than another; that depends on the OS scheduler and is out of the debugger's reach. That said, once you stop at a breakpoint you can review the threads one at a time. You can also set conditional breakpoints, which stop execution only when a certain thread passes them.
I think you will find this article useful:
http://ftp.gnu.org/old-gnu/Manuals/gdb-5.1.1/html_node/gdb_24.html#SEC25