What does the + mean in the process state of my C++ program? - Linux

I made test.cpp and compiled it:
int main() {
    while (1);
}
g++ test.cpp
Then I ran ps -aux | grep a.out.
The process state of a.out is R+.
Yes, of course, the process runs forever.
But I don't understand the +.
In the ps manual, + means "is in the foreground process group".
I don't know what it means that a.out is in the foreground process group.
PROCESS STATE CODES
Here are the different values that the s, stat and state output
specifiers (header "STAT" or "S") will display to describe the
state of a process:
D uninterruptible sleep (usually IO)
I Idle kernel thread
R running or runnable (on run queue)
S interruptible sleep (waiting for an event to
complete)
T stopped by job control signal
t stopped by debugger during the tracing
W paging (not valid since the 2.6.xx kernel)
X dead (should never be seen)
Z defunct ("zombie") process, terminated but not
reaped by its parent
For BSD formats and when the stat keyword is used, additional
characters may be displayed:
< high-priority (not nice to other users)
N low-priority (nice to other users)
L has pages locked into memory (for real-time and
custom IO)
s is a session leader
l is multi-threaded (using CLONE_THREAD, like NPTL
pthreads do)
+ is in the foreground process group

It basically means that the process is executing within a terminal session and currently owns the terminal, so the user interacts with it directly. A background process runs without direct user interaction; backgrounding is mostly used when you want to keep using the terminal session while a time-consuming computation runs.
You can change this state of your program by appending the & sign to the launch command:
./a.out &
You can then bring the process back to the foreground with the fg command, optionally giving the job number (e.g. fg %1) that the shell prints when you put the process in the background.
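If you want to check from inside a program whether it is currently in the terminal's foreground process group (the condition ps marks with +), a minimal sketch looks like this; it only assumes the standard getpgrp() and tcgetpgrp() calls:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    pid_t mine = getpgrp();                  /* my process group */
    pid_t fg   = tcgetpgrp(STDIN_FILENO);    /* the terminal's foreground group */

    if (fg == mine)
        puts("foreground (ps would show +)");
    else
        puts("background");
    return 0;
}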

Related

In Linux, how can I wait until a process I didn't start finishes?

I have a monitoring program that I'd like to check on various processes in the system, and know when they terminate. I'd also like to know their exit code, in case they crash. However, my program is not a parent of the processes to be monitored.
In Windows, this is easy: OpenProcess for SYNCHRONIZE rights, WaitForMultipleObjectsEx to wait for any of them to terminate, then GetExitCodeProcess to find out why it terminated (with NTSTATUS error codes if the reason was an exception).
But in Linux, the equivalent of these, waitpid, only works on your own child processes, not unrelated processes. We tried ptrace, but this caused its own issues, such as greatly slowing down signal processing.
This program is intended to run as root.
Is there a way to implement this, other than just polling /proc/12345 until it disappears?
I can't think of an easy way to collect the termination statuses, but for simple death events you can, as root, inject an open() call for a FIFO whose other end you hold, and then select() on your end of the file descriptor.
When the other process dies, all of its file descriptors are closed, and that close wakes up the select() on the descriptor you are holding.
A (very ugly) example:
mkfifo /tmp/fifo #A channel to communicate death events
sleep 1000 & #Simulate your victim process
echo $! #Make note of the pid you want (used as $thePid below)
#In another terminal
sudo gdb -ex "attach $thePid" -ex ' call open("/tmp/fifo",0,0)' -ex 'quit'
exec 3>/tmp/fifo
ruby -e 'fd = IO.select([IO.for_fd(3)]); puts "died" '
#In yet another terminal
kill $thePid #the previous terminal will print `died` immediately
#even though it's not the parent of $thePid
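The watcher side can also be written directly in C instead of the exec/ruby pair above. This is only a sketch, assuming Linux poll() semantics for a FIFO write end (an error condition is reported once the last reader, i.e. the monitored process, goes away):
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* the monitored process has been injected an open of /tmp/fifo for reading */
    int fd = open("/tmp/fifo", O_WRONLY);    /* returns once the reader has the FIFO open */
    if (fd < 0) { perror("open"); return 1; }

    struct pollfd p = { .fd = fd, .events = 0 };  /* POLLERR/POLLHUP are always reported */
    if (poll(&p, 1, -1) > 0 && (p.revents & (POLLERR | POLLHUP)))
        puts("died");

    close(fd);
    return 0;
}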

Is it feasible to record a program state in Valgrind/DrMemory and then restore that?

I have a program that loads a big chunk of data at startup. That takes a rather long time and therefore creates an overhead when running under Valgrind (memcheck) or DrMemory. So when invoking the program several times with different arguments, it takes a considerable amount of time.
My idea would be to use fork() right after the data loading phase and then hand the children off to Valgrind/DrMemory. Even if the loading phase runs under Valgrind/DrMemory, the overhead would only occur once and all forked child processes should be able to use the preloaded data from there.
Is it feasible to record a program state and declare it as untainted and then later restore that state in Valgrind (memcheck) or DrMemory?
Note: I'm only interested in unixoid platforms, limiting it to Linux alone would also be fine.
My idea would be to use fork() right after the data loading phase and then hand the children off to Valgrind/DrMemory.
That's not feasible for many reasons. For example, glibc will cache results of syscall(SYS_getpid) in an internal variable, and having multiple processes that believe they have the same pid (which != their real pid) is an obvious recipe for disaster.
That said, what stops you from running valgrind --trace-children=yes and then forking child processes after initialization? Each of the child processes can do something like this:
char buf[PATH_MAX];
sprintf(buf, "/tmp/parameters-for-%d", getpid());
while (true) {
    if (FILE *fp = fopen(buf, "r")) {
        // read parameters for this child, and exercise appropriate code paths
        return run_with_parameters(fp);
    }
    sleep(1);
}
When you want child N to run, simply echo "foo bar baz" > /tmp/parameters-for-N and wait for it to complete. All other children will be nicely busy-waiting until you are ready to use them.
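A hedged sketch of the parent side under this scheme: load the expensive data once, then fork the workers, each of which runs the wait-for-parameters loop shown above. The functions load_big_data and child_main are hypothetical placeholders, stubbed out here so the sketch compiles.
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical placeholders, for illustration only */
static void load_big_data(void) { sleep(2); }   /* stands in for the slow initialization */
static int  child_main(void)    { return 0; }   /* stands in for the wait-for-parameters loop above */

int main(void)
{
    load_big_data();                  /* expensive phase, paid once under Valgrind */

    for (int i = 0; i < 4; ++i) {
        if (fork() == 0)
            return child_main();      /* each child polls /tmp/parameters-for-<pid> */
    }

    while (wait(NULL) > 0)            /* reap the workers as they finish */
        ;
    return 0;
}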

Preventing threaded subprocess.popen from terminating my main script when child is killed?

Python 2.7.3 on Solaris 10
Questions
When my subprocess segfaults (dumps core) or a user kills it externally from the shell with SIGTERM or SIGKILL, my main program's signal handler handles a SIGTERM (-15) and my parent program exits. Is this expected behavior, or is it a bad Python build?
Background and Code
I have a Python script that first spawns a worker management thread. The worker management thread then spawns one or more worker threads. I have other stuff going on in my main thread that I cannot block. My management and worker threads are rock-solid. My services run for years without restarts, but then we have this subprocess.Popen scenario:
In the run method of the worker thread, I am using:
class workerThread(threading.Thread):
    def __init__(self):
        super(workerThread, self).__init__()
        ...
    def run(self):
        ...
        atempfile = tempfile.NamedTemporaryFile(delete=False)
        myprocess = subprocess.Popen(['third-party-cmd', 'with', 'arguments'],
                                     shell=False, stdin=subprocess.PIPE,
                                     stdout=atempfile, stderr=subprocess.STDOUT,
                                     close_fds=True)
...
I need to use myprocess.poll() to check for process termination, because I need to scan atempfile until I find relevant information (the file may be > 1 GiB), and I need to terminate the process on user request or when it has been running too long. Once I find what I am looking for, I will stop checking the stdout temp file. I will clean it up after the external process is dead and before the worker thread terminates. I need the stdin PIPE in case I have to inject a response to something interactive in the child's stdin stream.
In my main program, I set SIGINT and SIGTERM handlers so I can perform cleanup if my main Python program is terminated with SIGTERM, or with SIGINT (Ctrl-C) when running from the shell.
Does anyone have a solid 2.x recipe for child signal handling in threads?
ctypes sigprocmask, etc.
Any help would be much appreciated. I am just looking for an 'official' recipe, or the best hack if one even exists.
Notes
I am using a restricted build of Python. I must use 2.7.3. Third-party-cmd is a program I do not have source for - modifying it is not possible.
There are many things in your description that look strange. First: you have a couple of different threads and processes. Who is crashing, who's receiving SIGTERM, who's receiving SIGKILL, and due to which operations?
Second: why does your parent receive SIGTERM? It can't be sent implicitly. Someone is calling kill on your parent process, either directly or indirectly (for example, by killing the whole parent group).
Third: how is your program terminating when you're handling SIGTERM? By definition, the program terminates if the signal is not handled; if it is handled, the program is not terminated. What is really happening?
Suggestions:
$ cat crsh.c
#include <stdio.h>
int main(void)
{
    int *f = 0x0;
    puts("Crashing");
    *f = 0;
    puts("Crashed");
    return 0;
}
$ cat a.py
import subprocess, sys
print('begin')
p = subprocess.Popen('./crsh')
a = raw_input()
print(a)
p.wait()
print('end')
$ python a.py
begin
Crashing
abcd
abcd
end
This works. No signal is delivered to the parent. Did you isolate the problem in your program?
If the problem is a signal sent to multiple processes: can you use setpgid to set up a separate process group for the child?
Is there any reason for creating the temporary file? That is a 1 GiB file being created in your temporary directory. Why not pipe stdout?
If you're really sure you need to handle signals in your parent program (why didn't you try/except KeyboardInterrupt, for example?): could signal()'s unspecified behavior in multi-threaded programs be causing those problems (for example, dispatching a signal to a thread that does not handle signals)?
NOTES
The effects of signal() in a multithreaded process are unspecified.
Anyway, try to explain more precisely what the threads and processes of your program are, what they do, how the signal handlers were set up and why, who is sending signals, who is receiving them, and so on.
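Regarding the setpgid suggestion above: here is a minimal C sketch of what "putting the child in its own process group" looks like, so that group-directed signals no longer hit both processes. In Python 2.7 the same effect can be had by passing preexec_fn=os.setpgrp to subprocess.Popen. The command name is taken from the question; everything else is illustrative.
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        setpgid(0, 0);   /* child becomes the leader of a new process group */
        execlp("third-party-cmd", "third-party-cmd", "with", "arguments", (char *)NULL);
        _exit(127);      /* only reached if exec fails */
    }
    setpgid(pid, pid);   /* also set it from the parent, to avoid a race */
    /* ... parent continues; waitpid(pid, ...) when appropriate ... */
    return 0;
}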

Signals shortcut keys and process groups

I have one simple question about signals on Linux systems. As I understand it, every process has its PID and PGID. When I create a process it gets a unique PID; if I then fork a new process with the fork() function, I get a child process with a different PID but the same PGID.
Now, the code
#include <stdio.h>
#include <unistd.h>

int main()
{
    int i = 3;
    int j;
    for (j = 0; j < i; ++j)
    {
        if (fork() == 0)
        {
            while (1)
            {
            }
        }
    }
    printf("created\n");
    while (1)
    {
    }
    return 0;
}
when I compile this program and run it with the command
./foo
and wait a sec so it creates its children, then press CTRL-C and run ps aux, I can see that the parent and the children are gone; but if I do
./foo
wait for the forking to complete, and in another terminal do
kill -INT <pid_of_foo>
and then ps aux, I can see that the parent is gone but the children are still alive and eating my CPU.
I am not sure, but it seems that CTRL-C sends the signal to every process in a certain process group, while the kill -SIGNAL pid command sends the signal only to the process with PID=pid, not PGID=pid.
Am I on the right track? If so, why does the key combination kill processes by PGID and not by PID?
Signal delivery, process groups, and sessions
Yes, you are on the right track.
Modern Unix variants since the BSD releases implement sessions and process groups.
You can look at sessions as groups of process groups. The idea was that everything resulting from a single login on a tty or pseudo-tty line is part of a session, and things relating to a single shell pipeline or other logical grouping of processes would be organized into a single process group.
This makes moving "jobs" between the foreground and background, and delivering signals, more convenient. The shell user mostly doesn't need to worry about individual processes and can Control-C a group of related commands in an intuitive manner.
Signals generated by the keyboard are sent to the foreground process group in a session. The CLI kill command you are using delivers signals to individual processes. If you want to try to duplicate the ^C delivery mechanism you can use kill 0; that will send the signal to every member of the same process group, and if executed from a script it may do what you want.
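For illustration, a minimal C sketch of what the terminal's ^C effectively does: kill() with pid 0 signals every process in the caller's own process group (killpg() or a negative pid can target another group). This is just a demonstration, not part of the original answer's code.
#include <signal.h>

int main(void)
{
    /* send SIGINT to every process in our own process group,
       roughly what the terminal driver does for ^C */
    kill(0, SIGINT);
    /* for some other group: kill(-pgid, SIGINT) or killpg(pgid, SIGINT) */
    return 0;
}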

How can a process kill itself?

#include <stdlib.h>
#include <unistd.h>
#include <signal.h>

int main() {
    pid_t pid = fork();
    if (pid == 0) {
        system("watch ls");
    }
    else {
        sleep(5);
        killpg(getpid(), SIGTERM); // to kill the complete process tree
    }
    return 0;
}
Terminal:
anirudh#anirudh-Aspire-5920:~/Desktop/testing$ gcc test.c
anirudh#anirudh-Aspire-5920:~/Desktop/testing$ ./a.out
Terminated
For the first 5 seconds the output of "watch ls" is shown, and then it terminates because I send a SIGTERM.
Question: How can a process kill itself? I have done kill(getpid(), SIGTERM);
My hypothesis:
During the kill() call the process switches to kernel mode. The kill call sends the SIGTERM to the process and records it in the process's process table. When the process comes back to user mode it sees the signal in its table and terminates itself (HOW? I REALLY DO NOT KNOW).
(I think I am going wrong (maybe badly) somewhere in my hypothesis, so please enlighten me.)
This code is actually a stub which I am using to test the other modules of my project.
It's doing the job for me and I am happy with it, but there remains a question in my mind: how does a process actually kill itself? I want to know the step-by-step story.
Thanks in advance
Anirudh Tomer
Your process dies because you are using killpg(), which sends a signal to a process group, not to a single process.
When you fork(), the child inherits from the parent, among other things, the process group. From man fork:
* The child's parent process ID is the same as the parent's process ID.
So you kill the parent along with the child.
If you do a simple kill(getpid(), SIGTERM) then only the parent kills itself; the child (the one watching ls) keeps running.
During the kill() call the process switches to kernel mode. The kill call sends the SIGTERM to the process and records it in the process's process table. When the process comes back to user mode it sees the signal in its table and terminates itself (HOW? I REALLY DO NOT KNOW).
In Linux, when returning from kernel mode to user-space mode, the kernel checks whether there are any pending signals that can be delivered. If there are, it delivers them just before returning to user space. It can also deliver signals at other times, for example when a process blocked on select() is killed, or when a thread accesses an unmapped memory location.
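For example, in a single-threaded process the signal raised by kill(getpid(), ...) is already delivered by the time kill() returns, precisely because of that check on the way back to user space. A minimal sketch (not from the original post):
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_term(int sig)
{
    (void)sig;
    const char msg[] = "SIGTERM delivered\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);   /* async-signal-safe output */
}

int main(void)
{
    signal(SIGTERM, on_term);
    kill(getpid(), SIGTERM);   /* the handler runs before this call returns */
    puts("back in main after the handler");
    return 0;
}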
I think that when it sees the SIGTERM signal in its process table it first kills its child processes (the complete tree, since I have called killpg()) and then calls exit().
I am still looking for a better answer to this question.
kill(getpid(), SIGKILL); // kill itself, I think
I tested it after a fork() in the case 0: branch, and the child quit normally, separate from the parent process.
I don't know if this is a standard, sanctioned method...
(I can see from my psensor tool that CPU usage returns to about 34%, like a normal program whose counting loop has stopped.)
This is super-easy in Perl:
{
    local $SIG{TERM} = "IGNORE";
    kill TERM => -$$;
}
Conversion into C is left as an exercise for the reader.
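For reference, a rough C equivalent of the Perl snippet, assuming the process is the leader of its own process group (as a shell job usually is): ignore SIGTERM locally, signal the whole group, then restore the default disposition.
#include <signal.h>
#include <unistd.h>

int main(void)
{
    signal(SIGTERM, SIG_IGN);     /* local $SIG{TERM} = "IGNORE"; */
    kill(-getpid(), SIGTERM);     /* kill TERM => -$$;  (whole process group) */
    signal(SIGTERM, SIG_DFL);     /* end of the local block */
    return 0;
}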

Resources