File descriptor for ioctl call to make a controlling terminal - linux

On Linux, to be able to control the lifetime of processes forked off of my main process, I'm making the main process the session and process group leader by calling setsid(). It then looks like I need to have the main process acquire a controlling terminal for the process group, so that once the main process terminates, all other processes in the process group receive a SIGHUP. I tried calling open() on a regular file on the filesystem, but ioctl() refuses to accept this fd with 'Inappropriate ioctl for device'. Is posix_openpt() what I should be using instead? The man page says that it creates a pseudo-terminal and returns a file descriptor for it. Do I even need an ioctl(fd, TIOCSCTTY, 0) call after posix_openpt(), or is not using O_NOCTTY all I really need? Thanks!

Do I even need an ioctl(fd, TIOCSCTTY, 0) call after posix_openpt(), or is not using O_NOCTTY all I really need?
I just tried on Ubuntu 18.04.5:
If you don't do that and the controlling process exits, the systemd process becomes the new parent of the child process and the child process does not receive SIGHUP.
I'm not sure if this behavior is the same for other Linux distributions, too.
Is posix_openpt() what I should be using instead?
Try the following code:
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdlib.h>
#include <sys/ioctl.h>

int master, tty;
master = posix_openpt(O_RDWR);        /* create the pseudo-terminal master */
grantpt(master);                      /* fix ownership/mode of the slave side */
unlockpt(master);                     /* allow the slave side to be opened */
tty = open(ptsname(master), O_RDWR);  /* open the slave side */
ioctl(tty, TIOCSCTTY, 0);             /* make it the controlling terminal */
This must be done in the same process that called setsid().
Note: As soon as you completely close the master file, the processes will receive a SIGHUP.
("Completely" means: When you close all copies created by dup() or by creating a child process inheriting the handle.)
If you really want to use the pseudo-TTY, you should not let child processes inherit the master handle (or you should close() the handle in the child processes). However, in your case you only want to use the pseudo-TTY as a "workaround", so this is not that important.
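Putting it all together, here is a minimal, untested sketch of the whole scheme (it assumes the main process is not already a process group leader, since setsid() fails with EPERM in that case):

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    if (setsid() < 0)                       /* become session + group leader */
        perror("setsid");

    int master = posix_openpt(O_RDWR);      /* note: no O_NOCTTY */
    if (master < 0) { perror("posix_openpt"); return 1; }
    grantpt(master);
    unlockpt(master);

    int tty = open(ptsname(master), O_RDWR);
    if (tty < 0) { perror("open"); return 1; }
    if (ioctl(tty, TIOCSCTTY, 0) < 0)       /* acquire controlling terminal */
        perror("TIOCSCTTY");

    if (fork() == 0) {
        close(master);                      /* child must not hold the master */
        pause();                            /* blocks until SIGHUP arrives */
        _exit(0);
    }

    sleep(1);
    /* Leader exits here: the kernel sends SIGHUP to the foreground
     * process group of the controlling terminal, killing the child. */
    return 0;
}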

Related

Why do hanging SSH commands wait for output from a pipe with both ends open in 'sshd' on the server?

This is on StackOverflow as opposed to SuperUser/ServerFault since it has to do with the syscalls and OS interactions being performed by sshd, not the problem I'm having using SSH (though assistance with that is appreciated, too :p).
Context:
I invoke a complex series of scripts via SSH, e.g. ssh user@host -- /my/command. The remote command does a lot of complex forking and exec'ing and eventually results in a backgrounded daemon process running on the remote host. Occasionally (I'm slowly going mad trying to find reliable reproduction conditions), the ssh command will never return control to the client shell. In those situations, I can go onto the target host and see an sshd: user@notty process with no children hanging indefinitely.
Fixing that issue is not what this question is about. This question is about what that sshd process is doing.
The SSH implementation is OpenSSH, and the version is 5.3p1-112.el6_7.
The problem:
If I find one of those stuck sshds and strace it, I can see it's doing a select on two handles, e.g. select(12, [3 6], [], NULL, NULL) or similar. lsof tells me that one of those handles is the TCP socket connecting back to the SSH client. The other is a pipe, the other end of which is only open in the same sshd process. If I search for that pipe by ID using the answer to this SuperUser question, the only process that contains references to that pipe is the same process. lsof confirms this: both the read and write ends of the pipe are open in the same process, e.g. (for pipe 788422703 and sshd PID 22744):
sshd 22744 user 6r FIFO 0,8 0t0 788422703 pipe
sshd 22744 user 7w FIFO 0,8 0t0 788422703 pipe
Questions:
What is SSH waiting for? If the pipe isn't connected to anything and there are no child processes, I can't imagine what event it could be expecting.
What is that "looped" pipe/what does it represent? My only theory is that maybe if STDIN isn't supplied to the SSH client, the target host sshd opens a dummy STDIN pipe so some of its internal child-management code can be more uniform? But that seems pretty tenuous.
How does SSH get into this situation?
What I've Tried/Additional Info:
Initially, I thought this was a handle leak to a daemon. It's possible to create a waiting, child-less sshd process by issuing a command that backgrounds itself, e.g. ssh user@host -- 'sleep 60 &'; sshd will wait for the streams to the daemonized process to be closed, not just for the exit of its immediate child. Since the scripts in question eventually result (way down the process tree) in a daemon being started, it initially seemed possible that the daemon was holding onto a handle. However, that doesn't seem to hold up--using the sleep 60 & command as an example, sshd processes communicating with daemons hold and select on four open pipes, not just two, and at least two of those pipes are connected from sshd to the daemon process, not looped. Unless there's a method of tracking/pointing to a pipe I don't know about (and there likely is--for example, I have no idea how duped filehandles play into close() semantics or piping), I don't think the pipe-to-self situation represents a waiting-on-daemon case.
sshd periodically receives communication on the TCP socket/ssh connection itself, which wakes it up out of the selects for a brief period of communication (during which strace shows it blocking SIGCHLD), and then it goes back to waiting on the same FDs.
It's possible that I'm being affected by this race condition (SIGCHLD getting delivered before the kernel makes data available in the pipe). However, that seems unlikely, both given the rate at which this condition manifests, and the fact that the processes being run on the target host are Perl scripts, and the Perl runtime closes and flushes open file descriptors on shutdown.
It seems that you're describing the notify pipe. The OpenSSH sshd main loop calls select() to wait until it has something to do. The file descriptors being polled include the TCP connection to the client and any descriptors used to service active channels.
sshd wants to be able to interrupt the select() call when a SIGCHLD signal is received. To do that, sshd installs a signal handler for SIGCHLD and it creates a pipe. When a SIGCHLD signal is received, the signal handler writes a byte into the pipe. The read end of the pipe is included in the list of file descriptors polled by select(). The act of writing to the pipe would cause the select() call to return with an indication that the notify pipe is readable.
All of the code is in serverloop.c:
/*
 * we write to this pipe if a SIGCHLD is caught in order to avoid
 * the race between select() and child_terminated
 */
static int notify_pipe[2];

static void
notify_setup(void)
{
    if (pipe(notify_pipe) < 0) {
        error("pipe(notify_pipe) failed %s", strerror(errno));
    } else if ((fcntl(notify_pipe[0], F_SETFD, 1) == -1) ||
        (fcntl(notify_pipe[1], F_SETFD, 1) == -1)) {
        error("fcntl(notify_pipe, F_SETFD) failed %s", strerror(errno));
        close(notify_pipe[0]);
        close(notify_pipe[1]);
    } else {
        set_nonblock(notify_pipe[0]);
        set_nonblock(notify_pipe[1]);
        return;
    }
    notify_pipe[0] = -1;    /* read end */
    notify_pipe[1] = -1;    /* write end */
}

static void
notify_parent(void)
{
    if (notify_pipe[1] != -1)
        write(notify_pipe[1], "", 1);
}

[...]

/*ARGSUSED*/
static void
sigchld_handler(int sig)
{
    int save_errno = errno;

    child_terminated = 1;
#ifndef _UNICOS
    mysignal(SIGCHLD, sigchld_handler);
#endif
    notify_parent();
    errno = save_errno;
}
The code that sets up and performs the select() call is in another function called wait_until_can_do_something(). It's fairly long, so I won't include it here; OpenSSH is open source, and the project's website describes how to download the source code.
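For illustration only, here is a stripped-down, hypothetical sketch of the pattern; it is not the actual OpenSSH code (connection_fd stands in for the TCP socket to the client):

#include <sys/select.h>
#include <unistd.h>

/* Hypothetical stand-in for wait_until_can_do_something(): block in
 * select() on the client socket and on the notify pipe's read end,
 * then drain the pipe so one SIGCHLD is not reported twice.
 * connection_fd and notify_pipe[] are assumed to be set up elsewhere. */
static void wait_for_activity(int connection_fd, int notify_pipe[2])
{
    fd_set readset;
    FD_ZERO(&readset);
    FD_SET(connection_fd, &readset);
    if (notify_pipe[0] != -1)
        FD_SET(notify_pipe[0], &readset);

    int maxfd = connection_fd > notify_pipe[0] ? connection_fd : notify_pipe[0];
    if (select(maxfd + 1, &readset, NULL, NULL, NULL) <= 0)
        return;

    if (notify_pipe[0] != -1 && FD_ISSET(notify_pipe[0], &readset)) {
        char buf[64];
        while (read(notify_pipe[0], buf, sizeof(buf)) > 0)
            ;                    /* drain; the pipe is non-blocking */
    }
}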

Preventing threaded subprocess.Popen from terminating my main script when the child is killed?

Python 2.7.3 on Solaris 10
Questions
When my subprocess segfaults (dumps core) or a user kills it from the shell with SIGTERM or SIGKILL, my main program's signal handler handles a SIGTERM (-15) and my parent program exits. Is this real, or is it a bad Python build?
Background and Code
I have a Python script that first spawns a worker management thread. The worker management thread then spawns one or more worker threads. I have other stuff going on in my main thread that I cannot block. My management and worker threads are rock-solid. My services run for years without restarts, but then we have this subprocess.Popen scenario:
In the run method of the worker thread, I am using:
import subprocess
import tempfile
import threading

class workerThread(threading.Thread):
    def __init__(self):
        super(workerThread, self).__init__()
        ...
    def run(self):
        ...
        atempfile = tempfile.NamedTemporaryFile(delete=False)
        myprocess = subprocess.Popen(['third-party-cmd', 'with', 'arguments'],
                                     shell=False, stdin=subprocess.PIPE,
                                     stdout=atempfile, stderr=subprocess.STDOUT,
                                     close_fds=True)
        ...
...
I need to use myprocess.poll() to check for process termination, because I need to scan atempfile until I find relevant information (the file may be > 1 GiB), and I need to terminate the process on user request or because the process has been running too long. Once I find what I am looking for, I will stop checking the stdout temp file. I will clean it up after the external process is dead and before the worker thread terminates. I need the stdin PIPE in case I need to inject a response to something interactive in the child's stdin stream.
In my main program, I set SIGINT and SIGTERM handlers to perform cleanup if my main Python program is terminated with SIGTERM, or with SIGINT (Ctrl-C) when running from the shell.
Does anyone have a solid 2.x recipe for child signal handling in threads?
ctypes sigprocmask, etc.
Any help would be very appreciated. I am just looking for an 'official' recipe or the BEST hack, if one even exists.
Notes
I am using a restricted build of Python. I must use 2.7.3. Third-party-cmd is a program I do not have source for - modifying it is not possible.
There are many things in your description that look strange. First: you have a couple of different threads and processes. Who is crashing, who's receiving SIGTERM, who's receiving SIGKILL, and due to which operations?
Second: why does your parent receive SIGTERM? It can't be sent implicitly. Someone is calling kill on your parent process, either directly or indirectly (for example, by killing the whole process group).
Third: how does your program terminate when you're handling SIGTERM? By definition, the program terminates if the signal is not handled. If it's handled, the program is not terminated. What's really happening?
Suggestions:
$ cat crsh.c
#include <stdio.h>

int main(void)
{
    int *f = 0x0;
    puts("Crashing");
    *f = 0;
    puts("Crashed");
    return 0;
}
$ cat a.py
import subprocess, sys
print('begin')
p = subprocess.Popen('./crsh')
a = raw_input()
print(a)
p.wait()
print('end')
$ python a.py
begin
Crashing
abcd
abcd
end
This works. No signal is delivered to the parent. Have you isolated the problem in your program?
If the problem is a signal sent to multiple processes: can you use setpgid() to set up a separate process group for the child? (See the sketch below.)
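A rough sketch of that idea, in C (a hypothetical helper; with Python 2.7's subprocess you can get the same effect by passing preexec_fn=os.setpgrp to Popen):

#include <sys/types.h>
#include <unistd.h>

/* Hypothetical helper: fork-and-exec a child in its own process group,
 * so group-directed signals aimed at the parent do not also hit it. */
static pid_t spawn_detached(char *const argv[])
{
    pid_t pid = fork();
    if (pid == 0) {
        setpgid(0, 0);            /* child: new group, pgid == child pid */
        execvp(argv[0], argv);
        _exit(127);               /* only reached if exec fails */
    }
    return pid;
}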
Is there any reason for creating the temporary file? That's a 1 GB file being created in your temporary directory. Why not pipe stdout directly?
If you're really sure you need to handle signals in your parent program (why not try/except KeyboardInterrupt, for example?): could the unspecified behavior of signal() in multithreaded programs be causing those problems (for example, a signal being dispatched to a thread that does not handle signals)?
NOTES
The effects of signal() in a multithreaded process are unspecified.
Anyway, try to explain more precisely what the threads and processes of your program are, what they do, how the signal handlers were set up and why, who is sending signals, who is receiving them, and so on.

Signal shortcut keys and process groups

I have one simple question about signals on Linux systems. As I understand it, every process has its PID and PGID. When I create a process it gets its unique PID; now if I fork a new process with the fork() function, I get a child process with a different PID but the same PGID.
Now, the code
#include <stdio.h>
#include <unistd.h>

int main()
{
    int i = 3;
    int j;
    for (j = 0; j < i; ++j)
    {
        if (fork() == 0)
        {
            while (1)
            {
            }
        }
    }
    printf("created\n");
    while (1)
    {
    }
    return 0;
}
when I compile this program and run it with the command
./foo
and wait a second so it creates its children, then press CTRL-C and run ps aux, I can see that the parent and the children are gone, but if I do
./foo
wait for the forking to complete and in another terminal do
kill -INT <pid_of_foo>
and then run ps aux, I can see that the parent is gone but the children are still alive and eating my CPU.
I am not sure, but it seems that CTRL-C sends the signal to every process in the process group, while the kill -SIGNAL pid command sends the signal to the process with PID=pid, not PGID=pid.
Am I on the right track? If yes, why does the key combination kill processes by PGID and not by PID?
Signal delivery, process groups, and sessions
Yes, you are on the right track.
Modern Unix variants since the BSD releases implement sessions and process groups.
You can look at sessions as groups of process groups. The idea was that everything resulting from a single login on a tty or pseudo-tty line is part of a session, and things relating to a single shell pipeline or other logical grouping of processes would be organized into a single process group.
This makes moving "jobs" between the foreground and background, and delivering signals, more convenient. Shell users mostly don't need to worry about individual processes and can Ctrl-C a group of related commands in an intuitive manner.
Signals generated by the keyboard are sent to the foreground process group of a session. The CLI kill command you are using delivers signals to individual processes. If you want to duplicate the ^C delivery mechanism, you can use kill 0; that sends the signal to every member of the caller's process group, and if executed from a script it may do what you want.
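For completeness, kill(2) also accepts a negative PID, meaning "every process in the group -pid". A hypothetical helper sketch (it assumes foo is its own process group leader, which a job-control shell arranges, so foo's PID doubles as its PGID):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

/* Hypothetical helper: mimic ^C by signalling a whole process group. */
int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pgid>\n", argv[0]);
        return 1;
    }
    pid_t pgid = (pid_t)atoi(argv[1]);
    if (kill(-pgid, SIGINT) < 0) {   /* negative pid = every group member */
        perror("kill");
        return 1;
    }
    return 0;
}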
Note: I edited your question to change GPID to PGID.

How can a process kill itself?

#include <stdlib.h>
#include <unistd.h>
#include <signal.h>

int main()
{
    pid_t pid = fork();
    if (pid == 0) {
        system("watch ls");
    }
    else {
        sleep(5);
        killpg(getpid(), SIGTERM);  /* to kill the complete process tree */
    }
    return 0;
}
Terminal:
anirudh@anirudh-Aspire-5920:~/Desktop/testing$ gcc test.c
anirudh@anirudh-Aspire-5920:~/Desktop/testing$ ./a.out
Terminated
For the first 5 seconds the output of "watch ls" is shown, and then it terminates because I send a SIGTERM.
Question: How can a process kill itself? I have done kill(getpid(), SIGTERM);
My hypothesis:
So during the kill() call the process switches to kernel mode. The kill call sends the SIGTERM to the process and records it in the process's process table. When the process comes back to user mode it sees the signal in its table and it terminates itself (HOW? I REALLY DO NOT KNOW).
(I think I am going wrong (maybe a blunder) somewhere in my hypothesis ... so please enlighten me)
This code is actually a stub which I am using to test the other modules of my project.
It does the job for me and I am happy with it, but there lies a question in my mind: how does a process actually kill itself? I want to know the step-by-step hypothesis.
Thanks in advance
Anirudh Tomer
Your process dies because you are using killpg(), which sends a signal to a process group, not to a single process.
When you fork(), the child inherits from the parent, among other things, the process group. From man fork:
* The child's process group ID is the same as the parent's process group ID.
So you kill the parent along with the child.
If you do a simple kill(getpid(), SIGTERM), then the parent will kill only itself, and the child (that is watching ls) will keep running.
So during the kill() call the process switches to kernel mode. The kill call sends the SIGTERM to the process and records it in the process's process table. When the process comes back to user mode it sees the signal in its table and it terminates itself (HOW? I REALLY DO NOT KNOW).
In Linux, when returning from kernel mode to user-space mode, the kernel checks whether there are any pending signals that can be delivered. If there are, it delivers them just before returning to user-space mode. It can also deliver signals at other times, for example if a process was blocked on select() and then killed, or when a thread accesses an unmapped memory location.
I think that when it sees the SIGTERM signal in its process table, it first kills its child processes (the complete tree, since I have called killpg()) and then calls exit().
I am still looking for a better answer to this question.
kill(getpid(), SIGKILL); // kill itself, I think
I tested it after a fork() in the case 0: branch, and it exited cleanly, separate from the parent process.
I don't know if this is a standard, sanctioned method ....
(I can see from my psensor tool that CPU usage returned to about 34%, like a normal program whose counting loop has stopped.)
This is super-easy in Perl:
{
    local $SIG{TERM} = "IGNORE";
    kill TERM => -$$;
}
Conversion into C is left as an exercise for the reader.
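One possible conversion, as an untested sketch (it uses getpgrp() instead of Perl's literal $$, which avoids assuming the caller is its own group leader):

#include <signal.h>
#include <unistd.h>

/* Untested sketch of the Perl snippet: ignore SIGTERM in this process,
 * signal our whole process group, then restore the old disposition
 * (Perl's "local" does the restore automatically on scope exit). */
static void term_group_except_self(void)
{
    struct sigaction ign, old;
    ign.sa_handler = SIG_IGN;
    sigemptyset(&ign.sa_mask);
    ign.sa_flags = 0;
    sigaction(SIGTERM, &ign, &old);

    kill(-getpgrp(), SIGTERM);     /* Perl's -$$: the caller's process group */

    sigaction(SIGTERM, &old, NULL);
}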

Prevent fork() from copying sockets

I have the following situation (pseudocode):
function f:
    pid = fork()
    if pid == 0:
        exec to another long-running executable (no communication needed to that process)
    else:
        return "something"
f is exposed over an XmlRpc++ server. When the function is called over XML-RPC, the parent process prints "done closing socket" after the function has returned "something". But the XML-RPC client hangs as long as the child process is still running. When I kill the child process, the XML-RPC client correctly finishes the RPC call.
It seems to me that I'm having a problem with fork() copying socket descriptors to the child process (the parent called closesocket(), but the child still owns a reference -> the connection stays established). How can I circumvent this?
EDIT: I have already read about FD_CLOEXEC, but can't I force all descriptors to be closed on exec?
No, you can't force all file descriptors to be closed on exec. You will need to loop over all unwanted file descriptors in the child after the fork() and close them. Unfortunately, there isn't an easy, portable way to do that - the usual approach is to use getrlimit() to get the current value of RLIMIT_NOFILE and loop from 3 to that number, trying close() on each candidate.
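A sketch of that approach (the helper name is hypothetical; close() on a descriptor that is not open just fails with EBADF, which is harmless here):

#include <sys/resource.h>
#include <unistd.h>

/* In the child, after fork() and before exec, close every descriptor
 * above stderr. */
static void close_inherited_fds(void)
{
    struct rlimit rl;
    long maxfd = 1024;                  /* fallback if getrlimit() fails */
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0 && rl.rlim_cur != RLIM_INFINITY)
        maxfd = (long)rl.rlim_cur;
    for (long fd = 3; fd < maxfd; fd++)
        close((int)fd);
}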
If you are happy to be Linux-only, you can read the /proc/self/fd/ directory to determine the open file descriptors and close them (except 0, 1, and 2, which should either be left alone or reopened to /dev/null).
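And a Linux-only sketch using /proc/self/fd (again a hypothetical helper; note it must skip the descriptor used for the directory scan itself):

#include <dirent.h>
#include <stdlib.h>
#include <unistd.h>

/* Close every fd listed in /proc/self/fd except stdin/stdout/stderr
 * and the fd backing the directory scan. */
static void close_fds_linux(void)
{
    DIR *d = opendir("/proc/self/fd");
    if (d == NULL)
        return;
    int dfd = dirfd(d);
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        int fd = atoi(e->d_name);   /* "." and ".." parse as 0, skipped */
        if (fd > 2 && fd != dfd)
            close(fd);
    }
    closedir(d);
}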
