I have a standard program using fork() and pipe(): the child process makes a system() call to a third-party program and its output is redirected to the parent process. I discovered that when I do this, the parent process never detects that the child has closed the pipe, so it never exits the while loop around read().
The issue disappears when I replace the system() call to the third-party program with a generic call such as system("ls -l"). What about invoking the third-party program through system() could cause this behavior?
#include <iostream>
#include <fstream>
#include <stdexcept>   // std::logic_error
#include <stdlib.h>    // exit, EXIT_SUCCESS
#include <unistd.h>    // pipe, fork, dup2, close, read
#include <sys/wait.h>

int main(int argc, char **argv) {
    // set up the pipe
    int pipeid_L1[2];
    pipe(pipeid_L1);

    pid_t pid_L1 = fork();
    if (pid_L1 == -1) {
        throw std::logic_error("Fork L1 failed");
    }
    else if (pid_L1 == 0) { // L1 child process
        dup2(pipeid_L1[1], STDOUT_FILENO); // redirect standard out to the pipe
        close(pipeid_L1[0]);               // child doesn't read
        system( ... some program ... );    // system call to the third-party program
        close(pipeid_L1[1]);
        exit(0);
    }
    else { // parent process
        close(pipeid_L1[1]); // parent doesn't write
        const int buf_size = 64;
        char L1_buf[buf_size];
        // this while loop never exits if I make the system call to the third-party program
        while (read(pipeid_L1[0], L1_buf, buf_size) > 0) {
            ... do stuff here ...
        }
    }
    exit(EXIT_SUCCESS);
}
The problem is that the parent will only see the EOF when ALL other processes close the write end of the pipe. There are three relevant processes -- the child you forked, the shell that system forks and execs, and the actual program you run. The first two won't close their end of the pipe until after the program actually exits, so the parent won't see the EOF until that happens and all the processes exit.
If you want the parent to see the EOF as soon as the program closes its stdout, rather than waiting until it exits, you'll need to get rid of those extra processes by using exec rather than system.
Alternately, you can use popen which does all of the needed fork/pipe/exec for you.
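For instance, a minimal sketch of the popen approach might look like this (the command string "some_program" is just a placeholder for your third-party program):

#include <cstdio>   // popen, pclose, fgets

int main() {
    // "some_program" is a placeholder for the actual third-party command
    FILE *fp = popen("some_program", "r");
    if (fp == NULL)
        return 1;

    char buf[64];
    // fgets returns NULL at end of file, i.e. once no writer is left on the pipe
    while (fgets(buf, sizeof(buf), fp) != NULL) {
        // ... do stuff here ...
    }

    pclose(fp); // waits for the child started by popen
    return 0;
}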
I want to understand the lifetime of a pipe (http://linux.die.net/man/2/pipe).
Does the data in the pipe stay alive if either the sender or receiver dies/exits?
Can the pipe be created if the receiver is not present? (i.e. has not been forked off yet)?
I need to send data from the sender to the receiver. However, the receiver may not have been forked off yet, and may only become active about 1-2 seconds after the sender. They share the same parent process, but the receiver may be forked off well after the sender, or vice versa.
It is also possible that the sender finishes processing and exits at any time.
I'm trying to see whether using pipes instead of a shared-memory queue would work for me.
The pipe MUST be created before the fork. After the fork, each process uses either the read or the write end. It's best to close the not-used end of the pipe immediately after the fork.
If the writing process exits, the reader can read all the remaining data in the pipe, but the subsequent read system call in it returns with 0 bytes read, that's how you know it's over. If the writing process is still keeping the pipe open but does not write anything into it, read blocks until bytes become available.
If the writing process has written a lot of data into the pipe and exits, the data are still available for the reader.
If the reading process exits, the writing process is killed by a SIGPIPE signal. It has the option of handling the signal in different ways, but it's killed by default.
So the pipe may survive the writer, but not the reader. Proof of concept (cső is Hungarian for pipe):
#include <unistd.h>

int main(void)
{
    int cso[2];
    pipe(cso);

    if (fork() == 0) {              // writer child
        close(cso[0]);
        write(cso[1], "cso\n", 4);
        return 0;                   // the writer exits right away
    }
    close(cso[1]);                  // parent keeps only the read end

    sleep(2);                       // by now the writer is long gone

    if (fork() == 0) {              // reader child, forked after the writer exited
        char line[4];
        read(cso[0], line, 4);      // the data is still sitting in the pipe
        write(1, line, 4);          // echo it to stdout
        return 0;
    }
    close(cso[0]);
    return 0;
}
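The opposite direction, the reader exiting first, can be sketched as follows (this is not part of the original proof of concept; the writer ignores SIGPIPE so it can report the EPIPE error instead of being killed):

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);

    if (fork() == 0) {              // reader child: exits immediately without reading
        close(fd[1]);
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                   // parent is the writer

    signal(SIGPIPE, SIG_IGN);       // without this, the write below would kill the writer
    sleep(1);                       // make sure the reader has exited

    if (write(fd[1], "data\n", 5) == -1)
        perror("write");            // reports EPIPE: no reader left on the pipe

    close(fd[1]);
    return 0;
}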
I'm writing a multithreaded, plugin-based application. I will not be the author of the plugins, so I would like to prevent the main application from crashing because of a segmentation fault in a plugin. Is that possible, or does a crash in a plugin necessarily compromise the main application as well?
I wrote a sketch program using Qt, because my "real" application is heavily based on the Qt library. As you can see, I force the thread to crash by calling trimmed() on an unallocated QString. The signal handler is called correctly, but after the thread is forced to quit the main application crashes as well. Did I do something wrong, or, as I said before, is what I'm trying to do simply not achievable?
Please note that in this simplified version of the program I avoided plugins and used only a thread. Introducing plugins will add another critical layer, I suppose, so I want to go step by step. Above all, I want to understand whether my goal is feasible. Thanks a lot for any help or suggestions.
#include <QString>
#include <QThread>
#include <csignal>
#include <QtGlobal>
#include <QtCore/QCoreApplication>

class MyThread : public QThread
{
public:
    static void sigHand(int sig)
    {
        qDebug("Thread crashed");
        QThread* th = QThread::currentThread();
        th->exit(1);
    }

    MyThread(QObject * parent = 0)
        : QThread(parent)
    {
        signal(SIGSEGV, sigHand);
    }

    ~MyThread()
    {
        signal(SIGSEGV, SIG_DFL);
        qDebug("Deleted thread, restored default signal handler");
    }

    void run()
    {
        QString* s;
        s->trimmed();
        qDebug("Should not reach this point");
    }
};

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    MyThread th(&a);
    th.run();
    while (th.isRunning());
    qDebug("Thread died but main application still on");
    return a.exec();
}
I'm currently working on the same issue and found this question via google.
There are several reasons your source is not working:
There is no new thread. A thread is only created if you call QThread::start. Instead you call MyThread::run, which executes the run method in the main thread.
You call QThread::exit to stop the thread, but it is not supposed to stop a thread directly; it asks the thread's event loop to stop. Since there is neither a thread nor an event loop, the call has no effect. Even if you had called QThread::start, it would not work, because overriding run does not create a Qt event loop; to be able to use exit with a QThread, you would need to call QThread::exec first.
However, QThread::exit is the wrong method anyway. To recover from the SIGSEGV, the thread must be stopped immediately, not after an event is processed in its event loop. So, although it is generally frowned upon, in this case QThread::terminate would have to be called.
But it is generally considered unsafe to call complex functions like QThread::currentThread, QThread::exit or QThread::terminate from signal handlers, so you should never call them there.
Since the thread is still running after the signal handler (and I'm not sure even QThread::terminate would kill it fast enough), the handler returns to where it was called from, re-executes the instruction that caused the SIGSEGV, and the next SIGSEGV occurs.
Therefore I have used a different approach: the signal handler changes the register containing the instruction address so that, after the handler returns, another function runs instead of the crashing instruction. Like this:
// recoverFromCrash() is the recovery function described below, defined elsewhere.
void signalHandler(int type, siginfo_t *si, void *ccontext) {
    // Rewrite the saved instruction pointer (Linux/x86; on x86-64 use REG_RIP instead)
    ucontext_t *context = static_cast<ucontext_t*>(ccontext);
    context->uc_mcontext.gregs[REG_EIP] = reinterpret_cast<greg_t>(&recoverFromCrash);
}

struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_flags = SA_SIGINFO;
sa.sa_sigaction = &signalHandler;
sigaction(SIGSEGV, &sa, 0);
The recoverFromCrash function is then called normally in the thread that caused the SIGSEGV. Since the signal handler is invoked for every SIGSEGV, from any thread, the function has to check which thread it is running in.
However, I did not consider it safe to simply kill the thread, since other things might depend on a running thread. So instead of killing it, I let it run in an endless loop (calling sleep to avoid wasting CPU time). Then, when the program is closed, it sets a global variable and the thread is terminated. (Notice that the recover function must never return, since otherwise execution would go back to the function that caused the SIGSEGV.)
When called from the main thread, on the other hand, it starts a new event loop to keep the program running.
if (QThread::currentThread() != QCoreApplication::instance()->thread()) {
    // sub thread
    QThread* t = QThread::currentThread();
    while (programIsRunning) ThreadBreaker::sleep(1);
    ThreadBreaker::forceTerminate();
} else {
    // main thread
    while (programIsRunning) {
        QApplication::processEvents(QEventLoop::AllEvents);
        ThreadBreaker::msleep(1);
    }
    exit(0);
}
ThreadBreaker is a trivial wrapper class around QThread, since msleep, sleep and setTerminationEnabled (which has to be called before terminate) of QThread are protected and could not be called from the recover function.
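A minimal sketch of what such a wrapper might look like (my own illustration; the actual ThreadBreaker code is not shown in this answer):

#include <QThread>

// Hypothetical sketch of the ThreadBreaker wrapper described above: it simply
// re-exports protected static members of QThread so the recover function can use them.
class ThreadBreaker : public QThread
{
public:
    static void sleep(unsigned long secs)   { QThread::sleep(secs); }
    static void msleep(unsigned long msecs) { QThread::msleep(msecs); }
    static void forceTerminate()
    {
        QThread::setTerminationEnabled(true);   // must run in the thread to be terminated
        QThread::currentThread()->terminate();  // then terminate the current thread
    }
};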
But this is only the basic picture. There are a lot of other things to worry about: Catching SIGFPE, Catching stack overflows (check the address of the SIGSEGV, run the signal handler in an alternate stack), have a bunch of defines for platform independence (64 bit, arm, mac), show debug messages (try to get a stack trace, wonder why calling gdb for it crashes the X server, wonder why calling glibc backtrace for it crashes the program)...
I would like to use gprof to profile a daemon. My daemon uses a 3rd party library, with which it registers some callbacks and then calls a main function that never returns. I need to call kill (either SIGTERM or SIGKILL) to terminate the daemon. Unfortunately, gprof's manual page says the following:
The profiled program must call "exit"(2) or return normally for the profiling information to be saved in the gmon.out file.
Is there is way to save profiling information for processes which are killed with SIGTERM or SIGKILL ?
First, I would like to thank @wallyk for giving me good initial pointers. I solved my issue as follows. Apparently, libc's gprof exit handler is called _mcleanup, so I registered a signal handler for SIGUSR1 (unused by the 3rd party library) and called _mcleanup and _exit from it. Works perfectly! The code looks as follows:
#define _GNU_SOURCE    // needed for RTLD_DEFAULT
#include <dlfcn.h>
#include <signal.h>    // signal
#include <stdio.h>
#include <unistd.h>    // _exit

void sigUsr1Handler(int sig)
{
    fprintf(stderr, "Exiting on SIGUSR1\n");
    // look up gprof's exit hook in the already loaded C library
    void (*_mcleanup)(void);
    _mcleanup = (void (*)(void))dlsym(RTLD_DEFAULT, "_mcleanup");
    if (_mcleanup == NULL)
        fprintf(stderr, "Unable to find gprof exit hook\n");
    else
        _mcleanup();
    _exit(0);
}

int main(int argc, char* argv[])
{
    signal(SIGUSR1, sigUsr1Handler);
    neverReturningLibraryFunction(); // provided by the third-party library
}
You could add a signal handler for a signal the third-party library doesn't catch or ignore. Probably SIGUSR1 is good enough, but you will either have to experiment or read the library's documentation, if it is thorough enough.
Your signal handler can simply call exit().
I guess some signal will be sent to the process. One or several? If more than one, in which order do they occur?
And what happens if the Terminate button is pressed and if the process has forked?
And what happens if the process has started other processes by system(...)?
I can't be sure without checking, but I would be surprised if the signal sent was anything other than SIGTERM (or possibly SIGKILL, but that would be a bit unfriendly of CDT).
As for sub-processes, depends what they are actually doing. If they are communicating with their parent processes over a pipe (in any way whatsoever, including reading their stdout), they'll likely find that those file descriptors close or enter the exception state; if they try to use the fds anyway they'll be sent a SIGPIPE. There may also be a SIGHUP in there.
If a sub-process was really completely disjoint (close all open FDs, no SIGTERM handler in the parent which might tell it to exit) then it could theoretically keep running. This is how daemon processes are spawned.
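To illustrate that last point, a completely detached child is typically spawned along these lines (a generic sketch, not specific to CDT or Eclipse):

#include <fcntl.h>
#include <sys/wait.h>
#include <unistd.h>

// Generic daemonization sketch: the grandchild keeps running even if the
// original parent is killed, because it shares no terminal, session, or
// pipe file descriptors with it.
void spawn_daemon(void)
{
    pid_t pid = fork();
    if (pid != 0) {
        waitpid(pid, NULL, 0);  // reap the intermediate child and return
        return;
    }

    setsid();                   // start a new session: no controlling terminal
    if (fork() != 0)
        _exit(0);               // intermediate child exits; grandchild is re-parented to init

    // detach stdio from whatever the parent had open
    int devnull = open("/dev/null", O_RDWR);
    dup2(devnull, STDIN_FILENO);
    dup2(devnull, STDOUT_FILENO);
    dup2(devnull, STDERR_FILENO);
    if (devnull > STDERR_FILENO)
        close(devnull);

    // ... run or exec the long-lived work here ...
}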
I checked SIGTERM, SIGHUP and SIGPIPE with the terminate button. It doesn't work...
I guess it is SIGKILL, and this makes me very sad! Also, I didn't find a good solution for running the program from an external (or built-in plugin) console.
It seems to be SIGKILL. SIGSTOP is used by GDB to stop/resume. From the signal man page:
The signals SIGKILL and SIGSTOP cannot be caught or ignored.
I tried to debug the following program with Eclipse. Pressing terminate in a Run session or pause in a Debug session does not print anything, so it must be either SIGKILL or SIGSTOP.
#include <signal.h>
#include <stdio.h>    // printf
#include <stdlib.h>   // atoi
#include <string.h>   // strsignal
#include <unistd.h>   // sleep

void handler(int sig) {
    printf("\nsig:%2d %s\n", sig, strsignal(sig));
}

int main(int argc, char **argv) {
    int signum;
    int delay;

    if (argc < 2) {
        printf("usage: continue <sleep>\n");
        return 1;
    }
    delay = atoi(argv[1]);

    // try to install the handler for every signal number;
    // this silently fails for SIGKILL and SIGSTOP, which cannot be caught
    for (signum = 1; signum < 64; signum++) {
        signal(signum, handler);
    }

    printf("sleeping %d s\n", delay);
    for (;;) {
        sleep(delay);
    }
    return 0;
}
I am using a message queue as IPC between two programs.
Now I want to send data from one program to the other using the message queue and then notify it through a SIGINT signal.
I don't know how to send a signal from one program to another.
Can anybody please provide sample code if they have a solution?
#include <sys/types.h>
#include <signal.h>
int kill(pid_t pid, int sig);
Signals on Linux can be sent using the kill system call; check its documentation (man 2 kill) for details and an example. Also, it's not advisable to use SIGINT for this; use SIGUSR1 or SIGUSR2 instead.
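For example, a minimal sketch of the sender side (the receiver's PID has to be communicated some other way, e.g. written to a file or passed on the command line; the program name and the use of SIGUSR1 here are assumptions):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: notify <receiver-pid>\n");
        return 1;
    }

    pid_t pid = (pid_t)atoi(argv[1]);   // PID of the receiving program

    // ... put the data on the message queue here ...

    if (kill(pid, SIGUSR1) == -1) {     // notify the receiver
        perror("kill");
        return 1;
    }
    return 0;
}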
Note that by using the sigqueue() system call, you can pass an extra piece of data along with your signal. Here's a brief quote from "man 2 sigqueue":
The value argument is used to specify an accompanying item of data (either an integer or a pointer value) to be sent with the signal, and has the following type:

union sigval {
    int   sival_int;
    void *sival_ptr;
};
This is a very convenient way to pass a small bit of information between two processes. I agree with the user above: use SIGUSR1 or SIGUSR2 and a suitable sigval, and you can pass whatever you'd like.
You could also pass a pointer to some object in shared memory via the sival_ptr, and pass a larger object that way.
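A minimal sketch of the sigqueue mechanism, with both sides in one process for brevity (the payload value 42 is just a placeholder; in your setup the sender would use the receiver's PID instead of its own):

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

// Receiver side: an SA_SIGINFO handler sees the value sent along with the signal.
static void on_usr1(int sig, siginfo_t *si, void *ctx)
{
    // printf is not async-signal-safe; fine for a demo, avoid in real code
    printf("got SIGUSR1 with value %d\n", si->si_value.sival_int);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = on_usr1;
    sigaction(SIGUSR1, &sa, NULL);

    // Sender side (normally done by the other program, using the receiver's PID).
    union sigval value;
    value.sival_int = 42;                 // placeholder payload
    sigqueue(getpid(), SIGUSR1, value);   // here the process simply signals itself

    sleep(1);  // give the signal time to be delivered before exiting
    return 0;
}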
system("kill -2 `pidof <app_name_here>` ");