How to temporarily capture stdout of my own process on Linux? - linux

I'm trying to display the output of a method in a QTextWidget. The method prints JSON on stdout, and it's part of 3rd-party code that I can't change. It looks like this:
int iperf_json_finish(struct iperf_test *test)
{
    char *str;

    str = cJSON_Print(test->json_top);
    if (str == NULL)
        return -1;
    fputs(str, stdout);
    putchar('\n');
    fflush(stdout);
    free(str);
    cJSON_Delete(test->json_top);
    test->json_top = test->json_start = test->json_intervals = test->json_end = NULL;
    return 0;
}
So, how can I do that? I'm using Qt 4.8 and can't use things like QMessageLogContext.

How to temporarily capture stdout of my own process on Linux?
The trick is to replace the stdout file descriptor with the write end of your own pipe, capture the output through the pipe, and then put the original stdout back.
General scheme on *NIX platforms is:
Save the original file descriptor of stdout: int orig_stdout = dup(1);
Prepare a pipe: int cap_stdout[2]; pipe(cap_stdout);
Do not forget to flush the stdio buffers: fflush(stdout);
Replace stdout with the write end of the pipe: dup2( cap_stdout[1], 1 ); (note the argument order: dup2(oldfd, newfd))
Start an extra thread reading from the cap_stdout[0] file descriptor. This is where the intercepted data is captured.
Call the 3rd-party library function.
Flush the stdio buffers again: fflush(stdout);
Signal the reading thread to terminate.
Restore stdout: dup2( orig_stdout, 1 );
Cleanup: close(cap_stdout[0]); close(cap_stdout[1]); close(orig_stdout);
The data you wanted are what the reading thread has read.
Note that the extra thread is needed because the pipe has a limited buffer space, and without the reader thread, the writes to the (piped) stdout would block, leading to deadlock inside the application.
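For concreteness, here is a minimal, untested sketch of that scheme in C (error handling omitted; capture_stdout, drain and the fixed-size captured buffer are illustrative names, not part of any library). It deviates from the recipe in one detail: the duplicate write end is closed right after dup2(), so restoring stdout also acts as the termination signal, because the reader thread then sees EOF:
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int cap_pipe[2];
static char captured[65536];
static size_t captured_len;

/* Reader thread: drain the pipe until every write end is closed. */
static void *drain(void *arg)
{
    char buf[4096];
    ssize_t n;
    (void)arg;
    while ((n = read(cap_pipe[0], buf, sizeof(buf))) > 0) {
        size_t room = sizeof(captured) - 1 - captured_len;
        size_t take = (size_t)n < room ? (size_t)n : room;
        memcpy(captured + captured_len, buf, take);
        captured_len += take;          /* keep draining even when the buffer is full */
    }
    return NULL;
}

/* Run fn() with stdout temporarily redirected into cap_pipe. */
static void capture_stdout(void (*fn)(void))
{
    int orig_stdout = dup(1);          /* save the real stdout */
    pthread_t t;

    pipe(cap_pipe);
    fflush(stdout);
    dup2(cap_pipe[1], 1);              /* fd 1 now feeds the pipe */
    close(cap_pipe[1]);                /* fd 1 holds the only write end */

    pthread_create(&t, NULL, drain, NULL);

    fn();                              /* the 3rd-party call goes here */

    fflush(stdout);
    dup2(orig_stdout, 1);              /* restore stdout; this closes the last
                                          write end, so drain() sees EOF and exits */
    pthread_join(t, NULL);
    close(cap_pipe[0]);
    close(orig_stdout);
    captured[captured_len] = '\0';
}
After capture_stdout(run_iperf) returns (run_iperf being a hypothetical wrapper around the 3rd-party call), captured holds the JSON text, ready to be handed to the QTextWidget.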

Related

Pipe data to thread. Read stuck

I want to trigger a callback when data is written on a file descriptor. For this I have set up a pipe and a reader thread, which reads the pipe. When it has data, the callback is called with the data.
The problem is that the reader is stuck on the read syscall. Destruction order is as follows:
Close write end of pipe (I expected this to trigger a return from blocking read, but apparently it doesn't)
Wait for reader thread to exit
Restore old file descriptor context (If stdout was redirected to the pipe, it no longer is)
Close read end of pipe
When all write ends of the pipe have been closed, a blocking read() on the read end returns 0 (end of file).
Here is an example program creating a reader thread from a pipe. The main program gets data from stdin with fgets() and writes that data into the pipe. On the other side, the thread reads the pipe and triggers the callback passed as a parameter. The thread stops when it gets 0 from the read() of the pipe (meaning that the main thread closed the write side):
#include <stdio.h>
#include <errno.h>
#include <pthread.h>
#include <unistd.h>
#include <string.h>

int pfd[2];

void read_cbk(char *data, size_t size)
{
    printf("CBK triggered, %zu bytes: %s", size, data);
}

void *reader(void *p)
{
    char data[128];
    void (*cbk)(char *data, size_t size) = (void (*)(char *, size_t))p;
    int rc;

    do {
        rc = read(pfd[0], data, sizeof(data));
        switch (rc) {
        case 0:
            fprintf(stderr, "Thread: rc=0\n");
            break;
        case -1:
            fprintf(stderr, "Thread: rc=-1, errno=%d\n", errno);
            break;
        default:
            cbk(data, (size_t)rc);
        }
    } while (rc > 0);

    return NULL;
}

int main()
{
    pthread_t treader;
    int rc;
    char input[128];
    char *p;

    pipe(pfd);
    pthread_create(&treader, NULL, reader, read_cbk);

    do {
        // fgets() inserts the terminating \n and \0 in the buffer
        // On EOF (that is to say CTRL-D), fgets() returns NULL
        p = fgets(input, sizeof(input), stdin);
        if (p != NULL) {
            // Send the terminating \0 to the reader to facilitate printf()
            rc = write(pfd[1], input, strlen(p) + 1);
        }
    } while (p);

    close(pfd[1]);
    pthread_join(treader, NULL);
    close(pfd[0]);
}
Example of execution:
$ gcc t.c -o t -lpthread
$ ./t
azerty is not qwerty
CBK triggered, 22 bytes: azerty is not qwerty
string
CBK triggered, 8 bytes: string
# Here I typed CTRL-D to generate an EOF on stdin
Thread: rc=0
I found the problem. For redirection, the following has to be done:
Create a pipe. This creates two file descriptors: one for reading, and one for writing.
Call dup2() so that the original file descriptor becomes an alias of the write end of the pipe. This increments the use count of the underlying write end by one.
Thus, before synchronizing, I have to restore the context. This means that the following order is correct:
Close write end of pipe
Restore old file descriptor context
Wait for reader thread to exit
Close read end of pipe
With reference to the question, steps 2 and 3 must be reordered to avoid the deadlock.
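In code, the corrected teardown might look roughly like this (a sketch using the pfd and treader names from the program above; orig_stdout is an illustrative name for the saved descriptor):
fflush(stdout);                 /* push any buffered output into the pipe */
close(pfd[1]);                  /* 1. close our copy of the write end */
dup2(orig_stdout, 1);           /* 2. restore stdout; this drops the last
                                      alias of the write end, so read()
                                      in the thread returns 0 */
pthread_join(treader, NULL);    /* 3. only now wait for the reader to exit */
close(pfd[0]);                  /* 4. finally close the read end */
close(orig_stdout);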

Dup2() usage and output redirection

I'm trying to redirect the output of a process to the stdin of another process. This is done with dup2(). My question is: do stdin and stdout go back to their places (0, 1) after the function terminates, or do I have to do something like savestdin = dup(0)? More clearly, after the function for one command terminates, are stdin and stdout back in their usual positions at the second call?
To get your stdout to go to a forked process's stdin, you'll need to use a combination of pipe() and dup2(). I haven't tested this, but it should hopefully give you some ideas:
int my_pipe[2];
pipe(my_pipe); // my_pipe[0] is the read end
               // my_pipe[1] is the write end

// Now we need to replace stdout with the fd of the write end
// and stdin of the target process with the fd of the read end

// First make a copy of stdout
int stdout_copy = dup(1);

// Now replace stdout with the write end of the pipe
dup2(my_pipe[1], 1);

// Make the child process
pid_t pid = fork();
if (pid == 0)
{
    // Replace stdin with the read end of the pipe
    dup2(my_pipe[0], 0);
    execv(....); // run the child program
    exit(1); // error
}

int ret;
waitpid(pid, &ret, 0);

// Restore stdout
dup2(stdout_copy, 1);
close(stdout_copy);
close(my_pipe[0]);
close(my_pipe[1]);
So, to answer your question: when you use dup2() to replace 0 and 1, they will not be restored to the terminal unless you save the original file descriptors with dup() and manually restore them with dup2().
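In short, a hedged sketch of that save-and-restore pattern for both standard descriptors (names illustrative):
int saved_stdin  = dup(0);      /* keep handles to the original fds */
int saved_stdout = dup(1);

/* ... dup2() pipe ends onto fds 0 and 1 and run the command ... */

dup2(saved_stdin, 0);           /* put them back; without this they */
dup2(saved_stdout, 1);          /* stay redirected for later calls  */
close(saved_stdin);
close(saved_stdout);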

Need to close linux FIFO on the read side after each message

Here is what I did, and it works:
"server" side (reader) pseudocode
mkfifo()
while (true){
open()
select(...NULL)
while (select... timeval(0)) {
read()
}
close()
}
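For reference, a hedged C translation of that reader loop (slightly simplified: it reads until read() returns 0 instead of polling with a zero timeout; the path and buffer size are illustrative, error handling omitted):
#include <fcntl.h>
#include <stdio.h>
#include <sys/select.h>
#include <sys/stat.h>
#include <unistd.h>

void run_reader(void)
{
    const char *myfifo = "/tmp/saveterm.fifo";
    char buf[256];

    mkfifo(myfifo, 0666);
    for (;;) {
        int fd = open(myfifo, O_RDONLY);   /* blocks until a writer opens */
        fd_set rfds;
        ssize_t n;

        do {
            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);
            select(fd + 1, &rfds, NULL, NULL, NULL);  /* wait for data or EOF */
            n = read(fd, buf, sizeof(buf));
            if (n > 0)
                printf("got %zd bytes\n", n);
        } while (n > 0);                    /* 0 means the writer closed */

        close(fd);                          /* reopen for the next writer */
    }
}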
"client" side (writer) real C code
int fd;
char * myfifo = "/tmp/saveterm.fifo";
char *msg = "MESSAGE";
fd = open(myfifo, O_WRONLY);
write(fd, msg, strlen(msg));
close(fd);
Now you see I need to open/close the fifo on the server after each event read. Is this supposed to be so? At first I only opened it once before the loop, but after the first message, the 'select' call would never block again until closed and reopened, and was always returning 1 and ISSET for the descriptor.
Yes, it's supposed to be like this.
Named pipes behave the same way as anonymous pipes: they both represent a single stream that is terminated when the last writer closes it. Specifically, the reader is not meant to hang forever just in case some future program decides to open the pipe and continue writing.
If you want packet based communication through a file, how about using a Unix socket in datagram mode instead?
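If you do go the datagram route, the receiving side could look like this minimal, untested sketch (socket path illustrative); each recvfrom() then returns exactly one message, with no need to close and reopen anything between messages. The writer would create its own AF_UNIX/SOCK_DGRAM socket and sendto() the same path:
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_un addr;
    char buf[256];
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/saveterm.sock", sizeof(addr.sun_path) - 1);
    unlink(addr.sun_path);                  /* remove a stale socket file */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    for (;;) {
        /* each datagram arrives as one complete message, no framing needed */
        ssize_t n = recvfrom(fd, buf, sizeof(buf) - 1, 0, NULL, NULL);
        if (n < 0)
            break;
        buf[n] = '\0';
        printf("message: %s\n", buf);
    }
    close(fd);
    return 0;
}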

fork() and buffered IO streams

Buffered IO streams have a strange behavior on fork().
In the sample snippet shown below, the file being read is 252 bytes in size. After the fork(), the child successfully reads a line and prints it on the screen. However, when control goes back to the parent, the file offset is set to the end of the file for some reason and the parent process isn't able to read anything from the stream. If fork() creates a dup of the file descriptors (which works fine when replicating the same program using the system calls read() and write()), one would expect the parent process to read the next line from the stream, but that doesn't seem to happen here. The file offset is set to the end of the file when control reaches the parent. Can someone shed some light on this?
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char buffer[80];
    FILE *file;
    pid_t pid;
    int status;

    /* Open the file: */
    file = fopen(FILENAME, "r");

    if ((pid = fork()) == 0) {
        fgets(buffer, sizeof(buffer), file);
        printf("%s", buffer);
    }
    else {
        waitpid(pid, &status, 0);
        printf("Offset [%ld]\n", ftell(file));
        fgets(buffer, sizeof(buffer), file);
        printf("%s", buffer);
    }
}
The stream the child reads with fgets() is fully buffered because it's reading data from a file. On my system the buffer of a fully buffered stream is 1024 bytes, so a single underlying read() pulls the entire contents of the file (252 bytes) into the stdio buffer. Hence, by the time control gets back to the parent, the shared file offset is already at the end of the file.
Doing an fflush() on the stream in the child process, before it finishes, discards the unread data in the stdio buffer and moves the file offset back to the stream position, so the offset is correct when control reaches the parent.
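A hedged sketch of that fix, applied to the child branch of the snippet above:
if ((pid = fork()) == 0) {
    fgets(buffer, sizeof(buffer), file);
    printf("%s", buffer);
    fflush(file);   /* on a seekable input stream this discards the unread
                       buffered bytes and moves the shared file offset back
                       to the stream position */
}
With the flush in place, the parent's ftell() should report the position just after the first line rather than the end of the file.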

using EOF for signaling on unnamed pipes

I have a test program that uses unnamed pipes created with pipe() to communicate between parent and child processes created with fork() on a Linux system.
Normally, when the sending process closes the write fd of the pipe, the receiving process returns from read() with a value of 0, indicating EOF.
However, it seems that if I stuff the pipe with a fairly large amount of data (maybe 100K bytes) before the receiver starts reading, the receiver blocks after reading all the data in the pipe, even though the sender has closed it.
I have verified that the sending process has closed the pipe with lsof, and it seems pretty clear that the receiver is blocked.
Which leads to the question: is closing one end of the pipe a reliable way to let the receiver know that there is no more data?
If it is, and there are no conditions that can lead to a read() blocking on an empty, closed FIFO, there's something wrong with my code. If not, it means I need to find an alternate method of signalling the end of the data stream.
Resolution
I was pretty sure that the original assumption was correct, i.e. that closing a pipe causes an EOF at the reader side; this question was just a shot in the dark, in case there was some subtle pipe behavior I was overlooking. Nearly every example you ever see with pipes is a toy that sends a few bytes and exits. Things often work differently when you are no longer doing atomic operations.
In any case, I tried to simplify my code to flush out the problem and was successful in finding my problem. In pseudocode, I ended up doing something like this:
create pipe1
if ( !fork() ) {
    close pipe1 write fd
    do some stuff reading pipe1 until EOF
}
create pipe2
if ( !fork() ) {
    close pipe2 write fd
    do some stuff reading pipe2 until EOF
}
close pipe1 read fd
close pipe2 read fd
write data to pipe1
get completion response from child 1
close pipe1 write fd
write data to pipe2
get completion response from child 2
close pipe2 write fd
wait for children to exit
The child process reading pipe1 was hanging, but only when the amount of data in the pipe became substantial. This was occurring even though I had closed the pipe that child1 was reading.
A look at the source shows the problem. When I forked the second child process, it grabbed its own copy of the pipe1 file descriptors, which were left open. Even though only one process should be writing to the pipe, having it open in the second process kept it from going into an EOF state.
The problem didn't show up with small data sets, because child2 was finishing its business quickly, and exiting. But with larger data sets, child2 wasn't returning quickly, and I ended up with a deadlock.
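A hedged sketch of the fix in C, following the pseudocode's structure (names illustrative): each child closes every pipe descriptor it does not use, including the ones it inherited from pipes created before its own fork().
int pipe1[2], pipe2[2];

pipe(pipe1);
if (!fork()) {
    close(pipe1[1]);        /* child 1: close the write end it won't use */
    /* ... read pipe1 until EOF ... */
    _exit(0);
}

pipe(pipe2);
if (!fork()) {
    close(pipe1[0]);        /* child 2 inherited BOTH pipe1 descriptors;  */
    close(pipe1[1]);        /* leaving the write end open is what kept    */
    close(pipe2[1]);        /* pipe1 from ever reaching EOF               */
    /* ... read pipe2 until EOF ... */
    _exit(0);
}

close(pipe1[0]);            /* parent: close the read ends it won't use */
close(pipe2[0]);
/* write to pipe1, then close(pipe1[1]);
   write to pipe2, then close(pipe2[1]);
   then wait for the children */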
read() should return EOF (a return value of 0) once all writers have closed the write end.
Since you do a pipe and then a fork, both processes will have the write fd open. It could be that in the reader process you have forgotten to close the write portion of the pipe.
Caveat: It has been a long time since I programmed on Unix. So it might be inaccurate.
Here is some code from: http://www.cs.uml.edu/~fredm/courses/91.308/files/pipes.html. Look at the "close unused" comments below.
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* The index of the "read" end of the pipe */
#define READ  0
/* The index of the "write" end of the pipe */
#define WRITE 1

char *phrase = "Stuff this in your pipe and smoke it";

int main(void)
{
    int fd[2], bytesRead;
    char message[100];                      /* Parent process message buffer */

    pipe(fd);                               /* Create an unnamed pipe */

    if (fork() == 0) {
        /* Child Writer */
        close(fd[READ]);                    /* Close unused end */
        write(fd[WRITE], phrase, strlen(phrase) + 1);   /* include NULL */
        close(fd[WRITE]);                   /* Close used end */
        printf("Child: Wrote '%s' to pipe!\n", phrase);
    } else {
        /* Parent Reader */
        close(fd[WRITE]);                   /* Close unused end */
        bytesRead = read(fd[READ], message, 100);
        printf("Parent: Read %d bytes from pipe: %s\n", bytesRead, message);
        close(fd[READ]);                    /* Close used end */
    }
    return 0;
}
