Buffered I/O streams behave strangely across fork().
In the snippet below, the file being read is 252 bytes long. After the fork(), the child successfully reads a line and prints it. However, when control returns to the parent, the file offset is at end-of-file and the parent cannot read anything from the stream. Since fork() duplicates the file descriptors (and an equivalent program using the read() and write() system calls works as expected), one would expect the parent to read the next line from the stream, but that does not happen here: the offset is already at end-of-file by the time the parent resumes. Can someone shed some light on this?
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define FILENAME "file.txt"

int main(void)
{
    char buffer[80];
    FILE *file;
    pid_t pid;
    int status;

    /* Open the file: */
    file = fopen(FILENAME, "r");
    if ((pid = fork()) == 0) {
        fgets(buffer, sizeof(buffer), file);
        printf("%s", buffer);
    } else {
        waitpid(pid, &status, 0);
        printf("Offset [%ld]\n", ftell(file));
        fgets(buffer, sizeof(buffer), file);
        printf("%s", buffer);
    }
}
The stream in the child is fully buffered because it reads from a regular file. On my system a fully buffered stream uses a 1024-byte buffer, so a single underlying read() pulls the entire 252-byte file into the stdio buffer, which moves the shared file offset to end-of-file. That is what the parent sees when control comes back to it.
Calling fflush() on the stream in the child before it exits discards the child's buffered input and, per POSIX, sets the underlying file offset back to the stream's position, so the offset is correct when control reaches the parent.
I want to trigger a callback when data is written to a file descriptor. For this I have set up a pipe and a reader thread, which reads from the pipe; when it has data, the callback is called with the data.
The problem is that the reader is stuck in the read() syscall. The destruction order is as follows:
Close the write end of the pipe (I expected this to trigger a return from the blocking read(), but apparently it doesn't)
Wait for the reader thread to exit
Restore the old file descriptor context (if stdout was redirected to the pipe, it no longer is)
Close the read end of the pipe
When the write end of the pipe is closed, a blocking read() on the read end returns 0.
Here is an example program that creates a reader thread on a pipe. The main program gets data from stdin with fgets() and writes it into the pipe. On the other side, the thread reads the pipe and triggers the callback passed as a parameter. The thread stops when read() on the pipe returns 0 (meaning the main thread closed the write side):
#include <stdio.h>
#include <errno.h>
#include <pthread.h>
#include <unistd.h>
#include <string.h>

int pfd[2];

void read_cbk(char *data, size_t size)
{
    printf("CBK triggered, %zu bytes: %s", size, data);
}

void *reader(void *p)
{
    char data[128];
    void (*cbk)(char *data, size_t size) = (void (*)(char *, size_t))p;
    int rc;

    do {
        rc = read(pfd[0], data, sizeof(data));
        switch (rc) {
        case 0:
            fprintf(stderr, "Thread: rc=0\n");
            break;
        case -1:
            fprintf(stderr, "Thread: rc=-1, errno=%d\n", errno);
            break;
        default:
            cbk(data, (size_t)rc);
        }
    } while (rc > 0);
    return NULL;
}
int main(void)
{
    pthread_t treader;
    int rc;
    char input[128];
    char *p;

    pipe(pfd);
    pthread_create(&treader, NULL, reader, read_cbk);
    do {
        // fgets() stores the terminating \n and \0 in the buffer
        // On EOF (that is to say, CTRL-D), fgets() returns NULL
        p = fgets(input, sizeof(input), stdin);
        if (p != NULL) {
            // Send the terminating \0 to the reader to facilitate printf()
            rc = write(pfd[1], input, strlen(p) + 1);
        }
    } while (p);
    close(pfd[1]);
    pthread_join(treader, NULL);
    close(pfd[0]);
    return 0;
}
Example of execution:
$ gcc t.c -o t -lpthread
$ ./t
azerty is not qwerty
CBK triggered, 22 bytes: azerty is not qwerty
string
CBK triggered, 8 bytes: string
# Here I typed CTRL-D to generate an EOF on stdin
Thread: rc=0
I found the problem. For the redirection, the following has to be done:
Create a pipe. This yields two file descriptors: one for reading and one for writing.
dup2() so the original file descriptor becomes an alias of the write end of the pipe. This raises the number of descriptors referring to the write end to two.
Therefore, before synchronizing, I have to restore the context. The correct order is:
Close the write end of the pipe
Restore the old file descriptor context
Wait for the reader thread to exit
Close the read end of the pipe
Compared with the order in the question, steps 2 and 3 must be swapped to avoid the deadlock: as long as the redirected descriptor still refers to the write end, the reader never sees EOF.
About this code:
FILE *fp;
pid_t parentId = getpid();
fp = fopen("file.txt", "r+");
fork();
fputc('a', fp);
fork();
fputc('b', fp);
if (getpid() == parentId) {
    while (wait(NULL) != -1); // wait for all child processes to terminate
    fclose(fp); // close the file handle
}
I want to understand how many reads/writes happen here.
We have 1 access to read the inode itself.
Six characters are written to the file by the 4 processes along the way, but how many reads and writes are there in this case? Do we have to read every character and then write it to the file, and then read and write the inode metadata?
Thanks a lot!
I'm trying to redirect the output of a process to the stdin of another process. This is done with dup2(). My question is: do stdin and stdout go back to their original descriptors (0 and 1) after the function terminates, or do I have to do something like saved_stdin = dup(0)? More precisely: after the function for one command terminates, are stdin and stdout back in their expected places for the second call?
To get your stdout to go to a forked process's stdin, you'll need a combination of pipe() and dup2(). I haven't tested it, but it should hopefully give you some ideas:
int my_pipe[2];
pipe(my_pipe); // my_pipe[0] is the read end
               // my_pipe[1] is the write end

// Now we need to replace stdout with the fd of the write end,
// and stdin of the target process with the fd of the read end.

// First make a copy of stdout
int stdout_copy = dup(1);

// Now replace stdout
dup2(my_pipe[1], 1);

// Make the child process
pid_t pid = fork();
if (pid == 0)
{
    // Replace stdin
    dup2(my_pipe[0], 0);
    execv(....); // run the child program
    exit(1); // error
}

int ret;
waitpid(pid, &ret, 0);

// Restore stdout
dup2(stdout_copy, 1);
close(stdout_copy);
close(my_pipe[0]);
close(my_pipe[1]);
So to answer your question: when you use dup2() to replace descriptors 0 and 1, they will not be restored to the terminal unless you save the original file descriptors with dup() and manually restore them with dup2().
I'm using Linux and trying to make three processes communicate through a pipe and a file. It should work with any file fed to stdin.
The pipe part works just fine, but the second process fails to write single characters to the file properly, and the third fails to read them.
Before this code I of course initialize the semlock and semunlock functions, and the pipe is opened there as well. I'd appreciate any help, because I have no clue.
if (!(PID[1] = fork ())) {
int BUF_SIZE = 4096;
char d[BUF_SIZE];
while (fgets (d, BUF_SIZE, stdin) != NULL) {
write (mypipe[1], &d, BUF_SIZE);
}
}
if (!(PID[2] = fork ())) {
int reading_size = 0;
char r;
close (mypipe[1]);
semlock (semid1);
while (reading_size = read (mypipe[0], &r, 1)) {
if ((file = fopen ("proces2.txt", "w")) == NULL) {
warn ("error !!!");
exit (1);
}
fputc (r, file);
fclose (file);
semunlock (semid2);
}
}
if (!(PID[3] = fork ())) {
char x;
semlock (semid2);
do {
if ((plikProces3 = fopen ("proces2.txt", "r")) == NULL) {
warn ("Blad przy otwarciu pliku do odczytu !!!");
exit (1);
}
i = getc (plikProces3);
o = fprintf (stdout, "%c", i);
fclose (plikProces3);
semunlock (semid1);
} while (i != EOF);
}
What makes you think the child runs first? You haven't waited for the writing process to finish, so the reader can hit EOF on the file before the previous child has written it. Shouldn't the last fork() be preceded by a wait, so you know the file was written? As it stands you have 4 processes, not 3!
Then, you are closing mypipe[1] in the 2nd child process, which, since it is a forked copy, does not close the pipe in the first child. You are also writing BUF_SIZE characters, i.e. more characters than were actually read; try write(mypipe[1], d, strlen(d)); instead.
It looks very odd to have the fopen() and fclose() inside the character read/write loop. Do you really want to re-open and re-write one character into the file over and over?
Similarly, the proces2.txt file is re-opened on every iteration, so if it's non-empty its first character would be read again and again.
There are bound to be other bugs, but that should help you for now.
I'm trying to display the output of a method in a QTextWidget. The method prints JSON on stdout, and it's part of 3rd-party code that I can't change.
It looks like this:
int iperf_json_finish(struct iperf_test *test)
{
char *str;
str = cJSON_Print(test->json_top);
if (str == NULL)
return -1;
fputs(str, stdout);
putchar('\n');
fflush(stdout);
free(str);
cJSON_Delete(test->json_top);
test->json_top = test->json_start = test->json_intervals = test->json_end = NULL;
return 0;
}
So, how can I do that? I'm using Qt 4.8 and can't use things like QMessageLogContext.
How to temporarily capture stdout of my own process on Linux?
The trick is to replace the stdout file descriptor with the write end of a pipe of your own, intercept the output through the pipe, and then put the original stdout back.
The general scheme on *NIX platforms is:
Save the original file descriptor of stdout: int orig_stdout = dup(1);
Prepare a pipe: pipe(cap_stdout);
Do not forget to flush the stdio buffers: fflush(stdout);
Replace stdout with the write end of the pipe: dup2(cap_stdout[1], 1);
Start an extra thread reading from the cap_stdout[0] file descriptor. This is where the intercepted data is captured.
Call the 3rd-party library function.
Flush the stdio buffers again: fflush(stdout);
Signal the reading thread to terminate.
Restore stdout: dup2(orig_stdout, 1);
Clean up: close(cap_stdout[0]); close(cap_stdout[1]); close(orig_stdout);
The data you wanted is whatever the reading thread has read.
Note that the extra thread is needed because a pipe has only limited buffer space; without the reader thread, writes to the (piped) stdout would block, leading to a deadlock inside the application.