I just read some Go code that does something along the following lines:
type someType struct {
    ...
    ...
    rpipe io.ReadCloser
    wpipe io.WriteCloser
}
var inst someType
inst.rpipe, inst.wpipe, _ = os.Pipe()
cmd := exec.Command("some_binary", args...)
cmd.Stdout = inst.wpipe
cmd.Stderr = inst.wpipe
if err := cmd.Start(); err != nil {
....
}
inst.wpipe.Close()
inst.wpipe = nil
some_binary is a long-running process.
Why is inst.wpipe closed and set to nil? What would happen if it's not closed? Is it common/necessary to close inst.wpipe?
Is dup2(pipe_fd[1], 1) the C analogue of cmd.Stdout = inst.wpipe; inst.wpipe.Close()?
That code is typical of a program that wants to read output generated by some other program. The os.Pipe() function returns a connected pair of os.File values (or fails with an error, which the code above silently discards and should not), where a write on the second (w or wpipe) entity shows up as readable bytes on the first (r / rpipe) entity. But here is the key to half the answer to your first question: how will a reader know when all writers are finished writing?
For a reader to get an EOF indication, all writers that have or had access to the write side of the pipe must call the close operation. By passing the write side of the pipe to a program that we start with cmd.Start(), we allow that command to access the write side of the pipe. When that command closes that pipe, one of the entities with access has closed the pipe. But another entity with access hasn't closed it yet: we have write access.
To see an EOF, then, we must close off our own access to wpipe, with wpipe.Close(). So that answers the first half of:
Why is inst.wpipe closed and set to nil?
The set-to-nil part may or may not have any function; you should inspect the rest of the code to find out if it does.
Is dup2(pipe_fd[1], 1) the C analogue of cmd.Stdout = inst.wpipe; inst.wpipe.Close()?
Not precisely. The dup2 level is down in the POSIX OS area, while cmd.Stdout is at a higher (OS-independent) level. The POSIX implementation of cmd.Start() will wind up calling dup2 (or something equivalent) after calling fork (or something equivalent), much as in the C code below. The POSIX equivalent of inst.wpipe.Close() is close(wfd), where wfd is the POSIX file number in wpipe.
In C code that doesn't have any higher level wrapping around it, we'd have something like:
int fds[2];
if (pipe(fds) < 0) ... handle error case ...
pid = fork();
switch (pid) {
case -1: ... handle error ...
case 0: /* child */
    if (dup2(fds[1], 1) < 0 || dup2(fds[1], 2) < 0) ... handle error ...
    if (execve(prog, args, env) < 0) ... handle error ...
    /* NOTREACHED */
default: /* parent */
    if (close(fds[1]) < 0) ... handle error ...
    ... read from fds[0] ...
}
(although if we're careful enough to check for an error from close, we probably should also be careful enough to check whether the pipe system call gave us back descriptors 0 and 1, or 1 and 2, or 2 and 3, here; though perhaps we handle this earlier by making sure that descriptors 0, 1, and 2 are at least open to /dev/null).
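To fill in the elided "read from fds[0]" part: the parent typically loops until read() reports EOF and then reaps the child. Here is a hedged sketch continuing the fragment above (the buffer size and error handling are illustrative, and it assumes <stdio.h> and <sys/wait.h> are also included):
char buf[4096];
ssize_t n;
int status;

while ((n = read(fds[0], buf, sizeof buf)) > 0) {
    if (write(1, buf, n) < 0)          /* e.g. copy the child's output to stdout */
        break;
}
if (n < 0)
    perror("read");
close(fds[0]);                          /* done with the read side too */
if (waitpid(pid, &status, 0) < 0)       /* reap the child to avoid a zombie */
    perror("waitpid");
Only after our own copy of fds[1] has been closed (as in the parent branch above) will that read loop ever see EOF.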
Here is what I did, and it works:
"server" side (reader) pseudocode
mkfifo()
while (true) {
    open()
    select(...NULL)
    while (select... timeval(0)) {
        read()
    }
    close()
}
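In C, that reader loop looks roughly like the sketch below (simplified: no select, just a blocking read; the FIFO path and buffer size are placeholders):
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *myfifo = "/tmp/saveterm.fifo";    /* placeholder path */
    char buf[256];
    ssize_t n;

    mkfifo(myfifo, 0666);                         /* ignore EEXIST for brevity */
    for (;;) {
        int fd = open(myfifo, O_RDONLY);          /* blocks until a writer opens */
        if (fd < 0)
            return 1;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, n, stdout);            /* handle one "event" */
        fflush(stdout);
        close(fd);                                /* EOF: the last writer went away */
    }
}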
"client" side (writer) real C code
int fd;
char * myfifo = "/tmp/saveterm.fifo";
char *msg = "MESSAGE";
fd = open(myfifo, O_WRONLY);
write(fd, msg, strlen(msg));
close(fd);
Now you see I need to open/close the fifo on the server after each event read. Is this supposed to be so? At first I only opened it once before the loop, but after the first message, the 'select' call would never block again until closed and reopened, and was always returning 1 and ISSET for the descriptor.
Yes, it's supposed to be like this.
Named pipes behave the same way as anonymous pipes: they both represent a single stream that is terminated when the last writer closes it. Specifically, the reader is not meant to hang forever just in case some future program decides to open the pipe and continue writing.
If you want packet-based communication through a file, how about using a Unix socket in datagram mode instead?
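As a hedged sketch of that alternative (the socket path is illustrative), the receiving side could look like this; every sendto() from a writer arrives as one discrete message, so nothing has to be closed and reopened between events:
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_un addr;
    char buf[256];
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);

    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/saveterm.sock", sizeof addr.sun_path - 1);
    unlink(addr.sun_path);                        /* remove any stale socket file */
    bind(fd, (struct sockaddr *)&addr, sizeof addr);

    for (;;) {
        ssize_t n = recvfrom(fd, buf, sizeof buf, 0, NULL, NULL);
        if (n > 0)
            printf("got %zd-byte message: %.*s\n", n, (int)n, buf);
    }
}
The writer side would create its own SOCK_DGRAM socket and call sendto() with the same address instead of open()/write()/close() on a FIFO.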
I have a char device driver for a virtual device. I want a FIFO in the device driver so that two processes using the device driver can transfer characters between them. I tried kfifo but I am new to this and find it difficult to use. Can anybody please suggest some other way to implement the FIFO in a Linux driver?
If you are only going to allow two processes to use the driver, then you can do as this:
In your open handler, make sure that only two processes (one reader and one writer) can have the device open at a time:
If access mode = READ and not alreadyreading then
    alreadyreading = 1
else
    return -EBUSY

If access mode = WRITE and not alreadywriting then
    alreadywriting = 1
else
    return -EBUSY
In the same handler, initialize your FIFO, which could be just a single global character variable, and two wait queues: one for read, and one for write. Associated with these queues will be two variables: ready_to_read and ready_to_write. At the beginning, ready_to_read = 0 and ready_to_write = 1.
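As a hedged illustration (the names are mine, matching the description above, not taken from any particular driver), those globals and wait queues could be declared like this:
#include <linux/wait.h>

static int alreadyreading, alreadywriting;   /* enforce one reader, one writer */
static char fifo_char;                       /* the one-character "FIFO" */
static int ready_to_read  = 0;               /* nothing to read yet */
static int ready_to_write = 1;               /* the writer may go first */
static DECLARE_WAIT_QUEUE_HEAD(read_queue);
static DECLARE_WAIT_QUEUE_HEAD(write_queue);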
Then, in the release handler:
If access mode = READ
    alreadyreading = 0
If access mode = WRITE
    alreadywriting = 0
so that a new process can open the device in read or write mode again.
In the write handler:
If access mode = READ then        // we only support writing if the access mode is write
    return -EINVAL
Else
    res = wait_event_interruptible (write_queue, ready_to_write);
    if (res)
        return res;               // if the process received a signal, exit the write
    Take a single character from user space (copy_from_user())
    Copy it to the FIFO (the global character variable)
    ready_to_write = 0;           // no more writes until a read is performed
    ready_to_read = 1;            // ready to read! wake up the reading process
    wake_up_interruptible (&read_queue);
    return 1;                     // 1 byte written
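In actual kernel C, that write handler might look roughly like the sketch below, using the illustrative globals declared earlier (the function name and error handling are mine; only the scheme is from the description above):
#include <linux/fs.h>
#include <linux/uaccess.h>
#include <linux/wait.h>

static ssize_t mydriver_write(struct file *filp, const char __user *buf,
                              size_t count, loff_t *ppos)
{
    int res = wait_event_interruptible(write_queue, ready_to_write);
    if (res)
        return res;                       /* interrupted by a signal */
    if (copy_from_user(&fifo_char, buf, 1))
        return -EFAULT;                   /* bad user-space pointer */
    ready_to_write = 0;                   /* no more writes until a read */
    ready_to_read = 1;                    /* data is now available */
    wake_up_interruptible(&read_queue);   /* wake the reading process */
    return 1;                             /* we consumed exactly 1 byte */
}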
And finally, in the read handler:
If access mode = WRITE then       // we only support reading if the access mode is read
    return -EINVAL
Else
    res = wait_event_interruptible (read_queue, ready_to_read);
    if (res)
        return res;               // if the process received a signal, exit the read
    Take the character from the global variable (our FIFO) and send it to user space (copy_to_user())
    ready_to_read = 0;            // no more reads until a write is performed
    ready_to_write = 1;           // ready to write! wake up the writing process
    wake_up_interruptible (&write_queue);
    return 1;                     // 1 byte read
You can extend this example to allow a FIFO of more than one character: you would need an array of chars and two indexes, one to know where to read from and one to know where to write to.
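As a hedged sketch of that extension (the size and names are illustrative), the bookkeeping is just an array plus a read index, a write index, and a count, with both indexes advancing modulo the buffer size:
#define FIFO_SIZE 64                      /* illustrative capacity */

static char fifo_buf[FIFO_SIZE];
static int  fifo_rd, fifo_wr, fifo_count; /* read index, write index, bytes held */

/* Called by the write handler once there is room (fifo_count < FIFO_SIZE). */
static void fifo_put(char c)
{
    fifo_buf[fifo_wr] = c;
    fifo_wr = (fifo_wr + 1) % FIFO_SIZE;
    fifo_count++;
}

/* Called by the read handler once there is data (fifo_count > 0). */
static char fifo_get(void)
{
    char c = fifo_buf[fifo_rd];
    fifo_rd = (fifo_rd + 1) % FIFO_SIZE;
    fifo_count--;
    return c;
}
The wake-up conditions then become "fifo_count > 0" for readers and "fifo_count < FIFO_SIZE" for writers instead of the ready_to_read/ready_to_write flags.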
To test your driver, you can open two xterms and do
cat /dev/mydriver
in one, and:
cat > /dev/mydriver
in the other one. Then, every line you write in the second xterm will be shown in the first one.
You can even modify the driver so that when the writing process closes the file, a flag is set; the next time the reading process waits for something to read, it detects that the write process has ended and returns 0 as well (to signal an EOF to the user). That way, when you press Ctrl-D in the second xterm to end input, the first one ends automatically too. Something like:
(read handler)
res = wait_event_interruptible (read_queue, ready_to_read || write_process_ended);
if (res)
    return res;          // -ERESTARTSYS if a signal was received
if (write_process_ended)
{
    ready_to_write = 1;
    return 0;            // the write process ended, so send an EOF to the user
}
else
{
    ...
    ... get byte from FIFO, send to the user, etc.
    ...
    return number_of_bytes_sent_to_user;
}
I've created a semaphore (semget, then initialized it to 1) in process A. Forked A, and got B.
Forked B and got C (the code of processes B and C is in another .c file, so I am passing the semid as a global integer with extern int semid).
In the process C code, I try to apply down(semid) and get an "invalid argument" error.
What I am doing in the code for down function is this:
struct sembuf sem_d;
sem_d.sem_num = 0;
sem_d.sem_op = -1;
sem_d.sem_flg = 0;
if ( semop(semid, &sem_d, 1) == -1 )
{
    perror("error with down function");
    return -1;
}
What am I doing wrong?
I have also verified that the semid seen just before semop is the same value as when the semaphore was initialized.
Also, in processes A and B I am using wait(-1).
I'm not sure if you are allowed to use semget() over forks - it's a different process space after all.
semget() is part of the old System V semaphores anyway.
I would recommend switching over to POSIX semaphores - sem_open(), sem_wait() and friends - and using named semaphores. Then open the same semaphore name in each process.
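A hedged sketch of that approach (the semaphore name is illustrative): create a named semaphore with an initial value of 1, fork, and let any of the processes do sem_wait()/sem_post() on it.
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Initial value 1, like semget + init to 1 in the question. */
    sem_t *sem = sem_open("/mysem", O_CREAT, 0644, 1);
    if (sem == SEM_FAILED) { perror("sem_open"); return 1; }

    if (fork() == 0) {                 /* child (or grandchild, after another fork) */
        sem_wait(sem);                 /* the "down" operation */
        printf("child holds the semaphore\n");
        sem_post(sem);                 /* the "up" operation */
        return 0;
    }

    wait(NULL);
    sem_close(sem);
    sem_unlink("/mysem");              /* remove the name when finished */
    return 0;
}
The name must begin with a slash, and on Linux you may need to link with -pthread.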
I have a test program that uses unnamed pipes created with pipe() to communicate between parent and child processes created with fork() on a Linux system.
Normally, when the sending process closes the write fd of the pipe, the receiving process returns from read() with a value of 0, indicating EOF.
However, it seems that if I stuff the pipe with a fairly large amount of data (maybe 100K bytes) before the receiver starts reading, the receiver blocks after reading all the data in the pipe - even though the sender has closed it.
I have verified that the sending process has closed the pipe with lsof, and it seems pretty clear that the receiver is blocked.
Which leads to the question: is closing one end of the pipe a reliable way to let the receiver know that there is no more data?
If it is, and there are no conditions that can lead to a read() blocking on an empty, closed FIFO, there's something wrong with my code. If not, it means I need to find an alternate method of signalling the end of the data stream.
Resolution
I was pretty sure that the original assumption was correct (closing a pipe causes an EOF at the reader side); this question was just a shot in the dark, thinking maybe there was some subtle pipe behavior I was overlooking. Nearly every example you ever see with pipes is a toy that sends a few bytes and exits. Things often work differently when you are no longer doing atomic operations.
In any case, I tried to simplify my code to flush out the problem, and I succeeded in finding it. In pseudocode, I ended up doing something like this:
create pipe1
if ( !fork() ) {
    close pipe1 write fd
    do some stuff reading pipe1 until EOF
}

create pipe2
if ( !fork() ) {
    close pipe2 write fd
    do some stuff reading pipe2 until EOF
}

close pipe1 read fd
close pipe2 read fd
write data to pipe1
get completion response from child 1
close pipe1 write fd
write data to pipe2
get completion response from child 2
close pipe2 write fd
wait for children to exit
The child process reading pipe1 was hanging, but only when the amount of data in the pipe became substantial. This was occurring even though I had closed the pipe that child1 was reading.
A look at the source shows the problem. When I forked the second child process, it grabbed its own copy of the pipe1 file descriptors, which were left open. Even though only one process should be writing to the pipe, having it open in the second process kept it from going into an EOF state.
The problem didn't show up with small data sets, because child2 was finishing its business quickly, and exiting. But with larger data sets, child2 wasn't returning quickly, and I ended up with a deadlock.
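In other words, the fix is for every child to close the descriptors of any pipe it does not use. A rough sketch of the corrected shape (names are illustrative):
int pipe1[2], pipe2[2];

pipe(pipe1);
if (fork() == 0) {                /* child 1 */
    close(pipe1[1]);              /* not a writer on pipe1 */
    /* ... read pipe1[0] until EOF ... */
    _exit(0);
}

pipe(pipe2);
if (fork() == 0) {                /* child 2 */
    close(pipe2[1]);              /* not a writer on pipe2 */
    close(pipe1[0]);              /* child 2 has no business with pipe1 at all */
    close(pipe1[1]);              /* this inherited write end is what kept child 1 from seeing EOF */
    /* ... read pipe2[0] until EOF ... */
    _exit(0);
}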
read() should return EOF (a count of 0) once all writers have closed the write end.
Since you do a pipe and then a fork, both processes will have the write fd open. It could be that in the reader process you have forgotten to close the write portion of the pipe.
Caveat: It has been a long time since I programmed on Unix. So it might be inaccurate.
Here is some code from: http://www.cs.uml.edu/~fredm/courses/91.308/files/pipes.html. Look at the "close unused" comments below.
#include <stdio.h>
#include <string.h>     /* strlen */
#include <unistd.h>     /* pipe, fork, read, write, close */

/* The index of the "read" end of the pipe */
#define READ  0
/* The index of the "write" end of the pipe */
#define WRITE 1

char *phrase = "Stuff this in your pipe and smoke it";

int main () {
    int fd[2], bytesRead;
    char message[100];                  /* Parent process message buffer */

    pipe(fd);                           /* Create an unnamed pipe */
    if (fork() == 0) {
        /* Child Writer */
        close(fd[READ]);                /* Close unused end */
        write(fd[WRITE], phrase, strlen(phrase) + 1);   /* include NUL */
        close(fd[WRITE]);               /* Close used end */
        printf("Child: Wrote '%s' to pipe!\n", phrase);
    } else {
        /* Parent Reader */
        close(fd[WRITE]);               /* Close unused end */
        bytesRead = read(fd[READ], message, 100);
        printf("Parent: Read %d bytes from pipe: %s\n", bytesRead, message);
        close(fd[READ]);                /* Close used end */
    }
    return 0;
}
I have a small app written in C designed to run on Linux. Part of the app accepts user-input from the keyboard, and it uses non-canonical terminal mode so that it can respond to each keystroke.
The section of code that accepts input is a simple function which is called repeatedly in a loop:
char get_input()
{
    char c = 0;
    int res = read(input_terminal, &c, 1);
    if (res == 0) return 0;
    if (res == -1) { /* snip error handling */ }
    return c;
}
This reads a single character from the terminal. If no input is received within a certain timeframe, (specified by the c_cc[VTIME] value in the termios struct), read() returns 0, and get_input() is called again.
This all works great, except I recently discovered that if you run this app in a terminal window and then close the terminal window without terminating the app, the app does not exit but launches into a CPU-intensive infinite loop in which read() continuously returns 0 without waiting.
So how can I have the app exit gracefully if it is run from a terminal window and the terminal window is then closed? The problem is that read() never returns -1, so the error condition is indistinguishable from the normal case where read() returns 0. The only solution I see is to put in a timer and assume there is an error condition if read() returns 0 faster than the time specified in c_cc[VTIME]. But that solution seems hacky at best, and I was hoping there is some better way to handle this situation.
Any ideas or suggestions?
Are you catching signals and resetting things before your program exits? I think SIGHUP is the one you need to focus on. Possibly set a flag in the signal handler; if the flag is set when you return from read(), clean up and exit.
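A hedged sketch of that idea (the names are mine): install a SIGHUP handler that only sets a flag, and check the flag whenever get_input() comes back empty.
#include <signal.h>
#include <string.h>

static volatile sig_atomic_t got_sighup;

static void on_sighup(int sig)
{
    (void)sig;
    got_sighup = 1;                 /* the controlling terminal went away */
}

void install_sighup_handler(void)   /* call this once during start-up */
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sighup;
    sigaction(SIGHUP, &sa, NULL);
}
Then, in the main loop, whenever get_input() returns 0, test got_sighup; if it is set, restore the terminal settings and exit instead of looping again.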
You should handle the timeout with select rather than with the terminal settings. If the terminal is configured without a timeout, then read will never return 0 except on EOF.
Select gives you the timeout, and read gives you the 0 on close.
rc = select(...);
if (rc > 0) {
    char c = 0;
    int res = read(input_terminal, &c, 1);
    if (res == 0) { /* EOF detected, close your app? */ }
    if (res == -1) { /* snip error handling */ }
    return c;
} else if (rc == 0) {
    /* timeout */
    return 0;
} else {
    /* handle select error */
}
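For reference, here is a hedged sketch of what the elided select() call could look like with an explicit timeout (the one-second value is illustrative; it needs <sys/select.h>):
fd_set readfds;
struct timeval tv;
int rc;

FD_ZERO(&readfds);
FD_SET(input_terminal, &readfds);
tv.tv_sec = 1;                     /* wait at most one second */
tv.tv_usec = 0;
rc = select(input_terminal + 1, &readfds, NULL, NULL, &tv);
Note that on Linux select() may modify the timeval, so reinitialize readfds and tv before every call.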
read() returns 0 on EOF, i.e. it successfully reads nothing.
Your function will return 0 in that case!
What you should do is compare the value returned from read() with 1 and handle the exceptional cases.
That is, you asked for one byte; did you actually get one?
You will probably also want to handle errno == EINTR when -1 is returned.
char get_input()
{
    char c = 0;
    int res = read(input_terminal, &c, 1);
    switch (res) {
    case 1:
        return c;    /* got a byte */
    case 0:
        /* EOF: the terminal has gone away */
        break;
    case -1:
        /* error: inspect errno (e.g. EINTR) */
        break;
    }
    return 0;
}