I have a small app written in C designed to run on Linux. Part of the app accepts user input from the keyboard, and it uses non-canonical terminal mode so that it can respond to each keystroke.
The section of code that accepts input is a simple function which is called repeatedly in a loop:
char get_input()
{
char c = 0;
int res = read(input_terminal, &c, 1);
if (res == 0) return 0;
if (res == -1) { /* snip error handling */ }
return c;
}
This reads a single character from the terminal. If no input is received within a certain timeframe (specified by the c_cc[VTIME] value in the termios struct), read() returns 0, and get_input() is called again.
This all works great, except I recently discovered that if you run this app in a terminal window and then close the terminal window without terminating the app, the app does not exit but launches into a CPU-intensive infinite loop in which read() continuously returns 0 without waiting.
So how can I have the app exit gracefully if it is run from a terminal window and the terminal window is then closed? The problem is that read() never returns -1, so the error condition is indistinguishable from the normal case where read() returns 0. The only solution I can see is to add a timer and assume there is an error condition if read() returns 0 faster than the time specified in c_cc[VTIME]. But that solution seems hacky at best, and I was hoping there is a better way to handle this situation.
Any ideas or suggestions?
Are you catching signals and resetting things before your program exits? I think SIGHUP is the one you need to focus on. Possibly set a switch in the signal handler; if the switch is on when you return from read(), clean up and exit.
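Something along these lines, as a minimal sketch (the handler and function names here are made up, not from your app):
#include <signal.h>
#include <string.h>

static volatile sig_atomic_t got_sighup = 0;

static void on_sighup(int sig)
{
    (void)sig;
    got_sighup = 1;                 /* only set the switch; clean up outside the handler */
}

void install_sighup_handler(void)
{
    struct sigaction sa;

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sighup;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGHUP, &sa, NULL);   /* no SA_RESTART, so a blocked read() is interrupted */
}
Then, in the loop that calls get_input(), check got_sighup after each call; if it is set, restore the terminal settings and exit.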
You should handle timeout with select rather than with terminal settings. If the terminal is configured without timeout, then it will never return 0 on a read except on EOF.
Select gives you the timeout, and read gives you the 0 on close.
int rc;
fd_set fds;
struct timeval tv;

FD_ZERO(&fds);
FD_SET(input_terminal, &fds);
tv.tv_sec = 1;      /* the timeout previously expressed via c_cc[VTIME] */
tv.tv_usec = 0;

rc = select(input_terminal + 1, &fds, NULL, NULL, &tv);
if (rc > 0) {
    char c = 0;
    int res = read(input_terminal, &c, 1);
    if (res == 0) { /* EOF detected, close your app? */ }
    if (res == -1) { /* snip error handling */ }
    return c;
} else if (rc == 0) {
    /* timeout */
    return 0;
} else {
    /* handle select error (e.g. retry on EINTR) */
    return 0;
}
read() returns 0 on EOF, i.e. it successfully reads nothing.
Your function will return 0 in that case too!
What you should do is compare the value returned from read() with 1 and treat anything else as an exception.
In other words: you asked for one byte, but did you get one?
You will probably also want to handle errno == EINTR when -1 is returned.
char get_input()
{
    char c = 0;
    int res = read(input_terminal, &c, 1);
    switch (res) {
    case 1:
        return c;                /* got the one byte we asked for */
    case 0:
        /* EOF: the terminal has gone away; clean up and exit */
        break;
    case -1:
        /* error: check errno, e.g. retry on EINTR */
        break;
    }
    return 0;
}
I just read some Go code that does something along the following lines:
type someType struct {
...
...
rpipe io.ReadCloser
wpipe io.WriteCloser
}
var inst someType
inst.rpipe, inst.wpipe, _ = os.Pipe()
cmd := exec.Command("some_binary", args...)
cmd.Stdout = inst.wpipe
cmd.Stderr = inst.wpipe
if err := cmd.Start(); err != nil {
....
}
inst.wpipe.Close()
inst.wpipe = nil
some_binary is a long-running process.
Why is inst.wpipe closed and set to nil? What would happen if it's not closed? Is it common/necessary to close inst.wpipe?
Is dup2(pipe_fd[1], 1) the C analogue of cmd.Stdout = inst.wpipe; inst.wpipe.Close()?
That code is typical of a program that wants to read output generated by some other program. The os.Pipe() function returns a connected pair of os.File entities (or, on error—which should not be simply ignored—doesn't) where a write on the second (w or wpipe) entity shows up as readable bytes on the first (r / rpipe) entity. But—this is the key to half the answer to your first question—how will a reader know when all writers are finished writing?
For a reader to get an EOF indication, all writers that have or had access to the write side of the pipe must call the close operation. By passing the write side of the pipe to a program that we start with cmd.Start(), we allow that command to access the write side of the pipe. When that command closes that pipe, one of the entities with access has closed the pipe. But another entity with access hasn't closed it yet: we have write access.
To see an EOF, then, we must close off access to our wpipe, with wpipe.Close(). So that answers the first half of:
Why is inst.wpipe closed and set to nil?
The set-to-nil part may or may not have any function; you should inspect the rest of the code to find out if it does.
Is dup2(pipe_fd[1], 1) the C analogue of cmd.Stdout = inst.wpipe; inst.wpipe.Close()?
Not precisely. The dup2 level is down in the POSIX OS area, while cmd.Stdout is at a higher (OS-independent) level. The POSIX implementation of cmd.Start() will wind up calling dup2 (or something equivalent) like this after calling fork (or something equivalent). The POSIX equivalent of inst.wpipe.Close() is close(wfd), where wfd is the POSIX file number in wpipe.
In C code that doesn't have any higher level wrapping around it, we'd have something like:
int fds[2];
if (pipe(fds) < 0) ... handle error case ...
pid = fork();
switch (pid) {
case -1: ... handle error ...
case 0: /* child */
if (dup2(fds[1], 1) < 0 || dup2(fds[1], 2) < 0) ... handle error ...
if (execve(prog, args, env) < 0) ... handle error ...
/* NOTREACHED */
default: /* parent */
if (close(fds[1]) < 0) ... handle error ...
... read from fds[0] ...
}
(although if we're careful enough to check for an error from close, we probably should be careful enough to check whether the pipe system call gave us back descriptors 0 and 1, or 1 and 2, or 2 and 3, here—though perhaps we handle this earlier by making sure that 0, 1, and 2 are at least open to /dev/null).
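A small sketch of that early precaution (this helper is mine, not part of the code above): because open() always returns the lowest-numbered free descriptor, filling 0, 1 and 2 first guarantees that pipe() can never hand one of them back.
#include <fcntl.h>
#include <unistd.h>

/* Make sure descriptors 0, 1 and 2 are open (to /dev/null if nothing
 * else), so that a later pipe() cannot return a standard descriptor. */
static void reserve_std_fds(void)
{
    int fd;

    while ((fd = open("/dev/null", O_RDWR)) >= 0 && fd <= 2)
        ;                    /* it just became stdin/stdout/stderr: keep it open */
    if (fd > 2)
        close(fd);           /* we only needed to fill 0, 1 and 2 */
}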
I'm using Linux as the operating system and trying to make three processes communicate via a pipe and a file. It should work with any file supplied on STDIN.
The pipe part works just fine, but the second process fails to write single characters into the file properly, or the third fails to read them.
First of course I set up the semlock and semunlock functions, and the pipe is also opened there. I'd appreciate any help because I have no clue.
if (!(PID[1] = fork ())) {
int BUF_SIZE = 4096;
char d[BUF_SIZE];
while (fgets (d, BUF_SIZE, stdin) != NULL) {
write (mypipe[1], &d, BUF_SIZE);
}
}
if (!(PID[2] = fork ())) {
int reading_size = 0;
char r;
close (mypipe[1]);
semlock (semid1);
while (reading_size = read (mypipe[0], &r, 1)) {
if ((file = fopen ("proces2.txt", "w")) == NULL) {
warn ("error !!!");
exit (1);
}
fputc (r, file);
fclose (file);
semunlock (semid2);
}
}
if (!(PID[3] = fork ())) {
char x;
semlock (semid2);
do {
if ((plikProces3 = fopen ("proces2.txt", "r")) == NULL) {
warn ("Blad przy otwarciu pliku do odczytu !!!");
exit (1);
}
i = getc (plikProces3);
o = fprintf (stdout, "%c", i);
fclose (plikProces3);
semunlock (semid1);
} while (i != EOF);
}
What makes you think the child runs first? You haven't waited for the child process to finish, so you can hit EOF reading the file before the previous child has written it. Shouldn't the last fork() call be a wait(), so you know the file was written? As it stands you have 4 processes, NOT 3!!
Then you are closing mypipe[1] in the 2nd child process, which, as it is a forked copy, does not close the pipe in the first child. You are also trying to write BUF_SIZE characters, so you appear to be writing out more characters than were read; try write(mypipe[1], d, strlen(d));.
It looks very odd to have the fopen() & fclose() within the character read/write loop. Do you really want to re-open & re-write 1 character into the file over and over?
Similarly, the proces2.txt file seems to be re-opened, so the first character within would be written again and again, if it's non-empty.
There are bound to be other bugs, but that should help you for now.
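A rough sketch of the first two fixes (this slots into your existing program and assumes its mypipe, PID and semaphore setup; you'll need <string.h>, <unistd.h> and <sys/wait.h>):
if (!(PID[1] = fork ())) {
    char d[4096];

    while (fgets (d, sizeof d, stdin) != NULL)
        write (mypipe[1], d, strlen (d));   /* write only what was actually read */
    close (mypipe[1]);                      /* let the reader eventually see EOF */
    _exit (0);
}

/* ... and in the parent, after starting all the children, wait for them
 * so nothing reads the file before it has been written: */
for (int n = 0; n < 3; n++)
    wait (NULL);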
I am writing a "sleepy" device driver for an Operating Systems class.
The way it works is, the user accesses the device via read()/write().
When the user writes to the device like so: write(fd, &wait, size), the device is put to sleep for wait seconds. If the wait time expires, the driver's write method returns 0 and the program finishes. But if the user reads from the device while a process is sleeping on its wait queue, the driver's write method returns immediately with the number of seconds the sleeping process had left to wait before the timeout would have occurred on its own.
Another catch is that 10 instances of the device are created, and each of the 10 devices must be independent of each other. So a read to device 1 must only wake up sleeping processes on device 1.
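For reference, the user-space side of this is essentially the following two test programs (the /dev/sleepy0 path is just illustrative):
/* writer.c: ask device 0 to sleep for 10 seconds */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/sleepy0", O_WRONLY);
    int wait = 10;                               /* seconds */
    if (fd < 0) { perror("open"); return 1; }
    printf("write returned %zd\n", write(fd, &wait, sizeof wait));
    close(fd);
    return 0;
}

/* reader.c: wake whatever is sleeping on the same device */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/sleepy0", O_RDONLY);
    char buf[4];
    if (fd < 0) { perror("open"); return 1; }
    read(fd, buf, sizeof buf);
    close(fd);
    return 0;
}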
Much code has been provided, and I have been charged with the task of mainly writing the read() and write() methods for the driver.
The way I have tried to solve the problem of keeping the devices independent of each other is to include two global static arrays of size 10: one of type wait_queue_head_t, and one of type int (boolean flags). Both of these arrays are initialized once when I open the device via open(). The problem is that when I call wake_up_interruptible(), nothing happens, and the program terminates upon timeout. Here is my write method:
ssize_t sleepy_write(struct file *filp, const char __user *buf, size_t count, loff_t *f_pos){
struct sleepy_dev *dev = (struct sleepy_dev *)filp->private_data;
ssize_t retval = 0;
int mem_to_be_copied = 0;
if (mutex_lock_killable(&dev->sleepy_mutex))
{
return -EINTR;
}
// check size
if(count != 4) // user must provide 4 byte Int
{
return EINVAL; // = 22
}
// else if the user provided valid sized input...
else
{
if((mem_to_be_copied = copy_from_user(&long_buff[0], buf, count)))
{
return -EFAULT;
}
// check for negative wait time entered by user
if(long_buff[0] > -1)// "long_buff[]"is global,for now only holds 1 value
{
proc_read_flags[MINOR(dev->cdev.dev)] = 0; //****** flag array
retval = wait_event_interruptible_timeout(wqs[MINOR(dev->cdev.dev)], proc_read_flags[MINOR(dev->cdev.dev)] == 1, long_buff[0] * HZ) / HZ;
proc_read_flags[MINOR(dev->cdev.dev)] = 0; // MINOR numbers for each
// device correspond to array indices
// devices 0 - 9
// "wqs" is array of wait queues
}
else
{
printk(KERN_INFO "user entered negative value for sleep time\n");
}
}
mutex_unlock(&dev->sleepy_mutex);
return retval;}
Unlike the many examples on this topic, I am switching the flag back to zero immediately before the call to wait_event_interruptible_timeout() because flag values seem to be lingering between subsequent runs of the program. Here is the code for my read method:
ssize_t sleepy_read(struct file *filp, char __user *buf, size_t count,
loff_t *f_pos){
struct sleepy_dev *dev = (struct sleepy_dev *)filp->private_data;
ssize_t retval = 0;
if (mutex_lock_killable(&dev->sleepy_mutex))
return -EINTR;
// switch the flag
proc_read_flags[MINOR(dev->cdev.dev)] = 1; // again device minor numbers
// correspond to array indices
// TODO: this is not waking up the process in write!
// wake up the queue
wake_up_interruptible(&wqs[MINOR(dev->cdev.dev)]);
mutex_unlock(&dev->sleepy_mutex);
return retval;}
The way I am trying to test the program is to have two main.c files, one for writing to the device and one for reading from it, and I just run their a.out binaries in separate consoles in my Ubuntu installation in VirtualBox. Another thing: the way it is set up now, neither the writing nor the reading a.out returns until the timeout occurs. I apologize for the spotty formatting of the code. I'm not sure exactly what is going on here, so any help would be much appreciated! Thanks!
Your write method holds sleepy_mutex while it waits for the event. So the read method waits in mutex_lock_killable(&dev->sleepy_mutex) until the mutex is unlocked by the writer. That only happens when the writer's timeout expires and its write method returns, which is exactly the behaviour you observe.
Usually, wait_event* is executed outside of any critical section. That can be achieved by using the _lock-suffixed variants of such macros, or simply by wrapping the cond argument of the macro in a spinlock acquire/release pair:
int check_cond(void)
{
    int res;

    spin_lock(&lock);
    res = <cond>;           /* evaluate the condition while holding the lock */
    spin_unlock(&lock);
    return res;
}
...
wait_event_interruptible(wq, check_cond());
Unfortunately, the wait_event family of macros cannot be used when the condition check has to be protected by a mutex. In that case you can use the wait_woken() function with manual condition-checking code, or rewrite your code so that no mutex lock/unlock is needed around the condition check.
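For the wait_woken() route, a rough, untested sketch might look like this (the flag field is a placeholder for your per-device condition, and the header names can vary between kernel versions):
#include <linux/jiffies.h>
#include <linux/mutex.h>
#include <linux/sched.h>
#include <linux/wait.h>

static long sleepy_wait_for_flag(struct sleepy_dev *dev,
                                 wait_queue_head_t *wq, long timeout)
{
    DEFINE_WAIT_FUNC(wait, woken_wake_function);

    add_wait_queue(wq, &wait);
    while (timeout > 0) {
        int done;

        mutex_lock(&dev->sleepy_mutex);
        done = dev->flag;               /* condition checked under the mutex */
        mutex_unlock(&dev->sleepy_mutex);

        if (done)
            break;

        timeout = wait_woken(&wait, TASK_INTERRUPTIBLE, timeout);
        if (signal_pending(current)) {  /* interrupted by a signal */
            timeout = -ERESTARTSYS;
            break;
        }
    }
    remove_wait_queue(wq, &wait);
    return timeout;
}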
To achieve "the reader wakes the writer if it is sleeping" functionality, you can adapt the code from this answer: https://stackoverflow.com/a/29765695/3440745.
Writer code:
//Declare local variable at the beginning of the function
int cflag;
...
// Outside of any critical section(after mutex_unlock())
cflag = proc_read_flags[MINOR(dev->cdev.dev)];
wait_event_interruptible_timeout(wqs[MINOR(dev->cdev.dev)],
    proc_read_flags[MINOR(dev->cdev.dev)] != cflag, long_buff[0]*HZ);
Reader code:
// Mutex holding protects this flag's increment from concurrent one.
proc_read_flags[MINOR(dev->cdev.dev)]++;
wake_up_interruptible_all(&wqs[MINOR(dev->cdev.dev)]);
I have a char device driver for a virtual device. I want a FIFO in the device driver so that two processes using the device can transfer characters between them. I tried kfifo, but I am new to this and find it difficult to use. Can anybody please suggest some other way to implement a FIFO in a Linux driver?
If you are only going to allow two processes to use the driver, then you can do as this:
In your open handler, make sure that two and only two processes can enter the driver:
If access mode = READ and not alreadyreading then
alreadyreading = 1
else
return -EBUSY
If access mode = WRITE and not alreadywritting then
alreadywritting = 1
else
return -EBUSY
In the same handler, initialize your FIFO, which could be just a single global character variable, and two wait queues: one for read, and one for write. Associated with these queues will be two variables: ready_to_read and ready_to_write. At the beginning, ready_to_read = 0 and ready_to_write = 1.
Then, in the release handler:
If access mode = READ
alreadyreading = 0;
If access mode = WRITE
alreadywritting = 0
To allow a new process to open the device in read or write mode.
In the write handler:
If access mode = READ then // we only support writing if the access mode is write
return -EINVAL
Else
res = wait_event_interruptible (write_queue, ready_to_write);
if (res)
return res; // if process received a signal, exit write
Take a single character from user space (copy_from_user() )
Copy it to the FIFO (the global character variable)
ready_to_write = 0; // no more writes until a read is performed
ready_to_read = 1; // ready to read! wake up the reading process
wake_up_interruptible (&read_queue);
return 1; // 1 byte written
And finally, in the read handler:
If access mode = WRITE then // we only support reading if the access mode is read
return -EINVAL
Else
res = wait_event_interruptible (read_queue, ready_to_read);
if (res)
return res; // if process received a signal, exit read
Take character from global variable (our FIFO) and send it to userspace (copy_to_user() )
ready_to_read = 0; // no more reads until a write is performed
ready_to_write = 1; // ready to write! wake up the writing process
wake_up_interruptible (&write_queue);
return 1; // 1 byte read
You can extend this example to allow a FIFO or more than one character: you would need an array of chars, and two indexes: one to know where to read from, and one to know where to write to.
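Putting the pseudo-code above together, a minimal sketch of the single-character version might look like this (names such as my_fifo_read/my_fifo_write are made up, and the open/release bookkeeping with the alreadyreading/alreadywritting flags is left out):
#include <linux/fs.h>
#include <linux/uaccess.h>
#include <linux/wait.h>

static char fifo_char;                       /* the one-byte "FIFO" */
static int ready_to_read;                    /* starts at 0 */
static int ready_to_write = 1;               /* starts at 1 */
static DECLARE_WAIT_QUEUE_HEAD(read_queue);
static DECLARE_WAIT_QUEUE_HEAD(write_queue);

static ssize_t my_fifo_write(struct file *filp, const char __user *buf,
                             size_t count, loff_t *f_pos)
{
    int res;

    if (!(filp->f_mode & FMODE_WRITE))
        return -EINVAL;
    res = wait_event_interruptible(write_queue, ready_to_write);
    if (res)
        return res;                          /* interrupted by a signal */
    if (copy_from_user(&fifo_char, buf, 1))
        return -EFAULT;
    ready_to_write = 0;                      /* no more writes until a read */
    ready_to_read = 1;                       /* wake up the reading process */
    wake_up_interruptible(&read_queue);
    return 1;                                /* 1 byte written */
}

static ssize_t my_fifo_read(struct file *filp, char __user *buf,
                            size_t count, loff_t *f_pos)
{
    int res;

    if (!(filp->f_mode & FMODE_READ))
        return -EINVAL;
    res = wait_event_interruptible(read_queue, ready_to_read);
    if (res)
        return res;
    if (copy_to_user(buf, &fifo_char, 1))
        return -EFAULT;
    ready_to_read = 0;                       /* no more reads until a write */
    ready_to_write = 1;                      /* wake up the writing process */
    wake_up_interruptible(&write_queue);
    return 1;                                /* 1 byte read */
}
The open and release handlers would set and clear the alreadyreading/alreadywritting flags exactly as in the pseudo-code above.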
To test your driver, you can open two xterms and do
cat /dev/mydriver
in one, and:
cat > /dev/mydriver
In the other one. Then, every line you write in the second xterm will be shown in the first one.
You can even modify the driver so that when the writing process closes the file, a flag is set. The next time the read process waits to read something, it detects that the write process has ended and returns 0 as well (to signal an EOF to the user). That way, when you press Ctrl-D in the second xterm to end input, the first one ends automatically too. Something like:
(read handler)
res = wait_event_interruptible (read_queue, ready_to_read || write_process_ended);
if (res)
return res; // -ERESTARTSYS if a signal was received
if (write_process_ended)
{
ready_to_write = 1;
return 0; // if write process ended, send an EOF to the user
}
else
{
...
... get byte from FIFO, send to the user, etc.
...
return number_of_bytes_sent_to_user;
}
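The release-handler side of that modification might look roughly like this, continuing the sketch above (the flag names follow the pseudo-code; the function name is made up):
static int my_fifo_release(struct inode *inode, struct file *filp)
{
    if (filp->f_mode & FMODE_WRITE) {
        alreadywritting = 0;                 /* let a new writer open the device */
        write_process_ended = 1;             /* tell the reader there is no more data */
        wake_up_interruptible(&read_queue);  /* wake a reader blocked in read() */
    } else {
        alreadyreading = 0;
        write_process_ended = 0;             /* reset for the next writer/reader pair */
    }
    return 0;
}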
I'm a bit of a newbie, but I have a legacy app that reads 64 bytes of AES-encrypted data from a device using ttyACM0. I now need to read 128 bytes. It sounded simple: increase the sizes of the buffers, etc. But no matter what I try, I can still only read 64 bytes. After that it just hangs. I verified the communications in Windows with a terminal and the cdc-acm driver. The device does not use flow control. I can't upload the code because it's proprietary, but below are some snippets:
The initialization:
int CACS_RefID::Initialise()
{
int iRet = 1;
struct termios dev_settings;
if(( m_fdRefdev = open("/dev/ttyACM0", O_RDWR))<0)
{
g_dbg->debug("CACS_RefID::Failed to open device\n");
return 0;
}
g_dbg->debug("CACS_RefID::Initialse completed\n");
// Configure the port
tcgetattr(m_fdRefdev, &dev_settings);
cfmakeraw(&dev_settings);
//*tcflush
//tcflush(m_fdRefdev, TCIOFLUSH);
tcsetattr(m_fdRefdev, TCSANOW, &dev_settings);
return iRet;
}
The implementation:
int CACS_RefID::Readport_Refid(int ilen, char* buf)
{
int ierr=0, iret = 0, ictr=0;
fd_set fdrefid;
struct timeval porttime_refrd;
FD_ZERO(&fdrefid);
FD_SET(m_fdRefdev,&fdrefid);
porttime_refrd.tv_sec = 1;
porttime_refrd.tv_usec = 0; //10 Seconds wait time for read port
do
{
iret = select(m_fdRefdev + 1, &fdrefid, NULL, NULL, &porttime_refrd);
switch(iret)
{
case READ_TIMEOUT:
g_dbg->debug("Refid portread: Select timeout:readlen=%d \n",ilen);
ierr = -1;
break;
case READ_ERROR:
g_dbg->debug("Refid portread: Select error:readlen=%d \n",ilen);
ierr = -1;
break;
default:
iret = read(m_fdRefdev, buf, ilen);
g_dbg->debug("Refid portread: Read len(%d):%d\n",ilen,iret);
break;
}
}while((ierr == 0) && (iret<ilen) );
//Flush terminal content at Input and Output after every read completion
// tcflush(m_fdRefdev, TCIOFLUSH);
return ierr;
}
If I run the initialization every time before calling the read implementation, I get 128 bytes, but the data is corrupt after 64 bytes. Even before working on it, I got a lot of READ_ERRORs. It looks like the original author expected the device to block in select(), but it doesn't.
Is there some type of limitation on the ttyACM0 buffer size in the system? Does the baud rate matter with the ttyACM driver? Does read() stop reading after all available bytes are read (thinking the first 64 are available, then the buffer is empty, then more data arrives)?
Poring through man pages, but I'm stymied. ANY help would be greatly appreciated.
Here's my latest:
int CACS_RefID::Get_GasTest_Result(int ilen)
{
int ierr=0, iret = 0, ictr=0, iread=0;
fd_set fdrefid;
struct timeval porttime_refrd;
porttime_refrd.tv_sec = 5;
porttime_refrd.tv_usec = 0; //10 Seconds wait time for read port
if (Get_GasTest_FirstPass == 0)
{
g_dbg->debug("GasTest_Result_firstPass\n");
memset(strresult, 0, sizeof(strresult)); //SLY clear out result buffer
iread=0;
Get_GasTest_FirstPass = 1;
}
do
{
iread = strlen(strresult);
FD_ZERO(&fdrefid);
FD_SET(m_fdRefdev,&fdrefid);
iret = select(m_fdRefdev + 1, &fdrefid, NULL, NULL, &porttime_refrd);
switch(iret)
{
case READ_TIMEOUT: //0
g_dbg->debug("Get_GasTest_Result: Select timeout\n");
ierr = -1;
break;
case READ_ERROR: //-1
g_dbg->debug("Get_GasTest_Result: Select error=%d %s \n", errno,strerror(errno)) ;
ierr = -1;
break;
}
iret = read(m_fdRefdev, (&strresult[0] + iread), (ilen-iread));
g_dbg->debug("Get_GasTest_Result: ilen=%d,iret=%d,iread=%d \n",ilen,iret,iread);
}while((ierr == 0) && (iread<ilen) );
return ierr;
}
Note: I am now reading data regardless of select errors and STILL only getting 64bytes. I've contacted my device mfg. Must be something odd going on.
Here is one possible problem with your code; this may not be the one that is causing you to only get 64 bytes but it could explain what you are seeing. Assume that you invoke the function Readport_Refid() with a buffer of 128 bytes. In other words, your invocation was something like:
char buffer[128];
Readport_Refid(128, buffer);
Assume for whatever reason that the first call to select() gets you a return value of 1 (since one bit is set). Your code is only setting one bit so you go off and you read()
iret = read(m_fdRefdev, buf, ilen);
g_dbg->debug("Refid portread: Read len(%d):%d\n",ilen,iret);
break;
read() comes back with 64 (which means 64 bytes were read into buf), your program prints a nice message, and since ierr is still 0 and iret (64) is less than ilen (128), you go round again and call select().
Assume that you get more data and select() returns 1 again. Then you will go read again on the same buffer with the same ilen and overwrite the first 64 bytes that were read.
At the very least, you should do the following. I have only shown below the changed lines. First add an iread variable and make sure you use it to preserve data that you've already read. Then use iread to determine whether you've read enough or not.
int CACS_RefID::Readport_Refid(int ilen, char* buf)
{
int ierr=0, iret = 0, ictr=0, iread = 0;
[...]
default:
iret = read(m_fdRefdev, buf + iread, ilen - iread);
if (iret > 0)
iread += iret;
g_dbg->debug("Refid portread: Read len(%d):%d\n",ilen,iret);
break;
}
}while((ierr == 0) && (iread<ilen) );
[...]
**** EDITED 2013-08-19 ****
I want to reiterate a comment made by @wildplasser:
You should really also be setting FD_SET on each trip around the loop. Great catch.
With respect to your new code, does it work or do you still have a problem?
**** EDITED again 2013-08-19 ****
Getting EINTR is nothing to be worried about. You should just plan on resetting FD_SET and trying again.
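A minimal sketch of that pattern (a hypothetical helper, not code from the question), which rebuilds the fd_set and retries on EINTR each time around the loop; the 5-second timeout mirrors the one already used in Get_GasTest_Result():
#include <errno.h>
#include <sys/select.h>
#include <sys/types.h>
#include <unistd.h>

/* Read exactly len bytes, or give up on timeout/EOF and return what we have. */
ssize_t read_exactly(int fd, char *buf, size_t len)
{
    size_t done = 0;

    while (done < len) {
        fd_set rfds;
        struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };
        int rc;

        FD_ZERO(&rfds);                  /* rebuild the set on every trip */
        FD_SET(fd, &rfds);

        rc = select(fd + 1, &rfds, NULL, NULL, &tv);
        if (rc == -1) {
            if (errno == EINTR)
                continue;                /* interrupted: just try again */
            return -1;                   /* real error */
        }
        if (rc == 0)
            return (ssize_t)done;        /* timeout: return what we have */

        ssize_t n = read(fd, buf + done, len - done);
        if (n == -1) {
            if (errno == EINTR)
                continue;
            return -1;
        }
        if (n == 0)
            return (ssize_t)done;        /* EOF: device closed */
        done += n;
    }
    return (ssize_t)done;
}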
I can't say I know why, but the fix was to call the initialization code at the beginning of the read implementation even though it was already called previously. If I call it again, I can read 128 bytes. If I don't, I can only read up to 64 bytes.