linux programming: value of file descriptor is always 3

I'm new to linux programming. I wrote a very simple program:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <mtd/mtd-user.h>
#include <errno.h>

int main(void)
{
    int fd;

    fd = open("test.target", O_RDWR);
    printf("var fd = %d\n", fd);
    close(fd);
    perror("perror output:");
    return 0;
}
test.target was created just using the touch command. The program's output is:
var fd = 3
perror output:: Success
I've tried to open other files, and the file descriptor was always 3. I remembered its value should be a larger number. Does this program have an error?

This seems normal. Processes start with pre-opened file descriptors: 0 for stdin, 1 for stdout and 2 for stderr. Any new files you open should start with 3. If you close a file, that file descriptor number will be re-used for any new files you open.

If you open more files without closing the previous ones, they will get descriptors 4, 5, and so on.
For more info go to http://wiki.bash-hackers.org/howto/redirection_tutorial
It's for bash, but the whole idea is universal.
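As a minimal sketch (reusing the test.target file from the question; the variable names are just for illustration), this shows the numbers being handed out and then reused:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int a = open("test.target", O_RDWR);   /* 0, 1, 2 are taken, so expect 3 */
    int b = open("test.target", O_RDWR);   /* next free number: expect 4 */
    printf("a=%d b=%d\n", a, b);

    close(a);                              /* descriptor 3 is free again */
    int c = open("test.target", O_RDWR);   /* lowest free number is reused: expect 3 */
    printf("c=%d\n", c);

    close(b);
    close(c);
    return 0;
}
POSIX guarantees that open() returns the lowest-numbered descriptor that is currently free, which is why you always see 3 right after program start.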

Related

Pipe data to thread. Read stuck

I want to trigger a callback when data is written on a file descriptor. For this I have set up a pipe and a reader thread, which reads the pipe. When it has data, the callback is called with the data.
The problem is that the reader is stuck on the read syscall. Destruction order is as follows:
Close write end of pipe (I expected this to trigger a return from blocking read, but apparently it doesn't)
Wait for reader thread to exit
Restore old file descriptor context (If stdout was redirected to the pipe, it no longer is)
Close read end of pipe
When all write ends of the pipe have been closed, a blocking read() on the read end returns 0 (end of file).
Here is an example program creating a reader thread for a pipe. The main program gets data from stdin with fgets() and writes that data into the pipe. On the other side, the thread reads the pipe and triggers the callback passed as a parameter. The thread stops when it gets 0 from the read() of the pipe (meaning that the main thread closed the write side):
#include <stdio.h>
#include <errno.h>
#include <pthread.h>
#include <unistd.h>
#include <string.h>

int pfd[2];

void read_cbk(char *data, size_t size)
{
    printf("CBK triggered, %zu bytes: %s", size, data);
}

void *reader(void *p)
{
    char data[128];
    void (*cbk)(char *data, size_t size) = (void (*)(char *, size_t))p;
    int rc;

    do {
        rc = read(pfd[0], data, sizeof(data));
        switch (rc) {
        case 0:
            fprintf(stderr, "Thread: rc=0\n");
            break;
        case -1:
            fprintf(stderr, "Thread: rc=-1, errno=%d\n", errno);
            break;
        default:
            cbk(data, (size_t)rc);
        }
    } while (rc > 0);
    return NULL;
}

int main()
{
    pthread_t treader;
    int rc;
    char input[128];
    char *p;

    pipe(pfd);
    pthread_create(&treader, NULL, reader, read_cbk);
    do {
        // fgets() inserts the terminating \n and \0 in the buffer
        // If EOF (that is to say CTRL-D), fgets() returns NULL
        p = fgets(input, sizeof(input), stdin);
        if (p != NULL) {
            // Send the terminating \0 to the reader to facilitate printf()
            rc = write(pfd[1], input, strlen(p) + 1);
        }
    } while (p);
    close(pfd[1]);
    pthread_join(treader, NULL);
    close(pfd[0]);
    return 0;
}
Example of execution:
$ gcc t.c -o t -lpthread
$ ./t
azerty is not qwerty
CBK triggered, 22 bytes: azerty is not qwerty
string
CBK triggered, 8 bytes: string
# Here I typed CTRL-D to generate an EOF on stdin
Thread: rc=0
I found the problem. For redirection, the following has to be done:
Create a pipe. This creates two file descriptors: one for reading and one for writing.
Call dup2() so that the original file descriptor becomes an alias of the pipe's write end. This increments the use count of the write end by one.
Thus, before synchronizing, I have to restore the context. This means that the following order is correct:
Close write end of pipe
Restore old file descriptor context
Wait for reader thread to exit
Close read end of pipe
Relative to the order given in the question, steps 2 and 3 must be swapped in order to avoid the deadlock.
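As a sketch of that redirection and the corrected teardown order (the names redirect_stdout_to_pipe, restore_stdout and saved_stdout are made up for this illustration, and error checking is omitted):
#include <unistd.h>

int pfd[2];
int saved_stdout;

void redirect_stdout_to_pipe(void)
{
    pipe(pfd);
    saved_stdout = dup(STDOUT_FILENO);   /* keep the old context */
    dup2(pfd[1], STDOUT_FILENO);         /* stdout is now an alias of the write end */
    /* ... create the reader thread on pfd[0] here ... */
}

void restore_stdout(void)
{
    /* 1. Close the write end of the pipe */
    close(pfd[1]);
    /* 2. Restore the old file descriptor context: this drops the reference to the
          write end still held through STDOUT_FILENO, so the reader's blocking
          read() now returns 0 */
    dup2(saved_stdout, STDOUT_FILENO);
    close(saved_stdout);
    /* 3. Wait for the reader thread to exit (pthread_join) */
    /* 4. Close the read end of the pipe */
    close(pfd[0]);
}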

Cygwin FIFO vs native Linux FIFO - discrepancy in blocking behaviour?

The code shown is based on an example using named pipes from a tutorial site.
server.c
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
#include <string.h>
#define FIFO_FILE "MYFIFO"
int main()
{
    int fd;
    char readbuf[80];
    int read_bytes;

    // mknod(FIFO_FILE, S_IFIFO|0640, 0);
    mkfifo(FIFO_FILE, 0777);
    while (1) {
        fd = open(FIFO_FILE, O_RDONLY);
        read_bytes = read(fd, readbuf, sizeof(readbuf));
        readbuf[read_bytes] = '\0';
        printf("Received string: \"%s\". Length is %d\n", readbuf, (int)strlen(readbuf));
    }
    return 0;
}
When executing the server on Windows under Cygwin, the server enters an undesired loop, repeating the same message. For example, if you run in a shell:
$ ./server
|
then the "server" waits for the client, but when the FIFO is not empty, e.g. writing in a new shell
$ echo "Hello" > MYFIFO
then the server enters an infinite loop, repeating the "Hello" string:
Received string: "Hello". Length is 4
Received string: "Hello". Length is 4
...
Furthermore, new strings written to the FIFO don't seem to be read by the server. In Linux, however, the behaviour is quite different: the server prints the string and then waits for new data to appear on the FIFO. What is the reason for this discrepancy?
You need to fix your code to remove at least 3 bugs:
You're not doing a close(fd) so you will get a file descriptor leak and eventually be unable to open() new files.
You're not checking the value of fd (if it returns -1 then there was an error).
You're not checking the value of read (if it returns -1 then there was an error)... and your readbuf[read_bytes] = '\0'; will not be doing what you expect as a result.
When you get an error then errno will tell you what went wrong.
These bugs probably explain why you keep getting Hello output (especially the readbuf[read_bytes] problem).
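A minimal sketch of how server.c might look with those checks added (also reading one byte less than the buffer size so the terminating '\0' always fits):
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>

#define FIFO_FILE "MYFIFO"

int main(void)
{
    int fd;
    char readbuf[80];
    int read_bytes;

    mkfifo(FIFO_FILE, 0777);
    while (1) {
        fd = open(FIFO_FILE, O_RDONLY);
        if (fd == -1) {                 /* check the open() result */
            perror("open");
            return 1;
        }
        read_bytes = read(fd, readbuf, sizeof(readbuf) - 1);
        if (read_bytes == -1) {         /* check the read() result */
            perror("read");
            close(fd);
            return 1;
        }
        readbuf[read_bytes] = '\0';
        printf("Received string: \"%s\". Length is %d\n",
               readbuf, (int)strlen(readbuf));
        close(fd);                      /* no descriptor leak */
    }
    return 0;
}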

Use a select-like function on a regular disk file

I have a computer which logs some sensor data into 8 different files.
I developed software that lets you copy this data to another computer when you connect the two machines with an RJ45 cable.
After retrieving the data on my computer, I need to send it line by line from each file over a pseudo serial port (using socat).
I created a program which uses nested for loops to check whether data is ready in any of the 8 files, and then extracts a line and sends it to puttySX.
The problem is CPU usage. One way to reduce it would be a blocking call that tells me whether data is ready to be read or not, but is there any function like select, which works on sockets and serial ports, for such files?
If not, what should I do? Thanks
You can take a look at inotify, which lets you monitor file system events.
Here is some sample code to get you started (this is not production code):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/inotify.h>

#define BUF_LEN (sizeof(struct inotify_event) * 1)

int main(int argc, char *argv[])
{
    char *filepath;
    int fd, wd;
    struct inotify_event *event;
    char buf[BUF_LEN];
    ssize_t ret;

    if (argc != 2)
    {
        fprintf(stderr, "Usage: %s <filepath>\n", argv[0]);
        return (EXIT_FAILURE);
    }
    filepath = argv[1];

    /* Initialization */
    fd = inotify_init();
    if (fd == -1)
    {
        perror("inotify_init()");
        return (EXIT_FAILURE);
    }

    /* Specify which file to monitor */
    wd = inotify_add_watch(fd, filepath, IN_MODIFY);
    if (wd == -1)
    {
        perror("inotify_add_watch");
        close(fd);
        return (EXIT_FAILURE);
    }

    /* Wait for that file to be modified, */
    /* and print a notification each time it is */
    for (;;)
    {
        ret = read(fd, buf, BUF_LEN);
        if (ret < 1)
        {
            perror("read()");
            close(fd);
            return (EXIT_FAILURE);
        }
        event = (struct inotify_event *)buf;
        if (event->mask & IN_MODIFY)
            printf("File modified!\n");
    }

    close(fd);
    return (EXIT_SUCCESS);
}
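For example, assuming the code above is saved as watch.c, something like gcc watch.c -o watch && ./watch /path/to/logfile (with your own log file path) should print a notification each time the watched file is written to.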
So, I'm posting to answer my own question. Thanks to @yoones I found a trick to do this.
When a log file is created, I set a flag to true in an INI file that looks like this:
[CreatedFiles]
cli1=false
cli2=false
cli3=false
cli4=false
cli5=false
cli6=false
cli7=false
cli8=false
Another program uses inotify to detect creation of and modification to the corresponding files. Once there is a change, it reads the INI file and processes the data; when it has finished reading the data, it deletes the log file and writes false on the corresponding line of the INI file.
Since I have to process several log files at the same time, each time I read a line I check the INI file to see whether I have to start processing another log file as well, so I can run multiple processes at the same time.
I put all this in an infinite while loop, so when all the processing is done the program goes back to a select call, waiting for a change instead of consuming all the CPU's resources.
I'm sorry if I'm not very clear; English is not my native language.
Thanks all for your replies and comments.

nonblocking I/O behavior is weird for STDIN_FILENO and STDOUT_FILENO

I have the following code:
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* err_sys() is the error-reporting helper from APUE (not shown here) */

void
set_fl(int fd, int flags) /* flags are file status flags to turn on */
{
    int val;

    if ((val = fcntl(fd, F_GETFL, 0)) < 0)
        err_sys("fcntl F_GETFL error");
    val |= flags; /* turn on flags */
    if (fcntl(fd, F_SETFL, val) < 0)
        err_sys("fcntl F_SETFL error");
}

int
main(void)
{
    char buf[BUFSIZ];

    set_fl(STDOUT_FILENO, O_NONBLOCK);           // set STDOUT_FILENO to nonblocking
    if (read(STDIN_FILENO, buf, BUFSIZ) == -1) { // read from STDIN_FILENO
        printf("something went wrong with read()! %s\n", strerror(errno));
    }
}
As you can see, I set STDOUT_FILENO to non-blocking mode but it seems the read operation on STDIN_FILENO finished immediately. Why?
$ ./testprog
something went wrong with read()! Resource temporarily unavailable
Thanks
That's exactly right: printing errno and calling perror immediately after the read results in "Resource temporarily unavailable" and an error number of 11, i.e. EAGAIN/EWOULDBLOCK, as shown in this code:
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    char buf;

    fcntl(STDOUT_FILENO, F_SETFL, fcntl(STDOUT_FILENO, F_GETFL, 0) | O_NONBLOCK);
    fprintf(stderr, "%5d: ", errno); perror("");

    read(STDIN_FILENO, &buf, 1);
    fprintf(stderr, "%5d: ", errno); perror("");
}
which generates:
0: Success
11: Resource temporarily unavailable
The reason is that file descriptors have two different types of flags (see the glibc manual's section on duplicating file descriptors):
You can duplicate a file descriptor, or allocate another file descriptor that refers to the same open file as the original. Duplicate descriptors share one file position and one set of file status flags (see File Status Flags), but each has its own set of file descriptor flags (see Descriptor Flags).
The first is file descriptor flags and these are indeed unique per file descriptor. According to the documentation, FD_CLOEXEC (close on exec) is the only one currently in this camp.
All other flags are file status flags, and are shared amongst file descriptors that have been duplicated. These include the I/O operating modes such as O_NONBLOCK.
So, what's happening here is that the standard output file descriptor was duplicated from the standard input one (the order isn't relevant, just the fact that one was duplicated from the other) so that setting non-blocking mode on one affects all duplicates (and that would probably include the standard error file descriptor as well, though I haven't confirmed it).
It's not usually a good idea to muck about with blocking mode on file descriptors that are duplicated, nor with file descriptors that will likely be inherited by sub-processes - those sub-processes don't always take kindly to having their standard files misbehaving (from their point of view).
If you want more fine-grained control over individual file descriptors, consider using select to check descriptors before attempting a read.
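For instance, a minimal sketch of that approach, blocking in select() until standard input is readable before issuing the read():
#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>

int main(void)
{
    char buf[256];
    fd_set rfds;

    FD_ZERO(&rfds);
    FD_SET(STDIN_FILENO, &rfds);

    /* No timeout is given, so select() blocks until stdin has data */
    if (select(STDIN_FILENO + 1, &rfds, NULL, NULL, NULL) > 0 &&
        FD_ISSET(STDIN_FILENO, &rfds)) {
        ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
        printf("read %zd bytes\n", n);
    }
    return 0;
}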

Prevent file descriptor inheritance during Linux fork

How do you prevent a file descriptor from being copy-inherited across fork() system calls (without closing it, of course)?
I am looking for a way to mark a single file descriptor as NOT to be (copy-)inherited by children at fork(), something like an FD_CLOEXEC-style hack but for fork (an FD_DONTINHERIT feature, if you like). Has anybody done this? Or looked into it and got a hint for me to start with?
Thank you
UPDATE:
I could use libc's __register_atfork
__register_atfork(NULL, NULL, fdcleaner, NULL)
to close the fds in the child just before fork() returns. However, the FDs are still being copied, so this sounds like a silly hack to me. The question is how to skip the dup()-ing of unneeded FDs in the child.
I'm thinking of some scenarios where an fcntl(fd, F_SETFL, F_DONTINHERIT) would be needed:
fork() will copy an event FD (e.g. from epoll()); sometimes this isn't wanted. FreeBSD, for example, marks the kqueue() event FD as being of KQUEUE_TYPE, and such FDs are not copied across forks (kqueue FDs are explicitly skipped from the copy; if a child wants to use one, it must fork with a shared FD table)
fork() will copy 100k unneeded FDs just to fork a child for some CPU-intensive task (suppose the need for a fork() is probabilistically very low and the programmer doesn't want to maintain a pool of children for something that normally wouldn't happen)
Some descriptors we want to be copied (0, 1, 2), some (most of them?) not. I think full FD-table duping is there for historic reasons, but I am probably wrong.
How silly does this sound:
patch fcntl() to support a dontinherit flag on file descriptors (not sure whether the flag should be kept per-FD or in an FD-table fd_set, the way the close-on-exec flags are kept)
modify dup_fd() in the kernel to skip copying of dontinherit FDs, the same way FreeBSD does for kqueue FDs
To get an idea of the cost of copying the FD table, consider the program:
#include <stdio.h>
#include <unistd.h>
#include <err.h>
#include <stdlib.h>
#include <fcntl.h>
#include <time.h>
#include <sys/wait.h>

static int fds[NUMFDS];
clock_t t1;

static void cleanup(int i)
{
    while (i-- > 0)
        close(fds[i]);
}

void clk_start(void)
{
    t1 = clock();
}

void clk_end(void)
{
    double tix = (double)clock() - t1;
    double sex = tix / CLOCKS_PER_SEC;
    printf("fork_cost(%d fds)=%fticks(%f seconds)\n",
           NUMFDS, tix, sex);
}

int main(int argc, char **argv)
{
    pid_t pid;
    int i;

    /* __register_atfork() is glibc-internal; pthread_atfork() is the portable interface */
    __register_atfork(clk_start, clk_end, NULL, NULL);
    for (i = 0; i < NUMFDS; i++) {
        fds[i] = open("/dev/null", O_RDONLY);
        if (fds[i] == -1) {
            cleanup(i);
            errx(EXIT_FAILURE, "open_fds:");
        }
    }
    t1 = clock();
    pid = fork();
    if (pid < 0) {
        errx(EXIT_FAILURE, "fork:");
    }
    if (pid == 0) {
        cleanup(NUMFDS);
        exit(0);
    } else {
        wait(&i);
        cleanup(NUMFDS);
    }
    exit(0);
    return 0;
}
Of course, this can't be considered a real benchmark, but anyhow:
root@pinkpony:/home/cia/dev/kqueue# time ./forkit
fork_cost(100 fds)=0.000000ticks(0.000000 seconds)
real 0m0.004s
user 0m0.000s
sys 0m0.000s
root@pinkpony:/home/cia/dev/kqueue# gcc -DNUMFDS=100000 -o forkit forkit.c
root@pinkpony:/home/cia/dev/kqueue# time ./forkit
fork_cost(100000 fds)=10000.000000ticks(0.010000 seconds)
real 0m0.287s
user 0m0.010s
sys 0m0.240s
root@pinkpony:/home/cia/dev/kqueue# gcc -DNUMFDS=100 -o forkit forkit.c
root@pinkpony:/home/cia/dev/kqueue# time ./forkit
fork_cost(100 fds)=0.000000ticks(0.000000 seconds)
real 0m0.004s
user 0m0.000s
sys 0m0.000s
forkit ran on a Dell Inspiron 1520 with an Intel(R) Core(TM)2 Duo CPU T7500 @ 2.20GHz and 4GB RAM; average_load=0.00
If you fork with the purpose of calling an exec function, you can use fcntl with FD_CLOEXEC to have the file descriptor closed once you exec:
int fd = open(...);
fcntl(fd, F_SETFD, FD_CLOEXEC);
Such a file descriptor will survive a fork but not functions of the exec family.
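A small sketch illustrating that behaviour: the descriptor is still usable in the child after fork(), but the kernel closes it on exec. Here the child execs cat on /proc/self/fd/N, which fails precisely because N no longer exists after the exec (/etc/hostname is just an arbitrary readable file for the demo):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

int main(void)
{
    char path[64];

    int fd = open("/etc/hostname", O_RDONLY);
    if (fd == -1) {
        perror("open");
        return EXIT_FAILURE;
    }
    fcntl(fd, F_SETFD, FD_CLOEXEC);      /* close this descriptor on exec */
    snprintf(path, sizeof(path), "/proc/self/fd/%d", fd);

    if (fork() == 0) {
        /* The descriptor survived fork(): F_GETFD still succeeds here */
        printf("child before exec: fd %d is %s\n", fd,
               fcntl(fd, F_GETFD) == -1 ? "closed" : "open");
        /* After exec the descriptor is gone, so cat reports "No such file or directory" */
        execlp("cat", "cat", path, (char *)NULL);
        perror("execlp");
        _exit(EXIT_FAILURE);
    }
    wait(NULL);
    close(fd);
    return 0;
}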
No. Close them yourself, since you know which ones need to be closed.
There's no standard way of doing this to my knowledge.
If you're looking to implement it properly, probably the best way to do it would be to add a system call to mark the file descriptor as close-on-fork, and to intercept the sys_fork system call (syscall number 2) to act on those flags after calling the original sys_fork.
If you don't want to add a new system call, you might be able to get away with intercepting sys_ioctl (syscall number 54) and just adding a new command to it for marking a file descriptor close-on-fork.
Of course, if you can control what your application is doing, then it might be better to maintain user-level tables of all file descriptors you want closed on fork and call your own myfork instead. This would fork, then go through the user-level table closing those file descriptors so marked.
You wouldn't have to fiddle around in the Linux kernel then, a solution that's probably only necessary if you don't have control over the fork process (say, if a third party library is doing the fork() calls).
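As a rough sketch of that last, user-level approach (the names myfork, dontinherit_add and the fixed-size table are all made up for this example):
#include <unistd.h>

#define MAX_DONTINHERIT 64

static int dontinherit_fds[MAX_DONTINHERIT];
static int dontinherit_count;

/* Register a descriptor that should not survive into forked children */
void dontinherit_add(int fd)
{
    if (dontinherit_count < MAX_DONTINHERIT)
        dontinherit_fds[dontinherit_count++] = fd;
}

/* fork() wrapper: the child closes every registered descriptor right away */
pid_t myfork(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        int i;
        for (i = 0; i < dontinherit_count; i++)
            close(dontinherit_fds[i]);
        dontinherit_count = 0;
    }
    return pid;
}
Note that the descriptors are still duplicated by the kernel and only closed afterwards, so this saves bookkeeping rather than the copying cost, and it only helps when your own code (not a third-party library) performs the fork().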
