Linux dup2 doesn't seem to work with a pipe?

I was trying dup2 on Linux. In my test program, I open a pipe, try to dup stdin onto the pipe's write end and stdout onto the pipe's read end. I was hoping that when I run this program, stdin is written into the pipe and the pipe automatically dumps the content to stdout:
#include <stdio.h>
#include <unistd.h>

int main() {
    int pipefd[2];
    pipe(pipefd);
    int& readfd = pipefd[0];
    int& writefd = pipefd[1];
    dup2(STDIN_FILENO, writefd);
    dup2(STDOUT_FILENO, readfd);
    char buf[1024];
    scanf("%s", buf);
    return 0;
}
When I run this program, I don't see any extra output on stdout. My questions:
(1) Is my scanf input being written to the pipe's write end?
(2) If yes, could the content be automatically directed to my console output? If not, how do I fix it?

If I read man dup2 right, the dup2(oldfd, newfd) system call creates a copy of the oldfd file descriptor numbered newfd, silently closing newfd if it was previously open. So your dup2(STDIN_FILENO, writefd) line closes the write end of the pipe and replaces it with a copy of stdin. The next line does the same for the read end and stdout, respectively.

So you don't get your stdin and stdout connected through the pipe. Instead, you create a pipe and then close both its ends, replacing them with copies of your original stdin and stdout descriptors. After that, your scanf("%s", buf); just reads a string from the original stdin as usual. You can add a line like printf("%c\n", buf[1]); right after it, and it will print the second character of the string to the original stdout. Note that at that point there is in fact no pipe left from pipe(pipefd) — both its ends were already closed.
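To get the effect the question is after (stdin echoed to stdout through the pipe), you don't need dup2 at all in a single process: write into the pipe's write end and read it back from the read end. A minimal untested sketch, reusing the buffer size from the question:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int pipefd[2];
    if (pipe(pipefd) == -1) {
        perror("pipe");
        return 1;
    }

    char buf[1024];
    if (scanf("%1023s", buf) != 1)      /* read a word from the real stdin */
        return 1;

    write(pipefd[1], buf, strlen(buf)); /* push it into the pipe's write end */
    close(pipefd[1]);                   /* close it so the read loop sees EOF */

    ssize_t n;
    while ((n = read(pipefd[0], buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, n);   /* drain the read end to the real stdout */
    write(STDOUT_FILENO, "\n", 1);
    return 0;
}

(If you really do want dup2 in the picture, remember its argument order: the first fd is the one being copied, the second is the one being overwritten.)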

Related

Save "perf stat" output per second to a file

I want to get perf output and analyse it. I used
while (true) {
    system("sudo perf kvm stat -e r10f4 -a sleep 1 2>&1 | sed '1,3d' | sed '2,$d' > 'system.log' 2>&1");
    sleep(0.5);
}
The code above frequently uses perf, which is costly. Instead, I'm running perf stat like: perf kvm stat -a -I 1000 > system.log 2>&1.
This command keeps writing data to "system.log", but I only need the data from the most recent second.
I'm wondering how to make the new data overwrite the older data every second.
Does anybody know if my idea is feasible? Or is there another way to solve my problem?
You mean have perf keep overwriting, instead of appending, so you only have the final second?
One way would be to have your own program pipe perf stat -I 1000 into itself (e.g. via C stdio popen, or with POSIX pipe / dup2 / fork/exec). Then you get to implement the logic that chooses where and how to write the lines to a file. Instead of just writing them normally, you can lseek to the start of the output file before every write, or use pwrite instead of write to always write at file position 0. You can pad each line with spaces out to a fixed width to make sure a longer previous line didn't leave characters in the file that you're not overwriting. (Or ftruncate the output file after writing a line shorter than the previous one.)
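A rough sketch of that approach, assuming the popen route and reusing the perf command line from the question (untested, minimal error handling):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* read perf's per-interval lines from a pipe */
    FILE *perf = popen("perf kvm stat -e r10f4 -a -I 1000 2>&1", "r");
    int out = open("system.log", O_WRONLY | O_CREAT, 0666);
    if (!perf || out < 0) { perror("setup"); return 1; }

    char line[512];
    while (fgets(line, sizeof line, perf)) {
        /* pad to a fixed width so a shorter line fully overwrites a longer one */
        size_t len = strcspn(line, "\n");
        memset(line + len, ' ', sizeof line - 1 - len);
        line[sizeof line - 2] = '\n';
        pwrite(out, line, sizeof line - 1, 0); /* always write at offset 0 */
    }
    return 0;
}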
Or: redirect perf stat -I 1000 >> file.log with O_APPEND, and periodically truncate the length.
A file opened for append will automatically write at wherever the current end is, so you can leave perf ... -I 1000 running and truncate the file every second or every 5 seconds or so. So at most you have 5 lines to read through to get to the final one you actually want. If using system to run it through a shell, use >>. Or use open(O_APPEND)/dup2 if using fork / execve.
To do the actual truncation, truncate() by path, or ftruncate() by open fd, in a sleep loop. Ideally you'd truncate right before a new line was about to be written, so most of the time there'd be a line. But unless you make extra system calls like fstat or inotify you won't know when exactly to expect another one, although dead reckoning and assuming that sleep sleeps for the minimum time can work ok.
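The truncation side can be as small as this sketch, meant to run alongside something like perf kvm stat -a -I 1000 >> system.log (file name taken from the question):

#include <unistd.h>

int main(void) {
    for (;;) {
        sleep(1);
        /* empty the log; perf's O_APPEND writes still land at the (new) end of file */
        truncate("system.log", 0);
    }
}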
Or: redirect perf stat -I 1000 (without append), and lseek its fd from the parent process, to create the same effect as piping through a process / thread that seeks before write.
"Why does forking my process cause the file to be read infinitely" shows that parent/child processes with file descriptors that share the same open file description can influence each other's file position with lseek.
This only works if you do the redirect yourself with open / dup2, not if you leave it to a shell via system. You need your own descriptor that refers to the same open file description.
Not sure if you could play tricks like lseek on the same fd that a child process had open; in that case avoiding O_APPEND could let you set the write position to 0 without truncating, so the next write will overwrite the contents. But that only works if you open / dup2 / fork/exec and the fd in the parent has its file position tied to the fd in the child process.
Totally untested example
This is untested, but hopefully illustrates the idea.
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    // must *not* use O_APPEND, otherwise the file position is ignored for write.
    int fd = open("system.log", O_WRONLY|O_CREAT|O_TRUNC, 0666);
    pid_t pid;
    if ((pid = fork()) != 0) {
        // parent
        while (1) {
            sleep(1);
            lseek(fd, 0, SEEK_SET); // reset the position at which perf will do its next write
        }
    } else {
        // child
        dup2(fd, 1);  // redirect stdout to the open file
        // TODO: look up best practices for fork/exec, and error checking
        execlp("sudo", "sudo", "perf", "kvm", "stat", "-er10f4", "-a", "-I", "1000", (char *)NULL);
        perror("execlp sudo perf failed");
    }
}
Note the total lack of error checking of return values. This is more like pseudocode, even though it's written in (hopefully) valid C syntax.
And BTW, you can reduce that pipeline to a single sed command: sed -n 4p prints line 4 and nothing else. But perf ... stat -I 1000 doesn't waste lines, so there'd be no need to filter on line numbers that way.

How to write messages into a FIFO and read it from another process simultaneously

On a Unix system, I've just learned that we can use a FIFO file for communication between two processes, and I've tested it in C projects.
Now I'm wondering if we can do something like this:
Open two terminals.
Use one to write messages into a FIFO and the other to read them.
When I put something into the FIFO at the first terminal, the second terminal will show it immediately.
I've tried the following, but it doesn't work. On one terminal:
mkfifo fifo.file
echo "hello world" > fifo.file
On the other terminal:
cat fifo.file
Now I can see "hello world". However, both processes finish immediately, and I can't continue typing into / reading from fifo.file anymore.
From info mkfifo:
Once you have created a FIFO special file in this way, any process
can open it for reading or writing, in the same way as an ordinary file.
However, it has to be open at both ends simultaneously before you can
proceed to do any input or output operations on it. Opening a FIFO for
reading normally blocks until some other process opens the same FIFO for
writing, and vice versa.
So you should open the file for reading in one process (terminal):
cat fifo.file
And open the file for writing in another process (terminal):
echo 'hello' > fifo.file
cat in the sample above stops reading from the file when the end of file (input) occurs. If you want to keep reading from the file, use the tail -F command, for instance:
tail -F fifo.file
If you want to write and simultaneously send the strings to another end of the pipe, use cat as follows:
cat > fifo.file
The strings will be sent to another end of the pipe as you type. Press Ctrl-D to stop writing.
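If you want the same blocking behavior from a program instead of the shell, here is an untested C sketch of a persistent reader (the name fifo.file is taken from the question):

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    mkfifo("fifo.file", 0666);            /* ok if it already exists */
    fprintf(stderr, "waiting for a writer...\n");
    int fd = open("fifo.file", O_RDONLY); /* blocks until some process opens it for writing */
    char buf[256];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, n);     /* echo until the writer closes its end */
    return 0;
}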

Need to get arrow key codes from stdin after reading file using stdin

I am writing a NASM assembly program to read a 2D array of numbers from a file via stdin.
I am running the executable like this: ./abc < input.txt.
After displaying the 2D array on the terminal, I want to get the key codes of the arrow keys (which normally appear in the terminal as special characters). I wrote code for this, but it isn't working. (I turned echo off in the termios settings for that.)
It did work when I took the file name as an argument and read the file with fopen and the proper fd, instead of from stdin:
./abc abc.txt
In that case, after displaying the 2D array, I am able to get the arrow key codes in my program, but not in the earlier case.
Please help me with this.
By using input redirection you disconnect stdin from your terminal: the shell opens input.txt and your program's stdin refers to that file instead.
You could use cat input.txt - | ./abc, but you would have to press Enter to flush the line buffer and make cat pipe the current line into your program.
I would suggest not messing with stdin and just taking the input file as an argument, like you already did before.
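For illustration, here is an untested C sketch of that suggestion: take the file via argv[1] so stdin stays connected to the terminal, then read raw key bytes (arrow keys arrive as the escape sequence ESC [ A..D). The array parsing is elided, and the same termios setup applies to the NASM version:

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s input.txt\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "r");  /* file input via argument, not redirection */
    if (!f) { perror("fopen"); return 1; }
    /* ... read and display the 2D array from f here ... */
    fclose(f);

    /* put the terminal into non-canonical, no-echo mode */
    struct termios old, raw;
    tcgetattr(STDIN_FILENO, &old);
    raw = old;
    raw.c_lflag &= ~(ICANON | ECHO);
    tcsetattr(STDIN_FILENO, TCSANOW, &raw);

    char c;
    while (read(STDIN_FILENO, &c, 1) == 1 && c != 'q')
        printf("got byte 0x%02x\n", (unsigned char)c);

    tcsetattr(STDIN_FILENO, TCSANOW, &old); /* restore terminal settings */
    return 0;
}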

What does dup2 actually do in this case?

I need some clarification here:
I have some code like this:
child_map[0] = fileno(fd[0]);
..
pid = fork();
if (pid == 0)
    /* child process */
    dup2(child_map[0], STDIN_FILENO);
Now, will STDIN_FILENO and child_map[0] point to the same file descriptor? Will future input be taken from the file pointed to by child_map[0] and STDIN_FILENO?
I thought STDIN_FILENO meant the standard output (terminal).
After the dup2(), child_map[0] and STDIN_FILENO will continue to be separate file descriptors, but they will refer to the same open file description. That means that if, for example, child_map[0] == 5 and STDIN_FILENO == 0, then both file descriptor 5 and 0 will remain open after the dup2().
Referring to the same open file description means that the file descriptors are interchangeable - they share attributes like the current file offset. If you perform an lseek() on one file descriptor, the current file offset is changed for both.
To close the open file description, all file descriptors that point to it must be closed.
It is common to execute close(child_map[0]) after the dup2(), which leaves only one file descriptor open to the file.
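A small untested sketch of that offset sharing (the file name and the descriptor number 42 are arbitrary):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    int copy = dup2(fd, 42); /* fd and 42 now share one open file description */
    lseek(fd, 3, SEEK_SET);  /* seek via the original descriptor... */
    /* ...and the copy sees the same offset: prints 3 */
    printf("offset via copy: %ld\n", (long)lseek(copy, 0, SEEK_CUR));
    return 0;
}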
It causes all functions which read from stdin to get their data from the specified file descriptor, instead of the parent's stdin (often a terminal, but could be a file or pipe depending on shell redirection).
In fact, this is how a shell would launch processes with redirected input.
e.g.
cat somefile | uniq
uniq's standard input is bound to a pipe, not the terminal.
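As an untested sketch, this is roughly how a shell could wire that pipeline up with pipe / fork / dup2 / exec (somefile is just a placeholder):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {              /* child 1: cat somefile */
        dup2(fd[1], STDOUT_FILENO); /* stdout -> pipe write end */
        close(fd[0]); close(fd[1]);
        execlp("cat", "cat", "somefile", (char *)NULL);
        perror("execlp cat");
        _exit(1);
    }
    if (fork() == 0) {              /* child 2: uniq */
        dup2(fd[0], STDIN_FILENO);  /* stdin <- pipe read end */
        close(fd[0]); close(fd[1]);
        execlp("uniq", "uniq", (char *)NULL);
        perror("execlp uniq");
        _exit(1);
    }
    close(fd[0]); close(fd[1]);     /* parent must close both ends, or uniq never sees EOF */
    /* a real shell would now wait() for both children */
    return 0;
}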
STDIN_FILENO is stdin, not stdout. (There's a STDOUT_FILENO too.) Traditionally the former is 0 and the latter 1.
This code is using dup2() to redirect the child's stdin from another file descriptor that the parent had opened. (It is in fact the same basic mechanism used for redirection in shells.) What usually happens afterward is that some other program that reads from its stdin is execed, so the code has set up its stdin for that.

How does my Perl program get standard input on Linux?

I am fairly new to Perl programming, but I have a fair amount of experience with Linux. Let’s say I have the following code:
while (1) {
    my $text = <STDIN>;
    my $text1 = <STDIN>;
    my $text2 = <STDIN>;
}
Now, the main question is: Does STDIN in Perl read directly from /dev/stdin on a Linux machine or do I have to pipe /dev/stdin to the Perl script?
If you don't feed anything to the script, it will sit there waiting for you to enter something. When you do, it will be put into $text and then the script will continue to wait for you to enter something. When you do, that will go into $text1. Subsequently, the script will once again wait for you to enter something. Once that is done, the input will go into $text2. Then, the whole thing will repeat indefinitely.
If you invoke the script as
$ script < input
where input is a file, the script will read lines from the file as above; then, when the stream runs out, it will keep assigning undef to each variable forever.
AFAIK, there is no programming language where reading from the predefined STDIN (or stdin) file handle requires you to invoke your program as:
$ script < /dev/stdin
It reads directly from the STDIN file descriptor. If you run that script it will just wait for input; if you pipe data to it, it will loop until all the data is consumed and then wait forever.
You may want to change that to:
while (my $text = <STDIN>) {
    # blah de blah
}
so an EOF will terminate your program.
Perl's STDIN is, by default, just hooked up to whatever the standard input file descriptor is. Beyond that, Perl doesn't really care how or where the data came from. It's the same to Perl if you're reading the output from a pipe, redirecting a file, or typing interactively at the terminal.
If you care about each of those situations and you want to handle each differently, then you might try different approaches.
