Reading from a FIFO shows nothing the first time - Linux

In Unix, I've made a FIFO and I tried to read it with tail:
mkfifo fifo.file
tail -f fifo.file
Then I try to write messages into it from another process, so I do the following:
cat > fifo.file
Then I type messages such as:
abc
def
Before I type Ctrl-D, nothing is printed at the first process (tail -f fifo.file).
Then I type Ctrl-D, the two lines above are printed.
Now if I do cat > fifo.file again and type one line such as qwe, pressing Enter at the end of the line, this string is printed immediately at the first process.
I'm wondering why I get two different behaviors with the same command.
Is it possible to get the second behavior from the start, i.e. when I run cat the first time, see each message printed as soon as I press Enter instead of only after Ctrl-D?

This is just how tail works. To print the last N lines of its input, tail first has to read all the way to EOF; only then can it know which lines are the last ones. On a FIFO, EOF doesn't occur until every writer has closed its end, which is exactly what Ctrl-D does: it makes cat exit and close the write side. The -f switch only changes what happens after that first EOF: tail keeps reading and prints new data as soon as it arrives, which is why your second cat's line shows up immediately.
In other words, no matter the switches, tail needs to see EOF once before it can output anything at all.
Just to test this, you can use plain cat instead of tail; cat prints its input as it reads it, without waiting for EOF:
term_1$ mkfifo fifo.file
term_1$ cat < fifo.file
...
term_2$ cat > fifo.file
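That two-terminal test can be collapsed into one self-contained sketch (demo.fifo and demo.out are throwaway names invented for this example); the reader is plain cat, so the writer's line comes through without waiting on tail's last-N-lines logic:

```shell
# Reader and writer on the same fifo, in one script.
mkfifo demo.fifo
cat < demo.fifo > demo.out &   # reader, standing in for "terminal 1"
reader=$!
printf 'qwe\n' > demo.fifo     # writer sends one line, then closes
wait "$reader"
cat demo.out                   # -> qwe
rm -f demo.fifo demo.out
```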

Related

Unix strip EOF (not EOL/whitespace) from stream in pipe

I have a program which takes input on standard input until EOF is reached (Ctrl-D on Linux). I want to run this program with a lot of default input, then continue entering things until I manually hit Ctrl-D to stop it. Is there any way to remove the EOF that a bash pipeline puts in?
I.e. cat somedata.dat | <insert answer here> | ./myprogram, such that ./myprogram never receives EOF on stdin.
Bash doesn't actually add an "end-of-file" character; there isn't such a thing. Rather, the problem is that ./myprogram reaches the end of its standard input (which is hooked up to the pipe), so the next time it tries to read a character, it gets end-of-file instead. There's no way to have it suddenly switch over to "stealing" the standard input from the terminal, because it's not hooked up to that input at all.
Instead, to feed more input to ./myprogram than just what's in somedata.dat, you can ask cat to start reading (and forwarding) its own standard input after it's finished reading somedata.dat:
cat somedata.dat - | ./myprogram
or
cat somedata.dat /dev/stdin | ./myprogram
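A minimal, self-contained sketch of the first form, with a made-up somedata.dat and the pipe into ./myprogram omitted; the '-' argument makes cat read its own stdin after the named file:

```shell
# cat emits the file's contents first, then whatever arrives on stdin.
printf 'line-from-file\n' > somedata.dat
printf 'typed-later\n' | cat somedata.dat -
# -> line-from-file
# -> typed-later
rm somedata.dat
```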
Edited to add (per further question in the comments): If you have a more complicated pipeline feeding into ./myprogram, rather than just a file, then you can run your main command and then cat, piping the whole thing to ./myprogram:
{
reallyConfusingTransform < somedata.dat
cat
} | ./myprogram
or in one line:
{ reallyConfusingTransform < somedata.dat ; cat ; } | ./myprogram
(Note that I've also eliminated a "useless use of cat" (UUOC), but if you really prefer to use cat that way, you can still write cat somedata.dat | reallyConfusingTransform instead of reallyConfusingTransform < somedata.dat.)
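A runnable sketch of the grouped pipeline, assuming tr a-z A-Z as a stand-in for reallyConfusingTransform and a made-up somedata.dat; both commands inside the braces write into the same pipe, one after the other:

```shell
# The transform runs over the file, then cat forwards the group's stdin.
printf 'abc\n' > somedata.dat
printf 'more\n' | { tr a-z A-Z < somedata.dat ; cat ; }
# -> ABC
# -> more
rm somedata.dat
```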

"cat a | cat b" ignoring contents of a

The formal definition of a pipe states that the stdout of the command on the left is piped to the stdin of the command on the right. I have two files, hello.txt and human.txt. cat hello.txt returns Hello and cat human.txt returns I am human. Now if I do cat hello.txt | cat human.txt, shouldn't that return Hello I am human? Instead I'm seeing command not found. I am new to shell scripting. Can someone explain?
Background: A pipe arranges for the output of the command on the left (that is, contents written to FD 1, stdout) to be delivered as input to the command on the right (on FD 0, stdin). It does this by connecting the processes with a "pipe", or FIFO, and executing them at the same time; attempts to read from the FIFO will wait until the other process has written something, and attempts to write to the FIFO will wait until the other process is ready to read.
cat hello.txt | cat human.txt
...feeds the content of hello.txt into the stdin of cat human.txt, but cat human.txt isn't reading from its stdin; instead, it's been directed by its command line arguments to read only from human.txt.
Thus, that content on the stdin of cat human.txt is ignored and never read; cat hello.txt may then receive a SIGPIPE when cat human.txt exits (if it is still trying to write), and thereafter exits as well.
cat hello.txt | cat - human.txt
...by contrast will have the second cat read first from stdin (you could also use /dev/stdin in place of - on many operating systems, including Linux), then from a file.
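A self-contained sketch of that fix, recreating the two files from the question:

```shell
# '-' makes the second cat drain its stdin (the pipe) before human.txt.
printf 'Hello\n' > hello.txt
printf 'I am human\n' > human.txt
cat hello.txt | cat - human.txt
# -> Hello
# -> I am human
rm hello.txt human.txt
```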
You don't need to pipe them; you can read from multiple files directly, which concatenates the contents of both:
cat hello.txt human.txt
| is generally used when you want to feed the output of the first command into the second command. In this case your second command reads from a file, so it doesn't need a pipe. If you want to use one anyway, you can do:
echo "Hello" | cat - human.txt
First of all, the command will not give an error; it will print I am human, i.e. the contents of human.txt.
You are right about the pipe definition, but on the right side of the pipe there should be a command.
If that command reads its input and produces output, you will see the piped data processed; otherwise the command just carries out its own behavior.
Here there is a command, cat human.txt, on the right side, but it prints its own file's contents and performs no operation on the input it receives.
The command not found error appears when you write something like:
cat hello.txt | human.txt
bash will give you this error :
human.txt: command not found

Weird behavior when prepending to a file with cat and tee

One solution to the problem from prepend to a file one liner shell? is:
cat header main | tee main > /dev/null
As some of the comments noticed, this doesn't work for large files.
Here's an example where it works:
$ echo '1' > h
$ echo '2' > t
$ cat h t | tee t > /dev/null
$ cat t
1
2
And where it breaks:
$ head -1000 /dev/urandom > h
$ head -1000 /dev/urandom > t
$ cat h t | tee t > /dev/null
^C
The command hangs and after killing it we are left with:
$ wc -l t
7470174 t
What causes the behavior above, where the command gets stuck and keeps adding lines indefinitely? What is different in the one-line-file scenario?
The behavior is completely non-deterministic. When you do cat header main | tee main > /dev/null, the following things happen:
1) cat opens header
2) cat opens main
3) cat reads header and writes its content to stdout
4) cat reads main and writes its content to stdout
5) tee opens main for writing, truncating it
6) tee reads stdin and writes the data read into main
The ordering above is one possible ordering, but these events may occur in many different orders. 5 must precede 6, 2 must precede 4, and 1 must precede 3, but it is entirely possible for the ordering to be 5,1,3,2,4,6. In any case, if the files are large, it is very likely that step 5 will take place before step 4 is complete, which will cause portions of data to be discarded. It is entirely possible that step 5 happens first, in which case all of the data previously in main will be lost.
The particular case that you are seeing is very likely a result of cat blocking on a write and going to sleep before it has finished reading the input. tee then writes more data to t and tries to read from the pipe, then goes to sleep until cat writes more data. cat writes a buffer, tee puts it into t, and the cycle repeats, with cat re-reading the data that tee is writing into t.
cat header main | tee main > /dev/null
That is a terrible, terrible idea. You should never have a pipeline both reading from and writing to a file.
You can put the result in a temporary file first, and then move it into place:
cat header main >main.new && mv main{.new,}
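A quick sketch of that temp-file approach, with throwaway header and main files (the brace expansion main{.new,} is written out here for portability to shells without brace expansion):

```shell
# Write the concatenation to a new file, then atomically replace main.
printf 'HEADER\n' > header
printf 'body\n'   > main
cat header main > main.new && mv main.new main
cat main
# -> HEADER
# -> body
rm header main
```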
Or to minimize the amount of time two copies of the file exist and never have both visible in the directory at the same time, you could delete the original once you've opened it up for reading and write the new file directly into its previous location. However, this does mean there's a brief gap during which the file doesn't exist at all.
exec 3<main && rm main && cat header - <&3 >main && exec 3<&-

Reading from named pipe from background process

I have a program that has a window and also writes to stdout. I am reading the program's output and writing one line of that output to a named pipe. This is done in the background while the program is still running. I send a command to the window and wait for my single line from grep. However, even though the program has already produced this text, tail will not exit until I stop the program.
I want tail to return this one line as soon as it can, so that I can terminate the program by sending "\e" to the window.
bin/Prog | grep "TEXT" > /tmp/pipe2 &
xvkbd -window Window -text "2"
tail -n1 /tmp/pipe2 >> out.t
xvkbd -window Window -text "\e"
The tail command doesn't know it has reached the last line of input until it gets EOF, and it won't get EOF until the grep terminates and closes its standard output. Also, grep will buffer its output when the output device is not 'interactive', and a named pipe is not 'interactive', so grep's output won't be written to the pipe until its input generates EOF, which won't happen until the bin/Prog exits. So, until the program exits, the grep and the tail are stuck, and since you are waiting for grep and tail to exit before telling the program to exit, you have a deadlock.
You might do better with tail -n +1 which looks for one line of output at the start (or sed 1q or head -n 1 or …). However, you're still stuck with grep buffering its output, which leaves you in a quandary.
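If your grep is GNU grep, one way out of that quandary is --line-buffered (or wrapping the producer with stdbuf -oL from coreutils), which flushes each matching line as soon as it is produced instead of waiting for a full buffer or EOF. A minimal sketch:

```shell
# Assumes GNU grep: --line-buffered flushes every match immediately,
# so a downstream reader sees it even though stdout is a pipe.
printf 'noise\nTEXT here\nmore noise\n' | grep --line-buffered "TEXT"
# -> TEXT here
```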

Scanning the output of a program at shell

Hello, I am currently trying to make a script in Linux that takes the output of a program as input and scans it to find how many occurrences of certain words are present. To be clearer, I want to scan the output and store in variables how many times each word appears. I am new to scripting in Linux. I tried storing the output in a file and then scanning it line by line to find the words, but for some reason the loop I use to parse it never ends. Can you help me?
./program > buffer.txt
while read LINE
do
echo $LINE | grep word1 #when i use grep command the loop never ends
done <a.txt
Edit: In C an equivalent program would be
char* word="word1"
while(/*parse all the lines at a text */)
{
fgetline("file_a",&buffer)
if(strcmp(buffer,word)==0)
strcpy(word1,"word") //continue the search with this
}
The easiest thing to do is to skip writing to a file altogether:
if ./program | grep -q word1 &>/dev/null; then
echo "TRUE"
fi
-q tells grep to be quiet, but it can still occasionally produce error messages, which the &>/dev/null suppresses. If you'd prefer to see any error messages, just remove that part.
If you want ./program's stderr as well as its stdout to be analyzed by grep, then you'll need to redirect stderr into stdout first, e.g. with ./program 2>&1 | grep -q word1.
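A sketch of that redirection, using a made-up stand-in function in place of ./program:

```shell
# 2>&1 merges stderr into stdout before the pipe, so grep sees both
# streams; fake_program is invented here purely for illustration.
fake_program() { echo normal-output; echo error-word1 >&2; }
if fake_program 2>&1 | grep -q word1; then
    echo "TRUE"
fi
# -> TRUE
```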
Your code is working fine; the while loop will exit.
Try adding the following line after the while loop and verify:
echo "While loop Exited!"
