Reading from named pipe from background process - linux

I have a program that has a window and also outputs to stdout. I am reading the program's output and writing one line of it to a pipe. This is done in the background while the program is still running. I send a command to the window and wait for my single line from grep. However, even though the program has already produced this text, tail will not exit until I stop the program.
I want tail to return this one line as soon as it can so I can terminate the program with "\e" to the window.
bin/Prog | grep "TEXT" > /tmp/pipe2 &
xvkbd -window Window -text "2"
tail -n1 /tmp/pipe2 >> out.t
xvkbd -window Window -text "\e"

The tail command doesn't know it has reached the last line of input until it gets EOF, and it won't get EOF until the grep terminates and closes its standard output. Also, grep will buffer its output when the output device is not 'interactive', and a named pipe is not 'interactive', so grep's output won't be written to the pipe until its input generates EOF, which won't happen until the bin/Prog exits. So, until the program exits, the grep and the tail are stuck, and since you are waiting for grep and tail to exit before telling the program to exit, you have a deadlock.
You might do better with tail -n +1 which looks for one line of output at the start (or sed 1q or head -n 1 or …). However, you're still stuck with grep buffering its output, which leaves you in a quandary.
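One way out of that quandary, assuming GNU grep, is its --line-buffered flag (stdbuf -oL grep … from coreutils is an alternative), which makes grep flush each matching line as soon as it is produced, so a single-line reader on the named pipe can return before EOF. A minimal sketch, with a subshell standing in for bin/Prog:

```shell
#!/bin/sh
# A producer subshell stands in for bin/Prog; --line-buffered (GNU grep)
# makes grep flush each matching line as soon as it is written.
dir=$(mktemp -d)
mkfifo "$dir/pipe2"

( echo "some TEXT line"; sleep 1 ) | grep --line-buffered "TEXT" > "$dir/pipe2" &

# head -n 1 returns as soon as the first line arrives, no EOF needed.
line=$(head -n 1 "$dir/pipe2")
echo "$line"

wait            # let the producer finish before cleaning up
rm -rf "$dir"
```

In the real script the subshell would be bin/Prog itself, and head -n 1 (or sed 1q) replaces tail -n 1.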

How to overwrite previous output in bash

I have a bash script that outputs the most CPU-intensive processes to the terminal every second.
tmp=$(ps -e -eo pid,cmd,%mem,%cpu,user --sort=-%cpu | head -n 11)
printf "\n%s\n" "$tmp[pid]"
I know that I can move my cursor to a predeclared position, but that fails whenever the terminal has not been cleared.
I could also just go to the beginning of the line and write over it, but that is a problem when the current output is shorter than the previous one, or when the number of lines differs from the previous output.
Is there a way to completely erase the previous output and write from there?
Yes, you can clear part of the screen before each iteration (see https://unix.stackexchange.com/questions/297502/clear-half-of-the-screen-from-the-command-line), but the watch utility does this for you. Try:
watch -n 1 "ps -e -eo pid,cmd,%mem,%cpu,user --sort=-%cpu | head -n 11"
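If you'd rather stay in plain shell, the same effect can be sketched with ANSI escape sequences: home the cursor and clear to the end of the screen before each redraw, so a shorter frame cannot leave stale lines behind (this assumes a VT100-compatible terminal; three iterations here just for demonstration):

```shell
#!/bin/sh
# Redraw in place: \033[H homes the cursor, \033[J clears from the cursor
# to the end of the screen.
for i in 1 2 3; do
    printf '\033[H\033[J'
    snapshot=$(ps -eo pid,cmd,%mem,%cpu --sort=-%cpu | head -n 11)
    printf '%s\n' "$snapshot"
    sleep 1
done
```

This is essentially what watch does internally, minus its header line and interval handling.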

Why am I getting "cat: write error: Broken pipe" rarely and not always

I am running some scripts with commands having cat pipelined with grep like:
cat file.txt | grep "pattern"
Most of the time there are no problems, but sometimes I get the error below:
cat: write error: Broken pipe
So how do I find out when the command is causing this problem and why?
The reason is that the pipe is closed by grep while cat still has data to write. cat receives the signal SIGPIPE and exits.
What usually happens in a pipeline is that the shell runs cat in one process and grep in another, with the stdout of cat connected to the write end of the pipe and the stdin of grep to the read end. What happened here is that grep exited before reading all of its input, causing the read end of the pipe to be closed while cat still had more data to write out. Since that write goes to a pipe whose other end has been closed, cat receives SIGPIPE and exits immediately.
For such a trivial case, you could remove the pipeline altogether and run it as grep "pattern" file.txt, letting grep read the file directly rather than over its stdin.
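A minimal way to reproduce the effect described above on demand: yes writes lines forever, grep -q exits the moment it confirms a match and closes the read end, and yes is then killed by SIGPIPE on a later write. The pipeline's exit status is grep's:

```shell
#!/bin/sh
# `yes` writes "pattern" forever; grep -q exits on the first match and
# closes the read end of the pipe, so `yes` dies from SIGPIPE.
yes "pattern" | grep -q "pattern"
status=$?
echo "grep exit status: $status"
```

Without `set -o pipefail`, the shell reports only grep's status, which is why the writer's SIGPIPE death usually goes unnoticed apart from the occasional "Broken pipe" message.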
You can use grep on its own, without the pipe, like this:
grep "pattern" file.txt
I think this is the better way to resolve the problem.

Reading FIFO doesn't show for the first time

In Unix, I've made a FIFO and I tried to read it with tail:
mkfifo fifo.file
tail -f fifo.file
Then I try to write messages into it from another process so I do as below:
cat > fifo.file
Then I type messages such as:
abc
def
Before I type Ctrl-D, nothing is printed by the first process (tail -f fifo.file).
When I type Ctrl-D, the two lines above are printed.
Now if I do cat > fifo.file again and type one line such as qwe followed by Enter, this line is printed immediately by the first process.
I'm wondering why I get two different behaviors with the same command.
Is it possible to make it the second behavior without the first, meaning that when I cat the first time, I can see messages printed once I type Enter, instead of Ctrl-D?
This is just how tail works: it outputs the last lines of its input only once it has seen EOF, which Ctrl-D effectively sends from the terminal. The -f switch just makes tail keep reading rather than exit when that happens.
In other words, no matter the switches, tail still needs to see EOF before it can output anything at all.
Just to test this you can use simple cat instead of tail:
term_1$ mkfifo fifo.file
term_1$ cat < fifo.file
...
term_2$ cat > fifo.file
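If the goal is to see each line as soon as it is written, one workaround (assuming GNU tail) is tail -n +1 -f: starting from the first line means tail no longer has to wait for EOF to work out which lines are "last", so it copies input through as it arrives. A sketch:

```shell
#!/bin/sh
# tail -n +1 copies from the first line onward, so nothing is held back
# until EOF; -f keeps it reading as writers come and go.
dir=$(mktemp -d)
mkfifo "$dir/fifo.file"

tail -n +1 -f "$dir/fifo.file" > "$dir/out" &
tailpid=$!

echo "abc" > "$dir/fifo.file"   # appears in out immediately, no Ctrl-D needed
sleep 1                         # give tail a moment to write
kill "$tailpid"
cat "$dir/out"
```

In interactive use you would simply run tail -n +1 -f fifo.file in the first terminal instead of tail -f fifo.file.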

"cat a | cat b" ignoring contents of a

The formal definition of a pipe states that the stdout of the left command is immediately piped to the stdin of the right command. I have two files, hello.txt and human.txt. cat hello.txt returns Hello and cat human.txt returns I am human. Now if I do cat hello.txt | cat human.txt, shouldn't that return Hello I am human? Instead I'm seeing command not found. I am new to shell scripting. Can someone explain?
Background: A pipe arranges for the output of the command on the left (that is, contents written to FD 1, stdout) to be delivered as input to the command on the right (on FD 0, stdin). It does this by connecting the processes with a "pipe", or FIFO, and executing them at the same time; attempts to read from the FIFO will wait until the other process has written something, and attempts to write to the FIFO will wait until the other process is ready to read.
cat hello.txt | cat human.txt
...feeds the content of hello.txt into the stdin of cat human.txt, but cat human.txt isn't reading from its stdin; instead, it's been directed by its command line arguments to read only from human.txt.
Thus, the content on the stdin of cat human.txt is ignored and never read; if cat hello.txt is still writing when cat human.txt exits, it receives a SIGPIPE and thereafter exits as well.
cat hello.txt | cat - human.txt
...by contrast will have the second cat read first from stdin (you could also use /dev/stdin in place of - on many operating systems, including Linux), then from a file.
You don't need to pipe them; you can pass multiple files to cat, which will concatenate their contents:
cat hello.txt human.txt
| is generally used when you want to feed the output of the first command to the second command in the pipe. In this case your second command is reading from a file and thus doesn't need to be piped to. If you want to use a pipe, you can do:
echo "Hello" | cat - human.txt
First of all, the command will not give an error; it will print I am human, i.e. the contents of human.txt.
You are right about the definition of a pipe, but the command on the right side has to actually read its standard input for the pipe to matter.
If the command reads its input and produces output from it, you will see that output; otherwise the command just follows its own behaviour.
Here there is a command, cat human.txt, on the right side, but it prints its own file's contents and performs no operation on the input it receives.
The command not found error appears when you write something like:
cat hello.txt | human.txt
bash will then give you this error:
human.txt: command not found

How can you read the most recent line from the linux program screen?

I use screen to run a minecraft server .jar, and I would like to write a bash script to see if the most recent line has changed every five minutes or so. If it has, then the script would start from the beginning and make the check again in another five minutes. If not, it should kill the java process.
How would I go about getting the last line of text from a screen via a bash script?
If I have understood correctly, you can redirect the output of your program to a file with the > operator and work on that.
Try running:
ls -l > myoutput.txt
and open the file it creates.
You want to use the tail command. tail -n 1 will give you the last line of the file or redirected standard output, while tail -f will keep the tail program going until you cancel it yourself.
For example:
echo -e "Jello\nPudding\nSkittles" | tail -n 1 | if grep -q Skittles ; then echo yes; fi
The first section simply prints three lines of text:
Jello
Pudding
Skittles
The tail -n 1 finds the last line of text ("Skittles") and passes that to the next section.
grep -q simply returns TRUE if your pattern was found or FALSE if not, without actually dumping or outputting anything to screen.
So the if grep -q Skittles section checks the result of that grep -q Skittles pattern and, if Skittles was found, prints yes to the screen. If not, nothing gets printed (try replacing Skittles with Pudding: even though it is in the original input, it never makes it out the other end of the tail -n 1 call).
Maybe you can use that logic and output your .jar to standard output, then search that output every 5 minutes?
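For screen specifically, a built-in that helps here is hardcopy, which dumps the visible window to a file; the last line can then be compared between polls. A hedged sketch (the session name mc, the dump path, and the pkill pattern are all assumptions to adjust for your setup); the logic is wrapped in functions so the sketch can be sourced without starting the loop:

```shell
#!/bin/sh
# last_line dumps the visible screen window to a file with `hardcopy`
# (a real GNU screen command) and returns its final line.
last_line() {
    screen -S mc -X hardcopy /tmp/mc.screen   # assumed session name "mc"
    tail -n 1 /tmp/mc.screen
}

# monitor polls every 5 minutes; if the last line has not changed, the
# server is assumed hung and the java process is killed.
monitor() {
    prev=$(last_line)
    while sleep 300; do
        cur=$(last_line)
        if [ "$cur" = "$prev" ]; then
            pkill -f 'java.*server.jar'       # assumed process pattern
            break
        fi
        prev=$cur
    done
}

# monitor &   # start polling in the background
```

This avoids redirecting the server's stdout at all, at the cost of only seeing what is currently on the screen window.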
