Writing into a FIFO in Linux

I created a new FIFO using the mkfifo command. I have a text file f.txt.
I want to write the text file into my FIFO. How? Is there a unix command for that?

You can use cat:
mkfifo /tmp/foo
cat f.txt > /tmp/foo
You'll see that it hangs, because you also need a reader process, such as another cat:
cat /tmp/foo
You can also start the reader process first, and then the writer.
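A complete round-trip might look like this (a sketch using the paths from the answer above; out.txt is just a scratch file to capture what the reader receives):

```shell
# A fresh FIFO plus a sample file
rm -f /tmp/foo
mkfifo /tmp/foo
printf 'hello\n' > f.txt

# Start the reader first, in the background ...
cat /tmp/foo > out.txt &

# ... then the writer; both sides unblock once each end of the FIFO is open
cat f.txt > /tmp/foo

wait            # let the background reader finish
cat out.txt     # hello
```

Either ordering works; whichever process opens the FIFO first simply blocks until the other end is opened.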

Just redirect into the pipe:
mkfifo /tmp/pipe
cat f.txt > /tmp/pipe &
cat /tmp/pipe
Note, this is roughly what cat f.txt | cat does, but through a named pipe instead of an anonymous pipe.

Same as any file I think:
cat f.txt > myfifo
Most things can be treated like files in Linux/Unix.

Related

How to write a shell script to append multiple lines of data to 3 different files at a time, checking whether that data already exists and ignoring it if so?

Tried using:
sed -i $ a 'hello' << foo.txt
but when I try to use it for multiple files unattended, it doesn't work. Can someone please help me sort this out? I appreciate your response, thanks!
Check out the tee command. Something like
echo "new line" | tee -a file1 file2 file3
To keep it from also writing to stdout, you can redirect that to /dev/null:
echo "new line" | tee -a file1 file2 file3 > /dev/null
You can read more in its manpage: man tee.
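If you also want the "ignore it if it already exists" part of the question, one way is to guard each append with grep (a sketch; the file names and the line are placeholders):

```shell
line='hello'
for f in file1 file2 file3; do
    # -q quiet, -x whole-line match, -F fixed string (no regex);
    # append only when this exact line is not already in the file
    grep -qxF "$line" "$f" 2>/dev/null || printf '%s\n' "$line" >> "$f"
done
```

Running it a second time leaves each file with a single copy of the line, since the grep check then succeeds and the append is skipped.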

Pipe printf to ls in Bash?

So I'm learning about pipes in bash and I found this pithy description:
A Unix pipe connects the STDOUT (standard output) file descriptor of
the first process to the STDIN (standard input) of the second. What
happens then is that when the first process writes to its STDOUT, that
output can be immediately read (from STDIN) by the second process.
Source
Given this understanding, let's connect the STDOUT of printf to the STDIN of ls. For simplicity, print the parent directory (printf ..).
~/Desktop/pipes$ mkdir sub
~/Desktop/pipes$ ls
sub
~/Desktop/pipes$ cd sub
(no files)
~/Desktop/pipes/sub$ printf .. | ls
(no files)
~/Desktop/pipes/sub$
I want to be doing: ls .. but it seems that all I'm getting is ls. Why is this so? How can I ls the parent directory using pipes? Am I misunderstanding pipes?
Many programs, not just ls, don't read from stdin. It is also possible that a program doesn't write to stdout either.
Here is a little experiment that might clarify things. Carry out these steps:
cat > file1
This is file1
^D
The ^D is you pressing <CTRL>+D, which signals end-of-file. So first we call the cat program, redirecting its stdout to file1. If you don't supply an input filename, cat reads from stdin, so we type "This is file1".
Now do similar:
cat > file2
This is file2
^D
Now if you:
cat < file1
You get:
This is file1
What if you:
cat file1 | cat file2
or:
cat file2 < file1
You get:
This is file2
Why? Because if you supply an input filename, cat does not read stdin, just like ls.
Now, how about:
cat - file1 < file2
By convention, - means a standard stream: stdin when reading or stdout when writing. So here cat first prints stdin (redirected from file2, giving "This is file2") and then file1.
The problem is that ls does not read from stdin as you intended it to. You need a tool that does read from stdin, such as xargs, and have it feed that input to ls as arguments:
printf "someSampleFolderOrFile" | xargs ls
From the xargs man page:
xargs - build and execute command lines from standard input
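A quick way to see the difference, using a scratch directory (the names are arbitrary):

```shell
mkdir -p parent/sub
cd parent/sub

# ls ignores stdin, so this just lists the current (empty) directory:
printf '..' | ls

# xargs turns stdin into arguments, so this effectively runs `ls ..`:
printf '..' | xargs ls    # sub
```

The second command prints sub, exactly as ls .. would, because xargs built that command line from the piped-in text.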

How to pipe a process' output to logger command and to head command?

What I'm trying to do is:
Start a process that outputs text continously
Pipe that output to two commands:
A logger script
The head command, so I can save the first lines of the initial process's output.
What I tried so far (unsuccessfuly) is:
./myProgram | tee >(myLogger log.txt) | head > firstLines.txt
The problem is that the myProgram exits as soon as head is finished.
Even if I use -i in tee command, I can't get myProgram to keep running.
Since the logger may append the incoming text to an existing file, executing
head log.txt > firstLines.txt
won't work in this case.
You can use awk as an alternative to do both:
./myProgram |
awk 'NR<=10{print > "firstLines.txt"} NR>10{close("firstLines.txt")} 1' > log.txt
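To convince yourself this splits the stream correctly, you can substitute seq for ./myProgram (a quick check; the file names are as in the answer):

```shell
# 20 lines in: the first 10 also go to firstLines.txt,
# and the trailing `1` prints every line to stdout (redirected to log.txt)
seq 20 | awk 'NR<=10{print > "firstLines.txt"} NR>10{close("firstLines.txt")} 1' > log.txt

wc -l < firstLines.txt    # 10
wc -l < log.txt           # 20
```

Unlike the head pipeline, nothing here exits early, so myProgram keeps running for its full lifetime.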
Like this maybe:
yes | awk 'FNR<4 {print >>"file"; close("file")} 1' | more
where yes is your program, file is where you send the output of head to, and more is your logger.

Simple tee example

Can someone please explain why tee works here:
echo "testtext" | tee file1 > file2
My understanding was that tee duplicates the input and prints 1 to screen.
The above example allows the output from echo to be sent to 2 files, the first redirecting to the second.
I would expect 'testtext' to be printed to screen and passed through file1, landing in file2, similar to how the text in the following example only ends up in file2.
echo "testtext" > file1 > file2
Can anyone explain what I am missing in my understanding?
Edit
Is it because its writing to file and then to stdout which gets redirected?
Your description is right: tee receives data from stdin and writes it both into the file and to stdout. But when you redirect tee's stdout into another file, nothing is written to the terminal, because the data ended up inside the second file.
Is it because its writing to file and then to stdout which gets redirected?
Exactly.
What you are trying to do could be done like this (demonstrating how tee works):
$ echo "testtext" | tee file1 | tee file2
testtext
But since tee from GNU coreutils accepts several output files, one can do just:
$ echo "testtext" | tee file1 file2
testtext
But your idea of passed through file1 and landing in file2 is not correct. Your shell example:
echo "testtext" > file1 > file2
makes the shell open both file1 and file2 for writing, which truncates them; and since stdout can only be redirected to one place, only the last redirection is effective (it overrides the previous ones).
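You can see that directly in bash or plain sh (note that zsh's multios feature behaves differently and would write to both files):

```shell
echo "testtext" > file1 > file2

wc -c < file1    # 0 -- opened and truncated, but nothing written to it
cat file2        # testtext
```

file1 exists but is empty, which shows the shell did open and truncate it before the final redirection took over.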
tee writes its input to each of the files named in its arguments (it can take more than one) as well as to standard output. The example could also be written
echo "testtext" | tee file1 file2 > /dev/null
where you explicitly write to the two files, then ignore what goes to standard output, rather than redirecting standard output to one of the files.
The > file2 in the command you showed does not somehow "extract" what was written to file1, leaving standard output to be written to the screen. Rather, > file2 instructs the shell to pass a file handle opened on file2 (rather than the terminal) to tee for it to use as standard output.
"is it because its writing to file and then to stdout which gets redirected?"
That is correct
tee sends output to the specified file, and to stdout.
the last ">" redirects standout to the second file specified.

Line Buffered Cat

Is there a way to do a line-buffered cat? For example, I want to watch a UART device, and I only want to see its messages when there is a whole line. Can I do something like:
cat --line-buffered /dev/crbif0rb0c0ttyS0
Thanks.
You can also use bash to your advantage here:
cat /dev/crbif0rb0c0ttyS0 | while IFS= read -r line; do echo "$line"; done
Since the read command reads a line at a time, it performs the line buffering that cat does not. (IFS= and -r keep leading whitespace and backslashes intact.)
No, but GNU grep with --line-buffered can do this. Just search for something every line has, such as '^'.
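For example, since '^' matches the start of every line, grep passes everything through unchanged, flushing after each line:

```shell
# Stand-in input to demonstrate; for the UART case this would be:
#   grep --line-buffered '^' /dev/crbif0rb0c0ttyS0
printf 'one\ntwo\n' | grep --line-buffered '^'
```

Note that --line-buffered is a GNU extension, so this relies on GNU grep being installed.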
Pipe it through perl in a no-op line-buffered mode:
perl -pe 1 /dev/whatever
