Example of using named pipes in Linux shell (Bash)

Can someone post a simple example of using named pipes in Bash on Linux?

One of the best examples of a practical use of a named pipe...
From http://en.wikipedia.org/wiki/Netcat:
Another useful behavior is using netcat as a proxy. Both ports and hosts can be redirected. Look at this example:
nc -l 12345 | nc www.google.com 80
Port 12345 represents the request.
This starts an nc server on port 12345, and all connections get redirected to google.com:80. If a web browser makes a request to nc, the request is sent to Google, but the response is not sent back to the web browser. That is because pipes are unidirectional. This can be worked around with a named pipe to redirect the input and output.
mkfifo backpipe
nc -l 12345 0<backpipe | nc www.google.com 80 1>backpipe
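To try the proxy from a third terminal (this test command is only an illustration and not part of the original answer; curl, the Host header and the timeout are assumptions):
curl --max-time 5 -H 'Host: www.google.com' http://localhost:12345/
When you are done, remove the FIFO with rm backpipe.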

Here are the commands:
mkfifo named_pipe
echo "Hi" > named_pipe &
cat named_pipe
The first command creates the pipe.
The second command writes to the pipe (blocking until a reader opens it). The & puts it into the background so you can continue to type commands in the same shell. The echo exits when the FIFO is emptied (read) by the next command.
The last command reads from the pipe.

Open two different shells, and leave them side by side. In both, go to the /tmp/ directory:
cd /tmp/
In the first one type:
mkfifo myPipe
echo "IPC_example_between_two_shells">myPipe
In the second one, type:
while read line; do echo "What has been passed through the pipe is ${line}"; done<myPipe
The first shell won't give you a prompt back until you execute the second command in the second shell. That's because FIFO reads and writes block until the other end of the pipe is opened.
You can also have a look at the FIFO file type by running ls -l myPipe and see the details of this specific type of file.
The next step would be to embed the code in a script!
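As a sketch of that next step (file name and message are only examples), both halves can live in a single script that backgrounds the reader:
#!/bin/bash
pipe=/tmp/myPipe
mkfifo "$pipe"
# Reader in the background: prints whatever comes through the pipe
while read line; do
    echo "What has been passed through the pipe is ${line}"
done < "$pipe" &
# Writer: blocks until the reader has opened its end of the pipe
echo "IPC_example_between_two_shells" > "$pipe"
wait        # let the background reader finish
rm "$pipe"  # clean up the FIFO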

Creating a named pipe
$ mkfifo pipe_name
On Unix-like systems a named pipe (FIFO) is a special type of file with no content. The mkfifo command creates the pipe on a file system (assigns a name to it), but doesn't open it. You need to open and close it separately like any other file.
Using a named pipe
Named pipes are useful when you need to pipe from/to multiple processes or if you can't connect two processes with an anonymous pipe. They can be used in multiple ways:
In parallel with another process:
$ echo 'Hello pipe!' > pipe_name & # runs the writer in the background
$ cat pipe_name
Hello pipe!
Here the writer runs alongside the reader, allowing real-time communication between the processes.
Sequentially with file descriptors:
$ # open the pipe on auxiliary FD #5 in both ways (otherwise it will block),
$ # then open descriptors for writing and reading and close the auxiliary FD
$ exec 5<>pipe_name 3>pipe_name 4<pipe_name 5>&-
$
$ echo 'Hello pipe!' >&3 # write into the pipe through FD #3
...
$ exec 3>&- # close the FD when you're done
$ # (otherwise reading will block)
$ cat <&4
Hello pipe!
...
$ exec 4<&-
In fact, communication through a pipe can be sequential, but it's limited to 64 KB (the pipe buffer size).
It's preferable to use descriptors to transfer multiple blocks of data, in order to reduce overhead.
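As a compact, script-style sketch of the same descriptor pattern (the file name and the messages are only examples):
#!/bin/bash
mkfifo pipe_name
# Open the pipe read-write on an auxiliary FD so nothing blocks,
# then split it into a write FD (3) and a read FD (4)
exec 5<>pipe_name 3>pipe_name 4<pipe_name 5>&-
printf 'block 1\nblock 2\n' >&3   # write several blocks through FD 3
exec 3>&-                         # close the write end so the reader sees EOF
cat <&4                           # read everything back through FD 4
exec 4<&-
rm pipe_name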
Conditionally with signals:
$ handler() {
> cat <&3
>
> exec 3<&-
> trap - USR1 # unregister signal handler (see below)
> unset -f handler writer # undefine the functions
> }
$
$ exec 4<>pipe_name 3<pipe_name 4>&-
$ trap handler USR1 # register handler for signal USR1
$
$ writer() {
> if <condition>; then
> kill -USR1 $PPID # send the signal USR1 to a specified process
> echo 'Hello pipe!' > pipe_name
> fi
> }
$ export -f writer # pass the function to child shells
$
$ bash -c writer & # can actually be run sequentially as well
$
Hello pipe!
Opening the FD in advance allows the data transfer to start before the shell is ready to receive it. This is required when the pipe is used sequentially.
The signal should be sent before the data to prevent a deadlock in case the pipe buffer fills up.
Destroying a named pipe
The pipe itself (and its content) gets destroyed when all descriptors to it are closed. What's left is just a name.
To make the pipe anonymous and unavailable under the given name (this can be done even while the pipe is still open), you can use the rm console command (it's the opposite of the mkfifo command):
$ rm pipe_name
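A small demonstration of that point (the message is just an example): as long as a descriptor stays open, the pipe keeps working even after its name has been removed.
$ mkfifo pipe_name
$ exec 3<>pipe_name   # keep the pipe open on FD 3
$ rm pipe_name        # the name is gone, the pipe is not
$ echo 'still here' >&3
$ head -n1 <&3
still here
$ exec 3>&-           # the last descriptor closes and the pipe is destroyed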

Terminal 1:
$ mknod new_named_pipe p
$ echo 123 > new_named_pipe
Terminal 1 created a named pipe.
It wrote data in it using echo.
It is blocked because there is no receiving end yet (pipes, both named and unnamed, need a reading end and a writing end).
Terminal 2:
$ cat new_named_pipe
123
$
From Terminal 2, a receiving end for the data is added.
It read the data in it using cat.
Since both the reading and writing ends now exist for new_named_pipe, it displays the information and the blocking stops.
Special files like these are used all over Linux; most of the character and block files you see with ls -l (they reside under /dev) are related special file types (device files), also created with mknod.
Named pipes can be opened in blocking or non-blocking mode, and their main advantage is that they provide one of the simplest forms of IPC.


Bash pipe execution order

If I open an nc in listening on Terminal1
nc -l 35555
And perform a connection in this way on Terminal2
mkfifo /tmp/f
cat /tmp/f | nc 127.0.0.1 35555 > /tmp/f
Now I have few questions
On Terminal 2 you can see something waiting. What is that?
Is it the cat process or the nc process?
In which order are they executed?
If I write a string on Terminal 1, that string just comes back.
Why doesn't cat /tmp/f | nc 127.0.0.1 35555 > /tmp/f create a cycle?
The received string comes out of nc and goes into /tmp/f. Because of the cat, that string becomes the input of nc, is sent back to Terminal 1, goes into /tmp/f again, and so on.
Why isn't the cat of a FIFO a blocking operation? Technically it doesn't end, so why does the nc run?
Thanks.
Is it the cat process or the nc process?
It's both. cat is waiting for something to be written to the FIFO. nc is waiting for something to be written to its stdin or to receive something over the network socket.
In which order are they executed?
All commands in a pipeline are run concurrently.
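A quick way to see this (the echoed text is just an illustration): both stages announce themselves immediately, even though the first stage keeps running for five more seconds.
{ echo "stage 1 started" >&2; sleep 5; } | { echo "stage 2 started" >&2; cat; }
Both "started" messages appear at once; the pipeline as a whole only finishes when the first stage exits.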
Why doesn't cat /tmp/f | nc 127.0.0.1 35555 > /tmp/f create a cycle?
The first terminal is not sending back what it received from the network; it just prints it to the terminal. When you type something on terminal 1, it gets written to the network once. nc on terminal 2 writes it to /tmp/f. cat then reads it from /tmp/f and feeds it back into nc, which sends it over the network. nc on terminal 1 reads it and prints it. That's the end.
Why isn't the cat of a FIFO a blocking operation? Technically it doesn't end, so why does the nc run?
Commands in a pipeline don't wait for the previous one to exit. That would make it impossible to write code like
tail -f filename | grep word
since tail -f never exits (until you kill it with Ctrl-C, but that also kills grep). All the processes run, which allows the later programs to process partial output of the earlier programs.
So nc doesn't wait for cat to exit. It runs at the same time, and probably uses select() or epoll() to listen for input from both stdin and the network socket at the same time (it might also use separate threads for each). So while cat is blocking on the FIFO, it can read the network data from terminal 1, and then write it to the FIFO.

Linux Named Pipes - MKFIFO query

I am fairly new to Linux, Bash, named pipes, etc.
I am following an example from this article:
https://www.linuxjournal.com/content/using-named-pipes-fifos-bash
All works well and as expected. However this is only the beginning.
I would like to be able to call the writer script from the reader, to pass info between the two scripts through the pipe, without having to create a cron job for the writer script.
The idea is that someone triggers the reader script without elevated permissions.
The reader calls the writer, which has some hard-coded sudo user (for testing purposes), evaluates the data and returns the result to the reader.
Any advice is appreciated.
As I understand it, you require the following:
A writer which listens for requests to write data to the named pipe.
A reader which sends requests for data to the writer, and reads the data from the named pipe.
The writer process should run as a privileged user, and the reader should run as an unprivileged user.
1 and 2 are possible with the scripts below, where:
The writer is run in the background and listens for requests: bash writer.sh &
When the reader is run, it sends a signal to the writer to trigger the writing of data to the named pipe
The reader then subsequently reads from the pipe and outputs the data.
3 is not possible because:
A process with lower privileges cannot send signals to a process with higher privileges.
Likewise, a script run by a user with lower privileges cannot launch another script with higher privileges (i.e. the reader cannot launch a writer with higher privileges).
writer.sh
#!/bin/bash
# Store the PID of the writer process
echo $$ > /tmp/pid
# Specify location of named pipe
pipe=/tmp/datapipe
# Create the data pipe if it doesn't exist
if [[ ! -p $pipe ]]; then
    echo "Pipe does not exist. Creating..."
    mkfifo "$pipe"
fi
# Send data to the pipe based on a trigger
function write_data {
    echo "Writing data"
    echo "Here is some data" >"$pipe" &
}
# Exit based on a trigger (SIGKILL cannot be trapped, so use SIGTERM)
function quit {
    echo "Exiting"
    exit
}
# Listen for signals
trap write_data SIGINT
trap quit SIGTERM
# Send initial data to the pipe (in the background so the script doesn't block here)
echo "Hello" >"$pipe" &
# Listen
while true; do
    sleep 1
done
reader.sh
#!/bin/bash
pipe=/tmp/datapipe
# Read the writer's PID
pid=$(cat /tmp/pid)
# Trigger the writer to create data
kill -s SIGINT "$pid"
# Read data from the named pipe
if read line <"$pipe"; then
    echo "$line"
fi
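To try the pair (assuming both scripts are in the current directory), start the writer in the background and then run the reader:
bash writer.sh &
bash reader.sh
Each run of the reader signals the writer and prints one line that the writer has pushed into the pipe.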

catch error message and create Journal entry

I put the following code at the top of many of the scripts I am using...
#!/bin/bash
# redirect all error messages to the protocol,
# then print the same message to stdout
exec 1> >(logger -i -s -t $0 -p 4) 2>&1
This line of code makes journal entries for errors that occur while running my scripts.
But this code doesn't work the way I want when the script communicates via stdin and stdout. I want to log error messages only.
And to be honest, I don't know how I managed to get this line of code to work.
Nonetheless, I am searching for code to replace this combination of exec and logger with a function registered via trap "createErrorMessage" ERR. But I don't know how to catch/receive the error messages that way.
My goal is to create journal entries for all error messages, and only those.
To be clear, I don't want to check $? after every piece of code, and I don't want to capture every piece of code with Variable=$().
Is this even possible?
While logger sends output to syslog, systemd-cat performs the same kind of function for systemd. See for example:
echo "hello" | systemd-cat
journalctl | tail -10
If you are running your scripts as systemd service units, then there's no need to use systemd-cat: by default systemd sends STDOUT and STDERR of the services it controls to the journal.
See man systemd-cat for more about that tool.
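If you only want STDERR in the journal, one possible sketch (an assumption on my part, not something from your original line) is to redirect only file descriptor 2 into systemd-cat:
exec 2> >(systemd-cat -t "$0" -p warning)
stdout then stays wherever it was pointing, while error messages go to the journal.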
Maybe I don't understand what you want, but let's say you have this script mytest.sh:
date > jj
cat jj - jjj # jj + stdin + (nonexistent) jjj, e.g. an error too
mkdir jj #error
So, when you use this as a script with redirections,
echo "Hello world" | bash mytest.sh > output
you will get:
1.) in the file output:
Thu Apr 6 21:21:13 CEST 2017
Hello world
2.) and on the screen, the errors:
cat: jjj: No such file or directory
mkdir: jj: File exists
Now, change the above script to:
((
date > jj
cat jj - jjj # jj + stdin + (nonexistent jjj)
mkdir jj #error
) 3>&1 1>&2 2>&3 | tee >(logger -i -t $0 -p 4)) 3>&1 1>&2 2>&3
Note that I removed the -s from the logger args.
Now when you use it again:
echo "Hello world" | bash mytest.sh > output
the file output will contain the stdout, as you expected,
the stderr will go to the screen (and you can redirect it again),
and the logger will log all the errors.
As you surely know, it works like this:
it swaps stdout and stderr,
pipes the stdout (now the stderr) to the logger process,
and swaps stdout/stderr back.
It could probably be simpler, because logger with -s could duplicate the messages to stderr itself, but this works universally. Unfortunately, it is inefficient, as it uses 2 more forks; note the ((.
Using the:
somefunc() { some actions...; }
trap 'somefunc' ERR
will not help you as you expect. Doing some fancy redirections in somefunc is too late, because somefunc is triggered after the error happens, i.e. the error message has already been printed to stderr.
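A minimal sketch of that timing (the handler name and the failing command are only examples): the ERR trap only sees the exit status, because the message itself has already gone to stderr by the time the handler runs.
#!/bin/bash
on_err() {
    # The failing command's message is already on stderr at this point;
    # all we still have is its exit status.
    echo "command failed with status $?" | logger -i -t "$0" -p 4
}
trap on_err ERR
ls /nonexistent    # "No such file or directory" is printed before on_err runs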

Why my named pipe input command line just hangs when it is called?

Why my named pipe input command line just hangs when it is called?
Based on the answers:
Writing to stdin of background process
Accessing bash command line args $# vs $*
Send command to a background process
Can I redirect output to a log file and background a process at the same time?
I wrote two shell scripts to communicate with my game server. It worked the first time I did it, but since then it does not work anymore. Every time I run ./send.sh commands, the command line hangs until I hit Ctrl+C.
It also hangs and does nothing when I directly run echo command > /tmp/srv-input
The scripts
It starts the server and configures it to read/receive my commands while it runs in the background:
start_czero_server.sh
#!/bin/sh
# Go to the game server application folder where the game application `hlds_run` is
cd /home/user/Half-Life
pkill -f hlds
# Set up a pipe named `/tmp/srv-input`
rm /tmp/srv-input
mkfifo /tmp/srv-input
cat > /tmp/srv-input &
echo $! > /tmp/srv-input-cat-pid
# Start the server reading from the pipe named `/tmp/srv-input`
# And also output all its console to the file `/home/user/Half-Life/my_logs.txt`
cat /tmp/srv-input | ./hlds_run -console -game czero +port 27015 > my_logs.txt 2>&1 &
# Successful execution
exit 0
This second script is just a wrapper which allows me to easily send commands to my server:
send.sh
#!/bin/sh
echo "$#" > /tmp/srv-input
# Successful execution
exit 0
Now every time I want to send a command to my server I just run this in the terminal:
./send.sh mp_timelimit 30
I always keep another terminal open just to watch my server console. To do that I just use the tail command with the -f flag to follow my server's console output:
tail -f /home/user/Half-Life/my_logs.txt
You would be better off just having hlds_run read directly from the pipe instead of having cat pipe it in.
Try
./hlds_run … > my_logs.txt 2>&1 < /tmp/srv-input &
Instead of
cat /tmp/srv-input | ./hlds_run …
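For completeness, here is a sketch of start_czero_server.sh with that change applied (everything else kept as in the question):
#!/bin/sh
cd /home/user/Half-Life
pkill -f hlds
rm /tmp/srv-input
mkfifo /tmp/srv-input
# Keep one writer open so the server never sees EOF on the pipe
cat > /tmp/srv-input &
echo $! > /tmp/srv-input-cat-pid
# The server reads the pipe directly; no extra cat in front of it
./hlds_run -console -game czero +port 27015 < /tmp/srv-input > my_logs.txt 2>&1 &
exit 0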

Use netcat and write to stdin for remote shell

I am exploiting a buffer overflow vulnerability (for university) on a running server and I am able to redirect the process to exec a shell.
My exploit looks like this:
perl -e 'print "\xaa\xaa\..."' | nc -q0 machineAtUni 1234
So the server reads from the socket, EIP gets overwritten, and a shell is executed. The problem is that I can see some messages from the shell but I can't type anything into it. I think the shell itself reads from stdin until EOF, but how can I arrange it so that I can send commands to it (so the connection stays open and I am able to write to its stdin)?
netcat's stdin is connected to the pipe, not your terminal, so it's not sending anything you type. You can do:
{ perl -e 'print "\xaa\xaa\..."'; cat; } | nc -q0 machineAtUni 1234
so that when the perl script finishes, cat will read from the terminal and write to the pipe.
