Bash pipe execution order - linux

If I start nc listening on Terminal1
nc -l 35555
and connect to it like this from Terminal2
mkfifo /tmp/f
cat /tmp/f | nc 127.0.0.1 35555 > /tmp/f
Now I have a few questions.
On Terminal2 you can see something waiting. What is it?
Is it the cat process or the nc process?
In which order are they executed?
If I write a string on Terminal1, that string just comes back.
Why doesn't cat /tmp/f | nc 127.0.0.1 35555 > /tmp/f create a cycle?
The received string comes out of nc and goes into /tmp/f. Because of cat, that string becomes the input of nc, is sent back to Terminal1, goes into /tmp/f again, and so on.
Why isn't the cat of a FIFO a blocking operation? Technically it never ends, so why does nc get executed?
Thanks.

Is it the cat process or the nc process?
It's both. cat is waiting for something to be written to the FIFO. nc is waiting for something to be written to its stdin or to receive something over the network socket.
In which order are they executed?
All commands in a pipeline are run concurrently.
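A quick way to convince yourself of this (a minimal sketch; sleep and echo stand in for cat and nc):
sleep 5 | echo done
"done" is printed immediately, even though the left-hand command keeps running for five more seconds: the right-hand side of a pipe does not wait for the left-hand side to exit.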
Why doesn't cat /tmp/f | nc 127.0.0.1 35555 > /tmp/f create a cycle?
The first terminal does not send back what it received from the network; it just prints it to the terminal. When you type something on Terminal 1 it gets written to the network once. nc on Terminal 2 writes it to /tmp/f. Then cat reads it and nc sends it back over the network. nc on Terminal 1 reads it and prints it. That's the end.
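If you do want an actual cycle, Terminal 1 has to feed what it receives back into the connection. A hedged sketch (the second FIFO /tmp/g is introduced here purely for illustration):
# Terminal 1: echo everything received straight back to the client
mkfifo /tmp/g
cat /tmp/g | nc -l 35555 > /tmp/g
# Terminal 2: the relay from the question
cat /tmp/f | nc 127.0.0.1 35555 > /tmp/f
# Terminal 3: seed the loop with one line; it then circulates forever
echo hello > /tmp/f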
Why isn't the cat of a FIFO a blocking operation? Technically it never ends, so why does nc get executed?
Commands in a pipeline don't wait for the previous one to exit. That would make it impossible to write code like
tail -f filename | grep word
since tail -f never exits (until you kill it with Ctrl-C, but that also kills grep). All the processes run concurrently, which allows the later programs to process partial output of the earlier programs.
So nc doesn't wait for cat to exit. It runs at the same time, and probably uses select() or epoll() to listen for input from both stdin and the network socket at the same time (it might also use separate threads for each). So while cat is blocking on the FIFO, it can read the network data from terminal 1, and then write it to the FIFO.

Related

Pipe output of multiple programs to netcat on same port

So right now I have a server that polls an API and writes to stdout. I pipe that to nc like so:
node server.js | nc -lk 9999
I then have a job that listens to that and outputs 5 second snapshots of the data.
My question is: when I try to do this in multiple sessions with multiple server.js programs (e.g. server1.js, server2.js), which all hit different parts of the API, and pipe them all to port 9999, only the first program gets picked up by the snapshot job. How do I pipe multiple outputs to:
nc -lk 9999
such that they all get handled as if it's the output of one program?
If you execute that command multiple times, it simply won't work. You cannot listen with several instances of netcat on the same port; you should get an nc: Address already in use error.
What you can do is something like:
mkfifo queue
nc -lk 9999 < queue
node server.js | cat > queue
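To feed several servers into the same listener, each producer simply opens the same FIFO for writing; nc keeps reading until the last writer closes its end. A sketch, assuming the server1.js and server2.js from the question:
mkfifo queue
nc -lk 9999 < queue &
# each producer holds its own write end on the same FIFO; on Linux,
# writes of up to PIPE_BUF (4096) bytes are atomic, so lines from
# different producers don't get interleaved mid-line
node server1.js > queue &
node server2.js > queue &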
Btw, I wrote a fully working port of Netcat in Node.js: https://github.com/roccomuso/netcat
It easily addresses the issue you're encountering, because you can implement the whole logic in JS without using the native nc binary or piping data on stdin.

Best method to output log content to listening port

I am outputting the content of a log via netcat to an application over the network. I don't know if what I'm doing is the most efficient approach, especially since I notice the netcat session becomes non-responsive and I have to stop netcat and start it again for the application to resume working.
The command I run is:
/bin/tail -n1 -f /var/log/custom_output.log | /bin/nc -l -p 5020 --keep-open
This needs to run like this 24/7. Is this the most efficient way of doing it? How can I improve on it so I don't have to restart the process daily?
EDIT
So I realised that when the log is rotated, netcat stays locked onto a file that's no longer being written to. I can deal with this easily enough.
The question still stands. Is this the best way to do something like this?
It's been 6 years, but maybe this will come in handy for someone.
To account for log rotation, use tail with the -F flag.
nc (aka netcat) variant
LOG_FILE="/var/log/custom_output.log"
PORT=5020
tail -n0 -F "$LOG_FILE" | nc -k -l -p $PORT
Notes:
The -k flag in nc is the analogue of --keep-open (-k is the flag used by "the OpenBSD rewrite of netcat");
Multiple clients can connect to nc at the same time, but only the first one will receive the appended log lines;
tail will run immediately, so it will collect appended log lines even if no client is connected. Thus, the first client can receive some buffered data - all log lines that have been appended since tail was run.
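For unattended 24/7 operation you can wrap the command in a respawn loop (a minimal sketch; a systemd unit or a supervisor would be the more robust choice):
#!/bin/bash
LOG_FILE="/var/log/custom_output.log"
PORT=5020
while true; do
    tail -n0 -F "$LOG_FILE" | nc -k -l -p "$PORT"
    sleep 1   # avoid a tight respawn loop if nc exits immediately
done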
socat variant
LOG_FILE="/var/log/custom_output.log"
PORT=5020
socat TCP-LISTEN:$PORT,fork,reuseaddr SYSTEM:"tail -n0 -F \"$LOG_FILE\" </dev/null"
Note: here socat forks (clones itself) on each client connection and starts a separate tail process. Thus:
Each connected client will receive appended log lines at the same time;
Clients will not receive any lines previously buffered by tail.
additional
You can redirect stderr to stdout in the tail process by adding 2>&1 (in both variants). In this case, clients will receive auxiliary message lines, e.g.:
tail: /var/log/custom_output.log: file truncated;
tail: '/var/log/custom_output.log' has become inaccessible: No such file or directory - printed when the log file has been removed or renamed, only if -F is used;
tail: '/var/log/custom_output.log' has appeared; following new file - printed when a new log file is created, only if -F is used.
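Either variant can be smoke-tested from another shell with a plain client (localhost is an assumption; substitute the server's address):
# connect and wait for lines to arrive
nc localhost 5020
# in a second shell, append to the log (may require root) and watch it stream
echo "test line" >> /var/log/custom_output.log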

Use netcat and write to stdin for remote shell

I am exploiting a buffer overflow vulnerability (for university) on a running server and I am able to redirect the process to exec a shell.
My exploit looks like this:
perl -e 'print "\xaa\xaa\..."' | nc -q0 machineAtUni 1234
So the server reads from the socket, eip gets overwritten and a shell is executed. The problem is that I see some output from the shell but I can't type anything into it. I think the shell itself reads from stdin until EOF, but how can I arrange to send commands to it (i.e. keep the connection open and stay able to write to its stdin)?
netcat's stdin is connected to the pipe, not your terminal, so it's not sending anything you type. You can do:
{ perl -e 'print "\xaa\xaa\..."'; cat; } | nc -q0 machineAtUni 1234
so that when the perl script finishes, cat will read from the terminal and write to the pipe.
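You can rehearse the trick locally before pointing it at the target (a sketch; a plain echo stands in for the exploit payload):
# Terminal 1: a stand-in "server"
nc -l 1234
# Terminal 2: send the fixed first line, then keep typing interactively
{ echo 'payload'; cat; } | nc -q0 localhost 1234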

How to delay pipe netcat to connect on first input

Running in bash under Ubuntu:
I have a source that generates some output, but not straight away. Let's assume it is a netcat listening on a socket: netcat -l 12345.
And I would like to pipe it to an outgoing netcat (connecting over TCP), e.g. netcat -l 12345 | netcat localhost 54321. But the tricky bit is that I know there is nothing listening for that incoming connection on localhost 54321 when I run the command, although I know there will be by the time the first actual character arrives through the pipe.
So my question is: is there a way either:
to delay the execution of the outgoing netcat until the first character arrives into the pipe, or
to delay the outgoing netcat from trying to establish the TCP connection until it receives the first character on its standard input? (no straight option for that in man, switching to UDP is not acceptable)
Thanks in advance!
Edit: In reality, the source is more complex than a netcat, namely a listening netcat piped through all sorts of stream modifications.
Using the research you already did and that I commented on (not realizing it was an answer to your own question), here is the full delayed_netcat.sh:
#!/bin/bash
read -r line
netcat "$@" < <(echo "$line"; cat)
This first waits for a line of input and then prepends that line, with a simple echo, to the "newly generated" input of the actual netcat. The rest of stdin is just forwarded by cat, which slurps it from stdin and appends it to netcat's input. The script also passes its command-line options and arguments through to the "real" netcat.
The usage is as follows:
netcat -l 12345 | cmd1 | cmd2 | ... | ./delayed_netcat.sh localhost 54321
The netcat is delayed until the first line is read. If you really want to start it after the first character is read, the parts with read and echo need some rewriting.
Port Forwarding or Port Mapping with netcat:
ncat -l -p 12345 -c 'ncat localhost 54321'
Using socat:
socat TCP4-LISTEN:12345 TCP4:localhost:54321
This command exits after the first connection is done.
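If the forwarder should survive more than one connection, socat's fork option handles that:
socat TCP4-LISTEN:12345,fork,reuseaddr TCP4:localhost:54321
This keeps listening and starts a fresh relay for every incoming connection.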
I have found an answer to my question, but it is awful... so still looking for something better.
netcat -l 12345 | gawk '(NR==1){print""}{print;fflush()}' | ./delayed_netcat.sh
where ./delayed_netcat.sh:
#!/bin/sh
read line
netcat localhost 54321
So the read line delays the netcat localhost 54321 by waiting for and consuming the first input line, and I use gawk '(NR==1){print""}{print;fflush()}' to insert an empty line just before the first record, so that only that padding line is lost to the read... I'm sure there is room for much improvement to that!

Example of using named pipes in Linux shell (Bash)

Can someone post a simple example of using named pipes in Bash on Linux?
One of the best examples of a practical use of a named pipe...
From http://en.wikipedia.org/wiki/Netcat:
Another useful behavior is using netcat as a proxy. Both ports and hosts can be redirected. Look at this example:
nc -l 12345 | nc www.google.com 80
Here, port 12345 represents the request.
This starts a nc server on port 12345 and all the connections get redirected to google.com:80. If a web browser makes a request to nc, the request will be sent to google but the response will not be sent to the web browser. That is because pipes are unidirectional. This can be worked around with a named pipe to redirect the input and output.
mkfifo backpipe
nc -l 12345 0<backpipe | nc www.google.com 80 1>backpipe
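You can exercise the proxy with any HTTP client, e.g. curl (an assumption; Google will most likely answer with a redirect, but it demonstrates that bytes now flow in both directions):
curl --header 'Host: www.google.com' http://localhost:12345/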
Here are the commands:
mkfifo named_pipe
echo "Hi" > named_pipe &
cat named_pipe
The first command creates the pipe.
The second command writes to the pipe (blocking). The & puts it into the background so you can continue typing commands in the same shell. The echo will exit when the FIFO is emptied by the next command.
The last command reads from the pipe.
Open two different shells, and leave them side by side. In both, go to the /tmp/ directory:
cd /tmp/
In the first one type:
mkfifo myPipe
echo "IPC_example_between_two_shells">myPipe
In the second one, type:
while read line; do echo "What has been passed through the pipe is ${line}"; done<myPipe
The first shell won't give you a prompt back until you execute the second part of the code in the second shell. That's because reads and writes on a FIFO are blocking.
You can also have a look at the FIFO's file type by doing ls -al myPipe and seeing the details of this special type of file.
The next step would be to wrap the code in a script!
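For example, the reading side could be wrapped like this (a hypothetical reader.sh; the pipe name is taken as an optional argument):
#!/bin/bash
# reader.sh - wrap the read loop above in a reusable script
PIPE="${1:-/tmp/myPipe}"
[ -p "$PIPE" ] || mkfifo "$PIPE"   # create the FIFO if it doesn't exist yet
while read -r line; do
    echo "What has been passed through the pipe is ${line}"
done < "$PIPE"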
Creating a named pipe
$ mkfifo pipe_name
On Unix-like systems, a named pipe (FIFO) is a special type of file with no content. The mkfifo command creates the pipe on a file system (assigns a name to it), but doesn't open it. You need to open and close it separately, like any other file.
Using a named pipe
Named pipes are useful when you need to pipe from/to multiple processes or if you can't connect two processes with an anonymous pipe. They can be used in multiple ways:
In parallel with another process:
$ echo 'Hello pipe!' > pipe_name & # runs writer in a background
$ cat pipe_name
Hello pipe!
Here the writer runs alongside the reader, allowing real-time communication between processes.
Sequentially with file descriptors:
$ # open the pipe on auxiliary FD #5 in both ways (otherwise it will block),
$ # then open descriptors for writing and reading and close the auxiliary FD
$ exec 5<>pipe_name 3>pipe_name 4<pipe_name 5>&-
$
$ echo 'Hello pipe!' >&3 # write into the pipe through FD #3
...
$ exec 3>&- # close the FD when you're done
$ # (otherwise reading will block)
$ cat <&4
Hello pipe!
...
$ exec 4<&-
In fact, communication through a pipe can be sequential, but it's limited to 64 KB (buffer size).
It's preferable to use descriptors to transfer multiple blocks of data in order to reduce overhead.
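A way to observe that limit (a sketch; 64 KB is the default pipe capacity on Linux):
$ exec 3<>pipe_name              # open read-write so the open itself doesn't block
$ head -c 65536 /dev/zero >&3    # fill the 64 KB buffer completely
$ echo x >&3                     # this write would block until a reader drains the pipe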
Conditionally with signals:
$ handler() {
> cat <&3
>
> exec 3<&-
> trap - USR1 # unregister signal handler (see below)
> unset -f handler writer # undefine the functions
> }
$
$ exec 4<>pipe_name 3<pipe_name 4>&-
$ trap handler USR1 # register handler for signal USR1
$
$ writer() {
> if <condition>; then
> kill -USR1 $PPID # send the signal USR1 to a specified process
> echo 'Hello pipe!' > pipe_name
> fi
> }
$ export -f writer # pass the function to child shells
$
$ bash -c writer & # can actually be run sequentially as well
$
Hello pipe!
Opening the FD in advance allows the data transfer to start before the shell is ready to receive it; this is required when the pipe is used sequentially.
The signal should be sent before the data to prevent a deadlock if the pipe buffer fills up.
Destroying a named pipe
The pipe itself (and its content) gets destroyed when all descriptors to it are closed. What's left is just a name.
To make the pipe anonymous and unavailable under the given name (this can be done while the pipe is still open) you can use the rm console command (it's the opposite of the mkfifo command):
$ rm pipe_name
Terminal 1:
$ mknod new_named_pipe p
$ echo 123 > new_named_pipe
Terminal 1 created a named pipe.
It wrote data into it using echo.
The echo blocks because there is no reading end yet (pipes, both named and unnamed, need a reading end and a writing end).
Terminal 2:
$ cat new_named_pipe
123
$
From Terminal 2, a reading end for the data is added.
It read the data using cat.
Since both the reading and writing ends of new_named_pipe now exist, the data is displayed and the blocking stops.
Named pipes are used all over Linux; in ls -l output a FIFO shows up with file type p (note that the char and block files under /dev are device files, not pipes). Reads and writes on named pipes can be blocking or non-blocking, and their main advantage is that they provide the simplest way to do IPC.
