Use netcat and write to stdin for remote shell - linux

I am exploiting a buffer overflow vulnerability (for university) on a running server and I am able to redirect the process to exec a shell.
My exploit looks like this:
perl -e 'print "\xaa\xaa\..."' | nc -q0 machineAtUni 1234
So the server reads from the socket, EIP gets overwritten, and a shell is executed. The problem is that I see some output from the shell but I can't type anything into it. I think the shell itself reads from stdin until EOF, but how can I arrange for the connection to stay open so that I can keep sending commands to it (i.e. keep writing to its stdin)?

netcat's stdin is connected to the pipe, not your terminal, so it's not sending anything you type. You can do:
{ perl -e 'print "\xaa\xaa\..."'; cat; } | nc -q0 machineAtUni 1234
so that when the perl script finishes, cat will read from the terminal and write to the pipe.
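The same pattern can be sketched without a server: below, printf 'PAYLOAD' stands in for the perl exploit output and the piped-in whoami plays the part of what you would type interactively; the consumer (here the terminal, in the exploit: nc) sees the payload immediately followed by the forwarded input.

```shell
# { payload; cat; } emits the fixed payload first, then forwards
# whatever arrives on the group's stdin -- exactly what the exploit
# pipeline needs to stay interactive after delivery.
printf 'whoami\n' | { printf 'PAYLOAD'; cat; }
# prints: PAYLOADwhoami
```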

Bash pipe execution order

If I start nc listening on Terminal 1:
nc -l 35555
and make a connection to it like this on Terminal 2:
mkfifo /tmp/f
cat /tmp/f | nc 127.0.0.1 35555 > /tmp/f
Now I have a few questions:
On Terminal 2 you can see something waiting. What is it?
Is it the cat process or the nc process?
In which order are they executed?
If I write a string on Terminal 1, that string just comes back.
Why doesn't cat /tmp/f | nc 127.0.0.1 35555 > /tmp/f create a cycle?
The received string comes out of nc and goes into /tmp/f. Because of cat, that string becomes the input of nc, is sent back to Terminal 1, goes into /tmp/f again, and so on.
Why isn't cat on a FIFO a blocking operation? Technically it never ends, so why does nc run?
Thanks.
Is it the cat process or the nc process?
It's both. cat is waiting for something to be written to the FIFO. nc is waiting for something to be written to its stdin or to receive something over the network socket.
In which order are they executed?
All commands in a pipeline are run concurrently.
Why cat /tmp/f | nc 127.0.0.1 35555 > /tmp/f doesn't create a cycle?
The first terminal is not sending back what it received from the network, it just prints it to the terminal. When you type something on terminal 1 it gets written to the network once. nc on terminal 2 writes it to /tmp/f. Then it reads it and sends it to the network. nc on terminal 1 reads it and prints it. That's the end.
Why isn't cat on a FIFO a blocking operation? Technically it never ends, so why does nc run?
Commands in a pipeline don't wait for the previous one to exit. That would make it impossible to write code like
tail -f filename | grep word
since tail -f never exits (until you kill it with Ctrl-C, but that also kills grep). All the processes run, which allows the later programs to process partial output of the earlier programs.
So nc doesn't wait for cat to exit. It runs at the same time, and probably uses select() or epoll() to listen for input from both stdin and the network socket at the same time (it might also use separate threads for each). So while cat is blocking on the FIFO, it can read the network data from terminal 1, and then write it to the FIFO.
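A quick way to see that concurrency with no network involved (marker file name and timings are illustrative): the second stage of the pipeline provably starts long before the first stage produces any output.

```shell
cd "$(mktemp -d)"
# First stage sleeps before producing anything; second stage creates
# a marker file the moment it starts, before reading a single byte.
{ sleep 1; echo done; } | { touch started; cat > out; } &
sleep 0.3                  # well before the first stage writes
ls started                 # marker already exists: stage 2 is running
wait
cat out                    # prints: done
```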

redirect the output of a command into a socket on linux

I'm using netcat to connect to a server.
The problem is that I want to send some non-printable characters to the server.
I wanted to achieve this with command redirection in Linux.
Let's say this is the command: nc hostname port
When I checked the file descriptors of the nc process in /proc/$(pidof nc)/fd, I saw that there was another fd, number 3, which concerns the socket: 3 -> socket:[1675643]
The problem is that I wanted to redirect the output of, let's say, echo -ne '\xff\x0f\xab\xde' > ./3 to the socket.
I couldn't do so; the output is: bash: ./3: No such device or address
One cannot output something to a socket which is opened only by another process.
In order to first use interactive input/output and afterwards send the echo string, you can do:
(cat; echo -ne '\xff\x0f\xab\xde')|nc hostname port
(press the EOF character Ctrl-D to end your input and start the echo).
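As for the original attempt: the socket fd lives only in nc's own fd table, so ./3 can't be reopened from outside. Within a single shell, though, you can hold a descriptor open with exec and write to it repeatedly, non-printable bytes included. A sketch using a regular file (session.log, an illustrative name) as a stand-in for the socket:

```shell
cd "$(mktemp -d)"
exec 3> session.log              # open fd 3 for writing
printf 'interactive input\n' >&3
printf '\xff\x0f\xab\xde' >&3    # non-printable bytes work the same way
exec 3>&-                        # close fd 3 when done
od -An -c session.log            # inspect the raw bytes that were sent
```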

Output a linux command to a url/port or socket instead of writing it to a file

I have a command which outputs certain data, which I store in a text file using '>>' redirection. Instead of doing that, I want a socket or a port on a server that will catch the output of the command. Basically, I want to send all my script's output to a socket or URL, whichever is possible.
Any help in this direction is most welcomed.
You can use socat to listen on port 12345 and echo any data sent to it like this:
socat -u TCP-LISTEN:12345,keepalive,reuseaddr,fork STDOUT
If you want to capture it to a file as well (file.log), you can use the same command with tee:
socat -u TCP-LISTEN:12345,keepalive,reuseaddr,fork STDOUT | tee file.log
You can run your program to output to bash's TCP virtual device:
./prog > /dev/tcp/localhost/12345
If you don't want to use bash magic then you can also use socat to send the data:
./prog | socat - TCP-CONNECT:localhost:12345
The above examples assume you are running your program and the "logger" on the same system, but you can replace "localhost" with the hostname or address of the system you wish to send to (where socat is listening).
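If socat isn't installed, the tee half of that last pipeline can be tried on its own with no network at all; file.log matches the name used above, everything else here is illustrative.

```shell
cd "$(mktemp -d)"
# tee copies stdin to stdout AND appends it to the log file,
# which is exactly what the socat | tee pipeline relies on.
printf 'line one\nline two\n' | tee file.log
cat file.log    # the same two lines were captured
```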

How to interact with telnet using empty

I need to replace a very simple expect script that looks like this:
#!/usr/bin/expect
spawn telnet 192.168.1.175
expect {
"assword" {send "lamepassword\r"}
}
interact
With the equivalent bash script using empty, like this:
#!/bin/bash
empty -f -i in -o out telnet 192.168.1.175
empty -w -i out -o in "assword" "lamepassword\n"
After which I need the user to interact with telnet, which I do not know how to do. The closest thing that comes to my mind is binding stdin and stdout with named pipes using something like socat - in. Any suggestions are more than welcome!
I tried cat out & cat /dev/stdin >in. It works, but it has an extra newline, tab completion does not work, and Ctrl-C terminates cat rather than the running host process. I am trying to persuade socat to act according to those needs.
Using socat for transmitting keyboard input to the telnet process is a good idea. Example:
cat out & socat -u -,raw,echo=0 ./in
For allowing Ctrl-C to terminate socat, add escape=3:
cat out & socat -u -,raw,echo=0,escape=3 ./in
But note that this will not terminate the telnet session, since it was started in daemon mode, so you can reconnect to telnet by executing socat again. To end telnet, you can just log out.

Example of using named pipes in Linux shell (Bash)

Can someone post a simple example of using named pipes in Bash on Linux?
One of the best examples of a practical use of a named pipe...
From http://en.wikipedia.org/wiki/Netcat:
Another useful behavior is using netcat as a proxy. Both ports and hosts can be redirected. Look at this example:
nc -l 12345 | nc www.google.com 80
Port 12345 represents the request.
This starts a nc server on port 12345 and all the connections get redirected to google.com:80. If a web browser makes a request to nc, the request will be sent to google but the response will not be sent to the web browser. That is because pipes are unidirectional. This can be worked around with a named pipe to redirect the input and output.
mkfifo backpipe
nc -l 12345 0<backpipe | nc www.google.com 80 1>backpipe
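The proxy needs a reachable web server to try out, but the shape of the trick can be sketched locally. In the sketch below, tr stands in for the remote service and the braced group plays the browser side; stdbuf (GNU coreutils) is an added assumption, needed so tr's reply isn't held in its stdio buffer, which would deadlock the loop. File names are illustrative.

```shell
cd "$(mktemp -d)"
mkfifo backpipe
# tr plays the remote server; the left-hand group sends one request
# downstream and reads the reply back through the FIFO, just like the
# nc pair above. stdbuf -oL forces tr to flush each reply line.
{ echo 'hello'; head -n1 > reply; } < backpipe |
    stdbuf -oL tr 'a-z' 'A-Z' > backpipe
cat reply    # prints: HELLO
```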
Here are the commands:
mkfifo named_pipe
echo "Hi" > named_pipe &
cat named_pipe
The first command creates the pipe.
The second command writes to the pipe (blocking). The & puts this into the background so you can continue to type commands in the same shell. It will exit when the FIFO is emptied by the next command.
The last command reads from the pipe.
Open two different shells, and leave them side by side. In both, go to the /tmp/ directory:
cd /tmp/
In the first one type:
mkfifo myPipe
echo "IPC_example_between_two_shells">myPipe
In the second one, type:
while read line; do echo "What has been passed through the pipe is ${line}"; done<myPipe
The first shell won't give you any prompt back until you execute the second part of the code in the second shell. That's because FIFO reads and writes are blocking.
You can also have a look at the FIFO type by doing ls -al myPipe and seeing the details of this specific type of file.
The next step would be to embed the code in a script!
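Taking that suggestion, here is one way the two-shell demo might be folded into a single script: put the reader in the background so the blocking write can complete (a sketch; the FIFO name matches the demo above).

```shell
#!/bin/bash
cd "$(mktemp -d)"
mkfifo myPipe
# Reader in the background: blocks until the writer opens the FIFO.
while read -r line; do
    echo "What has been passed through the pipe is ${line}"
done < myPipe &
# Writer: unblocks the reader, then both finish.
echo "IPC_example_between_two_shells" > myPipe
wait    # let the background reader drain the pipe and exit
```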
Creating a named pipe
$ mkfifo pipe_name
On Unix-likes, a named pipe (FIFO) is a special type of file with no content. The mkfifo command creates the pipe on a file system (assigns a name to it), but doesn't open it. You need to open and close it separately, like any other file.
Using a named pipe
Named pipes are useful when you need to pipe from/to multiple processes or if you can't connect two processes with an anonymous pipe. They can be used in multiple ways:
In parallel with another process:
$ echo 'Hello pipe!' > pipe_name & # runs writer in a background
$ cat pipe_name
Hello pipe!
Here writer runs along the reader allowing real-time communication between processes.
Sequentially with file descriptors:
$ # open the pipe on auxiliary FD #5 in both ways (otherwise it will block),
$ # then open descriptors for writing and reading and close the auxiliary FD
$ exec 5<>pipe_name 3>pipe_name 4<pipe_name 5>&-
$
$ echo 'Hello pipe!' >&3 # write into the pipe through FD #3
...
$ exec 3>&- # close the FD when you're done
$ # (otherwise reading will block)
$ cat <&4
Hello pipe!
...
$ exec 4<&-
In fact, communication through a pipe can be sequential, but it's limited to 64 KB (buffer size).
It's preferable to use descriptors to transfer multiple blocks of data in order to reduce overhead.
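The interactive transcript above can be pasted into a script almost verbatim; a self-contained, runnable version might look like this (same descriptors, with a temp directory added):

```shell
cd "$(mktemp -d)"
mkfifo pipe_name
# Open an auxiliary read-write FD (#5) first so neither open blocks,
# then split it into a write FD (#3) and a read FD (#4) and drop #5.
exec 5<>pipe_name 3>pipe_name 4<pipe_name 5>&-
echo 'Hello pipe!' >&3    # write into the pipe through fd 3
exec 3>&-                 # close the write end so the reader sees EOF
cat <&4                   # prints: Hello pipe!
exec 4<&-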
Conditionally with signals:
$ handler() {
> cat <&3
>
> exec 3<&-
> trap - USR1 # unregister signal handler (see below)
> unset -f handler writer # undefine the functions
> }
$
$ exec 4<>pipe_name 3<pipe_name 4>&-
$ trap handler USR1 # register handler for signal USR1
$
$ writer() {
> if <condition>; then
> kill -USR1 $PPID # send the signal USR1 to a specified process
> echo 'Hello pipe!' > pipe_name
> fi
> }
$ export -f writer # pass the function to child shells
$
$ bash -c writer & # can actually be run sequentially as well
$
Hello pipe!
FD allows data transfer to start before the shell is ready to receive it. Required when used sequentially.
The signal should be sent before the data to prevent a deadlock if the pipe buffer fills up.
Destroying a named pipe
The pipe itself (and its content) gets destroyed when all descriptors to it are closed. What's left is just a name.
To make the pipe anonymous and unavailable under the given name (this can be done while the pipe is still open), you can use the rm console command (the opposite of the mkfifo command):
$ rm pipe_name
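A quick sketch of that claim: remove the name while a descriptor still holds the pipe open, and reads and writes keep working (the fd number and message are illustrative).

```shell
cd "$(mktemp -d)"
mkfifo pipe_name
exec 3<>pipe_name   # hold the pipe open read-write on fd 3
rm pipe_name        # the name is gone from the file system...
echo 'still alive' >&3
read -r line <&3    # ...but the open pipe itself still works
echo "$line"        # prints: still alive
exec 3<&-           # the pipe is destroyed only now
```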
Terminal 1:
$ mknod new_named_pipe p
$ echo 123 > new_named_pipe
Terminal 1 created a named pipe.
It wrote data in it using echo.
It is blocked, as there is no receiving end yet (pipes, both named and unnamed, need a reading end and a writing end).
Terminal 2:
$ cat new_named_pipe
123
$
From Terminal 2, a receiving end for the data is added.
It read the data in it using cat.
Since both the reading and writing ends are now present for new_named_pipe, it displays the information and the blocking stops.
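That blocking behaviour can be verified from a single shell by putting the writer in the background and checking a marker file before and after a reader attaches (marker name and timings are illustrative):

```shell
cd "$(mktemp -d)"
mkfifo new_named_pipe
# The writer blocks in open() until a reader appears:
( echo 123 > new_named_pipe; touch writer_done ) &
sleep 0.5
[ ! -e writer_done ] && echo 'writer is still blocked'
cat new_named_pipe      # attaching a reader unblocks it; prints: 123
wait
[ -e writer_done ] && echo 'writer finished'
```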
Named pipes are used in many places on Linux. Note that the char and block files you see in ls -l output (mostly under /dev) are device files, not pipes; a named pipe shows up in ls -l with file type p.
These pipes can be blocking or non-blocking, and their main advantage is that they provide one of the simplest forms of IPC.
