Does Node.js's child_process args array sanitize arguments?

Say you're building an app that runs a shell command based on unverified input.
Concatenating the arguments into a single command string is obviously a massive security risk, but does the same risk apply to the args array?
The docs don't mention anything about this. I ran a quick test:
var child = require("child_process");
child.spawn("touch", ["./filename", "&& touch ./hacked"]);
filename is created, but hacked isn't. Does that mean I can plug anything in the args array and assume it's safe?

I don't think the issue you're seeing is that it's sanitizing your input for you; I think it's that you can't use spaces in your arguments. See this answer.
I couldn't find anything online that gave any indication that your child spawning arguments are sanitized.

Related

Argparse: is it possible to combine help texts from multiple parsers?

I'm writing a module with custom logging utilities to be imported in other scripts.
It's based on the standard-library logging module.
One of these utilities looks like this:
import argparse as ap

def parse_log_args() -> dict:
    log_arg_parser = ap.ArgumentParser(description='Parses arguments that affect logging')
    log_arg_parser.add_argument(
        '--level',
        dest='level',
        help='Sets logging level',
        choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']
    )
    log_args, _ = log_arg_parser.parse_known_args()
    return vars(log_args)
This function looks for arguments that have to do with logging (even though only --level is defined for the time being) and parses those independently of (and before) all others so that the logging can be configured early on and used in the rest of the script.
The goal here is to remain flexible and be able to quickly plug in support for these arguments, both in scripts that expect no other arguments and in those that do.
From the point of view of simply parsing arguments this works: this function runs first, parses --level and then the script-specific parser comes and handles the rest.
The problem, however, is the help text. When I run a script that calls this function with --help it only displays the help text from this first parser and not from the script-specific one. So something like this:
Parses arguments that affect logging

optional arguments:
  -h, --help            show this help message and exit
  --level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
                        Sets logging level
Is there a way to combine the help-texts from all the ArgumentParser instances in a script?
Alternatively: Is there a different way to achieve this in a flexible/plug-in kind of way, that is, without modifying existing ArgumentParsers or having to add them to scripts that don't yet use them?
PS: A similar question has been asked before here: Argparse combine --help directives, but the proposed ideas don't really solve the problem:
Define the first parser with add_help=False: This would hide the option from the user, which I would prefer not to do.
Use subcommands somehow: doesn't seem to be applicable here.
I think this might fit the bill:
import argparse
part1 = argparse.ArgumentParser(add_help=False)
#... some parsing takes place ...
part2 = argparse.ArgumentParser(add_help=True, parents=[part1])
The part1 parser must be fully initialized (all of its arguments already added) for parents to work.
More on the topic:
https://docs.python.org/3/library/argparse.html#parents
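Applied to the question's parse_log_args(), the same idea could look roughly like this (a sketch under assumptions: the shared log_parent parser, the refactored helper, and the script_parser with its --input argument are illustrative, not part of the original answer):

import argparse as ap

# Parent parser: holds only the logging options and defines no -h/--help of its own.
log_parent = ap.ArgumentParser(add_help=False)
log_parent.add_argument(
    '--level',
    dest='level',
    help='Sets logging level',
    choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']
)

def parse_log_args() -> dict:
    # Early pass: pick out the logging options, ignore everything else (including --help).
    log_args, _ = log_parent.parse_known_args()
    return vars(log_args)

# In each script: inherit the logging options via parents, so that --help
# lists --level together with the script-specific arguments.
script_parser = ap.ArgumentParser(description='Does the actual work',
                                  parents=[log_parent])
script_parser.add_argument('--input', help='script-specific argument')
args = script_parser.parse_args()

The cost of this approach is that every script has to pass parents=[log_parent] when building its own parser; in return, -h/--help produces a single combined help text.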

What is the Lua "replacement" for the pre_exec command in Conky files?

I'm not great at programming, but I was trying to fiddle around with a conky_rc file I found and liked, since it seemed pretty straightforward.
As the title states, I have now learned that the previous command of pre_exec has been long removed and superseded by Lua.
Unfortunately, I cannot seem to find anything directly related to this other than https://github.com/brndnmtthws/conky/issues/62. The thread https://github.com/brndnmtthws/conky/issues/146 references it, and its "solution" states: Basically, there is no replacement and you should use Lua or use a very large interval and execi.
I have found a few more threads that all include the question as to why this function was discontinued, but with no actual answers. So, to reiterate mine: I have absolutely no knowledge of Lua (I've heard of it before, and I've now added a few websites to look at tomorrow, since I have spent most of the evening trying to figure out this Conky thing), and I'll probably just give up and do the execi option (my computer can handle it, but I just think it's so horribly inefficient).
Is there an appropriate Lua option? If so, would someone please direct me to either the manual or wiki for it, or explain it? Or is the "proper" Lua solution this?
#Vincent-C It's not working for your script because the function isn't getting called. From the quick few tests I did, it seems lua_startup_hook needs the function to be in another file that is loaded using lua_load. I'm not really sure how the hook function works, because I'd rather just directly use the config as Lua, since it is Lua.
Basically just call the io.popen stuff and concat it into conky.text
conky.text = [[ a lot of stuff... ${color green} ]];
o = io.popen('fortune -s | cowsay', 'r')
conky.text = conky.text .. o:read('*a')
The comment by asl97 on the first page you cited appears to provide an answer, but a bit of explanation would probably help.
asl97 provides the following general purpose Lua function to use as a substitute for $pre_exec, preceded by a require statement to make io available for use by the function:
require 'io'

function pre_exec(cmd)
    local handle = io.popen(cmd)
    local output = handle:read("*a")
    handle:close()
    return output
end
Adding this block of code to your conky configuration file will make the function available for use therein. For testing, I added it above the conky.config = { ... } section.
Calling the Lua pre_exec function will return a string containing the output of the command passed to it. The conky.text section from [[ to ]] is also a string, so it can then be concatenated to the string returned by pre_exec using the .. operator, as shown in the usage section provided by asl97.
In my test, I did the following silly bit, which worked as expected, to display "Hello World!" and the output of the date function with spacing above and below each at the top of my conky display:
conky.text = pre_exec("echo; echo Hello World!; echo; date; echo")..[[
-- lots of boring conky stuff --
]]
More serious commands can, of course, be used with pre_exec, as shown by asl97.
One thing that asl97 didn't explain was how to concatenate so that the pre_exec output appears in the middle of the conky display rather than just at the beginning. I tested and found that you can do it like the following:
conky.text = [[
-- some conky stuff --
]]..pre_exec("your_important_command")..[[
-- more conky stuff --
]]

Python3 - sanitizing user input passed to shell as parameter

What is the recommended method of sanitizing user_input_parameter passed to the shell like
subprocess.Popen(['sudo', 'rm -rf', user_input_parameter])
The command should accept all parameters, but malicious activities like breaking out of the command should be mitigated.
Python's implementation of subprocess protects against shell injection, as the documentation says:
17.5.2. Security Considerations
Unlike some other popen functions, this implementation will never
implicitly call a system shell. This means that all characters,
including shell metacharacters, can safely be passed to child
processes. If the shell is invoked explicitly, via shell=True, it is
the application’s responsibility to ensure that all whitespace and
metacharacters are quoted appropriately to avoid shell injection
vulnerabilities.
When using shell=True, the shlex.quote() function can be used to
properly escape whitespace and shell metacharacters in strings that
are going to be used to construct shell commands.
This will however NOT protect against a user passing malicious input - in your case, for example, deleting something that was not intended to be deleted. I would not pass user input to the command directly like that - you should verify that whatever is about to be deleted is what you intended and not something completely different. That is however part of the application's logic already - regarding shell injection (breaking out of the command), that should be fine with subprocess.
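As a sketch of that kind of application-level check (the allowed base directory and the helper name below are made up for illustration, not something the subprocess module provides), you could resolve the user-supplied path and refuse anything that escapes a directory the application is allowed to touch:

import os
import subprocess

ALLOWED_BASE = '/srv/app/uploads'  # assumed: the only directory we may delete from

def safe_delete(user_input_parameter):
    # Resolve symlinks and '..' components before comparing against the base directory.
    target = os.path.realpath(os.path.join(ALLOWED_BASE, user_input_parameter))
    if os.path.commonpath([ALLOWED_BASE, target]) != ALLOWED_BASE:
        raise ValueError('refusing to delete outside of ' + ALLOWED_BASE)
    # Each argument is its own list element, so no shell ever parses the string;
    # '--' stops rm from treating a leading '-' in the path as an option.
    subprocess.run(['rm', '-rf', '--', target], check=True)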
I made this little example:
#!/usr/bin/env python3
import subprocess
user_input_parameter = '/; id'
subprocess.Popen(['ls', user_input_parameter])
Which outputs this when executed:
$ python3 asdf.py
ls: /; id: No such file or directory
$
This demonstrates that subprocess passes the input as a single argument to the command.
All of this is true only if shell=False (the default as of writing this answer) for the subprocess methods; otherwise you basically enable shell (bash, etc.) execution and allow injection to happen if inputs are not properly sanitized.
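If you do end up needing shell=True (for example, because the command line contains a pipe), the shlex.quote() route mentioned in the documentation excerpt above would look something like this (a sketch, not from the original answer):

import shlex
import subprocess

user_input_parameter = '/; id'

# shlex.quote() wraps the untrusted value so the shell sees it as one literal word.
cmd = 'ls ' + shlex.quote(user_input_parameter) + ' | wc -l'
subprocess.run(cmd, shell=True)

Keeping shell=False with a list of arguments remains the simpler and safer default whenever you don't actually need shell features.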
Btw, you need to pass each parameter separately, so you would need to run it like this (but please don't do that):
subprocess.Popen(['sudo', 'rm', '-rf', user_input_parameter])

python subprocess - how to change dir to run command?

I've got a python script where I run a cmd using subprocess.getoutput(), and store the resulting output in a list. Now, I need to be able to have the script change to a target dir and run the command there. It should be simple, but passing the cwd arg to getoutput() is not working.
Any ideas?
Example:
out = subprocess.getoutput(" ".join(cmd), cwd='/my/target/path').splitlines()
From the docs it looks like I can easily do this with subprocess.Popen, but then it's difficult to get the output into a list of strings. I've only been able to get the results into a list of byte strings.
subprocess.getoutput is one of the legacy shell invocation functions. It doesn't take a cwd argument, and it returns the command's output as a single string (its sibling subprocess.getstatusoutput is the one that returns a (status, output) tuple). You've got several problems before you even get to the list of bytes.
When Python runs a program, it doesn't know what encoding the output is going to have, so you need to supply that somehow. Assuming the encoding is 'utf-8', the basic operation is:
mylist = []
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, cwd='/my/target/path')
for line in proc.stdout:  # read the captured stdout line by line
    mylist.append(line.decode('utf-8'))
proc.wait()
In this implementation, anything written to stderr just goes to your program's stderr. Notice also that I kept the command as a list and didn't use shell=True. There are several helper functions that do some of the work for you, but that's pretty simple already.
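If you are on Python 3.7 or newer, a shorter route (not part of the original answer, and it buffers the whole output rather than streaming it) is subprocess.run, which does accept cwd and can decode for you:

import subprocess

# capture_output=True collects stdout/stderr; text=True decodes them to str
# (pass encoding='utf-8' instead if you want to be explicit about the encoding).
result = subprocess.run(cmd, cwd='/my/target/path',
                        capture_output=True, text=True)
out = result.stdout.splitlines()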

capturing pipe exit status in R

I am using R's pipe() function to capture output from a shell command, but I would also like to get the exit code from the command.
I know that I could use system2 here, but I need the advantage of a pipe i.e. the ability to process output in a streaming fashion.
I am considering writing my own library to wrap the popen() and pclose() C functions to take advantage of the fact that pclose() returns the exit status, but maybe this can be avoided.
Any suggestions? Thanks!
Note
There are certainly ways to do this with temporary files, named pipes, etc but I would ideally like to avoid these workarounds. I am willing to compile a shared library with an R->C function in it (and I'm even willing to copy-paste part of the R source code), but I'm not willing to rebuild R.
Update
I started reading through the R source code and found the unchecked pclose call:
in src/main/connections.c:
static void pipe_close(Rconnection con)
{
    pclose(((Rfileconn)(con->private))->fp);
    con->isopen = FALSE;
}
I tried going forward with the approach of implementing an R_pclose C function that duplicates the R code for close() but saves this return value. Unfortunately, I ran into this static variable in src/main/connections.c:
static Rconnection Connections[NCONNECTIONS];
Since I'd have to run objcopy --globalize-symbol=Connections libR.so /path/to/my/libR.so anyway to access the variable, it looks like my best solution is to rebuild R with my own patch to capture the pclose return value.
Ugly hack: you can wrap your command call in a small shell script that writes the exit code of its child to a temporary file. When the stream has ended, you can wait until that file has non-zero size and then read the status from there. I hope someone comes up with a better solution, but at least this is a kind of solution.
