I want to download a URL on a remote host using ssh. I was using exec(), and it was working:
const cmd = `mkdir -p /home/username/test; wget --no-check-certificate -q -U \"\" -c \"${url}\" -O /home/username/test/img.jpg`;
const out = execSync(`ssh -o ConnectTimeout=8 -o StrictHostKeyChecking=no -p 2356 username@${ip} '${cmd}'`);
But it's unsafe to use the url variable this way, because its value comes from user input, so I found some posts on Stack Overflow saying that I need to use spawn:
const url = 'https://example.com/image.jpg';
const out = spawnSync('ssh', [
'-o', 'ConnectTimeout=8',
'-o', 'StrictHostKeyChecking=no',
'-p', '2356',
`username@${ip}`,
`mkdir -p /home/username/test; wget --no-check-certificate -q -U "" -c "${url}" -O /home/username/test/img.jpg`,
]);
But what if const url = 'https://example.com/image.jpg"; echo 5; "';? The echo would be executed. Could someone tell me how to execute this code in a safe way?
There are two aspects. First, you are correct that execSync is unsafe. To quote from the documentation:
Never pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.
A solution is to use spawn (or spawnSync), as you pointed out, for example like in this related answer.
However, in your example, you are calling ssh and passing it a command string, which will be executed by the shell on the remote system. Because of that, you are still vulnerable to the attack, as you showed in your example. But note that it is no longer a Node.js-related exploit, but an exploit of ssh and the remote shell.
To mitigate the attack, I would recommend concentrating on the remote server. Instead of sending it a command over ssh, which it is expected to trust and execute in a shell, you could provide a clearly defined API on the server. In an HTTP interface, you can accept the input and do proper validation (instead of simply trusting it). The advantage is that you do not need to deal with the subtleties of the shell. A sketch of such an endpoint follows.
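For illustration only, a minimal sketch of what such an endpoint on the remote server could look like, using nothing but Node's built-in modules; the path, port, allowed host and target file are assumptions, not something from the question:

const http = require('http');
const https = require('https');
const fs = require('fs');

const ALLOWED_HOSTS = new Set(['example.com']); // assumption: the legitimate hosts are known in advance

http.createServer((req, res) => {
  const requested = new URL(req.url, 'http://localhost').searchParams.get('url');
  let target;
  try {
    target = new URL(requested); // rejects anything that is not a URL at all
  } catch {
    res.writeHead(400);
    return res.end('invalid url');
  }
  if (target.protocol !== 'https:' || !ALLOWED_HOSTS.has(target.hostname)) {
    res.writeHead(403);
    return res.end('url not allowed');
  }
  // No shell is involved: the URL is only ever treated as data, never as part of a command line.
  https.get(target, (img) => {
    img.pipe(fs.createWriteStream('/home/username/test/img.jpg'))
      .on('finish', () => { res.writeHead(200); res.end('ok'); });
  }).on('error', () => { res.writeHead(502); res.end('download failed'); });
}).listen(8080);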
If you are forced to work with ssh, you could validate the URL locally and forward it to the server only if it looks safe. This is still not ideal from a security perspective. First, the remote server needs to trust you completely (often it is better to avoid that and instead validate as close to the final use as possible). Second, the validation itself is not straightforward. You will need to decide whether a string looks like a URL (e.g. by using new URL(url)), but the more difficult part is making sure that no exploit slips through. I don't have a concrete example, but I would be cautious about assuming that every string that passes the URL parser is safe to execute in a shell environment. A hedged sketch of that kind of validation is below.
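For illustration, a sketch of that local validation before forwarding over ssh; the allowed host, the character allowlist and the variable names userInput and ip are assumptions, and as said above this is defence in depth, not a guarantee:

const { spawnSync } = require('child_process');

function validateUrl(input) {
  let url;
  try { url = new URL(input); } catch { return null; }
  if (url.protocol !== 'https:') return null;
  if (url.hostname !== 'example.com') return null; // assumption: the host is known in advance
  // Conservative character allowlist on top of the URL parser, so nothing the
  // remote shell could interpret slips through (no quotes, no $, no ;, no spaces).
  if (!/^[A-Za-z0-9:\/?&=._~-]+$/.test(url.href)) return null;
  return url.href;
}

const safeUrl = validateUrl(userInput);
if (safeUrl) {
  const cmd = `mkdir -p /home/username/test; wget --no-check-certificate -q -c '${safeUrl}' -O /home/username/test/img.jpg`;
  spawnSync('ssh', ['-o', 'ConnectTimeout=8', '-p', '2356', `username@${ip}`, cmd]);
}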
In summary: if possible, try to avoid passing shell commands over ssh in this situation (input data controlled by an attacker). Instead, prefer a normal API such as an HTTP interface (or another text or binary protocol). If that is not possible, try hard to sanitize the data before sending it out. Maybe you know in advance what a URL will look like (e.g. a list of allowed hostnames, allowed paths, etc.). But realize that there might be edge cases you overlook, and never underestimate the creativity of an attacker.
Related
If you run a command in Terminal, say
rsync -avuP [SourcePath] [DestPath]
That command will get logged in, say .bash_history, .zsh_history, .bash_sessions, etc.
So if you make use of something as notoriously insecure as sshpass
say sshpass -P password -p MySecretPassword [Some command requiring std input], that too will be logged.
But what happens when you do the equivalent by spawning a process using Node.js?
const passw = functionThatRetrievesPasswordSecurely();
const terminalCmd = "sshpass";
const terminalArgs = ["-P", "password", "-p", passw, "command", "requiring", "a", "password", "entry"];
const spawn = require("child_process").spawn(terminalCmd, terminalArgs);
spawn.stdout.on("data", data => { console.log("stdout, no details"); });
spawn.stderr.on("data", data => { console.log("stderr, no details"); });
spawn.on("close", data => { console.log("Process complete, no details"); });
Are the terminalCmd or terminalArgs logged anywhere?
If so, where?
If not, is this a secure way to make use of sshpass?
There isn't a Node-specific history file for spawned commands unless you create one yourself by logging the arguments. There can be lower-level OS logging that captures this kind of data, such as an audit log.
Passing a password on the command line is still considered the least secure approach.
Try -f to pass a file or -d to pass a file descriptor instead (or better, ssh keys should always be the first port of call); a sketch of the -d approach from Node follows the man page excerpt below.
The man page explains...
The -p option should be considered the least secure of all of sshpass's options. All system users can see the password in the command line with a simple "ps" command. Sshpass makes a minimal attempt to hide the password, but such attempts are doomed to create race conditions without actually solving the problem. Users of sshpass are encouraged to use one of the other password passing techniques, which are all more secure.
In particular, people writing programs that are meant to communicate the password programmatically are encouraged to use an anonymous pipe and pass the pipe's reading end to sshpass using the -d option.
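Following that advice from Node, a minimal sketch could look like the one below. It reuses functionThatRetrievesPasswordSecurely from the question; the host and the remote command are placeholders, and the extra stdio entry gives the child a pipe on fd 3 that sshpass reads the password from, so it never appears in the argument list:

const { spawn } = require("child_process");

const passw = functionThatRetrievesPasswordSecurely();

// stdio index 3 becomes an extra pipe exposed to the child process as fd 3
const child = spawn(
  "sshpass",
  ["-d", "3", "ssh", "user@example.com", "some-command"],
  { stdio: ["inherit", "pipe", "pipe", "pipe"] }
);

child.stdio[3].write(passw + "\n"); // sshpass reads the password from fd 3
child.stdio[3].end();

child.stdout.on("data", () => console.log("stdout, no details"));
child.stderr.on("data", () => console.log("stderr, no details"));
child.on("close", () => console.log("Process complete, no details"));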
There is quite a common issue in the Unix world: when you start a process with parameters, one of them being sensitive, other users can read it just by executing ps -ef (for example mysql -u root -p secret_pw).
The most frequent recommendation I found was simply not to do that: never run processes with sensitive parameters; instead, pass this information some other way.
However, I found that some processes are able to change their parameter line after they have processed the parameters, so that they look, for example, like this in the process listing:
xfreerdp -decorations /w:1903 /h:1119 /kbd:0x00000409 /d:HCG /u:petr.bena /parent-window:54526138 /bpp:24 /audio-mode: /drive:media /media /network:lan /rfx /cert-ignore /clipboard /port:3389 /v:cz-bw47.hcg.homecredit.net /p:********
Note the /p:******** parameter, where the password was removed somehow.
How can I do that? Is it possible for a process on Linux to alter the argument list it received? I assume that simply overwriting the char **argv I get in the main() function wouldn't do the trick. I suppose that maybe changing some files in the /proc pseudo-filesystem might work?
"hiding" like this does not work. At the end of the day there is a time window where your password is perfectly visible so this is a total non-starter, even if it is not completely useless.
The way to go is to pass the password in an environment variable.
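As an illustration only, reusing the mysql example from the question and the Node child_process API that appears elsewhere on this page: MYSQL_PWD is the variable the mysql client itself reads; other tools have their own mechanisms (a config file, a prompt, a file descriptor).

// The secret travels in the environment, not in argv, so it does not show up in
// `ps -ef`; /proc/<pid>/environ is only readable by the same user and root.
const { spawnSync } = require("child_process");

spawnSync("mysql", ["-u", "root"], {
  env: { ...process.env, MYSQL_PWD: "secret_pw" },
  stdio: "inherit",
});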
I'd like to prepare a simple script for connecting to a VPN network. The password for the network consists of two elements: a pretty complicated pass + a randomized token. I don't want to remember this password, but rather store it encrypted in some secure directory. Now, the script I need should ask me for a passphrase and a token, decrypt the pass from a file and run some commands. All of this is pretty easy except one thing: is it possible to decrypt a file into a variable instead of into another file? I mean, I'd like to get something like
PASS=`mdecrypt password.nc`
but as far as I know mdecrypt generates a file as a result instead of returning value. I know I could run something like
`mdecrypt password.nc`
PASS=`cat password`
`unlink password`
but is there some easier (one liner) solution?
Use the -F option, which forces the output onto standard output, so you can capture the decrypted result directly in a variable with command substitution instead of going through a temporary file:
-F Force output on standard output or input from stdin if that is a terminal. By default mcrypt will not output encrypted data to terminal
Is it safe to pass a key to the openssl command via command line parameters in Linux? I know openssl nulls out the actual parameter, so it can't be viewed via /proc, but even with that, is there some way to exploit it?
I have a Python app in which I want to use OpenSSL to do encryption/decryption through stdin/stdout streaming in a subprocess, but I want to know that my keys are safe.
Passing the credentials on the command line is not safe. It will result in your password being visible in the system's process listing - even if openssl erases it from the process listing as soon as it can, it'll be there for an instant.
openssl gives you a few ways to pass credentials in - the man page has a section called "PASS PHRASE ARGUMENTS", which documents all the ways you can pass credentials into openssl. I'll explain the relevant ones:
env:var
Lets you pass the credentials in an environment variable. This is better than putting them on the command line, because on Linux your process's environment isn't readable by other users by default - but this isn't necessarily true on other platforms.
The downside is that other processes running as the same user, or as root, will be able to easily view the password via /proc.
It's pretty easy to use with python's subprocess:
import os, subprocess

new_env = os.environ.copy()
new_env["MY_PASSWORD_VAR"] = "my key data"
p = subprocess.Popen(["openssl", ..., "-passin", "env:MY_PASSWORD_VAR"], env=new_env)
fd:number
This lets you tell openssl to read the credentials from a file descriptor, which it will assume is already open for reading. By using this you can write the key data directly from your process to openssl, with something like this:
import os, subprocess

r, w = os.pipe()
p = subprocess.Popen(["openssl", ..., "-passin", "fd:%i" % r],
                     pass_fds=(r,), preexec_fn=lambda: os.close(w))  # pass_fds keeps r open in the child on Python 3
os.write(w, b"my key data\n")
os.close(w)
This will keep your password secure from other users on the same system, assuming that they are logged in with a different account.
With the code above, you may run into issues with the os.write call blocking. This can happen if openssl waits for something else to happen before reading the key in. This can be addressed with asynchronous i/o (e.g. a select loop) or an extra thread to do the write()&close().
One drawback of this is that it doesn't work if you pass close_fds=True to subprocess.Popen on older Python versions, which had no way to say "don't close one specific fd" (newer versions accept pass_fds for exactly that). If that option isn't available to you, then I'd suggest using the file: syntax (below) with a named pipe.
file:pathname
Don't use this with an actual file to store passwords! That should be avoided for many reasons, e.g. your program may be killed before it can erase the file, and with most journalling file systems it's almost impossible to truly erase the data from a disk.
However, if used with a named pipe with restrictive permissions, this can be as good as using the fd option above. The code to do this will be similar to the previous snippet, except that you'll need to create a fifo instead of using os.pipe():
pathToFifo = my_function_that_securely_makes_a_fifo()  # i.e. a fifo created with restrictive permissions
p = subprocess.Popen(["openssl", ..., "-passin", "file:%s" % pathToFifo])
fifo = open(pathToFifo, "w")
print("my key data", file=fifo)
fifo.close()
The print here can have the same blocking I/O problems as the os.write call above; the resolutions are also the same.
No, it is not safe. No matter what openssl does with its command line after it has started running, there is still a window of time during which the information is visible in the process' command line: after the process has been launched and before it has had a chance to null it out.
Plus, there are many ways for an accident to happen: for example, the command line gets logged by sudo before it is executed, or it ends up in a shell history file.
OpenSSL supports plenty of methods of passing sensitive information, so that you don't have to put it in the clear on the command line. From the man page:
pass:password
the actual password is password. Since the password is visible to utilities (like 'ps' under Unix) this form should only be used where security is not important.
env:var
obtain the password from the environment variable var. Since the environment of other processes is visible on certain platforms (e.g. ps under certain Unix OSes) this option should be used with caution.
file:pathname
the first line of pathname is the password. If the same pathname argument is supplied to -passin and -passout arguments then the first line will be used for the input password and the next line for the output password. pathname need not refer to a regular file: it could for example refer to a device or named pipe.
fd:number
read the password from the file descriptor number. This can be used to send the data via a pipe for example.
stdin
read the password from standard input.
All but the first two options are good.
I have a script that needs to know what username it is run from.
When I run it from shell, I can easily use $ENV{"USER"}, which is provided by bash.
But apparently - when the same script is run from cron, also via bash - $ENV{"USER"} is not defined.
Of course, I can:
my $username = getpwuid( $< );
But it doesn't look nice - is there any better/nicer way? It doesn't have to be system-independent, as the script is for my personal use, and will be only run on Linux.
Try getting your answer from several places, first one wins:
my $username = $ENV{LOGNAME} || $ENV{USER} || getpwuid($<);
crontab sets $LOGNAME so you can use $ENV{"LOGNAME"}. $LOGNAME is also set in my environment by default (haven't looked where it gets set though) so you might be able to use only $LOGNAME instead of $USER.
Although I agree with hacker, I don't know what's wrong with getpwuid.
Does this look prettier?
use English qw( -no_match_vars );
my $username = getpwuid $UID;
Sorry, why doesn't that "look nice"? That's the appropriate system call to use. If you want an external program to invoke (e.g. something you could use from a bash script too), there are the tools /usr/bin/id and /usr/bin/whoami.
Apparently much has changed in Perl in recent years, because some of the answers given here do not work for fetching a clean version of "current username" in modern Perl.
For example, getpwuid($<) used in list context (which is what you get inside print or a list assignment) returns the whole passwd entry, not just the username, so you have to use (getpwuid($<))[0] instead if you want a clean version of the username there.
Also, I'm surprised no one mentioned getlogin(), though that doesn't always work. For best chance of actually getting the username, I suggest:
my $username = getlogin() || (getpwuid($<))[0] || $ENV{LOGNAME} || $ENV{USER};