Sending passwords securely via command line without being exposed in ps/wmic (Windows, Unix) - security

We have a launcher application on Windows and Unix which execs (starts via the exec system call) applications like RDP, putty, and MSSQL. To invoke it, we pass parameters such as username, password, and IP.
Recently we found that, using wmic or ps, one can see what parameters were passed to it, thereby exposing sensitive information like passwords.
Is there any way we can either mask those passwords or hide the parameters altogether?
Note: my launcher gets its parameters from another service, so prompting for the password after the application is invoked is not an option! Passwords have to be passed to the application as parameters.
Any solutions?

It is not possible (at least not reliably on Linux) to pass program arguments securely.
A possible workaround is to pass the name of a file (or some other resource, e.g. a "reference" to some database entry) containing that password, or to use some other inter-process communication facility (on Linux: fifo(7), shm_overview(7), pipe(7), unix(7), etc.) to pass this sensitive information. You might also consider using environment variables (see environ(7) & getenv(3)).
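For instance, here is a minimal Python sketch of the fifo(7) approach; the fifo path and the consumer's --password-file flag are made-up assumptions:
import os, subprocess

FIFO = "/run/myapp/secret.fifo"  # assumed path; the directory must exist and be private

os.mkfifo(FIFO, 0o600)  # only our own user may open the fifo
try:
    # The child receives only the fifo's *name* as an argument;
    # the password itself never appears in argv or the environment.
    child = subprocess.Popen(["consumer-app", "--password-file", FIFO])
    with open(FIFO, "w") as f:  # blocks until the child opens its end
        f.write("s3cret\n")
    child.wait()
finally:
    os.unlink(FIFO)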
On Linux, also look into proc(5) to understand what it can show about processes - through /proc/1234/ for the process with PID 1234. You may also want the seccomp facilities.
On Unix, be aware of the setuid mechanism (which is tricky to understand). Use it carefully, since it is the basic building block of most security and authentication machinery (such as sudo and login) and a simple mistake can open a huge vulnerability.
For software written to work on both Unix and Windows, I recommend putting the password in some file (e.g. /tmp/secretpassword), passing the name of that file (/tmp/secretpassword, or some D:\foo\bar on Windows) through a program argument, and using the file permission mechanisms wisely to ensure the file is not readable by those who don't need it.
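A minimal Python sketch of that file-based handoff; the consumer-app name and its --password-file flag are assumptions, and a real deployment should pick an unpredictable path:
import os, subprocess

PATH = "/tmp/secretpassword"

# O_EXCL makes creation fail instead of following a pre-planted file,
# and 0o600 keeps the file readable by our user only.
fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
with os.fdopen(fd, "w") as f:
    f.write("s3cret\n")
try:
    subprocess.run(["consumer-app", "--password-file", PATH], check=True)
finally:
    os.unlink(PATH)  # remove the secret as soon as the consumer is done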

Related

Is passing application configuration in stdin a secure alternative to environment variables?

I'm trying to figure out the best approach to web application configuration. The goals are:
Configurability. Force the configuration to be specified at deploy time. Make sure the configuration is kept separate from the code or deployable artifact.
Security. Keep secrets from leaking from deployment environment.
Simplicity. Make sure the solution is simple, and natural to the concept of OS process.
Flexibility. Makes no assumptions about where the configuration is stored.
According to the twelve-factor app methodology, web application configuration is best provided in environment variables. This is simple and flexible, but it looks like there are some security concerns related to it.
Another approach could be to pass all the configuration as command line arguments. This again is simple, flexible, and natural to the OS, but the whole configuration is then visible in the host's process list. This might or might not be an issue (I'm no OS expert), but the solution is cumbersome at the least.
A hybrid approach is taken by the popular framework Dropwizard, where a command line argument specifies the config file location and the config is read from there. The thing is, it breaks the flexibility constraint by making assumptions about where my configuration lives (a local file). It also makes my application implement file access which, while often easily achieved in most languages/frameworks/libraries, is not inherently simple.
I was thinking of another approach, which would be to pass all the configuration in the application's stdin. Ultimately one could do cat my-config-file.yml | ./my-web-app in the case of a locally stored file, or even wget -qO- https://secure-config-provider/my-config-file.yml | ./my-web-app. Piping seems simple and native to the OS process. It seems extremely flexible as well, as it separates the question of how the config is provided onto the host OS.
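For concreteness, the consuming end of that pipe could be as simple as this Python sketch (assuming PyYAML is available; the config keys are made up):
import sys
import yaml  # PyYAML, assumed to be installed

# Read the whole configuration from stdin exactly once, at startup.
config = yaml.safe_load(sys.stdin)
db_password = config["database"]["password"]  # hypothetical keys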
The question is whether it conforms to the security constraint. Is it safe to assume that once piped content has been consumed it is permanently gone?
I wasn't able to find anyone trying this on Google, hence the question.
Writing secrets into the stdin of a process is more secure than environment variables - if done correctly.
In fact, it is the most secure way I know of to pass secrets from one process to another - again if done correctly.
Of course this applies to any file-like input that has no file system presence and cannot otherwise be opened by other processes; stdin is just one instance of that which is available by default and easy to write to.
Anyway, the key thing with environment variables, as the post you linked describes, is that once you put something into the environment variables it leaks into all child processes, unless you take care to clean it up.
But also, it's possible for other processes running as your user, or as any privileged/administrative user, to inspect the environment of your running process.
For example, on Linux, take a look at the files /proc/*/environ. That file exists for each running process, and you can inspect its contents for any process that is running as your user. If you are root, you can look at the environ of any process of any user.
This means that any local code execution exploit, even some unprivileged ones, could get access to your secrets in your environment variables, and it is very simple to do so. Still better than having them in a file, but not by much.
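To see how easy this is, here is a short Python sketch (the target PID and variable name are made up) that reads another process's environment:
# Works for any process running as your user (or any process if root).
pid = 1234  # hypothetical target
with open(f"/proc/{pid}/environ", "rb") as f:
    # Entries are NUL-separated KEY=VALUE pairs.
    env = dict(entry.split(b"=", 1) for entry in f.read().split(b"\0") if b"=" in entry)
print(env.get(b"DB_PASSWORD"))  # assumed secret-bearing variable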
But when you pipe things into the stdin of a process, outside processes can only intercept that if they are able to use the debugging system calls to "attach" to the process and monitor system calls or scan its memory. This is a much more complex process: it's less obvious where to look, and most importantly, it can be locked down further.
For example, Linux can be configured (e.g. via the Yama ptrace_scope setting) to prevent unprivileged user processes from using the debugger system calls on processes they did not start, even ones running as the same user, and some distros are starting to turn this on by default.
This means that a properly executed write of data to stdin is in almost all cases at least as secure as, and usually more secure than, using an environment variable.
Note, however, that you have to "do it correctly". For example, these two won't give you the same security benefits:
my-command </some/path/my-secret-config
cat /some/path/my-secret-config | my-command
In both cases the secret still exists on disk, so you get more flexibility but not more security. (If, however, the cat is actually sudo cat, or otherwise has more privileged access to the file than my-command, then it could be a security benefit.)
Now let's look at a more interesting case:
echo "$my_secret_data" | my-command
Is this more or less secure than an environment variable? It depends:
If you are calling this inside a typical shell, then echo is probably a "builtin", which means the shell never needs to execute an external echo program, and the secret stays within the shell's memory before being written to stdin.
But if you are invoking something like this from outside of a shell, then it is actually a big security leak, because it will put the secret into the command line of the executed external echo program, which on many systems can be seen by any other running process, even those of other unprivileged users!
So as long as you understand that, and use the right functionality to make sure you are writing directly from whatever has the credentials to your process, stdin is probably the most secure option you have.
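"Writing directly" looks like this in a Python sketch (my-command is the assumed consumer; in reality the secret would come from wherever the credentials live):
import subprocess

secret = "s3cret"  # in reality, fetched from the credential source

# The secret travels straight from this process's memory into the
# child's stdin pipe: it never touches argv, the environment, or disk.
subprocess.run(["my-command"], input=secret.encode(), check=True)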
TL;DR: stdin can give you a much smaller "surface area" for the data to leak, which means that it can help you get more security, but whether or not you do depends on exactly how you're using it and how the rest of your system is set up.
Personally, I use stdin for passing in secret data whenever I can.

How can I access files of different users and retain permission restrictions in Linux using Node.JS?

I'm trying to reimplement an existing server service in Node.JS. That service can be compared to a classic FTP server: authenticated users can read/create/modify files, but restricted to the permissions given to the matching system user name.
I'm pretty sure I can't have Node.JS run as root and switch users using seteuid() or the like, since that would break concurrency.
Instead, can I let my Node.JS process run as root and manually check permissions when accessing files? I'm thinking of some system call like "could user X create a file in directory Y?"
Otherwise, could I solve this by using user groups? Note that the service must be able to delete/modify a file created by the real system user, which may not set a special group just so that the service can access the file.
Running node as root sounds dangerous, but I assume there aren't many options left for you. Most FTP servers run as root too, for the same reason. It does mean, though, that you need to pay serious attention to the security of the code you are going to run.
Now to the question:
You are asking whether you can reimplement the Unix permission checks in node.js. Yes, you can, but you should not! There is an almost 100% chance you will leave holes or miss edge cases that the Unix core has already taken care of.
Instead, use process.setuid(id) as you mentioned. It will not defeat concurrency, but you need to think in terms of parallel processes rather than async code now. That is extra work, but it will spare you the headache of reinventing Unix security.
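As a rough illustration of that fork-per-request pattern - sketched in Python rather than Node for brevity, with a made-up uid/gid and path:
import os

def run_as_user(uid, gid, action):
    # Fork a child, drop it to the target user's identity, and run the
    # file operation there. The root parent never changes its own uid,
    # so concurrent requests cannot interfere with each other.
    pid = os.fork()
    if pid == 0:
        try:
            os.setgid(gid)  # drop group first, while still root
            os.setuid(uid)  # then drop user; this is irreversible
            action()
            os._exit(0)
        except Exception:
            os._exit(1)
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status) == 0

# The kernel, not our code, now decides whether uid 1000 may do this.
ok = run_as_user(1000, 1000, lambda: open("/home/alice/upload.txt", "w").close())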
Alternatively, if all of the operations you want to carry out on the filesystem involve shell commands, then you can simply modify them to the following pattern:
runuser -l userNameHere -c 'command'

Running shell scripts with sudo through my web app

I have some functionality that interfaces with the server's OS in my web application. I've written a bash script and am able to run it from within my app.
However, some functionality of the script requires superuser privileges.
What is the most sane way to run this script securely? It is being passed arguments from a web form, but should only be able to be called by authenticated users that I trust not to haxxor it.
Whichever way you do this is likely to be very dangerous. Can you perhaps write a local daemon with the required privileges, and use some sort of message bus that produces/consumes events to be processed by this superuser-requiring component?
That way, you can carefully validate the contents of each message and reduce the likelihood of exploitation.
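A hedged sketch of that idea in Python - the socket path and command whitelist are assumptions, and a real daemon would need logging and error handling:
import os, socket, subprocess

SOCK = "/run/admin-daemon.sock"  # assumed rendezvous point
ALLOWED = {b"reload-web": ["/usr/sbin/service", "nginx", "reload"]}  # assumed whitelist

if os.path.exists(SOCK):
    os.unlink(SOCK)
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(SOCK)
os.chmod(SOCK, 0o660)  # only the web server's group may connect
srv.listen(1)

while True:
    conn, _ = srv.accept()
    msg = conn.recv(64).strip()
    argv = ALLOWED.get(msg)  # anything not whitelisted is refused
    if argv:
        subprocess.run(argv)
        conn.sendall(b"ok\n")
    else:
        conn.sendall(b"rejected\n")
    conn.close()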
What is the most sane way to run this script securely?
If you really care about security, require the web client to provide a passphrase and use an ssh key. Then run the script under ssh-agent, and for the sensitive parts do ssh root@localhost command.... You probably will want to create ssh keypairs just for this purpose, as typing one's normal SSH passphrase into a web form is not something I would do (who trusts your web form, anyway?).
If you don't want quite this much security, and if you really, really believe that your web form can correctly authenticate its users, without any bugs, you could instead decide to trust the web server to run the commands you need. (I wouldn't.) In this case I would use the /etc/sudoers file to allow the web server to run the commands of interest without providing a password. Then your script should use sudo to run those commands.
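As a hedged sketch (script path, user, and sudoers entry are all assumptions), the pieces fit together like this:
import subprocess

# Assumed /etc/sudoers.d/webapp entry, added with visudo:
#   www-data ALL=(root) NOPASSWD: /usr/local/bin/restart-service.sh
# Only that exact, pre-audited script can run without a password.

def restart_service():
    # Never interpolate web-form input into this command line;
    # sudo is allowed to run the fixed script and nothing else.
    subprocess.run(["sudo", "/usr/local/bin/restart-service.sh"], check=True)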

Security strategies for storing password on disk

I am building a suite of batch jobs that require regular access to a database, running on a Solaris 10 machine. Because of (unchangeable) design constraints, we are required to use a certain program to connect to it. Said interface requires us to pass a plain-text password on the command line to connect to the database. This is a terrible security practice, but we are stuck with it.
I am trying to make sure things are properly secured on our end. Since the processing is automated (i.e., we can't prompt for a password), and I can't store anything outside the disk, I need a strategy for storing our password securely.
Here are some basic rules
The system has multiple users.
We can assume that our permissions are properly enforced (i.e., if a file is chmod'd to 600, it won't be publicly readable)
I don't mind anyone with superuser access looking at our stored password
Here is what I've got so far:
Store the password in password.txt
$ chmod 600 password.txt
The process reads from password.txt when the password is needed
The buffer is overwritten with zeros when it's no longer needed
I'm sure there is a better way, though.
This is not a problem cryptography can solve. No matter the cipher used, the key will be equally accessible to the attacker. Crypto doesn't solve all problems.
chmod 400 is best; this makes it read-only. chmod 600 is read-write, which may or may not be a requirement. Also make sure it's chown'ed by the user whose process needs it. This is really the best you can do: even if you are sharing the machine with other users, they shouldn't be able to access it. Hopefully this is a dedicated machine; in that case there isn't much of a threat. SELinux or AppArmor will help harden the system against cross-process/cross-user attacks.
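A small Python sketch of the defensive check (the file path is an assumption) - refuse to run at all if the permissions have drifted, much as ssh does for private keys:
import os, stat, sys

PASSWORD_FILE = "/opt/batch/password.txt"  # assumed location

st = os.stat(PASSWORD_FILE)
if st.st_mode & (stat.S_IRWXG | stat.S_IRWXO):
    sys.exit(f"{PASSWORD_FILE} is group/world accessible; chmod 400 it")
with open(PASSWORD_FILE) as f:
    password = f.read().rstrip("\n")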
Edit:
shred is the tool you need to securely delete files (note that its own documentation warns it is not reliable on journaling filesystems).
Edit: Based on Moron/Mike's comments, the unix command ps aux will display all running processes and the command used to invoke them. For instance, the following command would expose the credentials to every user on the system: wget ftp://user:password@someserver/somefile.ext. A secure alternative is to use the cURL library. You should also disable your shell's history; in bash you can do this by setting an environment variable: export HISTFILE=
You're not far from the best approach given your constraints. You have two issues to deal with. The first is password storage. The second is using the password securely.
Dealing with the second one first -- you have a huge issue in the use of the command line program. Using options to the 'ps' command, a user can see the arguments used in running the command line program. From what you've written, this would contain the password in plain text. You mention this is an unchangeable interface. Even so, as an ethical programmer, you should raise the issue. If this were a banking application handling financial transactions, you might consider finding another job rather than being part of an unethical solution.
Moving on to securely storing the password: you don't mention what language you are using for your batch jobs. If you are using a shell script, then you have little recourse other than to hard-code the password within the shell script or read it in plain text from a file. From your description of storing the password in a separate file, I'm hoping you might be writing a program in a compiled language. If so, you can do a little better.
If using a compiled language, you can encrypt the password in the file and decrypt within your program. The key for decryption would reside in the program itself so it couldn't be read easily. Besides this, I would
chmod 400 the file to prevent other users from reading it
add a dot prefix ('.') to the file name to hide it from a normal directory listing
rename the file to make it less interesting to read.
be careful not to store the key in a simple string -- the 'strings' command will print all printable strings from a unix executable image.
Having done these things, the next steps would be to improve the key management. But I wouldn't go this far until the 'ps' issue is cleared up. There's little sense putting the third deadbolt on the front door when you plan to leave the window open.
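A minimal Python sketch of those points (the file path is an assumption, and the XOR scheme stands in for a real cipher) - note this is obfuscation that raises the attacker's effort, not genuine cryptography:
# Assemble the key at runtime from non-printable pieces so no contiguous
# printable literal appears for `strings` to find.
def key() -> bytes:
    return bytes(b ^ 0x5A for b in (0x31, 0x3F, 0x23, 0x7B))

def decrypt(blob: bytes) -> bytes:
    k = key()
    return bytes(c ^ k[i % len(k)] for i, c in enumerate(blob))

with open("/opt/batch/.pw.enc", "rb") as f:  # hidden, chmod 400 file (assumed)
    password = decrypt(f.read()).decode()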
Don't bother filling the password buffer with zeros; it is largely pointless. The kernel can decide to swap the page to an arbitrary location in the swap file, or, after some memory allocation, move pages around, leaving stale copies of the password in pages you can no longer reach while you only hold the new copy. (mlock(2) can at least keep the pages out of swap.)
You can use prctl(2) with PR_SET_NAME to change the process name on the fly. Unfortunately I can't currently think of any way to do this from outside other than injecting code into the running process via ptrace(2), which means enemy processes can race to read the process list before you get a chance to change the new process's name :/
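For what it's worth, the prctl(2) call itself is easy from Python via ctypes - though, as noted, it renames the thread (what ps -o comm shows) and does not rewrite the argv already visible in /proc/<pid>/cmdline:
import ctypes

PR_SET_NAME = 15  # constant from <linux/prctl.h>
libc = ctypes.CDLL("libc.so.6", use_errno=True)
libc.prctl(PR_SET_NAME, b"innocuous", 0, 0, 0)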
Alternatively, you can grab the grsecurity kernel patches, and turn on CONFIG_GRKERNSEC_PROC_USER:
If you say Y here, non-root users will only be able to view their own processes, and are restricted from viewing network-related information, and viewing kernel symbol and module information.
This will stop ps from being able to view the running command, as ps reads from /proc/<pid>/cmdline.
Said interface requires us to pass a plain-text password over a command line to connect to the database. This is a terrible security practice, but we are stuck with it.
It's only a bad security practice because of problems in the O/S architecture. Would you expect other users to be able to intercept your syscalls? I wouldn't blame a developer who fell into this trap.

best approach (security) to do some admin work through a web page in Linux?

I want to build a web-based admin tool that allows the system admin to run pre-configured commands and scripts through a web page (a simple and limited webmin); what is the best approach?
I already started with Ubuntu, installing LAMP and giving the user www-data root privileges!!!
As I learned (please check the link), this is a really bad move!!! So how do I build such a web-based system without the security risk?
cheers
I did something like this a couple of years ago. It was (I like to think) fairly secure and only accessible to a limited number of pre-vetted, authenticated users, but it still left me with an uneasy feeling! If you can avoid doing it, I'd recommend you do :)
I had a database sitting between the frontend web-tier and the script which was actually executing actions. The relevant table contained a symbolic command name and an optional numeric argument, which was sufficient for my needs. This allows you to audit what's been executed, provides a quick and dirty way to have a non-www user do things, and means if the website is compromised they're constrained by the DB structure (somewhat) and the script which pulls data from it.
The data from the DB can be read by a daemon running in a separate, unprivileged account. The daemon pulls and sanitises data from the DB and maps the 'command' to an actual executable (with a hard-coded map, so commandA executes A, commandB executes foo, and anything else would get flagged as an error). The account can be locked down using AppArmor (or SELinux, I imagine) to prevent it from executing, reading or writing anything you don't expect it to. Have a system in place to alert you of any errors from either the daemon or AppArmor/SELinux.
The executables which the daemon runs can be setuid'd if appropriate, or you can use the sudoers mechanism to allow the unprivileged account to execute them without a password.
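A condensed Python sketch of that daemon (table layout, paths, and the command map are all assumptions):
import sqlite3, subprocess, time

# Hard-coded map from symbolic command names to pre-audited executables.
COMMANDS = {
    "restart_web": ["/usr/local/sbin/restart-web.sh"],
    "rotate_logs": ["/usr/local/sbin/rotate-logs.sh"],
}

db = sqlite3.connect("/var/lib/admintool/queue.db")

while True:
    row = db.execute(
        "SELECT id, name, arg FROM jobs WHERE done = 0 ORDER BY id LIMIT 1"
    ).fetchone()
    if row:
        job_id, name, arg = row
        argv = COMMANDS.get(name)
        if argv is None:
            print(f"rejected unknown command {name!r}")  # raise an alert here
        else:
            # The optional numeric argument is validated before use.
            subprocess.run(argv + ([str(int(arg))] if arg is not None else []))
        db.execute("UPDATE jobs SET done = 1 WHERE id = ?", (job_id,))
        db.commit()
    time.sleep(5)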
I already started with Ubuntu installing LAMP and give the user www-data root's privileges
Don't do this.
If you really need to execute some very specific scripts with root privileges, create predefined, very limited scripts, allow their password-less execution with sudo for a specific user, run them from your web script, and don't forget authentication.
Generally, though, this is a bad idea.
SSH is your best friend.
