I am building a suite of batch jobs that require regular access to a database, running on a Solaris 10 machine. Because of (unchangeable) design constraints, we are required to use a certain program to connect to it. Said interface requires us to pass a plain-text password over a command line to connect to the database. This is a terrible security practice, but we are stuck with it.
I am trying to make sure things are properly secured on our end. Since the processing is automated (i.e., we can't prompt for a password), and I can't store anything anywhere other than on disk, I need a strategy for storing our password securely.
Here are some basic rules:
The system has multiple users.
We can assume that our permissions are properly enforced (i.e., if a file is chmod'd to 600, it won't be publicly readable).
I don't mind anyone with superuser access looking at our stored password
Here is what I've got so far:
Store password in password.txt
$ chmod 600 password.txt
Process reads from password.txt when it's needed
Buffer overwritten with zeros when it's no longer needed
I'm sure there is a better way, though.
This is not a problem cryptography can solve. No matter the cipher used, the key will be just as accessible to the attacker as the password itself. Crypto doesn't solve all problems.
chmod 400 is best; it makes the file read-only. chmod 600 is read-write, which may or may not be a requirement. Also make sure it's chown'ed to the user that runs the process that needs it. This is really the best you can do. Even if you are sharing the machine with other users, they shouldn't be able to access it. Hopefully this is a dedicated machine; in that case there isn't much of a threat. SELinux or AppArmor will help harden the system against cross-process/cross-user attacks.
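A minimal sketch of that setup, assuming the batch jobs run under a dedicated account called batchuser (the account name is an assumption):

chown batchuser password.txt   # owned by the account that runs the jobs
chmod 400 password.txt         # owner read-only; no other regular user can open it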
Edit:
shred is the tool you need to securely delete files.
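For example (shred is part of GNU coreutils, so check that it is available on your platform):

shred -u -z password.txt   # overwrite the contents, then remove the file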
Edit: Based on Moron/Mike's comments, the Unix command ps aux will display all running processes and the command used to invoke them. For instance, the following command would be exposed to every user on the system: wget ftp://user:password@someserver/somefile.ext. A more secure alternative is to use the curl library. You should also disable your shell's history. In bash you can do this by unsetting the history file: export HISTFILE=
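As a rough sketch of that alternative, curl can read credentials from a mode-600 .netrc file instead of taking them on the command line (the host, user and paths below are placeholders):

# contents of /home/batchuser/.netrc (chmod 600):
#   machine someserver login user password secret
curl --netrc-file /home/batchuser/.netrc -O ftp://someserver/somefile.ext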
You're not far from the best approach given your constraints. You have two issues to deal with. The first is password storage. The second is using the password securely.
Dealing with the second one first -- you have a huge issue in the use of the command line program. Using options to the 'ps' command, a user can see the arguments used in running the command line program. From what you've written, this would contain the password in plain text. You mention this is an unchangeable interface. Even so, as an ethical programmer, you should raise the issue. If this were a banking application handling financial transactions, you might consider finding another job rather than being part of an unethical solution.
Moving on to securely storing the password, you don't mention what language you are using for your batch files. If you are using a shell script, then you have little recourse other than to hard-code the password within the shell script or read it in plain text from a file. From your description of storing the password in a separate file, I'm hoping that you might be writing a program in a compiled language. If so, you can do a little better.
If using a compiled language, you can encrypt the password in the file and decrypt it within your program. The key for decryption would reside in the program itself so it couldn't be read easily. Besides this, I would:
chmod 400 the file to prevent other users from reading it
add a dot prefix ('.') to the file to hide it from normal directory listing
rename the file to make it less interesting to read.
be careful not to store the key in a simple string -- the 'strings' command will print all printable strings from a Unix executable image (a quick check for this is sketched below).
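A quick way to check whether an embedded key would leak that way (the binary name is hypothetical):

strings /opt/batch/myjob | grep -iE 'pass|key'   # anything readable here is visible to every user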
Having done these things, the next steps would be to improve the key management. But I wouldn't go this far until the 'ps' issue is cleared up. There's little sense putting the third deadbolt on the front door when you plan to leave the window open.
Don't fill the password buffer with zeros; this is largely pointless. The kernel can decide to swap it to an arbitrary location in the swap file, or, after some memory allocation, it may move pages around, leaving other pages still containing the password while you only have access to the new copy.
You can use prctl(2) with PR_SET_NAME to change the process name on the fly. Unfortunately I can't currently think of any other way to do it from outside than injecting some code into the running process via ptrace(2), which means enemy processes will race to read the process list before you get a chance to change the new process's name :/
Alternatively, you can grab the grsecurity kernel patches, and turn on CONFIG_GRKERNSEC_PROC_USER:
If you say Y here, non-root users will only be able to view their own processes, and restricts them from viewing network-related information, and viewing kernel symbol and module information.
This will stop ps from being able to view the running command, as ps reads it from /proc/<pid>/cmdline.
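You can see exactly what gets exposed by reading that file yourself, e.g. for the current shell:

tr '\0' ' ' < /proc/self/cmdline; echo   # arguments are NUL-separated; show them space-separated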
Said interface requires us to pass a plain-text password over a command line to connect to the database. This is a terrible security practice, but we are stuck with it.
It's only a bad security practice because of problems in the O/S architecture. Would you expect other users to be able to intercept your syscalls? I wouldn't blame a developer who fell into this trap.
Related
We have a launcher application on Windows and Unix which execs (starts, using the exec system call) an application like RDP, PuTTY, or MSSQL. In order to invoke it, we pass it parameters such as username, password, and IP.
Recently we found that, using wmic or ps, one can find out what parameters have been passed to it, thereby exposing sensitive information like passwords.
Is there any way we can either mask those passwords or hide the parameters altogether?
Note: My launcher gets its parameters from another service, so asking for the password after invoking the application is not an option! Passwords have to be passed to the application as a parameter.
Any solutions?
It is not possible (at least not reliably on Linux) to pass program arguments securely.
A possible workaround is to pass the name of a file (or some other resource - e.g. some "reference" to some database entry) containing that password, or use some other inter-process communication facility (e.g. on Linux, fifo(7), shm_overview(7), pipe(7), unix(7), etc...) to pass this sensitive information. You might also consider using environment variables (see environ(7) & getenv(3)).
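For instance, a named pipe with restrictive permissions keeps the secret out of both argv and ordinary file contents; a minimal sketch (the consumer's --password-file option is hypothetical):

fifo=$(mktemp -u /tmp/secret.XXXXXX)   # pick an unused name
mkfifo -m 600 "$fifo"                  # pipe readable/writable by our user only
./consumer --password-file "$fifo" &   # hypothetical consumer that reads the pipe
printf '%s' "$password" > "$fifo"      # blocks until the consumer opens it, then streams the secret
rm -f "$fifo"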
On Linux, also look into proc(5) to understand what it is able to show about processes - through /proc/1234/ for the process with pid 1234. Maybe you want seccomp facilities.
On Unix, be aware of the setuid mechanism (tricky to understand). Use it carefully (it is the basic building block of most security or authentication machinery such as sudo and login), since a simple mistake could open a huge vulnerability.
For software written to work on both Unix & Windows, I recommend passing the password in some file (e.g. /tmp/secretpassword) and giving the name of that file (/tmp/secretpassword, or some D:\foo\bar on Windows) through some program argument, and making sure to use the file permission mechanisms wisely to ensure that the file is not readable by those who don't need it.
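A rough sketch of that file-based hand-off on the Unix side (the --password-file option of my-program is an assumption):

umask 077                                        # files we create are readable by owner only
printf '%s' "$password" > /tmp/secretpassword
./my-program --password-file /tmp/secretpassword
shred -u /tmp/secretpassword                     # overwrite and remove once it has been read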
I'm trying to figure out the best approach to web application configuration. The goals are:
Configurability. Force the configuration to be specified at deploy time. Make sure the configuration is kept separate from the code or deployable artifact.
Security. Keep secrets from leaking from deployment environment.
Simplicity. Make sure the solution is simple, and natural to the concept of OS process.
Flexibility. Make no assumptions about where the configuration is stored.
According to the twelve-factor app methodology, web application configuration is best provided in environment variables. It is simple and flexible, but it looks like there are some security concerns related to this.
Another approach could be to pass all the configuration as command-line arguments. This again is simple, flexible and natural to the OS, but the whole configuration is then visible in the host's process list. This might or might not be an issue (I'm no OS expert), but the solution is cumbersome at the very least.
A hybrid approach is taken by the popular framework Dropwizard, where a command-line argument specifies the config file location and the config is read from there. The thing is, it breaks the flexibility constraint by making assumptions about the location of my configuration (a local file). It also makes my application implement file access which, while often easily achieved in most languages/frameworks/libraries, is not inherently simple.
I was thinking of another approach, which would be to pass all the configuration on the application's stdin. Ultimately one could do cat my-config-file.yml | ./my-web-app in the case of a locally stored file, or even wget -O - https://secure-config-provider/my-config-file.yml | ./my-web-app. Piping seems simple and native to the OS process model. It also seems extremely flexible, as it pushes the question of how the config is provided out to the host OS.
The question is whether it conforms to the security constraint. Is it safe to assume that once piped content has been consumed it is permanently gone?
I wasn't able to google anyone trying this, hence the question.
Writing secrets into the stdin of a process is more secure than environment variables - if done correctly.
In fact, it is the most secure way I know of to pass secrets from one process to another - again if done correctly.
Of course this applies to all file-like inputs which have no file system presence and which otherwise cannot be opened by other processes; stdin is just one instance of that which is available by default and easy to write to.
Anyway, the key thing with environment variables, as the post you linked describes, is that once you put something into the environment variables it leaks into all child processes, unless you take care to clean it up.
But also, it's possible for other processes running as your user, or as any privileged/administrative user, to inspect the environment of your running process.
For example, on Linux, take a look at the files /proc/*/environ. That file exists for each running process, and you can inspect its contents for any process that is running as your user. If you are root, you can look at the environ of any process of any user.
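For instance, for a process with pid 1234 that you own:

tr '\0' '\n' < /proc/1234/environ   # entries are NUL-separated; print one per line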
This means that any local code execution exploit, even some unprivileged ones, could get access to your secrets in your environment variables, and it is very simple to do so. Still better than having them in a file, but not by much.
But when you pipe things into the stdin for a process, outside processes can only intercept that if they are able to use the debugging system calls to "attach" to the process, and monitor system calls or scan its memory. This is a much more complex process, it's less obvious where to look, and most importantly, it can be secured more.
For example, Linux can be configured to prevent unprivileged processes from even using the debugger system calls on processes they didn't start, including processes running as the same user, and some distros are starting to turn this on by default.
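One such mechanism (an assumption about which hardening feature is meant here) is the Yama LSM's ptrace_scope setting on Linux:

sysctl kernel.yama.ptrace_scope            # 0 = classic ptrace rules, 1 = only a parent may attach
sudo sysctl -w kernel.yama.ptrace_scope=1  # restrict attaching to direct children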
This means that properly writing data to stdin is in almost all cases at least as secure as, or more secure than, using an environment variable.
Note, however, that you have to "do it correctly". For example, these two won't give you the same security benefits:
my-command </some/path/my-secret-config
cat /some/path/my-secret-config | my-command
Because the secret still exists on disk. So you get more flexibility but not more security. (If, however, the cat is actually sudo cat or otherwise has more privileged access to the file than my-command, then it could be a security benefit.)
Now let's look at a more interesting case:
echo "$my_secret_data" | my-command
Is this more or less secure than an environment variable? It depends:
If you are calling this inside a typical shell, then echo is probably a "builtin", which means the shell never needs to execute an external echo program, and the variable stays within its memory before being written to the stdin.
But if you are invoking something like this from outside of a shell, then this is actually a big security leak, because it will put the variable into the command line of the executed external echo program, which on many systems can be seen by any other running process, even other unprivileged users!
So as long as you understand that, and use the right functionality to make sure you are writing directly from whatever has the credentials to your process, stdin is probably the most secure option you have.
TL;DR: stdin can give you a much smaller "surface area" for the data to leak, which means that it can help you get more security, but whether or not you do depends on exactly how you're using it and how the rest of your system is set up.
Personally, I use stdin for passing in secret data whenever I can.
I'm trying to reimplement an existing server service in Node.JS. That service can be compared to a classic FTP server: authenticated users can read/create/modify files, but restricted to the permissions given to the matching system user name.
I'm pretty sure I can't have Node.JS run as root and switch users using seteuid() or the like, since that would break concurrency.
Instead, can I let my Node.JS process run as ROOT and manually check permissions when accessing files? I'm thinking about some system call like "could user X create a file in directory Y?"
Otherwise, could I solve this by using user groups? Note that the service must be able to delete/modify a file created by the real system user, which may not set a special group just so that the service can access the file.
Running node as root sounds dangerous, but I assume there aren't many options left for you. Most FTP servers run as root too, for the same reason. Though it means you need to pay serious attention to the security of the code you are going to run.
Now to the question:
You are asking whether you can reimplement the Unix permission checks in node.js. Yes you can, but you Should Not! There is an almost 100% chance you will leave holes or miss edge cases that the Unix core has already taken care of.
Instead, use process.setuid(id) as you mentioned. It will not defeat concurrency, but you need to think in terms of parallel processes rather than async now. That is extra work, but it will spare you the headache of reinventing Unix security.
Alternatively, if all of the operations you want to carry out on the filesystem involve shell commands, then you can simply modify them to the following pattern:
runuser -l userNameHere -c 'command'
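For example (the user name, command and paths are placeholders):

runuser -l alice -c 'cp /srv/uploads/in.txt /home/alice/in.txt'   # runs the copy as alice, not root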
Ok, so over the past year I have built some rather complex automation scripts (mostly bash, but with some perl here and there) for some of the more common work we do at my place of business. They rely heavily on ImageMagick, Ghostscript, and PhantomJS to name just a few. They also traverse a huge number of directories across the network and several different file systems and host OSs... Frankly the fact that they work is a bit of a miracle and perhaps a testament to my willingness to keep beating my head against the wall... Also, trust me, this is easier and more effective than trying to corral my resources. Our archives are... organic... and certain high-ranking individuals in the company think of them as belonging to them and do not look out for the interests of the company in their management. They are, at least, relatively well backed-up these days.
In any case these scripts automate the production of a number of image-based print-ready products of varying degrees of complexity up to multi-hundred page image-heavy books, and as such some of them accept absurdly complex argument structures to do all the things they do. (P.S. embedded Javascript in SVGS is a MAGICAL thing!) These systems have been in "working beta" for a while now, which basically means I've been hand entering the commands at a terminal to run them, and I want to move them up to production and offer them as a webservice so that those in production who are not friends with the command line can use them, and to also potentially integrate them with our new custom-developed order management system.
TL;DR below
So that's the background; the problem is this:
I'm running everything on a headless CentOS 6.4 virtual machine with SELinux disabled.
Apache2 serves up my interface.sh CGI just fine, and the internet has already helped me make the POST data into shell variables. Now I need to launch the worker scripts that actually direct the heavy lifting and coordinate the binaries... from the CGI:
#get post data from form and make it into variables...
/bin/bash /path/to/script/worker.sh $arg1 $arg2 $arg3 $arg5 $arg6 $opt1 $arg7
Nothing.
httpd log shows permission denied, fair enough!
Ok, googling suggests that the script being called by the CGI must also be owned by the apache user and group, or by root with 755 permissions. Done!
Now the httpd log shows permission denied for things worker.sh is trying to do :/
Google has led me to believe that for security reasons fcgi requires that everything interacted with by the CGI process chain must have correctly controlled permissions, all the way down to the binaries and source files... Sure, this is smart for security and damage control, but almost impossible in my case. We have very dynamic data and terabytes of resources... :/
The script worker.sh exports its own environment and runs all its commands as root. This is largely to overcome the minefield of permissions disasters that I have to contend with and CentOS's own paranoia about allowing stuff to happen. I had hoped this might be a workaround, but no.
One suggestion I have seen is to simply write out the composed command to a text file and have cron or incron do something with the text file. Seems like that would work... BUT, I'd love to be able to get STDIO back into my web page as there are verbose errors and notifications (though no interaction) in many of these worker scripts, and I would like to provide notification of completion as well. Is there any way to do this that doesn't require a permissions war to be waged?
To run certain commands as another user, you can use sudo.
Set up sudo to allow passwordless access to run your command by the apache user. Then you can have the CGI script call sudo /path/to/script args to run it as root (or -u for another user of your choice).
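A hedged sketch of that setup, assuming the web server runs as the apache user (always edit sudoers with visudo):

# /etc/sudoers.d/worker:
#   apache ALL=(root) NOPASSWD: /path/to/script/worker.sh

# then, in interface.sh, after the POST variables are set:
sudo /path/to/script/worker.sh "$arg1" "$arg2" "$arg3" "$arg5" "$arg6" "$opt1" "$arg7"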
It's very hard to make this secure, so you should make sure your CGI script is only accessible by trustworthy individuals.
I want to encrypt a folder with encfs or eCryptfs on Linux. I can do that, but I want only a specific process to be able to access it, with decryption occurring automatically for that process.
The process itself should not need to supply the encryption key.
Can anyone help me?
File systems are designed precisely to allow access by more than one process. Wanting to restrict that access to only one process is somewhat the opposite of this idea, so it won't be smooth, however you solve your task.
A much more straightforward way, if you want just one process to have access, would be to not use a file system but a database, or just the contents of a single file. That way it would be easy to restrict access to exactly one process.
If you want to stick with encfs (or similar), you could let the process run as a specific user which is the only user with read and execute permissions on the mounted file system's root.
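A rough sketch of that arrangement, assuming a dedicated batchsvc account and these paths:

sudo -u batchsvc encfs /home/batchsvc/.encrypted /home/batchsvc/cleartext   # encfs asks for the passphrase at mount time
# FUSE mounts are, by default, visible only to the mounting user, so only
# processes running as batchsvc can read the decrypted view:
sudo -u batchsvc ls /home/batchsvc/cleartext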