Suppose I have, in a Bash shell script, an environment variable that holds a sensitive value (e.g. a password). How may I securely overwrite the memory that holds this variable's value before exiting my script?
If possible, the technique used to do so should not depend on the particular version or build of Bash I'm using. I'd like to find a standards-respecting/canonical way to do this that works on all correct Bash builds.
Please note that the following are not in the scope of the question:
1. How the sensitive value is placed into the environment variable
2. How the sensitive value stored in the environment variable is passed to the program that consumes it
7/10/2017 5:03 AM Update to Address Comment by rici
rici, thank you for your comment, copied here:
"Exiting the script is really the only way to reliably delete an
environment variable from the script's resident memory. Why do you
feel the string is less safe after the script terminates than while it
is running?"
My intent here is to follow good practice and actively scrub all cryptographically-sensitive values from memory as soon as I am through using them.
I do not know if Bash actively scrubs the memory used by a script when that script exits. I suspect that it does not. If it indeed does not, the sensitive cryptographic value will remain resident in memory and is subject to capture by an adversary.
In C/C++, one can explicitly scrub a value's memory location (e.g. with explicit_bzero or memset_s, taking care that the compiler doesn't optimize the write away). I am trying to find out if this is possible in Bash. It may be that Bash is simply not the right tool for security-sensitive applications.
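For what it's worth, the closest Bash-level approximation I've found is a best-effort overwrite-then-unset, sketched below. To be clear, this does not guarantee scrubbing: Bash may keep other internal copies of the string, and freed allocations are not wiped.

```shell
# Best-effort sketch only: overwrite the variable's value with junk of the
# same length, then unset it. Bash may still hold other copies in memory.
secret='hunter2'
secret=$(printf '%*s' "${#secret}" '' | tr ' ' 'x')   # overwrite with junk
scrubbed=$secret                                      # kept only to illustrate
unset secret                                          # drop the shell's reference
```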
First off, we need to distinguish between environment variables and shell variables. Environment variables exist for the lifetime of the process and cannot be overwritten. Not only that, but on many systems they are trivially visible to other processes. For example Linux provides the /proc filesystem which allows for lots of introspection of running processes, including observing their environment variables.
Here's an example of a Bash script that attempts to overwrite an environment variable. Notice that although the value within the script changes, the process' environment is not changed:
$ SECRET=mysecret bash -c \
'strings /proc/$$/environ | grep SECRET
SECRET=overwritten
echo "SECRET=$SECRET"
strings /proc/$$/environ | grep SECRET'
SECRET=mysecret
SECRET=overwritten
SECRET=mysecret
So it is never safe to store secrets in environment variables unless you control all access to the machine.
Holding a secret in a (non-environment) shell variable is much more secure, as an attacker would need to be able to access the memory of the process, which is generally something only the kernel can do. And while you're correct that minimizing the time you hold onto such secrets is a good practice, it's not generally worth jumping through lots of hoops for. It's far more important to secure your system and execution environment, because a motivated attacker who has sufficient access can observe a secret even if it only lives in memory for a brief time. Holding a secret in memory for longer than strictly necessary is only a marginal increase in risk, whereas running a privileged program on an insecure system already means all bets are off.
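As a quick illustration of the distinction (assuming a Linux system with procfs), a plain shell variable that is never exported does not appear in the process environment at all:

```shell
# A non-exported shell variable lives only in the shell's internal table,
# not in the environment block visible via /proc/<pid>/environ.
secret='swordfish'                 # note: no 'export'
if tr '\0' '\n' < /proc/$$/environ | grep -q '^secret='; then
  in_environ=yes
else
  in_environ=no
fi
unset secret
```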
Related
I know it's frowned upon to use passwords in command line interfaces like in this example:
./commandforsomething -u username -p plaintextpassword
My understanding is that the reason for that (on Unix systems, at least) is that the password can be read from the terminal scrollback as well as from the .bash_history file (or the equivalent for whatever flavor of shell you use).
HOWEVER, I was wondering whether it is safe to use that sort of interface with sensitive data programmatically. For example, in Perl you can execute a command with backticks, exec, or system (I'm not 100% sure of the differences between these, apart from backticks returning the executed command's output... but that's a question for another post, I guess).
So, my question is this: Is it safe to do things LIKE
system("command", "userarg", "passwordarg");
as it essentially does the same thing, just without getting posted in scrollback or history? (note that I only use perl as an example - I don't care about the answer specific to perl but instead the generally accepted principle).
It's not only about shell history.
ps shows all arguments passed to a program. The reason passing arguments like this is bad is that you could potentially see other users' passwords just by running ps in a loop. The cited code won't change much, as it does essentially the same thing.
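You can reproduce the exposure with any long-running command standing in for the password-taking program (sleep is used here purely as a placeholder):

```shell
# Any local user can read another process's argv via ps, so a password
# passed as an argument is visible system-wide for the program's lifetime.
sleep 5 &                           # stands in for './commandforsomething -p hunter2'
bgpid=$!
seen=$(ps -o args= -p "$bgpid")     # the full command line, as any user would see it
kill "$bgpid" 2>/dev/null
```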
You can try to pass some secrets via the environment, since if a user doesn't have access to the given process, its environment won't be shown. This is better, but still a pretty bad solution (e.g. if the program fails and dumps core, all the passwords will be written to disk).
If you do use environment variables, check them with ps e (on Linux; ps -E on macOS), which shows a process's environment variables. Run it as a different user than the one executing the program: basically, simulate the attacker and see whether you can snoop the password. On a properly configured system you shouldn't be able to.
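A minimal sketch of that "simulate the attacker" check on Linux (the variable name and value are made up): run the read below as the same user and it succeeds; run it as a different, unprivileged user and it should fail with a permission error.

```shell
# Read a target process's environment via procfs. /proc/<pid>/environ is a
# NUL-separated list, readable only by the process owner and root.
SECRET=hunter2 sleep 5 &            # the child gets SECRET in its environment
tpid=$!
sleep 0.2                           # give the child a moment to exec
env_dump=$(tr '\0' '\n' < "/proc/$tpid/environ" | grep '^SECRET=' || true)
kill "$tpid" 2>/dev/null
```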
I have a few scripts in my $profile that require passwords for things like connecting to a corporate VPN or sending a command to a virtualized VM.
I don't want to type these passwords over and over and storing the passwords in my $profile is insecure. So I've come up with a solution. On $profile startup, I do something like this
$env:VpnPassword = (Get-Credential Domain\George.Mauer).GetNetworkCredential().Password
So when powershell starts I enter a password one time and in any scripts I can then use $env:VpnPassword.
I've confirmed that the variable is available only to the PS Session. And my reasoning is, since it seems to be in memory, that's a reasonably safe place to store it.
Is my logic sound? Are the $env values I'm creating stored only in memory? What about the pagefile? Is that something that could be used to somehow grab these strings? Is there a better way to achieve what I'm trying to do without introducing whole new systems?
I verified that $env lives only in RAM; it left no traces after PowerShell closed. The underlying destructor fires even if you force the PowerShell process to break, so it still cleans up. Even with a shared-memory attack, that address space is only available to system components and kernel-mode drivers, as far as I know. You would have to know the exact memory address and size of the data to find anything, assuming you even had a sufficiently elevated process. That being said, I don't really see any need for encryption.
This answer will begin the self destruct sequence on WM_CLOSE...
I'm reading a book about hacking the kernel and one area the author keeps coming back to is shell code, that many attempts at kernel hacking try to find a way to execute shell code.
Can someone elaborate more on this topic, particularly can you clarify "shell code."
How does shell code get around sudo in *NIX systems or not being Admin in a windows machine?
Are there examples of shell code attacks that aren't OS specific? I would think one has to be targeting specific OS.
Shellcode is the payload used when exploiting a vulnerability; it creates a command shell from which the attacker can control the machine.
A typical shellcode, when run, might open a network connection and spawn cmd.exe on a Windows machine (or /bin/sh on Linux/Unix), piping stdin and stdout over the network connection. The attacker can then complete the connection from their own machine and enter commands and get feedback as if they were sitting at the compromised machine.
A buffer overflow is not shell code. It is the vulnerability that is exploited to execute the shell code.
The buffer overflow is exploited to copy the shell code to the user's machine and overwrite the return address on the program's stack. When the currently executing function returns, the processor jumps to the uploaded shell code which creates the shell for the attacker.
For more information on exploiting buffer overflows, have a look at Smashing the Stack for Fun and Profit.
You can try to use the -fno-stack-protector flag for gcc but I'm not very familiar with OSX or whatever stack protections it may use.
If you want to play around with buffer overflows, modern compilers and modern OSs have protections in place that make this difficult. Your best bet would be to grab yourself a Linux distro and turn them off. See this question for more information on disabling these protections.
Note you don't need to have a buffer overflow to execute a shell code. I've demonstrated opening a remote shell using a command injection exploit to upload and execute a batch file.
Essentially it's finding a buffer overflow or similar technique that allows you to insert malicious code into a process running as root.
For example, if you used a fixed sized buffer and you overrun that buffer, you can essentially overwrite memory contents and use this to execute a malicious payload.
A simple shell code snippet that can come back to bite you is:
/bin/sh
or inside a C program:
system("/bin/sh");
If you can direct your exploits to execute such a line of code (e.g. through a buffer overflow that hijacks the intended control path of the program), you get a shell prompt with the victim's privileges and you're in.
Basically, when a program runs, everything related to it (variables, instructions, etc.) is stored in memory, in buffers.
Memory is essentially a huge pile of bytes in your RAM.
So, for the purpose of our example, let's say there's a variable Name stored in bytes 1-10, and that bytes 11-30 are used for storing instructions. Clearly the programmer expects Name to be at most 10 bytes long. If I supply a 20-byte Name, its buffer is going to overflow into the area that holds the instructions. So I design the latter 10 bytes of my Name such that the instructions get overwritten by naughty ones.
'innocentmeNAUGHTYCOD'
That's an Attack.
Though not all instances are this obvious, there's some vulnerability in almost every large piece of code. It's all about how you exploit it.
I know that system wide environment variables can be set by adding entries into
/etc/environment
or
/etc/profile
But that requires a system restart or an X restart.
Is it possible to set an environment variable in Ubuntu/Linux so that it is immediately available system-wide, without restarting the OS or logging out the user?
The simple answer is: you cannot do this in general.
Why can there be no general solution?
The "why?" needs a more detailed explanation. In Linux, the environment is process-specific. Each process environment is stored in a special memory area allocated exclusively for this process.
As an aside: to quickly inspect the environment of a process, have a look at /proc/<pid>/environ (or try /proc/self/environ for the environment of the currently running process, such as your shell).
When a ("parent") process starts another ("child") process (via fork(2)), the environment of the parent is copied to produce the environment of the child. There is no inheritance-style association between the two environments thereafter; they are completely separate. So there is no "global" or "master" environment we could change to achieve what you want.
Why not simply change the per-process environment of all running processes? Because the memory area for the environment is in a well-defined location (basically right before the memory allocated for the stack), you can't easily extend it without corrupting other critical memory areas of the process.
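The copy-on-start behavior is easy to demonstrate: the child can change its copy freely without the parent ever noticing.

```shell
# A child process receives a copy of the environment; modifying it in the
# child has no effect on the parent.
export GREETING=hello
child=$(sh -c 'GREETING=goodbye; echo "$GREETING"')
parent=$GREETING
```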
Possible half-solutions for special cases
That said, one can imagine several special cases where you could indeed achieve what you want.
Most obviously, if you make "size-neutral" changes, you could conceivably patch up the environments of all processes. For example, replace every USER=foo environment variable (if present) with USER=bar. A rather special case, I fear.
If you don't really need to change the environments of all processes, but only of a class of well-known ones, more creative approaches might be possible. Vorsprung's answer is an impressive demonstration of doing exactly this with only Bash processes.
There are probably many other special cases, where there is a possible solution. But as explained above: no solution for the general case.
This perl program uses gdb to change the USER variable in all currently running bash shells to whatever is given as the program arg. To set a new variable the internal bash call "set_if_not" could be used
#!/usr/bin/perl
use strict;
use warnings;
my @pids = qx(ps -C bash -o pid=);
chomp @pids;
my $foo = $ARGV[0];
print "changing user to $foo\n";
print "@pids\n";
open( my $gdb, "|gdb" ) || die "$! gdb";
select($gdb);
$|++;
for my $pid (@pids) {
print "attach $pid\n";
sleep 1;
print 'call bind_variable("USER","' . $foo . '",0)' . "\n";
sleep 1;
print "detach\n";
}
This only works with Bash (I tested it only with version 4.1 on Ubuntu 10.04 LTS) and does not alter the environment of arbitrary already-running programs. Obviously it must be run as root.
I fear the solution here is a frustrating one: Don't use environment variables. Instead use a file.
So, instead of setting up /etc/environment with:
SPECIAL_VAR='some/path/I/want/later/'
And calling it with:
$SPECIAL_VAR
Instead, create a file at ~/.yourvars with the content:
SPECIAL_VAR='some/path/I/want/later/'
And source the thing every time you need the variable:
cd `source ~/.yourvars; echo $SPECIAL_VAR`
A hack? Perhaps. But it works.
On Unix, is there any way that one process can change another's environment variables (assuming they're all being run by the same user)? A general solution would be best, but if not, what about the specific case where one is a child of the other?
Edit: How about via gdb?
Via gdb:
(gdb) attach process_id
(gdb) call putenv ("env_var_name=env_var_value")
(gdb) detach
This is quite a nasty hack and should only be done in the context of a debugging scenario, of course.
You probably can do it technically (see other answers), but it might not help you.
Most programs will expect that env vars cannot be changed from the outside after startup, hence most will probably just read the vars they are interested in at startup and initialize based on that. So changing them afterwards will not make a difference, since the program will never re-read them.
If you posted this as a concrete problem, you should probably take a different approach. If it was just out of curiosity: Nice question :-).
Substantially, no. If you had sufficient privileges (root, or thereabouts) and poked around /dev/kmem (kernel memory), and you made changes to the process's environment, and if the process actually re-referenced the environment variable afterwards (that is, the process had not already taken a copy of the env var and was not using just that copy), then maybe, if you were lucky and clever, and the wind was blowing in the right direction, and the phase of the moon was correct, perhaps, you might achieve something.
Quoting Jerry Peek:
You can't teach an old dog new tricks.
The only thing you can do is to change the environment variable of the child process before starting it: it gets the copy of the parent environment, sorry.
See http://www.unix.com.ua/orelly/unix/upt/ch06_02.htm for details.
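A tiny sketch of that supported approach (MYVAR is a made-up name):

```shell
# The only clean options: set the variable for a single child at launch,
# or export it so all subsequent children inherit a copy.
one_shot=$(MYVAR=value printenv MYVAR)   # only this one child sees MYVAR
export MYVAR=value                       # every child started after this sees it
inherited=$(sh -c 'printenv MYVAR')
```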
Just a comment on the answer about using /proc: under Linux, /proc is supported, but it does not work for this. You cannot change the /proc/${pid}/environ file, even if you are root: it is absolutely read-only.
I could think of the rather contrived way to do that, and it will not work for arbitrary processes.
Suppose that you write your own shared library which implements 'char *getenv'. Then, you set up 'LD_PRELOAD' or 'LD_LIBRARY_PATH' env. vars so that both your processes are run with your shared library preloaded.
This way, you will essentially have a control over the code of the 'getenv' function. Then, you could do all sorts of nasty tricks. Your 'getenv' could consult external config file or SHM segment for alternate values of env vars. Or you could do regexp search/replace on the requested values. Or ...
I can't think of an easy way to do that for arbitrary running processes (even if you are root), short of rewriting dynamic linker (ld-linux.so).
Or get your process to update a config file for the new process and then either:
perform a kill -HUP on the new process to reread the updated config file, or
have the process check the config file for updates every now and then. If changes are found, then reread the config file.
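The SIGHUP variant can be sketched with the current shell standing in for the long-running process (the config file name is arbitrary):

```shell
# Re-read a config file when signalled, instead of trying to mutate the
# process's environment from outside.
conf=$(mktemp)
echo 'COLOR=red' > "$conf"
reload() { . "$conf"; }      # sourcing the file updates our variables
trap reload HUP
reload
before=$COLOR
echo 'COLOR=blue' > "$conf"  # "another process" updates the config...
kill -HUP $$                 # ...and signals us to pick it up
after=$COLOR
rm -f "$conf"
```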
It seems that putenv doesn't work now, but setenv does.
I was testing the accepted answer while trying to set the variable in the current shell with no success
$] sudo gdb -p $$
(gdb) call putenv("TEST=1234")
$1 = 0
(gdb) call (char*) getenv("TEST")
$2 = 0x0
(gdb) detach
(gdb) quit
$] echo "TEST=$TEST"
TEST=
and the variant how it works:
$] sudo gdb -p $$
(gdb) call (int) setenv("TEST", "1234", 1)
$1 = 0
(gdb) call (char*) getenv("TEST")
$2 = 0x55f19ff5edc0 "1234"
(gdb) detach
(gdb) quit
$] echo "TEST=$TEST"
TEST=1234
Not as far as I know. Really you're trying to communicate from one process to another which calls for one of the IPC methods (shared memory, semaphores, sockets, etc.). Having received data by one of these methods you could then set environment variables or perform other actions more directly.
If your unix supports the /proc filesystem, then it's trivial to READ the env - you can read the environment, commandline, and many other attributes of any process you own that way. Changing it... Well, I can think of a way, but it's a BAD idea.
The more general case... I don't know, but I doubt there's a portable answer.
(Edited: my original answer assumed the OP wanted to READ the env, not change it)
UNIX is full of Inter-process communication. Check if your target instance has some. Dbus is becoming a standard in "desktop" IPC.
I change environment variables inside the Awesome window manager using awesome-client, which is a D-Bus "sender" of Lua code.
Not a direct answer but... Raymond Chen had a [Windows-based] rationale around this only the other day :-
... Although there are certainly unsupported ways of doing it or ways that work with the assistance of a debugger, there’s nothing that is supported for programmatic access to another process’s command line, at least nothing provided by the kernel. ...
That there isn’t is a consequence of the principle of not keeping track of information which you don’t need. The kernel has no need to obtain the command line of another process. It takes the command line passed to the CreateProcess function and copies it into the address space of the process being launched, in a location where the GetCommandLine function can retrieve it. Once the process can access its own command line, the kernel’s responsibilities are done.
Since the command line is copied into the process’s address space, the process might even write to the memory that holds the command line and modify it. If that happens, then the original command line is lost forever; the only known copy got overwritten.
In other words, any such kernel facility would be:
1. difficult to implement
2. potentially a security concern
However, the most likely reason is simply that there are limited use cases for such a facility.
Yes, you can, but it doesn't make sense: to do it you have to invade a running process and change its memory.
Environment variables are part of the running process; they are copied into a process's memory when it starts, and from then on only the process itself owns them.
If you change env vars while the process is reading them, they could become corrupted. But you can still change them if you attach a debugger to the process, since the debugger has access to the attached process's memory.