Shell script password security of command-line parameters - linux

If I use a password as a command-line parameter it's public on the system using ps.
But if I'm in a bash shell script and I do something like:
...
{ somecommand -p mypassword }
...
is this still going to show up in the process list? Or is this safe?
How about subshells: ( ... )? Still unsafe, right?
What about a coprocess?

Command lines will always be visible (if only through /proc).
So the only real solution is: don't. You can supply the password on stdin, or on a dedicated file descriptor, instead:
./my_secured_process some parameters 3<<< "b#dP2ssword"
with a script like the following (kept deliberately simple):
#!/bin/bash
cat 0<&3
(this sample would just dump a bad password to stdout)
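A slightly more realistic receiver reads the secret from fd 3 into a variable instead of dumping it (a minimal sketch; the variable name is arbitrary):
#!/bin/bash
# Read one line from fd 3; -r keeps backslashes literal, -u 3 reads from
# the dedicated descriptor instead of stdin.
IFS= read -r -u 3 password
exec 3<&-                           # close the descriptor right away
echo "got a password of ${#password} characters"   # never echo the secret itself
Invoked the same way: ./my_secured_process some parameters 3<<< "b#dP2ssword"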
Now all you need to be concerned with is:
MITM (spoofed scripts that eavesdrop on the password, e.g. by subverting PATH)
your shell history retaining the password in the command line (look at HISTIGNORE for bash, e.g.; see the sketch after this list)
the security of the script that contains the password redirection
the security of the ttys used; keyloggers; ... as you can see, we have now descended into 'general security principles'
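For the history point, bash can be told not to record matching commands at all (a sketch; the patterns are examples to adapt to your own commands):
# Don't write commands matching these colon-separated patterns to history
export HISTIGNORE='*--password=*:* -p *'
# Alternatively, ignore any command typed with a leading space
export HISTCONTROL=ignorespace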

How about using a file descriptor approach:
env -i bash --norc # clean up environment
set +o history
read -s -p "Enter your password: " passwd
exec 3<<<"$passwd"
mycommand <&3 # cat /dev/stdin in mycommand
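On the receiving side, mycommand then just reads its stdin (a sketch, assuming mycommand is a script you control):
#!/bin/bash
# The caller redirected fd 3 onto our stdin, so the password arrives there.
IFS= read -r passwd
# ... use "$passwd" without it ever appearing in argv ...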
See:
Hiding secret from command line parameter on Unix

The called program can change its command line by simply overwriting argv like this:
#include <string.h>
#include <unistd.h>

int main(int argc, char** argv) {
    /* Length of the whole argv area: from the start of argv[0] to the
       terminating NUL of the last argument. */
    int arglen = argv[argc-1] + strlen(argv[argc-1]) + 1 - argv[0];
    memset(argv[0], 0, arglen);                     /* wipe all arguments */
    strncpy(argv[0], "secret-program", arglen - 1); /* leave a fake name  */
    sleep(100);
    return 0;
}
Testing:
$ ./a.out mySuperPassword &
$ ps -f
UID PID PPID C STIME TTY TIME CMD
me 20398 18872 0 11:26 pts/3 00:00:00 bash
me 20633 20398 0 11:34 pts/3 00:00:00 secret-program
me 20645 20398 0 11:34 pts/3 00:00:00 ps -f
$
UPD: I know this is not completely secure and may be subject to race conditions, but many programs that accept a password on the command line use this trick.

The only way to avoid showing up in the process list at all is to reimplement the entire functionality of the program you want to call in pure Bash functions. Function calls are not separate processes. Usually this is not feasible, though.
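For very small tasks it can be done; a function call like the following never shows up in ps because no new process is created (a toy sketch, not a substitute for a real program):
# Runs entirely inside the current shell: nothing for ps to show
check_password() {
    local candidate=$1
    [[ $candidate == "b#dP2ssword" ]]
}
if check_password "mypassword"; then echo granted; else echo denied; fi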

Related

Switch user without creating an intermediate process

I'm able to use sudo or su to execute a command as another user. By combining with exec, I'm able to replace the current process with sudo or su, and a child process running the command. But I want to replace the current process with the command running as another user. How do I do that?
Testing with sleep inf as the command, and someguy as the user:
exec su someguy -c 'sleep inf'
This gives me from pstree:
bash───su───sleep
And
exec sudo -u someguy sleep inf
gives
bash───sudo───sleep
In both cases I just want the sleep command, with bash as the parent.
I expect I could do this from C with some sequence of setuid() and exec().
The difference between sudo sleep and exec sudo sleep is that in the second command the sudo process replaces the bash image, and the calling shell process exits when sleep exits:
pstree -p $$
bash(8765)───pstree(8943)
((sleep 1; pstree -p $$ )&); sudo -u user sleep 2
bash(8765)───sudo(8897)───sleep(8899)
((sleep 1; pstree -p $$ )&); exec sudo -u user sleep 2
sudo(8765)───sleep(8993)
However, whether sudo or su forks a new process depends on their design and implementation (some sources found here).
From the sudo man page:
Process model
When sudo runs a command, it calls fork(2), sets up the execution environment as described above, and calls the execve system call in the child process. The main sudo process waits until the command has completed, then passes the command's exit status to the security policy's close function and exits. If an I/O logging plugin is configured or if the security policy explicitly requests it, a new pseudo-terminal ("pty") is created and a second sudo process is used to relay job control signals between the user's existing pty and the new pty the command is being run in. This extra process makes it possible to, for example, suspend and resume the command. Without it, the command would be in what POSIX terms an "orphaned process group" and it would not receive any job control signals. As a special case, if the policy plugin does not define a close function and no pty is required, sudo will execute the command directly instead of calling fork(2) first. The sudoers policy plugin will only define a close function when I/O logging is enabled, a pty is required, or the pam_session or pam_setcred options are enabled. Note that pam_session and pam_setcred are enabled by default on systems using PAM.
I do not share that observation or those conclusions. See below:
I created two shellscripts:
$ cat just_sudo.sh
#!/bin/bash
sudo sleep inf
$ cat exec_sudo.sh
#!/bin/bash
exec sudo sleep inf
So, one with an exec, one without. If I do a pstree to see the starting situation, I get:
$ pstree $$
bash───pstree
$ echo $$
17250
This gives me the baseline. Next I launched both scripts:
$ bash just_sudo.sh &
[1] 1218
$ bash exec_sudo.sh &
[2] 1220
And then, pstree gives:
$ pstree $$
bash─┬─bash───sleep
     ├─pstree
     └─sleep
The first is just_sudo, the second exec_sudo. Both run as root:
$ ps -ef | grep sleep
root 1219 1218 0 14:01 pts/4 00:00:00 sleep inf
root 1220 17250 0 14:01 pts/4 00:00:00 sleep inf
Once again, the first is just_sudo and the second exec_sudo. You can see that the parent PID of the sleep from exec_sudo is the interactive shell from which the scripts were launched, and its PID is 1220, which was the PID we saw when the script was launched in the background.
If you use two terminal windows and do not put it in the background, this will work also:
terminal 1                   terminal 2
$ echo $$
16053                        $ pstree 16053
                             bash
$ sudo sleep inf
                             $ pstree 16053
                             bash───sleep
^C
$ exec sudo sleep inf
                             $ pstree 16053
                             sleep
^C
( window is closed )
So, on my Linux system, the behavior is not as you suggest. The only way that sudo may remain in the process tree is if it runs in the existing tty (so without an exec), or if it is invoked with a pseudo-terminal, for example as exec sudoedit.
I am not sure whether this can be done using sudo or su, but you can easily achieve it with a simple C program. I will show a very minimal one with a hard-coded command and user id, but you can always customize it to your liking.
test.c
#include <unistd.h>
#include <stdlib.h>

/* Drop privileges to the given group/user, then replace this process
   image with the hard-coded command. */
void runAs(gid_t gid, uid_t uid) {
    /* Order matters: drop the group first, while we still can. */
    if (setgid(gid) != 0 || setuid(uid) != 0)
        exit(1);
    char *args[] = {"sleep", "inf", NULL};
    execvp(args[0], args);   /* only returns on failure */
    exit(1);
}

int main(void)
{
    runAs(1000, 65534);
    return 0;
}
Note: on my machine, 1000 is the uid/gid of the vagrant user and group; 65534 is the uid/gid of the nobody user and group.
build.sh
#!/bin/bash
sudo gcc test.c -o sosu
sudo chown root:root sosu
sudo chmod u+s sosu
Now time for a test
$ pstree $$ -p
bash(23251)───pstree(28627)
$ ./sosu
Now from another terminal
$ pstree -p 23251
bash(23251)───sleep(28687)
$ ps aux | grep [2]8687
nobody 28687 0.0 0.0 7288 700 pts/0 S+ 11:40 0:00 sleep inf
As you can see, the process runs as nobody and is a child of bash.
In order to detach a command, you have to deal with its standard I/O:
you could either close stdin, stdout and stderr entirely, or point them elsewhere.
Try this:
su - someguy -c 'exec nohup sleep 60 >/tmp/sleep.log 2>/tmp/sleep.err <<<"" &'
Note:
su - someguy -c 'exec nohup sleep 60 &'
is enough, and
su - someguy -c 'exec sleep 60 >/tmp/sleep.log 2>/tmp/sleep.err <<<"" &'
will work too.
Consider having a look at man nohup.
Note 2: Under bash, you could use:
su - someguy -c 'exec sleep 60 & disown -h'
... And read help disown or man bash.
A little demo showing how to close all three standard I/O streams:
su - someguy -c 'exec 0<&- ; exec 1>&- ; exec 2>&- ; exec sleep 60 &'
quick test:
pstree $(ps -C sleep ho pid)
sleep

Updating environment variables in Bash

I have one long-running script which does some work with AWS.
I have another script which sets environment variables for AWS authentication, but those are only valid for 15 minutes.
I can't change the long-running script, so is there any way to have a cron job (or anything else) update the environment variables in the shell where the long-running script is running?
Elaborating the comment:
Assumptions
The long running script cannot be modified.
The long running script will call an executable file that can be modified (for the sake of the example, let's assume that the executable file is /usr/local/bin/callable).
You've permissions to rename /usr/local/bin/callable and create a new file under that file path and name.
Either the long running script runs as root, or /usr/local/bin/callable must be able to escalate privileges via the setuid bit.
You'll need gdb installed.
You'll need to have gcc installed if the long running script isn't running as root.
Risks
If this is a critical system and security is a moderate to major concern, do not use any of the following procedures.
Although unlikely, attaching to a running process and injecting calls into it may cause unexpected or undefined behaviour. If this is a critical system doing critical work, do not use any of the following procedures.
Generally, all these procedures are a bad idea, but they represent one possible solution.
Use at your own risk.
Procedures (for long running script running as root)
bash# mv /usr/local/bin/callable /usr/local/bin/callable.orig
bash# cat > /usr/local/bin/callable << EOF
> #!/bin/bash
>
> echo -e "attach ${PPID}\ncall setenv(\"VAR_NAME\", \"some_value\", 1)\ndetach" | /usr/bin/gdb >& /dev/null
>
> /usr/local/bin/callable.orig
>
> EOF
bash# chmod 755 /usr/local/bin/callable
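To verify that the variable actually landed in the shell, query it with gdb from another terminal; reading /proc/PID/environ will not work, since that file only shows the initial environment (PID is a placeholder for the shell's pid, and the cast helps when gdb lacks debug symbols):
bash# gdb -p PID -batch -ex 'call (char *) getenv("VAR_NAME")'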
Procedures (for long running script NOT running as root)
bash# mv /usr/local/bin/callable /usr/local/bin/callable.orig
bash# cat > /usr/local/bin/callable.c << EOF
> #include <stdio.h>
> #include <sys/types.h>
> #include <unistd.h>
> #include <stdlib.h>
> int main(void) {
> char inject[128]; /* You may want to increase this size, based on your environment variables that will affect the size of the string */
> uid_t save_uid = getuid();
> gid_t save_gid = getgid();
> sprintf(inject, "echo -e \"attach %u\ncall setenv(\\\"VAR_NAME\\\", \\\"some_value\\\", 1)\ndetach\" | /usr/bin/gdb >& /dev/null", getppid());
> setreuid(0, 0);
> setregid(0, 0);
> system(inject);
> setregid(save_gid, save_gid);
> setreuid(save_uid, save_uid);
> system("/usr/local/bin/callable.orig");
> return 0;
> }
> EOF
bash# gcc -o /usr/local/bin/callable /usr/local/bin/callable.c
bash# rm -f /usr/local/bin/callable.c
bash# chown root:long_running_script_exclusive_group /usr/local/bin/callable
bash# chmod 4750 /usr/local/bin/callable
Bonus
Instead of intercepting, you can, as you stated, use a cron job to attach to the process with gdb (this will at least spare you from intercepting the long running script with another script and, in the worst case, from creating a setuid binary to do it). You will, however, need to know or fetch the PID of the long running script's shell process (it changes each time the script is started). It is also prone to failure due to timing problems (the script may not be running when the crontab triggers). A sketch follows.
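Such a cron job could look roughly like this (a sketch; the script name and the AWS variable are placeholders, and pgrep -f may match more than one process):
#!/bin/bash
# Locate the shell running the long script and inject a fresh value via gdb.
pid=$(pgrep -f long_running_script.sh | head -n 1)
[ -n "$pid" ] || exit 0     # not running right now; try again on the next tick
gdb -p "$pid" -batch \
    -ex 'call (int) setenv("AWS_SESSION_TOKEN", "fresh_token_here", 1)' \
    >/dev/null 2>&1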
References
Changing environment variable of a running process
Is there a way to change another process's environment variables?

Hiding command-line arguments to a Perl script

Let's say I have written a Perl script called "foo.pl" that takes in a password argument via the -p switch.
However, while it is running, anyone can do a ps and see the entire command-line string, including the password:
$ ps a |grep 'foo\.pl'
32310 pts/4 S+ 0:00 /usr/bin/perl -w ./foo.pl -p password
32313 pts/5 S+ 0:00 grep foo.pl
What is the easiest/simplest way to hide the password and replace it with something like xxxxxx?
Ask for the password from inside the script, so you don't have to pass it as an argument.
Update
Apparently this works for me, simulating mysql's behaviour:
#!/usr/bin/perl
($0 = "$0 #ARGV") =~ s/--password=\K\S+/x/;
<STDIN>;
$ ./s --user=me --password=secret
^Z
$ ps
PID TTY TIME CMD
1637 ttys000 0:00.12 -bash
2013 ttys000 0:00.00 ./s --user=me --password=x
Under MacOS 10.6
Passing passwords on the command line is not really a good idea, as already mentioned.
But: you can usually (it is OS-dependent) change the name that is shown by ps by assigning to $0.
e.g. (tested on Linux)
$ cat secret.pl
#!/usr/bin/perl
$0 = "my secret perl script";
sleep 15;
$ ./secret.pl -p foobar &
[2] 426
$ ps a | grep perl
426 pts/0 S 0:00 my secret perl script
428 pts/0 S+ 0:00 grep perl
See the section on $0 in the perlvar manpage for details.
There are a couple of ways to go. The most immediate is to (like sidyll says) prompt for the password in the actual script. Don't put in on the command line, and you won't have to hide it.
Another option is a private password file. This file can be read through shell interpolation, but it's still kind of a kludge.
You could add a bit more flexibility to the private password file by wrapping your script in a "launcher" script. Essentially, you write a script whose sole purpose is to "set up" the password file, and then launch your real script.
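Such a launcher might look like this (a sketch; the password file and the FOO_PASSWORD variable are made-up names, and foo.pl would have to be taught to read the environment variable instead of -p):
#!/bin/bash
# Load the secret from a mode-0600 file into the environment, then exec
# the real script. Environment variables don't appear in ps output, and
# /proc/PID/environ is readable only by the owner (and root).
FOO_PASSWORD=$(< ~/.foo_password)
export FOO_PASSWORD
exec ./foo.pl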

linux: redirect stdout after the process started [duplicate]

I have some scripts that ought to have stopped running but hang around forever. Is there some way I can figure out what they're writing to STDOUT and STDERR in a readable way?
I tried, for example, to do:
$ tail -f /proc/(pid)/fd/1
but that doesn't really work. It was a long shot anyway.
Any other ideas?
strace on its own is quite verbose and unreadable for seeing this.
Note: I am only interested in their output, not in anything else. I'm capable of figuring out the other things on my own; this question is only focused on getting access to stdout and stderr of the running process after starting it.
Since I'm not allowed to edit Jauco's answer, I'll give the full answer that worked for me. (Russell's page relies on non-guaranteed behaviour: if you close file descriptor 1 for STDOUT, the next creat call will open FD 1.)
So, run a simple endless script like this:
import time

while True:
    print 'test'
    time.sleep(1)
Save it to test.py, run with
$ python test.py
Get the PID:
$ ps auxw | grep test.py
Now, attach gdb:
$ gdb -p (pid)
and do the fd magic:
(gdb) call creat("/tmp/stdout", 0600)
$1 = 3
(gdb) call dup2(3, 1)
$2 = 1
Now you can tail /tmp/stdout and see the output that used to go to STDOUT.
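If you want the option of undoing the redirection later, duplicate the original descriptor before overwriting it (the same calls run non-interactively; PID is a placeholder, and the casts help when gdb lacks debug symbols):
$ gdb -p PID -batch \
      -ex 'call (int) dup(1)' \
      -ex 'call (int) creat("/tmp/stdout", 0600)' \
      -ex 'call (int) dup2($2, 1)'
# gdb prints e.g. "$1 = 4": fd 4 now holds the old stdout. To restore later:
$ gdb -p PID -batch -ex 'call (int) dup2(4, 1)'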
There are several new utilities that wrap up the "gdb method" and add some extra touches. The one I use now is called "reptyr" ("Re-PTY-er"). In addition to grabbing STDERR/STDOUT, it will actually change the controlling terminal of a process (even if it wasn't previously attached to a terminal).
The best use of this is to start up a screen session, and use it to reattach a running process to the terminal within screen so you can safely detach from it and come back later.
It's packaged on popular distros (Ex: 'apt-get install reptyr').
http://onethingwell.org/post/2924103615/reptyr
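Typical use is to adopt the process into a screen session (PID is the target process; reptyr must run inside the terminal that should take it over):
$ screen                 # start the session that will own the process
$ reptyr PID             # grab the process, its stdout/stderr and its tty
# detach with C-a d; the process keeps running inside screen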
The GDB method seems better, but you can do this with strace, too:
$ strace -p <PID> -e write=1 -s 1024 -o file
Via the man page for strace:
-e write=set
    Perform a full hexadecimal and ASCII dump of all the data written to
    file descriptors listed in the specified set. For example, to see all
    output activity on file descriptors 3 and 5 use -e write=3,5. Note
    that this is independent from the normal tracing of the write(2)
    system call which is controlled by the option -e trace=write.
This prints out somewhat more than you need (the hexadecimal part), but you can sed that out easily.
I'm not sure if it will work for you, but I read a page a while back describing a method that uses gdb.
I used strace and decoded the hex output to clear text:
PID=some_process_id
sudo strace -f -e trace=write -e verbose=none -e write=1,2 -q -p $PID -o "| grep '^ |' | cut -c11-60 | sed -e 's/ //g' | xxd -r -p"
I combined this command from other answers.
strace outputs a lot less with just -ewrite (and not the =1 suffix). And it's a bit simpler than the GDB method, IMO.
I used it to see the progress of an existing MythTV encoding job (sudo because I don't own the encoding process):
$ ps -aef | grep -i handbrake
mythtv 25089 25085 99 16:01 ? 00:53:43 /usr/bin/HandBrakeCLI -i /var/lib/mythtv/recordings/1061_20111230122900.mpg -o /var/lib/mythtv/recordings/1061_20111230122900.mp4 -e x264 -b 1500 -E faac -B 256 -R 48 -w 720
jward 25293 20229 0 16:30 pts/1 00:00:00 grep --color=auto -i handbr
$ sudo strace -ewrite -p 25089
Process 25089 attached - interrupt to quit
write(1, "\rEncoding: task 1 of 1, 70.75 % "..., 73) = 73
write(1, "\rEncoding: task 1 of 1, 70.76 % "..., 73) = 73
write(1, "\rEncoding: task 1 of 1, 70.77 % "..., 73) = 73
write(1, "\rEncoding: task 1 of 1, 70.78 % "..., 73) = 73^C
You can use reredirect (https://github.com/jerome-pouiller/reredirect/).
Type
reredirect -m FILE PID
and both outputs (standard and error) will be written to FILE.
reredirect README also explains how to restore original state of process, how to redirect to another command or to redirect only stdout or stderr.
You don't state your operating system, but I'm going to take a stab and say "Linux".
Seeing what is being written to stderr and stdout is probably not going to help. If it is useful, you could use tee(1) before you start the script to take a copy of stderr and stdout.
You can use ps(1) to look for wchan. This tells you what the process is waiting for. If you look at the strace output, you can ignore the bulk of the output and identify the last (blocked) system call. If it is an operation on a file handle, you can go backwards in the output and identify the underlying object (file, socket, pipe, etc.) From there the answer is likely to be clear.
You can also send the process a signal that causes it to dump core, and then use the debugger and the core file to get a stack trace.
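In practice, gcore (shipped with gdb) will grab a core without killing the process (a sketch; the paths and the PID are placeholders):
$ gcore PID                         # writes core.PID; the process keeps running
$ gdb /path/to/the/program core.PID
(gdb) bt                            # stack trace at the moment of the dump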

