Updating environment variables in Bash - Linux

I have a long-running script which does some work with AWS.
I have another script which sets environment variables for AWS authentication, but those credentials are only valid for 15 minutes.
I can't change the long-running script, so is there any way to have a cron job (or anything else) update the environment variables in the shell where the long-running script is running?

Elaborating on the comment:
Assumptions
The long-running script cannot be modified.
The long-running script calls an executable file that can be modified (for the sake of the example, let's assume the executable file is /usr/local/bin/callable).
You have permission to rename /usr/local/bin/callable and create a new file under that path and name.
Either the long-running script runs as root, or /usr/local/bin/callable must be able to perform privilege escalation via the setuid bit.
You'll need gdb installed.
You'll need gcc installed if the long-running script isn't running as root.
Risks
If this is a critical system and security is a moderate to major concern, do not use any of the following procedures.
Although unlikely, attaching to a running process and injecting calls into it may cause unexpected or undefined behaviour. If this is a critical system performing critical work, do not use any of the following procedures.
Generally, all of these procedures are a bad idea, but they represent one possible solution. That said...
Use at your own risk.
Procedures (for long-running script running as root)
bash# mv /usr/local/bin/callable /usr/local/bin/callable.orig
bash# cat > /usr/local/bin/callable << 'EOF'
> #!/bin/bash
>
> # Attach gdb to the parent process (the long-running script's shell),
> # inject a setenv() call, then run the original executable.
> echo -e "attach ${PPID}\ncall setenv(\"VAR_NAME\", \"some_value\", 1)\ndetach" | /usr/bin/gdb > /dev/null 2>&1
>
> /usr/local/bin/callable.orig
>
> EOF
bash# chmod 755 /usr/local/bin/callable
Note: the heredoc delimiter is quoted ('EOF') so that ${PPID} is expanded when callable runs (giving the long-running shell's PID), not when the file is created.
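A quick way to confirm the injection took effect (an extra check, not part of the original procedure) is to ask gdb to read the variable back from the running shell, replacing <pid> with the shell's actual PID:
bash# echo -e "attach <pid>\ncall (char *) getenv(\"VAR_NAME\")\ndetach" | /usr/bin/gdb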
Procedures (for long-running script NOT running as root)
bash# mv /usr/local/bin/callable /usr/local/bin/callable.orig
bash# cat > /usr/local/bin/callable.c << 'EOF'
> #include <stdio.h>
> #include <sys/types.h>
> #include <unistd.h>
> #include <stdlib.h>
>
> int main(void) {
>     /* Buffer for the injected shell command; increase the size if your
>        variable name or value makes the string longer. */
>     char inject[256];
>     uid_t save_uid = getuid();
>     gid_t save_gid = getgid();
>
>     /* Build the same gdb one-liner as in the root variant, targeting
>        the parent process (the long-running script's shell). */
>     snprintf(inject, sizeof(inject),
>              "echo -e \"attach %u\\ncall setenv(\\\"VAR_NAME\\\", "
>              "\\\"some_value\\\", 1)\\ndetach\" | /usr/bin/gdb "
>              "> /dev/null 2>&1", (unsigned) getppid());
>
>     /* Escalate (the binary is setuid root), inject, then drop back. */
>     setreuid(0, 0);
>     setregid(0, 0);
>     system(inject);
>     setregid(save_gid, save_gid);
>     setreuid(save_uid, save_uid);
>
>     system("/usr/local/bin/callable.orig");
>     return 0;
> }
> EOF
bash# gcc -o /usr/local/bin/callable /usr/local/bin/callable.c
bash# rm -f /usr/local/bin/callable.c
bash# chown root:long_running_script_exclusive_group /usr/local/bin/callable
bash# chmod 4750 /usr/local/bin/callable
Bonus
Instead of intercepting, you can, as you stated, use a cron job to attach to the process with gdb (this at least avoids intercepting the long-running script with another script and, in the worst case, having to create a setuid binary to do it). You will, however, need to know or fetch the PID of the long-running script's shell process (as it changes each time the script is invoked). It is also prone to failure due to timing problems (the script may not be running when the cron job triggers).
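For example, a minimal cron helper along these lines (a sketch; the script name matched by pgrep and the variable name/value are placeholders):
#!/bin/bash
# Find the oldest process whose command line matches the long-running
# script, then inject a fresh value for VAR_NAME via gdb.
pid=$(pgrep -o -f long_running_script.sh) || exit 0
echo -e "attach ${pid}\ncall setenv(\"VAR_NAME\", \"some_value\", 1)\ndetach" | /usr/bin/gdb > /dev/null 2>&1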
References
Changing environment variable of a running process
Is there a way to change another process's environment variables?

Related

Start lots of background jobs but keep their logs separated

I have little experience with shell commands in Unix.
So far, I have checked Stack Overflow and know how to run simple shell scripts in order by:
using echo
echo $(sh dosomthing1.sh)
echo $(sh dosomthing2.sh)
directly using sh xxx and wait
sh dosomthing1.sh
wait
sh dosomthing2.sh
using &&
sh dosomthing1.sh && sh dosomthing2.sh
But none of these approaches solve my problem...
Here is my problem:
I have a basic shell script that does a Maven compile and then uses "nohup xxx &" to start a Java application in the background. The script is shown below:
#get the input env parameter
env=$1
#goto application root directory
cd /applicationDir
#to compile
mvn install -Dmaven.test.skip=true
#to start with parameter env
nohup java -jar -Dspring.profiles.active=$env myApplication.jar &
#to tail the log
tail -20f myApplication.log
I have too many different applications with the same startup scripts and it is hard to start them one by one. I need to start them with one command.
All the shell scripts are expected to be processed one by one, in order. If one of them fails, skip it and run the next one.
And when I tried to write a script like this:
sh start1.sh
wait
echo "application 1 was start up"
sh start2.sh
wait
echo "application 2 was start up"
...
sh startxxx.sh
wait
echo "application xxx was start up"
Though all the child shell scripts ran in order as I expected, and the output looked as if everything was functioning well, in fact only the last application was started; all the previous "nohup xxxx &" commands were shut down.
I have also tried writing it like this:
sh start1.sh &
sh start2.sh &
...
sh startxxx.sh &
Although the result was what I wanted and all the applications started well, the console output was unreadable because the scripts ran in parallel. It produces a good result, but not in a graceful way.
I have no idea how to solve this problem...
Please help me with this, thank you very much!
When you have a script with commands, you can do chmod +x start.sh. Now the script can be started with ./start.sh. You avoid an additional sh process, and with ls -l you can see which scripts are executable.
In your scripts you have tail -f. This will be very confusing for a background process. Start the scripts in the background and view the logging from the console. I do hope that each script uses a different myApplication.jar and myApplication.log.
When the logging in the logfile is duplicated on stdout (your command-line window), you can throw that logging away:
./start1.sh > /dev/null 2>&1 &
./start2.sh > /dev/null 2>&1 &
./startxxx.sh > /dev/null 2>&1 &
The processes will be killed when you logout before the scripts are terminated. This can be avoided with nohup:
nohup ./start1.sh > /dev/null 2>&1 &
nohup ./start2.sh > /dev/null 2>&1 &
nohup ./startxxx.sh > /dev/null 2>&1 &
Edit:
The OP wants to start the programs in a fixed order.
Starting the scripts one after another, in order, should be possible by calling them in the right order (perhaps with an additional sleep 1).
When you need to wait until program 1 has finished some init stuff, you need to check for that. Use one script that calls all the scripts and add some control statements, like:
nohup java something &
while ! grep -q "Started" myApplication.log; do
    sleep 1
done
When the Java program has an error, the while loop will wait forever, so replace it with something that has a maximum retry count:
for ((retry=0; retry<100; retry++)); do
    grep -q "Started" myApplication.log && break
    sleep 1
done
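Putting it together, one possible launcher sketch (the script names, the per-script output files, and the "Started" marker are assumptions about your environment):
#!/bin/bash
# Start each application in order and wait until its output shows "Started"
# (or a retry limit is reached) before starting the next one.
for app in start1 start2 startxxx; do
    nohup "./${app}.sh" > "${app}.out" 2>&1 &
    for ((retry=0; retry<100; retry++)); do
        grep -q "Started" "${app}.out" && break
        sleep 1
    done
    echo "application ${app} was started (or timed out)"
done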
https://man7.org/linux/man-pages/man8/cron.8.html
This might help you. Cron is a task scheduler, which you can use to run programs in sequence. If the man page is difficult to understand, look for tutorials on it; I'm sure some exist.

Missing File Output When Script Command Runs as su -c command -m user

I have a script that needs to check whether a lockfile that only root can access exists, and then the script runs the actual test script, which needs to run as a different user.
The problem is that the test script is supposed to generate XML files, and those files do not exist (i.e., I can't find them).
Relevant part of the script:
if (mkdir ${lockdir} ) 2> /dev/null; then
    echo $$ > $pidfile
    trap 'rm -rf "$lockdir"; exit $?' INT TERM EXIT
    if [ -f "$puppetlock" ]
    then
        su -c "/opt/qa-scripts/start-qa-test.sh > /var/log/qaTests/test-$(date +\"%m-%d-%T\").log" -m "qaUser"
lockdir is what gets created when the test is run, to signify that the test process has begun.
puppetlock checks if Puppet is running by looking for the lock file Puppet creates.
qaUser does not have the rights to check whether puppetlock exists.
start-qa-test.sh ends up calling Java to execute an automated test. My test-date.log file shows what the console would display if the test were run.
However, the test is supposed to produce some XML files in a directory called target. Those files are missing.
In case it's relevant, start-qa-test.sh is trying to run something like this:
nohup=true
/usr/bin/java -cp .:/folderStuff/$jarFile:/opt/folderResources org.junit.runner.JUnitCore org.some.other.stuff.Here
Running start-qa-test.sh directly produces the XML output in the target folder, but running it through su -c does not.
Edit
I figured out the answer to this issue. I changed the line to:
su - qaUser -c "/opt/qa-scripts/start-qa-test.sh > /var/log/qaTests/test-$(date +\"%m-%d-%T\").log"
That allowed the output to show up in /home/qaUser.
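This makes sense: su - starts a login shell, so the working directory becomes qaUser's home directory, and the relative target directory is created there. If you would rather pin the output to a known location, a sketch along these lines should work (the cd path is an assumption):
su - qaUser -c "cd /opt/qa-scripts && /opt/qa-scripts/start-qa-test.sh > /var/log/qaTests/test-$(date +\"%m-%d-%T\").log"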
Try redirecting stdout and stderr in the line:
su -c "/opt/qa-scripts/start-qa-test.sh > /var/log/qaTests/test-$(date +\"%m-%d-%T\").log 2>&1" -m "qaUser"

setuid on an executable doesn't seem to work

I wrote a small C utility called killSPR to kill the following processes on my RHEL box. The idea is for anyone who logs into this Linux box to be able to use this utility to kill the processes mentioned below (which doesn't work, as explained below).
cadmn#rhel /tmp > ps -eaf | grep -v grep | grep " SPR "
cadmn 5822 5821 99 17:19 ? 00:33:13 SPR 4 cadmn
cadmn 10466 10465 99 17:25 ? 00:26:34 SPR 4 cadmn
cadmn 13431 13430 99 17:32 ? 00:19:55 SPR 4 cadmn
cadmn 17320 17319 99 17:39 ? 00:13:04 SPR 4 cadmn
cadmn 20589 20588 99 16:50 ? 01:01:30 SPR 4 cadmn
cadmn 22084 22083 99 17:45 ? 00:06:34 SPR 4 cadmn
cadmn#rhel /tmp >
This utility is owned by the user cadmn (under which these processes run) and has the setuid flag set on it (shown below).
cadmn#rhel /tmp > ls -l killSPR
-rwsr-xr-x 1 cadmn cusers 9925 Dec 17 17:51 killSPR
cadmn#rhel /tmp >
The C code is given below:
/*
 * Program Name: killSPR.c
 * Description: A simple program that kills all SPR processes that
 *              run as user cadmn
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char input[8]; /* buffer for the Enter keypress */
    printf("Before you proceed, find out under which ID I'm running. Hit enter when you are done...");
    fgets(input, sizeof(input), stdin);
    const char *killCmd = "kill -9 $(ps -eaf | grep -v grep | grep \" SPR \" | awk '{print $2}')";
    system(killCmd);
    return 0;
}
A user (pmn) different from cadmn tries to kill the above-mentioned processes with this utility and fails (shown below):
pmn#rhel /tmp > ./killSPR
Before you proceed, find out under which ID I'm running. Hit enter when you are done...
sh: line 0: kill: (5822) - Operation not permitted
sh: line 0: kill: (10466) - Operation not permitted
sh: line 0: kill: (13431) - Operation not permitted
sh: line 0: kill: (17320) - Operation not permitted
sh: line 0: kill: (20589) - Operation not permitted
sh: line 0: kill: (22084) - Operation not permitted
pmn#rhel /tmp >
While the user waits to hit enter above, the killSPR process is inspected and is seen to be running as the user cadmn (shown below); despite this, killSPR is unable to terminate the processes.
cadmn#rhel /tmp > ps -eaf | grep -v grep | grep killSPR
cadmn 24851 22918 0 17:51 pts/36 00:00:00 ./killSPR
cadmn#rhel /tmp >
BTW, none of the main partitions are mounted nosuid:
pmn#rhel /tmp > mount | grep nosuid
pmn#rhel /tmp >
The setuid flag on the executable doesn't seem to have the desired effect. What am I missing here? Have I misunderstood how setuid works?
First and foremost, the setuid bit changes only the process's effective UID; the real UID stays that of the user who invoked the program. If the program also needs its real UID changed, it must call setuid() or setreuid() itself.
Avoid system() here, as it drops privileges for security reasons. You can use kill() to kill the processes.
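You can see this split for yourself while killSPR waits at its prompt: inspect the real and effective users of the running process (24851 is the PID from the ps output above). The RUSER column will show pmn while EUSER shows cadmn:
ps -o pid,ruser,euser,comm -p 24851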
Check these out:
http://linux.die.net/man/2/setuid
http://man7.org/linux/man-pages/man2/setreuid.2.html
http://man7.org/linux/man-pages/man2/kill.2.html
You should replace your system() call with an exec() call. The manual for system() says it drops privileges when run from a setuid program.
The reason is explained in man system:
Do not use system() from a program with set-user-ID or set-group-ID privileges, because strange values for some environment variables might be used to subvert system integrity. Use the exec(3) family of functions instead, but not execlp(3) or execvp(3). system() will not, in fact, work properly from programs with set-user-ID or set-group-ID privileges on systems on which /bin/sh is bash version 2, since bash 2 drops privileges on startup. (Debian uses a modified bash which does not do this when invoked as sh.)
If you replace system() with an exec call, you will not be able to use shell syntax unless you call /bin/sh -c <shell command>, which is what system() actually does.
Check out this link on making a shell script a daemon:
Best way to make a shell script daemon?
You might also want to Google 'linux script to service'; I found a couple of links on this subject.
The idea is that you wrap a shell script that has some basic stuff in it, allowing a user to control a program run as another user by calling a 'service'-type script instead. For example, you could wrap up /usr/var/myservice/SPRkiller as a 'service' script that could then be called as such from any user: service SPRkiller start. SPRkiller would then run and kill the appropriate processes (assuming the SPR 'program' runs as a non-root user).
This sounds like what you are trying to achieve. Running a program (shell script/C program/whatever) carries the same user restrictions no matter what (except for escalation bugs/hacks).
On a side note, you seem to have a slight misunderstanding of user rights on Linux/Unix, as well as of what certain commands and functions do. If a user does not have permission to perform a certain action (like killing another user's process), then calling setuid on the program you want to kill (or on kill itself) will have no effect, because the user cannot reach into another user's 'space' without superuser rights. So even if you're in a shell script or a C program and call the same system command, you will get the same effect.
http://www.linux.com/learn/ is a great resource, and here's a link for file permissions
hope that helps

Run a script in the same shell (bash)

My problem is specific to running SPECCPU2006 (a benchmark suite).
After I installed the benchmark, I can invoke a command called "specinvoke" in a terminal to run a specific benchmark. I have another script, where part of the code looks like the following:
cd (specific benchmark directory)
specinvoke &
pid=$!
My goal is to get the PID of the running task. However, with what is shown above, what I get is the PID of the "specinvoke" shell command, and the real running task has another PID.
However, by running specinvoke -n, the real code run inside the specinvoke shell is output to stdout. For example, for one benchmark it looks like this:
# specinvoke r6392
# Invoked as: specinvoke -n
# timer ticks over every 1000 ns
# Use another -n on the command line to see chdir commands and env dump
# Starting run for copy #0
../run_base_ref_gcc43-64bit.0000/milc_base.gcc43-64bit < su3imp.in > su3imp.out 2>> su3imp.err
Inside, it is running a binary. The code differs from benchmark to benchmark (invoked under a different benchmark directory). And because specinvoke is an installed executable and not just a script, I cannot use "source specinvoke".
So is there any clue? Is there any way to directly invoke the shell command in the same shell (with the same PID), or should I dump the output of specinvoke -n and run the dumped commands?
You can still do something like:
cd (specific benchmark directory)
specinvoke &
pid=$(pgrep milc_base.gcc43-64bit)
If there are several invocations of the milc_base.gcc43-64bit binary, you can still use
pid=$(pgrep -n milc_base.gcc43-64bit)
Which, according to the man page:
-n   Select only the newest (most recently started) of the matching processes
When the process is a direct child of the subshell:
ps -o pid= -C milc_base.gcc43-64bit --ppid $!
When it is not a direct child, you can get the info from pstree:
pstree -p $! | grep -o 'milc_base.gcc43-64bit(.*)'
Output from the above (the PID is in brackets): milc_base.gcc43-64bit(9837)
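Applied to the original script, a small sketch (the sleep is an assumption, giving specinvoke a moment to fork the real workload; the binary name is taken from the specinvoke -n output above):
cd (specific benchmark directory)
specinvoke &
sleep 1
pid=$(pgrep -n milc_base.gcc43-64bit)
echo "benchmark PID: $pid"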

Shell script password security of command-line parameters

If I use a password as a command-line parameter, it's visible to everyone on the system via ps.
But if I'm in a bash shell script and I do something like:
...
{ somecommand -p mypassword; }
...
is this still going to show up in the process list? Or is this safe?
How about sub-processes: ( ... )? Unsafe, right?
What about a coprocess?
Command lines will always be visible (if only through /proc).
So the only real solution is: don't. You might supply the password on stdin, or on a dedicated file descriptor:
./my_secured_process some parameters 3<<< "b#dP2ssword"
with a script like (simplicity first)
#!/bin/bash
cat 0<&3
(this sample would just dump a bad password to stdout)
Now all you need to be concerned with is:
MITM (spoofed scripts that eavesdrop on the password, e.g. by subverting PATH)
bash history retaining your password in the command line (see HISTIGNORE for bash, e.g.)
the security of the script that contains the password redirection
the security of the ttys used; keyloggers; ... as you can see, we have now descended into 'general security principles'
How about using a file descriptor approach:
env -i bash --norc # clean up environment
set +o history
read -s -p "Enter your password: " passwd
exec 3<<<"$passwd"
mycommand <&3 # cat /dev/stdin in mycommand
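For completeness, a minimal sketch of the receiving side (mycommand is a placeholder name; it simply reads the password from the descriptor the caller redirected to its stdin):
#!/bin/bash
# mycommand (placeholder): the caller wired the here-string to our stdin,
# so a plain read picks up the password without it ever touching argv.
IFS= read -r passwd
printf 'received a %d-character password\n' "${#passwd}"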
See:
Hiding secret from command line parameter on Unix
The called program can change its command line by simply overwriting argv, like this:
#include <string.h>
#include <unistd.h>

int main(int argc, char** argv) {
    /* argv strings are laid out contiguously; compute their total length. */
    int arglen = argv[argc-1] + strlen(argv[argc-1]) + 1 - argv[0];
    /* Zero out the original arguments, then write a fake program name. */
    memset(argv[0], 0, arglen);
    strncpy(argv[0], "secret-program", arglen - 1);
    sleep(100);
    return 0;
}
Testing:
$ ./a.out mySuperPassword &
$ ps -f
UID PID PPID C STIME TTY TIME CMD
me 20398 18872 0 11:26 pts/3 00:00:00 bash
me 20633 20398 0 11:34 pts/3 00:00:00 secret-program
me 20645 20398 0 11:34 pts/3 00:00:00 ps -f
$
UPD: I know it is not completely secure and may cause race conditions, but many programs that accept a password on the command line do this trick.
The only way to avoid being shown in the process list is to reimplement the entire functionality of the program you want to call in pure Bash functions. Function calls are not separate processes. Usually this is not feasible, though.
