Executing function inside chroot in bash - linux

What would be the ideal way to pass a function into a chroot from the host, in bash?
For example,
install_script () {
    wget some_source_files && configure && make && make install
}
and,
some_command -v foo >/dev/null 2>&1 || install_script
But if I want to execute the same thing from the host inside the chroot, how do I go about doing it?
One way I can think of is to write the function to a file inside the chrooted directory,
cat > $chrooted_dir/etc/install_script.sh <<"EOF"
#!/bin/bash
wget source_files ; ./configure ; make ; make install
EOF
and execute from host,
chroot "$chrooted_dir" /bin/bash -c "check_command || /etc/install_script.sh"
But I am wondering if there is a more elegant way to approach this. Ideally, I would like to execute the commands from a script on the host and have the installation performed inside the chroot system.
P.S.: I would also appreciate any relevant sources/links for understanding how bash handles function declarations and how they are subsequently inherited when chrooting.

You can export functions and execute them in an inheriting shell:
install_func () {
    wget some_source_files && configure && make && make install
}
export -f install_func
chroot "$chrooted_dir" /bin/bash -c "install_func"
The function will be turned into an environment variable named BASH_FUNC_install_func%%, which will be inherited and reinterpreted as a function by the chrooted bash.
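To see the mechanism at work, you can inspect the exported function in the environment before entering the chroot. This is a quick sketch with a throwaway function body, assuming a post-Shellshock bash that uses the BASH_FUNC_name%% naming:
install_func () { echo "hello from install_func"; }
export -f install_func
# The exported function travels as an environment variable...
env | grep -A1 '^BASH_FUNC_install_func'
# ...and any child bash (chrooted or not) re-imports it as a function:
bash -c 'install_func'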

Related

Set environmental variables for different user in Docker

I am aware that we can specify the option -e during the run command to set environment variables in a Docker container. This only sets the PATH for the root user. Let us say I have another user called admin and want to set the environment variables for that user as well; how can I achieve that?
This is the command I tried to set environment variables.
docker run -t -d -v /usr/hdp:/usr/hdp -v /usr/lib/jvm/:/usr/lib/jvm/ -e JAVA_HOME="${java_home}" -e HADOOP_HOME="${hadoop_home}" -e PATH=$PATH:$JAVA_HOME/bin -e PATH=$PATH:$HADOOP_HOME/bin gtimage
This only sets the PATH for the root user, but not for my admin user, which was created by software I installed during the docker build.
I don't have a perfect solution for my question above, but I tried something like the following to log in as the user and set environment variables for that user. I don't recommend this approach unless you cannot find another solution. Please let me know if you find a better one.
docker exec $containervalue bash -c 'env | grep PATH >> temp && chmod 775 temp && mv temp /opt/nagios'
docker exec --user ngadmin $containervalue bash -c 'cat ~/temp >> ~/.bashrc && source ~/.bashrc'
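A variant of the same workaround (not from the original post, and assuming docker exec runs as root in that container and that login shells source /etc/profile.d/*.sh) is to export the variables from a system-wide profile script instead of patching one user's ~/.bashrc; the file name below is illustrative:
docker exec "$containervalue" bash -c 'env | grep -E "^(PATH|JAVA_HOME|HADOOP_HOME)=" | sed "s/^/export /" > /etc/profile.d/container-env.sh'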

How to sudo run a local script over ssh

I am trying to run a local script with sudo over ssh,
ssh $HOST < script.sh
and I tried
ssh -t $HOST "sudo -s && bash" < script.sh
Actually, I searched a lot on Google and found some similar questions; however, I didn't find a solution that runs a local script with sudo.
Reading the error message of
$ ssh -t $HOST "sudo -s && bash" < script.sh
Pseudo-terminal will not be allocated because stdin is not a terminal.
makes it pretty clear what's going wrong here.
You can't use the ssh parameter -t (which sudo needs in order to ask for a password) while redirecting your script to the stdin of bash in your remote session.
If it is acceptable for you, you could transfer the local script via scp to your remote machine and then execute the script without the need of I/O redirection:
scp script.sh $HOST:/tmp/ && ssh -t $HOST "sudo -s bash /tmp/script.sh"
Another way to fix your issue is to use sudo in non-interactive mode (-n), but for this you need to set NOPASSWD for the executing user in the remote machine's sudoers file. Then you can use
ssh $HOST "sudo -n -s bash" < script.sh
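For reference, such a NOPASSWD rule could be installed on the remote machine roughly as follows (a sketch only; the user name deploy and the file name are placeholders, and NOPASSWD: ALL is deliberately broad, so scope it down if you can):
echo 'deploy ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/deploy-nopasswd
chmod 0440 /etc/sudoers.d/deploy-nopasswd
visudo -cf /etc/sudoers.d/deploy-nopasswd   # always syntax-check sudoers changes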
To make Edward Itrich's answer more scalable and geared towards frequent use, you can set up a system where you only run a one line script that can be quickly ported to any host, file or command in the following manner:
Create a script in your Scripts directory (if you have one), naming it whatever you want the script to be called. I frequently use this format: change one word for the script name, and it creates the file, sets permissions, and opens it for editing:
newscript="runlocalscriptonremotehost.sh"
touch $newscript && chmod +x $newscript && nano $newscript
In nano, fill out the script as follows, placing the directory and name of the script you want to run remotely in the variable lines of runlocalscriptonremotehost.sh (you only need to edit lines 1-3):
HOSTtoCONTROL="sudoadmin@192.168.0.254"
PATHtoSCRIPT="/home/username/Scripts/"
SCRIPTname="scripttorunremotely.sh"
scp $PATHtoSCRIPT$SCRIPTname $HOSTtoCONTROL:/tmp/ && ssh -t $HOSTtoCONTROL "sudo -s bash /tmp/$SCRIPTname"
Then just run:
sh ./runlocalscriptonremotehost.sh
Keep runlocalscriptonremotehost.sh open in a tabbed text editor for quick updating, go ahead and create a bash alias for the script and you have yourself an app-ified version of this frequently used operation.
First of all, divide your objective into two parts: 1) ssh to the host, and 2) run the command you want as sudo. Once you are certain that you can 1) access the host and 2) use sudo, you can combine the two commands with &&. What x_cmd && y_cmd does is run y_cmd only after x_cmd has exited successfully.
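For example (the user and host names are placeholders):
# The echo runs only if the ssh/sudo step exits with status 0.
ssh -t admin@example.com "sudo -v" && echo "host reachable and sudo password accepted"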

Changing a users shell with a script?

Is there a proper way to change a user's shell with a script, depending on what shell they currently use? Something like
GITSHELL=/bin/bash
if [ "$GITSHELL" = "$SHELL" ]
then
    chsh git -s /usr/bin/git-shell
else
    chsh git -s /bin/bash
fi
Obviously there are some problems with this. I'm not sure how to word this so I can either
A: run it as root and still check the user git's shell instead of root's, using su somewhere in the statement, or
B: run it as git and use su -c (as root) to change git's shell after the initial if?
The goal is to have a script I can run to change the user git's shell to bash when I need that user to temporarily have the ability to create folders and run git init --bare. Then I would run the script again to change the shell back to git-shell for security reasons.
Is this possible, or should I go about this a different way?
Let the user make that decision, controlled via an environment variable. Just write your script like this:
: "${GITSHELL:=/usr/bin/git-shell}"
chsh git -s "$GITSHELL"
Now, your script will use /usr/bin/git-shell unless explicitly overridden:
$ ./script.sh # use /usr/bin/git-shell
$ GITSHELL=/bin/bash ./script.sh # use /bin/bash
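The : "${VAR:=default}" line is the standard "assign a default if unset or empty" idiom; here is a tiny standalone illustration (the echo is only for demonstration, and the real chsh call is left commented out):
#!/bin/bash
# ":" is the no-op builtin; the expansion assigns a default to GITSHELL
# only when it is unset or empty.
: "${GITSHELL:=/usr/bin/git-shell}"
echo "shell that would be assigned: $GITSHELL"
# chsh git -s "$GITSHELL"   # the actual call from the answer; needs appropriate privileges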

How to demand root privileges in a shell script? [duplicate]

This question already has answers here:
How to check if running as root in a bash script
(21 answers)
Closed 7 years ago.
Say you have a shell script that could potentially require root privileges. You don't want to force the user to run it as sudo. However, if it does require that privilege, you want to prompt the user to enter their password, rather than complaining and forcing them to re-enter the command with sudo.
How would you go about doing this in a Bash script? sudo true seems to work, but it feels like a hack. Is there a better way?
Here's what I often do. This is loose pseudo-code but should give you the idea:
# myscript -- possibly execute as sudo
if (passed -e option) then
    read variables from envfile
else
    ...
    need_root = ...
    # set variables ...
    if ($need_root && $uid != 0) then
        env [or whatever] > /tmp/envfile
        exec sudo myscript -e/tmp/envfile ...
    fi
fi
# stuff to execute as root [or not] ...
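For concreteness, here is a minimal runnable sketch of that re-exec-under-sudo pattern (the -e option name, the temp-file handling, and the BUILD_DIR variable are illustrative assumptions, not part of the pseudo-code above):
#!/bin/bash
# Sketch: re-exec myself under sudo, carrying my variables along in an env file.
envfile=""
while getopts "e:" opt; do
    case $opt in
        e) envfile=$OPTARG ;;
    esac
done

if [ -n "$envfile" ]; then
    # Second pass: we were re-executed by sudo; restore the saved variables.
    . "$envfile"
    rm -f "$envfile"
else
    # First pass: set variables and decide whether root is needed.
    BUILD_DIR=${BUILD_DIR:-/tmp/build}
    need_root=1
    if [ "$need_root" -eq 1 ] && [ "$(id -u)" -ne 0 ]; then
        tmp=$(mktemp)
        declare -p BUILD_DIR > "$tmp"   # save whatever state the root pass needs
        exec sudo "$0" -e "$tmp"
    fi
fi

# stuff to execute as root (or not) ...
echo "running as uid $(id -u) with BUILD_DIR=$BUILD_DIR"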
The command
sudo -nv
checks whether the user has current sudo credentials (-v), but will fail rather than prompting if access has expired (-n).
So this:
if sudo -nv 2>/dev/null || sudo -v ; then
    sudo whoami
else
    echo "No access"
fi
will check whether the user's sudo credentials are current, and prompt for a password only if they're not.
There is a possible race condition: the user's credentials could expire just after the check.
As ghoti points out in a comment, this may not work if the sudoers file is set up to allow only certain commands to be executed. For that and other reasons, be sure to check whether each sudo command succeeded or failed.
If your plan is to use sudo for privilege escalation, one wrinkle you may have to deal with is that sudo can be set up to permit root access to some commands and not others. For example, let's imagine you've got a server that runs VirtualBox, with different people managing the applications than are managing the OS. Your sudoers file might contain something like the following:
Cmnd_Alias SAFE = /bin/true, /bin/false, /usr/bin/id, /usr/bin/who*
Cmnd_Alias SHUTDOWN = /sbin/shutdown, /sbin/halt, /sbin/reboot
Cmnd_Alias SU = /bin/su, /usr/bin/vi*, /usr/sbin/visudo
Cmnd_Alias SHELLS = /bin/sh, /bin/bash, /bin/csh, /bin/tcsh
Cmnd_Alias VBOX = /usr/bin/VBoxManage
%wheel ALL=(ALL) ALL, !SU, !SHELLS, !SHUTDOWN
%staff ALL=(ALL) !SU, !SHELLS, NOPASSWD: SAFE
%operator ALL=(ALL) SAFE, SHUTDOWN
%vboxusers ALL=(ALL) NOPASSWD: VBOX
In this case, a member of the vboxusers unix group will always get a successful response to a sudo -nv, because of the existence of the NOPASSWD entry for that group. But a member of that group running any other command than VBoxManage will get a password challenge and a security warning.
So you need to determine whether the command you need to run can be run without a password prompt. If you don't know how sudo is configured on the system where your script is running, the canonical test is to run the command. Running sudo -nv will only tell you whether you are authenticated; it won't tell you what commands you have access to.
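A minimal sketch of that "just run the command" test, using the VBoxManage example from the sudoers snippet above (the subcommand is illustrative):
# Try the exact command non-interactively; -n makes sudo fail instead of prompting.
if sudo -n /usr/bin/VBoxManage list vms >/dev/null 2>&1; then
    echo "passwordless sudo is available for VBoxManage"
else
    echo "VBoxManage via sudo would prompt for a password (or is not permitted)" >&2
fi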
That said, if you can safely rely on a sudo configuration where, say, membership in wheel group gives you access to all commands, for example with:
%wheel ALL=(ALL) ALL
then you can use sudo -nv to test for escalation capabilities. But your script might have some things that it runs as root, and some things it doesn't. You might also want to consider other tools besides sudo for privilege escalation.
One strategy might be to set a variable to preface commands if the need is there, but leave the variable blank if you're already root (i.e. running the entire script inside sudo).
if ! which sudo >/dev/null 2>&1; then
    PM_SU_CMD="su - root -c"
elif sudo -nv 2>/dev/null; then
    PM_SU_CMD="sudo sh -c"   # wrap in "sh -c" so the quoted command strings below work with sudo too
else
    echo "ERROR: I can't get root." >&2
    exit 1
fi
Of course, if we are already root, unset this to avoid potential conflict:
[ `ps -o uid= $$` -eq 0 ] && unset PM_SU_CMD
(Note that this is a query of the system's process table; we don't want to rely on the shell's environment, because that can be spoofed.)
Then, certain system utilities might be made more easily available using functions:
# Superuser versions for commands that need root privileges.
# Fall back to a plain "sh -c" when PM_SU_CMD has been unset (already root).
find_s () { ${PM_SU_CMD:-sh -c} "/usr/bin/find $*"; }
mkdir_s () { ${PM_SU_CMD:-sh -c} "/bin/mkdir -p $1"; }
rm_s () { ${PM_SU_CMD:-sh -c} "/bin/rm $*"; }
Then within your script, you'd simply use the _s version of things that need to be run with root privileges.
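Hypothetical usage of those *_s wrappers (the paths are placeholders):
mkdir_s /var/lib/myapp                        # creates the directory, escalating if necessary
find_s /var/lib/myapp -maxdepth 1 -type f     # multiple arguments are joined into one escalated command
rm_s /var/lib/myapp/stale.lock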
This is of course by no means the only way to solve this problem.

Undefined variable in shell script by using ssh

I use a shell script to run an R program as follows:
host_list="server@com" Directory="/home/program/" ssh -f "$host_list" 'cd $Directory && nohup Rscript L_1.R > L_1_sh.txt'
But it always says
Directory: Undefined variable.
SSH does not propagate all your environment variables. You're only setting them in the environment of the local ssh client process, not on the server side. As a hack, just stick the assignment inside the commands that ssh runs remotely, instead of the pre-command environment setup.
host_list="server@com"
ssh -f "$host_list" 'Directory="/home/program/"; cd "$Directory" && nohup ...'
Here's a simpler version of the command that will let you test it without depending on your particular program setup.
ssh localhost 'Dir="/etc"; echo Dir is "$Dir"; cd "$Dir" && pwd && ps'
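Another option (not from the answers above, but a common pattern) is to expand the variable locally and quote it for the remote shell with printf %q:
host_list="server@com"   # placeholder host from the question
Directory="/home/program/"
# Double quotes: $Directory and $(...) are expanded locally; printf %q makes the
# value safe to embed in the remote command line.
ssh -f "$host_list" "cd $(printf '%q' "$Directory") && nohup Rscript L_1.R > L_1_sh.txt"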
I'm not sure, but maybe you can try these:
In bash, single quotes '' do not expand variables (see the bash manual).
Try to use ${Directory} or change the variable name (maybe it is reserved).
