What does `location=$(type -p "htop")` mean in a script? [duplicate] - linux

The script is this
#!/bin/bash
echo
echo "################################################################"
echo " Installing Htop "
echo "################################################################"
echo
if ! location=$(type -p "htop"); then
    sudo apt install -y htop
fi
I'm confused as to what this code snippet from the script does:
location=$(type -p "htop");
I need a clear explanation about this.

! negates the exit status of the following command;
location=... assigns a value to the variable $location;
$(...) is command substitution. It expands to the output of the enclosed command, whose exit status is propagated as the assignment's exit status;
type -p htop (the double quotes are not needed here) searches $PATH for an executable htop and outputs the full path to it. It fails if there is no such executable and no alias or function named htop; if an alias or function exists but no executable does, it prints an empty string yet still succeeds.
Putting it all together, it searches for an executable named htop, assigns the full path to it to $location, and if it can't be found (and there's no alias or function defining it), it runs sudo apt install -y htop, which on systems that use apt to manage packages tries to install the htop package with root privileges, answering yes to any questions.

In short, the exit status of the assignment is the exit status of the command substitution, and the exit status of the command substitution is the exit status of type.
type -p htop has an exit status of 0 if htop is a command that can be executed, with the output being the full path to the command.
The idea here is that location is assigned the full path to htop if it exists, and if it doesn't, then sudo apt install -y htop is run to install it. (With the slight problem, alluded to in the comments, that location remains empty if htop needs to be installed.)
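To see the whole pattern in one place, here is a minimal sketch of the same idea (the re-check after installation is my addition, not part of the original script; it fixes the empty $location mentioned above):
#!/bin/bash
# Succeeds and sets $location only if an executable htop is found in $PATH.
if ! location=$(type -p "htop"); then
    sudo apt install -y htop
    # Look it up again so $location is populated after a successful install.
    location=$(type -p "htop")
fi
echo "htop is at: $location"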

Related

Behavior of variables using su in linux [duplicate]

I'm installing Poetry in a dockerfile, but I want to do it under a different user (to play nicely with VSCode). I don't understand the behavior of the su command though.
When I run su vscode -c "echo $HOME" I get /root. However, when I run su vscode, and subsequently run echo $HOME, I get /home/vscode.
Even stranger, when I run su vscode -c "echo $HOME && curl -sSL https://install.python-poetry.org | python3", I get /root as output of the first command, but poetry is installed to /home/vscode/.local/bin. I'm at a loss here... can someone shine some light on this?
"echo $HOME" is evaluated by your current shell before su is executed. So su will only be passed as argument "echo /root" (already-evaluated). If you want the variable to be evaluated by the shell spawned by su, you need to escape it: 'echo $HOME'
See 2.2 Quoting in the POSIX specification
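A quick way to see the difference from an interactive root shell (vscode is just the user name from the question):
# Double quotes: the calling shell (running as root) expands $HOME first,
# so su merely runs: echo /root
su vscode -c "echo $HOME"
# Single quotes: the literal string echo $HOME reaches the shell started by su,
# which expands it in that user's environment
su vscode -c 'echo $HOME'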

How to get to root and then execute shell commands in Python3 on Ubuntu?

I am running some shell commands with os.system that need to be run as root.
I tried-
os.system("sudo su")
os.system("other commands")
and also-
home_dir = os.system("sudo su")
os.system("other commands")
But both the above scripts just become root and then stop executing, so the rest of my commands aren't executed.
I'm running Python 3.6.9 on an Ubuntu 18.04 VM.
The root privileges gained by sudo only apply to the command that is run through sudo, and do not raise the privileges of the caller (in this case, your python script). So your first command os.system("sudo su") would run an interactive root shell, but after you have exited from that and then your python code does the subsequent call to os.system("other commands"), these will run under its ordinary user privileges.
You could run each command one at a time via sudo:
os.system("sudo some_command")
os.system("sudo some_other_command")
Note that each command will be separately logged by sudo in the system log, and that even if there are several commands, sudo shouldn't ask for a password more than once within a short time interval.
Or if you need to do a sequence of steps like changing directories that might not be possible in the caller (for example, if the directory is not accessible by the non-root user that is running the python script), then you could do for example:
os.system("sudo sh -c 'cd some_dir && some_other_command'")
(Just for info, && is similar to ; but the other command is only run if the cd succeeded, so it is safer, although this point relates to shell syntax rather than python.)
If there are a lot of commands, of course you also have the option of just making a separate "helper" shell-script and running the entire script through sudo.
os.system("sudo sh /path/to/myscript.sh")
Finally to note, if you are running your python script in a non-interactive environment, you may need to tell sudo not to prompt for a password, at least for the relevant invoking user and target commands. For details, do man sudoers and look for examples involving NOPASSWD.
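For example, a sudoers drop-in along these lines (the user name and command path are placeholders) lets that user run the given command without a password prompt; always edit such files with visudo:
# created with: sudo visudo -f /etc/sudoers.d/myscript
someuser ALL=(ALL) NOPASSWD: /usr/local/bin/myscript.sh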

Linux: Quiet mode and how to make command wait to finish before next

If I run a command like, say,
yum install -y -q packageX
How do i ensure that it waits for finish before doing the next command?
My goal is to have as little unnecessary output as possible, but to run each command sequentially, with each one completing before the next starts.
Linux commands are generally already silent unless there is a problem; that way you only have to pay attention when attention is required. Some commands have options to silence even their useful, non-problematic output; use man COMMAND_NAME to find them, or check out the TL;DR pages, which are like man but beginner friendly: https://tldr.sh/
For your specific case, you're already using the silenced version of yum, since you've passed it the -q flag. The man docs for yum (man yum, or online at http://man7.org/linux/man-pages/man8/yum.8.html) describe -q as:
-q, --quiet
Run without output.
As for the commands:
Use && to chain commands where the success (specifically if the command returns 0 which is often attributed to success) of the previous command is required for the next to be executed.
Here's an example:
cd ./foo && ls
This translates as: attempt to change directory into the folder foo in the current directory, if-and-only-if that succeeds (returns 0) run ls. If foo doesn't exist or it otherwise cannot change directory into foo then ls will not run.
In your case, if you wanted to run a command only if your package installed successfully, you would do the following (with ls standing in for whatever more interesting command you actually want to run):
yum install -y -q packageX && ls
Just for completeness as conversations about && often bring about ;, if you don't care whether the last command completes successfully (returns 0) and just want to chain commands use ; instead.
cd ./foo; ls
Now even if cd ./foo fails the ls will still execute.
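Applied to the question, installing several packages quietly and strictly in order, each step only running if the previous one succeeded, would look like this (the package names are placeholders):
yum install -y -q packageX && yum install -y -q packageY && yum install -y -q packageZ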

Launching a bash shell from a sudo-ed environment

Apologies for the confusing question title. I am trying to launch an interactive bash shell from a shell script (say shel2.sh) which has been launched by a parent script (shel1.sh) in a sudo-ed environment. (I am creating a guided deployment script for my software, which needs to be installed as super-user, hence the sudo, but may need the user to access the shell.)
Here's shel1.sh
#!/bin/bash
set -x
sudo bash << EOF
echo $?
./shel2.sh
EOF
echo shel1 done
And here's shel2.sh
#!/bin/bash
set -x
bash --norc --verbose --noprofile -i
echo $?
echo done
I expected this to launch an interactive bash shell which waits for my input before returning to shel1.sh. This is what I see:
+ ./shel1.sh
+ sudo bash
0
+ bash --norc --verbose --noprofile -i
bash-4.3# exit
+ echo 0
0
+ echo done
done
+ echo shel1 done
shel1 done
The bash-4.3# calls an exit automatically and quits. Interestingly, if I invoke the bash shell with -l (or --login), the automatic entry is logout!
Can someone explain what is happening here ?
When you use a here document, you are redirecting the shell's standard input, and that of its spawned child processes, to the here document.
You can avoid using a here document in many situations. For example, replace the here document with a single-quoted string.
#!/bin/bash
set -x
sudo bash -c '
# Aside: How is this actually useful?
echo $?
# Spawned script inherits the stdin of "sudo bash"
./shel2.sh'
echo shel1 done
Without more details, it's hard to see where exactly you want to go with this, but most modern Linux platforms have package managers which allow all kinds of hooks for installation, so that you would typically not need to do this sort of thing. Have you looked into that?
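If shel1.sh exists only to run shel2.sh as root, a simpler variant (just a sketch, assuming shel2.sh is executable) avoids the intermediate shell entirely, so the interactive bash started inside shel2.sh keeps the terminal as its standard input:
#!/bin/bash
set -x
# Run the child script directly under sudo; stdin stays attached to the terminal.
sudo ./shel2.sh
echo shel1 done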

How to run command during Docker build which requires a tty?

I have some script I need to run during a Docker build which requires a tty (which Docker does not provide during a build). Under the hood the script uses the read command. With a tty, I can do things like (echo yes; echo no) | myscript.sh.
Without it I get strange errors I don't completely understand. So is there any way to use this script during the build (given that it's not mine to modify)?
EDIT: Here's a more definite example of the error:
FROM ubuntu:14.04
RUN echo yes | read
which fails with:
Step 0 : FROM ubuntu:14.04
---> 826544226fdc
Step 1 : RUN echo yes | read
---> Running in 4d49fd03b38b
/bin/sh: 1: read: arg count
The command '/bin/sh -c echo yes | read' returned a non-zero code: 2
From the Dockerfile reference for RUN <command>:
shell form, the command is run in a shell, which by default is /bin/sh -c on Linux or cmd /S /C on Windows
Let's see what exactly /bin/sh is in ubuntu:14.04:
$ docker run -it --rm ubuntu:14.04 bash
root@7bdcaf403396:/# ls -n /bin/sh
lrwxrwxrwx 1 0 0 4 Feb 19 2014 /bin/sh -> dash
/bin/sh is a symbolic link to dash; see the read builtin in dash:
$ man dash
...
read [-p prompt] [-r] variable [...]
The prompt is printed if the -p option is specified and the standard input is a terminal. Then a line
is read from the standard input. The trailing newline is deleted from the line and the line is split as
described in the section on word splitting above, and the pieces are assigned to the variables in order.
At least one variable must be specified. If there are more pieces than variables, the remaining pieces
(along with the characters in IFS that separated them) are assigned to the last variable. If there are
more variables than pieces, the remaining variables are assigned the null string. The read builtin will
indicate success unless EOF is encountered on input, in which case failure is returned.
By default, unless the -r option is specified, the backslash ``\'' acts as an escape character, causing
the following character to be treated literally. If a backslash is followed by a newline, the backslash
and the newline will be deleted.
...
read function in dash:
At least one variable must be specified.
Let's see the read builtin in bash:
$ man bash
...
read [-ers] [-a aname] [-d delim] [-i text] [-n nchars] [-N nchars] [-p prompt] [-t timeout] [-u fd] [name...]
If no names are supplied, the line read is assigned to the variable REPLY. The return code is zero,
unless end-of-file is encountered, read times out (in which case the return code is greater than
128), or an invalid file descriptor is supplied as the argument to -u.
...
So I guess your script myscript.sh starts with #!/bin/bash or something else, but not #!/bin/sh.
Also, you can change your Dockerfile like below:
FROM ubuntu:14.04
RUN echo yes | read ENV_NAME
Links:
https://docs.docker.com/engine/reference/builder/
http://linux.die.net/man/1/dash
http://linux.die.net/man/1/bash
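If the script genuinely relies on bash's more forgiving read, another option (a sketch, not taken from the original answer) is to make RUN use bash instead of /bin/sh via the SHELL instruction:
FROM ubuntu:14.04
# Subsequent RUN instructions now use bash -c rather than the default /bin/sh -c (dash)
SHELL ["/bin/bash", "-c"]
# bash's read without a variable name assigns to REPLY and succeeds
RUN echo yes | read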
Short answer: you can't do it directly, because neither docker build nor buildx implements a tty (/dev/tty, /dev/console). There is a hacky solution that achieves what you need, but I highly discourage using it since it breaks the concept of CI; that's also why Docker didn't implement it.
Hacky solution
FROM ubuntu:14.04
RUN echo yes | read #tty requirement command
As mentioned in the Docker reference document, RUN consists of two stages: first the execution of the command, and second the commit of the result to the image as a new layer. So you can do the stages manually yourself, providing a tty for the first stage (execution) and then committing the result.
Code:
cd
cat >> tty_wrapper.sh << EOF
echo yes | read ## Your command which needs tty
rm /home/tty_wrapper.sh
EOF
docker run --interactive --tty --detach --privileged --name name1 ubuntu:14.04
docker cp tty_wrapper.sh name1:/home/
docker exec name1 bash -c "cd /home && chmod +x tty_wrapper.sh && ./tty_wrapper.sh "
docker commit name1 your:tag
Your new image is ready.
Here is a description of the code.
First we create a bash script that wraps the command needing a tty and removes itself after its first run. Then we run a container with a tty allocated (you can drop --privileged if you don't need it). Next we copy the wrapper script into the container, execute it there, and perform the execution and commit stages ourselves.
You don't need a tty for feeding your data to your script. Just doing something like (echo yes; echo no) | myscript.sh as you suggested will do. Also please make sure you copy your file first before trying to execute it, with something like COPY myscript.sh myscript.sh
Most likely you don't need a tty. As the comment on the question shows, even the example provided is a situation where the read command was not properly called. A tty would turn the build into an interactive terminal process, which doesn't translate well to automated builds that may be run from tools without terminals.
If you need a tty, then there's the C library call openpty that you would use when forking a process that includes a pseudo tty. You may be able to solve your problem with a tool like expect, but it's been so long that I don't remember if it creates a pty or not. Alternatively, if your application can't be built automatically, you can manually perform the steps in a running container and then docker commit the resulting container to make an image.
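For reference, an expect one-liner that feeds an answer to a prompting script might look like this (the script name and prompt text are made up for illustration; expect does allocate a pseudo-terminal for the spawned process):
expect -c 'spawn ./myscript.sh; expect "Continue?"; send "yes\r"; expect eof'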
I'd recommend against any of those and to work out the procedure to build your application and install it in a non-interactive fashion. Depending on the application, it may be easier to modify the installer itself.
