Linux: Quiet mode and how to make a command wait to finish before the next

If I do a command like, say,
yum install -y -q packageX
how do I ensure that it finishes before the next command runs?
My goal is to have as little unnecessary output as possible, but to run each command sequentially, with each one completing before the next starts.

Linux commands are generally already silent unless there is a problem; that way you only have to pay attention when attention is required. Some commands have options to silence even their useful, non-problematic output; use man COMMAND_NAME to find them, or check out the TL;DR pages, which are like man but beginner-friendly: https://tldr.sh/
For your specific case you're already using the quiet version of yum, since you've passed it the -q flag. The yum man page (man yum, or online at http://man7.org/linux/man-pages/man8/yum.8.html) states:
-q, --quiet
Run without output.
As for the commands:
Use && to chain commands when the next command should run only if the previous one succeeded (specifically, if it returned exit status 0, which is conventionally treated as success).
Here's an example:
cd ./foo && ls
This translates as: attempt to change into the directory foo under the current directory; if and only if that succeeds (returns 0), run ls. If foo doesn't exist, or the shell otherwise cannot change into it, ls will not run.
In your case, if you wanted to run a command only after your package installed successfully, you would do the following (where ls stands in for whatever command you actually want to run next):
yum install -y -q packageX && ls
Just for completeness, since conversations about && often bring up ;: if you don't care whether the previous command completed successfully (returned 0) and just want to chain commands, use ; instead.
cd ./foo; ls
Now even if cd ./foo fails, ls will still execute.
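So, returning to your goal of installing several packages quietly and strictly in order, you could chain the installs like this (packageY here is just a hypothetical second package):
yum install -y -q packageX && yum install -y -q packageY
Note that in a script each foreground command already runs to completion before the next line starts; && only adds the extra requirement that it must also succeed.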

Related

What does `location=$(type -p "htop")` mean in a script? [duplicate]

The script is this:
#!/bin/bash
echo
echo "################################################################"
echo " Installing Htop "
echo "################################################################"
echo
if ! location=$(type -p "htop"); then
    sudo apt install -y htop
fi
I'm confused as to what this code snippet from the script does
location=$(type -p "htop");
I need a clear explanation of this.
! negates the exit status of the following command;
location=... assigns a value to the variable $location;
$(...) is command substitution. It expands to the output of the enclosed command, whose exit status is propagated as the assignment's exit status;
type -p htop (the double quotes are not needed here) searches $PATH for an executable named htop and prints its full path. It fails if no such executable exists and there is no alias or function named htop either; if an alias or function named htop exists but no executable does, it doesn't fail, but prints nothing (so the assignment stores an empty string).
Putting it all together: it searches for an executable named htop and assigns its full path to $location; if it can't be found (and there's no alias or function defining it), it runs sudo apt install -y htop, which on systems that use apt to manage packages tries to install the htop package with root privileges, answering yes to any questions.
In short, the exit status of the assignment is the exit status of the command substitution, and the exit status of the command substitution is the exit status of type.
type -p htop has an exit status of 0 if htop is a command that can be executed, with the output being the full path to the command.
The idea here is that location is assigned the full path to htop if it exists, and if it doesn't, then sudo apt install -y htop is run to install it. (With the slight problem, alluded to in the comments, that location remains empty, and is never updated with the new path, if htop has to be installed.)
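You can see this exit-status propagation for yourself in an interactive bash session; a minimal sketch, where no_such_cmd stands for a command that doesn't exist:
location=$(type -p htop); echo $?          # prints 0 if an executable htop is found in $PATH
location=$(type -p no_such_cmd); echo $?   # prints a non-zero status, and $location ends up empty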

How to get to root and then execute shell commands in Python3 on Ubuntu?

I am running some shell commands with os.system that need to be run as root.
I tried:
os.system("sudo su")
os.system("other commands")
and also:
home_dir = os.system("sudo su")
os.system("other commands")
But both the above scripts just become root and then stop executing, so the rest of my commands aren't executed.
I'm running Python 3.6.9 on an Ubuntu 18.04 VM.
The root privileges gained by sudo only apply to the command that is run through sudo, and do not raise the privileges of the caller (in this case, your python script). So your first command os.system("sudo su") would run an interactive root shell, but after you have exited from that and then your python code does the subsequent call to os.system("other commands"), these will run under its ordinary user privileges.
You could run each command one at a time via sudo:
os.system("sudo some_command")
os.system("sudo some_other_command")
Note that each command will be separately logged by sudo in the system log, and that even if there are several commands, sudo shouldn't ask for a password more than once within a short time interval.
Or if you need to do a sequence of steps like changing directories that might not be possible in the caller (for example, if the directory is not accessible by the non-root user that is running the python script), then you could do for example:
os.system("sudo sh -c 'cd some_dir && some_other_command'")
(Just for info, && is similar to ; but the second command is only run if the cd succeeded, so it is safer; this point relates to shell syntax rather than Python.)
If there are a lot of commands, of course you also have the option of just making a separate "helper" shell-script and running the entire script through sudo.
os.system("sudo sh /path/to/myscript.sh")
Finally to note, if you are running your python script in a non-interactive environment, you may need to tell sudo not to prompt for a password, at least for the relevant invoking user and target commands. For details, do man sudoers and look for examples involving NOPASSWD.
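For illustration only, a sudoers entry allowing one user to run one specific command without a password might look like the following; alice and /usr/local/bin/some_command are placeholders, and such files should always be edited through visudo:
# e.g. in /etc/sudoers.d/90-myscript, edited with: sudo visudo -f /etc/sudoers.d/90-myscript
alice ALL=(root) NOPASSWD: /usr/local/bin/some_command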

Is it possible to suppress NPM's echo of the commands it is running?

I've got a bash script that starts up a server and then runs some functional tests. It's got to happen in one script, so I'm running the server in the background. This all happens via 2 npm commands: start:nolog and test:functional.
All good. But there's a lot of cruft in the output that I don't care about:
✗ ./functional-tests/runInPipeline.sh
(... "good" output here)
> @co/foo@2.2.10 pretest:functional /Users/jcol53/Documents/work/foo
> curl 'http://localhost:3000/foo' -s -f -o /dev/null || (echo 'Website must be running locally for functional tests.' && exit 1)
> @co/foo@2.2.10 test:functional /Users/jcol53/Documents/work/foo
> npm run --prefix functional-tests test:dev:chromeff
> @co/foo-functional-tests@1.0.0 test:dev:chromeff /Users/jcol53/Documents/work/foo/functional-tests
> testcafe chrome:headless,firefox:headless ./tests/**.test.js -r junit:reports/functional-test.junit.xml -r html:reports/functional-test.html --skip-js-errors
That's a lot of lines that I don't need there. Can I suppress the @co/foo-functional-tests etc. lines? They aren't telling me anything worthwhile...
npm run -s kills all output from the command, which is not what I'm looking for.
This is probably not possible but that's OK, I'm curious, maybe I missed something...

Linux: how to change maximum number of files a process can open?

I have to execute a process on a cluster of machines; the size of the cluster is on the order of 100, so I cannot start the processes manually. I have to start them by script (which uses ssh; currently I am using python-paramiko for this). The number of TCP sockets these processes open is more than 1024 (the default limit on Linux), so I need to raise it using ulimit -n 10000. That changes the limit for that shell session only, and the command works only as the root user, so my script is not able to do it.
I tried to execute this command:
sudo su && ulimit -n 10000 && <commandToExecuteMyProcess>
But this didn't work. The commands after "sudo su" didn't execute at all; they executed only when I logged out of the su session.
This article shows a way to make the change permanent, but when I open limits.conf I don't find anything there; it only has some commented notes.
Please suggest a way to increase the limit permanently, or to change it from a script for each session.
That's not how it works: sudo su just opens a new shell so you can enter commands as root, and after you exit that shell it executes the rest of the line as the normal user.
Second, this is a special case because ulimit is not actually a program but a shell built-in command, so it must be used within a shell. That is why something like sudo ulimit -n 10000 won't work: sudo can't find such a program, because it doesn't exist.
So, the only alternative is a bit ugly but works:
sudo bash -c 'ulimit -n 10000 && <command>'
Everything inside '...' will execute in a bash session of the root user.
Note that you can replace && with ; in this case: that's because it is being executed as root and ulimit -n 10000 will always complete successfully.
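As for the permanent change the question asks about, the limits.conf file typically takes entries like the following; this is a sketch assuming pam_limits is in use, with the domain and value adjusted to your setup:
# /etc/security/limits.conf (or a file under /etc/security/limits.d/)
*    soft    nofile    10000
*    hard    nofile    10000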

How to run command during Docker build which requires a tty?

I have some script I need to run during a Docker build which requires a tty (which Docker does not provide during a build). Under the hood the script uses the read command. With a tty, I can do things like (echo yes; echo no) | myscript.sh.
Without it I get strange errors I don't completely understand. So is there any way to use this script during the build (given that it's not mine to modify)?
EDIT: Here's a more definite example of the error:
FROM ubuntu:14.04
RUN echo yes | read
which fails with:
Step 0 : FROM ubuntu:14.04
---> 826544226fdc
Step 1 : RUN echo yes | read
---> Running in 4d49fd03b38b
/bin/sh: 1: read: arg count
The command '/bin/sh -c echo yes | read' returned a non-zero code: 2
RUN <command> in Dockerfile reference:
shell form, the command is run in a shell, which by default is /bin/sh -c on Linux or cmd /S /C on Windows
let's see what exactly /bin/sh is in ubuntu:14.04:
$ docker run -it --rm ubuntu:14.04 bash
root@7bdcaf403396:/# ls -n /bin/sh
lrwxrwxrwx 1 0 0 4 Feb 19 2014 /bin/sh -> dash
/bin/sh is a symbolic link to dash; see the description of read in dash:
$ man dash
...
read [-p prompt] [-r] variable [...]
The prompt is printed if the -p option is specified and the standard input is a terminal. Then a line
is read from the standard input. The trailing newline is deleted from the line and the line is split as
described in the section on word splitting above, and the pieces are assigned to the variables in order.
At least one variable must be specified. If there are more pieces than variables, the remaining pieces
(along with the characters in IFS that separated them) are assigned to the last variable. If there are
more variables than pieces, the remaining variables are assigned the null string. The read builtin will
indicate success unless EOF is encountered on input, in which case failure is returned.
By default, unless the -r option is specified, the backslash ``\'' acts as an escape character, causing
the following character to be treated literally. If a backslash is followed by a newline, the backslash
and the newline will be deleted.
...
The key point for read in dash:
At least one variable must be specified.
Now let's see read in bash:
$ man bash
...
read [-ers] [-a aname] [-d delim] [-i text] [-n nchars] [-N nchars] [-p prompt] [-t timeout] [-u fd] [name...]
If no names are supplied, the line read is assigned to the variable REPLY. The return code is zero,
unless end-of-file is encountered, read times out (in which case the return code is greater than
128), or an invalid file descriptor is supplied as the argument to -u.
...
So I guess your script myscript.sh starts with #!/bin/bash or something similar, but not #!/bin/sh.
Also, you can change your Dockerfile like below:
FROM ubuntu:14.04
RUN echo yes | read ENV_NAME
Links:
https://docs.docker.com/engine/reference/builder/
http://linux.die.net/man/1/dash
http://linux.die.net/man/1/bash
Short answer: you can't do it directly, because neither docker build nor buildx provides a tty (/dev/tty, /dev/console). But there is a hacky workaround that achieves what you need; I highly discourage using it, since it breaks the concept of CI, which is why Docker didn't implement this in the first place.
Hacky solution
FROM ubuntu:14.04
RUN echo yes | read  # your command that requires a tty
As mentioned in the Docker reference documentation, RUN consists of two stages: first the command is executed, then the result is committed to the image as a new layer. So you can perform the stages manually yourself: provide a tty for the first stage (execution) and then commit the result.
Code:
cd
cat >> tty_wrapper.sh << EOF
echo yes | read ## Your command which needs tty
rm /home/tty_wrapper.sh
EOF
docker run --interactive --tty --detach --privileged --name name1 ubuntu:14.04
docker cp tty_wrapper.sh name1:/home/
docker exec name1 bash -c "cd /home && chmod +x tty_wrapper.sh && ./tty_wrapper.sh "
docker commit name1 your:tag
Your new image is ready.
Here is a description of the code.
First we create a bash script that wraps the tty-requiring command and then removes itself after its first run. Then we run a container with the --tty option (you can drop --privileged if you don't need it). Next we copy the wrapper script into the container and perform the execution and commit stages ourselves.
You don't need a tty to feed data to your script. Just doing something like (echo yes; echo no) | myscript.sh, as you suggested, will do. Also, please make sure you copy your file into the image before trying to execute it, with something like COPY myscript.sh myscript.sh.
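A minimal sketch of that suggestion as a Dockerfile (it assumes myscript.sh sits in the build context and has a proper shebang; the destination path is just an example):
FROM ubuntu:14.04
COPY myscript.sh /myscript.sh
RUN chmod +x /myscript.sh && (echo yes; echo no) | /myscript.sh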
Most likely you don't need a tty. As the comment on the question shows, even the example provided is a situation where the read command was not properly called. A tty would turn the build into an interactive terminal process, which doesn't translate well to automated builds that may be run from tools without terminals.
If you need a tty, then there's the C library call to openpty that you would use when forking a process that includes a pseudo tty. You may be able to solve your problem with a tool like expect, but it's been so long that I don't remember if it creates a ptty or not. Alternatively, if your application can't be built automatically, you can manually perform the steps in a running container, and then docker commit the resulting container to make an image.
I'd recommend against any of those and to work out the procedure to build your application and install it in a non-interactive fashion. Depending on the application, it may be easier to modify the installer itself.

Resources