Option -l of exec shell command - linux

Could you please clarify the use of the -l option of the exec shell command? I didn't notice any difference when I ran exec ls | cat and exec -l ls | cat.

The -l option of exec adds a - at the beginning of the name of your command. For example:
exec -l diff | head
-diff: missing operand after '-diff'
-diff: Try '-diff --help' for more information.
Note the - everywhere before diff.
What is the point of all this? If the name used to start a shell begins with a -, that shell will act as a login shell. From man bash:
A login shell is one whose first character of argument zero is a -, or one started with the --login option.
Now, man exec states that:
If the -l option is supplied, the shell places a dash at the beginning of the zeroth argument passed to command. This is what login(1) does.
So exec -l bash will run bash as a login shell. To test this, we can use the fact that a login bash executes the file ~/.bash_profile, so:
$ cat ~/.bash_profile
#!/bin/sh
printf "I am a login shell!\n"
If I start a login bash, the command printf "I am a login shell!\n" will be executed. Now to test with exec:
$ exec bash
$
Nothing is displayed; we are in a non-login shell.
$ exec -l bash
I am a login shell!
$
Here we have a login shell.
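The same thing can be cross-checked with bash's read-only login_shell option (a quick sketch, assuming bash):
$ exec bash
$ shopt -q login_shell && echo login || echo non-login
non-login
$ exec -l bash
I am a login shell!
$ shopt -q login_shell && echo login || echo non-login
login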

Related

Run bash commands inside container with alias

We have docker containers, and I would like to run a bash command inside my container, through a script. Like so:
bin/run-in-container ls -la
where the script run-in-container looks something like this:
#!/usr/bin/env bash
docker compose exec container_name bash -ic "$@"
I cannot get my script to pass all of the parameters inside single quotes. As written, it expands to
docker compose exec api bash -ic ls -la
but what I want it to expand to is
docker compose exec api bash -ic 'ls -la'
If I try to concatenate a string of single quotes and my parameters, the quote character ends up escaped:
#!/usr/bin/env bash
escaped_single_qoute="'"
docker compose exec container_name bash -ic $escaped_single_qoute "$@" $escaped_single_qoute
But this expands to:
docker compose exec api bash -ic ''\''ls' '-la'\'''
Here is an MCVE: gist with Dockerfile and code
services:
  node:
    restart: unless-stopped
    image: 888aaen/bash_stackoverflow:latest
    volumes:
      # Mount .bash_aliases file.
      - "./.bash_aliases:/home/node/.bash_aliases"
script
#!/usr/bin/env bash
docker compose exec node bash -ic "$@"
UPDATE: not solved
KamilCuk came up with a great solution, but it does not work with aliases inside the container.
docker compose exec container_name bash -ic '"$@"' _ "$@"
# usage example: ./script ls -la
# usage example: ./script sh -c "ls -la ; echo another command"
Let's say we have these aliases:
# ~/.bash_aliases
alias ll='ls -l'
alias la='ls -la'
I want to be able to run
bin/run-in-container ll
which is possible with
docker compose exec api bash -ic 'la src/'
but not with
bin/run-in-container la src/
If you want to pass a single command, just pass the command:
docker compose exec container_name "$#"
# usage example: ./script ls -la
# usage example: ./script sh -c "ls -la ; echo another command"
If you want to pass a single command with the (odd?) requirement of running it inside an interactive bash shell, you would forward the arguments and execute them inside the shell:
docker compose exec container_name bash -ic '"$@"' _ "$@"
# usage example: ./script ls -la
# usage example: ./script sh -c "ls -la ; echo another command"
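The trick is that the outer "$@" is expanded by the calling script, while the single-quoted '"$@"' survives intact and is expanded again by the bash inside the container; the _ merely fills the $0 slot. A minimal local sketch of the same mechanism, without docker (the -i flag is left out because it is not needed for the argument forwarding):
set -- ls -la            # simulate the wrapper script's "$@"
bash -c '"$@"' _ "$@"    # inside: $0=_, $1=ls, $2=-la, so it runs ls -la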
If you want to pass a shell script, as if passing it to eval, just concatenate the arguments with spaces:
docker compose exec container_name bash -ic "$*"
# usage example: ./script ls -la
# usage example: ./script ls -la ';' echo another command
# usage example: ./script "$(printf "%q " sh -c "ls -la ; echo another command")"
Let's say we have these aliases:
That's all unorthodox. Aliases are for interactive shells; the age-old advice is to use functions instead of aliases. .bash_aliases is a nonstandard (but common) file that is not sourced on bash startup unless it is explicitly mentioned in the .bashrc read by interactive non-login shells. See https://www.gnu.org/software/bash/manual/html_node/Bash-Startup-Files.html .
If your intention is to provide an ll command for non-interactive use, instead create an executable named ll in /usr/local/bin that calls ls -l "$@".
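A minimal sketch of such a wrapper (the name and location are just the suggestion above):
#!/bin/sh
# /usr/local/bin/ll -- usable from non-interactive shells, unlike an alias
exec ls -l "$@"
Make it executable with chmod +x /usr/local/bin/ll; then ll works from the wrapper script without any alias machinery.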
Another common convention is to put functions (not aliases! aliases are for interactive shells) inside /etc/profile.d and then run a non-interactive login shell. People are used to doing sh -l when they need their environment over ssh, so the convention is reasonably well understood by users.

What kind of command is "sudo", "su", or "torify"

I know what they do. I was just wondering what kind of commands they are, and how you can make one using shell scripting.
For example, a command like:
ignoreError ls /Home/
ignoreError mkdir /Home/
ignoreError cat
ignoreError randomcommand
Hope you get the idea
The way to do it in a shell script is with the "$@" construct.
"$@" expands to a quoted list of all of the arguments you passed to your shell script. $1 would be the command you want your shell script to run, and $2, $3, etc. are the arguments to that command.
The only example I have is from cygwin. Cygwin does not have sudo, but I have this script that emulates it:
#!/usr/bin/bash
cygstart --action=runas "$@"
So when I run a command like
$ sudo ls -l
my sudo script does whatever it needs to do (cygstart --action=runas) and calls the ls command with the -l argument.
Try this script:
#!/bin/sh
"$#"
Call it run, for example, make it executable with chmod u+x run, and try it:
$ run ls -l #or ./run ls -l
...
output of ls
...
The idea is that the script takes the parameters specified on the command line and uses them as a (sub)command... Modify the script this way:
#!/bin/sh
echo "Trying to run $*"
"$#"
and you will see.
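Applying the same "$@" idea to the ignoreError example from the question, a possible sketch (assuming "ignore" means discarding the error output and always reporting success) is:
#!/bin/sh
# ignoreError -- run the given command, hide its error output,
# and exit 0 no matter what the command returned
"$@" 2>/dev/null
exit 0
Save it as ignoreError somewhere in PATH, chmod u+x it, and ignoreError mkdir /Home/ behaves as in the question.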

sudo command behaviour with quotes

I need your help in understanding this behaviour of sudo.
sudo -s -- 'ls -l' works, but sudo 'ls -l' throws an error saying
sudo: ls -l: command not found
I realize it treats the entire quoted string as a single command name (spaces included), but what I don't get is how it works fine with the -s flag yet fails when -s is not there.
Without -s, the first argument is the name of the command to execute. With -s, the first argument is a string passed to the -c option of whatever shell ($SHELL or your system shell) is used to execute the argument.
That is, assuming $SHELL is sh, the following are equivalent:
sudo -s -- 'ls -l'
sudo -- sh -c 'ls -l'
From the sudo man page:
-s [command]
The -s (shell) option runs the shell specified by the SHELL environment variable if it is set or the shell as specified in the password database. If a command is specified, it is passed to the shell for execution via the shell's -c option. If no command is specified, an interactive shell is executed.
It behaves the way it does because a new shell is spawned, which splits the words in your "quoted command" the way shells normally do.
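The same word-splitting difference can be seen without sudo at all: a quoted string is a single word, so without a shell in between it has to name one single program, spaces included (a rough analogy, not sudo itself):
$ 'ls -l'
bash: ls -l: command not found
$ sh -c 'ls -l'
<normal ls -l listing>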

Why does "/usr/bin/env bash -x" only work in command line?

I am playing with a Docker CentOS image, and I find that running the "/usr/bin/env bash -x" command works fine in a terminal:
bash-4.1# /usr/bin/env bash -x
bash-4.1# exit
+ exit
exit
But after putting this command into a script's shebang line and executing the script, it doesn't work and complains "No such file or directory":
bash-4.1# ls -lt a.sh
-rwxr-xr-x. 1 root root 23 May 20 04:27 a.sh
bash-4.1# cat a.sh
#!/usr/bin/env bash -x
bash-4.1# ./a.sh
/usr/bin/env: bash -x: No such file or directory
Is there any difference between the two methods?
The short answer is that you get only one parameter for the interpreter specified via the "#!" mechanism. Here that single parameter became "bash -x".
Usually the limitation is more apparent, e.g., using
#!/bin/bash -x -i
would pass "-x -i" as the parameter, and get unexpected results.
Sven Mascheck comments on this in his page on the topic:
most systems deliver all arguments as a single string
The shebang line should have at most one argument.
When you give more arguments, they will not be split. You can compare this with running the following on the command line:
bash-4.1# /usr/bin/env "bash -x"

Redirecting the output of a program which is itself an argument

Let me first present the scenario, with the command that is not working in a Linux bash environment.
$ timed-run prog1 1>/dev/null 2>out.tmp
In the above case I want to redirect the output of the program 'prog1' to /dev/null and to the file out.tmp. But this command is redirecting the output (if any) of timed-run itself to out.tmp.
Any help will be appreciated.
From a simple example, I experience exactly the opposite.
$ time ls 1> foo 2> bar
real 0m0.002s
user 0m0.004s
sys 0m0.000s
$ more foo
<show files>
$ more bar
<empty>
$
The output of ls is redirected, and the output of time is not!
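That is because time here is the bash keyword: the redirections bind to ls, while the timing report goes to the shell's own standard error. It can still be captured by redirecting around the whole construct, e.g.:
$ { time ls 1> foo 2> bar ; } 2> timing.txt
$ cat timing.txt
<timing report>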
The problem here is in timed-run, not in bash. If you run the same command with timed-run replaced by the standard time command, it works as you expect. Essentially, timed-run needs to run its arguments through the shell again. If it is a shell script, you can do this with the eval command. For example:
#!/bin/sh
echo here is some output
echo $*
eval $*
now run
timed-run prog1 '1>/dev/null' '2>output.tmp'
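If timed-run is a script under your control, a minimal hypothetical sketch along these lines (assuming bash, so the time keyword can precede eval) would be:
#!/bin/bash
# hypothetical timed-run: the quoted redirections arrive as ordinary
# arguments and only take effect when eval re-parses them
time eval "$@"
With that, timed-run prog1 '1>/dev/null' '2>output.tmp' redirects prog1's output while the timing report still reaches the terminal.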
How about using sh -c 'cmd' like so:
time -p sh -c 'ls -l xcvb 1>/dev/null 2>out.tmp'
time -p sh -c 'exec 0</dev/null 1>/dev/null 2>out.tmp; ls -l xcvb'
# in out.tmp:
# ls: xcvb: No such file or directory
