Passing (and executing) a quoted command pipeline as a function argument [duplicate]

This question already has answers here:
How to run script commands from variables?
(3 answers)
Execute command in a variable don't execute the latter part of a pipe
(1 answer)
Closed 1 year ago.
Today I encountered something quite strange.
I have two scripts:
wrong.sh:
execute()
{
${@}
}
execute "ls -l | less"
right.sh:
execute()
{
${@}
}
execute ls -l | less
Running sh wrong.sh gives the following output:
ls: less: No such file or directory
ls: |: No such file or directory
While running sh right.sh gives me the same output as running ls -l | less.
May I know:
(1) Why does running sh wrong.sh give that wrong output?
(2) How can I modify the execute function so that running sh wrong.sh gives the same result as running ls -l | less?

In your wrong.sh invocation, ${@} expands unquoted, so the argument is word-split and the shell runs ls with the literal arguments -l, | and less; the | is never interpreted as a pipe. (Had you quoted it as "${@}", the shell would instead have looked for a command with the rather unusual 12 byte long name ls -l | less, which might actually exist, say as /usr/bin/'ls -l | less'.)
If you want to interpret a string as a shell command, the easiest thing to do is:
sh -c "$command_as_string"
So in your case:
execute()
{
sh -c "$1" # no ${@} if you only intend to ever pass a single string
# or $SHELL -c "$1" if you want the *same* shell as you're calling execute() from
}
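Putting it together, a fixed wrong.sh would look like this (a sketch based on the above; less needs a terminal, so run it from an interactive shell):
#!/bin/sh
execute()
{
sh -c "$1"
}
execute "ls -l | less"
Another common option is eval "$1", which runs the string in the current shell rather than in a child sh.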

How to detect if a bash script is already running, considering its arguments [duplicate]

This question already has answers here:
Quick-and-dirty way to ensure only one instance of a shell script is running at a time
(43 answers)
What is the best way to ensure only one instance of a Bash script is running? [duplicate]
(14 answers)
Closed 1 year ago.
Sorry for my poor English ;)
I need to check if a script is already running or not. I don't want to use a lock file, as that can be tricky (i.e. if my script wrote a lock file but then crashed, it would still be considered running).
I also need to take parameters into account, i.e.:
test.sh 123
should be considered as a different process than
test.sh 456
I tried this :
#!/bin/bash
echo "inside test.sh, script name with arguments: $0 +$*$"
echo " simple pgrep on script name with arguments:"
pgrep -f "$0 +$*$"
echo " counting simple pgrep on script name with arguments with wc -l"
echo $(pgrep -f "$0 +$*$" | wc -l)
echo " counting pgrep echo result with wc -w"
processes=$(pgrep -f "$0 +$*$")
nbProcesses=$(echo $processes | wc -w)
echo $nbProcesses
sleep 300
When I try, I get this result:
[frederic.charrier@charrier tmp]$ /tmp/test.sh 123
inside test.sh, script name with arguments: /tmp/test.sh +123$
simple pgrep on script name with arguments:
123976
counting simple pgrep on script name with arguments with wc -l
2
counting pgrep echo result with wc -w
1
^Z
[1]+ Stopped /tmp/test.sh 123
[frederic.charrier@charrier tmp]$ /tmp/test.sh 123
inside test.sh, script name with arguments: /tmp/test.sh +123$
simple pgrep on script name with arguments:
123976
124029
counting simple pgrep on script name with arguments with wc -l
3
counting pgrep echo result with wc -w
2
My questions are:
when I run the script the first time, it is running once, so pgrep returns only one result, 123976, which is fine. But why does a "wc -l" on that single result return 2?
when I run the script a second time, I get the same strange behaviour: the bare pgrep returns the correct result, pgrep | wc -l returns one too many, and "echo $processes | wc -w" returns the correct count. Why?
How to detect if a bash script is already running
If you are aware of the drawbacks of your method, using pgrep looks fine. Note that both $0 and $* can contain regex syntax, so you have to escape them first, and I would also anchor the pattern at the start, as in pgrep -f "^$0 ...", so it only matches from the beginning of the command line.
why a "wc -l" on 123976 is returning 2?
Because command substitution $(..) spawns a subshell, so there are two shells running, when pgrep is executed.
Overall, echo $(cmd) is an antipattern. Just run cmd directly.
In some cases, such as when there is a single command inside the command substitution, bash optimizes by replacing (exec'ing) the subshell with the command itself, eliminating the extra process. That's why processes=$(pgrep ..) finds only 1.
Why?
Same reason: the command substitution adds one more forked copy of the script, so one more process matches.
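If you can relax the no-lock-file requirement, a common alternative is flock(1): the lock is tied to an open file descriptor, so the kernel drops it the moment the script exits or crashes, which avoids the stale-lock problem you were worried about. A minimal sketch (the /tmp lock path is an arbitrary choice, and it assumes the arguments contain no / characters), keyed on the arguments so that test.sh 123 and test.sh 456 do not block each other:
#!/bin/bash
# Per-argument lock file: different argument sets get different locks.
lock="/tmp/$(basename "$0").$*.lock"
# Keep fd 9 open on the lock file; the kernel releases the lock when fd 9
# closes, even if the script crashes.
exec 9>"$lock" || exit 1
if ! flock -n 9; then
echo "already running with these arguments" >&2
exit 1
fi
# ... actual work goes here ...
sleep 300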

Question regarding scope of variables/environment variables in Linux

I would like to get a better understanding of the theoretical/technical reason for the following behaviour.
On a Linux shell I run the following:
MY_VAR="foo" && python3 -c "import os; print('MY_VAR' in os.environ)"
And the result is False
I understand that this is due to the fact that, in order to access a variable from a subprocess of the current shell (in this case Python), we need to export it, so when running it like this:
export MY_VAR="foo" && python3 -c "import os; print('MY_VAR' in os.environ)"
Result is True
I know this also happens with bash scripts called from the terminal: for the script to have access to the variable, it needs to be exported first.
However when running something like the following:
MY_VAR_2="foo_2" && echo "This line matches foo_2" | grep ${MY_VAR_2} | wc -l
The result is 1, so there is a match.
My question is: in this case, why was MY_VAR_2 "available" to grep with no need to use export?
Isn't grep also a program and therefore a subprocess of the current shell?
With:
MY_VAR_2="foo_2" && echo "This line matches foo_2" | grep ${MY_VAR_2} | wc -l
grep never looks the variable up at all: the current shell expands ${MY_VAR_2} to foo_2 before grep is started, so grep receives the literal string foo_2 as a command-line argument, and no export is needed.
If you change the line to:
MY_VAR_2="foo_2" && bash -c 'echo "This line matches foo_2"' | bash -c 'grep ${MY_VAR_2} | wc -l'
the single quotes stop the current shell from expanding ${MY_VAR_2}; the expansion now happens inside a separate child bash, which can only see the variable if it was exported into its environment, and so:
export MY_VAR_2="foo_2" && bash -c 'echo "This line matches foo_2"' | bash -c 'grep ${MY_VAR_2} | wc -l'
In the case of python, python is likewise a child process, and os.environ only contains variables that were exported into its environment, so the same logic applies.
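A minimal way to see the two mechanisms side by side (reusing MY_VAR_2 from above):
MY_VAR_2="foo_2"
# The shell expands the value into the argument list before python3 starts:
python3 -c "import sys; print(sys.argv[1])" "${MY_VAR_2}"    # prints foo_2
# But the variable is absent from the child's environment until exported:
python3 -c "import os; print(os.environ.get('MY_VAR_2'))"    # prints None
export MY_VAR_2
python3 -c "import os; print(os.environ.get('MY_VAR_2'))"    # prints foo_2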

Piping an echoed cd command to sh does not change the current directory [duplicate]

This question already has answers here:
Why can't I change directories using "cd" in a script?
(33 answers)
Closed 4 years ago.
If I run:
echo "printf 'working'" | sh
the code prints out working
but when I try to change the current directory the same way:
echo "cd ../" | sh
the current directory isn't changed.
Do you know the reason behind that behaviour?
Do you know how to echo a cd command to sh in a way that works?
echo "cd /" | sh
actually creates 2 new processes, one for echo and one for sh. The sh process most probably does change its directory, but then it simply exits. You could test this by
echo "cd ../; touch Jimmix_was_here" | sh
ls -l ../Jimmix_was_here
which should show an empty Jimmix_was_here file with the current timestamp (assuming you have write permission to the parent directory; otherwise the first command will fail).
There's no way to change the current directory of a process from within a child; after all, if it were possible, it would be a security hole!
Note: this reminds me of a seemingly paradoxical fact: why does /bin/cd exist?
Note 2: try pstree | cat and find both pstree and cat in the output: they are siblings!
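As for the second question (how to make the cd actually take effect): you cannot do it through a child sh, but you can run the generated command in the current shell instead, for example with eval. A minimal sketch:
cmd="cd ../"
eval "$cmd"    # runs in the current shell, so the directory change sticks
pwd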

ssh and execute several commands as another user through a heredoc [duplicate]

This question already has answers here:
Usage of expect command within a heredoc
(1 answer)
Pass commands as input to another command (su, ssh, sh, etc)
(3 answers)
Closed 4 years ago.
I have a script that I need to execute through ssh as another user. Is there a way to pass the whole script like this:
ssh -t user@server.com sudo -u user2 sh -c << EOF
cd /home
ls
dir=$(pwd)
echo "$dir"
echo "hello"
....
EOF
This returns: sh: -c: option requires an argument
ssh'ing and sudo'ing separately is not an option, and putting a .sh file directly on the machine is not possible.
sh -c requires a command string as its argument. Since you are reading the commands from standard input (through the heredoc), you need to use the sh -s option instead:
ssh -t user@server.com sudo -u user2 sh -s << 'EOF'
cd /home
ls
dir=$(pwd)
echo "$dir"
echo "hello"
...
EOF
From man sh:
-c string
If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.
-s
If the -s option is present, or if no arguments remain after option processing, then commands are read from the standard input. This option allows the positional parameters to be set when invoking an interactive shell.
Note that the heredoc marker is quoted ('EOF') to prevent the local shell from expanding $(pwd) and $dir before the text is sent to the remote sh.
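The failure and the fix are easy to reproduce locally (the error wording below is bash's; other shells phrase it differently):
$ echo 'echo hello' | sh -c
sh: -c: option requires an argument
$ echo 'echo hello' | sh -s
hello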

reading command line arguments through pipe to sh

I am running a shell script by piping it to sh. For example:
curl commands.io/count-duplicate-lines-in-a-file | sh
The only way I could figure out how to pass in the filename was to use:
read file </dev/tty
You can check out the script here:
Count duplicate lines in a file
Is there another way to pass the filename in as an argument to the script, without first saving the script locally, setting permissions, and running it?
The idea is that you can use Monitor to capture terminal input/output and then re-run it from the command line using curl piped to sh.
Use the -s option:
echo 'echo "$#"' | sh -s 1 2 3 4
Output:
1 2 3 4
Another way is to use process substitution, if the shell supports it:
bash <(echo 'echo "$@"') 1 2 3 4
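Applied to the invocation from the question (the filename myfile.txt is just a placeholder), that becomes:
curl commands.io/count-duplicate-lines-in-a-file | sh -s -- myfile.txt
The -- ends option processing, so a filename that happens to start with - cannot be mistaken for an option to sh.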
