I have uploaded a test script, remote.sh, to a remote webserver, like this:
#!/usr/bin/bash
echo "input var is : $1"
and I have a local script local.sh like this:
#!/usr/bin/bash
curl -sS https://remote_host/remote.sh | bash
Then I run the local script with an inline parameter:
./local.sh "some input here."
but the remote script I fetched doesn't seem to see the local inline parameter. How can this be done?
Your code starts a second copy of bash and doesn't pass the retrieved arguments to it.
I would generally suggest not starting a second copy of bash at all:
#!/usr/bin/env bash
eval "$(curl -sS https://remote_host/remote.sh)"
But if you do want to start a second copy, you can pass the arguments through explicitly. The following passes the code on the command line, leaving stdin free (so the new copy of bash being started can use it to prompt the user):
#!/bin/sh
code=$(curl -sS https://remote_host/remote.sh) || exit
exec bash -c "$code" bash "$@"
...or, to continue using stdin to pass code, bash -s can be used:
#!/bin/sh
curl -sS https://remote_host/remote.sh | bash -s -- "$@"
By the way -- everywhere I use /bin/sh above, you could substitute /bin/bash or any other POSIX-compliant shell; the point is that the code given above does not depend on behaviors that are unspecified in the POSIX.2 standard.
Related
I have a test.sh file which takes a bash command as a parameter; it does some logic, i.e. setting and checking some env vars, and then executes that input command.
#!/bin/bash
#Some other logic here
echo "Run command: $#"
eval "$#"
When I run it, here's the output
% ./test.sh echo "ok"
Run command: echo ok
ok
But the issue is, when I pass something like sh -c 'echo "ok"', I don't get the output.
% ./test.sh sh -c 'echo "ok"'
Run command: sh -c echo "ok"
%
So I tried replacing eval with exec, tried executing $@ directly (without eval or exec), and even tried executing it and saving the output to a variable, still to no avail.
Is there any way to run the passed command in this format and get the output?
Use case:
The script is used as an entrypoint for a Docker container; it receives the parameters from the Docker CMD and executes them to run the container.
As a quick fix I can remove the sh -c and pass the command without it, but I want to make the script reusable and not have to change the commands.
TL;DR:
This is a typical use case (perform some business logic in a Docker entrypoint script before running a compound command given at the command line), and the recommended last line of the script is:
exec "$#"
Details
To further explain this line, some remarks and hyperlinks:
As per the Bash user manual, exec is a POSIX shell builtin that replaces the shell [with the command supplied] without creating a new process.
As a result, using exec like this in a Docker entrypoint context is important because it ensures that the CMD program that is executed will still have PID 1 and can directly handle signals, including that of docker stop (see also that other SO answer: Speed up docker-compose shutdown).
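For illustration, a minimal entrypoint sketch (the setup step, image layout, and CMD below are hypothetical, not from the question):
#!/bin/sh
# entrypoint.sh -- do the business logic, then hand PID 1 over to CMD
echo "running setup before the main command..."
exec "$@"
and the matching Dockerfile excerpt:
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["my-server", "--port", "8080"]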
The double quotes ("$#") are also important to avoid word splitting (namely, ensure that each positional argument is passed as is, even if it contains spaces). See e.g.:
#!/usr/bin/env bash
printargs () { for arg; do echo "$arg"; done; }
test0 () {
echo "test0:"
printargs $@
}
test1 () {
echo "test1:"
printargs "$#"
}
test0 /bin/sh -c 'echo "ok"'
echo
test1 /bin/sh -c 'echo "ok"'
This prints:
test0:
/bin/sh
-c
echo
"ok"
test1:
/bin/sh
-c
echo "ok"
Finally, eval is a powerful bash builtin that is (1) unneeded for your use case, and (2) actually not advised in general, in particular for security reasons, e.g. if the string argument of eval relies on some user-provided input… For details on this issue, see e.g. https://mywiki.wooledge.org/BashFAQ/048 (which recaps the few situations where one would want to use this builtin; typically, the command eval "$(ssh-agent -s)").
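As that FAQ notes, the canonical legitimate example is evaluating the shell commands that ssh-agent prints for its caller to run:
eval "$(ssh-agent -s)"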
As the title says: within Linux, how can I feed input to bash when I do sudo bash?
Let's say I have a bash script that reads a name.
The way I execute the script is through sudo using:
cat read-my-name-script.sh | sudo bash
Let's just say this is how I execute the script through the network.
Now I want to fill in the name automatically. Is there a way to feed the input? I tried doing this: cat read-my-name-script.sh < name-input-file | sudo bash, where name-input-file is a file holding the input that the user will be feeding to the script.
I am new to Linux; I am learning to automate input and wanted to create an input file that the user can fill in and feed to my script.
This is convoluted, but might do what you want.
sudo bash -c "$(cat read-my-name.sh)" <name-input-file
The -c says the next quoted argument contains the commands to run (so bash reads the script as a string on the command line, instead of from a file), and the calling shell interpolates the contents of the file inside the double quotes before the sudo command gets evaluated. So if read-my-name.sh contains
#!/bin/bash
read -p "I want your name please"
then the command gets expanded into
sudo bash -c '#!/bin/bash
read -p "I want your name please"' <name-input-file
(by this time the shell has actually removed the outer double quotes altogether; I put single quotes in their place to show how this would look as executable, syntactically valid code).
I think you need this:
while read -r arg; do sudo bash read-my-name-script.sh "$arg";done <name-input-file
So each line of name-input-file will be passed as an argument to sudo bash read-my-name-script.sh.
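For example, with a hypothetical name-input-file containing:
alice
bob
the loop runs sudo bash read-my-name-script.sh alice and then sudo bash read-my-name-script.sh bob, one invocation per line.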
If your argument list is located on an HTTP server, you can do this:
while read -r arg; do sudo bash read-my-name-script.sh "$arg";done < <(wget -q -O- http://some/address/in/internet/name-input-file)
UPDATE
Add [[ -f name-input-file ]] && readarray -t args <name-input-file
to read-my-name-script.sh
and use "${args[@]}" as the arguments of the command in the script.
For example, echo "${args[@]}", or cmd "${args[0]}" "${args[1]}" ... "${args[100]}" in any order.
In this case you can use
wget -q -O- http://some/address/in/internet/read-my-name-script.sh | bash
to run your script with arguments from name-input-file without saving the script to the local machine.
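Putting the update together, a minimal sketch of the modified read-my-name-script.sh (the echo stands in for the script's real work):
#!/bin/bash
# read arguments from name-input-file, if present, into the args array
[[ -f name-input-file ]] && readarray -t args <name-input-file
echo "${args[@]}"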
I am trying to ssh to another server in a shell script and run some scripts.
Currently my line looks something like:
ssh user@$SERVER '$(typeset -a >> /dev/null); PROFILE_LOCATION=`locate db2profile | grep -i $INST_NAME | grep -v bak`; . $PROFILE_LOCATION; function1; function2;'
I've tried both ' and " , as well as using a combination of those with \; or ';'
How do I use the variables from my current shell script when I ssh into another server and run multiple commands? Thanks!!
If you want function declarations, and your shell is bash, use typeset -p (which dumps variables) together with typeset -f (which dumps function definitions) rather than typeset -a (which lists only indexed array variables, not functions). Also, you need to actually run that in a context where it'll be locally evaluated (and ensure that your remote shell is something that understands it, not /bin/sh).
The following hits all those points:
evaluate_db2profile() {
local db2profile
db2profile=$(locate db2profile | grep -i "$INST_NAME" | grep -v bak | head -n 1)
[ -n "$db2profile" ] && . "$db2profile"
}
ssh "user#$SERVER" bash -s <<EOF
$(typeset -p)
evaluate_db2profile
function1
function2
EOF
Because <<EOF is used rather than <<'EOF', the typeset -p and typeset -f commands are run locally and substituted into the heredoc. (You could also accomplish this by using double rather than single quotes in the one-line formulation, but see below.)
Defining evaluate_db2profile locally as a function ensures that typeset -f will emit it in a format that the remote shell can evaluate, without needing to be concerned about escaping.
Using bash -s on the remote command line ensures that the shell interpreting your functions is bash, not /bin/sh. If your code is written for ksh, run ksh -s to achieve that same effect.
I would like to connect to different shells (csh, ksh, etc.) and execute commands inside each switched shell.
The following sample program reflects my intention:
#!/bin/bash
echo $SHELL
csh
echo $SHELL
exit
ksh
echo $SHELL
exit
Since I am not well versed in shell scripting, I need a pointer on how to achieve this. Any help would be much appreciated.
If you want to execute only a single command, you can use the -c option:
csh -c 'echo $SHELL'
ksh -c 'echo $SHELL'
If you want to execute several commands, or even a whole script, in a child shell, you can use the here-document feature of bash and the -s option (read commands from stdin) on the child shells:
#!/bin/bash
echo "this is bash"
csh -s <<- EOF
echo "here go the commands for csh"
echo "and another one..."
EOF
echo "this is bash again"
ksh -s <<- EOF
echo "and now, we're in ksh"
EOF
Note that you can't easily check which shell you are in with echo $SHELL, because the parent shell expands this variable to the text /bin/bash before the child shell ever sees it. If you want to be sure that the child shell works, you should check whether some shell-specific syntax works or not.
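For example, each shell predefines an identifying variable of its own, so a quick probe could be (single-quoted so the parent shell does not expand anything):
csh -c 'echo $shell'          # csh predefines the lowercase $shell variable
bash -c 'echo $BASH_VERSION'  # only bash sets BASH_VERSION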
It is possible to use the command line options provided by each shell to run a snippet of code.
For example, for bash use the -c option:
bash -c "$code"
bash -c 'echo hello'
zsh and fish also use the -c option.
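For example, the same pattern works unchanged in both:
zsh -c 'echo hello from zsh'
fish -c 'echo hello from fish'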
Other shells will state the options they use in their man pages.
You need to use the -c command line option if you want to pass commands on bash startup:
#!/bin/bash
# We are in bash already ...
echo $SHELL
csh -c 'echo $SHELL'
ksh -c 'echo $SHELL'
You can pass arbitrary complex scripts to a shell, using the -c option, as in
sh -c 'echo This is the Bourne shell.'
You will save yourself a lot of headaches related to quotes and variable expansion if you wrap the call in a function reading the script on stdin, as in:
execute_with_ksh()
{
local script
script=$(cat)
ksh -c "${script}"
}
prepare_complicated_script()
{
# Write shell script on stdout,
# for instance by cat-ting a here-document.
cat <<'EOF'
echo ${SHELL}
EOF
}
prepare_complicated_script | execute_with_ksh
The advantage of this method is that it is easy to insert a tee in the pipe, or to break the pipe, to inspect the script being passed to the shell.
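For instance, to watch the generated script on stderr as it is handed to the shell:
prepare_complicated_script | tee /dev/stderr | execute_with_ksh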
If you want to execute the script on a remote host through ssh, you should consider encoding your script in base64 to transmit it safely to the remote shell.
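A sketch of that idea (the host is hypothetical, and a base64 that accepts -d is assumed on both ends):
prepare_complicated_script | base64 | ssh user@remote.example.com 'base64 -d | ksh -s'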
Consider the following shell script on example.com
#!/bin/bash
export HELLO_SCOPE=WORLD
eval $@
Now, I would like to download and then execute this shell script with parameters in the simplest way and be able to launch an interactive bash terminal with the HELLO_SCOPE variable set.
I have tried
curl http://example.com/hello_scope.sh | bash -s bash -i
But it quits the shell immediately. From what I can understand, it's because curl's stdout, the script, remains the stdin of bash, preventing it from starting interactively (since that would require my keyboard to be stdin).
Is there a way to avoid this without going through the extra step of creating a temporary file with the shell script?
You can source it:
# open a shell
. <(curl http://example.com/hello_scope.sh)
# type commands ...
You could just download the script (using wget, for example) and source it:
script_name="hello_scope.sh"
[[ -f $script_name ]] && rm -rf "$script_name"
wget "http://example.com/$script_name" -O "$script_name" -o /dev/null
&& chmod u+x "$script_name"
&& source "$script_name"
You could use . "$script_name" instead of source "$script_name" if you want (. is POSIX compliant). You could put the previous code in a script and source it to get an interactive shell with the variable $HELLO_SCOPE set.
Finally, you could remove the eval line from your remote shell script.
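For example, assuming the download-and-source snippet above is saved as get_hello_scope.sh (a hypothetical name):
$ source ./get_hello_scope.sh
$ echo $HELLO_SCOPE
WORLD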