I'm stuck passing a parameter (a URL to download) to a script.
My goal is to create a script for deployment that downloads and installs an app.
The script I run:
curl url_GitHub | bash -s url_download_app
The script on GitHub:
#! /bin/sh
url="$2"
filename=$(basename "$url")
workpath=$(dirname $(readlink -f $0))
curl $url -o $workpath/$filename -s
sudo dpkg --install $workpath/$filename
As I understand it, the URL to download the app is not being passed into the url="$2" variable.
If I run the GitHub script locally, and pass the URL to download the app, it executes successfully.
Something like:
bash install.sh -s url_download_app
Please help=)
-s appears to be an option intended for the downloaded script. However, it is also an option accepted by bash itself, so what I think you want is
curl url_GitHub | bash -s -- -s url_download_app
The -- ends bash's own option parsing, so the -s and url_download_app after it are passed through to the script as $1 and $2.
Since the script on GitHub uses $2, we need to pass the URL as the second argument:
curl url_GitHub | bash -s _ url_download_app
Here _ is a throwaway first argument; _ and url_download_app are passed to the script on GitHub as $1 and $2.
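A quick way to see how the positional parameters line up (a minimal sketch; test.sh is a stand-in for the script on GitHub):
$ cat test.sh
#!/bin/sh
echo "\$1 is '$1', \$2 is '$2'"
$ cat test.sh | bash -s _ url_download_app
$1 is '_', $2 is 'url_download_app'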
What about the following (using process substitution):
bash <(curl -Ss url_GitHub) url_download_app
I did a proof of concept with the following script:
$ cat /tmp/test.sh
#!/bin/bash
echo "I got '$1'"
exit 0
and when you run it you get:
$ bash <(cat /tmp/test.sh) "test input argument"
I got 'test input argument'
Say I have a file at the URL http://mywebsite.example/myscript.txt that contains a script:
#!/bin/bash
echo "Hello, world!"
read -p "What is your name? " name
echo "Hello, ${name}!"
And I'd like to run this script without first saving it to a file. How do I do this?
Now, I've seen the syntax:
bash < <(curl -s http://mywebsite.example/myscript.txt)
But this doesn't seem to work like it would if I saved to a file and then executed it. For example, read doesn't work (the prompt is skipped), and the output is just:
$ bash < <(curl -s http://mywebsite.example/myscript.txt)
Hello, world!
Similarly, I've tried:
curl -s http://mywebsite.example/myscript.txt | bash -s --
With the same results.
Originally I had a solution like:
timestamp=$(date +%Y%m%d%H%M%S)
curl -s http://mywebsite.example/myscript.txt -o /tmp/.myscript.${timestamp}.tmp
bash /tmp/.myscript.${timestamp}.tmp
rm -f /tmp/.myscript.${timestamp}.tmp
But this seems sloppy, and I'd like a more elegant solution.
I'm aware of the security issues regarding running a shell script from a URL, but let's ignore all of that for right now.
source <(curl -s http://mywebsite.example/myscript.txt)
ought to do it. Alternatively, leave off the initial redirection on yours, which is redirecting standard input; bash takes a filename to execute just fine without redirection, and the <(command) syntax provides a path.
bash <(curl -s http://mywebsite.example/myscript.txt)
It may be clearer if you look at the output of echo <(cat /dev/null)
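On Linux it prints the path that bash substitutes for the command, typically something like:
$ echo <(cat /dev/null)
/dev/fd/63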
This is the way to execute a remote script, passing it some arguments (arg1 arg2):
curl -s http://server/path/script.sh | bash /dev/stdin arg1 arg2
For bash, Bourne shell and fish:
curl -s http://server/path/script.sh | bash -s arg1 arg2
Flag "-s" makes shell read from stdin.
Use:
curl -s -L URL_TO_SCRIPT_HERE | bash
For example:
curl -s -L http://bit.ly/10hA8iC | bash
Using wget, which is usually part of a default system installation:
bash <(wget -qO- http://mywebsite.example/myscript.txt)
You can also do this:
wget -O - https://raw.github.com/luismartingil/commands/master/101_remote2local_wireshark.sh | bash
The best way to do it is
curl http://domain/path/to/script.sh | bash -s arg1 arg2
which is a slight variation on the answer by @user77115
You can use curl and send it to bash like this:
bash <(curl -s http://mywebsite.example/myscript.txt)
I often find the following is enough:
curl -s http://mywebsite.example/myscript.txt | sh
But on an old system (kernel 2.4) it ran into problems; I tried many alternatives, and only the following worked:
curl -s http://mywebsite.example/myscript.txt -o a.sh && sh a.sh && rm -f a.sh
Examples
$ curl -s someurl | sh
Starting to insert crontab
sh: _name}.sh: command not found
sh: line 208: syntax error near unexpected token `then'
sh: line 208: ` -eq 0 ]]; then'
$
The problem may be caused by a slow network, or by a shell version too old to handle a slow network gracefully.
However, the following solves the problem
$ curl -s someurl -o a.sh && sh a.sh && rm -f a.sh
Starting to insert crontab
Insert crontab entry is ok.
Insert crontab is done.
okay
$
Also:
curl -sL https://.... | sudo bash -
Just combining amra and user77115's answers:
wget -qO- https://raw.githubusercontent.com/lingtalfi/TheScientist/master/_bb_autoload/bbstart.sh | bash -s -- -v -v
It executes the remote bbstart.sh script, passing it the -v -v options.
In some unattended scripts I use the following command:
sh -c "$(curl -fsSL <URL>)"
I recommend avoiding executing scripts directly from URLs. You should be sure the URL is safe, and check the content of the script before executing it; you can use a SHA256 checksum to validate the file before running it.
Instead of executing the script directly, first download it and then execute it:
SOURCE='https://gist.githubusercontent.com/cci-emciftci/123123/raw/123123/sample.sh'
curl $SOURCE -o ./my_sample.sh
chmod +x my_sample.sh
./my_sample.sh
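A sketch of the SHA256 validation mentioned above; EXPECTED_SHA256 is a placeholder for a checksum you obtained out of band:
SOURCE='https://gist.githubusercontent.com/cci-emciftci/123123/raw/123123/sample.sh'
EXPECTED_SHA256='paste_the_published_checksum_here'  # assumption: published by the script author
curl -s "$SOURCE" -o ./my_sample.sh
# sha256sum -c expects "<checksum>  <filename>" (two spaces) on stdin
echo "${EXPECTED_SHA256}  ./my_sample.sh" | sha256sum -c - || exit 1
chmod +x my_sample.sh
./my_sample.sh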
This way is good and conventional:
17:04:59#itqx|~
qx>source <(curl -Ls http://192.168.80.154/cent74/just4Test) Lord Jesus Loves YOU
Remote script test...
Param size: 4
---------
17:19:31#node7|/var/www/html/cent74
arch>cat just4Test
echo Remote script test...
echo Param size: $#
If you want the script run using the current shell, regardless of what it is, use:
${SHELL:-sh} -c "$(wget -qO - http://mywebsite.example/myscript.txt)"
if you have wget, or:
${SHELL:-sh} -c "$(curl -Ls http://mywebsite.example/myscript.txt)"
if you have curl.
This command will still work if the script is interactive, i.e., it asks the user for input.
Note: OpenWRT has a wget clone but not curl, by default.
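For example, with the interactive script from the question above, the prompt keeps working because the script text is passed as an argument to -c, so stdin stays attached to your terminal ("Bob" is just sample input):
$ ${SHELL:-sh} -c "$(curl -Ls http://mywebsite.example/myscript.txt)"
Hello, world!
What is your name? Bob
Hello, Bob!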
curl -s http://your.url.here/script.txt | bash
actual example:
juan@juan-MS-7808:~$ curl -s https://raw.githubusercontent.com/JPHACKER2k18/markwe/master/testapp.sh | bash
Oh, wow im alive
juan@juan-MS-7808:~$
I have a single command to ssh to a remote linux host and execute a shell script.
ssh -t -t $USER@somehost 'bash -s' < ./deploy.sh
Inside deploy.sh I have this:
#!/bin/bash
whoami; # I see this command echo
sudo -i -u someoneelse #I see this command echo
whoami; # I DON'T see this command echo, but response is correct
#subsequent commands don't echo
When I run the deploy.sh script locally all commands echo.
How do I get commands to echo after I sudo as another user over ssh?
I had to set -x AFTER sudo'ing as the other user: the shell started by sudo -i -u does not inherit the tracing option, so it has to be turned on again inside the new session.
#!/bin/bash
whoami;
sudo -i -u someoneelse
set -x #make sure echo on
whoami; #command echoed
I have a script with 2 ssh commands. The first ssh command logs into a remote server and deletes docker images.
ssh person@someserver.com 'set -x &&
echo "Stop docker images" ;
sudo docker stop $(sudo docker ps -a -q) ;
sudo docker rmi -f $(sudo docker images -q) ;
sudo docker rm -f $(sudo docker ps -a -q)'
Note use of ; to separate commands (we don't care if one or more of the commands fail).
The 2nd ssh command uses SSH to log into the same server, grab a docker compose file and run docker.
ssh person@someserver.com 'set -x &&
export AWS_CONFIG_FILE=/somelocation/myaws.conf &&
aws s3 cp s3://com.somebucket.somewhere/docker-compose/docker-compose.yml . --region us-east-1 &&
echo "Get ECR login credentials and do a docker compose up" &&
sudo $(aws ecr get-login --region us-east-1) &&
sudo /usr/local/bin/docker-compose up -d'
Note use of && to separate commands (this time we do care if one or more of the commands fail, as we grab the exit code, i.e. exitCode=$?).
I don't like the fact that I have to split this into 2, so my question is: can these 2 sections of bash commands be combined into a single SSH call (with both ; and && combinations)?
Although it is possible to pass a set of commands as a simple single-quoted string, I wouldn't recommend that, because:
internal quotation marks should be escaped
it is difficult to read (and maintain!) code that looks like a string in a text editor
I find it better to keep the scripts in separate files, then pass them to ssh as standard input:
cat script.sh | ssh -T user@host -- bash -s -
Execution of several scripts is done in the same way. Just concatenate more scripts:
cat a.sh b.sh | ssh -T user@host -- bash -s -
If you still want to use a string, use a here document instead:
ssh -T user@host -- <<'END_OF_COMMANDS'
# put your script here
END_OF_COMMANDS
Note the -T option. You don't need pseudo-terminal allocation for non-interactive scripts.
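If the script needs positional arguments, they can still be passed via -s when using a here document (a minimal sketch):
ssh -T user@host -- bash -s -- arg1 arg2 <<'END_OF_COMMANDS'
echo "first: $1, second: $2"
END_OF_COMMANDS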
ssh person@someserver.com 'set -x;
echo "Stop docker images" ;
sudo docker stop $(sudo docker ps -a -q) ;
sudo docker rmi -f $(sudo docker images -q) ;
sudo docker rm -f $(sudo docker ps -a -q) ;
export AWS_CONFIG_FILE=/somelocation/myaws.conf &&
aws s3 cp s3://com.somebucket.somewhere/docker-compose/docker-compose.yml . --region us-east-1 &&
echo "Get ECR login credentials and do a docker compose up" &&
sudo $(aws ecr get-login --region us-east-1) &&
sudo /usr/local/bin/docker-compose up -d'
I need to pass a variable through to a third Linux system; here is the scheme:
From my laptop > docker server > a container.
#!/bin/bash
domain=$1
ssh -i $SSH_KEY docker@10.10.10.10 "docker run --rm=true 931967fb3e32 /bin/bash -c curl -Is $domain"
Of course, the variable only reaches the docker server, not the container.
The first option to test is to pass $domain as an environment variable to your docker run command:
docker run -it --rm -e "domain=$domain" 931967fb3e32 /bin/bash -c curl -Is $domain
(note the use of -it, to be sure to have a tty in an interactive session)
If the curl somehow doesn't pick up the right value (you can test it by replacing /bin/bash -c curl -Is $domain with /bin/bash -c echo $domain), wrap it in a script (which means your image should include that script).
As discussed in the comments, it seems to work without the bash -c:
ssh -i $SSH_KEY docker@10.10.10.10 "docker run --rm=true 931967fb3e32 curl -Is $domain"
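If you do need bash -c inside the container (for example to chain several commands), the quoting has to nest; a sketch using the same host and image ID, with $domain expanded locally by the outer double quotes:
ssh -i "$SSH_KEY" docker@10.10.10.10 "docker run --rm=true 931967fb3e32 /bin/bash -c 'curl -Is $domain'"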