As the title says: within Linux, how can I feed input to bash when I do sudo bash?
Let's say I have a bash script that reads a name.
The way I execute the script is through sudo using:
cat read-my-name-script.sh | sudo bash
Let's just say this is how I execute the script over the network.
Now I want to fill in the name automatically. Is there a way to feed the input? I tried cat read-my-name-script.sh < name-input-file | sudo bash, where name-input-file is a file holding the input the user will use to feed the script.
I am new to Linux; I am learning to automate input and wanted to create a file that the user can fill in and feed to my script.
This is convoluted, but might do what you want.
sudo bash -c "$(cat read-my-name.sh)" <name-input-file
The -c option says the next quoted argument contains the commands to run (so the script is read as a string on the command line, instead of from a file), and the calling shell interpolates the contents of the file inside the double quotes before the sudo command gets evaluated. So if read-my-name.sh contains
#!/bin/bash
read -p "I want your name please"
then the command gets expanded into
sudo bash -c '#!/bin/bash
read -p "I want your name please"' <name-input-file
(Of course, by this time the shell has actually removed the outer double quotes altogether; I put single quotes in their place to show how this would look as syntactically valid, executable code.)
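For instance, assuming name-input-file contains the single line Alice (a made-up value), the whole flow would look like this:
printf 'Alice\n' > name-input-file
sudo bash -c "$(cat read-my-name.sh)" <name-input-file
The read in the script then takes Alice from the file instead of prompting at the terminal.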
I think you need this:
while read -r arg; do sudo bash read-my-name-script.sh "$arg"; done <name-input-file
This way, each line of name-input-file is passed as an argument to sudo bash read-my-name-script.sh.
If your argument list is located on an HTTP server, you can do this:
while read -r arg; do sudo bash read-my-name-script.sh "$arg"; done < <(wget -q -O- http://some/address/in/internet/name-input-file)
Update:
add [[ -f name-input-file ]] && readarray -t args <name-input-file
to read-my-name-script.sh
and use "${args[@]}" as the arguments of the command in the script.
For example echo "${args[@]}", or cmd "${args[0]}" "${args[1]}" ... "${args[100]}" in any order.
In this case you can use
wget -q -O- http://some/address/in/internet/read-my-name-script.sh | bash
to run your script with the arguments from name-input-file without saving the script to the local machine.
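For example, read-my-name-script.sh might then look like this (a minimal sketch; echo stands in for your real command):
#!/bin/bash
# Load one argument per line from name-input-file, if it exists.
[[ -f name-input-file ]] && readarray -t args <name-input-file
# Use the collected arguments; replace echo with the real command.
echo "${args[@]}"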
I have uploaded a test script remote.sh to a remote web server, like this:
#!/usr/bin/bash
echo "input var is : $1"
and I have a local script local.sh like this:
#!/usr/bin/bash
curl -sS https://remote_host/remote.sh | bash
Then I run the local script with an inline parameter:
./local.sh "some input here."
But the remote script I grabbed doesn't seem to see the local inline parameter. How can this be done?
Your code is starting a second copy of bash, and not passing the arguments retrieved to it.
I would generally suggest not starting a second copy of bash at all:
#!/usr/bin/env bash
eval "$(curl -sS https://remote_host/remote.sh)"
...but you could proceed to do so and pass them through. The following passes that code on the command line, leaving stdin free (so the new copy of bash being started can use it to prompt the user):
#!/bin/sh
code=$(curl -sS https://remote_host/remote.sh) || exit
exec bash -c "$code" bash "$@"
...or, to continue using stdin to pass code, bash -s can be used:
#!/bin/sh
curl -sS https://remote_host/remote.sh | bash -s -- "$@"
By the way -- everywhere I use /bin/sh above you could substitute /bin/bash or any other POSIX-compliant shell; the point being made is that the code given above does not depend on behaviors that are unspecified in the POSIX.2 standard.
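Either way, the call from the question should then work as expected; for example (output assumed from the remote.sh shown above):
./local.sh "some input here."
input var is : some input here.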
I have a script that I need to execute through ssh as another user. Is there a way to pass the whole script, like this:
ssh -t user@server.com sudo -u user2 sh -c << EOF
cd /home
ls
dir=$(pwd)
echo "$dir"
echo "hello"
....
EOF
Returns: sh: -c: option requires an argument
ssh'ing and sudo'ing separately is not an option, and putting the .sh file directly on the machine is not possible.
sh -c requires a command string as its argument. Since you are reading the commands from standard input (through a heredoc), you need to use the sh -s option:
ssh -t user@server.com sudo -u user2 sh -s << 'EOF'
cd /home
ls
dir=$(pwd)
echo "$dir"
echo "hello"
...
EOF
From man sh:
-c string
If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.
-s
If the -s option is present, or if no arguments remain after option processing, then commands are read from the standard input. This option allows the positional parameters to be set when invoking an interactive shell.
Note that you need to quote the heredoc marker ('EOF') to prevent the parent shell from expanding the content.
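A quick toy example (not from the question) showing the difference between the two options:
# -c takes the commands as an argument; the word after the string becomes $0:
sh -c 'echo "$1"' sh hello        # prints: hello
# -s reads the commands from stdin, leaving the arguments free:
echo 'echo "$1"' | sh -s -- hello # prints: hello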
I don't have a lot of experience with scripting.
I have a directory that contains, among many other files, a set of *.phylip files I need to analyze with a program. I would like to automate this task. I think a loop bash shell script would be appropriate, although I could be wrong.
If I were to perform the analysis manually on one .phylip file, I would use the following command in the terminal:
./raxmlHPC-SSE3 -m GTRCAT -y -s uce-5.phylip --print-identical-sequences -p 12345 -n uce-5_result
For the bash shell script, I think it would be close to:
#!/bin/sh
for i in $( ls ); do
./raxmlHPC-SSE3 -m GTRCAT -y -s uce-5.phylip --print-identical-sequences -p 12345 -n test_5 $i
done
The issue I'm aware of, but don't know how to fix, is the -s option, which specifies the input phylip file. Any suggestions on how to modify the script to do what I need done?
Try the code below:
#!/bin/bash
for i in *.phylip
do
./raxmlHPC-SSE3 -m GTRCAT -y -s "$i" --print-identical-sequences -p 12345 -n "${i%.phylip}_result"
done
The -s option is passed $i, which holds the name of a .phylip file in the current directory.
${i%.phylip}_result replaces the .phylip extension with _result, which I guess is what you expect. (Ref: Parameter Substitution)
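To illustrate the suffix removal with a hypothetical file name:
i=uce-5.phylip
echo "${i%.phylip}_result"   # prints: uce-5_result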
Consider the following shell script on example.com
#!/bin/bash
export HELLO_SCOPE=WORLD
eval "$@"
Now, I would like to download and then execute this shell script with parameters in the simplest way and be able to launch an interactive bash terminal with the HELLO_SCOPE variable set.
I have tried
curl http://example.com/hello_scope.sh | bash -s bash -i
But it quits the shell immediately. From what I can understand, that's because curl's stdout, the script, remains the stdin of bash, preventing it from starting interactively (as that would require my keyboard to be stdin).
Is there a way to avoid this without going through the extra step of creating a temporary file with the shell script?
You can source it:
# open a shell
. <(curl http://example.com/hello_scope.sh)
# type commands ...
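Since the script runs in your current shell, the exported variable is visible afterwards; assuming the hello_scope.sh from the question:
. <(curl http://example.com/hello_scope.sh)
echo "$HELLO_SCOPE"   # prints: WORLD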
You could just download the script (using wget, for example) and source it:
script_name="hello_scope.sh"
[[ -f $script_name ]] && rm -rf "$script_name"
wget "http://example.com/$script_name" -O "$script_name" -o /dev/null
&& chmod u+x "$script_name"
&& source "$script_name"
You could use . "$script_name" instead of source "$script_name" if you want (. is POSIX compliant). You could put the previous code in a script and source it to get an interactive shell with the variable $HELLO_SCOPE set.
Finally, you could remove the eval line in your remote shell script.
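With the eval line removed, hello_scope.sh would reduce to something like:
#!/bin/bash
export HELLO_SCOPE=WORLD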
If I have a text file with a separate command on each line how would I make terminal run each line as a command? I just don't want to have to copy and paste 1 line at a time. It doesn't HAVE to be a text file... It can be any kind of file that will work.
example.txt:
sudo command 1
sudo command 2
sudo command 3
You can make a shell script with those commands, then chmod +x <scriptname.sh>, and then just run it with:
./scriptname.sh
It's very simple to write a bash script.
Mockup sh file:
#!/bin/sh
sudo command1
sudo command2
.
.
.
sudo commandn
You can also just run it with a shell, for example:
bash example.txt
sh example.txt
Execute
. example.txt
That does exactly what you ask for, without setting an executable flag on the file or running an extra bash instance.
For a detailed explanation see e.g. https://unix.stackexchange.com/questions/43882/what-is-the-difference-between-sourcing-or-source-and-executing-a-file-i
You can use something like this:
for i in `cat foo.txt`
do
sudo $i
done
Though if the commands have arguments (i.e. there is whitespace in the lines), you may have to monkey around a bit to protect the whitespace so that the whole string is seen by sudo as one command. But it gives you an idea of how to start.
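One common way to keep each line intact (a sketch, assuming the foo.txt from above) is a while read loop, which reads line by line instead of word by word:
while IFS= read -r line; do
    sudo sh -c "$line"
done < foo.txt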
cat /path/* | bash
OR
cat commands.txt | bash