BASH: How to send params & data to a process on STDIN - linux

I'm scripting a call to curl; you can enter the password and other parameters via STDIN (to keep the password off the command line).
I also need to send POST data on STDIN (a large amount of data that won't fit on the command line).
So, from a command line I can successfully do this:
> curl -K - --data-binary @- -other_non-pw_params
> -u "username:password"
> <User types ctrl-d>
> lots_of_post_data
> lots_of_post_data
> <User types ctrl-d>
> <User types ctrl-d>
Now... I'm trying to do that in a BASH script...
Wishful-thinking pseudo-code:
{ echo '-u "username:password"'
echo <ctrl-d> | cat dev/null | ^D
echo lots_of_post_data
echo lots_of_post_data
} | curl -K - --data-binary @- -other_non-pw_params

Aha! There's a curl specific solution to this.
You pass all of the parameters on STDIN, leave --data-binary @- (or its equivalent) to the end, and everything after it is accepted as data input. Example script:
#!/bin/bash
{ echo '--basic'
echo '--compressed'
echo '--url "https://your_website"'
echo '-u "username:password"'
echo '--data-binary @-'
echo 'lots_of_post_data'
echo 'lots_of_post_data'
} | curl --config -

Use a "here document":
curl --config - <<EOF
--basic
...
EOF

There is no way to simulate an EOF (as Ctrl-D does in the terminal) other than to stop sending data to the stream altogether. You will need to find a different way of doing this, perhaps by writing a script in a more capable language.
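One useful point here: in a pipeline there is nothing to simulate, because EOF is delivered automatically when the writing end of the pipe closes. A minimal sketch:

```shell
#!/bin/sh
# When the brace group's last echo returns, the pipe's write end closes
# and wc -l sees end-of-input on its own -- no Ctrl-D needed.
{ echo line1; echo line2; } | wc -l
```

wc -l sees end-of-input as soon as the brace group finishes, which is exactly what Ctrl-D signals on a terminal.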

Related

SSH remote execution - How to declare a variable inside EOF block (Bash script)

I have the following code in a bash script:
remote_home=/home/folder
dump_file=$remote_home/my_database_`date +%F_%X`.sql
aws_pem=$HOME/my_key.pem
aws_host=user@host
local_folder=$HOME/db_bk
pwd_stg=xxxxxxxxxxxxxxxx
pwd_prod=xxxxxxxxxxxxxxx
ssh -i $aws_pem $aws_host << EOF
mysqldump --column-statistics=0 --result-file=$dump_file -u user -p$pwd_prod -h $db_to_bk my_database
mysql -u user -p$pwd_prod -h $db_to_bk -N -e 'SHOW TABLES from my_database' > $remote_home/test.txt
sh -c 'cat test.txt | while read i ; do mysql -u user -p$pwd_prod -h $db_to_bk -D my_database --tee=$remote_home/rows.txt -e "SELECT COUNT(*) as $i FROM $i" ; done'
EOF
My while loop is not working because the "i" variable ends up empty. Can anyone give me a hand, please? I would like to understand how to handle data in such cases.
The local shell is "expanding" all of the $variable references in the here-document, but as I understand it you want $i to be passed through to the remote shell and expanded there. To do this, escape (with a backslash) the $ characters you don't want the local shell to expand. I think it'll look like this:
ssh -i $aws_pem $aws_host << EOF
mysqldump --column-statistics=0 --result-file=$dump_file -u user -p$pwd_prod -h $db_to_bk my_database
mysql -u user -p$pwd_prod -h $db_to_bk -N -e 'SHOW TABLES from my_database' > $remote_home/test.txt
sh -c 'cat test.txt | while read i ; do mysql -u user -p$pwd_prod -h $db_to_bk -D my_database --tee=$remote_home/rows.txt -e "SELECT COUNT(*) as \$i FROM \$i" ; done'
EOF
You can test this by replacing the ssh -i $aws_pem $aws_host command with just cat, so it prints the here-document as it'll be passed to the ssh command (i.e. after the local shell has done its parsing and expansions, but before the remote shell has done its). You should see most of the variables replaced by their values (because those have to happen locally, where those variables are defined) but $i passed literally so the remote shell can expand it.
BTW, you should double-quote almost all of your variable references (e.g. ssh -i "$aws_pem" "$aws_host") to prevent weird parsing problems; shellcheck.net will point this out for the local commands (along with some other potential problems), but you should fix it for the remote commands as well (except $i, since that's already double-quoted as part of the SELECT command).
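A quick local demonstration of that expansion difference, using a hypothetical variable v rather than the ones from your script:

```shell
#!/bin/bash
# Unescaped $v expands here, before the here-document is handed on;
# \$v survives as a literal $v for whoever reads the document later.
v="hello"
cat << EOF
local:  $v
remote: \$v
EOF
# prints:
# local:  hello
# remote: $v
```

This mirrors the ssh case: the "local:" line is what happens to $dump_file and friends, the "remote:" line is what you want for $i.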

Redirect cat output to bash script [duplicate]

I need to write a bash script, which will get the subdomains from "subdomains.txt", which are separated by line breaks, and show me their HTTP response code. I want it to look this way:
cat subdomains.txt | ./httpResponse
The problem is that I don't know how to make the bash script get the subdomain names. Obviously, I need to use a loop, something like this:
for subdomains in list
do
echo curl --write-out "%{http_code}\n" --silent --output /dev/null "subdomain"
done
But how can I populate the list in loop, using the cat and pipeline? Thanks a lot in advance!
It would help if you provided actual input and expected output, so I'll have to guess that the URL you are passing to curl is in some way derived from the input in the text file. If the exact URL is in the input stream, perhaps you merely want to replace $URL with $subdomain. In any case, to read the input stream, you can simply do:
while read -r subdomain; do
URL=$( # : derive full URL from $subdomain )
curl --write-out "%{http_code}\n" --silent --output /dev/null "$URL"
done
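As a concrete sketch of that read loop, assuming the URL is just the name with an http:// prefix (the prefix is my assumption, not from your input):

```shell
#!/bin/sh
# Read one name per line from stdin and derive a URL from each;
# the echo stands in for the curl call from the answer above.
while read -r subdomain; do
    URL="http://${subdomain}/"
    echo "$URL"
done << 'EOF'
api
www
EOF
# prints:
# http://api/
# http://www/
```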
Playing around with your example, I decided on wget; here is another way...
#cat subdomains.txt
osmc
microknoppix
#cat httpResponse
for subdomain in $(cat subdomains.txt)
do
wget -nv -O/dev/null --spider ${subdomain}
done
#sh httpResponse 2>response.txt && cat response.txt
2021-04-05 13:49:25 URL: http://osmc/ 200 OK
2021-04-05 13:49:25 URL: http://microknoppix/ 200 OK
Since wget writes its report to stderr, 2>response.txt captures the right output.
The && acts like a "then": the cat runs only if httpResponse succeeded.
You can do this without cat and a pipeline. Use netcat and parse the first line with sed:
while read -r subdomain; do
echo -n "$subdomain: "
printf "GET / HTTP/1.1\nHost: %s\n\n" "$subdomain" | \
nc "$subdomain" 80 | sed -n 's/[^ ]* //p;q'
done < 'subdomains.txt'
subdomains.txt:
www.stackoverflow.com
www.google.com
output:
www.stackoverflow.com: 301 Moved Permanently
www.google.com: 200 OK
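You can check the sed filter from that loop in isolation by feeding it a canned status line:

```shell
#!/bin/sh
# 's/[^ ]* //' deletes the leading "HTTP/1.1 " token on the first line,
# -n plus p prints only the substituted line, and q quits after it.
printf 'HTTP/1.1 301 Moved Permanently\nLocation: /\n' | sed -n 's/[^ ]* //p;q'
# prints:
# 301 Moved Permanently
```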

How do i make my bash script on download automatically turn into a terminal command? [duplicate]

Say I have a file at the URL http://mywebsite.example/myscript.txt that contains a script:
#!/bin/bash
echo "Hello, world!"
read -p "What is your name? " name
echo "Hello, ${name}!"
And I'd like to run this script without first saving it to a file. How do I do this?
Now, I've seen the syntax:
bash < <(curl -s http://mywebsite.example/myscript.txt)
But this doesn't seem to work like it would if I saved to a file and then executed. For example readline doesn't work, and the output is just:
$ bash < <(curl -s http://mywebsite.example/myscript.txt)
Hello, world!
Similarly, I've tried:
curl -s http://mywebsite.example/myscript.txt | bash -s --
With the same results.
Originally I had a solution like:
timestamp=`date +%Y%m%d%H%M%S`
curl -s http://mywebsite.example/myscript.txt -o /tmp/.myscript.${timestamp}.tmp
bash /tmp/.myscript.${timestamp}.tmp
rm -f /tmp/.myscript.${timestamp}.tmp
But this seems sloppy, and I'd like a more elegant solution.
I'm aware of the security issues regarding running a shell script from a URL, but let's ignore all of that for right now.
source <(curl -s http://mywebsite.example/myscript.txt)
ought to do it. Alternately, leave off the initial redirection on yours, which is redirecting standard input; bash takes a filename to execute just fine without redirection, and <(command) syntax provides a path.
bash <(curl -s http://mywebsite.example/myscript.txt)
It may be clearer if you look at the output of echo <(cat /dev/null)
This is the way to execute a remote script, passing it some arguments (arg1 arg2):
curl -s http://server/path/script.sh | bash /dev/stdin arg1 arg2
For bash, Bourne shell and fish:
curl -s http://server/path/script.sh | bash -s arg1 arg2
The "-s" flag makes the shell read the script from stdin.
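A self-contained way to see the -s behavior, using a one-line script fed on stdin in place of the curl download:

```shell
#!/bin/sh
# The script arrives on stdin; everything after "-s" becomes $1, $2, ...
echo 'printf "got: %s %s\n" "$1" "$2"' | bash -s arg1 arg2
# prints:
# got: arg1 arg2
```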
Use:
curl -s -L URL_TO_SCRIPT_HERE | bash
For example:
curl -s -L http://bitly/10hA8iC | bash
Using wget, which is usually part of default system installation:
bash <(wget -qO- http://mywebsite.example/myscript.txt)
You can also do this:
wget -O - https://raw.github.com/luismartingil/commands/master/101_remote2local_wireshark.sh | bash
The best way to do it is
curl http://domain/path/to/script.sh | bash -s arg1 arg2
which is a slight change of the answer by @user77115
You can use curl and send it to bash like this:
bash <(curl -s http://mywebsite.example/myscript.txt)
I often find the following is enough:
curl -s http://mywebsite.example/myscript.txt | sh
But on an old system (kernel 2.4) it ran into problems; I tried many other approaches, and only the following worked:
curl -s http://mywebsite.example/myscript.txt -o a.sh && sh a.sh && rm -f a.sh
Examples
$ curl -s someurl | sh
Starting to insert crontab
sh: _name}.sh: command not found
sh: line 208: syntax error near unexpected token `then'
sh: line 208: ` -eq 0 ]]; then'
$
The problem may be caused by a slow network, or a bash version too old to handle a slow network gracefully
However, the following solves the problem
$ curl -s someurl -o a.sh && sh a.sh && rm -f a.sh
Starting to insert crontab
Insert crontab entry is ok.
Insert crontab is done.
okay
$
Also:
curl -sL https://.... | sudo bash -
Just combining amra and user77115's answers:
wget -qO- https://raw.githubusercontent.com/lingtalfi/TheScientist/master/_bb_autoload/bbstart.sh | bash -s -- -v -v
It executes the remote bbstart.sh script, passing it the -v -v options.
In some unattended scripts I use the following command:
sh -c "$(curl -fsSL <URL>)"
I recommend avoiding executing scripts directly from URLs. Be sure the URL is safe, and check the content of the script before executing; you can use a SHA256 checksum to validate the file first.
Instead of executing the script directly, first download it and then execute it:
SOURCE='https://gist.githubusercontent.com/cci-emciftci/123123/raw/123123/sample.sh'
curl $SOURCE -o ./my_sample.sh
chmod +x my_sample.sh
./my_sample.sh
This way is good and conventional:
17:04:59#itqx|~
qx>source <(curl -Ls http://192.168.80.154/cent74/just4Test) Lord Jesus Loves YOU
Remote script test...
Param size: 4
---------
17:19:31#node7|/var/www/html/cent74
arch>cat just4Test
echo Remote script test...
echo Param size: $#
If you want the script run using the current shell, regardless of what it is, use:
${SHELL:-sh} -c "$(wget -qO - http://mywebsite.example/myscript.txt)"
if you have wget, or:
${SHELL:-sh} -c "$(curl -Ls http://mywebsite.example/myscript.txt)"
if you have curl.
This command will still work if the script is interactive, i.e., it asks the user for input.
Note: OpenWRT has a wget clone but not curl, by default.
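The ${SHELL:-sh} part is plain parameter expansion with a default; a quick sketch of how it resolves:

```shell
#!/bin/sh
# ${VAR:-fallback} substitutes the fallback only when VAR is unset or empty.
unset SHELL
echo "${SHELL:-sh}"     # fallback used: sh
SHELL=/bin/bash
echo "${SHELL:-sh}"     # variable wins: /bin/bash
```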
curl http://your.url.here/script.txt | bash
actual example:
juan@juan-MS-7808:~$ curl https://raw.githubusercontent.com/JPHACKER2k18/markwe/master/testapp.sh | bash
Oh, wow im alive
juan@juan-MS-7808:~$

"stdin: is not a tty" from cronjob

I'm getting the following mail every time I execute a specific cronjob. The called script runs fine when I'm calling it directly and even from cron. So the message I get is not an actual error, since the script does exactly what it is supposed to do.
Here is the cron.d entry:
* * * * * root /bin/bash -l -c "/opt/get.sh > /tmp/file"
and the get.sh script itself:
#!/bin/sh
#group and url
groups="foo"
url="https://somehost.test/get.php?groups=${groups}"
# encryption
pass='bar'
method='aes-256-xts'
pass=$(echo -n $pass | xxd -ps | sed 's/[[:xdigit:]]\{2\}/&/g')
encrypted=$(wget -qO- ${url})
decoded=$(echo -n $encrypted | awk -F '#' '{print $1}')
iv=$(echo $encrypted | awk -F '#' '{print $2}' |base64 --decode | xxd -ps | sed 's/[[:xdigit:]]\{2\}/&/g')
# base64 decode input and save to file
output=$(echo -n $decoded | base64 --decode | openssl enc -${method} -d -nosalt -nopad -K ${pass} -iv ${iv})
if [ ! -z "${output}" ]; then
echo "${output}"
else
echo "Error while getting information"
fi
When I'm not using the bash -l syntax, the script hangs during the wget call. So my guess would be that it has something to do with wget writing its output to stdout. But I have no idea how to fix it.
You actually have two questions here.
Why it prints stdin: is not a tty?
This warning message is printed by bash -l. The -l (--login) option asks bash to start a login shell, e.g. the one which is usually started when you enter your password. In this case bash expects its stdin to be a real terminal (e.g. the isatty(0) call should return 1), and that's not true when it is run by cron, hence this warning.
Another easy way to reproduce this warning, and the very common one, is to run this command via ssh:
$ ssh user@example.com 'bash -l -c "echo test"'
Password:
stdin: is not a tty
test
It happens because ssh does not allocate a terminal when called with a command as a parameter (one should use -t option for ssh to force the terminal allocation in this case).
Why it did not work without -l?
As correctly stated by @Cyrus in the comments, the list of files which bash loads on start depends on the type of the session. E.g. for login shells it will load /etc/profile, ~/.bash_profile, ~/.bash_login, and ~/.profile (see INVOCATION in the bash(1) manual), while for non-login shells it will only load ~/.bashrc. It seems you defined your http_proxy variable only in one of the files loaded for login shells, but not in ~/.bashrc. You moved it to ~/.wgetrc and that's correct, but you could also define it in ~/.bashrc and it would have worked.
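You can see the login/non-login distinction directly, since bash records it in the login_shell shell option:

```shell
#!/bin/sh
# shopt -q login_shell succeeds only inside a login shell;
# any /etc/profile output may appear before the second result.
bash -c  'shopt -q login_shell && echo login || echo non-login'
bash -lc 'shopt -q login_shell && echo login || echo non-login'
```

The first form prints non-login, the second login, which is why they load different startup files.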
in your .profile, change
mesg n
to
if tty -s; then
mesg n
fi
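The guard can be tried out without cron by forcing a non-terminal stdin (here /dev/null stands in for cron's stdin):

```shell
#!/bin/sh
# tty -s exits 0 only when stdin is a terminal; guarded commands
# are skipped under cron, ssh-without-a-tty, or a pipe.
if tty -s < /dev/null; then
    echo "interactive"
else
    echo "no terminal on stdin"
fi
# prints:
# no terminal on stdin
```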
I ended up putting the proxy configuration in the wgetrc. There is now no need to execute the script on a login shell anymore.
This is not a real answer to the actual problem, but it solved mine.
If you run into this problem, check whether all the environment variables are set as you expect. Thanks to Cyrus for pointing me in the right direction.

I am trying to send a mail that contains the content of the log files, collected into logfile.txt in the same directory. But it is failing.

Please find my script below:
#!/bin/bash
date=`date +%Y%m%d`
ssh root@server-ip "ls -lrth /opt/log_$date/"
ssh root@server-ip "cd /opt/log_$date/; for i in `cat *.log`;do echo $i >> /opt/log_$date/logfile.txt; done;cat /opt/log_$date/logfile.txt| mail -s \"Apache backup testing\" saranjeet.singh@*****.com"
Any help will be appreciated. Thanks
Because you use double quotes, your backticks are getting evaluated on the local host before the SSH command executes.
A much better fix in this case is to avoid them altogether, though:
ssh root#server-ip "cat /opt/log_$date/*.log |
tee /opt/log_$date/logfile.txt" |
mail -s ...
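The quoting difference is easy to reproduce locally; here sh -c stands in for the remote shell that ssh would start:

```shell
#!/bin/sh
# Double quotes expand $where locally, before the inner shell runs;
# single quotes pass the text through for the inner shell to expand.
where="local"
sh -c "echo expanded on the $where side"
sh -c 'where=remote; echo expanded on the $where side'
# prints:
# expanded on the local side
# expanded on the remote side
```

The same rule applies to backticks and $(...): inside double quotes they run on the local host, which is exactly the bug in the question.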
