"Ignore" quotes in positional parameters in my Bash script - linux

I have a script that has SMS messages forwarded to it and posts some of that data to a PHP script. Below is my Bash script:
#!/bin/bash
# Script to post data to Top up processor
curl --request POST 'http://127.0.0.1/user//topup/process.php' --data "receipt=$1" --data "username=$9"
So to run it:
./mpesa_topup.sh sms_message
But the SMS server forwards the message with single quotes:
./mpesa_topup.sh 'sms_message'
The script ends up parsing the entire SMS as one positional parameter. Here is a debug trace of what happens when the SMS server runs the script:
root@sms:/var/lib/playsms/sms_command/1# bash -x mpesa_topup.sh 'JJA88QHC22 Confirmed.on 101015 at 9:49 PMKsh25.00 received from 254712345678 SOME BODY.New Account balance is Ksh25.00'
+ curl --request POST http://10.5.1.2/topup/process.php --data 'receipt=JJA88QHC22 Confirmed.on 101015 at 9:49 PMKsh25.00 received from 254722227332 JOTHAM KIIRU.New Account balance is Ksh25.00' --data username=
root@sms:/var/lib/playsms/sms_command/1#
Is there a way to remove/ignore the opening and closing single quotes in the Bash script?
PS: I am not a coder; I've gotten where I am with help from my friend Google.

It seems like you want the first and ninth word out of the single argument you are sent. You can do something like this:
$ set -- 'JJA88QHC22 Confirmed.on 101015 at 9:49 PMKsh25.00 received from 254712345678 SOME BODY.New Account balance is Ksh25.00'
$ echo $1
JJA88QHC22 Confirmed.on 101015 at 9:49 PMKsh25.00 received from 254712345678 SOME BODY.New Account balance is Ksh25.00
$ set -f # a
$ set -- $1 # b
$ set +f # c
$ echo $1
JJA88QHC22
$ echo $9
254712345678
The key is (b) where we omit the double quotes around the variable. This allows the shell to perform word-splitting on the value of the variable.
The shell will also attempt to perform glob-pattern expansion, unless you tell it not to, which I do in (a), and then turn that back on in (c).
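Applied to the script from the question, the whole fix is three lines at the top. A minimal sketch, assuming the same endpoint as in the question:
#!/bin/bash
# Script to post data to Top up processor
set -f      # (a) turn off glob expansion
set -- $1   # (b) word-split the single quoted argument into $1...$n
set +f      # (c) turn glob expansion back on
curl --request POST 'http://127.0.0.1/user//topup/process.php' --data "receipt=$1" --data "username=$9"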

You can solve this simply by putting your main command inside a function and calling that function.
Your server is invoking your script with single quotes, which turns all of your arguments into one single argument ($1).
If you re-expand that argument unquoted and call your_function() inside the script, the problem is solved!
Here goes the example:
#!/bin/bash
# Script to post data to Top up processor
args=$1
your_function(){
curl --request POST 'http://127.0.0.1/user//topup/process.php' --data "receipt=$1" --data "username=$9"
}
# Expanding $args unquoted lets the shell word-split the single argument,
# so $1 and $9 inside the function are individual words of the SMS.
your_function $args

Yes, but that won't help. In the end, your code passes the whole SMS as a single string to curl because of --data "receipt=$1". If you only removed the quotes, that would become --data "receipt=JJA88QHC22" and the rest (like the amount) would be missing.
Your problem is that the input was multiple words of text that got mangled into one argument. The solution is to parse the SMS. Since money is involved, you probably don't want any mistakes. That's why I would use a real programming language like Python or Java. But if you want to use Bash, this might work until an attacker starts sending you SMS to steal money:
# Split the first parameter into $1...$n
set -f   # avoid glob expansion while the unquoted $1 is word-split
set -- $1
set +f
receipt="$1" # JJA88QHC22
# $2: Confirmed.on
# $3: 101015
# $4: at
# $5: 9:49
# $6: PMKsh25.00
amount=$(echo "$6" | sed -E 's/^(AM|PM)//') # sed removes the AM/PM at the beginning, leaving Ksh25.00
# $7: received
# $8: from
sender="$9 ${10} ${11}" # 254722227332 JOTHAM KIIRU.New
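The extracted pieces can then feed the curl call from the question. A sketch; the amount field is an assumption, since the process.php shown in the question only accepts receipt and username:
# receipt and username match the original script; amount is a hypothetical
# extra field that process.php would need to accept.
curl --request POST 'http://127.0.0.1/user//topup/process.php' --data "receipt=$receipt" --data "username=$9" --data "amount=$amount"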

Related

Escape semicolon, double quotes and backslashes for curl

What is the proper way to send this ,./'; '[]}{":?><|\\ as a form-data value in curl? I'm doing this
curl --location --request POST 'https://postman-echo.com/post' \
--form 'more=",./'\'';[]}{\":?><|\\\\'"
right now, and it gives a different result: apparently there are only 2 backslashes in the response when there are supposed to be 4 in total.
Solved!
Apparently this was a problem with my fish shell, which was escaping the trailing double quotes. When I ran the same request in bash, it was successful.
Probably a shell is interpreting your command line, and each \\ pair gets reduced to a single \.
(Revision, even more explicit:)
Use echo to show the result:
echo more=",./';[]}{\":?><|\\\\\\\\"
Adjust if needed, then copy the more="..." part into your curl command line.
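To see the reduction concretely, here is a quick sanity check in bash (printf '%s\n' sidesteps echo's own escape handling):
# In double quotes, bash reduces each \\ pair to one literal backslash,
# so these eight backslashes print as four.
printf '%s\n' "\\\\\\\\"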

What does <<$$$ mean in a Unix shell?

I'm using the google-http-client for a project at work, and when I make some requests the following is printed to my console:
curl -v --compressed -X POST -H 'Accept-Encoding: gzip' -H 'User-Agent: Google-HTTP-Java-Client/1.23.0 (gzip)' -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' -d '@-' -- 'http://example.com' << $$$
I was wondering what << $$$ means.
If I try to run this command in a Linux terminal, it seems that << $$$ makes the console wait for more input. If that's the case, how can I tell the terminal that I'm done feeding input to it?
Later edit: I have found that curl's -d @- argument means the data will be read from stdin.
This is a "here-document" with an unusual end marker.
A here-document is a type of redirection, and usually looks like
utility <<MARKER
document
content
goes here
MARKER
That is, it feeds a document delimited by MARKER to the utility on its standard input.
This is like utility <file where file contains the lines in the here-document, except that the shell will do variable expansion and command substitution on the text of the document (this may be prevented by quoting the marker as either \MARKER or 'MARKER' at the start).
The here-document marker can be any word, but $$$ is a highly unusual choice of word for it. As $ has a special meaning in the shell, using $ in the marker is, or may be, confusing to the reader.
If you type
somecommand <<stuff
in the shell, the shell expects you to give the rest of the contents of the here-document, and then the word stuff on a line by itself. That's how you signal end of input in a here-document.
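For the command in the question, that means typing the POST body and then $$$ alone on a line, for example (the URL and body here are placeholders):
curl -d '@-' -- 'http://example.com' << $$$
key1=value1&key2=value2
$$$
Quoting the marker, e.g. << '$$$', would additionally prevent variable expansion and command substitution inside the body, as noted above.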

curl request in bash script that requires a dollar sign in the URL [duplicate]

This question already has answers here:
Difference between single and double quotes in Bash
(7 answers)
Closed 5 years ago.
I have a problem that I just cannot seem to solve.
I need to make a curl request to a given URL, and the URL requires a dollar sign in it.
So, for example:
www.example.com/mypath/function&$filter=whatever
Now, I can execute this just fine from the command line if I put the URL in single quotes, or if I escape the dollar sign with a backslash inside double quotes.
Obviously, there is a problem if you do not do either of the above, because bash will see the '$' and interpret whatever follows it as a variable name.
So when I try:
URL="www.example.com/mypath/function&\\\$filter=whatever"
MYOUTPUT=$(curl -s --header "Authorization: $HEADER" "$URL")
it doesn't work right.
When I try
URL="www.example.com/mypath/function&\$filter=whatever"
MYOUTPUT=$(curl -s --header "Authorization: $HEADER" "'$URL'")
it doesn't work right.
What am I doing wrong?
I can tell it's not working right because the server is not responding the same way in the script as it does in the command line. The site responds in a certain default manner if the query isn't done right, and I always get the default response through the script.
If you put a literal $ in a string by single-quoting it, bash will not try to interpret it in later expansions:
URL='www.example.com/mypath/function&$filter=whatever'
MYOUTPUT=$(curl -s --header "Authorization: $HEADER" "$URL")
(Your second attempt escaped the dollar sign correctly, but "'$URL'" then sent literal single-quote characters to the server as part of the URL, which is why you got the default response.)
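A quick way to check what curl will actually see is to print the variable first:
$ URL='www.example.com/mypath/function&$filter=whatever'
$ printf '%s\n' "$URL"
www.example.com/mypath/function&$filter=whatever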

Why is shell output redirected to a randomly named file?

I wrote a crontab job to make 3 POST requests every 10 minutes with cURL; here is the pseudocode:
#!/bin/sh
echo `date` >>/tmp/log
curl $a >>/tmp/log
curl $b >>/tmp/log
curl $c >>/tmp/log
That is all the code, but after the first echo to /tmp/log, the other output was saved in a randomly named file like "A6E0U9~D". It doesn't happen every time, and I have no clue why. :(
PS: I don't actually use "$a"; I use a raw string copied from the Chrome Dev Tools. One of them was included here, but the cURL link has been deleted because it contained my login cookie. Every single line's output is fine on its own; the only problem is that some of the output gets redirected to a randomly named file.
Not really a solution, but you can redirect the output of everything at once, rather than repeatedly appending to the same file.
#!/bin/sh
{
date
curl ...
curl ...
curl ...
} > /tmp/log
The benefit here is that all the output will appear in the same file, whether that file is /tmp/log or an oddly named file. If you still end up with another file aside from /tmp/log, then you know there must be a problem with one of the curl calls.
(Note that capturing the output of date and re-printing it with echo is redundant; calling date directly does the same thing.)
In order to run each curl in parallel, you'll need to save the output from each, and concatenate them once all have finished.
#!/bin/sh
{
date
# Create the temp files in the current shell; backgrounding the assignment
# together with curl would set the variable only in a subshell, and the
# later cat and rm would never see it.
tmp1=$(mktemp) || exit
tmp2=$(mktemp) || exit
tmp3=$(mktemp) || exit
curl ... > "$tmp1" &
curl ... > "$tmp2" &
curl ... > "$tmp3" &
wait
cat "$tmp1" "$tmp2" "$tmp3"
} > /tmp/log
rm "$tmp1" "$tmp2" "$tmp3"

How to make my script continue mirroring where it left off?

I'm creating a script to download and mirror a site, URLs are taken from a .txt file. The script is supposed to run daily for a few hours, so I need to get it to continue mirroring where it left off.
Here is the script:
# Created by Salik Sadruddin Merani
# email: ssm14293@gmail.com
# site: http://www.dragotech-innovations.tk
clear
echo ' Created by: Salik Sadruddin Merani'
echo ' email: ssm14293@gmail.com'
echo ' site: http://www.dragotech-innovations.tk'
echo
echo ' Info:'
echo ' This script will use the URLs provided in the File "urls.txt"'
echo ' Info: Logs will be saved in logfile.txt'
echo ' URLs are taken from the urls.txt file'
#
url=`< ./urls.txt`
useragent='Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:25.0) Gecko/20100101 Firefox/25.0'
echo ' Mozilla Firefox User agent will be used'
cred='log=abc@123.org&pwd=abc123&wp-submit=Log In&redirect_to=http://abc@123.org/wp-admin/&testcookie=1'
echo ' Loaded Credentials'
echo ' Logging In'
wget --save-cookies cookies.txt --post-data "${cred}" --keep-session-cookies http://members.ebenpagan.com/wp-login.php --delete-after
OIFS=$IFS
IFS=','
arr2=$url
for x in $arr2
do
echo ' Loading Cookies'
wget --spider --load-cookies cookies.txt --keep-session-cookies --mirror --convert-links --page-requisites ${x} -U ${useragent} -np --adjust-extension --continue -e robots=no --span-hosts --no-parent -o log-file-$x.txt
done
IFS=$OIFS
Problems with the script:
The script is not referencing its links correctly, i.e. rewriting them so they refer to the files in the parent directory; please tell me about that.
The script is not resuming after being aborted, even with the --continue option.
A smarter way to solve the problem is to work with two .txt files; let's affectionately call them "to_mirror.txt" and "mirrored.txt". Keep each URL on its own line. Declare in your script a variable with the value 0, for example total_mirrored=0; it will be very important in our code. Then, every time the wget command is executed and a site is consequently mirrored, increment the value of the "total_mirrored" variable by 1.
Upon exiting the loop, "total_mirrored" will hold some integer value.
Then you must extract the lines from "to_mirror.txt" in the range from the first line up to line "total_mirrored" and append them to "mirrored.txt".
After that, delete that range from "to_mirror.txt".
In this case the sed command can help you, see my example:
sed -n "1,$total_mirrored p" to_mirror.txt >> mirrored.txt && sed -i "1,$total_mirrored d" to_mirror.txt
You can learn a lot about the sed command by running man sed in your terminal, so I won't explain here what each option does as it's redundant.
But know that:
>> appends to an existing file, or creates the file if no file of that name is present in the directory. In A && B, B runs only if A succeeded.
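A minimal sketch of the whole idea, assuming one URL per line in to_mirror.txt (the wget options are placeholders; reuse the ones from your own script):
#!/bin/bash
total_mirrored=0
while IFS= read -r x; do
    # Placeholder wget options; stop counting at the first failure
    wget --mirror --convert-links --page-requisites "$x" || break
    total_mirrored=$((total_mirrored + 1))
done < to_mirror.txt
# Move the completed lines from to_mirror.txt to mirrored.txt
if [ "$total_mirrored" -gt 0 ]; then
    sed -n "1,$total_mirrored p" to_mirror.txt >> mirrored.txt &&
    sed -i "1,$total_mirrored d" to_mirror.txt
fi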
The --continue flag in wget will attempt to resume the download of a single file in the current directory. Please refer to the wget man page for more info; it is quite detailed.
What you need is to resume the mirroring/downloading from where the script previously left off.
So it's more a modification of the script than some setting in wget. I can suggest a way to do that, but mind you, you can use a different approach as well.
Modify the urls.txt file to have one URL per line. Then refer to this pseudocode:
get the URL from the file
if (URL ends with a token #DONE), continue
else, wget command
append a token #DONE to the end of the URL in the file
This way, you will know which URL to continue from the next time you run the script. All URLs that have a "#DONE" at the end will be skipped, and the rest will be downloaded; see the sketch below.
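A bash sketch of that pseudocode, assuming urls.txt has one URL per line (the wget options are placeholders, and the #DONE marking is naive):
#!/bin/bash
while IFS= read -r line; do
    case $line in
        *'#DONE') continue ;;  # already mirrored on a previous run
    esac
    # Placeholder wget options; on success, append the #DONE token to that
    # URL in the file (assumes the URL has no characters special to sed).
    wget --mirror --convert-links --page-requisites "$line" &&
        sed -i "s|^$line\$|$line#DONE|" urls.txt
done < urls.txt
Note that sed -i replaces urls.txt with a new file while the loop keeps reading the original contents through its already-open file descriptor, so the marking takes effect on the next run.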
