cPanel email piping to shell script in Linux, syntax

I am trying to write a shell script that emails get piped to (by an email forward in cPanel).
The shell script will then post the entire email to a URL using curl.
The script looks like this:
curl -d "param=$1" http://localhost/stuff/
And the forward looks like this:
|/home/usr/script/curlthis.sh
This is only sort of working.
The email gets bounced back even though curl posts to the URL successfully. (It looks like only part of the email is getting posted, but I am not 100% sure.)
I have been told the email bounces because I am not reading stdin, but I am not sure why I need to do that or why I cannot use $1.
How can I read the entire contents of the pipe (and then post it using curl), and will that stop the mail server from bouncing it back?
EDIT
Using the answer below, here is what I came up with:
#!/bin/bash
m=$(cat -)
escapedm="$(perl -MURI::Escape -e 'print uri_escape($ARGV[0]);' "$m")"
curl --silent -G -d "param=$escapedm" http://localhost/stuff/ 2>&1 >/dev/null
This part:
2>&1 >/dev/null
is shockingly important. If you don't redirect stdout/stderr to /dev/null then the email gets bounced back, presumably because the mail server treats any output from the piped command as an error and returns it to the sender.

Your mail is being passed to the script as a stream on stdin, and not as a parameter ($1). Note that your forward script begins with a pipe, and that's the mechanism passing the mail into your script.
So you should be able to read this in your shell (bash?) using the read statement. See this SO answer for more details.
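For example, a minimal sketch of that idea, assuming a curl recent enough to have --data-urlencode (which replaces the separate Perl escaping step) and using a plain POST as in the original question:
#!/bin/bash
# Read the whole message from stdin; the cPanel forwarder pipes it to us.
email=$(cat)
# URL-encode the message and POST it to the endpoint from the question,
# keeping the script silent so the mail server has nothing to bounce back.
curl --silent --data-urlencode "param=${email}" http://localhost/stuff/ >/dev/null 2>&1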


Redirect wget screen output to a log file in bash

First of all, thank you everyone for your help. I have the following file that contains a series of URLs:
Salmonella_enterica_subsp_enterica_Typhi https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/003/717/755/GCF_003717755.1_ASM371775v1/GCF_003717755.1_ASM371775v1_translated_cds.faa.gz
Salmonella_enterica_subsp_enterica_Paratyphi_A https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/000/818/115/GCF_000818115.1_ASM81811v1/GCF_000818115.1_ASM81811v1_translated_cds.faa.gz
Salmonella_enterica_subsp_enterica_Paratyphi_B https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/000/018/705/GCF_000018705.1_ASM1870v1/GCF_000018705.1_ASM1870v1_translated_cds.faa.gz
Salmonella_enterica_subsp_enterica_Infantis https://ftp.ncbi.nlm.nih.gov/genomes/all/GCA/011/182/555/GCA_011182555.2_ASM1118255v2/GCA_011182555.2_ASM1118255v2_translated_cds.faa.gz
Salmonella_enterica_subsp_enterica_Typhimurium_LT2 https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/000/006/945/GCF_000006945.2_ASM694v2/GCF_000006945.2_ASM694v2_translated_cds.faa.gz
Salmonella_enterica_subsp_diarizonae https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/003/324/755/GCF_003324755.1_ASM332475v1/GCF_003324755.1_ASM332475v1_translated_cds.faa.gz
Salmonella_enterica_subsp_arizonae https://ftp.ncbi.nlm.nih.gov/genomes/all/GCA/900/635/675/GCA_900635675.1_31885_G02/GCA_900635675.1_31885_G02_translated_cds.faa.gz
Salmonella_bongori https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/006/113/225/GCF_006113225.1_ASM611322v2/GCF_006113225.1_ASM611322v2_translated_cds.faa.gz
I have to download the URLs using wget. I have already managed to download them, but wget's typical output appears in the shell:
--2021-04-23 02:49:00-- https://ftp.ncbi.nlm.nih.gov/genomes/all/GCA/900/635/675/GCA_900635675.1_31885_G02/GCA_900635675.1_31885_G02_translated_cds.faa.gz
Reusing existing connection to ftp.ncbi.nlm.nih.gov:443.
HTTP request sent, awaiting response... 200 OK
Length: 1097880 (1,0M) [application/x-gzip]
Saving to: ‘GCA_900635675.1_31885_G02_translated_cds.faa.gz’
GCA_900635675.1_31885_G0 100%[=================================>] 1,05M 2,29MB/s in 0,5s
2021-04-23 02:49:01 (2,29 MB/s) - ‘GCA_900635675.1_31885_G02_translated_cds.faa.gz’ saved [1097880/1097880]
I want to redirect that output to a log file. Also, as the files download, I want to decompress them, because they are compressed as .gz. My code is the following:
cat $ncbi_urls_file | while read line
do
echo " Downloading fasta files from NCBI..."
awk '{print $2}' | wget -i-
done
wget
wget does have options allowing logging to files; from man wget:
Logging and Input File Options
-o logfile
--output-file=logfile
Log all messages to logfile. The messages are normally reported to standard error.
-a logfile
--append-output=logfile
Append to logfile. This is the same as -o, only it appends to logfile instead of overwriting the old log file. If logfile does not exist, a new file is created.
-d
--debug
Turn on debug output, meaning various information important to the developers of Wget if it does not work properly. Your system administrator may have chosen to compile Wget without debug support, in which case -d will not work. Please note that compiling with debug support is always safe---Wget compiled with the debug support will not print any debug info unless requested with -d.
-q
--quiet
Turn off Wget's output.
-v
--verbose
Turn on verbose output, with all the available data. The default output is verbose.
-nv
--no-verbose
Turn off verbose without being completely quiet (use -q for that), which means that error messages and basic information still get printed.
You would need to experiment to get what you need. If you want all the logs in a single file, use -a log.out, which will cause wget to append its logging information to that file instead of writing it to stderr; see the sketch below.
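As one possible way to put this together, a sketch of the download loop using -a for logging and decompressing each file as it arrives (assuming the two-column input file from the question, with the URL in the second field, and an assumed file name urls.txt):
#!/bin/bash
ncbi_urls_file="urls.txt"    # assumed name for the two-column input file
while read -r name url
do
    echo "Downloading ${name} from NCBI..."
    wget -a log.out "${url}"             # append wget's messages to log.out
    gunzip -f "$(basename "${url}")"     # decompress the .gz that was just saved
done < "${ncbi_urls_file}"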
Standard output can be redirected to a file in bash using the >> operator (for appending to the file) or the > operator (for truncating / overwriting the file). e.g.
echo hello >> log.txt
will append "hello" to log.txt. If you still want to be able to see the output in your terminal and also write it to a log file, you can use tee:
echo hello | tee log.txt
However, wget outputs most of its basic progress information through standard error rather than standard output. This is actually a very common practice. Displaying progress information often involves special characters that overwrite lines (e.g. to update a progress bar), change terminal colors, and so on. Terminals can process these characters sensibly in real time, but it usually does not make much sense to store them in a file. For this reason, incremental progress output is often kept separate from output that is more sensible to store in a log file, which is why it typically goes to standard error rather than standard output.
However, you can still redirect standard error to a log file:
wget example.com 2>> log.txt
Or using tee:
wget example.com 2>&1 | tee log.txt
(2>&1 sends standard error to the same place as standard output, which is then piped to tee.)
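If you would rather keep the two streams apart, you can send each to its own file; a generic illustration (not specific to the question's URLs):
# -O - writes the downloaded content to stdout, while wget's own messages stay on stderr
wget -O - example.com > page.html 2> wget.log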

Linux sendmail command not sending mail while in a cron

I'm trying to run a sendmail command in a Red Hat Linux environment in a bash shell script through a cron job. I can run this script successfully when it is run manually, and every other job within the shell runs correctly other than the mailing part. I have never used sendmail and am not sure if I need to restructure how it is being presented.
I have tried mail and mailx. I am able to send the emails, but the log file contains many weird characters, so the text gets turned into an att00001.bin attachment on the email, which I do not want. The sendmail command seems to be the only one that doesn't send an attachment when run manually. Other cron jobs work correctly and are able to send emails; they just do not have the special characters in the log file.
echo '##################################################'
date
echo '##################################################'
#Run Script and write to log file
/comp/gfb281m.sh > /usr/local/bin/oracle/getload/getload.log 2>&1
#Send log file to developer group
(echo "Subject:GetLoad Shell"; echo; cat
/usr/local/bin/oracle/getload/getload.log) | sendmail -v
exampleEmail#outlook.com exampleEmail2#mail.mil
When run, this cron job should send the contents of the getload.log file to a group of users.
Fixed the issue thanks to another source. I was not using sendmail's full path. I was just writing "| sendmail -v email" instead of sendmail's full path, which was "/usr/sbin/sendmail" for me. Not sure if links are allowed here, but below is where I found the answer.
https://www.unix.com/red-hat/271632-bash-sendmail-command-not-found.html
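For reference, a hedged sketch of the mailing step with the absolute path (the /usr/sbin/sendmail location is the one reported above and may differ on other systems):
#!/bin/bash
LOG=/usr/local/bin/oracle/getload/getload.log
# Build a minimal message (subject line, blank line, then the log) and hand it
# to sendmail by its absolute path, since cron's restricted PATH may not include /usr/sbin.
(
  echo "Subject: GetLoad Shell"
  echo
  cat "${LOG}"
) | /usr/sbin/sendmail -v exampleEmail@outlook.com exampleEmail2@mail.mil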
crontab sets PATH to /usr/bin:/bin. To avoid typing absolute command names like /usr/sbin/sendmail you can set up the PATH in your crontab:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
*/30 * * * * sendmail user@example.com%subject: Sample email%%Email body%

Cannot redirect command output to a file (for postfix status)

I am trying to run the following command:
postfix status > tmp
however the resulting file never has any content written, and instead the output is still sent to the terminal.
I have tried adding the following into the mix, and even piping to echo before redirecting the output, but nothing seems to have any effect:
postfix status 2>&1 > tmp
Other commands work no problem.
script -c 'postfix status' -q tmp
It looks like it writes to the terminal instead of to stdout. I don't understand piping to 'echo'; did you mean piping to 'cat'?
I think you can always use the 'script' command, which logs everything that you see on the terminal. You would run 'script', then your command, then exit, as sketched below.
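For instance, the interactive use just described would go roughly like this (the log file name is only an example):
script /tmp/postfix-status.log    # start recording everything shown on the terminal
postfix status                    # run the command whose output you want to capture
exit                              # stop recording; the log file now contains the output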
Thanks to another SO user, who deleted their answer (so now I can't thank them), I was put on the right track. I found the answer here:
http://irbs.net/internet/postfix/0211/2756.html
So for those who want to be able to catch the output of postfix, I used the following method.
Create a script which causes the output to go to where you wish. I did that like this:
#!/bin/sh
cat <<EOF | expect 2>&1
set timeout -1
spawn postfix status
expect eof
EOF
Then I ran the script (say script.sh) and could pipe/redirect from there, i.e. script.sh > file.txt
I needed this for PHP so I could use exec and actually get a response.

shell script wget not working when used as a cron job

I have a function in a PHP web app that needs to be called periodically by a cron job. Originally I just did a simple wget to the URL to call the function and everything worked fine, but ever since we added user auth I am having trouble getting it to work.
If I manually execute these commands I can log in, get the cookie and then access the correct URL:
site=http://some.site/login/in/here
cookie=`wget --post-data 'username=testuser&password=testpassword' $site -q -S -O /dev/null 2>&1 | awk '/Set-Cookie/{print $2}' | awk 'NR==2{print}'`
wget -O /dev/null --header="Cookie: $cookie" http://some.site/call/this/function
But when they are executed as a script, either manually or through cron, it doesn't work.
I am new to shell scripting; any help would be appreciated.
This is being run on Ubuntu Server 10.04.
OK simple things first -
I assume the file begins with #!/bin/bash or something
You have chmodded the file +x
You're using Unix line endings (0x0a, not DOS 0x0d 0x0a)
And you're not expecting to return any of the variables to the calling shell, I presume?
Failing this, try teeing the output of each command to a log file, along the lines of the sketch below.
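Something like this could help pinpoint where it breaks when run from cron (the site URLs and credentials are the ones from the question; the log location is just an example):
#!/bin/bash
log=/tmp/cron-debug.log
site=http://some.site/login/in/here
cookie=$(wget --post-data 'username=testuser&password=testpassword' "$site" -q -S -O /dev/null 2>&1 | awk '/Set-Cookie/{print $2}' | awk 'NR==2{print}')
# Record what was actually captured, so a cron run can be compared with a manual run.
echo "cookie: ${cookie}" | tee -a "$log"
wget -O /dev/null --header="Cookie: $cookie" http://some.site/call/this/function 2>&1 | tee -a "$log"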
In theory, the only difference between executing these commands manually and running them from a script would be the timing.
Try inserting a sleep 5 or so before the last command. Maybe the HTTP server does some internal communication and that takes a while. Hard to say, because you didn't post the error you get.

Setting up Dreamhost Cron job to simply execute URL

Just when I thought I was understanding cron jobs, I realize I'm still not understanding. I'm trying to set up a cron job through Dreamhost to ping a URL once an hour. This URL when visited performs a small(ish) query and updates the database.
A few examples I've tried which don't seem to have worked:
wget -O /dev/null http://www.domain.com/index.php?ACT=25&profile_id=1
and
wget -q http://www.domain.com/index.php?ACT=25&profile_id=1
The correct domain was inserted into the URL of course.
So, what am I missing? How could I execute a URL via a Cronjob?
One thing: are you escaping your URL?
Try with:
wget -O /dev/null "http://www.domain.com/index.php?ACT=25&profile_id=1"
Having an unescaped ampersand in the URL usually leads to strange behaviour (the process going into the background, the rest of the URL being ignored, etc.).
I just had the same exact problem, and I found that actually two solutions work. One is as Victor Pimentel suggested: enclosing the URL in double quotes; the second option is to escape the & character in the cronjob like this: \&. So in your case the statement would look like this:
wget -q http://www.domain.com/index.php?ACT=25\&profile_id=1
or
wget -q "http://www.domain.com/index.php?ACT=25&profile_id=1"
Putting the following in the Dreamhost control panel under Goodies > Cron seems to work for me:
wget -qO /dev/null http://domain/cron.php
links -dump http://Txx-S.com/php/test1.php
Worked much better than wget. It echoes the output of the PHP script into the email without all the junk that wget produces. Took a while to get here, but it IS in the Dreamhost documentation. You don't need all the home/user stuff and the headache of placing all the PHP files under different users...
IMHO.
Pete
