I am trying to send email using the sendmail utility on AIX (Unix). My mailheader file contains:
To: to@gmail.com
From: from@gmail.com
MIME-Version: 1.0
Content-Type: text/html; charset=us-ascii
Subject: Alert
status.html contains an HTML report spooled from a DB query.
(cat ./mailheader ./status.html ) | sendmail -t
When I run the above command from a shell script invoked by crontab, I get the following messages in the log:
cat: 0652-050 Cannot open ./mailheader.
cat: 0652-050 Cannot open ./status.html.
But the shell script runs perfectly when I run it manually.
Please let me know your thoughts.
I didn't change directory in the cron job, and hence I was getting the error.
I used absolute paths and debugging to figure out the issue, and added a cd at the beginning of my script to resolve it.
Use absolute paths, like:
(cat /there/mailheader /there/status.html ) | sendmail -t
Or use cd first:
cd /somepath
(cat mailheader status.html ) | sendmail -t
But first of all, debug. Add these lines into your script:
set -xv                    # trace each command as it is executed
exec >>/tmp/debug.$$ 2>&1  # append all output to a per-run file in /tmp
date                       # when the job ran
pwd                        # which working directory cron used
id -a                      # which user and groups the job ran as
env                        # the (minimal) environment cron provided
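Once the debug output shows what working directory and environment cron gave you, a minimal cron-safe version of the script might look like this (a sketch; /there stands in for wherever mailheader and status.html actually live):
#!/bin/sh
# cron starts jobs in the user's home directory with a minimal
# environment, so change to the directory holding the input files first
cd /there || exit 1
cat mailheader status.html | sendmail -t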
I have a bash script which picks up files from /tmp and emails them to me. I run this script as root and it works perfectly, but I am trying to automate it with crontab.
I added the job to crontab, again running as root, and now I get 'Couldn't lock /sent'.
I managed to confirm it's using the file in /root by changing its name in Muttrc, and I tried permissions of 600 and 777.
(I am also getting a segmentation fault; hopefully that will go away if I fix the above.)
Does anyone have any idea why Mutt behaves differently as a cron job with the same user and the same file?
I simplified the script as follows and it does exactly the same thing: it works from a root shell, but not from crontab.
Error:
Couldn't lock /sent
/data/mediators/email_file: line 5: 1666 Segmentation fault mutt $email -s "test" -i /tmp/test.txt < /dev/null
email_file script:
#!/bin/bash
email=——@——.com
mutt $email -s "test" -i /tmp/test.txt < /dev/null
crontab:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=——@—-.com
HOME=/
54 02 * * * root /data/mediators/email_file
I also added printenv to the job and compared the output with a server where this runs OK. The difference is that the working system has USER=root, whereas the non-working one does not show this variable as being set.
The issue is the combination of the HOME=/ environment variable in the crontab and mutt's default record configuration, which defaults to ~/sent.
mutt stores sent emails in the record file, so choose whether you want to keep them: either fix the crontab's HOME variable or set mutt's record to a meaningful value.
If you want to set it, add this option to the mutt command in email_file:
-e 'set record=/root/sent'
or unset it with:
-e 'unset record'
You can find more in the muttrc(5) man page:
record
Type: path
Default: “~/sent”
This specifies the file into which your outgoing messages should be appended. (This is meant as the primary method for saving a copy of your messages, but another way to do this is using the “my_hdr” command to create a “Bcc:” field with your email address in it.)
The value of $record is overridden by the $force_name and $save_name variables, and the “fcc-hook” command. Also see $copy and $
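Putting that together, email_file might look like this (a sketch using the 'unset record' variant; the address is a placeholder for the redacted one):
#!/bin/bash
# 'unset record' stops mutt from appending a copy to $HOME/sent,
# which fails under cron because this crontab sets HOME=/
email=user@example.com   # placeholder for the redacted address
mutt -e 'unset record' "$email" -s "test" -i /tmp/test.txt < /dev/null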
I have a bash script which I want to run as a cron job.
It works fine except for one command.
I redirected its stderr to capture the error, and it turned out the command was not recognized.
It is a root crontab.
Both the current user and root execute the command successfully when I type it in the terminal.
Even the script executes the command when I run it through the terminal.
Startup script:
#!/bin/bash
sudo macchanger -r enp2s0 > /dev/null
sudo /home/deadpool/.logkeys/logger.sh > /dev/null
logger.sh:
#!/bin/bash
dat="$(date)"
echo " " >> /home/deadpool/.logkeys/globallog.log
echo $dat >> /home/deadpool/.logkeys/globallog.log
echo " " >> /home/deadpool/.logkeys/globallog.log
cat /home/deadpool/.logkeys/logfile.log >> /home/deadpool/.logkeys/globallog.log
cat /dev/null > /home/deadpool/.logkeys/logfile.log
cat /dev/null > /home/deadpool/.logkeys/error.log
logkeys --start --output /home/deadpool/.logkeys/logfile.log 2> /home/deadpool/.logkeys/error.log
error.log:
/home/deadpool/.logkeys/logger.sh: line 10: logkeys: command not found
Remember that cron runs with a different environment than your user account or root does, and its PATH might not include the directory where logkeys lives. Try the absolute path for logkeys in your script (find it with which logkeys as your user). Additionally, I recommend looking at this answer on Server Fault about running scripts as if they were started from cron, for when you need to find out why something works for you but not in a job.
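For example, if which logkeys prints /usr/local/bin/logkeys (an assumed location; substitute whatever path it reports on your system), the last line of logger.sh becomes:
# full path, since cron's PATH may not include the directory logkeys is in
/usr/local/bin/logkeys --start --output /home/deadpool/.logkeys/logfile.log 2> /home/deadpool/.logkeys/error.log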
I have written a Windows batch file so that if a user selects a specific option, a specific .sh file is executed on a Linux server.
For example:
If a user presses 1:
for %%? in (1) do if /I "%C%"=="%%?" goto print
:print
CLS
start C:\tools\PLINK.EXE -ssh -pw <password> -t <user>@10.111.11.111 "echo <password> | sudo -S /var/www/test/test.sh"
I can see the shell script starting. On my Linux server, test.sh contains commands that create a .txt file, for example:
echo $NAME "test" >> test.txt
My question: why is it that when I run test.sh directly on the Linux server, test.txt is created successfully, but when I run test.sh through my Windows batch file, I can see the command being executed yet no test.txt file is created on the Linux server?
Thank you for the help!
The test.txt will be in the HOME directory of the user the command runs as; give the full path for test.txt to verify.
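A minimal fix, assuming the output should live next to the script in /var/www/test, is to write to an absolute path:
#!/bin/bash
# an absolute path makes the output location independent of the
# working directory of the ssh/sudo session
echo $NAME "test" >> /var/www/test/test.txt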
I'm using a Bash script for automated backups of my system and its most important config files.
The strange thing is: after running, the script should send me an email with the contents of a log file, but debugging the script reveals that it runs a different command than the one written in the script.
Script:
#!/bin/bash
###########################
### ###
### BACKUP SCRIPT ###
### ###
###########################
......
LOG="backup.log"
exec > >(tee -i >( sed 's/\x1B\[[0-9;]*[JKmsu]//g' > ${LOG}))
exec 2>&1
# commands etc
mailx -s "Backup | subject" mail@mail.tld < $LOG
So my log contains all necessary output and isn't empty.
Debugging the script reveals this:
echo -e '\033[1;32mExiting prune script\033[0m'
Exiting prune script
+ mailx -s 'Backup | subject' mail@mail.tld #Missing < $LOG!!!
I really don't know why it's missing my logfile. The mail I receive is just an empty mail with the correct subject.
Any ideas why?
PS: I'm using Ubuntu 16.04 LTS, but this shouldn't matter.
Backup.log: https://pastebin.com/cCVSLV0h (nothing special).
If I run the mail command directly from my shell, it sends the log as I expect.
I'm using a simple script to automate FTP. The script looks like this:
ftp -nv $FTP_HOST<<END_FTP
user $FTP_USER $FTP_PASS
binary
mkdir $REMOTE_DIR
cd $REMOTE_DIR
lcd $LOCAL
put $FILE
bye
END_FTP
But I would like to pipe STDERR to syslog and STDOUT to a logfile. Normally I would do something like this: ftp -nv $FTP_HOST 1>>ftp.log | logger<<END_FTP, but in this case that won't work because of <<END_FTP. How should I do it properly to make the script work? Note that I want to redirect only the output of the FTP command inside my script, not the whole script's output.
This works without using a temp file for the error output. The 2>&1 sends the error output to where standard output is going — which is the pipe. The >> changes where standard output is going — which is now the file — without changing where standard error is going. So, the errors go to logger and the output to ftp.log.
ftp -nv $FTP_HOST <<END_FTP 2>&1 >> ftp.log | logger
user $FTP_USER $FTP_PASS
binary
mkdir $REMOTE_DIR
cd $REMOTE_DIR
lcd $LOCAL
put $FILE
bye
END_FTP
How about:
exec > mylogfile; exec 2> >(logger -t myftpscript)
in front of your ftp script?
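In context, that would look something like this (a sketch):
#!/bin/bash
# from here on, stdout goes to the log file and stderr to syslog
exec > mylogfile
exec 2> >(logger -t myftpscript)
ftp -nv $FTP_HOST <<END_FTP
user $FTP_USER $FTP_PASS
binary
mkdir $REMOTE_DIR
cd $REMOTE_DIR
lcd $LOCAL
put $FILE
bye
END_FTP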
Another way of doing this I/O redirection is with the { ... } operations, thus:
{
ftp -nv $FTP_HOST <<END_FTP >> ftp.log
user $FTP_USER $FTP_PASS
binary
mkdir $REMOTE_DIR
cd $REMOTE_DIR
lcd $LOCAL
put $FILE
bye
END_FTP
# Optionally other commands here...stderr will go to logger too
} 2>&1 | logger
This is often the best mechanism when more than one command, but not all commands, need the same I/O redirection.
In context, though, I think this solution is the best (but that's someone else's answer, not mine):
ftp -nv $FTP_HOST <<END_FTP 2>&1 >> ftp.log | logger
...
END_FTP
Why not create a netrc file and let it do the login and put the file for you?
The netrc file will let you login and allow you to define an init macro that will make the needed directory and put the file you want over there. Most ftp commands let you specify which netrc file you'd like to use, so you could use various netrc files for various purposes.
Here's an example netrc file called my_netrc:
machine ftp_host
login ftp_user
password swordfish
macdef init
binary
mkdir my_dir
cd my_dir
put my_file
bye

Note that the macdef entry is terminated by the blank line, and a macro named init runs automatically after login. Then, you could do this:
$ ftp -v -N my_netrc $FTP_HOST 2>&1 >> ftp.log | logger