MySQL auto backup stopped working - cron

I have a MySQL DB on a shared hosting server.
It's been sending me daily backups for years through email, with this cron task:
mysqldump -ce --user=***** --password=***** mydbname | gzip | uuencode dbbackup_e.gz | mail ****@gmail.com
For some reason my daily email does not contain the backup file anymore (I receive empty emails).
Could somebody point out what's wrong?
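One way to isolate the failure is to run each stage of the pipeline by hand and check that every intermediate file is non-empty. This is only a debugging sketch; USER, PASS, and the address are placeholders:
mysqldump -ce --user=USER --password=PASS mydbname > /tmp/dbbackup.sql
ls -l /tmp/dbbackup.sql        # an empty file means the dump itself is failing
gzip -c /tmp/dbbackup.sql > /tmp/dbbackup_e.gz
uuencode /tmp/dbbackup_e.gz dbbackup_e.gz | mail you@example.com
A common culprit on shared hosts is a changed database password or a binary (mysqldump, uuencode, mail) that disappeared in a server migration; running the stages separately shows which one stopped producing output.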

Related

Mautic is not processing the queue. Messages are in the spool/default folder

Mautic will not send my queued emails.
I have set up the cron jobs and they are running as expected. The cron job email report for the ":messages:send" command that runs every minute is always this:
Processing message queue
Messages sent: 0
Content-type: text/html; charset=UTF-8
I have messages in my queue, which I have sent via the Contacts Tab, by clicking on the contact name (myself) and then clicking on the Send Email button, just to send myself a test email.
In my configuration email settings I am using PHP Mail.
If I have the mail set to 'Send Immediately' it works fine; I get my test email instantly. But if I have it set to queue, the message goes into my spool/default folder, and when the cron job triggers it is not sent.
Things I have tried so far:
I deleted the cache folder contents
I checked to see if I have two versions of this file: SendChannelBroadcastCommand.php. I don't; I have this file only once, in the ChannelBundle/Command folder. It is not also present in the CoreBundle/Command folder (as a similar post suggested).
I deleted all of the queued messages in the spool/default folder, then sent some more... which are now sitting in the folder just like before.
Things that might be a factor?
The permissions for the file SendChannelBroadcastCommand.php are set to 644. I don't know if this is correct but assume it is.
When I open the SendChannelBroadcastCommand.php file in Dreamweaver, it flags lots of syntax errors. I don't really know enough about code to determine if these are genuine errors or if Dreamweaver is just being a little too sensitive. I also don't know if this file is included inside another one that would make those errors disappear if Dreamweaver could see the complete end result, but I thought it was worth a mention.
Things that I'm sure are not a problem
I'm certain that the cron job is set up correctly. It is running, and I receive the email reports (although I've turned those back off now, as I don't want a report every minute).
I've seen this problem mentioned a few times on other forums but none of the solutions are working for me.
My Mautic installation is 2.14.0
My PHP is 7.0.31
Installation was via Softaculous on cPanel on a dedicated server hosted with Namecheap
Thank you in advance for any suggestions that I can try to fix this issue.
Steve.
Oh, in case you're wondering... I am using PHP Mail because Mautic would not connect to Amazon SES. For that I get the following error (which my hosting company was unable to help me fix, so I'm trying PHP Mail):
Connection could not be established with host email-smtp.us-east-1.amazonaws.com [Connection refused #111] Log data: ++ Starting Mautic\EmailBundle\Swiftmailer\Transport\AmazonTransport !! Connection could not be established with host email-smtp.us-east-1.amazonaws.com [Connection refused #111] (code: 0)
Regarding your Amazon SES [Connection refused #111] (code: 0) error: Mautic is hard-coded to use port 2587 to connect to Amazon SES, regardless of what port you put in the SMTP port number. This is the case in Mautic version 2.13.1. Make sure TCP 2587 in/out is open on your web server firewall. This change solved that error message for me. I have not experienced a queue error, so I can't comment on that.
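To confirm the port is the issue, you can test connectivity to the SES endpoint directly and, if it is blocked, open the port. A sketch for a firewalld-based server; adjust for whatever firewall your host actually runs:
# Test whether outbound TCP 2587 to SES is reachable
nc -vz email-smtp.us-east-1.amazonaws.com 2587
# If blocked on a firewalld server, allow the port and reload
firewall-cmd --permanent --add-port=2587/tcp
firewall-cmd --reload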
If your queue setup is correct, there is one more thing to set up at the cron level:
php /path/to/mautic/bin/console mautic:emails:send
This command processes Mautic's queued emails; see https://docs.mautic.org/en/setup/cron-jobs for more info.
If your queue setup is good, a missing mautic:emails:send job is exactly why the emails are not being processed. Just add the command as a cron job and try again; it should work.
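For example, a crontab entry along these lines would process the queue every five minutes (/path/to/mautic is the same placeholder as above):
*/5 * * * * php /path/to/mautic/bin/console mautic:emails:send >/dev/null 2>&1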

Bacula - Director unable to authenticate with Storage daemon

I'm trying to stay sane while configuring a Bacula server on my virtual CentOS Linux release 7.3.1611 machine to do a basic local backup job.
I prepared all the configuration I found necessary in the conf files and set up the MySQL database accordingly.
When I want to start a job (local backup for now) I enter the following commands in bconsole:
*Connecting to Director 127.0.0.1:9101
1000 OK: bacula-dir Version: 5.2.13 (19 February 2013)
Enter a period to cancel a command.
*label
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
Automatically selected Storage: File
Enter new Volume name: MyVolume
Defined Pools:
1: Default
2: File
3: Scratch
Select the Pool (1-3): 2
This returns
Connecting to Storage daemon File at 127.0.0.1:9101 ...
Failed to connect to Storage daemon.
Do not forget to mount the drive!!!
You have messages.
where the message is:
12-Sep 12:05 bacula-dir JobId 0: Fatal error: authenticate.c:120 Director unable to authenticate with Storage daemon at "127.0.0.1:9101". Possible causes:
Passwords or names not the same or
Maximum Concurrent Jobs exceeded on the SD or
SD networking messed up (restart daemon).
Please see http://www.bacula.org/en/rel-manual/Bacula_Freque_Asked_Questi.html#SECTION00260000000000000000 for help.
I double- and triple-checked all the conf files for integrity, names, and passwords. I don't know where else to look for the error.
I will gladly post any parts of the conf files, but I don't want to bloat this question right away if it isn't necessary. Thank you for any hints.
It might help someone sometime who made the same mistake as I did:
After looking through manual page after manual page, I found it was my own mistake. I had (for a reason I don't precisely recall; I guess to troubleshoot another issue earlier) set all ports to 9101 - for the Director, the File daemon, and the Storage daemon.
So I assume the Bacula components must have blocked each other's communication on port 9101. After restoring the default ports (9102 for the File daemon, 9103 for the Storage daemon) according to the manual, it worked and I can now back up locally.
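For reference, the two resources that have to agree look roughly like this (a sketch using the stock resource names; the password and device are placeholders). The SDPort in the Director's Storage resource must match the port the Storage daemon actually listens on:
# bacula-sd.conf - the Storage daemon listens on its default port
Storage {
  Name = bacula-sd
  SDPort = 9103
  WorkingDirectory = "/var/spool/bacula"
  Pid Directory = "/var/run"
}
# bacula-dir.conf - the Director's Storage resource points at that same port
Storage {
  Name = File
  Address = 127.0.0.1
  SDPort = 9103
  Password = "must-match-the-Director-resource-in-bacula-sd.conf"
  Device = FileStorage
  Media Type = File
}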
You also have to add the Director's name from the backup server: edit /etc/bacula/bacula-fd.conf on the remote client, in the section "List Directors who are permitted to contact this File daemon":
Director {
Name = BackupServerName-dir
Password = "use *-dir password from the same file"
}

Save mail message as a file on Linux using sendmail

I have an application running on several RHEL 5.8 systems which monitors and alerts (via email). I need to create a durable log of these alerts locally on each node.
I think the easiest way to do this would be to add a local email user to the alerts and then use mailbox settings or a script (if needed) to save each message on a local filesystem.
I would settle for the message body dumped to a text file (one file per email).
It would be better if it could extract time, host, subject, and body as separate fields for consumption by an open-source log reader.
My systems are using sendmail 8.1 and I would prefer to stick with it, although I also have postfix 2.3.3 available.
As you reported, your sendmail uses procmail as its local mailer, so create a special OS user account (e.g. log_user) and use ~log_user/.procmailrc to instruct procmail to deliver messages to a maildir folder.
~log_user/.procmailrc
# deliver ALL messages to ~/maillog/ maildir.
# see "man procmailex" for email sorting examples
:0
maillog/
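If you then want time, host, and subject as separate fields for a log reader, a small post-processing script can pull the headers out of each maildir file with formail (which ships with procmail). A hypothetical sketch; the maildir path and output file are placeholders:
#!/bin/sh
# Index each new alert message as one pipe-separated line: date|host|subject|file
for msg in /home/log_user/maillog/new/*; do
  [ -f "$msg" ] || continue
  date=$(formail -x Date: < "$msg")
  host=$(formail -x Received: < "$msg" | head -n 1)
  subject=$(formail -x Subject: < "$msg")
  printf '%s|%s|%s|%s\n' "$date" "$host" "$subject" "$msg"
done >> /var/log/alert_index.log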

Cannot fetch email with large attachment

I use fetchmail to fetch emails from my Gmail account, but when a mail attachment is very large (more than 20 MB, say), the mail is not fetched to my local inbox.
How can I force fetchmail to download such a large email (or is this a problem on Gmail's side)?
Alternatively, I would settle for downloading just the message without the attachment in such cases (but without deleting it on the Gmail server).
How can I solve this problem? Any suggestions?
Call fetchmail with:
fetchmail --limit 25165824 --timeout 1200
to increase the message size limit to 24 MB and the server non-response timeout to 20 minutes (vary the size and timeout as you prefer).
Be aware that some local MDAs may also need their limits raised; I needed to add the following to /etc/postfix/main.cf:
message_size_limit = 25165824
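The same limits can live in ~/.fetchmailrc so they apply to every poll. A sketch assuming IMAP against Gmail; the account details are placeholders:
poll imap.gmail.com protocol imap timeout 1200
  user "you@gmail.com" password "app-password"
  ssl
  limit 25165824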

Shell multiple logs monitoring and correlation

I have been trying this for days but am still struggling.
The objective of the script is to perform real-time log monitoring on multiple servers (29 of them, in particular) and correlate login-failure records between servers. Each server's log is compressed at 23:59:59 every day, and a new log starts at midnight.
My idea was to run tail -f | grep "failed password" | tee centralized_log on every server, started by a loop through all the server names, running in the background, and appending the login-failure records to a centralized log. But it doesn't work, and it creates a lot of daemons which become zombies as soon as I terminate the script.
I am also considering running tail at an interval of a few minutes. But as the logs grow larger, the processing time will increase. How do I set a pointer to where the previous tail stopped?
So could you please suggest a better, working way to do multiple-log monitoring and correlation? Additional installations are discouraged unless absolutely necessary.
If your logs are going through syslog, and you're using rsyslogd, then you can configure the syslog on each machine to forward the specific messages you're interested in to one (or two) centralized log servers, using a property match like:
:msg, contains, "failed password"
See the rsyslog documentation for more details about how to set up reliable syslog forwarding.
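Putting it together, a drop-in file on each of the 29 servers could look like the following sketch. The central host name loghost and TCP port 514 are assumptions, and this uses rsyslog's legacy property-filter syntax:
# /etc/rsyslog.d/50-failed-password.conf
# Forward matching messages to the central server over TCP (@@ = TCP, @ = UDP)
:msg, contains, "failed password"    @@loghost:514
# Optionally stop local processing of these messages (use "& ~" on old rsyslog)
& stop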
