Variable date not working in linux cron - linux

I am having an issue that I cannot find any information on, even after an extensive Google search.
I have a Linux cron job, running via crontab, that works great until I try to add a variable date to the name of the output file. When I run the same command outside cron, just from the command line, it works fine, and the cron job also works if I take out the date part.
Command line code that works:
sudo mysqldump -h mysql.url.com -u user -pPassword intravet sites | gzip > /mnt/disk2/database_`date '+%m-%d-%Y'`.sql.gz
Cron that works:
15 2 * * * root mysqldump -h mysql.url.com -u user -pPassword intravet sites | gzip > /mnt/disk2/database.sql.gz
Cron that DOESN'T work:
15 2 * * * root mysqldump -h mysql.url.com -u user -pPassword intravet sites | gzip > /mnt/disk2/database_`date '+%m-%d-%Y'`.sql.gz
I do not understand why I cannot use the date command inside a cron job.
Everything I find says I can, but in practice, I cannot.
Server details:
Ubuntu 12.04.5
Thank you for any insight.

You just need to escape the % signs:
* * * * * touch /tmp/foo_`date '+\%m-\%d-\%Y'`.txt
Result:
[root@linux tmp]# ls -l /tmp/foo_*
-rw-r--r-- 1 root root 0 Apr 18 02:17 /tmp/foo_04-18-2015.txt
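For context: cron treats an unescaped `%` in a crontab line as a newline (everything after the first `%` becomes standard input to the command), so the format string never reaches `date` intact. Outside the crontab file no escaping is needed; this small sketch shows the filename the escaped entry above is meant to produce:

```shell
# Plain % works in an interactive shell; only the crontab file
# itself needs the \% escaping shown above.
stamp=$(date '+%m-%d-%Y')        # e.g. 04-18-2015
touch "/tmp/foo_${stamp}.txt"
ls /tmp/foo_*                    # the dated file now exists
```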

Try replacing the backticks with $() and escaping your %s, such as:
15 2 * * * root mysqldump -h mysql.url.com -u user -pPassword intravet sites | gzip > /mnt/disk2/database_$(date '+\%m-\%d-\%Y').sql.gz
I only mention removing the backticks because you will end up having all kinds of escaping problems later in your coding endeavours. Stick with using $() for command substitution.
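To illustrate the nesting point (a toy example; the path is hypothetical): with `$()` an inner substitution needs no extra quoting, while with backticks the inner pair would have to be backslash-escaped.

```shell
# Get the name of the directory holding the dump file.
# $() nests cleanly; a backtick version would need \` \` inside.
file=/mnt/disk2/database.sql.gz
dir_name=$(basename "$(dirname "$file")")
echo "$dir_name"   # → disk2
```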

Related

Cronjob stopped executing

I have used crontab before, but now I cannot get any command to run anymore.
I am editing directly via crontab -e and testing with simple commands like
* * * * * echo "hello there" >> /Users/myUsername/Desktop/test.txt
Running this command ps -ef | grep cron | grep -v grep gives me this output:
0 270 1 0 6Sep20 ?? 0:00.61 /usr/sbin/cron
Today is 22Sep20. Did the crontab stop running?
My shell is zsh on MacOS.
On MacOS run crontab -l to list all installed cron scripts
or crontab -l -u [user] for another user.
Your * * * * * syntax means it runs every minute, and that looks fine to me.

schedule pihole blacklist from text file of domains with crontab on raspberry pi

I'm trying to edit my crontab to make a scheduled block of specific domains for my Pi-hole configuration.
My setup: I've got a file, blocklist.txt, which contains a list of domains like:
instagram.com
facebook.com
newssite.com
I'm using the following to feed that file to xargs. I've taken the normal version and converted it to absolute paths here so that it will work in cron. I'm also attempting to write output to a file at /home/pi/cron.log, which I made just to capture the output and see what's going on; nothing updates there either.
46 17 * * * /usr/bin/xargs -a /home/pi/blocklist.txt /usr/local/bin/pihole --wild &>/home/pi/cron.log
This works totally fine when run in my normal shell and updates the log, etc., but it does not work when I schedule a cron job for it a few minutes out.
Maybe I'm missing something with my paths or with the scheduling?
I have already set my timezone in raspi-config.
My solution does not currently read from a file, but it's very close to what you are looking for. Here's a blog post with lots of details, but here are the core snippets:
block.sh:
#!/bin/bash
blockDomains=(facebook.com www.facebook.com pinterest.com www.pinterest.com)
for domain in "${blockDomains[@]}"; do
pihole -b "$domain"
done
allow.sh:
#!/bin/bash
blockDomains=(facebook.com www.facebook.com pinterest.com www.pinterest.com)
for domain in "${blockDomains[@]}"; do
pihole -b -d "$domain"
done
Make these scripts executable:
chmod +x /home/pi/Documents/block.sh
chmod +x /home/pi/Documents/allow.sh
Block after 9pm, allow after 6am. crontab -e:
0 21 * * * bash -l -c '/home/pi/Documents/block.sh' | logger -p cron.info
0 6 * * * bash -l -c '/home/pi/Documents/allow.sh' | logger -p cron.info
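If you do want to read the domains from blocklist.txt as in the question, here is a minimal sketch (assuming one domain per line and the `pihole -b` call from the scripts above; swap in `-b -d` for the allow version):

```shell
#!/bin/bash
# Block every domain listed in the file (one per line, blanks skipped).
# The path and the `pihole -b` form follow this thread; adjust as needed.
blocklist=/home/pi/blocklist.txt
[ -r "$blocklist" ] || exit 0     # nothing to do if the list is absent
while IFS= read -r domain; do
    [ -z "$domain" ] && continue
    pihole -b "$domain"
done < "$blocklist"
```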

bash script doesn't work through crontab

I am running a bash script that transfers files to my AWS bucket. If I run the bash script from my terminal it works fine (via ./myBash.sh).
However, when I put it in my crontab, it doesn't work. This is my bash script:
#!/bin/bash
s3cmd put /home/anonymous/commLogs.txt s3://myBucket/
echo transfer completed
echo now listing files in the s3 bucket
s3cmd ls s3://myBucket/
echo check
And this is my crontab:
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
46 13 * * * /bin/bash myBash.sh
And here is a list of things I have already tried:
1) Ran the crontab with a node app to test whether crontab was working (the answer was yes)
2) Ran the crontab without the SHELL and PATH lines
3) Ran the bash script from cron using sudo (46 13 * * * sudo myBash.sh)
4) Ran the script without the /bin/bash prefix
5) Searched many sites on the net for an answer, without satisfactory results
Can anyone help me with what the problem may be? (I am running Ubuntu 14.04)
After getting the same error for a long time, I just did this:
SHELL=/bin/bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin
* * * * * /bin/bash /home/joaovitordeon/Documentos/test.sh
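A related gotcha in `46 13 * * * /bin/bash myBash.sh` is the relative path: cron starts jobs in the user's home directory, so the script is only found if it lives there. One way to make a script immune to cron's working directory is to have it resolve its own location first (a generic sketch, not from the original post):

```shell
#!/bin/bash
# Change to the directory this script lives in, so any relative
# paths used below behave the same under cron and in a terminal.
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$script_dir"
pwd   # now always the script's own directory
```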
For anyone coming to this post:
I was having the same problem, and the reason was that crontab was running as the root user while s3cmd was configured under the ubuntu user.
So you need to copy .s3cfg to root:
cp -i /home/ubuntu/.s3cfg /root/.s3cfg
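An alternative to copying the file is to point s3cmd at the ubuntu user's config explicitly with its `-c`/`--config` option, e.g. in the crontab entry (paths as in this thread):

```shell
46 13 * * * /usr/bin/s3cmd -c /home/ubuntu/.s3cfg put /home/anonymous/commLogs.txt s3://myBucket/
```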

Crontab cannot find AWS Credentials - linuxbox EC2

I've created a Linux box that has a very simple make-bucket command: aws s3 mb s3://bucket. Running this from the prompt works fine.
I've run aws configure both as the user I'm logged in as and with sudo. The details are definitely correct, as otherwise the command above wouldn't create the bucket.
The error message I'm getting from cron is: make_bucket failed: s3://cronbucket/ Unable to locate credentials
I've tried various things so far to tell the crontab where the credentials are; some of this is an amalgamation of other solutions, which may itself be a cause of the issue.
My crontab looks like:
AWS_CONFIG_FILE="/home/ec2-user/.aws/config"
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binx
0 0 * * * /usr/bin/env bash /opt/foo.sh &>> /tmp/foo.log
* * * * * /usr/bin/uptime > /tmp/uptime
* * * * * /bin/scripts/script.sh >> /bin/scripts/cronlogs/cronscript.log 2>&1
Initially I just had the two jobs that make the bucket and record the uptime (as a sanity check); the rest of the crontab consists of solutions from other posts that do not seem to be working.
Any advice is much appreciated, thank you.
The issue is that cron doesn't get your environment. There are several ways of approaching this: either run a bash script that sources your profile, or, as a nice simple solution, include it in the crontab entry (change .profile to whatever you are using):
0 5 * * * . $HOME/.profile; /path/to/command/to/run
check out this thread
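The wrapper-script variant of the same idea looks like this (a sketch; the script name is hypothetical, and `~/.profile` is an assumption, so use whichever file exports your AWS variables):

```shell
#!/bin/bash
# cron-wrapper.sh (hypothetical name): source the login profile if
# present, then exec whatever command was passed in, e.g.
#   0 0 * * * /home/ec2-user/cron-wrapper.sh /opt/foo.sh
[ -r "$HOME/.profile" ] && . "$HOME/.profile"
exec "$@"
```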
If you have an IAM role attached as the ECS Fargate task role, then this solution will work.
Add the following line to entrypoint.sh:
declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /container.env
Add the lines below to your crontab or cron file:
SHELL=/bin/bash
BASH_ENV=/container.env
It worked for me.
In my case it was much trickier, because I was running a cron job in a Fargate instance: I could access S3 from the shell, but it did not work from cron.
In the Dockerfile, configure the cron job:
RUN echo -e \
"SHELL=/bin/bash\n\
BASH_ENV=/app/cron/container.env\n\n\
30 0 * * * /app/cron/log_backup.sh >> /app/cron/cron.log 2>&1" | crontab -
In the entrypoint script, configure the AWS credentials:
creds=`curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI`
AWS_ACCESS_KEY_ID=`echo $creds | jq .'AccessKeyId' | tr -d '"'`
AWS_SECRET_ACCESS_KEY=`echo $creds | jq '.SecretAccessKey' | tr -d '"'`
AWS_SESSION_TOKEN=`echo $creds | jq '.Token' | tr -d '"'`
After that, in the same entrypoint script, create the container.env file as @Tailor Devendra suggested in the previous solution:
declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /app/cron/container.env
I can't say that I am happy with this solution, but it works.

Cron run every minute ( Runs in bash but not in cron)

This has been discussed several times in previous posts. I followed the advice given, but it does not work for me. I have two scripts which are run by the cron service every minute. To my surprise, only one runs each minute (the first in the list below); the other (the second in the list) fails. Most surprisingly, when run directly from the terminal, both scripts execute fine.
Cron setup :
*/1 * * * * /home/user/Desktop/scripts/generatepattern.sh
*/1 * * * * /home/user/Desktop/scripts/getnextfile.sh
File permissions are:
-rwxr--r-- 1 user user 522 Jul 25 16:18 generatepattern.sh
-rwxr--r-- 1 user user 312 Jul 25 23:02 getnextfile.sh
The code for the script that does not run in cron is:
#!/bin/bash
#Generate a file to be used for the search
cd /home/user/Desktop/scripts
no=`cat filecount.txt`
if test $no -lt 20
then
#echo "echo less"
#echo $no
expr `cat filecount.txt` + 1 >filecount.txt
fi
In the last line you wrote cat filecount.txt instead of cat /home/user/Desktop/scripts/filecount.txt
I discovered that the main issue was that new cron settings only get installed when the vi editor is closed. Changes have to be made in the editor and a :wq command issued so that the new settings get installed; just issuing :w does not work, since no install happens (this was my mistake). I realised this after issuing :wq in vi and seeing the following output:
# crontab -e
crontab: installing new crontab
Thanks to all other suggestions made.
