I've created a Linux box that has a very simple make-bucket command: aws s3 mb s3://bucket. Running this from the prompt works fine.
I've run aws configure both as the user I'm logged in as and with sudo. The details are definitely correct, as otherwise the command above wouldn't create the bucket.
The error message I'm getting from cron is: make_bucket failed: s3://cronbucket/ Unable to locate credentials
I've tried various things so far to tell the crontab where the credentials are; some of this is an amalgamation of other solutions, which may itself be a cause of the issue.
My crontab looks like:
AWS_CONFIG_FILE="/home/ec2-user/.aws/config"
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 0 * * * /usr/bin/env bash /opt/foo.sh &>> /tmp/foo.log
* * * * * /usr/bin/uptime > /tmp/uptime
* * * * * /bin/scripts/script.sh >> /bin/scripts/cronlogs/cronscript.log 2>&1
Initially I just had the two jobs that made the bucket and wrote the uptime file (as a sanity check); the rest of the crontab consists of solutions from other posts that do not seem to be working.
Any advice is much appreciated, thank you.
The issue is that cron doesn't get your environment. There are several ways of approaching this: either run a bash script that sources your profile, or, as a nice simple solution, source it directly in the crontab entry (change .profile to whatever profile file you are using):
0 5 * * * . $HOME/.profile; /path/to/command/to/run
Check out this thread.
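To see why sourcing the profile matters, you can reproduce cron's near-empty environment locally with env -i (a minimal sketch; cron's real environment does carry a few variables such as HOME and a short PATH):

```shell
# env -i starts a shell with no inherited variables, much like cron does,
# so anything your profile exports (AWS_CONFIG_FILE, credentials, PATH
# additions) is simply not there.
env -i /bin/bash -c 'echo "AWS_CONFIG_FILE=${AWS_CONFIG_FILE:-<unset>}"'
# prints: AWS_CONFIG_FILE=<unset>
```

Running the same command after sourcing your profile shows the variable again, which is exactly what the crontab line above arranges.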
If you have an IAM role attached as the ECS Fargate task role, then this solution will work.
Add the following line to entrypoint.sh:
declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /container.env
Add these lines to the crontab or cron file:
SHELL=/bin/bash
BASH_ENV=/container.env
It worked for me.
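A minimal sketch of why the pair of lines works, using a /tmp path instead of the real container: declare -p serializes the current variables, and any non-interactive bash (which is how cron runs commands once SHELL=/bin/bash) sources the file named by BASH_ENV before executing:

```shell
#!/bin/bash
export AWS_DEFAULT_REGION=eu-west-1   # stand-in for the credentials you need

# Serialize the environment, filtering bash's read-only variables that
# would cause errors when the file is sourced again.
declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /tmp/container.env

# This is effectively what cron does with SHELL=/bin/bash + BASH_ENV set:
BASH_ENV=/tmp/container.env /bin/bash -c 'echo "$AWS_DEFAULT_REGION"'
# prints: eu-west-1
```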
In my case it was much trickier, because I was running a cron job in a Fargate instance: I could access S3 from the shell, but it did not work from cron.
In the Dockerfile, configure the cron job:
RUN echo -e \
"SHELL=/bin/bash\n\
BASH_ENV=/app/cron/container.env\n\n\
30 0 * * * /app/cron/log_backup.sh >> /app/cron/cron.log 2>&1" | crontab -
In the entrypoint script, configure the AWS credentials:
creds=$(curl -s "169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r '.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r '.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r '.Token')
After that, in the same entrypoint script, create the container.env file as @Tailor Devendra suggested in the previous solution:
declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /app/cron/container.env
I can't say that I am happy with this solution, but it works.
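For reference, the parsing step can be exercised outside the task with a canned payload, since 169.254.170.2 only answers from inside Fargate (the JSON below is a made-up example using the field names the endpoint returns):

```shell
#!/bin/bash
# Stand-in for: creds=$(curl -s "169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
creds='{"AccessKeyId":"AKIAEXAMPLE","SecretAccessKey":"wJalrEXAMPLEKEY","Token":"FwoGZXIvYXdzEXAMPLE"}'

# jq -r emits the raw string value, so no extra quote-stripping is needed.
export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r '.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r '.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r '.Token')

echo "$AWS_ACCESS_KEY_ID"
# prints: AKIAEXAMPLE
```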
I wanted to run cron and have it start a few scripts at times set in the crontab. I've installed cron in my Docker container and wanted to add some crontab lines and start cron in a separate script. Here are fragments of my configuration:
supervisord.conf
[program:cron]
command=/stats/run-crontabs.sh
/stats/run-crontabs.sh
#!/bin/bash
crontab -l | { cat; echo "11 1 * * * /stats/generate-year-rank.sh"; } | crontab -
crontab -l | { cat; echo "12 1 * * * /stats/generate-week-rank.sh"; } | crontab -
cron -f -L 15
and when it is time for cron to run the script, I can see only this error in the container logs:
2022-01-29 01:12:01,920 INFO reaped unknown pid 691343
I wonder how I can run a script with cron in a Docker container. Do I need supervisor?
EDIT: As @david-maze suggested, I ran cron as the container entrypoint instead, and the problem is the same.
Thank you for your help
OK, I have to post an answer. I've realized that the scripts were working well, but they saved reports in the system root directory, not in the directories I wanted.
That was because of a lack of environment variables.
You can read more in this topic:
Where can I set environment variables that crontab will use?
But I resolved my problem by adding this line at the start of the run-crontabs.sh script:
crontab -l | { cat; echo "$(env)"; } | crontab -
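One caveat with dumping the whole of env into the crontab: a variable with a multi-line value will corrupt it. A safer, hedged variant builds the text first so you can inspect it before installing (drop the final cat and pipe to crontab - to apply):

```shell
#!/bin/bash
# Build a candidate crontab: a few well-behaved variables plus the job lines.
{
    env | grep -E '^(PATH|HOME|LANG)='
    echo '11 1 * * * /stats/generate-year-rank.sh'
    echo '12 1 * * * /stats/generate-week-rank.sh'
} | cat
```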
I used crontab before but cannot get any command to run anymore.
I am editing directly via crontab -e and testing with simple commands like
* * * * * echo "hello there" >> /Users/myUsername/Desktop/test.txt
Running ps -ef | grep cron | grep -v grep gives me this output:
0 270 1 0 6Sep20 ?? 0:00.61 /usr/sbin/cron
Today is 22Sep20. Did the crontab stop running?
My shell is zsh on macOS.
On macOS, run crontab -l to list all installed cron jobs,
or crontab -l -u [user] for another user.
Your * * * * * syntax means it runs every minute, and that looks fine to me. Check the syntax here.
I'm trying to edit my crontab to make a scheduled block of specific domains for my Pi-hole configuration.
My setup is that I've got a file, blocklist.txt, which contains a list of domains like:
instagram.com
facebook.com
newssite.com
and I'm using the following to get that to work with xargs. I've taken the normal version and converted it to absolute paths so that it will work in cron. I'm also attempting to write the output to a file at /home/pi/cron.log, which I made just to capture the output and see what's going on; nothing updates there either.
46 17 * * * /usr/bin/xargs -a /home/pi/blocklist.txt /usr/local/bin/pihole --wild &>/home/pi/cron.log
This works totally fine when run in my normal shell, and updates the log, etc., but does not work when I schedule a cron job for it a few minutes out.
Maybe I'm missing something with my paths or with scheduling?
I have already set my timezone in raspi-config.
My solution does not currently read from a file, but it's very close to what you are looking for. Here's a blog post with lots of details; the core snippets are:
block.sh:
#!/bin/bash
blockDomains=(facebook.com www.facebook.com pinterest.com www.pinterest.com)
for domain in "${blockDomains[@]}"; do
    pihole -b "$domain"
done
allow.sh:
#!/bin/bash
blockDomains=(facebook.com www.facebook.com pinterest.com www.pinterest.com)
for domain in "${blockDomains[@]}"; do
    pihole -b -d "$domain"
done
Allow execution of these scripts:
chmod +x /home/pi/Documents/block.sh
chmod +x /home/pi/Documents/allow.sh
Block after 9pm, allow after 6am. crontab -e:
0 21 * * * bash -l -c '/home/pi/Documents/block.sh' | logger -p cron.info
0 6 * * * bash -l -c '/home/pi/Documents/allow.sh' | logger -p cron.info
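If you want the file-driven version the question asked about, the hard-coded array can be replaced with a loop over blocklist.txt. This is a sketch assuming one domain per line; block_from_file is a made-up helper name, and the second argument lets you dry-run with echo instead of pihole:

```shell
#!/bin/bash
# Run `<cmd> -b <domain>` for every domain in a file, skipping blank lines
# and '#' comments. cmd defaults to pihole; pass echo for a dry run.
block_from_file() {
    local file="$1" cmd="${2:-pihole}"
    while IFS= read -r domain; do
        [ -z "$domain" ] && continue             # skip empty lines
        case "$domain" in \#*) continue ;; esac  # skip comments
        "$cmd" -b "$domain"
    done < "$file"
}

# Dry run against a throwaway list:
printf 'instagram.com\nfacebook.com\n' > /tmp/blocklist.txt
block_from_file /tmp/blocklist.txt echo
# prints: -b instagram.com
#         -b facebook.com
```

For the morning job you would call the same loop with `pihole -b -d` to remove the blocks again.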
I am running a bash script that transfers files to my AWS bucket. If I run the script from my terminal it works fine (via ./myBash.sh).
However, when I put it in my crontab, it doesn't work. This is my bash script:
#!/bin/bash
s3cmd put /home/anonymous/commLogs.txt s3://myBucket/
echo transfer completed
echo now listing files in the s3 bucket
s3cmd ls s3://myBucket/
echo check
And this is my crontab:
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
46 13 * * * /bin/bash myBash.sh
And here is a list of things I have already tried:
1) Ran the crontab with a node app to test whether crontab was working (the answer was yes)
2) Ran the crontab without the SHELL and PATH lines
3) Ran the bash script from cron using sudo (46 13 * * * sudo myBash.sh)
4) Ran the script without the /bin/bash prefix
5) Searched many sites on the net for an answer, without satisfactory results
Can anyone help me with what the problem may be? (I am running Ubuntu 14.04)
After a long time getting the same error, I just did this:
SHELL=/bin/bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin
* * * * * /bin/bash /home/joaovitordeon/Documentos/test.sh
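If a full path alone doesn't fix it, a useful next step is to capture the environment cron actually provides and compare it with your login shell's; the output path below is just an example:

```shell
* * * * * env > /tmp/cron_env.txt 2>&1
```

Compare /tmp/cron_env.txt with the output of env in your terminal; whatever is missing (PATH entries, AWS variables, HOME) is what your script must set or source itself. Remove the line again once you're done debugging.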
For anyone coming to this post:
I was having the same problem, and the reason was that crontab was running under the root user while s3cmd was configured under the ubuntu user.
So you need to copy .s3cfg to root:
cp -i /home/ubuntu/.s3cfg /root/.s3cfg
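An alternative that avoids keeping two copies of the credentials: s3cmd accepts an explicit config file via -c, so root's crontab can point at the ubuntu user's config directly (paths follow the question):

```shell
46 13 * * * /usr/bin/s3cmd -c /home/ubuntu/.s3cfg put /home/anonymous/commLogs.txt s3://myBucket/
```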
I am trying to make a cron job for the first time, but I have some problems making it work.
Here is what I have done so far:
Linux commands:
crontab -e
My cronjob looks like this:
1 * * * * wget -qO /dev/null http://mySite/myController/myView
Now when I look in:
/var/spool/cron/crontabs/
I get the following output:
marc root
If I open the file root, I see my cron job (the one above).
However, it doesn't seem to be running.
Is there a way I can check whether it's running, or make sure that it runs?
By default, cron jobs do have a log. It should be in /var/log/syslog (this depends on your system). Check there and you're done. Otherwise you can simply append the output to a log file manually:
1 * * * * wget http://mySite/myController/myView >> ~/my_log_file.txt
and see what your output is. Notice I've removed the quiet parameter from the wget command so that there is some output.
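A small hedged helper to find that log on either family of systems (Debian/Ubuntu use /var/log/syslog, RHEL/CentOS use /var/log/cron; some systemd distros only log to the journal, hence the fallback note):

```shell
#!/bin/bash
# Print the last few cron entries from whichever standard log file exists.
log=""
for f in /var/log/syslog /var/log/cron; do
    [ -r "$f" ] && { log="$f"; break; }
done
if [ -n "$log" ]; then
    echo "cron log: $log"
    grep -a CRON "$log" | tail -n 5
else
    echo "no readable cron log at the standard paths (try: journalctl -u cron)"
fi
```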