Running a cron job every month with the command's results written to a file - linux

I am trying out cron jobs for the first time. I want to generate a file listing the user-installed applications on Ubuntu; that same file then needs to be uploaded to a server.
I am unable to generate the text file with that information. Below is the command I am trying.
Script file which needs to be run as the cron job, /tmp/aptlist.sh:
#!/bin/bash
comm -23 <(apt-mark showmanual | sort -u) <(gzip -dc /var/log/installer/initial-status.gz | sed -n 's/^Package: //p' | sort -u) &> /tmp/$(hostname)-$(date +%d-%m-%Y-%S)
cron has the following entry, added using crontab -e:
:~$ crontab -l
0 0 1 * * /tmp/aptlist.sh > /dev/null 2>&1
syslog has the following entries, however no file is generated:
Oct 21 14:09:01 Astrome46 CRON[14592]: (user) CMD (/tmp/aptlist.sh > /dev/null 2>&1)
Oct 21 14:10:01 Astrome46 CRON[14600]: (user) CMD (/tmp/aptlist.sh > /dev/null 2>&1)
Kindly let me know how to fix the issue.
Thank you!

Try this:
0 0 1 * * bash /tmp/aptlist.sh > /dev/null 2>&1
If this works, then I suspect it is because the file doesn't have executable permissions.
You can find that out by typing in the terminal:
ls -l /tmp/aptlist.sh
If that is really the case then you can also edit the file permissions to allow it to run by typing:
chmod u+x /tmp/aptlist.sh
This will enable the file owner to run it, but will not allow other users to. If you need it to run for a different user, do:
chmod a+x /tmp/aptlist.sh
Good luck!
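To illustrate the permission check, here is a small self-contained sketch, using a temporary file in place of /tmp/aptlist.sh:

```shell
#!/bin/bash
# Newly created files are not executable by default; cron can still run them
# via "bash /path/to/script", but direct execution needs the execute bit.
script=$(mktemp)
printf '#!/bin/bash\necho hello\n' > "$script"
[ -x "$script" ] && echo "executable" || echo "not executable"   # not executable
chmod u+x "$script"   # grant the owner execute permission
[ -x "$script" ] && echo "executable" || echo "not executable"   # executable
rm -f "$script"
```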

Related

How to run a script multiple times, waiting after every execution until the device is ready to execute again?

I have this bash script:
#!/bin/bash
rm /etc/stress.txt
cat /dev/smd10 | tee /etc/stress.txt &
for ((i=0; i< 1000; i++))
do
echo -e "\nRun number: $i\n"
# wait until the module restarts and is ready for the next restart
dmesg | grep ERROR
echo -e 'AT+CFUN=1,1\r\n' > /dev/smd10
echo -e "\nADB device booted successfully\n"
done
I want to restart the module 1000 times using this script.
The module is like an Android device which has Linux inside it, but I use Windows.
AT+CFUN=1,1 - reset
When I push the script, after every restart I need a command that waits for the module to boot up again and then continues, so the script executes 1000 times. Then I pull the .txt file and save all the output content.
Which command should I use?
I tried commands like wait, sleep, watch, adb wait-for-device, ps aux | grep... Nothing works.
Can someone help me with this?
I found the solution. This is how my script actually looks:
#!/bin/bash
cat /dev/smd10 &
TEST=$(cat /etc/output.txt)
RESTART_TIMES=1000
if [[ $TEST != $RESTART_TIMES ]]
then
echo $((TEST+1)) > /etc/output.txt
dmesg
echo -e 'AT+CFUN=1,1\r\n' > /dev/smd10
fi
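The core of this solution is a counter persisted in a file so it survives the module's reboots; isolated as a sketch, with a temporary file standing in for /etc/output.txt:

```shell
#!/bin/sh
# Read the current restart count, and increment it if the target has not
# been reached. The modem reset line is commented out in this sketch.
COUNT_FILE=$(mktemp)          # stands in for /etc/output.txt
echo 0 > "$COUNT_FILE"
RESTART_TIMES=1000
TEST=$(cat "$COUNT_FILE")
if [ "$TEST" != "$RESTART_TIMES" ]; then
    echo $((TEST+1)) > "$COUNT_FILE"
    # echo -e 'AT+CFUN=1,1\r\n' > /dev/smd10   # modem reset on the real device
fi
cat "$COUNT_FILE"   # prints: 1
rm -f "$COUNT_FILE"
```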
These are the steps that you need to do:
adb push /path/to/your/script /etc/init.d
cd /etc
echo 0 > output.txt - create the output file and initialize it with 0
cd init.d
ls - you should see rc5.d
cd .. then cd rc5.d - go inside
ln -s ../init.d/yourscript.sh S99yourscript.sh
ls - you should see S99yourscript.sh
cd .. - return to the init.d directory
chmod +x yourscript.sh - add execute permission to your script
./yourscript.sh

What causes multiple mails when using cron with a bash script

I've made a little bash script to back up my Nextcloud files, including my database, from my Ubuntu 18.04 server. I want the backup to be executed every day. When the job is done I want to receive one mail telling me so (and additionally whether it was successful or not). With the current script I receive almost 20 mails and I can't figure out why. Any ideas?
My cronjob looks like this:
* 17 * * * "/root/backup/"backup.sh >/dev/null 2>&1
My bash script
#!/usr/bin/env bash
LOG="/user/backup/backup.log"
exec > >(tee -i ${LOG})
exec 2>&1
cd /var/www/nextcloud
sudo -u www-data php occ maintenance:mode --on
mysqldump --single-transaction -h localhost -u db_user --password='PASSWORD' nextcloud_db > /BACKUP/DB/NextcloudDB_`date +"%Y%m%d"`.sql
cd /BACKUP/DB
ls -t | tail -n +4 | xargs -r rm --
rsync -avx --delete /var/www/nextcloud/ /BACKUP/nextcloud_install/
rsync -avx --delete --exclude 'backup' /var/nextcloud_data/ /BACKUP/nextcloud_data/
cd /var/www/nextcloud
sudo -u www-data php occ maintenance:mode --off
echo "###### Finished backup on $(date) ######"
mail -s "BACKUP" name@domain.com < ${LOG}
Are you sure about the cron string? For me this means "at every minute past hour 17", so the script runs (and mails) up to 60 times between 17:00 and 17:59.
It should be more like 0 17 * * *, right?
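A sketch of the corrected entry, assuming the backup should run once a day at 17:00 (the script path is the one from the question):

```shell
# Field order: minute hour day-of-month month day-of-week command
# "* 17 * * *" fires every minute from 17:00 to 17:59 (up to 60 runs and mails);
# "0 17 * * *" fires once, at 17:00.
0 17 * * * /root/backup/backup.sh >/dev/null 2>&1
```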

cron script not executing on Raspberry Pi

I am not a Linux expert and I have a problem I cannot manage to solve. I am sorry if it is obvious.
I am trying to execute a bash script from a cron table on a Raspberry Pi, and I can't get it to work.
Here is the example script I want to execute:
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/games:/usr/games
plouf=$( ps -aux | grep reviews | wc -l)
if [[ "$plouf" == 1 ]] ;
then
echo "plouf" >> /home/pi/Documents/french_pain/crontest.txt
fi
My script in the cron is meant to start a program if no program with "reviews" in its name is running. To test it, I am just appending "plouf" to a file: I count the number of lines of ps -aux | grep reviews | wc -l, and if there is only one line I append "plouf" to the file.
Here is my crontab:
crontab -l
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/games:/usr/games
* * * * * sudo /home/pi/Documents/french_pain/script2.sh
The script does work when I run ./script2.sh or /home/pi/Documents/french_pain/script2.sh directly in the terminal: it adds a "plouf" to the file.
I came across this page and tried different possibilities: setting my PATH to the one given by env, and explicitly setting the shell in the cron. But it is still not working.
What am I doing wrong?
To answer Mark Setchell comment:
raspberrypi:~/Documents/french_pain $ sudo /home/pi/Documents/french_pain/script2.sh
raspberrypi:~/Documents/french_pain $ cat crontest.txt
plouf
and cron is running:
raspberrypi:~/Documents/french_pain $ pgrep cron
353
I manage to run simple jobs like
* * * * * /bin/echo "cron works" >> /tmp/file
I tried with direct paths to the commands:
plouf=$( /bin/ps -aux | /bin/grep 'R CMD.*reviews' | /usr/bin/wc -l)
if [[ "$plouf" == 1 ]] ;
then
/bin/echo "plouf" >> /home/pi/Documents/french_pain/crontest.txt
fi
without any luck. The permissions on the file:
-rw-rw-rw- 1 root root 6 juil. 3 23:30 crontest.txt
I tried deleting it, but that did not work either.
Help!
I guess you are trying this as the "pi" user, in which case 'sudo' won't work unless you have allowed NOPASSWD:ALL, or unless you use a command that can feed sudo the password it requires on stdin. The first example below is dangerous, since sudo will no longer require any password at all, but since you wanted to use sudo in the crontab:
Example 1:
With the default /etc/sudoers, the example below will create an empty file:
* * * * * sudo ls / > ~/cronietest.txt
Try adding the line below at the bottom of /etc/sudoers (note: do not use pi as the username on a Raspberry Pi):
pi ALL=(ALL) NOPASSWD:ALL
Now try again adding the line below to the crontab:
* * * * * sudo ls / > ~/cronietest.txt
It works!
Example 2:
This is safer: add this to the sudoers file to allow a specific command for the "pi" user without any password when sudo is executed:
pi ALL= NOPASSWD: /bin/<command>
Example 3:
Without editing the sudoers file, here is another example that will work (this is dangerous since your password is stored in the cron file as plaintext):
* * * * * echo "password" | sudo -S ls / > ~/cronietest.txt
I did not find the reason why the script was not working, but I finally found a way to make it work thanks to this post: I used a plain shell script instead of bash.
The file script3.sh:
#!/bin/sh
if ps -ef | grep -v grep | grep 'reviews'; then
exit 0
else
echo "plouf" >> /home/pi/Documents/french_pain/crontest.txt
fi
together with
* * * * * /home/pi/Documents/french_pain/script3.sh
in my crontab did the work I wanted.
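One pitfall worth noting with the original counting approach: in ps -aux | grep reviews | wc -l, the grep process itself can appear in the ps output and be counted, and the count can differ between an interactive shell and cron. The accepted answer filters with grep -v grep; an equivalent sketch uses a character class so the pattern never matches its own process:

```shell
#!/bin/sh
# '[r]eviews' matches the literal text "reviews", but the grep process's own
# command line contains "[r]eviews", which the pattern does not match, so
# grep never counts itself.
matches=$(ps aux | grep -c '[r]eviews')
if [ "$matches" -eq 0 ]; then
    echo "no review process running"
else
    echo "$matches review process(es) running"
fi
```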

Shell script to run two scripts when server load is above 20

I need a script that I can run from cron every 5 minutes that checks whether the server load is above 20 and, if it is, runs two scripts.
#!/bin/bash
EXECUTE_ON_AVERAGE="15" # if cpu load average for last 60 secs is
# greater or equal to this value, execute script
# change it to whatever you want :-)
while true; do
if [ $(echo "$(uptime | cut -d " " -f 13 | cut -d "," -f 1) >= $EXECUTE_ON_AVERAGE" | bc) = 1 ]; then
sudo s-
./opt/tomcat-latest/shutdown.sh
./opt/tomcat-latest/startup.sh
else
echo "do nothing"
fi
sleep 60
done
I then chmod +x the file.
When I run it I get this:
./script.sh: line 10: ./opt/tomcat-latest/shutdown.sh: No such file or directory
./script.sh: line 11: ./opt/tomcat-latest/startup.sh: No such file or directory
From the looks of it, your script is trying to execute the two scripts relative to the current working directory, under ./opt/tomcat-latest/, which doesn't exist. You should confirm the full file paths of the two shell scripts and then use those instead of the relative paths.
Also, I'd recommend that you create a cron job for this task. Here's some documentation about the crontab format: https://www.gnu.org/software/mcron/manual/html_node/Crontab-file.html
Check the execute permissions on the files shutdown.sh and startup.sh.
It is sudo -s, not sudo s-.
And I recommend putting a sleep (in seconds) between the two:
sudo -s /opt/tomcat-latest/shutdown.sh
sleep 15
sudo -s /opt/tomcat-latest/startup.sh
Or better:
sudo -s /opt/tomcat-latest/shutdown.sh && sudo -s /opt/tomcat-latest/startup.sh
startup.sh will be executed only if shutdown.sh completed successfully.
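For the load check itself, a sketch that reads the 1-minute load average from /proc/loadavg (Linux-specific) and compares it with awk, avoiding the fragile field position in the uptime output; the threshold and the commented-out Tomcat paths are the questioner's:

```shell
#!/bin/bash
# The first field of /proc/loadavg is the 1-minute load average.
load=$(cut -d ' ' -f 1 /proc/loadavg)
threshold=20
# awk does the floating-point comparison; exit status 0 means load >= threshold
if awk -v l="$load" -v t="$threshold" 'BEGIN { exit !(l >= t) }'; then
    echo "load $load is at or above $threshold: restarting tomcat"
    # /opt/tomcat-latest/shutdown.sh && /opt/tomcat-latest/startup.sh
else
    echo "load $load is below $threshold: doing nothing"
fi
```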

Cron job generates duplicate processes

My cron entry is as below:
$ crontab -l
0,15,30,45 * * * * /vas/app/check_cron/cronjob.sh 2>&1 > /vas/app/check_cron/cronjob.log; echo "Exit code: $?" >> /vas/app/check_cron/cronjob.log
$ more /vas/app/check_cron/cronjob.sh
#!/bin/sh
echo "starting script";
/usr/local/bin/rsync -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/
echo "completed running the script";
$ ls -l /usr/local/bin/rsync
-rwxr-xr-x 1 bin bin 411494 Oct 5 2011 /usr/local/bin/rsync
$ ls -l /vas/app/check_cron/cronjob.sh
-rwxr-xr-x 1 vas vas 153 May 14 12:28 /vas/app/check_cron/cronjob.sh
If I run it manually, the script runs fine:
$ /vas/app/check_cron/cronjob.sh 2>&1 > /vas/app/check_cron/cronjob.log; echo "Exit code: $?" >> /vas/app/check_cron/cronjob.log
If run by crontab, cron generates duplicate processes, more than 30 in 24 hours, until I kill them manually.
$ ps -ef | grep cron | grep -v root | grep -v grep
vas 24157 24149 0 14:30:00 ? 0:00 /bin/sh /vas/app/check_cron/cronjob.sh
vas 24149 8579 0 14:30:00 ? 0:00 sh -c /vas/app/check_cron/cronjob.sh 2>&1 > /vas/app/check_cron/cronjob.log; ec
vas 24178 24166 0 14:30:00 ? 0:00 /usr/local/bin/rsync -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/
vas 24166 24157 0 14:30:00 ? 0:01 /usr/local/bin/rsync -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/
Please advise me on how to make this run properly, so that no stray processes remain in the system and all processes stop cleanly.
BR,
Noel
The output you provide seems normal: the first two processes are just /bin/sh running your cron script, and the latter two are the rsync processes.
It might be a permission issue if the crontab does not run as the same user you use for testing, causing the script to take longer when run from cron. You can add -v, -vv, or even -vvv to the rsync command for increased output, and then observe the cron email after each run.
One method to prevent multiple running instances of a script is to use a lock file of some sort; I find it easy to use mkdir for this purpose.
#!/bin/sh
# Use only the script's base name for the lock: $0 can contain slashes,
# which would produce an invalid path under /tmp.
LOCK="/tmp/$(basename "$0").lock"
# If mkdir fails then the lock already exists
mkdir "$LOCK" > /dev/null 2>&1
[ $? -ne 0 ] && exit 0
# Clean up the lock when the script exits for any reason
trap "{ rmdir '$LOCK' ; exit 0 ; }" EXIT
echo "starting script";
/usr/local/bin/rsync -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/
echo "completed running the script";
Just make sure you have some kind of cleanup at OS startup in case the system doesn't clean up /tmp by itself: the lock might be left behind if the script crashes, is killed, or is running when the OS is rebooted.
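If stale locks are a concern, an alternative sketch uses flock(1) from util-linux: the kernel drops the lock automatically when the process exits, so a crash or reboot cannot leave it behind (the lock file path here is arbitrary):

```shell
#!/bin/sh
# Open file descriptor 9 on the lock file, then try to take an exclusive,
# non-blocking lock on it. If another instance holds the lock, exit at once.
LOCKFILE="/tmp/cronjob.lock"
exec 9> "$LOCKFILE"
if ! flock -n 9; then
    echo "another instance is already running" >&2
    exit 0
fi
echo "starting script"
# /usr/local/bin/rsync -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/
echo "completed running the script"
```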
Why do you worry? Is something not working? From the parent process IDs I can deduce that the shell (PID 24157) forks an rsync (24166), and that rsync forks another rsync (24178). It looks like that's just how rsync operates.
It's certainly not cron starting two rsync processes.
Instead of cron, you might want to have a look at the Fat Controller.
It works similarly to cron but has various built-in strategies for managing cases where instances of the script you want to run would overlap.
For example, you could specify that the currently running instance is killed and a new one started, or you could specify a grace period in which the currently running instance has to finish before it is terminated and a new one started. Alternatively, you can specify to wait indefinitely.
There are more examples and full documentation on the website:
http://fat-controller.sourceforge.net/
