Python script not working when run via a CRON job - python-3.x

I am fairly new to Python, so please bear with me.
I have a Python script which performs the following steps:
It calls another Python script to generate a file (logfile.txt, containing around 25 KB of text; generation can take up to a couple of minutes).
After the file generation is complete, it parses the file, looks for a certain text (say 'power=XXX'), and saves the XXX in a variable.
It performs a DB call and saves this value XXX in a MySQL table.
All of this works fine when the script is executed standalone like this:
sudo python3 Parser.py
Now I have to do the same every 15 minutes, so I made an entry in the crontab (sudo crontab -e):
*/2 * * * * sudo python3 /home/pi/Parser.py >> /home/pi/cronlogs.txt
Now this cron job is running fine and generating the cron logs, where I can see the output from the Python script, but the logfile is not generated, and therefore the parsing fails (returning 0, as expected when the file is not found).
Things I have tried:
I thought it could be a user issue, so I printed the user in the logs at the start of the script; in both scenarios the user is root (as expected).
I added a file-availability check before calling the parsing code:
print("file found:", os.path.isfile(fileName))
This printed True (even though I can see there is no file at that location), and when I printed the first line of the file it came back empty.
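A quick way to pin down what that check is actually seeing (a sketch; fileName stands for whatever path parseFile reads) is to log the working directory, the resolved path, and the file size:

import os

print("cwd:", os.getcwd())                     # cron jobs usually start in / or $HOME, not the script's directory
print("path:", os.path.abspath(fileName))      # the path isfile() is really testing
if os.path.isfile(fileName):
    print("size:", os.path.getsize(fileName))  # 0 bytes would explain the empty first line
else:
    print("file missing")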
Please help.
OS: Raspbian (Raspberry Pi)
Update [added the code snippet which creates the file]:
import os

libDir = "/home/pi/SIDLMSIntegration/Gurux.DLMS.Library.python/"
dlmsCmd = " sudo python3 " + libDir + "main.py "
dlmsCmd += " -S /dev/ttyUSB0:9600:8None1 -c 32 -s 3 -a Low -P ABCD0001 -sn 1 -v 0.0.40.0.0.255 -t Verbose > /home/pi/SIDLMSIntegration/logFile.txt "
print("dlmsCommand is:", dlmsCmd)
try:
    # os.system() returns the command's exit status; it does not raise on failure
    os.system(dlmsCmd)
    signedPow = 0.0
    activeEnergy = 0
    print("Exception did not occur")
except Exception as err:
    print("Exception occurred:", err)
parseFile("Unit.ACTIVE_POWER")
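Two things stand out in this snippet. First, os.system() never raises when the command fails, so the except branch above cannot catch a failing main.py. Second, the shell redirection creates logFile.txt as an empty file before main.py writes anything, which matches isfile() returning True for a file with an empty first line. A minimal sketch of a more diagnosable version, assuming the same paths and arguments as above (sudo is dropped because the cron job already runs as root; this is a suggestion, not the author's code):

import subprocess

libDir = "/home/pi/SIDLMSIntegration/Gurux.DLMS.Library.python/"
logPath = "/home/pi/SIDLMSIntegration/logFile.txt"

# Build the argument list explicitly; no shell string, no sudo.
cmd = ["python3", libDir + "main.py",
       "-S", "/dev/ttyUSB0:9600:8None1", "-c", "32", "-s", "3",
       "-a", "Low", "-P", "ABCD0001", "-sn", "1",
       "-v", "0.0.40.0.0.255", "-t", "Verbose"]

with open(logPath, "w") as log:
    result = subprocess.run(cmd, stdout=log, stderr=subprocess.STDOUT)

# Unlike os.system(), the exit status is explicit; logging it makes failures visible in cronlogs.txt.
print("main.py exit status:", result.returncode)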

Related

Bash script results in different output when running from a cron job

I'm puzzled by this problem I'm having on Ubuntu 20.04: cron is able to run a bash script, but the overall outcome is different than when running it from the shell.
I've looked through all the questions I could find here and on Google, but couldn't find anyone with the same problem.
Background:
I'm using Pushgateway to store metrics I'm generating through a bash script; afterwards they are imported automatically into Prometheus.
The end goal is to export a list of running processes with their CPU%, Mem%, etc., similar to the top command.
This is the bash script:
#!/bin/bash
z=$(top -n 1 -bi)
while read -r z
do
    var=$var$(awk 'FNR>7{print "cpu_usage{process=\""$12"\", pid=\""$1"\"}", $9z} FNR>7{print "memory_usage{process=\""$12"\", pid=\""$1"\"}", $10z}')
done <<< "$z"
curl -X POST -H "Content-Type: text/plain" --data "$var
" http://localhost:9091/metrics/job/top/instance/machine
I used to have a version that used ps aux, but then I found out that it only shows the average CPU% per process.
As you can see, the command I'm running is top -n 1 -bi, which gives me a snapshot of active processes and their metrics.
I'm using awk to format the data, and FNR>7 because I need to ignore the first 7 lines, which are the summary presented by top.
The bash script is registered in /bin, /usr/bin and /usr/local/bin.
When checking http://localhost:9091/metrics, which is supposed to show the gathered information, this is some of what I get when running the script from a shell:
cpu_usage{instance="machine",job="top",pid="114468",process="php-fpm74"} 17.6
cpu_usage{instance="machine",job="top",pid="114483",process="php-fpm74"} 11.8
cpu_usage{instance="machine",job="top",pid="126305",process="ffmpeg"} 64.7
And this is the same information when cron runs the script:
cpu_usage{instance="machine",job="top",pid="114483",process="php-fpm+"} 5
cpu_usage{instance="machine",job="top",pid="126305",process="ffmpeg"} 60
cpu_usage{instance="machine",job="top",pid="128777",process="php"} 15
So, for some reason, when I run it from cron it cuts the process name after 7 characters.
I initially thought it was related to the FNR>7, but even after changing it to 8 or 9 (and using exec bash to re-register the command) it gives the same results; when I run it manually it works just fine.
Any help would be appreciated!
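A hedged observation: in batch mode, top fits its output to the terminal width, and under cron there is no TTY, so it falls back to a default width and truncates long values (which would account for the trailing + in php-fpm+). If that is the cause here, overriding the output width in the script should make the cron output match the shell output, e.g.:
top -n 1 -bi -w 512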

How to automate Python Script in Cron Job without .py file extension

I want to run the webscreenshot command automatically (project found at https://pypi.org/project/webscreenshot/#description).
This command should run automatically every 15 minutes, via a cron task or systemd.
I'm running a Linux server with Python 3.6. I've tried to set this up as a cron task, but it is failing. Should I create my own Python script to automate this?
*/1 * * * * /usr/bin/python3.6 /home/user/.local/bin/webscreenshot -i /home/user/projects/webscreenshot/data.txt -o /home/user/projects/webscreenshot/screenshots > /home/user/projects/log.txt 2>&1
I expect this to run the webscreenshot script, but this is not the case; screenshots are not produced.
@Ari - the .py extension is left out by the default installation.
I was able to get this to work by adding xvfb-run at the beginning, to read:
xvfb-run /dir/to/webscreenshot -i file.txt -o /output/dir/screenshots

Active cron job

I am trying to set up a cron job for the first time, but I am having some problems making it work.
Here is what I have done so far:
Linux commands:
crontab -e
My cron job looks like this:
1 * * * * wget -qO /dev/null http://mySite/myController/myView
Now when I look in:
/var/spool/cron/crontabs/
I get the following output:
marc root
If I open the file root, I see my cron job (the one above).
However, it doesn't seem to be running.
Is there a way I can check whether it is running, or make sure that it runs?
By default, cron jobs do have a log. It should be in /var/log/syslog (this depends on your system); check there and you're done. Otherwise, you can simply append the output to a log file yourself:
1 * * * * wget http://mySite/myController/myView >> ~/my_log_file.txt
and see what your output is. Notice I've removed the quiet parameter from the wget command so that there is some output.
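On systems that log cron activity to the shared syslog, filtering for it is a quick way to confirm the job fires at all (assuming a Debian-style /var/log/syslog):
grep CRON /var/log/syslog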

Simple bash script runs asynchronously when run as a cron job

I have a backup script that does the following, in this order:
Zip up files via SSH on a remote backup server
Dump my local database
Transfer my local database dump via SSH rsync to the backup server
When I run this script from the command line on RHEL it works perfectly fine.
BUT when I set this script to run via a cron job, the script does run, but from what I can tell it somehow runs those three commands simultaneously. Because of that, things get done out of order (my local database finishes dumping and is transferred before the #1 zip job is actually complete).
Has anyone run across such a strange scenario? As the simplest fix, is there a way to force a script to run synchronously? Maybe some kind of command that waits for the prior line to complete before moving on?
EDIT: I added an example version of my backup script. It seems that the second line of my script runs at the same time as the first, so while the SSH command has been issued, it has not yet completed before my second line triggers and an SQL dump begins.
#!/bin/bash
THEDIR="sample"
THEDBNAME="mydatabase"
ssh -i /rsync/mirror-rsync-key sample@sample.com "tar zcvpf /$THEDIR/old-1.tar /$THEDIR/public_html/*"
mysqldump --opt -Q $THEDBNAME > mySampleDb
/usr/bin/rsync -avz --delete --exclude=**/stats --exclude=**/error -e "ssh -i /rsync/mirror-rsync-key" /$THEDIR/public_html/ sample@sample.com:/$THEDIR/public_html/
/usr/bin/rsync -avz --delete --exclude=**/stats --exclude=**/error -e "ssh -i /rsync/mirror-rsync-key" /$THEDIR/ sample@sample.com:/$THEDIR/
Unless you're explicitly backgrounding commands with &, everything should run one by one, each waiting until the prior one finishes.
Perhaps you are actually seeing overlapping prior executions started by cron? If so, you can prevent multiple simultaneous executions by calling your script with flock,
e.g. changing a midnight cron entry from
0 0 * * * backup.sh
to
0 0 * * * flock -n /tmp/backup.lock -c backup.sh
If you want to run commands in sequential order you can use the ; operator.
; – semicolon operator
This operator runs multiple commands in one go, but in sequential order: with three commands separated by semicolons, the second command runs after the first completes, and the third only after the second completes. Note that the second command runs regardless of the first command's exit status.
For example, this executes ls, pwd and whoami in one line, sequentially, one after the other:
ls;pwd;whoami
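If, instead, each step should run only when the previous one succeeded, chain with && rather than ;, since an && chain stops at the first failing command:
ls && pwd && whoami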
Please correct me if I am not understanding your question correctly.

Shell script to log server checks runs manually, but not from cron

I'm using a basic shell script to log the results of top, netstat, ps and free every minute.
This is the script, /scripts/logtop:
TERM=vt100
export TERM
time=$(date)
min=${time:14:2}
top -b -n 1 > /var/log/systemCheckLogs/$min
netstat -an >> /var/log/systemCheckLogs/$min
ps aux >> /var/log/systemCheckLogs/$min
free >> /var/log/systemCheckLogs/$min
echo "Message Content: $min" | mail -s "Ran System Check script" email#domain.com
exit 0
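As an aside, min=${time:14:2} slices the minute out of date's default output, whose layout varies with locale and can differ under cron; a more robust alternative (a sketch, not the author's original) is to ask date for the minute directly:
min=$(date +%M)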
When I run this script directly it works fine: it creates the files, puts them in /var/log/systemCheckLogs/, and then sends me an email.
I can't, however, get it to work when cron is supposed to run it every minute.
I tried putting it in /var/spool/cron/root like so:
* * * * * /scripts/logtop > /dev/null 2>&1
and it never executes.
I also tried putting it in /var/spool/cron/myservername, again like so:
* * * * * /scripts/logtop > /dev/null 2>&1
There it runs every minute, but nothing gets created in systemCheckLogs.
Is there a reason it works when I run it but not when cron runs it?
Also, here's what the permissions look like:
-rwxrwxrwx 1 root root 326 Jul 21 01:53 logtop
drwxr-xr-x 2 root root 4096 Jul 21 01:51 systemCheckLogs
Normally crontabs are kept in /var/spool/cron/crontabs/. Also, you normally update them with the crontab command, as this HUPs crond when you're done and makes sure the file ends up in the correct place.
Are you using the crontab command to create the cron entry? crontab <file> imports a file directly; crontab -e edits the current crontab with $EDITOR.
All jobs run by cron need the interpreter listed at the top, so cron knows how to run them.
I can't tell if you just omitted that line or if it is missing from your script.
For example:
#!/bin/bash
echo "Test cron job"
When running from /var/spool/cron/root, it may be failing because cron is not configured to run jobs for root there. On Linux, root cron jobs are typically run from /etc/crontab rather than from /var/spool/cron.
When running from /var/spool/cron/myservername, you probably have a permissions problem. Don't redirect the errors to /dev/null; capture them and examine them.
Something else to be aware of: cron doesn't initialize the full login environment, which can mean a script runs just fine from a fully logged-in shell but doesn't behave the same from cron.
In the case above, you don't have a "#!/bin/sh"-style line at the top of your script. If root is configured to use a plain Bourne shell or csh, the syntax you use to populate your variables will not work, which would explain why the script runs but doesn't populate your files. So if you need ksh, use "#!/bin/ksh". It's generally best not to trust the environment to keep these things sane. If you need your profile, run ". ~/.profile" up front as well. A quick and dirty way to get a relatively full environment is to run the job through su:
* * * * * su - root -c "/path/to/script" > /dev/null 2>&1
Just some things I've picked up over the years. You're definitely expecting ksh- or bash-style syntax here, so make sure the script is actually being run by such a shell.
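You can also pin the environment at the top of the crontab itself, so every entry runs with a known shell and PATH (a sketch; adjust the PATH to your system):
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin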
Thanks for the tips; I used a little bit of each answer to get to the bottom of this.
I did have the interpreter at the top (it wasn't shown here), but it may have been wrong.
I'm using #!/bin/bash now and that works.
I also had to tinker with the permissions of the directory the log files are dumped into to get things working.
