Cron / wget jobs intermittently not running - not getting into access log - linux

I have a number of accounts running cron-started PHP jobs hourly.
The generic structure of the command is this:
wget -q -O - http://some.site.com/cron.php
Now, this used to run just fine.
Lately, though, it has started playing up on a number of accounts, but only on this one server: once or twice a day the PHP file is not run.
The access log is missing the relevant entry, while the cron log shows that the job was run.
We've added logging to the command (-o /tmp/logfile), but it shows nothing.
I'm at a loss, really. I'm looking for ideas about what could be wrong, or how to sidestep this issue, as it has started taking up far too much of my time.
Has anyone seen anything remotely like this?
Thanks in advance!

Try this command
wget -d -a /tmp/logfile -O - http://some.site.com/cron.php
With -q you turn off wget's output entirely. With -d you turn on debug output (-v for verbose output may already be enough). With -a you append log messages to /tmp/logfile instead of overwriting the file on every run.
You can also use curl:
curl http://some.site.com/cron.php
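If you go the curl route, a few extra flags make failures visible; a minimal sketch for the crontab entry (the log path is just an example):
# -sS: no progress output, but errors are still printed
# --fail: exit non-zero on HTTP errors; -m 300: give up after 5 minutes
curl -sS --fail -m 300 http://some.site.com/cron.php >> /tmp/cron-fetch.log 2>&1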

Related

Bash script results in different output when running from a cron job

I'm puzzled by this problem I'm having on Ubuntu 20.04, where cron is able to run a bash script but the overall outcome is different than when running it from the shell.
I've looked through all the questions I could find here and on Google, but couldn't find anyone who had the same problem.
Background:
I'm using Pushgateway to store metrics I'm generating through a bash script, and afterwards it's being imported automatically to Prometheus.
The end goal is to export a list of running processes, their CPU%, Mem%, etc., similar to the top command.
This is the bash script:
#!/bin/bash
z=$(top -n 1 -bi)
while read -r z
do
    var=$var$(awk 'FNR>7{print "cpu_usage{process=\""$12"\", pid=\""$1"\"}", $9z} FNR>7{print "memory_usage{process=\""$12"\", pid=\""$1"\"}", $10z}')
done <<< "$z"
curl -X POST -H "Content-Type: text/plain" --data "$var
" http://localhost:9091/metrics/job/top/instance/machine
I used to have a version that used ps aux but then I found out that it only shows the average CPU% per process.
As you can see, the command I'm running is top -n 1 -bi, which gives me a snapshot of active processes and their metrics.
I'm using awk to format the data, and FNR>7 because I need to ignore the first 7 lines, which are the summary header printed by top.
The bash script is installed in /bin, /usr/bin and /usr/local/bin.
When checking http://localhost:9091/metrics, which is supposed to show the information gathered, this is some of what I get when running the script from a shell:
cpu_usage{instance="machine",job="top",pid="114468",process="php-fpm74"} 17.6
cpu_usage{instance="machine",job="top",pid="114483",process="php-fpm74"} 11.8
cpu_usage{instance="machine",job="top",pid="126305",process="ffmpeg"} 64.7
And this is the same information when cron is running the same script:
cpu_usage{instance="machine",job="top",pid="114483",process="php-fpm+"} 5
cpu_usage{instance="machine",job="top",pid="126305",process="ffmpeg"} 60
cpu_usage{instance="machine",job="top",pid="128777",process="php"} 15
So, for some reason, when I run it from cron it truncates the process name after 7 characters.
I initially thought it was related to the FNR>7, but even after changing it to 8 or 9 (and running exec bash to reload the shell) I get the same results; when I run the script manually it works just fine.
Any help would be appreciated!!
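One thing worth checking, on the assumption that the truncation comes from top itself rather than from awk: when top runs without a terminal (as it does under cron) it falls back to a narrow default output width and truncates the COMMAND column, marking cut-off names with a trailing +, which matches the php-fpm+ values above. A minimal sketch that makes the width explicit (procps-ng top accepts -w; older versions honour the COLUMNS environment variable instead):
# Force a wide batch output so the COMMAND column is not truncated when no TTY is attached
z=$(top -w 512 -n 1 -bi)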

How to view the output of a Linux service?

A script is automatically started as a service at system startup, for example: myscript.service.
Question: how do I view my script's output in real time, just as if I had started the script from a terminal and were watching its output?
service myscript status shows only a few lines and doesn't update the output.
journalctl -f -u myscript
This assumes the service is managed by systemd and logs to the journal.
Alternatively, redirect the logs to a file and follow it with tail -f.
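A minimal sketch of the redirect approach in the unit file (the paths are examples; StandardOutput=append: requires systemd 240 or newer):
[Service]
ExecStart=/usr/local/bin/myscript
# Send both stdout and stderr to a file that can be followed in real time
StandardOutput=append:/var/log/myscript.log
StandardError=append:/var/log/myscript.log
Then watch it live with tail -f /var/log/myscript.log.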

Cron job not logging output - Whenever gem

I'm new to cron and started using the Whenever gem to perform scheduled tasks. However, sometimes they stall, and I have to restart the app for them to pick up again. So I wanted to inspect the logs to see if there are any exceptions that might be causing it.
According to this wiki, I put a line in my schedule.rb setting the output location:
set :output, "/var/log/cron.log"
But this file is always empty.
I tried it manually from the terminal, /bin/bash -l -c 'echo "hello" >> /var/log/cron.log 2>&1', and it saved hello to the log.
Any thoughts? Thank you.
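If the redirect itself works from the terminal, one assumption worth verifying is that the generated crontab actually contains it: set :output only takes effect in the crontab that Whenever writes, so it must be regenerated after editing schedule.rb (and the cron user needs write access to /var/log/cron.log). A quick check:
# Regenerate the crontab from schedule.rb, then inspect the result
whenever --update-crontab
crontab -l    # each job line should end in >> /var/log/cron.log 2>&1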

scp hangs in script sometimes

I need some help to try and figure out a problem we are experiencing. We have the following bash shell script running on devices in two separate networks (network1 and network2). Both networks go to the same destination server.
while true
do
    # do something ...
    scp *.zip "$username@$server_ip:$destination_directory"
    # do something ...
    sleep 30
done
The script worked fine until a recent change to network2, after which the scp command in the script above sometimes hangs for hours before resetting. The same script is still working fine on network1, which did not change. We are not able to identify what the issue is with network2; everything seems to work except scp. The hang does not happen on every try, but when it does, it hangs for hours.
So I changed the scp command as follows, and it now resets within minutes; the data delay is bearable but not desirable.
scp -o BatchMode=yes -o ServerAliveCountMax=3 -o ServerAliveInterval=10 -o \
ConnectTimeout=60 *.zip "$username@$server_ip:$destination_directory"
I also tried sftp as follows:
sftp -o ConnectTimeout=60 -b "batchfiles.txt" "$username@$server_ip"
The ConnectTimeout does not seem to work well in sftp because it still hangs for hours sometimes. So I am back to using scp.
I even included the -o IdentityFile=path_to_key/id_rsa option in both scp and sftp, thinking it might be an authentication issue. That did not work either.
What is really strange is that it always works when I issue the same commands from a terminal. The shell script runs as a background task. I am running Linux 3.8.0-26-generic #38-Ubuntu and OpenSSH_6.1p1 Debian-4. I don't think it is a local script permission issue because: 1) it worked before network2 changed, and 2) it works some of the time.
I did a network packet capture. I can see that each time the scp command hangs, it is accompanied by [TCP Retransmission] and [RST, ACK] within seconds of the start of the scp conversation.
I am very confused as to whether the issue is network or script related. Based on the sequence of events, I think it is likely due to the recent change in network2. But why does the same command work every time I try it from a terminal?
Can someone kindly tell me what my issue is, or how to go about troubleshooting it?
Thank you for reading and helping.
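Not a root-cause fix, but one way to bound the hang, assuming GNU coreutils timeout is available on the devices: wrap the transfer so a stuck scp is killed after a fixed period and the loop simply retries 30 seconds later.
# Kill scp if it has not finished within 5 minutes; the log path is just an example
timeout 300 scp -o BatchMode=yes -o ConnectTimeout=60 \
    *.zip "$username@$server_ip:$destination_directory" \
    || echo "$(date) scp failed or timed out" >> /tmp/scp-errors.log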

Check Whether a Web Application is Up or Not

I would like to write a unix shell script to check whether the application is up or not.
From googling I found the command wget -O /dev/null -q http://mysite.com, but I'm not sure how it works. Can someone please explain?
Run the wget command
the -O option tells wget where to put the data that is retrieved
/dev/null is a special UNIX file that discards everything written to it. In other words, the data is thrown away.
-q means quiet. Normally wget prints lots of progress information while downloading, so we turn that off.
http://mysite.com is the URL of the exact web page that you want to retrieve.
Many programmers create a special page for this purpose that is short and contains status data. In that case, do not discard it, but save it to a log file by replacing -O /dev/null with -O mysite.log.
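For example, a minimal sketch (the /status path and the log name are hypothetical):
# Append the fetched status page, with a timestamp, to a log file
echo "$(date) $(wget -q -O - http://mysite.com/status)" >> mysite.log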
Check whether you can connect to your web server:
Connect to the port where your web server listens.
If it connects properly, your web server is up; otherwise it is down.
You can check further (e.g. whether the index page is served properly).
See this shell script:
if wget -O /dev/null -q http://shiplu.mokadd.im
then
    echo Site is up
else
    echo Site is down
fi
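If you need more than basic reachability, checking the HTTP status code also catches the case where the server answers but the application itself is failing; a sketch using curl against the same URL:
# -s: silent; -o /dev/null: discard the body; -w '%{http_code}': print only the status code
status=$(curl -s -o /dev/null -w "%{http_code}" http://shiplu.mokadd.im)
if [ "$status" -eq 200 ]
then
    echo Site is up
else
    echo "Site is down (HTTP $status)"
fi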
