Wget Timeout if download takes too long - linux

Is there any way to stop a wget download if it is not completed in x seconds?
For example, if the download has not finished within 2 seconds, get a timeout?

wget has the following timeout-related options:
--timeout=seconds
--dns-timeout=seconds
--connect-timeout=seconds
--read-timeout=seconds
See the linked man page for an explanation of each.
Is there any way to stop a wget download if it is not
completed in x seconds?
You might find the timeout command useful: it kills the given command if it has not finished within the given number of seconds. For example
timeout 5 wget https://www.example.com
gives wget 5 seconds to download the page and kills it if it has not finished by then. Note that the timeout command can be used with commands other than wget.
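If the command ignores the default SIGTERM, timeout can escalate. A minimal sketch, assuming GNU coreutils timeout:

```shell
# timeout sends SIGTERM after 5 seconds; -k 10 (--kill-after) sends
# SIGKILL 10 seconds later if wget is still running.
timeout -k 10 5 wget https://www.example.com
```

timeout exits with status 124 when it had to kill the command, which lets a script distinguish a timeout from an ordinary wget failure.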

Related

Cronjob stuck without exiting

I have 50+ cron jobs like the one below running on my CentOS 7 server.
curl -s https://url.com/file.php
This runs every 10 minutes. When run manually from the shell it takes only 1-2 minutes, and it also works fine from cron. The problem is that it does not exit after execution. When I check my processes with the ps command, it shows many cron jobs from previous dates (even 10 days ago), which inflates the total number of processes on my server.
Line in crontab:
*/10 * * * * user curl -s https://url.com/file.php > /dev/null 2>&1
Is there any reason for this? If I remember correctly, this started after the latest patch update.
Please help.
Modify your command to store the logs in log files instead of dumping them to /dev/null.
The options
--max-time
--connect-timeout
--retry
--retry-max-time
can be used to control the curl command's behaviour.
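For example, the cron line from the question could be rewritten along these lines (the 120-second cap, retry count, and log path are assumptions; adjust them to your setup):

```shell
# Cap connection setup at 10s, the whole transfer at 120s, retry transient
# failures twice, and keep a log instead of discarding output.
*/10 * * * * user curl -s --connect-timeout 10 --max-time 120 --retry 2 https://url.com/file.php >> /var/log/file_php.log 2>&1
```

With --max-time in place, a hung transfer can no longer leave a curl process behind indefinitely.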

How to check if a command has hung in a bash script?

I have a bash script in which I call another script, and sometimes the second script hangs. Is there any way to check whether it has hung? I can't make any changes to the second script.
#!/bin/bash
# call the second script (that might hang)
# if it hangs, then do something
If you already know a threshold time after which the script is considered hung, you can use timeout.
timeout 30 bash script.sh
The command bash script.sh will run until it finishes in less than 30 seconds, or gets killed by timeout. You can adjust the time according to your environment.
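To actually branch on the hang, the wrapper can test timeout's exit status: 124 means timeout had to kill the command. A minimal sketch, assuming GNU coreutils and the script.sh name from the question:

```shell
#!/bin/bash
# Run the possibly-hanging script with a 30-second limit.
timeout 30 bash script.sh
status=$?
if [ "$status" -eq 124 ]; then
    # timeout killed it: the second script was hung
    echo "script.sh hung; doing cleanup" >&2
elif [ "$status" -ne 0 ]; then
    # the script exited on its own, but with an error
    echo "script.sh failed with status $status" >&2
fi
```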
Command Reference:
timeout
Usage: timeout [OPTION] DURATION COMMAND [ARG]...
or: timeout [OPTION]
Start COMMAND, and kill it if still running after DURATION.
Please refer to the man page for details on the options.

Using cron Chef cookbook to run a command every 30 mins

I am using the cron cookbook to run a command every 30 minutes, in the following way:
cron_d 'logrotate_check' do
  minute '*/30'
  command 'logrotate -s /var/log/logstatus /etc/logrotate.d/consul_logs'
  user 'root'
end
Please let me know if this is correct.
Yes, that is fine. In the future, please just try it yourself rather than asking the internet and waiting 10 hours.
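For reference, the resource above should render roughly this entry under /etc/cron.d (an approximation; the exact file the cookbook writes depends on its version):

```shell
# /etc/cron.d/logrotate_check (approximate rendered output)
*/30 * * * * root logrotate -s /var/log/logstatus /etc/logrotate.d/consul_logs
```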

Run a bash linux command for a specified time

Is there a way in Linux to execute a command for just a certain duration, like 10 minutes?
I want to make a capture with: airodump-ng -w $CAPT_DEST $mon
But I want it to last just 10 minutes and then have the command stop automatically.
The command you are looking for is timeout:
timeout 600 airodump-ng -w "$CAPT_DEST" "$mon"
See man timeout for more information.
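GNU timeout also accepts unit suffixes on the duration, so the same limit can be written more readably (assuming GNU coreutils):

```shell
# s, m, h and d suffixes are supported; 10m is the same as 600 seconds
timeout 10m airodump-ng -w "$CAPT_DEST" "$mon"
```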

Long running service check in Nagios

I have a service check that I found on the Nagios Exchange site. It works well for small directories, but not for larger ones that take longer than 30 or 60 seconds to complete.
http://exchange.nagios.org/directory/Plugins/Uncategorized/Operating-Systems/Linux/CheckDirSize/details
The problem I'm having is that I need to configure a service check that Nagios can run once a day but will remain open for 1440 minutes (one day). The directory listing is huge and takes many hours to complete (up to 20 hours).
This is my service check (it runs once a day; when using NRPE, the timeout is 86400 seconds, which is also one day). But for some reason, even though I can see the du -sk running on the command line in ps -ef | grep du, Nagios reports "(Service Check Timed Out)":
define service {
use generic-service,srv-pnp
host_name IMAGEServer1
service_description Images
check_command check_nrpe!check_dirsize -t 86400
check_interval 1440
}
In my nrpe.cfg file on the Linux server I have these two directives as well:
command_timeout=86400
connection_timeout=86400
How can I get Nagios to complete the check and not time out? I was under the impression that my directives above were correct.
What's timing out is the check_nrpe command on the local (Nagios server) side, which has a default timeout of 2 minutes. You could edit its command definition to use a longer timeout.
Alternatively, you might want to do this as a passive check on IMAGEServer1, running as a cron job.
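A passive setup could look roughly like this: a nightly cron job runs du itself and writes the result to the Nagios external command file. The host and service names match the question, but the command-file path and the /images directory are assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical passive-check submitter for the Images service, run from cron.
CMDFILE=/usr/local/nagios/var/rw/nagios.cmd   # external command file (assumed path)
SIZE_KB=$(du -sk /images | cut -f1)           # the slow directory walk (assumed dir)
NOW=$(date +%s)
# Format: [timestamp] PROCESS_SERVICE_CHECK_RESULT;<host>;<service>;<status>;<output>
printf '[%s] PROCESS_SERVICE_CHECK_RESULT;IMAGEServer1;Images;0;OK - %s KB\n' \
    "$NOW" "$SIZE_KB" > "$CMDFILE"
```

The Images service definition would also need passive checks enabled (passive_checks_enabled 1), with active checks disabled or their interval relaxed.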
