I'm running a PHP script via cron using Wget, with the following command:
wget -O - -q -t 1 http://www.example.com/cron/run
The script takes a maximum of 5-6 minutes to do its processing. Will Wget wait for it and give it all the time it needs, or will it time out?
According to the wget man page, there are a couple of options related to timeouts -- and there is a default read timeout of 900s -- so I'd say that, yes, it could time out.
Here are the options in question:
-T seconds
--timeout=seconds
    Set the network timeout to seconds seconds. This is equivalent to specifying --dns-timeout, --connect-timeout, and --read-timeout, all at the same time.
And for those three options:
--dns-timeout=seconds
    Set the DNS lookup timeout to seconds seconds. DNS lookups that don't complete within the specified time will fail. By default, there is no timeout on DNS lookups, other than that implemented by system libraries.
--connect-timeout=seconds
    Set the connect timeout to seconds seconds. TCP connections that take longer to establish will be aborted. By default, there is no connect timeout, other than that implemented by system libraries.
--read-timeout=seconds
    Set the read (and write) timeout to seconds seconds. The "time" of this timeout refers to idle time: if, at any point in the download, no data is received for more than the specified number of seconds, reading fails and the download is restarted. This option does not directly affect the duration of the entire download.
I suppose using something like
wget -O - -q -t 1 --timeout=600 http://www.example.com/cron/run
should make sure wget doesn't time out before your script finishes.
(Yeah, that's probably the most brutal solution possible ^^ )
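For reference, the corresponding crontab entry might look something like this (a sketch for a user crontab; the every-10-minutes schedule and the output redirection are assumptions, not from the question):
# run every 10 minutes; discard output so cron doesn't email it
*/10 * * * * wget -O - -q -t 1 --timeout=600 http://www.example.com/cron/run > /dev/null 2>&1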
The default timeout is 900 seconds. You can specify a different timeout:
-T seconds
--timeout=seconds
The default is to retry 20 times. You can specify a different number of tries:
-t number
--tries=number
Link: wget man documentation
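For example, to make the defaults explicit while keeping a single attempt (a sketch using the URL from the question):
$ wget -O - -q -T 900 -t 1 http://www.example.com/cron/run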
Prior to version 1.14, wget timeout arguments were not adhered to if downloading over https due to a bug.
Since you said in your question that it's a PHP script, maybe the best solution is simply to add this to your script:
ignore_user_abort(TRUE);
This way, even if wget terminates, the PHP script keeps running, at least until it exceeds the max_execution_time limit (an ini directive; 30 seconds by default).
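If you want to double-check the max_execution_time value on your setup, a quick sketch from the shell (note that the PHP CLI often uses a different php.ini than the web SAPI, so treat this as a rough check only):
$ php -r 'echo ini_get("max_execution_time"), PHP_EOL;'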
As for wget, anyway, you should not need to change its timeout: according to the man page, the default wget timeout is 900 seconds (15 minutes), which is much longer than the 5-6 minutes you need.
None of the wget timeout values have anything to do with how long it takes to download a file.
If the PHP script that you're triggering sits there idle for 5 minutes and returns no data, wget's --read-timeout will trigger if it's set to less than the time it takes to execute the script.
If you are actually downloading a file, or if the PHP script sends some data back, like a ... progress indicator, then the read timeout won't be triggered as long as the script is doing something.
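So if your script stays completely silent until it finishes, the idle limit only has to exceed that silent stretch; a sketch (600s here is an arbitrary value comfortably above the 5-6 minute runtime):
$ wget -O - -q -t 1 --read-timeout=600 http://www.example.com/cron/run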
wget --help tells you:
-T, --timeout=SECONDS set all timeout values to SECONDS
--dns-timeout=SECS set the DNS lookup timeout to SECS
--connect-timeout=SECS set the connect timeout to SECS
--read-timeout=SECS set the read timeout to SECS
So if you use --timeout=10 it sets the timeouts for DNS lookup, connecting, and reading bytes to 10s.
When downloading files, you can set the timeout value pretty low; as long as you have good connectivity to the site you're connecting to, you can still download a large file in 5 minutes with a 10s timeout. If you have a temporary connection failure to the site or DNS, the transfer will time out after 10s and then retry (if --tries, aka -t, is > 1).
For example, here I am downloading a file from NVIDIA that takes 4 minutes to download, and I have wget's timeout values set to 10s:
$ time wget --timeout=10 --tries=1 https://developer.download.nvidia.com/compute/cuda/11.2.2/local_installers/cuda_11.2.2_460.32.03_linux.run
--2021-07-02 16:39:21-- https://developer.download.nvidia.com/compute/cuda/11.2.2/local_installers/cuda_11.2.2_460.32.03_linux.run
Resolving developer.download.nvidia.com (developer.download.nvidia.com)... 152.195.19.142
Connecting to developer.download.nvidia.com (developer.download.nvidia.com)|152.195.19.142|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3057439068 (2.8G) [application/octet-stream]
Saving to: ‘cuda_11.2.2_460.32.03_linux.run.1’
cuda_11.2.2_460.32.03_linux.run.1 100%[==================================================================================>] 2.85G 12.5MB/s in 4m 0s
2021-07-02 16:43:21 (12.1 MB/s) - ‘cuda_11.2.2_460.32.03_linux.run.1’ saved [3057439068/3057439068]
real 4m0.202s
user 0m5.180s
sys 0m16.253s
4m to download, timeout is 10s, everything works just fine.
In general, timing out DNS, connections, and reads using a low value is a good idea. If you leave it at the default value of 900s you'll be waiting 15m every time there's a hiccup in DNS or your Internet connectivity.
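Applied to an ordinary download, that advice might look like this (a sketch; the URL and values are illustrative, not recommendations from the wget documentation, and for the silent PHP script in the question the read timeout still has to exceed the idle period):
$ wget --dns-timeout=10 --connect-timeout=10 --read-timeout=30 --tries=3 https://example.com/big-file.iso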
I have an iperf.sh shell script on multiple sub-servers that runs at " 1,14,28,42,50 * * * * " and pings the iperf server to check bandwidth. Is there any way to randomize this cron, or to set up a shell script that sleeps and runs at a random time...?
[ Note: The issue I am facing with this classic cron setup is that all sub-servers run the iperf.sh script at the same time, so my main iperf server gets high CPU utilization, which results in improper ping data. ]
Thanks in advance.
You can add a randomized wait period at the start of your script (or even in the crontab itself, as suggested in the comments).
I recommend GNU shuf, which will be more portable than $RANDOM (not every shell supports $RANDOM; dash, for example, doesn't).
sleep $(shuf -i5-20 -n1)
# Rest of script
You can experiment with the range of random wait periods (5 to 20 seconds in this example).
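If you'd rather keep the delay in the crontab entry itself, as mentioned above, a sketch (the script path is an assumption; adjust the range to taste):
1,14,28,42,50 * * * * sleep $(shuf -i 5-20 -n 1) && /usr/local/bin/iperf.sh
Since cron runs entries through /bin/sh, using shuf (an external program) avoids relying on $RANDOM, which dash does not provide.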
I installed aria2 (1.18.1) on my Ubuntu machine,
but the problem is that it does not increase the number of download connections.
aria2c -x 10 http://mirror.sg.leaseweb.net/speedtest/100mb.bin
[#fe34b4 2.1MiB/95MiB(2%) CN:4 DL:233KiB ETA:6m49s]
By default it only downloads over 4 connections, not the 10 I specified.
I even tried other sites, but the download still used the default of 4 connections.
Solved:
There are multiple options that influence the behavior:
--split -s Maximum number of concurrent splits (connections) per download. Defaults to 5, so unless you change it, you will get at most 5 connections for a single download, no matter what you pass to -x.
--min-split-size -k A split should only be initiated when the split would be bigger than this. Defaults to 20M, meaning that when you download a 100M file, by the time the download is split some data has already been retrieved, so slightly less than 100MB remains, which yields 4 splits (5 splits would each be slightly smaller than 20MB).
There you have it. Please check out the manual for more information on the various options.
$ aria2c -k 1M -s 10 -x 10 http://mirror.sg.leaseweb.net/speedtest/100mb.bin
[#1325f1 7.4MiB/95MiB(7%) CN:10 DL:1.2MiB ETA:1m8s]
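If you want those settings for every download, they could also go in aria2's configuration file, which aria2c reads from ~/.aria2/aria2.conf by default; a minimal sketch:
$ mkdir -p ~/.aria2
$ cat >> ~/.aria2/aria2.conf <<'EOF'
split=10
min-split-size=1M
max-connection-per-server=10
EOF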
I have a list of thousands of URLs. I want to run a health check (healt.php) on each one with an HTTP request.
This is my problem:
I've written an application in Node. It makes the requests in a pooled way, using a variable to control how many concurrent connections I open, e.g. 300.
Taken one by one, each request is fast: no more than 500ms.
But when I run the application, the result is:
$ node agent.js
200ms url1.tld
250ms url4.tld
400ms url2.tld
530ms url8.tld
800ms url3.tld
...
2300ms urlN.tld
...
30120ms urlM.tld
It seems that there is a limit on concurrency. When I execute
$ ps axo nlwp,cmd | grep node
The result is:
6 node agent.js
There are 6 threads to manage all the concurrent connections. I found an env variable to control concurrency in Node: UV_THREADPOOL_SIZE
$ UV_THREADPOOL_SIZE=300 node agent.js
200ms url1.tld
210ms url4.tld
220ms url2.tld
240ms url8.tld
400ms url3.tld
...
800ms urlN.tld
...
1010ms urlM.tld
The problem is still there, but the results are much better. With the ps command:
$ ps axo nlwp,cmd | grep node
132 node agent.js
Next step: Looking in the source code of node, I've found a constant in deps/uv/src/unix/threadpool.c:
#define MAX_THREADPOOL_SIZE 128
Ok. I've changed that value to 2048, compiled and installed node, and ran the command once more:
$ UV_THREADPOOL_SIZE=300 node agent.js
All seems OK: response times no longer increase gradually. But when I try a bigger concurrency number, the problem appears again. This time it's not related to the number of threads, because ps shows there are enough of them.
I tried writing the same application in Go, but the results are the same: the times increase gradually.
So, my question is: where is the concurrency limit? Memory, CPU load, and bandwidth are not out of bounds, and I have tuned sysctl.conf and limits.conf to avoid some limits (files, ports, memory, ...).
You may be throttled by http.globalAgent's maxSockets. Depending on whether you're using http or https, see if this fixes your problem:
require('http').globalAgent.maxSockets = Infinity;
require('https').globalAgent.maxSockets = Infinity;
If you're using request or request-promise you can set the pool size:
request({
url: url,
json: true,
pool: {maxSockets: Infinity},
timeout: 2000
})
More info here: https://github.com/request/request
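To sanity-check how many TCP connections the agent really has open while it runs, one rough approach from the shell (assumes lsof is installed and that the process command starts with "node"):
$ lsof -nP -i TCP -a -c node | wc -l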
I'm working on a somewhat unusual application where 10k clients are precisely timed to all try to submit data at once, every 3 mins or so. This 'ab' command fairly accurately simulates one barrage in the real world:
ab -c 10000 -n 10000 -r "http://example.com/submit?data=foo"
I'm using Node.js on Ubuntu 12.4 on a rackspacecloud VPS instance to collect these submissions, however, I'm seeing some very odd behavior from Node, even when I remove all my business logic and turn the http request into a no-op.
When the test gets about 90% done, it hangs for a long period of time. Strangely, this happens consistently at 90% - for c=n=10k, at 9000; for c=n=5k, at 4500; for c=n=2k, at 1800. The test actually completes eventually, often with no errors. But both ab and node logs show continuous processing up till around 80-90% of the test run, then a long pause before completing.
When node is processing requests normally, CPU usage is typically around 50-70%. During the hang period, CPU goes up to 100%. Sometimes it stays near 0. Between the erratic CPU response and the fact that it seems unrelated to the actual number of connections (only the % complete), I do not suspect the garbage collector.
I've tried this running 'ab' on localhost and on a remote server - same effect.
I suspect something related to the TCP stack, possibly involving closing connections, but none of my configuration changes have helped. My changes:
ulimit -n 999999
When I listen(), I set the backlog to 10000
Sysctl changes are:
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_max_orphans = 20000
net.ipv4.tcp_max_syn_backlog = 10000
net.core.somaxconn = 10000
net.core.netdev_max_backlog = 10000
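For completeness, I applied these roughly like this at runtime (a sketch showing two of them; persisting the values in /etc/sysctl.conf and running sysctl -p works as well):
$ sudo sysctl -w net.core.somaxconn=10000
$ sudo sysctl -w net.ipv4.tcp_max_syn_backlog=10000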
I have also noticed that I tend to get this msg in the kernel logs:
TCP: Possible SYN flooding on port 80. Sending cookies. Check SNMP counters.
I'm puzzled by this msg since the TCP backlog queue should be deep enough to never overflow. If I disable syn cookies the "Sending cookies" goes to "Dropping connections".
I speculate that this is some sort of linux TCP stack tuning problem and I've read just about everything I could find on the net. Nothing I have tried seems to matter. Any advice?
Update: Tried with tcp_max_syn_backlog, somaxconn, netdev_max_backlog, and the listen() backlog param set to 50k with no change in behavior. Still produces the SYN flood warning, too.
Are you running ab on the same machine as node? If not, do you have a 1G or 10G NIC? If they are on the same machine, then aren't you really trying to process 20,000 open connections?
Also, if you are setting net.core.somaxconn to 10,000, do you have absolutely no other sockets open on that machine? If you do, then 10,000 is not high enough.
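A quick way to see overall socket usage on the box while the test is running (assumes iproute2's ss is available):
$ ss -s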
Have you tried using Node's cluster module to spread the number of open connections across multiple processes?
I think you might find this blog post, and also the previous ones, useful:
http://blog.caustik.com/2012/08/19/node-js-w1m-concurrent-connections/
In my Rails app I do an nslookup using the Ruby library resolv. If a site like dgdfgdfgdfg.com is entered, it takes too long to resolve, in some instances around 20 seconds (mostly for non-existent sites), and this slows the application down.
So I thought of introducing a timeout period for the DNS lookup.
What would be an ideal timeout period for the DNS lookup so that resolution of real sites doesn't fail? Would something like 10 seconds be fine?
There's no IETF mandated value, although §6.1.3.3 of RFC 1123 suggests a value not less than 5 seconds.
Perl's Net::DNS and the command line dig utility do default to 5 seconds between retries. Some versions of the Microsoft resolver appear to default to 3 seconds.
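If you want to experiment with what feels responsive, dig lets you set both values explicitly; for example (the 5 seconds and 2 tries here are just illustrative):
$ dig +time=5 +tries=2 example.com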
You can run some tests among your users to find the right number, balancing responsiveness against performance.
You can also adjust that timeout dynamically depending on network conditions.
For example, for every successful resolution, save how long it took. Every hour (say), calculate an average and set double that value as the timeout (remember that an "average" is, roughly speaking, "the middle"). This way, if your latency goes up at some point, the timeout period increases automatically.
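A rough sketch of the "double the average" idea, assuming you log each successful lookup's duration in seconds, one per line, to a file called resolve_times.log (both the file name and the approach are illustrative):
$ awk '{ sum += $1; n++ } END { if (n > 0) printf "%.2f\n", 2 * sum / n }' resolve_times.log
The printed value would then be fed into whatever timeout setting your resolver uses.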