So I was looking for an easy way to set up bandwidth throttling on my website. I installed Debian 11, Apache 2.4, ISPConfig, etc. I enabled mod_ratelimit and modified .htaccess to set the limits. Amazingly, it worked... kind of. No matter what value I put in, the maximum download speed was 121 KB/s. With the filter disabled I get about 50 MB/s, which is what I normally get on my gigabit connection (only 256 Mbit up).
SetEnv rate-limit 100 = 121kb/sec
SetEnv rate-limit 512 = 121kb/sec
SetEnv rate-limit 25000 = 121kb/sec
I only found one mention of anything similar anywhere: that person had a related issue where the filter would only produce two different speeds, 68 MB/s or 178 MB/s, and without it he got 300 MB/s.
Similar, but not exact, and I cannot figure out how to fix this. The idea was to use this module to cap guest users at 400 KB/s, with paid users getting 1 MB/s for tier 1, 5 MB/s for tier 2, etc., by setting the environment variable from PHP (I'm not sure of the exact function name, but you get the idea). Does anyone else have this issue, and is there a way to fix it?
I tried removing the burst setting, since it did not seem to do anything. My download starts at about 10 KB/s, slowly climbs to 121 KB/s, and then sits there.
In the .htaccess for that directory, increase the value in steps of 1000, not 100 (the rate-limit value is specified in KiB/s):
<IfModule ratelimit_module>
SetOutputFilter RATE_LIMIT
SetEnv rate-limit 5000
SetEnv rate-initial-burst 8000
</IfModule>
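If the limit still refuses to budge, it may be worth confirming the module is actually active and measuring from the command line. A minimal sketch, assuming a Debian-style Apache layout; the test URL is a placeholder:

# Confirm the filter module is enabled, then reload Apache
sudo a2enmod ratelimit
sudo systemctl reload apache2

# Download a large file and watch the average speed curl reports
curl -o /dev/null http://example.com/downloads/bigfile.zip

curl's progress meter makes it easy to see whether a changed rate-limit value is actually taking effect.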
Trying to understand more about Native-Transport-Requests!
As we know, these are CQL requests, and if the limit is exceeded the result is all-time-blocked NTRs.
My question is: how do I monitor these requests in real time and get some kind of report on them?
I see some settings like max_queued_native_transport_requests and native_transport_max_threads. How do these settings affect the all-time-blocked count?
Have a look at CASSANDRA-11363.
Also check this discussion for more info.
The recommendation is to start with the default values and tune from there. The default values are:
max_queued_native_transport_requests: 1024
native_transport_max_threads: 128
Monitor your nodes, and if you see an increasing number of blocked Native-Transport-Requests, you need to increase max_queued_native_transport_requests; one way to watch the counters is sketched below.
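A minimal way to keep an eye on this, assuming nodetool is available on each node (sketch only; adjust the interval to taste):

# Show the Native-Transport-Requests pool, including the "All time blocked" column
nodetool tpstats | grep -iE 'Pool Name|Native'

# Refresh every 10 seconds to see whether the blocked counters are growing
watch -n 10 "nodetool tpstats | grep -iE 'Pool Name|Native'"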
Also, I think it's worth checking these discussions: 1, 2
We have 10Gb/s servers and 1Gb/s servers coexisting (a temporary migration setup) [UDP traffic]. We would like to shape the traffic coming from the 10Gb/s servers in order to avoid big bursts that the 1G servers cannot handle.
It seems that tc cannot do the job with a tbf (or maybe we are using it the wrong way). For instance, on our 10G servers we tried the following:
sudo tc qdisc add dev eth5 root tbf rate 950mbit latency 1s burst 50mbit peakrate 1000mbit mtu 1500
Here we set the peakrate at 1000mbit (which should normally prevent bursts above that rate).
Unfortunately, that does not work: after applying this tc config, our throughput drops to at most 2Mb/s.
Our only clue for this strange behavior is this passage in the tc manual:
"To achieve perfection, the second bucket may contain only a single packet, which leads to the earlier mentioned 1mbit/s limit.
This limit is caused by the fact that the kernel can only throttle for at minimum 1 'jiffy', which depends on HZ as 1/HZ. For perfect shaping, only a single packet can get sent per jiffy - for HZ=100, this means 100 packets of on average 1000 bytes each, which roughly corresponds to 1mbit/s. "
So, is it certain that we can't have a peakrate above 1Mbit/s?
Or maybe there is a completely different way to achieve our goal; if anyone has a suggestion that would help, I'm all ears. =)
Kind regards
Why do you have a 1s latency? Seems WAY too high for a 1 Gbit link
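Not an authoritative answer, but one direction worth trying based on the man page excerpt quoted above: drop peakrate entirely (or raise tbf's mtu/minburst, which sizes the peakrate bucket) and make sure burst holds at least rate/HZ worth of bytes. A rough sketch reusing the numbers from the question, untested here:

# Plain tbf without peakrate: burst must hold at least rate/HZ bytes
# (950 Mbit/s is roughly 119 MB/s, so about 475 kB at HZ=250); latency kept small to limit queueing
sudo tc qdisc replace dev eth5 root tbf rate 950mbit burst 500k latency 50ms

If a peakrate really is required, the same man page notes that raising mtu/minburst trades a little burstiness for a peakrate ceiling well above 1Mbit/s.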
I have an EC2 server running Elasticsearch 0.9 with an nginx server in front for read/write access. My index has about 750k small-to-medium documents. I have a pretty continuous stream of minimal writes (mainly updates) to the content. The speed and consistency I get from search are fine with me, but I have some sporadic timeout issues with multi-get (/_mget).
On some pages in my app, our server will request a multi-get of a dozen to a few thousand documents (this usually takes less than 1-2 seconds). The requests that fail do so with a 30,000 millisecond timeout from the nginx server. I am assuming this happens because the index was temporarily locked for writing/optimizing purposes. Does anyone have any ideas on what I can do here?
A temporary solution would be to lower the timeout and return a user-friendly message saying the documents couldn't be retrieved (however, users would still have to wait ~10 seconds to see an error message).
Another thought was to give reads priority over writes: any time someone is trying to read part of the index, don't allow any writes/locks to that section. I don't think this would be scalable, and it may not even be possible.
Finally, I was thinking I could have a read-only alias and a write-only alias. I can figure out how to set this up through the documentation, but I am not sure if it will actually work like I expect it to (and I'm not sure how I can reliably test it in a local environment). If I set up aliases like this, would the read-only alias still have moments where the index was locked due to information being written through the write-only alias?
I'm sure someone else has come across this before; what is the typical solution to make sure a user can always read data from the index with higher priority than writes? I would consider increasing our server power if required. Currently we have two m2.xlarge EC2 instances: one is the primary and one the replica, each with 4 shards.
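For reference, the alias setup described above would look roughly like this (a sketch only; the index and alias names are placeholders, and both aliases still point at the same physical index):

# Create separate read and write aliases on the same index (names are hypothetical)
curl -XPOST 'http://127.0.0.1:9200/_aliases' -d '{
  "actions" : [
    { "add" : { "index" : "documents_v1", "alias" : "documents_read" } },
    { "add" : { "index" : "documents_v1", "alias" : "documents_write" } }
  ]
}'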
An example dump of cURL info from a failed request (with an error of Operation timed out after 30000 milliseconds with 0 bytes received):
{
"url":"127.0.0.1:9200\/_mget",
"content_type":null,
"http_code":100,
"header_size":25,
"request_size":221,
"filetime":-1,
"ssl_verify_result":0,
"redirect_count":0,
"total_time":30.391506,
"namelookup_time":7.5e-5,
"connect_time":0.0593,
"pretransfer_time":0.059303,
"size_upload":167002,
"size_download":0,
"speed_download":0,
"speed_upload":5495,
"download_content_length":-1,
"upload_content_length":167002,
"starttransfer_time":0.119166,
"redirect_time":0,
"certinfo":[
],
"primary_ip":"127.0.0.1",
"redirect_url":""
}
After more monitoring using the Paramedic plugin, I noticed that I would get timeouts when my CPU would hit ~80-98% (no obvious spikes in indexing/searching traffic). I finally stumbled across a helpful thread on the Elasticsearch forum. It seems this happens when the index is doing a refresh and large merges are occurring.
Merges can be throttled at the cluster or index level, and I've lowered indices.store.throttle.max_bytes_per_sec from the default 20mb to 5mb. This can be done at runtime with the cluster update settings API.
PUT /_cluster/settings HTTP/1.1
Host: 127.0.0.1:9200

{
  "persistent" : {
    "indices.store.throttle.max_bytes_per_sec" : "5mb"
  }
}
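For reference, the same call via curl (host and port as in the dump above):

curl -XPUT 'http://127.0.0.1:9200/_cluster/settings' -d '{
  "persistent" : {
    "indices.store.throttle.max_bytes_per_sec" : "5mb"
  }
}'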
So far Paramedic is showing a decrease in CPU usage, from an average of ~5-25% down to an average of ~1-5%. Hopefully this will help me avoid the 90%+ spikes that were locking up my queries before; I'll report back by selecting this answer if I don't have any more problems.
As a side note, I guess I could have opted for more balanced EC2 instances (rather than memory-optimized). I think I'm happy with my current choice, but my next purchase will also take more CPU into account.
My website has seen ever-decreasing traffic, so I've been working to improve speed and usability. On WebPageTest.org I've worked most of my grades up, but First Byte is still horrible.
F First Byte Time
A Keep-alive Enabled
A Compress Transfer
A Compress Images
A Progressive JPEGs
B Cache static
First Byte Time (back-end processing): 0/100
1081 ms First Byte Time
90 ms Target First Byte Time
I use the Rackspace Cloud Server system,
CentOS 6.4, 2 GB of RAM, 80 GB hard drive,
Next Generation Server
Linux 2.6.32-358.18.1.el6.x86_64
Apache/2.2.15 (CentOS)
MySQL 5.1.69
PHP: 5.3.3 / Zend: 2.3.0
Website system: TomatoCart shopping cart.
Any help would be much appreciated.
Traceroute #1 to 198.61.171.121
Hop Time (ms) IP Address FQDN
0.855 - 199.193.244.67
0.405 - 184.105.250.41 - gige-g2-14.core1.mci3.he.net
15.321 - 184.105.222.117 - 10gigabitethernet1-4.core1.chi1.he.net
12.737 - 206.223.119.14 - bbr1.ord1.rackspace.NET
14.198 - 184.106.126.144 - corea.ord1.rackspace.net
14.597 - 50.56.6.129 - corea-core5.ord1.rackspace.net
13.915 - 50.56.6.111 - core5-aggr1501a-1.ord1.rackspace.net
16.538 - 198.61.171.121 - mail.aboveallhousplans.com
Per JXH's advice I did a packet capture and analyzed it using Wireshark.
During a hit-and-leave visit to the site I got six lines of bad TCP at around packets 28-33, warning of TCP Retransmission and TCP Dup ACK (two of each warning, three times over).
In the expanded panel for a retransmission, the TCP analysis flags show "retransmission suspected", severity level Note, with an RTO of 1.19 seconds.
In the expanded panel for a Dup ACK, the TCP analysis flags show "Duplicate ACK", severity level Note, with an RTT of 0.09 seconds.
This is all gibberish to me...
I don't know if this is wise to do or not, but I've uploaded my packet capture dump file, if anyone cares to take a look at my flags and let me know what they think.
I wonder if the retransmission warnings mean that the HTTP response is sending duplicate information? I have a few things in there twice, which seems a little redundant; for example, the Vary: User-Agent directive is duplicated:
# Set header information for proxies
Header append Vary User-Agent
# Set header information for proxies
Header append Vary User-Agent
The retransmissions and dup ACKs were fixed on the server a few days ago, but the lag in the initial server response remains.
http://www.aboveallhouseplans.com/images/firstbyte001.jpg
http://www.aboveallhouseplans.com/images/firstbyte002.jpg
First byte of 600ms remains...
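One more way to keep an eye on the back-end delay without re-running WebPageTest each time: curl can report the individual timing phases of a request, so DNS, connect, and first-byte times can be separated (sketch; the URL is the site from the screenshots above):

# Print DNS, connect, first-byte and total times for a single request
curl -o /dev/null -s -w 'lookup: %{time_namelookup}s  connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' http://www.aboveallhouseplans.com/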
In my Rails app I do a DNS lookup using the Ruby library Resolv. If a site like dgdfgdfgdfg.com is entered, it takes too long to resolve, in some instances around 20 seconds (mostly for non-existent sites), and this slows the application down.
So I thought of introducing a timeout period for the DNS lookup.
What would be an ideal timeout period for the DNS lookup so that resolution of real sites doesn't fail? Would something like 10 seconds be fine?
There's no IETF-mandated value, although §6.1.3.3 of RFC 1123 suggests a value of not less than 5 seconds.
Perl's Net::DNS and the command-line dig utility both default to 5 seconds between retries. Some versions of the Microsoft resolver appear to default to 3 seconds.
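If you want a feel for different values before changing the app, dig lets you override its per-try timeout and retry count from the command line (a quick sketch; the throwaway domain is the one from the question):

# Time an NXDOMAIN lookup with a 3-second per-try timeout and a single try
time dig +time=3 +tries=1 dgdfgdfgdfg.com

# Compare against a domain that exists
time dig +time=3 +tries=1 example.com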
You can run some tests among your users to find the right number, balancing responsiveness against reliability.
You can also adjust that timeout dynamically depending on network conditions.
For example, for every successful resolution, record how long it took. Every hour (for example), calculate the average and set double that value as the timeout (remember that the average is, roughly speaking, "the middle"). This way, if your latency goes up at some point, the timeout period adjusts itself automatically.