I am doing some banning tests for the cache.
Of course I don't want false positives, so I've set:
set beresp.ttl = 0s;
in my default.vcl file and restarted the service.
Despite that, after a minute or two, the cache is refreshed.
Am I missing something here?
I'm not following your question, but beresp.ttl = 0s means disabling caching (at least if the built-in VCL is executed you'll cache a HFP/HFM object; otherwise it's a bad idea due to request serialisation).
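For reference, this is roughly what the built-in vcl_backend_response does when beresp.ttl ends up at 0 (paraphrased; the other conditions in the built-in VCL are omitted). The 120-second marker may also explain the minute-or-two behaviour you're seeing:
sub vcl_backend_response {
    # Paraphrased from the built-in VCL: an uncacheable response is stored
    # as a hit-for-pass / hit-for-miss marker for two minutes.
    if (beresp.ttl <= 0s) {
        set beresp.ttl = 120s;
        set beresp.uncacheable = true;
    }
    return (deliver);
}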
So I was looking for an easy way to set up bandwidth throttling on my website. I installed Debian 11, Apache 2.4, ISPConfig, etc. I enabled mod_ratelimit and modified .htaccess to set the limits. Amazingly, it worked... kind of. No matter what I put, the max download speed was 121 KB/s. With it disabled I would get 50 MB/s, which is what I normally get on my gigabit connection (only 256 Mb up).
SetEnv rate-limit 100 = 121kb/sec
SetEnv rate-limit 512 = 121kb/sec
SetEnv rate-limit 25000 = 121kb/sec
I only found one mention of anything similar anywhere, and that poster had a comparable issue: the module would only produce two different speeds, 68 MB/s or 178 MB/s, while without it he got 300 MB/s.
Similar but not exact, and I cannot figure out how the heck to fix this. The idea was to use this module and cap guest users at 400 KB/s, paid users at 1 MB/s for tier 1, 5 MB/s for tier 2, etc., by setting the environment variable from PHP (not sure set_env is the right name, but you get what I mean). Does anyone else have this issue, and is there a way to fix it?
I tried removing the burst setting, since it did not seem to do anything. My download starts at about 10 KB/s, slowly climbs to 121 KB/s, and then sits there.
In the .htaccess for that directory, increase the value in steps of 1000, not 100:
<IfModule ratelimit_module>
SetOutputFilter RATE_LIMIT
# rate-limit is specified in KiB/s
SetEnv rate-limit 5000
# rate-initial-burst (Apache 2.4.34+) is specified in KiB
SetEnv rate-initial-burst 8000
</IfModule>
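As for the tiered limits, instead of setting the variable from PHP you could let Apache pick the rate itself. A rough sketch keyed on a hypothetical "tier" cookie set by the application (the cookie name and the KiB/s values are made up):
<IfModule ratelimit_module>
SetOutputFilter RATE_LIMIT
# Default for guests. SetEnvIf is used for the default too, because SetEnv
# runs later (fixup phase) and would override the cookie-based values below.
SetEnvIf Request_URI ".*" rate-limit=400
# Hypothetical tier cookie set by the application; later matches win.
SetEnvIf Cookie "tier=1" rate-limit=1024
SetEnvIf Cookie "tier=2" rate-limit=5120
</IfModule>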
I want to ignore requests with a big cookie size. We have some requests dropped in Varnish due to "BogoHeader Header too long: Cookie: xyz". How can this be done in VCL? I didn't find any len, length or strlen function in VCL; I know it could be done in the vcl_recv phase.
The strlen() feature won't help to fix your problem. Varnish discards the request due to the large Cookie header before vcl_recv is executed. If you don't want those requests to be discarded you need to check and adjust some runtime parameters: http_req_hdr_len, http_req_size, http_resp_hdr_len, etc.
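For example, those limits can be raised at runtime with varnishadm, or passed as -p options when starting varnishd (the values below are purely illustrative):
varnishadm param.set http_req_hdr_len 16384
varnishadm param.set http_req_size 65536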
In any case, if you are still interested in a strlen() feature, it would be trivial to add it to the std VMOD, but that support doesn't exist at the moment. You could use an existing VMOD that includes utilities like strlen() (or implement one yourself), but that's probably too much work. Finally, there is a hacky approach using just VCL and a regexp:
if (req.http.Cookie ~ "^.{1024,}$") {
...
}
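For completeness, a minimal sketch of how that check could sit in vcl_recv (the 1024-character threshold and the choice to simply drop the oversized Cookie are assumptions, not something from your setup):
sub vcl_recv {
    # Hypothetical threshold: treat a Cookie header of 1024+ characters as too big
    if (req.http.Cookie ~ "^.{1024,}$") {
        # Ignore the oversized cookie rather than rejecting the whole request
        unset req.http.Cookie;
    }
}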
I'm trying to get Varnish's cached response to be chunked... (that's possible, right?)
I have the following scenario:
1 - cache is clean, good to go (service varnish restart)
2 - access the www.mywebsite.com/page for the first time
(no content-length is returned, and chunking is there, great!)
3 - the next time I access the page (like a simple reload) it will be served from the cache... and now I get this:
(now we have content-length... which means no chunking :( not great!)
After reading some Varnish docs/blogs (and this: http://book.varnish-software.com/4.0/chapters/VCL_Basics.html), looks like there are two "last" returns: return(fetch) or return(deliver).
When forcing a return(fetch), the chunked encoding works... but it also means that the request won't be cached, right? While return(deliver) caches correctly but adds the content-length header.
I've tried adding these to my default.vcl file:
set beresp.do_esi = true; (at vcl_backend_response stage)
and
unset beresp.http.content-length; (at different stages, without success)
So.. how to have Varnish caching working with Transfer-Encoding: chunked?
Thanks for your attention!
Is there a reason why you want to send it chunked? Chunked transfer encoding is kind of a clumsy workaround for when the content length isn't known ahead of time. What's actually happening here is Varnish is able to compute the length of the gzipped content after caching it for the first time, and so doesn't have to use the workaround! Rest assured that you are not missing out on any performance gains in this scenario.
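That said, if you really do need a chunked response for some downstream reason, one workaround that is often suggested is to drop the Content-Length header at delivery time, which makes Varnish fall back to chunked transfer encoding for HTTP/1.1 clients. Treat this as a sketch rather than a recommendation:
sub vcl_deliver {
    # With no Content-Length, the response body is sent chunked (HTTP/1.1)
    unset resp.http.Content-Length;
}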
I have an EC2 server running Elasticsearch 0.9 with an nginx server for read/write access. My index has about 750k small-to-medium documents, and I have a pretty continuous stream of minimal writes (mainly updates) to the content. The speed and consistency I get from search is fine with me, but I have some sporadic timeout issues with multi-get (/_mget).
On some pages in my app, our server will request a multi-get of a dozen to a few thousand documents (this usually takes less than 1-2 seconds). The requests that fail, fail with a 30,000 millisecond timeout from the nginx server. I am assuming this happens because the index was temporarily locked for writing/optimizing purposes. Does anyone have any ideas on what I can do here?
A temporary solution would be to lower the timeout and return a user friendly message saying documents couldn't be retrieved (however they still would have to wait ~10 seconds to see an error message).
Another thought was to give reads priority over writes: any time someone is reading part of the index, don't allow any writes/locks on that section. I don't think this would scale, and it may not even be possible.
Finally, I was thinking I could have a read-only alias and a write-only alias. I can figure out how to set this up through the documentation, but I am not sure if it will actually work like I expect it to (and I'm not sure how I can reliably test it in a local environment). If I set up aliases like this, would the read-only alias still have moments where the index was locked due to information being written through the write-only alias?
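For reference, such an alias pair would be registered through the _aliases endpoint, roughly like this (the index and alias names are made up, and note that both aliases would still point at the same underlying shards):
POST /_aliases HTTP/1.1
Host: 127.0.0.1:9200
{
  "actions" : [
    { "add" : { "index" : "documents", "alias" : "documents_read" } },
    { "add" : { "index" : "documents", "alias" : "documents_write" } }
  ]
}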
I'm sure someone else has come across this before; what is the typical solution to make sure a user can always read data from the index, with a higher priority than writes? I would consider increasing our server power if required. Currently we have two m2.xlarge EC2 instances, one holding the primaries and one the replicas, each with 4 shards.
An example dump of cURL info from a failed request (with an error of Operation timed out after 30000 milliseconds with 0 bytes received):
{
"url":"127.0.0.1:9200\/_mget",
"content_type":null,
"http_code":100,
"header_size":25,
"request_size":221,
"filetime":-1,
"ssl_verify_result":0,
"redirect_count":0,
"total_time":30.391506,
"namelookup_time":7.5e-5,
"connect_time":0.0593,
"pretransfer_time":0.059303,
"size_upload":167002,
"size_download":0,
"speed_download":0,
"speed_upload":5495,
"download_content_length":-1,
"upload_content_length":167002,
"starttransfer_time":0.119166,
"redirect_time":0,
"certinfo":[
],
"primary_ip":"127.0.0.1",
"redirect_url":""
}
After more monitoring using the Paramedic plugin, I noticed that I would get timeouts when my CPU would hit ~80-98% (no obvious spikes in indexing/searching traffic). I finally stumbled across a helpful thread on the Elasticsearch forum. It seems this happens when the index is doing a refresh and large merges are occurring.
Merges can be throttled at the cluster or index level, and I've lowered indices.store.throttle.max_bytes_per_sec from the default 20mb to 5mb. This can be done at runtime with the cluster update settings API.
PUT /_cluster/settings HTTP/1.1
Host: 127.0.0.1:9200
{
"persistent" : {
"indices.store.throttle.max_bytes_per_sec" : "5mb"
}
}
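The same change can be applied with curl, along these lines:
curl -XPUT 'http://127.0.0.1:9200/_cluster/settings' -d '{
  "persistent" : {
    "indices.store.throttle.max_bytes_per_sec" : "5mb"
  }
}'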
So far Paramedic is showing a decrease in CPU usage, from an average of ~5-25% down to an average of ~1-5%. Hopefully this helps me avoid the 90%+ spikes that were locking up my queries before; I'll report back by selecting this answer if I don't have any more problems.
As a side note, I guess I could have opted for more balanced EC2 instances (rather than memory-optimized). I think I'm happy with my current choice, but my next purchase will also take more CPU into account.
In my Rails app I do an nslookup using the Ruby library resolv. If a site like dgdfgdfgdfg.com is entered it takes too long to resolve, in some instances around 20 seconds (mostly for non-existent sites), which causes the application to slow down.
So I thought of introducing a timeout period for the DNS lookup.
What would be an ideal timeout period for the DNS lookup, so that resolution of real sites doesn't fail? Would something like 10 seconds be fine?
There's no IETF-mandated value, although §6.1.3.3 of RFC 1123 suggests a value not less than 5 seconds.
Perl's Net::DNS and the command line dig utility do default to 5 seconds between retries. Some versions of the Microsoft resolver appear to default to 3 seconds.
You can run some tests among your users to find the right number, trading off responsiveness against performance.
You can also adjust that timeout dynamically depending on network conditions.
For example, for every successful resolution, record how long it took. Every hour (say) you can calculate an average and set double that value as the timeout (remember that the average is, roughly speaking, "the middle"). This way, if your latency is high at some point, the timeout period adjusts itself upwards automatically.
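Putting that together in Ruby, a minimal sketch might look like the following (the class, the 2x multiplier and the 5-second default are all assumptions; Resolv::DNS#timeouts= is available in the stdlib from Ruby 2.1 onwards):
require 'resolv'

# Hypothetical helper: resolve hostnames with a timeout derived from recent lookups.
class DnsLookup
  def initialize(default_timeout = 5)
    @durations = []
    @timeout = default_timeout
  end

  def resolve(host)
    dns = Resolv::DNS.new
    dns.timeouts = @timeout            # seconds (an array can be given for retries)
    started = Time.now
    address = dns.getaddress(host).to_s
    @durations << (Time.now - started)
    address
  rescue Resolv::ResolvError
    nil                                # non-existent domain or lookup timed out
  ensure
    dns.close if dns
  end

  # Call periodically (e.g. hourly): the timeout becomes 2x the average duration.
  def readjust!
    return if @durations.empty?
    @timeout = [(@durations.sum / @durations.size) * 2, 1].max
    @durations.clear
  end
end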