I have .htaccess set up to expire JS and CSS files after 7 days. ETag is turned off, and gzip/deflate compression is turned on.
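A sketch of an .htaccess implementing that description (the exact directives are an assumption, not the poster's actual file) would be:

```apache
# Sketch of the described setup; directives assumed, not copied from the poster
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType text/css "access plus 7 days"
    ExpiresByType application/x-javascript "access plus 7 days"
</IfModule>

# Turn ETags off
Header unset ETag
FileETag None

# Compress text content
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/css application/x-javascript
</IfModule>
```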
In my source HTML there are 25 different calls to load JS files. Not my design. Here is an example of one of those calls:
<script type="text/javascript" src="content/vendors/jquery/rater/jquery.rater-custom.js"></script>
The response header from inspection via Firebug:
HTTP/1.1 200 OK
Date: Sun, 20 Jan 2013 23:35:42 GMT
Server: Apache
Last-Modified: Sun, 20 Jan 2013 22:49:10 GMT
Accept-Ranges: bytes
Cache-Control: max-age=604800
Expires: Sun, 27 Jan 2013 23:35:42 GMT
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 648
Keep-Alive: timeout=1, max=95
Connection: Keep-Alive
Content-Type: application/x-javascript
There are also a ton of CSS references. The page is extremely slow, and I am trying to get caching to work to speed it up. In IE 9 and Chrome, the page renders almost instantly after the first load; I can tell all these files are being pulled from the cache in those browsers.
In Firefox I cannot get the browser to use the cached copies. Any idea what I am missing, or what could be forcing Firefox to request fresh copies of these files every single time the page is reloaded?
Have you checked your Firefox configuration? Sometimes people deactivate caching for development reasons, for example via the Web Developer toolbar...
I'm not sure exactly how Firefox handles automatic caching of the files it is served, but if your goal is to improve performance by caching files, then implementing ApplicationCache might be a viable solution.
Application Cache
http://www.html5rocks.com/en/tutorials/appcache/beginner/
http://www.sitepoint.com/offline-browsing-in-html5-with-applicationcache/
http://www.whatwg.org/specs/web-apps/current-work/#applicationcache
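The approach hinges on a cache manifest file listed in the page's `<html>` tag; a minimal sketch (the file names here are assumed):

```text
CACHE MANIFEST
# v1 - change this comment to force clients to re-download everything
content/vendors/jquery/rater/jquery.rater-custom.js
content/styles/main.css
```

The page would then reference it as `<html manifest="cache.manifest">`, and listed resources are served from the application cache even offline.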
Related
My desired behavior is to have CloudFront cache my origin (an API endpoint that returns JSON).
No matter what max-age value I put in the header at my origin, or what TTL values I put in my CloudFront distribution, the distribution caches responses for 24 hours when hit with curl (the backend is PHP). When hitting the endpoint through Chrome it is refreshed every 5 seconds as desired (I think Chrome is sending no-cache in the request header).
Does anyone have a recommendation on how to force CloudFront to serve fresh data every 5 seconds, especially when using curl?
I am hitting a REST API endpoint that sends back JSON, which needs to be cached in CloudFront for no more than five seconds. The cache needs to be refreshed after 5 seconds so fresh data is served, even if the data is the same. The origin is dynamic content that changes every hour on the hour; it is not static.
Is it impossible to get past the 24-hour caching limit?
I tried different Cache-Control headers, like max-age and s-maxage.
I tried changing the TTL times in my CloudFront distribution.
I have set Object Caching to Customize, with:
Min TTL is 0
Max TTL is 5
Default TTL is 5
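For reference, CloudFront's documented behavior with customized object caching is to clamp the origin's max-age between the minimum and maximum TTLs, falling back to the default TTL only when the origin sends no caching headers at all. A rough sketch of that selection logic:

```python
def effective_ttl(origin_max_age, min_ttl, max_ttl, default_ttl):
    """Approximate CloudFront's TTL selection for one cached object."""
    if origin_max_age is None:
        # Origin sent no Cache-Control/Expires: the default TTL applies
        return default_ttl
    # Otherwise the origin's max-age is clamped into [min_ttl, max_ttl]
    return max(min_ttl, min(origin_max_age, max_ttl))
```

With the settings above (min 0, max 5, default 5) and the origin's `Cache-Control: max-age=5`, this yields 5 seconds, so a 24-hour cache suggests the request is matching some other cache behavior in the distribution.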
My origin headers are:
HTTP/1.1 200 OK
Date: Thu, 09 May 2019 14:15:17 GMT
Server: Apache/2.4.39 (cPanel) OpenSSL/1.0.2r mod_bwlimited/1.4 mpm-itk/2.4.7-04
Last-Modified: Thu, 09 May 2019 14:12:57 GMT
ETag: "e1ab8bae1d38bfceecece2c36df378c1-gzip"
X-Robots-Tag: noindex, follow
Link: <https://www.example.com/json/>; rel="https://api.w.org/"
Cache-Control: max-age=5
Vary: Accept-Encoding,User-Agent
Content-Encoding: gzip
Content-Length: 789
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: application/json
Is it possible to exploit the banner information provided in the response header to get sensitive information about the server?
A typical response looks like the one below:
HTTP/1.1 200 OK
Date: Tue, 19 Apr 2011 09:23:32 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Set-Cookie: tracking=tI8rk7joMx44S2Uu85nSWc
X-AspNet-Version: 2.0.50727
Cache-Control: no-cache
Pragma: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 1067
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"><head><title>Your details</title>
…
I am trying to find out whether "Server: Microsoft-IIS/6.0" can be used to exploit vulnerabilities present in the server, or to get some sensitive information about it.
Thanks
All kinds of information can be useful to an attacker. Your example could be one small part of a larger reconnaissance effort.
As Jedi mentioned, the info you've provided can give a clue about potential weaknesses in the web server itself. Another thing to keep in mind is what the server tells you about the company's choice of architecture in general: if they are using a web server from Microsoft, chances are they will be using a database system from Microsoft too (and possibly mail servers and other infrastructure as well).
There's no guarantee of this, of course, but it may provide a starting point for further investigation, which may in turn reveal other weaknesses, such as SQL injection vulnerabilities (probably the most prevalent class of vulnerability in websites and web applications today).
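As a quick illustration, pulling the self-identification headers out of a captured response like the one above is trivial; a sketch:

```python
def banner_fields(raw_headers):
    """Collect the self-identification headers an attacker reads first."""
    interesting = {"server", "x-powered-by", "x-aspnet-version"}
    found = {}
    for line in raw_headers.splitlines():
        if ":" not in line:
            continue  # skip the status line and blank lines
        name, value = line.split(":", 1)
        if name.strip().lower() in interesting:
            found[name.strip()] = value.strip()
    return found
```

Run against the pasted response, this would return the IIS version, the ASP.NET marker, and the exact .NET runtime version; each of those narrows the search for known exploits.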
I have an ASP.NET MVC site running in IIS 6.0 and want it to compress the static css and js files it serves. The site has a wildcard mapping so all requests (inc. extensionless URLs) go via aspnet_isapi.dll. Static content is held in the Content and Scripts folders.
I have carried out the following steps:
Enabled HTTP compression for application files and static files in the IIS Console (Web Sites -> Service tab).
Added a Web Service Extension named "HTTP Compression" referring to inetsrv\gzip.dll
Edited MetaBase.xml to add css and js to the HcFileExtensions property of the gzip and deflate IIsCompressionScheme entries.
Removed the wildcard mapping from the Content and Scripts folders (by temporarily making them subwebs, removing the wildcard map, then reverting them to ordinary folders). This should ensure IIS serves those files without ASP.NET getting involved.
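For context, the relevant IIsCompressionScheme entry in MetaBase.xml ends up looking roughly like this after step 3 (the attribute values here are a sketch, not copied from the poster's metabase):

```xml
<IIsCompressionScheme Location="/LM/W3SVC/Filters/Compression/gzip"
    HcCompressionDll="%windir%\system32\inetsrv\gzip.dll"
    HcDoStaticCompression="TRUE"
    HcDoDynamicCompression="TRUE"
    HcFileExtensions="htm
        html
        txt
        css
        js"
/>
```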
The strange behaviour I now get is that, when Fiddler is running, it reports compressed file sizes for CSS and JS, and Firebug concurs (e.g. 47.9KB for jquery-ui.min.js). But when I disable Fiddler and hit CTRL+F5, Firebug reports uncompressed sizes (194.2KB for jquery-ui.min.js) and an unexpected Content-Type and Content-Length.
The request headers do not change, but it's interesting to look at the response headers.
With Fiddler running, Firebug reports (for jquery-ui.min.js):
Content-Length 49009
Content-Type application/x-javascript
Content-Encoding gzip
Last-Modified Wed, 26 Jan 2011 11:59:25 GMT
Accept-Ranges bytes
Etag "80cce07950bdcb1:a03"
Vary Accept-Encoding
Server Microsoft-IIS/6.0
X-Powered-By ASP.NET
x-ua-compatible IE=8
Without Fiddler:
Proxy-Authenticate NTLM
Content-Length 415
Keep-Alive timeout=5, max=100
Connection Keep-Alive
Content-Type text/html; charset=iso-8859-1
Why is Content-Type now text/html? The Content-Length of 415 looks odd; it doesn't match the 194.2KB that Firebug reports as the size of the response. Various other headers are no longer present.
For completeness, the request header is:
Host my-windows-box
User-Agent Mozilla/5.0 (Windows; U; Windows NT 5.2; en-GB; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 ( .NET CLR 3.5.30729; .NET4.0E)
Accept */*
Accept-Language en-gb,en;q=0.5
Accept-Encoding gzip,deflate
Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive 115
Proxy-Connection keep-alive
Referer http://my-windows-box/site
Cookie ASP.NET_SessionId=nbsb2hbkjdtcgjdntco25zqc
Pragma no-cache
Cache-Control no-cache
What's the status code on the response in Firebug? My first guess is that it's an HTTP 401 and what you're seeing here is the authentication page that requests your NTLM credentials.
I'm really confused by all this caching stuff. I'm trying to set up mod_expires to reduce the number of HTTP requests to my server.
I've done well so far: I installed mod_expires and wrote a little .conf file following the instructions at http://httpd.apache.org/docs/2.0/mod/mod_expires.html.
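A typical mod_expires configuration along the lines of those docs (directives assumed here; the poster's exact .conf isn't shown) looks like:

```apache
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png  "access plus 1 month"
    ExpiresByType image/gif  "access plus 1 month"
    ExpiresByType image/jpeg "access plus 1 month"
</IfModule>
```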
Now, for instance, all my .png, .gif, and .jpeg files have a Cache-Control header. I expected that the browser wouldn't make any GET requests within the time period given by the Cache-Control value. But it does: every single file fires a request and receives HTTP 304 Not Modified.
That is the wrong behavior, isn't it? It should load those files from its internal cache.
One thing I don't understand is that the browser sends a request header Cache-Control: max-age=0. Should it be like that?
Here is a complete example of the request and response headers for a single .png file:
Request
Host dev-mgg.localdomain
User-Agent Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8
Accept image/png,image/*;q=0.8,*/*;q=0.5
Accept-Language de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding gzip,deflate
Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive 115
Connection keep-alive
Referer http://dev-mgg.localdomain/css/global/icons.css?18224
Cookie IR_SQLPwdStore=; IR_SQLUser=sysadm
If-Modified-Since Thu, 24 Jul 2008 06:24:11 GMT
If-None-Match "4010127-3c4-452bf1aefd8c0"
Cache-Control max-age=0
Response
Date Mon, 02 Aug 2010 14:00:28 GMT
Server Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny8 with Suhosin-Patch mod_perl/2.0.4 Perl/v5.10.0
Connection Keep-Alive
Keep-Alive timeout=15, max=59
Etag "4010127-3c4-452bf1aefd8c0"
Expires Mon, 02 Aug 2010 14:04:28 GMT
Cache-Control max-age=240
It looks like the Expires header being sent back by the server is only 4 minutes in the future. The heuristics browsers use to decide whether or not to actually make a request (based on Expires values) take into account how close the current time is to the expiration date, so unless the expiration date is still a long way off (weeks, months, years), you can be pretty sure that the browser will make a request for the file.
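For explicit expiration, the freshness check itself is simple: a cached response may be reused without revalidation while its age is less than its freshness lifetime (max-age when present, otherwise Expires minus Date). A sketch of that check:

```python
from datetime import datetime, timedelta

def is_fresh(date_hdr, now, max_age=None, expires=None):
    """True while a cached response may be reused without revalidation."""
    if max_age is not None:
        lifetime = timedelta(seconds=max_age)  # max-age takes precedence over Expires
    elif expires is not None:
        lifetime = expires - date_hdr
    else:
        return False  # no explicit expiration information
    return (now - date_hdr) < lifetime
```

With the headers above (Date 14:00:28, max-age=240), the response is fresh until 14:04:28; note also that any request carrying Cache-Control: max-age=0, like the one shown, forces revalidation regardless.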
I'm having trouble with a particular version of Pocket IE running under Windows Mobile 5.0. Unfortunately, I'm not sure of the exact version numbers.
We had a problem whereby this particular 'installation' would return a locally cached version of a page when the wireless network was switched off. Fair enough, no problem. We cleared the cache of the handheld and started sending the following headers:
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Last-Modified: Thu, 30 Jul 2009 16:42:08 GMT
The Last-Modified header is calculated on the fly and set to 'now'.
Even so, the handheld seems to be caching these pages: the page is sent with those headers, but when the user disconnects the wireless network and clicks a link to the page (which was not supposed to be cached), it still returns the cached file.
Is there some other header (or headers) that should be sent, or is this just a problem with Pocket IE? Or is it possibly something entirely different?
Thanks!
I'm not sure I can answer your question since I have no Pocket IE to test with, but maybe I can offer something that can help.
This is a very good caching reference: http://www.mnot.net/cache_docs/
Also, I'm not sure whether your example is the pasted result of your headers or the code you've set up to send them, but the collection of headers in most language implementations (and, I assume, in most browser implementations) is treated as a map. Therefore, it's possible you've overwritten "no-store, no-cache, must-revalidate" with a second Cache-Control header. In other words, only one can get sent, and if the last one wins, you only sent "post-check=0, pre-check=0".
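To illustrate the map behavior (a Python sketch, not the poster's actual code): setting the same header name twice leaves only the last value.

```python
class Headers:
    """Minimal header collection keyed on the header name, like a map."""
    def __init__(self):
        self._entries = {}

    def set(self, name, value):
        # A second call with the same name silently replaces the first
        self._entries[name.lower()] = (name, value)

    def get(self, name):
        return self._entries[name.lower()][1]

h = Headers()
h.set("Cache-Control", "no-store, no-cache, must-revalidate")
h.set("Cache-Control", "post-check=0, pre-check=0")  # overwrites the line above
```

If this is what's happening, combining all the directives into one comma-separated Cache-Control value avoids the problem.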
You could also try adding a max-age=0 directive to Cache-Control.
In my experience, both Firefox and IE have also seemed more cautious about caching pages served over HTTPS. You could try that if you have it as an option.
If you still have no luck, and Pocket IE is behaving clearly differently from Windows IE, then my guess is that the handheld has special rules for caching based on the assumption that it will often be away from internet connectivity.
Edit:
After you mentioned CNN.com, I realized that you do not have the "private" directive in your Cache-Control header. I think this is what is making CNN.com cache the page but not yours. I believe "private" is the most strict setting available in the "Cache-Control" header. Try adding it.
For example, here are CNN's headers. (I don't think listing "private" twice has any effect)
Date: Fri, 31 Jul 2009 16:05:42 GMT
Server: Apache
Accept-Ranges: bytes
Cache-Control: max-age=60, private, private
Expires: Fri, 31 Jul 2009 16:06:41 GMT
Content-Type: text/html
Vary: User-Agent,Accept-Encoding
Content-Encoding: gzip
Content-Length: 21221
200 OK
If you don't have the Firefox Web Developer Toolbar, it's a great tool for checking the response headers of any site: in the "Information" dropdown, "View Response Headers" is at the bottom.
Although Renesis has been awesome in trying to help me here, I've had to give up.
By 'give up' I mean I've cheated. Instead of trying to resolve this issue on the client side, I went the server side route.
What I ended up doing was writing a PHP function that takes a URL and makes it effectively unique by appending a random GET parameter based on a call to uniqid(). It handles a couple of other details: adding a '?' or a '&' to the URL depending on whether other GET parameters already exist, and making sure any '#' anchor is pushed right to the end, before returning the URL to the browser.
This essentially resolves the issue as each link the browser ever sees is unique: it's never seen that particular URL before and so can't retrieve it from the cache.
Hackish? Yes. Working? So far, so good.
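The same idea in a Python sketch (the PHP original isn't shown, so this is an assumed reconstruction; the parameter name is made up): append a unique query parameter, choosing '?' or '&' as appropriate, and keep any '#' fragment at the end.

```python
import uuid
from urllib.parse import urlsplit, urlunsplit

def bust_cache(url):
    """Return the URL with a unique query parameter so caches never match it."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    token = "nocache=" + uuid.uuid4().hex  # parameter name is an assumption
    query = query + "&" + token if query else token
    # urlunsplit re-appends the fragment after the query, keeping '#' last
    return urlunsplit((scheme, netloc, path, query, fragment))
```

Every call produces a URL the browser has never seen, so it cannot be served from cache; the obvious trade-off is that nothing is ever served from cache, which is exactly the point here.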