We are having problems with static files like CSS and JS being served without being gzipped. We use a CDN, so far fewer requests reach our server farm, and this results in IIS 7.5 serving the files uncompressed.
I have tried altering the frequentHitThreshold to once every 24 hours, but the problem remains: if a CDN node is the first to hit a particular server in the farm, it will be served the uncompressed file and cache it indefinitely.
So is there any way to fix this issue, i.e., to force IIS to serve the gzipped file from the first request?
Any help much appreciated,
Thanks.
Have you looked at this? I don't know if it will force the first request or not, though. http://coderjournal.com/2008/04/iis-7-compress-javascript-gzip/
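If you can edit applicationHost.config, lowering IIS's "frequent hit" threshold is the usual way to force static compression from the very first request. A minimal sketch (serverRuntime is locked at server level by default, and the ten-second window here is just an example):

<system.webServer>
  <!-- Treat every request as a "frequent hit" so static compression
       applies on the first request instead of the second. -->
  <serverRuntime enabled="true"
                 frequentHitThreshold="1"
                 frequentHitTimePeriod="00:00:10" />
</system.webServer>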
I have been trying to generate an SSL certificate for one of our projects, which runs on an Azure VM with no IP restrictions. However, the generated challenge file throws a 404 error and is not accessible over the web.
I have tried the following:
Moving the static content type above the extensionless options in IIS
Adding a MIME type for text/json, text/html
Neither of the above works, which is making it really hard for me to generate an SSL certificate using this service. Any idea how I can make the file accessible? I have given full access to the specific App Pool identity, so permissions don't seem to be the issue here; it's just the way extensionless files are handled in IIS.
Any help is appreciated.
Thanks,
Vishal
Just add a new MIME type in IIS for extensionless files: set the file name extension to "." and the MIME type to text/plain.
Then try the URL in your browser; you will see that the challenge file is served as plain text.
Now you can pass the Let's Encrypt authentication :)
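If you'd rather not touch the IIS UI, the same thing can be done with a small web.config dropped into the challenge folder itself. A minimal sketch (the .well-known/acme-challenge path is the default webroot layout most ACME clients use):

<!-- web.config placed inside .well-known/acme-challenge -->
<configuration>
  <system.webServer>
    <staticContent>
      <!-- Serve extensionless challenge files as plain text -->
      <mimeMap fileExtension="." mimeType="text/plain" />
    </staticContent>
  </system.webServer>
</configuration>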
Also, if you're using a system with lots of custom routing or a framework that interferes with how URLs are handled (e.g. a CMS), ensure that you've told it to ignore /.well-known
We often use Umbraco for public-facing sites and I keep forgetting that I need to add ~/.well-known to the umbracoReservedUrls app setting in the web.config. Hopefully next time I'm stuck, I'll come across this answer...
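For Umbraco specifically, that means the appSettings entry sketched below; note that the value is a comma-separated list, so keep whatever entries your install already has and append to them:

<!-- web.config: keep Umbraco's pipeline away from the ACME challenge path.
     The existing entries vary by install; append, don't replace. -->
<appSettings>
  <add key="umbracoReservedUrls" value="~/umbraco,~/.well-known/" />
</appSettings>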
Taking inspiration from the accepted answer, I did the following:
I was using Plesk for Windows on GoDaddy.
Go to Web server settings.
In the MIME types section, add the following entry and click OK.
text/plain .
Note the dot at the end of the above setting.
I was looking for a good way to minify my CSS, JS and HTML, and found this package on Google Code: https://code.google.com/p/minify/. The issue is that I have an Nginx web server, while this minifying application needs mod_rewrite, which comes with Apache only. I got this message when I ran the script:
Your webserver does not seem to support mod_rewrite (used in /min/.htaccess). Your Minify URIs will contain "?", which may reduce the benefit of proxy cache servers.
Now I want to know whether there is a way to use this script on my Nginx server. If not, what would be the alternative?
I'm looking to minify CSS, JS and HTML so that my pages load fast enough for my clients to browse the site quickly...
any idea?
Thanks
Update #1:
I just found out that I had to add a rewrite rule (replacing the .htaccess rule) on my Nginx server to redirect the folder and its contents.
location / {
    rewrite ^/min/([a-z]=.*) /min/index.php?$1 last;
}
but that results in a 404 error... any idea what the correct code is?
The way you have it is actually correct. The issue you're having is likely the same one I am (and I'm not sure why it happens), but it basically comes down to NGINX's rewrite rules ignoring the ? next to the $1 in the rule.
A workaround is simple: instead of going to example.com/min/f=path/to/file.css, just put a ? in front of the f: example.com/min/?f=path/to/file.css.
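For completeness, here is a sketch of how that rewrite usually sits alongside a standard PHP block. The socket path is an assumption to adapt to your setup; the key detail is that the rewritten /min/index.php URI must land in a location that actually executes PHP, otherwise you get exactly the 404 described above:

# Turn /min/f=... or /min/g=... into /min/index.php with that query string.
location /min/ {
    rewrite ^/min/([a-z]=.*)$ /min/index.php?$1 last;
}

# Standard PHP-FPM handoff; adjust the socket or port to your server.
location ~ \.php$ {
    include       fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass  unix:/run/php/php-fpm.sock;
}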
A better method would be to just serve the files as a group:
For the best performance you can serve these files as a pre-defined group with a URI like:
/min/g=keyName
To do this, add a line like this to /min/groupsConfig.php:
return array(
    'keyName' => array('//path/to/js/file.js', '//path/to/js/otherfile.js')
);
Chances are though, you may need to use /min/?g=keyName.
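In the page itself, the whole group then comes down as a single request, e.g. (keyName as defined above):

<script src="/min/?g=keyName"></script>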
As a side note, the saving from minifying and bundling isn't just ~1 KB; it can be (and tends to be) much more, and it has a huge impact on the user (especially on mobile devices). A browser can make six concurrent connections, so if any more files than that are being downloaded, the user is waiting for them. One of the projects I recently worked on made roughly 60 requests for different JS and CSS files (the original coders were... all-inclusive in the plugins department). The entire page was roughly 1 MB and took 3 seconds to download uncached (nothing was cached, because the previous coders didn't understand caching). I minified, bundled and compressed everything into 3 files (and removed the useless stuff too) and got the entire page down to 20 KB uncached and 3 KB cached, with an uncached load time under 20 ms.
That was an extreme example of poor coding, though. One final thought: if you don't go into the config, add the cache directories, and cache everything, it will cause a slight performance hit on the server (though probably not as severe as serving up a dozen extra files). I suggest enabling APC or memcache, or at least specifying the cache folder for Minify to store its files in.
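Pointing Minify at a disk cache is a one-line change in its config. A sketch, assuming Minify 2.x and a path you'd adapt to your own layout:

<?php
// /min/config.php: let Minify cache generated output on disk instead of
// re-minifying on every request. The folder must be writable by PHP.
$min_cachePath = '/var/cache/minify';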
The Linux guy here has a question about IIS 6.
The case is that I have a site running; when I perform some specific tasks on it (like deleting a specific item three times in a row), the site breaks and a completely blank page appears. Checking the response headers, I noticed that the server sends a "403 Forbidden: IP address of the client has been rejected."
Through a proxy, I can connect just fine.
Checking the site options in IIS Manager shows me that my IP is not blocked globally; it's something that just happens.
Where can I check for this? It happens automatically, and the block ends after about 8-12 minutes every time.
Best regards,
Jonas
Do you have access to IIS?
I assume the blocking occurs on the web site side (inside the ASP code), not in IIS.
Please check the IIS console and make sure that there are no blocked IPs. If that's the case, you should look for a database table or some config file where all the blocked IPs are stored. After that, you should be able to find the ASP code that is responsible for the blocking...
You could also try a quick search inside all the ASP pages for text like "REMOTE_ADDR" and ".ServerVariables"; the kind of pattern to look for might resemble the sketch below.
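A hypothetical example of such in-app blocking (IsBlocked and the response text are invented for illustration; your site's actual code will differ):

<%
' Typical shape of an in-application IP block in classic ASP.
Dim clientIp
clientIp = Request.ServerVariables("REMOTE_ADDR")
If IsBlocked(clientIp) Then   ' assumed lookup against a DB table or file
    Response.Status = "403 Forbidden"
    Response.End
End If
%>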
We have a fairly high-traffic static site (i.e. no server code) with lots of images, scripts and CSS, hosted on IIS 7.0.
We'd like to turn on some caching to reduce server load, and are considering setting the expiry of web content to some time in the future. In IIS, we can do this at a global level via the "Expire Web content" section of the common HTTP headers in the IIS response headers module, perhaps setting content to expire 7 days after serving.
As far as I can tell, all this actually does is set the max-age HTTP response header, which makes sense, I guess.
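For reference, the web.config fragment that the UI writes looks something like this, if I'm reading it right (the 7 days expressed as d.hh:mm:ss):

<system.webServer>
  <staticContent>
    <clientCache cacheControlMode="UseMaxAge"
                 cacheControlMaxAge="7.00:00:00" />
  </staticContent>
</system.webServer>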
Now, the confusion:
Firstly, all browsers I've checked (IE9, Chrome, FF4) seem to ignore this and still make conditional requests to the server to see whether the content has changed. So I'm not entirely sure what the max-age response header will actually affect. Could it be older browsers? Or web caches?
It's possible that we may want to change an image on the site at short notice... I'm guessing that if the max-age is actually honoured by something then, by its very nature, it won't check whether the image has changed for 7 days... so that's not what we want either.
I wonder if a best practice is to partition one's site into folders of content that really won't change often, and turn on long-term expiry only for those folders? Perhaps varying the query string to force a refresh of content in those folders if needed (e.g. /assets/images/background.png?version=2)?
Anyway, having looked through the (rather dry!) HTTP specification, and some of the tutorials, I still don't really have a feel for what's right in our situation.
Any real-world experience of a situation similar to ours would be most appreciated!
Browsers fetch the HTML first, then all the resources inside (css, javascript, images, etc).
If you make the HTML expire soon (e.g. 1 hour or 1 day) and then make the other resources expire after 1 year, you can have the best of both worlds.
When you need to update an image, or other resource, you just change the name of that file, and update the HTML to match.
The next time the user gets fresh HTML, the browser will see a new URL for that image, and get it fresh, while grabbing all the other resources from a cache.
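A sketch of what that looks like in practice (the file names and version tokens here are invented; any fingerprint in the name works):

<!-- Far-future cacheable: the name changes whenever the content does -->
<link rel="stylesheet" href="/assets/css/site.v2.css">
<img src="/assets/images/background.v2.png" alt="">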
Also, at the time of this writing (December 2015), Firefox limits the maximum number of concurrent connections to a server to six. This means that if you have 30 or more resources all hosted on the same website, only six are being downloaded at any one time until the page has loaded. You can speed this up a bit by using a content delivery network (CDN) so that more of them download at once.
This is a weird one. I have a site deployed on a web server (Win 2003, IIS 6).
For some reason, I can do an HTTP GET on a file if I specify it with "camel casing", i.e. Styles.css, but I cannot GET the same file using lowercase, i.e. styles.css.
In the latter case I get a 401.3 error.
I know that 401.3 is a security issue, but I'm confused because, as far as I know, ACLs have nothing to do with case sensitivity.
What is even weirder is that I have (in the same folder) other files that I cannot GET in either casing, and they have exactly the same security settings.
Any ideas on what the issue could be?
Regards,
Alex
Can you check your web log or possibly use Process Monitor to see what happens when that file is requested?
The problem was related to a proxy server in front of my server. That box caches the files, so requests were not really hitting the IIS server every time.
Thanks Micky for your help.