How can I enable IIS7 to gzip static files like js and css, and how can I test whether IIS7 is really gzipping them before sending them to the client?
Configuration
You can enable GZIP compression entirely in your Web.config file. This is particularly useful if you're on shared hosting and can't configure IIS directly, or you want your config to carry between all environments you target.
<system.webServer>
  <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
    <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll"/>
    <dynamicTypes>
      <add mimeType="text/*" enabled="true"/>
      <add mimeType="message/*" enabled="true"/>
      <add mimeType="application/javascript" enabled="true"/>
      <add mimeType="*/*" enabled="false"/>
    </dynamicTypes>
    <staticTypes>
      <add mimeType="text/*" enabled="true"/>
      <add mimeType="message/*" enabled="true"/>
      <add mimeType="application/javascript" enabled="true"/>
      <add mimeType="*/*" enabled="false"/>
    </staticTypes>
  </httpCompression>
  <urlCompression doStaticCompression="true" doDynamicCompression="true"/>
</system.webServer>
Testing
To test whether compression is working or not, use the developer tools in Chrome or Firebug for Firefox and ensure the HTTP response header is set:
Content-Encoding: gzip
Note that this header won't be present if the response code is 304 (Not Modified). If that's the case, do a full refresh (hold shift or control while you press the refresh button) and check again.
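To get a feel for what that Content-Encoding: gzip header buys you in payload size, here is a small Python sketch (the payload is made up) showing how much a repetitive, text-like static file shrinks under gzip, which is the same algorithm IIS applies on the wire:

```python
import gzip

# A made-up, repetitive payload, similar in character to minified JS/CSS.
payload = b"function add(a,b){return a+b;}" * 100

compressed = gzip.compress(payload)
print(len(payload), len(compressed))  # the compressed copy is far smaller
assert gzip.decompress(compressed) == payload  # lossless round trip
```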
You will also need to enable the compression feature in the Windows Features control panel, under Internet Information Services → World Wide Web Services → Performance Features.
Global Gzip in HttpModule
If you don't have access to the final IIS instance (shared hosting...), you can create an HttpModule that runs this code on every HttpApplication.BeginRequest event:
HttpContext context = HttpContext.Current;
// Requires using System.IO.Compression; only compress if the client advertises gzip support.
if ((context.Request.Headers["Accept-Encoding"] ?? "").Contains("gzip"))
{
    context.Response.Filter = new GZipStream(context.Response.Filter, CompressionMode.Compress);
    context.Response.AppendHeader("Content-Encoding", "gzip");
    context.Response.Cache.VaryByHeaders["Accept-Encoding"] = true;
}
Testing
No solution is complete without testing. I like to use the Firefox add-on "Live HTTP Headers"; it shows all the information about every HTTP message between the browser and server, including compression and transferred size (which you can compare to the file size on the server).
Under Windows Server 2012 R2 it can be found in the Add Roles and Features wizard, under Web Server (IIS) → Web Server → Performance.
I only needed to add the feature in Windows Features, as Charlie mentioned. For people who cannot find it on Windows 10 or Server 2012+, I struggled a bit, so here is where to look:
Windows 10: Windows Features → Internet Information Services → World Wide Web Services → Performance Features
Windows Server 2012 R2 and 2016: Add Roles and Features → Server Roles → Web Server (IIS) → Web Server → Performance
If you use YSlow with Firebug and analyse your page performance, YSlow will certainly tell you what artifacts on your page are not gzip'd!
If you are also trying to gzip dynamic pages (like aspx) and it isn't working, it's probably because the option is not enabled (you need to install the Dynamic Content Compression module using Windows Features):
http://support.esri.com/en/knowledgebase/techarticles/detail/38616
For all the poor guys who have to struggle with a German server :)
Another easy way to test, without installing anything and independent of the IIS version: paste your URL into this link - SEO Checkup
To add to web.config: http://www.iis.net/configreference/system.webserver/httpcompression
Try Firefox with the Firebug add-on installed. I'm using it; it's a great tool for web developers.
I have enabled Gzip compression as well in my IIS7 using web.config.
Related
I currently have the problem that IIS serves all my cookies with the SameSite=lax attribute after an update of the .NET Framework on Windows Server (https://support.microsoft.com/en-us/help/4524419/kb4524419)
The problem is similar to the one described in: how is the SameSite attribute added to my Asp.net_SessionID cookie automatically?
This breaks most of the IFrames that are used in webpages on another domain, as the browser does not send the ASP.Net session ID back to the server with subsequent requests.
Now while there are some suggestions in the above-mentioned thread, they do not really work for me. This is due to Safari's nonstandard behavior: Safari on macOS and iOS 12.x treats the value "None" for the SameSite attribute as unknown and therefore falls back to "Strict", which again breaks the IFrames for Safari users.
Now I wonder whether it is possible to define an outbound rewrite rule in the IIS web.config that first checks the request headers to see whether the client is using a Safari browser. Depending on the client browser and version, different outbound rewrite rules would then change the cookies to match what the browser expects.
Is it possible to write outbound rules with conditions based on the request? I did not find any documentation or website indicating this works...
I built upon several SO answers to come up with this URL rewrite, which adds SameSite=None to session cookies and removes SameSite=None from all cookies for most incompatible browsers. The aim of this rewrite is to preserve the "legacy" behaviour from before Chrome 80. It specifically covers the Safari on macOS and iOS 12.x scenario you mention.
Full write-up in my Coder Frontline blog:
<rewrite>
  <outboundRules>
    <preConditions>
      <!-- Checks User Agent to identify browsers incompatible with SameSite=None -->
      <preCondition name="IncompatibleWithSameSiteNone" logicalGrouping="MatchAny">
        <add input="{HTTP_USER_AGENT}" pattern="(CPU iPhone OS 12)|(iPad; CPU OS 12)" />
        <add input="{HTTP_USER_AGENT}" pattern="(Chrome/5)|(Chrome/6)" />
        <add input="{HTTP_USER_AGENT}" pattern="( OS X 10_14).*(Version/).*((Safari)|(KHTML, like Gecko)$)" />
      </preCondition>
    </preConditions>
    <!-- Adds or changes SameSite to None for the session cookie -->
    <!-- Note that secure header is also required by Chrome and should not be added here -->
    <rule name="SessionCookieAddNoneHeader">
      <match serverVariable="RESPONSE_Set-Cookie" pattern="((.*)(ASP.NET_SessionId)(=.*))(SameSite=.*)?" />
      <action type="Rewrite" value="{R:1}; SameSite=None" />
    </rule>
    <!-- Removes SameSite=None header from all cookies, for most incompatible browsers -->
    <rule name="CookieRemoveSameSiteNone" preCondition="IncompatibleWithSameSiteNone">
      <match serverVariable="RESPONSE_Set-Cookie" pattern="(.*)(SameSite=None)" />
      <action type="Rewrite" value="{R:1}" />
    </rule>
  </outboundRules>
</rewrite>
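The user-agent preconditions are ordinary regular expressions, so you can sanity-check them outside IIS before deploying. A quick Python sketch (the sample user-agent strings are made up for illustration):

```python
import re

# The three precondition patterns from the config above.
PATTERNS = [
    r"(CPU iPhone OS 12)|(iPad; CPU OS 12)",
    r"(Chrome/5)|(Chrome/6)",
    r"( OS X 10_14).*(Version/).*((Safari)|(KHTML, like Gecko)$)",
]

def incompatible_with_samesite_none(user_agent):
    # logicalGrouping="MatchAny": a single matching pattern is enough.
    return any(re.search(p, user_agent) for p in PATTERNS)

ios12 = "Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15"
chrome80 = "Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 Chrome/80.0.3987.87 Safari/537.36"

print(incompatible_with_samesite_none(ios12))     # True: this browser gets SameSite=None stripped
print(incompatible_with_samesite_none(chrome80))  # False: this browser keeps SameSite=None
```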
This should work for most ASP .Net and ASP .Net Core applications, although newer frameworks have proper code and config options to let you control this behaviour. I would recommend researching all the options available to you before using my rewrite above.
I am looking at the ApplicationHost.config file in IIS server to understand the configurations of Http Compression.
I see the following code:
<httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
  <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
  <dynamicTypes>
    <add mimeType="text/*" enabled="true" />
    <add mimeType="message/*" enabled="true" />
    <add mimeType="application/javascript" enabled="true" />
    <add mimeType="*/*" enabled="false" />
  </dynamicTypes>
  <staticTypes>
    <add mimeType="text/*" enabled="true" />
    <add mimeType="message/*" enabled="true" />
    <add mimeType="application/javascript" enabled="true" />
    <add mimeType="*/*" enabled="false" />
  </staticTypes>
</httpCompression>
(taken from: https://learn.microsoft.com/en-us/iis/configuration/system.webserver/httpcompression/)
And my question for you is:
What does it mean to have the same mimeType in both dynamic and static types?
For example, from the code I gave we see application/javascript in both sections. Now let's say both dynamic and static content compression are enabled: what will happen when we serve an HTTP response with Content-Type application/javascript?
Content served by IIS is either static or dynamic. For the most part, if your content is served by a handler such as ASP.NET or Classic ASP, it falls under the dynamic bucket; if it is a file read straight off the disk, it is static. The example you give doesn't really matter, because if application/javascript is served and is enabled in both sections, then it is eligible for compression either way. A better example: if the javascript is served through the static file handler (i.e. it comes from a .js file on disk), then the static file handler checks the staticTypes section to see if compression is enabled and may compress it. If the javascript comes from a call to script.axd or some other "dynamic" handler, then IIS checks dynamicTypes instead.
So you might ask: why two sections? The reason is simply that static files can be compressed once and then cached, because the files are, well, static. Therefore IIS can be much more liberal with its static compression rules: the file can be compressed for the first person who requests it, that compressed copy cached, and future requests for the same file served directly out of the cache. Of course, the system handles any modifications to the file and updates the cache.
With dynamic content, well, that response may be different for every request, every user, etc. As a result, IIS doesn't attempt to cache the compressed copy and simply compresses it each time.
Hopefully that's enough info to get you started. Side note: with static compression, IIS doesn't actually compress a file for the first request (generally); it waits until the file has been requested a couple of times before going through the effort of compressing it.
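The behaviour described above can be sketched as two strategies: compress once and cache for static files, compress on every request for dynamic responses. A toy Python illustration (not IIS's actual implementation):

```python
import gzip

static_cache = {}  # path -> cached compressed bytes (static content only)

def serve_static(path, read_file):
    # Static: compress on the first request, then reuse the cached copy.
    if path not in static_cache:
        static_cache[path] = gzip.compress(read_file(path))
    return static_cache[path]

def serve_dynamic(render):
    # Dynamic: the body may differ per request, so compress every time.
    return gzip.compress(render())

body = serve_static("site.js", lambda p: b"var x = 1;")
assert gzip.decompress(body) == b"var x = 1;"
assert "site.js" in static_cache  # a second request would hit the cache
```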
I have an ASP.NET website that I'm trying to enable Static Compression for. My website has the following compression configuration.
<httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files" staticCompressionEnableCpuUsage="0" staticCompressionDisableCpuUsage="100" staticCompressionIgnoreHitFrequency="true">
  <clear/>
  <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="10" dynamicCompressionLevel="3" />
  <scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="10" dynamicCompressionLevel="3" />
  <staticTypes>
    <clear/>
    <add mimeType="text/*" enabled="true" />
    <add mimeType="message/*" enabled="true" />
    <add mimeType="application/x-javascript" enabled="true" />
    <add mimeType="application/javascript" enabled="true" />
    <add mimeType="*/*" enabled="false" />
  </staticTypes>
</httpCompression>
<urlCompression doStaticCompression="true" doDynamicCompression="false" dynamicCompressionBeforeCache="false" />
I do not want to enable dynamic compression. According to Microsoft documentation,
Unlike static compression, IIS 7 performs dynamic compression each time a client requests the content, but the compressed version is not cached to disk.
My web server is fairly heavily loaded with processes, so this would be an unwanted burden. But Static Compression is appealing because the compressed files are cached on disk.
However, even after continuously refreshing the localhost page (Ctrl+F5) and waiting 15+ minutes while watching the compression directory, nothing is being cached.
Also, none of the relevant files (css/js/html) are being returned with a gzip Content-Encoding header.
Both dynamic and static compression are installed. Dynamic is turned off. If I turn on dynamic compression, I start seeing the gzip HTTP response headers come back.
What am I missing? Why does Static Compression refuse to work?
IIS 10
I had this problem and tracked it down to a bad URL Rewrite rule. The static assets were living in C:\inetpub\wwwroot\MyProject\wwwroot and the rewrite rule was changing ^assets/(.*) to ./{R:1}, so IIS was looking at the top of MyProject and not finding the file. But then when it handed the request off to the .Net app, the app would see the file and serve it. So the two symptoms were:
gzip worked only when dynamic compression was enabled (because the .Net app was serving the files).
turning off runAllManagedModulesForAllRequests (on the modules element) caused our static files to become 404 errors, basically surfacing the problem of IIS not seeing the file.
To fix it I changed the rewrite rule from ./{R:1} to ./wwwroot/{R:1}.
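The effect of that change can be checked outside IIS; here is a toy Python re-implementation of the wildcard rule as a regex (the file names are made up):

```python
import re

def rewrite(url):
    # Corrected rule: ^assets/(.*) now maps into the wwwroot folder.
    return re.sub(r"^assets/(.*)", r"./wwwroot/\1", url)

print(rewrite("assets/app.js"))  # ./wwwroot/app.js
print(rewrite("index.html"))     # unchanged: the rule doesn't match
```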
Have you looked at this: https://blogs.msdn.microsoft.com/friis/2017/09/05/iis-dynamic-compression-and-new-dynamic-compression-features-in-iis-10/
There's not much context to see from your question... but for me this worked. In my case the file was cached by ASP.NET MVC because it's a bundle of multiple js files. I guess IIS can see it's not a static file on disk, and that's the reason it's treated as dynamic.
The link I posted also helps you see what IIS actually does with your js file, so you can find out why it's not doing compression.
I also saw a line in the link you posted:
Unlike static compression, IIS 7 performs dynamic compression each time a client requests the content, but the compressed version is not cached to disk. This change is made because of the primary difference between static and dynamic content. Static content does not change. However, dynamic content is typically content that is created by an application and therefore changes often, such as Active Server Pages (ASP) or ASP.NET content. Since dynamic content should change often, IIS 7 does not cache it.
Also try to read this post: https://forums.iis.net/t/1071156.aspx
I'm trying to setup Mercurial on IIS 7.5. I have a web.config for an application directory that is ignoring the maxAllowedContentLength attribute and I simply cannot get IIS to accept it! I've tried it a thousand different ways at global, local, and every level. It sticks by its default of 30MB and refuses to let me push changesets that are larger than that. It doesn't even close the connection, it just gets to 30MB and stalls completely. It's not a timeout issue, I've tried pushing from the local machine to its IP address.
What the hell is going on?
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="Python" path="*.cgi" verb="*" modules="CgiModule" scriptProcessor="C:\Python27\python.exe -u &quot;%s&quot;" resourceType="Unspecified" requireAccess="Script" />
    </handlers>
    <security>
      <requestFiltering>
        <requestLimits maxAllowedContentLength="1073741824" />
      </requestFiltering>
    </security>
    <rewrite>
      <rules>
        <rule name="rewrite to hgwebdir" patternSyntax="Wildcard">
          <match url="*" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
          </conditions>
          <action type="Rewrite" url="hgweb.cgi/{R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
  <!-- I don't know if this is supposed to work... it doesn't matter where I put the settings. -->
  <location path="*">
    <system.web>
      <!-- maxRequestLength is in kilobytes (KB) -->
      <httpRuntime maxRequestLength="1048576" /> <!-- 1GB -->
    </system.web>
    <system.webServer>
      <security>
        <requestFiltering>
          <!-- maxAllowedContentLength is in bytes (B) -->
          <requestLimits maxAllowedContentLength="1073741824"/> <!-- 1GB -->
        </requestFiltering>
      </security>
    </system.webServer>
  </location>
</configuration>
I found a few ways of dealing with this issue:
To fix this server-side in IIS, download and install https://www.nartac.com/Products/IISCrypto/Default.aspx and click the BEAST button, or force SSL3.0 by disabling other protocols.
If you don't have access to the IIS server, you can fix it by rolling back Python to version 2.7.2 or earlier.
If you are adventurous, you can modify the mercurial source in sslutil.py, near the top, change the line
sslsocket = ssl.wrap_socket(sock, keyfile, certfile,
cert_reqs=cert_reqs, ca_certs=ca_certs)
to
from _ssl import PROTOCOL_SSLv3
sslsocket = ssl.wrap_socket(sock, keyfile, certfile,
cert_reqs=cert_reqs, ca_certs=ca_certs, ssl_version=PROTOCOL_SSLv3)
This will work around the problem and fix the push limit to mercurial behind IIS.
If you are interested in why Python 2.7.3 broke this, look at http://bugs.python.org/issue13885 for the explanation (it is security-related). If you want to modify Python itself, in Modules/_ssl.c change the line
SSL_CTX_set_options(self->ctx,
SSL_OP_ALL & ~SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS);
back to how it was prior to 2.7.3:
SSL_CTX_set_options(self->ctx, SSL_OP_ALL);
Compile and reinstall python, etc. This adds more SSL compatibility at the expense of potential security risks, if I understand the OpenSSL docs correctly.
Like others, the accepted answer didn't work for me.
The reason the upload fails appears to have to do with an incompatibility in the cipher suite that is negotiated between Mercurial and IIS - specifically, with IIS' default settings, the choice of a CBC-based cipher suite.
Mercurial version 2.9.1 (the one I've tested) sends this cipher suite order to the server. The suites supported by Windows Server 2008 R2 (and IIS 7.5) with an RSA certificate are marked (supported) here:
TLS_DHE_RSA_WITH_AES_256_SHA
TLS_DHE_DSS_WITH_AES_256_SHA
TLS_RSA_AES_256_SHA (supported)
SSL_DHE_RSA_WITH_3DES_EDE_SHA
SSL_DHE_DSS_WITH_3DES_EDE_SHA
SSL_RSA_WITH_3DES_EDE_SHA (supported)
TLS_DHE_RSA_WITH_AES_128_SHA
TLS_DHE_DSS_WITH_AES_128_SHA
TLS_RSA_AES_128_SHA (supported)
SSL_RSA_WITH_RC4_128_SHA (supported)
SSL_RSA_WITH_RC4_128_MD5 (supported)
Only two of those aren't CBC - the RC4 based ones. IIS will pick anything coming before those in both its own and Mercurial's priorities.
The reason IISCrypto 1.3 worked to fix the issue seems not to be that it disabled SSL 2 (although that's still a good idea), but because it moved RC4 above the CBC cipher suites, due to the BEAST attack. In 1.4, RC4 was moved down again, due to newly found vulnerabilities.
So... The best compromise seems to be to move IIS' priority for RC4_128_SHA up above AES_256_SHA. Note that the merits of AES 256 over AES 128 in terms of security are widely debated.
Security-wise, this still prioritizes all the ECDHE CBC ciphers, which Mercurial doesn't support at the moment, but all modern browsers do. IE running on Windows XP as well as Android 2.3 will be using RC4 due to this change - the rest are covered. While RC4 is broken, an attack on it isn't trivial. For my purposes, I think I'll survive. Any user of this method will have to make up their own mind as to whether they'll risk it. :-)
It's still a compromise, and I'm not at all happy about it, but at least I found a workable (and working) compromise. Now if only there was a way to pick cipher suite order on a per-website basis rather than globally on the server...
Thanks to @Sahil for pointing me in the direction of this.
As explained in this rant, IIS 7.5 introduces a default setting for maxAllowedContentLength in Machine.config, which will apparently take precedence over whatever you specify in any Web.config.
To fix this, open IIS Manager, click the server node, choose Configuration Editor, and expand system.webServer/security/requestFiltering and then change requestLimits/maxAllowedContentLength (which happens to default to 30000000 bytes). Remember to click Apply afterwards.
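One thing that makes these limits easy to misconfigure is the unit mismatch: maxRequestLength (ASP.NET) is in kilobytes, while maxAllowedContentLength (IIS request filtering) is in bytes. A quick arithmetic check of the values used in this thread:

```python
# maxAllowedContentLength is in bytes; maxRequestLength is in kilobytes.
max_allowed_content_length = 1073741824  # bytes, from the web.config in this thread
max_request_length_kb = 1048576          # kilobytes

assert max_allowed_content_length == 1024 ** 3          # exactly 1 GiB
assert max_request_length_kb * 1024 == max_allowed_content_length

# The default maxAllowedContentLength of 30000000 bytes, expressed in MiB:
print(30000000 / (1024 * 1024))  # roughly 28.6, hence the ~30MB stall
```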
This is an incompatibility in the SSL module of Python 2.7.3+.
Bug documented here on the TortoiseHg site, but it applies to all platforms pushing into IIS over HTTPS.
https://bitbucket.org/tortoisehg/thg/issue/2593/cant-push-over-30mb-to-iis-via-https
In my case I had to make more changes in the IISCrypto software referenced above.
I have IIS 7.5 and IISCrypto version 1.4 (latest at time of writing)
Changing to the "Best" or "PCI" profile did not work for me, so I did the following:
1. Changed the profile back to the "Best" option.
2. Found the box in the bottom left corner named SSL Cipher Suite Order.
3. Disabled/unchecked all CBC-based ciphers.
4. Restarted the computer/server.
I found a solution in this thread, and the answer from Zach Mason worked for me, as described above.
Hope this helps someone.
I have the same setup running, and I needed to add the maxAllowedContentLength attribute today.
I just inserted it at the bottom of my existing web.config, and it worked at once without problems (with a >100MB commit).
My complete web.config looks like this now:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="Python" path="*.cgi" verb="*" modules="CgiModule" scriptProcessor="C:\Python26\python.exe -u &quot;%s&quot;" resourceType="Unspecified" />
    </handlers>
    <rewrite>
      <rules>
        <rule name="rewrite to hgwebdir" patternSyntax="Wildcard">
          <match url="*" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
          </conditions>
          <action type="Rewrite" url="hgweb.cgi/{R:1}" />
        </rule>
      </rules>
    </rewrite>
    <security>
      <requestFiltering>
        <requestLimits maxAllowedContentLength="2147482624" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
I would like to enable SSI on an Azure Web Site, but using .html rather than .shtml. SSI is enabled, but for the life of me I can't find a way to get it to process .html.
Locally I've added a handler to web.config
<system.webServer>
  <handlers accessPolicy="Read, Script">
    <add name="ASPClassicHtml" path="*.html" verb="GET,HEAD,POST" modules="IsapiModule" scriptProcessor="%IIS_BIN%\asp.dll" resourceType="File" />
  </handlers>
</system.webServer>
And that works fine, but when I upload to the Azure web site I get the following error:
The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.
Any suggestions?
It's not possible at the moment, but SSI module inclusion on WAWS is apparently in the pipeline.
http://social.msdn.microsoft.com/Forums/en-US/windowsazurewebsitespreview/thread/391a7918-00e8-49af-b2db-675980aebbd0