Hello.
I am using HttpContext.RewritePath to direct requests to inner site folders depending on the request and host.
Problem:
When I make any request that requires RewritePath for a static file that is cached and gzipped by IIS, the response contains the original, non-compressed file content with a Content-Encoding: gzip header, which leads to a “Content decoding has failed” error.
But when I make the same request with the full directory path (in that case RewritePath is skipped in my code), I get the correct gzipped content with Content-Encoding: gzip.
E.g.:
Situation with error:
Request url: localhost/lib/ext_3.4.0/resources/css/ext-all.css
The request path is rewritten using HttpContext.RewritePath to: localhost/_sites/mainSite/lib/ext_3.4.0/resources/css/ext-all.css
The first response is not gzipped - usual IIS behavior. When I press Ctrl+F5, I get the “Content decoding has failed” error. Using Fiddler2, I can see that the response content is not gzipped yet carries a Content-Encoding: gzip header.
Situation without error:
Request url: localhost/_sites/mainSite/lib/ext_3.4.0/resources/css/ext-all.css
The request path is not rewritten because it is not needed.
The first response is again not gzipped. When I press Ctrl+F5, I get the normal file content. Using Fiddler2, I can see that the response content is gzipped (about 5 times smaller) and carries a Content-Encoding: gzip header.
I can't throw away RewritePath and I need IIS gzip. Is there any way to make them friends?
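For reference, one blunt diagnostic workaround is to turn off IIS static compression so the cached gzipped entries can no longer be served against a rewritten path. This trades away gzip for static files, so it is a way to confirm the cause rather than a real fix (`doStaticCompression` is a standard IIS 7+ urlCompression attribute):

```xml
<!-- web.config sketch: disable static compression so the compression
     cache cannot serve a mismatched entry for a rewritten path -->
<configuration>
  <system.webServer>
    <urlCompression doStaticCompression="false" />
  </system.webServer>
</configuration>
```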
I have a problem with my IIS configuration.
I have a SOAP service available via IIS.
When I send a request with good values, I receive a 200 with content type text/xml and an XML string body.
When I send a request with wrong values, my code throws a FaultException. From Visual Studio I receive a 500 with my values in the XML body, but in my server environment I am somehow redirected to an error page.
I tried to set this up in the web.config file, but even after removing all values connected with errorPages, the problem is still the same.
Any idea how to fix this?
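When IIS replaces an error response body with its own error page, the usual culprit is the IIS 7+ custom errors feature rather than ASP.NET errorPages. Setting `existingResponse="PassThrough"` tells IIS to leave the 500 response your code produced (including the SOAP fault XML) untouched:

```xml
<!-- web.config: stop IIS from substituting its error page for the
     500 response that carries the SOAP fault XML -->
<configuration>
  <system.webServer>
    <httpErrors existingResponse="PassThrough" />
  </system.webServer>
</configuration>
```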
I am dealing with a weird error that is reproducible only in production over an HTTPS connection.
Firstly, I am trying to download a *.csv file; my backing-bean code looks as follows:
ExternalContext externalContext = facesContext.getExternalContext();
try (OutputStream outputStream = externalContext.getResponseOutputStream()) {
    String exportFileName = exportHandler.getExportFileName();
    String contentType = externalContext.getMimeType(exportFileName);
    externalContext.responseReset();
    externalContext.setResponseCharacterEncoding(CHARSET);
    externalContext.setResponseContentType(contentType);
    externalContext.setResponseHeader("Content-disposition", "attachment; filename=\"" + exportFileName + "\"");
    int length = exportHandler.writeCsvTo(outputStream); // a BufferedWriter writes the lines and the bytes are summed up
    externalContext.setResponseContentLength(length);
    outputStream.flush();
}
facesContext.responseComplete();
Everything works fine over an HTTP connection or on localhost (even HTTPS on localhost works), but in production over an HTTPS connection, different web browsers give different error messages.
Chrome gives the following message: "The webpage at ... might be temporarily down or it may have moved permanently to a new web address. ERR_HTTP2_PROTOCOL_ERROR"
Firefox starts to load and then nothing happens; it seems to take forever.
Under IE: "The connection to the website was reset. Error Code: INET_E_DOWNLOAD_FAILURE"
My guess is that it has to do with the response content length, as I am not sure I still need it after writing to the OutputStream.
The response header looks like this:
Connection: keep-alive
Content-disposition: attachment; filename="test.csv"
Content-Type: text/csv; charset=ISO-8859-15
Date: Mon, 04 May 2020 10:16:41 GMT
Transfer-Encoding: chunked
I'm not sure why my response ends up with chunked transfer encoding; as far as I know, that can only happen when the content length is not set. Does anybody have an idea? Am I missing some header constraints in my response, like setting Cache-Control?
The difference might actually be between HTTP/1.1 and HTTP/2 rather than directly between HTTPS and HTTP. This might be occurring because HTTPS allows negotiating an upgrade to HTTP/2 during the handshake, while in plain HTTP no such negotiation takes place.
With that in mind, the header-parsing implementations in the browsers might be different. If I had to guess, this is about the Content-Disposition filename.
Over HTTP/2, most browsers will fail to parse the Content-Disposition filename if it contains non-US-ASCII characters. HTTP/1.1 implementations are more lenient about that.
You can still have UTF-8 filenames with the Content-Disposition filename* attribute.
Currently the most supported Content-Disposition format is:
attachment; filename="<us-ascii filename>"; filename*=utf-8''<url-encoded-utf-8-filename>
e.g.
attachment; filename="EURO rates"; filename*=utf-8''%e2%82%ac%20rates
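A sketch of building that header value programmatically (Java used here for illustration; the `ContentDisposition` class and `attachment` helper are invented names). One gotcha: `URLEncoder` encodes spaces as `+`, while RFC 5987 expects `%20`, so the `+` has to be replaced.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ContentDisposition {
    // Builds a Content-Disposition value with both an ASCII fallback
    // (filename) and an RFC 5987 extended parameter (filename*).
    static String attachment(String asciiFallback, String utf8Name) {
        // URLEncoder uses '+' for spaces; RFC 5987 requires %20.
        String encoded = URLEncoder.encode(utf8Name, StandardCharsets.UTF_8)
                                   .replace("+", "%20");
        return "attachment; filename=\"" + asciiFallback
             + "\"; filename*=utf-8''" + encoded;
    }

    public static void main(String[] args) {
        System.out.println(attachment("EURO rates", "€ rates"));
    }
}
```

This prints `attachment; filename="EURO rates"; filename*=utf-8''%E2%82%AC%20rates`, matching the format above (upper- vs lower-case hex digits in percent-encoding are equivalent).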
I'm trying out the Azure Mobile App API and getting an error on PATCH calls.
GET, POST, and DELETE work fine.
Here is what my url looks like:
PATCH http://mymobileappapi.azurewebsites.net/tables/Skill/c89027fa-edce-4d36-b42a-ecb0920ebab6
body:
{
"name": "Leadership SDFF"
}
I have these headers too (as I said, the other HTTP verbs work):
ZUMO-API-VERSION 2.0.0
Content-Type Application/Json
And I get a 500 error back with this in the body:
{
"error": "An item to update was not provided"
}
The same id works when I do a GET using that id...
And when I make the same call with the same body using PUT, I get a 404 Not Found without any content in the response body.
Any ideas?
It turns out our implementation requires the content-type header value to be lower case, i.e. application/json works, whereas Application/Json doesn't. I've updated this issue to be the placeholder for the fix. As a workaround in the meantime, make the value for the content-type header lower case.
https://github.com/Azure/azure-mobile-apps-node/blob/master/src/express/middleware/parseItem.js#L27
should use req.get instead of req.headers. Keep in mind that values can also include encoding, e.g. application/json; charset=utf-8
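The linked middleware is Node/Express, but the general fix - compare only the media type, case-insensitively, and ignore parameters such as `; charset=utf-8` - can be sketched like this (Java used here for illustration; the helper name is hypothetical):

```java
import java.util.Locale;

public class MediaTypeCheck {
    // Returns true if the Content-Type header denotes JSON, comparing the
    // media type case-insensitively and ignoring parameters, since media
    // types are case-insensitive and may carry e.g. "; charset=utf-8".
    static boolean isJson(String contentType) {
        if (contentType == null) return false;
        String mediaType = contentType.split(";", 2)[0].trim();
        return mediaType.toLowerCase(Locale.ROOT).equals("application/json");
    }

    public static void main(String[] args) {
        System.out.println(isJson("Application/Json"));                // true
        System.out.println(isJson("application/json; charset=utf-8")); // true
        System.out.println(isJson("text/xml"));                        // false
    }
}
```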
Here is link to the issue:
https://github.com/Azure/azure-mobile-apps-node/issues/368
It seems possible to inject JavaScript in a GET request when referring to the /xsp/.ibmmodres/ XSP/Domino resources.
Normally, when you try this on .nsf/ resources, you get a correct default or custom error page without XSS possibilities. Special characters are substituted.
Example:
- http://[server]/[path]/[dbname].nsf/%3Cscript%3Ealert%28document.cookie%29%3C/script%3E
Result:
HTTP Web Server: Cannot find design element
But referring to the /xsp/.ibmmodres/ resources yields XSS possibilities.
Example:
http://[server]/[path]/[dbname].nsf/xsp/.ibmmodres/%3Cscript%3Ealert%28document.cookie%29%3C/script%3E
Result:
I get a 404 error page: "Cannot load unregistered resource /"
And it executes CSJS and shows, for example, DomAuthSessID!
How is this possible?
Is there a way to avoid this?
Please help!
Here is an article about how to avoid this:
http://www.wissel.net/blog/d6plinks/SHWL-8XS3MY
Check your Domino version. It should be fixed in 8.5.3 FP2 (not fully sure about that), and definitely in the 9.0 beta.
Other than that, follow my instructions and create some web rules:
Type of rule: HTTP response headers
Incoming URL pattern: */xsp/.ibmxspres/*
HTTP response codes: 404
Expires header: Don't add header
Custom header: Content-Type : text/plain (overwrite)
Type of rule: HTTP response headers
Incoming URL pattern: */xsp/.ibmmodres/*
HTTP response codes: 404
Expires header: Don't add header
Custom header: Content-Type : text/plain (overwrite)
I'm currently trying to pass PCI compliance for one of my clients' sites, but the testing company is flagging up a vulnerability that I don't understand!
The (site removed) details from the testing company are as follows:
The issue here is a cross-site scripting vulnerability that is commonly associated with e-commerce applications. One of the tests appended a harmless script in a GET request on the end of your site URL. It flagged as a cross-site scripting vulnerability because this same script that was entered by the user (our scanner) was returned by the server unsanitized in the header. In this case, the script was returned in the header, so our scanner flagged the vulnerability.
Here is the test I ran from my terminal to duplicate this:
GET /?osCsid=%22%3E%3Ciframe%20src=foo%3E%3C/iframe%3E HTTP/1.0
Host: (removed)
HTTP/1.1 302 Found
Connection: close
Date: Tue, 11 Jan 2011 23:33:19 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
Location: http://www.(removed).co.uk/index.aspx?osCsid="><iframe src=foo></iframe>
Set-Cookie: ASP.NET_SessionId=bc3wq445qgovuk45ox5qdh55; path=/; HttpOnly
Cache-Control: private
Content-Type: text/html; charset=utf-8
Content-Length: 203
<html><head><title>Object moved</title></head><body>
<h2>Object moved to here.</h2>
</body></html>
The solution to this issue is to sanitize user input on these types of requests, making sure characters that could trigger executable scripts are not returned in the header or page.
Firstly, I can't reproduce the result that the tester got; it only ever returns a 200 header which doesn't include the location, nor will it return the "Object moved" page. Secondly, I'm not sure how (on IIS 6) to stop it returning a header with the query string in it! Lastly, why does code in the header matter? Surely browsers wouldn't actually execute code from the HTTP header?
Request: GET /?osCsid=%22%3E%3Ciframe%20src=foo%3E%3C/iframe%3E HTTP/1.0 Host:(removed)
The <iframe src=foo></iframe> is the issue here.
Response text:
<html><head><title>Object moved</title></head><body>
<h2>Object moved to here.</h2>
</body></html>
The response link is:
http://www.(removed).co.uk/index.aspx?osCsid="><iframe src=foo></iframe>
Which contains the contents from the request string.
Basically, someone can send someone else a link in which your osCsid contains text that allows the page to be rendered in a different way. You need to make sure that osCsid input is sanitized or filtered against things like this. For example, I could provide a string that lets me load in whatever JavaScript I want, or makes the page render entirely differently.
As a side note, it tries to forward your browser to that non-existent page.
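The underlying fix is to encode the reflected value so it cannot break out of the Location header. A minimal sketch (Java for illustration; the class name and URL are hypothetical):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class SafeRedirect {
    // Percent-encodes an untrusted query value before echoing it into a
    // redirect URL, so characters like '"', '<' and '>' cannot inject
    // markup into the Location header or any page that echoes the URL.
    static String locationFor(String osCsid) {
        String safe = URLEncoder.encode(osCsid, StandardCharsets.UTF_8);
        return "http://www.example.invalid/index.aspx?osCsid=" + safe;
    }

    public static void main(String[] args) {
        // The scanner's payload comes back fully percent-encoded.
        System.out.println(locationFor("\"><iframe src=foo></iframe>"));
    }
}
```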
It turned out that I had a Response.Redirect for pages accessed via HTTPS which don't need to be secure, and this was returning the location as part of the redirect. Changing this to:
Response.Status = "301 Moved Permanently";
Response.AddHeader("Location", Request.Url.AbsoluteUri.Replace("https:", "http:"));
Response.End();
fixed the issue.