I configured an Azure Front Door with two rules:
(HTTP endpoint) Redirect HTTP to HTTPS
(HTTPS endpoint) Forward HTTPS to the backend pool
When requesting the HTTP endpoint, Front Door answers with this:
HTTP/1.1 302 Found
Location: https://example.com/
Server: Microsoft-IIS/10.0 <------
X-Azure-Ref: xxxxxxxx
Date: Wed, 08 Jul 2020 12:00:00 GMT
Content-Length: 0
Is it possible to remove this header? I know Front Door is a managed resource, but I can't find any documentation on this matter or on whether it's normal.
I don't believe it's my backend answering, because my HTTPS endpoint doesn't return that header... but maybe?
You can set up a Rules Engine configuration in Front Door.
In my environment the Server header may not be displayed exactly as it is in yours, but you can still refer to my screenshots for the configuration.
I found that some headers, such as Date, still do not take effect even after I modify them; they may be set by the Azure platform itself and cannot be changed. You can try modifying the Server header the same way.
If the Server header still remains unchanged after the modification, then there is no way to remove it.
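If you prefer the command line, the same kind of rule can be sketched with the Azure CLI front-door extension. This is from memory and untested, so verify the exact flags with az network front-door rules-engine rule create --help; the resource names below are placeholders:
C:\>az network front-door rules-engine rule create ^
      --resource-group MyRG ^
      --front-door-name MyFrontDoor ^
      --rules-engine-name MyRulesEngine ^
      --name RemoveServerHeader ^
      --priority 1 ^
      --action-type ResponseHeader ^
      --header-action Delete ^
      --header-name Server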
A related post you can refer to:
ASP.NET MVC 5 Azure App ZAP Scan indicates Proxy Disclosure vulnerability - how can we prevent that?
In that web app's case, the Server header could not be changed no matter how the program was modified or what other approaches were tried.
So if the method above works for you, it will be a great help.
If it doesn't work, you don't have to spend more time on this problem; you can raise a support ticket in the portal to confirm.
Related
I set up an Azure Verizon Premium CDN a few days ago as follows:
Origin: An Azure web app (.NET MVC 5 website)
Settings: Custom Domain, no geo-filtering
Caching Rules: standard-cache (doesn't care about parameters)
Compression: Enabled
Optimized for: Dynamic site acceleration
Protocols: HTTP, HTTPS, custom domain HTTPS
Rules: Force HTTPS via Rules Engine (if request scheme = http, 301 redirect to https://{customdomain}/$1)
So - this CDN has been running for a few days now, but the ADN reports are saying that nearly 100% (99.36%) of the cache status is "CONFIG_NOCACHE" (Description: "The object was configured to never be cached in accordance with customer-specific configurations residing on the edge servers, so the response was served via the origin server.") A few (0.64%) of them are "NONE" (Description: "The cache was bypassed entirely for this request. For instance, the request was immediately rejected by the token auth module, or the client request method used an uncacheable request method such as "PUT".") Also, in the "Cache Hit" report, it says "0 hits, 0 misses" for every day. Nothing is coming through the "HTTP Large" side, only "ADN".
I couldn't find these exact messages while searching around, but I've tried:
Updating the cache-control header to max-age, public (i.e. cache-control: public,max-age=1209600) - see the web.config sketch after this list
Updating the cache-control header to max-age (cache-control: max-age=1209600)
Updating the expires header to a date way in the future (expires: Tue, 19 Jan 2038 03:14:07 GMT)
Using different browsers so the request cache headers differ. In Chrome, the request sends "cache-control: no-cache"; in Firefox, it sends "Cache-Control: max-age=0". In any case, I'd assume the users on the website wouldn't have these same settings, right?
Refreshing the page a bunch of times, and looking at the real time report to see hits/misses/cache statuses, and it shows the same thing - CONFIG_NOCACHE for almost everything.
Tried running a "worldwide" speed test on https://www.dotcom-tools.com/website-speed-test.aspx, but that had the same result - a bunch of "NOCACHE" hits.
Tried adding ADN rules to set the internal and external max age to 864000 sec (10 days).
Tried adding an ADN rule to ignore "no-cache" requests and just return the cached result.
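For reference, a standard way to emit those max-age values from an IIS/.NET MVC origin is a web.config block along these lines (a sketch; 14.00:00:00 equals the 1209600 seconds above):
<configuration>
  <system.webServer>
    <staticContent>
      <!-- Emits "Cache-Control: public,max-age=1209600" (14 days) on static files -->
      <clientCache cacheControlMode="UseMaxAge"
                   cacheControlMaxAge="14.00:00:00"
                   cacheControlCustom="public" />
    </staticContent>
  </system.webServer>
</configuration>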
So, the message for "NOCACHE" says it's a node configuration issue... but I haven't really even configured it! I'm so confused. It could also be an application issue, but I feel like I've tried all the different permutations of "cache-control" that I can. Here's an example of one file that I'd expect to be cached:
Ultimately, I would hope that most of the requests are being cached, so I'd see most of the requests be "TCP Hit". Maybe that's incorrect? Thanks in advance for your help!
So, I eventually figured out this issue. Apparently the Azure Verizon Premium CDN ADN platform has "bypass cache" enabled by default.
To disable this behavior you need to add additional features to your caching rules.
Example:
IF Always
Features:
Bypass Cache: Disabled
Force Internal Max-Age: Response 200, 864000 seconds
Ignore Origin No-Cache: 200
Has anyone used IdentityServer3 with WebAuthenticationBroker to implement SSO in a Universal Windows Platform (UWP) app?
For some reason the broker class decides to POST (!) to the STS at some point, and it does not see the correct redirect (in the form ms-app://sid#tokens) coming back from the STS.
I can see in the fiddler traces that STS is indeed redirecting to the proper location in the last step:
HTTP/1.1 302 Found
Content-Length: 0
Location: ms-app://s-1-15-2-38.../#code=0bdb86...&id_token=eyJ0eX...&access_token=eyJ0e...&token_type=Bearer&expires_in=7776000&scope=openid%20offline_access%20email&state=xyz&session_state=367wnzhjdQ2r9TiX7sZmQ03y_kjBMKNVVwr6xfuByQ0.cd1e1d29ed153a7b4743bd0f51def6a3
The answer is here (IdentityModel/IdentityModel.OidcClient.Samples):
https://github.com/IdentityModel/IdentityModel.OidcClient.Samples/tree/master/Uwp/UwpSample
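Roughly, the shape of that sample (a from-memory sketch, not verbatim from the repo; WabBrowser is the sample's IBrowser wrapper around WebAuthenticationBroker, and the authority/client values are placeholders):
// Inside an async method in the UWP app
var options = new OidcClientOptions
{
    Authority = "https://your-sts.example.com",  // placeholder STS address
    ClientId = "uwp.client",                     // placeholder client id
    Scope = "openid offline_access email",
    // The broker's callback URI - the ms-app://sid form from the question
    RedirectUri = WebAuthenticationBroker.GetCurrentApplicationCallbackUri().AbsoluteUri,
    Browser = new WabBrowser()                   // delegates the UI to the broker
};

var client = new OidcClient(options);
var result = await client.LoginAsync(new LoginRequest());
if (!result.IsError)
{
    var accessToken = result.AccessToken;        // tokens parsed from the redirect
}
The key point is that OidcClient builds the authorize request and parses the broker's response itself, instead of leaving the redirect handling to the broker.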
I've got 6 identical machines running IIS and Apache. Today one of them decided to just stop serving requests. I can access all of the webapps when I try from localhost/resource but when I try from url/resource I get a 404. I did a Get request against the machine that isn't working and I get this back:
Server: Microsoft-HTTPAPI/2.0
Connection: close
Compared to a working server:
Server: Microsoft-IIS/8.5
X-Powered-By: ASP.NET
Content-Type: text/html
I tried searching for this problem but came up with nothing. Anyone got any ideas?
Windows has an HTTP service (HTTP.sys) that manages calls to IIS and other HTTP-enabled services on a Windows machine. Either you need to configure it to handle your calls, or, in the case of WAMP or similar non-IIS-web-server-on-Windows scenarios, you may just need to turn it off.
When you see "Microsoft-HttpApi/2.0" returning an error such as 400 "bad URL" or "bad header", the problem is most likely that the HTTP.sys service is intercepting your HTTP request and terminating it because it does not meet the validation rules that are configured.
This configuration is found in the registry at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters. In my case, it was choking because I had a RESTful call with a 400-character segment in the URL, which was 140 characters more than the default limit of 260, so I
added the registry parameter UrlSegmentMaxLength with a DWORD value of 512,
stopped the service using net stop http
started the service using net start http
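The same change from an elevated command prompt (net stop http will ask to stop dependent services, if any):
C:\>reg add HKLM\System\CurrentControlSet\Services\HTTP\Parameters /v UrlSegmentMaxLength /t REG_DWORD /d 512 /f
C:\>net stop http
C:\>net start http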
I've run into these issues before and it is easy to troubleshoot but there is very little on the web that addresses it.
Try these links
"the underlying problem is that the client has sent a request to IIS that breaks one or more rules that HTTP.sys is enforcing"
enabling logging on HTTP.sys is described here
a list of the HTTP.sys parameters that you can control in the registry is found here.
A bit late, so put here for posterity ;-)
After trying all sorts of solutions found on the web, I almost gave up, but found this little nugget.
If the response's Server header returns Microsoft-HttpApi/2.0, it means that the HTTP.sys is being called, not IIS.
As a result, a lot of the workarounds will not work (URLScan, etc).
This worked however:
Open regedit
Navigate HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\
If DisableServerHeader doesn't exist, create it (DWORD 32bit) and give it a value of 2. If it does exist, and the value isn't 2, set it to 2.
Finally, restart the service by calling net stop http then net start http
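Equivalently, from an elevated command prompt:
C:\>reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters /v DisableServerHeader /t REG_DWORD /d 2 /f
C:\>net stop http
C:\>net start http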
src: WS/WCF: Remove Server Header
Set below registry flag to: 2
HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\DisableServerHeader
Setting this to 2 ensures that self-hosted WCF services no longer send the Server header, which keeps us security compliant.
Please note that this disables ALL server headers.
The default value of 0 enables the header; a value of 1 disables the Server header from the driver (http.sys), but the application can still send its own headers.
For me I had to restart the server for the changes to take effect.
Hope this helps someone
I was working on our web app at a client's site and ran into an issue where the site root pages loaded, but the reports folder always returned a 404 for files that existed in the folder. The 404 page showed .NET version 2 even though the application was set to .NET 4, and a test of a non-existent page in the site root returned a 404 page showing .NET 4.
I tried just http://localhost/reports and got back a Microsoft Reporting Services page. Not part of my application.
Be sure to check the default document of the folder when an unexpected 404 comes up for a file that exists.
This question and series of replies helped me get to the bottom of the related issue I was having. My issue centered around using just a subdomain to go to our server (e.g. typing "www/somepath" into the browser while on our corporate network), which had worked in the past on an older server, but no longer worked when the system was upgraded to a new server. I saw the unexpected Microsoft-HttpApi/2.0 string in the header when using the Chrome Devtools to inspect the Network traffic.
My HTTP.sys process was already logging, so I could verify that my traffic was going to that service and returning 404 NotFound status codes.
My resolution was to add a binding to the IIS site for the subdomain, making IIS respond instead of the HTTP.sys process, as described in this server fault article - https://serverfault.com/questions/479274/why-is-microsoft-httpapi-returning-404-to-my-network-switch
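For reference, a host-header binding like that can also be added from the command line with appcmd; the site name and host name here are placeholders for your own values:
C:\>%windir%\system32\inetsrv\appcmd set site /site.name:"Default Web Site" /+bindings.[protocol='http',bindingInformation='*:80:www']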
In my case, running Windows 10 Pro, it was the Windows MultiPoint Service.
By executing:
net stop wms
Port 80 was released.
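If you want to confirm which process is holding port 80 before stopping anything, netstat shows the owning PID and tasklist maps it to a service or executable (the PID below is just an example):
C:\>netstat -ano | findstr :80
C:\>tasklist /svc /fi "PID eq 1234"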
I added a custom domain name for an Azure API App (actually on the underlying API app host).
The https://microsoft-apiappXXXXXXXX.azurewebsites.net/ address still works, but the custom domain yields the following error in the browser:
<Error>
<Message>
No ApiApp installed that can handle forward request to https://my.customdomain.com/
</Message>
</Error>
I've configured a custom SSL certificate, but with or without it, I get the same issue (minus the SSL warning when the custom SSL is not configured). HTTP access just issues a 302 Redirect to HTTPS.
Any ideas?
Unfortunately this is a known limitation of the preview bits when using custom domains. When I replied on your original thread I didn't realize that; my apologies.
It's on our backlog and the team is working hard to get this implemented. It's a very common scenario so I don't think it will take much time (maybe a couple of weeks if everything goes ok) but I don't have an exact ETA to share at this point. I'll update this thread once it's live, it shouldn't be long.
You can find a list of known issues here, we will update it to include this as well.
I can't seem to get the magic combination of enabling NTLM authentication and still having RDS work. If I leave just anonymous authentication on, RDS works fine; as soon as I enable it site-wide, RDS fails (which is to be expected). Here is what I have done:
This is Windows XP SP2 and ColdFusion 8, Eclipse + Adobe plugins
In the IIS Manager, Right click on default web site and choose Properties
Directory Security tab, click the Edit button for anonymous access and authentication control
In the Authentication Methods popup window, uncheck anonymous access and check Integrated Windows authentication (leave all other checkboxes blank as well).
Click OK, OK, and override the settings for all child sites as well such that the entire site is "secured" using NTLM authentication.
Back in the IIS manager, right click on the CFIDE virtual directory, choose Properties
Directory security tab, edit the authentication methods. Uncheck Integrated Windows authentication and check anonymous access. Hit OK, OK and test:
C:\>wget -S -O - http://localhost/CFIDE/administrator/
--2009-01-21 10:11:59-- http://localhost/CFIDE/administrator/
Resolving localhost... 127.0.0.1
Connecting to localhost|127.0.0.1|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Server: Microsoft-IIS/5.1
Date: Wed, 21 Jan 2009 17:12:00 GMT
X-Powered-By: ASP.NET
Set-Cookie: CFID=712;expires=Fri, 14-Jan-2039 17:12:00 GMT;path=/
Set-Cookie: CFTOKEN=17139032;expires=Fri, 14-Jan-2039 17:12:00 GMT;path=/
Set-Cookie: CFAUTHORIZATION_cfadmin=;expires=Mon, 21-Jan-2008 17:12:00 GMT;path=/
Cache-Control: no-cache
Content-Type: text/html; charset=UTF-8
Length: unspecified [text/html]
Saving to: `STDOUT'
... html output follows ...
And so far so good, the CFIDE directory and at least one child directory appear to be working without NTLM authentication. So I fire up Eclipse and try to establish an RDS connection. Unfortunately I just get an Access Denied message. Investigating a bit further it appears that Eclipse is trying to communicate with /CFIDE/main/ide.cfm - fair enough, pull out trusty wget once again see what IIS is doing:
C:\>wget -S -O - http://localhost/CFIDE/main/ide.cfm
--2009-01-21 10:16:56-- http://localhost/CFIDE/main/ide.cfm
Resolving localhost... 127.0.0.1
Connecting to localhost|127.0.0.1|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 401 Access Denied
Server: Microsoft-IIS/5.1
Date: Wed, 21 Jan 2009 17:16:56 GMT
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
Content-Length: 4431
Content-Type: text/html
Authorization failed.
One potential hang up that has been documented elsewhere is that the main directory and ide.cfm page don't actually exist on disk. IIS is configured to hand off all .cfm files to JRun and JRun is configured to map ide.cfm to the RDS servlet. In an attempt to force IIS to be a bit more sensible, I dropped a main directory and empty ide.cfm file on disk hoping it would solve the authentication issue but it didn't make any difference.
What I can do as a workaround is leave the entire site on anonymous access and then just enable NTLM integrated authentication for the specific application folders, but there are quite literally hundreds of possible web applications I would have to do that for. Yuck.
Please Help!!!
There is something strange about answering your own question, but I did finally get it resolved.
NTLM integrated authentication can be enabled for the entire web site
Anonymous access must be enabled for the CFIDE virtual directory
Anonymous access must be enabled for the JRunScripts virtual directory
Once both CFIDE and JRunScripts had anonymous access enabled, RDS and debugging through Eclipse worked like a charm.
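For anyone who needs to repeat this on many machines, the same per-directory settings can be scripted with adsutil.vbs on IIS 5.x (assumptions for a default install: site ID 1 and the standard AdminScripts path; AuthFlags 1 means anonymous access only):
C:\>cscript %systemdrive%\inetpub\adminscripts\adsutil.vbs SET W3SVC/1/ROOT/CFIDE/AuthFlags 1
C:\>cscript %systemdrive%\inetpub\adminscripts\adsutil.vbs SET W3SVC/1/ROOT/JRunScripts/AuthFlags 1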