I have an App Service for Linux running .NET 5 on Azure, with HTTP 2.0 set in the configuration section.
However, traffic is still getting served over HTTP/1.1. My understanding is that SSL is required for HTTP/2, and I am currently using the https://*.azurewebsites.net domain to access the app. I think Azure does SSL termination at that point, so my app gets a plain HTTP connection. Could that be why it is still served over HTTP/1.1? I am using ASP.NET Core, and if I run the site locally it is served over HTTP/2.
Is there something else that I'm missing?
EDIT:
I enabled/disabled the "Allow client certificates" checkbox on the configuration page, and it started serving over HTTP/2. I guess that forced some kind of refresh on Azure's end? It was served over HTTP/2 for a while but is back to HTTP/1.1 now.
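For what it's worth, the same HTTP 2.0 switch can be set explicitly from the Azure CLI, which may be a more reliable way to force the setting to re-apply than toggling an unrelated checkbox (resource group and app name below are placeholders):

az webapp config set --resource-group <resource-group> --name <app-name> --http20-enabled true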
UPDATE
Add the code below to appsettings.json, then try again.
"Kestrel": {
"EndpointDefaults": {
"Protocols": "Http2"
}
},
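Note that "Http2" alone turns off HTTP/1.1 on Kestrel's endpoints, and HTTP/2 without TLS only works if the client knows to use it in advance; if anything in front of the app still speaks plain HTTP/1.1 (as the App Service front end does after SSL termination), "Http1AndHttp2" is likely the safer value:

"Kestrel": {
  "EndpointDefaults": {
    "Protocols": "Http1AndHttp2"
  }
},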
Try another browser, like Chrome or Firefox, or open a new InPrivate window.
Reason: this is likely caused by the browser cache; clearing the cache can also solve the problem.
Related
I'm developing a web app using Next.js that is ultimately served by a custom Express.js server. I'm trying to deploy this app on EC2 and access it, but I'm getting ERR_CONNECTION_REFUSED errors.
I'm accessing the app over HTTP using the public DNS of my instance (http://ec2-PUBLIC_IPV4_ADDRESS.compute-1.amazonaws.com/), which works fine. The index.html then needs to load other files (e.g. .js or .css files), but they are requested over HTTPS (https://ec2-PUBLIC_IPV4_ADDRESS.compute-1.amazonaws.com/style.css). In the Network tab of Chrome's developer tools, I get one successful request, while the other assets fail with net::ERR_CONNECTION_REFUSED.
I was wondering if there is a config change, either on my EC2 instance, on my Express server, or even in Next.js, that would make sure the connection is not upgraded to HTTPS.
I would prefer to find a solution that doesn't involve setting up a domain for early testing purposes.
Thanks in advance.
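One thing worth checking before touching any config: a browser typically only upgrades subresource requests to HTTPS on its own if something tells it to, usually a Strict-Transport-Security header remembered from an earlier visit or a Content-Security-Policy with upgrade-insecure-requests. A quick way to inspect what the server is actually sending (the hostname is the placeholder from above):

curl -sI http://ec2-PUBLIC_IPV4_ADDRESS.compute-1.amazonaws.com/ | grep -iE "strict-transport-security|content-security-policy"

If neither header shows up, the upgrade is more likely coming from hard-coded https:// asset URLs in the HTML itself.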
I have developed a Node.js application which works fine as long as I use http. Now I need to upgrade the code to work over SSL, and I am having problems loading the socket.io-client/socket.io.js file. (The rest is working fine: I installed the certificates and the server works well.)
Firefox fails with the following message: Blocked loading mixed active content "http://"url"/socket.io/?EIO=3&transport=polling&t=NX-uS5E", which is weird because the link states an http request.
Chrome fails with this message: socket.io.js:3511 Mixed Content: The page at 'https://"url"?' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://"url"/socket.io/?EIO=3&transport=polling&t=NX-s_OB'. This request has been blocked; the content must be served over HTTPS.
It seems that socket.io-client is trying to load a resource using http instead of https. Is that possible?
How can I correct this? Any idea?
I have been searching the web for two days now and have found no indication of anyone else having this issue.
OK, after letting it go for the evening and having a good rest, I checked my whole code again and found the error!
I had one obfuscated code line where I was using an http request instead of an https one. I had to correct this on both the server and the client side.
I also had to include the port number on each of the calls and force the socket on the client side to use polling instead of WebSockets by adding the option "transports: ['polling']".
I want to add a load balancer to an existing ASP.NET project using Application Request Routing. So I made myself familiar with the concepts and created a local test setup:
IIS running locally on Windows 10:
Installed Application Request Routing 3.0 with the Web Platform Installer
Created a server farm with the following servers:
<test-server-name>.de (Microsoft 2012 R2 Server: contains the asp.net project)
www.google.com (just to see if load balancing and url rewriting works because I don't have two test servers available)
URL-Rewriting rule:
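(The rule itself isn't reproduced here; for reference, an ARR farm rule of this kind typically looks roughly like the following in the server-level web.config, with myFarm standing in for the actual farm name:)

<rewrite>
  <globalRules>
    <rule name="ARR_myFarm_loadbalance" patternSyntax="Wildcard" stopProcessing="true">
      <match url="*" />
      <action type="Rewrite" url="http://myFarm/{R:0}" />
    </rule>
  </globalRules>
</rewrite>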
After typing localhost into any browser multiple times, I can see that load balancing (weighted round robin) is working fine: it alternates between the first and second website.
The problem I'm facing is a 404 Error on both websites.
I already tried the following:
Installing and enabling Failed Request Tracing rules (on the local IIS): URL rewriting is working properly, I think.
Failed Request log for www.google.com (shared via Google Drive): unzip and open the XML in e.g. IE for a better view
Created the server farm without automatic creation of URL Rewrite rules
(selecting No and creating my own URL Rewrite rule)
Change "Managed Pipeline Mode"-setting of Applcation Pool from Integrated to Classic
Healthcheck on other Websites I have absolutly no clue why it's working on Git-websites and why facebook is returning a 400 error code.
Enabling/disabling proxy (IIS-Manager -> Application Request Routing Cache -> Server Proxy Settings...)
I don't know what I could do next, so I appreciate any help. Thanks.
The answer can be found here: https://forums.iis.net/t/1238739.aspx?Why+some+sites+return+HTTP+404+some+don+t+
Some websites simply don't support localhost as the hostname, which is why localhost can't be found (error 404) on e.g. google.com.
Detailed answer, in case the link above stops working in the future:
That is not an effective test.
What you are doing is sending the hostname of your request to the third-party servers, like Google.
So if your request is, say, http://example.com, you are sending this to, say, www.google.com, and the Google servers will likely reject it, as you can see.
Web server admins generally don't let themselves receive traffic for domains they do not host.
If you sent a request to my server's IP with mysite.com, I too would likely reject it. (Things get complex if you have wildcard sites and allow all traffic through.)
But simply getting that 404 page from Google means your request hit their server, so that implies ARR is working.
If you really wanted to test it this way, have a local hosts file entry resolving www.google.com to your server's IP. Set up a site with www.google.com as the host header, and then you should see the correct info hitting Google. But there is no accounting for what third-party admins do on their side.
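For the hosts-file variant of that test, the entry would look something like this (the IP is a placeholder for the ARR server's address):

# C:\Windows\System32\drivers\etc\hosts
192.0.2.10    www.google.com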
I'm trying to set up SonarQube behind an Azure Web App using .NET Core's proxy library. This might sound weird, but as Web Apps provide SSL certificates automatically and I am not able to get a custom domain, I thought this solution would be the easiest for me ;)
Now, after some playing around, everything works great: the site works without any errors in browsers, and logging in is possible using either the Sonar login or Azure Active Directory.
But in my build processes it is just not possible to post the analysis result to the server. The response is always 401.
I have checked the SonarQube logs and found the following corresponding entries:
in Web.log
DEBUG web[...][auth.event] login failure [cause|Wrong CSFR in request][method|JWT][provider|LOCAL|local][IP|some ip|actual client ip:37390][login|admin]
in access.log:
some ip - - [...] "POST /api/ce/submit HTTP/1.1" 401 - "-" "Mozilla/5.0 ..." "..."
From this I can see that the actual Sonar request comes from a different IP, probably because of the network setup or some other Azure magic.
I cannot figure out how to solve this issue :D
My reverse proxy solution is very simple. Basically, I use an empty ASP.NET Core application and integrate the reverse proxy functionality in Startup.cs like this:
app.RunProxy(new ProxyOptions
{
    // Back-channel handler: accept any backend certificate and share one cookie container
    BackChannelMessageHandler = new HttpClientHandler
    {
        CheckCertificateRevocationList = false,
        ServerCertificateCustomValidationCallback = (message, certificate2, arg3, arg4) => true,
        AllowAutoRedirect = true,
        AutomaticDecompression = DecompressionMethods.GZip,
        CookieContainer = new CookieContainer
        {
            Capacity = int.MaxValue,
            MaxCookieSize = int.MaxValue,
            PerDomainCapacity = int.MaxValue
        }
    },
    // Destination of the proxied requests (the SonarQube server)
    Scheme = serverConfiguration.Scheme,
    Host = serverConfiguration.Host,
    Port = serverConfiguration.Port,
});
I also added some middleware to add the X-Forwarded-Proto header, and I check that the X-Forwarded-For header is configured correctly. I also configured the Azure IIS not to truncate query parameters or content in large requests, via web.config.
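The header middleware itself is nothing special; it is roughly a sketch like this (a reconstruction, not the exact code), registered before RunProxy so the proxied request carries the values:

// Hypothetical reconstruction of the forwarding middleware described above
app.Use(async (context, next) =>
{
    // Record the original scheme and caller IP so the backend can see them
    context.Request.Headers["X-Forwarded-Proto"] = context.Request.Scheme;
    context.Request.Headers["X-Forwarded-For"] = context.Connection.RemoteIpAddress?.ToString();
    await next();
});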
I also tried to fake it and set the X-Forwarded-For IP to the IP sending the actual request to SonarQube, with no effect.
Does anyone have an idea how to get this solved? :) As this is just a POC setup, I would love to just turn CSRF checking off, but I could not find any config for that. Any help would be appreciated.
Edit + Current Solution
Thinking a bit more about my initial solution, the problem becomes quite clear. I am trying to connect to the server using Azure App Service's VNet Integration feature. This provides a secure VPN connection between the proxy site and the actual server, but it also causes the IP to be different than expected:
Client [Client IP] -> Web App Proxy [Proxy Public IP] -> VNet VPN [VPN IP of the Web App == some ip in the logs] -> Sonarqube => 401 CSRF error
I guess that the X-Forwarded-For chain is not correct in that case, and I don't know how to fix that.
For now, as a workaround, I have added a public IP to the SonarQube server and configured the Network Security Groups to allow traffic only from the Web App (using the Web App's provided outbound IP addresses). With that solution everything works :)
I would still like to use the VNet Integration feature, though, so if someone has an idea, please let me know :)
We have the same problem with Sonar behind Apache as a reverse proxy with SSO. Apache sends the SSO headers in the proxy request to Sonar.
I have reported this problem to the Google group as a bug:
"Sonar native login form displayed randomly even if SSO is used"
https://groups.google.com/forum/#!msg/sonarqube/o2p2ZmjqRN8/UAZZF3tMBgAJ
What I have found is that Apache reuses one connection for different users. User A comes in through Apache-Sonar connection 1, then Apache reuses this connection for another user B's request, and the next request on the same Apache-Sonar connection is a new request from user A. This request is then classified as unauthorized and Sonar generates the login form, although the Apache request contains the headers with the SSO login data.
Today I activated DEBUG logs and found the message "Wrong CSFR in request". It really looks like CSRF protection, but with some bug, as if the code ignores the username or something like that.
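If the connection reuse is really the trigger, mod_proxy can at least be told not to reuse backend connections, at some performance cost (a sketch, untested; the path and port are placeholders):

ProxyPass /sonar http://sonar-host:9000/sonar disablereuse=On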
Regards,
Robert.
I've got 6 identical machines running IIS and Apache. Today one of them decided to just stop serving requests. I can access all of the web apps when I try from localhost/resource, but when I try from url/resource I get a 404. I did a GET request against the machine that isn't working and got this back:
Server: Microsoft-HTTPAPI/2.0
Connection: close
Compared to a working server:
Server: Microsoft-IIS/8.5
X-Powered-By: ASP.NET
Content-Type: text/html
I tried searching for this problem but came up with nothing. Anyone got any ideas?
Windows has an HTTP service that manages calls to IIS and other HTTP-enabled services on a Windows machine. Either you need to configure it to handle your calls or, in the case of WAMP or similar non-IIS-web-server-on-Windows scenarios, you may just need to turn it off.
When you see "Microsoft-HttpApi/2.0" returning an error, such as 400 "bad URL" or "bad header", the problem is most likely that the HTTP.sys service is intercepting your HTTP request and terminating it because it does not meet the minimum validation rules that are configured.
This configuration is found in the registry at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters. In my case, it was choking because I had a RESTful call with a 400-character segment in the URL, 140 characters more than the default limit of 260, so I
added the registry parameter UrlSegmentMaxLength with a DWORD value of 512,
stopped the service using net stop http
started the service using net start http
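The same change can be made from an elevated command prompt (key and values as above):

reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters /v UrlSegmentMaxLength /t REG_DWORD /d 512
net stop http
net start http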
I've run into these issues before, and they are easy to troubleshoot, but there is very little on the web that addresses them.
Try these links
"the underlying problem is that the client has sent a request to IIS that breaks one or more rules that HTTP.sys is enforcing"
enabling logging on HTTP.sys is described here
a list of the HTTP.sys parameters that you can control in the registry is found here.
A bit late, so put here for posterity ;-)
After trying all sorts of solutions found on the web, I almost gave up, but found this little nugget.
If the response's Server header returns Microsoft-HttpApi/2.0, it means that HTTP.sys is being called, not IIS.
As a result, a lot of the workarounds will not work (URLScan, etc).
This worked however:
Open regedit
Navigate to HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\
If DisableServerHeader doesn't exist, create it (DWORD, 32-bit) and give it a value of 2. If it does exist and the value isn't 2, set it to 2.
Finally, restart the service by running net stop http and then net start http
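Equivalently, from an elevated command prompt:

reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters /v DisableServerHeader /t REG_DWORD /d 2
net stop http
net start http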
src: WS/WCF: Remove Server Header
Set the registry flag below to 2:
HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\DisableServerHeader
Setting this to 2 ensures that self-hosted WCF services no longer send the Server header, which keeps us security compliant.
Please note that this disables ALL server headers.
The default value of 0 enables the header; a value of 1 disables the header added by the driver (http.sys), but the application can still send its own.
For me I had to restart the server for the changes to take effect.
Hope this helps someone
I was working on our web app at a client's site and ran into an issue where the site's root pages loaded, but the reports folder always returned a 404 for files that existed in the folder. The 404 page showed .NET version 2 when the application was set to .NET 4, and a test of a non-existent page in the root returned a 404 page showing .NET 4.
I tried just http://localhost/reports and got back a Microsoft Reporting Services page. Not part of my application.
Be sure to check the default document of the folder when an unexpected 404 comes up and the file exists.
This question and its series of replies helped me get to the bottom of a related issue I was having. My issue centered on using just a subdomain to reach our server (e.g. typing "www/somepath" into the browser while on our corporate network), which had worked in the past on an older server but no longer worked after the system was upgraded to a new server. I saw the unexpected Microsoft-HttpApi/2.0 string in the header when using the Chrome DevTools to inspect the network traffic.
My HTTP.sys process was already logging, so I could verify that my traffic was going to that service and returning 404 Not Found status codes.
My resolution was to add a binding to the IIS site for the subdomain, making IIS respond instead of the HTTP.sys process, as described in this Server Fault article: https://serverfault.com/questions/479274/why-is-microsoft-httpapi-returning-404-to-my-network-switch
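A quick way to see what HTTP.sys is answering for (and therefore what never reaches IIS) is to list its URL reservations and active request queues:

netsh http show urlacl
netsh http show servicestate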
In my case, running Windows 10 Pro, it was the Windows MultiPoint Service.
By executing:
net stop wms
Port 80 was released.
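More generally, you can identify which process or service is holding port 80 before guessing (run from an elevated prompt; replace <pid> with the PID shown by the first command):

netstat -ano | findstr :80
tasklist /svc /fi "PID eq <pid>"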