I've enabled "HTTP keep-alive" in IIS 7.5 settings.
But IIS still doesn't respond with a Connection: keep-alive header (to either Firefox or Chrome).
I've noticed that Nginx does respond with this header when I enable keep-alive on it.
Shouldn't the Connection: keep-alive header be sent by the server in response to requests?
In HTTP/1.1, persistent connections are the default:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8
In other words, IIS doesn't really need to send the header (Apache, however, seems to always send it).
You can verify this with netstat or, as I tend to do, with TCPView (a small Sysinternals tool you can download from Microsoft:
http://technet.microsoft.com/en-us/sysinternals/bb897437.aspx)
It appears that IIS doesn't send Connection: keep-alive, but it still doesn't close the connection, and the browser reuses it for further requests.
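If you'd rather check from code than with TCPView, here is a minimal sketch in Node (the host name example.local is a placeholder): it makes two requests through a keep-alive agent and compares the local ports, which identify the underlying TCP connection. If they match, the connection was reused even though no Connection: keep-alive header came back.

const http = require('http');

// Reuse connections across requests; Node pools the socket for us.
const agent = new http.Agent({ keepAlive: true });

function probe(cb) {
  http.get({ host: 'example.local', path: '/', agent }, (res) => {
    const port = res.socket.localPort; // identifies the underlying TCP connection
    res.resume(); // drain the body so the socket returns to the pool
    res.on('end', () => cb(port));
  });
}

probe((p1) => probe((p2) => {
  console.log(p1 === p2 ? 'connection was reused' : 'new connection was opened');
}));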
When I start my API server locally it serves HTTP/1.1, but I found that when it's deployed on a VPS behind Cloudflare, the browser shows the protocol as HTTP/3. So the connection between clients and Cloudflare is HTTP/3, and the one between Cloudflare and the VPS is HTTP/1.1: is this correct? That would mean HTTP/3 is only served by Cloudflare, my server still speaks plain HTTP/1.1, and I'd need to migrate it to truly support HTTP/2. (I'm using Node, so it would be a switch from the http module to the http2 module.)
When your web application / web API is behind Cloudflare, Cloudflare acts as a reverse proxy. This means that there are two "legs" of the connection:
From the end user's client (browser / mobile phone etc...) to Cloudflare
From Cloudflare to your origin server (in your case a VPS)
From the user's point of view, they see only leg (1), so it is quite easy to enable HTTP/2 or HTTP/3 (see the documentation) even if your origin server does not support them. This is what you see in the browser when testing, and it depends on your configuration in the Cloudflare Dashboard.
For leg (2), only HTTP/1.1 is currently supported (as also noted in this Support KB). You can still optimize the setup of that leg by using features such as Argo Smart Routing or Argo Tunnel.
Update Jun 2022: HTTP/2 to the origin server is now supported and can be enabled in the dashboard. See here for more details.
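On the Node side, migrating really is a switch from the http module to the built-in http2 module. A minimal sketch, assuming you terminate TLS yourself (key.pem and cert.pem are placeholder paths); browsers only speak HTTP/2 over TLS, so createSecureServer is used:

const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem'),
  allowHTTP1: true, // fall back to HTTP/1.1 for clients (or proxies) that don't speak h2
});

server.on('request', (req, res) => {
  // '2.0' when the client negotiated HTTP/2, '1.1' otherwise
  res.end(`hello over HTTP/${req.httpVersion}`);
});

server.listen(443);

With allowHTTP1 set, the same listener keeps accepting the HTTP/1.1 connections Cloudflare makes on leg (2), and HTTP/2 from anything that negotiates it.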
Need some help digging deeper into why IIS is behaving in a certain way. Edge/Chrome makes an HTTP/2.0 request to IIS, using the IPv6 address in the header (https://[ipv6]/), which results in the server generating a 302 response. The ISAPI filter makes some modifications to the 302 response and replaces the response buffer. IIS drops the request/response and logs the following in the HTTPERR log:
<date> <time> fe80::a993:90bf:89ff:1a54%2 1106 fe80::bdbf:5254:27d2:33d8%2 443 HTTP/2.0 GET <url> 1 - 1 Connection_Dropped_List_Full <pool>
I suspect this is related to HTTP/2.0: when putting Fiddler in the middle, the connection isn't HTTP/2.0 anymore, it downgrades to HTTP/1.1, and it works.
When using an IPv4 address, it works. In either case the filter goes through the identical steps. There is no indication in the filter that anything went wrong.
Failed Request Tracing will not write buffers for incomplete/dropped requests that appear in HTTPERR log.
Is there a place where I can find out more detailed information about why IIS is dropping the request?
I did a network capture, and it looks like the browser is initiating the FIN teardown of the session.
Do you use any load balancer or reverse proxy before requests reach IIS? This error indicates that the list of dropped connections is full, so the underlying problem is that your connections are being dropped.
If you use a load balancer, the web application may be under heavy load, and because of this no threads are available to provide logging data to HTTP.sys. Check this.
Or the client closed the request before IIS responded, but IIS still sent the response. This is more likely a problem with the application itself rather than with IIS and HTTP.sys. Check this.
One thing I noticed is that if you change HTTP/2 to HTTP/1.1, it works well. The difference between HTTP/1.1 and HTTP/2 is performance:
HTTP/1.1 practically allows only one outstanding request per TCP connection (though HTTP pipelining allows more than one outstanding request, it still doesn’t solve the problem completely).
HTTP/2.0 allows using the same TCP connection for multiple parallel requests.
So it looks like when you use HTTP/2, one connection carries multiple requests, and the application cannot handle those requests well, especially the image requests.
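To make that multiplexing difference concrete, here is a small sketch with Node's http2 client (the host and paths are hypothetical): all three requests travel as separate streams over one TCP connection, which is exactly the traffic pattern the application has to absorb.

const http2 = require('http2');

// One TCP connection, many concurrent streams.
const session = http2.connect('https://example.com');

const paths = ['/index.html', '/app.js', '/logo.png'];
let remaining = paths.length;

for (const path of paths) {
  const stream = session.request({ ':path': path });
  stream.on('response', (headers) => {
    console.log(path, '->', headers[':status']); // all share the same connection
  });
  stream.resume(); // discard the body
  stream.on('end', () => {
    if (--remaining === 0) session.close();
  });
}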
Another thing: Failed Request Tracing can capture all requests and responses, including those with status codes 200 and 302.
I'm trying to understand: what is the difference between HTTPS and HTTP/2?
If I'm going to build a Node.js/express app, what should I use?
Can I use HTTPS with http/2?
Maybe if I use HTTPS I don't need HTTP/2 because they're the same thing, or does HTTPS use HTTP/2 under the hood?
I'm confused.
Someone linked me to "difference between HTTP 1.1 and HTTP 2.0 [closed]", but I understand the difference between HTTP and HTTP/2. I'm asking about HTTPS and HTTP/2.
HTTP - A protocol used by clients (e.g. web browsers) to request resources from servers (e.g. web servers).
HTTPS - A way of encrypting HTTP. It basically wraps HTTP messages up in an encrypted format using SSL/TLS. The web is moving towards HTTPS more and more and web browsers are starting to put more and more warnings when a website is served over unencrypted HTTP. Unless you have a very good reason not to, use HTTPS on any websites you create now.
Digging into HTTP more we have:
HTTP/1.1 - this was the prevalent format of HTTP until recently. It is a text-based protocol and has some inefficiencies in it - especially when requesting lots of resources like a typical web page. HTTP/1.1 messages can be unencrypted (where web site addresses start http://) or encrypted with HTTPS (where web site address start with https://). The client uses the start of the URL to decide which protocol to use, usually defaulting to http:// if not provided.
HTTP/2 - a new version of HTTP released in 2015 which addresses some of the performance issues by moving away from a text based protocol to a binary protocol where each byte is clearly defined. This is easier to parse for clients and servers, leaves less room for errors and also allows multiplexing. HTTP/2, like HTTP/1.1, is available over unencrypted (http://) and encrypted (https://) channels but web browsers only support it over HTTPS, where it is decided whether to use HTTP/1.1 or HTTP/2 as part of the HTTPS negotiation at the start of the connection.
HTTP/2 is used by about a third of all websites at the time of writing (up to 50% of websites as of Jan 2020, and 67% of website requests). However, not all clients support HTTP/2, so you should support HTTP/1.1 over HTTPS, plus HTTP/2 over HTTPS where possible (Node's http2 module can handle this fallback for you when allowHTTP1 is set; see the sketch at the end of this answer). I do not believe HTTP/1.1 will be retired any time soon. You should also consider supporting HTTP/1.1 over unencrypted HTTP and then redirecting to the HTTPS version (which will then use HTTP/1.1 or HTTP/2 as appropriate). A web server like Apache or Nginx in front of Node makes this easy.
HTTP/3 - the next version of HTTP, currently under development. It is expected to be finalised in 2020 though it will likely be late 2020 or even 2021 before you see this widely available in web servers and languages like node. It will be built on top of a UDP-based transport called QUIC (rather than the TCP-based protocol that HTTP/1.1 and HTTP/2 are based on top of). It will include part of HTTPS in the protocol so HTTP/3 will only be available over HTTPS.
In short, you should use HTTP/1.1 over HTTPS, consider HTTP/2 as well if it is easy to implement (not always possible, as it's not quite ubiquitous yet, but getting there), and in the future you might be using HTTP/3.
I suggest you get a firm understanding of all of these technologies (except maybe HTTP/3 just yet) if you want to do web development. It will stand you in good stead.
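As a concrete sketch of that advice in Node (certificate paths and ports are placeholders): one listener serves both HTTP/1.1 and HTTP/2 over HTTPS, letting the ALPN negotiation pick the version per client, while a plain-HTTP listener does nothing but redirect.

const http = require('http');
const http2 = require('http2');
const fs = require('fs');

const secure = http2.createSecureServer({
  key: fs.readFileSync('key.pem'),   // placeholder certificate paths
  cert: fs.readFileSync('cert.pem'),
  allowHTTP1: true, // ALPN negotiates h2 or http/1.1 per client
});

secure.on('request', (req, res) => {
  res.end(`served over HTTP/${req.httpVersion}`); // '2.0' or '1.1'
});
secure.listen(443);

// Unencrypted HTTP/1.1: redirect everything to the HTTPS version.
http.createServer((req, res) => {
  res.writeHead(301, { Location: `https://${req.headers.host}${req.url}` });
  res.end();
}).listen(80);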
I have been looking at ways to improve page loads using HTTP/2, specifically using server push.
We have a HAProxy => Varnish => Apache configuration.
I know Varnish 5 can handle HTTP/2 requests, but when a response has server push headers for further resources on a page, do those resources come out of the cache or does the request just get passed to Apache?
My thinking is that if those server push headers don't get handled by Varnish, it would be to the detriment of page loads rather than a net gain...
HAProxy 1.8 only has HTTP/2 support on the front end and will then connect to Varnish using HTTP/1.1 (unless you are using it as a TCP load balancer rather than an HTTP load balancer). HAProxy 1.9 did add HTTP/2 on the back end (i.e. to downstream systems like Varnish in your setup). I do not believe either supports HTTP/2 Push when used as an HTTP proxy.
Varnish similarly only supports HTTP/2 on the front end, and not Push, AFAIK.
So basically you cannot use Push in your current infrastructure. HAProxy and Varnish will connect to Apache (which is the only piece with Push support) over HTTP/1.1, and Apache will therefore not even attempt to push resources, since it only does so over HTTP/2.
The easiest way to support this would be to simplify your infrastructure. I'm not sure if you have the volume that requires both HAProxy and Varnish, or if you have some other reason for setting it up this way. If not, you could get rid of them entirely and just use Apache. Alternatively, use HAProxy as a TCP proxy and connect to your Apache instances (either over TLS or not) using HTTP/2.
The other option is to put another instance of Apache in front of HAProxy to handle HTTP/2 and HTTP/2 Push. The back-end connections can then be over HTTP/1.1, and the downstream servers can signal this new Apache to push a resource using Link headers (even over HTTP/1.1); it will then request the resource from downstream as appropriate (which may read it from the Varnish cache if things are set up that way). But running Apache -> HAProxy -> Varnish -> Apache definitely sounds like overkill for most sites.
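As a sketch of that Link-header signalling (the backend is written in Node purely for illustration, and the asset path is hypothetical): the backend replies over plain HTTP/1.1, and a front-end Apache with mod_http2 turns rel=preload links into HTTP/2 pushes on the client-facing leg.

const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/html',
    // Signal the fronting server to push this asset to HTTP/2 clients.
    Link: '</static/app.css>; rel=preload; as=style',
  });
  res.end('<link rel="stylesheet" href="/static/app.css"><p>hello</p>');
}).listen(8080);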
HTTP/2 Push is now supported in HAProxy 1.9 with "option http-use-htx".
It can use Varnish (and other non-TLS servers) as HTTP/2 backend servers with the parameter "proto h2".
It can also use TLS-enabled HTTP/2 backend servers with the parameters "ssl verify none alpn h2".
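Assembled into a backend section, those directives look roughly like this (a sketch only; the backend name and addresses are placeholders):

backend varnish_h2
    option http-use-htx                        # HTX mode, needed for HTTP/2 on this leg
    server varnish1 192.0.2.10:8080 proto h2   # clear-text HTTP/2 to Varnish
    # or, for a TLS backend negotiating h2 via ALPN:
    # server web1 192.0.2.20:443 ssl verify none alpn h2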
However, I'm struggling to get Varnish 6.0 to allow HTTP/2 Push back to another HAProxy frontend with "proto h2" enabled, as Varnish seems to support only HTTP/1.1 to its backend servers, even though varnishlog shows the HTTP/2 protocol being used in the request coming from HAProxy.
I have tested the following and it works all the way with HTTP/2 (end to end):
HTTP/2-browser -> Public-HAProxy-h2frontend -> Back-HAProxy-h2frontend-> HTTP/2-SSL-Webserver (IIS)
The following fails after Varnish, as Varnish only uses HTTP/1.1 to the Back-HAProxy-h2frontend; I need to be able to force Varnish to keep using HTTP/2 to the backend server.
HTTP/2-browser -> Public-HAProxy-h2frontend -> HTTP/2-Varnish -> Back-HAProxy-h2frontend-> HTTP/2-SSL-Webserver (IIS)
I'm planning to set up a group of NodeJS application servers running Socket.io on EC2, and I'd like to use the Elastic Load Balancer to spread load between them. I know ELB doesn't support Websockets out of the box, but I can use the setup described here in Scenario 2.
As described in the blog post, though, I notice that this setup offers no session affinity or source IP info:
We can not have Session Affinity nor X-Forward headers with this setup because ELB is not parsing the HTTP messages, so it's impossible to match the cookies to ensure Session Affinity nor inject special X-Forward headers.
Will Socket.io still work under these circumstances? Or is there another way to have a set of Socket.io app servers behind a load balancer with SSL?
EDIT: Tim Caswell talks about doing this already here. Are there any posts explaining how to set this up? Again there's no session stickiness here, but things seem to be working fine.
As an aside, are sticky sessions actually necessary with websockets? Does information travel as new, separate requests, or is there only one request + connection along which all the information moves?
Socket.io does not work out of the box even with a TCP ELB because it makes two HTTP requests before upgrading the connection to websockets.
The first connection is used to establish the protocol, since socket.io supports more than just websockets.
GET /socket.io/1/?t=1360136617252 HTTP/1.1
User-Agent: node-XMLHttpRequest
Accept: */*
Host: localhost:9999
Connection: keep-alive
HTTP/1.1 200 OK
Content-Type: text/plain
Date: Wed, 06 Feb 2013 07:43:37 GMT
Connection: keep-alive
Transfer-Encoding: chunked
47
xX_HbcG1DN_nufWddblv:60:60:websocket,htmlfile,xhr-polling,jsonp-polling
0
The second request is used to actually upgrade the connection:
GET /socket.io/1/websocket/xX_HbcG1DN_nufWddblv HTTP/1.1
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: MTMtMTM2MDEzNjYxNzMxOA==
Host: localhost:9999
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: 249I3zzVp0SzEn0Te2RLp0iS/z0=
You can see in the above example that xX_HbcG1DN_nufWddblv is a shared key between the requests. This is the problem. ELBs do round-robin routing, meaning the upgrade request hits a server that did not participate in the initial negotiation. As such, the server has no idea who the client is.
In-memory stateful data is the enemy of load balancing. Thankfully, socket.io supports using Redis to store the data instead. If you share your Redis connection with multiple servers, they essentially share the sessions of all clients.
See the socket.io wiki page for details on setting up Redis.
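The details live on the wiki, but a minimal sketch of the idea with a current socket.io and its Redis adapter looks like this (older 0.9.x versions instead configured a RedisStore via io.set('store', ...); the package names here assume the @socket.io/redis-adapter and redis modules):

const { createServer } = require('http');
const { Server } = require('socket.io');
const { createClient } = require('redis');
const { createAdapter } = require('@socket.io/redis-adapter');

const httpServer = createServer();
const io = new Server(httpServer);

// Two Redis connections: one to publish, one to subscribe.
const pubClient = createClient({ url: 'redis://localhost:6379' });
const subClient = pubClient.duplicate();

Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
  // Every server instance attached to the same Redis sees all events,
  // so clients can land on any instance.
  io.adapter(createAdapter(pubClient, subClient));
  httpServer.listen(3000);
});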
You can now use the Application Load Balancer recently launched by AWS.
Just replace the ELB (now called Classic Load Balancer) with an ALB (Application Load Balancer) and enable sticky sessions.
ALB supports WebSockets. This should do the trick.
https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
As I mentioned in the post, we only use ELB to terminate SSL and to load-balance across a cluster of http-proxy servers that do support websockets. ELB doesn't talk to the websocket servers directly. The HTTP proxy cluster handles looking up the right socket.io server to connect to, ensuring session stickiness.
When you run a server in a cloud that has a load balancer, reverse proxy, routers etc., you need to configure it to work properly, especially when you scale the server to use multiple instances.
One of the constraints of Socket.io, SockJS and similar libraries is that they need to continuously talk to the same instance of the server. They work perfectly well when there is only one instance of the server.
When you scale your app in a cloud environment, the load balancer (Nginx in the case of Cloud Foundry) will take over, and the requests will be sent to different instances causing Socket.io to break.
To help in such situations, load balancers have a feature called 'sticky sessions' aka 'session affinity'. The main idea is that if this property is set, then after the first load-balanced request, all the following requests will go to the same server instance.
In Cloud Foundry, cookie-based sticky sessions are enabled for apps that set the cookie jsessionid.
Note: jsessionid is the cookie name commonly used to track sessions in Java/Spring applications. Cloud Foundry is simply adopting that as the sticky session cookie for all frameworks.
So, all the app needs to do is set a cookie named jsessionid to make socket.io work.
app.use(cookieParser());
app.use(express.session({ store: sessionStore, key: 'jsessionid', secret: 'your secret here' }));
So these are the steps:
Express sets a session cookie with the name jsessionid.
When socket.io connects, it sends that same cookie to the load balancer.
The load balancer always routes the request to the same server that set the cookie.
If you are using an Application Load Balancer, the sticky session settings are at the target group level.