Confusion about HTTP/2, HTTP/3 and Cloudflare - node.js

When I start my API server locally it serves HTTP/1.1, but when it's deployed on a VPS and set up behind Cloudflare, the browser shows the protocol as HTTP/3. So between the clients and Cloudflare it's HTTP/3, and between Cloudflare and the VPS it's HTTP/1.1; is this correct? That would mean HTTP/3 is only served by Cloudflare, my server still speaks plain HTTP/1.1, and I need to migrate it to HTTP/2 to truly support HTTP/2. (I'm using Node, so it would be a switch from the http module to the http2 module.)

When your web application / web API is behind Cloudflare, Cloudflare acts as a reverse proxy. This means that there are two "legs" of the connection:
1. From the end user's client (browser, mobile phone, etc.) to Cloudflare
2. From Cloudflare to your origin server (in your case a VPS)
From a user's point of view, they see leg (1), so it is quite easy to enable HTTP/2 or HTTP/3 (see the documentation) even if your origin server does not support them. This is what you see in the browser when testing, depending on your configuration in the Cloudflare dashboard.
For leg (2), only HTTP/1.1 is currently supported (as noted in this Support KB). You can still optimize the setup of that leg by using features such as Argo Smart Routing or Argo Tunnel.
Update Jun 2022: HTTP/2 to the origin server is now supported and can be enabled in the dashboard. See here for more details.
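If you do want the origin itself to speak HTTP/2 (the http-to-http2 module switch the question mentions), a minimal sketch using Node's built-in http2 module might look like this; the certificate paths and port are placeholders:

    // Sketch only (not production config): an HTTP/2 origin using Node's built-in http2 module.
    const http2 = require('http2');
    const fs = require('fs');

    const server = http2.createSecureServer({
      key: fs.readFileSync('/path/to/privkey.pem'),
      cert: fs.readFileSync('/path/to/fullchain.pem'),
      allowHTTP1: true, // fall back to HTTP/1.1 for clients/proxies that do not negotiate h2
    });

    // The compatibility API means existing http-style (req, res) handlers keep working.
    server.on('request', (req, res) => {
      res.writeHead(200, { 'content-type': 'application/json' });
      res.end(JSON.stringify({ httpVersion: req.httpVersion }));
    });

    server.listen(8443);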

Related

What is the difference between HTTPS and HTTP/2?

I'm trying to understand the difference between HTTPS and HTTP/2.
If I'm going to build a Node.js/express app, what should I use?
Can I use HTTPS with http/2?
Maybe if I use HTTPS I don't need HTTP/2 because they're the same, or HTTPS uses HTTP/2 under the hood?
I'm confused.
Someone linked me to "difference between HTTP 1.1 and HTTP 2.0 [closed]", but I understand the difference between HTTP and HTTP/2. I'm asking about HTTPS and HTTP/2.
HTTP - A protocol used by clients (e.g. web browsers) to request resources from servers (e.g. web servers).
HTTPS - A way of encrypting HTTP. It basically wraps HTTP messages up in an encrypted format using SSL/TLS. The web is moving towards HTTPS more and more and web browsers are starting to put more and more warnings when a website is served over unencrypted HTTP. Unless you have a very good reason not to, use HTTPS on any websites you create now.
Digging into HTTP more we have:
HTTP/1.1 - this was the prevalent version of HTTP until recently. It is a text-based protocol and has some inefficiencies - especially when requesting lots of resources, as a typical web page does. HTTP/1.1 messages can be unencrypted (where website addresses start with http://) or encrypted with HTTPS (where website addresses start with https://). The client uses the start of the URL to decide which protocol to use, usually defaulting to http:// if not provided.
HTTP/2 - a new version of HTTP released in 2015 which addresses some of the performance issues by moving away from a text-based protocol to a binary protocol where each byte is clearly defined. This is easier to parse for clients and servers, leaves less room for errors and also allows multiplexing. HTTP/2, like HTTP/1.1, is available over unencrypted (http://) and encrypted (https://) channels, but web browsers only support it over HTTPS, where the choice between HTTP/1.1 and HTTP/2 is made as part of the HTTPS (ALPN) negotiation at the start of the connection.
HTTP/2 is used by about a third of all websites at the time of writing (up to 50% of websites as of Jan 2020, and 67% of website requests). However, not all clients support HTTP/2, so you should support both HTTP/1.1 and HTTP/2 over HTTPS where possible (in Node, the http2 module can negotiate this for you when you create the server with allowHTTP1: true). I do not believe HTTP/1.1 will be retired any time soon. You should also consider supporting HTTP/1.1 over unencrypted HTTP and then redirecting to the HTTPS version (which will then use HTTP/1.1 or HTTP/2 as appropriate). A web server like Apache or Nginx in front of Node makes this easy.
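As a sketch of that "plain HTTP that only redirects to HTTPS" pattern (the HTTPS server itself, whether HTTP/1.1 or HTTP/2, would be created separately):

    // Sketch only: listen on port 80 and permanently redirect everything to HTTPS,
    // where HTTP/1.1 or HTTP/2 is then negotiated as part of the TLS handshake.
    const http = require('http');

    http.createServer((req, res) => {
      res.writeHead(301, { Location: `https://${req.headers.host}${req.url}` });
      res.end();
    }).listen(80);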
HTTP/3 - the next version of HTTP, currently under development. It is expected to be finalised in 2020, though it will likely be late 2020 or even 2021 before you see it widely available in web servers and platforms like Node. It will be built on top of a UDP-based transport called QUIC (rather than the TCP-based transport that HTTP/1.1 and HTTP/2 run on top of). It will include part of HTTPS in the protocol, so HTTP/3 will only be available over HTTPS.
In short you should use HTTP/1.1 over HTTPS, should consider HTTP/2 as well if easy to implement (not always possible as not quite ubiquitous yet - but getting there) and in future you might be using HTTP/3.
I suggest you get a firm understanding of all of these technologies (except maybe HTTP/3 just yet) if you want to do web development. It will stand you in good stead.

How does Varnish handle HTTP/2 Server Push?

I have been looking at ways to improve page loads using HTTP/2, specifically server push.
We have a HAProxy => Varnish => Apache configuration.
I know Varnish 5 can handle HTTP/2 requests, but when the request has server push headers for further resources on a page, do those resources come out of the cache or does the request just get passed to Apache?
My thinking is that if those server push headers don't get handled by Varnish, it would be to the detriment of page loads rather than a net gain...
HAProxy 1.8 only has HTTP/2 support on the front end and will then connect to Varnish using HTTP/1.1 - unless you are using it as a TCP load balancer rather than an HTTP load balancer? HAProxy 1.9 did add HTTP/2 on the back end (i.e. to downstream systems like Varnish in your setup). I do not believe either supports HTTP/2 push when used as an HTTP proxy.
Varnish similarly only supports HTTP/2 on the front end, and not push, AFAIK.
So basically you cannot use push in your current infrastructure. The proxies will connect to Apache (which is the only piece with push support) over HTTP/1.1, and therefore it will not even attempt to push resources, since it only does so over HTTP/2.
The easiest way to support this would be to simplify your infrastructure. I'm not sure if you have the volume that requires both HAProxy and Varnish, or if you have some other reason to set things up this way? If not, then you could either get rid of them totally and just use Apache, or alternatively use HAProxy as a TCP proxy and connect to any Apache instances (either over TLS or not) using HTTP/2.
The other option is to put another instance of Apache in front of HAProxy to handle HTTP/2 and HTTP/2 push. The back-end connections can then be over HTTP/1.1, and the back end signals to this new Apache to push a resource using Link headers (which work even over HTTP/1.1); it will then request the resource from downstream appropriately (which may read it from the Varnish cache if it's set up that way). But running Apache -> HAProxy -> Varnish -> Apache definitely sounds like overkill for most sites.
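As a rough sketch of that Link-header signalling (the asset path is illustrative), the back-end Apache only needs mod_headers; the HTTP/2-terminating Apache in front sees the preload hint in the response and turns it into a push:

    # Sketch only: the back end adds a preload hint over HTTP/1.1; the front-end Apache
    # (terminating HTTP/2 with mod_http2) turns the Link header into a server push.
    <Location "/">
        Header add Link "</assets/styles.css>; rel=preload; as=style"
    </Location>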
HTTP/2 push is now supported in HAProxy 1.9 with "option http-use-htx".
It can use Varnish (and other non-TLS servers) as HTTP/2 backend servers with the parameter "proto h2".
It can also use TLS-enabled HTTP/2 backend servers with the parameters "ssl verify none alpn h2".
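Roughly, the relevant pieces of an haproxy.cfg for that setup look like the sketch below (names, addresses and the certificate path are placeholders):

    # Sketch only: HAProxy 1.9+ terminating HTTP/2 at the front end,
    # speaking clear-text h2 to Varnish and TLS h2 to another backend.
    frontend fe_main
        bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
        option http-use-htx
        default_backend be_varnish

    backend be_varnish
        option http-use-htx
        server varnish1 127.0.0.1:6081 proto h2               # clear-text HTTP/2

    backend be_tls
        option http-use-htx
        server app1 10.0.0.10:443 ssl verify none alpn h2     # HTTP/2 over TLS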
However, I'm struggling with Varnish 6.0 to allow HTTP/2 push back to another HAProxy frontend with "proto h2" enabled, as Varnish seems to only support HTTP/1.1 to its backend servers, even though varnishlog shows the HTTP/2 protocol being used in the request from HAProxy.
I have tested the following and it works all the way with HTTP/2 (end to end):
HTTP/2-browser -> Public-HAProxy-h2frontend -> Back-HAProxy-h2frontend-> HTTP/2-SSL-Webserver (IIS)
The following fails after Varnish, as it only uses HTTP/1.1 to the Back-HAProxy-h2frontend; I need to be able to force Varnish to keep using HTTP/2 to the backend server:
HTTP/2-browser -> Public-HAProxy-h2frontend -> HTTP/2-Varnish -> Back-HAProxy-h2frontend-> HTTP/2-SSL-Webserver (IIS)

How to get nginx to take advantage of http2 with express

I am using express with node and nginx as a reverse proxy. I'd like to know how to take advantage of http/2 with nginx to serve static content, with all other requests being forwarded to the express API.
At the moment my Express server speaks HTTP/1.1, and nginx accepts HTTP/2 connections and forwards them to Express. How do I set up nginx so that it serves everything in my statics folder over HTTP/2, but forwards all requests to the API as HTTP/1.1?
I will break your question into two parts:
How to take advantage of http/2.0 to serve static files from nginx?
How to set up nginx to send http/1.1 requests to the backend server in the case where nginx acts as a reverse proxy?
Answer 1:
For the case of serving static files, the major performance benefit comes from using the multiplexing feature of the http/2.0 protocol.
Multiplexing enhances the pipelining feature introduced in http/1.1 and overcomes the problem of head-of-line (HOL) blocking. With multiplexing you can use the same underlying TCP connection to load multiple resources in parallel over one HTTP connection. You should also consider stream prioritisation to assign priority to the resources you want to load first on the page; otherwise the loading of some critical resources can be delayed, since all resources contend for the same multiplexed connection.
Answer 2:
Sending http/1.1 requests to the backend server is the default behaviour, so if you have already configured nginx to use http/2.0 you do not have to do anything special to proxy http/1.1 requests to your backend. This is because nginx does not support http/2.0 in the proxy module as of now; refer to this ticket. Also, please check this DigitalOcean tutorial, which will guide you through setting up nginx with http/2.0 on Ubuntu 16.04.
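A minimal nginx sketch of that setup might look like the following (server name, paths and ports are placeholders); static files are served directly over HTTP/2, while everything else is proxied to Express over HTTP/1.1:

    # Sketch only: HTTP/2 towards clients, HTTP/1.1 towards the Express backend.
    server {
        listen 443 ssl http2;
        server_name example.com;

        ssl_certificate     /etc/ssl/example.com/fullchain.pem;
        ssl_certificate_key /etc/ssl/example.com/privkey.pem;

        # Serve static assets straight from disk.
        location /static/ {
            root /var/www/myapp;    # files live under /var/www/myapp/static/
        }

        # Everything else goes to the Express API (HTTP/1.1 is nginx's proxy default).
        location / {
            proxy_pass http://127.0.0.1:3000;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
        }
    }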

How to set up a forward proxy on Windows Server for outgoing HTTP and HTTPS requests?

I have a Windows Server 2012 VPS running a web app behind Cloudflare. The app needs to initiate outbound connections based on user actions (e.g. upload an image from a URL). The problem is that this 'leaks' my server's IP address and increases the risk of DDoS attacks.
So I would like to prevent my server's IP from being discovered by setting up a forward proxy. So far my research has shown that this is no simple task and would involve setting up another VPS to act as a proxy.
Does this extra forward-proxy VPS have to be running Windows? Are there any paid services that could act as a forward proxy for my server (like Cloudflare's reverse proxy system)?
Also, it seems that the suggested IIS forward proxy plugin, Application Request Routing, does not work for HTTPS.
Is there a solution for both types of outgoing (HTTPS + HTTP) requests?
I'm really lost here, so any help or suggestions would be appreciated.
You are correct in needing a "Forward Proxy". A good analogy for this is the proxy settings your browser has for outbound requests. In your case, the web application behaves like a desktop browser and can be configured to make the resource request through a proxy.
Often you can control this for individual requests at the application layer. An example of doing so with C#: C# Connecting Through Proxy
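If the application happened to be a Node.js app, the same idea might look roughly like this; the proxy address is hypothetical and the sketch assumes the third-party https-proxy-agent package (v7-style import):

    // Sketch only: send an outbound HTTPS request through a forward proxy at the
    // application layer. Assumes the third-party "https-proxy-agent" package and a
    // hypothetical proxy address.
    const https = require('https');
    const { HttpsProxyAgent } = require('https-proxy-agent');

    const agent = new HttpsProxyAgent('http://proxy.internal.example:3128');

    https.get('https://example.com/image.jpg', { agent }, (res) => {
      console.log('status:', res.statusCode);
      res.resume(); // drain; a real app would stream the body to storage
    });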
As far as the actual proxy server goes: no, it does not need to run Windows or IIS. Yes, you can use a proxy service. The vast majority of proxy services are targeted towards consumers and are used for personal privacy or to get around network restrictions; as such, I have no direct recommendations.
Cloudflare actually has recommendations regarding this: https://blog.cloudflare.com/ddos-prevention-protecting-the-origin/.
Features like "upload from URL" that allow the user to upload a photo from a given URL should be configured so that the server doing the download is not the website origin server.
This may be a more comfortable risk mitigator, as it wouldn't depend on a third party proxy service. A request for upload could be handled as a web service call to a dedicated "file downloader" server. Keep in mind that if you have a queued process for another server to do the work, and that server is hosted in the same infrastructure, both might be impacted by a DDoS, depending on the type of DDoS.
Your question implies that you may be comfortable using a non-Windows server. Much software exists that can operate as a proxy (most web servers can), but it suffers from the same problem as ARR: lack of support for the HTTP CONNECT method, which is used by modern browsers to open an HTTPS tunnel before issuing a GET. Squid is very popular, open source, and supports everything needed to connect to... anything. It's not trivial to set up. Apache also has support for this in mod_proxy_connect, but I have no experience with it and the online documentation isn't very robust. It's Apache, though, so it may be worth the extra investigation.
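For reference, the CONNECT exchange that ARR is missing looks roughly like this on the wire (the client then starts its TLS handshake through the opened tunnel):

    CONNECT example.com:443 HTTP/1.1
    Host: example.com:443

    HTTP/1.1 200 Connection established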

mod_spdy: Browsers which do not support the SPDY protocol

I have a doubt about using the mod_spdy module on my website:
If I install the mod_spdy module in my Apache server, what will happen with HTTP requests coming from desktop and mobile browsers which do not support the SPDY protocol? (See which browsers do not support SPDY at http://caniuse.com/spdy.)
I don't know whether, in this case, Apache will serve the content over plain HTTP or whether the browser will have problems rendering it. In the latter case, is there any solution for browsers that do not support SPDY? For instance, a web server that responds with a different protocol (HTTP or SPDY) depending on which user agent is requesting: browsers that support SPDY versus browsers that only support HTTP.
Thanks in advance,
First of all, Apache mod_spdy supports encrypted connections (HTTPS) only, therefore you have to create a VirtualHost for port 443 and add your SSL certificate. mod_spdy will automatically fall back to HTTP/1.1 over HTTPS if the browser does not support SPDY. A good use for it is to enable server push. Have fun with SPDY!
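A rough sketch of the 443 virtual host described above (certificate paths and server name are placeholders; the SpdyEnabled directive comes from mod_spdy):

    # Sketch only: TLS virtual host with mod_spdy turned on. Browsers that negotiate
    # SPDY use it; everyone else falls back to HTTP/1.1 over HTTPS.
    <VirtualHost *:443>
        ServerName www.example.com
        SSLEngine on
        SSLCertificateFile    /etc/ssl/example.com/cert.pem
        SSLCertificateKeyFile /etc/ssl/example.com/key.pem
        SpdyEnabled on
    </VirtualHost>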
