I've been testing HTTP/2 multiplexing and HTTP/2 Server Push in Node.js locally and inspected the results in the waterfall network graph in Chrome DevTools.
While using my own Node.js Server Push with res.stream.pushStream, I got an "Initiator: Push (index)" entry in DevTools, and the improvement was clearly noticeable in the waterfall graph.
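For reference, that kind of push handler looks roughly like the following sketch using Node's core http2 API (certificate paths and the pushed asset are placeholders; the question's res.stream.pushStream is the compatibility-API handle to the same stream object):

const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem'),
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // Push /picture.jpg before responding with the index page.
    stream.pushStream({ ':path': '/picture.jpg' }, (err, pushStream) => {
      if (err) return;
      pushStream.respondWithFile('picture.jpg', {
        'content-type': 'image/jpeg',
      });
    });
    stream.respond({ ':status': 200, 'content-type': 'text/html' });
    stream.end('<html><body><img src="/picture.jpg"></body></html>');
  }
});

server.listen(8443);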
I did some more research and figured I should use a reverse proxy (for example Nginx) to do the job instead: connect to my Node.js upstream via HTTP/1.1 and serve HTTP/2 from the reverse proxy.
After setting up nginx.conf with http2_push_preload on; I sent headers like this from my Node.js backend:
res.setHeader("Link","</picture.jpg>; as=image; rel=preload");
To my surprise, I didn't see the "Push / (index)" initiator but "Other" instead, and the asset listed in the Link header showed up on the waterfall graph sooner, with a slightly lower TTFB than the rest of the assets.
I've also been looking for a way to serve HTTP/2 from the reverse proxy while downloading the assets from the backend service directly via HTTP/2 without TLS (h2c), but there seems to be nothing like this.
Getting back to my question: how should I go about testing HTTP/2 Server Push? Is the "Initiator: Other" a misinterpretation by DevTools? It seems to be working, but it doesn't report as Server Push.
Also, are there any projects/solutions that would allow connecting to a backend upstream via HTTP/2 directly?
I'm pretty sure the latter setup shows slower access because of the overhead of going through a reverse proxy instead of connecting to the server directly, magnified by the HTTP/1.1 hop to the target server.
Thanks to @Barry I figured out that it's actually Chrome itself reporting "Other" as the initiator, and that the speed-up comes from the Resource Hints Link header, not from HTTP/2 Server Push itself.
The remaining problem was a bug I can't seem to reproduce, and it works well after a restart of the OS. In the end Nginx was the culprit: it wasn't handling the headers correctly, which showed up once Chrome parsed the response.
Related
I have a working Node.js Express-based server (and client) application here that shows RPC over HTTP + WebSockets. It works perfectly when run locally (using devcontainers) and includes the Dockerfile as well as devcontainer.json. However, when run from a Codespace, it fails with the following client-side error messages:
client.js:9 Mixed Content:
The page at 'https://aniongithub-jsonrpc-bidirectional-example-<redacted>-8080.preview.app.github.dev/'
was loaded over HTTPS, but attempted to connect to the insecure WebSocket endpoint
'ws://aniongithub-jsonrpc-bidirectional-example-<redacted>-8080.preview.app.github.dev/api'.
This request has been blocked; this endpoint must be available over WSS.
(anonymous) # client.js:9
client.js:9 Uncaught DOMException: Failed to construct 'WebSocket':
An insecure WebSocket connection may not be initiated from a page loaded over HTTPS
at 'https://aniongithub-jsonrpc-bidirectional-example-<redacted>-8080.preview.app.github.dev/client.js:9:10'
The documentation here states that "By default, GitHub Codespaces forwards ports using HTTP but you can update any port to use HTTPS, as needed." When I check the indicated settings, the port is set to HTTP. What am I missing here? How can I get it to serve my Express application over HTTP?
Note: my intention is that when the repo is cloned locally and opened in a devcontainer, the code works just as it would when opened in a Codespace. This means I need to ensure that the certs generated by Codespaces are somehow factored into my local devcontainer process, or that I forgo authentication altogether. Alternatively, I need to detect whether I'm running on Codespaces and behave differently, which seems messy and shouldn't be necessary. Hope this makes my intentions for asking this question clearer!
It turns out that I just couldn't use an insecure RPC endpoint from a page served over HTTPS, so the solution was to check location.protocol and use ws or wss accordingly when initializing the client RPC endpoint.
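In code, the fix looks something like this (the endpoint path /api is taken from the error messages above):

// Match the WebSocket scheme to the page's own scheme so that an
// HTTPS-served page never tries to open an insecure ws:// connection.
const wsProtocol = location.protocol === 'https:' ? 'wss:' : 'ws:';
const endpoint = wsProtocol + '//' + location.host + '/api';
const socket = new WebSocket(endpoint);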
I have a problem with an Express.js service running in production that I'm not able to replicate on localhost. I have already tried replaying all the production URLs against my local machine, and everything works fine there. So I suspect the problem lies in the data in the HTTP headers (cookies, user agents, languages...).
So, is there a way (some Express module, or a sniffer that runs on Ubuntu) to easily dump the full headers of each request on the server, so I can later repeat those exact requests against my localhost?
You can capture network packets with https://www.wireshark.org/, analyze them, and maybe find the difference between your local environment and the production one.
You can try a proxy tool like Charles (https://www.charlesproxy.com/) or Fiddler (http://www.telerik.com/fiddler) to log your browser requests.
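If you'd rather capture the data inside the app itself, a minimal Express middleware along these lines could dump every request's full header set to a file for later replay (the file name and the JSON-per-line format are just one possible choice):

const express = require('express');
const fs = require('fs');

const app = express();

// Log method, URL and the complete header set (cookies, user agent,
// languages...) of every incoming request, one JSON object per line.
app.use((req, res, next) => {
  const entry = {
    time: new Date().toISOString(),
    method: req.method,
    url: req.originalUrl,
    headers: req.headers,
  };
  fs.appendFile('requests.log', JSON.stringify(entry) + '\n', () => {});
  next();
});

app.listen(3000);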
I am trying to update our online shop to use HTTP/2 with Server Push, but I can't find a webserver like Nginx (which we use for proxying and some other stuff) that supports HTTP/2 upstream. We are using Node.js with the node http module at the moment, but would like to switch to the node spdy module, which supports HTTP/2 with Server Push. I have tried H2O as an alternative to Nginx, but it doesn't support HTTP/2 upstream either.
I am kind of lost at the moment and need help.
Nginx has only just added support for HTTP/2 push, so unless you are running the latest mainline version you will not be able to do this, and because it is so new there are still some issues with it. Nginx does not support HTTP/2 over backend connections (and has stated it won't support this), so you cannot push directly from a downstream system all the way up like you suggest.
There is some question as to whether that is the best way to push anyway. For example, a downstream system may push to the upstream proxy server even when the client does not support push, which is a wasted push.
So the better way is to push from the proxy and have the downstream system tell the upstream proxy (via Link headers) to do that push. This has several advantages: it reduces complexity, it lets the downstream system push assets it may not control (e.g. static assets like stylesheets, JavaScript, images, etc.), it allows a central store of already-pushed assets (cache digests), and it does not require HTTP/2 to be supported all the way through (Link headers can be sent over HTTP/1.1 as easily as HTTP/2).
The main downside to pushing from the upstream proxy via Link headers is that you have to wait for the requested resource to be ready, since the Link headers are read from the response. If the requested resource takes some time to generate, it may be more beneficial to start pushing other resources while it is being processed. This is solved by the new 103 Early Hints HTTP status code, which lets you reply earlier, before sending the main 200 status code later. This early message can carry Link headers, which the upstream proxy can read and use to push the resource. I am not sure if the Nginx implementation will support this.
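For illustration, recent Node versions (18.11 and later) can emit such an early response via response.writeEarlyHints; a minimal sketch (whether the proxy in front passes the 103 on is a separate question):

const http = require('http');

http.createServer((req, res) => {
  // Send a 103 Early Hints response immediately, carrying the Link
  // header, so a proxy in front can start the push/preload while the
  // real response is still being generated.
  res.writeEarlyHints({
    link: '</style.css>; rel=preload; as=style',
  });

  // Simulate a slow page render, then send the final response.
  setTimeout(() => {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('<html><head><link rel="stylesheet" href="/style.css"></head></html>');
  }, 100);
}).listen(8080);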
Incidentally, Apache has supported push for a while now and has a much more mature implementation. It supports push via direct Apache config or via Link headers (including via 103 responses, which by default are not passed on, in case of compatibility issues). It even supports proxying to backends via HTTP/2, though it does not support direct push over backend connections, for the reasons described above. Some other less well-known servers (e.g. H2O) also support HTTP/2 better than Nginx.
Finally, if you are using a CDN, it may support HTTP/2 push (often via Link headers) without you having to upgrade any of your backend infrastructure. In fact Cloudflare is an Nginx-based CDN that has had HTTP/2 push for a while; indeed, it was two Cloudflare engineers who back-ported their implementation to the base Nginx code.
As of NGINX 1.13.9 (just pushed to mainline today), you can have HTTP/2 server push out of the box by compiling it with the ngx_http_v2_module.
If you're interested in the recent addition, this is the commit that added most of the functionality: hg.nginx.org: HTTP/2: server push.
Its use is relatively straightforward: add the http2_push_preload directive to the server that is proxying Node, have Node set the Link header (as described in the W3C preload spec: https://www.w3.org/TR/preload/#server-push-http-2), and NGINX will do the job of sending the h2 frame that indicates a server push.
For instance, assume that you have a / endpoint that serves a regular index.html but also pushes image.svg to the client.
In NGINX you configure an upstream server and then enable http2_push_preload in the server block:
# Add an upstream server to proxy requests to.
upstream sample-http1 {
    server localhost:8080;
}

server {
    # Listen on port 8443 with http2 support on.
    listen 8443 http2;

    # Enable TLS such that we can have proper HTTP2
    # support using browsers.
    ssl on;
    ssl_certificate certs/cert_example.com.pem;
    ssl_certificate_key certs/key_example.com.pem;

    # Enable support for using `Link` headers to indicate
    # origin server push.
    http2_push_preload on;

    # Act as a reverse proxy for all requests, handing
    # them off to the Node upstream.
    location / {
        proxy_pass http://sample-http1;
    }
}
Then in the Node.js app, you'd serve / as you normally would, but add an extra Link header to the response:
response.setHeader('Link', '</image.svg>; rel=preload; as=image');
P.S.: yeah, you keep those angle brackets; I do not mean that you should replace them.
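For completeness, the Node side of this example could look roughly like the following sketch (assuming a plain HTTP/1.1 server on port 8080, matching the upstream block above):

const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  if (req.url === '/image.svg') {
    res.writeHead(200, { 'Content-Type': 'image/svg+xml' });
    res.end(fs.readFileSync('./image.svg'));
    return;
  }

  // The Link header tells NGINX (via http2_push_preload) to push
  // image.svg alongside the index page.
  res.writeHead(200, {
    'Content-Type': 'text/html',
    'Link': '</image.svg>; rel=preload; as=image',
  });
  res.end(fs.readFileSync('./index.html'));
}).listen(8080);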
By the way, the example I just gave (with some debugging tips) is written up in full here: https://ops.tips/blog/nginx-http2-server-push/.
You can compile/recompile nginx from source with the --with-http_v2_module configure parameter to enable HTTP/2 push capabilities.
I'm having fits accomplishing something, and after scouring Google & SO I'm throwing my hands up after a few days. I'm trying to do something that I think is pretty common: debug/examine all HTTP traffic while developing a Node.js app.
On Windows it is as simple as firing up Fiddler, and I can see all HTTP & HTTPS traffic from all processes. But I've switched platforms over to OSX and am trying to make the same thing work.
I've tried using Charles & mitmproxy, but all I'm seeing is the traffic to, and the responses from, my Node.js app. My app calls external services, some using the popular request package (which I have seen how to configure for a proxy), but also using other packages, like azure-storage. What's troubling me is that I can't get any of the debugging proxies to show me what the azure-storage package is sending to and receiving from the endpoints it calls.
Conceptually I think I get it... I have to tell these different things (like Node.js, request & azure-storage) to go through the proxy each of these tools uses... but how can you do that without modifying their source? Can't you do something like how Fiddler works on Windows, where all traffic simply goes through the proxy?
I'd use Fiddler on OSX, but it is currently not working, with no ETA in sight after talking to Telerik.
So the problem I was having was what I thought... in my specific instance, the module I was using to access Azure Storage was not using the default proxy. I found a package (global-tunnel) that hijacks everything that uses the request package and forces it through a proxy. With that in place, traffic showed up in the HTTP debuggers I was using.
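The setup looks roughly like this (the host and port are assumptions; point them at whatever your debugging proxy listens on):

// Force all outgoing http/https traffic from this process (including
// modules built on the request package) through a local debugging proxy.
const globalTunnel = require('global-tunnel');

globalTunnel.initialize({
  host: 'localhost',
  port: 8888,
});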
The problem now is reaching HTTPS endpoints... using something like Charles, the proxy presents its own SSL cert, which wasn't trusted by the Azure client, so the connections were refused. Back to the drawing board...
I want to ask about good practices. I have a Node.js (Express) web server and a socket.io push server (in case the technology matters). I could turn both of them into one application, but I want them separated (they can communicate with each other if necessary). There are two reasons to do that:
It will be easier to manage, debug and develop the app;
It will be a lot easier to scale the app: I can just add another instance of the push server or web server if necessary.
This is at least what I believe. The only problem is that when a client connects to the separate socket.io server, it won't send cookies (different port, cross-domain policy).
The workaround I came up with is to put a reverse proxy (written in Node.js as well) in front, check what kind of request we're dealing with, and route it to the web server or the push server accordingly. Great, now we have cookies in both the web server and the push server. The reverse proxy can also act as a load balancer, which is an additional bonus.
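For illustration, a minimal sketch of that front proxy using the node-http-proxy package (the package choice and the ports are just one possible setup):

const http = require('http');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({});
const WEB = 'http://localhost:3000';   // Express web server
const PUSH = 'http://localhost:3001';  // socket.io push server

const server = http.createServer((req, res) => {
  // socket.io traffic goes to the push server, everything else to the
  // web server; both now share one origin, so cookies reach both.
  const target = req.url.indexOf('/socket.io') === 0 ? PUSH : WEB;
  proxy.web(req, res, { target });
});

// WebSocket upgrade requests also need to be forwarded explicitly.
server.on('upgrade', (req, socket, head) => {
  proxy.ws(req, socket, head, { target: PUSH });
});

server.listen(80);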
It looks like a good idea to me. What do you think about this design? Is there perhaps another workaround for the cookie problem?
I recently did something similar. We initially used a Node.js reverse proxy but ran into reliability/scalability problems; we found that serving static files and proxying requests was best left to nginx. haproxy is also a very viable solution for standalone proxying.
HAProxy
Nginx as a reverse proxy