In our project we're using two servers: one as a PROD API server and one as a proxy (nginx, in fact).
The proxy server uses HTTP/2 as well. In one scenario the proxy gets a response from the PROD API server, replaces the PROD links with the proxy's own, and then returns that to the client.
In that case we hit the "net::ERR_SPDY_PROTOCOL_ERROR 200" error. I googled the issue a little, but it looks like there may be several reasons for it.
In my case it occurs only when we replace hosts (i.e. modify the response from PROD before sending it to the client).
Can someone describe what "net::ERR_SPDY_PROTOCOL_ERROR 200" actually means, and perhaps best practices to avoid it?
HTTP/2 is derived from the earlier SPDY protocol, which is probably why the error message doesn't mention HTTP/2 at all.
One of the reasons you may see the ERR_SPDY_PROTOCOL_ERROR message is an invalid HTTP header coming from the server. Perhaps your proxy is changing an HTTP response header in a way that makes it invalid or malformed?
Try disabling HTTP/2 on your proxy server and see if the error goes away. If it does, inspect the response headers and make sure they are valid. I suspect your proxy server is malforming the response.
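One classic pitfall when a proxy rewrites response bodies is that the upstream Content-Length no longer matches the modified body, which HTTP/2 clients report as a protocol error. A minimal nginx sketch of a host replacement that keeps the response consistent (prod.example.com and proxy.example.com are placeholder hosts):

location / {
    proxy_pass https://prod.example.com;

    # sub_filter cannot rewrite compressed bodies, so ask the upstream
    # for an uncompressed response
    proxy_set_header Accept-Encoding "";

    # Replace PROD links with the proxy's own host; nginx drops the
    # now-stale Content-Length and re-chunks the modified body itself
    sub_filter 'prod.example.com' 'proxy.example.com';
    sub_filter_once off;
    sub_filter_types text/html application/json;
}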
We hit a similar issue today when running the reverse proxy server using the Docker image nginx:1.16.0-alpine. After switching to nginx:1.16.0, the issue was solved.
I am using Nginx as my HTTPS server to serve my HTTP content from my Node server.
I am also hosting my server on Google Cloud.
I kept getting a 504 Gateway Timeout error, so I wondered if it was because I hadn't opened my upstream server's (Node server) port 8080. After opening it, it works, but I'm not sure if that is the correct way to do it.
But then I kept looking at other docs and tutorials online, and I never saw people configure it this way to connect to a Node server; they mainly left only port 80 open. So I wondered if my config in the server block was causing the 504 gateway problem.
---------- second update
This is my setting, and default_server is written by default.
But I always see docs include a directive, server_name; I don't quite understand this directive. Should I consider it for later use, although it works now?
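For reference, a minimal sketch of such a server block, assuming the Node app listens on 127.0.0.1:8080 (example.com is a placeholder domain). With nginx and Node on the same machine, only ports 80/443 need to be open externally, since the proxy reaches Node over localhost:

server {
    listen 80 default_server;
    # server_name tells nginx which Host headers this block answers;
    # with a single site, default_server alone already catches everything
    server_name example.com;

    location / {
        # forward to the Node server on the same host; port 8080 does not
        # need to be open in the firewall when nginx and Node share the box
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}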
Aside from that, I got a server error from my app:
FetchError: request to https://34.96.213.54:443/search/guest2 failed, reason: self-signed certificate
Why is it that hitting that API directly in Chrome and in Postman works, but it fails here?
---------- third update
About the self-signed certificate: you need to buy a certificate or use a free service like https://letsencrypt.org. Besides that, your questions are quite basic, so you should research the nginx docs more (http://nginx.org/en/docs/http/server_names.html).
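Until a proper certificate is in place, a workaround for development only is to tell the Node client to accept the self-signed certificate. This sketch assumes node-fetch, which the FetchError above suggests; never do this in production:

const https = require('https');
const fetch = require('node-fetch');

// Development only: accept the server's self-signed certificate
const agent = new https.Agent({ rejectUnauthorized: false });

fetch('https://34.96.213.54:443/search/guest2', { agent })
  .then(res => res.text())
  .then(console.log)
  .catch(console.error);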
Need some help digging deeper into why IIS is behaving in a certain way. Edge/Chrome makes an HTTP/2.0 request to IIS, using the IPv6 address in the header (https://[ipv6]/), which results in the server generating a 302 response. The ISAPI filter makes some modifications to the 302 response and replaces the response buffer. IIS drops the request/response and logs this in the HTTPERR log:
<date> <time> fe80::a993:90bf:89ff:1a54%2 1106 fe80::bdbf:5254:27d2:33d8%2 443 HTTP/2.0 GET <url> 1 - 1 Connection_Dropped_List_Full <pool>
I suspect this is related to HTTP/2.0: when putting Fiddler in the middle, it isn't HTTP/2.0 anymore; it downgrades to HTTP/1.1 and it works.
When using an IPv4 address, it works. In either case the filter goes through the identical steps. There is no indication in the filter that anything went wrong.
Failed Request Tracing will not write buffers for incomplete/dropped requests that appear in the HTTPERR log.
Is there a place where I can find out more detailed information about why IIS is dropping the request?
I did a network capture, and it looks like the browser is initiating the FIN teardown of the session.
Do you use any load balancer or reverse proxy before the request gets to IIS? This error indicates that the log cannot store more dropped connections, so the problem is that your connection is being dropped.
If you use a load balancer, the web application may be under heavy load, and because of this no threads are available to provide logging data to HTTP.sys. Check this.
Or, the client closed the request before IIS responded, but IIS still sent the response. This is more likely to be a problem with the application itself, not IIS and http.sys. Check this.
One thing I noticed is that if you change HTTP/2 to HTTP/1.1, it works well. The main difference between HTTP/1.1 and HTTP/2 is performance.
HTTP/1.1 practically allows only one outstanding request per TCP connection (HTTP pipelining allows more than one outstanding request, but it still doesn't solve the problem completely).
HTTP/2.0 allows using the same TCP connection for multiple parallel requests.
So it looks like that when you use HTTP/2, one connection carries multiple requests, and the application cannot handle these requests well, especially the image requests.
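To verify this, HTTP/2 can be disabled at the http.sys level via the registry; these value names are documented for Windows Server 2016 and later, and a reboot is required. A sketch:

reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters /v EnableHttp2Tls /t REG_DWORD /d 0 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters /v EnableHttp2Cleartext /t REG_DWORD /d 0 /f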
Another thing is that Failed Request Tracing can capture all requests and responses, including those with status codes 200 and 302.
I'm using the node-request module, regularly sending GET requests to a set of URLs and, sometimes, getting the error below on some sites.
Error: 29472:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:openssl\ssl\s23_clnt.c:683
The problem is that I don't always get this error, nor always on the same URLs, just sometimes. Also, it can't be ignored with "strictSSL: false".
I have read that this can be related to me sending SSL requests with the wrong protocol (SSLv2, SSLv3, TLS..). But this doesn't explain why it happens irregularly.
Btw, I'm running nodejs on a Win 2008 server.
Any help is appreciated.
You will get such an error message when you request an HTTPS resource via the wrong port, such as 80. So please make sure you specified the right port, 443, in the Request options.
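For instance, with the request module the port follows from the URL, so a sketch (example.com is a placeholder):

const request = require('request'); // the module the question uses

// https URLs default to port 443; pointing this at port 80 would yield
// the "unknown protocol" error, because port 80 answers with plain HTTP
request({ url: 'https://example.com:443/' }, (err, res, body) => {
  if (err) return console.error(err);
  console.log(res.statusCode);
});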
This was totally my bad.
I was using the standard node http.request in a part of the code which should be sending requests only to http addresses. It seems the DB had a single https address which was queried at random intervals.
Simply, I was trying to send a http request to https.
I got this error because I was using require('https') where I should have been using require('http').
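A minimal sketch of guarding against that mix-up by picking the module from the URL's protocol (the get helper here is hypothetical):

const http = require('http');
const https = require('https');

// Pick the right module for the URL's protocol; plain http.request
// cannot complete a TLS handshake against an https endpoint
function get(url, callback) {
  const client = url.startsWith('https:') ? https : http;
  return client.get(url, callback);
}

get('https://example.com/', res => console.log(res.statusCode));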
Some of the sites are speaking SSLv2, or at least sending an SSLv2 server-hello, and your client doesn't speak, or isn't configured to speak, SSLv2. You need to make a policy decision here. SSLv2 should have vanished from the face of the earth years ago, and sites that still use it are insecure. However, if you gotta talk to them, you just have to enable it at your end, if you can. I would complain to the site owners though if you can.
I had this problem (403 error for each package) and found nothing great on the internet to solve it.
The .npmrc file inside my user folder was wrong, and I had misunderstood it.
I changed this npmrc line from
proxy=http://XX.XX.XXX.XXX:XXX/
to:
proxy = XX.XX.XXX.XXX:XXXX
var https = require('https');
// Force the global agent to use SSLv3 for the TLS handshake. Note that
// SSLv3 is insecure and has been removed from modern Node/OpenSSL builds.
https.globalAgent.options.secureProtocol = 'SSLv3_method';
I got this error while connecting to Amazon RDS. I checked the server status: 50% CPU usage, even though it was a development server and no one was using it.
It was working before, and nothing in the connection configuration has changed.
Rebooting the server fixed the issue for me.
So, in short:
vi ~/.proxy_info
export http_proxy=<username>:<password>@<proxy>:8080
export https_proxy=<username>:<password>@<proxy>:8080
source ~/.proxy_info
Hope this helps someone in a hurry :)
In my case (the site's SSL uses EC curves), the issue with the SSL was solved by adding the option ecdhCurve: 'P-521:P-384:P-256':
const request = require('request');
request({
  url, // url is defined elsewhere
  agentOptions: { ecdhCurve: 'P-521:P-384:P-256' }, // curves offered during the TLS handshake
}, (err, res, body) => {
  // ...
});
JFYI, maybe this will help someone
I got this error while using Rocket.Chat to communicate with my GitLab via an enterprise proxy,
because I was using https://:8080 when it actually worked with http://:8080.
I've been testing HTTP/2 multiplexing and HTTP/2 Server Push in Node.js locally and inspecting it in the waterfall network graph in Chrome DevTools.
While using my own Node.js Server Push with res.stream.pushStream, the pushed asset showed up in DevTools with "Initiator: Push (index)", and the change was clearly noticeable in the waterfall graph.
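For reference, a minimal sketch of that kind of push with Node's http2 compatibility API (key.pem/cert.pem and picture.jpg are placeholder files):

const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('key.pem'),   // local test certificate
  cert: fs.readFileSync('cert.pem'),
});

server.on('request', (req, res) => {
  if (req.url === '/') {
    // Push the image before the HTML that references it is sent
    res.stream.pushStream({ ':path': '/picture.jpg' }, (err, pushStream) => {
      if (!err) pushStream.respondWithFile('picture.jpg');
    });
    res.end('<html><img src="/picture.jpg"></html>');
  }
});

server.listen(8443);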
I did some more research and figured I should use a reverse proxy (for example nginx) to do the job: connect to my Node.js upstream via HTTP/1.1 and serve HTTP/2 from the reverse proxy.
After setting up nginx.conf with http2_push_preload on;, I sent headers like this from my Node.js backend:
res.setHeader("Link","</picture.jpg>; as=image; rel=preload");
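The relevant nginx side, as a sketch (assuming the Node backend on 127.0.0.1:8080 behind an HTTPS/HTTP2 listener):

server {
    listen 443 ssl http2;

    location / {
        proxy_pass http://127.0.0.1:8080;
        # turn upstream "Link: ...; rel=preload" headers into HTTP/2 pushes
        http2_push_preload on;
    }
}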
To my surprise, I didn't see the "Push / (index)" initiator but "Other", and the asset listed in the Link header seemed to show up on the waterfall graph quicker, with a slightly lower TTFB compared to the rest of the assets.
I've also been looking for a solution to serve HTTP/2 as the reverse proxy and to fetch the assets from the upstream service directly via HTTP/2 without TLS (h2c), but it seems there's nothing like this.
Getting back to my question: how should I go about testing HTTP/2 Server Push? Is the "Initiator: Other" a misinterpretation by DevTools? It seems to be working, but it doesn't report as Server Push.
Also, are there any projects/solutions that would allow connecting to a backend upstream via HTTP/2 directly?
I'm pretty sure the latter setup shows slower access because of the overhead of using a reverse proxy instead of connecting to the server directly, magnified by the HTTP/1.1 usage on the target server.
Thanks to @Barry I figured out it's actually Chrome itself reporting "Other" as the initiator, and it indeed speeds up the process a bit by using the Resource Hints Link header, not HTTP/2 Server Push itself.
The problem was actually a bug I can't seem to reproduce, and it works well after a restart of the OS. After all, nginx was the culprit, not applying the headers correctly, which showed up when Chrome parsed them.
Sorry if this is a duplicate; as I am neither a security nor a network expert, I may have missed the correct lingo to find information.
I am working on an application to intercept and modify HTTP requests and responses between a web browser and a web server (see "how to intercept and modify HTTP responses on server side?" for the background). I decided to implement a reverse proxy in ASP.NET which forwards client requests to the back-end HTTP server, translates links and headers from the response to the properly "proxified" URL, and sends the response to the client after having extracted relevant information from it.
It is working as expected, except for the authentication part: the web server uses NTLM authentication by default, and just forwarding requests and responses through the reverse proxy does not allow the user to be authenticated on the remote application. Both the reverse proxy and the web application are on the same physical machine and are executed in the same IIS server (Windows server 2008/IIS 7 if that matters). I tried both enabling and disabling authentication on the reverse proxy app with no luck.
I have looked for information about it, and it seems to be related to the "double-hop problem", which I do not understand. My question is: is there a way to authenticate the user on the remote application through the reverse proxy using NTLM? If there is none, are there alternative authentication methods I could use?
Even if you don't have a solution to my problem, just pointing me to relevant information about it to help me get out of the confusion would be great!
I found what the problem was (and it is NTLM): in order to have the browser ask the user for credentials, the response must have a 401 status code. My reverse proxy was forwarding the response to the browser, but IIS was adding a standard HTML body explaining that the requested page cannot be accessed, thus preventing the browser from asking for credentials.
The problem was solved by removing the response content when the status code is a 401.
With all due respect to the one who answered this some years ago, I must admit this is plainly false. The problem was indeed solved AFTER removing the response content when the status code is a 401, but it had nothing to do with the initial problem.
The truth is that Windows authentication was made to authenticate people over local Windows networks, where no proxy server is present or even needed.
The main problem with NTLM authentication is that this protocol does not authenticate the HTTP session but the underlying TCP connection, and as far as I know there is no way to access it from ASP code.
Every proxy server I tried broke NTLM authentication.
Windows authentication is comfortable for a user because they won't ever need to enter their password for whatever application may lie in the intranet; frightening for a security guy because there is an auto-login without even a prompt if the site domain is trusted by IE; and shocking for a network administrator because it melts the application, transport, and network layers into some "Windows ball of mud" instead of just plain HTTP traffic.
NTLM won't work if the TCP packets are not forwarded exactly as the reverse proxy received them, and that's why many reverse proxies (like nginx) don't work with NTLM authentication: they forward HTTP requests correctly but not the TCP packets.
Nginx has the functionality to work with NTLM authentication. Keepalive needs to be enabled, which is only available through the upstream module. Additionally, in the location block you need to specify HTTP/1.1 and clear the "Connection" header field for each proxied request. The nginx config should look something like:
upstream http_backend {
    server 1.1.1.1:80;
    keepalive 16;
}

server {
    ...
    location / {
        proxy_pass http://http_backend/;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}
I scratched my head for quite some time over this issue, but the above works for me. Note that if you need to proxy HTTPS traffic, a separate upstream block is needed. To clarify a bit more, "keepalive 16;" sets the number of idle keepalive connections to the upstream that each worker process keeps open. Adjust the number according to the expected number of simultaneous visitors on the site.
Although this is an old post, I just want to report that it works quite well for me with an Apache 2.2 reverse proxy and the keepalive=on option. Obviously, this keeps the connection between the proxy and the SharePoint host open and "pinned" to the client<>proxy connection. I don't exactly know the mechanisms behind this, but it works fairly well.
But: sometimes my users encounter the issue that they're logged in as another user, so there seems to be some mixing-up of sessions. I will have to give this some further testing.
Solution for everything (in case you have a valid, signed SSL certificate): Switch IIS to Basic Auth. This works absolutely fine, and even Windows (i.e. Office with SharePoint connection, all WebClient-based processes etc.) won't complain at all.
But they will complain when you're just using HTTP without SSL/TLS, and also with self-signed certificates.
I confirm that it works with keepalive=on on Apache 2.2.
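For reference, a sketch of such an Apache 2.2 mod_proxy setup (sharepoint.internal is a placeholder backend host):

# keepalive=On pins the backend TCP connection so the NTLM handshake
# survives across the proxied requests
ProxyPass        / http://sharepoint.internal/ keepalive=On
ProxyPassReverse / http://sharepoint.internal/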
I examined frames with Wireshark, and I know why it doesn't work. NTLM won't work if the TCP packets are not forwarded exactly as the reverse proxy received them. That's why many reverse proxies, like nginx, don't work with NTLM authentication. Reverse proxies forward HTTP requests correctly but not the TCP packets.
NTLM requires a TCP reverse proxy.