Cloudflare - No further requests possible during download - node.js

My site allows users to download big .zip files. The problem I'm dealing with right now is that whenever a user is downloading such a file, all other requests to the site wait until the download has finished or been cancelled, which makes the site practically unusable. In the Chrome network tab, those requests show as pending. Why could this be?
The server itself is implemented in Node.js using Express and is proxied through NGINX and then through Cloudflare. From what I have observed, the problem doesn't occur when I connect to the Express server or the NGINX proxy directly, only when the request is routed through Cloudflare.
This is my NGINX config, in case it helps:
server {
    listen 80;
    listen [::]:80;

    server_name marbleland.vani.ga;
    client_max_body_size 20m;

    location / {
        proxy_pass "http://localhost:20020/";
    }
}
Am I missing something obvious?

Related

How to handle Nginx reverse proxy 502 Error in specific case?

Here is my nginx default config:
server {
    listen 80;
    listen [::]:80;
    server_name _;

    location /login-with-args.html {
        alias /opt/code-server/login-with-args.html;
    }
}
The file really is at /opt/code-server/login-with-args.html, and curl on the Linux box gives me a 200, but in my browser I get a 502 error.
This is the URL I am hitting from the UI:
https://url/login-with-args.html?password=1233&Id=12123&Code=sand-42&port=8127
Generally I would have advised you to enable error logging and check the corresponding log.
But as far as I can see, there are mismatches in your question. Your configuration contains listen 80; which generally means plain HTTP, unless you add the ssl parameter (and I would not recommend enabling SSL/TLS on port 80). But the URL you are trying to request is:
https://url/login-with-args.html?password=1233&Id=12123&Code=sand-42&port=8127
which implies HTTPS on port 443 (the default when no other port is specified).
At the same time, there is no reverse proxy defined in your configuration; you have only aliased a static file.
Since you got a 502 error, your nginx is either located behind some proxy (or CDN), or there is another server section somewhere in the configuration with a listen directive carrying the ssl parameter and a reverse proxy definition.
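If such a hidden server section exists, it would look roughly like the sketch below (the certificate paths and upstream port are assumptions, not taken from the question); the 502 would then come from its proxy_pass when the upstream does not answer:

server {
    listen 443 ssl;
    server_name _;

    # assumed certificate paths
    ssl_certificate     /etc/nginx/ssl/example.pem;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location / {
        # a 502 from this block means the upstream did not respond properly
        proxy_pass http://127.0.0.1:8080;
    }
}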

Setting up an Nginx reverse proxy

I have a node application running on an EC2 instance. Node is running on port 5000. I want to access the API remotely.
This is my nginx configuration file:
server {
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    client_max_body_size 20M;

    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://127.0.0.1:5000;
    }

    location /nginx_status {
        # Turn on stats
        stub_status on;
        access_log off;
    }
}
When I run curl localhost/nginx_status
it returns
Active connections: 1
server accepts handled requests
11 11 12
Reading: 0 Writing: 1 Waiting: 0
Also, when I try to access the IP in the browser, it shows:
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
But if I try to access ip_address/nginx_status, it shows a 404 error. For example, with the IP address 123.456.789.098, opening it in the browser shows the message above, while 123.456.789.098/nginx_status returns a 404 error. Even curl ip_address/nginx_status returns a 404 error.
My question is: how can I access the node application running on port 5000 from the outside world?
Unfortunately I only see part of your config; is there another server block that listens on port 80?
You don't use default_server on your listen directives either, and without a server_name it is hard to tell the server blocks apart. So maybe another config whose server block is the default_server for port 80 is taking effect. Check in your /etc/nginx/ folder which server { ... } blocks exist.
The proxy_pass looks correct. If the Node.js server is really listening there, double-check whether it speaks http or https, so that proxy_pass forwards with the right scheme.
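Putting those two hints together, a minimal sketch of an unambiguous server block could look like this (the hostname is an assumption; the port matches the node app from the question):

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name example.com;    # assumed hostname, set your own

    location / {
        # plain http scheme, matching what the node app actually speaks
        proxy_pass http://127.0.0.1:5000;
    }
}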
You should then also add access control for stub_status, since that is information you don't want to hand out to everyone. In my setup only one internal application can reach it, on a separate listen that is not exposed to the internet:
server {
    listen 127.0.0.1:10081 default_server;

    location /flyingfish_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}
I'm curious what you find out! :)

Use a reverse proxy from an HTTPS client to an HTTP server running locally on my machine

I have a published site that uses HTTPS. The site needs to communicate with an HTTP Node Express API. The API runs on my local machine. Everything worked fine until I switched the client application to HTTPS; now I receive mixed content warnings. I have been reading about reverse proxies and wonder if this could be the solution to my problem. Is it possible to proxy a request to my localhost? Or will localhost point to the server the proxy is on?
I have been looking at using nginx as the reverse proxy server, but I have zero experience with proxies and am not positive how to go about it.
I am mainly wondering if it is possible or not before I dig any deeper.
Yes, this is a pretty standard use case for nginx (or any other reverse proxy). You would configure the location prefixes, etc. that need to go to your backend application and proxy to them via the proxy_pass directive. Any static content can be served directly from nginx, and all of this can then sit behind nginx.
Assuming that your application never issues absolute URLs that use "http://", this should resolve your mixed content warnings.
You will probably want to read some tutorials but the basics of your configuration would be:
server {
    listen 443 ssl; # you can also add http2
    server_name hostnames that you listen for;

    ssl_certificate_key /path/to/cert.key;
    ssl_certificate /path/to/cert.pem;

    root /var/www/sites/foo.com;

    location /path/handled/by/application {
        proxy_pass http://localhost:8000; # or whatever port is
    }
}
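If the application also issues redirects or builds absolute URLs, it can help to tell it that the original request arrived over HTTPS. This is not part of the answer above, just a common addition inside the proxied location:

location /path/handled/by/application {
    proxy_pass http://localhost:8000;
    # forward the original scheme and host so the backend can build correct URLs
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}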

Azure App Service getting Error 404 when redirected via NGINX

I created a VM with port 80 open and installed NGINX on it.
I created 2 App Services which can be accessed via x1.azurewebsites.net and x2.azurewebsites.net
I configured the VM to act as a load balancer, but when redirecting the traffic I get the following: https://i.gyazo.com/b94bed9c90d3b0f0c400c83f762f0544.png
I am not using my own domain. Does someone know what the issue could be?
I have the following configuration:
upstream backend {
    server xx.azurewebsites.net;
    server xxx.azurewebsites.net;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    server_name _;

    location / {
        proxy_pass http://backend;
    }
}
Azure App Service uses cookies for ARR (Application Request Routing). You have to make sure that your NGINX reverse proxy configuration passes the correct cookie / header to your web app.
The other possibility (to make sure the behavior comes from ARR) is to disable it: https://azure.microsoft.com/en-us/blog/disabling-arrs-instance-affinity-in-windows-azure-web-sites/
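As an illustration, here is a hedged sketch of forwarding the expected Host header (the hostname is assumed from the question). Azure's front end routes requests by Host, so proxying with a generic upstream name in the Host header is a common cause of 404s:

upstream backend {
    server x1.azurewebsites.net;    # assumed App Service hostname
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    location / {
        proxy_pass http://backend;
        # send the App Service hostname, not the upstream group name "backend"
        proxy_set_header Host x1.azurewebsites.net;
    }
}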

socket.io slow response after using nginx

I used my local setup without nginx to serve my Node.js application; I was using socket.io and the performance was quite good.
Now I am using nginx to proxy my requests, and I see that socket.io has a huge response time: the page itself is rendered fast, but the data rendered by socket.io is an order of magnitude slower than before.
I am using NGINX 1.1.16 and here is the conf:
gzip on;

server {
    listen 80;
    server_name localhost;
    #charset koi8-r;

    access_log logs/host.access.log main;

    location / {
        proxy_pass http://localhost:9999;
        root html;
        index index.html index.htm;
    }
}
Even though everything is working, I have two issues:
1. The socket.io response is slower than before. With NGINX the response time is around 12-15 seconds, and without it it's hardly 300 ms. I tried this with ApacheBench.
2. I see this message in the console, which was not there before using NGINX:
[2012-03-08 09:50:58.889] [INFO] console - warn - 'websocket connection invalid'
You could try adding:
proxy_buffering off;
See the docs for info, but I've seen some chatter on various forums that buffering increases the response time in some cases.
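For reference, a sketch of where the directive would go in the configuration from the question:

location / {
    proxy_pass http://localhost:9999;
    # disable buffering of the proxied response for this location
    proxy_buffering off;
    root html;
    index index.html index.htm;
}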
Is the console message from NGINX or SocketIO?
The NGINX proxy does not talk HTTP/1.1 to the backend, which may be why the websocket is not working.
Update:
Found a blog post about it: http://www.letseehere.com/reverse-proxy-web-sockets
A proposed solution:
http://blog.mixu.net/2011/08/13/nginx-websockets-ssl-and-socket-io-deployment/
Nginx only supports WebSocket proxying starting from 1.3.13. It should be straightforward to set it up; check the link below:
http://nginx.org/en/docs/http/websocket.html
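The configuration in those docs boils down to proxying with HTTP/1.1 and forwarding the Upgrade handshake. A minimal sketch for the setup from the question (the /socket.io/ prefix is socket.io's default path, assumed here):

location /socket.io/ {
    proxy_pass http://localhost:9999;
    # WebSocket support requires HTTP/1.1 and the Upgrade/Connection headers
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}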
