I've encountered no problems while developing with ng-flow, Node.js and Express, but I'm not able to configure Nginx properly as a reverse proxy so that it runs smoothly with ng-flow and Express.
Here is an example of a request that gets stuck with a pending status:
Remote Address:192.168.0.81:80
Request URL:http://itl.lan/api/v1/flow-upload/
Request Method:POST
Status Code:200 OK
Response Headers
Cache-Control:private, no-cache, no-store, must-revalidate
Connection:close
Date:Mon, 03 Aug 2015 20:24:56 GMT
Expires:-1
Pragma:no-cache
X-Powered-By:Express
Request Headers
Accept:*/*
Accept-Encoding:gzip, deflate
Accept-Language:en-US,en;q=0.8,it;q=0.6
App-Session-Hash:df9b1ac0-3a10-11e5-af61-af8fb284004c
Connection:keep-alive
Content-Length:1049741
Content-Type:multipart/form-data; boundary=----WebKitFormBoundaryY2FVUSE0oIcye77i
Host:itl.lan
Origin:http://itl.lan
Referer:http://itl.lan/webapp/
User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.125 Safari/537.36
Request Payload
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="flowChunkNumber"
9
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="flowChunkSize"
1048576
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="flowCurrentChunkSize"
1048576
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="flowTotalSize"
12515925
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="flowIdentifier"
12515925-showcase_001_20150802T2239tgz
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="flowFilename"
showcase_0.0.1_20150802T2239.tgz
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="flowRelativePath"
showcase_0.0.1_20150802T2239.tgz
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="flowTotalChunks"
11
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="file"; filename="showcase_0.0.1_20150802T2239.tgz"
Content-Type: application/x-compressed-tar
Here are the relevant directives in my Nginx server.conf:
upstream itl_node_app {
    server 127.0.0.1:5000;
    keepalive 8;
}

server {
    ...
    location /api/v1/flow-upload/ {
        proxy_set_header Access-Control-Allow-Origin *;
        proxy_set_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
        proxy_set_header 'Access-Control-Allow-Headers' 'X-Requested-With,Accept,Content-Type, Origin$
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://itl_node_app;
        proxy_redirect off;
    }
}
Checking the Nginx error.log, you'll find:
2015/08/03 22:34:28 [error] 8004#0: *792 upstream sent no valid HTTP/1.0 header while reading response header from upstream, client: 192.168.0.4, server: itl.lan, request: "POST /api/v1/flow-upload/ HTTP/1.1", upstream: "http://127.0.0.1:5000/api/v1/flow-upload/", host: "itl.lan", referrer: "http://itl.lan/webapp/"
Any help appreciated.
Cheers,
Luca
The Nginx log says the HTTP header is invalid. Check the controller defined in Express.js for this route:
/api/v1/flow-upload/
and verify that the status sent in the response is a number.
I hope this helps!
I wrote the Express.js "/api/v1/flow-upload/" route using the flow.js Node.js sample app as a model and example, which you can find here. The issue I described pops up on line 23 of that sample, where the status parameter may be a string literal. I learned the hard way that Nginx handles this far more strictly than Express does, raising the error above.
A simpler and safer way to go is to respond with, for instance:
res.status(200).send();
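For context, here is a minimal sketch of what that route can look like once it always responds with a numeric status. multer and the /tmp destination are assumptions for the multipart handling, not part of the original code; only the res.status(200).send() line comes from the fix above:

var express = require('express');
var multer = require('multer');

var app = express();
var upload = multer({ dest: '/tmp/flow-uploads' }); // assumed chunk destination

app.post('/api/v1/flow-upload/', upload.single('file'), function (req, res) {
  // req.body carries the flow.js metadata fields (flowChunkNumber, flowIdentifier, ...);
  // assemble or persist the chunk here as needed.
  // The crucial detail: pass a number to res.status(), never a string,
  // otherwise the status line sent upstream is one Nginx refuses to accept.
  res.status(200).send();
});

app.listen(5000); // matches the upstream itl_node_app server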
Related
Nginx caching for Azure blob authentication is not working, and I'm getting the error below:
HTTP/1.1 403 Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
Server: nginx/1.23.3
Date: Tue, 14 Feb 2023 04:50:24 GMT
Content-Type: application/xml
Content-Length: 408
Connection: keep-alive
x-ms-request-id: fb70e59e-701e-0040-172f-40ef6a000000
Access-Control-Expose-Headers: content-length
Access-Control-Allow-Origin: *
Below is my nginx.conf:
events {
    worker_connections 1024;
}

# Nginx configuration file
http {
    # Define the proxy cache settings
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m inactive=60m;
    proxy_cache_valid 200 60m;
    proxy_cache_valid 404 1m;
    proxy_cache_bypass $http_pragma;
    proxy_cache_revalidate on;

    server {
        listen 80;
        server_name <nginx-host-name>;

        location / {
            proxy_pass https://<my-blob-account>.blob.core.windows.net/nasunifiler64e2bc43-41f8-424d-8f4d-ed85d8fd1ab1-5/;
            proxy_set_header Host <my-blob-account>.blob.core.windows.net;
            proxy_set_header Authorization "SharedKey my-blob-account:<access-key>";
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}
There is no network issue; I have verified with azcopy and curl (a curl request directly to the Azure blob worked). Is my Authorization header correct, or do I need to set any additional headers?
I have a webchat (example.com/chat) listening on tcp://0.0.0.0:50500 on my server.
I've also configured an Nginx reverse proxy to forward requests from example.com/chat to 0.0.0.0:50500.
My site nginx conf goes like this:
map $http_upgrade $connection_upgrade {
    default Upgrade;
    '' close;
}

server {
    server_name example.com www.example.com;
    listen 5.4.....0:443 ssl http2;
    listen [2a0......:a4]:443 ssl http2;
    ssl_certificate "/var/www......._51.crt";
    ssl_certificate_key "/var/www/.......51.key";
    add_header Strict-Transport-Security "max-age=31536000" always;
    charset utf-8;

    gzip on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/css text/xml application/javascript text/plain application/json image/svg+xml image/x-icon;
    gzip_comp_level 1;

    set $root_path /var/www/user/data/www/example.com;
    root $root_path;
    disable_symlinks if_not_owner from=$root_path;

    location / {
        proxy_pass http://127.0.0.1:81;
        proxy_redirect http://127.0.0.1:81/ /;
        include /etc/nginx/proxy_params;
    }

    location ~ ^/chat {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://0.0.0.0:50500;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 300s;
        proxy_buffering off;
    }

    location ~* ^.+\.(jpg|jpeg|gif|png|svg|js|css|mp3|ogg|mpeg|avi|zip|gz|bz2|rar|swf|ico|7z|doc|docx|map|ogg|otf|pdf|tff|tif|txt|wav|webp|woff|woff2|xls|xlsx|xml)$ {
        try_files $uri $uri/ @fallback;
        expires 30d;
    }

    location @fallback {
        proxy_pass http://127.0.0.1:81;
        proxy_redirect http://127.0.0.1:81/ /;
        include /etc/nginx/proxy_params;
    }

    include "/etc/nginx/fastpanel2-sites/fastuser/example.com.includes";
    include /etc/nginx/fastpanel2-includes/*.conf;

    error_log /var/www/user/data/logs/example.com-frontend.error.log;
    access_log /var/www/user/data/logs/example.com-frontend.access.log;
}

server {
    server_name example.com www.example.com;
    listen 5.4.....0:80;
    listen [2a.....:a4]:80;
    return 301 https://$host$request_uri;

    error_log /var/www/user/data/logs/example.com-frontend.error.log;
    access_log /var/www/user/data/logs/example.com-frontend.access.log;
}
The webchat is configured to use these settings:
SOCKET_CHAT_URL="wss://example.com"
SOCKET_CHAT_PORT=50500
Since I already send an Upgrade header, the 426 Upgrade Required error looks strange to me.
I know there are a lot of similar threads related to this issue; however, they all suggest adding an Upgrade header, which I already have.
I've tried to:
Use both SOCKET_CHAT_URL="ws://example.com" and "wss://example.com"
Change the proxy_pass line to https://0.0.0.0:50500; << in this case the /chat page hits an Nginx 504 timeout
Change the WebSocket URL to the server IP: wss://123.312.123.321
Use the wss://example.com/chat format << in this case the page closes the WebSocket connection instantly
Also, here are the headers:
General
Request URL: https://example.com/chat
Request Method: GET
Status Code: 426
Remote Address: 5**.***.***.*50:443
Referrer Policy: strict-origin-when-cross-origin
Response Headers
date: Mon, 06 Sep 2021 21:11:50 GMT
sec-websocket-version: 13
server: nginx/1.18.0
strict-transport-security: max-age=31536000
upgrade: websocket
x-powered-by: Ratchet/0.4.3
Request Headers
:authority: example.com
:method: GET
:path: /chat
:scheme: https
accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
accept-encoding: gzip, deflate, br
accept-language: uk-UA,uk;q=0.9
cache-control: max-age=0
sec-ch-ua: "Chromium";v="94", "Google Chrome";v="94", ";Not A Brand";v="99"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Windows"
sec-fetch-dest: document
sec-fetch-mode: navigate
sec-fetch-site: none
sec-fetch-user: ?1
upgrade-insecure-requests: 1
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.31 Safari/537.36
Okay, the server runs on HTTP/2, and WebSocket upgrades are not supported over it.
Also, it is not possible to switch only one site to HTTP/1.1 with Nginx; you have to switch the entire server for that.
We've switched to socket.io.
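For reference, a minimal socket.io setup is sketched below; the port and the /chat-socket/ path are assumptions rather than the original chat's settings. The appeal is that socket.io negotiates its own transport and can fall back to HTTP long-polling when a plain WebSocket upgrade isn't available:

// server.js: hypothetical minimal socket.io chat backend
const { Server } = require('socket.io');

const io = new Server(50500, { path: '/chat-socket/' }); // assumed port and path

io.on('connection', function (socket) {
  socket.on('message', function (msg) {
    socket.broadcast.emit('message', msg); // relay chat messages to everyone else
  });
});

// Client side, loaded by the page at https://example.com/chat:
// const socket = io('https://example.com', { path: '/chat-socket/' });
// socket.emit('message', 'hello');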
When I run my express app locally, there are no issues whatsoever, but since I've placed it on an AWS EC2 instance running an Nginx web server, there are a few problems whenever I visit the designated domain.
Whenever I go to the domain, it displays a 504 Gateway Timeout for the main.css file that is being served from the public directory. Not sure if it's relevant, but I am using Jade templates with Express, and sass files are being compiled at runtime to the public/css directory.
Here's the request headers:
GET /css/main.css HTTP/1.1
Host: dispatch.nucleus.technology
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Accept: text/css,*/*;q=0.1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36
DNT: 1
Referer: http://dispatch.nucleus.technology/
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
There also seems to be a websocket error:
WebSocket connection to 'ws://dispatch.nucleus.technology/socket.io/?EIO=3&transport=websocket&sid=W0qJNlmaVdFbKoeBAAAA' failed: Error during WebSocket handshake: Unexpected response code: 400
I've since fixed the CSS problem, and it seems that running the app as root solves it, but I'm not sure why; if anyone has some insight regarding that, it would be very helpful.
As for the websocket error, I'm still receiving that despite running the app as root.
Adding this to the nginx config should get rid of the websocket error:
location / {
    proxy_http_version 1.1;                  # WebSocket requires HTTP/1.1 to the upstream
    proxy_set_header Upgrade $http_upgrade;  # pass the client's Upgrade header through
    proxy_set_header Connection "upgrade";   # Connection is hop-by-hop, so re-add it explicitly
    proxy_set_header Host $host;
    ...
}
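If you want to sanity-check the proxy after reloading Nginx, a quick hand-rolled client using the ws package (the URL is the one from the failing handshake above) might look like this sketch:

// check-ws.js: hypothetical handshake check using the "ws" package
var WebSocket = require('ws');

var ws = new WebSocket('ws://dispatch.nucleus.technology/socket.io/?EIO=3&transport=websocket');

ws.on('open', function () {
  console.log('handshake succeeded'); // Nginx forwarded the Upgrade and got 101 back
  ws.close();
});

ws.on('error', function (err) {
  console.log('handshake failed:', err.message); // e.g. an unexpected 400 response
});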
We have a Node/express web app that is serving static assets in addition to normal content, via express.static(). There is an nginx server in front of it that is currently configured to gzip these static asset requests, if the user agent is up for it.
However, though nginx is doing the gzip as expected, it is dropping the Content-Length header from the origin, and setting Transfer-Encoding: chunked instead. This breaks caching on our CDN.
Below are the responses for a typical static asset request (a JS file in this case), from the node backend, and from nginx:
Request:
curl -s -D - 'http://my_node_app/res/my_js.js' -H 'Accept-Encoding: gzip, deflate, sdch' -H 'Connection: keep-alive' --compressed -o /dev/null
Response Headers from Node:
HTTP/1.1 200 OK
Accept-Ranges: bytes
Date: Wed, 07 Jan 2015 02:24:55 GMT
Cache-Control: public, max-age=0
Last-Modified: Wed, 07 Jan 2015 01:12:05 GMT
Content-Type: application/javascript
Content-Length: 37386 // <--- The expected header
Connection: keep-alive
Response Headers from nginx:
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 07 Jan 2015 02:24:55 GMT
Content-Type: application/javascript
Transfer-Encoding: chunked // <--- The problematic header
Connection: keep-alive
Vary: Accept-Encoding
Cache-Control: public, max-age=0
Last-Modified: Wed, 07 Jan 2015 01:12:05 GMT
Content-Encoding: gzip
Our current nginx configuration for the static assets location looks like this:
nginx config:
# cache file paths that start with /res/
location /res/ {
    limit_except GET HEAD { }

    # http://nginx.com/resources/admin-guide/caching/
    # http://nginx.org/en/docs/http/ngx_http_proxy_module.html
    proxy_buffers 8 128k;
    #proxy_buffer_size 256k;
    #proxy_busy_buffers_size 256k;

    # The cache depends on proxy buffers, and will not work if proxy_buffering is set to off.
    proxy_buffering on;

    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_connect_timeout 2s;
    proxy_read_timeout 5s;
    proxy_pass http://node_backend;

    chunked_transfer_encoding off;

    proxy_cache my_app;
    proxy_cache_valid 15m;
    proxy_cache_key $uri$is_args$args;
}
As can be seen from the above config, even though we've explicitly set chunked_transfer_encoding off for such paths as per the nginx docs, have proxy_buffering on, and have a big enough proxy_buffers size, the response is still being chunked.
What are we missing here?
--Edit 1: version info--
$ nginx -v
nginx version: nginx/1.6.1
$ node -v
v0.10.30
--Edit 2: nginx gzip config--
# http://nginx.org/en/docs/http/ngx_http_gzip_module.html
gzip on;
gzip_buffers 32 4k;
gzip_comp_level 1;
gzip_min_length 1000;
#gzip_http_version 1.0;
gzip_types application/javascript text/css;
gzip_proxied any;
gzip_vary on;
You are correct; let me elaborate.
The headers are the first thing that needs to be sent. However, since you are using streaming compression, the final size is uncertain. You only know the size of the uncompressed asset, and sending a Content-Length that is too large would also be incorrect.
Thus, there are two options:
Transfer-Encoding: chunked
Completely compress the asset before sending any data, so the compressed size is known
Currently you're experiencing the first case, and it sounds like you really need the second. The easiest way to get the second case is to turn on gzip_static, as @kodeninja said in the comments.
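For what it's worth, gzip_static only serves a response from a precompressed .gz sibling that already exists on disk next to the asset, so option two implies producing those files ahead of time (and letting Nginx serve that path directly rather than proxying it). A minimal Node build-step sketch that writes the .gz siblings, with an assumed public/res directory, might be:

// precompress.js: hypothetical build step that writes .gz siblings for gzip_static
var fs = require('fs');
var path = require('path');
var zlib = require('zlib');

var assetDir = path.join(__dirname, 'public', 'res'); // assumed asset directory

fs.readdirSync(assetDir).forEach(function (name) {
  if (!/\.(js|css)$/.test(name)) return; // only compress text assets
  var file = path.join(assetDir, name);
  zlib.gzip(fs.readFileSync(file), function (err, gz) {
    if (err) throw err;
    fs.writeFileSync(file + '.gz', gz); // e.g. my_js.js -> my_js.js.gz
  });
});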
I'm trying to retrieve a large amount of data (1000+ rows from MongoDB, using the JugglingDB ORM). If I set the limit to be 1001 rows, my data is complete, but once I step up to 1002, the data is incomplete (doesn't matter if I hit it with cURL or the browser). I'm not entirely sure what the issue is, as the console.log shows all of my data, but it sounds like there might be an issue with my response headers or the response itself... here's the code that I'm trying to work with:
function getAllDevices(controller) {
  controller.Device.all({limit: parseInt(controller.req.query.limit)}, function(err, devices) {
    // This shows all of my devices, and the data is correct
    console.log(devices, JSON.stringify(devices).length);

    controller.req.headers['Accept-Encoding'] = '';
    controller.res.setHeader('transfer-encoding', '');
    controller.res.setHeader('Content-Length', JSON.stringify(devices).length);

    return controller.send({success: true, data: devices});
  });
}
Request headers:
Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding gzip, deflate
Accept-Language en-US,en;q=0.5
Connection keep-alive
Host localhost
Response headers:
Access-Control-Allow-Cred... true
Access-Control-Allow-Head... X-Requested-With,content-type
Access-Control-Allow-Meth... GET, POST, OPTIONS, PUT, PATCH, DELETE
Access-Control-Allow-Orig... *
Connection keep-alive
Content-Length 1610106
Content-Type application/json; charset=utf-8
Date Wed, 20 Aug 2014 17:20:51 GMT
Server nginx/1.6.0
X-Powered-By Express
It's good to note that I'm using nginx as a reverse proxy to my Node server, and this is what the config looks like:
location /nodeJs/ {
    proxy_pass http://127.0.0.1:3005/;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Connection close;
    proxy_pass_header Content-Type;
    proxy_pass_header Content-Disposition;
    proxy_pass_header Content-Length;
}
I believe I have fixed this issue, and once again it came down to an Nginx change. In my main nginx.conf file, I added:
gzip on;
gzip_types application/json;
Adding application/json to the gzip_types is ultimately the solution.
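To double-check the fix from the client side, a small script like the sketch below (the path and limit are placeholders) should show Content-Encoding: gzip on the proxied response and a JSON body that parses completely:

// check-gzip.js: hypothetical end-to-end check through the Nginx proxy
var http = require('http');
var zlib = require('zlib');

var options = {
  host: 'localhost',
  path: '/nodeJs/devices?limit=1002', // placeholder route and limit
  headers: { 'Accept-Encoding': 'gzip' }
};

http.get(options, function (res) {
  console.log('Content-Encoding:', res.headers['content-encoding']);

  var stream = res.headers['content-encoding'] === 'gzip'
    ? res.pipe(zlib.createGunzip())
    : res;

  var body = '';
  stream.on('data', function (chunk) { body += chunk; });
  stream.on('end', function () {
    // If the response is no longer truncated, this parses without error.
    console.log('devices returned:', JSON.parse(body).data.length);
  });
});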