I am running a Varnish server in front of my backend server and behind an SSL terminator. The SSL terminator sets an X-Forwarded-For header with the real IP address the request actually came from, but for some reason X-Forwarded-For is changed to the IP address of the Varnish cache server, like below:
- ReqHeader X-Forwarded-For : <One-ip> <----------
- ReqHeader Range: bytes=156478-
- ReqHeader User-Agent: Some Stupid Phone
- ReqHeader Accept-Encoding: identity
- ReqHeader Host: host.example.com
- ReqHeader Connection: Keep-Alive
- ReqHeader X-Forwarded-For: <another-ip> <-----------
I want to capture the IP address of the actual client and send it to the log. What should I do?
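A minimal sketch of one way to do this (assuming Varnish 4+ with vmod_std; the X-Real-IP header name is my own choice): Varnish's builtin vcl_recv appends its own view of the client address to X-Forwarded-For, so grab the leftmost address from the header the SSL terminator set before that happens, and log it with std.log:
vcl 4.0;
import std;

sub vcl_recv {
    if (req.http.X-Forwarded-For) {
        # Keep only the first (leftmost) address, i.e. the one the
        # SSL terminator recorded for the real client.
        set req.http.X-Real-IP = regsub(req.http.X-Forwarded-For, "[, ].*$", "");
        # This shows up in varnishlog as a VCL_Log record.
        std.log("real-client-ip: " + req.http.X-Real-IP);
    }
}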
I have a webchat (example.com/chat) listening on tcp://0.0.0.0:50500 on my server.
I've also configured an nginx reverse proxy to pass requests from example.com/chat to 0.0.0.0:50500.
My site's nginx conf goes like this:
map $http_upgrade $connection_upgrade {
    default Upgrade;
    ''      close;
}

server {
    server_name example.com www.example.com;
    listen 5.4.....0:443 ssl http2;
    listen [2a0......:a4]:443 ssl http2;
    ssl_certificate "/var/www......._51.crt";
    ssl_certificate_key "/var/www/.......51.key";
    add_header Strict-Transport-Security "max-age=31536000" always;
    charset utf-8;

    gzip on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/css text/xml application/javascript text/plain application/json image/svg+xml image/x-icon;
    gzip_comp_level 1;

    set $root_path /var/www/user/data/www/example.com;
    root $root_path;
    disable_symlinks if_not_owner from=$root_path;

    location / {
        proxy_pass http://127.0.0.1:81;
        proxy_redirect http://127.0.0.1:81/ /;
        include /etc/nginx/proxy_params;
    }

    location ~ ^/chat {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://0.0.0.0:50500;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 300s;
        proxy_buffering off;
    }

    location ~* ^.+\.(jpg|jpeg|gif|png|svg|js|css|mp3|ogg|mpeg|avi|zip|gz|bz2|rar|swf|ico|7z|doc|docx|map|otf|pdf|tff|tif|txt|wav|webp|woff|woff2|xls|xlsx|xml)$ {
        try_files $uri $uri/ @fallback;
        expires 30d;
    }

    location @fallback {
        proxy_pass http://127.0.0.1:81;
        proxy_redirect http://127.0.0.1:81/ /;
        include /etc/nginx/proxy_params;
    }

    include "/etc/nginx/fastpanel2-sites/fastuser/example.com.includes";
    include /etc/nginx/fastpanel2-includes/*.conf;

    error_log /var/www/user/data/logs/example.com-frontend.error.log;
    access_log /var/www/user/data/logs/example.com-frontend.access.log;
}

server {
    server_name example.com www.example.com;
    listen 5.4.....0:80;
    listen [2a.....:a4]:80;
    return 301 https://$host$request_uri;
    error_log /var/www/user/data/logs/example.com-frontend.error.log;
    access_log /var/www/user/data/logs/example.com-frontend.access.log;
}
The webchat is configured to use these settings:
SOCKET_CHAT_URL="wss://example.com"
SOCKET_CHAT_PORT=50500
Since I already send an Upgrade header, the 426 Upgrade Required error looks strange to me.
I know there are a lot of similar threads about this issue; however, they all suggest adding an Upgrade header that I already have.
I've tried:
- Using both SOCKET_CHAT_URL="ws://example.com" and "wss://example.com"
- Changing the proxy_pass line to https (proxy_pass https://0.0.0.0:50500;), in which case the /chat page ends in an nginx 504 timeout
- Changing the WebSocket URL to the server IP: wss://123.312.123.321
- Using the wss://example.com/chat format, in which case the page closes the WebSocket connection instantly
Also, my headers:
General
Request URL: https://example.com/chat
Request Method: GET
Status Code: 426
Remote Address: 5**.***.***.*50:443
Referrer Policy: strict-origin-when-cross-origin
Response Headers
date: Mon, 06 Sep 2021 21:11:50 GMT
sec-websocket-version: 13
server: nginx/1.18.0
strict-transport-security: max-age=31536000
upgrade: websocket
x-powered-by: Ratchet/0.4.3
Request Headers
:authority: example.com
:method: GET
:path: /chat
:scheme: https
accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
accept-encoding: gzip, deflate, br
accept-language: uk-UA,uk;q=0.9
cache-control: max-age=0
sec-ch-ua: "Chromium";v="94", "Google Chrome";v="94", ";Not A Brand";v="99"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Windows"
sec-fetch-dest: document
sec-fetch-mode: navigate
sec-fetch-site: none
sec-fetch-user: ?1
upgrade-insecure-requests: 1
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.31 Safari/537.36
Okay, it turned out the server runs on HTTP/2, and WebSockets are not supported over it.
Also, with nginx it is not possible to switch only one site to HTTP/1.1: the http2 parameter on listen applies to the whole listening socket, so you have to switch the entire server.
We've switched to Socket.IO instead.
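For reference, the alternative (not what we did) would be to drop the http2 parameter from the listen directives, which forces HTTP/1.1 for every site sharing that socket. A minimal sketch against the config above:
server {
    server_name example.com www.example.com;
    # Without "http2", nginx speaks HTTP/1.1 on this socket, so the
    # WebSocket Upgrade handshake can go through.
    listen 5.4.....0:443 ssl;
    listen [2a0......:a4]:443 ssl;
    ...
}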
I am developing a React app built with TypeScript, deployed through the Azure App Service multi-container (preview) feature in Web App for Containers, but I'm running into some issues with NGINX. My nginx.conf works as expected locally without errors, but when I move to the App Service environment I get the errors outlined below. The main ones are error logs saying this when I try to run the App Service:
"connect() failed (111: Connection refused)"
"no live upstreams while connecting to upstream"
My WEBSITES_PORT under App Service > Settings > Configuration is set to 80. I have also tried to set it to 80:80. In both cases I get the same error logs below. Setting WEBSITES_PORT to 3001 and removing nginx from the list of services in the container settings file results in the App Service deploying successfully.
Let me know if there are other files I can provide in addition to the ones below.
Here are my container settings, found under App Service > Settings > Container Settings, pointing to my private Azure Container Registry that stores all of my application images. The structure is very similar to the docker-compose file I use for local deployment:
version: '3.3'
services:
  mysite:
    image: "reactapp.azurecr.io/my_site_img"
    ports:
      - "3001:3001"
  nginx:
    image: "reactapp.azurecr.io/nginx"
    ports:
      - "80:80"
An nginx.conf that controls the routing in my nginx image.
upstream my_site_proxy {
    server localhost:3001;
}

server {
    listen 0.0.0.0:80;
    server_name localhost;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://my_site_proxy/;
        proxy_redirect off;
    }
}
The error log generated when I try to run my Azure App Service with the above configuration:
2020-07-13T01:22:52.929149550Z 2020/07/13 01:22:52 [error] 27#27: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.16.7.1, server: localhost, request: "GET /robots1234.txt HTTP/1.1", upstream: "http://127.0.0.1:3001/robots1234.txt", host: "127.0.0.1:4548"
2020-07-13T01:22:52.929653182Z 2020/07/13 01:22:52 [warn] 27#27: *1 upstream server temporarily disabled while connecting to upstream, client: 172.16.7.1, server: localhost, request: "GET /robots1234.txt HTTP/1.1", upstream: "http://127.0.0.1:3001/robots1234.txt", host: "127.0.0.1:4548"
2020-07-13T01:22:52.930048306Z 2020/07/13 01:22:52 [error] 27#27: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.16.7.1, server: localhost, request: "GET /robots1234.txt HTTP/1.1", upstream: "http://127.0.0.1:3001/robots1234.txt", host: "127.0.0.1:4548"
2020-07-13T01:22:52.930060507Z 2020/07/13 01:22:52 [warn] 27#27: *1 upstream server temporarily disabled while connecting to upstream, client: 172.16.7.1, server: localhost, request: "GET /robots1234.txt HTTP/1.1", upstream: "http://127.0.0.1:3001/robots1234.txt", host: "127.0.0.1:4548"
2020-07-13T01:22:52.936363702Z 172.16.7.1 - - [13/Jul/2020:01:22:52 +0000] "GET /robots1234.txt HTTP/1.1" 502 157 "-" "-" "-"
2020-07-13T01:22:53.004840493Z 2020/07/13 01:22:53 [error] 27#27: *1 no live upstreams while connecting to upstream, client: 172.16.7.1, server: localhost, request: "GET /robots933456.txt HTTP/1.1", upstream: "http://my_site_proxy /robots933456.txt", host: "127.0.0.1:4548"
2020-07-13T01:22:53.005790052Z 172.16.7.1 - - [13/Jul/2020:01:22:53 +0000] "GET /robots933456.txt HTTP/1.1" 502 157 "-" "-" "-"
2020-07-13T01:22:53.024544427Z 2020/07/13 01:22:53 [error] 27#27: *4 no live upstreams while connecting to upstream, client: 172.16.7.1, server: localhost, request: "GET / HTTP/1.1", upstream: "http://my_site_proxy /", host: "mysite.azurewebsites.net", referrer: "https://portal.azure.com/"
2020-07-13T01:22:53.025501687Z 172.16.7.1 - - [13/Jul/2020:01:22:53 +0000] "GET / HTTP/1.1" 502 559 "https://portal.azure.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36" "198.8.81.196:62138"
2020-07-13T01:22:53.152345935Z 2020/07/13 01:22:53 [error] 27#27: *5 no live upstreams while connecting to upstream, client: 172.16.7.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", upstream: "http://my_site_proxy /favicon.ico", host: "mysite.azurewebsites.net", referrer: "https://mysite.azurewebsites.net/"
2020-07-13T01:22:53.153395901Z 172.16.7.1 - - [13/Jul/2020:01:22:53 +0000] "GET /favicon.ico HTTP/1.1" 502 559 "https://mysite.azurewebsites.net/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36" "198.8.81.196:62138"
You need to change your nginx upstream configuration to this:
upstream my_site_proxy {
    server mysite:3001;
}
You should connect to mysite, which is the name of your app container. Docker will resolve this DNS name to the IP address of the app container. You would only connect to localhost if you were running nginx and your app inside the same container (which is not best practice).
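As a quick sanity check in a comparable local docker-compose setup, you can confirm the service name resolves from inside the nginx container (this assumes the Debian-based official nginx image, which ships getent):
docker-compose exec nginx getent hosts mysite
# prints the app container's internal address, e.g. "172.18.0.2  mysite"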
When I run my Express app locally there are no issues whatsoever, but since I've deployed it to an AWS EC2 instance behind an Nginx web server, there are a few problems whenever I visit the designated domain.
Whenever I go to the domain, it displays a 504 Gateway Timeout for the main.css file that is served from the public directory. Not sure if it's relevant, but I am using Jade templates with Express, and Sass files are compiled at runtime into the public/css directory.
Here's the request headers:
GET /css/main.css HTTP/1.1
Host: dispatch.nucleus.technology
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Accept: text/css,*/*;q=0.1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36
DNT: 1
Referer: http://dispatch.nucleus.technology/
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
There also seems to be a websocket error:
WebSocket connection to 'ws://dispatch.nucleus.technology/socket.io/?EIO=3&transport=websocket&sid=W0qJNlmaVdFbKoeBAAAA' failed: Error during WebSocket handshake: Unexpected response code: 400
I've since fixed the CSS problem; it seems that running the app as root solves it, but I'm not sure why. If anyone has some insight on that, it would be very helpful.
As for the WebSocket error, I'm still receiving it despite running the app as root.
Adding this to the nginx config should get rid of the WebSocket error (nginx does not pass the hop-by-hop Upgrade and Connection headers upstream by default, so the socket.io handshake fails with an unexpected response unless they are set explicitly):
location / {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    ...
}
I've encountered no problems while developing with ng-flow, Node.js and Express, but I'm not able to configure Nginx properly as a reverse proxy so that it runs smoothly with ng-flow and Express.
Here is an example request that ends up stuck in pending status:
Remote Address:192.168.0.81:80
Request URL:http://itl.lan/api/v1/flow-upload/
Request Method:POST
Status Code:200 OK
Response Headers
Cache-Control:private, no-cache, no-store, must-revalidate
Connection:close
Date:Mon, 03 Aug 2015 20:24:56 GMT
Expires:-1
Pragma:no-cache
X-Powered-By:Express
Request Headers
Accept:*/*
Accept-Encoding:gzip, deflate
Accept-Language:en-US,en;q=0.8,it;q=0.6
App-Session-Hash:df9b1ac0-3a10-11e5-af61-af8fb284004c
Connection:keep-alive
Content-Length:1049741
Content-Type:multipart/form-data; boundary=----WebKitFormBoundaryY2FVUSE0oIcye77i
Host:itl.lan
Origin:http://itl.lan
Referer:http://itl.lan/webapp/
User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.125 Safari/537.36
Request Payload
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="flowChunkNumber"
9
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="flowChunkSize"
1048576
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="flowCurrentChunkSize"
1048576
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="flowTotalSize"
12515925
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="flowIdentifier"
12515925-showcase_001_20150802T2239tgz
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="flowFilename"
showcase_0.0.1_20150802T2239.tgz
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="flowRelativePath"
showcase_0.0.1_20150802T2239.tgz
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="flowTotalChunks"
11
------WebKitFormBoundaryY2FVUSE0oIcye77i
Content-Disposition: form-data; name="file"; filename="showcase_0.0.1_20150802T2239.tgz"
Content-Type: application/x-compressed-tar
Here's my directive in NGINX server.conf:
upstream itl_node_app {
    server 127.0.0.1:5000;
    keepalive 8;
}

server {
    ...
    location /api/v1/flow-upload/ {
        proxy_set_header Access-Control-Allow-Origin *;
        proxy_set_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
        proxy_set_header 'Access-Control-Allow-Headers' 'X-Requested-With,Accept,Content-Type, Origin$
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://itl_node_app;
        proxy_redirect off;
    }
}
Checking nginx error.log you'll find:
2015/08/03 22:34:28 [error] 8004#0: *792 upstream sent no valid HTTP/1.0 header while reading response header from upstream, client: 192.168.0.4, server: itl.lan, request: "POST /api/v1/flow-upload/ HTTP/1.1", upstream: "http://127.0.0.1:5000/api/v1/flow-upload/", host: "itl.lan", referrer: "http://itl.lan/webapp/"
Any help appreciated.
Cheers,
Luca
The NGINX log says the HTTP header is invalid. Check the controller defined in Express.js for this route:
/api/v1/flow-upload/
and make sure the status sent in the response is a number.
I hope this helps!
I wrote the Express.js "/api/v1/flow-upload/" route using the flow.js Node sample app as a model, which you can find here. The issue I described pops up on line 23: the status parameter can be a string literal. I learned the hard way that Nginx handles this far more strictly than Express does, raising the error.
A simple and safer way to go is to just send, for instance:
res.status(200).send();
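For illustration, a minimal sketch of the corrected route (loosely modeled on the flow.js Node sample; the flow-node.js helper and its callback shape are taken from that sample, not from my production code):
var express = require('express');
var flow = require('./flow-node.js')('tmp'); // upload helper from the flow.js sample
var app = express();

app.post('/api/v1/flow-upload/', function (req, res) {
    flow.post(req, function (status) {
        // 'status' here is a string such as 'done' or 'partly_done';
        // writing it into the response status line is what nginx rejects
        // as "no valid HTTP/1.0 header". Send a numeric code instead.
        res.status(200).send();
    });
});

app.listen(5000);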
We have a Node/express web app that is serving static assets in addition to normal content, via express.static(). There is an nginx server in front of it that is currently configured to gzip these static asset requests, if the user agent is up for it.
However, though nginx is doing the gzip as expected, it drops the origin's Content-Length header and sets Transfer-Encoding: chunked instead. This breaks caching on our CDN.
Below are the responses for a typical static asset request (a JS file in this case), from the node backend, and from nginx:
Request:
curl -s -D - 'http://my_node_app/res/my_js.js' -H 'Accept-Encoding: gzip, deflate, sdch' -H 'Connection: keep-alive' --compressed -o /dev/null
Response Headers from Node:
HTTP/1.1 200 OK
Accept-Ranges: bytes
Date: Wed, 07 Jan 2015 02:24:55 GMT
Cache-Control: public, max-age=0
Last-Modified: Wed, 07 Jan 2015 01:12:05 GMT
Content-Type: application/javascript
Content-Length: 37386 // <--- The expected header
Connection: keep-alive
Response Headers from nginx:
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 07 Jan 2015 02:24:55 GMT
Content-Type: application/javascript
Transfer-Encoding: chunked // <--- The problematic header
Connection: keep-alive
Vary: Accept-Encoding
Cache-Control: public, max-age=0
Last-Modified: Wed, 07 Jan 2015 01:12:05 GMT
Content-Encoding: gzip
Our current nginx configuration for the static assets location looks like below:
nginx config:
# cache file paths that start with /res/
location /res/ {
    limit_except GET HEAD { }

    # http://nginx.com/resources/admin-guide/caching/
    # http://nginx.org/en/docs/http/ngx_http_proxy_module.html
    proxy_buffers 8 128k;
    #proxy_buffer_size 256k;
    #proxy_busy_buffers_size 256k;

    # The cache depends on proxy buffers, and will not work if proxy_buffering is set to off.
    proxy_buffering on;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_connect_timeout 2s;
    proxy_read_timeout 5s;
    proxy_pass http://node_backend;
    chunked_transfer_encoding off;

    proxy_cache my_app;
    proxy_cache_valid 15m;
    proxy_cache_key $uri$is_args$args;
}
As can be seen from the above config, even though we've explicitly set chunked_transfer_encoding off for these paths per the nginx docs, turned proxy_buffering on, and allocated a large enough proxy_buffers size, the response is still being chunked.
What are we missing here?
--Edit 1: version info--
$ nginx -v
nginx version: nginx/1.6.1
$ node -v
v0.10.30
--Edit 2: nginx gzip config--
# http://nginx.org/en/docs/http/ngx_http_gzip_module.html
gzip on;
gzip_buffers 32 4k;
gzip_comp_level 1;
gzip_min_length 1000;
#gzip_http_version 1.0;
gzip_types application/javascript text/css;
gzip_proxied any;
gzip_vary on;
You are correct; let me elaborate.
The headers are the first thing that has to be sent. However, since you are using streaming compression, the final size is unknown at that point: nginx only knows the size of the uncompressed asset, and sending a Content-Length that turns out to be too large would also be incorrect.
Thus, there are two options:
1. Transfer-Encoding: chunked
2. Compress the asset completely before sending any data, so the compressed size is known
Currently you're experiencing the first case, and it sounds like you really need the second. The easiest way to get the second case is to turn on gzip_static, as @kodeninja said in the comments.
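For illustration, a minimal sketch of the gzip_static approach (this assumes nginx was built with ngx_http_gzip_static_module, that the static assets are also present on the nginx host's filesystem, and that you pre-generate the .gz files at build time, e.g. with gzip -k; the root path is hypothetical):
location /res/ {
    # Serve the pre-built /res/foo.js.gz when the client accepts gzip.
    # The file is complete on disk, so nginx can send an exact
    # Content-Length instead of Transfer-Encoding: chunked.
    gzip_static on;
    root /var/www/my_app/public;  # hypothetical path to the built assets
    expires 15m;
}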