My problem is getting the "real" client IP address at the nginx level while serving a static Vue.js site over SSL.
I want to block certain IP addresses. How can I get the real IP address if I can't use proxy_pass, given that I only serve a static location?
haproxy (tcp, port 443) ==> encrypted request ==> nginx (port 8085) ==> '/' location, where the real IP is needed for range blocking.
Please also see the questions/comments in the nginx vhost file below. Am I on the right track, or does this need to be done entirely differently?
haproxy setup:
frontend ssl_front_443
    bind xx.xx.xx.xx:443
    mode tcp
    option tcplog
    use_backend ssl_nginx_backend_8085

backend ssl_nginx_backend_8085
    mode tcp
    balance roundrobin
    option tcp-check
    server srv-2 127.0.0.1:8085 check fall 3 rise 2 inter 4s
nginx setup:
server {
    listen 8085 ssl;
    server_name mydomain;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ssl_certificate ./fullchain.pem;
    ssl_certificate_key ./privkey.pem;
    include include.d/ssl.conf;

    # I want to only allow certain IP addresses.
    # haproxy of course always returns 127.0.0.1, thus this is not working:
    include include.d/ip_range.conf;

    location / {
        # How do I get the proxy headers to be applied here?
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        # Do I need a proxy_pass, and if so, where should I pass to
        # in order to use it with static html/js?
        # Can I use an upstream to a static location?
        #proxy_pass http://;

        try_files $uri $uri/ /index.html;
    }
}
On the nginx side you can control which IP addresses or ranges are permitted by adding an allow range and a deny all to your server block, like so:
allow 192.168.1.0/24;
deny all;
Note: The nginx docs are always an excellent place to start; here are the docs (ngx_http_access_module) for restricting access by IP addresses and ranges.
First, I would challenge you to reconsider why you need a load balancer like haproxy for something as simple as a static html/css/js site. More infrastructure introduces more complications.
Second, an upstream in nginx is only needed if you want to point requests at, for example, a local wsgi or application server. In your case the content is static, so you shouldn't need an upstream at all – not unless you have some sort of application service you want to forward requests to (see the sketch below).
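For contrast, a minimal sketch of the case where an upstream would apply; the name and port below are placeholders, not part of your setup:

# Only relevant if an application server sat behind nginx:
upstream app_server {
    server 127.0.0.1:8000;   # e.g. a local wsgi or node process
}

# ...and in the server block you would then proxy to it:
# location / { proxy_pass http://app_server; }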
Finally, as for haproxy making every request appear to come from 127.0.0.1: first make sure the client IP is actually placed in a header (e.g. X-Real-IP); then you can try adding something like this to your haproxy config (source), if you indeed want to keep haproxy:
frontend all_https
    option forwardfor header X-Real-IP
    http-request set-header X-Real-IP %[src]
The haproxy documentation is also a good resource for preserving source IP addresses.
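One caveat worth adding: option forwardfor and http-request set-header only take effect in HTTP mode, and your frontend/backend run in TCP mode, where haproxy passes the TLS stream through without being able to read or modify HTTP headers at all. If you want to keep TCP passthrough (so nginx keeps terminating TLS), a common alternative is the PROXY protocol. Here is a sketch based on the posted config, assuming nginx is built with ngx_http_realip_module:

# haproxy: add send-proxy so the client address travels with the TCP connection
backend ssl_nginx_backend_8085
    mode tcp
    server srv-2 127.0.0.1:8085 check fall 3 rise 2 inter 4s send-proxy

# nginx: accept the PROXY protocol and restore the client address
server {
    listen 8085 ssl proxy_protocol;
    set_real_ip_from 127.0.0.1;       # trust only the local haproxy
    real_ip_header proxy_protocol;    # $remote_addr now holds the real client IP
    include include.d/ip_range.conf;  # allow/deny rules now see the real address
}

With this in place, the allow/deny rules from earlier work against the real client address, even though haproxy never touches the HTTP layer.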
Related
I've created an environment in AWS which includes an EC2 instance with a Node.js web server and Nginx installed, behind an application load balancer with a self-signed certificate.
My ALB receives requests over HTTPS (443) and forwards them over HTTP (80) to Nginx. Nginx should take the requests from the ALB (on port 80) and forward them to port 9090 (which is used by the Node.js web server).
However, I'm having issues getting Nginx to pass the requests through to the application.
When entering the URL with the ALB DNS name over HTTP, I get a static default page (instead of my web server's application page).
My default.conf file is attached above.
All my security groups are open while testing (443, 80, 9090), so the ports are not the problem; it must be the Nginx configuration.
My target group is also shown above.
What could be the problem / what further configuration should I do?
Thank you.
When you have a load balancer, why are you using Nginx as well? It sounds like you are running two proxy layers for one Node.js application. Also, SSL operations consume extra CPU resources; the most CPU-intensive operation is the SSL handshake.
terminating-ssl-http
The correct way to handle this, which will also solve your issue above:
1. Create a target group and bind it to instance port 9090.
2. Generate a certificate from AWS (it's free).
3. Create an HTTPS listener and attach the AWS certificate.
4. Add the target group created in step 1 to the HTTPS listener of the load balancer.
With this approach you terminate SSL/TLS at the load balancer, and the instance receives a plain HTTP connection, which saves CPU time on SSL encryption/decryption.
SSL termination refers to proxy servers or load balancers that accept SSL/TLS connections but do not use SSL/TLS when connecting to the backend servers. For example, a load balancer exposed to the internet might accept HTTPS on port 443 but connect to the backend servers via plain HTTP.
For testing purposes, this should work:
server {
    listen 80;
    server_name example.com;
    client_max_body_size 32M;
    underscores_in_headers on;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_pass_header device_id;
        proxy_read_timeout 120;
        proxy_connect_timeout 90;
        proxy_next_upstream error timeout invalid_header http_500;
        proxy_pass http://localhost:9090;
    }
}
It worked! The problem was in the /etc/nginx/nginx.conf file. After a lot of reading and trying, I found that the default server block inside that file was serving static HTML (instead of forwarding to my Node.js web server).
I changed the root /path_to_ws line, restarted Nginx, and it worked.
Thank you for the help!
I have a reverse proxy configuration in NGINX.
Here are my configurations; I have edited out all the lines that IMHO are not relevant here.
Main Server:
server {
    listen 80;

    location / {
        include cors;
        proxy_set_header Host $http_host;
        proxy_hide_header Cache-Control;
        add_header Cache-Control $new_cache_control_header_val;
        proxy_pass http://127.0.0.1:8090;
    }
}
And my second NGINX configuration:
server {
    listen 127.0.0.1:8090;
}
The problem is, when I open http://myIP:8090/ in the browser, I reach my server, even though I have explicitly set the server to listen on this port only at 127.0.0.1.
Where am I wrong?
Per the documentation:

... nginx first tests the IP address and port of the request against the listen directives of the server blocks. It then tests the "Host" header field of the request against the server_name entries of the server blocks that matched the IP address and port. If the server name is not found, the request will be processed by the default server. For example, a request for www.example.com received on the 192.168.1.1:80 port will be handled by the default server of the 192.168.1.1:80 port, i.e., by the first server, since there is no www.example.com defined for this port.
So I would go a step further and theorize that nginx sees there is only one server block listening on port 8090 and makes it the default server for that port.
You could test this by adding a second server block with that port and your public IP, and see if that returns something different, as sketched below.
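A minimal sketch of that test; myIP is a placeholder for the server's public address:

# Hypothetical second block: claims myIP:8090 explicitly, so requests to the
# public address no longer fall through to the loopback-only server block.
server {
    listen myIP:8090;
    return 444;   # nginx-specific: close the connection without a response
}

If requests to http://myIP:8090/ now get refused while http://127.0.0.1:8090/ still works, the default-server theory holds.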
I am using sticky-session in Node.js, which is behind nginx.
sticky-session does its load balancing by checking the remoteAddress of the connection.
The problem is that it always takes the IP of the nginx server:
server = net.createServer({ pauseOnConnect: true }, function(c) {
    // Get int31 hash of ip
    var worker,
        ipHash = hash((c.remoteAddress || '').split(/\./g), seed);

    // Pass connection to worker
    worker = workers[ipHash % workers.length];
    worker.send('sticky-session:connection', c);
});
Can we get the client IP using the net library?
Nginx Configuration:
server {
    listen 80 default_server;
    server_name localhost;
    root /usr/share/nginx/html;

    #auth_basic "Restricted";
    #auth_basic_user_file /etc/nginx/.htpasswd;
    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        set_real_ip_from 0.0.0.0/0;
        real_ip_header X-Forwarded-For;
        real_ip_recursive on;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://socket_nodes;
        proxy_read_timeout 3000;
    }
}
As mef points out, sticky-session doesn't, at present, work behind a reverse proxy, where remoteAddress is always the same.
The pull request in the aforementioned issue, as well as an earlier pull request, might indeed solve the problem, though I haven't tested myself.
However, those fixes rely on partially parsing packets, doing low-level routing while peeking into headers at a higher level... As the comments on the pull requests indicate, they're unstable, depend on undocumented behavior, suffer from compatibility issues, might degrade performance, etc.
If you don't want to rely on experimental implementations like that, one alternative would be leaving load balancing entirely up to nginx, which can see the client's real IP and so keep sessions sticky. All you need is nginx's built-in ip_hash load balancing.
Your nginx configuration might then look something like this:
upstream socket_nodes {
    ip_hash;
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
    server 127.0.0.1:8004;
    server 127.0.0.1:8005;
    server 127.0.0.1:8006;
    server 127.0.0.1:8007;
}

server {
    listen 80 default_server;
    server_name localhost;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        # Note: Trusting all addresses like this means anyone
        # can pretend to have any address they want.
        # Only do this if you're absolutely certain only trusted
        # sources can reach nginx with requests to begin with.
        set_real_ip_from 0.0.0.0/0;
        real_ip_header X-Forwarded-For;
        real_ip_recursive on;

        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://socket_nodes;
        proxy_read_timeout 3000;
    }
}
Now, to get this to work, your server code would also need to be modified somewhat:
if (cluster.isMaster) {
    var STARTING_PORT = 8000;
    var NUMBER_OF_WORKERS = 8;

    for (var i = 0; i < NUMBER_OF_WORKERS; i++) {
        // Passing each worker its port number as an environment variable.
        cluster.fork({ port: STARTING_PORT + i });
    }

    cluster.on('exit', function(worker, code, signal) {
        // Create a new worker, log, or do whatever else you want.
    });
}
else {
    server = http.createServer(app);

    // Socket.io initialization would go here.

    // process.env.port is the port passed to this worker by the master.
    server.listen(process.env.port, function(err) {
        if (err) { /* Error handling. */ }
        console.log("Server started on port", process.env.port);
    });
}
The difference is that instead of using cluster to have all worker processes share a single port (load balanced by cluster itself), each worker gets its own port, and nginx can distribute load between the different ports to get to the different workers.
Since nginx chooses which port to go to based on the IP it gets from the client (or the X-Forwarded-For header in your case), all requests in the same session will always end up at the same process.
One major disadvantage of this method, of course, is that the number of workers becomes far less dynamic. If the ports are "hard-coded" in the nginx configuration, the Node server has to be sure to always listen to exactly those ports, no less and no more. In the absence of a good system for syncing the nginx config and the Node server, this introduces the possibility of error, and makes it somewhat more difficult to dynamically scale to e.g. the number of cores in an environment.
Then again, I imagine one could overcome this issue by either programmatically generating/updating the nginx configuration, so it always reflects the desired number of processes, or possibly by configuring a very high number of ports for nginx and then making Node workers each listen to multiple ports as needed (so you could still have exactly as many workers as there are cores). I have not, however, personally verified or tried implementing either of these methods so far.
Note regarding an nginx server behind a proxy
In the nginx configuration you provided, you seem to have made use of ngx_http_realip_module. While you made no explicit mention of this in the question, please note that this may in fact be necessary, in cases where nginx itself sits behind some kind of proxy, e.g. ELB.
The real_ip_header directive is then needed to ensure that it's the real client IP (in e.g. X-Forwarded-For), and not the other proxy's, that's hashed to choose which port to go to.
In such a case, nginx is actually serving a fairly similar purpose to what the pull requests for sticky-session attempted to accomplish: using headers to make the load balancing decisions, and specifically to make sure the same real client IP is always directed to the same process.
The key difference, of course, is that nginx, as a dedicated web server, load balancer and reverse proxy, is designed to do exactly these kinds of operations. Parsing and manipulating the different layers of the protocol stack is its bread and butter. Even more importantly, while it's not clear how many people have actually used these pull requests, nginx is stable, well-maintained and used virtually everywhere.
It seems that the module you're using does not yet support sitting behind a reverse proxy.
Have a look at this GitHub issue; some pull requests seem to fix your problem, so you may have a solution by using a fork of the module (you can point to it on GitHub from your package.json file, as shown below).
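For reference, npm can install a dependency straight from a GitHub fork; the user, repository, and branch below are placeholders:

{
  "dependencies": {
    "sticky-session": "someuser/sticky-session#fix-remote-address"
  }
}

Running npm install with an entry like this pulls the module from that fork instead of the npm registry.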
I'm using this nginx reverse proxy tutorial to set up a node site with nginx. This is what my /etc/nginx/conf.d/mydomain.com.conf looks like:
server {
    listen 80;
    server_name mydomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
The problem is that when I visit my domain, it redirects to another domain that I have set up on the same server. I set up that other domain (a static page) using an nginx virtual hosts tutorial that uses server blocks.
One difference I noticed is that the nginx reverse proxy tutorial doesn't do any of the symlinking between sites-available and sites-enabled which the virtual hosts tutorial does with the server block files. The virtual hosts tutorial instructs the reader to create server block files and then enable them like this:
sudo ln -s /etc/nginx/sites-available/demo /etc/nginx/sites-enabled/demo
Do I have to do anything to enable the config file when setting up a reverse proxy with nginx? If not, do you know why it's redirecting to the other domain?
Let me start with a small explanation of how nginx matches hosts, quoting from How nginx processes a request:

In this configuration nginx tests only the request's header field "Host" to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, then nginx will route the request to the default server for this port.
According to your description, I would say there are two possibilities:
1. the reverse proxy virtual host has the wrong name, so it's not matched and the request is directed to the first virtual host listening on port 80, or
2. the reverse proxy is correct but the configuration was not loaded.
To fix this, double-check that the line server_name mydomain.com; is correct and indeed matches the URL you are requesting, then make sure you reloaded the nginx settings with sudo service nginx reload.
The problem was that /etc/nginx/conf.d/mydomain.com.conf hadn't been copied into /etc/nginx/sites-enabled.
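For reference, a sketch of the usual way to enable such a config on setups that load /etc/nginx/sites-enabled (file names taken from the question):

sudo ln -s /etc/nginx/conf.d/mydomain.com.conf /etc/nginx/sites-enabled/mydomain.com.conf
sudo nginx -t && sudo service nginx reload   # validate the config, then reload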
In most of the tutorials I've come across, you set up a Node.js web app by setting the server to listen on a port, and you access it in the browser by specifying that port. However, how would I deploy a Node.js app to be fully accessible by, say, a domain like foobar.com?
You have to point your domain's apex (naked domain), and usually www, at your web server's IP or a CNAME.
Since you cannot bind an apex domain to a CNAME, you have to specify the server IP, IPs, or load balancers' IPs, for example as below.
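For illustration, the corresponding DNS records might look like this (the address is a documentation placeholder):

; the apex must be an A record; www can be a CNAME pointing at the apex
foobar.com.      3600  IN  A      203.0.113.10
www.foobar.com.  3600  IN  CNAME  foobar.com.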
Your question is a little vague. If your DNS is already configured, you could bind to port 80 and be done with it. However, if you already have apache or some other httpd running on port 80 to serve other hosts, that obviously won't work.
If you prefer to run the node process as non-root (and you should), it's much more likely that you're looking for a reverse proxy. My main httpd is nginx; the relevant directive is proxy_pass. If you're using apache, you probably want mod_proxy.
I just created an A record at my registrar pointing to my web server's IP address. Then you can start your node app on port 80.
An alternative would be to redirect http://www.foobar.com to http://www.foobar.com:82, as sketched below.
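A minimal nginx sketch of such a redirect, assuming the app actually listens on port 82:

server {
    listen 80;
    server_name www.foobar.com;
    # Send every request to the same host on port 82, keeping the path.
    return 301 http://www.foobar.com:82$request_uri;
}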
Regards.
Use pm2 to run your node apps on the server.
Then use Nginx to proxy to your node server. I know this sounds weird, but that's the way it's done. Eventually, if you need to set up a load balancer, you do that all in Nginx too:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://APP_PRIVATE_IP_ADDRESS:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
This is the best tutorial I've found on setting up node.js for production.
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-14-04
For performance, you can also set up nginx to serve your public files directly:
location /public {
    allow all;
    access_log off;
    root /opt/www/site;
}