I am having an issue with this configuration. My AWS ELB accepts TCP connections on port 80 and forwards them, using the proxy protocol, to an nginx instance listening on port 8080. This nginx node is supposed to use the ip_hash module to stick each user to a specific upstream node.
This works, except that only 2 of the 4 nodes receive traffic instead of the load being balanced among all of them. Here is my nginx config:
upstream socket_nodes {
ip_hash;
server a.server.com:2000;
server a.server.com:2001;
server a.server.com:2002;
server a.server.com:2003;
}
# Accept connections via the load balancer
server {
listen 8080 proxy_protocol;
set_real_ip_from 0.0.0.0/32;
real_ip_header proxy_protocol;
charset utf-8;
location / {
proxy_pass http://socket_nodes;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Unlike "round robin" load balancing, ip_hash means that for any given client IP address, NGINX will always forward requests to the same application instance.
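One thing worth checking (an assumption, since the question doesn't show the ELB's addresses): ip_hash hashes only the first three octets of the client's IPv4 address, and `set_real_ip_from 0.0.0.0/32` trusts only the single address 0.0.0.0, so the real client IP is likely never recovered and every request hashes from the ELB's own internal addresses, which would explain why only a couple of upstreams get traffic. A sketch with a broader trust range (the CIDR below is a placeholder, substitute your VPC/ELB range):

```nginx
# Trust the proxy protocol header from the ELB's subnet
# (10.0.0.0/8 is a placeholder -- use your actual VPC range)
set_real_ip_from 10.0.0.0/8;
real_ip_header  proxy_protocol;
```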
I'm working on setting up a load balancer for my website. I want to do it manually so that I have full control over how requests are rerouted. I'm using an AWS ELB to load balance between 2 EC2 instances, and that works fine. Each EC2 instance uses nginx as a reverse proxy for Node.js.
Currently I only have 1 node app running on each server, but ideally I would like to have 4 node apps on each server (1 per core).
I was thinking that a really easy way to manage this would be to let nginx pick a random port between 8081 and 8084 and redirect to one of the apps. In theory this would balance the load as evenly as possible.
Currently this is my nginx reverse proxy set up:
server {
client_max_body_size 200M;
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
location / {
client_max_body_size 200M;
proxy_pass http://localhost:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Real-User-Agent $http_user_agent;
proxy_set_header Connection 'upgrade';
proxy_set_header X-serv-ip 'my.server.ip';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
Basically, my question boils down to this: is there some way to make a variable like $rand_port so that every time a request is made, $rand_port is set to 8081, 8082, 8083, or 8084? Then in my proxy_pass I could do something like:
proxy_pass http://localhost:{$rand_port} #not sure if the syntax is right.
Is there anything that lets me do something like this, or otherwise what other solutions are there?
nginx load balancing lets you choose the algorithm. If you just need each server to be equally loaded, the default round-robin behaviour of an upstream block is enough:
upstream myproject {
server 127.0.0.1:8081;
server 127.0.0.1:8082;
server 127.0.0.1:8083;
server 127.0.0.1:8084;
}
server {
client_max_body_size 200M;
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
location / {
client_max_body_size 200M;
proxy_pass http://myproject;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Real-User-Agent $http_user_agent;
proxy_set_header Connection 'upgrade';
proxy_set_header X-serv-ip 'my.server.ip';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
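If you specifically want a random choice rather than round robin (round robin already spreads requests evenly, so this is rarely necessary), newer nginx versions also provide alternative algorithms, e.g. `least_conn`, or the `random` directive available since nginx 1.15.1:

```nginx
upstream myproject {
    # pick an upstream at random per request
    # (requires nginx 1.15.1 or later; least_conn is an older alternative)
    random;
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
    server 127.0.0.1:8084;
}
```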
I am running GlassFish 3.1.2 on a Linux 6 server to deploy Oracle APEX.
I want to hide port 8383 from the URL (the current URL is https://sd1.domain.com:8383/apex).
Ports 80 and 443 are already assigned to another service.
So, how can I hide port 8383 from the URL?
A TCP connection is between two ip:port pairs. When the server's port is a common one like 80/443, most browsers don't display it.
You can run a reverse proxy on port 80 that routes incoming HTTP traffic.
It can inspect the Host header (i.e. the subdomain) and forward traffic to one of the two web servers, each listening on its own dedicated port.
With nginx, the config file could look like this:
server {
server_name sd1.domain.com;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://localhost:8383;
}
}
server {
server_name www.domain.com;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://localhost:8080;
}
}
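One caveat (an assumption based on the https:// URL in the question): if GlassFish serves port 8383 over TLS, the `proxy_pass http://localhost:8383` above should use the https scheme instead, and you would typically terminate TLS at nginx so browsers can use plain https://sd1.domain.com/. A sketch, with certificate paths as placeholders:

```nginx
server {
    listen 443 ssl;
    server_name sd1.domain.com;
    # placeholder paths -- substitute your real certificate and key
    ssl_certificate     /etc/nginx/ssl/sd1.domain.com.crt;
    ssl_certificate_key /etc/nginx/ssl/sd1.domain.com.key;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # https scheme because the backend itself speaks TLS on 8383
        proxy_pass https://localhost:8383;
    }
}
```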
I have a private messaging app built with Node.js and Socket.io which works at the URL http://localhost:3000.
I'm wondering how I would integrate it on a dedicated server like Amazon's EC2. I understand that it will work at http://someip:3000, but I want the chat application to work inside a website, just like Facebook. How do I set it up to work on all website pages?
Thank you
Basically you need a web server to proxy your app on port 80 (HTTP). This is a basic configuration for Nginx where the node app running at 127.0.0.1:3000 is proxied to port 80 of www.example.com, which should point to the EC2 public IP:
upstream node {
ip_hash;
server 127.0.0.1:3000;
}
server {
server_name www.example.com;
access_log /var/log/nginx/www.example.com-access.log main;
error_log /var/log/nginx/www.example.com-error.log warn;
listen 80;
location / {
## Socket.io
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
## End Socket.io
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#proxy_set_header Host $http_host;
proxy_set_header Host $host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://node;
proxy_redirect off;
proxy_max_temp_file_size 0;
}
}
The issue is that when I use nginx, the socket connection is never established. Can anyone walk me through the steps to integrate Socket.io with nginx?
I tried this:
location /field {
# the following is required for WebSockets
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
# supposedly prevents 502 bad gateway error;
# ultimately not necessary in my case
proxy_buffers 8 32k;
proxy_buffer_size 64k;
# the following is required
proxy_pass
proxy_redirect off;
# the following is required as well for WebSockets
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
tcp_nodelay on; # not necessary
}
First of all, check your nginx version. According to the page below, WebSockets are supported as of v1.3.13:
http://nginx.com/blog/nginx-nodejs-websockets-socketio/
Then compare your nginx config with the configuration below, as given on the nginx blog:
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream websocket {
server 192.168.100.10:8010;
}
server {
listen 8020;
location / {
proxy_pass http://websocket;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}
http://nginx.com/blog/websocket-nginx/
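One detail that trips people up with the configuration above (a general nginx rule, not something specific to this question): the map block is only valid at the http level, so it must sit outside any server block, e.g. in the main nginx.conf or in a file included from the http context:

```nginx
http {
    # map must live here, at http level -- placing it inside a
    # server block fails with "map directive is not allowed here"
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }
    # server blocks can then go in included files
    include /etc/nginx/conf.d/*.conf;
}
```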
Also check your firewall configuration for the port you chose for the socket.io server. (I've read that some ISPs block WebSocket connections on ports other than 80 and 443; also check whether the server receives packets, using tcpdump etc.)
If everything is okay up to this point, check your nginx error logs (/var/log/nginx/error.log) to see whether there are any socket.io-related error messages. You can paste them here for further analysis.
Then, if there are no socket errors in the nginx logs, start your node app with debug mode on, as below:
DEBUG=* node yourfile.js
And check whether any socket connection messages are printed to the console. You can also paste those for further analysis.
I have an NGINX instance (1.4 stable) in front of a few NodeJS instances. I'm trying to load balance with NGINX using the upstream module like so:
upstream my_web_upstream {
server localhost:3000;
server localhost:8124;
keepalive 64;
}
location / {
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_set_header Connection "";
proxy_http_version 1.1;
proxy_cache one;
proxy_cache_key sfs$request_uri$scheme;
proxy_pass http://my_web_upstream;
}
The problem occurs when the instance at port 3000 is not available. I get a 502 Bad Gateway from NGINX.
If I change the upstream config to just point at one instance, 8124 for example, the 502 still occurs.
Running netstat shows no other applications listening on any of the ports I've tried.
Why is NGINX reporting a bad gateway? How can I get NGINX to do a fallthrough if one of the instances is down?
If netstat shows that your Node.js applications aren't listening on those ports, then the problem is that you haven't started them.
This nginx config knows how to proxy to the Node.js application, but you are guaranteed to get a 502 if the application has not been started. If you want to run it on multiple ports, you have to start an instance on each port. So don't hardcode port 3000 into the Node.js code; make it read the port from an environment variable, or spawn multiple instances with a process manager like pm2 (https://github.com/Unitech/pm2). Once these are running, nginx can proxy to them.
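As for the fallthrough part of the question: once the backends are actually running, nginx will already skip an upstream that refuses connections, and you can tune that behaviour with max_fails/fail_timeout or designate a backup server. A sketch (the directive values and the extra backup port are illustrative, not tuned):

```nginx
upstream my_web_upstream {
    # take a server out of rotation for 30s after 2 failed attempts
    server localhost:3000 max_fails=2 fail_timeout=30s;
    server localhost:8124 max_fails=2 fail_timeout=30s;
    # only receives traffic when the servers above are all unavailable
    server localhost:8125 backup;
    keepalive 64;
}
```

Inside the location block, the proxy_next_upstream directive controls which errors cause nginx to retry the next server; connection errors and timeouts are retried by default.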