I've created an environment in AWS which includes an EC2 instance with a Node.js web server and Nginx installed, behind an Application Load Balancer that uses a self-signed certificate.
My ALB receives requests over HTTPS (443) and forwards them over HTTP (80) to Nginx. Nginx should receive the requests from the ALB (on port 80) and forward them to port 9090 (which is used by the Node.js web server).
However, I'm having issues getting Nginx to pass the requests through to the application.
When entering the URL with the ALB DNS over HTTP, I get the default Nginx welcome page instead of my web server's application page.
My default.conf file is attached above.
All my security groups are open for testing (443, 80, 9090), so the ports are not the problem; it must be the Nginx configuration.
My target group is also shown above.
What could be the problem / what further configuration should I do?
Thank you.
When you have a load balancer, why are you using Nginx? It sounds like you are running two proxy layers for one Node.js application. SSL operations also consume extra CPU resources; the most CPU-intensive operation is the SSL handshake (see the nginx docs page on terminating SSL for HTTP upstreams).
The correct way to handle this, which will also solve your issue:
1. Create a target group and bind it to instance port 9090.
2. Generate a certificate from AWS Certificate Manager (it's free).
3. Create an HTTPS listener and attach the AWS certificate to it.
4. Add the target group created in step 1 to the HTTPS listener of the load balancer.
With this approach you are terminating SSL/TLS at the load balancer, and the instance receives a plain HTTP connection, which saves the CPU time otherwise spent on SSL encryption/decryption.
SSL termination is the term for proxy servers or load balancers that accept SSL/TLS connections but do not use SSL/TLS when connecting to the backend servers. E.g. a load balancer exposed to the internet might accept HTTPS on port 443 but connect to the backend servers via plain HTTP only.
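To make the idea concrete, here is a minimal sketch of what the backend side looks like under SSL termination. The port and header names are the conventional ones for this kind of setup, not taken from the question:

```nginx
# Backend nginx behind an SSL-terminating load balancer: plain HTTP only.
server {
    listen 80;

    location / {
        # The load balancer sets standard X-Forwarded-* headers; pass them
        # through so the app can see the original client IP and scheme.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_pass http://localhost:9090;
    }
}
```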
For testing purposes, this should work:
server {
    listen 80;
    server_name example.com;

    client_max_body_size 32M;
    underscores_in_headers on;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_pass_header device_id;
        proxy_read_timeout 120;
        proxy_connect_timeout 90;
        proxy_next_upstream error timeout invalid_header http_500;
        proxy_pass http://localhost:9090;
    }
}
Worked. The problem was in the /etc/nginx/nginx.conf file. After a lot of reading and trying, I found that the default server block inside the file was serving the static html directory (instead of my Node.js web server).
I changed the "root /path_to_ws" line, restarted Nginx, and it worked.
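For anyone hitting the same thing, the shape of the fix is to stop serving the static html root and proxy that location to the app instead. A minimal sketch (the root path shown is the common packaged default, not necessarily the asker's):

```nginx
server {
    listen 80;

    location / {
        # Was: root /usr/share/nginx/html;  (serves the Nginx welcome page)
        # Proxy to the Node.js server instead:
        proxy_pass http://localhost:9090;
    }
}
```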
Thank you for the help!
Related
My problem is getting the "real" client IP address from the web at the nginx level, which serves a static Vue.js site via SSL.
I want to block certain IP addresses. How can I get the real IP address if I can't use proxy_pass, since I only serve from a static location?
haproxy (tcp, port 443) ==> encrypted request ==> nginx (port 8085) ==> '/' location, where the real IP is needed for range blocking.
Please also see the questions/comments in the nginx vhost file. Am I on the right track here, or does this need to be done entirely differently?
haproxy setup:
frontend ssl_front_433 xx.xx.xx.xx:443
    mode tcp
    option tcplog
    use_backend ssl_nginx_backend_8085

backend ssl_nginx_backend_8085
    mode tcp
    balance roundrobin
    option tcp-check
    server srv-2 127.0.0.1:8085 check fall 3 rise 2 inter 4s
nginx setup:
server {
    listen 8085 ssl;
    server_name mydomain;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ssl_certificate ./fullchain.pem;
    ssl_certificate_key ./privkey.pem;
    include include.d/ssl.conf;

    # I want to only allow certain IP addresses.
    # haproxy of course always returns 127.0.0.1, thus this is not working:
    include include.d/ip_range.conf;

    location / {
        # How do I get the proxy headers to be applied here?
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        # Do I need a proxy_pass, and if so, where should I pass to,
        # in order to use it with static html/js?
        # Can I use an upstream to a static location?
        #proxy_pass http://;
        try_files $uri $uri/ /index.html;
    }
}
On the nginx side you can control which IP addresses or ranges are permitted by adding a deny all and an allow range to your server block, like so:
allow 192.168.1.0/24;
deny all;
Note: The nginx docs are always an excellent place to start, here's the docs for restricting access by IP addresses and ranges.
First, I would challenge you to reconsider why you need a load balancer with haproxy for something as simple as a static html/css/js site. More infrastructure introduces more complications.
Second, an upstream in nginx is only needed if you want to point requests at a local WSGI server, for example. In your case this is static content, so you shouldn't need an upstream, not unless you have some sort of WSGI service you want to forward requests to.
Finally, as for haproxy only forwarding requests as 127.0.0.1: first make sure the IP is in a header (i.e. X-Real-IP), then, if you indeed want to keep haproxy, you can try adding something like this to your haproxy config (source):
frontend all_https
    option forwardfor header X-Real-IP
    http-request set-header X-Real-IP %[src]
The haproxy documentation is also a good resource for preserving source IP addresses.
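One caveat: http-request set-header only works when haproxy runs in mode http; with the mode tcp pass-through shown in the question, the usual alternative is the PROXY protocol. A sketch of both sides, assuming haproxy and nginx run on the same host as in the question:

```nginx
# haproxy side (in the backend): append send-proxy to the server line, e.g.
#   server srv-2 127.0.0.1:8085 check send-proxy
#
# nginx side: accept the PROXY protocol and trust the local haproxy.
server {
    listen 8085 ssl proxy_protocol;

    set_real_ip_from 127.0.0.1;       # only trust the local haproxy
    real_ip_header  proxy_protocol;   # take the client IP from PROXY protocol

    # allow/deny rules now see the real client address
    include include.d/ip_range.conf;
}
```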
My current AWS setup is below:
custom AWS domain --> classic load balancer (which has an attached SSL cert from AWS Certificate Manager) --> Elastic Beanstalk instance
From what I understand about the relationship between a load balancer and the instance it points to: after configuring a load balancer to point to an instance, you then use the load balancer's public-facing DNS name instead of the IP/DNS of the Elastic Beanstalk instance.
The load balancer also has an SSL cert attached to it, from the AWS Certificate Manager service. From what I understand about using an SSL cert with a load balancer, a user goes to https://, and the load balancer encrypts and decrypts before communicating with the instance it points to, which is done over port 80 via HTTP.
My load balancer's listener config is below:
Load Balancer Config Image
On my Elastic Beanstalk instance I have an nginx.conf file that supposedly takes the traffic and reroutes it to the port my Node server is running on, like below:
server {
    listen 8080;

    location / {
        proxy_pass http://nodejs;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
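(For context: proxy_pass http://nodejs; refers to an upstream block that the Elastic Beanstalk Node.js platform defines elsewhere in its default nginx config, roughly like the sketch below; a 502 usually means nginx could not reach that upstream port. The port shown is the platform's common default, not confirmed from this setup.)

```nginx
# Roughly what the Elastic Beanstalk default nginx config defines:
upstream nodejs {
    server 127.0.0.1:8081;  # the port the Node app must actually listen on
    keepalive 256;
}
```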
At the moment I'm getting 502 errors from nginx. Previously I was getting 508 errors. I haven't gotten even a simple endpoint to load on my Elastic Beanstalk instance.
I have seen all the conversations on this topic, but none really help me conclude what I'm doing incorrectly. Any assistance would be appreciated.
I have uploaded a React client to DigitalOcean with an SSL certificate to enable HTTPS. I have also uploaded my Express server to AWS. The reason for the different host providers is that I wasn't able to upload my client to AWS, so I made the switch to DigitalOcean.
The server works great and I get normal responses from it when I use the client from my machine. However, the exact same code doesn't work on DigitalOcean's Nginx server. I get:
TypeError: NetworkError when attempting to fetch resource
But no response error code. The GraphQL/fetch requests aren't visible on the server, so either they aren't being sent correctly or they cannot be accepted correctly by the server.
I played around with "proxy" in the client's package.json and the HOST/PORT/HTTPS attributes as seen here, but I realized these have no effect in production.
I have no idea how to fix this. My only guess is that the client uses HTTPS while the server doesn't, but I haven't found info on whether that's a problem.
This is my client's Nginx server configuration:
server {
    listen 80 default_server;
    server_name example.com www.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
If your client lives on a domain different from your API server then you need to make sure you have CORS headers enabled on the API server, otherwise the browser will refuse to load the contents.
See here for more information regarding CORS headers.
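If you'd rather handle it at the proxy layer, CORS headers can also be added in the API server's nginx config. A minimal sketch; the origin, path, and upstream port here are placeholders, not taken from the question:

```nginx
location /api/ {
    # Allow the DigitalOcean client origin (placeholder value).
    add_header Access-Control-Allow-Origin "https://example.com" always;
    add_header Access-Control-Allow-Methods "GET, POST, OPTIONS" always;
    add_header Access-Control-Allow-Headers "Content-Type, Authorization" always;
    add_header Access-Control-Allow-Credentials "true" always;

    # Answer preflight requests directly.
    if ($request_method = OPTIONS) {
        return 204;
    }

    proxy_pass http://localhost:4000;  # hypothetical Express port
}
```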
Try turning off credentials when running the client on DigitalOcean; maybe storing a cookie from your client is/was not possible.
I'm using this nginx reverse proxy tutorial to set up a Node site with nginx. This is what my /etc/nginx/conf.d/mydomain.com.conf looks like:
server {
    listen 80;
    server_name mydomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
The problem is that when I visit my domain, it redirects to another domain that I have set up on the same server. I set up that other domain (a static page) using an nginx virtual hosts tutorial that uses server blocks.
One difference I noticed is that the reverse proxy tutorial doesn't do any of the symlinking between sites-available and sites-enabled that the virtual hosts tutorial does with the server block files. The virtual hosts tutorial instructs the reader to create server block files and then enable them like this:
sudo ln -s /etc/nginx/sites-available/demo /etc/nginx/sites-enabled/demo
Do I have to do anything to enable the config file when setting up a reverse proxy with nginx? If not, do you know why it's redirecting to the other domain?
Let me start with a small explanation of how nginx matches hosts, quoting from "How nginx processes a request":
"In this configuration nginx tests only the request's header field 'Host' to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, then nginx will route the request to the default server for this port."
According to your description, there are two possibilities:
either the reverse proxy virtual host has a wrong name, so it is not matched and the request is directed to the first virtual host that listens on port 80,
or the reverse proxy is correct but the configuration was not loaded.
To fix this, double-check that the line server_name mydomain.com; is correct and indeed matches the URL you are requesting, then make sure you reloaded the nginx settings: sudo service nginx reload
The problem was that /etc/nginx/conf.d/mydomain.com.conf hadn't been copied into /etc/nginx/sites-enabled.
In most of the tutorials I've come across, you set up a Node.js web app by having the server listen on a port, and you access it in the browser by specifying that port. However, how would I deploy a Node.js app to be fully accessible by a domain like foobar.com?
You have to bind your domain's apex (naked domain), and usually www, to your web server's IP or its CNAME.
Since you cannot bind an apex domain to a CNAME, you have to specify the server IP or IPs, or the load balancers' IPs.
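In zone-file terms the records look roughly like this (the IP is a documentation placeholder, not a real server):

```
; apex must be an A record; www can be a CNAME pointing at it
foobar.com.      IN  A      203.0.113.10
www.foobar.com.  IN  CNAME  foobar.com.
```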
Your question is a little vague. If your DNS is already configured, you could bind to port 80 and be done with it. However, if you already have Apache or some other httpd running on port 80 to serve other hosts, that obviously won't work.
If you prefer to run the node process as non-root (and you should), it's much more likely that you're looking for a reverse proxy. My main httpd is nginx; the relevant option is proxy_pass. If you're using Apache you probably want mod_proxy.
I just created an A record at my registrar pointing to my web server's IP address. Then you can start your node app on port 80.
An alternative would be to redirect:
http://www.foobar.com to http://www.foobar.com:82
Regards.
Use pm2 to run your node apps on the server.
Then use Nginx to proxy to your node server. I know this sounds weird, but that's the way it's done. Eventually, if you need to set up a load balancer, you do that all in Nginx too.
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://APP_PRIVATE_IP_ADDRESS:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
This is the best tutorial I've found on setting up node.js for production.
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-14-04
For performance, you can also set up nginx to serve your public files directly:
location /public {
    allow all;
    access_log off;
    root /opt/www/site;
}