I have three instances on AWS: one running nginx as the front-end server, and two back-end Node.js instances.
I'm trying to set up the nginx server to proxy upstream to these Node.js instances:
upstream node_servers {
    server private_ip:8124 weight=10 max_fails=3;  # node server 1 private_ip:port
    server private_ip:8124 weight=10 max_fails=3;  # node server 2 private_ip:port
}

server {
    listen private_ip:80;  # nginx server private ip:port
    root /home/ubuntu/project/;
    server_name public_ip.eu-west-1.compute.amazonaws.com;  # nginx public DNS

    location / {
        try_files $uri $uri/ /index.html;
        proxy_pass http://node_servers/;
    }
}
On both Node instances (node server 1 and node server 2), app.js contains:
app.listen(8124, "127.0.0.1");
console.log("listening on 8124");
I go to the nginx server's public domain name and nothing really happens; the page just loads forever with the request pending.
In your Node code, you are listening on the loopback interface, 127.0.0.1, which accepts requests from localhost only:
app.listen(8124, "127.0.0.1");
You have to listen on the instance's private IP, or on 0.0.0.0 (all interfaces):
app.listen(8124, "0.0.0.0");
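A minimal sketch of what the fixed app.js might look like (assuming Express, which the app.listen call implies; the route is purely illustrative):

const express = require("express");
const app = express();

// illustrative route so the sketch is runnable end to end
app.get("/", (req, res) => {
    res.send("hello from the node backend");
});

// bind to all interfaces so nginx on the front-end instance can reach us
app.listen(8124, "0.0.0.0", () => {
    console.log("listening on 8124");
});

Also check the backend instances' security group: inbound TCP on port 8124 has to be allowed from the nginx instance, or the proxied requests will still hang.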
I have my business app running in a development environment, and inside that are two folders named Client and Backend.
Client (React) runs on port 5000.
Backend (Node.js) runs on port 6000.
The server is nginx.
So in the nginx default.conf file I listen on 80 and proxy_pass to http://localhost:5000.
It works fine in development.
Please note that some redirections are configured like ${host}:3000/xxx in the backend and client scripts.
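Roughly, the development default.conf looks like this (a minimal sketch containing only the directives described above):

server {
    listen 80;

    location / {
        # everything goes to the React client dev server
        proxy_pass http://localhost:5000;
    }
}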
But when doing the production deployment, I ran into difficulty.
I built the static client files and placed them in the nginx root folder.
Below is the .conf file:
server {
    listen 80;
    listen 5000;
    server_name xx.xxx.xxx.xxx;

    location / {
        root /usr/share/nginx/html/client/build;
        index index.html index.htm;
        try_files $uri $uri/ #backend;
    }

    location ~ ^/([A-Za-z0-9]+) {
        proxy_pass http://localhost:6000;
    }
}
I also have SSO enabled; when I navigate to the address, it sends the index.html file, which is the login page.
When I press login, it first navigates to "/login/abc/", which is routed in the "backend" script.
But it responds with a 404 error.
What am I doing wrong?
But it's working fine with the public IP, as below.
I have created a subdomain with Route 53 and assigned an A record with the instance's public IP.
But when I ping the domain and the IP, I get request timeouts and all packets are lost. My application is a Node Express app, so it would be a huge favor if anyone could solve this issue.
From your screenshot, I see that you're using nginx as a reverse proxy, so this might be caused by your nginx config.
For example, your nginx config may look like this:
server {
    listen 80;
    server_name 192.168.1.21;
    ...
}
You have to update it to:
server {
    listen 80;
    server_name subdomain.domain.com;
    ...
}
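If the server should keep answering on the bare IP as well, server_name accepts several names in one directive (a sketch; substitute your real subdomain and IP):

server {
    listen 80;
    server_name subdomain.domain.com 192.168.1.21;  # respond to both the subdomain and the IP
    ...
}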
I'm trying to set up upstream servers with nginx. All of them run the same Node.js app on port 8080 under pm2. Here is the nginx default.conf of the main server:
upstream backend {
    ip_hash;
    server localhost:8080;
    server sv1_ip_address;
    server sv2_ip_address;
}
server {
    listen 443 ssl;

    location / {
        proxy_pass http://backend;
        ...
    }
    ...
}
And on sv1 and sv2 I have the same default.conf, as follows:
server {
    listen 80 default_server;

    location / {
        proxy_pass http://localhost:8080;
        ...
    }
}
Now when I tried shutting down either sv1 or sv2 (using pm2 kill for Node, or even rebooting), all upstream servers went down and I received a 500 error when accessing the app by the domain name. So I thought there was something wrong with nginx on those secondary servers, and I replaced upstream backend with this:
upstream backend {
    ip_hash;
    server localhost:8080;
    server sv1_ip_address:8080;
    server sv2_ip_address:8080;
}
and now shutting down or rebooting is handled correctly (meaning nginx routes requests to one of the live servers). Is this expected behavior, or am I doing something wrong here? I don't think routing requests directly to port 8080 is a good idea, though.
I do not know why you had to install the nginx service on the sv1 and sv2 servers.
When you reboot sv1 or sv2, nginx has to come back up first; once the reboot is done, please check with service nginx status whether it is running.
And killing Node means the application itself is down, so you got a 500 error from nginx.
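One likely explanation for the difference between the two configs: server sv1_ip_address with no port defaults to port 80, so the main server was talking to the nginx on sv1, not to the Node app. After pm2 kill, that nginx still accepts connections and answers 502, and by default nginx does not count an HTTP 502 from a peer as a failure, so the peer never leaves rotation. With :8080, a dead Node process refuses the connection outright, which does count. A sketch of tuning this explicitly (max_fails and fail_timeout are standard upstream parameters; the addresses mirror the question):

upstream backend {
    ip_hash;
    # take a peer out of rotation for 30s after 3 failed attempts
    server localhost:8080 max_fails=3 fail_timeout=30s;
    server sv1_ip_address:8080 max_fails=3 fail_timeout=30s;
    server sv2_ip_address:8080 max_fails=3 fail_timeout=30s;
}

If you keep nginx on sv1 and sv2, adding http_502 to proxy_next_upstream on the main server would make bad-gateway responses from them trigger failover as well.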
I have an ALB serving HTTPS that forwards requests to my EC2 instance.
I configured the ALB listeners for HTTP/HTTPS and targeted my EC2 instance.
When I try to access my ALB, I get these results:
https://domainSample
Response = Welcome to nginx
https://domainSample/api/getSample
Response = 404 Not Found nginx
https://domainSample:3000
No Response
This is my nginx configuration on the EC2 instance; the app runs on port 3000:
server {
    listen 80;
    server_name domainSample;

    location / {
        try_files $uri $uri/ =404;
    }
}
Where did I go wrong?
I have searched and read the documentation on AWS and did some tweaking and testing on the application.
Here is what I understand about the flow of a request from the ALB to EC2.
When configuring the ALB, under Target Groups, we need to set the target of its requests, which is the EC2 instance the application is running on.
For instance, with Node.js running on port 3000 on the EC2 instance, we add the target instance and specify port 3000.
This solved my problem. Thanks.
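For the 404 on /api/getSample, a sketch of a server block that also forwards API traffic to the Node app (the /api prefix and port 3000 come from the question; treat the rest as an assumption about this setup):

server {
    listen 80;
    server_name domainSample;

    # forward API requests to the Node app on port 3000
    location /api/ {
        proxy_pass http://localhost:3000;
    }

    location / {
        try_files $uri $uri/ =404;
    }
}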
Let's say I have corporatewebsite.com listening on port 80, which is an Apache/WordPress site.
I have a Node application that I'd like to respond on sub.corporatewebsite.com.
The Node application is currently running Express. I can get it to listen on a separate port, but my service won't start when it's pointed at port 80, since that port is already in use.
How can I have both Apache and Node listening on port 80, with Node responding on the subdomain?
You could use a reverse proxy like nginx to route your subdomains.
Here is an example nginx configuration; you will probably have to complete it according to your project and server:
server {
    listen 80;
    server_name corporatewebsite.com;

    location / {
        [ ... some parameters ... ]
        include proxy_params;               # import shared proxy settings for your routes
        proxy_pass http://localhost:1234/;  # the server that listens on 1234
    }
}

server {
    listen 80;
    server_name sub.corporatewebsite.com;

    location / {
        [ ... some parameters ... ]
        include proxy_params;
        proxy_pass http://localhost:4567/;  # the other server, listening on 4567
    }
}
You have to configure, for example, Apache to listen on port 1234 while Node.js listens on port 4567.
If you do it this way, a good practice is to block direct access from the outside to ports 1234 and 4567 (using iptables, for example).
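On the Node side, a minimal Express sketch matching the port above (4567 is just the example port from this answer):

const express = require("express");
const app = express();

// illustrative route for the subdomain site
app.get("/", (req, res) => {
    res.send("content for sub.corporatewebsite.com");
});

// bind to localhost only: nginx proxies to it on the same host,
// and outside clients cannot hit port 4567 directly
app.listen(4567, "127.0.0.1", () => {
    console.log("Node app listening on 4567");
});

Binding to 127.0.0.1 also covers the point above about blocking direct access, without needing a firewall rule for that port.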
I think this post could be useful to you : Node.js + Nginx - What now?