NGINX not routing between Node.js back-end and React front-end

I have deployed a web app with a Node.js back-end and a React front-end on AWS Elastic Beanstalk, using the default NGINX configuration:
upstream nodejs {
    server 127.0.0.1:8081;
    keepalive 256;
}

server {
    listen 8080;

    location / {
        proxy_pass http://nodejs;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    gzip on;
    gzip_comp_level 4;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}
My back-end runs on port 8081 (with Express.js) and doesn't receive any of the calls made by the front-end, i.e. fetch("http:127.0.0.1/api/volatility").
In the console I see GET https://foo-bar.us-east-1.elasticbeanstalk.com:8080/api/volatility net::ERR_CONNECTION_TIMED_OUT.
Any way to fix this?

It turned out that my Elastic Beanstalk service didn't have permission to read from or write to the database.

Related

Configure NGINX for multiple Odoo instances

I have installed two Odoo instances on my VPS, and now I'm trying to configure NGINX to serve both domains with their respective ports. I am a beginner with NGINX; I tried searching the web, but nothing was clear enough, and the guide I followed for installing Odoo only shows an NGINX configuration for a single domain.
This is the config I'm currently using:
upstream odooserver {
    server 127.0.0.1:8050;
}

server {
    listen 80;
    server_name www.domain.com;

    access_log /var/log/nginx/odoo_access.log;
    error_log /var/log/nginx/odooe_error.log;

    proxy_read_timeout 720s;
    proxy_connect_timeout 720s;
    proxy_send_timeout 720s;

    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;

    location / {
        proxy_redirect off;
        proxy_pass http://odooserver;
    }

    location ~* /web/static/ {
        proxy_cache_valid 200 90m;
        proxy_buffering on;
        expires 864000;
        proxy_pass http://odooserver;
    }

    gzip_types text/css text/less text/plain text/xml application/xml application/json application/javascript;
    gzip on;
}
It's working now after editing the config file as the others suggested; it also turned out that there was something wrong with the DNS settings.
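For reference, a minimal sketch of how two instances might be split across two server blocks (both domains and the second instance's port, 8070, are placeholders rather than my real values):

upstream odooserver_one {
    server 127.0.0.1:8050;
}

upstream odooserver_two {
    server 127.0.0.1:8070;   # assumed port of the second Odoo instance
}

server {
    listen 80;
    server_name www.domain-one.com;   # placeholder domain

    location / {
        proxy_pass http://odooserver_one;
    }
}

server {
    listen 80;
    server_name www.domain-two.com;   # placeholder domain

    location / {
        proxy_pass http://odooserver_two;
    }
}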

Node with NGINX reverse proxy seems to kill connection even when configured using long timeout

I have this Node app running behind an NGINX reverse proxy. Its job is to generate and download a large XLS file, which takes about 80-120 seconds. It works locally without NGINX, but behind NGINX it seems to just hang and give me a timeout error.
I use MongoDB with Mongoose as the database in my Node app, and it queries the database to build the XLSX.
Here is the relevant piece of the NGINX configuration:
keepalive_timeout 70;
client_max_body_size 16m;

location / {
    gzip on;
    gzip_min_length 1100;
    gzip_buffers 4 32k;
    gzip_types text/css text/javascript text/xml text/plain text/x-component application/javascript application/x-javascript application/json application/xml application/rss+xml font/truetype application/x-font-ttf font/opentype application/vnd.ms-fontobject image/svg+xml;
    gzip_vary on;
    gzip_comp_level 6;

    proxy_pass http://indorelawan-80;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Request-Start $msec;

    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    proxy_connect_timeout 600;
    proxy_send_timeout 600;
    proxy_read_timeout 600;
    send_timeout 600;
}
As you can see, proxy_send_timeout and proxy_read_timeout are set to 600 seconds. When I try it locally (without NGINX), the XLS downloads in about 83 seconds. But in production behind NGINX, it halts and returns a timeout. Is there any way to fix this?
Never mind, I moved to using queues like BullMQ instead.

Nodejs/Socket.io on Ubuntu Server with Nginx Reverse Proxy - "failed: Error in connection establishment: net::ERR_CONNECTION_TIMED_OUT"

I have been trying to deploy my chat app on my home Ubuntu server. It works locally when I connect to it using the internal IP or the local server hostname.
I am using an NGINX reverse proxy to map http://localhost:3000 to my external domain so that I can access it over the internet: http://tfmserver.dynu.net/
Nginx proxy:
server {
    listen 80;
    listen [::]:80;

    root /var/www/tfmserver.dynu.net/html;
    index index.html index.htm index.nginx-debian.html;

    server_name tfmserver.dynu.net;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
I get errors similar to the following, though they vary somewhat depending on what I attempt in order to fix it:
WebSocket connection to 'ws://tfmserver.dynu.net/socket.io/?EIO=3&transport=websocket&sid=wQY_D0JOZm4VWGXgAAAA' failed: Error in connection establishment: net::ERR_CONNECTION_TIMED_OUT
or
POST http://tfmserver.dynu.net/socket.io/?EIO=3&transport=polling&t=MklujE_&sid=fbdZir8lxOlMOZm6AAAA net::ERR_CONNECTION_TIMED_OUT
According to some posts about this error, Chrome is attempting the connection over SSL while the server is not serving it that way. However, I have added SSL to the server and to the project, and it does not resolve the issue. At the moment I have it removed, but I would not mind adding it back in once it is working.
I've tried everything I could find in the other questions posted here; none of it resolves the issue.
How can I get this to work externally? What am I doing wrong?
Here are the relevant parts of the project for the sockets. If you need anything else that could help, please let me know. Thanks in advance!
server:
var express = require('express');
var app = express();
var server = require('http').createServer(app);
var io = require('socket.io').listen(server);
server.listen(process.env.PORT || 3000, 'localhost');
client:
var socket = io.connect();
UPDATE: I just connected to it from my work computer and it works! But it does not work from inside my own network when using the external address. What's up with that?
I was able to make it work using my config; you need to handle both the redirect and the proxy:
server {
    listen 80;
    server_name 11.111.111.111;

    client_max_body_size 800M;

    gzip on;
    gzip_comp_level 6;
    gzip_vary on;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_buffers 16 8k;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:9000/;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location ~* \.io {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:4001;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
UPDATE: So I have finally resolved it (mostly)!
By "mostly" I mean that I am still not able to use my domain to access it when I am at home on the same network as my server. There I must use the server's internal IP or its network hostname. The domain works from outside the network, such as from work or elsewhere. (I can live with that!)
The issue was with the NGINX proxy; the final config that resolved it for me is as follows:
server {
    listen 80;
    server_name tfmserver.dynu.net;

    client_max_body_size 800M;
    root /home/tfm/Projects/Chat;

    gzip on;
    gzip_comp_level 6;
    gzip_vary on;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_buffers 16 8k;

    location /socket.io/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:9000/socket.io/;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
To fix the new error, the 502 Bad Gateway, I had to add "/socket.io/" to both the location and the proxy_pass target. The other additions were recommended by Jairo Malanay; adding them fixed the initial connection-refusal problem I had.
I was also having an issue with the CSS not loading, and adding the /socket.io/ path resolved that as well.
My final server-side code:
var express = require('express');
var app = express();
var server = require('http').createServer(app);
var io = require('socket.io').listen(server);
server.listen(process.env.PORT || 9000, 'localhost');
app.get('/', function (req, res) {
    res.sendFile(__dirname + '/index.html');
});

io.sockets.on('connection', function (socket) {
    // ... socket event handlers ...
});
And my final client-side code:
var socket = io.connect();
Thanks again for the help, Jairo Malanay!
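If I do add SSL back in later, a minimal sketch of the same proxy served over HTTPS might look like this (the certificate paths are placeholders for wherever the certs actually live, not values from my setup):

server {
    listen 443 ssl;
    server_name tfmserver.dynu.net;

    # Placeholder certificate paths (adjust to your own cert locations)
    ssl_certificate     /etc/ssl/certs/tfmserver.crt;
    ssl_certificate_key /etc/ssl/private/tfmserver.key;

    location /socket.io/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass http://localhost:9000/socket.io/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

With the page served over https, the client-side io.connect() would use wss:// for the WebSocket transport automatically.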

Compressing assets with NGINX in reverse proxy mode

I'm using NGINX as a reverse proxy in front of a Node.js app. The basic proxy works perfectly fine, and I'm able to compress assets on the Node server with the compression middleware.
To test whether it's possible to delegate the compression work to NGINX, I've disabled the middleware and am now trying to gzip with NGINX using the following configuration:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 300;

    server {
        listen 80;

        ## gzip config
        gzip on;
        gzip_min_length 1000;
        gzip_comp_level 5;
        gzip_proxied any;
        gzip_vary on;
        gzip_types text/plain
                   text/css
                   text/javascript
                   image/gif
                   image/png
                   image/jpeg
                   image/svg+xml
                   image/x-icon;

        location / {
            proxy_pass http://app:3000/;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_cache_bypass $http_upgrade;
        }
    }
}
With this configuration, NGINX doesn't compress the assets. I've tried declaring these directives in the location context with different options, but none of them seems to do the trick.
I couldn't find relevant resources on this, so I'm starting to question whether it can be done this way at all.
Important points:
1. Node and NGINX are in different containers, so I'm not serving the static assets with NGINX; I'm just proxying to the Node server, which serves these files. All I'm trying to achieve is to offload the Node server by having NGINX do the gzipping.
2. I'm testing all the responses with "Accept-Encoding: gzip" enabled.
Try to add the application/javascript content type:
gzip_types text/css
           text/javascript
           text/xml
           text/plain
           text/x-component
           application/javascript
           application/json
           application/xml
           application/rss+xml
           font/truetype
           font/opentype
           application/vnd.ms-fontobject
           image/svg+xml;
I took the values from the H5BP nginx config.
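For context, the gzip directives are valid at the http, server, and location levels. A sketch of the server block from the question with the expanded list (proxy settings abbreviated; the types shown merge the original list with application/javascript and application/json):

server {
    listen 80;

    gzip on;
    gzip_min_length 1000;
    gzip_comp_level 5;
    gzip_proxied any;
    gzip_vary on;
    gzip_types text/plain
               text/css
               text/javascript
               application/javascript
               application/json
               image/svg+xml
               image/x-icon;

    location / {
        proxy_pass http://app:3000/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}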

Set up nginx to use another gateway in case of a 504 error

I got the following nginx config:
server {
    listen 80;
    server_name domainName.com;

    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/rss+xml text/javascript image/svg+xml application/vnd.ms-fontobject application/x-font-ttf font/opentype;

    access_log /var/log/nginx/logName.access.log;

    location / {
        proxy_pass http://127.0.0.1:9000/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~ ^/(min/|images/|ckeditor/|img/|javascripts/|apple-touch-icon-ipad.png|apple-touch-icon-ipad3.png|apple-touch-icon-iphone.png|apple-touch-icon-iphone4.png|generated/|js/|css/|stylesheets/|robots.txt|humans.txt|favicon.ico) {
        root /root/Dropbox/nodeApps/nodeJsProject/port/public;
        access_log off;
        expires max;
    }
}
It is a proxy for a Node.js application on port 9000.
Is it possible to change this config so that nginx uses another proxy URL (on port 9001, for example) when it gets a 504 error?
I need this for the case when the Node.js server on port 9000 is down and needs several seconds to restart automatically; for those several seconds nginx returns a 504 error for every request. I want nginx to "guess" that the Node.js site on port 9000 is down and use the reserve Node.js site on port 9001.
Use the upstream module:
upstream node {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001 backup;
}

server {
    ...
    proxy_pass http://node/;
    ...
}
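A slightly fuller sketch, reusing the server_name and proxy headers from the question: by default nginx only retries the next (backup) server on connection errors and timeouts, so to also fail over when the upstream itself answers with a 504 you can add proxy_next_upstream (the max_fails/fail_timeout values are illustrative):

upstream node {
    server 127.0.0.1:9000 max_fails=1 fail_timeout=5s;   # illustrative values
    server 127.0.0.1:9001 backup;
}

server {
    listen 80;
    server_name domainName.com;

    location / {
        proxy_pass http://node/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Try the next (backup) server on connection errors, timeouts,
        # and upstream 504 responses instead of returning them to the client.
        proxy_next_upstream error timeout http_504;
    }
}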
