Is it dangerous to open port 3000 on the server? - node.js

I want to deploy my Angular + NodeJS application. My NodeJS application runs on http://localhost:3000 on the server, and my Angular application sends its requests to the server with the prefix http://server.ip.address:3000. I opened port 3000 on the server with the following commands to make my program work, and it works fine for now.
firewall-cmd --zone=public --add-port=3000/tcp --permanent
firewall-cmd --reload
But I am not sure whether this was a good approach.
My Angular app is served by nginx and my NodeJS app runs under PM2. I also tried setting up a reverse proxy, as you can see below in /etc/nginx/nginx.conf, but it didn't work, and just opening port 3000 worked for me!
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  _;
        root         /demo/stock-front9/dist/strategy;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            #proxy_pass http://localhost:3000;
            #proxy_http_version 1.1;
            # First attempt to serve request as file, then
            # as directory, then redirect to index (angular) if no file found.
            try_files $uri $uri/ /index.html;
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
What is the best way to deploy an Angular + NodeJS application, and how can I do it?

You can deploy the app by reading the port from process.env.PORT, putting the whole Angular build output into a public folder, and pointing your Node server at that folder to serve it as static files.
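A minimal sketch of that idea with Express (the public folder name and the /api/health route are illustrative assumptions, not taken from the linked repository):

const express = require('express');
const path = require('path');

const app = express();

// Serve the compiled Angular files (copy the contents of your dist output into ./public).
app.use(express.static(path.join(__dirname, 'public')));

// API routes live on the same server, so the frontend can use relative URLs.
app.get('/api/health', function (req, res) {
  res.json({ ok: true });
});

// For any other route, return index.html so Angular's router takes over.
app.get('*', function (req, res) {
  res.sendFile(path.join(__dirname, 'public', 'index.html'));
});

// Read the port from the environment, falling back to 3000 locally.
const port = process.env.PORT || 3000;
app.listen(port, function () {
  console.log('Listening on port ' + port);
});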
You can take this repository as a reference: https://github.com/Ris-gupta/chat-application

There's no single best way, but there are some best practices. Opening a port directly on the server is not a good solution. I would suggest using Docker and publishing your application inside a container behind NGINX. You can deploy your backend server the same way.

Related

Deploy Vue frontend and node backend docker image with nginx

I have a Vue frontend and a separate Node backend. I want to deploy the whole thing and have created a docker image for the frontend and a docker image for the backend. Unfortunately the data from the backend is not visible on my frontend. My backend is running on port 3000.
The nginx.conf of the frontend looks like this:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    server {
        listen      80;
        server_name localhost;

        location /api {
            proxy_pass http://localhost:3000; # port that Express serves
        }

        location / {
            root      /app;
            index     index.html;
            try_files $uri $uri/ /index.html;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
In WebStorm everything works, but as soon as I build the Docker images it doesn't: I don't see the data from the backend in my frontend.
What have I forgotten to consider? And what should the config files in my backend look like for this to work?
Thanks for the help!
Regards.

Nodejs showing 413 Payload Too Large on nginx server

I'm running this code on my Amazon AWS server without a load balancer. It's a simple server I set up. I'm trying to run code that crawls for data, written in Node.js. Currently, it shows the error below when I upload a lot of data to crawl:
Request Method: POST
Status Code: 413 Payload Too Large
Following many of the suggestions I read here on Stack Overflow, I added client_max_body_size 500M; at the http, server, and location levels and restarted the server, but it has no effect.
Here's the nginx.conf file:
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile             on;
    tcp_nopush           on;
    tcp_nodelay          on;
    keepalive_timeout    65;
    types_hash_max_size  4096;
    client_max_body_size 500M;

    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80;
        listen       [::]:80;
        server_name  example.com;
        root         /usr/share/nginx/html/crawler;

        client_max_body_size 500M;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            #try_files $uri /index.html;
            proxy_pass http://127.0.0.1:4200/;
            client_max_body_size 500M;
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
Where am I going wrong?
I would suggest you consider these points:
What's the size of your POST request? It might be bigger than 500 MB, so you could raise the limit even further; technically you only need to set it at the server level (although chunking the POST data would be far better than raising the limit).
Are you using Express? You may also have limits on the request body size in Express; see the sketch below.
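If you are, here is a minimal sketch of raising those limits (the 500mb value mirrors the client_max_body_size above, port 4200 matches the proxy_pass target in the question, and the /crawl route is a hypothetical example):

const express = require('express');
const app = express();

// Express's built-in JSON and urlencoded parsers default to a 100kb body limit,
// which also produces "413 Payload Too Large" regardless of the nginx setting.
app.use(express.json({ limit: '500mb' }));
app.use(express.urlencoded({ limit: '500mb', extended: true }));

// Hypothetical upload endpoint that hands the payload to the crawler.
app.post('/crawl', function (req, res) {
  // ... start crawling req.body here ...
  res.sendStatus(202);
});

// Matches the upstream port nginx proxies to in the config above.
app.listen(4200);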
Kind regards

Should I open port 3000 of my server to serve a NodeJS application?

I have an Angular + Node.js app. When I was running the program locally, I defined baseurl = http://localhost:3000/ in my Angular app and used this prefix in the links that access my NodeJS backend. Now that I want to deploy my app on a remote server, I changed the definition to baseurl = http://111.222.333.444:3000/ (111.222.333.444 being my server's IP address, for example), but it doesn't work!
How should I connect my Angular app to the Node server on a remote server?
EDIT: This is my /etc/nginx/nginx.conf file content:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  _;
        root         /demo/stock-front9/dist/strategy;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            #proxy_pass http://localhost:3000;
            #proxy_http_version 1.1;
            # First attempt to serve request as file, then
            # as directory, then redirect to index (angular) if no file found.
            try_files $uri $uri/ /index.html;
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
I would not. I think it is better to run the Node app with a tool like PM2 and then place an Nginx reverse proxy in front of it: PM2 will act as the orchestrator for your service, while Nginx provides access only through the standard web ports (80, 443).
And in the case of Angular, compiling it should produce a static web app that you can serve from the same Nginx reverse proxy. Doing it this way, you save yourself the effort of configuring things like CORS, API routes and so forth; everything goes through Nginx.
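As a rough sketch of the PM2 side of that setup (the app name and entry-point path are assumptions; adjust them to your project):

// ecosystem.config.js -- a PM2 process file; start it with: pm2 start ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'node-backend',   // hypothetical name
      script: './server.js',  // hypothetical entry point
      instances: 1,
      autorestart: true,
      env: {
        NODE_ENV: 'production',
        PORT: 3000            // the port Nginx proxies /api requests to
      }
    }
  ]
};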
Update: an example Nginx config file
server {
    listen      80;
    server_name example.org;

    location /api {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_redirect off;
        proxy_http_version 1.1;
    }

    location / {
        root  /path/to/angular/compiled/app;
        index index.html;
    }
}
And then the Angular app should point to the same host, using the /api prefix from the config above.
Good luck and cheers :)
You can still run your Angular app locally, and for the backend server you can use a proxy.
Please take a look at this:
https://github.com/angular/angular-cli/blob/master/docs/documentation/stories/proxy.md#using-corporate-proxy
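That guide boils down to a small proxy config for the Angular dev server. A minimal sketch, assuming the backend runs locally on port 3000 and the API lives under /api (the Angular CLI also accepts the JavaScript form of the config):

// proxy.conf.js -- Angular CLI dev-server proxy config (JavaScript form).
// Start the dev server with: ng serve --proxy-config proxy.conf.js
module.exports = {
  '/api': {
    target: 'http://localhost:3000',  // the local Node backend
    secure: false,
    changeOrigin: true
  }
};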

Nginx reverse proxy displays default page instead of remote home page

I have configured nginx as a reverse proxy and load balancer on one server, and a web application is running on another server. When I access the public URL of nginx, it displays the default RHEL page instead of the homepage of the application on the remote server. Also, when I add a path to the nginx IP, the browser redirects me to the IP of the application server instead of staying on the nginx server. I want the address in the browser to stay that of the nginx server.
Example:
Nginx IP : 52.2.2.2
Remote Ip : 52.2.2.3
Browser
http://52.2.2.2/admin_portal
IP changes in browser
http://52.2.2.3/admin_portal
Below are my configuration:
/etc/nginx/conf.d/load_balancer.conf
upstream backend {
    server 10.128.0.2;
}

# This server accepts all traffic to port 80 and passes it to the upstream.
# Notice that the upstream name and the proxy_pass need to match.
server {
    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://backend;
    }
}
My Nginx configuration file
/etc/nginx/nginx.conf
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
Can someone please help me?
Before you pass the request on to the proxy, you have to rewrite it.
location / {
    rewrite ^/reclaimed/Ip / last;
    proxy_pass http://backend;
}

EPIPE Error with ExpressJS, nginx proxy server

I am running multiple ExpressJS Node apps through an Nginx proxy server, and am getting an EPIPE Error thrown whenever my users try to download a file. This does not happen on my local setup (which is identical to the server's except for the proxy server), so I figure it has something to do with my Nginx configuration.
Here are my Nginx configs:
/etc/nginx/nginx.conf
user www www;
worker_processes 1;

error_log /home/alex/logs/error.log;
pid /var/run/nginx.pid;
worker_rlimit_nofile 8192;

events {
    worker_connections 4096;  ## Default: 1024
}

http {
    include      mime.types;
    index        index.html index.htm index.php;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /home/alex/logs/access.log main;

    sendfile   on;
    tcp_nopush on;
    gzip       on;

    server_names_hash_bucket_size 128; # this seems to be required for some vhosts

    include /etc/nginx/conf.d/*.conf;
}
/etc/nginx/conf.d/default.conf
server {
    listen      80;
    server_name example.com;

    # log access and stuff
    access_log /home/alex/logs/example-site.log main;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    # Proxy to the NodeJS server
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8999;
    }

    # redirect server error pages to their HTML
    include /etc/nginx/errpages.conf;
}
The ExpressJS server is sending the download using the following code:
app.get('/citrite/p/:patch', function(req, res)
{
    if(set.citFiles.indexOf(req.params.patch) == -1)
    {
        res.send(mbuild.get404());
    }
    else
    {
        track.incrViewcount(req.params.patch, 'citrite');
        res.download(set.citDir + '/' + req.params.patch, files.doneSaving);
    }
});
That code and everything else works fine on my local git repo, but when I push from there and pull on the server-side, the site kicks and screams - it times out on the user's end and gives me an EPIPE error in the console. I am running Node.js version 4.2.1 and ExpressJS version 4.13.3.
I figured out what the problem was: apparently, having sendfile set to on is what was causing the downloads to stall, and turning that off (specifically, removing the directive in the config) fixed the issue. Not exactly sure why this would interfere, but getting rid of the setting cleared things up.
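As a side note, whichever layer turns out to be at fault, res.download passes any transfer error to its callback, so it can be logged from the route itself. A minimal sketch reusing the names from the question's handler (the explicit error logging replaces files.doneSaving purely for illustration):

app.get('/citrite/p/:patch', function (req, res) {
  res.download(set.citDir + '/' + req.params.patch, function (err) {
    if (err) {
      // EPIPE here means the client, or the proxy in front of Node,
      // closed the connection before the file finished streaming.
      console.error('Download of ' + req.params.patch + ' failed:', err);
    }
  });
});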
