I want to deploy a Node.js app with PM2 and Express on a Compute Engine instance. It works fine on port 8080, but when I change the port to 8081, it returns "500 Internal Server Error".
I also have a firewall rule for that port.
/etc/nginx/sites-available/default:
server {
    listen 8081;
    server_name **.***.***.***;

    location / {
        proxy_pass "http://127.0.0.1:8081";
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
    }
}

server {
    listen 80;
    server_name **.***.***.***;
    root /var/www/html/;
}
My file /home/myuser/.pm2/logs/index-error.log says: "ADDRESS ALREADY IN USE"
File: /var/log/nginx/error.log:
1260 768 worker_connections are not enough while connecting to upstream
I've tried the following command:
sudo netstat -tulpn
And the only process that uses this port is the firewall rule that I created.
Try the possible solutions below:
1) Set the maximum number of simultaneous connections that can be opened by a worker process. See the worker_connections documentation for more information, and check a full example configuration.
The formula for connections is worker_processes * worker_connections, which should be 12 * 768 = 9216. But your logs say 768…
events {
    worker_connections 10000;
}
Try this in your app.yml:
# Any custom commands to run after building
run:
  - exec: echo "Beginning of custom commands"
  - replace:
      filename: "/etc/nginx/nginx.conf"
      from: "worker_connections 768"
      to: "worker_connections 2000"
  - replace:
      filename: "/etc/nginx/nginx.conf"
      from: "worker_processes auto"
      to: "worker_processes 10"
Be aware that a replace block acts only on the file named in its filename field; make sure yours is not pointing at the wrong file!
Another way to increase the limit is to set worker_rlimit_nofile 10000. You can safely increase it; the chance of running out of file descriptors is minuscule.
The bbb-config package now sets worker_rlimit_nofile 10000; and worker_connections 4000; in /etc/nginx/nginx.conf (#11347).
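For orientation, a minimal sketch of where those two directives live in /etc/nginx/nginx.conf (only the relevant lines are shown; both sit outside any server block):

worker_rlimit_nofile 10000;     # main context: max open file descriptors per worker

events {
    worker_connections 4000;    # events context: max simultaneous connections per worker
}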
Note to CentOS/Fedora users: if you have SELinux enabled, you will need to run setsebool -P httpd_setrlimit 1 so that nginx has permission to set its rlimit.
2) Check whether you need a body parser to populate req.body: github.com/expressjs/body-parser
3) Check whether the problem is a Linux kernel open-files limit; see easyengine.io/tutorials/linux/increase-open-files-limit
Please see the similar SO questions for more information.
Related
I have a node application running on an EC2 instance. Node is running on port 5000. I want to access the API remotely.
This is the nginx configuration file:
server {
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    client_max_body_size 20M;

    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://127.0.0.1:5000;
    }

    location /nginx_status {
        # Turn on stats
        stub_status on;
        access_log off;
    }
}
When I try to curl it using curl localhost/nginx_status
it returns
Active connections: 1
server accepts handled requests
11 11 12
Reading: 0 Writing: 1 Waiting: 0
Also, when I try to access the IP in a browser, it shows:
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
But if I try to access ip_address/nginx_status, it shows a 404 error. For example, if I open 123.456.789.098 in the browser it shows the message above, but 123.456.789.098/nginx_status returns a 404 error. Even curl ip_address/nginx_status returns a 404 error.
My question is: how can I access the node application running on port 5000 from the outside world?
Unfortunately I only see part of your config. Is there another server that listens on port 80?
You don't use "default_server" in your listen directive either, and without a "server_name" it is difficult to distinguish between the servers. So maybe another config with a server on port 80 marked as default_server is taking effect. Check which server {..} blocks exist in your /etc/nginx/ folder.
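To make that check easier, here is a sketch (not from the original answer) of an explicit catch-all: whichever block carries default_server answers every request that matches no server_name, so declaring one yourself removes the guesswork:

server {
    listen 80 default_server;
    server_name _;    # placeholder that never matches a real name
    return 444;       # nginx-specific code: close the connection without a response
}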
The proxy_pass looks correct if the Node.js server is really listening there; check again whether it really speaks http or https, so that proxy_pass forwards with the correct scheme.
But you should then add access control for "stub_status", since this is information you shouldn't entrust to everyone. In my setup only one internal application has access to it, on a separate listener that is not exposed to the internet:
server {
    listen 127.0.0.1:10081 default_server;

    location /flyingfish_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}
I'm curious what you find out! :)
When I want to access Node-RED via browser, I type 192.168.0.24:1880/ui/.
Now I just want to reach the Node-RED site via a local domain, something like website.test.
I have already changed the port from 1880 to 80.
I also edited the /etc/hosts file to add: 192.168.0.24 website.test
But when I test it, I can't access the Node-RED website via this domain.
Does anyone know how to accomplish this?
First, binding Node-RED to port 80 requires running it as root on the Pi. This is a VERY BAD idea unless you 100% understand the consequences, as it opens up a LOT of potential security issues.
A better solution is to change the address Node-RED binds to so that it only listens on localhost (127.0.0.1), then use something like nginx to proxy it onto port 80.
You can change the bind address by uncommenting the uiHost line near the top of the settings.js file in the userDir (which is normally /home/pi/.node-red).
A basic nginx config would look like this:
server {
    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://127.0.0.1:1880;

        # Defines the HTTP protocol version for proxying;
        # by default it is set to 1.0.
        # For WebSockets and keepalive connections you need version 1.1.
        proxy_http_version 1.1;

        # Sets conditions under which the response will not be taken from a cache.
        proxy_cache_bypass $http_upgrade;

        # These header fields are required if your application uses WebSockets.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # The $host variable contains, in this order of precedence:
        # the hostname from the request line, or the hostname from the
        # Host request header field, or the server name matching the request.
        proxy_set_header Host $host;
    }
}
Second, editing the /etc/hosts file on the Pi will only map a hostname for processes running on the Pi, not anywhere else (e.g. on your PC).
If your other machine is another Linux machine or a Mac, then you can probably use mDNS to access the Pi, which by default will be found at raspberrypi.local. Unfortunately Windows doesn't support mDNS, so that won't work (you may be able to add support by installing some printer packages from Apple).
You can edit the hosts file on Windows (C:\Windows\System32\drivers\etc\hosts), but this isn't a great solution, as you will need to do it on every machine that wants to access the Node-RED instance.
Other options include adding an entry on your local router or running your own DNS server, but both of those options are far too complicated to get into here.
I have an Ubuntu 18.04 server running in a Droplet (DigitalOcean), secured with SSL and using an nginx reverse proxy. Jenkins is also running on the server (not in any Docker container) and is configured to be accessed under the domain I created for it: jenkins.testdomain.com (all these steps following the DO docs).
The goal is to manage the deployment of a Node.js/React application to my testdomain.com later; for now, I just want to create the dist folder generated after 'npm build' within /var/lib/jenkins/workspace/, just that.
At the moment I'm able to access my jenkins.testdomain.com site fine, trigger the pipeline after pushing to my repo, and start running the stages; but nginx starts to fail when the pipeline reaches the Deliver phase (read: the 'npm build' phase), and sometimes already in the Build phase ('npm install').
It's at this point, reading the Jenkins console output, that I see it get stuck and eventually show a 502 Bad Gateway error. I then need to run systemctl restart jenkins on the server console to get access again. After restarting, the pipeline resumes the work and seems to get the job done :/
In /var/log/nginx/error.log I can read:
*1 connect() failed (111: Connection refused) while connecting to upstream, client: 85.146.85.194, server: jenkins.testdomain.com, request: "GET /job/Basic%20NodeJS-React%20app/8/console HTTP/1.1", upstream: "https://127.0.0.1:8080/job/Basic%20NodeJS-React%20app/8/console", host: "jenkins.testdomain.com", referrer: "https://jenkins.testdomain.com/job/Basic%20NodeJS-React%20app/8/"

*1 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 85.146.85.194, server: jenkins.testdomain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "https://127.0.0.1:8080/favicon.ico", host: "jenkins.testdomain.com", referrer: "https://jenkins.testdomain.com/job/Basic%20NodeJS-React%20app/8/console" ...
In the Jenkinsfile of my node-js-react app (from the Jenkins repo), the agent looks like this:
pipeline {
    agent {
        docker {
            image 'node:6-alpine'
            args '-p 3000:80'
        }
    }
    environment {
        CI = 'true'
    }
    stages {
        // Build, Test, and Deliver stages
    }
}
And my jenkins.testdomain.com configuration (/etc/nginx/sites-available/jenkins.testdomain.com) looks like this (it passes nginx -t):
server {
    listen 80;
    root /var/www/jenkins.testdomain.com/html;
    server_name jenkins.testdomain.com www.jenkins.testdomain.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://localhost:8080;

        # High timeouts for testing
        proxy_connect_timeout 1200s;
        proxy_send_timeout 1200s;
        proxy_read_timeout 1200s;

        proxy_redirect http://localhost:8080 https://jenkins.testdomain.com;

        # Required for the new HTTP-based CLI
        proxy_http_version 1.1;
        proxy_request_buffering off;
        # Required for the HTTP-based CLI to work over SSL
        proxy_buffering off;
    }

    # Certbot auto-generated lines...
}
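One detail worth noting: the log shows nginx connecting to the upstream as https://127.0.0.1:8080, while Jenkins serves plain HTTP on 8080, which would explain the SSL_do_handshake() error. The Certbot-generated 443 block is elided above, but here is a hypothetical sketch of what its proxy section would normally look like (certificate paths are Certbot's defaults, shown only for illustration):

# Hypothetical sketch; the actual Certbot-managed block is not shown above.
server {
    listen 443 ssl;
    server_name jenkins.testdomain.com;

    ssl_certificate /etc/letsencrypt/live/jenkins.testdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/jenkins.testdomain.com/privkey.pem;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Jenkins itself speaks plain HTTP on 8080, so the scheme here
        # must be http://, not https://.
        proxy_pass http://localhost:8080;
    }
}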
Any help would be very welcome; I've spent 3 days struggling with this and playing around with the different nginx proxy_ directives and so on...
Thanks in advance!
OK, just to add an update: some days after my latest post, I realized that the main and only reason the server was going down was a lack of resources in the droplet.
I was using a droplet with 1 GB of RAM, a 25 GB disk, etc. (the most basic one), so I chose to upgrade it to at least 2 GB of RAM, and indeed that made it work as I expected. Everything has worked fine since, and the issue hasn't happened again.
Hope it helps if someone experiences the same issue.
I have installed nginx for Windows (64-bit) from here, because the official binaries are 32-bit. The aim is to use nginx for load balancing Node.js applications. I am following instructions from here, where the link to a sample basic configuration file also exists.
The following configuration file works successfully on Linux, where nginx was installed via the Ubuntu PPA. The servers themselves are started via pm2.
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream top_servers {
    # Use IP hash for session persistence
    ip_hash;

    # Least connected algorithm
    least_conn;

    # List of Node.js application servers
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
    server 127.0.0.1:3004;
}

server {
    listen 80;
    server_name ip.address;

    location /topserver/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://top_servers;
        proxy_set_header X-Request-Id $request_id;
    }
}
However, this file does not work on Windows. I am getting a 'No such file or directory' error referring to the html folder of the nginx installation on Windows. I didn't need any such setting on Linux.
Can you please help me convert the above configuration file for Windows?
NOTE: I don't have a choice; Windows is a must for this project.
So, I overwrote the contents of conf/nginx.conf with the contents shown above. First, I got an error that the "map" directive is not allowed here. Then, after removing this directive, I got another error that the "upstream" directive is not allowed here. I think the binaries I am using do not support load balancing.
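Those errors don't necessarily mean the Windows binaries lack load balancing: map and upstream are only valid inside the http context, so pasting the snippet above at the top level of conf/nginx.conf (replacing the stock events and http blocks) is exactly what triggers the "not allowed here" messages. A sketch of a complete conf/nginx.conf with the same servers wrapped correctly (assuming the stock Windows layout; note that nginx applies only one load-balancing method per upstream, so only ip_hash is kept here):

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    upstream top_servers {
        # IP hash for session persistence (one balancing method per upstream)
        ip_hash;
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
        server 127.0.0.1:3003;
        server 127.0.0.1:3004;
    }

    server {
        listen 80;
        server_name ip.address;

        location /topserver/ {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_set_header X-Request-Id $request_id;
            proxy_pass http://top_servers;
        }
    }
}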
I installed nginx using Ansible. To install on CentOS 7 I used the yum package, so by default it ran as the root user. I want it to start and run as a different user (e.g. an nginx user) on the CentOS box. When I try to run it as a different user I get the following error:
Job for nginx.service failed because the control process exited with error code. See "systemctl status nginx.service" and "journalctl -xe" for details.
I know it's not advisable to run as root. So how do I get around this and run nginx as a non-root user? Thanks.
Add/Change the following in your /etc/nginx/nginx.conf:
user nginx;
You should create the user and grant permissions on the webroot directories recursively.
This way only the master process runs as root. Because only root processes can listen on ports below 1024, and a web server typically runs on port 80 and/or 443, nginx needs to be started as root.
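As an illustration, a minimal sketch of how those pieces fit together in /etc/nginx/nginx.conf (the values here are common defaults, not taken from the question):

user nginx;              # worker processes drop privileges to this user
worker_processes auto;

events {
    worker_connections 768;
}

http {
    server {
        listen 80;       # below 1024, so the master process must start as root
        root /var/www/html;
    }
}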
Note from the documentation on master and worker processes:
The main purpose of the master process is to read and evaluate
configuration files, as well as maintain the worker processes.
The worker processes do the actual processing of requests.
To run the master process as a non-root user:
Change the ownership of the files whose paths are specified by the following nginx directives:
error_log
access_log
pid
client_body_temp_path
fastcgi_temp_path
proxy_temp_path
scgi_temp_path
uwsgi_temp_path
Change the listen directives to ports above 1024, log in as the desired user, and run nginx with nginx -c /path/to/nginx.conf
Just in case it helps: for testing/debugging purposes, I sometimes run an nginx instance as a non-privileged user on my Debian (stretch) laptop.
I use a minimal config file like this:
worker_processes 1;
error_log stderr;
daemon off;
pid nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    sendfile on;
    keepalive_timeout 65;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    access_log access.log;

    server {
        listen 8080;
        server_name localhost;

        location / {
            include /etc/nginx/uwsgi_params;
            uwsgi_pass localhost:8081;
        }
    }
}
and I start the process with:
/usr/sbin/nginx -c nginx.conf -p $PWD
Just in case it helps someone stumbling over this question in 2020, here is my minimal nginx.conf for running a web server on port 8088; it works for a non-root user. No modding of file permissions necessary! (Tested on CentOS 7.4 with nginx 1.16.1.)
error_log /tmp/error.log;
pid /tmp/nginx.pid;

events {
    # No special events for this simple setup
}

http {
    server {
        listen 8088;
        server_name localhost;

        # Set a number of log, temp and cache file options that will otherwise
        # default to restricted locations accessible only to root.
        access_log /tmp/nginx_host.access.log;
        client_body_temp_path /tmp/client_body;
        fastcgi_temp_path /tmp/fastcgi_temp;
        proxy_temp_path /tmp/proxy_temp;
        scgi_temp_path /tmp/scgi_temp;
        uwsgi_temp_path /tmp/uwsgi_temp;

        # Serve local files
        location / {
            root /home/<your_user>/web;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }
    }
}
Why not use the rootless bitnami/nginx image?
$ docker run --name nginx bitnami/nginx:latest
More info
To verify it is not running as root but as your standard user (belonging to the docker group):
$ docker exec -it nginx id
uid=1**8 gid=0(root) groups=0(root)
And to verify that nginx isn't listening on the root-restricted port 443, even internally:
$ docker ps -a | grep nginx
2453b37a9084 bitnami/nginx:latest "/opt/bitnami/script…" 4 minutes ago Up 30 seconds 8080/tcp, 0.0.0.0:8443->8443/tcp jenkins_nginx
It's easy to configure (see docs) and runs even under random UIDs defined at run time (i.e. not hard-coded in the Dockerfile). In fact, it is Bitnami's policy to make all their containers rootless and prepared for UID changes at runtime, which is why we've been using them for a few years now under the very security-conscious OpenShift 3.x (bitnami/nginx in particular as a reverse proxy, needed to enable authentication for the MLflow web app).