I installed nginx using Ansible. To install it on CentOS 7 I used the yum package, so by default it runs as the root user. I want it to start and run as a different user (e.g. the nginx user) on the CentOS box. When I try to run it as a different user I get the following error:
Job for nginx.service failed because the control process exited with
error code. See "systemctl status nginx.service" and "journalctl -xe"
for details.
I know it's not advisable to run as root. So how do I get around this and run nginx as a non-root user? Thanks.
Add/Change the following in your /etc/nginx/nginx.conf:
user nginx;
You should create the user and grant permissions on the webroot directories recursively.
This way only the master process runs as root, because only root processes can listen on ports below 1024. A web server typically runs on port 80 and/or 443, which means it needs to be started as root.
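As a rough sketch (the webroot path and log directory below are assumptions, adjust to your layout), creating the user and granting it ownership could look like this:
# create a dedicated system user if the nginx package did not already create one
sudo useradd --system --no-create-home --shell /sbin/nologin nginx
# give it ownership of the webroot and the log directory (paths are examples)
sudo chown -R nginx:nginx /usr/share/nginx/html
sudo chown -R nginx:nginx /var/log/nginx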
Note from the documentation on master and worker processes:
The main purpose of the master process is to read and evaluate
configuration files, as well as maintain the worker processes.
The worker processes do the actual processing of requests.
To run the master process as a non-root user:
Change the ownership of the files and directories whose paths are specified by the following nginx directives:
error_log
access_log
pid
client_body_temp_path
fastcgi_temp_path
proxy_temp_path
scgi_temp_path
uwsgi_temp_path
Change the listen directives to ports above 1024, log in as the desired user, and run nginx with nginx -c /path/to/nginx.conf.
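For illustration only (the user name and state directory below are made up, not from the original answer), the ownership change and start-up might look like:
# assuming the alternate config keeps its logs, pid and temp paths under /home/nginxuser/nginx
sudo chown -R nginxuser:nginxuser /home/nginxuser/nginx
# then, logged in as that user, start nginx against the alternate config
nginx -c /home/nginxuser/nginx/nginx.conf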
Just in case it helps: for testing/debugging purposes, I sometimes run an nginx instance as a non-privileged user on my Debian (stretch) laptop.
I use a minimal config file like this:
worker_processes 1;
error_log stderr;
daemon off;
pid nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    access_log access.log;

    server {
        listen 8080;
        server_name localhost;

        location / {
            include /etc/nginx/uwsgi_params;
            uwsgi_pass localhost:8081;
        }
    }
}
and I start the process with:
/usr/sbin/nginx -c nginx.conf -p $PWD
Just in case it helps someone stumbling over this question in 2020, here is my minimal nginx.conf for running a web server on port 8088 as a non-root user. No modification of file permissions is necessary! (Tested on CentOS 7.4 with nginx 1.16.1.)
error_log /tmp/error.log;
pid /tmp/nginx.pid;

events {
    # No special events for this simple setup
}

http {
    server {
        listen 8088;
        server_name localhost;

        # Set a number of log, temp and cache file options that will otherwise
        # default to restricted locations accessible only to root.
        access_log /tmp/nginx_host.access.log;
        client_body_temp_path /tmp/client_body;
        fastcgi_temp_path /tmp/fastcgi_temp;
        proxy_temp_path /tmp/proxy_temp;
        scgi_temp_path /tmp/scgi_temp;
        uwsgi_temp_path /tmp/uwsgi_temp;

        # Serve local files
        location / {
            root /home/<your_user>/web;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }
    }
}
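To start nginx with this config as the non-root user, something like the following should work (the config path is an example; you may see a harmless alert about the built-in default error log before the /tmp one from the config takes over):
nginx -c /home/<your_user>/nginx.conf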
Why not use the rootless bitnami/nginx image?
$ docker run --name nginx bitnami/nginx:latest
More info
To verify it is not running as root but as your standard user (belonging to the docker group):
$ docker exec -it nginx id
uid=1**8 gid=0(root) groups=0(root)
And to verify that nginx isn't listening on the root-restricted port 443 even inside the container:
$ docker ps -a | grep nginx
2453b37a9084 bitnami/nginx:latest "/opt/bitnami/script…" 4 minutes ago Up 30 seconds 8080/tcp, 0.0.0.0:8443->8443/tcp jenkins_nginx
It's easy to configure (see the docs) and runs even under random UIDs defined at run time (i.e. not hard-coded in the Dockerfile). In fact it is Bitnami's policy to make all their containers rootless and prepared for UID changes at runtime, which is why we've been using them for a few years now under the very security-conscious OpenShift 3.x (bitnami/nginx in particular as a reverse proxy that enables authentication for the MLflow web app).
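For example (a sketch; the UID and host port are arbitrary), you can run it under a random non-root UID and publish the container's unprivileged 8080 port:
docker run --rm --user 1001 -p 8080:8080 bitnami/nginx:latest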
I want to deploy a node.js app with pm2 and Express on a Compute Engine instance. It works fine on port 8080, but when I change the port to 8081, it returns a "500 Internal Server Error".
I also have a firewall rule for that port.
/etc/nginx/sites-available/default:
server {
    listen 8081;
    server_name **.***.***.***;

    location / {
        proxy_pass "http://127.0.0.1:8081";
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
    }
}

server {
    listen 80;
    server_name **.***.***.***;
    root /var/www/html/;
}
My file /home/myuser/.pm2/logs/index-error.log says: "ADDRESS ALREADY IN USE"
File: /var/log/nginx/error.log:
1260 768 worker_connections are not enough while connecting to upstream
I've tried the following command:
sudo netstat -tulpn
And the only thing associated with this port is the firewall rule that I created.
Try the possible solutions below:
1) Set the maximum number of simultaneous connections that can be opened by a worker process. Please go through the worker_connections documentation for more information, and also check the full example configuration.
The formula for connections is worker_processes * worker_connections, which with 12 worker processes and 768 connections each would be 9216. But your log reports that 768 worker_connections are not enough…
events {
    worker_connections 10000;
}
Try this on your app.yml:
## Any custom commands to run after building
run:
  - exec: echo "Beginning of custom commands"
  - replace:
      filename: "/etc/nginx/nginx.conf"
      from: "worker_connections 768"
      to: "worker_connections 2000"
  - replace:
      filename: "/etc/nginx/nginx.conf"
      from: "worker_processes auto"
      to: "worker_processes 10"
Be aware that your block on post 2 is acting on the wrong file!
Another way to raise the limit is to set worker_rlimit_nofile 10000; you can safely increase it, since the chance of running out of file descriptors is minuscule.
Package bbb-config now sets worker_rlimit_nofile 10000; and worker_connections 4000; in /etc/nginx/nginx.conf #11347
Note to CentOS/Fedora users: if you have SELinux enabled, you will need to run setsebool -P httpd_setrlimit 1 so that nginx has permission to set its rlimit.
2) Check whether you need a body parser to populate req.body: github.com/expressjs/body-parser
3) Check whether the problem is a Linux kernel open-files limit; see easyengine.io/tutorials/linux/increase-open-files-limit
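A sketch of checking and raising that limit for nginx under systemd (the value 65535 is just an example):
# check the current per-process open-files limit for your shell
ulimit -n
# raise the limit for the nginx service with a systemd drop-in override
sudo mkdir -p /etc/systemd/system/nginx.service.d
printf '[Service]\nLimitNOFILE=65535\n' | sudo tee /etc/systemd/system/nginx.service.d/limits.conf
sudo systemctl daemon-reload && sudo systemctl restart nginx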
Please see this similar SO question for more information.
I have a node application running on an EC2 instance. Node is running on port 5000. I want to access the API remotely.
This is the nginx configuration file:
server {
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    client_max_body_size 20M;

    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://127.0.0.1:5000;
    }

    location /nginx_status {
        # Turn on stats
        stub_status on;
        access_log off;
    }
}
When I run curl localhost/nginx_status, it returns:
Active connections: 1
server accepts handled requests
11 11 12
Reading: 0 Writing: 1 Waiting: 0
Also, when I try to access the IP in a browser, it shows:
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
But if I try to access ip_address/nginx_status it shows a 404 error. For example, if I open 123.456.789.098 in the browser it shows the message above, but 123.456.789.098/nginx_status returns a 404 error. Even curl ip_address/nginx_status returns a 404 error.
My question is: how can I access the node application running on port 5000 from the outside world?
Unfortunately I only see part of your config. Is there another server block that listens on port 80?
You don't use "default_server" on the listen directive either, and without "server_name" it is difficult to tell the server blocks apart. So maybe another config with a server block on port 80 marked as default_server is taking effect. Check which server {..} blocks exist in your /etc/nginx/ folder (see the sketch below).
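One quick way to list every server block nginx actually loads is to dump the resolved configuration (a sketch):
sudo nginx -T | grep -E 'listen|server_name|default_server'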
The proxy_pass looks correct, provided the node.js server is really listening there. Check again whether it really uses the http or https scheme, so that proxy_pass passes the correct protocol.
But you should then add access control for "stub_status", since it is information you don't want to expose to everyone. In my case only one internal application has access to it, on a separate listen address that is not exposed to the internet:
server {
    listen 127.0.0.1:10081 default_server;

    location /flyingfish_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}
I'm curious what you find out! :)
I want to host several React apps on my Linux (CentOS) server with nginx.
Currently I have two server blocks in nginx.conf.
The first server block proxies different requests to different servers.
The second server block serves my React app.
I couldn't get my React app hosted by nginx.
Block 1
server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html;
    ...
    location /app1/ {
        proxy_pass http://localhost:3010;
    }
    ...
}
Block 2
server {
    listen 3010;
    listen [::]:3010;
    server_name localhost;

    root /home/administrator/Projects/app1/build;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}
When I use telnet to check whether the server is listening, only port 80 is open.
No server is listening on port 3010.
How should I change my nginx configuration to make it work?
Update
I check the nginx error log and I got
1862#0: bind() to 0.0.0.0:3010 failed (13: Permission denied)
I've searched and there are a lot of answers about non-root users not being able to listen on ports below 1024.
But the server is trying to bind to 3010.
Do I still have to run nginx as root?
This is probably related to the SELinux security policy. If you run the command below, you can see that httpd is only allowed to bind to a specific list of ports:
[root@localhost]# semanage port -l | grep http_port_t
http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000
You can either use one of the listed ports or add your custom port using the command below:
$ semanage port -a -t http_port_t -p tcp 3010
If the semanage command is not installed, you can use yum provides semanage to identify which package to install.
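On CentOS 7 that package is typically policycoreutils-python (an assumption worth verifying with yum provides); a sketch:
sudo yum install -y policycoreutils-python
sudo semanage port -a -t http_port_t -p tcp 3010
sudo semanage port -l | grep http_port_t   # 3010 should now appear in the list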
I have two identical docker containers running on different ports on a CentOS 7 server. The older version runs on port 81, the newer one on port 8080 (82 and 83 were checked as well).
When I try to proxy the second container by changing the port from 81 to 8080, I receive an nginx error (HTTP/1.1 502 Bad Gateway).
Nginx is not in a container; I just have it installed on the server.
Here is my proxy_pass setting:
location / {
    proxy_pass http://0.0.0.0:8080/;
}
And some additional information:
nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
If I access the containers directly via their ports, everything works fine:
curl 0.0.0.0:81
{"msg":"Phone Masks service"}
curl 0.0.0.0:8080
{"msg":"Phone Masks service"}
nginx version: nginx/1.16.1
Docker version 19.03.4, build 9013bf583a
The full server config is pretty standard; I didn't change anything except the proxy_pass setting:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass http://0.0.0.0:8080/;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
The command I use to start the container:
sudo docker run --rm -it -p 8080:8080 -e PORT="8080" api
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
47ef127e3e49 api "/start.sh" 26 minutes ago Up 26 minutes 80/tcp, 0.0.0.0:8080->8080/tcp infallible_borg
5d5fe891ba30 api "/start.sh" 7 hours ago Up 7 hours 80/tcp, 0.0.0.0:81->81/tcp hopeful_cerf
This is SELinux related:
setsebool -P httpd_can_network_connect true
According to this thread:
The second one [httpd_can_network_connect] allows httpd modules and scripts to make outgoing connections to ports which are associated with the httpd service. To see a list of those ports run semanage port -l | grep -w http_port_t
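You can check the boolean before and after the change with getsebool (a small sketch):
getsebool httpd_can_network_connect        # typically reports "off" before the change
sudo setsebool -P httpd_can_network_connect true
getsebool httpd_can_network_connect        # should now report "on"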
I'm currently running: nginx/1.12.2 with Phusion Passenger 5.1.12.
Whenever I try to use "passenger_log_file", "passenger_default_user" or "passenger_default_group" I get the following error.
nginx: [emerg] "passenger_log_file" directive is not allowed here in /etc/nginx/sites-enabled/test:11
My sites-enabled file looks like this:
server {
    listen 80;
    server_name example.com;

    root /home/test/api;

    passenger_app_root /home/test/api/dist;
    passenger_enabled on;
    passenger_app_type node;
    passenger_startup_file app.js;
    passenger_log_file /var/log/test/access.log;
}
If I remove the passenger_log_file everything works perfectly fine. What am I missing here?
I had the same issue. I think the correct option for your case is passenger_app_log_file, which allows you to define a log file per app; unfortunately it is an Enterprise-only option. On the other hand, if you move your passenger_log_file up a level, it changes the log file for all your Passenger apps from the default one.
passenger_log_file /var/www/vhosts/system/xxx.pasi-consulting.com/logs/proxy_error_log;
server {
    listen 137.74.195.27:443 ssl http2;
    server_name www.pasi-consulting.com;
    server_name www.xxx.pasi-consulting.com;
}
Per the documentation, passenger_log_file is only available at the http context level. In your example it is placed in the server context.
Both passenger_default_user and passenger_default_group are allowed in the server context so that is confusing.
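As a sketch of the fix (assuming a stock /etc/nginx/nginx.conf that includes sites-enabled; the log path is an example), move the directive into the http context:
# /etc/nginx/nginx.conf
http {
    # passenger_log_file is an http-context directive, so it belongs here,
    # not inside the server block in sites-enabled
    passenger_log_file /var/log/test/passenger.log;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}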