Nginx can't access root directory in sites-available - linux

I have a basic nginx setup on my DigitalOcean droplet (Ubuntu). I have only one user, root. In my nginx configuration file the server block looks like this:
server {
listen 80;
server_name site.com;
root /var/www/html/site;
}
I tried to change it to
root /root/site;
but it gives me a 403 Forbidden error. When I change it back to the nginx default directory,
/var/www/html/site;
everything works fine.
Why is it giving me that error? I understand that only the root user has access to /root, but why can't the browser just read files from there? Is it okay to create another folder somewhere on my server (not under /root or /var/www, instead of something like "/var/www/html/23rdsquad") and use that instead?

Your Nginx user doesn't have permission to read the directory /root/site, so:
Check the directory permissions:
user@host:~$ ls -l /root | grep site
drwxrwxrwx 7 user user 4096 Aug 17 16:56 site
Check which user Nginx runs as:
user@host:~$ ps aux | grep nginx | grep -v grep
The Nginx user is configured in /etc/nginx/nginx.conf:
vim /etc/nginx/nginx.conf
Usually you have
"user www-data;"
1) Either change the directory permissions so the Nginx user can read and traverse them (every parent directory on the path, including /root, which is usually mode 700, needs the execute bit), or change the Nginx user.
2) Restart the Nginx service:
user@host:~$ sudo systemctl restart nginx
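The 403 usually comes from directory traversal, not from the site directory itself: the nginx worker needs the execute (search) bit on every directory along the path, and /root is normally mode 700. This can be demonstrated with scratch directories, no nginx involved:

```shell
# Demonstrate that reading a file requires the execute (search) bit on
# every parent directory: the same reason nginx returns 403 under /root.
tmp=$(mktemp -d)
mkdir -p "$tmp/parent/site"
echo hello > "$tmp/parent/site/index.html"
chmod 777 "$tmp/parent/site"   # the site dir itself is wide open

chmod 666 "$tmp/parent"        # rw- but no x: traversal denied (for non-root users)
blocked=$(cat "$tmp/parent/site/index.html" 2>/dev/null || echo DENIED)

chmod 755 "$tmp/parent"        # restore the x bit: traversal works again
allowed=$(cat "$tmp/parent/site/index.html")

echo "no x bit on parent: $blocked"
echo "x bit restored:     $allowed"   # prints: x bit restored:     hello
rm -rf "$tmp"
```

So even a 777 site directory stays unreachable while any ancestor lacks the execute bit, which is why serving from under /root fails for the www-data worker.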

Related

How to avoid permissions problems with node and nginx directory structure?

On my production server I'm successfully using nginx to host a static site and as a reverse proxy for a node app. Currently, the node app is in /home/myUserName/apps and the site is in /var/www/siteDomain.com/html.
On my local/development machine, the html directory is inside my apps directory (../apps/html). I want to have the same directory structure in production, so that I can clone my git repository and then just run npm install in case the package.json has changed (node_modules is in .gitignore).
I get permission problems when using git and npm in /var/www/siteDomain.com because the owner is root and siteDomain.com is drwxr-xr-x. I can clone my repo using sudo git, but then all the subdirectories (including html) are owned by root, which causes problems (I would have to use sudo npm, which I read can cause more problems, and I cannot manipulate files over FTP...).
The other way I could do it is clone the repo to /home/myUserName/apps, where everything is owned by my non-root user, and then change the nginx config file to point to /home/myUserName/apps/html as the root for the static site.
What is the best way to structure my directories so that I don't have permission problems when using git and npm? Is pointing the html root to something outside of /var/www unusual, or will it cause problems in the future?
P.S. My local machine is Windows; I'm not very experienced with Linux (which is running on the production server).
You can create a projects directory in /home/username/projectname and run nginx without root permissions, as described below.
Add/Change the following in your /etc/nginx/nginx.conf:
user nginx;
You should create the user and grant it permissions on the webroot directories recursively.
This way only the master process runs as root. The reason it needs root at all is that only root processes can listen on ports below 1024, and a web server typically runs on port 80 and/or 443, so it has to be started as root.
To run the master process as a non-root user too:
Change the ownership of the paths used by the following directives:
error_log
access_log
pid
client_body_temp_path
fastcgi_temp_path
proxy_temp_path
scgi_temp_path
uwsgi_temp_path
Change the listen directives to ports above 1024, log in as the desired user, and run nginx with nginx -c /path/to/nginx.conf.
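As a sketch only (the paths, port, and project name are placeholders, not from the post), a fully user-owned configuration covering the directive list above might look like:

```
# Hypothetical /home/username/nginx/nginx.conf for running nginx
# entirely as an unprivileged user; no "user" directive (it is
# ignored when the master process is not root).
error_log  /home/username/nginx/error.log;
pid        /home/username/nginx/nginx.pid;

events { worker_connections 1024; }

http {
    access_log            /home/username/nginx/access.log;
    client_body_temp_path /home/username/nginx/tmp/client_body;
    proxy_temp_path       /home/username/nginx/tmp/proxy;
    fastcgi_temp_path     /home/username/nginx/tmp/fastcgi;
    scgi_temp_path        /home/username/nginx/tmp/scgi;
    uwsgi_temp_path       /home/username/nginx/tmp/uwsgi;

    server {
        listen 8080;   # above 1024, so no root is required
        root   /home/username/projectname/html;
    }
}
```

Every path nginx writes to lives under the home directory, so nginx -c /home/username/nginx/nginx.conf can run without sudo.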
Your node directory must also be placed in /home/username/projectname.
Add the node user, nginx user, and git user to a common group and check the project's permissions.
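One common way to implement the shared-group idea is a setgid project directory, so files created by git, npm, or the node process keep the shared group. The usual root-level commands are along the lines of groupadd deploy, usermod -aG deploy <user>, chgrp -R deploy <dir>, chmod g+s <dir> (the group name "deploy" is illustrative, not from the post). The effect of the setgid bit itself can be demonstrated without root:

```shell
# The setgid (2) bit on a directory makes new entries inherit the
# directory's group, which keeps a shared project tree group-consistent.
tmp=$(mktemp -d)
chmod 2775 "$tmp"                 # rwxrwsr-x: the 's' is the setgid bit
touch "$tmp/newfile"              # inherits the directory's group
mode=$(stat -c '%a' "$tmp")
echo "$mode"                      # prints: 2775
stat -c '%A %G %n' "$tmp/newfile"
rm -rf "$tmp"
```

With group write plus setgid on the project root, both your user and the nginx user can read (and, where allowed, write) the tree without any sudo git or sudo npm.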

default directory for root in vsftpd on nginx server

I want to change the default directory the root user lands in when logging in via FTP.
At present I get the /root directory, but I want to set it to
/usr/share/nginx/html
I am running an nginx server on Ubuntu 14.04.
Change the HOME environment variable in the root user's .profile to the updated directory path.
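Note that for local users vsftpd normally takes the login directory from /etc/passwd rather than from $HOME in .profile, so if editing .profile has no effect, the local_root option is worth trying. A sketch (whether root FTP logins are allowed at all depends on your ftpusers/userlist settings):

```
# /etc/vsftpd.conf -- start local users in the web root instead of
# their passwd home directory (restart vsftpd after editing)
local_root=/usr/share/nginx/html
```

This applies to all local users; for a per-user override, a user_config_dir setup would be needed instead.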

Giving folder permission as apache owner

I have set up an AWS Linux instance and deployed a web project. The project's folders need to be owned by the Apache user; I have root access over SSH.
How can I make Apache the owner of the web project?
On Debian/Ubuntu, Apache runs as the www-data user and group.
Example: if the server web root is /var/www:
sudo chown -R www-data:www-data /var/www
Hope it helps ;-)
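Beyond ownership, a common companion step is normalizing permissions. The usual convention (an assumption, adjust to your setup) is 755 for directories and 644 for files. Shown here on a scratch tree so it is safe to run; point the same two find commands at the real web root (with sudo) afterwards:

```shell
# Apply the conventional split on a scratch tree first: 755 for
# directories (traversable by all), 644 for files (world-readable).
webroot=$(mktemp -d)
mkdir -p "$webroot/assets"
echo 'body{}' > "$webroot/assets/app.css"
echo '<html></html>' > "$webroot/index.html"
find "$webroot" -type d -exec chmod 755 {} +
find "$webroot" -type f -exec chmod 644 {} +
dirmode=$(stat -c '%a' "$webroot/assets")
filemode=$(stat -c '%a' "$webroot/index.html")
echo "dirs: $dirmode, files: $filemode"   # prints: dirs: 755, files: 644
rm -rf "$webroot"
```

This keeps files readable by the web server while preventing them from being executable or world-writable.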

Nginx still tries to open the default error log file when reloading, even though I set it in the nginx config file

Below is my nginx configuration file, located in /etc/nginx/nginx.conf:
user Foo;
worker_processes 1;
error_log /home/Foo/log/nginx/error.log;
pid /home/Foo/run/nginx.pid;
events {
worker_connections 1024;
use epoll;
}
http {
access_log /home/Foo/log/nginx/access.log;
server {
listen 80;
location = / {
proxy_pass http://192.168.0.16:9999;
}
}
}
As you can see, I changed the log and pid file locations to my home directory.
When I restart Linux it seems to work: Nginx writes error logs to the file I set, and the pid file too.
However, when I run nginx -s reload or similar, it tries to open a different error log file:
nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied)
2015/12/14 11:23:54 [warn] 3356#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
2015/12/14 11:23:54 [emerg] 3356#0: open() "/home/Foo/run/nginx.pid" failed (13: Permission denied)
nginx: configuration file /etc/nginx/nginx.conf test failed
I know I can get past the permission error with sudo, but the main issue here is the error log file (/var/log/nginx/error.log) that Nginx tries to open.
Why does it try to access a different error log file?
This is old... but I went through the same pain and here is my solution.
As you can see the log is an alert, not a blocking error:
nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied)
It shouldn't be a problem :) Nginx just likes to check that file on startup...
Just use the -p option. Something like this to launch Nginx locally works for me:
nginx -c /etc/nginx/nginx.conf -g 'daemon off;' -p /home/Foo/log/nginx
You might need to run it with sudo:
sudo nginx -t
The alert comes from the nginx initialization procedure, when it checks that it can write to the error log path that has been compiled in with the --error-log-path configure flag. This happens before nginx even looks at your configuration file, so it doesn't matter what you write in it.
Recently (2020-11-19), an -e option was added to nginx, allowing you to override the error log path that has been compiled in. You can use that option to point nginx to a user-writeable file (or maybe stderr).
See https://trac.nginx.org/nginx/changeset/f18db38a9826a9239feea43c95515bac4e343c59/nginx
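To see what was compiled in, you can inspect the build options; the startup alert refers to the --error-log-path value (or the built-in default if none was given). A quick check, guarded in case nginx is not on the PATH:

```shell
# Print nginx's compile-time error log path, if nginx is installed;
# this is the file the startup alert refers to, regardless of nginx.conf.
if command -v nginx >/dev/null 2>&1; then
  compiled=$(nginx -V 2>&1 | tr ' ' '\n' | grep -- '--error-log-path' \
             || echo "built-in default (no --error-log-path flag)")
else
  compiled="nginx not installed here"
fi
echo "$compiled"
```

Typical output on Debian/Ubuntu packages is --error-log-path=/var/log/nginx/error.log, which matches the path in the alert above.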
Yes, Nginx just likes to check that file on startup. I copied the nginx installation directory to another place and started it, but the pid file of the new Nginx was still in the old place, so I suggest you delete the old directory.
You will get this alert because your user doesn't have permission to modify the log file. I assigned my user permission on the Nginx log file and it worked as expected.
Just use this command:
sudo chmod 766 /var/log/nginx/error.log
The simple answer is to use sudo.
When I used sudo nginx -t, everything turned out fine.
BTW, this error cropped up for me when I was increasing the file upload limits in php.ini on Ubuntu 18.04; I had restarted PHP and Nginx, and that's when I tested:
2020/10/19 20:27:43 [warn] 1317#1317: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
2020/10/19 20:27:43 [emerg] 1317#1317: BIO_new_file("/etc/letsencrypt/live/websitename.com/fullchain.pem") failed (SSL: error:0200100D:system library:fopen:Permission denied:fopen('/etc/letsencrypt/live/websitename.com/fullchain.pem','r') error:2006D002:BIO routines:BIO_new_file:system lib)
nginx: configuration file /etc/nginx/nginx.conf test failed
Check the permissions on the directory /home/Foo/log/nginx/. It must be writable by nginx. Set permissions like so:
sudo chmod 766 /home/Foo/log/nginx
Alternatively reload nginx with sudo
sudo nginx -s reload

Permissions - Apache and Pure-FTPd - How to set?

I am unsure how to set up Apache and Pure-FTPd together. I don't know how to set folder permissions and how to keep users from accessing folders outside their home directory.
My scenario:
Apache running defaults (group apache, user apache)
Pure-FTPd using Pure-DB (internal database, not Linux users) - installed using group "ftpusers" and user "ftpuser"
all sites in /sites
I did:
chown apache:apache /sites -R
To create a user in Pure-FTPd:
pure-pw useradd myuser -u ftpuser -g ftpusers -d /sites/onesite
pure-pw mkdb
This way I can connect to the FTP account but cannot transfer (permission denied) or delete files.
I could set all of /sites to 777, but I know this is not correct. I want to know the correct way, so that users can upload/delete files, Apache can read/write files in each website, and if a user uploads something that tries to read outside the /sites directory, it gets an error.
Please, help me to secure my webserver using Apache and Pure-DB, plus Linux permissions.
Thank you!
Roger
Not sure if this is correct: I created the FTP user using apache:apache
pure-pw useradd myuser -u apache -g apache -d /sites/onesite
pure-pw mkdb
and set:
chmod 770 /sites -R
So everything runs on apache:apache.
Same issue here. I solved it by lowering MinUID in /etc/pure-ftpd/conf/MinUID to the UID of www-data. Though I'd like to know if there is a better solution.
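On Debian-style installs each pure-ftpd option lives in its own one-line file under /etc/pure-ftpd/conf, so the fix above amounts to writing www-data's UID into the MinUID file and restarting. A sketch (the write and restart lines are commented out because they need root; the UID 33 fallback is the typical Debian/Ubuntu value, an assumption):

```shell
# Find the UID pure-ftpd must accept; fall back to 33 (the usual
# www-data UID on Debian/Ubuntu) if the user does not exist here.
uid=$(id -u www-data 2>/dev/null || echo 33)
echo "MinUID must be <= $uid"
# echo "$uid" | sudo tee /etc/pure-ftpd/conf/MinUID
# sudo service pure-ftpd restart
```

Lowering MinUID lets pure-pw map virtual FTP users onto the web server's system user, so uploaded files are immediately readable by Apache.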
