On my production server I'm successfully using nginx to host a static site and as a reverse proxy for a node app. Currently, the node app is in /home/myUserName/apps and the site is in /var/www/siteDomain.com/html.
On my local/development machine, the html directory is inside my apps directory (../apps/html). I want to have the same directory structure in production, so that I can clone my git repository and then just run npm install in case the package.json has changed (node_modules is in .gitignore).
I get permissions problems when using git and npm in /var/www/siteDomain.com because the owner is root and siteDomain.com is drwxr-xr-x. I can clone my repo using sudo git, but then all the subdirectories (including html) are owned by root which causes problems (would have to use sudo npm, which I read can make more problems, cannot manipulate files in ftp...).
The other way I could do it is clone the repo to /home/myUserName/apps, where everything is owned by my non-root user, and then change the nginx config file to point to /home/myUserName/apps/html as the root for the static site.
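The nginx change I have in mind would look something like this (port and server_name just as examples):

server {
    listen 80;
    server_name siteDomain.com;
    root /home/myUserName/apps/html;
    index index.html;
}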
What is the best way to structure my directories so that I don't have permissions problems when using git and npm? Is pointing the html root to something outside of /var/www unusual, or will it cause problems in the future?
P.S. My local machine is Windows; I'm not very experienced with Linux (which is what the production server runs).
You can create a projects directory at /home/username/projectname and run nginx without root permissions, as described below.
Add/Change the following in your /etc/nginx/nginx.conf:
user nginx;
You should create the user (if it doesn't already exist) and grant it permissions on the webroot directories recursively.
This way only the master process runs as root. The reason: only root processes can listen on ports below 1024, and a web server typically runs on port 80 and/or 443, so it needs to be started as root.
To run the master process as a non-root user instead:
Change the ownership of the paths set by the following directives:
error_log
access_log
pid
client_body_temp_path
fastcgi_temp_path
proxy_temp_path
scgi_temp_path
uwsgi_temp_path
Change the listen directives to ports above 1024, log in as the desired user, and run nginx with nginx -c /path/to/nginx.conf.
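A minimal sketch of such a config, assuming the logs and temp paths live under the user's home directory (all paths and the port are examples):

error_log /home/username/nginx/error.log;
pid /home/username/nginx/nginx.pid;

events { worker_connections 1024; }

http {
    access_log /home/username/nginx/access.log;
    client_body_temp_path /home/username/nginx/tmp/client_body;
    proxy_temp_path /home/username/nginx/tmp/proxy;
    fastcgi_temp_path /home/username/nginx/tmp/fastcgi;
    scgi_temp_path /home/username/nginx/tmp/scgi;
    uwsgi_temp_path /home/username/nginx/tmp/uwsgi;

    server {
        listen 8080;
        root /home/username/projectname/html;
    }
}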
And your node directory must be placed in /home/username/projectname.
Add the node user, the nginx user and the git user to a common group, and check the project's permissions.
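For example (the group name webapps is a placeholder):

sudo groupadd webapps
sudo usermod -aG webapps nginx
sudo usermod -aG webapps node
sudo usermod -aG webapps git
sudo chgrp -R webapps /home/username/projectname
sudo chmod -R g+rwX /home/username/projectname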
I am running a default t2.nano EC2 Linux AMI; nothing has been changed on it. I am trying to rsync my local changes to the server, but there is a permissions issue that I don't know enough about to fix.
My structure is as follows. I'm trying to push my work to the technology directory. The technology directory is mapped to a staging domain. i.e. technology.staging.com
/var/www/html/technology
This is from the root, and it does work fine; it's the rsync that is failing.
When I push locally to that directory I get a "failed: Permission denied (13)" error.
I'm running an nginx server and assigned permissions to the www directory as follows:
sudo chown -R nginx:nginx /var/www
My user is ec2-user, which is the normal default. Here is where I am tripped up: the var directory is owned by root.
The www directory then has its ownership set to nginx so our server can access the files. I believe I need to give the ec2-user access to this directory alongside the nginx user, so that I can rsync my files there and the server will still have access; I'm just unsure of how to do that.
As a test, I created a test directory at this location and it worked successfully.
/home/ec2-user/test
The permissions there are set for the ec2-user, which I'm sure is why it works.
Here's the command I'm running on my local machine to rsync my files which fails.
rsync -azP -e "ssh -i /Users/username/devwork/company/comp.pem" company_technology/ ec2-user@1.2.3.4:/var/www/html/technology
Here's the command that was working.
rsync -azP -e "ssh -i /Users/username/devwork/company/comp.pem" company_technology/ ec2-user@1.2.3.4:/home/ec2-user/test
I have done enough research and testing to know that it's a permissions error; I just can't figure out the right way to solve it. Do I need to create a group, assign both nginx and ec2-user to it, and then give that group the same permission level on the /var directory?
Side note: what permission level do I set (with chmod) to match the permissions that are currently set?
I have server config files in the /etc/nginx/conf.d/ directory that map to the directories I create inside the /var/www/html directory, so I can host multiple sites on the server.
So in this example, I have a config file at /etc/nginx/conf.d/technology.conf which maps to the directory at /var/www/html/technology.
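That config file looks something like this (a minimal sketch; the listen port and index are examples):

server {
    listen 80;
    server_name technology.staging.com;
    root /var/www/html/technology;
    index index.html;
}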
Thank you in advance, again, I do feel like I have put forth the research and effort to show that I've gone as far as I know how to do.
The answer made sense after I spent roughly a day playing around. You have to give access to both the ec2-user and the nginx group. I believe you never want to put a user in a group that belongs to the server itself; I think things would go south.
After changing the owner to ec2-user and the group to nginx, it still didn't work exactly the way I wanted. The reason was that the nginx group's permissions needed to be raised to the level nginx had when it was the owner.
Basically, the ec2-user had write permissions and the server did not. We wanted the user to have write permissions so I could rsync my local files to the directory on the server, and the nginx group needed the same level of permissions to display the pages. Now that I think about it, the nginx group may have only needed read permissions to display things, but this at least solved the problem for now.
Here is the command I ran on the server to update the ownership and the permissions, as well as the output.
modify ownership
sudo chown -R ec2-user:nginx /var/www/html/technology
modify permissions
sudo chmod -R u=rwx,g+rwx,o-w technology
The end result: the permissions match, and the ownership is as we expected. The only thing I have to figure out is that after I rsync new files to the server, I need to run the commands above again to update the permissions. I'm sure that will come to me later, but I hope this helps anyone in the same situation.
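One likely way to avoid re-running those commands (a sketch, untested here) is to set the setgid bit on the directories so new files inherit the nginx group, and have rsync apply the modes itself on upload:

sudo find /var/www/html/technology -type d -exec chmod g+s {} +
rsync -azP --chmod=Du=rwx,Dg=rwx,Do=rx,Fu=rw,Fg=rw,Fo=r -e "ssh -i /Users/username/devwork/company/comp.pem" company_technology/ ec2-user@1.2.3.4:/var/www/html/technology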
I'm trying to publish my website for the first time (complete newbie with servers). I'm using apache2 and the app is built with node/react/express.
The index.js file is inside myapp/packages/hotel/src.
What I did:
1. changed the root folder in /000-default.conf to "var/www/html/myapp/packages/hotel/src"
2. deleted the existing html folder with sudo rm -r html
3. ran the git clone command: sudo git clone www.xyz123.. html
When I open the website, there is "Index of /" and the directories. The index doesn't even point to the src folder; it's still inside the main directory.
What did I miss? It should load index.js.
Re. 1: Use an absolute path:
DocumentRoot /var/www/html/myapp/packages/hotel/src
Re. 3: Use git archive instead of clone as you don't want the .git directory to be served. If your intention was for index.js to be an app that runs on the server, then you want to use node.js instead of apache2 to serve it.
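A minimal sketch of the git archive approach (the repository URL is the truncated one from the question; paths are examples):

sudo mkdir -p /var/www/html
git clone www.xyz123.. /tmp/repo
git -C /tmp/repo archive HEAD | sudo tar -x -C /var/www/html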
I am setting up a new React app on an EC2 instance (Ubuntu). I have installed Node.js and npm, and I am able to build my app successfully.
The issue is that my code is in the /var/www/html folder and my site example.com points to this folder.
when I run
npm run build
It builds a folder under /html, like /html/build, so now my app runs on example.com/build. The resources for these files are requested from example.com/static/style.css etc., but they actually reside under example.com/build/static.
I can edit asset-manifest.json and change the paths, but that's not an appropriate solution, as I need to get rid of the /build folder for production.
I am not super familiar with deployments to EC2, but it looks like you need to either copy the entire contents of your build inside /var/www/html, or tell apache or nginx to look in the right folder (in this case /build).
For example, with apache you probably have a file inside /etc/apache2/sites-enabled/ that points to /var/www/html; you could change that to /var/www/html/build and restart apache.
You can check this for examples on how to write these configurations https://gist.github.com/rambabusaravanan/578df6d2486a32c3e7dc50a4201adca4
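A minimal sketch of that change (ServerName is an example):

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/html/build
    <Directory /var/www/html/build>
        Require all granted
    </Directory>
</VirtualHost>

Then restart apache, e.g. with sudo systemctl restart apache2.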
I tried the exact same thing with an Ubuntu LEMP and had no issue using SFTP as root.
I deployed a one-click CentOS 6 LEMP with Vultr. I can SSH into it fine, but the root credentials don't work for SFTP; it just times out.
I've tried creating a new user with root access, added it to wheel, and even tried adding it to visudo. With this user I can SFTP OK, but when I navigate to...
/usr/share/nginx/html/
... to create folders and upload static website pages, I get a permission error.
All I really want to do here is host a simple static website.
Why can't I sftp as root?
What am I missing here?
SFTP as root typically requires you to be able to SSH as root. It is common (and good security practice) for the default sshd configs to not allow root login.
Check your sshd configs.
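For example, on the server (the config path is the common default):

sudo grep -i permitrootlogin /etc/ssh/sshd_config

If it says no (or prohibit-password) and you accept the security risk, set PermitRootLogin yes and restart sshd, e.g. sudo service sshd restart on CentOS 6.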
I need any files that are created in a specific Linux directory to have 777 permissions.
I would like all users to be able to read, write, and execute all files under this folder. What is the best way, or the Linux command, to make that happen?
What I am doing is spinning up two separate containers, one for the Nginx server and one for the PHP-FPM app server, to host a Laravel 5.4 app.
Please consider the following scenario. I have a Docker application container A (PHP-FPM) which serves the web application files to Docker container B (Nginx). When I access the website, the web pages are delivered through the web container. Both containers are on the same network, and I share the volumes from my app container with my web container. But when the web container tries to read the files on the app container, I get an error like the one below:
The stream or file "/var/www/storage/logs/laravel.log" could not be
opened: failed to open stream: Permission denied
So I added RUN chmod -R 777 storage to my Dockerfile.
However, it does not solve the issue.
So I also tried using SGID to fix it, adding one more line to my Dockerfile: RUN chmod -R ug+rwxs storage. Still, it does not solve the permission issue.
On a separate note, the funny thing is that in my Docker container on my Mac this works without any issue (I mean without adding chmod -R 777 to the folder or using SGID in my Dockerfile to set permissions on a folder). But when the same code is run on a Linux EC2 instance (Amazon Linux AMI EC2), the permission issues start to occur.
So how do I fix this?
The solution is to launch both containers with the same user, identified by the same uid. For instance, you can choose root or any uid when running the container:
docker run --user root ...
Alternatively, you can switch to another user, before startup, inside your Dockerfile by adding the following before the CMD or ENTRYPOINT
USER root
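A minimal sketch of where that line goes (the base image and paths are examples):

FROM php:7.1-fpm
COPY . /var/www
USER root
CMD ["php-fpm"]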
I solved it by figuring out the user name under which the cache files are created when someone accesses the application URL, and then updating my Dockerfile to set SGID ownership for that user on the root of the app folder where all the source code resides (so that all subfolders and files added later, in whatever way, even at run-time, are accessible from the web container for that user), and then using chmod 777 only on the specific folders that need it.
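A minimal sketch of that approach, assuming the cache files turned out to be created by www-data (substitute the user you actually observe); the setgid bit on the directories makes new files inherit the group:

RUN chown -R www-data:www-data /var/www \
 && find /var/www -type d -exec chmod g+s {} + \
 && chmod -R 777 /var/www/storage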