How to set folder permissions for a particular container on Elastic Beanstalk - linux

I'm having trouble setting permissions for a web folder on Elastic Beanstalk. I run multiple containers from custom Docker images on one instance: apache-php, mysql, memcached, etc. For the "apache-php" container I map a folder with my Yii2 application to /var/www/html/.
When I manually create a bundle and upload/deploy it via the Elastic Beanstalk console, the folder has the right permissions and everything works fine.
However, when I deploy the app using "eb deploy", all permissions are dropped and I get a server error with "The directory is not writable by the Web process: /var/www/html/backend/web/assets" in the logs.
I can connect via ssh and set the necessary permissions manually, but that is obviously not convenient, since it has to be done every time I redeploy the app.
So my question is: what is the best way to automatically set permissions for a particular folder in a particular container on Elastic Beanstalk?
Perhaps I can use .ebextensions, but I couldn't find how to run "container_commands" for a particular container.

AWS EB Deployment starts your app in /var/app/ondeck
When you deploy to Elastic Beanstalk, your app is first unzipped into /var/app/ondeck/.
Most likely, the local folders being deployed do not have the permissions you want on them.
If you need to make adjustments to your app, or to the shell, during deployment, .ebextensions/*.config is the right place to do it.
Container commands should be run against that path.
But keep in mind that these commands will run EVERY time you deploy, whether needed or not, unless you use some method to test for a previous configuration.
container_commands:
  08user_config:
    test: test ! -f /opt/elasticbeanstalk/.preconfig-complete
    command: |
      echo "jail-me" > /home/ec2-user/.userfile
  09writable_dirs:
    command: |
      chmod -R 770 /var/app/ondeck/backend/web/assets
      chmod -R 770 /var/app/ondeck/[path]
  99complete:
    command: |
      touch /opt/elasticbeanstalk/.preconfig-complete
files:
  "/etc/profile.d/myalias.sh":
    mode: "000644"
    owner: root
    group: root
    content: |
      alias webroot='cd /var/www/html/backend/web; ls -al --color;'
      echo " ========== "
      echo " The whole point of Elastic Beanstalk is that you shouldn't need to SSH into the server. "
      echo " ========== "

Yes, you should use .ebextensions.
Create a folder in your app source root called .ebextensions. In it, create a file with a .config extension, say 01-folder-permissions.config. Files are processed in lexicographical order of their names.
The contents of the file can be:
container_commands:
  change_permissions:
    command: chmod 777 /var/www/some-folder
Replace these with the appropriate folder and permissions. Read about container commands in the Elastic Beanstalk documentation.
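Applied to the directory from the question, a sketch of such a file might look like the following. The command name 01_writable_assets is made up, and the /var/app/ondeck staging path comes from the previous answer; adjust both to match your layout.

container_commands:
  01_writable_assets:
    # make the Yii2 assets directory writable before the app is moved into place
    command: chmod -R 770 /var/app/ondeck/backend/web/assets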

Related

How to provide 777 default permission on all files within a given Linux folder

I need any files created in a specific Linux directory to have 777 permissions.
I would like all users to be able to read, write and execute all files under this folder. So what is the best way, or Linux command, to make that happen?
What I am doing is spinning up two separate containers, one for the Nginx server and one for the PHP-FPM app server, to host a Laravel 5.4 app.
Please consider the following scenario. I have a Docker application container A (PHP-FPM) which serves the web application files to Docker container B (Nginx). When I access the website, the web pages are delivered through the web container. Both containers are on the same network and I share volumes from my app container to my web container. But when the web container tries to read the files on the app container, I get an error like the one below:
The stream or file "/var/www/storage/logs/laravel.log" could not be
opened: failed to open stream: Permission denied
So I added RUN chmod -R 777 storage to my Dockerfile.
However, that did not solve the issue.
So I also tried using SGID by adding one more line to my Dockerfile, RUN chmod -R ug+rwxs storage. Still no luck with the permissions.
On a separate note, the funny thing is that on my Mac the Docker containers work without any issue (without adding chmod -R 777 to the folder or using SGID in my Dockerfile). But when the same code runs on an Amazon Linux AMI EC2 instance, the permission issue starts to occur.
So how do I fix this?
The solution is to launch both containers with the same user, identified by the same uid. For instance, you can choose root or any other uid when running the container:
docker run --user root ...
Alternatively, you can switch to another user before startup, inside your Dockerfile, by adding the following before the CMD or ENTRYPOINT:
USER root
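If you start the containers by hand, the same idea could look like the sketch below; the image names laravel-app and laravel-web and the volume name appdata are placeholders, not taken from the question.

# create a shared volume and run both containers as the same user (root here, as in the answer)
docker volume create appdata
docker run -d --name app --user root -v appdata:/var/www laravel-app
docker run -d --name web --user root -v appdata:/var/www -p 80:80 laravel-web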
I solved it by figuring out which user the cache files are created under when someone accesses the application URL, then updating my Dockerfile to set SGID group ownership for that user on the root of the app folder where all the source code resides (so all subfolders and files added later, in whatever way, even at run-time, are accessible from the web container for that user), and then using chmod 777 only on the specific folders that need it.
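As a rough illustration of that approach, a Dockerfile fragment might look like this. The www-data user and the /var/www paths are assumptions; the answer only says to use whichever user actually creates the cache files.

# Dockerfile fragment (sketch)
WORKDIR /var/www
# give the runtime user group ownership of the source tree and set the SGID bit,
# so files created later at run-time inherit the group
RUN chown -R www-data:www-data /var/www && chmod -R g+rwxs /var/www
# only the directories Laravel actually writes to get wide-open permissions
RUN chmod -R 777 /var/www/storage /var/www/bootstrap/cache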

dotnet build access to path is denied

I've created a Jenkins server and I am trying to build a .NET Core 2.0.0 project on it. I've been able to successfully pull from source control and store the source files in the workspace. However, I'm running into an issue with the dotnet build command. This is what I'm getting:
/usr/share/dotnet/sdk/2.0.0/Microsoft.Common.CurrentVersion.targets(4116,5):
error MSB3021: Unable to copy file
"obj/Debug/netcoreapp2.0/ubuntu.16.04-x64/Musify.pdb" to
"bin/Debug/netcoreapp2.0/ubuntu.16.04-x64/Musify.pdb". Access to the
path is denied. [/var/lib/jenkins/workspace/Musify/Musify.csproj]
Now, I've given read, write and execute permissions to every file and directory in /usr/share/dotnet/sdk/2.0.0/, and to every file and directory in my workspace (/var/lib/jenkins/workspace/Musify). I also believe my jenkins user is part of the sudo group.
The weird thing is that, as root, I am able to run dotnet build in my workspace directory (/var/lib/jenkins/workspace/Musify) and the project builds. I cannot, however, get the same results as the jenkins user (who should be part of the sudo group). My question is: how can I verify that Jenkins is using the jenkins system user, and that this user has the correct permissions to run this command? I am hosting Jenkins on an Ubuntu 16.04 x64 server.
UPDATE:
At the command line on your Jenkins host, run
ps -ef | grep jenkins
The first column will give you the user ID, and it should be, as you say, jenkins.
Then, if you can log in as jenkins on the host where the Jenkins server is running, run the following:
groups
This will list the groups that jenkins is a part of.
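Put together, a quick check could look like this (plain shell commands, nothing Jenkins-specific):

ps -ef | grep [j]enkins   # first column is the user the Jenkins daemon runs as
id jenkins                # uid, gid and every group the jenkins account belongs to
sudo -u jenkins groups    # the same group list, evaluated as the jenkins user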
If you want to fix the dotnet build issue, take the following actions (a combined sketch follows this list):
Set the DOTNET_CLI_HOME environment variable to a common path such as /tmp in the container. This path is used by dotnet to create the files it needs to build the project. See "Dotnet build permission denied in Docker container running Jenkins".
Use -o or another accessible path to create the artifacts in the desired directory, e.g. dotnet build -o /tmp/dotnet/build/ microsoftisnotthatbad.sln
Regarding the jenkins user problem, run whoami in the container. If you get "whoami: cannot find name for user ID blahblah", it means the user is not found in the passwd file. There are two answers under "Docker Plugin for Jenkins Pipeline - No user exists for uid 1005"; if the first did not work, try the second:
Mount the host's passwd file into the container.
If the jenkins user is logged in through an identity provider like LDAP on the Jenkins server or on the slave server your job uses, the host's passwd file will not contain the jenkins user. Check the other answer on that post.
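A combined sketch of the first two items, written as a declarative Jenkins pipeline stage; the output path is a placeholder, and the project file name is taken from the error message above:

// Jenkinsfile fragment (sketch)
pipeline {
    agent any
    environment {
        // dotnet writes its per-user files here instead of the (unwritable) $HOME
        DOTNET_CLI_HOME = '/tmp'
    }
    stages {
        stage('Build') {
            steps {
                // put the build output somewhere the jenkins user can definitely write to
                sh 'dotnet build -o /tmp/dotnet/build/ Musify.csproj'
            }
        }
    }
}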

Write error when trying to run unicorn: directory for pid=/var/www/twimpush/pids/unicorn.pid not writable (ArgumentError)

I've followed the steps in the DigitalOcean guides (here and here) to set up a Sinatra server using nginx and Unicorn. I'm on the second-to-last step:
start the Unicorn and run it as a daemon using the configuration file:
Make sure that you are inside the application directory
i.e. /my_app
unicorn -c unicorn.rb -D
Running that command, I get the error:
directory for pid=/var/www/twimpush/pids/unicorn.pid not writable
(ArgumentError)
I've tried this both as root and as a user called deployer, to which I gave write permissions.
When I cloned my git repo, it didn't include the empty pids folder. I added it with mkdir pids, in addition to the other required folders mentioned in the first guide, and it worked.
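In shell terms, that amounts to something like the sketch below, run from the application root; the extra logs and tmp folders are assumptions standing in for "the other required folders" from the guide:

cd /var/www/twimpush
mkdir -p pids logs tmp                 # empty folders that git does not track
sudo chown -R deployer:deployer .      # make sure the deployer user can write to them
unicorn -c unicorn.rb -D               # then start Unicorn again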

AWS Elastic Beanstalk - User Permission Problems

I am trying to configure our Node.js application to be deployed with Amazon Elastic Beanstalk.
I added a few configuration files inside .ebextensions to enable WebSockets, run yum installs for several modules, and install some custom software we need.
So far the app deployment works and all configured software is installed by Beanstalk.
The problem I have is that the nodejs user, which runs the Node application, doesn't have permission to execute the command-line tools installed by our custom Beanstalk config.
To be more concrete:
The app supports user file uploads, and the uploaded files are saved to a temp folder on the instance (that works as it should). Then the app runs a command line to convert the uploaded file into a custom file format, executing something like /home/ec2-user/converter/bin convert filename output filename.
At this point I get this error:
{ [Error: spawn EACCES] code: 'EACCES', errno: 'EACCES', syscall: 'spawn' }
Overall the app requires several command-line tools for such conversion tasks to run correctly.
Actually they all have the same problem. Even tools installed by yum, such as ImageMagick, are not being executed by the app.
Manually, using the ec2-user account, I am able to execute all of these; all files are in place at the right system paths and they work fine. So all installations seem to be correct.
I already tried to grant permissions to the nodejs user manually and chmod the files, but this doesn't seem to have any effect.
The big question is: how can I grant the required permissions to the nodejs user, or alternatively, how can I run Node.js as a defined user?
I believe that the nodejs user doesn't have privileges to use the shell:
[ec2-user@host ~]$ cat /etc/passwd
....
nodejs:x:497:497::/tmp:/sbin/nologin
According to the docs, Node runs the command in a shell and returns the output.
I also tried:
[ec2-user@host ~]$ pwd
/home/ec2-user
[ec2-user@host ~]$ cat test.js
#!/opt/elasticbeanstalk/node-install/node-v0.10.31-linux-x64/bin/node
require('child_process').exec('/usr/bin/whoami', function (err, data) {
    console.log(data);
});
[ec2-user@host ~]$ ls -l
total 4
-rwxrwxrwx 1 ec2-user ec2-user 169 Nov 3 21:49 test.js
[ec2-user@host ~]$ sudo -u nodejs /home/ec2-user/test.js
sudo: unable to execute /home/ec2-user/test.js: Permission denied
I will say that this works, which I'm confused about (maybe someone can chime in to clarify):
$ sudo -u nodejs /usr/bin/whoami
nodejs
HOWEVER, as an outside observer, it seems more like Beanstalk isn't a good fit for you. Generally, Beanstalk is a hands-off, fully managed abstraction by design, and messing around with file system permissions and user permissions is overstepping those boundaries.
As an aside, maybe you want to consider moving to OpsWorks instead. From http://aws.amazon.com/opsworks/faqs/:
Q: How is AWS OpsWorks different than AWS Elastic Beanstalk?
AWS OpsWorks and AWS Elastic Beanstalk both focus on operations, but
with very different orientations. AWS Elastic Beanstalk seeks to
automatically provide key operations activities so that developers can
maximize the time they spend on development and minimize the time they
spend on operations. In contrast, AWS OpsWorks delivers integrated
experiences for IT administrators and ops-minded developers who want a
high degree of productivity and control over operations.
I finally found the solution:
Beanstalk uses the ec2-user account to run bash commands.
So nothing installed from the command line can be executed by the nodejs user account, because of permission conflicts.
The solution was to copy all installed tools into /usr/local/bin, where they can be executed by any user.
07_myprogram:
  command: sudo cp bin/* /usr/local/bin
  cwd: /home/ec2-user/myprogram
  ignoreErrors: true
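For context, the snippet above presumably sits under a container_commands: key in a .config file; a full entry might look like this (the parent key and the file name are assumptions, since the answer shows only the command itself):

# .ebextensions/07-copy-tools.config (sketch)
container_commands:
  07_myprogram:
    # copy the bundled command-line tools somewhere every user can execute them
    command: sudo cp bin/* /usr/local/bin
    cwd: /home/ec2-user/myprogram
    ignoreErrors: true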

Node.js and ulimit

I have seen a small amount of discussion here and there about setting ulimit -n (file handles) on Linux when using Node. The default on most Linux distros is 1024. I can find no recommendations anywhere. Normally for Apache you'd set it pretty high. Any thoughts on this? It's easy to set it high to start with, but I'm not sure there is a need. We are using Mongo remotely, not opening a lot of files locally.
I received this answer back from AWS support, and it works:
As for the container: every Beanstalk instance is a container with Beanstalk software that downloads your application on startup and modifies system parameters depending on the environment type and on the .ebextensions folder in your application.
So in order to implement my suggestion, you will need to create a .ebextensions folder in your application with the contents I have mentioned.
Just as a recap, please create a file named app.config inside the .ebextensions folder of your application, with the following (updated) content:
files:
  "/etc/security/limits.conf":
    mode: "00644"
    owner: "root"
    group: "root"
    content: |
      * soft nofile 20000
      * hard nofile 20000
commands:
  container_commands:
    command: "ulimit -HSn 20000; service httpd restart;"
After you have added this file, save the project and make a new deployment.
As for ssh, if you want a session with higher limits, you can run the following command after you are logged in:
sudo su -c "ulimit -HSn 20000; su - ec2-user"
And that current session will have the limits you so desire.
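To confirm the new limits are actually in effect, you can check the shell and the running web process directly; a small sketch (pidof httpd assumes an Apache-based environment, as the restart command above suggests):

ulimit -Sn; ulimit -Hn                # soft and hard nofile limits for the current shell
cat /proc/$(pidof -s httpd)/limits    # limits of the running web server process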
For reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-services
https://unix.stackexchange.com/questions/29577/ulimit-difference-between-hard-and-soft-limits
http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/
