AWS Elastic Beanstalk - User Permission Problems - node.js

I am trying to configure our Node.js application to be deployed with Amazon Elastic Beanstalk.
I created a few configuration files inside .ebextensions to enable WebSockets, run yum installs for several modules, and install some custom software we need.
So far the app deployment works and all configured software is installed by Beanstalk.
The problem I have is that the nodejs user, which runs the node application, doesn't have permission to execute the command-line tools installed by our custom Beanstalk config.
To be more concrete:
The app supports user file uploads, and the uploaded files are saved
to a temp folder on the instance (that works as it should).
Then the app runs a command-line conversion to turn the uploaded
file into a custom file format, executing something like
/home/ec2-user/converter/bin convert filename output filename.
At this point I get this error:
{ [Error: spawn EACCES] code: 'EACCES', errno: 'EACCES', syscall: 'spawn' }
Overall the app requires several command-line tools for such conversion tasks to run correctly.
They all have the same problem: even tools installed by yum, such as ImageMagick, cannot be executed by the app.
Manually, using the ec2-user account, I am able to execute all of these; all files are in place at the right system paths and they work fine. So all installations seem to have worked.
I already tried granting permissions to the nodejs user manually and did chmod the files, but this doesn't seem to have any effect.
The big question is: how can I grant the required permissions to the nodejs user, or alternatively, how can I use a defined user to run node.js?

I believe that the nodejs user doesn't have privileges to use the shell:
[ec2-user@host ~]$ cat /etc/passwd
....
nodejs:x:497:497::/tmp:/sbin/nologin
According to the docs, child_process.exec runs the command in a shell and buffers the output.
I also tried:
[ec2-user@host ~]$ pwd
/home/ec2-user
[ec2-user@host ~]$ cat test.js
#!/opt/elasticbeanstalk/node-install/node-v0.10.31-linux-x64/bin/node
require('child_process').exec('/usr/bin/whoami', function (err, data) {
    console.log(data);
});
[ec2-user@host ~]$ ls -l
total 4
-rwxrwxrwx 1 ec2-user ec2-user 169 Nov 3 21:49 test.js
[ec2-user@host ~]$ sudo -u nodejs /home/ec2-user/test.js
sudo: unable to execute /home/ec2-user/test.js: Permission denied
I will say that this works, which I'm confused about (maybe someone can chime in to clarify):
$ sudo -u nodejs /usr/bin/whoami
nodejs
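A plausible explanation for the difference (an assumption, not verified on that instance): /home/ec2-user on Amazon Linux is typically mode 700, and reaching a file requires execute ("search") permission on every directory in its path. So the nodejs user is blocked at the directory before the 777 bits on test.js are ever consulted, while /usr/bin is world-searchable, which is why whoami works. The directory effect can be reproduced without even switching users:

```javascript
const fs = require('fs');
const os = require('os');
const path = require('path');

const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'demo-'));
const file = path.join(dir, 'test.js');
fs.writeFileSync(file, 'console.log("hi");', { mode: 0o777 });

// Drop the directory's x bit: even its owner can no longer reach files
// inside it (root is exempt from this particular check).
fs.chmodSync(dir, 0o600);
try {
  fs.statSync(file);
  console.log('reachable');
} catch (err) {
  console.log(err.code); // EACCES for non-root users
}
fs.chmodSync(dir, 0o700); // restore so the temp dir can be cleaned up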
HOWEVER, as an outside observer, it seems more like Beanstalk isn't a good fit for you. Beanstalk is, by design, a hands-off, fully managed abstraction, and messing around with file system and user permissions oversteps those boundaries.
As an aside, maybe you want to consider moving to OpsWorks instead. From http://aws.amazon.com/opsworks/faqs/:
Q: How is AWS OpsWorks different than AWS Elastic Beanstalk?
AWS OpsWorks and AWS Elastic Beanstalk both focus on operations, but
with very different orientations. AWS Elastic Beanstalk seeks to
automatically provide key operations activities so that developers can
maximize the time they spend on development and minimize the time they
spend on operations. In contrast, AWS OpsWorks delivers integrated
experiences for IT administrators and ops-minded developers who want a
high degree of productivity and control over operations.

I finally found the solution:
Beanstalk uses the ec2-user account to run bash commands, so everything installed from the command line could not be executed by the nodejs user account because of permission conflicts.
The solution was to copy all installed tools into /usr/local/bin, where they can be executed by any user:
07_myprogram:
  command: sudo cp bin/* /usr/local/bin
  cwd: /home/ec2-user/myprogram
  ignoreErrors: true
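For context, an entry like this belongs under a `commands:` (or `container_commands:`) key in a `.ebextensions/*.config` file; the surrounding key is an assumption, since the answer shows only the command entry itself:

```yaml
commands:
  07_myprogram:
    command: sudo cp bin/* /usr/local/bin
    cwd: /home/ec2-user/myprogram
    ignoreErrors: true
```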

Related

How to provide 777 default permission on all files within a given Linux folder

I need any files created in a specific Linux directory to have 777 permissions.
I would like all users to be able to read, write, and execute all files under this folder. What is the best way, or Linux command, to make that happen?
What I am doing is spinning up two separate containers, one for the Nginx server and one for the PHP-FPM app server, to host a Laravel 5.4 app.
Please consider the following scenario. I have a Docker application container A (PHP-FPM) which serves the web application files to Docker container B (Nginx). When I access the website, the web pages are delivered through the web container. Both containers are on the same network, and I share volumes from my app container to my web container. But when the web container tries to read the files on the app container, I get an error like the one below:
The stream or file "/var/www/storage/logs/laravel.log" could not be
opened: failed to open stream: Permission denied
So I added RUN chmod -R 777 storage to my Dockerfile.
However, it is not solving the issue.
I also tried using the SGID bit to fix the issue, by adding one more line to my Dockerfile: RUN chmod -R ug+rwxs storage. Still, it does not solve the permission problem.
On a separate note, the funny thing is that in my Docker container on a Mac this works without any issue (without adding chmod -R 777 to the folder or using SGID in my Dockerfile). But when the same code is run on a Linux EC2 instance (Amazon Linux AMI), the permission issues start to occur.
So how do I fix this?
The solution is to launch both containers with the same user, identified by the same uid. For instance, you can choose root or any uid when running the container:
docker run --user root ...
Alternatively, you can switch to another user inside your Dockerfile by adding the following before the CMD or ENTRYPOINT:
USER root
I solved it by figuring out which user the cache files are created under when someone accesses the application URL. Then I updated my Dockerfile to set SGID ownership for that user on the root of the app folder where the source code resides (so that all subfolders and files added later, in whatever way, at runtime, are accessible from the web container for that user), and finally applied chmod 777 to the specific folders that need it.
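A sketch of that fix as Dockerfile lines (the www-data user and the Laravel paths are assumptions; check with `ps aux` inside the running container which user actually creates the cache files):

```dockerfile
# Give the runtime user group ownership of the app root, set the SGID bit so
# files created later inherit the group, and open up only the directories
# Laravel must write to.
RUN chown -R www-data:www-data /var/www \
 && chmod -R g+s /var/www \
 && chmod -R 777 /var/www/storage /var/www/bootstrap/cache
```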

How to set folder permissions for a particular container on Elastic Beanstalk

I am having trouble setting permissions for a web folder on Elastic Beanstalk. I run multiple containers using custom Docker images on one instance: apache-php, mysql, memcached, etc. For the "apache-php" container I map a folder with my Yii2 application to /var/www/html/.
When I manually make a bundle and upload/deploy via the Elastic Beanstalk console, I get the right permissions on the folder and everything works fine.
Now, when I deploy the app using "eb deploy", it drops all permissions and I get a server error, with "The directory is not writable by the Web process: /var/www/html/backend/web/assets" in the logs.
I can connect via SSH and set the necessary permissions manually, but this is not convenient, since it needs to be done every time I re-deploy the app.
So my question is: what is the best way to automatically set permissions for a particular folder in a particular container on Elastic Beanstalk?
Perhaps I can use .ebextensions, but I didn't find out how to run "container_commands" for a particular container.
AWS EB deployment starts your app in /var/app/ondeck
When deploying to Elastic Beanstalk, your app is first unzipped into /var/app/ondeck/.
Most likely, the local folders being deployed do not have the permissions you want on them.
If you need to make adjustments to your app, or to the shell, during deployment, .ebextensions/*.config is the right place to do it.
Container commands should operate on that path.
But keep in mind that these commands will run EVERY time you deploy, whether needed or not, unless you use some method to test for prior configuration:
container_commands:
  08user_config:
    test: test ! -f /opt/elasticbeanstalk/.preconfig-complete
    command: |
      echo "jail-me" > /home/ec2-user/.userfile
  09writable_dirs:
    command: |
      chmod -R 770 /var/app/ondeck/backend/web/assets
      chmod -R 770 /var/app/ondeck/[path]
  99complete:
    command: |
      touch /opt/elasticbeanstalk/.preconfig-complete
files:
  "/etc/profile.d/myalias.sh":
    mode: "000644"
    owner: root
    group: root
    content: |
      alias webroot='cd /var/www/html/backend/web; ls -al --color;'
      echo " ========== "
      echo " The whole point of Elastic Beanstalk is that you shouldn't need to SSH into the server. "
      echo " ========== "
Yes, you should use ebextensions.
Create a folder in your app source root called .ebextensions, and in it a file with a .config extension, say 01-folder-permissions.config. Files are processed in lexicographical order of their names.
Contents of the file can be:
container_commands:
  change_permissions:
    command: chmod 777 /var/www/some-folder
Replace with the appropriate folder and permissions. Container commands are documented in the Elastic Beanstalk customization guide.

Error: ENOENT, no such file or directory './assets'

After setting up koa-static-folder, my image loads fine when I test over localhost with http://localhost:3000/assets/myimage.jpg
But after deploying our Node code to an Ubuntu server, we get:
Error: ENOENT, no such file or directory './assets'
What's Ubuntu's issue here? I'm not sure how to resolve this.
The code that is working locally is:
var koa = require('koa')(),
    serve = require('koa-static-folder');
koa.use(serve('./assets'));
It sounds like a permissions issue (but I could be wrong!): the user that node.js runs under does not have access rights to the assets folder. If this is the problem, you have to change the access permissions on the folder (see chmod) or run node.js as a user that does have them.
If the server isn't publicly accessible, you could run the application with sudo to verify whether it is a permissions problem. Note that sudo is not a long-term solution, since it is highly irresponsible/insecure to run the application as the root user.

Keystone configuration file permissions

I'm playing around with Juju and OpenStack, and I installed the Keystone identity service on one of the nodes. SSH-ing into the machine, I noticed that the permissions on the configuration file /etc/keystone/keystone.conf are 644 (rw-r--r--), which means it is readable by any user on the system.
Keeping in mind that this file contains the MySQL username and password, wouldn't it be right for the file to be readable only by the keystone user?
Note that I've tried installing both with Juju and by hand on a fresh Ubuntu 14.04, with the same results.
Edit: Forgot to mention that the OpenStack documentation doesn't say anything about permissions.
I don't think any other OpenStack services use keystone.conf, so you may change its ownership to keystone and change the permissions so that only keystone can read it:
chown keystone:keystone /etc/keystone/keystone.conf
chmod 600 /etc/keystone/keystone.conf

Node.js and ulimit

I have seen a small amount of discussion here and there about setting ulimit -n (file handles) on Linux when using Node. The default on most Linux distros is 1024, and I can find no recommendations anywhere. Normally for Apache you'd set it pretty high. Any thoughts on this? It's easy to set it high to start with, but I'm not sure there is a need: we are using Mongo remotely and are not opening a lot of files locally.
I received this answer back from AWS support, and it works:
As for the container: every Beanstalk instance is a container with Beanstalk software that downloads your application upon startup and modifies system parameters depending on the environment type and on the .ebextensions folder in your application.
So to achieve my suggestion, you will need to create a .ebextensions folder in your application with the contents mentioned here.
As a recap: create a file named app.config inside the .ebextensions folder of your application, with the following (updated) content:
files:
  "/etc/security/limits.conf":
    mode: "00644"
    owner: "root"
    group: "root"
    content: |
      * soft nofile 20000
      * hard nofile 20000
container_commands:
  01_limits:
    command: "ulimit -HSn 20000; service httpd restart;"
After you add this file, save the project and make a new deployment.
As for SSH, if you want to run a session with higher limits, you can run the following command after you are logged in:
sudo su -c "ulimit -HSn 20000; su - ec2-user"
And that current session will have the limits you so desire.
For reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-services
https://unix.stackexchange.com/questions/29577/ulimit-difference-between-hard-and-soft-limits
http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/