I have seen scattered discussion about setting ulimit -n (file descriptors) on Linux when using Node, but I can find no concrete recommendations anywhere. The default on most Linux distros is 1024; for Apache you'd normally set it much higher. Any thoughts on this? It's easy to set it high from the start, but I'm not sure there's a need: we use Mongo remotely and don't open many files locally.
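(For anyone measuring: a quick way to see how close a running Node process actually gets to the 1024 default; a sketch, with the pgrep pattern as an example to adjust to your process name:)

    # count the file descriptors currently open by the node process
    ls /proc/$(pgrep -xo node)/fd | wc -l
    # compare against the current shell's soft and hard limits
    ulimit -Sn
    ulimit -Hn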
I received this answer back from AWS support, and it works:
As for the container: every Beanstalk instance is a container running Beanstalk software that downloads your application on startup and modifies system parameters based on the environment type and on the .ebextensions folder in your application.
So to implement my suggestion, you will need to create a .ebextensions folder in your application with the contents I mentioned.
Just as a recap, please create a file named app.config inside the .ebextensions folder of your application, with the following (updated) content:
files:
  "/etc/security/limits.conf":
    mode: "00644"
    owner: "root"
    group: "root"
    content: |
      * soft nofile 20000
      * hard nofile 20000

container_commands:
  # container_commands entries must be named keys; the name itself is arbitrary
  01_raise_limits:
    command: "ulimit -HSn 20000; service httpd restart;"
After you have added this file, save the project and make a new deployment.
As for SSH, if you want to run a session with higher limits, you can run the following command after you are logged in:
sudo su -c "ulimit -HSn 20000; su - ec2-user"
That session will then have the limits you want.
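To verify the limits actually took effect, a hedged check (the pgrep pattern assumes a running node process):

    # limits of the current shell
    ulimit -Sn; ulimit -Hn
    # limits of the running node process
    grep 'open files' /proc/$(pgrep -xo node)/limits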
For reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-services
https://unix.stackexchange.com/questions/29577/ulimit-difference-between-hard-and-soft-limits
http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/
I have tried many solutions, but no luck so far. I have a Linux automation script that runs a few gcloud commands with some conditions. I wrote the script in Node.js, but it is so incredibly slow that I can finish the task manually before the script completes its run.
The same slowness affects gcloud commands when I connect to a cluster, and kubectl commands when I query something.
Please help!
It could be a DNS configuration error on the WSL side. I had the same issue today; here's how I fixed it!
1. Checking the (deadly slow) response time
[tbg@~] time kubectl get deployments
No resources found in default namespace.
real 0m10.530s
user 0m0.087s
sys 0m0.043s
2. Checking the WSL/DNS configuration
[tbg@~] cat /etc/wsl.conf
[network]
generateResolvConf=false
[tbg@~] cat /etc/resolv.conf
nameserver XX.XXX.XXX.X
nameserver YYY.YY.YY.YY
nameserver 1.1.1.1
If you see a manually written configuration like this, remove the generateResolvConf=false override (and the stale /etc/resolv.conf) to get back to automatic resolv.conf generation, then restart WSL (wsl --shutdown).
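A minimal sketch of that fix from inside WSL (assuming the override lives in /etc/wsl.conf as shown above):

    # drop the line that disables automatic resolv.conf generation
    sudo sed -i '/generateResolvConf/d' /etc/wsl.conf
    # remove the stale, manually written resolver file
    sudo rm /etc/resolv.conf
    # then, from Windows: wsl --shutdown, and reopen your terminal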
3. Checking the (fixed!) response time
[tbg@~] time kubectl get deployments
No resources found in default namespace.
real 0m1.212s
user 0m0.151s
sys 0m0.050s
I found out that my resolv.conf configuration was causing the latency when I tried to reinstall kubectl with apt and noticed that apt was really slow too.
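To confirm DNS is the culprit before changing anything, timing a bare lookup is enough (a quick sketch; the hostname is arbitrary):

    time getent hosts kubernetes.io   # exercises the system resolver the way most tools do
    time nslookup kubernetes.io       # a multi-second 'real' time points at dead nameservers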
Right now, access to the /mnt folders in WSL 2 is very slow, and by default the entire Windows PATH is appended to the Linux $PATH at launch, so any Linux binary that scans $PATH becomes unbearably slow.
To disable this feature, edit the /etc/wsl.conf to add the following section:
[interop]
appendWindowsPath = false
This stops Windows paths from being added to the Linux $PATH; the best approach for now is to add the folders you need to $PATH manually.
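For example, in ~/.bashrc (the Windows folder below is only an illustration; pick the tools you actually need):

    # re-add selected Windows tool folders explicitly
    export PATH="$PATH:/mnt/c/Windows/System32"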
Terminate the WSL distro (wsl.exe --terminate <distro_name>) to make the change effective immediately, or run wsl.exe --shutdown, then start the terminal again.
Refer to the linked answer for more information.
I have an application that monitors files sent to an FTP server (proftpd 1.3.5a). I am using pywatchdog to monitor file creation in the FTP server root (the app runs locally on the server), but under one very specific circumstance no notification is issued: when I create a new directory through FTP and then create a file under that directory, the file creation/modification events are not caught!
To reproduce this in a simple way, I used pyinotify (0.9.6) directly, and the problem appears to come from there. A simple way to reproduce the problem:
Install proftpd and pyinotify (python3) on the server with default settings
On the server, run the following command to monitor the FTP root (recursive and auto-add turned on, assuming user "user"):
python3 -m pyinotify -v -r -a /home/user
On the client, create a sample.txt, connect to the FTP server, and issue the following commands in this order:
mkdir dir_a
cd dir_a
put sample.txt
There will be no events related to sample.txt - neither create nor modify!
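For convenience, the same reproduction can be scripted with lftp (lftp, the host, and the credentials are assumptions; any FTP client that can mkdir/cd/put will do):

    # run on the client; adjust host, user, and password
    echo 'hello' > sample.txt
    lftp -u user,password ftp://server <<'EOF'
    mkdir dir_a
    cd dir_a
    put sample.txt
    bye
    EOF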
I have tried to take the FTP factor out of the equation by manually creating and moving directories inside the watched target and creating files inside those directories, but then the issue does not occur; everything works smoothly.
Any help will be appreciated!
I don't know all the beanstalkd tricks well, and I need to increase the max open files limit for beanstalkd on our AWS EC2 instances. I found a couple of resources on the internet (which look reasonably trustworthy to me) that suggest changing not only the beanstalkd configuration but also system configuration, like this:
# file: /etc/default/beanstalkd
BEANSTALKD_LISTEN_ADDR=127.0.0.1
BEANSTALKD_LISTEN_PORT=11300
START=yes
BEANSTALKD_EXTRA="-b /var/lib/beanstalkd -f 1"
# Should match your /etc/security/limits.conf settings
ulimit -n 100000
The explanation of why I should change /etc/security/limits.conf is:
"Lot's of resources online tell you to update your /etc/security/limits.conf and /etc/pam.d/common-session* settings to increase your maximum number of available file descriptors. However, the default beanstalkd installation on Ubuntu 12.04+ uses an init script that starts the daemon process using start-stop-daemon which does not use your system settings when setting the processes ulimits. Just add this line to your defaults and you're good to go!"
I don't want to change any global system settings; all I want to change is the beanstalkd settings.
So why should I make these changes, if the default beanstalkd installation on Ubuntu 12.04+ uses an init script that starts the daemon via start-stop-daemon, which does not apply the system settings when setting the process's ulimits?
And does anyone know a better way to increase max open files for beanstalkd on an AWS EC2 instance, without these changes to system settings?
Thank you for your time!
The best resource I have found on raising the open files limit: https://underyx.me/2015/05/18/raising-the-maximum-number-of-file-descriptors
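If the instance runs a systemd-based distro rather than the old init script, the limit can be scoped to the beanstalkd unit alone, avoiding /etc/security/limits.conf entirely (a sketch, assuming the service is named beanstalkd):

    sudo systemctl edit beanstalkd        # opens a drop-in override
    # add these two lines to the override, then save:
    #   [Service]
    #   LimitNOFILE=100000
    sudo systemctl daemon-reload
    sudo systemctl restart beanstalkd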
I am having trouble setting permissions for a web folder on Elastic Beanstalk. I run multiple containers from custom Docker images on one instance: apache-php, mysql, memcached, etc. For the apache-php container I map a folder containing my Yii2 application to /var/www/html/.
When I manually make a bundle and upload/deploy via the Elastic Beanstalk console, I get the right permissions on the folder and everything works fine.
However, when I deploy the app using eb deploy, it drops all permissions, and I get a server error with "The directory is not writable by the Web process: /var/www/html/backend/web/assets" in the logs.
I can connect via SSH and set the necessary permissions manually, but that is not convenient, since it has to be done every time I redeploy the app.
So my question is: what is the best way to automatically set permissions for a particular folder in a particular container on Elastic Beanstalk?
Perhaps I can use .ebextensions, but I have not found a way to run container_commands for a particular container.
AWS EB deployment stages your app in /var/app/ondeck
When deploying to Elastic Beanstalk, your app is first unzipped into /var/app/ondeck/.
Most likely, the local folders being deployed do not have the permissions you want on them.
If you need to make adjustments to your app, or to the shell, during deployment, .ebextensions/*.config is the right place to do it.
Container commands should therefore target that path.
But keep in mind that these commands will run EVERY time you deploy, whether needed or not, unless you use some method to test whether the configuration has already been applied, as below:
container_commands:
  08user_config:
    test: test ! -f /opt/elasticbeanstalk/.preconfig-complete
    command: |
      echo "jail-me" > /home/ec2-user/.userfile
  09writable_dirs:
    command: |
      chmod -R 770 /var/app/ondeck/backend/web/assets
      chmod -R 770 /var/app/ondeck/[path]
  99complete:
    command: |
      touch /opt/elasticbeanstalk/.preconfig-complete

files:
  "/etc/profile.d/myalias.sh":
    mode: "000644"
    owner: root
    group: root
    content: |
      alias webroot='cd /var/www/html/backend/web; ls -al --color;'
      echo " ========== "
      echo " The whole point of Elastic Beanstalk is that you shouldn't need to SSH into the server. "
      echo " ========== "
Yes, you should use .ebextensions.
Create a folder in your app source root called .ebextensions. Inside it, create a file with a .config extension, say 01-folder-permissions.config. Files are processed in lexicographical order of their names.
The contents of the file can be:
container_commands:
  change_permissions:
    command: chmod 777 /var/www/some-folder
Replace with the appropriate folder and permissions. You can read more about container commands in the Elastic Beanstalk documentation.
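A less permissive variant, if 777 makes you uneasy (the webapp user/group is an assumption; some Elastic Beanstalk platforms run the web process as a different user, such as nodejs):

container_commands:
  change_permissions:
    # give ownership to the web process user instead of opening the folder to everyone
    command: chown -R webapp:webapp /var/www/some-folder && chmod -R 775 /var/www/some-folder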
I am trying to configure our Node.js application to be deployed with Amazon Elastic Beanstalk.
I have created a few configuration files inside .ebextensions to enable WebSockets, run yum installs for several modules, and install some custom software we need.
So far the app deployment works, and all configured software is installed by Beanstalk.
The problem I have is that the nodejs user which runs the Node application doesn't have permission to execute the command-line tools installed by our custom Beanstalk config.
To be more concrete:
The app supports user file uploads, and the uploaded files are saved to a temp folder on the instance (that works like it should). Then the app performs a command-line execution to convert the uploaded file into a custom file format, running something like /home/ec2-user/converter/bin convert filename output filename.
At this point I get this error:
{ [Error: spawn EACCES] code: 'EACCES', errno: 'EACCES', syscall: 'spawn' }
Overall, the app requires several command-line tools for such conversion tasks to run correctly.
Actually, they all have the same problem: even tools installed by yum, such as ImageMagick, cannot be executed by the app.
Manually, using the ec2-user account, I am able to execute all of them; all files are in place at the right system paths and work fine, so the installations themselves seem correct.
I have already tried to grant permissions to the nodejs user manually and to chmod the files, but this doesn't seem to have any effect.
The big question is: how can I grant the required permissions to the nodejs user, or alternatively, how can I make Node.js run under a user of my choosing?
I believe the nodejs user doesn't have privileges to use a shell:
[ec2-user@host ~]$ cat /etc/passwd
....
nodejs:x:497:497::/tmp:/sbin/nologin
According to the docs, Node's child_process.exec spawns a shell and runs the command within it.
I also tried:
[ec2-user@host ~]$ pwd
/home/ec2-user
[ec2-user@host ~]$ cat test.js
#!/opt/elasticbeanstalk/node-install/node-v0.10.31-linux-x64/bin/node
require('child_process').exec('/usr/bin/whoami', function (err, data) {
console.log(data);
});
[ec2-user@host ~]$ ls -l
total 4
-rwxrwxrwx 1 ec2-user ec2-user 169 Nov 3 21:49 test.js
[ec2-user@host ~]$ sudo -u nodejs /home/ec2-user/test.js
sudo: unable to execute /home/ec2-user/test.js: Permission denied
I will say that this works, which I'm confused about (maybe someone can chime in to clarify):
$ sudo -u nodejs /usr/bin/whoami
nodejs
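One possible explanation (an assumption worth checking): /usr/bin/whoami sits on a world-traversable path, while /home/ec2-user is typically mode 0700 on Amazon Linux, so the nodejs user cannot reach test.js no matter what the file's own mode is. A quick check:

    stat -c '%a %U %n' /home/ec2-user   # 700 owned by ec2-user would explain the denial
    sudo -u nodejs ls /home/ec2-user    # expect "Permission denied" if so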
HOWEVER, as an outside observer, it seems like Beanstalk isn't a good fit for you. Generally, Beanstalk is a hands-off, fully managed abstraction by design, and messing around with filesystem and user permissions oversteps those boundaries.
As an aside, maybe you want to consider moving to OpsWorks instead. From http://aws.amazon.com/opsworks/faqs/:
Q: How is AWS OpsWorks different than AWS Elastic Beanstalk?
AWS OpsWorks and AWS Elastic Beanstalk both focus on operations, but with very different orientations. AWS Elastic Beanstalk seeks to automatically provide key operations activities so that developers can maximize the time they spend on development and minimize the time they spend on operations. In contrast, AWS OpsWorks delivers integrated experiences for IT administrators and ops-minded developers who want a high degree of productivity and control over operations.
I finally found the solution:
Beanstalk uses the ec2-user account to run bash commands.
So nothing installed from the command line can be executed by the nodejs user account, because of permission conflicts.
The solution was to copy all the installed tools into /usr/local/bin, where any user can execute them:
container_commands:
  07_myprogram:
    command: sudo cp bin/* /usr/local/bin
    cwd: /home/ec2-user/myprogram
    ignoreErrors: true
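A hedged follow-up check that the copied tools are now executable by the nodejs user (convert is just an example, being the ImageMagick binary mentioned in the question):

    sudo -u nodejs /usr/local/bin/convert -version   # should print the ImageMagick version banner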