Increasing max-open files for Beanstalkd in AWS EC2 instance - linux

I don't know all the beanstalkd tricks, and I need to increase the max open files limit for beanstalkd on our AWS EC2 instances. I found a couple of resources online (which look fairly trustworthy) that suggest changing not only the beanstalkd configuration but also system configuration, like this:
# file: /etc/default/beanstalkd
BEANSTALKD_LISTEN_ADDR=127.0.0.1
BEANSTALKD_LISTEN_PORT=11300
START=yes
BEANSTALKD_EXTRA="-b /var/lib/beanstalkd -f 1"
# Should match your /etc/security/limits.conf settings
ulimit -n 100000
The explanation of why I should also change "/etc/security/limits.conf" is:
"Lot's of resources online tell you to update your /etc/security/limits.conf and /etc/pam.d/common-session* settings to increase your maximum number of available file descriptors. However, the default beanstalkd installation on Ubuntu 12.04+ uses an init script that starts the daemon process using start-stop-daemon which does not use your system settings when setting the processes ulimits. Just add this line to your defaults and you're good to go!"
I don't want to change any global system settings. All I want is change beanstalkd settings.
So why should I make these changes to the system settings, if the default beanstalkd installation on Ubuntu 12.04+ uses an init script that starts the daemon with start-stop-daemon, which ignores those system settings when setting the process's ulimits?
And does anyone know a better way to increase max open files for beanstalkd on an AWS EC2 instance, without changing system-wide settings?
Thank you for your time!

The best resource I found on raising the open-files limit: https://underyx.me/2015/05/18/raising-the-maximum-number-of-file-descriptors
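Whichever route you take, it's worth confirming that the higher limit actually applied to the running daemon rather than just to your shell; a quick check along these lines should work on most distros:
# inspect the file-descriptor limit of the running beanstalkd process
cat /proc/$(pidof beanstalkd)/limits | grep 'Max open files'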

Related

Nodejs/Gcloud/kubectl any command we run from WSL2 is deadly slow

I have tried many solutions, but no luck. I have a Linux automation that runs a few gcloud commands with some conditions. I wrote this script with Node.js, but it is so incredibly slow that I can finish the task manually before the script completes its run.
The same goes for gcloud commands when I connect to a cluster, and for kubectl commands when I query something.
Please help!!
It could be a DNS config error on the WSL side. I had the same issue today; here's how I fixed it.
1. Checking the (deadly slow) response time
[tbg#~] time kubectl get deployments
No resources found in default namespace.
real 0m10.530s
user 0m0.087s
sys 0m0.043s
2. Checking the WSL/DNS configuration
[tbg#~] cat /etc/wsl.conf
[network]
generateResolvConf=false
[tbg#~] cat /etc/resolv.conf
nameserver XX.XXX.XXX.X
nameserver YYY.YY.YY.YY
nameserver 1.1.1.1
If you see something like that, remove those entries to get back to automatic resolv.conf generation, then restart WSL (wsl --shutdown).
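In practice that means dropping the hand-written nameserver list and the generateResolvConf override; a minimal sequence, assuming the standard WSL2 file locations, looks like this:
# remove the manual resolv.conf so WSL can regenerate it
sudo rm /etc/resolv.conf
# drop the generateResolvConf=false override from /etc/wsl.conf
sudo sed -i '/generateResolvConf/d' /etc/wsl.conf
# then, from Windows (PowerShell or cmd), restart WSL
wsl --shutdown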
3. Checking the (fixed !) response time
[tbg#~] time kubectl get deployments
No resources found in default namespace.
real 0m1.212s
user 0m0.151s
sys 0m0.050s
I found out that my resolv.conf configuration was causing that latency when I tried to reinstall kubectl with apt and noticed apt was really slow too.
Right now, access to the /mnt folders in WSL2 is very slow. By default, the entire Windows PATH is appended to the Linux $PATH at launch, so any Linux binary that scans $PATH becomes unbearably slow.
To disable this feature, edit the /etc/wsl.conf to add the following section:
[interop]
appendWindowsPath = false
This avoids adding the Windows PATH to the Linux $PATH; for now, the best approach is to add the folders you actually need to $PATH manually.
Terminate the WSL distro (wsl.exe --terminate <distro_name>) to make it effective immediately, or run wsl.exe --shutdown and start the terminal again.
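If you do disable appendWindowsPath, you can re-add just the Windows tools you need; the directories below are only examples (adjust the paths to your own installation):
# ~/.bashrc - re-add selected Windows directories manually
export PATH="$PATH:/mnt/c/Windows/System32"
export PATH="$PATH:/mnt/c/Program Files/Docker/Docker/resources/bin"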

Why does mongodb complain about transparent_hugepage?

A few questions are already asking about how to fix the mongodb warning:
** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
** We suggest setting it to 'never'
But I'm wondering if it should be fixed. I get this warning from MongoDB 3.0.1 on a Ubuntu VM running on Google's Cloud. Should I trust MongoDB that 'never' is better? Or should I trust Google/Ubuntu that they set it to 'always' for a good reason? I imagine there are tradeoffs to be considered and don't know what I'd be trading to keep it or fix it.
Asking how to fix it is fine, but asking whether to fix it seems wiser.
Edit: MongoDB has addressed this issue since I wrote this answer. Their recommendation is at https://docs.mongodb.com/master/tutorial/transparent-huge-pages/ and probably ought to be your go-to solution. My original answer will still work, but I'd consider it a hack now that an official solution is available.
Original answer: According to the MongoDB documentation, http://docs.mongodb.org/manual/reference/transparent-huge-pages/, and support, https://jira.mongodb.org/browse/DOCS-2131, transparent_hugepage (THP) is designed to create fewer large memory blocks rather than many small memory blocks in systems with a lot of memory. This is great if your software needs large contiguous memory accesses. For MongoDB, however, regardless of memory available, it requires numerous smaller memory accesses and therefore performs better with THP disabled.
That makes me think either way will work, but you'll get better mongo (or any database) performance with THP off, giving you smaller bites of memory. If you don't have much memory anyway, THP probably ought to be off no matter what you run.
Several ways to do that are outlined in the link above. The most universally applicable appears to be editing rc.local.
$ sudo nano /etc/rc.local
Insert the following lines before the "exit 0" line.
...
if test -f /sys/kernel/mm/transparent_hugepage/khugepaged/defrag; then
    echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
exit 0
Note: redhat-based systems may use "redhat_transparent_hugepage" rather than "transparent_hugepage" and can be checked by:
ls /sys/kernel/mm/*transparent_hugepage*/enabled
cat /sys/kernel/mm/*transparent_hugepage*/enabled
To properly apply the changes made above, reboot (which will run rc.local), or run:
$ sudo su
# source /etc/rc.local
# service mongod restart
# exit
For Ubuntu using upstart scripts:
Since we are deploying machines with Ansible I don't like modifying rc files or GRUB configs.
I tried using sysfsutils / sysfs.conf but ran into timing issues when starting the services on fast (or slow) machines. It looked like mongod was sometimes started before sysfsutils. Sometimes it worked, sometimes it did not.
Since mongod is an upstart process I found that the cleanest solution was to add the file /etc/init/mongod_vm_settings.conf with the following content:
# Ubuntu upstart file at /etc/init/mongod_vm_settings.conf
#
# This file will set the correct kernel VM settings for MongoDB
# This file is maintained in Ansible
start on (starting mongod)
script
    echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
    echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
end script
This will run the script just before mongod is started.
Restart mongod (sudo service mongod restart) and done.
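To confirm the setting took effect after the restart, read the sysfs file back; the active value is shown in brackets and should be never:
cat /sys/kernel/mm/transparent_hugepage/enabled
# expected: always madvise [never]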
On Ubuntu I used the 'Init Script' option from this document: http://docs.mongodb.org/manual/tutorial/transparent-huge-pages/
None of these worked for me on an Amazon EC2 instance running Ubuntu 14.04, not even the init.d script recommended by MongoDB. I had to use the hugeadm tool, installing it via apt-get and then running sudo hugeadm --thp-never; this post pointed me to hugeadm. I'm still trying to figure out how to disable the transparent_hugepage defrag setting; hugeadm doesn't seem to have an easy way to do that.
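For the defrag knob specifically, falling back to writing the sysfs file directly (the same mechanism the rc.local approach above uses) is a reasonable stopgap until hugeadm offers an option for it:
# hugeadm covers 'enabled'; defrag can still be set by hand
sudo sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/defrag'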

Need help to change postgresql port on CentOS 7

I just installed PostgreSQL (following the instructions on the PostgreSQL site); the server is running like a charm, no problem at all.
Now I want to change the default port (5432) to 9898.
First I tried to do it via the postgresql.conf file under /var/lib/pgsql/data/postgresql.conf.
I uncommented the port-related line and changed it to port=9898, but there is a comment saying that overriding the port here doesn't change anything on RHEL and derived systems; it also says to override the port in the service config file (which I cannot find - where is it?).
I also changed postmaster.opts, which didn't work either.
So, finally: how can I change the PostgreSQL 9.2.7 port number on CentOS 7?
Finally I found it. The service file is /lib/systemd/system/postgresql.service, and I just changed the following line:
Environment=PGPORT=9898
Stop the service:
service postgresql stop
Then reload the systemd units:
systemctl daemon-reload
Finally, start PostgreSQL again:
service postgresql start
Now it's working like a charm :D
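To verify the server is really listening on the new port, either of these quick checks works (the postgres user and the ss invocation are just examples; adjust to your setup):
psql -p 9898 -U postgres -c "SHOW port;"
sudo ss -lntp | grep 9898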
Log in to psql. Try:
show config_file;
That is the file you should change. Did you restart the server after changing the port?
You can also try the file under /etc/rc.d/init.d for PostgreSQL if it is running as a service.
From /lib/systemd/system/postgresql.service
# It's not recommended to modify this file in-place, because it will be
# overwritten during package upgrades. If you want to customize, the
# best way is to create a file "/etc/systemd/system/postgresql.service",
# containing
# .include /lib/systemd/system/postgresql.service
# ...make your changes here...
# For more info about custom unit files, see
# http://fedoraproject.org/wiki/Systemd#How_do_I_customize_a_unit_file.2F_add_a_custom_unit_file.3F
# For example, if you want to change the server's port number to 5433,
# create a file named "/etc/systemd/system/postgresql.service" containing:
# .include /lib/systemd/system/postgresql.service
# [Service]
# Environment=PGPORT=5433
# This will override the setting appearing below.
I think it is better to follow the steps above.
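Putting the comment's advice together with the port from the question, the override unit would look roughly like this (the .include form is the one shown in the comment above; on newer systemd versions a drop-in under /etc/systemd/system/postgresql.service.d/ is the more common equivalent):
# /etc/systemd/system/postgresql.service
.include /lib/systemd/system/postgresql.service
[Service]
Environment=PGPORT=9898
Then reload the units and restart the service:
systemctl daemon-reload
systemctl restart postgresql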
I am using an Amazon EC2 instance with the Amazon Linux AMI (a kind of CentOS, it seems). I needed to change the PGPORT variable in the /etc/init.d/postgresql file and restart the postgresql service using 'service postgresql restart'. And it works!
PGPORT=some_new_port # /etc/init.d/postgresql

Node.js and ulimit

I have seen a small amount of discussion here and there about setting ulimit -n (file handles) on Linux when using Node. The default on most Linux distros is 1024. I can find no recommendations anywhere. Normally for Apache you'd set it pretty high. Any thoughts on this? It's easy to set it high to start with, but I'm not sure there is a need. We are using Mongo remotely, not opening a lot of files locally.
I received this answer back from AWS support, and it works:
As for the container: every Beanstalk instance is a container with Beanstalk software that downloads your application on startup and modifies system parameters depending on the environment type and on the .ebextensions folder in your application.
So, in order to achieve my suggestion, you will need to create a .ebextensions folder in your application with the contents I have mentioned.
Just as a recap, please create a file named app.config inside the .ebextensions folder in your application, with the following (updated) content:
files:
  "/etc/security/limits.conf":
    mode: "00644"
    owner: "root"
    group: "root"
    content: |
      * soft nofile 20000
      * hard nofile 20000

commands:
  container_commands:
    command: "ulimit -HSn 20000; service httpd restart;"
After you add this file, save the project and make a new deployment.
As for SSH, if you want to run a user session with higher limits, you can run the following command after you are logged in:
sudo su -c "ulimit -HSn 20000; su - ec2-user"
And that current session will have the limits you so desire.
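Once deployed, a quick sanity check over SSH confirms the new limits, both for your shell and for the web server process (httpd is used here only because it is the process restarted in the config above):
ulimit -Sn; ulimit -Hn
cat /proc/$(pgrep -o httpd)/limits | grep 'Max open files'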
For reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-services
https://unix.stackexchange.com/questions/29577/ulimit-difference-between-hard-and-soft-limits
http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/

PHP-FPM and capistrano, "No input file specified"

I use capistrano to deploy new versions of a website to servers that run nginx and php-fpm, and sometimes it seems like php-fpm gets a bit confused after deployment and expects old files to exist, generating the "No input file specified" error. I thought it could have had something to do with APC, which I uninstalled, but I realize the process doesn't get as far as checking stuff with APC.
Is there a permission friendly way to tell php-fpm that after deployment it needs to flush its memory (or similar), that I could use? I don't think I want to do sudo restarts.
rlimit_files isn't set in php-fpm.conf, and ulimit -n is 250000.
Nginx has its own rather aggressive file cache. It's worse when NFS is involved, since that has its own cache as well. Tell Capistrano to restart nginx after deployment.
It could also be an issue with your configuration, as Mohammad suggests, but in that case a restart wouldn't fix the problem, so you can tell the two apart.
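If a full sudo restart is off the table, a gentler option is to have the deploy hook gracefully reload php-fpm's workers (and nginx) instead. A minimal sketch of the commands such a hook could run, assuming a conventional pid file location (check your php-fpm.conf) and that the deploy user is allowed to signal the master processes:
# respawn the php-fpm workers gracefully so they pick up the new release paths
kill -USR2 "$(cat /var/run/php-fpm.pid)"
# reload nginx so cached file handles pointing at the old release are dropped
sudo nginx -s reload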
