14 "Mojave" on my macbook and I am trying to increase the fs.inotify.max_user_watches value in /etc/sysctl.conf (to solve another problem). To conclude this rite I need to run sudo sysctl -p /etc/sysctl.conf. But I get
"illegal option -- p"
When I check the man page on macOS, it in fact has neither the -p option (to supply a file) nor the --system option (to load all known config files); on another (Linux) system I can clearly see that those options are available.
How else, then, can I get sysctl to pick up my new configuration? Is there a different way to configure fs.inotify.max_user_watches on macOS?
On Big Sur, the first lines of the sysctl man page are:
SYSCTL(8) BSD System Manager's Manual SYSCTL(8)
NAME
sysctl -- get or set kernel state
This must mean sysctl itself can be used to update some values. However, sysctl does not list any fs.inotify.max_user_watches name; inotify appears to be a Linux-only facility. Must be another Mac thing...
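To see what a given system actually exposes, you can query values by name; the namespaces differ between macOS (kern.*, and no fs.inotify.* at all) and Linux (kernel.*). A minimal portability sketch (the fallback path assumes Linux's /proc is mounted; the kern.maxfiles write is only an illustrative macOS example):

```shell
# Query the OS type by sysctl name; fall back to reading /proc on
# systems where the BSD-style key is absent (i.e. Linux).
ostype=$(sysctl -n kern.ostype 2>/dev/null || cat /proc/sys/kernel/ostype)
echo "$ostype"   # "Darwin" on macOS, "Linux" on Linux

# On macOS, writable values can still be changed for the current boot
# (root required), e.g.:
#   sudo sysctl -w kern.maxfiles=65536
```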
I need to create a kernel panic, and I tried the following:
sysctl kernel.panic=0 && echo c > /proc/sysrq-trigger
When I run the commands above, the system always reboots. I need the system to stay in a panic state without rebooting.
Use the -w option when you want to change a sysctl setting under RHEL.
Multiple commands example:
> sysctl -w kernel.panic="0"
> echo c > /proc/sysrq-trigger
Notice that if you want to preserve kernel settings across reboots, it is better to add them to the /etc/sysctl.conf file. However, the quick-setting method may be enough for your testing requirements.
Also make sure you don't paste both commands together as "sysctl -w kernel.panic=0 echo c > /proc/sysrq-trigger". (I always give this recommendation when I see multiple shell commands posted together, as in your question.) Alternatively, use the && operator to chain the next command, like this:
Single line example:
sysctl -w kernel.panic="0" && echo c > /proc/sysrq-trigger
I am using an OpenVZ VPS with only 128 MB of RAM.
The RAM is so limited that I cannot get locale-gen to run successfully; the script always gets killed during the operation.
Killed localedef -i $input -c -f $charset -A /usr/share/locale/locale.alias $locale
Is there any way that I can set correct locale information manually? e.g. run the command on another computer and copy necessary files?
Short answer: no. It won't help to copy locale data generated elsewhere, and doing so might corrupt the system, since commands like cd and ls depend on it.
It should normally be possible to run locale-gen on a VPS with 128 MB of RAM.
If it keeps failing, try switching to a locale with a smaller footprint. Any locale ending in .UTF-8 requires more memory and CPU time to generate. In most cases, switching from en_US.UTF-8 to en_US.iso88591 saves some memory.
So instead of
sudo locale-gen en_US.UTF-8
It's worth trying
sudo locale-gen en_US.iso88591
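Before regenerating anything, it can also help to check which locales are already compiled; if a usable variant is already present you may not need locale-gen at all (a quick check, nothing here is distribution-specific):

```shell
# List the locales glibc already knows about; the C and POSIX locales
# are always present even when nothing has been generated.
locale -a | head -n 5

# Show what the current environment is actually using.
locale
```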
I have an embedded system running an old Linux OS. When I enter the uname -r command I get the version string 3.3.8-3.4.
I want to modify some network-related kernel parameters (increase the TCP receive buffer size, etc.) in /proc/sys. But the sysctl command does not exist on this old system, and there is no sysctl.conf under the /etc directory.
I tried changing the kernel parameter files manually, but the system does not allow this operation, even for the superuser.
How can I modify kernel parameters on this Linux version?
You can use /proc/sys. For example the following command:
echo 1 > /proc/sys/net/ipv4/ip_forward
... is basically the same as
sysctl -w net.ipv4.ip_forward=1
However, you'll need to arrange on your own for the parameters to be set again at boot.
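The same approach works for the TCP buffer sizes mentioned in the question; no sysctl binary is needed at all. A sketch (the 4 MB value below is only an illustration; the right number depends on your workload):

```shell
# Read the current maximum receive buffer size straight from /proc/sys.
rmem=$(cat /proc/sys/net/core/rmem_max)
echo "rmem_max = $rmem bytes"

# To raise it (as root):
#   echo 4194304 > /proc/sys/net/core/rmem_max
# To make this survive a reboot on a system without sysctl.conf support,
# put the echo line in whatever startup script your init system runs
# (e.g. /etc/rc.local on many embedded systems; the exact path varies).
```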
As the title says.
I've found this question:
How to increase Neo4j's maximum file open limit (ulimit) in Ubuntu?
But I don't even have this file: /etc/init.d/neo4j-service. I'm guessing that's because I'm using RHEL 5, not Debian, as the answerer was.
I then added these two lines:
root soft nofile 40000
root hard nofile 40000
into my /etc/security/limits.conf
Then, after logging out and logging back in, ulimit -Sn and ulimit -Hn still return 1024.
Also, I don't even have the file /etc/pam.d/common-session under the pam.d directory. Should I create this file myself and just add that one line to it? I don't think that should be the way out.
Any ideas please?
Thanks
I don't know what the true RHEL way is, but you can change the limit using sysctl:
$ sysctl -w fs.file-max=100000
To make the change permanent, add the following line to /etc/sysctl.conf:
fs.file-max = 100000
then apply the change with the command:
$ sysctl -p
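Note that fs.file-max is the system-wide ceiling on open files, while ulimit -n is the per-process limit set through limits.conf, so you may need to raise both. The sysctl value can also be read back directly from /proc to confirm the change (assumes Linux):

```shell
# fs.file-max and /proc/sys/fs/file-max are two views of the same knob;
# reading the file back verifies the setting without needing sysctl.
fmax=$(cat /proc/sys/fs/file-max)
echo "file-max = $fmax"
```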
I've been seeing occasional segmentation faults in glibc on several different Fedora Core 9 Hudson slaves. I've attempted to configure each slave to generate core files and place them in /corefiles, but have had no luck.
Here is what I've done on each Linux slave:
1) Created a core-file storage location
sudo install -m 1777 -d /corefiles
2) Directed the corefiles to the storage location by adding the following to /etc/sysctl.conf
kernel.core_pattern = /corefiles/core.%e-PID:%p-%t-signal_%s-%h
3) Enabled unlimited corefiles for all users by adding the following to /etc/profile
ulimit -c unlimited
Is there some additional Linux magic required, or do I need to do something to the Hudson slave or the JVM?
Thanks for the help
Did you reboot or run sysctl -p (as root) after editing /etc/sysctl.conf?
Also, if I remember correctly, ulimit values are per user, and calling ulimit won't survive a reboot. You should add this to /etc/security/limits.conf:
* soft core unlimited
Or call ulimit in the script that starts Hudson if you don't want everyone to produce core dumps.
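Because ulimit applies only to the calling process and its children, raising it inside the startup script scopes the change to Hudson alone. A minimal sketch of the idea (run in a subshell here so the parent shell is untouched):

```shell
# The raised limit is visible only inside this process tree; the
# write may fail without privileges, hence the 2>/dev/null.
( ulimit -c unlimited 2>/dev/null
  echo "child core limit:  $(ulimit -c)" )
echo "parent core limit: $(ulimit -c)"
```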
I figured this out :-).
The issue is that Hudson invokes the bash shell as a non-interactive shell, which bypasses the ulimit setting in /etc/profile. The solution is to add the BASH_ENV environment variable to the Hudson slaves and set its value to a file that contains ulimit -c unlimited.
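The effect is easy to verify by hand. The file name below is just an example; any file containing the ulimit line works, as long as BASH_ENV points at it when the non-interactive shell starts:

```shell
# Non-interactive bash does not read /etc/profile, but it does source
# the file named by BASH_ENV before running the command, so the limit
# is in place when the job starts.
echo 'ulimit -c unlimited' > /tmp/hudson-env.sh
BASH_ENV=/tmp/hudson-env.sh bash -c 'ulimit -c'
```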