Linux unable to create core dump from application [closed]

I have two servers running a vendor application. On one server, if the app crashes it creates a core dump, but on the second it does not.
The servers were supposed to be set up identically, but I am trying to figure out why the application on the second server doesn't create a core dump. I've checked all the typical settings and have been doing research with no luck.
The strange part is that if I run kill -s SIGSEGV $$ as my app user, it generates a core dump in the same directory where the app is supposed to create its core dump. The vendor and the Linux group are both unsure at the moment, which is why I'm looking here for help.
$ cat /proc/sys/kernel/core_pattern
core
$ cat /proc/sys/kernel/core_uses_pid
1
$ ulimit -c
unlimited
$ cat /etc/security/limits.conf | grep core
* soft core unlimited
* hard core unlimited
$ cat /etc/profile | grep ulimit
ulimit -c unlimited > /dev/null 2>&1
$ cat /proc/sys/fs/suid_dumpable
0
$ cat /etc/sysconfig/init | grep CORE
DAEMON_COREFILE_LIMIT='unlimited'

There could be several other reasons why the core dump is not created. Check the list of possible reasons in core(5): http://linux.die.net/man/5/core
Check the dmesg output.
Check the specific process's core file size limit in /proc/PID/limits.
Check whether the process user can create a file of typical core dump size in the /proc/PID/cwd directory.
Specify an absolute file path in /proc/sys/kernel/core_pattern, pointing to a known writable location.
Create a short program that adheres to the core-dump-accepting protocol, save it somewhere, and specify it in /proc/sys/kernel/core_pattern, as described in core(5); core dumps piped to programs are not subject to resource limits (a sketch follows below).
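For illustration, here is a hedged sketch of the first two checks and of a core_pattern pipe handler; the handler path /usr/local/bin/core-catcher and the dump directory /var/tmp/cores are assumptions, not values taken from the question.

# Quick checks against the running process (replace <PID> with the app's PID):
grep 'core file' /proc/<PID>/limits     # core size limit actually in effect for that process
dmesg | tail                            # look for segfault/core dump messages after a crash

# Contents of /usr/local/bin/core-catcher (assumed path, made executable):
#!/bin/bash
# The kernel runs this with the core image on stdin; %p and %e from core_pattern
# arrive as $1 (PID) and $2 (executable name); see core(5).
mkdir -p /var/tmp/cores
cat > "/var/tmp/cores/core.$1.$2"

# Register the handler; core dumps piped to a program are not subject to the core size limit:
echo '|/usr/local/bin/core-catcher %p %e' > /proc/sys/kernel/core_pattern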

Related

Auditd - auditctl rule to monitor dir only (not all sub dir and files etc..) [closed]

I am trying to use auditd to monitor changes to a directory.
The problem is that when I set up a rule, it monitors not only the directory I specified but also all of its subdirectories and files, making the watch useless due to endless verbosity.
Here is the rule I setup:
auditctl -w /home/raven/public_html -p war -k raven-pubhtmlwatch
When I search the logs using
ausearch -k raven-pubhtmlwatch
I get thousands of lines of log entries listing everything under public_html/.
How can I limit the rule to changes on the directory specified only?
Thank you very much.
A watch is really a syscall rule in disguise. If you place a watch on a directory, auditctl will turn it into:
-a exit,always -F dir=/home/raven/public_html -F perm=war -F key=raven-pubhtmlwatch
The -F dir field is recursive. However, if you just want to watch the directory entries, you can change that to -F path.
-a exit,always -F path=/home/raven/public_html -F perm=war -F key=raven-pubhtmlwatch
This is not recursive and just watches the inode that the directory occupies.
I had to add the rule manually to /etc/audit/audit.rules and then restart auditd using
/etc/init.d/auditd restart
Now the rules are added and it works great!
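For reference, a minimal sketch of what that might look like, assuming the same path and key as above:

# Excerpt from /etc/audit/audit.rules (non-recursive: path= watches only the
# directory's own inode, unlike the recursive dir= field):
-a exit,always -F path=/home/raven/public_html -F perm=war -F key=raven-pubhtmlwatch

# Restart auditd, confirm the rule is loaded, then query by key:
/etc/init.d/auditd restart
auditctl -l | grep raven-pubhtmlwatch
ausearch -k raven-pubhtmlwatch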
All credit goes to Steve at Red Hat, who answered my question on the audit mailing list:
https://www.redhat.com/archives/linux-audit/2013-September/msg00057.html

Lost httpd.conf file located apache [closed]

How can I find where my httpd.conf file is located?
I am running an Ubuntu Linux server from the Amazon Web Services EC2 (Elastic Compute Cloud) and I can't find my Apache config.
Get the path of running Apache
$ ps -ef | grep apache
apache 12846 14590 0 Oct20 ? 00:00:00 /usr/sbin/apache2
Run the binary from that path with the -V argument:
$ /usr/sbin/apache2 -V | grep SERVER_CONFIG_FILE
-D SERVER_CONFIG_FILE="/etc/apache2/apache2.conf"
Reference:
http://commanigy.com/blog/2011/6/8/finding-apache-configuration-file-httpd-conf-location
See http://wiki.apache.org/httpd/DistrosDefaultLayout for a discussion of where you might find Apache httpd configuration files on various platforms, since this can vary from release to release and platform to platform. The most common answer, however, is either /etc/apache/conf or /etc/httpd/conf.
Generically, you can determine the answer by running the command:
httpd -V
(That's a capital V). Or, on systems where httpd is renamed, perhaps apache2ctl -V
This will return various details about how httpd is built and configured, including the default location of the main configuration file.
One of the lines of output should look like:
-D SERVER_CONFIG_FILE="conf/httpd.conf"
which, combined with the line:
-D HTTPD_ROOT="/etc/httpd"
will give you the full path to the default location of the configuration file.
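As a hedged sketch of putting those two values together (assuming the binary is named httpd; use apache2 or apache2ctl where it is renamed):

# Pull HTTPD_ROOT and SERVER_CONFIG_FILE out of `httpd -V` into shell variables,
# then print the full path, prepending HTTPD_ROOT when the config path is relative.
eval "$(httpd -V | awk -F'"' '/HTTPD_ROOT|SERVER_CONFIG_FILE/ {
    gsub(/^ *-D /, "", $1); sub(/=$/, "", $1); print $1 "=" $2 }')"
case "$SERVER_CONFIG_FILE" in
  /*) echo "$SERVER_CONFIG_FILE" ;;
  *)  echo "$HTTPD_ROOT/$SERVER_CONFIG_FILE" ;;
esac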

fork: retry: Resource temporarily unavailable [closed]

I tried installing Intel MPI Benchmark on my computer and I got this error:
fork: retry: Resource temporarily unavailable
Then I received this error again when I ran the ls and top commands.
What is causing this error?
Configuration of my machine:
Dell precision T7500
Scientific Linux release 6.2 (Carbon)
This is commonly caused by running out of file descriptors.
First, there is the system's total file descriptor limit. What do you get from the command:
sysctl fs.file-nr
This returns counts of file descriptors:
<in_use> <unused_but_allocated> <maximum>
To find out what a user's file descriptor limit is, run the commands:
sudo su - <username>
ulimit -Hn
To find out how many file descriptors are in use by a user, run the command:
sudo lsof -u <username> 2>/dev/null | wc -l
So, if you are hitting the system-wide file descriptor limit, you will need to edit your /etc/sysctl.conf file and add (or modify, if it already exists) a line setting fs.file-max to a value large enough for the number of file descriptors you need, then reload the setting or reboot.
fs.file-max = 204708
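A brief sketch of applying and verifying that change without waiting for a reboot (run as root; the 204708 value is just the example above):

echo 'fs.file-max = 204708' >> /etc/sysctl.conf   # persist the new system-wide maximum
sysctl -p                                         # reload /etc/sysctl.conf immediately
sysctl fs.file-nr                                 # the third field should now show the new maximum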
Another possibility is too many threads. We just ran into this error message when running a test harness against an app that uses a thread pool. We used
watch -n 5 -d "ps -L -p <java_pid> | wc -l"
to watch the ongoing count of Linux native threads running within the given Java process ID. After this hit about 1,000 (for us--YMMV), we started getting the error message you mention.
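If threads are the suspect, it may also be worth checking the per-user process limit, which threads count against; a small sketch, with <username> as a placeholder:

ulimit -u                      # maximum user processes; each thread counts against this limit
ps -L -u <username> | wc -l    # rough count of that user's threads (includes a header line)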

what does this shellscript do? [closed]

#! /bin/bash
#
# clear_ram.sh - Clear as much user-space ram as possible
# (until the OOM_killer gets us)
#
swapoff -a
mem=$(free -b | grep Mem | awk '{print $2}')
mount none -t tmpfs -o size=$mem /tmp
dd if=/dev/zero of=/tmp/zero.dat bs=1M &
echo "17" > /proc/$(pidof dd)/oomadj
while (pidof dd); do kill -USR1 $(pidof dd); done
This is a shell script.
What does this code do?
(Not homework.)
This script:
deactivates swap,
obtains the total amount of RAM in bytes,
mounts a tmpfs ramdisk on /tmp sized to that total RAM,
writes zeros to the ramdisk via dd,
attempts to make the dd process the first candidate for the Out Of Memory killer,
and prints the process ID of dd and makes dd report its current I/O statistics (via SIGUSR1) for as long as it keeps running.
I say "attempts" because it should be writing to oom_adj and not oomadj, at least for recent kernels, and because the maximum value is 15 and not 17 (see the corrected sketch below).
There's also a bug here, because it will print the PID and request statistics for every running dd process, not just the one started by the script.
As the comment says, eventually the kernel Out Of Memory killer will kill the process.
I'm pretty sure it's a silly thing to do. I don't know of a reason why you would actually need to zero memory this way.
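A hedged sketch of what the corrected OOM-priority line might look like (oom_adj tops out at 15, and newer kernels prefer oom_score_adj, whose maximum is 1000; this assumes a single dd process, as in the script):

echo 15 > /proc/$(pidof dd)/oom_adj           # legacy interface, 15 = most likely to be killed
echo 1000 > /proc/$(pidof dd)/oom_score_adj   # current interface, 1000 = most likely to be killed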

How do I change the number of open files limit in Linux? [closed]

When running my application I sometimes get an error about too many files open.
Running ulimit -a reports that the limit is 1024. How do I increase the limit above 1024?
Edit
ulimit -n 2048 results in a permission error.
You could always try doing a ulimit -n 2048. This will only reset the limit for your current shell, and the number you specify must not exceed the hard limit.
Each operating system sets the hard limit in a different configuration file. For instance, the hard open file limit on Solaris can be set at boot from /etc/system.
set rlim_fd_max = 166384
set rlim_fd_cur = 8192
On OS X, this same data must be set in /etc/sysctl.conf.
kern.maxfilesperproc=166384
kern.maxfiles=8192
Under Linux, these settings are often in /etc/security/limits.conf.
There are two kinds of limits:
soft limits are simply the currently enforced limits
hard limits mark the maximum value which cannot be exceeded by setting a soft limit
Soft limits can be set by any user, while hard limits can be raised only by root.
Limits are a property of a process. They are inherited when a child process is created, so system-wide limits should be set during system initialization in init scripts, and user limits should be set during user login, for example by using pam_limits.
There are often defaults set when the machine boots. So, even though you may reset your ulimit in an individual shell, you may find that it resets back to the previous value on reboot. You may want to grep your boot scripts for ulimit commands if you want to change the default.
If you are using Linux and you got the permission error, you will need to raise the allowed limit in the /etc/limits.conf or /etc/security/limits.conf file (where the file is located depends on your specific Linux distribution).
For example, to allow anyone on the machine to raise their number of open files up to 10000, add the following line to the limits.conf file:
* hard nofile 10000
Then logout and relogin to your system and you should be able to do:
ulimit -n 10000
without a permission error.
1) Add the following line to /etc/security/limits.conf
webuser hard nofile 64000
then login as webuser
su - webuser
2) Edit the following two files for webuser, appending to .bashrc and .bash_profile by running
echo "ulimit -n 64000" >> .bashrc ; echo "ulimit -n 64000" >> .bash_profile
3) Log out, then log back in and verify that the changes have been made correctly:
$ ulimit -a | grep open
open files (-n) 64000
That's it.
If some of your services are running into ulimit problems, it's sometimes easier to put the appropriate commands into the service's init script. For example, when Apache is reporting
[alert] (11)Resource temporarily unavailable: apr_thread_create: unable to create worker thread
try putting ulimit -s unlimited into /etc/init.d/httpd. This does not require a server reboot.
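A minimal sketch of that approach; the exact placement depends on your distribution's init script, and the nofile line is only relevant if open files are what you are running out of:

# Near the top of /etc/init.d/httpd, before the daemon is started, so the new
# limits are inherited by the httpd processes:
ulimit -s unlimited   # stack size, as suggested above for the apr_thread_create alert
ulimit -n 10000       # open files, matching the limits.conf example earlier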
