How to dump core of an init-spawned process - Linux

I am trying to force core dump of a program. Core dumping is enabled via
ulimit -c unlimited
If my program is launched by the init process, and I kill it like this
kill -6 <pid_of_prog>
I can't find the core.
However, if it is launched from a terminal and I kill it with the above command, then it dumps core. The program calls chdir() to a directory when it is launched, and the core file is found in that directory.

ulimit does not change the limits of an already running process, so my init-launched process is not affected by the ulimit command. I guess the correct answer is to use setrlimit.
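If you don't want to modify the program, util-linux ships prlimit(1), which applies the same setrlimit mechanism to an already running process; a minimal sketch, with <pid_of_prog> as a placeholder:
# Raise the core file size limit (soft and hard) of the running process
prlimit --pid <pid_of_prog> --core=unlimited:unlimited
# Then trigger the dump
kill -6 <pid_of_prog>
If your init is systemd, setting LimitCORE=infinity in the service's unit file achieves the same thing at launch.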

Related

Why kill command does not generate core while gcore can?

I can use gcore to generate a core file of my application, which was built with debug symbols. Out of curiosity, I tried using the kill command to generate the core file, but no core is generated.
Here are the steps I took:
I first ran the following commands:
ulimit -c unlimited
sudo sysctl -w kernel.core_pattern=/tmp/core-%e.%p.%h.%t
Then start the application.
Then I tried SIGABRT, SIGTRAP, and SIGQUIT, but no core file was generated:
kill -SIGABRT `pidof my_app`
kill -SIGTRAP `pidof my_app`
kill -SIGQUIT `pidof my_app`
In all these runs, my_app was stopped, but there is no core file, either locally or in /tmp.
I am using Ubuntu 20.04.
Do you see anything wrong?
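One thing worth verifying: ulimit -c only affects the shell it was run in and processes subsequently started from that shell, so if my_app was launched from a different shell or by a service manager, its own limit may still be 0. A diagnostic sketch:
# Show the core file size limit the running process actually has
grep "Max core file size" /proc/$(pidof my_app)/limits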

How to get pid of mpich processes

I'm running my executable through the mpirun command. I want to get the PIDs of the processes being created so that I can kill them later if required. I'm using MPICH. In Open MPI there is an option, -report-pid, which reports PIDs. Is there anything similar in MPICH?
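I'm not aware of a direct MPICH equivalent of -report-pid, but one workaround (a sketch, assuming MPICH's Hydra launcher, which exports PMI_RANK to each process) is to wrap the executable so every rank records its own PID before exec'ing the real program:
# Each rank writes its PID to /tmp/mpich_pid.<rank>, then execs the real program,
# so the recorded PID remains valid for the lifetime of the rank
mpiexec -n 4 sh -c 'echo $$ > /tmp/mpich_pid.$PMI_RANK; exec ./my_app'
# Later, kill all recorded ranks if required
kill $(cat /tmp/mpich_pid.*)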

Is it possible to pass input to a running service or daemon?

I want to create a Java console application that runs as a daemon on Linux. I have created the application and the script to run it as a background daemon. The application runs and waits for command-line input.
My question:
Is it possible to pass command line input to a running daemon?
On Linux, all running processes have a special directory under /proc containing information about and hooks into the process. Each numeric subdirectory of /proc is the PID of a running process, so if you know the PID of a particular process you can get information about it. E.g.:
$ sleep 100 & ls /proc/$!
...
cmdline
...
cwd
environ
exe
fd
fdinfo
...
status
...
Of note is the fd directory, which contains all the file descriptors associated with the process. 0, 1, and 2 exist for (almost?) all processes, and 0 is the default stdin. So writing to /proc/$PID/fd/0 will write to that process' stdin.
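For example (with the caveat that if fd 0 is a terminal, the text just appears on that terminal rather than being read by the process; 1234 is a placeholder PID):
# Write a line to whatever the process has open as fd 0
echo "some input" > /proc/1234/fd/0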
A more robust alternative is to set up a named pipe connected to your process' stdin; then you can write to that pipe and the process will read it without needing to rely on the /proc file system.
See also Writing to stdin of background process on ServerFault.
The accepted answer above didn't quite work for me, so here's my implementation.
For context, I'm running a Minecraft server as a Linux daemon managed with systemd. I wanted to be able to send commands to its stdin (StandardInput).
First, use mkfifo /home/user/server_input to create a FIFO file somewhere (also known as the 'named pipe' solution mentioned above).
Then, in your daemon's *.service file, execute the bash script that runs your server or background program and set the StandardInput directive to the FIFO file we just created:
[Service]
ExecStart=/usr/local/bin/minecraft.sh
StandardInput=file:/home/user/server_input
In minecraft.sh, the following is the key command that runs the server and pipes input into the console of the running service (tail -f keeps the FIFO open between writes, so the Java process does not see end-of-file whenever a writer closes the pipe):
tail -f /home/user/server_input | java -Xms1024M -Xmx4096M -jar /path/to/server.jar nogui
Finally, run systemctl start your_daemon_service; then, to pass input commands, simply use:
echo "command" > /home/user/server_input
Credit to the answers given on ServerFault.

Upstart: each process on different core

I'm trying to use upstart to launch multiple instances of node.js, each on a separate CPU core and listening on a different port.
Launch configuration:
start on startup
task
env NUM_WORKERS=2
script
for i in `seq 1 $NUM_WORKERS`
do
start worker N=$i
done
end script
Worker configuration:
instance $N
script
export HOME="/node"
echo $$ > /var/run/worker-$N.pid
exec sudo -u root /usr/local/bin/node /node/server.js >> /var/log/worker-$N.sys.log 2>&1
end script
How do I specify that each process should be launched on a separate core to scale node.js inside the box?
taskset allows you to set the CPU affinity of any Linux process. But the Linux kernel already favors keeping a process on the same CPU to optimize performance.
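If you still want to pin each worker explicitly, one approach (a sketch, assuming workers are numbered from 1 and cores from 0) is to wrap the exec line in the worker configuration with taskset:
# Pin worker $N to core $N-1 (taskset is part of util-linux)
exec sudo -u root taskset -c $(($N - 1)) /usr/local/bin/node /node/server.js >> /var/log/worker-$N.sys.log 2>&1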

I don't get coredump with all process

I am trying to get a core dump, so I use:
ulimit -c unlimited
I run my program in the background, and I kill it:
kill -SEGV %1
But I just get:
[1]+ Exit 1 ./Test
And no coredumps are created.
I did the same with other programs and it works, so why doesn't it work with all of them? Can anybody help me?
Thanks. (GNU/Linux, Debian 2.6.26)
If your program traps the SEGV signal and does something else, it won't invoke the OS core dump routine. Check that it doesn't do that.
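You can check this from outside the program by looking at the SigCgt line in /proc, which is a hex mask of the signals the process catches; a diagnostic sketch, assuming the program is started in the background from the same shell:
./Test &
# SIGSEGV is signal 11; if bit 0x400 is set in the SigCgt mask, the program catches it
grep SigCgt /proc/$!/status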
Under Linux, processes which change their user ID using setuid, seteuid, or some other mechanism are excluded from dumping core for security reasons (think of /bin/passwd dumping core while it holds /etc/shadow in memory).
You can re-enable core dumping for Linux programs which change their user ID by calling prctl() after the change of UID.
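If you can't modify the program to call prctl(), a system-wide alternative (a different mechanism, not prctl() itself) is the fs.suid_dumpable sysctl; value 2 ("suidsafe") makes such processes dumpable with cores readable only by root, and the kernel then requires kernel.core_pattern to be an absolute path or a pipe:
# Allow processes that changed UID to dump core; 2 = cores readable by root only
sudo sysctl -w fs.suid_dumpable=2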
Also you might want to check that the program you're running is not changing its working directory ( chdir() ), because then it will create the core file in a different directory than the one you're running it from.
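You can check the working directory of the running process from outside it; a quick sketch, again assuming the program was started in the background from the same shell:
# Resolve the process's current working directory via /proc
readlink /proc/$!/cwd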
And you can try this too:
kill -ABRT pid
Try (as root):
sysctl kernel.core_pattern=core
and then repeat your experiment. On some systems that variable is set to /dev/null by default.
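You can inspect the current value first without changing anything:
cat /proc/sys/kernel/core_pattern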
However, if you see exit status 1, perhaps the program indeed intercepts the signal.
