I want to know which process, package, service, or kernel module caused a kworker thread to start.
Plain and simple, but I couldn't find a solution.
Things I tried: pstree; echo workqueue:workqueue_queue_work > /sys/kernel/debug/tracing/set_event, which gives cryptic data on threads and isn't useful; and ps -ef, which shows kthreadd as the parent of all kworkers, so that isn't useful either.
There must be a way, but I couldn't find it.
Let's say I have Chrome running, which has 100 different processes, not all of which are direct children. Given the PID of the main Chrome parent in the hierarchy, what's the best way to programmatically get all of those processes, from either procfs or whatever syscall there may be (I believe getrusage only covers the calling process)?
Also, is there any API equivalent to PSAPI on Windows, which provides OpenProcess, GetProcessMemoryInfo, etc., that lets you iterate through memory information efficiently rather than parsing procfs?
The most efficient way, please. No calling other processes like ps, pstree, pgrep, etc.
Side context: this is mostly an educational exercise to find the most efficient way to do this. I started down this path trying to write a simple script in Node.js to get all the processes programmatically and then calculate the sum of the memory taken by the process tree, including each process.
I'm actually the author of a C++ library that is designed to do exactly that - pfs.
pfs attempts to make all the interesting information inside procfs accessible through a very simple API. If you find it lacking any useful information, please create an issue, and I'll try to add it.
Seeing that you require that information from Node.js, you might be able to use the library for "inspiration" or for research purposes (as in, understand where the information is located).
Regarding the process tree: procfs contains the parent PID of every process (you'll find it in /proc/<pid>/stat and/or /proc/<pid>/status). You can enumerate all the running processes into a container, then iterate over it while building or retaining whatever order you require.
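For illustration, here is a minimal C sketch of that enumeration (this is not the pfs API, just the underlying idea): walk /proc, read each numeric entry's stat file, and extract the parent PID. Building the actual tree out of the pid -> ppid pairs is left to the caller.

/* Sketch: print pid -> ppid for every process, by reading /proc/<pid>/stat. */
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    DIR *proc = opendir("/proc");
    if (!proc) { perror("opendir /proc"); return 1; }

    struct dirent *ent;
    while ((ent = readdir(proc)) != NULL) {
        if (!isdigit((unsigned char)ent->d_name[0]))
            continue;                 /* only numeric entries are PIDs */

        char path[64], buf[512];
        snprintf(path, sizeof path, "/proc/%s/stat", ent->d_name);
        FILE *f = fopen(path, "r");
        if (!f) continue;             /* process may have exited already */
        if (!fgets(buf, sizeof buf, f)) { fclose(f); continue; }
        fclose(f);

        /* stat format: pid (comm) state ppid ...
         * comm may contain spaces/parens, so parse after the LAST ')'. */
        char *p = strrchr(buf, ')');
        if (!p) continue;
        int ppid;
        if (sscanf(p + 1, " %*c %d", &ppid) == 1)
            printf("pid %s -> ppid %d\n", ent->d_name, ppid);
    }
    closedir(proc);
    return 0;
}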
I'm looking for code examples showing how to use the Linux system call ptrace() to trace the system calls of a process and all its child, grandchild, etc. processes, similar to the behaviour of strace when it is given the fork flag -f.
I'm aware of the alternative of reading the sources of strace, but I'm asking for a clean tutorial first, in the hope of getting a more isolated explanation.
I'm going to use this to implement a fast generic system call memoizer similar to https://github.com/nordlow/strace-memoize but written in a compiled language. The code I want to extend with this logic is my fork of ministrace at https://github.com/nordlow/ministrace/blob/master/ministrace.c
RTFM PTRACE_SETOPTIONS, specifically the PTRACE_O_TRACECLONE, PTRACE_O_TRACEFORK, and PTRACE_O_TRACEVFORK flags. In a nutshell, if you set them on a process, any children it creates will automatically be traced as well.
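A minimal sketch of that setup (not a full tracer; /bin/ls is just a placeholder target and error handling is pared down):

/* Fork a child that requests tracing, then enable the trace-children
 * options on it before letting it run. */
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);  /* let the parent trace us */
        execl("/bin/ls", "ls", (char *)NULL);
        _exit(127);
    }

    int status;
    waitpid(child, &status, 0);                 /* child stops at execve */

    /* The key part: once set, clones/forks/vforks of the child (and of
     * their descendants) are delivered to us as new tracees. */
    ptrace(PTRACE_SETOPTIONS, child, NULL,
           (void *)(PTRACE_O_TRACECLONE |
                    PTRACE_O_TRACEFORK  |
                    PTRACE_O_TRACEVFORK));

    ptrace(PTRACE_CONT, child, NULL, NULL);     /* resume; a real tracer
                                                   would loop on waitpid */
    waitpid(child, &status, 0);
    if (WIFEXITED(status))
        printf("tracee exited with %d\n", WEXITSTATUS(status));
    return 0;
}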
My title is not very explicit, so feel free to change it (I don't really know what to call this).
I use a PHP script to check whether a list of PIDs is still running. My issue is that the PID alone is not enough to identify a process: some other program can get the same PID number later, once mine is over.
So, is there something I can do to verify that a PID is the right one I need to check, and not another process that reused the number?
I thought of hashing /proc/<pid>/cmdline, but even that is not 100% safe (another process could be the same program with the same parameters; it's rare, but possible).
If an example is needed:
I run several instances of wget.
One of them has PID 8426.
Some time later…
I check whether PID 8426 is running. It is, so my PHP script reacts and doesn't check the downloaded file. But in fact the wget with PID 8426 is over, and it's another program that is now running as PID 8426.
If the new program runs for a long time (e.g. a service), my PHP script can wait a long time before checking the downloaded file.
Have you tried an object-oriented approach, where you encapsulate the specific PID number in an object representing that specific program? To accomplish this, you would create a class (say, with the arbitrary name SOURCE) from which these programs can be obtained as objects. Doing so encapsulates all the information (e.g. the PID), including the methods of that specific program, within that program alone, and therefore provides a safer way than doing a hash. Similar methods can be found in the object-oriented programming paradigm of Python.
You can read the binary file that /proc/<pid>/exe points to. The following concept is demonstrated in a shell, but you can probably do the same in any language, including PHP:
$ readlink "/proc/$$/exe"
/bin/bash
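The same check sketched in C, for reference (the PID is taken from argv purely for illustration); comparing the resolved path against the binary you launched helps detect a reused PID:

/* Resolve /proc/<pid>/exe with readlink(2) and print it. */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }

    char path[64], target[4096];
    snprintf(path, sizeof path, "/proc/%s/exe", argv[1]);

    ssize_t n = readlink(path, target, sizeof target - 1);
    if (n < 0) { perror("readlink"); return 1; }  /* gone, or no permission */
    target[n] = '\0';                             /* readlink doesn't NUL-terminate */

    printf("%s -> %s\n", path, target);
    return 0;
}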
I am facing a problem with a shell script. I have a script that runs in an infinite loop, say with PID X. The process runs for 4-5 hours, but then it gets killed automatically. This happens only on some systems that have been running a long time, and sometimes I observe it getting killed after 2 hours as well.
I am not able to find the reason why it goes down or why it gets killed. No one is using the system other than me, and I am running the process as the root user.
Can anyone explain, or suspect a reason for, who is killing the process?
Below is the sample script:
#!/bin/bash
until ./test.tcl; do
echo "Server 'test.tcl' crashed with exit code $?. Respawing.." >&2
done
The test.tcl script runs an infinite loop; it traps signals and does some special operations. But we find that test.tcl is also going down.
So is there any way to capture who kills it, and how?
Enable core dumps on your system; it is the most commonly used method for app crash analysis. I know it is a bit painful to run gdb on a core file, but more often than not you can find something out of it.
Here is a reference link for you: http://www.cyberciti.biz/tips/linux-core-dumps.html
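The usual route is ulimit -c unlimited in the shell before starting the script; as a sketch (my illustration, not something the linked article prescribes), a process can also raise its own core-size limit programmatically before launching the workload:

/* Raise the core-dump size limit for this process and its children,
 * so a crash leaves a core file behind for gdb. Needs privilege if the
 * hard limit is lower; the OP runs as root, so that is fine here. */
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY };
    if (setrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("setrlimit(RLIMIT_CORE)");
        return 1;
    }
    printf("core dumps enabled for this process tree\n");
    return 0;  /* exec the real workload from here in practice */
}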
Another way is to trace your script with strace -p PID-X. Note that this will slow down your system, especially over several hours as in your case, but it can serve as a last resort.
Hope the above is helpful to you.
Better to check all the signals generated and caught by the OS for that specific script at the time; one of those signals may be what is killing your process.
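A minimal C sketch of that idea: install handlers that log the signal number before the process dies. Note that SIGKILL and SIGSTOP cannot be caught, so if something sends SIGKILL this will stay silent.

/* Log terminating signals before exiting, to see which signal is
 * taking the process down. */
#include <signal.h>
#include <unistd.h>

static void log_signal(int sig) {
    /* Only async-signal-safe calls belong here. */
    char msg[] = "caught signal 00\n";
    msg[14] = '0' + sig / 10;
    msg[15] = '0' + sig % 10;
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(128 + sig);
}

int main(void) {
    struct sigaction sa = { .sa_handler = log_signal };
    sigemptyset(&sa.sa_mask);
    /* Catch the usual suspects; extend as needed. */
    int sigs[] = { SIGHUP, SIGINT, SIGTERM, SIGSEGV, SIGBUS, SIGABRT };
    for (unsigned i = 0; i < sizeof sigs / sizeof *sigs; i++)
        sigaction(sigs[i], &sa, NULL);

    for (;;) pause();  /* stand-in for the real workload */
}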
Anytime I have a badly behaving process (pegging the CPU, frozen, or otherwise acting strangely), I generally kill it, restart it, and hope it doesn't happen again.
If I wanted to explore and understand the problem (i.e. debug someone else's broken program as it's running), what are my options?
I know (generally) of things like strace, lsof, dmesg, etc., but I'm not really sure of the best way to start poking around productively.
Does anyone have a systematic approach to getting to the bottom of these issues? Or general suggestions? Or is killing and restarting really the best one can do?
Thanks.
If you have the debugging symbols of the program in question installed, you can attach to it with gdb and look at what is wrong. Start gdb and type attach pid, where pid is the process ID of the program in question (you can find it via top or ps). Then press Ctrl-C to stop it. Typing backtrace gives you the call stack; that is, it tells you which line of code is currently running and which functions called the currently running function.