Currently, I am taking the long way of doing this: first I get a list of processes using the following command
sudo ps -e -o pid= -o command= | grep -v grep | awk '{print $1}' > pids.txt   # pid= suppresses the header line, which would otherwise put the literal text PID into pids.txt
Then I iterate through the process IDs, run strace on each one in the background, and write one log per process with the process ID in the log file's extension:
filename="$1"
while read -r line
do
chmod +x straceProgram.sh
./straceProgram.sh $line &
done < "$filename"
straceProgram.sh
pid="$1"
sudo strace -p $pid -o log.$pid
However, the problem with this approach is that any process started later will not be straced, since strace only attaches to the process IDs that were stored in pids.txt during the first run.
I could keep updating pids.txt with new process IDs, but I was curious whether strace can be run at the operating-system level so that all activity being performed is traced. Is there a better way to do this?
If your resulting filesystem is going to be a kernel filesystem driver, I would recommend using tracefs to gather the information you require. I would recommend against making this a kernel filesystem unless you have a lot of time and a lot of testing resources; it is not trivial.
If you want an easier, safer alternative, write your filesystem using FUSE. The downside is that performance is not quite as good, and there are a few places where it cannot be used, but it is often acceptable. Note that there is already an implementation of a logging filesystem under FUSE.
Use the strace -f option (follow forks); I also suggest -s 9999 so that long strings in the output are not truncated.
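For example, combining both options with the per-pid helper script from the question (log.$pid mirrors the naming used there):
sudo strace -f -s 9999 -p "$pid" -o "log.$pid"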
I am trying to locate a file used by a binary during its execution. Using strace helps, but it's way too convoluted; combined with grep it is good enough, but is there a utility which can dump only the files used by a binary?
You can try using:
lsof -p <PID of the running process>
lsof -c ssh would show all files opened by processes whose command names start with ssh.
Or try ltrace, or maybe fuser.
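If the process is short-lived, an attached lsof may miss it. A minimal sketch, with mybinary as a hypothetical stand-in for the binary you are inspecting:
./mybinary &   # start the hypothetical binary in the background
lsof -p $!     # $! expands to the pid of the most recent background job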
I've seen strace used with some complex grep piping, but it all depends on what exactly the end goal is.
You can also use the -e option in strace to filter; an example is:
sudo strace -t -e trace=open,close,read,getdents,write,connect,accept whoami >/dev/null
and grep from there.
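For instance, a minimal sketch that dumps only the files a command opens; the 2>&1 is needed because strace writes to stderr, and limiting the filter to open/openat is an assumption about which syscalls your binary actually uses:
strace -e trace=open,openat ls 2>&1 >/dev/null | grep -o '"[^"]*"' | sort -u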
I am using ssconvert (Gnumeric) to convert large Excel files into separate CSV files. Most files work; however, with some of the larger files that have additional formatting, the process dies abruptly with the message 'Killed'.
ssconvert -S '/tmp/inputfile.xlsx' '/tmp/output.csv'
Is there any special handling for larger files that can be used?
If the user or sysadmin did not kill the program, the kernel may have. The kernel only kills a process under exceptional circumstances, such as extreme resource starvation (the out-of-memory killer).
dmesg | grep -E -i -B100 'killed process'
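On systemd-based systems (an assumption about your distribution), the same kernel messages can also be read from the journal:
journalctl -k | grep -i -B100 'killed process'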
I am trying to see which system calls happen when I run a command, but it seems the commands after | are not shown. For example:
strace -f cat a.txt | cat
I thought strace with the -f parameter would show the whole pipeline. I think the last part runs in a child process created by fork. Why is it not traced, and how can I make it show up?
From the strace manual (emphasis mine).
-f Trace child processes as they are created by
currently traced processes as a result of the fork(2),
vfork(2) and clone(2) system calls.
The traced process in your case is the first cat process. The second cat process is not a child of the first cat process. The fork is done by the shell.
One way to achieve what you want is to trace the shell:
strace -f bash -c "cat a.txt| cat"
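If you would rather have each process's system calls in a separate file instead of one interleaved stream, strace's -ff option together with -o writes one output file per pid:
strace -ff -o trace bash -c "cat a.txt | cat"
This produces trace.<pid> files, one for the shell and one for each cat.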
If I run top -p $(pgrep -d',' scrapy) I get information on the scrapy process, but this process probably spawns other Python-related processes. How can I get information on those processes as well, in real time, as the top command does?
What you're looking for is a program or script that will gather the CPU usage of all child processes spawned by scrapy.
If you wanted to script this yourself, you could look at the output of ps -p {scrapy pid} -L to get all the threads spawned by the instantiation of scrapy.
Or, you could chain together a couple Linux commands to have a one-liner:
ps -C scrapy -o pcpu= | awk '{cpu_usage+=$1} END {print cpu_usage}'
ps:
-C selects processes by the given command name
-o pcpu= tells ps to display only the CPU usage column (the trailing = suppresses the header line)
awk:
{cpu_usage+=$1} accumulates the CPU usage from each line of the ps output
END {print cpu_usage} then sends the sum to STDOUT after the last line has been read.
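To get the same sum refreshed in real time, the way top does it, one option is to wrap the one-liner in watch (the 1-second interval below is arbitrary):
watch -n 1 "ps -C scrapy -o pcpu= | awk '{cpu_usage+=\$1} END {print cpu_usage}'"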
In userspace Linux, I have a process blocking on a semaphore, as found by strace. Once the error condition occurs, the blocking is repeatable, so there must be another process that holds the semaphore and did not release it.
Is there a way to know which other process is currently holding the semaphore?
ipcs lists the semaphore, so does /proc/sysvipc/sem. Where can I find info on the holding process?
Semaphores aren't mutexes. You don't "hold" them. If the process is blocked, that means it's waiting for someone else to do an "up" or "V" operation on it in the future. There's no kernel tool that will tell you what the future behavior of software will be.
To find the pids associated with the list of Semaphore Arrays listed by ipcs -s you can run this:
for pid in $(
    for semid in $( sudo ipcs -s | awk '/0x/{ print $2 }' ); do
        # the 5th field of the last data line of "ipcs -s -i" is the pid of the last semop
        sudo ipcs -s -i "$semid" | tail -2 | head -1 | awk '{print $5}'
    done | sort -u
); do
    ps uh -p "$pid"
done
There may be an easier way, but you can use the semctl() call with the GETPID cmd. That should return the pid of the process that executed the last semop() call on the semaphore. This may or may not be your rogue process, but it is probably a good hint.
"ipcs -p " can not show the semaphores of the process holding, that must be a bug, or it's a limit because it's hard to show.
You have to query by yourslef.
run "ipcs -s" to get all semid
for each semid run "ipcs -s -i "
for each semnum, to get owner pid,
if the owner pid is you wish, then show the current semid and semnum.
Note: if the process just read semaphores, then you may cannot get such information via ipcs command.
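A minimal sketch of those steps, assuming the procps ipcs output format used above, where the 1st and 5th fields of each data row of "ipcs -s -i" are semnum and pid (TARGET is a hypothetical pid you are looking for):
TARGET=1234   # hypothetical: pid of the suspect process
for semid in $( ipcs -s | awk '/0x/{ print $2 }' ); do
    # print every semaphore in this array whose last-op pid matches TARGET
    ipcs -s -i "$semid" | awk -v id="$semid" -v t="$TARGET" '$5 == t { print "semid", id, "semnum", $1 }'
done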
Did you try
ipcs -p