I have a question related to this one.
I want to (programmatically) measure L3 hits (accesses) and misses on an AMD EPYC 7742 CPU (Zen 2). I run Linux kernel 5.4.0-66-generic on Ubuntu Server 20.04.2 LTS. According to the question linked above, the events rFF04 (L3LookupState) and r0106 (L3CombClstrState) should represent the L3 accesses and misses, respectively. Furthermore, kernel 5.4 should support these events.
However, when measuring with perf, I run into issues. Similar to the question linked above, if I run numactl -C 0 -m 0 perf stat -e instructions,cycles,r0106,rFF04 ./benchmark, I only measure values of 0. If I try numactl -C 0 -m 0 perf stat -e instructions,cycles,amd_l3/r8001/,amd_l3/r0106/, perf complains about "unknown terms". If I use the perf event names, i.e. numactl -C 0 -m 0 perf stat -e instructions,cycles,l3_request_g1.caching_l3_cache_accesses,l3_comb_clstr_state.request_miss, perf outputs <not supported> for these events.
Furthermore, I actually want to measure this using perf's C API. Currently, I set up a perf_event_attr with type PERF_TYPE_RAW and config set to, e.g., 0x8001. How do I get the amd_l3 PMU into my perf_event_attr object? Otherwise it would be equivalent to numactl -C 0 -m 0 perf stat -e instructions,cycles,r0106,rFF04 ./benchmark, which measures undefined values.
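Roughly, my current attempt looks like this (a minimal sketch; error handling, enabling the counter, and reading it back are omitted, and the helper name is mine):

#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Sketch of the current approach: open a raw event on the default (core) PMU.
 * 0x8001 is an L3 event, so this presumably targets the wrong PMU. */
static int open_raw_event(unsigned long long config)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_RAW;   /* core PMU, not amd_l3 */
    attr.config = config;        /* e.g. 0x8001 */
    attr.disabled = 1;
    /* pid = 0 (this process), cpu = -1 (any CPU), group_fd = -1, flags = 0 */
    return (int)syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
}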
Thank you so much for your help.
I am trying to open a network connection through a pair of pseudo-ttys on Linux.
# slattach -v /dev/ptmx
cslip started on /dev/ptmx interface sl0
OK, this was the "creating side" for the pseudo tty.
I can look in /dev/pts and find the new pty there.
If I now try to use slattach on this side as well, I get:
slattach -v /dev/pts/3
slattach: tty_open(/dev/pts/3, RW): Input/output error
I traced with strace:
5505 write(1, "slattach: tty_open: trying to op"..., 46) = 46
5505 open("/dev/pts/3", O_RDWR|O_NONBLOCK) = -1 EIO (Input/output error)
5505 write(2, "slattach: tty_open(/dev/pts/3, R"..., 55) = 55
5505 exit_group(3)
All this happens on different Ubuntu releases; I tested 10.04 and 11.04, and both fail.
What am I doing wrong?
You may want to take a look at the man page pty(7).
Basically, /dev/ptmx uses the Unix 98 pseudo-terminal interface and requires that your program use grantpt(3) and unlockpt(3). Here, slattach (the one that opens /dev/ptmx, not the other one) doesn't do so, and any program that tries to open the slave pseudo-terminal associated with the master will fail, as you experienced.
You can force slattach to call grantpt() and unlockpt() by interposing on the open() call with an external routine; see this example.
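The interposition itself is just an LD_PRELOAD shim. Here is a minimal sketch under that assumption (the file name shim.c and the build line are mine, and error handling is omitted):

#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdlib.h>
#include <string.h>

/* Wrap open(): when slattach opens /dev/ptmx, immediately grant and unlock
 * the slave side so the second slattach can open /dev/pts/N without EIO. */
int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    mode_t mode = 0;
    int fd;

    if (!real_open)
        real_open = (int (*)(const char *, int, ...)) dlsym(RTLD_NEXT, "open");

    if (flags & O_CREAT) {
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }

    fd = real_open(path, flags, mode);
    if (fd >= 0 && strcmp(path, "/dev/ptmx") == 0) {
        grantpt(fd);
        unlockpt(fd);
    }
    return fd;
}

Build it with gcc -shared -fPIC -o shim.so shim.c -ldl and run the master side as LD_PRELOAD=./shim.so slattach -v /dev/ptmx.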
I am running kernel 2.6.33.7.
Previously, I was running v2.6.18.x. On 2.6.18, the flush processes were named pdflush.
After upgrading to 2.6.33.7, the flush processes are named "flush-<n>:<m>".
For example, currently I see flush process "flush-8:32" popping up in top.
In doing a google search to try to determine an answer to this question, I saw examples of "flush-8:38", "flush-8:64" and "flush-253:0" just to name a few.
I understand what the flush process itself does; my question is: what is the significance of the numbers at the end of the process name? What do they represent?
Thanks
Those are device numbers, used to identify block devices. A kernel thread may be spawned to handle a particular device.
(On one of my systems, block devices are currently numbered as shown below. They may change from boot to boot or hotplug to hotplug.)
$ grep ^ /sys/class/block/*/dev
/sys/class/block/dm-0/dev:254:0
/sys/class/block/dm-1/dev:254:1
/sys/class/block/dm-2/dev:254:2
/sys/class/block/dm-3/dev:254:3
/sys/class/block/dm-4/dev:254:4
/sys/class/block/dm-5/dev:254:5
/sys/class/block/dm-6/dev:254:6
/sys/class/block/dm-7/dev:254:7
/sys/class/block/dm-8/dev:254:8
/sys/class/block/dm-9/dev:254:9
/sys/class/block/loop0/dev:7:0
/sys/class/block/loop1/dev:7:1
/sys/class/block/loop2/dev:7:2
/sys/class/block/loop3/dev:7:3
/sys/class/block/loop4/dev:7:4
/sys/class/block/loop5/dev:7:5
/sys/class/block/loop6/dev:7:6
/sys/class/block/loop7/dev:7:7
/sys/class/block/md0/dev:9:0
/sys/class/block/md1/dev:9:1
/sys/class/block/sda/dev:8:0
/sys/class/block/sda1/dev:8:1
/sys/class/block/sda2/dev:8:2
/sys/class/block/sdb/dev:8:16
/sys/class/block/sdb1/dev:8:17
/sys/class/block/sdb2/dev:8:18
/sys/class/block/sdc/dev:8:32
/sys/class/block/sdc1/dev:8:33
/sys/class/block/sdc2/dev:8:34
/sys/class/block/sdd/dev:8:48
/sys/class/block/sdd1/dev:8:49
/sys/class/block/sdd2/dev:8:50
/sys/class/block/sde/dev:8:64
/sys/class/block/sdf/dev:8:80
/sys/class/block/sdg/dev:8:96
/sys/class/block/sdh/dev:8:112
/sys/class/block/sdi/dev:8:128
/sys/class/block/sr0/dev:11:0
/sys/class/block/sr1/dev:11:1
/sys/class/block/sr2/dev:11:2
You should also be able to figure this out by searching for those numbers in /proc/self/mountinfo, e.g.:
$ grep 8:32 /proc/self/mountinfo
25 22 8:32 / /var rw,relatime - ext4 /dev/mapper/sysvg-var rw,barrier=1,data=ordered
This has the side benefit of working with nfs as well:
$ grep 0:73 /proc/self/mountinfo
108 42 0:73 /foo /mnt/foo rw,relatime - nfs host.domain.com:/volume/path rw, ...
Note, the data I included here is fabricated, but the mechanism works just fine.
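If you want to resolve a number programmatically rather than by grepping, the same major:minor pair comes straight out of stat(2). A minimal sketch (the path argument is just an example):

#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>

/* Print the major:minor of the device holding a path; these are the same
 * numbers that show up in the "flush-<major>:<minor>" thread names. */
int main(int argc, char **argv)
{
    struct stat st;
    const char *path = argc > 1 ? argv[1] : "/var";

    if (stat(path, &st) != 0) {
        perror("stat");
        return 1;
    }
    printf("%s is on device %u:%u\n", path,
           (unsigned)major(st.st_dev), (unsigned)minor(st.st_dev));
    return 0;
}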
I have a processor AT91SAM9G20 running a 2.6 kernel. Watchdog is enabled at bootstrap level and configured for 16 seconds. Watchdog mode register can be configured only once.
When code hangs in the bootstrap, the bootloader, or the kernel, the board reboots. But once the kernel comes up, even though the watchdog is not refreshed by any application, the board is not reset after 16 seconds; it resets only after about 15 minutes.
Who is refreshing the watchdog?
In our case, the watchdog should be controlled by our applications, so that the board can reset if our application hangs.
These are the running processes:
1 root init
2 root [kthreadd]
3 root [ksoftirqd/0]
4 root [watchdog/0]
5 root [events/0]
6 root [khelper]
63 root [kblockd/0]
72 root [ksuspend_usbd]
78 root [khubd]
85 root [kmmcd]
107 root [pdflush]
108 root [pdflush]
109 root [kswapd0]
110 root [aio/0]
740 root [mtdblockd]
828 root [rpciod/0]
982 root [jffs2_gcd_mtd10]
1003 root /sbin/udevd -d
1145 daemon portmap
1158 dbus dbus-daemon --system
1178 root /usr/sbin/ifplugd -i eth0 -fwI -u0 -d5 -l -q
1190 root /usr/sbin/ifplugd -i eth1 -fwI -u0 -d5 -l -q
1221 default avahi-daemon: running [SP14.local]
1226 root /usr/sbin/dropbear
1246 root /root/bin/host_app
1254 root /root/bin/mini_httpd -c *.cgi -d /root/bin -u root -E /root/bin/
1256 root -sh
1257 root /sbin/syslogd -n -m 0
1258 root /sbin/klogd -n
1259 root /usr/bin/tail -f /var/log/messages
1265 root ps -e
We are using the soft-lockup watchdog available in kernel-2.6.25-ts.at91sam9g20/kernel/softlockup.c.
If you enabled the watchdog driver in your kernel, that driver sets up a kernel timer in charge of resetting the watchdog. The corresponding code is linux/drivers/watchdog/at91sam9_wdt.c. So it works like this:
If no application opens the /dev/watchdog file, then the kernel takes care of resetting the watchdog. Since it is a timer, it won't appear as a dedicated kernel thread; it is handled in softirq context. Now, if an application opens this file, it becomes responsible for the watchdog and can reset it by writing to the file, as described in the documentation linked in Richard's post.
Is the watchdog driver configured in your kernel?
If not, you should configure it, and see if the reset still happens. If it still happens, it is likely that your reset comes from somewhere else.
If your kernel is too old to have a proper watchdog driver (it is not present in 2.6.25), you should backport it from 2.6.28. Or you can try to disable the watchdog in your bootloader and see if the reset still occurs.
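For reference, the userspace side of taking over the watchdog is small. Here is a minimal sketch of the standard /dev/watchdog interface (the 5-second feed interval is just an example):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/watchdog.h>

/* Once this process opens /dev/watchdog, the kernel stops feeding the
 * watchdog and this loop becomes responsible for it: if the loop stops,
 * the board resets after the hardware timeout (16 s here). */
int main(void)
{
    int fd = open("/dev/watchdog", O_WRONLY);
    if (fd < 0) {
        perror("open /dev/watchdog");
        return 1;
    }
    for (;;) {
        ioctl(fd, WDIOC_KEEPALIVE, 0);  /* feed the dog */
        sleep(5);                       /* well inside the 16 s timeout */
    }
    /* A clean shutdown would write 'V' before close() ("magic close")
     * so the watchdog is disabled instead of firing. */
}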
In July 2016, commit 3fbfe92647 ("watchdog: change watchdog_need_worker logic") in the 4.7 kernel changed watchdog_dev.c to enable the same behavior as shodanex's answer for all watchdog timer drivers. This doesn't seem to be documented anywhere other than this thread and the source code.
/*
 * A worker to generate heartbeat requests is needed if all of the
 * following conditions are true.
 * - Userspace activated the watchdog.
 * - The driver provided a value for the maximum hardware timeout, and
 *   thus is aware that the framework supports generating heartbeat
 *   requests.
 * - Userspace requests a longer timeout than the hardware can handle.
 *
 * Alternatively, if userspace has not opened the watchdog
 * device, we take care of feeding the watchdog if it is
 * running.
 */
return (hm && watchdog_active(wdd) && t > hm) ||
       (t && !watchdog_active(wdd) && watchdog_hw_running(wdd));
This may give you a hint: http://www.mjmwired.net/kernel/Documentation/watchdog/watchdog-api.txt
It makes perfect sense to have a user space daemon handling the watchdog. It probably defaults to a 15 minute timeout.
We had a similar problem with the WDT on an AT91SAM9263. The problem was bit 29, WDIDLEHLT, of the WDT_MR register (address 0xFFFFFD44). This bit was set to 1, but it should be 0 for our application's needs.
Bit explanation from datasheet documentation:
• WDIDLEHLT: Watchdog Idle Halt
0: The Watchdog runs when the system is in idle mode.
1: The Watchdog stops when the system is in idle state.
This means that the WDT counter does not advance while the kernel is in the idle state, hence the delay of 15 minutes or more until the reset happens.
You can try "dd if=/dev/zero of=/dev/null", which will prevent the kernel from entering the idle state, and you should get a reset within 16 seconds (or whatever period you have set in the WDT_MR register).
So the solution is to update the U-Boot code, or whatever other piece of code sets the WDT_MR register. Remember, this register is write-once...
Wouldn't the kernel be refreshing the watchdog timer? The watchdog is designed to reset the board if the whole system hangs, not just a single application.
A colleague once told me that when everything else has failed while debugging on Linux, the last resort is strace.
I tried to learn the science behind this strange tool, but I am not a sysadmin guru and I didn't really get anywhere.
So,
What is it exactly and what does it do?
How and in which cases should it be used?
How should the output be understood and processed?
In brief, in simple words, how does this stuff work?
Strace Overview
strace can be seen as a lightweight debugger. It allows a programmer/user to quickly find out how a program is interacting with the OS. It does this by monitoring system calls and signals.
Uses
Good for when you don't have source code or don't want to be bothered to really go through it.
Also, useful for your own code if you don't feel like opening up GDB, but are just interested in understanding external interaction.
A good little introduction
Here is a gentle introduction to using strace to debug process hangs: strace introduction
In simple words, strace traces all system calls issued by a program along with their return codes. Think things such as file/socket operations and a lot more obscure ones.
It is most useful if you have some working knowledge of C, since the system calls here correspond closely to standard C library calls.
Let's say your program is /usr/local/bin/cough. Simply use:
strace /usr/local/bin/cough <any required argument for cough here>
or
strace -o <out_file> /usr/local/bin/cough <any required argument for cough here>
to write into 'out_file'.
All strace output goes to stderr (beware: the sheer volume of it often calls for redirection to a file). In the simplest cases, your program will abort with an error and you'll be able to see what its last interactions with the OS were in the strace output.
More information should be available with:
man strace
strace lists all system calls done by the process it's applied to. If you don't know what system calls mean, you won't be able to get much mileage from it.
Nevertheless, if your problem involves files, paths, or environment values, running strace on the problematic program, redirecting the output to a file, and then grepping that file for your path/file/env string may help you see what your program is actually attempting to do, as distinct from what you expected it to do.
Strace stands out as a tool for investigating production systems where you can't afford to run these programs under a debugger. In particular, we have used strace in the following two situations:
Program foo seems to be in deadlock and has become unresponsive. This could be a target for gdb; however, we haven't always had the source code, or sometimes we were dealing with scripting languages that weren't straightforward to run under a debugger. In this case, you run strace on an already running program and get the list of system calls being made. This is particularly useful if you are investigating a client/server application or an application that interacts with a database.
Investigating why a program is slow. In particular, we had just moved to a new distributed file system and the throughput of the system was very slow. You can run strace with the '-T' option, which tells you how much time was spent in each system call. This helped determine why the file system was causing things to slow down.
For an example of analyzing using strace see my answer to this question.
I use strace all the time to debug permission issues. The technique goes like this:
$ strace -e trace=open,stat,read,write gnome-calculator
Where gnome-calculator is the command that you want to run.
strace -tfp PID will monitor the system calls of the process with the given PID, so we can debug/monitor our process or program's status.
Strace can be used as a debugging tool, or as a primitive profiler.
As a debugger, you can see how given system calls were called, executed and what they return. This is very important, as it allows you to see not only that a program failed, but WHY a program failed. Usually it's just a result of lousy coding not catching all the possible outcomes of a program. Other times it's just hardcoded paths to files. Without strace you get to guess what went wrong where and how. With strace you get a breakdown of a syscall, usually just looking at a return value tells you a lot.
Profiling is another use. You can use it to time the execution of each syscall individually, or as an aggregate. While this might not be enough to fix your problems, it will at least greatly narrow down the list of potential suspects. If you see a lot of fopen/close pairs on a single file, you are probably opening and closing that file unnecessarily on every iteration of a loop, instead of opening and closing it outside the loop.
ltrace is strace's close cousin, and also very useful. You must learn to work out where your bottleneck is. If the total execution takes 8 seconds and you spend only 0.05 s in system calls, then stracing the program is not going to do you much good; the problem is in your code, which usually means a logic problem, or the program actually needs to take that long to run.
The biggest problem with strace/ltrace is reading their output. If you don't know how the calls are made, or at least the names of the syscalls/functions, it's going to be difficult to decipher the meaning. Knowing what the functions return can also be very beneficial, especially the different error codes. While it's a pain to decipher, they sometimes really return a pearl of knowledge; once I saw a situation where I had run out of inodes but not out of free space, so all the usual utilities didn't give me any warning; I just couldn't make a new file. Reading the error code from strace's output pointed me in the right direction.
Minimal runnable example
If a concept is not clear, there is a simpler example that you haven't seen that explains it.
In this case, that example is the Linux x86_64 assembly freestanding (no libc) hello world:
hello.S
.text
.global _start
_start:
/* write */
mov $1, %rax /* syscall number */
mov $1, %rdi /* stdout */
mov $msg, %rsi /* buffer */
mov $len, %rdx /* buffer len */
syscall
/* exit */
mov $60, %rax /* syscall number */
mov $0, %rdi /* exit status */
syscall
msg:
.ascii "hello\n"
len = . - msg
GitHub upstream.
Assemble and run:
as -o hello.o hello.S
ld -o hello.out hello.o
./hello.out
Outputs the expected:
hello
Now let's use strace on that example:
env -i ASDF=qwer strace -o strace.log -s999 -v ./hello.out arg0 arg1
cat strace.log
We use:
env -i ASDF=qwer to control the environment variables: https://unix.stackexchange.com/questions/48994/how-to-run-a-program-in-a-clean-environment-in-bash
-s999 -v to show fuller information on the logs
strace.log now contains:
execve("./hello.out", ["./hello.out", "arg0", "arg1"], ["ASDF=qwer"]) = 0
write(1, "hello\n", 6) = 6
exit(0) = ?
+++ exited with 0 +++
With such a minimal example, every single character of the output is self evident:
execve line: shows how strace executed hello.out, including CLI arguments and environment as documented at man execve
write line: shows the write system call that we made. 6 is the length of the string "hello\n".
= 6 is the return value of the system call, which as documented in man 2 write is the number of bytes written.
exit line: shows the exit system call that we've made. There is no return value, since the program quit!
More complex examples
The real application of strace is of course to see which system calls complex programs are actually making, to help debug/optimize your program.
Notably, most system calls that you are likely to encounter in Linux have glibc wrappers, many of them from POSIX.
Internally, the glibc wrappers use inline assembly more or less like this: How to invoke a system call via sysenter in inline assembly?
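To make that concrete, here is a minimal sketch of a raw x86_64 write syscall from C with inline assembly; this is not glibc's actual code, just an illustration of what such a wrapper boils down to:

/* Invoke write(fd, buf, count) directly with the syscall instruction. */
static long my_write(int fd, const void *buf, unsigned long count)
{
    long ret;
    asm volatile("syscall"
                 : "=a"(ret)           /* return value comes back in rax */
                 : "a"(1L),            /* 1 = __NR_write on x86_64 */
                   "D"((long)fd),      /* arg1 in rdi */
                   "S"(buf),           /* arg2 in rsi */
                   "d"(count)          /* arg3 in rdx */
                 : "rcx", "r11", "memory");  /* clobbered by syscall */
    return ret;
}

int main(void)
{
    my_write(1, "hello\n", 6);
    return 0;
}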
The next example you should study is a POSIX write hello world:
main.c
#define _XOPEN_SOURCE 700
#include <unistd.h>
int main(void) {
char *msg = "hello\n";
write(1, msg, 6);
return 0;
}
Compile and run:
gcc -std=c99 -Wall -Wextra -pedantic -o main.out main.c
./main.out
This time, you will see that a bunch of system calls are made by glibc before main to set up a nice environment for main.
This is because we are now not using a freestanding program, but rather a more common glibc program, which allows for libc functionality.
Then, at the very end, strace.log contains:
write(1, "hello\n", 6) = 6
exit_group(0) = ?
+++ exited with 0 +++
So we conclude that the write POSIX function uses, surprise!, the Linux write system call.
We also observe that return 0 leads to an exit_group call instead of exit. Ha, I didn't know about this one! This is why strace is so cool. man exit_group then explains:
This system call is equivalent to exit(2) except that it terminates not only the calling thread, but all threads in the calling process's thread group.
And here is another example where I studied which system call dlopen uses: https://unix.stackexchange.com/questions/226524/what-system-call-is-used-to-load-libraries-in-linux/462710#462710
Tested in Ubuntu 16.04, GCC 6.4.0, Linux kernel 4.4.0.
Strace is a tool that tells you how your application interacts with your operating system.
It does this by telling you what OS system calls your application uses and with what parameters it calls them.
So, for instance, you can see which files your program tries to open, and whether the call succeeds.
You can debug all sorts of problems with this tool. For instance, if an application says that it cannot find a library that you know you have installed, strace will tell you where the application is looking for that file.
And that is just the tip of the iceberg.
strace is a good tool for learning how your program makes various system calls (requests to the kernel) and also for seeing which ones have failed, along with the error value associated with that failure. Not all failures are bugs. For example, code that is trying to search for a file may get an ENOENT (No such file or directory) error, but that may be an acceptable scenario in the logic of the code.
One good use case for strace is debugging race conditions during temporary file creation. For example, a program that creates files by appending the process ID (PID) to some predetermined string may run into problems in multi-threaded scenarios. [Using PID+TID (process ID + thread ID), or better, an interface such as mkstemp, will fix this.]
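For reference, here is a minimal mkstemp sketch (the template path is just an example); the XXXXXX suffix is replaced atomically, so there is no name race to exploit:

#define _XOPEN_SOURCE 700
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char template[] = "/tmp/myapp-XXXXXX";  /* example template */
    int fd = mkstemp(template);             /* creates and opens a unique file */
    if (fd < 0) {
        perror("mkstemp");
        return 1;
    }
    printf("created %s\n", template);
    close(fd);
    unlink(template);                       /* clean up for the demo */
    return 0;
}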
It is also good for debugging crashes. You may find this (my) article on strace and debugging crashes useful.
Here are some examples of how I use strace to dig into websites. Hope this is helpful.
Check for time to first byte like so:
time php index.php > timeTrace.txt
See what percentage of actions are doing what. Lots of lstat and fstat could be an indication that it's time to clear the cache:
strace -s 200 -c php index.php > traceLstat.txt
Output a full trace (Fulltrace.txt) so you can see exactly what calls are being made:
strace -Tt -o Fulltrace.txt php index.php
Use this to check whether anything took between 0.1 and 0.9 seconds to load:
cat Fulltrace.txt | grep "[<]0.[1-9]" > traceSlowest.txt
See what missing files or directories got caught in the strace. This will output a lot of stuff involving our system - the only relevant bits involve the customer's files:
strace -vv php index.php 2>&1 | sed -n '/= -1/p' > traceFailures.txt
I liked some of the answers that say strace shows how your program interacts with the operating system.
That is exactly what we can see: the system calls. If you compare strace and ltrace, the difference becomes more obvious.
$>strace -c cd
Desktop Documents Downloads examples.desktop Music Pictures Public Templates Videos
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
0.00 0.000000 0 7 read
0.00 0.000000 0 1 write
0.00 0.000000 0 11 close
0.00 0.000000 0 10 fstat
0.00 0.000000 0 17 mmap
0.00 0.000000 0 12 mprotect
0.00 0.000000 0 1 munmap
0.00 0.000000 0 3 brk
0.00 0.000000 0 2 rt_sigaction
0.00 0.000000 0 1 rt_sigprocmask
0.00 0.000000 0 2 ioctl
0.00 0.000000 0 8 8 access
0.00 0.000000 0 1 execve
0.00 0.000000 0 2 getdents
0.00 0.000000 0 2 2 statfs
0.00 0.000000 0 1 arch_prctl
0.00 0.000000 0 1 set_tid_address
0.00 0.000000 0 9 openat
0.00 0.000000 0 1 set_robust_list
0.00 0.000000 0 1 prlimit64
------ ----------- ----------- --------- --------- ----------------
100.00 0.000000 93 10 total
On the other hand, there is ltrace, which traces library functions.
$>ltrace -c cd
Desktop Documents Downloads examples.desktop Music Pictures Public Templates Videos
% time seconds usecs/call calls function
------ ----------- ----------- --------- --------------------
15.52 0.004946 329 15 memcpy
13.34 0.004249 94 45 __ctype_get_mb_cur_max
12.87 0.004099 2049 2 fclose
12.12 0.003861 83 46 strlen
10.96 0.003491 109 32 __errno_location
10.37 0.003303 117 28 readdir
8.41 0.002679 133 20 strcoll
5.62 0.001791 111 16 __overflow
3.24 0.001032 114 9 fwrite_unlocked
1.26 0.000400 100 4 __freading
1.17 0.000372 41 9 getenv
0.70 0.000222 111 2 fflush
0.67 0.000214 107 2 __fpending
0.64 0.000203 101 2 fileno
0.62 0.000196 196 1 closedir
0.43 0.000138 138 1 setlocale
0.36 0.000114 114 1 _setjmp
0.31 0.000098 98 1 realloc
0.25 0.000080 80 1 bindtextdomain
0.21 0.000068 68 1 opendir
0.19 0.000062 62 1 strrchr
0.18 0.000056 56 1 isatty
0.16 0.000051 51 1 ioctl
0.15 0.000047 47 1 getopt_long
0.14 0.000045 45 1 textdomain
0.13 0.000042 42 1 __cxa_atexit
------ ----------- ----------- --------- --------------------
100.00 0.031859 244 total
Although I have checked the manuals several times, I haven't found the origin of the name strace, but it is likely "system-call trace", since that seems obvious.
There are three bigger notes to make about strace.
Note 1: Both of these tools, strace and ltrace, use the system call ptrace, so the ptrace system call is effectively how strace works.
The ptrace() system call provides a means by which one process (the
"tracer") may observe and control the execution of another process
(the "tracee"), and examine and change the tracee's memory and
registers. It is primarily used to implement breakpoint debugging
and system call tracing.
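To make Note 1 concrete, here is a minimal sketch of the mechanism strace is built on (x86_64 register layout assumed, tracing ls is just an example, and every syscall shows up twice because there is a stop at both entry and exit):

#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);  /* child: ask to be traced */
        execlp("ls", "ls", NULL);               /* the tracee to observe */
        return 1;
    }

    int status;
    waitpid(child, &status, 0);                 /* stopped after execve */
    while (!WIFEXITED(status)) {
        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, child, NULL, &regs);
        printf("syscall %llu\n", (unsigned long long)regs.orig_rax);
        ptrace(PTRACE_SYSCALL, child, NULL, NULL);  /* run to next syscall stop */
        waitpid(child, &status, 0);
    }
    return 0;
}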
Note 2: There are different parameters you can use with strace, since it can be very verbose. I like to experiment with -c, which gives a summary of things. Based on that, you can then select a single system call with something like -e trace=open, where you will see only that call. This can be interesting if you are examining which files are opened during the command you are tracing.
And of course you can use grep for the same purpose, but note that you need to redirect stderr, like 2>&1 | grep etc, to see which config files under /etc are referenced when the command is issued.
Note 3: I find this a very important note: you are not limited to a specific architecture. strace will blow your mind, since it can trace binaries of different architectures.