Profiling methods for highly time-sensitive applications - Linux

I am working in an embedded Linux environment, debugging a highly timing-sensitive issue related to the pairing/binding of Zigbee devices.
Our architecture is such that data is read from the Zigbee Front End Module via an SPI interface and then passed from kernel space to user space for processing. The processed data and response are then passed back to kernel space and clocked out over the SPI interface again.
The Zigbee 802.15.4 timing requirement specifies that we need to respond within 19.5 ms, and we frequently have situations where we respond just outside this window, which results in a failure and packet loss on the network.
The Linux kernel is not running with preemption enabled, and it may not be possible to enable preemption either.
My suspicion is that, since the kernel is not preemptible, there is another task/process using the ioctl() interface, and this holds off the Zigbee application just long enough that the 19.5 ms window is exceeded.
I have tried the following tools:
oprofile - not much help here, since it profiles the entire system, and the application is not actually very busy during this time because it moves such small amounts of data
strace - too much overhead; I don't have much experience using it, though, so maybe the output can be refined. The overhead affects performance so much that the application does not function at all
Are there any other lightweight methods of profiling a system like this?
Is there any way to catch when an ioctl call is pending on another task/thread (assuming this is the root cause of the issue)?

Good question.
Here's an idea. Don't think of it as profiling.
Think of catching it in the act.
I would investigate creating a watchdog timer to go off after the 19.5 ms interval.
Whenever you are successful, reset the timer.
That way, it will only go off when there's a failure.
At that point, I would try to take a stack sample of the process, or possibly another process that might be blocking it.
That's an adaptation of this technique.
It will take some work, but I'd be surprised if there's any tool that will tell you exactly what's going on, short of an in-circuit-emulator.
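A minimal sketch of that watchdog in user space, assuming POSIX timers are available (link with -lrt on older glibc) and using hypothetical hook names - zigbee_arm_watchdog, zigbee_response_sent and the signal choice are illustrative, not from the poster's code:

    /*
     * Rough sketch of the "catch it in the act" watchdog. The timer is
     * armed when a frame arrives and disarmed when the response goes out,
     * so the handler only fires when the deadline is missed.
     */
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define DEADLINE_NS 19500000L          /* the 19.5 ms response window */

    static timer_t watchdog;

    /* Fires only on a missed deadline: this is the moment to grab evidence
     * (e.g. read /proc/<pid>/stack of suspect tasks, stop a trace, log a
     * backtrace). fprintf is not async-signal-safe; a real handler should
     * use write(2). */
    static void watchdog_expired(int sig)
    {
        (void)sig;
        fprintf(stderr, "deadline missed - sample the system now\n");
    }

    void zigbee_arm_watchdog(void)         /* call when a frame arrives */
    {
        struct itimerspec its = {
            .it_value    = { .tv_sec = 0, .tv_nsec = DEADLINE_NS },
            .it_interval = { 0, 0 },       /* one shot */
        };
        timer_settime(watchdog, 0, &its, NULL);
    }

    void zigbee_response_sent(void)        /* call on success: disarm/reset */
    {
        struct itimerspec off = { { 0, 0 }, { 0, 0 } };
        timer_settime(watchdog, 0, &off, NULL);
    }

    int zigbee_watchdog_init(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = watchdog_expired;
        sigaction(SIGRTMIN, &sa, NULL);

        struct sigevent sev;
        memset(&sev, 0, sizeof(sev));
        sev.sigev_notify = SIGEV_SIGNAL;
        sev.sigev_signo  = SIGRTMIN;
        return timer_create(CLOCK_MONOTONIC, &sev, &watchdog);
    }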

LTTng is the tool you are looking for. Like OProfile, it profiles the entire system, but you will be able to see exactly what is going on with each process and kernel thread, in a timeline fashion. You will be able to view the interaction of the threads and the scheduler around the point of interest, that is, when you miss your Zigbee deadline. You may have to get clever and use some method of triggering (or, more likely, stopping) the LTTng trace once you've detected the missed packet, or you might get lucky and catch it right away just using the command-line tools to start and stop tracing.
You may have to do some work to get there; for example, you'll have to invest some time and energy in 1) enabling your kernel to run LTTng if it doesn't have it already, and 2) learning how to use it. It is a powerful tool, and useful for a variety of profiling and analysis tasks. Most commercial embedded Linux vendors have complete end-to-end LTTng products and configurations if you have that option. If not, you should be able to find plenty of useful help and examples online. LTTng has been around for a very long time! Happy hunting!

Related

Is it possible to read the instruction pointer of a thread without stopping the tracee?

I am considering writing an application-specific, sampling-based profiler on Linux. The ptrace API, if I understand the man page correctly, relies on instrumentation in the kernel that stops the tracee whenever certain events happen in the kernel.
Is there a way to read the instruction pointer of a thread (from another thread on another core) without stopping the process?
First, the instruction pointer alone is useless for profiling, no matter how application-specific.
Look at the second answer on this post for a discussion of all the related issues.
Second, to get any useful information out of a thread, you do have to stop it long enough to read the information, and then it can start up again.
(Notice that this is what happens whenever the thread services an interrupt of any kind.)
Don't think you need a large number of samples (or that your sampling has to be fast for that reason).
That's a long-standing, widely accepted idea (and one taught by people who should know better), and it is without foundation, statistical or otherwise.
(Academics might want to look here.)
Third, take a look at lsstack.
If you want to write your own profiler, it would be a good code base to start from.
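To illustrate the "stop it long enough to read the information" point (and not as a claim that the instruction pointer alone is enough), here is a rough sketch using ptrace; it assumes x86-64 Linux with glibc, and sample_ip is a made-up helper name:

    /*
     * Attach to a thread, read its registers (including the instruction
     * pointer), and detach so it runs again. regs.rip is x86-64 specific.
     */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/user.h>
    #include <sys/wait.h>

    int sample_ip(pid_t tid)
    {
        if (ptrace(PTRACE_ATTACH, tid, NULL, NULL) == -1)
            return -1;
        waitpid(tid, NULL, __WALL);               /* wait for it to stop */

        struct user_regs_struct regs;
        if (ptrace(PTRACE_GETREGS, tid, NULL, &regs) == 0)
            printf("tid %d stopped at ip=0x%llx\n", (int)tid,
                   (unsigned long long)regs.rip);

        ptrace(PTRACE_DETACH, tid, NULL, NULL);   /* let it run again */
        return 0;
    }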

Linux process stuck on wait for event even when event occurs

I am seeing very strange system behavior in a few places, which can be described in short: there is a process, either in user or kernel space, which waits for an event, and although the event occurs, the process does not wake up.
I will describe this below, but since the problem appears in many different places (at least 4), I am starting to look for a system-wide problem rather than a local one, something like a preemption flag (already checked, and not the problem) that would make the difference.
The system is Linux running on a Freescale i.MX6, which is brand new and still in the beta phase. The same code works well on many other Linux systems.
The system runs 2 separate processes. One shows video using GStreamer, playing from a file, using a new image processor which has never been used before. If this process runs alone, the system can run overnight.
The other process works with a digital tuner connected over USB. It only gets the device version in a loop; again, when running alone it can run overnight.
If these 2 processes run at the same time on the system, one gets stuck within a few minutes. If we change the test parameters (like the timing of the periodic get-version), the other process gets stuck instead.
The processes always get stuck waiting for an event (either wait_event_interruptible in a kernel driver, or pthread_cond_wait in user space). The event itself occurs, and I have logs showing that, but the process does not wake up.
Trying to kill that process turns it into a zombie. I managed to find one place with a very specific timing problem, where a condition check was misplaced and could cause this kind of hang if the process was switched out at just the wrong point. Fixing it solved one problem, and I ran into another with the same characteristics. In any case, the bug that was found could not explain why this happens so often; it could explain a theoretical bug that would hang once in a very long while, but not this fast.
In other words, something in the system causes this to show up very quickly even if the underlying bug is real. Again, this code (except for the display driver, which is new) works on other systems, and even on the same system when each process runs alone. The processes are not related and do not work with one another; what they have in common is the machine they run on.
It probably has something to do with system resources (memory use is 100M out of 1G, CPU usage is 5%), scheduler behavior, or something in the system configuration. Does anyone have ideas about what could cause this kind of problem?
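For reference, the misplaced condition check mentioned above is the classic lost-wakeup pattern with condition variables. A minimal sketch of the safe user-space idiom, with the predicate set and checked under the same mutex and re-tested in a loop (all names here are purely illustrative):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static bool event_pending = false;

    void wait_for_event(void)
    {
        pthread_mutex_lock(&lock);
        while (!event_pending)               /* re-check: wakeups can be spurious */
            pthread_cond_wait(&cond, &lock); /* atomically unlocks and sleeps */
        event_pending = false;
        pthread_mutex_unlock(&lock);
    }

    void signal_event(void)
    {
        pthread_mutex_lock(&lock);           /* set the flag under the same lock */
        event_pending = true;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }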
If it's a brand new port of Linux, then it may be that you actually have a real kernel bug - or a hardware bug if it's new hardware.
However, you need really good evidence (strace, ftrace, and perhaps even some instrumentation of the relevant kernel code) to show this to someone who can actually fix the problem. I'm guessing, since you are asking this question the way you are, that you are not a regular kernel hacker.
Sorry if this isn't really the answer you were looking for.

Record thread events

Suppose I need to peek at a thread's state at regular intervals and record its state across the whole execution of a program. I wouldn't know how to start thinking about this. Any pointers (pun?)? I'm on Linux, using gcc, pthreads and C, and have access to all the usual Linux tools. Basically, I guess I'm asking how to build a simple profiler for threads that will tell me how long a thread has been in some or other state during the execution of the program.
I want to be able to create graphs like Threadscope does. The X axis is time, the Y axis is core/thread number and the "colors" are state: green means running, orange is garbage collection, and so on. Does this make more sense now?
For a Linux-specific solution, you might like to have a look at /proc/<pid>/stat and /proc/<pid>/task/<tid>/stat for process and thread statistics, respectively. Have a look at the proc(5) manual page for a full description of all the fields there (online at http://man7.org/linux/man-pages/man5/proc.5.html - search for /proc/[pid]/stat). Specifically, at least the fields utime and stime are of interest to you. These are monotonically increasing times, so you need to remember the previously measured values to be able to compute the time spent in the process/thread during a given time slice, in order to produce the data for your graphs. (This is how top(1) works.)
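A minimal sketch of reading those two fields for one thread (error handling trimmed; the values are in clock ticks, so divide by sysconf(_SC_CLK_TCK) to get seconds):

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    int read_thread_times(pid_t pid, pid_t tid,
                          unsigned long *utime, unsigned long *stime)
    {
        char path[64], buf[1024];
        snprintf(path, sizeof(path), "/proc/%d/task/%d/stat", (int)pid, (int)tid);

        FILE *f = fopen(path, "r");
        if (!f)
            return -1;
        size_t n = fread(buf, 1, sizeof(buf) - 1, f);
        fclose(f);
        buf[n] = '\0';

        /* comm (field 2) may contain spaces, so start after the closing ')';
         * fields 3..13 are skipped, 14 = utime, 15 = stime */
        char *p = strrchr(buf, ')');
        if (!p)
            return -1;
        if (sscanf(p + 1,
                   "%*s %*s %*s %*s %*s %*s %*s %*s %*s %*s %*s %lu %lu",
                   utime, stime) != 2)
            return -1;
        return 0;
    }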
However, having the profiler distinguish different states makes the problem more complicated. How does the profiler determine which state the profiled program is in? It seems to me the profiled program's threads need to signal this to the profiler in some way. You need some kind of tailored solution for this state sharing (unless you can run the different states in different threads and make the distinction that way, which I doubt).
If the state transitions are done in single place (e.g. enter GC and leave GC in your example), then one way would be as follows:
The monitored threads would get the start and end times of the special states by using POSIX function clock_gettime() - with clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &tp) you can get the process time and with clock_gettime(CLOCK_THREAD_CPUTIME_ID, &tp) you can get the thread time (both monotonically increasing, again).
The thread could communicate these timings to the profiler program with some kind of IPC.
If the profiler application knows the thread times of entering and leaving a state, then because it knows the thread time values at the change of measuring slices, it can determine how much of the thread time is spent in the reported states within a reporting time slice (and of course here we need to adjust the start time for a state to equal the start of the next reporting time slice).
The time the whole process has spent on a specific state can be calculated by summing up the thread times for that state.
Note that through /proc/<pid>/stat or /proc/<pid>/task/<tid>/stat, the measurement accuracy is not very good (clock ticks, often units of 10 ms), but I do not know another way of getting timing information from outside the process/thread. The function clock_gettime() gives very accurate times (nominally nanosecond accuracy, but note that at least on some MIPS and ARM systems the accuracy is as bad as with the stat files under /proc, due to the lack of an accurate timer-reading implementation for these clocks in the Linux kernel). You would also need to do some experimentation to make sure these two timing sources really give the same results (by reading both values from the same threads). You can of course use these /proc/.../stat files inside the thread, but the accuracy just is not very good unless you spend a lot of time within a state.
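As a concrete illustration of the clock_gettime() steps above, here is a small sketch in which a thread brackets a special state with CLOCK_THREAD_CPUTIME_ID readings; the IPC to the profiler is left out, and do_state_work/state_name are made-up names:

    #include <stdio.h>
    #include <time.h>

    static long long ns_of(const struct timespec *t)
    {
        return (long long)t->tv_sec * 1000000000LL + t->tv_nsec;
    }

    void do_state_work(void (*work)(void), const char *state_name)
    {
        struct timespec start, end;

        clock_gettime(CLOCK_THREAD_CPUTIME_ID, &start);   /* enter the state */
        work();
        clock_gettime(CLOCK_THREAD_CPUTIME_ID, &end);     /* leave the state */

        /* in a real setup this would be reported to the profiler over IPC */
        printf("%s: %lld ns of thread CPU time\n",
               state_name, ns_of(&end) - ns_of(&start));
    }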
Well, the closest match, using C and GCC, to the profiling info produced by the Haskell compiler and processed by Threadscope is the gprof utility (it's part of GNU binutils).
For it to work correctly with pthreads, you need each thread to trigger some timer initialization function. This can be done without modifying your code with this pthreads wrapper library: http://sam.zoy.org/writings/programming/gprof.html . I haven't dealt with the problem recently; it may be that something has changed and the wrapper isn't needed anymore...
As for a GUI to interpret the profiling results, there is kprof (http://kprof.sourceforge.net). Unfortunately, AFAIK it doesn't produce thread-duration graphs; for that you'll have to build your own solution from the textual info produced by gprof.
If you are not set on using the "standard" solution offered by GCC, you may want to try gperftools: http://code.google.com/p/gperftools/?redir=1 (I didn't try it personally, but have heard good opinions).
Good luck!
Take a look at Intel VTune Amplifier XE (formerly … Intel Thread Profiler) to see if it will meet your needs.
This and other Intel Linux development tools are available free for non-commercial use.
In the video Using the Timeline in Intel VTune Amplifier XE, which shows a timeline of a multi-threaded application, at 9:20 the presenter mentions
"...with the frame API you can programmatically mark certain events or phases in your code. And these marks will appear on the timeline."
I think it will be rather difficult to build a simple profiler, simply because there are many different factors you have to consider, and system profiling is an inherently complex task, made all the more so when you are profiling a multithreaded application. The best advice I can think of is to look at something that already exists, for example OProfile.
One advantage of OProfile is that it is open source, so the source code is available. But beyond this, I suspect that asking how to build a profiling application might be beyond the scope of what someone can answer in a SO question, which might be why this question hasn't gotten very many responses. Hopefully looking at some examples will help you get started, and then, if you have more focused questions, you may get some more detailed responses.

Analyzing and profiling multi-threaded application

We have a multithreaded application that does heavy packet processing across multiple pipeline stages. The application is in C under Linux.
The entire application works fine and has no memory leaks or thread safety issues. However, in order to analyse the application, how can we profile and analyse the threads?
In particular, here is what we are interested in:
the resource usage of each thread
the frequency and timing with which threads contend to acquire locks
the amount of overhead due to synchronization
any bottlenecks in the system
what is the best system throughput we can get
What are the best techniques and tools available for the same?
Take a look at Intel VTune Amplifier XE (formerly … Intel Thread Profiler) to see if it will meet your needs.
This and other Intel Linux development tools are available free for non-commercial use.
In the video Using the Timeline in Intel VTune Amplifier XE a timeline of a multi-threaded application is demonstrated. The presenter uses a graphic display to show lock activity and how to dig down to the source line of the particular lock causing serialization. At 9:20 the presenter mentions "with the frame API you can programmatically mark certain events or phases in your code. And these marks will appear on the timeline."
I worked on a similar system some years ago. Here's how I did it:
Step 1. Get rid of unnecessary time-takers in individual threads. For that I used this technique. This is important to do because the overall messaging system is limited by the speed of its parts.
Step 2. This part is hard work but it pays off. For each thread, print a time-stamped log showing when each message was sent, received, and acted upon. Then merge the logs into a common timeline and study it. What you are looking for is a) unnecessary retransmissions, for example due to timeouts, b) extra delay between the time a message is received and when it is acted upon. This can happen, for example, if a thread has multiple messages in its input queue, some of which can be processed more quickly than others. It makes sense to process those first.
You may need to alternate between these two.
Don't expect this to be easy. Some programmers consider themselves above this kind of dirty work. But you could be pleasantly surprised at how fast you can make the whole thing go.
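A rough sketch of what the per-thread, time-stamped log from Step 2 can look like; each thread appends one line per event to its own log, and the logs are merged offline into a common timeline (log_event and the event names are illustrative, not from the original system):

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    void log_event(FILE *log, const char *event, const char *msg_id)
    {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);   /* one time base for all threads */

        fprintf(log, "%ld.%09ld thread=%lu %s %s\n",
                (long)now.tv_sec, now.tv_nsec,
                (unsigned long)pthread_self(), event, msg_id);
        fflush(log);
    }

    /* typical call sites in a pipeline stage:
     *   log_event(log, "recv",  "msg-1234");
     *   log_event(log, "begin", "msg-1234");
     *   log_event(log, "send",  "msg-1234");
     */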
1) Don't know. There are some profilers available for Linux.
2) If you are pipelining, each stage should be doing sufficient work to ensure that contention on the producer-consumer queues is minimal. You can dig this out with some timings - if a stage takes 10 ms+ to process a packet, you can forget about contention/lock issues. If it takes 100 us, you should consider amalgamating a couple of stages so that each stage does more work.
3) Same as (2), unless there is a separate synchronization issue with some global data or whatever.
4) Dumping/logging the queue counts every second would be useful (see the sketch after this list); the longest queue will sit in front of the stage with the narrowest bottleneck.
5) No idea - don't know how your current system works, what hardware it's running on etc. There are some 'normal' optimizations - eliminating memory-manager calls with object pools, adding extra threads to stages with the heaviest CPU loadings, things like that, but 'what is the best system throughput we can get' - too ethereal to say.
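As a sketch of the queue-count logging suggested in point 4, assuming the pipeline exposes a per-stage depth counter (queue_depth and NUM_STAGES here are hypothetical stand-ins for the real queues):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NUM_STAGES 4

    /* updated by the pipeline: incremented on enqueue, decremented on
     * dequeue; volatile is enough for a rough once-a-second snapshot,
     * use C11 atomics if exact counts matter */
    volatile long queue_depth[NUM_STAGES];

    static void *queue_monitor(void *arg)
    {
        (void)arg;
        for (;;) {
            for (int i = 0; i < NUM_STAGES; i++)
                fprintf(stderr, "stage %d queue depth: %ld\n",
                        i, queue_depth[i]);
            sleep(1);                       /* one snapshot per second */
        }
        return NULL;
    }

    /* start it once at init time:
     *   pthread_t t;
     *   pthread_create(&t, NULL, queue_monitor, NULL);
     */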
Do you have flexibility to develop under Darwin (OSX) and deploy on Linux? The performance analysis tools are excellent and easy to use (Shark and Thread Viewer are useful for your purpose).
There are many Linux performance tools, of course. gprof, Valgrind (with Cachegrind, Callgrind, Massif), and VTune will do what you need.
To my knowledge, there is no tool that will directly answer your questions. However, the answers may be found by cross referencing the data points and metrics from both instrumentation and sampling based solutions.

Performance Evaluation of Linux Scheduler

I have made some simple changes to the scheduler in the Linux kernel. Now I would like to see how those changes affect the response time of the system; in other words, I would like to know how long a context switch takes with my modifications compared to the original scheduler. A straightforward approach would be to use the time stamp counter and then use printk to output the time the context switch took; obviously, in this case a lot of information gets printed out. So I wonder whether there is any other, better approach to measuring the Linux scheduler's response time?
Thanks
There are several kernel-level trace frameworks, which might help you. See the Kernel Trace Systems page on eLinux.org for a nice overview of the available options.
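In addition to the kernel trace frameworks, one low-effort, user-space way to compare the stock and modified schedulers is a pipe ping-pong microbenchmark in the style of lmbench's lat_ctx: two processes pass one byte back and forth, so each round trip is dominated by a pair of context switches. Pinning both processes to one CPU (e.g. with taskset) gives cleaner numbers. This is only a sketch, not a kernel-side measurement:

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define ITERATIONS 100000

    int main(void)
    {
        int ping[2], pong[2];
        char byte = 'x';

        pipe(ping);
        pipe(pong);

        if (fork() == 0) {                      /* child: echo everything back */
            for (;;) {
                if (read(ping[0], &byte, 1) != 1)
                    _exit(0);
                write(pong[1], &byte, 1);
            }
        }

        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < ITERATIONS; i++) {  /* each loop: ~2 context switches */
            write(ping[1], &byte, 1);
            read(pong[0], &byte, 1);
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ns = (end.tv_sec - start.tv_sec) * 1e9
                  + (end.tv_nsec - start.tv_nsec);
        printf("avg round trip: %.0f ns over %d iterations\n",
               ns / ITERATIONS, ITERATIONS);
        return 0;
    }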
