As we all know, we can get the RAM currently used by a process in Linux using commands like ps, top and vmstat, or by reading the pseudo-filesystem /proc. But how can I get the same information in FreeRTOS, where there are no such commands and no file system?
First, there is no process context in an RTOS. In FreeRTOS there are tasks (which are analogous to threads in Linux) and the main context, which is lost once the scheduler is started. The stack memory occupied by each task is specified by the caller at task creation.
However, once the system is running you can query how close each task's stack has come to its limit (its "high water mark") by using the following API.
UBaseType_t uxTaskGetStackHighWaterMark( TaskHandle_t xTask );
Please refer to https://www.freertos.org/uxTaskGetStackHighWaterMark.html
Remember that INCLUDE_uxTaskGetStackHighWaterMark must be set to 1 in FreeRTOSConfig.h to use this feature.
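As a minimal sketch (the worker handle, the delay and the printf logging are placeholder choices for this example, not part of the original answer), a low-priority monitor task could periodically report another task's high water mark:

#include "FreeRTOS.h"
#include "task.h"
#include <stdio.h>

/* Handle of the task being watched; assumed to be filled in at xTaskCreate() time. */
static TaskHandle_t xWorkerHandle = NULL;

/* Periodically print how many stack words the worker task has never used.
   The closer this value gets to zero, the closer the task is to overflowing. */
static void vStackMonitorTask( void *pvParameters )
{
    ( void ) pvParameters;

    for( ;; )
    {
        UBaseType_t uxHighWaterMark = uxTaskGetStackHighWaterMark( xWorkerHandle );
        printf( "worker stack headroom: %u words\n", ( unsigned ) uxHighWaterMark );
        vTaskDelay( pdMS_TO_TICKS( 1000 ) );
    }
}

Passing NULL instead of a handle queries the calling task itself.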
For heap memory I assume you are using one of the FreeRTOS heap allocation schemes (heap_1, heap_2, etc.). In that case, if you have globally overridden malloc/free/new/new[]/delete/delete[] to use the FreeRTOS pvPortMalloc/vPortFree, there is a way to register a hook function that gets called when the system runs out of heap.
Refer https://www.freertos.org/a00016.html
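For illustration, here is a hedged sketch of such a hook. It assumes one of the heap_1/heap_2/heap_4/heap_5 schemes and configUSE_MALLOC_FAILED_HOOK set to 1 in FreeRTOSConfig.h; what you actually do inside the hook is application-specific:

#include "FreeRTOS.h"
#include "task.h"

/* Called by pvPortMalloc() when it cannot satisfy an allocation request. */
void vApplicationMallocFailedHook( void )
{
    /* xPortGetFreeHeapSize() reports how much of the FreeRTOS heap remains;
       heap_4 and heap_5 additionally offer xPortGetMinimumEverFreeHeapSize(). */
    size_t xFreeNow = xPortGetFreeHeapSize();
    ( void ) xFreeNow;    /* inspect in a debugger, or log it */

    taskDISABLE_INTERRUPTS();
    for( ;; )
    {
        /* Trap here so the failure can be examined. */
    }
}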
At the same time, it is possible to retrieve run-time statistics from the scheduler by using the following API.
void vTaskGetRunTimeStats( char *pcWriteBuffer );
Of course, this suspends and resumes the scheduler frequently, so it is not a real solution for production code, but it is still a good debugging aid.
Refer https://www.freertos.org/rtos-run-time-stats.html.
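A minimal usage sketch (the buffer size and the printf output are arbitrary choices; configGENERATE_RUN_TIME_STATS and configUSE_STATS_FORMATTING_FUNCTIONS must be set to 1, and the port must supply portCONFIGURE_TIMER_FOR_RUN_TIME_STATS() and portGET_RUN_TIME_COUNTER_VALUE()):

#include "FreeRTOS.h"
#include "task.h"
#include <stdio.h>

/* The formatted table needs roughly 40 bytes per task; 512 is a guess. */
static char cStatsBuffer[ 512 ];

static void vPrintRunTimeStats( void )
{
    /* Fills the buffer with one line per task: name, absolute run time,
       and percentage of total run time. */
    vTaskGetRunTimeStats( cStatsBuffer );
    printf( "Task          Abs time      %% time\n" );
    printf( "%s\n", cStatsBuffer );
}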
There are plenty of programs available that can be used to measure context switch time from user space, but all of them carry overhead of their own, such as the cost of the clock_gettime() calls and of the read/write operations on a pipe.
Is it possible to measure context switch time in Linux kernel space, where that overhead is not present?
Maybe two global variables could be added in a kernel module to store the time when the context_switch function is entered and the time when it finishes.
The challenge I see with this approach is that context_switch can be called by any process and from any core.
Is it feasible or advisable to add something to struct task_struct or struct rq?
I am using Ubuntu 16.04.
If you want to check the delay of a context switch between threads (not including thread execution time), then, depending on the kernel configuration, you can refer to:
__schedule : scheduler main API
preempt_schedule_common
schedule
preempt_schedule_context
preempt_schedule_irq
However, it would be hard to calculate the exact delay, since the entire scheduling path does not appear to be executed with local interrupts disabled. If you do not disable local interrupts, your delay calculation will include ISR service time.
__schedule() disables local interrupts only around some specific critical sections. In any case, the core of the scheduling work is the __schedule() API.
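To make this concrete, here is a hedged sketch of a small kernel module that timestamps entry to __schedule() per CPU and reads that timestamp again in finish_task_switch(), which runs in the incoming task right after the switch. Both symbol names are kernel internals, not a stable API: they may be renamed or blacklisted for kprobes on some kernel versions, and the probes themselves add overhead, so treat this as a debugging sketch rather than a reference implementation.

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/ktime.h>
#include <linux/percpu.h>

/* Per-CPU timestamp taken when the scheduler is entered. Using a per-CPU
   variable (instead of the two global variables suggested above) answers
   the "any core" concern: each core measures its own switch. */
static DEFINE_PER_CPU(u64, sched_entry_ns);

static atomic64_t total_ns = ATOMIC64_INIT(0);
static atomic64_t samples  = ATOMIC64_INIT(0);

/* Fires when a core enters __schedule(). */
static int schedule_entry(struct kprobe *p, struct pt_regs *regs)
{
    this_cpu_write(sched_entry_ns, ktime_get_ns());
    return 0;
}

/* Fires in the incoming task right after the context switch completes. */
static int switch_done(struct kprobe *p, struct pt_regs *regs)
{
    u64 start = this_cpu_read(sched_entry_ns);

    if (start) {
        /* Only accumulate counters here: printing from the scheduler
           path is not safe. Results are reported at module unload. */
        atomic64_add(ktime_get_ns() - start, &total_ns);
        atomic64_inc(&samples);
        this_cpu_write(sched_entry_ns, 0);
    }
    return 0;
}

static struct kprobe kp_schedule = {
    .symbol_name = "__schedule",
    .pre_handler = schedule_entry,
};

static struct kprobe kp_finish = {
    .symbol_name = "finish_task_switch",
    .pre_handler = switch_done,
};

static int __init csw_init(void)
{
    int ret = register_kprobe(&kp_schedule);

    if (ret)
        return ret;
    ret = register_kprobe(&kp_finish);
    if (ret)
        unregister_kprobe(&kp_schedule);
    return ret;
}

static void __exit csw_exit(void)
{
    long long n = atomic64_read(&samples);

    unregister_kprobe(&kp_schedule);
    unregister_kprobe(&kp_finish);
    pr_info("context switches observed: %lld, average %lld ns\n",
            n, n ? (long long)atomic64_read(&total_ns) / n : 0);
}

module_init(csw_init);
module_exit(csw_exit);
MODULE_LICENSE("GPL");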
I'm trying to debug some performance issues with pthreads on Linux, and I think sched_getcpu() may be lying to me. It reports a constant CPU for each thread, whereas profiling experiments suggest the threads actually migrate from one core to another during their lifetime.
I wonder if sched_getcpu() just reports the first CPU the thread started running on and is oblivious to thread migration. Has anyone else noticed this, or seen any evidence that the return value of sched_getcpu() can change? If it is not reliable, are there other methods for tracking the current CPU (CPUID, maybe)?
http://man7.org/linux/man-pages/man2/getcpu.2.html indicates sched_getcpu() is just a wrapper for getcpu().
http://man7.org/linux/man-pages/man2/getcpu.2.html suggests that the information provided is accurate, because an old caching option is no longer used:
The tcache argument is unused since Linux 2.6.24...it specified a
pointer to a caller-allocated buffer in thread-local storage that was
used to provide a caching mechanism for getcpu(). Use of the cache
could speed getcpu() calls, at the cost that there was a very small
chance that the returned information would be out of date. The caching
mechanism was considered to cause problems when migrating threads
between CPUs, and so the argument is now ignored.
So unless you are using a pre-2.6.24 kernel it seems unlikely you could be seeing old/cached information.
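One way to check this yourself is a tiny test program (a sketch; the iteration count is arbitrary) that busy-samples sched_getcpu() and prints every migration it observes. If the printed CPU changes, the call is clearly reporting the CPU at the time of the call rather than a cached startup value:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    int last = -1;
    long i;

    for (i = 0; i < 50000000L; i++) {
        int cpu = sched_getcpu();   /* current CPU of the calling thread */

        if (cpu != last) {
            printf("iteration %ld: now on CPU %d\n", i, cpu);
            last = cpu;
        }
    }
    return 0;
}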
Calling sched_getcpu has two problems:
It only tells you where the thread is running when it executes the call,
Calling a system routine could cause a thread to migrate.
If you are using the Intel OpenMP runtime you could set KMP_AFFINITY=verbose, as it will print the same information (formatted differently) on stderr when the program executes its first parallel section.
I have multiple instances of a particular process running on my system. At some point during execution, some of the internal data structures get overwritten with invalid data. This happens in random instances at random intervals. Is there a way to debug this other than by setting memory access breakpoints? Also, is it possible to set a memory access breakpoint on all these processes simultaneously without starting a separate instance of gdb for each process? The processes run on an x86_64 Linux system with a 2.6 kernel.
If you haven't already done so, I would recommend using Valgrind (http://valgrind.org). It can detect many types of memory bugs, including memory over/under-runs, memory leaks, double frees, etc.
Also, is it possible to set a memory access breakpoint on all these processes simultaneously without starting a separate instance of gdb for each process?
I don't think gdb can set breakpoints for all the processes in one go. As far as I know, you have to attach to each process separately and set the breakpoints.
For memory errors, valgrind is much more useful than GDB.
Assuming the instances you are talking about are forked or spawned from a single parent, you don't need separate instances of valgrind.
Just use valgrind --trace-children=yes
See http://man7.org/linux/man-pages/man1/valgrind.1.html
As to your question on GDB, an instance can debug one process at a time only.
You can only debug one process per gdb session. If your program forks, gdb follows the parent process unless follow-fork-mode is set otherwise.
see: http://www.delorie.com/gnu/docs/gdb/gdb_26.html
If you have memory problems it is even possible to run valgrind in combination with gdb, or to use another memory-debugging library such as Electric Fence (efence). Efence replaces some library calls, e.g. malloc/free, with its own functions. Both efence and valgrind then use the MMU to catch invalid memory accesses. This is typically done by adding some space before and after each allocated memory block; if this spare memory is accessed by your application, the library (efence) or valgrind stops execution. In combination with gdb you will be pointed to the source line that accesses the forbidden memory area.
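To make the guard-page idea concrete, here is a minimal sketch of the technique (not efence's actual code): the allocation is placed so it ends right before a PROT_NONE page, so an overrun faults immediately and gdb points at the offending line. Alignment and underrun detection are ignored for brevity.

#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Allocate 'size' bytes followed immediately by an inaccessible guard page. */
static void *guarded_malloc(size_t size)
{
    long page = sysconf(_SC_PAGESIZE);
    size_t payload = ((size + page - 1) / page) * page;   /* round up to pages */
    size_t total   = payload + page;                      /* plus guard page */
    unsigned char *base;

    base = mmap(NULL, total, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return NULL;

    /* Make the trailing page inaccessible: touching it raises SIGSEGV. */
    if (mprotect(base + payload, page, PROT_NONE) != 0) {
        munmap(base, total);
        return NULL;
    }

    /* Place the block so its last byte sits right before the guard page. */
    return base + (payload - size);
}

int main(void)
{
    char *buf = guarded_malloc(64);

    memset(buf, 0, 64);   /* in bounds: fine */
    buf[64] = 'x';        /* one-byte overrun: faults on the guard page */
    return 0;
}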
Having multiple processes requires multiple instances of gdb, which in practice is no real problem.
I know the process ID of process X. After my process was preempted and then scheduled again, can I determine whether process X was scheduled in between?
Can I know whether process X updated the cache memory, given its process ID?
Is there assembly code or an API to do this in Linux? Can anyone suggest coding examples or a technique?
It is not a "process" which access the CPU cache. It is any execution of machine instruction on the CPU core.
In particular, when a core is running in kernel mode, it is by definition not running in a process, and it is obviously using the CPU cache (since every memory access goes thru the cache)
So your question does not have any sense, if you speak of the CPU cache.
The file system cache (sometimes called the page cache) is managed by the kernel, and you can't really attribute it to some specific process (e.g. two processes reading the same file would use the same cached data). It is related to the mere action of accessing a file's data (by whatever process does so). See e.g. linuxatemyram.
You might perhaps get some system-wide measure of the CPU cache or file system cache, probably through proc(5) (see also oprofile).
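What you can get per process are hardware cache-miss counts via perf_event_open(2). The sketch below is adapted from the pattern shown in its man page; counter availability depends on the CPU, and reading another PID needs sufficient privileges or a permissive perf_event_paranoid setting. It counts cache misses for a given PID over one second:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <linux/perf_event.h>

/* glibc provides no wrapper for this system call. */
static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(int argc, char **argv)
{
    pid_t pid = (argc > 1) ? (pid_t)atoi(argv[1]) : 0;  /* 0 = this process */
    struct perf_event_attr attr;
    long long misses;
    int fd;

    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_CACHE_MISSES;  /* last-level cache misses */
    attr.disabled = 1;
    attr.exclude_kernel = 1;
    attr.exclude_hv = 1;

    fd = perf_event_open(&attr, pid, -1, -1, 0);
    if (fd == -1) {
        perror("perf_event_open");
        return 1;
    }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    sleep(1);                                   /* observe for one second */
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    if (read(fd, &misses, sizeof(misses)) == sizeof(misses))
        printf("cache misses over 1 s for pid %d: %lld\n", (int)pid, misses);

    close(fd);
    return 0;
}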
Can I know whether process X updated the cache memory, given its process ID?
If you are talking about the CPU cache, then no. Both the data cache and the instruction cache are transparent to software; there is no way to find out whether they were updated by a program X. But they will certainly be used to speed up execution.
My server has several CPUs (0-7). I need to run parallel code, with each process affiliated to one CPU, so how do I find out which CPU each process is on?
For example, if two processes (#0 and #1) exist, process #0 might use CPU 5 and process #1 CPU 7.
How do I determine that by programming in C or Fortran?
Use the sched_getcpu() call.
Keep in mind that a process/thread can be scheduled freely to run on any available CPU/core, so one of your processes could run on core 1 one second and on core 2 the next millisecond. You can restrict which processors a process is allowed to run on with sched_setaffinity().
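For example (a sketch: CPU 5 is an arbitrary choice, assuming the 8-CPU machine from the question):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    printf("currently running on CPU %d\n", sched_getcpu());

    /* Pin the calling process to CPU 5; from now on the scheduler
       may only run it there. */
    CPU_ZERO(&set);
    CPU_SET(5, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("after pinning: running on CPU %d\n", sched_getcpu());
    return 0;
}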
I'm not aware of any system call on Linux that will give you general information about which CPU a thread is running on. #nos is correct that sched_getcpu() will tell you which CPU a thread is running on, but only for the calling context.
You can do this by querying the /proc file system. However, if you find yourself building your application around this functionality, it is likely that you need to reexamine your design.
The file /proc/<pid>/stat contains a field that gives you the last CPU the process ran on; you just need to parse the output (use man proc to see the field list).
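A hedged parsing sketch (field numbering per proc(5); the helper name is made up for this example). Note that parsing must start after the closing ')', because the comm field can itself contain spaces and parentheses:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Return the CPU a process last ran on (field 39, "processor", of
   /proc/<pid>/stat as documented in proc(5)), or -1 on error. */
static int last_cpu_of(pid_t pid)
{
    char path[64], buf[4096];
    FILE *f;
    char *p;
    int field, cpu = -1;

    snprintf(path, sizeof(path), "/proc/%d/stat", (int)pid);
    f = fopen(path, "r");
    if (!f)
        return -1;
    if (!fgets(buf, sizeof(buf), f)) {
        fclose(f);
        return -1;
    }
    fclose(f);

    p = strrchr(buf, ')');              /* skip "pid (comm" safely */
    if (!p)
        return -1;

    /* After ')', fields 3..N are space separated; walk to field 39. */
    p = strtok(p + 1, " ");
    for (field = 3; p != NULL && field < 39; field++)
        p = strtok(NULL, " ");
    if (p)
        cpu = atoi(p);
    return cpu;
}

int main(int argc, char **argv)
{
    pid_t pid = (argc > 1) ? (pid_t)atoi(argv[1]) : getpid();

    printf("pid %d last ran on CPU %d\n", (int)pid, last_cpu_of(pid));
    return 0;
}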
In general it is the task of the operating system to abstract such things from applications.
Normally I see my applications (as simple as doing a grep on a huge file) change CPU core every once in a while.
Now if you want to force an application on a specific core you can manually set the CPU affinity.
I've written some pretty strange software in the past and I've never had the desire to know and/or control this.
Why would you want to know?
More generally, why do you want to know? The Linux kernel is very good at scheduling processes/threads to make the best use of the available cores.
Generally, you have to set the CPU affinity if you want to keep a process on a particular CPU, because a process can otherwise migrate between processors: CPU Affinity (Linux Journal, 2003).