What are the addresses in core files? (Linux)

This is part of my core file:
[New Thread 30385]
[New Thread 30383]
[New Thread 30381]
[New Thread 30379]
[New Thread 30378]
[New Thread 30270]
[New Thread 30268]
Core was generated by `test'.
Program terminated with signal 11, Segmentation fault.
#0 0x001cd1a6 in ?? ()
Does it mean my program crashes at 0x001cd1a6, or that it crashes while trying to read/write that address?
There is no executable code at that address.
Another thing: it gives a different address every time it crashes.

Does it mean my program crashes at 0x001cd1a6
Yes.
There is no executable code at that address.
Well, that would certainly cause a crash: the CPU faults as soon as it tries to fetch an instruction from an unmapped or non-executable address, which shows up as the SIGSEGV in your output.
Another thing is it gives a different address every time it crashes.
Your program has threads, so its allocation pattern is likely different every time it runs, because the threads are scheduled differently.
Also, Linux uses address randomization, so even a non-threaded program will end up at different addresses when you run it multiple times. On the other hand, GDB disables that randomization, so a non-threaded program run under GDB should crash in the same place every time.
You are likely calling a virtual function on an object that has been invalidated (e.g. deleted). Use the GDB "where" command to find out how you ended up at the invalid address.
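To make that failure mode concrete, here is a minimal C sketch (the struct and function names are made up for the example; a plain function pointer shows the same mechanism as a C++ virtual call, since both are calls through a pointer stored in the object):

#include <stdio.h>
#include <stdlib.h>

/* A struct holding a function pointer: a rough C analogue of a C++
 * object with a vtable. */
struct obj {
    void (*method)(void);
};

static void say_hello(void) { puts("hello"); }

int main(void) {
    struct obj *o = malloc(sizeof *o);
    if (o == NULL)
        return 1;
    o->method = say_hello;
    free(o);      /* object invalidated */
    o->method();  /* undefined behavior: if the freed memory has been
                   * reused, this call jumps to whatever garbage is
                   * stored there, and the core shows a frame like
                   * "#0 0x001cd1a6 in ?? ()" */
    return 0;
}

Whether this actually crashes depends on what the allocator does with the freed block, which is also part of why the faulting address can differ from run to run.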
Also, don't ever name your executable test on UNIX: it conflicts with /usr/bin/test, which many shell scripts use.

Related

Is a coroutine a kind of thread that is managed by the user-program itself (rather than managed by the kernel)?

In my opinion:
The kernel is an alias for a running program whose program text is in the kernel area and which can access all memory spaces;
A process is an alias for a running program whose program has an independent memory space in the user memory area. Which process gets to use the CPU is managed entirely by the kernel;
A thread is an alias for a running program whose program text is in the memory space of a process and which completely shares that memory space with the other threads of the same process. Which thread gets to use the CPU is managed entirely by the kernel;
A coroutine is an alias for a running program whose program text is in the memory space of a process. It is a user thread that the process itself (not the kernel) decides how to schedule; the kernel is only responsible for allocating CPU resources to the process.
Since the process itself has no scheduling authority the way the kernel does, coroutines can only be concurrent, not parallel.
Am I correct in saying the above?
process is an alias for a running program...
The modern way to think of a process is to think of it as a container for a collection of threads and the resources that those threads need to execute.
Every process (except for "zombie" processes that can exist in some systems) must have at least one thread. It also has a virtual address space, open file handles, and maybe sockets and other resources that are shared by the threads.
Thread is an alias for a running program...
The problem with saying that is, "running program" sounds too much like "process," and a thread is most definitely not a process. (E.g., a thread can only exist in a process.)
A computer scientist might tell you that a thread is one particular execution of the application's code. I like to think of a thread as an independent agent who executes the code.
coroutine...is a user thread...
I'm going to mostly leave that one alone. "Coroutine" seems to mean something different from the highly formalized, and not particularly useful, coroutines that I learned about more than forty years ago. What people call "coroutines" today seem to have something in common with what I call "green threads," but there are details of how and when and why they are used that I don't yet understand.
Green threads (a.k.a. "user-mode threads") simply are threads that the kernel doesn't know about. They are pretty much just like the threads that the kernel does know about, except that the kernel scheduler never preempts them because, duh! it doesn't know about them. Context switches between green threads can only happen at specific points where the application allows it (e.g., by calling a yield() function, or by calling some library function that is a documented yield point).
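For illustration, here is a minimal sketch of two cooperating user-mode contexts using the POSIX ucontext API (the function names and the 64 KiB stack size are made up for the example). The kernel scheduler never preempts between them; every switch happens only at an explicit swapcontext() call, which is exactly the kind of documented yield point described above.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, green_ctx;

/* The "green thread": runs until it explicitly yields back. */
static void green_main(void) {
    for (int i = 0; i < 3; i++) {
        printf("green thread: step %d\n", i);
        swapcontext(&green_ctx, &main_ctx);  /* cooperative yield point */
    }
}

int main(void) {
    static char stack[64 * 1024];            /* stack for the green thread */

    getcontext(&green_ctx);
    green_ctx.uc_stack.ss_sp = stack;
    green_ctx.uc_stack.ss_size = sizeof stack;
    green_ctx.uc_link = &main_ctx;           /* where to go if green_main returns */
    makecontext(&green_ctx, green_main, 0);

    for (int i = 0; i < 3; i++) {
        printf("main: resuming green thread\n");
        swapcontext(&main_ctx, &green_ctx);  /* switch chosen by the app, not the kernel scheduler */
    }
    return 0;
}

The interleaving of the output is fixed by where the swapcontext() calls are placed, not by any scheduler decision.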
kernel is an alias for a running program...
The kernel also is most definitely not a process.
I don't know every detail about every operating system, but the bulk of kernel code does not run independently of the applications that the kernel serves. It only runs when an application thread enters kernel mode by making a system call. The thread that runs the kernel code still belongs to the application process, but the code that determines the thread's behavior at that point is written or chosen by the kernel developers, not by the application developers.
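As a tiny illustration of that last point, the sketch below makes one explicit system call. The same application thread that executes main() traps into the kernel, runs the kernel's write path in kernel mode on its own behalf, and then returns to user mode:

#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    /* The thread enters kernel mode here; the kernel's code runs in
     * the context of this same application thread. */
    syscall(SYS_write, 1, "hello from kernel mode\n", 23);
    return 0;
}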

What happens to kernel-level threads when a process ends?

If we have a process with kernel-level threads running, and that process ends, what exactly happens to those threads?
I suppose they end too, but what are the exact steps?
I suppose they end too, but what are the exact steps?
The exact steps are: they simply evaporate into nothing.
More precisely, when the process executes the exit (or, on Linux, exit_group) system call, the OS deschedules any running threads, whatever instruction they are currently on, and then destroys all kernel resources associated with them (memory mappings, file descriptors, etc.).
It's as if the kernel plucks them out of existence. One moment they are executing on CPU or waiting to be scheduled, and the next moment they simply do not exist.
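A small sketch of that teardown (the worker function and sleep durations are made up for the example): the worker below is cut off mid-loop when the main thread calls exit(); no cleanup code in worker() ever runs, and its next printf() simply never happens.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* A worker that would run forever if nothing stopped it. */
static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        printf("worker: still running\n");
        sleep(1);
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    if (pthread_create(&t, NULL, worker, NULL) != 0)
        return 1;
    sleep(2);
    /* exit() (exit_group under the hood on glibc/Linux) ends the whole
     * process: the kernel plucks the worker out of existence,
     * whatever instruction it is on. */
    exit(0);
}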

What's the exact difference between GDB and the actual OS environment for multiprocess programs?

I've been debugging a multi-process job that creates multiple threads at program initialization. I found that while I'm debugging with GDB, the threads can all be set up successfully, but when I execute the program directly in the Linux environment, it gets stuck after only some of the threads have been created. I'm thinking it must be some scheduling problem between thread sleep and wakeup, but I haven't figured it out yet.
And although GDB can create the threads successfully, it quits with an unexpected segmentation fault in a glibc function after a thread kills itself:
res_thread_freeres () at res_init.c:642
642 if (_res.nscount == 0)
which is also weird, because I can check the value of _res.nscount, and it definitely didn't overflow.
So, does anybody have a clue about the execution difference between an actual OS and the GDB debug environment? Thanks!
Update:
I've traced the problem to the pthreads being set to SCHED_FIFO; after I removed that, it works fine. But I'm still not sure why the program works fine in the GDB environment. Actually, the thread state of the program changed the moment it was attached to GDB.
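For reference, here is a minimal sketch of how a SCHED_FIFO thread is typically created with pthreads (the worker function and the priority value are hypothetical, not taken from the question). Two properties of this setup often matter for "works under a debugger, hangs without one": the policy is silently ignored unless inherit-sched is set to explicit, and a runnable SCHED_FIFO thread that never blocks can starve ordinary SCHED_OTHER threads on the same core.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static void *worker(void *arg) {
    (void)arg;
    printf("worker running under SCHED_FIFO\n");
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param param = { .sched_priority = 10 };
    int err;

    pthread_attr_init(&attr);
    /* Without PTHREAD_EXPLICIT_SCHED, the policy set below is silently
     * ignored and the new thread inherits the creator's policy. */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &param);

    err = pthread_create(&tid, &attr, worker, NULL);
    if (err != 0)  /* typically EPERM unless run as root or with CAP_SYS_NICE */
        fprintf(stderr, "pthread_create: %s\n", strerror(err));
    else
        pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}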

How to list threads that were killed by the kernel?

Is there any way to list all the killed processes on a Linux device?
I saw this answer suggesting:
check in:
/var/log/kern.log
but it is not generic. Is there any other way to do it?
What I want to do:
list a thread/process if it got killed. What function in the kernel should I edit to log all the killed tids/pids and their names, or alternatively, is there a sysfs interface that does this anyway?
The opposite of do_fork is do_exit, here:
do_exit kernel source
I'm not able to find where threads exit, other than:
release_task
I believe "task" and "thread" are (almost) synonymous in Linux.
First, task and thread contexts are different in the kernel.
A tasklet (the tasklet API) runs in software-interrupt context (meaning you cannot sleep while you are in tasklet context), while a thread (the kthread API, or the workqueue API) runs its handler in process context (i.e., a sleepable context).
In both cases, if a thread hangs in the kernel, you cannot kill it.
If you run the "ps" command from the shell, you can see it there (normally between "[" and "]" brackets), but any attempt to kill it won't work.
The kernel is trusted code, so such a situation shouldn't happen; if it does, it indicates a kernel (or kernel-module) bug.
Normally the whole machine will hang after a while, because the core running that thread is not responding (you will see a message in /var/log/messages or on the console with more info); in some other cases the machine may survive, but that specific core is dead. It depends on the kernel configuration.

understand GDB output for new thread (Linux systag)

I am currently debugging an application which uses pthreads. When I attach GDB,
it continuously prints messages of this form:
[New Thread a_hex_number (LWP a_dec_number)]
I assume that a_hex_number is an address, but whose address is it?
I assume a_dec_number is a unique identifier for the created thread; is it?
Are my assumptions right?
Can anyone give me more detail about the numbers and their meaning?
I already read this document but I am still having trouble getting the full picture.
Some info about the Linux systags would probably help me a lot.
I assume that a_hex_number is an address, but whose address is it?
It's the address of the thread descriptor (on Linux, also the result of a pthread_self() call).
I assume a_dec_number is a unique identifier for the created thread; is it?
No, it's the thread-id assigned by the kernel to this thread. It's the same id that is visible in ps output (on Linux, clone(2) threads and processes have very few differences at the kernel level).
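To see both numbers from inside a program, here is a minimal sketch (Linux-specific: casting pthread_t to unsigned long is a Linux/glibc assumption, since pthread_t is opaque in portable code). It prints pthread_self(), which corresponds to GDB's hex number, and the kernel thread-id from SYS_gettid, which corresponds to the LWP number.

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *worker(void *arg) {
    (void)arg;
    printf("pthread_self() = %#lx, kernel tid (LWP) = %ld\n",
           (unsigned long)pthread_self(),  /* GDB's hex number */
           (long)syscall(SYS_gettid));     /* GDB's LWP number */
    return NULL;
}

int main(void) {
    pthread_t t;
    if (pthread_create(&t, NULL, worker, NULL) != 0)
        return 1;
    pthread_join(t, NULL);
    return 0;
}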
