Looking at the output of strace -o file lldb someprog, I found there are no ptrace calls.
How, then, does lldb get at features like PTRACE_ATTACH, PTRACE_SINGLESTEP, and so on?
Ironically, the lldb process itself doesn't do any actual debugging. Instead, it always uses a proxy (lldb-server on Linux, debugserver on Darwin)(*) to do the actual debugging, and communicates with it over the gdb remote serial protocol. lldb-server does use ptrace on Linux (Darwin's debugserver relies on it only partially, alongside the Mach APIs).
(*) I think there still is an in-process adaptor for Windows, but IIRC they are switching to lldb-server as well.
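You can see this from the trace itself if you ask strace to follow child processes; the ptrace calls then show up under the lldb-server child. A minimal check:
strace -f -o file lldb someprog   # -f follows forked children, including lldb-server
grep ptrace file                  # the ptrace calls appear, prefixed with the child's pid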
Related
I've upgraded my Linux development VM from Ubuntu 16.04 to 18.04 recently, and noticed one thing that has changed. This is on x86-64. With 16.04, I've always had this workflow where I'd build the project I'm working on with gcc (5.4, the stock version in 16.04) and -fsanitize=address and -O0 -g, and then run the executable through gdb (7.11.1, also the version that came with Ubuntu). This worked fine, and at the end, LeakSanitizer would produce a leak report if it detected memory leaks.
In 18.04, this doesn't seem to work anymore; LeakSanitizer complains about running under ptrace:
==5820==LeakSanitizer has encountered a fatal error.
==5820==HINT: For debugging, try setting environment variable LSAN_OPTIONS=verbosity=1:log_threads=1
==5820==HINT: LeakSanitizer does not work under ptrace (strace, gdb, etc)
Then the program crashes:
Thread 1 "spyglass" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
I'm not sure what is causing the new behavior. On 18.04 I'm building with the default gcc shipped (7.3.0), using -fsanitize=address -O0 -g and debugging with the default gdb (8.1.0). Can the old behavior be somehow re-enabled? Or do I need to change my workflow and detach from the program before killing it to get a leak report?
LeakSanitizer internally uses ptrace, probably to suspend all threads so that it can scan for leaks without false positives (see issue 9). Only one tracer can be attached to a process at a time, so if you run your application under gdb or strace, LeakSanitizer won't be able to attach via ptrace.
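You can observe the one-tracer rule directly; trying to attach a second tracer fails (error text paraphrased):
gdb -p "$(pidof program)"      # first tracer attaches fine
# meanwhile, from a second shell:
strace -p "$(pidof program)"   # second tracer is refused:
# strace: attach: ptrace(PTRACE_ATTACH, ...): Operation not permitted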
If you are not interested in leak debugging, disable it:
export ASAN_OPTIONS=detect_leaks=0
If you do want to enable leak debugging, you must detach the debugger before LeakSanitizer starts scanning. To be able to attach a debugger again shortly afterwards, make the process sleep a bit first (for example, 10 seconds):
export ASAN_OPTIONS=sleep_before_dying=10
./program
Then in another shell, attach to the application again:
gdb -q -p $(pidof program)
For a description of the above (and other) options, see https://github.com/google/sanitizers/wiki/AddressSanitizerFlags.
I recently installed the Ubuntu 12.04 LTS ISO image on my desktop. Below is the kernel version I have installed:
# uname -r
3.5.0-41-generic
I am trying to develop a VFS and want the kernel source for version '3.5.0-41-generic' for reference purposes. Where can I find it?
What are good kernel debugging options for looking at logs and mapping them to kernel code?
Which debugger can I use to follow live kernel execution, and how?
Are there ways I can add more printk calls and rebuild the modules? Say I want to know how a filesystem's mount method works: I could modify the relevant FS code (adding more printk calls), then recompile and reload the modules. With the aid of my new printk output I could then understand the flow.
Why don't you install a vanilla 3.5 kernel and develop on that?
As a kernel debugger you can use KGDB, or just printk.
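For the printk route, a typical iteration loop looks roughly like this (the module name myfs and the device/mount point are placeholders for your own):
# after adding printk(KERN_INFO ...) lines to the filesystem source:
make -C /lib/modules/$(uname -r)/build M=$PWD modules   # rebuild just this module
sudo rmmod myfs && sudo insmod ./myfs.ko                # reload it
sudo mount -t myfs /dev/sdb1 /mnt                       # exercise the mount path
dmesg | tail                                            # read the new printk output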
But... I suggest you test your VFS on Linux running inside QEMU. QEMU can expose a gdb stub for the running kernel, so you can connect gdb to it and debug the whole emulated system.
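A minimal sketch of that setup (the image paths are placeholders):
qemu-system-x86_64 -kernel bzImage -s -S   # -s: gdb stub on tcp::1234, -S: freeze the CPU until gdb connects
# then, in another terminal:
gdb vmlinux -ex 'target remote localhost:1234'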
In short, I need to understand how to configure Eclipse to run "optirun gdb" instead of "gdb". An explanation of what exactly I'm trying to accomplish follows.
I need to run my app under debug in Eclipse such that it uses the NVIDIA Optimus card instead of the integrated card. My app requires OpenGL support that is only available this way.
I've got a laptop with an NVIDIA Optimus video card. I'm running Linux (Ubuntu). I've successfully set up Bumblebee so that I can take advantage of the Optimus technology. This requires that, to use the NVIDIA card, I run a given program "foo" through "optirun": optirun foo.
I need to configure Eclipse to launch my program in debug mode under optirun. If I run optirun gdb app from the command line, everything works as expected.
Edit: Changing the "GDB Debugger" field inside the debug configuration to optirun gdb does not work. Launching Eclipse with optirun eclipse does, however, but that is a detriment to battery life.
Go to "Debug Configurations", open "Debugger" tab. Change "GDB debugger" from gdb to optirun gdb.
Works in Eclipse Juno, Ubuntu 12.04.
Since I'm sure Eclipse uses the shell to execute the program, a workaround is to alias gdb to optirun gdb in ~/.bashrc.
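That is, something like the following (note that ~/.bashrc is only read by interactive shells, so this may not take effect for every way of launching Eclipse):
# in ~/.bashrc
alias gdb='optirun gdb'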
I looked into this issue today and found another solution. As long as you have Bumblebee installed (http://www.bumblebee-project.org/) and you know optirun works with an executable (try glxgears, for example), you can attach it to cuda-gdb.
What I did is create a script:
#!/bin/bash
# Run cuda-gdb under optirun, forwarding all command-line arguments.
# "$@" preserves arguments containing spaces, which $* would mangle.
optirun /usr/local/cuda/bin/cuda-gdb "$@"
Save it, with execute permissions (755), to /usr/local/cuda/bin or anywhere else on your PATH. I named mine opti_cuda-gdb and placed it in that directory (which is conveniently added to the PATH if CUDA is properly configured).
What it does is very simple: it runs optirun cuda-gdb args, where args is whatever the command line passes it.
In a terminal, just run opti_cuda-gdb with or without arguments.
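For instance (the binary name is hypothetical):
opti_cuda-gdb ./my_cuda_app   # expands to: optirun /usr/local/cuda/bin/cuda-gdb ./my_cuda_app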
If you develop in an IDE, say NetBeans, point its debugger executable to that script.
I've successfully compiled and debugged code using cuSPARSE and cuBLAS with NetBeans, running on a Samsung SF410 with NVIDIA Optimus under Ubuntu 11.04 and 11.10.
I'm happy to provide further details if you think I've omitted something.
When I start MacVim from the terminal I get a nasty error message saying it has caught a deadly signal SEGV. I really don't know what's going on. Likewise, when I start the application by double-clicking it in my Dock, the app opens but I can't do anything.
Is there any way to fix this?
I have had the same problem, and traced it to the Command-T plugin containing native extensions that were compiled against a different version of Ruby (1.8) than the one in my environment (1.9).
I recommend disabling all of your plugins/addons, and re-enabling them one by one.
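If your plugins live in a pathogen-style bundle directory, a quick way to disable them all at once (the path is an assumption about your setup):
mv ~/.vim/bundle ~/.vim/bundle.off   # disable everything
# then move plugins back one at a time until the crash reappears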
You might get more of a hint about what's going wrong by running MacVim's vim process inside gdb (Xcode required):
paul@paulbookpro ~ $ gdb /Applications/MacVim.app/Contents/MacOS/Vim
GNU gdb 6.3.50-20050815 (Apple version gdb-1705) (Fri Jul 1 10:50:06 UTC 2011)
...
This GDB was configured as "x86_64-apple-darwin"...Reading symbols for shared libraries ................ done
(gdb) run
Starting program: /usr/local/Cellar/macvim/7.3-61/MacVim.app/Contents/MacOS/Vim
Hopefully gdb will report some useful information about the segfault, and you can use commands like backtrace to get more data.
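For example, once the crash fires:
(gdb) backtrace      # where the segfault happened
(gdb) info threads   # what each thread was doing
(gdb) frame 0        # inspect the innermost frame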
Good luck.
Signal SEGV means "segmentation violation" and generally indicates a bug in the application. You can try reinstalling it or contacting the software vendor.
I want to use GNU DDD (a graphical shell for gdb) to debug a Linux kernel that is running (in some distro) inside QEMU.
I have the vmlinux image outside of QEMU, and launch QEMU with -s -S, so it acts like gdbserver (it stops at start and waits for debugging commands).
Now, how to connect DDD to that gdbserver using local vmlinux image?
Should I just open the image and tell gdb 'target remote'?
You basically answered your own question - yes, use the target remote gdb command in DDD to connect:
$ gdb qemuKernelFile
(gdb) target remote localhost:1234
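To do the same from DDD rather than plain gdb, launch it on the image and type the command into its gdb console window:
ddd qemuKernelFile
(gdb) target remote localhost:1234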
With minor adjustments, you can use the procedure described in great detail here.