I have opened an executable in IDA Pro, and found the location I want to break at, 0x3390 from the beginning of the file.
How do I set a breakpoint in lldb at that memory address, i.e. the start of the program + 0x3390?
I tried b s -a 0x3390, but it doesn't work; I presume that's because I need the actual address, not the offset.
The image list command will show the load address of the __TEXT.__text section of all the binaries loaded in your process. If you need more information, image dump sections will dump the addresses of all the sections. From that you should be able to figure out what to offset your address by. Note that the program may not load at the address it was linked at, so you may have to figure out the base address after the program has started.
Then you should be able to say:
(lldb) br set -a <LoadAddress>+<Offset>
Note: b is an alias for a fancy regular-expression-based command that tries to emulate the gdb breakpoint expression parser, so you either need to disable that alias or use br to get the full breakpoint command.
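For example, assuming image list reports a load address of 0x100000000 for your binary (that value and the trimmed output below are illustrative; use whatever your process actually shows):
(lldb) image list
[  0] <uuid> 0x0000000100000000 /path/to/your_binary
(lldb) br set -a 0x100000000+0x3390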
The situation: I need to set up the DISPLAY variable in my WSL2 environment so that graphical output is transmitted to the X server running on my host system.
In general I would do this in my .bashrc:
export DISPLAY=$(ip route list default | awk '{print $3}'):0
So I started by setting the DISPLAY Variable with
set -Ux DISPLAY $(ip route list default | awk '{print $3}'):0
which worked at first.
The issue: the variable is now stored in .config/fish/fish_variables as SETUVAR --export DISPLAY:<MY-IP>:0
That seems fine for the moment, but since my home network uses DHCP, my IP may change. How do I set the variable so that the command above is re-evaluated each time?
Your mistake was using set -U. That creates a "universal" variable. Instead, simply do set -x in your ~/.config/fish/config.fish so the var is created every time you start a fish shell. See also https://fishshell.com/docs/current/faq.html#why-doesn-t-set-ux-exported-universal-variables-seem-to-work. Universal variables shouldn't be used for values that can change each time you start a fish shell or that might be different for concurrently running shells.
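For example, in ~/.config/fish/config.fish, using the same lookup as the .bashrc line above:
set -x DISPLAY (ip route list default | awk '{print $3}'):0
Every new fish shell re-runs this, so the value tracks the current address.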
Kurtis's answer would normally be correct, but this is WSL2, and there's (IMHO) a better solution on WSL2 that can use fish universal variables.
set -Ux DISPLAY (hostname).local:0
As long as the hostname matches the Windows Computer Name (which it should and does by default), then that will always use mDNS to return the correct address, even if the IP has changed.
Note that you'll need to remove the global variable definition from ~/.config/fish/config.fish or else the universal will be shadowed by the global.
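If you're not sure whether a global definition is still shadowing the universal one, the standard set builtins will show and fix that (assuming fish 3.0 or newer for set --show):
set -S DISPLAY      # lists every scope (universal, global, local) that defines DISPLAY
set -e -g DISPLAY   # erase a leftover global definition in the current session, if any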
Explanation:
You might think that it is the dynamically assigned DHCP address changing that is causing the problem, but that's not actually it. The IP address that you get back from ip route list default | awk '{print $3}' is not the one that is assigned to Windows by DHCP on your home network.
It's actually the address of a virtual switch provided by WSL2 that allows WSL2 to communicate with the Windows host (and beyond).
When the Windows IP address changes, it doesn't matter to this virtual switch. Side note: I just confirmed this to make sure on WSL2 by changing my Windows IP manually.
The problem here is actually that the switch address changes each time the computer (or WSL2) restarts. This is why most instructions for setting your DISPLAY on WSL2 tell you to map it to this address, rather than hardcoding it.
As Kurtis said, however, this type of assignment would (typically) be a problem if you were using a fish universal variable, since the virtual switch IP does change each reboot.
But WSL2 provides a convenient mDNS lookup for the address, in the form of <computername>.local. By hardcoding that into the universal variable, DISPLAY should always resolve correctly, even after the IP address itself changes.
I'm sorry about the weird snippets. I won't be able to paste the exact code.
The following listing from an ELF file shows addresses and the instructions at them.
0x4000XXXX: [someInstr] [someReg], [someReg2], [someReg3]
0x4000XXXY: [someInstr] [someRegValue], [somereg3]
0x4000XXXZ: [jumpInstruction] [someReg3] + 0xXXX, [someReg4]
0x4000XXXA: [someInstr]
0x4000XXXB: [someInstr]
0x4000XXXC: [someInstr]
0x4000XXXD:
The third instruction adds 0xXXX [which is some address value] to the value in the someReg3 register. Going there,
0x4000YYYY: [someInstruction]
0x4000YYYZ: [someInstruction]
0x4000YYYX: [someInstruction]
0x4000YYYA:
we see that execution will stop once it reaches address 0x4000YYYA, since it is blank. [The instructions above it are all linear ones like OR, AND, etc.]
My question is, why are the blanks even there?
In the example I gave above I have no idea where the exact "ending" instruction resides, but using nm -S [filename] and readelf -l [filename] I was able to estimate two end points. Unfortunately, those addresses contain unimplemented instructions, which causes interruptions in the program. The file has quite a lot of blank spaces, but I've only included two as an example. These blanks are interrupting the program I'm running, and even if I skip execution at these addresses, I have no idea where to stop.
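For reference, the two estimates came from output along these lines (the binary name is a placeholder; readelf -S, which lists per-section start addresses and sizes, may narrow things down further):
nm -S ./binary       # symbol addresses and sizes
readelf -l ./binary  # program headers: load address and size of each segment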
Would anyone know how to disable the virtual terminals in Linux? I am using Yocto (the Morty release) on an i.MX6 processor. Even though our base distribution is Yocto, unfortunately we have diverged from building it with recipes, so this is more of a straight Linux question than a Yocto one…
To give some detail as to my problem: it is for an embedded device that has an HDMI port. When I attach a display to the HDMI port it shows the Linux penguin logo and a getty login prompt, and it blanks out after 600 seconds. I just want to use the HDMI port as an output with nothing displayed on it, and I want it to stay on all the time.
I have found that the HDMI port maps to /dev/tty1 – when I type: echo "asdfasdf" > /dev/tty1 I see the characters output to the monitor.
Here are a few things I have tried to no avail – a lot of these are not needed if I can figure out how to disable it as a virtual terminal…
• I figured out how to disable the getty service but a cursor still blinks. I don’t even want a cursor to show
• I have tried to disable the display of the penguins by disabling the LOGO in the kernel config parameters - I commented anything with LOGO out:
CONFIG_LOGO=y
CONFIG_LOGO_LINUX_MONO=y
CONFIG_LOGO_LINUX_VGA16=y
CONFIG_LOGO_LINUX_CLUT224=y
To no avail; the logo still shows.
• The fact that it blanks after 600 seconds is console blanking – I can see it set to 600 in the file: /sys/module/kernel/parameters/consoleblank. When I issue the command: echo -e '\033[9;0]'>/dev/tty1
It sets the console blanking to 0 and wakes the terminal. Being able to wake the console up is limited success but I would like to disable the virtual terminal altogether…
• I tried commenting out any virtual terminal defines in the config file to no avail:
CONFIG_VT=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
Everything I have read suggests that /dev/tty1 is a virtual terminal or console. From what I read about the VT option, disabling the CONFIG_VT should do it:
VT — Virtual terminal
Say yes here to get support for terminal devices
with display and keyboard devices. These are called "virtual" because
you can run several virtual terminals (also called virtual consoles)
on one physical terminal. You need at least one virtual terminal
device in order to make use of your keyboard and monitor. Therefore,
only people configuring an embedded system would want to say no here
in order to save some memory; the only way to log into such a system
is then via a serial or network connection. Virtual terminals are
useful because, for example, one virtual terminal can display system
messages and warnings, another one can be used for a text-mode user
session, and a third could run an X session, all in parallel.
Switching between virtual terminals is done with certain key
combinations, usually Alt-function key. If you are unsure, say yes, or
else you won't be able to do much with your Linux system.
But for some reason it doesn’t do anything!
• I found this thread, https://askubuntu.com/questions/357039/how-do-i-disable-virtual-consoles-tty1-6, among others, but none are much help, since my distribution does not have any of the directories mentioned in the solutions offered there or elsewhere. For instance, I do not have an /etc/events.d directory, nor a /etc/default/console-setup file, nor an /etc/init directory… I imagine the reason is that my distribution uses systemd while those solutions are based on SysV init?
Disabling the logo or console blanking would not be necessary if I could just figure out how to disable that port as a terminal…
So does anyone have pointers or things I could try? I am relatively new to Linux (returning after 10 years – I worked with DNX on v2.6 back then, and it seems everything I knew about init is fairly obsolete lol), so I am sure I am missing a lot…
Thanks,
- Chuck
I think I found the answer to my question. This is actually a frame buffer console documented here: Documentation/fb/fbcon.txt. From the documentation:
The framebuffer console (fbcon), as its name implies, is a text
console running on top of the framebuffer device. It has the
functionality of any standard text console driver, such as the VGA
console, with the added features that can be attributed to the
graphical nature of the framebuffer.
Commenting out the line
CONFIG_FRAMEBUFFER_CONSOLE=y
in the configuration file under arch/arm/configs will disable it.
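If you regenerate the configuration instead of editing it by hand, note that the conventional way to record a disabled option in a .config or defconfig is the "is not set" comment rather than deleting the line:
# CONFIG_FRAMEBUFFER_CONSOLE is not set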
Also this part of the documentation shows you how to disable it at runtime:
So, how do we unbind fbcon from the console? Part of the answer is in
Documentation/console/console.txt. To summarize:
Echo a value to the bind file that represents the framebuffer console
driver. So assuming vtcon1 represents fbcon, then:
echo 1 > sys/class/vtconsole/vtcon1/bind - attach framebuffer console to console layer
echo 0 > sys/class/vtconsole/vtcon1/bind - detach framebuffer console from console layer
When I issue the echo 0 command, the cursor stops blinking and starts blinking again when I issue the echo 1 command.
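If you are not sure which vtconN is the framebuffer console on a given board, the name files under /sys/class/vtconsole tell you (vtcon1 below is just an example):
cat /sys/class/vtconsole/vtcon*/name        # the fbcon entry reads "frame buffer device"
echo 0 > /sys/class/vtconsole/vtcon1/bind   # detach it, as above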
I think there is another way of doing it as well: modify the Yocto build environment by putting USE_VT = "0" in the OpenEmbedded machine config file. The USE_VT variable is referenced by the sysvinit-inittab recipe. This answer was given to me on the Yocto Linux mailing list, but I have not tested it since we have diverged from Yocto...
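For completeness, that would look something like this in the machine configuration (the path is the usual location for an OpenEmbedded machine file; as noted, I have not tested it):
# conf/machine/<yourmachine>.conf
USE_VT = "0"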
I am using bash. I have switched off ASLR in Ubuntu 11.04 using
#sysctl -w kernel.randomize_va_space=0
And I have exported a variable from the shell using
$ export MYSHELL=/bin/sh
I wrote a C program to get the address of MYSHELL:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char *shell = getenv("MYSHELL");   /* pointer into the environment block on the stack */
    if (shell)
        printf("0x%x\n", (unsigned int)shell);
    return 0;
}
It spat out 0xbffffe82.
When I used it as part of my ret-to-libc attack, the address changed (although only by a very small offset).
Why does this happen?
Also, when I change the filename of the binary and use the previously successful address, it no longer works; the variable has been relocated to a different address. Why? In other words, what is the relationship between the binary's name and the addresses of environment variables? Is this a protection feature of bash, and how do I switch it off?
Note: this is not homework.
Stack layout at program startup is documented here. From that layout it should be clear why changing the name of the program (really, its length) changes the addresses: the program's filename string (the path passed to execve) is placed near the top of the stack, above the environment strings, so a longer name pushes every environment string to a lower address.
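A quick way to see the effect (file names and printed addresses below are illustrative, not real output):
$ gcc -o getmyshell getmyshell.c            # the C program from the question, under a hypothetical name
$ ./getmyshell
0xbffffe82
$ cp getmyshell a_much_longer_program_name
$ ./a_much_longer_program_name
0xbffffe72                                  # lower: the longer program path above the environment strings pushes them down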
Is it possible to run GDB on a program assembled with as and linked with ld? With gcc, adding the -g flag allows for debugging, but I get the error "No symbol table is loaded. Use the 'file' command." when I try to add breakpoints to a loaded program.
Thanks!
EDIT Maybe I should make it clear that I'm learning and programming in assembly. All I really want is a stack trace but being able to use GDB would be great.
Resolution: Running as -g does the trick.
Thank you to all that answered!!
It is possible. However, you need symbols in order to add symbolic breakpoints, and symbols are provided by debugging info; make sure your assembler and linker are providing those. EDIT With GNU as, use as -g. Or just use gcc -g: if you give it a .s file, it will invoke the assembler and linker as appropriate.
GDB understands debugging info in several formats: stabs, COFF, PE, DWARF, SOM. (Some of these are debug-info formats that can be embedded into executables such as ELF; others are executable formats with their own debugging sections.) gcc -g usually chooses whatever the platform's default is; gcc -ggdb usually chooses the most expressive format available (depending on your versions, possibly DWARF-3).
If you have debugging info embedded into or linked to by the executable, gdb will try to load it automatically. If you have it elsewhere, you may need to use file to tell gdb where to find it.
You can still debug without symbolic information. For example, you can issue break *0x89abcdef to insert a breakpoint at that address, if there's any code there.
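Putting it together, a minimal assemble-link-debug cycle looks like this (file names and the _start label are just examples; your source must define its own entry symbol):
$ as -g -o prog.o prog.s     # -g emits debug info for the assembly source
$ ld -o prog prog.o
$ gdb ./prog
(gdb) break _start
(gdb) run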
You could try running as with the --gdwarf-2 or -g options, and make sure ld is not called with --strip-debug and that your makefile/install process is not stripping the executable.
That's not an error preventing debugging, that's an error setting breakpoints in the way you are trying to do it. Since GDB doesn't have any symbol information, you'll have to set the breakpoints some other way.
If you don't have a symbol table, then you can't set breakpoints symbolically (by function name, line of code, etc). You could still set a breakpoint for a specific address, if you know the address you are trying to stop at.
(gdb) break *0x12345678
Of course that's only useful if you know that you want to stop at 0x12345678
What does file say about your executable?