Why you need to use C++'s "data_seg" - visual-c++

While tracing through an open source project, I saw someone using the following code:
#pragma data_seg(".XXXX")
static char *test=NULL;
FILE *f1;
#pragma data_seg()
However, even after checking http://msdn.microsoft.com/en-us/library/thfhx4st(VS.80).aspx, I am still not sure why this is needed. Can someone help me understand this part?
thank you

This is usually done to share the data that's designated to be in that segment. The code you have above will normally go in a DLL. You also use a .def file that specifies that the ".XXXX" segment is going to have the "SHARED" attribute.
When you do all that, the data in that segment gets shared between all the processes that load the DLL, so those variables are shared between all those processes.
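For illustration only (the section name .XXXX is kept from the question; the counter, the exported function and the DLL layout are made up), a DLL that shares one variable across every process loading it might look roughly like this. Note that the data must be initialized, or the compiler will not place it in the named section, and the section must be marked shared either via a .def file or via a linker directive:
// shared.cpp -- built into a DLL (illustrative sketch, not the original project's code)
#include <windows.h>

#pragma data_seg(".XXXX")
volatile LONG g_sharedCount = 0;   // must be initialized so it lands in .XXXX
#pragma data_seg()

// Mark the section readable, writable and shared between processes.
// This has the same effect as a .def file containing:
//   SECTIONS
//     .XXXX READ WRITE SHARED
#pragma comment(linker, "/SECTION:.XXXX,RWS")

extern "C" __declspec(dllexport) LONG IncrementSharedCount()
{
    // Every process that has loaded this DLL sees (and increments) the same counter.
    return InterlockedIncrement(&g_sharedCount);
}
Without the SHARED attribute each process would still get its own copy-on-write copy of the section, so both the #pragma data_seg and the section attribute are needed.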

Related

Linux ELF shared library issue

Currently I am working with ELF files and trying to deal with loading SO files. I am trying to "forcibly" link a new (fake, with no actual calls into its code) SO dependency into an executable file. To do that, I modified the .dynstr section contents (created a new section, filled it with the new contents, and fixed up all the sh_link fields of the Elf64_Shdr entries). I also modified the .dynamic section (it has more than one null entry, so I repurposed one) to hold a DT_NEEDED entry pointing at the needed third-party SO name.
When analyzed, my small test app appears to be fine (as readelf -d or objdump -p show). Nevertheless, when I try to run it, it reports:
error while loading shared libraries: ��oU: cannot open shared object file: No such file or directory
The name is different on every run, which makes me think some addresses in the loaded ELF are invalid.
I understand that this way of patching is highly error-prone, but I am interested anyway. So, my question is: are there any ELF tools (like gdb or strace) that can debug the image loading process (i.e. that can tell what is wrong before the entry point is hit)? Or are there any switches or options that could help in this situation?
I have tried things like strace -d, but it did not tell me anything interesting.
You do not mention patching DT_STRTAB and DT_STRSZ. These tags control how the dynamic loader locates the dynamic string table. The section headers are only used by the link editor, not at run time.
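For illustration (assuming a 64-bit ELF and that dyn already points at the PT_DYNAMIC array inside your patched image; the function and parameter names are made up), updating those two tags amounts to something like:
#include <elf.h>
#include <cstdint>

// Point DT_STRTAB at the relocated string table and fix its size.
// Note that d_ptr must hold the address as seen at run time (a virtual
// address), not the raw file offset of the new .dynstr data.
void patch_dynamic(Elf64_Dyn *dyn, uint64_t new_strtab_vaddr, uint64_t new_strtab_size)
{
    for (Elf64_Dyn *d = dyn; d->d_tag != DT_NULL; ++d) {
        if (d->d_tag == DT_STRTAB)
            d->d_un.d_ptr = new_strtab_vaddr;
        else if (d->d_tag == DT_STRSZ)
            d->d_un.d_val = new_strtab_size;
    }
}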
First of all, I did not manage to find any way to sanely debug this; my solution came the hard way, from manually analyzing the raw ELF file bytes in a hex editor.
My approach was right in general (I forgot to mention the DT_STRTAB and DT_STRSZ modification, though; thanks to Florian Weimer for reminding me of it). The patchelf utility (see the postscript below) reassured me that I was generally on the right track.
The catch is: when you add a new section at the end, make sure you fill in the program header table correctly. To add a new ".dynstr" section, I had to overwrite an auxiliary note segment (Elf64_Phdr::p_type == PT_NOTE) with a new segment covering the new ".dynstr" section data. I am not yet sure whether such overwriting can cause problems.
It turned out that I had written a raw ('offline') ELF file offset where the running image needs the data's RVA, i.e. its address after the system loader maps the ELF into memory ('online'). Once I fixed that, the ELF started to work properly.
P.S. I found a somewhat similar question: How can I change the filename of a shared library after building a program that depends on it? (it mentions patchelf, a useful utility for the same purpose I needed; patchelf is available on Debian via APT and is a nice tool for this).

Shared object missing from pin tool

When I compile my pin tool and run ldd on the resulting shared object, the shared objects libxed.so, libpin3dwarf.so, libdl-dynamic.so, libstlport-dynamic.so, and libc-dynamic.so cannot be found. I thought it might be the makefile.rules file, since I modified it to link some other object files, but the same problem occurs even when compiling an example pin tool provided in the pin directory. Does anyone know what the problem may be?
To make ldd able to find them, you can create a new conf file in /etc/ld.so.conf.d/ (/etc/ld.so.conf.d/pin.conf, for instance) and then run ldconfig as root so the loader cache is rebuilt. Inside this file, you need to list the paths to pin's dynamic libraries:
path_to_your_pin_folder/pin-3.0-76991-gcc-linux/ia32/runtime/pincrt
path_to_your_pin_folder/pin-3.0-76991-gcc-linux/intel64/runtime/pincrt/
path_to_your_pin_folder/pin-3.0-76991-gcc-linux/extras/xed-ia32/lib/
path_to_your_pin_folder/pin-3.0-76991-gcc-linux/extras/xed-intel64/lib/
path_to_your_pin_folder/pin-3.0-76991-gcc-linux/ia32/lib-ext/
path_to_your_pin_folder/pin-3.0-76991-gcc-linux/intel64/lib-ext/
Try adding the relevant directories to your LD_LIBRARY_PATH environment variable.

Binary linked against different shared libraries of the same package

I have 2 shared libraries conflicting with each other, and other binaries linked against them. To be more detailed, I have something like this:
top-lib1.so linked with libprotobuf.so;
top-lib2.so linked with libprotobuf-lite.so;
binary linked with top-lib1.so and top-lib2.so.
The problem is that when I launch my binary, it crashes due to memory corruption caused by a double free: the first free comes from libprotobuf.so and the second from libprotobuf-lite.so (see the related bug).
I don't have access to the top-lib2.so sources, and I can't link top-lib1.so with libprotobuf-lite.so because of my app's functionality.
Thus my question is: how to deal with it?
I can't keep both as they are because of this crash, I can't re-link my lib (top-lib1.so) with libprotobuf-lite.so, and I can't change top-lib2.so.
Is there any way to re-link top-lib2.so with libprotobuf.so without sources? Or is there any other possibility?
You do have a few choices.
The upstream bug you mentioned states that "libprotobuf.so has everything libprotobuf-lite.so has, and more". If that is indeed the case, one possible solution is to binary-patch top-lib2.so's .dynamic section so it references libprotobuf.so instead of the -lite.so. The former name is shorter, so simply overwriting the string libprotobuf-lite.so in place, leaving libprotobuf.so\0e.so behind, is all you should need (the loader stops at the NUL, so the leftover e.so bytes are ignored).
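A minimal sketch of that in-place patch, assuming the string occurs exactly once in top-lib2.so (the file path and error handling are illustrative only):
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>

int main()
{
    const std::string path    = "top-lib2.so";          // placeholder path
    const std::string oldName = "libprotobuf-lite.so";
    const std::string newName = "libprotobuf.so";       // shorter, so it fits in place

    std::fstream f(path, std::ios::in | std::ios::out | std::ios::binary);
    if (!f) { std::cerr << "cannot open " << path << "\n"; return 1; }

    std::string data((std::istreambuf_iterator<char>(f)),
                     std::istreambuf_iterator<char>());

    // Match the full NUL-terminated entry so we don't hit a partial match.
    std::string::size_type pos = data.find(oldName + '\0');
    if (pos == std::string::npos) { std::cerr << "string not found\n"; return 1; }

    // Overwrite in place with "libprotobuf.so" plus a terminating NUL; the
    // trailing "e.so" bytes remain but are ignored by the dynamic loader.
    f.seekp(static_cast<std::streamoff>(pos));
    f.write(newName.c_str(), static_cast<std::streamsize>(newName.size() + 1));
    return 0;
}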
If you don't want to binary-patch top-lib2.so, you have other choices:
You could link all of the object files that make up top-lib1.so and libprotobuf.so directly into the main binary and hide all of libprotobuf's symbols there (via a linker version script). If you do that, top-lib2.so can't tell that anything is present except the libprotobuf-lite.so it expects.
You could do the same with top-lib1.so -- i.e. hide libprotobuf inside of it.
You could link your copy of libprotobuf.so with -Wl,--default-symver, which appends a @@libprotobuf.so version to every symbol exported from libprotobuf.so and so avoids the symbol collision that causes the problem in the first place.

Mono.Cecil Modifying the RVA of a method

I would like to modify the RVA of a method using Mono.Cecil. I noticed a similar question asked back in 2007, but is this doable in 0.95?
For example: methodA.RVA = 0x1234;
I understand that Mono.Cecil computes and writes RVAs when the assembly is written, but
is there any way to go about modifying the RVA?
It can be done using CFF Explorer, though.
Thank you.
No, this is not possible: that's simply not the goal of Mono.Cecil.
Cecil lets you read, modify and write managed code and metadata, but the PE file organization is considered an implementation detail.

Can I get module handle from function address on linux?

Same as Win32:
GetModuleHandleEx(GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS, (LPCTSTR)(void*)(myFunc), &h);
http://www.kernel.org/doc/man-pages/online/pages/man3/dlsym.3.html is not helpful.
Use dladdr; see its man page, dladdr(3).
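A minimal usage sketch (myFunc is just a stand-in for your function; on glibc you may need to link with -ldl):
#ifndef _GNU_SOURCE
#define _GNU_SOURCE   // dladdr() is a glibc/BSD extension
#endif
#include <dlfcn.h>
#include <cstdio>

static void myFunc() {}   // stand-in for the function you are interested in

int main()
{
    Dl_info info;
    if (dladdr(reinterpret_cast<void *>(&myFunc), &info) != 0 && info.dli_fname) {
        std::printf("object: %s  loaded at: %p\n", info.dli_fname, info.dli_fbase);
        // If you really need a dlopen()-style handle, you can re-open the
        // already-loaded object without loading it again:
        // void *h = dlopen(info.dli_fname, RTLD_NOW | RTLD_NOLOAD);
    }
    return 0;
}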
Not per se, no; if the symbol was compiled in rather than accessed via dlopen()/dlsym(), there is no handle to return. (The handle abstraction exists only to give dlsym() fine control over where it looks up symbols; there is no such control over the original link, except via linker scripts.) In the normal course of events, an object is simply open()ed and mmap()ed, with the other details hidden inside ld.so and only accessible indirectly via the RTLD_DEFAULT and RTLD_NEXT pseudo-handles passed to dlsym(). If you are using dlopen(), you are expected to keep track of your handles.
The only method I am aware of is to parse the contents of /proc/self/maps (and/or perhaps smaps) and then use the symbol address to work out, from the boundaries of each mapped .so (its start address and size), which mapped module the function lies in.
Note: /proc/self is a symlink (on Linux) to the current process's (with ID <pid>) meta-information, i.e. /proc/<pid>.
It's possible there is some programming interface to this very information that you could use.
Edit: Ah, so dladdr() would be that interface.
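For completeness, a rough sketch of that /proc/self/maps scan (the helper name is made up and error handling is minimal):
#include <cstdint>
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>

// Hypothetical helper: return the pathname of the mapping that contains
// `addr`, or an empty string if none is found (or the mapping is anonymous).
std::string module_containing(const void *addr)
{
    const std::uintptr_t a = reinterpret_cast<std::uintptr_t>(addr);
    std::ifstream maps("/proc/self/maps");
    std::string line;
    // Each line looks like: "start-end perms offset dev inode   pathname"
    while (std::getline(maps, line)) {
        std::uintptr_t start = 0, end = 0;
        char dash = 0;
        std::istringstream in(line);
        in >> std::hex >> start >> dash >> end;
        if (in && a >= start && a < end) {
            const std::string::size_type slash = line.find('/');
            return slash == std::string::npos ? std::string() : line.substr(slash);
        }
    }
    return std::string();
}

int main()
{
    // Example: find the module containing this very function.
    std::printf("%s\n",
        module_containing(reinterpret_cast<const void *>(&module_containing)).c_str());
    return 0;
}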
