I need some basic clarification on C++ static linkage. I have a file called data_client.lib. There are three independent consumers of the library: a.exe, b.exe, and c.exe. There is a service called data_server.exe, for which data_client.lib is the interface. Actually, I added another function to data_server.exe and corresponding interface to data_client.lib. Since only a.exe needs the extra functionality, I rebuilt a.exe only. I shipped data_server.exe, data_client.lib, and a.exe as a patch. Now b.exe and c.exe randomly/inconsistently crash, throwing:
mfc42u!CException::`RTTI Complete Object Locator'+0x10
Does this make sense? If I also rebuild b.exe and c.exe, then the crash does not happen. Is this the way it works?
Actually, I added another function to data_server.exe and corresponding interface to data_client.lib.
It's a little unclear from this exactly what was added to your library. However, if it's a new method or methods added to a class (rather than just some new standalone functions), there's a very high chance that recompiling everything will fix your problem. The vtable may or may not have been thrown out of whack by your changes.
It's also possible that your crashes have absolutely nothing to do with this and there's some other problem going on... but from your description, my money's on a vtable issue. If it were me, I'd recompile b.exe and c.exe and test again before investigating other issues.
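To make the vtable hazard concrete, here is a minimal sketch; the class and method names are hypothetical, not taken from the actual data_client.lib interface:

    // Version 1 of the interface, as compiled into b.exe and c.exe:
    struct IDataClient {
        virtual void Connect() = 0;   // vtable slot 0
        virtual void Fetch()   = 0;   // vtable slot 1
    };

    // Version 2, as compiled into data_server.exe and a.exe only:
    struct IDataClient2 {
        virtual void Connect() = 0;   // slot 0 (unchanged)
        virtual void Query()   = 0;   // NEW method lands in slot 1
        virtual void Fetch()   = 0;   // Fetch silently moves to slot 2
    };

    // b.exe still calls Fetch() through slot 1 of any object the new
    // server code hands back, so it actually invokes Query() -- the
    // wrong function entirely. That is undefined behaviour, and it
    // shows up as exactly this kind of random, inconsistent crash.

If the new method had instead been appended after all existing virtuals, the existing slots would keep their indexes, which is one reason such breakage can be intermittent rather than universal.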
Maybe you don't have explicit dependencies, but some of your project headers use, or implicitly put information into, your library.
I do not know about that particular error, but your applications b.exe and c.exe are using an older version of the binding lib to communicate with a newer version of data_server.exe. Some vtable indexes might be off if you added a function. You definitely have to rebuild everything that links against the library.
Is there any documentation for ServiceStack.Text.JsConfig with regard to MonoTouch AOT helpers?
I found this...
ServiceStack JIT Error on MonoTouch
and I've had a look at the code but there are no comments and frankly it is a bit mysterious.
From my understanding of the AOT process all one needs to do to make sure a type/method is emitted is to have that type/method in one's source where the compiler thinks it may be used/called. It is not necessary to actually use/call anything at run time. The whole point of AOT is that it is a compile-time process. Consequently putting the use/call inside a method that is not used will work as long as the optimizer does not remove it.
I've been trying to use ServiceStack.Text.JsConfig.RegisterTypeForAot(); (in an unused method) to cure my AOT issues but have run into other weird issues when I have too many calls to it. See other question...
Calling ServiceStack.Text.JsConfig.RegisterTypeForAot<T>(); with MonoTouch causes SIGSEGV on startup on device
Could I maybe be using the RegisterTypeForAot() method wrongly?
What do the other methods do? RegisterForAot() and InitAot()
There is no documentation about JsConfig.InitForAot() other than what's already inline in JsConfig, i.e.:
Provide hint to MonoTouch AOT compiler to pre-compile generic classes for all your DTOs. Just needs to be called once in a static constructor.
You should only have to call the JsConfig.InitForAot() stub and a JsConfig.RegisterTypeForAot<T>() for each type, to let the MonoTouch compiler know what generic code needs to be pre-generated ahead of time so that all the code is available for generic reflection. If you run into a problem, submit a small stand-alone test case with the issue on the GitHub project's issues so we can see if there is any workaround that can be done.
I have updated my development system to the new MonoTouch (6.0.1) and now whenever I'm referencing zxing.Monotouch types I get MissingMethodException on the constructor.
System.MissingMethodException: Method not found: 'MyClass..ctor'.
It's been 3 days now...
Anyone got any idea? I'm even willing to give up zxing if that what it takes (even though it's a wonderful library).
Edit
When I include zxing.Monotouch in the solution and reference it as a project the problem does not reproduce. If that's a clue I've missed it...
It's likely that the binary version of zxing.Monotouch is trying to access something that does not exist in 6.0.1. That's uncommon, as we try to maintain source/binary compatibility unless the code is really broken (e.g. it would cause a crash anyway). I cannot be more precise without more data (e.g. a full build log).
If you include zxing.Monotouch as a reference then it will be rebuilt. If it works then it really looks like source compatibility was preserved (but not binary compatibility).
Whenever you have the source code available, I encourage you to use .csproj (not .dll) references. It has a few advantages, including the source/binary compatibility issue (above) and the fact that it makes things easier to debug from your project.
In my open-source project Artha I use libnotify for showing passive desktop notifications to the user.
Instead of statically linking libnotify, a lookup for the shared object (.so) file is made at runtime via dlopen; if it is available on the target machine, Artha exposes the notification feature in its GUI. On app start, a call to dlopen with the filename param libnotify.so.1 is made, and if it returns a non-null handle, the feature is exposed.
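A simplified sketch of that runtime lookup (the real Artha code differs; notify_init is one of libnotify's actual entry points, but error handling here is minimal; build with -ldl on glibc-based systems):

    #include <dlfcn.h>
    #include <cstdio>

    typedef int (*notify_init_fn)(const char *);

    int main() {
        // Try the versioned soname the code was written against.
        void *handle = dlopen("libnotify.so.1", RTLD_LAZY);
        if (!handle) {
            std::printf("libnotify not found; feature stays hidden\n");
            return 0;
        }
        // Resolve a symbol before exposing the feature in the GUI.
        notify_init_fn init =
            (notify_init_fn)dlsym(handle, "notify_init");
        if (init)
            std::printf("libnotify available; enabling notifications\n");
        dlclose(handle);
        return 0;
    }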
A recurring problem with this model is that every time the version number of the library is bumped, Artha's code needs to be updated; currently libnotify.so.4 is the latest to entail such a change.
Is there a Linux system call (irrespective of the distro the app is running on) which can tell me if a particular library's shared object is available at runtime? I know there is the brute-force option of enumerating the versions from 1 to, say, 10, but I find that solution ugly and inelegant.
Also, if this can be addressed via autoconf, then that solution is welcome too, i.e. at build time, based on the target machine, the generated config.h should have the right .so name that can be passed to dlopen.
P.S.: I think good distros follow the style of creating links to libnotify.so.x, so that a programmer can just do dlopen("libnotify.so", RTLD_LAZY) and the right version-numbered .so is loaded; unfortunately not all distros follow this, including Ubuntu.
The answer is: you don't.
dlopen() is not designed to deal with things like that, and trying to load whichever soversion you find on the system just because it happens to have the symbols you need is not a good way to do it.
Different sonames mean different ABIs, and a different ABI means that you may be calling the exact same symbol name while it expects a different set (or different sizes) of parameters, which will cause crashes or misbehaviour that is extremely difficult to debug.
You should have a read on how shared object versions work and what an ABI is.
The libfoo.so link is there for the link editor (ld) and is usually installed with the -devel packages for that reason; it might also very well not be a link but rather a text file containing a linker script, oftentimes on purpose, precisely to prevent what you're trying to do.
I want to make a quick fix to one of the project's .so libraries. Is it safe to just recompile the .so and replace the original? Or do I have to rebuild and reinstall the whole project? Or does it depend?
It depends. The shared library needs to be binary-compatible with your executable.
For example,
if you changed the behaviour of one of the library's internal functions, you probably don't need to recompile.
If you changed the size of a struct (e.g. by adding a member) that's known to the application, you will need to recompile; otherwise the application will think the struct is smaller than the library does, and the program will crash when the library tries to read an extra, uninitialized member that the application never wrote (see the sketch after this list).
If you change the type or the position of arguments of any function visible to the application, you do need to recompile, because the library will try to read more arguments off the stack than the application has put there (this is the case in C; in C++, argument types are part of the function signature, so the app will refuse to run rather than crash).
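As a hypothetical illustration of the struct case: in reality both sides include the same header, and the two structs below stand in for the old and new versions of it.

    #include <cstdio>

    struct ConfigOld {      // what the not-recompiled app still sees
        int width;
        int height;
    };                      // sizeof(ConfigOld) == 8

    struct ConfigNew {      // what the freshly rebuilt library sees
        int width;
        int height;
        int depth;          // new member; sizeof(ConfigNew) == 12
    };

    int main() {
        // The app allocates only sizeof(ConfigOld) bytes...
        ConfigOld appSide = {640, 480};
        // ...but the library reinterprets that memory as ConfigNew
        // and reads 'depth' from past the end of the object:
        // undefined behaviour, typically garbage or a crash.
        ConfigNew *libView = reinterpret_cast<ConfigNew *>(&appSide);
        std::printf("depth seen by the library: %d\n", libView->depth);
        return 0;
    }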
The rule of thumb (for production releases) is that if you are not consciously aware that you are maintaining binary compatibility, or are not sure what binary compatibility is, you should recompile.
That's certainly the intent of using dynamic libraries: if something in the library needs updating, then you just update the library, and programs that use it don't need to be changed. If the signature of the function you're changing doesn't change, and it accomplishes the same thing, then this will in general be fine.
There are of course always edge cases where a program depends on some undocumented side-effect of a function, and then changing that function's implementation might change the side-effect and break the program; but c'est la vie.
If you have not changed the ABI of the shared library, you can just rebuild and replace the library.
It depends, yes.
Assuming you have the exact same source and compiler that built everything else, and you only change something in a .cpp file, it is fine.
Other things, e.g. changing an interface (between the shared lib and the rest of the system) in a header file, are not fine.
If you don't change your library's binary interface, it's OK to recompile and redeploy only the shared library.
Good references:
How To Write Shared Libraries
The Little Manual of API Design
I have inherited a very large and complex project (actually, a 'solution' consisting of 119 'projects', most of which are DLLs) that was built and tested under VC8 (VS2005), and I have the task of porting it to VC9 (VS2008).
The porting process I used was:
1) Copy the VC8 .sln file and rename it to a VC9 .sln file.
2) Copy all of the VC8 project files, and rename them to VC9 project files.
3) Edit all of the VC9 project files, s/vc8/vc9/.
4) Edit the VC9 .sln, s/vc8/vc9/.
5) Load the VC9 .sln with VS2008, and let the IDE 'convert' all of the project files.
6) Fix compiler and linker errors until I got a good build.
So far, I have run into the following issues in that last step.
1) A change in the way decorated names are calculated, causing truncation of the names.
This is more than just a warning (http://msdn.microsoft.com/en-us/library/074af4b6.aspx). Libraries built with this warning will not link with other modules. Applying the solution given in MSDN was non-trivial, but doable. I addressed this problem separately in How do I increase the allowed decorated name length in VC9 (MSVC 2008)?
2) A change that does not allow the assignment of zero to an iterator. This is per the spec, and it was fairly easy to find and fix these previously-allowed coding errors: instead of assigning zero to an iterator, use the value end() (see the sketch after this list).
3) for-loop scope is now per the ANSI standard. Another easy-to-fix problem.
4) More space required for pre-compiled headers. In some cases a LOT more space was required. I ended up using /Zm999 to provide the maximum PCH space. If PCH memory usage gets bumped up again, I assume that I will have to forgo PCH altogether, and just endure the increase in what is already a very long build time.
5) A change in requirements for copy ctors and default dtors. It appears that in template classes, under certain conditions that I haven't quite figured out yet, the compiler no longer generates a default ctor or a default dtor. I suspect this is a bug in VC9, but there may be something else that I'm doing wrong. If so, I'd sure like to know what it is.
6) The GUIDs in the sln and vcproj files were not changed. This does not appear to impact the build in any way that I can detect, but it is worrisome nevertheless.
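For issue 2), a minimal before/after sketch (the container and function are hypothetical):

    #include <vector>

    void resetIterator(std::vector<int> &v) {
        std::vector<int>::iterator it;
        // it = 0;       // accepted by VC8, rejected by VC9
        it = v.end();    // the standard-conforming replacement
    }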
Note that despite all of these issues, the project built, ran, and passed extensive QA testing under VC8. I have also back-ported all of the changes to the VC8 projects, where they still build and run just as happily as they did before (using VS2005/VC8). So, all of my changes required for a VC9 build at least appear to be backward-compatible, although the regression testing is still underway.
Now for the really hard problem: I have run into a difference in the startup sequence between VC8 and VC9 projects. The program uses a small-object allocator modeled after Loki, from Andrei Alexandrescu's book Modern C++ Design. This allocator is initialized using a global variable defined in the main program module.
Under VC8, this global variable is constructed at the very beginning of the program startup, from code in a module crtexe.c. Under VC9, the first module that executes is crtdll.c, which indicates that the startup sequence has been changed. The DLLs that are starting up appear to be confusing the small-object allocator by allocating and deallocating memory before the global object can initialize the statistics, which leads to some spurious diagnostics. The operation of the program does not appear to be materially affected, but the QA folks will not allow the spurious diagnostics to get past them.
Is there some way to force the construction of a global object prior to loading DLLs?
What other porting issues am I likely to encounter?
Is there some way to force the construction of a global object prior to loading DLLs?
How about the DELAYLOAD option? So that DLLs aren't loaded until their first call?
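A sketch of what that would look like, assuming the offending import is a hypothetical data.dll: add /DELAYLOAD:data.dll under Linker > Input > Delay Loaded DLLs in the project settings, and link against delayimp.lib.

    #include <cstdio>

    // A global in the .exe. With delay loading, data.dll is not
    // mapped at process start; it is loaded on the first call into
    // it, so this object can finish constructing first.
    struct AllocatorBootstrap {
        AllocatorBootstrap() { std::printf("allocator ready\n"); }
    } g_bootstrap;

    int main() {
        // The first call to a function imported from data.dll
        // triggers the actual LoadLibrary under the hood.
        return 0;
    }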
That is a tough problem, mostly because you've inherited a design that's inherently dangerous because you're not supposed to rely on the initialization order of global variables.
It sounds like something you could try to work around by replacing the global variable with a singleton that other functions retrieve by calling a global function or method that returns a pointer to the singleton object. If the object exists at the time of the call, the function returns a pointer to it. Otherwise, it allocates a new one and returns a pointer to the newly allocated object.
The problem, of course, is that I can't think of a singleton implementation that would avoid the problem you're describing. Maybe this discussion would be useful: http://www.oneunified.net/blog/Personal/SoftwareDevelopment/CPP/Singleton.article
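A minimal construct-on-first-use sketch of that approach (names are illustrative; note that pre-C++11 MSVC does not make the local static's initialization thread-safe, which may be the very caveat alluded to above):

    #include <cstddef>

    class SmallObjectStats {
    public:
        static SmallObjectStats &instance() {
            // Constructed on first use, so even DLL code that
            // allocates during startup sees an initialized object.
            static SmallObjectStats *obj = new SmallObjectStats();
            return *obj;
        }
        void recordAlloc(std::size_t n) { totalBytes_ += n; }
    private:
        SmallObjectStats() : totalBytes_(0) {}
        std::size_t totalBytes_;
    };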
That's certainly an interesting problem. I don't have a solution other than perhaps to change the design so that there is no dependence on the undefined order of link/DLL startup. Have you considered linking with the older linker? (or whatever the VS.NET term is)
Because the behavior of your variable and allocator relied on some (unknown at the time) arbitrary order of startup, I would probably fix that so that it is not an issue in the future. I guess you are really asking if anyone knows how to do some voodoo in VC9 to make the problem disappear. I am interested in hearing it as well.
How about this,
Make your main program a DLL too, call it main.dll, linked to all the other ones, and export the main function as, say, mainEntry(). Remove the global variable.
Create a new main exe which has the global variable and its initialization, but doesn't link statically to any of the other application DLLs (except for the allocator stuff).
This new main.exe then dynamically loads the main.dll using LoadLibrary(), then uses GetProcAddress to call mainEntry().
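A sketch of that new main.exe, assuming main.dll exports mainEntry with extern "C" (undecorated) linkage, as the steps above suggest:

    #include <windows.h>
    #include <cstdio>

    // Stand-in for the global that must construct before any
    // application DLL runs.
    struct AllocatorInit {
        AllocatorInit() { std::printf("small-object allocator ready\n"); }
    } g_allocatorInit;

    typedef int (*MainEntryFn)(int, char **);

    int main(int argc, char **argv) {
        // main.exe links against none of the application DLLs, so
        // none of them are mapped before this point.
        HMODULE dll = LoadLibraryA("main.dll");
        if (!dll) return 1;
        MainEntryFn mainEntry = reinterpret_cast<MainEntryFn>(
            GetProcAddress(dll, "mainEntry"));
        if (!mainEntry) return 2;
        return mainEntry(argc, argv);  // hand control to the real app
    }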
The solution to the problem turned out to be more straightforward than I originally thought. The initialization order problem was caused by the existence of several global variables of types derived from std container types (a basic design flaw that predated my position with that company). The solution was to replace all such globals with singletons. There were about 100 of them.
Once this was done, the initialization (and destruction) order was under programmer control.
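As a hypothetical illustration of that conversion, for one such global of a std container type:

    #include <map>
    #include <string>

    // Before: constructed at some arbitrary point during startup,
    // relative to globals in other translation units:
    //     std::map<std::string, int> g_settings;

    // After: a construct-on-first-use accessor, so the map exists by
    // the time any caller touches it, regardless of link order:
    std::map<std::string, int> &settings() {
        static std::map<std::string, int> *m =
            new std::map<std::string, int>();
        return *m;
    }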