MonoTouch: Adding DLL references in sub projects - xamarin.ios

In a way I am looking for best-practice here.
I have a common project that is shared by many of my apps. This project references the FlurryAnalytics and ATMHud DLLs.
If I do not also reference these DLLs in the main project, the apps will often, but not always, fail in the debug-to-device test. When debugging to the simulator I don't need to add these DLLs to the main project.
So, the question is: Do I have to include references to DLLs in the main project that I have in sub projects all the time?

Whenever possible I use references to project files (csproj files) over references to assemblies (.dll). It makes a lot of things easier, like:
code navigation (IDE);
automatic build dependency (the source code you're reading is the one you're building, not something potentially out-of-sync);
source-level debugging (you can have it without project references, but this way you're sure to be in sync);
(easier) switch between Debug|Release|... configurations;
changing defines (or any project-level option);
E.g.
Solution1.sln
    Project1a.csproj
    MonoTouch.Dialog.csproj (link to ../Common/MonoTouch.Dialog.csproj)
Solution2.sln
    Project2a.csproj
    MonoTouch.Dialog.csproj (link to ../Common/MonoTouch.Dialog.csproj)
Common.sln
    MonoTouch.Dialog.csproj
Large solutions might suffer a bit from doing this (build performance, searching across files...). The larger they get, the less likely it is that everyone has to know about every part of them. So there's a diminishing return on the advantages, while the inconvenience grows with each project being added.
E.g. I would not want to have references to every framework assembly inside Mono (but personally I could live with all the SDK assemblies of MonoTouch ;-)
Note: Working with assembly references should not cause you random errors while debugging on device. If you can create such a test case, please file a bug report :-)

Related

Do we need both Automapper and Automapper.Net4 dlls to use Automapper?

Do we need both the AutoMapper and AutoMapper.Net4 DLLs to use AutoMapper functionality in our code?
I mean, can't we just have a single DLL instead of both? I'm using AutoMapper for the first time.
Need help.
Thanks in advance.
All you need to do is run "Install-Package AutoMapper" and you're set. Because AutoMapper supports all major .NET platforms, things that are specific to your platform live in a platform-specific assembly. This is a very common approach for building cross-platform libraries.
In short, you shouldn't care, because NuGet takes care of everything for you. It's completely transparent to you as a user. You don't have to do anything extra to take advantage of the platform-specific features.
Why not ask Jimmy? AutoMapper using Portable Class Libraries.
From looking at the NuGet package, it would appear AutoMapper.dll is the core (common to all platforms), while AutoMapper.Net4.dll is the platform-specific part - both are necessary.
This is actually the correct answer:
Effectively the .Net4.dll assembly is combined into the one AutoMapper.dll, so you should delete that file. (Jimmy Bogard)
We spent a whole afternoon with the team debugging what was wrong (I got one customer bug report) and could not reproduce it. Then we finally found out that the problem was with AutoMapper.Net4.dll. After deleting it, the bug went away (we had already traced the problem in the code to AutoMapper).
Both are now combined into just one NuGet package: AutoMapper.

Visual Studio 2013 creates larger exe's - no MFC

I'm a little late with this question, but better late than never. I've been using Visual Studio 6.0 since it came out, but recently switched to VS 2013 on a new PC.
I've gotten my projects to build under 2013, but the resulting executables it produces are consistently bigger than the ones VS6.0 produced. I've seen a similar thread on here about that happening in the transition from VS2008 to VS2010, and the comments and suggestions there all seem to attribute the change to changes in MFC libraries that are statically linked in. But my projects are straight C code. No C++, let alone MFC. And the 'Use of MFC' option on my project is set to "Use Standard Windows Libraries" (presumably set by the import tool that generated the 2013-compatible project). The only non-standard library it uses is wsock32.lib.
The extra size isn't a killer, but it's significant relative to the size of the whole app. My biggest .exe goes from 980 KB to 1.3 MB - about a 35% increase in the size of an app whose small size was a selling point (i.e. install this tiny app and you have access to all of our goodies). That's without debugging info - the increase on the debug version is even bigger - but I don't really care about that.
Any ideas how to strip out the new cruft - or even to know what it is?
This is a good manual on how to make your binaries smaller.
Basic ideas are the following:
Don't forget about Release mode
Define WIN32_LEAN_AND_MEAN before including <windows.h> (see the sketch after this list)
Dynamically link to the C++ runtime
Compile the executable without debugging information
Compile with /O1, an 'optimize for size' flag
Remove the iostream and fstream headers; use low-level I/O instead if possible
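To tie the first few of those points together, here is a minimal sketch of a size-conscious translation unit, with a suggested build line in a comment; the message box is just placeholder logic:

    // Build (Release): cl /O1 /MD small.cpp /link /OPT:REF /OPT:ICF user32.lib
    // /O1 optimizes for size; /MD links the C runtime dynamically.

    // Trim what <windows.h> pulls in before including it.
    #define WIN32_LEAN_AND_MEAN
    #include <windows.h>

    int main()
    {
        // Stick to Win32/CRT-level I/O; <iostream>/<fstream> drag in a
        // large chunk of the C++ runtime and noticeably grow the binary.
        ::MessageBoxA(NULL, "hello", "small app", MB_OK);
        return 0;
    }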
Typically you generate a MAP file on both systems and figure out which sections make the largest contributions.
Anton's answer reminds me: first check whether they are both linked the same way (both static or both dynamic, otherwise it is apples and oranges).

Avoiding CA2122 from Code Analysis in VS2012 with SecuritySafeCritical fails

I have here a C++/CLI solution which isn't mixed with native C++ (although we have that type too). It consists of three projects, of which two are relevant to my question.
The first one is a static library (.lib) that deals with Active Directory matters.
The second one is the executable main project (.exe), which depends on the other projects.
I'm new to Visual Studio 2012 and want to take advantage of tools like Code Analysis. Running Code Analysis over the solution reveals several CA2122 warnings:
CA2122 Do not indirectly expose methods with link demands
I understand the security concerns related to this warning, and I think I understand how to deal with it, although I'm also new to this security stuff. These warnings relate to the Active Directory code when the whole solution is examined; when only the lib project is examined they do not appear, and everything seems to be OK.
Now to the core of the problem:
I tried to mark all methods I'm warned about with the SecuritySafeCritical attribute
--> no changes, same warnings
I've solved this warning in another project by marking the whole assembly as SecurityCritical and adding SecuritySafeCritical to the problematic method. That will not work here, since adding an AssemblyInfo.cpp that marks the assembly as SecurityCritical does not affect the problem. (I know that *.cpp files seem to be obsolete in managed static libraries, since the code apparently has to live entirely in the header files, which makes this kind of project obsolete... but we don't want a .dll for every small part, and we also want this stuff encapsulated in a project of its own instead of having some loose header files or mixing it with other regions.)
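For reference, here is a minimal C++/CLI sketch of the attribute placement tried in the two attempts above; the class and method are hypothetical stand-ins for the Active Directory code:

    // AssemblyInfo.cpp (second attempt): everything critical by default.
    using namespace System::Security;
    [assembly: SecurityCritical];

    // Wrapper header (first attempt): opt individual methods back in as
    // safe entry points that transparent code is allowed to call.
    public ref class DirectoryLookup
    {
    public:
        [SecuritySafeCritical]
        static System::String^ FindUser(System::String^ name)
        {
            // ...call into the link-demand-protected API here...
            return name;
        }
    };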
After that I tried to mark the whole assembly of the main project as SecurityTransparent, because as far as I understand it, SecuritySafeCritical code can be called by both SecurityTransparent and SecurityCritical code (which to me covers every kind of security). --> My SecuritySafeCritical methods are now flagged with CA2141 warnings, and many other methods produce new warnings (most of them related to exception handling):
CA2141: Transparent methods must not satisfy LinkDemands
CA2140: Transparent code must not reference security critical items
So I decided to try marking this assembly as SecurityCritical too.
--> My SecuritySafeCritical methods finally produce no warnings, but all the other warnings from methods with exception handling remain.
So I don't know how to solve this problem. I assume that having a managed static library is the problem, and with just a dll project I could perhaps solve it as mentioned in 2., but I want to avoid shipping yet another *.dll project with our programs.
I searched for a solution but found nothing that helps in this case. Information on this topic is also rare, out of date (related to .NET Framework 2.0, while the whole security model seems to have changed massively with .NET Framework 4.0), or hard for me to understand. So I hope someone has an idea what I could try or what I should do.

MissingMethodException when using zxing.Monotouch on iOS6

I have updated my development system to the new MonoTouch (6.0.1), and now whenever I reference zxing.Monotouch types I get a MissingMethodException on the constructor.
System.MissingMethodException: Method not found: 'MyClass..ctor'.
It's been 3 days now...
Anyone got any idea? I'm even willing to give up zxing if that what it takes (even though it's a wonderful library).
Edit
When I include zxing.Monotouch in the solution and reference it as a project, the problem does not reproduce. If that's a clue, I've missed it...
It's likely that the binary version of zxing.Monotouch is trying to access something that does not exist in 6.0.1. That's uncommon, as we try to maintain source/binary compatibility unless the code is really broken (e.g. it would cause a crash anyway). I cannot be more precise without more data (e.g. a full build log).
If you include zxing.Monotouch as a project reference then it will be rebuilt. If that works, then it really looks like source compatibility was preserved (but not binary compatibility).
Whenever you have the source code available I encourage you to use .csproj (not .dll) references. It has a few advantages, including the source/binary compatibility mentioned above and the fact that it makes things easier to debug from your project.

What are the porting issues going from VC8 (VS2005) to VC9 (VS2008)?

I have inherited a very large and complex project (actually, a 'solution' consisting of 119 'projects', most of which are DLLs) that was built and tested under VC8 (VS2005), and I have the task of porting it to VC9 (VS2008).
The porting process I used was:
Copy the VC8 .sln file and rename it to a VC9 .sln file.
Copy all of the VC8 project files, and rename them to VC9 project files.
Edit all of the VC9 project files, s/vc8/vc9/.
Edit the VC9 .sln, s/vc8/vc9/.
Load the VC9 .sln with VS2008, and let the IDE 'convert' all of the project files.
Fix compiler and linker errors until I got a good build.
So far, I have run into the following issues in that last step.
1) A change in the way decorated names are calculated, causing truncation of the names.
This is more than just a warning (http://msdn.microsoft.com/en-us/library/074af4b6.aspx). Libraries built with this warning will not link with other modules. Applying the solution given in MSDN was non-trivial, but doable. I addressed this problem separately in How do I increase the allowed decorated name length in VC9 (MSVC 2008)?
2) A change that does not allow the assignment of zero to an iterator. This is per the spec, and it was fairly easy to find and fix these previously-allowed coding errors. Instead of assignment of zero to an iterator, use the value end().
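A sketch of the kind of fix this involved (the container and function are hypothetical):

    #include <vector>

    void example(std::vector<int>& v)
    {
        // VC8's iterators were close enough to plain pointers that 0
        // converted silently; VC9's are class types, so it no longer does.
        //std::vector<int>::iterator it = 0;       // no longer compiles
        std::vector<int>::iterator it = v.end();   // portable 'no element' value
        if (it == v.end()) { /* nothing found yet */ }
    }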
3) for-loop scope is now per the ANSI standard. Another easy-to-fix problem.
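For example (a minimal sketch; match() and use() are hypothetical helpers):

    inline bool match(int i) { return i == 3; }   // hypothetical predicate
    inline void use(int i)   { (void)i; }         // hypothetical consumer

    void find_first(int n)
    {
        // Pre-standard VC code often relied on the loop variable
        // outliving the loop:
        //   for (int i = 0; i < n; ++i) if (match(i)) break;
        //   use(i);   // VC9 error: 'i' is scoped to the loop
        int i;          // declare it outside the loop instead
        for (i = 0; i < n; ++i)
            if (match(i)) break;
        use(i);
    }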
4) More space required for pre-compiled headers. In some cases a LOT more space was required. I ended up using /Zm999 to provide the maximum PCH space. If PCH memory usage gets bumped up again, I assume that I will have to forgo PCH altogether, and just endure the increase in what is already a very long build time.
5) A change in requirements for copy ctors and default dtors. It appears that in template classes, under certain conditions that I haven't quite figured out yet, the compiler no longer generates a default ctor or a default dtor. I suspect this is a bug in VC9, but there may be something else that I'm doing wrong. If so, I'd sure like to know what it is.
6) The GUIDs in the sln and vcproj files were not changed. This does not appear to impact the build in any way that I can detect, but it is worrisome nevertheless.
Note that despite all of these issues, the project built, ran, and passed extensive QA testing under VC8. I have also back-ported all of the changes to the VC8 projects, where they still build and run just as happily as they did before (using VS2005/VC8). So, all of my changes required for a VC9 build at least appear to be backward-compatible, although the regression testing is still underway.
Now for the really hard problem: I have run into a difference in the startup sequence between VC8 and VC9 projects. The program uses a small-object allocator modeled after Loki, from Andrei Alexandrescu's book Modern C++ Design. This allocator is initialized using a global variable defined in the main program module.
Under VC8, this global variable is constructed at the very beginning of the program startup, from code in a module crtexe.c. Under VC9, the first module that executes is crtdll.c, which indicates that the startup sequence has been changed. The DLLs that are starting up appear to be confusing the small-object allocator by allocating and deallocating memory before the global object can initialize the statistics, which leads to some spurious diagnostics. The operation of the program does not appear to be materially affected, but the QA folks will not allow the spurious diagnostics to get past them.
Is there some way to force the construction of a global object prior to loading DLLs?
What other porting issues am I likely to encounter?
Is there some way to force the construction of a global object prior to loading DLLs?
How about the DELAYLOAD option? So that DLLs aren't loaded until their first call?
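A hedged sketch of what that looks like, assuming an application DLL named helper.dll with an exported helper_function:

    // Link with: /link /DELAYLOAD:helper.dll delayimp.lib helper.lib
    // helper.dll is then not mapped at process start; the stub in
    // delayimp.lib loads it on the first call into the DLL, after the
    // EXE's global objects have already been constructed.
    __declspec(dllimport) int helper_function(int);   // hypothetical export

    int use_helper()
    {
        return helper_function(42);   // first call triggers the load
    }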
That is a tough problem, mostly because you've inherited a design that's inherently dangerous: you're not supposed to rely on the initialization order of global variables.
It sounds like something you could try to work around by replacing the global variable with a singleton that other functions retrieve by calling a global function or method that returns a pointer to the singleton object. If the object exists at the time of the call, the function returns a pointer to it. Otherwise, it allocates a new one and returns a pointer to the newly allocated object.
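A minimal sketch of that construct-on-first-use accessor (the class name and the statistics it keeps are hypothetical):

    #include <cstddef>

    class AllocatorStats
    {
    public:
        // Created on first use, so it exists no matter which global or DLL
        // asks for it first. Deliberately leaked, which also sidesteps
        // destruction-order problems at shutdown. (Note: function-local
        // static initialization is not thread-safe on pre-C++11 compilers.)
        static AllocatorStats* instance()
        {
            static AllocatorStats* obj = new AllocatorStats();
            return obj;
        }
        void record_alloc(std::size_t n) { total_ += n; }
    private:
        AllocatorStats() : total_(0) {}
        std::size_t total_;
    };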
The problem, of course, is that I can't think of a singleton implementation that would avoid the problem you're describing. Maybe this discussion would be useful: http://www.oneunified.net/blog/Personal/SoftwareDevelopment/CPP/Singleton.article
That's certainly an interesting problem. I don't have a solution other than perhaps to change the design so that there is no dependence on the undefined order of link/DLL startup. Have you considered linking with the older linker? (or whatever the VS.NET term is)
Because the behavior of your variable and allocator relied on some (unknown at the time) arbitrary order of startup, I would probably fix that so that it is not an issue in the future. I guess you are really asking if anyone knows how to do some voodoo in VC9 to make the problem disappear. I am interested in hearing it as well.
How about this:
Make your main program a DLL too, call it main.dll, linked to all the other ones, and export the main function as, say, mainEntry(). Remove the global variable.
Create a new main.exe which has the global variable and its initialization, but doesn't link statically to any of the other application DLLs (except for the allocator stuff).
This new main.exe then dynamically loads the main.dll using LoadLibrary(), then uses GetProcAddress() to call mainEntry().
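A sketch of the loader EXE this describes (all names are hypothetical):

    #include <windows.h>

    // Stand-in for the allocator's initialization object; only the
    // allocator library is statically linked into this EXE.
    struct SmallObjectAllocatorInit { SmallObjectAllocatorInit() { /* init stats */ } };
    static SmallObjectAllocatorInit g_allocInit;

    typedef int (*MainEntryFn)(int, char**);

    int main(int argc, char** argv)
    {
        // g_allocInit is fully constructed before any app DLL is mapped.
        HMODULE app = ::LoadLibraryA("main.dll");
        if (!app)
            return 1;
        MainEntryFn entry =
            reinterpret_cast<MainEntryFn>(::GetProcAddress(app, "mainEntry"));
        return entry ? entry(argc, argv) : 1;
    }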
The solution to the problem turned out to be more straightforward than I originally thought. The initialization order problem was caused by the existence of several global variables of types derived from std container types (a basic design flaw that predated my position with that company). The solution was to replace all such globals with singletons. There were about 100 of them.
Once this was done, the initialization (and destruction) order was under programmer control.
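The replacement is mechanical; a sketch, assuming one of the offending globals was a registry map:

    #include <map>
    #include <string>

    // Before: construction order relative to other globals and DLLs
    // was unspecified.
    //   std::map<std::string, int> g_registry;

    // After: constructed on first use, so the order is under the
    // programmer's control.
    std::map<std::string, int>& registry()
    {
        static std::map<std::string, int> instance;
        return instance;
    }

    // Call sites change from g_registry[...] to registry()[...].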
