Writing string to file in Swift 3 for Ubuntu Linux - linux

I am currently porting my framework to server-side Swift and I stumbled upon a crash when writing a string to a file. Under macOS it is working.
The source code
#!/usr/bin/swift
import Foundation
let url = URL(string: "/tmp/foo")!
let line = "foobar"
print("writing to \(url)")
try line.write(to: url, atomically: true, encoding: String.Encoding.utf8)
causes this segmentation fault:
root@bd5d9b821031:/# ./test.swift
writing to <CFURL 0x67b66e0 [0x7ff283886840]>{string = /tmp/foo, encoding = 0, base = (null)}
0 swift 0x000000000333cb08 llvm::sys::PrintStackTrace(llvm::raw_ostream&) + 40
1 swift 0x000000000333b2d6 llvm::sys::RunSignalHandlers() + 54
2 swift 0x000000000333d63a
3 libpthread.so.0 0x00007ff28921b330
4 libswiftCore.so 0x00007ff285c3f648 swift_getErrorValue + 8
5 libswiftCore.so 0x00007ff2896462d1 swift_getErrorValue + 60845201
6 swift 0x0000000000c161f4 llvm::MCJIT::runFunction(llvm::Function*, llvm::ArrayRef<llvm::GenericValue>) + 996
7 swift 0x0000000000c197af llvm::ExecutionEngine::runFunctionAsMain(llvm::Function*, std::vector<std::string, std::allocator<std::string> > const&, char const* const*) + 1215
8 swift 0x00000000007e6bff swift::RunImmediately(swift::CompilerInstance&, std::vector<std::string, std::allocator<std::string> > const&, swift::IRGenOptions&, swift::SILOptions const&) + 2367
9 swift 0x00000000007e0818
10 swift 0x00000000007dbab7 swift::performFrontend(llvm::ArrayRef<char const*>, char const*, void*, swift::FrontendObserver*) + 2887
11 swift 0x00000000007a7568 main + 2872
12 libc.so.6 0x00007ff2879c4f45 __libc_start_main + 245
13 swift 0x00000000007a4f66
Stack dump:
0. Program arguments: /usr/bin/swift -frontend -interpret ./test.swift -target x86_64-unknown-linux-gnu -disable-objc-interop -color-diagnostics -module-name test
Segmentation fault
I also tried wrapping it in a do ... catch, but the segfault still occurs. Any ideas on how to fix this are deeply appreciated!

I guess the problem is in this line:
let url = URL(string: "/tmp/foo")!
Try replacing it with
let url = URL(fileURLWithPath: "/tmp/foo")
This way you get a proper file URL. URL(string:) parses its argument as a generic URL (here one with no scheme at all), whereas write(to:atomically:encoding:) expects a file URL; judging by the swift_getErrorValue frames in the trace, the error thrown for the non-file URL is what brings the interpreter down.


Memory leak using CGAL's exact kernel

I am working on a project that uses OpenMP and the exact kernel of CGAL. Running my code in debug mode, thanks to Visual Studio Leak Detector, I discovered a lot of unexpected memory leaks that are obviously related to the combination of those features. I know that an instance of CGAL::Lazy_exact_nt<CGAL::Gmpq> a.k.a. FT should not be shared between multiple threads, but since that is not the case here, I thought it was safe. Is there any way to fix these leaks?
Here is a minimal reproducible example:
#include <sstream>
#include <CGAL/Gmpq.h>
#include <CGAL/Lazy_exact_nt.h>

typedef CGAL::Lazy_exact_nt<CGAL::Gmpq> FT;

int main()
{
    double xs_h = 1.2;
    #pragma omp parallel
    {
        FT xs;
        std::stringstream stream;
        stream << xs_h;
        stream >> xs;
    }
}
And here is (part of) the output of VLD (7 memory leaks in total, all pointing to the same line of code):
---------- Block 26 at 0x00000000FEED2E50: 40 bytes ----------
Leak Hash: 0x6B0C3782, Count: 1, Total 40 bytes
Call Stack (TID 15080):
ucrtbased.dll!malloc()
D:\agent\_work\9\s\src\vctools\crt\vcstartup\src\heap\new_scalar.cpp (35): Test.exe!operator new() + 0xA bytes
D:\CGAL-4.13\include\CGAL\Lazy.h (818): Test.exe!CGAL::Lazy<CGAL::Interval_nt<0>,CGAL::Gmpq,CGAL::Lazy_exact_nt<CGAL::Gmpq>,CGAL::To_interval<CGAL::Gmpq> >::zero() + 0x77 bytes
D:\CGAL-4.13\include\CGAL\Lazy.h (766): Test.exe!CGAL::Lazy<CGAL::Interval_nt<0>,CGAL::Gmpq,CGAL::Lazy_exact_nt<CGAL::Gmpq>,CGAL::To_interval<CGAL::Gmpq> >::Lazy<CGAL::Interval_nt<0>,CGAL::Gmpq,CGAL::Lazy_exact_nt<CGAL::Gmpq>,CGAL::To_interval<CGAL::Gmpq> >() + 0x5 bytes
D:\CGAL-4.13\include\CGAL\Lazy_exact_nt.h (365): Test.exe!CGAL::Lazy_exact_nt<CGAL::Gmpq>::Lazy_exact_nt<CGAL::Gmpq>() + 0x28 bytes
D:\projects\...\src\main.cpp (20): Test.exe!main$omp$1() + 0xA bytes
VCOMP140D.DLL!vcomp_fork() + 0x2E5 bytes
VCOMP140D.DLL!vcomp_fork() + 0x2A1 bytes
VCOMP140D.DLL!vcomp_atomic_div_r8() + 0x20A bytes
KERNEL32.DLL!BaseThreadInitThunk() + 0x14 bytes
ntdll.dll!RtlUserThreadStart() + 0x21 bytes
I am using CGAL 4.13 and my compiler is Visual Studio 2019. I tried recompiling this code with the macro CGAL_HAS_THREADS defined, but it does not change the result (the memory leaks do not vanish). Thanks for your attention.

How to declare 16-bits pointer to string in GCC C compiler for arm processor

I tried to declare an array of short pointers to strings (16 bits instead of the default 32 bits) in the GNU GCC C compiler for an ARM Cortex-M0 processor, to reduce flash consumption. I have about 200 strings in two languages, so reducing the pointer size from 32 bits to 16 bits could save 800 bytes of flash. It should be possible because the flash size is less than 64 kB, so the high half-word (16 bits) of any pointer into flash is constant and equal to 0x0800:
const unsigned char str1[] ="First string";
const unsigned char str2[] ="Second string";
const unsigned short ptrs[] = {&str1, &str2}; // this line generates an error
but I got an error on the third line:
"error: initializer element is not computable at load time"
Then I tried:
const unsigned short ptr1 = (&str1 & 0xFFFF);
and I got:
"error: invalid operands to binary & (have 'const unsigned char (*)[11]' and 'int')"
After many attempts I ended up in assembly:
.section .rodata.strings
.align 2
ptr0:
ptr3: .short (str3-str0)
ptr4: .short (str4-str0)
str0:
str3: .asciz "3-th string"
str4: .asciz "4-th string"
Compilation passes, but now I have a problem referencing the pointers ptr4 and ptr0 from C code. Passing "ptr4-ptr0" as an 8-bit argument to the C function:
ptr = getStringFromTable (ptr4-ptr0)
declared as:
const unsigned char* getStringFromTable (unsigned char stringIndex)
I got wrong code like this:
ldr r3, [pc, #28] ; (0x8000a78 <main+164>)
ldrb r1, [r3, #0]
ldr r3, [pc, #28] ; (0x8000a7c <main+168>)
ldrb r3, [r3, #0]
subs r1, r1, r3
uxtb r1, r1
bl 0x8000692 <getStringFromTable>
instead of something like this:
movs r0, #2
bl 0x8000692 <getStringFromTable>
I would be grateful for any suggestion.
.....after a few days.....
Following @TonyK's and @old_timer's advice I finally solved the problem in the following way.
In assembly I wrote:
.global str0, ptr0
.section .rodata.strings
.align 2
ptr0: .short (str3-str0)
.short (str4-str0)
str0:
str3: .asciz "3-th string"
str4: .asciz "4-th string"
Then I declared in C:
extern unsigned short ptr0[];
extern const unsigned char str0[] ;
enum ptrs {ptr3, ptr4}; //automatically: ptr3=0, ptr4=1
const unsigned char* getStringFromTable (enum ptrs index)
{
return &str0[ptr0[index]] ;
}
and now this call:
ptr = getStringFromTable (ptr4)
is compiled to the correct code:
08000988: 0x00000120 movs r0, #1
0800098a: 0xfff745ff bl 0x8000818 <getStringFromTable>
I just have to remember to keep enum ptrs in the same order each time I add a string to the assembly and a new item to enum ptrs.
Declare ptr0 and str0 as .global in your assembly language file. Then in C:
extern unsigned short ptr0[] ;
extern const char str0[] ;
const char* getStringFromTable (unsigned char index)
{
return &str0[ptr0[index]] ;
}
This works as long as the total size of the str0 table is less than 64K.
A pointer is an address, and addresses on ARM cannot be 16 bits; that makes no sense. Other than Acorn-based ARMs (24-bit, if I remember right), addresses are a minimum of 32 bits for ARM, and going into AArch64 they get larger, never smaller.
This
ptr3: .short (str3-str0)
does not produce an address (so it can't be a pointer); it produces an offset that is only usable when you add it to the base address str0.
You cannot generate 16-bit addresses (in a debugged/usable ARM compiler), but since everything appears to be static here (const/rodata), that makes it even easier to solve. It is solvable at runtime as well, but based on the information provided thus far it is even simpler pre-computed.
const unsigned char str1[] ="First string";
const unsigned char str2[] ="Second string";
const unsigned char str3[] ="Third string";
Brute force takes something like 30 lines of code to produce the header file below, much less if you try to compact it, although ad-hoc programs don't need to be pretty.
The output is intentionally long, to demonstrate the solution and to be able to visually check the tool; the compiler doesn't care, so it is best to make it long and verbose for readability/validation purposes:
mystrings.h
const unsigned char strs[40]=
{
0x46, // 0 F
0x69, // 1 i
0x72, // 2 r
0x73, // 3 s
0x74, // 4 t
0x20, // 5
0x73, // 6 s
0x74, // 7 t
0x72, // 8 r
0x69, // 9 i
0x6E, // 10 n
0x67, // 11 g
0x00, // 12
0x53, // 13 S
0x65, // 14 e
0x63, // 15 c
0x6F, // 16 o
0x6E, // 17 n
0x64, // 18 d
0x20, // 19
0x73, // 20 s
0x74, // 21 t
0x72, // 22 r
0x69, // 23 i
0x6E, // 24 n
0x67, // 25 g
0x00, // 26
0x54, // 27 T
0x68, // 28 h
0x69, // 29 i
0x72, // 30 r
0x64, // 31 d
0x20, // 32
0x73, // 33 s
0x74, // 34 t
0x72, // 35 r
0x69, // 36 i
0x6E, // 37 n
0x67, // 38 g
0x00, // 39
};
const unsigned short ptrs[3]=
{
0x0000, // 0 0
0x000D, // 1 13
0x001B, // 2 27
};
The compiler then handles all of the address generation when you use it:
&strs[ptrs[n]]
Depending on how you write your tool, you can even have things like
#define FIRST_STRING 0
#define SECOND_STRING 1
and so on so that your code could find the string with
strs[ptrs[SECOND_STRING]]
making the program that much more readable, all auto-generated from an ad-hoc tool that does this offset work for you.
The main() part of the tool could look like:
add_string(FIRST_STRING,"First string");
add_string(SECOND_STRING,"Second string");
add_string(THIRD_STRING,"Third string");
with that function and some more code to dump the result.
and then you simply include the generated output and use the
strs[ptrs[THIRD_STRING]]
type syntax in the real application.
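For illustration, here is a minimal sketch in C of consuming such generated output; get_string is a hypothetical helper, and the pool and offsets below are written by hand rather than emitted by the tool:

```c
#include <string.h>

/* Hypothetical generator output: one flat byte pool holding all the
   NUL-terminated strings, plus 16-bit offsets into the pool. Each
   string costs 2 bytes of table space instead of a 32-bit pointer;
   only one full-sized base address (strs) exists. */
static const unsigned char strs[] =
    "First string\0"    /* offset  0 */
    "Second string\0"   /* offset 13 */
    "Third string";     /* offset 27; trailing NUL added by the literal */

static const unsigned short ptrs[] = { 0, 13, 27 };

#define FIRST_STRING  0
#define SECOND_STRING 1
#define THIRD_STRING  2

/* The compiler materialises the full 32-bit address only at the
   point of use, exactly as with &strs[ptrs[n]] above. */
static const char *get_string(unsigned int index)
{
    return (const char *)&strs[ptrs[index]];
}
```

So get_string(THIRD_STRING) resolves to "Third string" while the table itself stays at 16 bits per entry.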
If you prefer to continue down the path you started (it looks like more work, but is still pretty quick to code):
ptr0:
ptr3: .short (str3-str0)
ptr4: .short (str4-str0)
str0:
str3: .asciz "3-th string"
str4: .asciz "4-th string"
Then you need to export str0 and ptr3, ptr4 (as needed, depending on your assembler's assembly language) and access the string through a pointer computed as str0+ptr3:
extern unsigned int str0;
extern unsigned short ptr3;
...
... *((unsigned char *)(str0+ptr3))
fixing whatever syntax mistakes I intentionally or unintentionally added to that pseudo code.
That would work as well and you would have the one base address then the hundreds of 16 bit offsets to that address.
You could even do some flavor of
const unsigned short ptrs[]={ptr0,ptr1,ptr2,ptr3};
...
(unsigned char *)(str0+ptrs[n])
using some flavor of C syntax to create that array but probably not worth that extra effort...
The solution a few of us have mentioned thus far (one example demonstrated above), 16-bit offsets, which are NOT addresses and therefore NOT pointers, is much easier to code, maintain and use, and maybe easier to read, depending on your implementation. However it is implemented, it requires one full-sized base address plus the offsets. It might be possible to code this in C without an ad-hoc tool, but the ad-hoc tool literally only takes a few minutes to write.
I write programs to write programs, or programs to compress/manipulate data, almost daily; why not? Compression is a good example of this. Want to embed a black-and-white image into your resource-limited MCU flash? Don't put all the pixels in the binary; start with a run-length encoding and go from there. That means a third-party tool, written by you or not, that converts the real data into a structure that fits. Same thing here: a third-party tool that prepares/compresses the data for the application. This problem is really just another compression algorithm, since you are trying to reduce the amount of data without losing any.
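The run-length idea can be sketched quickly; here is a toy encoder in C for a black-and-white pixel row stored one pixel per byte (the name rle_encode and the format are illustrative, not from any real tool):

```c
#include <stddef.h>

/* Toy run-length encoder for a black-and-white pixel row: emits
   (value, count) pairs, one byte each, capping runs at 255.
   Returns the number of bytes written to out. */
static size_t rle_encode(const unsigned char *px, size_t n,
                         unsigned char *out)
{
    size_t w = 0;
    size_t i = 0;
    while (i < n) {
        unsigned char v = px[i];
        unsigned char run = 0;
        /* count consecutive equal pixels, bounded by a byte */
        while (i < n && px[i] == v && run < 255) { i++; run++; }
        out[w++] = v;
        out[w++] = run;
    }
    return w;
}
```

Long runs of identical pixels, the common case for monochrome images, collapse to two bytes each.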
Also note that, depending on what these strings are, if duplicates or fragments of other strings are possible, the tool could be even smarter:
const unsigned char str1[] ="First string";
const unsigned char str2[] ="Second string";
const unsigned char str3[] ="Third string";
const unsigned char str4[] ="string";
const unsigned char str5[] ="Third string";
creating
const unsigned char strs[40]=
{
0x46, // 0 F
0x69, // 1 i
0x72, // 2 r
0x73, // 3 s
0x74, // 4 t
0x20, // 5
0x73, // 6 s
0x74, // 7 t
0x72, // 8 r
0x69, // 9 i
0x6E, // 10 n
0x67, // 11 g
0x00, // 12
0x53, // 13 S
0x65, // 14 e
0x63, // 15 c
0x6F, // 16 o
0x6E, // 17 n
0x64, // 18 d
0x20, // 19
0x73, // 20 s
0x74, // 21 t
0x72, // 22 r
0x69, // 23 i
0x6E, // 24 n
0x67, // 25 g
0x00, // 26
0x54, // 27 T
0x68, // 28 h
0x69, // 29 i
0x72, // 30 r
0x64, // 31 d
0x20, // 32
0x73, // 33 s
0x74, // 34 t
0x72, // 35 r
0x69, // 36 i
0x6E, // 37 n
0x67, // 38 g
0x00, // 39
};
const unsigned short ptrs[5]=
{
0x0000, // 0 0
0x000D, // 1 13
0x001B, // 2 27
0x0006, // 3 6
0x001B, // 4 27
};

Assertion in appcore.cpp while loading regular DLL dynamically linked to MFC

I have inherited an application which consists of a regular DLL which is dynamically linked to MFC and which is loaded from a Windows service executable which also links dynamically to MFC. The code is being compiled using Microsoft Visual Studio 2008 Professional (old, I know...). This application has been 'working' for several years but I have found that I cannot run it as Debug build due to the following assertion in appcore.cpp:
Debug Assertion Failed!
Program: C:\Projects\CMM\Debug\CMM.exe
File: f:\dd\vctools\vc7libs\ship\atlmfc\src\mfc\appcore.cpp
Line: 380
For more information on how your program can cause an assertion
failure, see the Visual C++ documentation on asserts.
(Press Retry to debug the application)
which corresponds to the following code in the CWinApp constructor:
ASSERT(AfxGetThread() == NULL);
pThreadState->m_pCurrentWinThread = this;
ASSERT(AfxGetThread() == this);
This occurs when loading the DLL via LoadLibrary and leads me to suspect that the application has worked more by luck than judgement over the years (due to ASSERT not being included as part of the Release build).
My (admittedly limited) understanding of MFC is that although there should generally only be a single instance of CWinApp, it is permissible to have an additional one in regular DLLs which link dynamically to MFC, as in this case. The code has one instance in the service executable and one in the DLL. The CWinApp constructor gets called three (?) times, once for some internal instance within the MFC framework, once for the instance in the service executable and once for the instance in the DLL. The first two work fine, it is the third which blows up.
All of the exported functions start with AFX_MANAGE_STATE (although execution never gets that far) and the pre-processor flags are, I believe, correct w.r.t. Microsoft's documentation (_AFXDLL for the EXE, _AFXDLL, _USRDLL and _WINDLL for the DLL).
I have tried using AfxLoadLibrary instead of LoadLibrary to no effect. However, if I include
AFX_MANAGE_STATE( AfxGetStaticModuleState() )
at the start of the function which calls LoadLibrary/AfxLoadLibrary, the CWinApp object is actually constructed but execution then blows up in dllmodul.cpp instead.
Can anybody shed any light on why this might be happening or what I need to do to fix it?
EDIT
This is the call stack when the assertion occurs:
mfc90d.dll!CWinApp::CWinApp(const char * lpszAppName=0x00000000) Line 380 + 0x1c bytes C++
cimdll.dll!CCimDllApp::CCimDllApp() Line 146 + 0x19 bytes C++
cimdll.dll!`dynamic initializer for 'theApp''() Line 129 + 0xd bytes C++
msvcr90d.dll!_initterm(void (void)* * pfbegin=0x1b887c88, void (void)* * pfend=0x1b887d6c) Line 903 C
cimdll.dll!_CRT_INIT(void * hDllHandle=0x1b770000, unsigned long dwReason=1, void * lpreserved=0x00000000) Line 318 + 0xf bytes C
cimdll.dll!__DllMainCRTStartup(void * hDllHandle=0x1b770000, unsigned long dwReason=1, void * lpreserved=0x00000000) Line 540 + 0x11 bytes C
cimdll.dll!_DllMainCRTStartup(void * hDllHandle=0x1b770000, unsigned long dwReason=1, void * lpreserved=0x00000000) Line 510 + 0x11 bytes C
ntdll.dll!_LdrxCallInitRoutine@16() + 0x16 bytes
ntdll.dll!LdrpCallInitRoutine() + 0x43 bytes
ntdll.dll!LdrpInitializeNode() + 0x101 bytes
ntdll.dll!LdrpInitializeGraphRecurse() + 0x71 bytes
ntdll.dll!LdrpPrepareModuleForExecution() + 0x8b bytes
ntdll.dll!LdrpLoadDllInternal() + 0x121 bytes
ntdll.dll!LdrpLoadDll() + 0x92 bytes
ntdll.dll!_LdrLoadDll@16() + 0xd9 bytes
KernelBase.dll!_LoadLibraryExW@12() + 0x138 bytes
KernelBase.dll!_LoadLibraryExA@12() + 0x26 bytes
KernelBase.dll!_LoadLibraryA@4() + 0x32 bytes
mfc90d.dll!AfxCtxLoadLibraryA(const char * lpLibFileName=0x02a70ce0) Line 487 + 0x74 bytes C++
mfc90d.dll!AfxLoadLibrary(const char * lpszModuleName=0x02a70ce0) Line 193 + 0x9 bytes C++
CMM.exe!CMonDevDll::LoadDLL() Line 207 + 0x1b bytes C++
CMM.exe!CMonDevDll::LoadDllEntryPoints() Line 268 + 0x8 bytes C++
CMM.exe!CMonDevDll::Initialize(CMonDevRun * pMonDevRun=0x0019fe60) Line 186 + 0x8 bytes C++
CMM.exe!CCtcLinkMonDev::Initialize(CMonDevRun * pMonDevRun=0x0019fe60, CCtcRegistry & reg={...}, int nLinkId=1) Line 546 + 0x18 bytes C++
CMM.exe!CCtcLinkSwitchMgr::Initialize(CMonDevRun * pMonDevRun=0x0019fe60, CCtcRegistry & reg={...}) Line 188 + 0x14 bytes C++
CMM.exe!CMonDevRun::Initialize(ATL::CStringT<char,StrTraitMFC_DLL<char,ATL::ChTraitsCRT<char> > > szServiceName="CimService") Line 257 + 0x16 bytes C++
CMM.exe!CMonDevService::Run() Line 202 + 0x2d bytes C++
CommonFilesD.dll!CCtcServiceBase::ParseStandardArgs(int argc=-1, char * * argv=0x02a51b44) Line 278 + 0xf bytes C++
CMM.exe!main(int argc=4, char * * argv=0x02a51b38) Line 126 + 0x16 bytes C++
CMM.exe!__tmainCRTStartup() Line 586 + 0x19 bytes C
CMM.exe!mainCRTStartup() Line 403 C
kernel32.dll!@BaseThreadInitThunk@12() + 0x24 bytes
ntdll.dll!__RtlUserThreadStart() + 0x2f bytes
ntdll.dll!__RtlUserThreadStart@8() + 0x1b bytes
I finally managed to track down the cause of my crash. A library used by my DLL was linking statically to Boost.Thread and causing this issue, presumably due to a runtime mismatch. Changing the library to link dynamically to Boost instead seems to have fixed the problem.

can't get wxHaskell to work from ghci on Mac

I'm trying to run an example using the EnableGUI function.
% ghci -framework Carbon Main.hs
*Main> enableGUI >> main
This is what I get instead of a working program:
2013-01-14 00:21:03.021 ghc[13403:1303] *** Assertion failure in +[NSUndoManager _endTopLevelGroupings], /SourceCache/Foundation/Foundation-945.11/Misc.subproj/NSUndoManager.m:328
2013-01-14 00:21:03.022 ghc[13403:1303] +[NSUndoManager(NSInternal) _endTopLevelGroupings] is only safe to invoke on the main thread.
2013-01-14 00:21:03.024 ghc[13403:1303] (
0 CoreFoundation 0x00007fff8c8ea0a6 __exceptionPreprocess + 198
1 libobjc.A.dylib 0x00007fff867243f0 objc_exception_throw + 43
2 CoreFoundation 0x00007fff8c8e9ee8 +[NSException raise:format:arguments:] + 104
3 Foundation 0x00007fff884966a2 -[NSAssertionHandler handleFailureInMethod:object:file:lineNumber:description:] + 189
4 Foundation 0x00007fff884fc8b7 +[NSUndoManager(NSPrivate) _endTopLevelGroupings] + 156
5 AppKit 0x00007fff8ecb832d -[NSApplication run] + 687
6 libwx_osx_cocoau_core-2.9.4.0.0.dylib 0x000000010ae64c96 _ZN14wxGUIEventLoop5DoRunEv + 40
7 libwx_baseu-2.9.4.0.0.dylib 0x000000010b37e0e5 _ZN13wxCFEventLoop3RunEv + 63
8 libwx_baseu-2.9.4.0.0.dylib 0x000000010b2e91bf _ZN16wxAppConsoleBase8MainLoopEv + 81
9 libwx_osx_cocoau_core-2.9.4.0.0.dylib 0x000000010ae1b04f _ZN5wxApp5OnRunEv + 29
10 libwx_baseu-2.9.4.0.0.dylib 0x000000010b32e8d1 _Z7wxEntryRiPPw + 102
11 libwxc.dylib 0x000000010bc8a9a4 ELJApp_InitializeC + 116
12 ??? 0x000000010beb9702 0x0 + 4494956290
)
2013-01-14 00:21:03.024 ghc[13403:1303] *** Assertion failure in +[NSUndoManager _endTopLevelGroupings], /SourceCache/Foundation/Foundation-945.11/Misc.subproj/NSUndoManager.m:328
When I compile it and run it via macosx-app, it works rather well, but, for obvious reasons, I really want this to work from ghci.
What can I do? Google reveals nothing about the mysterious problems of NSUndoManager used with Haskell. :(
ghci -fno-ghci-sandbox
works for me on OS X 10.8, wx 0.90.0.1.
Thanks to Heinrich: https://github.com/jodonoghue/wxHaskell/pull/6
wxHaskell hasn't worked within ghci for a while. Apparently C++ memory management and re-using components within ghci cause problems. You just have to rewrite main repeatedly. :(
The FAQ says:
GHCi cannot mix static and dynamic libraries; it will be solved in the near future in wxHaskell.

help understanding MonoTouch crash log

My MonoTouch app (release build) is crashing randomly and I'm getting this in the crash log. Unfortunately, I don't see anything useful related to my app. It looks like it's down deep in the bowels of MonoTouch and iOS.
I'm running this on an iPhone 3G with OS 3.1.2.
Can anyone help me understand what this crash log means?
Incident Identifier: 222781AB-0F7C-4E1D-9E10-6EE946D6C320
CrashReporter Key: 0ee985a48f32f63b7e50536870f06a1ab4122600
Process: MyApp_iOS [593]
Path: /var/mobile/Applications/095A615B-2F9B-4A84-B0E3-EF1246915594/MyApp_iOS.app/MyApp_iOS
Identifier: MyApp_iOS
Version: ??? (???)
Code Type: ARM (Native)
Parent Process: launchd [1]
Date/Time: 2011-03-24 13:04:18.479 -0700
OS Version: iPhone OS 3.1.2 (7D11)
Report Version: 104
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x00000000, 0x00000000
Crashed Thread: 0
Thread 0 Crashed:
0 dyld 0x2fe125b2 ImageLoaderMachOCompressed::findExportedSymbol(char const*, ImageLoader const**) const + 58
1 dyld 0x2fe0dcd6 ImageLoaderMachO::findExportedSymbol(char const*, bool, ImageLoader const**) const + 30
2 dyld 0x2fe0ee6e ImageLoaderMachOClassic::resolveUndefined(ImageLoader::LinkContext const&, macho_nlist const*, bool, bool, ImageLoader const**) + 434
3 dyld 0x2fe10250 ImageLoaderMachOClassic::doBindLazySymbol(unsigned long*, ImageLoader::LinkContext const&) + 212
4 dyld 0x2fe037ae dyld::bindLazySymbol(mach_header const*, unsigned long*) + 94
5 dyld 0x2fe0e29c stub_binding_helper_interface + 12
6 MyApp_iOS 0x0071a754 mono_handle_native_sigsegv (mini-exceptions.c:1762)
7 MyApp_iOS 0x0073d900 sigabrt_signal_handler (mini-posix.c:155)
8 libSystem.B.dylib 0x0008e81c _sigtramp + 28
9 libSystem.B.dylib 0x00033904 semaphore_wait_signal + 4
10 libSystem.B.dylib 0x00003ca8 pthread_mutex_lock + 440
11 MyApp_iOS 0x0088e76c GC_lock (pthread_support.c:1679)
12 MyApp_iOS 0x00884970 GC_malloc_atomic (malloc.c:259)
13 MyApp_iOS 0x007f26e4 mono_object_new_ptrfree_box (object.c:3996)
[... there are 10 active threads but I've only included the one that crashed]
Thread 0 crashed with ARM Thread State:
r0: 0x00000000 r1: 0x0097dc97 r2: 0x344d7c3c r3: 0x344dd2bd
r4: 0x344dd2bd r5: 0x00005681 r6: 0x0097dc97 r7: 0x2fffe6d8
r8: 0x344e7f34 r9: 0x00000001 r10: 0x0000007f r11: 0x0097dc97
ip: 0x344d8e4c sp: 0x2fffe658 lr: 0x2fe0dcdd pc: 0x2fe125b2
cpsr: 0x20000030
Another diagnosis option I've found is to:
Hook up AppDomain.CurrentDomain.UnhandledException
Put a try-catch around your entire "static void Main()" method
In both cases, write the exception to Console.WriteLine().
Then run your app, open Xcode, and open the console window for your device while it's plugged in. Then cause the crash. You should be able to see a decent C# stack trace of the exception.
This has helped me fix many issues that only happen when running in release on the device.
