Shared library bundled in the apk is not found at runtime - android-ndk

I do some native Android development which involves OpenSSL.
I cross-compile it for armeabi (32-bit) using the Android NDK standalone toolchains: I build the native C libraries, copy the OpenSSL/native-library .so files into my libs/ folder, and reference that folder from my Gradle build this way:
sourceSets {
    main {
        jniLibs.srcDir(file("libs/"))
    }
}
Anyway, the end result is that my .apk looks like this:
-> classes.dex
-> lib/
    -> armeabi/
        -> libcrypto.so
        -> libssl.so
        -> libmynativelibrary.so
-> res/ (...)
-> resources.arsc
-> META-INF/ (...)
-> kotlin/ (...)
-> AndroidManifest.xml
The shared libraries are correct 32-bit ARM ELF files. I've been using this exact APK on an API level 24 device with great success (Android 7.0+).
The issue: When I switch to an API level 21 device (Android 5.1 and below; I suspect I would have the same issue with Android 6.0), the program instantly crashes when loading libmynativelibrary.so.
Since libcrypto.so is a dependency of libmynativelibrary.so, the program attempts to load it. This actually works fine on API level 24+, but crashes on API level 23 and below. It's because the library that gets loaded is not the one in my .apk, but the one in the system, and such libraries seem not to be available below API level 24.
My question: How do I explicitly tell Android to look for the library in the .apk file first instead of the regular system libraries directories?
Thanks in advance.

Before Nougat, the system libraries were not protected from user apps. The resulting name collisions were problematic enough that they caused Google to invent a separate namespace for the C++ shared runtime library that ships with the Android NDK.
The OpenSSL libraries are also widely used by code beyond your control. They may get loaded into your process even before you have a chance to load your own libssl.
Therefore, the best choice would be to build OpenSSL as static libs and link libmynativelibrary.so against them statically. This way you have a monolithic binary that does not depend on the others.
If you cannot follow this course, you should build OpenSSL libraries with mangled names, e.g. libmyssl.so and libmycrypto.so. This may help to avoid the simple name clash with system libraries.
Even better, follow the example of the NDK and give your SSL API a unique namespace.
Don't expect that loading the libraries explicitly from their unpacked location at ApplicationInfo.nativeLibraryDir will be a robust solution: as I hinted before, the system libraries may already have been loaded into your address space first.
Note that before Lollipop you have to manually load all non-system dependencies, and in the proper order.
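For illustration, a minimal sketch of that manual loading, using the library names from the question (the wrapper class name is hypothetical):

public class NativeLoader {
    static {
        // Pre-Lollipop dynamic linkers do not resolve an .so's dependencies
        // from the APK, so load them explicitly, dependencies first.
        System.loadLibrary("crypto");          // libcrypto.so
        System.loadLibrary("ssl");             // libssl.so, needs libcrypto.so
        System.loadLibrary("mynativelibrary"); // needs both of the above
    }
}

Note that the ordering alone does not protect you from the name clash with the system libcrypto.so/libssl.so; that is what the static-linking or renaming advice above addresses.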
Also, the new NDK has dropped armeabi, so consider switching to armeabi-v7a.

Related

Difference between popular CMake build system and genproj tool for OpenCASCADE

While exploring the platform setup for OpenCASCADE, I came to know about WOK commands, which aren't needed when using the CMake build system with OpenCASCADE.
However, there is another option, the genproj tool (for which I haven't yet found any exe, only DLLs...), to be used with the MSVC built-in compiler so that we don't need any gcc installation.
What's the difference between the two, and which one is better and easier?
Also, please suggest how to download, install, and set up genproj on Windows.
The OCCT project has provided the following build systems:
CMake. This has been the main build system since OCCT 7.0.0.
It allows building OCCT for almost every supported target platform.
WOK. This was an in-house build system used by OCCT before the 7.0.0 release.
The tool handled classes defined in CDL (CAS.CADE Definition Language) files (WOK generated C++ header files from the CDL) and supported building in a distributed environment (e.g. a local WOK setup would build only modified source files and reuse unmodified binary / object files from the local network). WOK support has been discontinued since OCCT 7.5.0, and it is unlikely to be able to build up-to-date OCCT sources (although the project structure remains compatible with WOK).
genproj. This is a Tcl script that generates project files for building OCCT with Visual Studio (2010+), Code::Blocks, Xcode, and Qt Creator. The script was initially extracted from the WOK package (where it was implemented as the command wgenproj in its shell) and is now maintained independently of it.
qmake. The experimental adm/qmake solution can be opened directly from Qt Creator without the CMake plugin (the project files will be generated recursively by qmake). However, header file generation (filling the inc folder) still has to be done using genproj (qmake's scripting capabilities were found to be too limited for this task).
genproj doesn't require any DLLs or EXE files; it comes with OCCT itself and requires a Tcl interpreter. On Windows it can be executed with the genconf.bat and genproj.bat batch scripts in the root of the OCCT source code folder, as sketched below. At first launch it will ask for the path to tclsh.exe.
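For instance, assuming a hypothetical OCCT checkout at C:\occt, a first run could look like this:

rem Hypothetical location of the OCCT source code folder
cd C:\occt
rem Tcl/Tk GUI to choose the toolchain and 3rd-party paths
genconf.bat
rem Generate the IDE project files (asks for the path to tclsh.exe at first launch)
genproj.bat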
While CMake is the main build tool for the OCCT project, genproj remains maintained and used by (some) developers, mostly due to personal habit and hatred of CMake. The differences of genproj from CMake that could be considered advantages in some cases:
Generated project files can be moved to another location/computer without the need to re-generate them.
A simplified 3rd-party dependency search tool, genconf, with a GUI based on Tcl/Tk.
Batch-script environment/configuration files (env.bat and custom.bat), although the CMake script in OCCT emulates similar files.
The generated Visual Studio solution contains Debug+Release and 32-bit/64-bit configurations.
Running Draw Harness and regression tests can be started directly from Visual Studio (without building any INSTALL target).
No problems with CMakeCache.txt.
Limitations of genproj:
No CMake configuration files. Other CMake-based projects would not be able to reuse configuration files to simplify 3rd-party setup.
Regeneration of project files has to be invoked explicitly.
Out-of-source builds are not supported (however, each configuration is put into dedicated sub-folder).
No INSTALL target.
No PCH (pre-compiled header) generation.
It should be noted that several attempts have been made to keep compiler / linker flags consistent between CMake and genproj, but in reality they may differ.

Can I use mtouch to build a library?

I want to build a dll from all my package dependencies using mtouch. I have tried different options and failed.
Giving the root assembly as my dll plus all packages gives me MT0052: No command specified.
I don't think mtouch can do that. From the doc Using mtouch to Bundle Xamarin.iOS Apps, you can see:
The process of turning a .NET executable into an application is mostly driven by the mtouch command, a tool that integrates many of the steps required to turn the application into a bundle. This tool is also used to launch your application in the simulator and to deploy the software to an actual iPhone or iPod Touch device.
It just turns an existing .NET executable into an application; it cannot bind a library into an application for you.
You can also see the COMPILATION MODE doc of mtouch; there are four modes:
--abi=ABI
Comma-separated list of ABIs to target. Currently supported: armv6, armv6+llvm, armv7, armv7+llvm, armv7+llvm+thumb2, armv7s, armv7s+llvm, armv7s+llvm+thumb2. Fat binaries are automatically created if more than one ABI is targeted.
To use the LLVM optimizing compiler code generation backend instead of Mono's default code generation backend, target one of the llvm ABIs. Build times are considerably longer for native code, but the generated code is shorter and performs better.
You may also instruct the LLVM code generator to produce ARM Thumb instructions by targeting one of the llvm+thumb2 targets. Thumb instructions produce more compact executables.
--cxx
Enables C++ support. This is required if you are linking with some third party libraries that use the C++ runtime. With this flag, mtouch uses the C++ compiler to drive the compilation process instead of the C compiler.
-sim=DIRECTORY
This compiles the program and assemblies passed on the command line into the specified directory for use with the iOS simulator. This generates a standalone program that is entirely driven by the C# or ECMA CIL code.
-dev=DIRECTORY
This compiles the program and assemblies passed on the command line into the specified directory for use on an iPod Touch, iPhone or iPad device. The target directory can be used as the contents of a .app directory. This generates a standalone program that is entirely driven by the C# or ECMA CIL code.
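For illustration only, the -sim and -dev modes quoted above would be invoked roughly like this (the file names are hypothetical, and real invocations usually need additional options):

mtouch -sim=MyApp.app MyApp.exe   # compile MyApp.exe and its assemblies for the simulator
mtouch -dev=MyApp.app MyApp.exe   # compile the same program for a physical device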
So mtouch does not support building a library; it just compiles an existing executable that has already been linked against its libraries.
By the way, if you want to bind a third-party library, the official docs recommend using Binding iOS Libraries.

Is /nodefaultlib:msvcr100 the proper approach to handling msvcr100.dll vs msvcr100d.dll defaultlib issue

For a cross-platform software project that builds on Linux and Windows we have distinct ways of handling third-party libraries. On Linux we build and link against the versions distributed with the CentOS/RHEL distribution, which means we link against release builds, whereas on Windows we maintain our own third-party library "packages" and build two versions of every library: a release version that links msvcr100 and msvcp100, and a debug version that links msvcr100d and msvcp100d.
My question is simply whether it is necessary to build the debug version of the third-party dependencies on Windows or can we simply use /nodefaultlib:msvcr100 when building debug builds of our own software.
A follow-up question: where can I learn about good practices in this regard? I've read the MSDN pages about the MSVC runtime, but there is very little there in terms of recommendations.
EDIT:
Let me rephrase the question more concisely: with VS2010, what is the problem with using /nodefaultlib:msvcr100 to link an executable built with /MDd against libraries that were compiled with /MD?
My motivation is to avoid having to build both release and debug versions of the third-party libraries that I use. Also, I want my debug build to run faster.
From the documentation for /MD, /MT, /LD (Use Run-Time Library):
/MD: Causes your application to use the multithread- and DLL-specific version of the run-time library. Defines _MT and _DLL and causes the compiler to place the library name MSVCRT.lib into the .obj file.
Applications compiled with this option are statically linked to MSVCRT.lib. This library provides a layer of code that allows the linker to resolve external references. The actual working code is contained in MSVCR100.DLL, which must be available at run time to applications linked with MSVCRT.lib.
/MDd: Defines _DEBUG, _MT, and _DLL and causes your application to use the debug multithread- and DLL-specific version of the run-time library. It also causes the compiler to place the library name MSVCRTD.lib into the .obj file.
So the documentation describes no difference in the generated code other than _DEBUG being defined.
You only use the Debug build of the CRT to debug your app. It contains lots of asserts to help you catch mistakes in your code. You never ship the debug build of your project, always the Release build. Nor can you, the license forbids shipping msvcr100d.dll. So building your project correctly automatically avoids the dependency on the debug version of the CRT.
The /nodefaultlib linker option was intended to allow linking your program with a custom CRT implementation. Quite rare but some programmers care a lot about building small programs and the standard CRT isn't exactly small.
Some programmers use /nodefaultlib as a hack around a link problem, induced when they link code that was built with Debug configuration settings against code built with Release configuration settings, or link code with incompatible CRT choices, /MD vs /MT. This can work, with no guarantee, but of course it only sweeps the real problem under the floor mat.
So no, it is not the proper choice; fixing the core problem should be your goal. Ensure that all your .obj and .lib files are built with the same compiler options and you won't have this problem. If that means you have to pester a library owner for a proper build, then pester first; hack around it only when you've decided that you don't want a dependency on that .lib anymore but don't yet have the time to find an alternative. The sketch below shows why mixing CRTs is risky.
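To illustrate the underlying hazard (all names here are hypothetical): each CRT maintains its own heap, so memory that crosses a /MD and /MDd boundary must be released by the CRT that allocated it.

// app.cpp -- built with /MDd (debug CRT, msvcr100d.dll)
#include <cstddef>

// Declarations from a hypothetical third-party library built with /MD
// (release CRT, msvcr100.dll); its allocations live on msvcr100's heap.
extern "C" char* tp_make_buffer(std::size_t n); // allocates inside the library
extern "C" void  tp_free_buffer(char* p);       // frees on the same heap

int main() {
    char* buf = tp_make_buffer(64);
    // free(buf);        // WRONG: the debug CRT would release memory owned
    //                    // by the release CRT's heap -- corruption or assert
    tp_free_buffer(buf); // correct: the CRT that allocated also frees
    return 0;
}

This is one reason the /nodefaultlib hack can appear to work right up until an allocation happens to cross the boundary.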

Accessing hardware with Android NDK

I need to extend the functionality of the android.hardware.Camera class and so I have written my own class and companion JNI library to meet my needs. If I place my JNI code and Android.mk file in the Android source tree and build the OS, my library builds and I can use it and the Java class in an application without any problems (on an evaluation module at least).
The problem is that I would prefer to build my JNI library with the NDK but I need several libraries that are not in the NDK (e.g. libandroid_runtime and libcamera_client).
Is it possible to use the NDK to access hardware such as the camera? If so, what is the proper way to get access to OS libraries?
You can access non-standard shared libraries from the NDK, but that is undocumented and not guaranteed to work across devices. Vendors like HTC, Samsung, and others can simply implement them differently.
The only proper way to use functionality that is not available in the NDK is to wrap it with Java classes/functions and then call them from native code, for example:
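Here is a minimal sketch of that pattern; the Java class com.example.CameraWrapper and its static open() method are hypothetical stand-ins for your own wrapper around android.hardware.Camera:

#include <jni.h>

// Native method registered on a (hypothetical) com.example.NativeBridge class.
// Instead of linking against private libraries such as libcamera_client,
// it calls back into a Java wrapper that uses android.hardware.Camera.
extern "C" JNIEXPORT void JNICALL
Java_com_example_NativeBridge_useCamera(JNIEnv* env, jobject /* thiz */) {
    jclass wrapper = env->FindClass("com/example/CameraWrapper");
    if (wrapper == nullptr) return; // class not found: Java exception pending

    jmethodID open = env->GetStaticMethodID(wrapper, "open", "()V");
    if (open == nullptr) return;    // method not found: Java exception pending

    env->CallStaticVoidMethod(wrapper, open); // the Java side drives the camera
}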

Mono Class as shared library?

A .NET class can be compiled into a shared library (.dll). Can a Mono class be compiled into a shared library (.so) on Linux? How?
.NET .dll files are not real, i.e. native, shared libraries. By default, Mono also produces and consumes .dll files, using the same assembly format as Microsoft .NET. Both runtimes generate native code from this intermediate format at runtime.
However, it is possible to perform Ahead-Of-Time (AOT) compilation and save the resulting .so file to disk (the Microsoft .NET equivalent is ngen.exe, the native image generator and cache). When you invoke Mono with the --aot flag, it will save the native code in the form of a .so library and use it whenever the same file is loaded again. You probably also want to add the -O=all flag to enable all optimizations (some of them are disabled by default because they are costly to perform). For illustration:
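With a hypothetical assembly MyLib.dll, the invocation would look like this (Mono writes the native code next to the assembly, e.g. MyLib.dll.so on Linux):

# AOT-compile with all optimizations enabled; subsequent runs that load
# MyLib.dll will reuse the cached native image (MyLib.dll.so).
mono -O=all --aot MyLib.dll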
However, please bear in mind that the cached native library probably won't be usable for linking into native programs.
