JNI code in Linux shared lib

I have a C++ library ported to Linux.
Now I'm adding JNI code so I can provide a Java wrapper.
The question is:
Will adding JNI to the same lib affect the existing C++ applications of the users? In other words, might the library stop working if Java is not installed, etc. (because it links against code in jni.h and other JNI headers)?

Adding JNI to your library won't affect it for existing users. JNI is a collection of interfaces and callbacks that make your library usable from a JVM; without JNI, your library can't be used from a JVM.
Your library will grow in size and export more symbols when you add JNI.
By adding JNI to your current library, it can still be used as a normal native library while also being loadable from a JVM.

You may wish to consider creating a JNI wrapper in C/C++ and statically linking it to your current library. That way your current library will still work for C/C++ apps, and your code will be easier to debug and maintain. If you choose this route, you may also want to look into enabling link-time optimization for the JNI wrapper. Alternatively, you could dynamically link the JNI wrapper and just put both libraries in /lib (or wherever is appropriate), but then you will have a runtime dependency on that base library.
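A minimal sketch of what such a separate JNI wrapper could look like, assuming a hypothetical exported function my_lib_process() in the existing C++ library (declared in an assumed header my_lib.h) and a Java class com.example.MyLib with a matching static native method:

    // jni_wrapper.cpp -- hypothetical JNI glue, built as its own shared library
    // and linked (statically or dynamically) against the existing C++ library.
    #include <jni.h>
    #include "my_lib.h"   // assumed header of the existing library

    extern "C" JNIEXPORT jint JNICALL
    Java_com_example_MyLib_process(JNIEnv* env, jclass, jstring input) {
        // Convert the Java string to a C string, call into the native library,
        // then release the temporary buffer.
        const char* cinput = env->GetStringUTFChars(input, nullptr);
        int result = my_lib_process(cinput);   // hypothetical library call
        env->ReleaseStringUTFChars(input, cinput);
        return static_cast<jint>(result);
    }

With this split, the wrapper is the only code that includes jni.h, so the base library itself carries no JNI-related symbols at all.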

Related

How does Java JNI interact with Kotlin/Native to find object values?

From the JetBrains site:
Kotlin/Native is a technology for compiling Kotlin to native binaries that run without any VM.
But how does Kotlin interact with JNI? To my knowledge, if a C/C++ program using JNI wants to access a Java field, it has to use the GetFieldID function, and the C program needs information from the JVM describing the object and its value.
How does Kotlin/Native resolve the value of fields? If Kotlin produces programs that don't depend on a VM, how can it get the value of Java fields?
What is Kotlin/Native?
Kotlin/Native does not do the same thing as JNI. The site describes Kotlin/Native like this:
Kotlin/Native is a technology for compiling Kotlin to native binaries that run without any VM. It comprises a LLVM-based backend for the Kotlin compiler and a native implementation of the Kotlin runtime library. Kotlin/Native is primarily designed to allow compilation for platforms where virtual machines are not desirable or possible (such as iOS, embedded targets), or where a developer needs to produce a reasonably-sized self-contained program that does not require an additional runtime.
[Source]
JNI lets Java code talk to native code, whereas Kotlin/Native allows you to compile Kotlin code into a native executable that does not require a JVM to run.
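For contrast, here is a minimal sketch of the JNI mechanism the question describes, assuming a hypothetical Java class com.example.Point with an int field x. The native side has to ask the running JVM for the field's ID and then read the value through the JNIEnv pointer, which is exactly the VM dependency that Kotlin/Native avoids:

    #include <jni.h>

    // Hypothetical native method: reads the "x" field of a com.example.Point object.
    extern "C" JNIEXPORT jint JNICALL
    Java_com_example_Point_nativeGetX(JNIEnv* env, jobject self) {
        // Look up the object's class and the field descriptor via the JVM.
        jclass cls = env->GetObjectClass(self);
        jfieldID fid = env->GetFieldID(cls, "x", "I");  // "I" = int type signature
        // Fetch the field's current value from the object.
        return env->GetIntField(self, fid);
    }

A Kotlin/Native binary has no JNIEnv to call; it works with its own native objects instead of asking a JVM for field values.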

Is /nodefaultlib:msvcr100 the proper approach to handling msvcr100.dll vs msvcr100d.dll defaultlib issue

For a cross-platform software project that builds on Linux and Windows, we have distinct ways of handling third-party libraries. On Linux we build and link against the versions distributed with the CentOS/RHEL distribution, which means we link against release builds. On Windows we maintain our own third-party library "packages" and build two versions of every library: a release version that links against msvcr100 and msvcp100, and a debug version that links against msvcr100d and msvcp100d.
My question is simply whether it is necessary to build the debug versions of the third-party dependencies on Windows, or whether we can simply use /nodefaultlib:msvcr100 when building debug builds of our own software.
A follow-up question: where can I learn about good practices in this regard? I've read the MSDN pages about the MSVC runtime, but there is very little there in terms of recommendations.
EDIT:
Let me rephrase the question more concisely. With VS2010, what is the problem with using /nodefaultlib:msvcr100 to link an executable built with /MDd against libraries that are compiled with /MD?
My motivation for this is to avoid having to build both release and debug versions of the third-party libraries that I use. Also, I want my debug build to run faster.
From the documentation for /MD, /MT, /LD (Use Run-Time Library):
/MD: Causes your application to use the multithread- and DLL-specific version of the run-time library. Defines _MT and _DLL and causes the compiler to place the library name MSVCRT.lib into the .obj file.
Applications compiled with this option are statically linked to MSVCRT.lib. This library provides a layer of code that allows the linker to resolve external references. The actual working code is contained in MSVCR100.DLL, which must be available at run time to applications linked with MSVCRT.lib.
/MDd: Defines _DEBUG, _MT, and _DLL and causes your application to use the debug multithread- and DLL-specific version of the run-time library. It also causes the compiler to place the library name MSVCRTD.lib into the .obj file.
So the documentation does not describe any difference in the generated code other than _DEBUG being defined.
You only use the Debug build of the CRT to debug your app. It contains lots of asserts to help you catch mistakes in your code. You never ship the debug build of your project, always the Release build. Nor can you, the license forbids shipping msvcr100d.dll. So building your project correctly automatically avoids the dependency on the debug version of the CRT.
The /nodefaultlib linker option was intended to allow linking your program with a custom CRT implementation. Quite rare but some programmers care a lot about building small programs and the standard CRT isn't exactly small.
Some programmers use /nodefaultlib as a hack around a link problem, induced when they link code that was built with Debug configuration settings against code built with Release configuration settings, or when they link code with incompatible CRT choices (/MD vs /MT). This can work, with no guarantee, but of course it only sweeps the real problem under the rug.
So no, it is not the proper choice; fixing the core problem should be your goal. Ensure that all your .obj and .lib files are built with the same compiler options and you won't have this problem. If that means you have to pester a library owner for a proper build, then pester first; hack around it only when you've decided you don't want a dependency on that .lib anymore but don't yet have the time to find an alternative.
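To make the underlying problem concrete, here is a self-contained sketch (the function name is hypothetical) of why mixing /MD and /MDd modules is risky: with two CRT DLLs loaded, each has its own heap, so memory allocated by one and freed by the other is undefined behavior, and debug and release STL objects do not even share the same layout:

    #include <cstddef>
    #include <cstdlib>
    #include <cstring>

    // Imagine this function lives in a third-party .lib built with /MD:
    // its allocations come from the release CRT's heap (msvcr100.dll).
    char* make_buffer(std::size_t n) {
        return static_cast<char*>(std::malloc(n));
    }

    int main() {
        // If our executable were built with /MDd, std::free below would go
        // through the debug CRT (msvcr100d.dll). The buffer would then be
        // allocated on one heap and freed on another -- undefined behavior.
        char* buf = make_buffer(64);
        std::memset(buf, 0, 64);
        std::free(buf);
        return 0;
    }

Built as a single module with one CRT this is harmless, which is exactly the point: the bug only appears once the allocation and the free end up in modules using different CRTs.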

Accessing hardware with Android NDK

I need to extend the functionality of the android.hardware.Camera class and so I have written my own class and companion JNI library to meet my needs. If I place my JNI code and Android.mk file in the Android source tree and build the OS, my library builds and I can use it and the Java class in an application without any problems (on an evaluation module at least).
The problem is that I would prefer to build my JNI library with the NDK but I need several libraries that are not in the NDK (e.g. libandroid_runtime and libcamera_client).
Is it possible to use the NDK to access hardware such as the camera? If so, what is the proper way to get access to OS libraries?
You can access non-standard shared libraries from the NDK, but that is undocumented and is not guaranteed to work across devices. Vendors like HTC, Samsung and others can simply implement them differently.
The only proper way to use functionality not available in the NDK is to wrap it with Java classes/methods and then call those from native code.
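A minimal sketch of that pattern, assuming a hypothetical Java wrapper class com.example.CameraWrapper with a static void takePicture() method that internally uses android.hardware.Camera; the native code calls up into the Java wrapper through JNI instead of linking against libcamera_client:

    #include <jni.h>

    // Hypothetical native helper: asks the Java wrapper to take a picture.
    // "env" must belong to a thread that is already attached to the JVM.
    bool requestPicture(JNIEnv* env) {
        jclass cls = env->FindClass("com/example/CameraWrapper");
        if (cls == nullptr) return false;          // class not found

        jmethodID mid = env->GetStaticMethodID(cls, "takePicture", "()V");
        if (mid == nullptr) return false;          // method not found

        env->CallStaticVoidMethod(cls, mid);       // delegate to the Java side
        return env->ExceptionCheck() == JNI_FALSE; // treat a Java exception as failure
    }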

Possible to link .dll with .lib?

Here's my dilemma: I'm attempting to create a .dll version of my project. This project uses the V8 and CURL libraries, which are currently built as debug .libs. I'd like to package all of them up in a single DLL that can be shared with others (I understand I need to alter my code with __declspec(dllexport), but that's a separate issue).
Do I need to compile the V8 and CURL libraries as DLLs then somehow wrap them up in my own DLL?
If you have a .lib with no .dll for the CURL libraries, then they are most certainly static libraries. When you link them to your DLL, the code from these libraries is linked into your DLL.
I've generally had to include the source for the dependencies (in your case both V8 and CURL) in my project and build that way to get them completely incorporated without extra headaches.
If you have .libs and you link against those, you should get them merged in, though.
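As a rough sketch of what the export side of such a combined DLL could look like (the macro and function names are hypothetical; the V8 and CURL code is assumed to be linked in as static .libs and therefore ends up inside the DLL image):

    // mylib_exports.h -- hypothetical export header for the combined DLL
    #ifdef MYLIB_BUILD_DLL
    #  define MYLIB_API __declspec(dllexport)   // defined while building the DLL itself
    #else
    #  define MYLIB_API __declspec(dllimport)   // used by consumers of the DLL
    #endif

    // A single exported entry point; internally it can use the statically
    // linked V8 and CURL code without exposing those libraries to callers.
    extern "C" MYLIB_API int mylib_run_script(const char* source);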

Mono Class as shared library?

A .NET class can be compiled into a shared library (.dll). Can a Mono class be compiled into a shared library (.so) on Linux? How?
.NET .dll files are not real (i.e. native) shared libraries. By default, Mono also produces and consumes .dll files, using the same assembly format as Microsoft .NET. Both runtimes generate native code from this intermediate format at run time.
However, it is possible to perform Ahead-Of-Time (AOT) compilation and save the resulting .so file to disk (the Microsoft .NET equivalent of this is ngen.exe native image generation and caching). When you invoke Mono with the --aot flag, it will save the native code in the form of a .so library and use it whenever the same file is loaded again. You probably also want to add the -O=all flag to enable all optimizations (some of them are disabled by default because they are costly to perform).
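A minimal sketch of that invocation, assuming a hypothetical assembly named MyLib.dll:

    mono --aot -O=all MyLib.dll    # typically writes MyLib.dll.so next to the assembly

The assembly itself is still needed at run time; the .so only caches the pre-compiled native code.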
However, please bear in mind that the cached native library probably won't be usable for linking into native programs.
