Caching JNI objects and thread-safety (in Android) - android-ndk

I'm writing a C++ app with native threads (pthreads) and I need to call some Java methods etc. I'm not sure which JNI objects can be safely cached, i.e. stored in my C++ object for use later, possibly/probably by a different thread. I do know that if my class's methods can be called by different threads I mustn't cache the JNIEnv, but should instead cache the JavaVM and get a JNIEnv by attaching the current thread. But does that also mean I can't cache anything obtained from a JNIEnv? I need to use the objects obtained by the following JNIEnv methods:
FindClass, GetMethodID, NewObject, NewGlobalRef
Do those stay valid across threads, or do I have to get new ones every time? If the latter, is there a way to create an object in one native thread and be able to access the same object in a different thread?

JNI functions like FindClass, GetMethodID and GetFieldID are expensive operations that are guaranteed to return the same result over the life of the JVM. Since these operations are time-consuming, it is wise to store the result somewhere on the native side to be reused later (this is caching).
JNI caching concerns only these JNI function calls; caching any other C++ or Java object is a different topic (just to be clear).
The cached classes, methods and fields do not depend on the thread they are retrieved from, so they are valid across different threads. At most you have to perform thread-safe operations when getting or setting an object's fields with Set<type>Field or Get<type>Field.
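For example, if two native threads read and write the fields of the same Java object, you can serialize access with that object's own monitor through the JNI MonitorEnter/MonitorExit functions. A minimal sketch (the function name is illustrative and error handling is reduced to the bare minimum):
// Sketch: increment Point.x while holding the object's monitor, so that other
// threads synchronizing on the same object cannot interleave with the update.
void incrementX( JNIEnv * env, jobject point, jfieldID x_Field )
{
    if( env->MonitorEnter( point ) != JNI_OK )
        return;

    jint x = env->GetIntField( point, x_Field );
    env->SetIntField( point, x_Field, x + 1 );

    env->MonitorExit( point );
}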
Since FindClass returns a local reference to the class object, you have to turn it into a global reference to guarantee its reuse after the function that retrieves it ends. You can achieve this by using NewGlobalRef:
jclass tmp_double_Class = env->FindClass( "java/lang/Double" ); // Check for exceptions!
double_Class = static_cast<jclass>( env->NewGlobalRef( tmp_double_Class ) );
if( double_Class == NULL )
    return;
env->DeleteLocalRef( tmp_double_Class );
Here is a complete example of the whole JNI caching topic:
MyJni.cpp:
// Just a shortcut for checking for exceptions
#define CHECK_JNI_EXCEPTION( JNIenv ) \
    if( JNIenv->ExceptionCheck() )    \
    {                                 \
        JNIenv->ExceptionClear();     \
        return JNI_FALSE;             \
    }

// Global variables
jclass point_Class;
jmethodID point_ctor_Method;
jfieldID point_x_Field;
jfieldID point_y_Field;

JNIEXPORT jboolean JNICALL Java_com_company_package_MyClass_nativeInit( JNIEnv * env,
                                                                        jclass clazz )
{
    // Cache the android.graphics.Point class, constructor and fields
    jclass tmp_point_Class = env->FindClass( "android/graphics/Point" );
    CHECK_JNI_EXCEPTION( env )
    point_Class = static_cast<jclass>( env->NewGlobalRef( tmp_point_Class ) );
    if( point_Class == NULL )
        return JNI_FALSE;
    env->DeleteLocalRef( tmp_point_Class );

    point_ctor_Method = env->GetMethodID( point_Class, "<init>", "(II)V" );
    CHECK_JNI_EXCEPTION( env )
    point_x_Field = env->GetFieldID( point_Class, "x", "I" );
    CHECK_JNI_EXCEPTION( env )
    point_y_Field = env->GetFieldID( point_Class, "y", "I" );
    CHECK_JNI_EXCEPTION( env )

    return JNI_TRUE;
}
MyJni.java:
package com.company.package;

class MyClass {
    // ... All java code here ...

    // Trigger JNI caching (could also be done using JNI_OnLoad...)
    private static native boolean nativeInit();

    static {
        System.loadLibrary( "mylib" );
        nativeInit(); // should check the result
    }
}
Have fun ;)

Objects are not thread-specific. They are initially "local" references, and if you want to keep a copy you have to tell the VM that you're doing so by creating (and, eventually, deleting) a "global" reference.
See http://developer.android.com/training/articles/perf-jni.html, especially the "Local and Global References" section.
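As a sketch of the overall pattern (the g_vm and workerThread names are illustrative and error handling is omitted): cache the JavaVM once in JNI_OnLoad, attach any natively created thread to obtain its JNIEnv, and promote every object you want to keep across calls or threads to a global reference. The point_Class / point_ctor_Method globals are the ones cached by nativeInit in the example above.
#include <jni.h>

extern jclass point_Class;          // cached by nativeInit above
extern jmethodID point_ctor_Method; // cached by nativeInit above

static JavaVM * g_vm = NULL;        // the JavaVM is safe to cache for the life of the process

JNIEXPORT jint JNICALL JNI_OnLoad( JavaVM * vm, void * /*reserved*/ )
{
    g_vm = vm;
    return JNI_VERSION_1_6;
}

// Entry point of a thread created with pthread_create (illustrative).
void * workerThread( void * /*arg*/ )
{
    JNIEnv * env = NULL;
    if( g_vm->AttachCurrentThread( &env, NULL ) != JNI_OK )
        return NULL;

    // The cached class and method ID are valid here even though they were
    // obtained on a different thread.
    jobject localPoint = env->NewObject( point_Class, point_ctor_Method, 10, 20 );

    // Promote to a global reference if another thread (or a later call) needs
    // the same object; remember to DeleteGlobalRef it when you are done.
    jobject sharedPoint = env->NewGlobalRef( localPoint );
    env->DeleteLocalRef( localPoint );

    // ... hand sharedPoint over to the rest of the native code ...

    g_vm->DetachCurrentThread();
    return NULL;
}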

Related

In Kotlin Native, how to keep an object around in a separate thread, and mutate its state from any other thread without using C pointers?

I'm exploring Kotlin Native and have a program with a bunch of Workers doing concurrent stuff
(running on Windows, but this is a general question).
Now, I wanted to add simple logging. A component that simply logs strings by appending them as new lines to a file that is kept open in 'append' mode.
(Ideally, I'd just have a "global" function...
fun log(text:String) {...}
...that I would be able to call from anywhere, including from "inside" other workers and that would just work. The implication here is that it's not trivial to do this because of Kotlin Native's rules regarding passing objects between threads (TLDR: you shouldn't pass mutable objects around. See: https://github.com/JetBrains/kotlin-native/blob/master/CONCURRENCY.md#object-transfer-and-freezing ).
Also, my log function would ideally accept any frozen object. )
What I've come up with are solutions using DetachedObjectGraph:
First, I create a detached logger object
val loggerGraph = DetachedObjectGraph { FileLogger("/foo/mylogfile.txt")}
and then use loggerGraph.asCPointer() to get a COpaquePointer to the detached graph:
val myPointer = loggerGraph.asCPointer()
Now I can pass this pointer into the workers ( via the producer lambda of the Worker's execute function ), and use it there. Or I can store the pointer in a @ThreadLocal global var.
For the code that writes to the file, whenever I want to log a line, I have to create a DetachedObjectGraph object from the pointer again,
and attach() it in order to get a reference to my fileLogger object:
val fileLogger = DetachedObjectGraph(myPointer).attach()
Now I can call a log function on the logger:
fileLogger.log("My log message")
This is what I've come up with looking at the APIs that are available (as of Kotlin 1.3.61) for concurrency in Kotlin Native,
but I'm left wondering what a better approach would be ( using Kotlin, not resorting to C ). Clearly it's bad to create a DetachedObjectGraph object for every line written.
One could pose this question in a more general way: How to keep a mutable resource open in a separate thread ( or worker ), and send messages to it.
Side comment: Having Coroutines that truly use threads would solve this problem, but the question is about how to solve this task with the APIs currently ( Kotlin 1.3.61 ) available.
You definitely shouldn't use DetachedObjectGraph in the way presented in the question. There's nothing to prevent you from trying to attach on multiple threads, or, if you pass the same pointer, trying to attach to an invalid one after another thread has attached to it.
As Dominic mentioned, you can keep the DetachedObjectGraph in an AtomicReference. If you do, make sure the type is AtomicReference<DetachedObjectGraph?> and busy-loop while the value is null; that will prevent the same DetachedObjectGraph from being used by multiple threads. Make sure to set it to null, and repopulate it, in an atomic way.
However, does FileLogger need to be mutable at all? If you're writing to a file, it doesn't seem so. Even if so, I'd isolate the mutable object to a separate worker and send log messages to it rather than doing a DetachedObjectGraph inside an AtomicRef.
In my experience, DetachedObjectGraph is super uncommon in production code. We don't use it anywhere at the moment.
To isolate mutable state to a Worker, something like this:
class MutableThing<T:Any>(private val worker:Worker = Worker.start(), producer:()->T){
    private val arStable = AtomicReference<StableRef<T>?>(null)

    init {
        worker.execute(TransferMode.SAFE, {Pair(arStable, producer).freeze()}){
            it.first.value = StableRef.create(it.second()).freeze()
        }
    }

    fun <R> access(block:(T)->R):R{
        return worker.execute(TransferMode.SAFE, {Pair(arStable, block).freeze()}){
            it.second(it.first.value!!.get())
        }.result
    }
}

object Log{
    private val fileLogger = MutableThing { FileLogger() }

    fun log(s:String){
        fileLogger.access { fl -> fl.log(s) }
    }
}

class FileLogger{
    fun log(s:String){}
}
The MutableThing uses StableRef internally. producer makes the mutable state you want to isolate. To log something, call Log.log, which will wind up calling the mutable FileLogger.
To see a basic example of MutableThing, run the following test:
@Test
fun goIso(){
    val mt = MutableThing { mutableListOf("a", "b")}
    val workers = Array(4){Worker.start()}
    val futures = mutableListOf<Future<*>>()

    repeat(1000) { rcount ->
        val future = workers[rcount % workers.size].execute(
            TransferMode.SAFE,
            { Pair(mt, rcount).freeze() }
        ) { pair ->
            pair.first.access {
                val element = "ttt ${pair.second}"
                println(element)
                it.add(element)
            }
        }
        futures.add(future)
    }

    futures.forEach { it.result }
    workers.forEach { it.requestTermination() }

    mt.access {
        println("size: ${it.size}")
    }
}
The approach you've taken is pretty much correct and the way it's supposed to be done.
The thing I would add: instead of passing a raw pointer around, you should pass around a frozen FileLogger that internally holds a reference to an AtomicReference<DetachedObjectGraph>; the attaching and detaching should be done internally, especially since DetachedObjectGraphs are invalid once attached.

Why can't a locally created struct be sent to another thread?

Why can't I send local instances of structs to another thread through Tid.send in D?
I would like to handle thread communication simply, like this:
void main()
{
    ...
    tid.send(Command.LOGIN, [ Variant("user"), Variant("hello1234") ] );
    ...
}

void thread()
{
    ...
    receive(
        (Command cmd, Variant[] args) { ... }
    );
    ...
}
If I understand it correctly, D should create the array of Variants on the stack and then copy the contents of the array into the send function, right? So there should not be any issues with synchronization and concurrency. I'm quite confused; this concurrency model is weird. I'm used to coding with threads in C# and C.
Also, I'm confused about the shared keyword and creating shared classes. Usually when I try to call a method of a shared class instance from a non-shared object, the compiler throws an error. Why?
You should idup the array and it will be able to go through; normal mutable arrays can't be sent as-is, because they have mutable indirection that is neither shared nor immutable.
As it is, the compiler can rewrite the sending as
Variant[] variants = [ Variant("user"), Variant("hello1234") ] ;
tid.send(Command.LOGIN, variants);
and Variant[] fails the hasUnsharedAliasing test.
You can fix this by making the array shared or immutable (and receiving the appropriate type on the other side).

what is the difference between std::call_once and function-level static initialization

1) std::call_once

A a;
std::once_flag once;

void f ( ) {
    call_once ( once, [ ] { a = A {....}; } );
}

2) function-level static

A a;

void f ( ) {
    static bool b = ( [ ] { a = A {....}; } ( ), true );
}
For your example usage, hmjd's answer fully explains that there is no difference (except for the additional global once_flag object needed in the call_once case). However, the call_once approach is more flexible, since the once_flag object isn't tied to a single scope. For example, it could be a class member and be used by more than one function:
class X {
    std::once_flag once;

    void doSomething() {
        std::call_once(once, []{ /* init ...*/ });
        // ...
    }

    void doSomethingElse() {
        std::call_once(once, []{ /* alternative init ...*/ });
        // ...
    }
};
Now depending on which member function is called first the initialization code can be different (but the object will still only be initialized once.)
So for simple cases a local static works nicely (if supported by your compiler) but there are some less common uses that might be easier to implement with call_once.
Both code snippets have the same behaviour, even in the presence of exceptions thrown during initialization.
This conclusion is based on (my interpretation of) the following quotes from the C++11 standard (draft N3337):
1. Section 6.7 (Declaration statement), clause 4, states:
The zero-initialization (8.5) of all block-scope variables with static storage duration (3.7.1) or thread storage duration (3.7.2) is performed before any other initialization takes place. Constant initialization (3.6.2) of a block-scope entity with static storage duration, if applicable, is performed before its block is first entered. An implementation is permitted to perform early initialization of other block-scope variables with static or thread storage duration under the same conditions that an implementation is permitted to statically initialize a variable with static or thread storage duration in namespace scope (3.6.2). Otherwise such a variable is initialized the first time control passes through its declaration; such a variable is considered initialized upon the completion of its initialization. If the initialization exits by throwing an exception, the initialization is not complete, so it will be tried again the next time control enters the declaration. If control enters the declaration concurrently while the variable is being initialized, the concurrent execution shall wait for completion of the initialization.88 If control re-enters the declaration recursively while the variable is being initialized, the behavior is undefined.
This means that in:
void f ( ) {
    static bool b = ( [ ] { a = A {....}; } ( ), true );
}
b is guaranteed to be initialized once only, meaning the lambda is executed (successfully) once only, meaning a = A {...}; is executed (successfully) once only.
2. Section 30.4.4.2 (Function call_once) states:
An execution of call_once that does not call its func is a passive execution. An execution of call_once that calls its func is an active execution. An active execution shall call INVOKE(DECAY_COPY(std::forward<Callable>(func)), DECAY_COPY(std::forward<Args>(args))...). If such a call to func throws an exception the execution is exceptional, otherwise it is returning. An exceptional execution shall propagate the exception to the caller of call_once. Among all executions of call_once for any given once_flag: at most one shall be a returning execution; if there is a returning execution, it shall be the last active execution; and there are passive executions only if there is a returning execution.
This means that in:
void f ( ) {
    call_once ( once, [ ] { a = A {....}; } );
}
the lambda argument to std::call_once is executed (successfully) once only, meaning a = A {...}; is executed (successfully) once only.
In both cases a = A{...}; is executed (successfully) once only.
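To make the shared behaviour concrete, here is a contrived sketch (not taken from the standard): an initializer that throws is considered not done, so both mechanisms retry it on the next call and stop once it finally succeeds.
#include <iostream>
#include <mutex>
#include <stdexcept>

std::once_flag flag;
int once_attempts = 0;
int static_attempts = 0;

void f_call_once()
{
    std::call_once( flag, []
    {
        if( ++once_attempts < 3 )          // fail the first two times
            throw std::runtime_error( "not ready" );
    } );
}

void f_local_static()
{
    static bool done = ( []
    {
        if( ++static_attempts < 3 )        // fail the first two times
            throw std::runtime_error( "not ready" );
    }(), true );
    (void)done;
}

int main()
{
    for( int i = 0; i < 4; ++i )
    {
        try { f_call_once();    } catch( const std::exception & ) {}
        try { f_local_static(); } catch( const std::exception & ) {}
    }
    // Each initializer body ran three times but completed exactly once.
    std::cout << once_attempts << " " << static_attempts << "\n";   // prints "3 3"
}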

Proper Object Disposal In C++/CLI

Consider the following class:
public ref class Workspace
{
protected:
    Form^ WorkspaceUI;
    SplitContainer^ WorkspaceSplitter;
    AvalonEditTextEditor^ TextEditor;
    ScriptOffsetViewer^ OffsetViewer;
    SimpleTextViewer^ PreprocessedTextViewer;
    ListView^ MessageList;
    ListView^ FindList;
    ListView^ BookmarkList;
    ListView^ VariableIndexList;
    TextBox^ VariableIndexEditBox;
    Label^ SpoilerText;
    ToolStrip^ WorkspaceMainToolBar;
    ToolStripButton^ ToolBarNewScript;
    ToolStripButton^ ToolBarOpenScript;
    ToolStripButton^ ToolBarPreviousScript;
    ToolStripButton^ ToolBarNextScript;
    ToolStripSplitButton^ ToolBarSaveScript;
    ToolStripDropDown^ ToolBarSaveScriptDropDown;
    ToolStripButton^ ToolBarSaveScriptNoCompile;
    ToolStripButton^ ToolBarSaveScriptAndPlugin;
    ToolStripButton^ ToolBarRecompileScripts;
    ToolStripButton^ ToolBarCompileDependencies;
    ToolStripButton^ ToolBarDeleteScript;
    ToolStripButton^ ToolBarNavigationBack;
    ToolStripButton^ ToolBarNavigationForward;
    ToolStripButton^ ToolBarSaveAll;
    ToolStripButton^ ToolBarOptions;
    ArbitraryCustomClass^ CustomClassInstance;

public:
    Workspace()
    {
        WorkspaceUI = gcnew Form();
        WorkspaceSplitter = gcnew SplitContainer();
        // ...
        WorkspaceUI->Controls->Add(WorkspaceSplitter);
        // ...
        WorkspaceUI->Show();
    }

    ~Workspace()
    {
        // dispose stuff here
    }
};
What would be the most efficient and elegant way to dispose an instance of the above class so that all of its memory is reclaimed by the GC during its next collection? Do I need to call delete explicitly on each member and/or reset them to nullptr?
NB. You may not need to do anything. Memory for objects is reclaimed by the GC when no references to them remain.
You only need to explicitly reclaim when an object implements IDisposable. In C++/CLI this maps to destructors.
So if none of the objects you're allocating need to be disposed, you can ignore the rest of this answer. But supposing they do...
Remove the ^ from each field and they will be reclaimed automatically.
It would also mean that they would be default-constructed automatically when the Workspace is constructed, which may save you a lot of gcnew stuff in your hand-written constructor.
That is, if you say:
Form WorkspaceUI;
Then you don't need to say:
WorkspaceUI = gcnew Form();
The compiler has already generated that for you - imagine it being inserted at the start of your constructor.
Nor do you need to dispose/delete anything.
Finally, you need to use . instead of -> to access members of the objects that you declare in this way:
WorkspaceUI.Controls->Add(%WorkspaceSplitter);
Update:
In C++/CLI, handles to ref classes are declared with ^, and this is analogous to the way pointers to native classes are declared with *.
Also correspondingly, there needs to be a way to get a handle to an object. To get a pointer to a native object, we prefix with &. To get a handle to a ref object, we prefix with %. For example:
ref class Fred { };
// function that accepts a handle
void ping(Fred ^h) { }
// Elsewhere... declare object of type Fred
Fred f;
// Get handle to pass to function
ping(%f);
If repeatedly creating and deleting objects of your class leads to out-of-memory there are two possibilities:
You are inadvertently holding references to it (or something it allocates). See my answer to this question: Memory Leaks in C# WPF (it doesn't have anything specific to do with C# or WPF really, it's just a matter of using the debugger interactively)
You need to call Dispose on one or more of the objects you allocate inside your class.
If it's the latter, in C++/CLI there is built-in support for calling Dispose automatically - C++/CLI treats a disposable object as if it was a C++ ref class with a destructor.
So if you delete a handle, you're calling Dispose on the object it points to.
Or if (as I suggest above) you simply have member objects, you don't even need to explicitly delete. When the outer containing class is destructed (i.e. something calls its Dispose method), it will automatically call Dispose on any member objects that require it.

inserting "this" into an STL map from the constructor

VERSION 1
class Doh {
private:
    static std::map<const std::string, const Doh*> someMap;
    std::string stringValue_;

public:
    Doh(std::string str) : stringValue_(str) {
        Doh::someMap.insert(
            std::make_pair<const std::string, const Doh*>
                (this->stringValue_, this)
        );
    }
};
The above was ok with MSVC 2010 but with MSVC 2008 it fails – and I guess it is because the object is not constructed yet when it is inserted in the map (I got a memory access violation).
So, I tried a delayed insertion, which worked:
VERSION 2
Doh(std::string str) : stringValue_(str) {
    boost::thread(&Doh::insertIntoTheStaticMap, this);
}

void insertIntoTheStaticMap() {
    boost::this_thread::sleep(boost::posix_time::milliseconds(1000));
    Doh::someMap.insert(
        std::make_pair<const std::string, const Doh*>
            (this->stringValue_, this)
    );
}
But as you might be able to guess, my intention is to have the static Doh::someMap as a common lookup dictionary.
VERSION 1 didn’t need any thread-safety because I would create all Doh instances in the same thread – in initialization blocks - which would be called by dynamic initializers before I enter main().
But with VERSION 2, the naïve sleep() is neither graceful nor reliable (not to mention, I might need to lock the map before insertion).
What would be a nice KISS approach?
The only potential issue I see is the initialization of the static member if there are multiple source files. Try guarding it with a function:
class Doh {
private:
    static std::map< std::string, Doh * > &get_map() {
        static std::map< std::string, Doh * > someMap;
        return someMap; // initialize upon first use
    }

    std::string stringValue_;

public:
    Doh(std::string str) : stringValue_(str) {
        get_map().insert(
            std::make_pair(this->stringValue_, this)
        );
    }
};
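If Doh instances might also be constructed from several threads (as in VERSION 2), the insertion itself needs locking too. A minimal sketch using std::mutex (C++11; with MSVC 2008 a boost::mutex would play the same role, and the get_map_mutex name is illustrative):
#include <map>
#include <mutex>
#include <string>
#include <utility>

class Doh {
private:
    // Both the map and its mutex are constructed on first use.
    static std::map< std::string, Doh * > &get_map() {
        static std::map< std::string, Doh * > someMap;
        return someMap;
    }
    static std::mutex &get_map_mutex() {
        static std::mutex m;
        return m;
    }

    std::string stringValue_;

public:
    explicit Doh(std::string str) : stringValue_(str) {
        std::lock_guard< std::mutex > lock( get_map_mutex() );
        get_map().insert( std::make_pair( stringValue_, this ) );
    }
};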
In neither version is there any sign of init for stringValue_ - what does the debugger show you about this key when you hit the map insert in version 1 of the code? How is this field set up, and what is its type?
Running this in the debugger for VS2008 should allow you to narrow down the point of failure into the <map> source, I would have thought.
