DoCallBack CrossAppDomainDelegate behavior for non-static delegates - c#-4.0

Please consider the following piece of code:
// Create a new application domain
AppDomain ad = AppDomain.CreateDomain("New domain");
Worker work = new Worker();
// if Worker class is marked as 'MarshalByRefObject', this will run in current
// appdomain.
// if Worker class is NOT marked as 'MarshalByRefObject' and is marked as
// 'Serializable', this will run in a new appdomain.
ad.DoCallBack(work.PrintDomain);
// or ad.DoCallBack(new CrossAppDomainDelegate(work.PrintDomain));
// But for static methods:
// If the ppp method is static, no marking is required and it will run in
// a new AppDomain.
ad.DoCallBack(Worker.ppp);
How do we explain this behavior of DoCallBack?
Why is the non-static method PrintDomain executed in the current domain when the Worker class is marked MarshalByRefObject?
Why is the non-static method PrintDomain executed in a new AppDomain when the Worker class is marked Serializable?
Why doesn't the static method need any markings?

Why is the non-static method PrintDomain executed in the current domain when the Worker class is marked MarshalByRefObject?
Because that is what MarshalByRefObject (MBRO) does: it creates a proxy for the object that you created in your primary appdomain. The proxy marshals the call from the secondary appdomain back to the appdomain that owns the object, the primary appdomain.
Why is the non-static method PrintDomain executed in a new AppDomain when the Worker class is marked Serializable?
Because that scenario does not use a proxy: the object itself is marshaled from the primary to the secondary appdomain, which is possible because you marked it [Serializable]. The call therefore executes in the secondary appdomain.
Why doesn't the static method need any markings?
It is unclear what you mean by "markings", but it isn't any different for a static method. Here is some code to play with; remove the comment around the base class to compare the two scenarios:
using System;

class Program {
    static void Main(string[] args) {
        var dom = AppDomain.CreateDomain("Test");
        var obj = new WorkerMbro();
        dom.DoCallBack(obj.PrintDomain);
        dom.DoCallBack(obj.PrintDomainStatic);
        Console.ReadLine();
    }
}

[Serializable]
class WorkerMbro /* : MarshalByRefObject */ {
    public void PrintDomain() {
        Console.WriteLine(AppDomain.CurrentDomain.FriendlyName);
    }
    public void PrintDomainStatic() {
        Console.WriteLine(AppDomain.CurrentDomain.FriendlyName);
    }
}
Output as posted:
Test
Test
Output with the comments removed so the proxy is used:
ConsoleApplication1.vshost.exe
ConsoleApplication1.vshost.exe

Related

delete Q_GLOBAL_STATIC in correct thread

I have an application consisting of an executable and several DLLs.
The exe links a dll statically. This dll creates a worker thread which instantiates a Q_GLOBAL_STATIC object (first access). So this object (derived from QObject) lives in this worker thread.
At program exit this object needs to be destroyed in the worker thread. But it is actually destroyed when the static dll gets unloaded, which is happening in the main thread.
How do I delete this global static object in the correct thread?
I tried manually deleting it when my worker thread finishes. But this results in a crash after my destructor returns.
You need to explicitly call the destructor on that object, like this: obj.~Test(); then check whether the object has been destroyed with obj.isDestroyed() if it is still not working.
Use QLibrary to load and unload the DLL (https://doc.qt.io/archives/qt-4.8/qlibrary.html). The Q_GLOBAL_STATIC documentation (https://doc.qt.io/qt-5/qglobalstatic.html#destruction) mentions that the destructor will be called when the library is unloaded or by the application's atexit() function.
I'd like to share my solution. To make it work, I introduced a simple private class which I use as the Q_GLOBAL_STATIC. A global object of this class holds the real singleton object and allows it to be deleted safely in the correct thread.
The header file:
class MySingletonPrivate;

class MySingleton : public QObject
{
    Q_OBJECT
public:
    static MySingleton* instance();
    static void deleteInstance();

protected:
    MySingleton( QObject *parent = nullptr );
    ~MySingleton();

    friend class MySingletonPrivate;
};
The cpp file:
class MySingletonPrivate
{
public:
    MySingletonPrivate()
        : inst( new MySingleton() )
    {
        QObject::connect( QThread::currentThread(), &QThread::finished, inst, &MySingleton::deleteLater );
    }

    ~MySingletonPrivate()
    {
        Q_ASSERT( inst.isNull() );
    }

    MySingleton* instance() const
    {
        return inst;
    }

private:
    QPointer< MySingleton > inst;
};

Q_GLOBAL_STATIC( MySingletonPrivate, s_instance );

MySingleton* MySingleton::instance()
{
    return s_instance->instance();
}

void MySingleton::deleteInstance()
{
    delete s_instance->instance();
}
Using the QThread::finished signal, the global instance is deleted at the end of the worker thread.
I left out the include directives and the constructor and destructor implementations to keep the answer shorter.
Of course, the first call to MySingleton::instance() must happen in the worker thread.

What happens if I instantiate an object within its own constructor?

//object class
public class test
{
    public test()
    {
        test t1 = new test();
    }
}

//client class in same folder
public class Client
{
    public static void main(String[] args)
    {
        test t = new test();
    }
}
Does this just make infinite empty test objects?
What happens to the computer's memory/RAM? Do things just get deleted as more objects are continuously created?
Your initial reasoning is correct: you get an "infinite" number of test objects. Each object creates a new one. The constructor for the first object created (by main) never finishes.
However, the Java Virtual Machine (JVM) has a limit on the stack size. Every time a constructor calls the next constructor, a new frame is pushed onto the stack. Within a short amount of time you'll get a StackOverflowError and your program will be terminated.
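To see this concretely, here is a small, self-contained sketch (the class names are mine, not the question's) that counts how many constructor frames fit on the stack before the JVM gives up:
public class RecursionDepthDemo {
    static int depth = 0;

    // Same idea as the question's test class: every constructor immediately
    // constructs another instance, so the recursion never ends on its own.
    static class Node {
        Node() {
            depth++;
            new Node();
        }
    }

    public static void main(String[] args) {
        try {
            new Node();
        } catch (StackOverflowError e) {
            // The JVM aborts the recursion once the thread's stack is exhausted.
            // After the stack unwinds, none of the partially constructed objects
            // are reachable any more, so the garbage collector reclaims them all.
            System.out.println("StackOverflowError after " + depth + " constructors");
        }
    }
}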

Guice and RequestScoped behaviour in multiple threads

I am using Guice's RequestScoped and Provider in order to get instances of some classes during a user request. This currently works fine. Now I want to do some work in a background thread, using the same instances that were created during the request.
However, when I call Provider.get(), Guice returns an error:
Error in custom provider, com.google.inject.OutOfScopeException: Cannot
access scoped object. Either we are not currently inside an HTTP Servlet
request, or you may have forgotten to apply
com.google.inject.servlet.GuiceFilter as a servlet
filter for this request.
AFAIK, this is because Guice uses thread-local variables to keep track of the current request's instances, so it is not possible to call Provider.get() from a thread other than the one handling the request.
How can I get the same instances inside new threads using Provider? Is it possible to achieve this by writing a custom scope?
I recently solved this exact problem. There are a few things you can do. First, read up on ServletScopes.continueRequest(), which wraps a callable so it will execute as if it is within the current request. However, that's not a complete solution because it won't forward @RequestScoped objects, only basic things like the HttpServletResponse. That's because @RequestScoped objects are not expected to be thread safe. You have some options:
If your entire @RequestScoped hierarchy is computable from just the HTTP response, you're done! You will get new instances of these objects in the other thread though.
You can use the code snippet below to explicitly forward all @RequestScoped objects, with the caveat that they will all be eagerly instantiated.
Some of my @RequestScoped objects couldn't handle being eagerly instantiated because they only work for certain requests. I extended the below solution with my own scope, @ThreadSafeRequestScoped, and only forwarded those ones.
Code sample:
public class RequestScopePropagator {

    private final Map<Key<?>, Provider<?>> requestScopedValues = new HashMap<>();

    @Inject
    RequestScopePropagator(Injector injector) {
        for (Map.Entry<Key<?>, Binding<?>> entry : injector.getAllBindings().entrySet()) {
            Key<?> key = entry.getKey();
            Binding<?> binding = entry.getValue();
            // This is like Scopes.isSingleton() but we don't have to follow linked bindings
            if (binding.acceptScopingVisitor(IS_REQUEST_SCOPED)) {
                requestScopedValues.put(key, binding.getProvider());
            }
        }
    }

    private final BindingScopingVisitor<Boolean> IS_REQUEST_SCOPED = new BindingScopingVisitor<Boolean>() {
        @Override
        public Boolean visitScopeAnnotation(Class<? extends Annotation> scopeAnnotation) {
            return scopeAnnotation == RequestScoped.class;
        }

        @Override
        public Boolean visitScope(Scope scope) {
            return scope == ServletScopes.REQUEST;
        }

        @Override
        public Boolean visitNoScoping() {
            return false;
        }

        @Override
        public Boolean visitEagerSingleton() {
            return false;
        }
    };

    public <T> Callable<T> continueRequest(Callable<T> callable) {
        Map<Key<?>, Object> seedMap = new HashMap<>();
        for (Map.Entry<Key<?>, Provider<?>> entry : requestScopedValues.entrySet()) {
            // This instantiates objects eagerly
            seedMap.put(entry.getKey(), entry.getValue().get());
        }
        return ServletScopes.continueRequest(callable, seedMap);
    }
}
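For completeness, here is a rough sketch (not from the original answer) of how the propagator might be used: wrap the work while still on the request thread, then hand the wrapped callable to an executor. The BackgroundLauncher class and the ExecutorService are assumptions for illustration only.
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import com.google.inject.Inject;

// Hypothetical caller, not part of the answer above.
public class BackgroundLauncher {
    private final RequestScopePropagator propagator;
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    @Inject
    BackgroundLauncher(RequestScopePropagator propagator) {
        this.propagator = propagator;
    }

    public Future<String> launch(Callable<String> work) {
        // continueRequest(...) must run on the request thread so the
        // request-scoped objects can be seeded (eagerly) into the new scope.
        Callable<String> scoped = propagator.continueRequest(work);
        return executor.submit(scoped);
    }
}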
I have faced the exact same problem but solved it in a different way. I use jOOQ in my projects and I have implemented transactions using a request scope object and an HTTP filter.
But then I created a background task which is spawned by the server in the middle of the night, and the injection does not work because there is no request scope.
Well, the solution is simple: create a request scope manually. Of course there is no HTTP request going on, but that's not the point (mostly). It is the concept of the request scope that matters. So I just need a request scope that exists alongside my background task.
Guice has an easy way to create a request scope: ServletScopes.scopeRequest.
public class MyBackgroundTask extends Thread {

    @Override
    public void run() {
        RequestScoper scope = ServletScopes.scopeRequest(Collections.emptyMap());
        try ( RequestScoper.CloseableScope ignored = scope.open() ) {
            doTask();
        }
    }

    private void doTask() {
    }
}
Oh, and you will probably need some injections. Be sure to use providers there; you want to delay instance creation until you are inside the created scope.
In Guice 4 it is better to use ServletScopes.transferRequest(Callable).
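A minimal sketch of that approach; the helper class, the ExecutorService, and the work passed in are placeholders of mine, not part of the comment above. Note that transferRequest must be called on the thread that is currently handling the request.
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import com.google.inject.servlet.ServletScopes;

// Hypothetical helper, for illustration only.
public class RequestTransfer {
    // Guice 4: transferRequest copies the current request scope so the callable
    // can run on another thread as if it were still inside the original request.
    public static <T> Future<T> runInBackground(ExecutorService executor, Callable<T> work) {
        return executor.submit(ServletScopes.transferRequest(work));
    }
}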

Code restricting: Create of more than one object in private constructor in C#

I want to write some C# code that uses a private constructor.
It should allow only one object to be created; when I try to create more than one, a message saying that no more objects can be created should be shown.
I don't want to use a static constructor in this code.
How do I do that in C#?
You may use a static counter or flag that is set in your private constructor to indicate that at least one instance has been created.
But first of all, I suggest you take a look at the Singleton pattern.
Simple, this is the classic singleton pattern:
public class MyClass
{
    public static MyClass Instance;

    private MyClass()
    {
        if (Instance != null)
            throw new InvalidOperationException("no more object can be created");

        // do other constructor logic

        Instance = this;
    }
}
Please note that synchronization has been left out; this code needs synchronization to be thread-safe.

Custom ClassLoader not garbage collected

In an attempt to solve this problem, I built a (very) small project that reproduces part of it. It is a NetBeans project using Glassfish v2.1.1 and OpenJPA 1.2.2.
Globally, the goal is to be able to reload dynamically some business code (called 'tasks') without the need to (re)make a full deployment (eg via asadmin). In the project there are two of them: PersonTask and AddressTask which are simply accessing some data and printing them out.
In order to do that, I've implemented a custom class loader that reads the bytes of the class files and injects them via the defineClass method. Basically, this CustomClassLoader is a singleton and is implemented like this:
public class CustomClassLoader extends ClassLoader {

    private static CustomClassLoader instance;
    private static int staticId = 0;

    private int id;        //for debugging in VisualVM
    private long threadId; //for debugging in VisualVM

    private CustomClassLoader(ClassLoader parent) {
        super(parent);
        threadId = Thread.currentThread().getId();
        id = staticId;
        ++staticId;
    }

    private static CustomClassLoader getNewInstance() {
        if (instance != null) {
            CustomClassLoader ccl = instance;
            instance = null;
            PCRegistry.deRegister(ccl);     //https://issues.apache.org/jira/browse/GERONIMO-3326
            ResourceBundle.clearCache(ccl); //found some references in there while using Eclipse Memory Analyzer Tool
            Introspector.flushCaches();     //http://java.jiderhamn.se/category/classloader-leaks/
            System.runFinalization();
            System.gc();
        }
        ClassLoader parent = Thread.currentThread().getContextClassLoader();
        instance = new CustomClassLoader(parent);
        return instance;
    }

    //...
}
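The loading logic itself is elided above ("//..."). For readers unfamiliar with defineClass, a minimal, hypothetical sketch of such a loader could look like the following; the "tasks" directory and the class name handling are assumptions of mine, not the question's code.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical stand-alone sketch: read a .class file from an assumed "tasks"
// directory and turn its bytes into a Class via defineClass, so the class
// belongs to this loader and can be unloaded together with it.
public class DiskTaskClassLoader extends ClassLoader {
    private final Path root = Paths.get("tasks"); // assumed directory

    public DiskTaskClassLoader(ClassLoader parent) {
        super(parent);
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        try {
            Path classFile = root.resolve(name.replace('.', '/') + ".class");
            byte[] bytes = Files.readAllBytes(classFile);
            return defineClass(name, bytes, 0, bytes.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}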
//this class is included in the EAR like a normal class
public abstract class AbstractTask {
    protected Database database; /* wrapper around the EntityManager, filled when the instance is created */
    public abstract void process(Integer id);
}

//this one is dynamically loaded by the CustomClassLoader
public class PersonTask extends AbstractTask {
    @Override
    public void process(Integer id) {
        //keep it empty for now
    }
}
In my EJB facade (EntryPointBean), I simply do a lookup of the class, create a new instance of it and call the process method on it. The code in the project is slightly different, but the idea is quite the same:
CustomClassLoader loader = CustomClassLoader.getNewInstance();
Class<?> clazz = loader.loadClass("ch.leak.tasks.PersonTask");
Object instance = clazz.newInstance();
AbstractTask task = (AbstractTask)instance;
/* inject a new Database instance into the task */
task.process(...);
So far, all is fine. If this code is run many times (via ch.leak.test.Test), there will be only a single instance of the CustomClassLoader when a heap analysis is done, meaning the previous instances have been successfully collected.
Now, here is the line triggering a leak:
public class PersonTask extends AbstractTask {
    @Override
    public void process(Integer id) {
        Person p = database.getEntity("SELECT p FROM Person p WHERE p.personpk.idpk=?1", new Long(id));
        //...
    }
}
This simple access to the database has a strange consequence: the first time the code is run, the CustomClassLoader being used will never be garbage collected (even without any GC roots). However, all the further CustomClassLoader created won't leak.
As can be seen in a heap dump taken with VisualVM, the CustomClassLoader with instance id 0 is never garbage collected...
Finally, one other thing I've seen when exploring the heap dump: my entities are declared twice in the PermGen and half of them have no instances and also no GC root (but they are not linked to the CustomClassLoader).
It seems that OpenJPA has something to do with these leaks, but I don't know where to look for more information about what I'm doing wrong. I have also put the heap dump directly in the zip with the project.
Does anyone have an idea?
Thanks!
