Memory Leak Issue with spring-cloud-starter-hystrix and spring-cloud-starter-archaius integration

We are using spring-cloud-starter-hystrix together with spring-cloud-starter-archaius, and we are unable to stop the pollingConfigurationSource thread of Archaius once the WAR is undeployed. spring-cloud-starter-archaius works fine without Hystrix: the thread is stopped once the WAR is undeployed.

Try resetting Hystrix before the Spring application shuts down:
@EnableCircuitBreaker
@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @PreDestroy
    public void cleanUp() {
        Hystrix.reset();
    }
}

**Issue resolved permanently.**
**There are two approaches:**
1) Create a ServletContextListener and put the code below in its contextDestroyed() method (a listener skeleton is shown after the snippet).
2) If you are using Hystrix + Spring Boot + Archaius, put the code below in a method annotated with @PreDestroy in the main Spring application class.
**Solution:**
try {
    if (ConfigurationManager.getConfigInstance() instanceof DynamicConfiguration) {
        DynamicConfiguration config = (DynamicConfiguration) ConfigurationManager.getConfigInstance();
        config.stopLoading();
    } else if (ConfigurationManager.getConfigInstance() instanceof ConcurrentCompositeConfiguration) {
        ConcurrentCompositeConfiguration configInst =
                (ConcurrentCompositeConfiguration) ConfigurationManager.getConfigInstance();
        List<AbstractConfiguration> configs = configInst.getConfigurations();
        if (configs != null) {
            for (AbstractConfiguration config : configs) {
                if (config instanceof DynamicConfiguration) {
                    ((DynamicConfiguration) config).stopLoading();
                    break;
                }
            }
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
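For approach (1), the listener shell might look roughly like this (a minimal sketch; the class name is illustrative, and contextDestroyed() would contain the snippet above):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class ArchaiusCleanupListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // place the Archaius stopLoading() snippet from above here,
        // so the polling thread is stopped when the WAR is undeployed
    }
}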

Both Davin and Ashish Patel are right: there are multiple leaks caused by Spring Cloud.
The presence of threads named pollingConfigurationSource can be partially fixed by the solution proposed by Davin. You also need to make sure you do not have any file named config.properties on your classpath, because com.netflix.config.sources.URLConfigurationSource (look in the source for all the cases) searches common paths and starts an executor thread. There are multiple paths in the code that cause an ExecutorService on the pollingConfigurationSource thread to be started (and not always stopped). In my case, removing config.properties solved this leak.
The other leak I'm aware of is caused by Hystrix/RxJava. Instead of calling Hystrix.reset(), call rx.schedulers.Schedulers.shutdown(); this forces the RxIoScheduler- threads to exit.
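Putting the two fixes together, the @PreDestroy hook from the earlier answer might become something like this (a sketch; Schedulers.shutdown() is the RxJava 1.x call named above):

import javax.annotation.PreDestroy;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import com.netflix.hystrix.Hystrix;
import rx.schedulers.Schedulers;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @PreDestroy
    public void cleanUp() {
        Hystrix.reset();       // stops Hystrix internals
        Schedulers.shutdown(); // forces the RxIoScheduler- threads to exit (RxJava 1.x)
    }
}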

Related

Implementing a clustered coordinated timer (runs on one node only in a Payara Micro Cluster) using IScheduledExecutorService

I am trying to achieve the following behavior for clustered coordinated events:
a timer (event) is executed in only one thread/JVM in the Payara Micro cluster;
in case a node goes down, the timer (event) will be executed on another node in the cluster.
From the Payara Micro guide:
Persistent timers are NOT coordinated across a Payara Micro cluster.
They are always executed on an instance with the same name that
created the timers.
and
If that instance goes down, the timer will be recreated on another
instance with the same name once it joins the cluster. Until that
time, the timer becomes inactive.
It seems persistent timers will not work as desired in a Payara Micro cluster by definition.
As such, I am trying to use IScheduledExecutorService from Hazelcast, which seems to be a perfect match.
Basically, the implementation with IScheduledExecutorService works well except for the scenario when a new Payara Micro node is starting and joining the cluster (a cluster where some events have already been scheduled using IScheduledExecutorService). During this time, the following exceptions occur:
Exception 1: java.lang.RuntimeException: ConcurrentRuntime not initialized
[2021-02-15T23:00:31.870+0800] [] [INFO] [] [fish.payara.nucleus.cluster.PayaraCluster] [tid: _ThreadID=63 _ThreadName=hz.angry_yalow.event-5] [timeMillis: 1613401231870] [levelValue: 800] [[
Data Grid Status
Payara Data Grid State: DG Version: 4 DG Name: testClusterDev DG Size: 2
Instances: {
DataGrid: testClusterDev Name: testNode0 Lite: false This: true UUID: 493b19ed-a58d-4508-b9ef-f5c58e05b859 Address: /10.41.0.7:6900
DataGrid: testClusterDev Lite: false This: false UUID: f12342bf-a37e-452a-8c67-1d36dd4dbac7 Address: /10.41.0.7:6901
}]]
[2021-02-15T23:00:32.290+0800] [] [WARNING] [] [com.hazelcast.internal.partition.operation.MigrationRequestOperation] [tid: _ThreadID=160 _ThreadName=ForkJoinPool.commonPool-worker-6] [timeMillis: 1613401232290] [levelValue: 900] [[
[10.41.0.7]:6900 [testClusterDev] [4.1] Failure while executing MigrationInfo{uuid=fc68e9ac-1081-4f9b-a70a-6fb0aae19016, partitionId=27, source=[10.41.0.7]:6900 - 493b19ed-a58d-4508-b9ef-f5c58e05b859, sourceCurrentReplicaIndex=0, sourceNewReplicaIndex=1, destination=[10.41.0.7]:6901 - f12342bf-a37e-452a-8c67-1d36dd4dbac7, destinationCurrentReplicaIndex=-1, destinationNewReplicaIndex=0, master=[10.41.0.7]:6900, initialPartitionVersion=1, partitionVersionIncrement=2, status=ACTIVE}
com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.RuntimeException: ConcurrentRuntime not initialized
at com.hazelcast.internal.serialization.impl.SerializationUtil.handleException(SerializationUtil.java:103)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.readObject(AbstractSerializationService.java:292)
at com.hazelcast.internal.serialization.impl.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:567)
at com.hazelcast.scheduledexecutor.impl.ScheduledRunnableAdapter.readData(ScheduledRunnableAdapter.java:106)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.readInternal(DataSerializableSerializer.java:160)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:106)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:51)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.readObject(AbstractSerializationService.java:286)
at com.hazelcast.internal.serialization.impl.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:567)
at com.hazelcast.scheduledexecutor.impl.TaskDefinition.readData(TaskDefinition.java:144)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.readInternal(DataSerializableSerializer.java:160)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:106)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:51)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.readObject(AbstractSerializationService.java:286)
at com.hazelcast.internal.serialization.impl.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:567)
at com.hazelcast.scheduledexecutor.impl.ScheduledTaskDescriptor.readData(ScheduledTaskDescriptor.java:208)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.readInternal(DataSerializableSerializer.java:160)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:106)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:51)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.readObject(AbstractSerializationService.java:286)
at com.hazelcast.internal.serialization.impl.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:567)
at com.hazelcast.scheduledexecutor.impl.operations.ReplicationOperation.readInternal(ReplicationOperation.java:87)
at com.hazelcast.spi.impl.operationservice.Operation.readData(Operation.java:750)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.readInternal(DataSerializableSerializer.java:160)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:106)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:51)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.readObject(AbstractSerializationService.java:286)
at com.hazelcast.internal.serialization.impl.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:567)
at com.hazelcast.internal.partition.ReplicaFragmentMigrationState.readData(ReplicaFragmentMigrationState.java:97)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.readInternal(DataSerializableSerializer.java:160)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:106)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:51)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.readObject(AbstractSerializationService.java:286)
at com.hazelcast.internal.serialization.impl.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:567)
at com.hazelcast.internal.partition.operation.MigrationOperation.readInternal(MigrationOperation.java:249)
at com.hazelcast.spi.impl.operationservice.Operation.readData(Operation.java:750)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.readInternal(DataSerializableSerializer.java:160)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:106)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:51)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toObject(AbstractSerializationService.java:205)
at com.hazelcast.spi.impl.NodeEngineImpl.toObject(NodeEngineImpl.java:346)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:437)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:166)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:136)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
Caused by: java.lang.RuntimeException: ConcurrentRuntime not initialized
at org.glassfish.concurrent.runtime.ConcurrentRuntime.getRuntime(ConcurrentRuntime.java:121)
at org.glassfish.concurrent.runtime.InvocationContext.readObject(InvocationContext.java:214)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1184)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2296)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2405)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2329)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2405)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2329)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
at com.hazelcast.internal.serialization.impl.defaultserializers.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:83)
at com.hazelcast.internal.serialization.impl.defaultserializers.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:76)
at fish.payara.nucleus.hazelcast.PayaraHazelcastSerializer.read(PayaraHazelcastSerializer.java:84)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.readObject(AbstractSerializationService.java:286)
... 50 more
]]
[2021-02-15T23:00:32.304+0800] [] [WARNING] [] [com.hazelcast.internal.partition.impl.MigrationManager] [tid: _ThreadID=160 _ThreadName=ForkJoinPool.commonPool-worker-6] [timeMillis: 1613401232304] [levelValue: 900] [10.41.0.7]:6900 [testClusterDev] [4.1] Migration failed: MigrationInfo{uuid=fc68e9ac-1081-4f9b-a70a-6fb0aae19016, partitionId=27, source=[10.41.0.7]:6900 - 493b19ed-a58d-4508-b9ef-f5c58e05b859, sourceCurrentReplicaIndex=0, sourceNewReplicaIndex=1, destination=[10.41.0.7]:6901 - f12342bf-a37e-452a-8c67-1d36dd4dbac7, destinationCurrentReplicaIndex=-1, destinationNewReplicaIndex=0, master=[10.41.0.7]:6900, initialPartitionVersion=1, partitionVersionIncrement=2, status=ACTIVE}
This seems to happen because the new node is not fully initialized (as it is just starting). This exception looks less critical compared with the next one.
Exception 2: java.lang.NullPointerException: Failed to execute java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
[2021-02-15T23:44:19.544+0800] [] [SEVERE] [] [com.hazelcast.spi.impl.executionservice.ExecutionService] [tid: _ThreadID=35 _ThreadName=hz.elated_murdock.scheduled.thread-] [timeMillis: 1613403859544] [levelValue: 1000] [[
[10.4.0.7]:6901 [testClusterDev] [4.1] Failed to execute java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask#55a27ce3
java.lang.NullPointerException
at org.glassfish.concurrent.runtime.ContextSetupProviderImpl.isApplicationEnabled(ContextSetupProviderImpl.java:326)
at org.glassfish.concurrent.runtime.ContextSetupProviderImpl.setup(ContextSetupProviderImpl.java:194)
at org.glassfish.enterprise.concurrent.internal.ContextProxyInvocationHandler.invoke(ContextProxyInvocationHandler.java:94)
at com.sun.proxy.$Proxy154.run(Unknown Source)
at com.hazelcast.scheduledexecutor.impl.ScheduledRunnableAdapter.call(ScheduledRunnableAdapter.java:56)
at com.hazelcast.scheduledexecutor.impl.TaskRunner.call(TaskRunner.java:78)
at com.hazelcast.scheduledexecutor.impl.TaskRunner.run(TaskRunner.java:104)
at com.hazelcast.spi.impl.executionservice.impl.DelegateAndSkipOnConcurrentExecutionDecorator$DelegateDecorator.run(DelegateAndSkipOnConcurrentExecutionDecorator.java:77)
at com.hazelcast.internal.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:217)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
]]
This exception happens on the new node joining the cluster. It does not always happen; probably Hazelcast tries to execute the event on the new node while it is still starting, and it fails because the environment is not fully initialized yet. The issue is that after two such failed attempts, the event gets unloaded by Hazelcast.
Implementation insights:
The method which schedules the event using IScheduledExecutorService (it resides in an application-scoped bean in the main app WAR):
@Resource
ContextService _ctxService;

public void scheduleClusteredEvent() {
    IScheduledExecutorService executorService = _instance.getScheduledExecutorService("default");
    ClusteredEvent ce = new ClusteredEvent(new DiagEvent(null, "TestEvent1"));
    Object ceProxy = _ctxService.createContextualProxy(ce, Runnable.class, Serializable.class);
    executorService.scheduleAtFixedRate((Runnable) ceProxy, 0, 3, TimeUnit.SECONDS);
}
The ClusteredEvent class (it resides in a separate JAR added to the classpath via the --addLibs param of Payara Micro). It needs to somehow inform the main app about the event to be triggered, thus BeanManager.fireEvent() is used.
public class ClusteredEvent implements Runnable, Serializable {

    private final DiagEvent _event;

    public ClusteredEvent(DiagEvent event) {
        _event = event;
    }

    @Override
    public void run() {
        // For the sake of brevity, all null checks etc. were removed
        try {
            InitialContext ic = new InitialContext();
            ((BeanManager) ic.lookup("java:comp/BeanManager")).fireEvent(_event);
        } catch (NamingException e) {
            throw new RuntimeException(e);
        }
    }
}
So my questions:
How do I solve the exceptions/issues mentioned above?
Am I going in the right direction to achieve coordinated clustered event behaviour in a Payara Micro cluster? I would expect this to be a trivial task that works out of the box, but instead it requires some custom implementation, as persistent timers do not work as desired. Is there any other, more elegant way available with Payara Micro Cluster (>= v5.2021.1) to achieve coordinated clustered event behaviour?
Thank you so much in advance!
Update 1:
Just to recall: the main purpose of this exercise is to have coordinated timer (event) functionality available in the Payara Micro cluster, so suggestions on more elegant solutions are highly welcome.
Addressing questions/suggestions from the comments:
Q1:
why do you need to create a contextual proxy for the event object?
A1: Indeed, making the contextual proxy out of the plain ClusteredEvent() object adds the main complexity here and causes the exceptions listed above (meaning: scheduling ClusteredEvent() without making a contextual proxy out of it works fine and doesn't cause exceptions, but there is a caveat).
The reason a contextual proxy is used is that I need to somehow trigger the main app running on Payara Micro from the unmanaged thread launched by IScheduledExecutorService. So far I haven't found any other workable way of triggering a CDI/EJB bean in the main app from an unmanaged thread. Only making it contextual allows ClusteredEvent.run() to communicate with the main app, via BeanManager for example.
Any suggestions on how to establish communication between an unmanaged thread and CDI/EJB beans running in a separate app (both running on the same Payara Micro instance) are welcome.
Q2:
You can, for example, wrap the ceProxy in a Runnable that executes ceProxy.run() in a try/catch block
A2: I have tried it, and indeed it helps handle "Exception 2" mentioned above. I am posting the implementation of the ClusteredEventWrapper class below; the try/catch inside the run() method handles "Exception 2".
Q3:
The first exception comes from Hazelcast trying to deserialize the proxy on the new instance, which fails because the proxy needs an initialized environment to deserialize. To solve this, you would need to wrap the ceProxy object and customize the deserialization of the wrapper to wait until the ContextService is initialized.
A3: Adding a custom implementation for serialization/deserialization of ClusteredEventWrapper indeed allows handling "Exception 1", but I am still struggling to find the best way to handle it. Postponing deserialization via Thread.sleep() causes new (different) exceptions. Suppressing the exceptions I still need to check, but in that case I am afraid ClusteredEventWrapper will not be properly deserialized on the new (starting) node, as Hazelcast will consider the sync successful and will not try to sync again (I may be wrong; this I still need to check). Currently Hazelcast seems to retry the sync several times until "Exception 1" is gone.
Implementation of the ClusteredEventWrapper which wraps ClusteredEvent:
public class ClusteredEventWrapper implements Runnable, Serializable {

    private static final long serialVersionUID = 5878537035999797427L;
    private static final Logger LOG = Logger.getLogger(ClusteredEventWrapper.class.getName());

    private final Runnable _clusteredEvent;

    public ClusteredEventWrapper(Runnable clusteredEvent) {
        _clusteredEvent = clusteredEvent;
    }

    @Override
    public void run() {
        try {
            _clusteredEvent.run();
        } catch (Throwable e) {
            if (e instanceof NullPointerException
                    && e.getStackTrace() != null && e.getStackTrace().length > 0
                    && "org.glassfish.concurrent.runtime.ContextSetupProviderImpl".equals(e.getStackTrace()[0].getClassName())
                    && "isApplicationEnabled".equals(e.getStackTrace()[0].getMethodName())) {
                // Means we got "Exception 2" (posted above)
                LOG.log(Level.WARNING, "Skipping scheduled event execution on this node as this node is still being initialized...");
            } else {
                LOG.log(Level.SEVERE, "Error executing scheduled event", e);
            }
        }
    }

    private void writeObject(ObjectOutputStream out) throws IOException {
        LOG.log(Level.INFO, "1_WRITE_OBJECT...");
        out.defaultWriteObject();
    }

    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        LOG.log(Level.INFO, "2_READ_OBJECT...");
        int retry = 0;
        while (readObjectInner(in) != true && retry < 5) { // This doesn't work well; need to think of some other way of handling it
            retry++;
            LOG.log(Level.INFO, "2_READ_OBJECT: retry {0}", retry);
            try {
                // We need to wait
                Thread.sleep(15000);
            } catch (InterruptedException ex) {
            }
        }
    }

    private boolean readObjectInner(ObjectInputStream in) throws IOException, ClassNotFoundException {
        try {
            in.defaultReadObject();
            return true;
        } catch (Throwable e) {
            if (e instanceof RuntimeException && "ConcurrentRuntime not initialized".equals(e.getMessage())) {
                // This means the node which is trying to deserialize this object is not ready yet
                return false;
            } else {
                // For all other exceptions - we rethrow
                throw e;
            }
        }
    }
}
So now the event is scheduled in the following way:
@Resource
ContextService _ctxService;

public void scheduleClusteredEvent() {
    IScheduledExecutorService executorService = _instance.getScheduledExecutorService("default");
    ClusteredEvent ce = new ClusteredEvent(new DiagEvent(null, "PersistentEvent1"));
    Object ceProxy = _ctxService.createContextualProxy(ce, Runnable.class, Serializable.class);
    executorService.scheduleAtFixedRate(new ClusteredEventWrapper((Runnable) ceProxy), 0, 3, TimeUnit.SECONDS);
}
Below I am posting the implemented solution, based on suggestions from @OndroMih in the comments:
Excerpt 1:
...a better approach to this is to avoid wrapping your object into a contextual proxy and instead register the BeanManager into a global variable (singleton) at application startup. In ClusteredEvent.run() you would retrieve it from a static method, e.g. Registry.getBeanManager(). This method would have to wait until the application starts up and saves its BeanManager instance with Registry.setBeanManager()
And this one:
Excerpt 2:
Or maybe even better: store a reference to the ManagedExecutorService instead of the BeanManager, execute the run method with that executor, and just inject anything you need.
@OndroMih, please post them as a reply - I will mark it as the accepted answer!
Before going into the details of the implementation, a few words on our application packaging. It consists of:
the main WAR file, which is bundled into Payara Micro as an uber JAR, so we do not redeploy the application WAR; we start and stop the whole Payara Micro with the WAR deployed on it;
and a tiny JAR lib with a few classes used mainly by Hazelcast, provided via the --addLibs arg to the Payara Micro uber JAR to avoid ClassNotFoundExceptions when Hazelcast syncs objects in the data grid.
And now about the implementation, which has given us the desired behavior for the clustered timer/events (see the first post):
I) Using ManagedExecutorService as per the suggestion above indeed looks much more flexible, as it allows injecting any desired object into the clustered event, so I started with this approach. But for some reason I was not able to inject anything. Due to limited time I left this for future investigation and switched to the next approach. I also provide sample code for this case at the end of this post.
II) So I switched to the scenario with the BeanManager.
I implemented the Registry singleton as follows (all comments removed for the sake of brevity). This class resides in the tiny JAR added via the --addLibs arg to Payara Micro:
public final class Registry {

    private ManagedExecutorService _executorService;
    private BeanManager _beanManager;

    private Registry() {
    }

    public ManagedExecutorService getExecutorService() {
        return _executorService;
    }

    public void setExecutorService(ManagedExecutorService executorService) {
        _executorService = executorService;
    }

    public BeanManager getBeanManager() {
        return _beanManager;
    }

    public void setBeanManager(BeanManager beanManager) {
        _beanManager = beanManager;
    }

    public static Registry getInstance() {
        return InstanceHolder._instance;
    }

    private static class InstanceHolder {
        private static final Registry _instance = new Registry();
    }
}
In the main app WAR we already had an AppListener class which listens for the event fired when the app is deployed, so we added the Registry population logic to it:
public class AppListener implements SystemEventListener {
    ...

    @Resource
    private ManagedExecutorService _managedExecutorService;

    @Resource
    private BeanManager _beanManager;

    @Override
    public void processEvent(SystemEvent event) throws AbortProcessingException {
        try {
            if (event instanceof PostConstructApplicationEvent) {
                LOG.log(Level.INFO, ">> Application started");
                ...
                // Once the app is marked as started - populate the global objects in the Registry
                Registry.getInstance().setExecutorService(_managedExecutorService);
                Registry.getInstance().setBeanManager(_beanManager);
            }
            ...
        } catch (Exception e) {
            LOG.log(Level.SEVERE, ">> Error processing event: " + event, e);
        }
    }
}
The ClusteredEvent class, which is scheduled via IScheduledExecutorService.scheduleAtFixedRate(), also resides in the tiny JAR and has the following implementation:
public final class ClusteredEvent implements NamedTask, Runnable, Serializable {
    ...
    private final MultiTenantEvent _event;

    public ClusteredEvent(MultiTenantEvent event) {
        if (event == null) {
            throw new NullPointerException("Event can not be null");
        }
        _event = event;
    }

    @Override
    public void run() {
        try {
            if (Registry.getInstance().getBeanManager() == null) {
                LOG.log(Level.WARNING, "Skipping timer execution - application not initialized yet...");
                return;
            }
            Registry.getInstance().getBeanManager().fireEvent(_event);
        } catch (Throwable e) {
            LOG.log(Level.SEVERE, "Error executing timer: " + _event, e);
        }
    }

    @Override
    public final String getName() {
        return _event.getName();
    }
}
And basically that is all. Scheduling is done using the following simple steps:
@Resource(lookup = "payara/Hazelcast")
private HazelcastInstance _instance;

_instance.getScheduledExecutorService("default").scheduleAtFixedRate(new ClusteredEvent(event), initialDelaySec, invocationPeriodSec, TimeUnit.SECONDS);
All tests have gone well so far. I was worried about Registry.getBeanManager() getting 'spoiled' after some time due to some closed contexts somewhere (I am not sure about the nature of the BeanManager reference), but tests have shown that the reference to the BeanManager stays valid after one day, so hopefully it will work fine.
Another consideration (not even a concern, but a caveat to keep in mind) is that there is no way to control on which node an event is fired by IScheduledExecutorService; as such, when an event is triggered on a node which is not yet initialized (still starting), the event gets skipped. But for our usage scenario this is acceptable, so currently we can live with these considerations.
And getting back to the issue with the usage of ManagedExecutorService: ClusteredEvent was implemented as provided below:
public class ClusteredEvent implements Runnable, Serializable {

    private final MultiTenantEvent _event;

    public ClusteredEvent(MultiTenantEvent event) {
        _event = event;
    }

    @Override
    public void run() {
        try {
            LOG.log(Level.INFO, "TIMER THREAD NAME: {0}", Thread.currentThread().getName());
            if (Registry.getInstance().getExecutorService() == null) {
                LOG.log(Level.WARNING, "Skipping timer execution - application not initialized yet...");
                return;
            }
            Registry.getInstance().getExecutorService().submit(new Callable<Boolean>() {
                @Override
                public Boolean call() throws Exception {
                    LOG.log(Level.INFO, "Timer.Run() THREAD NAME: {0}", Thread.currentThread().getName());
                    String beanManagerJndiName = "java:comp/BeanManager";
                    try {
                        Context ic = new InitialContext();
                        BeanManager beanManager = (BeanManager) ic.lookup(beanManagerJndiName);
                        beanManager.fireEvent(_event);
                        return true;
                    } catch (NullPointerException | NamingException ex) {
                        LOG.log(Level.SEVERE, "ERROR: no BeanManager resource could be located by JNDI name: " + beanManagerJndiName, ex);
                        return false;
                    }
                }
            }).get();
        } catch (Throwable e) {
            LOG.log(Level.SEVERE, "Error executing timer: " + _event, e);
        }
    }
}
The output was the following:
[2021-02-24 07:56:07] [INFO] [ua.appName.model.event.ClusteredEvent run]
TIMER THREAD NAME: hz.competent_mccarthy.cached.thread-11
[2021-02-24 07:56:07] [INFO] [ua.appName.model.event.ClusteredEvent$1 call]
Timer.Run() THREAD NAME: concurrent/__defaultManagedExecutorService-managedThreadFactory-Thread-1
[2021-02-24 07:56:07] [SEVERE] [ua.appName.model.event.ClusteredEvent$1 call]
ERROR: no BeanManager resource could be located by JNDI name: java:comp/BeanManager
javax.naming.NamingException: Lookup failed for 'java:comp/BeanManager' in SerialContext[myEnv={java.naming.factory.initial=com.sun.enterprise.naming.impl.SerialInitContextFactory, java.naming.factory.url.pkgs=com.sun.enterprise.naming, java.naming.factory.state=com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl} [Root exception is javax.naming.NamingException: Invocation exception: Got null ComponentInvocation ]
at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:496)
at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:442)
at javax.naming.InitialContext.lookup(InitialContext.java:417)
at javax.naming.InitialContext.lookup(InitialContext.java:417)
at ua.appName.model.event.ClusteredEvent$1.call(ClusteredEvent.java:70)
at ua.appName.model.event.ClusteredEvent$1.call(ClusteredEvent.java:63)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.glassfish.enterprise.concurrent.internal.ManagedFutureTask.run(ManagedFutureTask.java:143)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250)
Caused by: javax.naming.NamingException: Invocation exception: Got null ComponentInvocation
at com.sun.enterprise.naming.impl.GlassfishNamingManagerImpl.getComponentId(GlassfishNamingManagerImpl.java:870)
at com.sun.enterprise.naming.impl.GlassfishNamingManagerImpl.lookup(GlassfishNamingManagerImpl.java:737)
at com.sun.enterprise.naming.impl.JavaURLContext.lookup(JavaURLContext.java:167)
at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:476)
... 11 more
So the line Timer.Run() THREAD NAME: concurrent/__defaultManagedExecutorService-managedThreadFactory-Thread-1 confirms that the code already runs inside a managed thread, but I was still not able to inject or look up anything. I have left this investigation for the future.
Once again, many thanks to @OndroMih for the suggestions on the implementation!
Thank you!

On a Servlet 3.0 web server, is it good to make all servlets and filters async?

I am confused by the async feature introduced in the Servlet 3.0 spec.
From the Oracle site (http://docs.oracle.com/javaee/7/tutorial/doc/servlets012.htm):
To create scalable web applications, you must ensure that no threads
associated with a request are sitting idle, so the container can use
them to process new requests.
There are two common scenarios in which a thread associated with a
request can be sitting idle.
1- The thread needs to wait for a resource to become available or process data before building the response. For example, an application
may need to query a database or access data from a remote web service
before generating the response.
2- The thread needs to wait for an event before generating the response. For example, an application may have to wait for a JMS
message, new information from another client, or new data available in
a queue before generating the response.
The first item happens a lot (nearly always: we always query a DB or call a remote web service to get some data), and calling an external resource will always consume some time.
Does it mean that we should ALWAYS use the servlet async feature for ALL our servlets and filters?!
To put it another way: if I write all my servlets and filters async, will I lose anything (performance)?!
If the above is correct, the skeleton of ALL our servlets will be:
public class Work implements ServletContextListener {

    private static final BlockingQueue<AsyncContext> queue = new LinkedBlockingQueue<>();
    private volatile Thread thread;

    @Override
    public void contextInitialized(ServletContextEvent servletContextEvent) {
        thread = new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    try {
                        // assumed to block until there is work, possibly throwing InterruptedException
                        ServiceFacade.doBusiness();
                        AsyncContext context;
                        while ((context = queue.poll()) != null) {
                            try {
                                ServletResponse response = context.getResponse();
                                PrintWriter out = response.getWriter();
                                out.printf("Business done");
                                out.flush();
                            } catch (Exception e) {
                                throw new RuntimeException(e.getMessage(), e);
                            } finally {
                                context.complete();
                            }
                        }
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        });
        thread.start();
    }

    public static void add(AsyncContext c) {
        queue.add(c);
    }

    @Override
    public void contextDestroyed(ServletContextEvent servletContextEvent) {
        thread.interrupt();
    }
}
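For context, the servlet side that feeds this queue would look roughly like this (a minimal sketch; the servlet name and URL pattern are illustrative). The servlet puts the request into async mode and hands the AsyncContext to the listener's queue instead of blocking its own thread:

import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/work", asyncSupported = true)
public class WorkServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Free the container thread immediately; the worker thread in Work completes the response later
        AsyncContext context = request.startAsync();
        Work.add(context);
    }
}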

Java: Running transactions in a multithreaded environment

We are launching a website that will have very heavy volume for a short period of time. It basically gives out tickets. The code is written in Java, Spring, and Hibernate. I want to mimic the high volume by spawning multiple threads and trying to get tickets using a JUnit test case. The problem is that in my DAO class the code just simply dies after I begin the transaction; I mean there is no error trace in the log file or anything like that. Let me give some idea of the way my code is structured.
DAO code:
@Repository("customerTicketDAO")
public class CustomerTicketDAO extends BaseDAOImpl { // BaseDAOImpl extends HibernateDaoSupport

    public void saveCustomerTicketUsingJDBC(String customerId) {
        try {
            getSession().getTransaction().begin(); // NOTHING HAPPENS AFTER THIS LINE OF CODE
            // A select query
            Query query1 = getSession().createSQLQuery("my query omitted on purpose");
            .
            .
            // An update query
            Query query2 = getSession().createSQLQuery("my query omitted on purpose");
            getSession().getTransaction().commit();
        } catch (Exception e) {
        }
    }
}
Runnable code:
public class InsertCustomerTicketRunnable implements Runnable {

    @Autowired
    private CustomerTicketDAO customerTicketDAO;

    private String customerId;

    public InsertCustomerTicketRunnable(String customerId) {
        this.customerId = customerId;
    }

    @Override
    public void run() {
        if (customerTicketDAO != null) {
            customerTicketDAO.saveCustomerTicketUsingJDBC(customerId);
        }
    }
}
JUnit method:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"file:src/test/resources/applicationContext-test.xml"})
public class DatabaseTest {

    private static final int NTHREDS = 10; // pool size (value illustrative)

    @Autowired
    private ApplicationContext applicationContext;

    private SessionFactory sessionFactory;
    private CustomerTicketDAO customerTicketDAO;

    @Before
    public void init() {
        sessionFactory = (SessionFactory) applicationContext.getBean("sessionFactory");
        Session session = SessionFactoryUtils.getSession(sessionFactory, true);
        TransactionSynchronizationManager.bindResource(sessionFactory, new SessionHolder(session));
        customerTicketDAO = (CustomerTicketDAO) applicationContext.getBean("customerTicketDAO");
    }

    @After
    public void end() throws Exception {
        SessionHolder sessionHolder = (SessionHolder) TransactionSynchronizationManager.unbindResource(sessionFactory);
        SessionFactoryUtils.closeSession(sessionHolder.getSession());
    }

    @Test
    public void saveCustomerTicketInMultipleThreads() throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(NTHREDS);
        for (int i = 0; i < 1000; i++) {
            executor.submit(new InsertCustomerTicketRunnable(String.valueOf(i)));
        }
        // This will make the executor accept no new threads
        // and finish all existing threads in the queue
        executor.shutdown();
        // Wait until all threads are finished
        executor.awaitTermination(1, TimeUnit.SECONDS);
    }
}
I see no data being inserted into the database. Can someone please point out where I am going wrong?
Thanks
Raj
SessionFactory is thread-safe but Session is not. So my guess is that you need to call SessionFactoryUtils.getSession() from within each thread, so that each thread gets its own instance. You are currently calling it from the main thread, so all the child threads try to share the same instance.
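A minimal sketch of what that could look like in InsertCustomerTicketRunnable.run(), assuming the Runnable also gets the SessionFactory handed in (the per-thread binding mirrors the test's init()/end() methods):

@Override
public void run() {
    // Bind a Session to this thread instead of sharing the main thread's instance
    Session session = SessionFactoryUtils.getSession(sessionFactory, true);
    TransactionSynchronizationManager.bindResource(sessionFactory, new SessionHolder(session));
    try {
        customerTicketDAO.saveCustomerTicketUsingJDBC(customerId);
    } finally {
        TransactionSynchronizationManager.unbindResource(sessionFactory);
        SessionFactoryUtils.closeSession(session);
    }
}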
Naughty, naughty!
public void saveCustomerTicketUsingJDBC(String customerId) {
    try {
        getSession().getTransaction().begin(); // NOTHING HAPPENS AFTER THIS LINE OF CODE
        .
        .
    } catch (Exception e) {
    }
}
You should never (well, hardly ever) have an empty catch block; if there is a problem, you will find that your code "just simply dies" with no log messages. Oh look, that's what's happening ;)
At the very minimum you should log the exception; that will go a long way towards helping you find what the problem is (and from there, the solution).
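For example, the DAO's catch block could become something like this (a sketch, assuming a log4j-style LOG field on the DAO):

public void saveCustomerTicketUsingJDBC(String customerId) {
    try {
        getSession().getTransaction().begin();
        // ... queries omitted on purpose ...
        getSession().getTransaction().commit();
    } catch (Exception e) {
        // Log instead of swallowing, so failures in worker threads become visible
        LOG.error("Failed to save customer ticket for customer " + customerId, e);
    }
}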

Ninject 2.0 InRequestScope() causing me problems - dependencies not being disposed

I'm using Ninject 2.0 with an MVC 2/EF 4 project in order to inject my repositories into my controllers. I've read that when doing something like that, one should bind using InRequestScope(). When I do that, I get a new repository per request, but the old repositories aren't being disposed. Since the old repositories remain in memory, I get conflicts with multiple ObjectContexts existing at the same time.
My concrete repositories implement IDisposable:
public class HGGameRepository : IGameRepository, IDisposable
{
    // ...

    public void Dispose()
    {
        if (this._siteDB != null)
        {
            this._siteDB.Dispose();
        }
    }
}
And my Ninject code:
public class NinjectControllerFactory : DefaultControllerFactory
{
    private IKernel kernel = new StandardKernel(new HandiGamerServices());

    protected override IController GetControllerInstance(System.Web.Routing.RequestContext requestContext, Type controllerType)
    {
        try
        {
            if (controllerType == null)
            {
                return base.GetControllerInstance(requestContext, controllerType);
                // return null;
            }
        }
        catch (HttpException ex)
        {
            if (ex.GetHttpCode() == 404)
            {
                IController errorController = kernel.Get<ErrorController>();
                ((ErrorController)errorController).InvokeHttp404(requestContext.HttpContext);
                return errorController;
            }
            else
            {
                throw ex;
            }
        }

        return (IController)kernel.Get(controllerType);
    }

    private class HandiGamerServices : NinjectModule
    {
        public override void Load()
        {
            Bind<HGEntities>().ToSelf().InRequestScope();
            Bind<IArticleRepository>().To<HGArticleRepository>().InRequestScope();
            Bind<IGameRepository>().To<HGGameRepository>().InRequestScope();
            Bind<INewsRepository>().To<HGNewsRepository>().InRequestScope();
            Bind<ErrorController>().ToSelf().InRequestScope();
        }
    }
}
What am I doing wrong?
I'm quite sure you are wrong in guessing that your objects are not disposed; disposal just does not happen at the moment you expect. The fact that it happens later should not give you any problems with ObjectContexts unless you are doing something else wrong. Under high load you will have a lot of ObjectContexts alive at the same time anyway.
What can become a problem, though, is that memory usage increases. That's why the request scope needs to be released actively. The Ninject MVC extensions will take care of that. Otherwise, have a look at the OnePerRequestModule to see how it is done:
https://github.com/ninject/Ninject.Web.Common/blob/master/src/Ninject.Web.Common/OnePerRequestHttpModule.cs

Logging component using log4j in a Java/Java EE application

I'm trying to build a component for application-level (Java/Java EE) logging using log4j, where I can create the component's JAR, put it on the classpath of any application, and use it. Below is the approach I have followed:
I override log methods like debug, trace, info, etc.
Single and multiple argument substitution, e.g. MessageFormatter.format("Hi {}. My name is {}.", "Alice", "Bob"); will return the string "Hi Alice. My name is Bob.".
For example, for a trace message:
public boolean isTraceEnabled() {
    return logger.isTraceEnabled();
}

public void trace(String msg, Throwable throwable, Object... args) {
    log(isTraceEnabled(), throwable, msg, args);
}

private void log(boolean isEnabled, Throwable throwable, String msg, Object... args) {
    if (throwable != null) {
        String message = MessageFormatter.getFormattedMessage(throwable); // Format the exception message
        msg = msg + message;
        throwable = null;
    }
    if (args == null || args.length == 0) {
        logger.log(FQCN, UtilConstant.Level.TRACE, msg, throwable);
    } else {
        if (isEnabled) {
            String formattedMsg = MessageFormatter.arrayFormat(msg, args); // single and multiple argument substitution
            logger.log(FQCN, UtilConstant.Level.TRACE, formattedMsg, throwable);
        }
    }
}
My aim is to build a component which can serve all Java EE applications. Is this approach sufficient, or do I need to do more? Please help.
It seems to me you're reinventing the wheel. Check out SLF4J; it can do the things you're aiming to implement, and it shields you from the underlying logging system, which you can change at any time (it works with log4j out of the box, too).
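For illustration, parameter substitution and level guards come for free with SLF4J (a minimal sketch; the class name is illustrative):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Example {

    private static final Logger LOG = LoggerFactory.getLogger(Example.class);

    public void demo() {
        // {} placeholders are substituted only if INFO is enabled - no manual isInfoEnabled() guard needed
        LOG.info("Hi {}. My name is {}.", "Alice", "Bob");
        // A Throwable as the last argument logs the stack trace
        LOG.error("Something failed for user {}", "Alice", new IllegalStateException("boom"));
    }
}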
