Transformer throws an exception when returning null. I'm getting the message payload and doing my business logic in a transformer, then sending the response to the file output channel. I've also tried the .handle method instead of a transformer, but I get a one-way message exception.
EDIT
@Bean
IntegrationFlow integrationFlow() {
    return IntegrationFlows.from(this.sftpMessageSource())
            .channel(fileInputChannel())
            .handle(service, "callMethod")
            .channel(fileOutputChannel())
            .handle(orderOutMessageHandler())
            .get();
}
EDIT 2
[ERROR] 2020-06-14 14:49:48.053 [task-scheduler-9] LoggingHandler - java.lang.AbstractMethodError: Method org/springframework/integration/sftp/session/SftpSession.getHostPort()Ljava/lang/String; is abstract
at org.springframework.integration.sftp.session.SftpSession.getHostPort(SftpSession.java)
at org.springframework.integration.file.remote.session.CachingSessionFactory$CachedSession.getHostPort(CachingSessionFactory.java:295)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.copyFileToLocalDirectory(AbstractInboundFileSynchronizer.java:496)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.copyIfNotNull(AbstractInboundFileSynchronizer.java:400)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.transferFilesFromRemoteToLocal(AbstractInboundFileSynchronizer.java:386)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.lambda$synchronizeToLocalDirectory$0(AbstractInboundFileSynchronizer.java:349)
at org.springframework.integration.file.remote.RemoteFileTemplate.execute(RemoteFileTemplate.java:437)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:348)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizingMessageSource.doReceive(AbstractInboundFileSynchronizingMessageSource.java:265)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizingMessageSource.doReceive(AbstractInboundFileSynchronizingMessageSource.java:66)
at org.springframework.integration.endpoint.AbstractFetchLimitingMessageSource.doReceive(AbstractFetchLimitingMessageSource.java:45)
at org.springframework.integration.endpoint.AbstractMessageSource.receive(AbstractMessageSource.java:167)
at org.springframework.integration.endpoint.SourcePollingChannelAdapter.receiveMessage(SourcePollingChannelAdapter.java:250)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:359)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.pollForMessage(AbstractPollingEndpoint.java:328)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.lambda$null$1(AbstractPollingEndpoint.java:275)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.lambda$execute$0(ErrorHandlingTaskExecutor.java:57)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:55)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.lambda$createPoller$2(AbstractPollingEndpoint.java:272)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:93)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
at java.util.concurrent.FutureTask.run(FutureTask.java)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
A transformer is designed to always return a reply, because it is a transformation operation, so you can't return null from your method. You probably get the one-way error because your handle method is void.
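To elaborate: if the business logic can legitimately produce no output, the usual options are to put a `.filter()` before the transformer, or to use `.handle()` with a service method that has a non-void return type and returns null to end the flow. The difference between the two reply contracts can be sketched in plain Java (illustrative names, not Spring Integration code):

```java
import java.util.function.Function;

public class NullReplyDemo {

    // Transformer-style step: every input MUST map to a reply.
    static String transform(Function<String, String> fn, String payload) {
        String reply = fn.apply(payload);
        if (reply == null) {
            throw new IllegalStateException("a transformer must return a reply");
        }
        return reply;
    }

    // Handler-style step with a non-void return type: a null reply
    // simply ends the flow instead of failing.
    static String handle(Function<String, String> fn, String payload) {
        return fn.apply(payload); // null means "no reply, stop here"
    }

    public static void main(String[] args) {
        Function<String, String> dropEmpty = p -> p.isEmpty() ? null : p.toUpperCase();

        System.out.println(handle(dropEmpty, "order-1")); // prints ORDER-1
        System.out.println(handle(dropEmpty, ""));        // prints null
        try {
            transform(dropEmpty, "");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```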
A JEE application I am looking at sometimes goes into the following Weld exception; I have no idea why it happens.
WELD-001304: More than one context active for scope type javax.enterprise.context.SessionScoped
at org.jboss.weld.manager.BeanManagerImpl.internalGetContext(BeanManagerImpl.java:678)
at org.jboss.weld.manager.BeanManagerImpl.getContext(BeanManagerImpl.java:645)
at org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.getIfExists(ContextualInstanceStrategy.java:89)
at org.jboss.weld.bean.ContextualInstanceStrategy$CachingContextualInstanceStrategy.getIfExists(ContextualInstanceStrategy.java:164)
at org.jboss.weld.bean.ContextualInstance.getIfExists(ContextualInstance.java:63)
at org.jboss.weld.bean.proxy.ContextBeanInstance.getInstance(ContextBeanInstance.java:87)
at org.jboss.weld.bean.proxy.ProxyMethodHandler.getInstance(ProxyMethodHandler.java:131)
at org.apache.deltaspike.core.impl.scope.window.WindowBeanHolder$Proxy$_$$_WeldClientProxy.getContextualStorage(Unknown Source)
at org.apache.deltaspike.core.impl.scope.window.WindowContextImpl.getContextualStorage(WindowContextImpl.java:119)
at org.apache.deltaspike.core.util.context.AbstractContext.get(AbstractContext.java:78)
at org.jboss.weld.contexts.PassivatingContextWrapper$AbstractPassivatingContextWrapper.get(PassivatingContextWrapper.java:70)
at org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.getIfExists(ContextualInstanceStrategy.java:89)
at org.jboss.weld.bean.ContextualInstance.getIfExists(ContextualInstance.java:63)
at org.jboss.weld.bean.proxy.ContextBeanInstance.getInstance(ContextBeanInstance.java:87)
at org.jboss.weld.bean.proxy.ProxyMethodHandler.getInstance(ProxyMethodHandler.java:131)
at org.apache.deltaspike.core.impl.scope.viewaccess.ViewAccessViewHistory$Proxy$_$$_WeldClientProxy.getLastView(Unknown Source)
at org.apache.deltaspike.core.impl.scope.viewaccess.ViewAccessContext.close(ViewAccessContext.java:131)
at org.apache.deltaspike.core.impl.scope.viewaccess.ViewAccessContext.onProcessingViewFinished(ViewAccessContext.java:119)
at org.apache.deltaspike.jsf.impl.listener.request.DeltaSpikeLifecycleWrapper.render(DeltaSpikeLifecycleWrapper.java:118)
at javax.faces.lifecycle.LifecycleWrapper.render(LifecycleWrapper.java:92)
at org.apache.deltaspike.jsf.impl.listener.request.JsfClientWindowAwareLifecycleWrapper.render(JsfClientWindowAwareLifecycleWrapper.java:160)
at javax.faces.webapp.FacesServlet.service(FacesServlet.java:659)
at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:74)
It is not clear why this is happening, and the side effects are not always the same.
Sometimes when this situation happens, you really need to restart the server, even if you try logging in from an incognito window in a different browser.
It is as if the Weld BeanManager has become completely toasted.
This error occurs most often when you put your computer to sleep and start interacting with the application again the next day.
But interestingly enough, it also happens quite often when I start triggering Selenium tests.
I have no idea why the Selenium testing would exacerbate the exception.
What I have seen now, by putting a breakpoint, is that Weld is trying to resolve some injection point annotations.
At some point, one of the injection points it wants to resolve is at:
at org.jboss.weld.bean.proxy.ContextBeanInstance.getInstance(ContextBeanInstance.java:87)
It is doing a
T existingInstance = ContextualInstance.getIfExists(bean, manager);
Where the Managed Bean parameter is:
bean = [class org.apache.deltaspike.core.impl.scope.window.WindowBeanHolder] with qualifiers [@Any @Default]
It is trying to resolve the injection point:
[[BackedAnnotatedField] @Inject private org.apache.deltaspike.core.impl.scope.window.WindowBeanHolder.windowContextQuotaHandler]
Into the bean:
Managed Bean [class org.apache.deltaspike.jsf.impl.scope.window.JsfWindowContextQuotaHandler] with qualifiers [@Any @Default]
This window bean holder (it could be any other bean, I believe; it is irrelevant) is annotated with the session scoped annotation:
@SessionScoped
public class WindowBeanHolder extends AbstractBeanHolder<String>
{
And the system then breaks in WildFly's BeanManagerImpl, when it enters the logic that returns the SessionScoped context, at line 678:
private Context internalGetContext(Class<? extends Annotation> scopeType) {
    Context activeContext = null;
    final List<Context> ctx = contexts.get(scopeType);
    if (ctx == null) {
        return null;
    }
    for (Context context : ctx) {
        if (context.isActive()) {
            if (activeContext == null) {
                activeContext = context;
            } else {
                throw BeanManagerLogger.LOG.duplicateActiveContexts(scopeType.getName());
            }
        }
    }
    return activeContext;
}
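The duplicate-detection logic above can be sketched in isolation like this (simplified stand-ins for Weld's Context objects, not the real Weld types):

```java
import java.util.List;

public class DuplicateContextDemo {

    // Simplified stand-in for a Weld Context registered for one scope.
    static final class Context {
        final String name;
        final boolean active;
        Context(String name, boolean active) {
            this.name = name;
            this.active = active;
        }
    }

    // Mirrors the shape of BeanManagerImpl.internalGetContext: exactly one
    // active context per scope is allowed; a second active one is an error.
    static Context resolveActive(List<Context> contexts) {
        Context activeContext = null;
        for (Context context : contexts) {
            if (context.active) {
                if (activeContext == null) {
                    activeContext = context;
                } else {
                    throw new IllegalStateException(
                            "WELD-001304: More than one context active");
                }
            }
        }
        return activeContext;
    }

    public static void main(String[] args) {
        // Healthy thread: only HttpSessionContextImpl is active.
        System.out.println(resolveActive(List.of(
                new Context("BoundSessionContextImpl", false),
                new Context("HttpSessionContextImpl", true),
                new Context("HttpSessionDestructionContext", false))).name);

        // Corrupted thread: the destruction context was never deactivated.
        try {
            resolveActive(List.of(
                    new Context("BoundSessionContextImpl", false),
                    new Context("HttpSessionContextImpl", true),
                    new Context("HttpSessionDestructionContext", true)));
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```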
This BeanManagerImpl code is not happy with the fact that two session scoped contexts are apparently active at the same time.
What I see with the debugger in this code fragment is that the contexts variable is holding the following three session scoped contexts:
[
    org.jboss.weld.contexts.bound.BoundSessionContextImpl@136b5782,
    org.jboss.weld.module.web.context.http.HttpSessionContextImpl@6ef334ee,
    org.jboss.weld.module.web.context.http.HttpSessionDestructionContext@37a5a86e
]
The first ACTIVE context found was:
activeContext = org.jboss.weld.module.web.context.http.HttpSessionContextImpl@6ef334ee
The second active context we have is:
context = org.jboss.weld.module.web.context.http.HttpSessionDestructionContext@37a5a86e
That means that, for whatever reason, in my system both the HttpSessionContextImpl and the HttpSessionDestructionContext are active.
I am clueless as to how these contexts are being toggled between ACTIVE and INACTIVE. This appears to be a thread-local flag, so for a given thread the session scope is corrupted, while other threads serving a different JSESSIONID cookie would start in an appropriate virgin state where only one session context is active.
Any ideas of what could be causing this?
Note, WildFly 13.0.Final uses:
<dependency>
    <groupId>org.jboss.weld</groupId>
    <artifactId>weld-core-impl</artifactId>
    <version>3.0.4.Final</version>
    <scope>provided</scope>
</dependency>
Many thanks
The issue is now understood.
This duplicate active session scoped context is caused by two factors.
First, an application-specific problem that only occurs in WildFly and not in WebLogic.
Second, a WildFly problem (this one is debatable and a matter of opinion, but I believe WildFly should be more robust here).
The first problem:
Whenever an HTTP session expires in WildFly / Undertow, a session reaper process kicks in to terminate the session. Every application server has its own way of doing this.
The following stack trace snippet depicts what is going on when a session is being destroyed in WildFly.
####2019-11-19 16:40:49,522 ThreadId:441 ERROR org.jboss.threads.errors - Thread Thread[default task-4,5,main] threw an uncaught exception: java.lang.RuntimeException: org.jboss.weld.contexts.ContextNotActiveException: WELD-001303: No active contexts for scope type javax.enterprise.context.RequestScoped
at io.undertow.servlet.core.SessionListenerBridge.sessionDestroyed(SessionListenerBridge.java:75)
at io.undertow.server.session.SessionListeners.sessionDestroyed(SessionListeners.java:61)
at io.undertow.server.session.InMemorySessionManager$SessionImpl.invalidate(InMemorySessionManager.java:586)
at io.undertow.server.session.InMemorySessionManager$SessionImpl$2$1.run(InMemorySessionManager.java:393)
at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1985)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1487)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1378)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.jboss.weld.contexts.ContextNotActiveException: WELD-001303: No active contexts for scope type javax.enterprise.context.RequestScoped
at org.jboss.weld.manager.BeanManagerImpl.getContext(BeanManagerImpl.java:647)
at org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.getIfExists(ContextualInstanceStrategy.java:89)
at org.jboss.weld.bean.ContextualInstanceStrategy$CachingContextualInstanceStrategy.getIfExists(ContextualInstanceStrategy.java:164)
at org.jboss.weld.bean.ContextualInstance.getIfExists(ContextualInstance.java:63)
at org.jboss.weld.bean.proxy.ContextBeanInstance.getInstance(ContextBeanInstance.java:87)
at org.jboss.weld.bean.proxy.ProxyMethodHandler.getInstance(ProxyMethodHandler.java:131)
at org.apache.deltaspike.core.impl.scope.window.WindowIdHolder$Proxy$_$$_WeldClientProxy.getWindowId(Unknown Source)
at org.apache.deltaspike.core.impl.scope.window.WindowContextImpl.getCurrentWindowId(WindowContextImpl.java:85)
at org.apache.deltaspike.core.impl.scope.window.InjectableWindowContext.getCurrentWindowId(InjectableWindowContext.java:54)
at com.XXXXX.YYYYY.framework.web.util.JsfUtilBean.getWindowId(JsfUtilBean.java:230)
at com.XXXXX.YYYYY.framework.web.util.JsfUtilBean$Proxy$_$$_WeldClientProxy.getWindowId(Unknown Source)
at com.XXXXX.YYYYY.framework.web.enterprisetouch.boxframework.push.SomeIrrlevantAppSpecificClass.logout(SomeIrrlevantAppSpecificClass.java:105)
at com.XXXXX.YYYYY.framework.web.enterprisetouch.boxframework.push.SomeIrrlevantAppSpecificClass$Proxy$_$$_WeldClientProxy.logout(Unknown Source)
at com.XXXXX.YYYYY.framework.web.client.YYYYYSessionController.sessionDestroyed(YYYYYSessionController.java:245)
at com.XXXXX.YYYYY.framework.web.client.YYYYYSessionController$Proxy$_$$_WeldClientProxy.sessionDestroyed(Unknown Source)
at com.XXXXX.YYYYY.framework.web.security.YYYYYSessionListener.sessionDestroyed(YYYYYSessionListener.java:80)
at io.undertow.servlet.core.ApplicationListeners.sessionDestroyed(ApplicationListeners.java:315)
at io.undertow.servlet.core.SessionListenerBridge.doDestroy(SessionListenerBridge.java:98)
at io.undertow.servlet.core.SessionListenerBridge.access$000(SessionListenerBridge.java:41)
at io.undertow.servlet.core.SessionListenerBridge$1.call(SessionListenerBridge.java:54)
at io.undertow.servlet.core.SessionListenerBridge$1.call(SessionListenerBridge.java:51)
at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:42)
at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
at org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1514)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1514)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1514)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1514)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1514)
at io.undertow.servlet.core.SessionListenerBridge.sessionDestroyed(SessionListenerBridge.java:73)
... 8 more
The above stack trace is important for two reasons.
First, it gives you an idea of how Undertow decides to initiate the killing of sessions that have timed out.
Second, it shows a very real possibility in any application: it can break while handling a session destruction event.
The code that blows up in the stack trace above works perfectly fine in WebLogic, because there we have an active RequestScope.
In WildFly the code breaks because there is no active request scope.
I have not yet fixed the code blowing up in the stack trace above, because I first wanted to find a hack to get myself out of the problem reported here, since this could very easily happen again in the future.
So for me the request scope exception is irrelevant; what is relevant are the side effects that come after.
It is also very important to understand that before the code that blows up above was called, something very important took place.
Namely, the activation of what will later become the SECOND / DUPLICATE session scoped context.
Please look at the stack trace that I will put below.
This stack trace takes place whenever Undertow decides to initiate the killing of sessions.
When this happens, WildFly comes in and decides to activate the very special session scoped context implementation that is used outside of HTTP requests.
-------------------
-- HTTP SESSION - DEACTIVATION:
---------------------
Thread [default task-102] (Suspended (breakpoint at line 41 in org.jboss.weld.contexts.AbstractManagedContext))
org.jboss.weld.module.web.context.http.HttpSessionDestructionContext(org.jboss.weld.contexts.AbstractManagedContext).setActive(boolean) line: 41
org.jboss.weld.module.web.context.http.HttpSessionDestructionContext(org.jboss.weld.contexts.AbstractManagedContext).activate() line: 49
org.jboss.weld.module.web.context.http.HttpSessionDestructionContext(org.jboss.weld.contexts.AbstractBoundContext<S>).activate() line: 66
org.jboss.weld.module.web.servlet.WeldTerminalListener.sessionDestroyed(javax.servlet.http.HttpSessionEvent) line: 95
io.undertow.servlet.core.ApplicationListeners.sessionDestroyed(javax.servlet.http.HttpSession) line: 315
io.undertow.servlet.core.SessionListenerBridge.doDestroy(io.undertow.server.session.Session) line: 98
io.undertow.servlet.core.SessionListenerBridge.access$000(io.undertow.servlet.core.SessionListenerBridge, io.undertow.server.session.Session) line: 41
io.undertow.servlet.core.SessionListenerBridge$1.call(io.undertow.server.HttpServerExchange, io.undertow.server.session.Session) line: 54
io.undertow.servlet.core.SessionListenerBridge$1.call(io.undertow.server.HttpServerExchange, java.lang.Object) line: 51
io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(io.undertow.server.HttpServerExchange, C) line: 42
io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(io.undertow.server.HttpServerExchange, C) line: 43
org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(io.undertow.servlet.api.ThreadSetupHandler$Action, io.undertow.server.HttpServerExchange, java.lang.Object) line: 105
org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction$$Lambda$764.660751084.call(io.undertow.server.HttpServerExchange, java.lang.Object) line: not available
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(io.undertow.servlet.api.ThreadSetupHandler$Action, io.undertow.server.HttpServerExchange, java.lang.Object) line: 1514
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction$$Lambda$765.1975131456.call(io.undertow.server.HttpServerExchange, java.lang.Object) line: not available
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(io.undertow.servlet.api.ThreadSetupHandler$Action, io.undertow.server.HttpServerExchange, java.lang.Object) line: 1514
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction$$Lambda$765.1975131456.call(io.undertow.server.HttpServerExchange, java.lang.Object) line: not available
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(io.undertow.servlet.api.ThreadSetupHandler$Action, io.undertow.server.HttpServerExchange, java.lang.Object) line: 1514
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction$$Lambda$765.1975131456.call(io.undertow.server.HttpServerExchange, java.lang.Object) line: not available
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(io.undertow.servlet.api.ThreadSetupHandler$Action, io.undertow.server.HttpServerExchange, java.lang.Object) line: 1514
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction$$Lambda$765.1975131456.call(io.undertow.server.HttpServerExchange, java.lang.Object) line: not available
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(io.undertow.servlet.api.ThreadSetupHandler$Action, io.undertow.server.HttpServerExchange, java.lang.Object) line: 1514
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction$$Lambda$765.1975131456.call(io.undertow.server.HttpServerExchange, java.lang.Object) line: not available
io.undertow.servlet.core.SessionListenerBridge.sessionDestroyed(io.undertow.server.session.Session, io.undertow.server.HttpServerExchange, io.undertow.server.session.SessionListener$SessionDestroyedReason) line: 73
io.undertow.server.session.SessionListeners.sessionDestroyed(io.undertow.server.session.Session, io.undertow.server.HttpServerExchange, io.undertow.server.session.SessionListener$SessionDestroyedReason) line: 61
io.undertow.server.session.InMemorySessionManager$SessionImpl.invalidate(io.undertow.server.HttpServerExchange, io.undertow.server.session.SessionListener$SessionDestroyedReason) line: 586
io.undertow.server.session.InMemorySessionManager$SessionImpl$2$1.run() line: 393
org.jboss.threads.ContextClassLoaderSavingRunnable.run() line: 35
org.jboss.threads.EnhancedQueueExecutor.safeRun(java.lang.Runnable) line: 1985
org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(java.lang.Runnable) line: 1487
org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run() line: 1378
java.lang.Thread.run() line: 748
This stack trace is critical to understand.
Normally, on what we can call a happy HTTP request, WildFly will activate the following session scoped context class:
org.jboss.weld.module.web.context.http.HttpSessionContextImpl(org.jboss.weld.contexts.AbstractManagedContext).setActive(boolean) line: 41
But as shown above, when it is time to kill off an HTTP session, it instead activates the other implementation class of the session scope:
org.jboss.weld.module.web.context.http.HttpSessionDestructionContext(org.jboss.weld.contexts.AbstractManagedContext).setActive(boolean) line: 41
So far, is the story clear?
Let us summarize up to this point.
(a) We have our application running happily.
(b) We close our browser window and let the HTTP session time out.
(c) When Undertow kills off the HTTP session, WildFly activates the HttpSessionDestructionContext.
(d) Our application-specific session listener, which is curious about ending sessions, blows up because the request scope context is not active.
(e) The story then ends very badly: WildFly most likely has some logic that, at the end of a proper session termination, deactivates the HttpSessionDestructionContext, but whatever that logic is and whatever triggers it, for this application it will never get triggered.
So what are we left with at this point?
With a very subtle bug: a thoroughly corrupted thread.
Whatever thread Undertow used to kill off the session is now forever doomed to no longer support the session scoped context.
Why?
Because when you look at the implementation of these session contexts, you will see that their state of being active or not active is a THREAD LOCAL variable.
This means that for as long as this thread lives, the state it had from the previous run will remain.
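That per-thread flag behavior can be reproduced with plain Java and a single-thread executor (a simulation of the mechanism, not Weld code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DirtyThreadDemo {

    // Stands in for the per-thread "active" flag inside a managed context.
    static final ThreadLocal<Boolean> ACTIVE = ThreadLocal.withInitial(() -> false);

    // Returns true if a second task on the same pooled thread still sees
    // the flag set by a first task that blew up before cleaning it.
    static boolean staleStateLeaks() throws Exception {
        // A single-thread pool guarantees both tasks run on the same thread,
        // like an Undertow worker thread being reused for a later request.
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            try {
                // Task 1: activates the flag, then "blows up" before any
                // cleanup, like the session listener during sessionDestroyed.
                pool.submit((Runnable) () -> {
                    ACTIVE.set(true);
                    throw new IllegalStateException("session listener blew up");
                }).get();
            } catch (Exception expected) {
                // the task failed, and nothing reset the thread local
            }
            // Task 2: a fresh "request" on the same thread sees stale state.
            return pool.submit(ACTIVE::get).get();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("flag still active on reused thread: " + staleStateLeaks()); // prints true
    }
}
```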
And here is where WildFly is having a problem, in my opinion.
WildFly could either have "pre-emptive" cleanup logic that makes sure that, before a thread is handed a request, its thread local variables are cleaned up, to avoid going ahead with a dirty thread.
Or it would need some sort of resilience mechanism to make sure that when it activates a scope in the context of a running thread, the scope is ultimately deactivated before the thread finishes, whether well or badly.
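The second option can be sketched with a try/finally guard around the per-thread flag (a simulation of the idea, not WildFly code):

```java
public class ScopeCleanupDemo {

    // Stands in for a context's per-thread "active" flag.
    static final ThreadLocal<Boolean> ACTIVE = ThreadLocal.withInitial(() -> false);

    // Activate the scope, run the task, and guarantee deactivation in a
    // finally block, no matter whether the task finishes normally or blows up.
    static void runWithScope(Runnable task) {
        ACTIVE.set(true);
        try {
            task.run();
        } finally {
            ACTIVE.remove();
        }
    }

    public static void main(String[] args) {
        try {
            runWithScope(() -> {
                throw new IllegalStateException("session listener blew up");
            });
        } catch (IllegalStateException expected) {
            // the task failed, but the scope was still deactivated
        }
        System.out.println("still active after failure: " + ACTIVE.get()); // prints false
    }
}
```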
OK, to continue.
So now we have an application that blew up, and as a reward this application now has one toasted thread.
This issue is normally very difficult to reproduce, because you normally do not have a very low session timeout.
So you get the feeling that this issue appears when you leave your application server running for an extended period of time. You go home, put your computer to sleep, and the next morning everything is blowing up (because all of your HTTP sessions have timed out since then).
The best way to make this issue reproducible is to go to your standalone.xml and set it up like this:
<servlet-container name="default" default-session-timeout="1">
    <jsp-config/>
    <websockets/>
</servlet-container>
This sets the session timeout to one minute.
Then, with Chrome, you open an incognito window, log in to your application so the JSESSIONID gets created, and close the window.
Open a different incognito window, log in again, and close it again.
Repeat this for many sessions.
Then, in parallel, you can also set a debug breakpoint in
org.jboss.weld.contexts.AbstractManagedContext
on the method:
protected void setActive(boolean active) {
    getManagedState().setActive(active);
}
Put in there the following breakpoint condition:
this.getClass().getName().contains("HttpSessionDestructionContext")
What this breakpoint allows you to do is make sure that when Undertow starts killing off your sessions, you block the thread, which forces the session-killing process to spread over multiple different threads.
The larger the number of different threads that have been used to terminate sessions, the more threads you have corrupted, and the more likely you are to later reuse such a thread for a normal HTTP request that blows up with this error.
Essentially, you are just simulating what might take hours to happen on a normal production server.
On a production server a session can be active for many hours while a user is working, until he goes home or whatever.
And the next day is when you start having unusable threads in your thread pool.
OK, to finalize.
Soon I will be fixing the code that was breaking due to the missing request scope context.
But before that, I want to make sure I am somehow able to heal my WildFly threads in case this situation happens again in the future.
To do this I am using a servlet listener, shown below.
(The use of the listener and the helper as separate classes has to do with not having the WAR file blow up when you deploy it on WebLogic, since WebLogic would complain that it has no idea what this org.jboss.weld.module.web.context.http.HttpSessionDestructionContext is about.)
package whateverpackage;

import java.util.Arrays;
import java.util.List;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.spi.BeanManager;
import javax.inject.Inject;
import javax.servlet.ServletRequestEvent;
import javax.servlet.ServletRequestListener;

import LALALLA.commons.util.constants.CommonsConstants;
import LALALLA.commons.util.util.BasicApplicationServerIdentificationUtil;

@ApplicationScoped
public class LALALLAWildflyWELD001304ServletRequestListener implements ServletRequestListener {

    /**
     * The issue takes place after an http session has timed out: any subsequent http request handled by the thread
     * that managed the timeout runs on a corrupted thread. So we want to fix the corrupted thread that is handling an
     * http request.
     */
    private static final List<String> RELEVANT_SERVLET_REQUEST_PROTOCOLS = Arrays.asList("http", "https");

    @Inject
    BeanManager beanManager;

    final String appServerName;
    final Boolean isWildfly;

    /**
     * Create a new LALALLAWildflyWELD001304ServletRequestListener.
     */
    public LALALLAWildflyWELD001304ServletRequestListener() {
        super();
        appServerName = BasicApplicationServerIdentificationUtil.getApplicationServerName();
        isWildfly = CommonsConstants.WILDFLY.equals(appServerName);
    }

    @Override
    public void requestDestroyed(ServletRequestEvent sre) {
        // on request destroyed we do not need to do anything
    }

    @Override
    public void requestInitialized(ServletRequestEvent sre) {
        // (a) We only want this code to have any effect if it is running in wildfly
        if (!isWildfly) {
            return;
        }
        // (b) Make sure we are dealing with an http servlet request
        if (!isHttpServletRequest(sre)) {
            return;
        }
        // (c) We only want to intervene in the request if the current thread would
        // be broken by the multiple contexts active exception
        LALALLAWildflyWELD001304ServletRequestListenerHelper helper = new LALALLAWildflyWELD001304ServletRequestListenerHelper();
        if (!helper.isCurrentThreadCorruptedWithWELD001304Exception(beanManager)) {
            return;
        }
        helper.tryToRepairCorruptedThread(beanManager);
    }

    /**
     * Check if we are dealing with an http servlet request.
     *
     * @param sre
     *            the servlet request
     * @return TRUE if we are dealing with an http servlet request
     */
    protected boolean isHttpServletRequest(ServletRequestEvent sre) {
        return RELEVANT_SERVLET_REQUEST_PROTOCOLS.contains(sre.getServletRequest().getScheme().toLowerCase());
    }
}
Finally, the helper class that was isolated out to be able to deploy on WebLogic, and that does the voodoo of trying to repair the thread, is the following.
package wahteverpackage;
import javax.enterprise.context.SessionScoped;
import javax.enterprise.inject.spi.BeanManager;
import javax.servlet.http.HttpSession;
import org.jboss.weld.contexts.AbstractBoundContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import LALALLALALALLA.commons.util.cdi.CdiUtil;
/**
* This listener relates to the issue
*
* LALALLAMaint-3541 - Wildfly - WELD-001304 - More than one context active for scope type
* javax.enterprise.context.SessionScoped</a>
*
* We want to create a listener to tries to avoid the problem that once in wildlfy the HttpSessionDestructionContext is
* activated during a session timeout, if the session timeout event blows up, the thread that processed the session
* timeout will have its thread local state variables corrupted always activating this context implementation.
*/
public class LALALLAWildflyWELD001304ServletRequestListenerHelper {
private static final Logger LOGGER = LoggerFactory
.getLogger(LALALLAWildflyWELD001304ServletRequestListenerHelper.class);
/**
* Try to access the {#link SessionScoped} context to see if our thread will be damaged by a
*
* {#code org.jboss.weld.exceptions.IllegalStateException: WELD-001304: More than one context active for scope type javax.enterprise.context.SessionScoped}
*
* #return TRUE if the session scope for this specific thread has been corrupted. The corruption is most likely
* derived to a session timeout explosion with the
* {#link LALALLALALALLA.framework.web.security.LALALLASessionListener} blowing up when the sessionDestroyed
* is called.
*/
public boolean isCurrentThreadCorruptedWithWELD001304Exception(BeanManager beanManager) {
try {
// usual
beanManager.getContext(SessionScoped.class);
return false;
} catch (Exception e) {
org.jboss.weld.exceptions.IllegalStateException duplicateActiveContextsExceptionExample = org.jboss.weld.logging.BeanManagerLogger.LOG
.duplicateActiveContexts(SessionScoped.class.getName());
if (duplicateActiveContextsExceptionExample.getMessage().equals(e.getMessage())) {
// We are hitting a thread that has been corrupted: the session scope for this thread
// has two implementations of the session-scoped context currently active
return true;
}
return false;
}
}
/**
* The attempt to repair a corrupted thread is concerned with the fact that the thread-local memory of the current
* thread states that the HttpSessionDestructionContext is active, and this memory state is not being cleared
* because, when the session timed out, the handling of the session timeout was interrupted by some blow-up
* exception.
*
* To fix this, of course, the code that is blowing up should be repaired. But to be on the safe side we have this
* workaround code that will try to heal the thread and put it back into a workable state.
*/
public void tryToRepairCorruptedThread(BeanManager beanManager) {
LOGGER.error("The current thread appears to be corrupted, facing the"
+ " WELD-001304: More than one context active for scope type javax.enterprise.context.SessionScoped."
+ " See https://stackoverflow.com/questions/58930939/wildflt-13-weld-001304-more-than-one-context-active-for-scope-type-javax-enterp "
+ " It is possible that this thread was used in the past to handle an http session timeout that blew up during the"
+ " io.undertow.servlet.core.ApplicationListeners.sessionDestroyed phase."
+ " If this is the case, it is very likely that the thread-local state of the current thread is corrupted,"
+ " with the Wildfly HttpSessionDestructionContext stating that it is active when it should not be active at all."
+ " We will try to deactivate this context.");
AbstractBoundContext<HttpSession> httpSessionDestructionContext = getSessionDestructionContext(beanManager);
if (httpSessionDestructionContext.isActive()) {
LOGGER.warn(
"Our assumption that the problem is an active Wildfly HttpSessionDestructionContext holds. We will now try to clean up this thread's thread-local state by"
+ " forcing this HttpSessionDestructionContext to deactivate");
httpSessionDestructionContext.deactivate();
} else {
LOGGER.warn(
"The situation is not clear. Two or more contexts are active for the session scope; our expectation is that the two active contexts are the org.jboss.weld.context.http.HttpSessionContext and the org.jboss.weld.module.web.context.http.HttpSessionDestructionContext,"
+ " but the HttpSessionDestructionContext seems not to be ACTIVE.");
}
}
/**
* The HttpSessionDestructionContext obtained using the same technique as HttpContextLifecycle.
*
* @return The HttpSessionDestructionContext that we have seen to be active in requests where it should not be
* active.
*/
// Ignore the fact that we are using fully qualified names:
// we want to be able to deploy this java class to weblogic
// without getting explosions because imports at the class level are not present
@SuppressWarnings("squid:S1942")
public AbstractBoundContext<HttpSession> getSessionDestructionContext(BeanManager beanManager) {
// In a CDI container we get injected a proxy to the bean manager.
// org.jboss.weld.module.web.servlet.HttpContextLifecycle gets the BeanManagerImpl directly
// and uses a different technique to obtain the HttpSessionDestructionContext; here we look it up as a bean instead.
return CdiUtil.getBeanByClass(beanManager,
org.jboss.weld.module.web.context.http.HttpSessionDestructionContext.class);
}
}
Finally, I will probably be opening an issue at:
https://issues.jboss.org/projects/WFLY/issues
Thanks for all the help.
We have a cluster of servers that are monitoring a shared network mount for processing EDI files. We recently added code to use the RedisMetadataStore as follows:
@Bean
public ConcurrentMetadataStore metadataStore() {
return new RedisMetadataStore(redisConnectionFactory);
}
@Bean
public FileSystemPersistentAcceptOnceFileListFilter persistentAcceptOnceFileFilter() {
return new FileSystemPersistentAcceptOnceFileListFilter(metadataStore(), "edi-file-locks");
}
@Bean
public IntegrationFlow flowInboundNetTransferFile(
@Value("${edi.incoming.directory.netTransfers}") String inboundDirectory,
@Value("${edi.incoming.age-before-ready-seconds:30}") int ageBeforeReadySeconds,
@Value("${taskExecutor.inboundFile.corePoolSize:4}") int corePoolSize,
@Qualifier("taskExecutorInboundFile") TaskExecutor taskExecutor) {
//Set up a filter to pick up only files older than a certain age, relative to the current time. This prevents cases
//where something is still writing to the file while the EDI processor is moving it.
LastModifiedFileListFilter lastModifiedFilter = new LastModifiedFileListFilter();
lastModifiedFilter.setAge(ageBeforeReadySeconds);
return IntegrationFlows
.from(
Files
.inboundAdapter(new File(inboundDirectory))
.locker(ediDocumentLocker())
.filter(new ChainFileListFilter<File>())
.filter(new IgnoreHiddenFileListFilter())
.filter(lastModifiedFilter)
.filter(persistentAcceptOnceFileFilter()),
e -> e.poller(Pollers.fixedDelay(20000).maxMessagesPerPoll(corePoolSize).taskExecutor(taskExecutor)))
.channel(channelInboundFile())
.get();
}
This was working fine in our lower environments; however, we use a Redis cluster in our production environment, and when we deployed to that environment we encountered this exception:
org.springframework.dao.InvalidDataAccessApiUsageException: WATCH is currently not supported in cluster mode.
at org.springframework.data.redis.connection.jedis.JedisClusterConnection.watch(JedisClusterConnection.java:2450)
at org.springframework.data.redis.connection.DefaultStringRedisConnection.watch(DefaultStringRedisConnection.java:951)
at org.springframework.data.redis.core.RedisTemplate$24.doInRedis(RedisTemplate.java:885)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:204)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:166)
at org.springframework.data.redis.core.RedisTemplate.watch(RedisTemplate.java:882)
at org.springframework.data.redis.support.collections.DefaultRedisMap$2.execute(DefaultRedisMap.java:225)
at org.springframework.data.redis.support.collections.DefaultRedisMap$2.execute(DefaultRedisMap.java:221)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:226)
at org.springframework.data.redis.support.collections.DefaultRedisMap.replace(DefaultRedisMap.java:221)
at org.springframework.data.redis.support.collections.RedisProperties.replace(RedisProperties.java:238)
at org.springframework.integration.redis.metadata.RedisMetadataStore.replace(RedisMetadataStore.java:154)
at org.springframework.integration.file.filters.AbstractPersistentAcceptOnceFileListFilter.accept(AbstractPersistentAcceptOnceFileListFilter.java:83)
at org.springframework.integration.file.filters.AbstractFileListFilter.filterFiles(AbstractFileListFilter.java:40)
at org.springframework.integration.file.filters.ChainFileListFilter.filterFiles(ChainFileListFilter.java:50)
at org.springframework.integration.file.DefaultDirectoryScanner.listFiles(DefaultDirectoryScanner.java:95)
at org.springframework.integration.file.FileReadingMessageSource.scanInputDirectory(FileReadingMessageSource.java:387)
at org.springframework.integration.file.FileReadingMessageSource.receive(FileReadingMessageSource.java:366)
at org.springframework.integration.endpoint.SourcePollingChannelAdapter.receiveMessage(SourcePollingChannelAdapter.java:224)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:245)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.access$000(AbstractPollingEndpoint.java:58)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:190)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:186)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller$1.run(AbstractPollingEndpoint.java:353)
at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:55)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
Looking a little closer, it looks like this store uses RedisProperties as a backing store, which in turn uses features of the Redis client that are not supported by the cluster client. Has anyone worked around this issue? Perhaps written an alternate store that does support a Redis cluster?
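For context, the metadata-store contract that AbstractPersistentAcceptOnceFileListFilter relies on boils down to two atomic check-and-set operations, putIfAbsent and replace; a cluster-safe Redis store could implement these with SET NX and a small Lua compare-and-set script instead of the WATCH/MULTI transaction that RedisProperties uses. The sketch below only demonstrates those semantics, using a ConcurrentHashMap as a stand-in for Redis (the class name and key prefix are hypothetical, not from any library):

```java
import java.util.concurrent.ConcurrentHashMap;

/**
 * Sketch of the two atomic operations a cluster-safe metadata store must provide.
 * A real implementation would back putIfAbsent with Redis SET NX and replace with
 * a Lua compare-and-set script, both of which work against a single key slot.
 */
public class ClusterSafeStoreSketch {
    private final ConcurrentHashMap<String, String> backing = new ConcurrentHashMap<>();

    /** Returns null if this caller claimed the key, or the existing value if another node already did. */
    public String putIfAbsent(String key, String value) {
        return backing.putIfAbsent(key, value);
    }

    /** Atomically swaps the value only if the current value matches the expected one. */
    public boolean replace(String key, String oldValue, String newValue) {
        return backing.replace(key, oldValue, newValue);
    }
}
```

The filter calls putIfAbsent with the file's last-modified timestamp to claim it, and replace to update the timestamp when the file changes; neither call needs a multi-key transaction, which is why this shape avoids the cluster limitation.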
First time using Camel routes; all the Apache docs and the info around Stack Overflow make it look very easy... (this is in Groovy, using Apache Camel and the Camel Salesforce component as libraries)
class SFRouteClass extends RouteBuilder {
#Override
void configure() {
def camelRouteId
from("direct:moveFailedMessage")
.process { Exchange exchange ->
exchange.out
}
from('direct:sfe')
.onException(Exception.class)
.to('log:sfe?level=INFO&showHeaders=true&multiline=true')
.to('direct:moveFailedMessage')
.maximumRedeliveries(0)
.end()
.routeId('sfe')
.enrich('direct:salesforceCheckLeadByIDUserHashId'.toString(), new AggregationStrategy() { ..... })
}
org.apache.camel.component.salesforce.api.SalesforceException: Error {404:Not Found} executing {GET:https://.....}
at org.apache.camel.component.salesforce.internal.client.AbstractClientBase$1.onComplete(AbstractClientBase.java:176)
at org.eclipse.jetty.client.ResponseNotifier.notifyComplete(ResponseNotifier.java:193)
at org.eclipse.jetty.client.ResponseNotifier.notifyComplete(ResponseNotifier.java:185)
at org.eclipse.jetty.client.HttpReceiver.terminateResponse(HttpReceiver.java:453)
at org.eclipse.jetty.client.HttpReceiver.responseSuccess(HttpReceiver.java:400)
at org.eclipse.jetty.client.http.HttpReceiverOverHTTP.messageComplete(HttpReceiverOverHTTP.java:266)
at org.eclipse.jetty.http.HttpParser.parseContent(HttpParser.java:1487)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:1245)
at org.eclipse.jetty.client.http.HttpReceiverOverHTTP.parse(HttpReceiverOverHTTP.java:156)
at org.eclipse.jetty.client.http.HttpReceiverOverHTTP.process(HttpReceiverOverHTTP.java:117)
at org.eclipse.jetty.client.http.HttpReceiverOverHTTP.receive(HttpReceiverOverHTTP.java:69)
at org.eclipse.jetty.client.http.HttpChannelOverHTTP.receive(HttpChannelOverHTTP.java:89)
at org.eclipse.jetty.client.http.HttpConnectionOverHTTP.onFillable(HttpConnectionOverHTTP.java:123)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.camel.component.salesforce.api.SalesforceException: {errors:[{"errorCode":"NOT_FOUND","message":"The requested resource does not exist"}],statusCode:404}
at org.apache.camel.component.salesforce.internal.client.DefaultRestClient.createRestException(DefaultRestClient.java:121)
at org.apache.camel.component.salesforce.internal.client.AbstractClientBase$1.onComplete(AbstractClientBase.java:175)
... 16 common frames omitted
This works if there is no error. My problem is that the onException never triggers. I have tried adding a global errorHandler to the config, and I have tried a global onException. Nothing catches the error and I cannot figure out why.
You need to either:
set no error handler on the route that starts at direct:salesforceCheckLeadByIDUserHashId, so the route-scoped error handler you have on the other route can kick in,
or add/move that onException to that route as well.
Have you tried the doTry / doCatch approach?
from('direct:sfe')
.routeId('sfe')
.doTry()
.enrich('direct:salesforceCheckLeadByIDUserHashId'.toString(), new AggregationStrategy() { ..... })
.doCatch(Exception.class)
.to('log:sfe?level=INFO&showHeaders=true&multiline=true')
.to('direct:moveFailedMessage')
.maximumRedeliveries(0) // Not needed, as the default is 0
.end()
We have a Java application which periodically inserts rows into an Oracle DB. This is a multi-threaded application. All threads barring one get stuck periodically. We're thinking of upgrading the Oracle JDBC driver, but I have a feeling the problem might show up again. I just wanted some input on whether it's an error in our code or something else. I have both the stack trace and the relevant parts of the code below. We see "locked" periodically in the thread info. Please give us some insight as to what could be wrong.
----Code----
LogEventBatchPreparedStatementUpdater statementUpdater = new LogEventBatchPreparedStatementUpdater(logEvents);
// _jdbcTemplate.batchUpdate(INSERT_SQL, statementUpdater);
Connection connection = null;
PreparedStatement preparedStatement = null;
try
{
connection = _dataSource.getConnection();
connection.setAutoCommit(false);
preparedStatement = connection.prepareStatement(INSERT_SQL);
for (int i = 0; i < statementUpdater.getBatchSize(); i++)
{
statementUpdater.setValues(preparedStatement, i);
preparedStatement.addBatch();
}
preparedStatement.executeBatch();
connection.commit();
}
catch (SQLException e)
{
_Log.error("Error inserting log line batch",e );
}
finally
{
try
{
// Guard against NPEs: getConnection() or prepareStatement() may have failed above,
// leaving these references null
if (preparedStatement != null)
{
preparedStatement.close();
}
if (connection != null)
{
connection.close();
}
}
catch (SQLException e)
{
_Log.error("Error closing statement or connection", e);
}
}
----Stack Trace----
"Thread-258 " daemon prio=6 tid=0x09437400 nid=0x2300 runnable [0x0f55f000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(Unknown Source)
at oracle.net.ns.Packet.receive(Unknown Source)
at oracle.net.ns.NetInputStream.getNextPacket(Unknown Source)
at oracle.net.ns.NetInputStream.read(Unknown Source)
at oracle.net.ns.NetInputStream.read(Unknown Source)
at oracle.net.ns.NetInputStream.read(Unknown Source)
at oracle.jdbc.ttc7.MAREngine.unmarshalUB1(MAREngine.java:931)
at oracle.jdbc.ttc7.MAREngine.unmarshalSB1(MAREngine.java:893)
at oracle.jdbc.ttc7.Oall7.receive(Oall7.java:369)
at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java:1891)
at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteFetch(TTC7Protocol.java:1093)
- locked <0x1ce417c0> (a oracle.jdbc.ttc7.TTC7Protocol)
at oracle.jdbc.driver.OracleStatement.executeNonQuery(OracleStatement.java:2047)
at oracle.jdbc.driver.OracleStatement.doExecuteOther(OracleStatement.java:1940)
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:3899)
- locked <0x18930c00> (a oracle.jdbc.driver.OraclePreparedStatement)
- locked <0x1ce3f9f0> (a oracle.jdbc.driver.OracleConnection)
at org.apache.commons.dbcp.DelegatingStatement.executeBatch(DelegatingStatement.java:294)
at ************.insertLogEventBatch(JdbcLogEventBatchDao.java:61)
at ************.DBLogEventBatchProcessor.processLogLineBatch(DBLogEventBatchProcessor.java:30)
at ************.LogLineBatcher.processLogLineBatch(LogLineBatcher.java:274)
at ************.LogLineBatcher.processBatchBasedOnTime(LogLineBatcher.java:192)
at ************.LogLineBatcher.manageBatch(LogLineBatcher.java:178)
at ************.LogLineBatcher.access$000(LogLineBatcher.java:24)
at ************.LogLineBatcher$1.run(LogLineBatcher.java:152)
at java.lang.Thread.run(Unknown Source)
The fact that the thread state is RUNNABLE, and that it is trying to read from a socket, imply that it is simply waiting for a response from the database. So the thing to investigate is what the database session is waiting on. If you can identify the session in the V$SESSION view, the EVENT column will indicate what it is waiting on. Seems like there could potentially be a lock wait in the database.
FYI, where the thread dump says "locked", e.g. locked <0x1ce417c0>, it is just telling you that the thread has acquired a lock; I believe the hex code is the ID of the object on which the lock is held.
Here is some useful information on interpreting thread dumps.
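When a dump has many threads, picking out which monitors each one holds by eye gets tedious; the "locked <0x...>" lines can be extracted mechanically. A throwaway sketch (the class name and regex are my own, not from any tool):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Pulls the "- locked <0x...> (a com.Foo)" lines out of a thread dump, so you
 * can see which monitors a thread holds while it sits RUNNABLE on a socket read.
 */
public class LockLineExtractor {
    // Matches e.g. "- locked <0x1ce417c0> (a oracle.jdbc.ttc7.TTC7Protocol)"
    private static final Pattern LOCKED =
            Pattern.compile("-\\s+locked\\s+<(0x[0-9a-f]+)>\\s+\\(a\\s+([\\w.$]+)\\)");

    /** Returns "objectId className" for each held monitor, in dump order. */
    public static List<String> heldMonitors(String dump) {
        List<String> result = new ArrayList<>();
        Matcher m = LOCKED.matcher(dump);
        while (m.find()) {
            result.add(m.group(1) + " " + m.group(2));
        }
        return result;
    }
}
```

Running this over the trace above would show the thread holding the protocol, statement, and connection monitors, which is what blocks the other threads that share the same connection.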