Cognos promptmany function

I'm trying to use the promptmany function to make the prompt optional for the user: in case nothing is selected, the default value should be 'Arrival announced'. The thing is, when I use the function like this:
[STAGES] in (#promptmany ('STAGES','string','Arrival announced')#)
I get the following error:
XQE-V5-0011
V5 syntax error found in expression "[STAGES] in (Arrival announced)", invalid token "Arrival" found after "[STAGES] in (".
Details
RSV-SRV-0042 Trace back:
RSReportService.cpp(747): XQEException: CCL_CAUGHT: RSReportService::processImpl()
RSReportServiceMethod.cpp(258): XQEException: CCL_RETHROW: RSReportServiceMethod::process(): promptPagingForward_Request
RSASyncExecutionThread.cpp(848): XQEException: RSASyncExecutionThread::checkException
RSASyncExecutionThread.cpp(305): XQEException: CCL_CAUGHT: RSASyncExecutionThread::runImpl(): promptPagingForward_Request
RSASyncExecutionThread.cpp(904): XQEException: CCL_RETHROW: RSASyncExecutionThread::processCommand(): promptPagingForward_Request
Execution/RSRenderExecution.cpp(587): XQEException: CCL_RETHROW: RSRenderExecution::execute
Assembly/RSDocAssemblyDispatch.cpp(323): XQEException: CCL_RETHROW: RSDocAssemblyDispatch::dispatchAssembly
Assembly/RSLayoutAssembly.cpp(79): XQEException: CCL_RETHROW: RSLayoutAssembly::assemble
Assembly/RSDocAssemblyDispatch.cpp(417): XQEException: CCL_RETHROW: RSDocAssemblyDispatch::dispatchChildrenAssemblyForward
Assembly/RSReportPagesAssembly.cpp(178): XQEException: CCL_RETHROW: RSReportPagesAssembly::assemble
Assembly/RSDocAssemblyDispatch.cpp(367): XQEException: CCL_RETHROW: RSDocAssemblyDispatch::dispatchAssembly
Assembly/RSPageAssembly.cpp(314): XQEException: CCL_RETHROW: RSPageAssembly::assemble
Assembly/RSDocAssemblyDispatch.cpp(367): XQEException: CCL_RETHROW: RSDocAssemblyDispatch::dispatchAssembly
Assembly/RSTableRowAssembly.cpp(177): XQEException: CCL_RETHROW: RSTableRowAssembly::assemble
Assembly/RSDocAssemblyDispatch.cpp(367): XQEException: CCL_RETHROW: RSDocAssemblyDispatch::dispatchAssembly
Assembly/RSTableCellAssembly.cpp(151): XQEException: CCL_RETHROW: RSTableCellAssembly::assemble
Assembly/RSDocAssemblyDispatch.cpp(417): XQEException: CCL_RETHROW: RSDocAssemblyDispatch::dispatchChildrenAssemblyForward
Assembly/RSDocAssemblyDispatch.cpp(367): XQEException: CCL_RETHROW: RSDocAssemblyDispatch::dispatchAssembly
Assembly/RSAssembly.cpp(677): XQEException: CCL_RETHROW: RSAssembly::createListIterator
Assembly/RSAssembly.cpp(732): XQEException: CCL_RETHROW: RSAssembly::createListIterator
RSQueryMgr.cpp(519): XQEException: CCL_RETHROW: RSQueryMgr::getListIterator
RSQueryMgr.cpp(586): XQEException: CCL_RETHROW: RSQueryMgr::getResultSetIterator
RSQueryMgr.cpp(678): XQEException: CCL_RETHROW: RSQueryMgr::createIterator
RSQueryMgrBasic.cpp(279): XQEException: CCL_RETHROW: RSQueryMgrBasic::executeRsapiCommand
RSQueryMgrExecutionHandlerImpl.cpp(170): XQEException: CCL_RETHROW: RSQueryMgrExecutionHandlerImpl::execute()
QFSSession.cpp(1153): XQEException: CCL_RETHROW: QFSSession::ProcessDoRequest()
QFSSession.cpp(1151): XQEException: CCL_CAUGHT: QFSSession::ProcessDoRequest()
QFSSession.cpp(1108): XQEException: CCL_RETHROW: QFSSession::ProcessDoRequest()
QFSConnection.cpp(788): XQEException: CCL_RETHROW: QFSConnection::Execute
QFSQuery.cpp(213): XQEException: CCL_RETHROW: QFSQuery::Execute v2
XQEConnector.cpp(289): XQEException: CCL_THROW: XQEConnector::send
I can't put my finger on the problem.
Thanks!!

Add extra single quotes around the default value:
[STAGES] in (#promptmany ('STAGES','string','''Arrival announced''')#)
or
[STAGES] in (#promptmany ('STAGES','string',sq('Arrival announced'))#)
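The reason: #promptmany(...)# is a macro that is expanded before the expression is parsed, and the default value is substituted as literal text. Without its own quotes, the rendered filter is exactly what the error message shows:
[STAGES] in (Arrival announced)
With the doubled quotes, or with the sq() macro function (which wraps its argument in single quotes), the default renders as a proper string literal and the expression parses:
[STAGES] in ('Arrival announced')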

Related

hive-exec gets into conflict with commons-lang3 in CDH 6.3.2 when generating a CSV file via Java Spark 2.4

I have been stumbling over java.io.InvalidClassException: org.apache.commons.lang3.time.FastDateParser lately, trying to solve it to no avail.
My current workspace consists of Talend Studio 7.3.1 patch R2020-09, executing on a remote cluster with Cloudera Hadoop 6.3.2 (Spark 2.4.0).
Here is the stack trace:
org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:668)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:276)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:270)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:228)
at org.apache.spark.sql.DataFrameWriter.csv(DataFrameWriter.scala:656)
at ... (private stuff)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure:
Aborting TaskSet 5.0 because task 6 (partition 6)
cannot run anywhere due to node and executor blacklist.
Most recent failure:
Lost task 4.2 in stage 5.0 (TID 225, slaaeizeba13.enelint.global, executor 4): java.io.InvalidClassException: org.apache.commons.lang3.time.FastDateParser; local class incompatible: stream classdesc serialVersionUID = 2, local class serialVersionUID = 3
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:616)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1843)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:83)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$11.apply(Executor.scala:407)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1408)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:413)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
This only happens on the .csv method, which is located inside a tFileOutputDelimited component. We generate these CSVs to move data into a Redshift database afterwards.
Upon investigating, the hive-exec-2.1.1-cdh6.3.2 jar contains a FastDateParser (inside its own bundled copy of commons-lang3) with serialVersionUID = 2, while the commons-lang3-3.7 jar contains a FastDateParser with serialVersionUID = 3.
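To double-check which copy of the class a given classpath actually serves, and with which serialVersionUID, a small probe like this can help (a hedged sketch; Lang3Probe is a made-up name, and you would run it on the driver and, say, inside a map function on the executors to compare the two sides):

import java.io.ObjectStreamClass;
import org.apache.commons.lang3.time.FastDateParser;

public class Lang3Probe {
    public static void main(String[] args) {
        Class<?> clazz = FastDateParser.class;
        // Which jar the class was loaded from: hive-exec's bundled copy or commons-lang3-3.7.
        System.out.println(clazz.getProtectionDomain().getCodeSource().getLocation());
        // The serialVersionUID this classpath will use when deserializing (2 vs. 3 above).
        System.out.println(ObjectStreamClass.lookup(clazz).getSerialVersionUID());
    }
}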
What I've tried so far:
Changing the CSV compression format.
Establishing a custom Spark serializer in the Spark configuration.
Explicitly adding a dependency on commons-lang3-3.7 and hive-exec-2.1.1-cdh6.3.2 while excluding commons-lang3 from the latter in the Standalone Job pom (Project Settings > Build > Maven > Default > Standalone Job).
Adding the code from the answer to the question "commons-lang3 invalid versions in cloudera 6.1 with spark 2.4" in the same place.
Is there anything I can do to avoid this issue without upgrading to a new patch?
Any help would be great. Thanks.

Unable to write parquet file due to error "SparkException: Job aborted ...FileFormatWriter$.write"

I am using Spark 2.4.5 with Java 1.8 in my project.
I have a dataset resulting from a join and a few aggregations.
While writing that dataset to an S3 path in Parquet format, I get the error below:
org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
at com.project.processor.Processor.persistData(Processor.java:591)
at com.project.processor.Processor.main(Processor.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:685)
Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Exchange hashpartitioning
What is wrong here? I don't see any further clue about what is causing this issue.
How can I find out what is wrong, and how do I fix it?
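For what it's worth, the top-level SparkException and the TreeNodeException are usually just wrappers; the actual failure tends to sit further down the cause chain or in the executor logs (yarn logs -applicationId <appId> when running on YARN). A minimal sketch for surfacing the full chain at the write site (persistData mirrors the method name in the trace; the bucket path is a placeholder):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;

public class ParquetWriteDebug {
    // Hedged sketch: print every nested cause so the real executor-side error
    // is visible, instead of only the outer "Job aborted" wrapper.
    public static void persistData(Dataset<Row> ds) {
        try {
            ds.write().mode(SaveMode.Overwrite).parquet("s3a://your-bucket/your-path/");
        } catch (Exception e) {
            for (Throwable t = e; t != null; t = t.getCause()) {
                System.err.println("cause: " + t);
            }
            throw new RuntimeException(e);
        }
    }
}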

java.lang.NullPointerException at com.sun.enterprise.deployment.util.ComponentValidator.accept() while deploying

I am developing a Java EE application and am facing an error that I'm uncertain is caused by my code.
I created an entity class called Staff and a stateless session bean (a facade) for the entity class, to access it from the WAR application. Initially I had one managed bean with a reference to that facade using the @EJB annotation, and it worked fine; however, when I tried to refer to it from another managed bean, the errors below started to show:
Severe: Exception during lifecycle processing
java.lang.NullPointerException
at com.sun.enterprise.deployment.util.ComponentValidator.accept(ComponentValidator.java:554)
at com.sun.enterprise.deployment.util.DefaultDOLVisitor.accept(DefaultDOLVisitor.java:78)
at com.sun.enterprise.deployment.util.ComponentValidator.accept(ComponentValidator.java:123)
at com.sun.enterprise.deployment.util.ApplicationValidator.accept(ApplicationValidator.java:152)
at org.glassfish.web.deployment.util.WebBundleValidator.accept(WebBundleValidator.java:81)
at com.sun.enterprise.deployment.BundleDescriptor.visit(BundleDescriptor.java:625)
at org.glassfish.web.deployment.descriptor.WebBundleDescriptorImpl.visit(WebBundleDescriptorImpl.java:1999)
at org.glassfish.web.deployment.descriptor.WebBundleDescriptorImpl.visit(WebBundleDescriptorImpl.java:1989)
at com.sun.enterprise.deployment.util.ApplicationValidator.accept(ApplicationValidator.java:131)
at com.sun.enterprise.deployment.BundleDescriptor.visit(BundleDescriptor.java:625)
at com.sun.enterprise.deployment.archivist.ApplicationArchivist.validate(ApplicationArchivist.java:703)
at com.sun.enterprise.deployment.archivist.ApplicationArchivist.openWith(ApplicationArchivist.java:248)
at com.sun.enterprise.deployment.archivist.ApplicationFactory.openWith(ApplicationFactory.java:232)
at org.glassfish.javaee.core.deployment.DolProvider.processDOL(DolProvider.java:193)
at org.glassfish.javaee.core.deployment.DolProvider.load(DolProvider.java:227)
at org.glassfish.javaee.core.deployment.DolProvider.load(DolProvider.java:96)
at com.sun.enterprise.v3.server.ApplicationLifecycle.loadDeployer(ApplicationLifecycle.java:881)
at com.sun.enterprise.v3.server.ApplicationLifecycle.setupContainerInfos(ApplicationLifecycle.java:821)
at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:377)
at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:219)
at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:491)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:539)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:535)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2.execute(CommandRunnerImpl.java:534)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$3.run(CommandRunnerImpl.java:565)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$3.run(CommandRunnerImpl.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:556)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1464)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1300(CommandRunnerImpl.java:109)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1846)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1722)
at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:534)
at com.sun.enterprise.v3.admin.AdminAdapter.onMissingResource(AdminAdapter.java:224)
at org.glassfish.grizzly.http.server.StaticHttpHandlerBase.service(StaticHttpHandlerBase.java:189)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:459)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:167)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:206)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:180)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:283)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:200)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:132)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:111)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:536)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:591)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:571)
at java.lang.Thread.run(Thread.java:748)
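For reference, a minimal sketch of the setup described above (all names are hypothetical, and each class would live in its own file):

import javax.ejb.EJB;
import javax.faces.bean.ManagedBean;

// First managed bean -- this one deployed and worked fine on its own.
@ManagedBean
public class StaffController {
    @EJB
    private StaffFacade staffFacade;
}

// Second managed bean -- adding this second @EJB reference to the same
// facade is when the NullPointerException during deployment appeared.
@ManagedBean
public class ReportController {
    @EJB
    private StaffFacade staffFacade;
}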
I've been trying to search online for a solution but found nothing. I would appreciate it if anyone could answer this.

Cassandra: Exception on startup

I am getting the following error on starting Cassandra:
ERROR [main] 2018-04-09 18:19:25,014 CassandraDaemon.java:675 -
Exception encountered during startup
java.lang.AbstractMethodError: org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
at javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150) ~[na:1.8.0_162]
at javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135) ~[na:1.8.0_162]
at javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405) ~[na:1.8.0_162]
at org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104) ~[main/:na]
at org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:139) [main/:na]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:183) [main/:na]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569) [main/:na]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:658) [main/:na]
Can someone please tell me where I could be going wrong? I haven't made any changes since the last successful run, so I'm confused.

GF4 incorrect timeout exception thrown in Servlet when multiple large responses are returned (requested)

The issue occurs when multiple requests (in this example, for images) are handled by GF. For some reason, the response output stream blocks (for larger writes) and I get a timeout after a few seconds.
I downloaded the latest (2/12/2013) promoted build from the GlassFish repo and replaced nucleus-grizzly-all.jar, but I still get the same issue.
Any ideas of what causes this issue and how I can resolve it?
Thanks.
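For context, the write path in the trace boils down to something like this (a hypothetical reconstruction of ImageServlet.serveImage; the method signature and content type are assumptions):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.servlet.http.HttpServletResponse;
import org.apache.commons.io.IOUtils;

// IOUtils.copy pushes the image through the container's output buffer;
// Grizzly's OutputBuffer.blockAfterWriteIfNeeded then waits, with a timeout,
// for the pending async write to drain -- which is where the
// TimeoutException in the trace below surfaces.
void serveImage(HttpServletResponse resp, InputStream image) throws IOException {
    resp.setContentType("image/png");
    try (OutputStream out = resp.getOutputStream()) {
        IOUtils.copy(image, out);
    }
}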
Exception
java.io.IOException: java.util.concurrent.TimeoutException
at org.glassfish.grizzly.utils.Exceptions.makeIOException(Exceptions.java:81)
at org.glassfish.grizzly.http.io.OutputBuffer.blockAfterWriteIfNeeded(OutputBuffer.java:958)
at org.glassfish.grizzly.http.io.OutputBuffer.write(OutputBuffer.java:682)
at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:355)
at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:342)
at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:161)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1793)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1769)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1744)
at com.app.web.servlet.ImageServlet.serveImage(ImageServlet.java:234)
at com.app.web.servlet.ImageServlet.service(ImageServlet.java:157)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1682)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:344)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at com.dothatapp.web.filter.DoThatAppFilter.doFilter(DoThatAppFilter.java:27)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at com.dothatapp.web.filter.AuthFilter.doFilter(AuthFilter.java:113)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:316)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:160)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:734)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:673)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:99)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:174)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:357)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:260)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:188)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:191)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:168)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:189)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:838)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:113)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:115)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:55)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:135)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:564)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:544)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.util.concurrent.TimeoutException
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:258)
at java.util.concurrent.FutureTask.get(FutureTask.java:119)
at org.glassfish.grizzly.http.io.OutputBuffer.blockAfterWriteIfNeeded(OutputBuffer.java:951)
... 48 more
Related:
Unable to find or serve resource
Possible cause (bug report):
https://java.net/jira/browse/GLASSFISH-20681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
I did two things wrong when updating my GF4:
I did not delete the OSGi cache (...\glassfish4\glassfish\domains\domain1\osgi-cache).
The promoted Grizzly build had version issues.
Resolution
The bug linked above has a jar patch attached. I replaced the jar under the modules directory, cleared the OSGi cache, and everything works fine now.
