Hazelcast hangs when server fails to handle query - hazelcast

I want to execute an instance of a subtype of AbstractEntryProcessor (let's say SomeEntryProcessor) with the IMap.executeOnKey method.
The server-side ClassLoader doesn't have this class (SomeEntryProcessor), so it expectedly fails with:
java.lang.ClassNotFoundException: com.package.SomeEntryProcessor
com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.ClassNotFoundException: com.package.SomeEntryProcessor
at com.hazelcast.nio.serialization.DefaultSerializers$ObjectSerializer.read(DefaultSerializers.java:190) ~[hazelcast-3.2.3.jar:3.2.3]
at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:40) ~[hazelcast-3.2.3.jar:3.2.3]
at com.hazelcast.nio.serialization.SerializationServiceImpl.readObject(SerializationServiceImpl.java:279) ~[hazelcast-3.2.3.jar:3.2.3]
at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:433) ~[hazelcast-3.2.3.jar:3.2.3]
at com.hazelcast.map.client.MapExecuteOnKeyRequest.read(MapExecuteOnKeyRequest.java:88) ~[hazelcast-3.2.3.jar:3.2.3]
But after this, executeOnKey hangs forever.
I believe this happens due to an infinite wait in this method:
ClientCallFuture.get(long timeout, TimeUnit unit) {
    ...
    this.wait(waitMillis); // line 103 in hazelcast 3.2.3
}
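For reference, a minimal sketch of how the failing call is made (the map name, key, and processor body below are hypothetical placeholders; only SomeEntryProcessor, AbstractEntryProcessor, and executeOnKey come from the scenario above):

import java.util.Map;
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.AbstractEntryProcessor;

public class ExecuteOnKeyRepro {

    // Present only on the client's classpath; the server throws
    // ClassNotFoundException while deserializing it.
    static class SomeEntryProcessor extends AbstractEntryProcessor<String, String> {
        @Override
        public Object process(Map.Entry<String, String> entry) {
            entry.setValue("processed");
            return null;
        }
    }

    public static void main(String[] args) {
        HazelcastInstance client = HazelcastClient.newHazelcastClient();
        IMap<String, String> map = client.getMap("someMap");
        // The server fails with HazelcastSerializationException, and the client
        // call then hangs instead of propagating the error.
        map.executeOnKey("someKey", new SomeEntryProcessor());
    }
}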
Has anyone seen the same?
I created an issue at https://github.com/hazelcast/hazelcast/issues/3842, but have received no response.

Sorry, somehow I missed the issue. Can you try a 3.3.x release? I guess the issue is fixed in those versions.

Related

ADF internal error occurred on executing a VO in a task flow

Can anybody tell me the actual reason for the error below:
oracle.jbo.JboException: JBO-29000: Unexpected exception caught: java.lang.NullPointerException, msg=null
at oracle.jbo.server.ViewObjectImpl.executeQueryForCollection(ViewObjectImpl.java:7349)
at oracle.jbo.server.ViewRowSetImpl.execute(ViewRowSetImpl.java:1257)
at oracle.jbo.server.ViewRowSetImpl.executeQueryForMasters(ViewRowSetImpl.java:1449)
at oracle.jbo.server.ViewRowSetImpl.executeQueryForMode(ViewRowSetImpl.java:1355)
at oracle.jbo.server.ViewRowSetImpl.executeQuery(ViewRowSetImpl.java:1340)
at oracle.jbo.server.ViewObjectImpl.executeQuery(ViewObjectImpl.java:7236)
at oracle.adf.model.bc4j.DCJboDataControl.executeIteratorBindingWithParams(DCJboDataControl.java:2987)
at oracle.jbo.uicli.binding.JUCtrlActionBinding.doIt(JUCtrlActionBinding.java:1541)
at oracle.adf.model.binding.DCDataControl.invokeOperation(DCDataControl.java:2150)
at oracle.jbo.uicli.binding.JUCtrlActionBinding.invoke(JUCtrlActionBinding.java:740)
...
...
..
.
Caused by: java.lang.NullPointerException
at oracle.jdbc.driver.OraclePreparedStatement.setObjectAtName(OraclePreparedStatement.java:15884)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.setObjectAtName(OraclePreparedStatementWrapper.java:911)
at weblogic.jdbc.wrapper.PreparedStatement_oracle_jdbc_driver_OraclePreparedStatementWrapper.setObjectAtName(Unknown Source)
at oracle.jbo.server.OracleSQLBuilderImpl.bindParamValue(OracleSQLBuilderImpl.java:4669)
at oracle.jbo.server.BaseSQLBuilderImpl.bindParametersForStmt(BaseSQLBuilderImpl.java:3687)
at oracle.jbo.server.ViewObjectImpl.bindParametersForCollection(ViewObjectImpl.java:22684)
From the stack trace, it seems some bind variable required by your view object query is not being set correctly.
If it works intermittently, it could be that the bind variable is lost at some point. To debug/test, first try it with a hard-coded value for your bind variable and see if that works (if you have a ViewCriteria, try removing it first), then run the task flow and see if it works consistently.
I would suggest running the application in JDeveloper with the -Djbo.debugoutput=console argument. It gives you a lot of information about what your business components are doing, and you could catch the cause of the error in the console logs.
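As a rough sketch of the hard-coding test (the application module, view object name, and bind variable name here are made-up placeholders; substitute your own):

import oracle.jbo.ApplicationModule;
import oracle.jbo.ViewObject;

public class BindVarDebug {
    // Hard-code the bind value before executing the query to rule out a lost binding.
    public static void testQuery(ApplicationModule am) {
        ViewObject vo = am.findViewObject("EmployeesVO");
        vo.setNamedWhereClauseParam("pDepartmentId", 10); // hard-coded test value
        vo.executeQuery();
        System.out.println("Rows fetched: " + vo.getEstimatedRowCount());
    }
}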

Hazelcast Supplier and Aggregation gives Concurrent Execution Exception

I am trying to get a set of the distinct values of an object's field stored in a Hazelcast map.
This line of Java code:
instructions.aggregate(Supplier.all(value -> value.getWorkArea()), Aggregations.distinctValues());
has the following stack trace:
java.util.concurrent.ExecutionException: com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.ClassNotFoundException: com.example.instruction.repository.HazelcastInstructionRepository$GeneratedEvaluationClass
com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.ClassNotFoundException: com.example.instruction.repository.HazelcastInstructionRepository$GeneratedEvaluationClass
java.lang.ClassNotFoundException: com.example.instruction.repository.HazelcastInstructionRepository$GeneratedEvaluationClass
If I try this line:
instructions.aggregate(Supplier.all(), Aggregations.distinctValues());
or:
instructions.aggregate(Supplier.fromPredicate(Predicates.and(
        Predicates.equal("type", "someType"),
        Predicates.equal("groupId", null),
        Predicates.equal("workArea", "someWorkArea"))),
        Aggregations.distinctValues());
it just works. Something seems to go wrong when I reference the object's field (I also tried it with other fields of the object, and the same error is returned).
This is running on my local environment, and I am sure the objects are being placed correctly in the Hazelcast map, since the other aggregations/predicates work.
Do you have any ideas about what I am doing wrong?
Many thanks!
EDIT: So the problem is the closure: it is only available on the calling node, not on all nodes.
Also, this feature is deprecated; please use fast aggregations instead:
http://docs.hazelcast.org/docs/latest/manual/html-single/#fast-aggregations
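A minimal sketch of the fast-aggregations equivalent (assuming Hazelcast 3.8+; the "workArea" attribute and the instructions map come from the question, the value type is left generic here):

import java.util.Map;
import java.util.Set;
import com.hazelcast.aggregation.Aggregators;
import com.hazelcast.core.IMap;

public class DistinctWorkAreas {
    public static <V> Set<String> distinctWorkAreas(IMap<String, V> instructions) {
        // The attribute path is evaluated on the members, so no client-side
        // lambda has to be serialized and shipped to the cluster.
        return instructions.aggregate(
                Aggregators.<Map.Entry<String, V>, String>distinct("workArea"));
    }
}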

Exception in Map Reduce job

I have created a map-reduce job to fetch the number of employees at a location.
I am using Hazelcast 3.6.3. Each employee has a name and an address.
I have added my code to the following Git repository:
https://github.com/adasari/hazelcast-demo
Exception :
java.util.concurrent.ExecutionException: java.lang.ClassCastException: com.hazelcast.mapreduce.aggregation.impl.DistinctValuesAggregation$SimpleEntry cannot be cast to com.hazelcast.query.impl.Extractable
at com.hazelcast.mapreduce.impl.task.TrackableJobFuture.setResult(TrackableJobFuture.java:68)
at com.hazelcast.mapreduce.impl.task.JobSupervisor.notifyRemoteException(JobSupervisor.java:156)
at com.hazelcast.mapreduce.impl.operation.NotifyRemoteExceptionOperation.run(NotifyRemoteExceptionOperation.java:54)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:172)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:393)
at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.processPacket(OperationThread.java:184)
at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.process(OperationThread.java:137)
at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.doRun(OperationThread.java:124)
at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.run(OperationThread.java:99)
Caused by: java.lang.ClassCastException: com.hazelcast.mapreduce.aggregation.impl.DistinctValuesAggregation$SimpleEntry cannot be cast to com.hazelcast.query.impl.Extractable
at com.hazelcast.query.impl.predicates.AbstractPredicate.readAttributeValue(AbstractPredicate.java:129)
at com.hazelcast.query.impl.predicates.AbstractPredicate.apply(AbstractPredicate.java:55)
Can you point me to the issue?
Thanks.
Not exactly sure what you're looking for or what you're trying to do (looking at the code), but your problem is here:
Caused by: java.lang.ClassCastException: com.hazelcast.mapreduce.aggregation.impl.DistinctValuesAggregation$SimpleEntry cannot be cast to com.hazelcast.query.impl.Extractable
So you have to implement the Extractable interface in your SimpleEntry class.
I haven't worked with MapReduce, but below are my observations from looking at and executing the code.
The place where it fails is a different SimpleEntry class: an inner class of DistinctValuesAggregation, which doesn't implement Extractable.
There is already a defect in Hazelcast (#7398), but it is marked as closed in 3.6.1, so you might as well follow up with them on GitHub.
I found that the code works well when you run the cluster with just a single node, so I suspect the above defect impacts aggregation over multiple nodes.
HazelcastInstance hazelcastInstance = buildCluster(1);
The following actions resolved the issue:
1. DistinctMapper implements DataSerializable (see the sketch below)
2. SimpleEntry extends QueryableEntry
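For illustration, a rough sketch of fix 1 (the Employee value type, its getLocation() accessor, and the matching logic are assumptions based on the question, not the actual repository code):

import java.io.IOException;
import com.hazelcast.mapreduce.Context;
import com.hazelcast.mapreduce.Mapper;
import com.hazelcast.nio.ObjectDataInput;
import com.hazelcast.nio.ObjectDataOutput;
import com.hazelcast.nio.serialization.DataSerializable;

// The mapper implements DataSerializable so its state (the target location)
// can be shipped to remote members in Hazelcast's own serialization format.
// Employee is the domain type from the question's repository (assumed here to
// expose a getLocation() accessor).
public class DistinctMapper implements Mapper<String, Employee, String, Integer>, DataSerializable {

    private String location;

    public DistinctMapper() {
        // required for deserialization
    }

    public DistinctMapper(String location) {
        this.location = location;
    }

    @Override
    public void map(String key, Employee employee, Context<String, Integer> context) {
        if (location.equals(employee.getLocation())) {
            context.emit(location, 1); // one hit per matching employee
        }
    }

    @Override
    public void writeData(ObjectDataOutput out) throws IOException {
        out.writeUTF(location);
    }

    @Override
    public void readData(ObjectDataInput in) throws IOException {
        location = in.readUTF();
    }
}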

WebSphere Application Server NoClassDefFoundError while unmarshalling XML with JAXB

I've deployed an EAR on WebSphere Application Server 7. In my code there is a part where I try to unmarshal an XML file into an object. I get this error when trying to do that:
java.lang.NoClassDefFoundError: com.ibm.xtq.bcel.util.SyntheticRepository (initialization failure)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:140)
at com.ibm.xtq.bcel.classfile.JavaClass.<init>(JavaClass.java:109)
at com.ibm.xtq.bcel.classfile.JavaClass.<init>(JavaClass.java:228)
at com.ibm.xtq.bcel.generic.ClassGen.getJavaClass(ClassGen.java:174)
at com.ibm.fcg.bcel.FcgClassGenBCEL.dump2(Unknown Source)
at com.ibm.fcg.bcel.FcgClassGenBCEL.dump(Unknown Source)
at com.ibm.xml.xlxp2.jaxb.unmarshal.codegen.fcg.FCGDeserializationStubGenerator.generate(FCGDeserializationStubGenerator.java:249)
at com.ibm.xml.xlxp2.jaxb.codegen.AbstractGeneratedStubFactory.generateByteCode(AbstractGeneratedStubFactory.java:96)
at com.ibm.xml.xlxp2.jaxb.unmarshal.codegen.fcg.FCGStubFactory.generateStubByteCode(FCGStubFactory.java:46)
at com.ibm.xml.xlxp2.jaxb.codegen.AbstractGeneratedStubFactory.getStubClassConstructor(AbstractGeneratedStubFactory.java:154)
at com.ibm.xml.xlxp2.jaxb.unmarshal.codegen.AbstractGeneratedDeserializationStubFactory.createStub(AbstractGeneratedDeserializationStubFactory.java:58)
at com.ibm.xml.xlxp2.jaxb.unmarshal.impl.DeserializationContext.startComplexType(DeserializationContext.java:662)
at com.ibm.xml.xlxp2.jaxb.unmarshal.impl.DeserializationContext.handleRootElementEvent(DeserializationContext.java:303)
at com.ibm.xml.xlxp2.jaxb.unmarshal.impl.JAXBDocumentScanner.produceRootElementEvent(JAXBDocumentScanner.java:186)
at com.ibm.xml.xlxp2.scan.DocumentScanner.scanRootElement(DocumentScanner.java:2234)
at com.ibm.xml.xlxp2.scan.DocumentScanner.scanProlog(DocumentScanner.java:1726)
at com.ibm.xml.xlxp2.scan.DocumentScanner.nextEvent(DocumentScanner.java:1316)
at com.ibm.xml.xlxp2.scan.DocumentScanner.parseDocumentEntity(DocumentScanner.java:1168)
at com.ibm.xml.xlxp2.jaxb.unmarshal.impl.JAXBDocumentScanner.unmarshal(JAXBDocumentScanner.java:125)
at com.ibm.xml.xlxp2.jaxb.unmarshal.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:120)
at com.inditex.lois.core.ws.utilidades.services.impl.AdaptadorServiceImpl.transformarXMLenObjeto(AdaptadorServiceImpl.java:137)
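For reference, the unmarshalling itself is a plain JAXB call along these lines (the class and file names here are hypothetical placeholders; the real code sits in AdaptadorServiceImpl.transformarXMLenObjeto):

import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Unmarshaller;

public class XmlToObject {
    // Standard JAXB unmarshalling; on WebSphere this goes through the IBM
    // xlxp2/xtq implementation seen in the stack trace above.
    public static Object unmarshal(File xmlFile, Class<?> targetType) throws JAXBException {
        JAXBContext context = JAXBContext.newInstance(targetType);
        Unmarshaller unmarshaller = context.createUnmarshaller();
        return unmarshaller.unmarshal(xmlFile);
    }
}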
As far as I know, that class is part of the IBM JDK and cannot be found at runtime. Is there anything I have to modify in my EAR, or, as I guess, is it all about configuring/modifying WAS (or even applying a patch, if this is a bug)?
Any help? Thanks a lot.
(Sorry for my English.)
This exception means that the class com.ibm.xtq.bcel.util.SyntheticRepository was found but failed static initialization. If there is no other message about this in the logs, then it is time to open a PMR with IBM. Static initializers in internal WebSphere code should never fail in the normal course of usage.
Same here, there is an APAR:
http://www-01.ibm.com/support/docview.wss?uid=swg1IV41639
Problem conclusion
This defect will be fixed in:
6.0.0 SR14
6.0.1 SR6
7.0.0 SR5
The fix resolved the AccessControlException. The SyntheticRepository class can be initialized properly, hence the NoClassDefFoundError does not occur.

Cause of assertion failure

Any idea what would cause this assertion failure in a Groovy script?
Exception in thread "Thread-2" groovy.lang.GroovyRuntimeException: exception while dumping process stream
at org.codehaus.groovy.runtime.ProcessGroovyMethods$ByteDumper.run(ProcessGroovyMethods.java:488)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.io.IOException: Bad file descriptor
at java.io.FileInputStream.available(Native Method)
at java.io.BufferedInputStream.read(BufferedInputStream.java:325)
at java.io.FilterInputStream.read(FilterInputStream.java:90)
at org.codehaus.groovy.runtime.ProcessGroovyMethods$ByteDumper.run(ProcessGroovyMethods.java:484)
... 1 more
I'm running Groovy 1.8.3 with JVM 1.6.0_14. I noticed an old bug report that was similar, but I perform a waitFor() call after every process is started, and that bug is reported as fixed in 1.6.x.
Apparently I can't trap the exception, so I don't know exactly where it is happening. However, I'm guessing it occurs in this code segment:
def pipedOutputStream = new PipedOutputStream()
process.consumeProcessOutput(
        new TeeOutputStream(System.out, pipedOutputStream),
        new TeeOutputStream(System.err, pipedOutputStream))
new PipedInputStream(pipedOutputStream).eachLine {
    (it =~ /condition/).find { process.destroy() }
}
System.exit(0)
