Exception in MapReduce job - Hazelcast

I have created a MapReduce job to fetch the number of employees at a location.
I am using Hazelcast 3.6.3. Each employee has a name and an address.
I have added my code to the following Git repository:
https://github.com/adasari/hazelcast-demo
Exception :
java.util.concurrent.ExecutionException: java.lang.ClassCastException: com.hazelcast.mapreduce.aggregation.impl.DistinctValuesAggregation$SimpleEntry cannot be cast to com.hazelcast.query.impl.Extractable
at com.hazelcast.mapreduce.impl.task.TrackableJobFuture.setResult(TrackableJobFuture.java:68)
at com.hazelcast.mapreduce.impl.task.JobSupervisor.notifyRemoteException(JobSupervisor.java:156)
at com.hazelcast.mapreduce.impl.operation.NotifyRemoteExceptionOperation.run(NotifyRemoteExceptionOperation.java:54)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:172)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:393)
at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.processPacket(OperationThread.java:184)
at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.process(OperationThread.java:137)
at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.doRun(OperationThread.java:124)
at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.run(OperationThread.java:99)
Caused by: java.lang.ClassCastException: com.hazelcast.mapreduce.aggregation.impl.DistinctValuesAggregation$SimpleEntry cannot be cast to com.hazelcast.query.impl.Extractable
at com.hazelcast.query.impl.predicates.AbstractPredicate.readAttributeValue(AbstractPredicate.java:129)
at com.hazelcast.query.impl.predicates.AbstractPredicate.apply(AbstractPredicate.java:55)
Can you point me to the issue?
Thanks.

Not exactly sure what you're looking for or what you're trying to do (looking at the code), but your problem is here:
Caused by: java.lang.ClassCastException: com.hazelcast.mapreduce.aggregation.impl.DistinctValuesAggregation$SimpleEntry cannot be cast to com.hazelcast.query.impl.Extractable
So you have to implement the Extractable interface in your SimpleEntry class.

I haven't worked with MapReduce, but below are my observations from reading and running the code.
The place where it is failing is a different SimpleEntry class: an inner class of DistinctValuesAggregation, which doesn't implement Extractable.
There is already a defect in Hazelcast (#7398), but it is marked as closed in 3.6.1, so you might as well follow up with them on GitHub.
I found that the code works fine when you run the cluster with just a single node, so I suspect the above defect affects aggregation over multiple nodes.
HazelcastInstance hazelcastInstance = buildCluster(1);

The following actions resolved the issue (a sketch of item 1 follows below):
1. DistinctMapper implements DataSerializable
2. SimpleEntry extends QueryableEntry
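For illustration, here is a minimal sketch of item 1, assuming a stateless mapper, String map keys, Employee map values, and a getLocation() accessor on Employee (the actual field names live in the linked repository):
import java.io.IOException;

import com.hazelcast.mapreduce.Context;
import com.hazelcast.mapreduce.Mapper;
import com.hazelcast.nio.ObjectDataInput;
import com.hazelcast.nio.ObjectDataOutput;
import com.hazelcast.nio.serialization.DataSerializable;

public class DistinctMapper implements Mapper<String, Employee, String, Integer>, DataSerializable {

    @Override
    public void map(String key, Employee value, Context<String, Integer> context) {
        // emit (location, 1) so the reducer can count employees per location
        context.emit(value.getLocation(), 1);
    }

    @Override
    public void writeData(ObjectDataOutput out) throws IOException {
        // stateless mapper: nothing to serialize
    }

    @Override
    public void readData(ObjectDataInput in) throws IOException {
        // stateless mapper: nothing to deserialize
    }
}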

Related

Provider com.sap.cloud.sdk.cloudplatform.connectivity.CertificateBasedHttpClientFactory not a subtype

The SAP Cloud SDK FAQ page has a Q&A entry titled "I'm Observing a DefaultHttpClientFactory not a subtype Exception". I have now encountered a similar error, Provider com.sap.cloud.sdk.cloudplatform.connectivity.CertificateBasedHttpClientFactory not a subtype, but I cannot exclude connectivity-scp-cf since it is mandatory.
Any hints on how to solve this error?
Caused by: com.sap.core.connectivity.jco.cf.auth.TokenFactory$GetTokenException: Could not get ClientCredentialsGrantAccessToken
at com.sap.core.connectivity.jco.cf.auth.TokenFactory.getClientCredentialsGrantAccessToken(TokenFactory.java:61)
at com.sap.core.connectivity.jco.cf.destination.ConnectivityConfigurationCF.getConfiguration(ConnectivityConfigurationCF.java:72)
... 92 more
Caused by: java.lang.NoClassDefFoundError: com/sap/cloud/security/client/HttpClientFactory : cannot initialize class because prior initialization attempt failed
at com.sap.core.connectivity.jco.cf.auth.TokenFactory.executeTokenExchange(TokenFactory.java:94)
at com.sap.core.connectivity.jco.cf.auth.TokenFactory.getClientCredentialsGrantAccessToken(TokenFactory.java:57)
... 93 more
Caused by: java.lang.ExceptionInInitializerError: java.util.ServiceConfigurationError: com.sap.cloud.security.client.HttpClientFactory: Provider com.sap.cloud.sdk.cloudplatform.connectivity.CertificateBasedHttpClientFactory not a subtype
The interface type com.sap.cloud.security.client.HttpClientFactory is part of the dependency com.sap.cloud.security.xsuaa:token-client.
When using the SAP Java Buildpack (e.g. for JCo), please make sure to give this dependency provided scope (see the sketch below); otherwise you'll experience class-loading issues like the one in your current case. For further errors, please attach your mvn dependency:tree output.
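For illustration, a minimal pom.xml sketch of that provided scope (the version is omitted on the assumption that it is managed elsewhere, e.g. via a BOM):
<dependency>
    <groupId>com.sap.cloud.security.xsuaa</groupId>
    <artifactId>token-client</artifactId>
    <!-- provided: the SAP Java Buildpack supplies this dependency at runtime -->
    <scope>provided</scope>
</dependency>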

OptaPlanner multithreading attempt yielded "missing rebase" on custom move

I updated the OptaPlanner libraries from 7.5 to 7.9 for use with a variant of the nurse rostering code, and used the release notes (for example, some method names changed) to successfully rebuild and re-run. Then I added the "moveThreadCount" XML line (for multithreading) to my solver config XML:
<moveThreadCount>AUTO</moveThreadCount>
Running it then immediately threw this error:
Caused by: java.lang.UnsupportedOperationException: The custom move class (class westgranite.staffrostering.solver.move.EmployeeChangeMove) doesn't implement the rebase() method, so multithreaded solving is impossible.
I do have a number of custom moves. I did not see any reference to the need to add a rebase() method in the release notes, nor do I see a reference to rebase() in the current (newer) documentation section on building custom moves.
https://docs.optaplanner.org/7.12.0.Final/optaplanner-docs/html_single/index.html#customMoves
Would someone please point me in the right direction? Thanks!
I would suggest reading this excellent blog post, as it gives a more in-depth explanation of how multithreaded solving works: http://www.optaplanner.org/blog/2018/07/03/AGiantLeapForwardWithMultithreadedIncrementalSolving.html
I also suggest reading the Javadoc of the rebase method; it should point you in the right direction: https://docs.optaplanner.org/7.12.0.Final/optaplanner-javadoc/org/optaplanner/core/impl/heuristic/move/Move.html#rebase-org.optaplanner.core.impl.score.director.ScoreDirector-
Here's an example:
public class CloudComputerChangeMove extends AbstractMove<CloudBalance> {

    private CloudProcess cloudProcess;
    private CloudComputer toCloudComputer;
    ...

    @Override
    public CloudComputerChangeMove rebase(ScoreDirector<CloudBalance> destinationScoreDirector) {
        return new CloudComputerChangeMove(
                destinationScoreDirector.lookUpWorkingObject(cloudProcess),
                destinationScoreDirector.lookUpWorkingObject(toCloudComputer));
    }
}
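As the linked blog post explains, each move thread solves against its own copy of the working solution, so rebase() translates the move's planning entities and values into that thread's copies via lookUpWorkingObject() before the move is applied there.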

ClassCastException while deploying in Liferay DXP

We are getting the error below when we deploy any application in Liferay DXP 7.
When we clean Liferay DXP and then redeploy, the issue goes away.
The problem with this approach is that cleaning deletes all the caches; after we redeploy and access the site, the caches get recreated, but it then takes a lot of time to access any page on the site.
[2018-05-17 10:58:33,113] [DEBUG] [10.111.2.74] [] [http-nio-5443-exec-8] [com.fsvps.clientPortal.service.common.ProgramFilterPopulator] - Retrieving logged in user
[2018-05-17 10:58:33,137] [DEBUG] [10.111.2.74] [] [http-nio-5443-exec-8] [com.fsvps.clientPortal.util.common.UserContextInitializationInterceptor] - Portlet mode view and debug mode = false
[2018-05-17 10:58:33,137] [DEBUG] [10.111.2.74] [] [http-nio-5443-exec-8] [com.fsvps.clientPortal.util.common.UserContextInitializationInterceptor] - Checking to see if invalid filter view should be shown
[2018-05-17 11:07:40,859] [DEBUG] [] [] [http-nio-5443-exec-2] [com.fsvps.clientPortal.util.common.UserContextInitializationInterceptor] - Entering
[2018-05-17 11:07:40,859] [WARN] [] [] [http-nio-5443-exec-2] [org.springframework.web.portlet.DispatcherPortlet] - Handler execution resulted in exception - forwarding to resolved error view
java.lang.ClassCastException: com.fsvps.clientPortal.domain.common.UserContext cannot be cast to com.fsvps.clientPortal.domain.common.UserContext
at com.fsvps.clientPortal.domain.common.UserContext$$FastClassBySpringCGLIB$$818d2483.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:133)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:121)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673)
at com.fsvps.clientPortal.domain.common.UserContext$$EnhancerBySpringCGLIB$$830ac420.setIpAddress(<generated>)
at com.fsvps.clientPortal.util.common.UserContextInitializationInterceptor.preHandle(UserContextInitializationInterceptor.java:93)
at org.springframework.web.portlet.handler.HandlerInterceptorAdapter.preHandleRender(HandlerInterceptorAdapter.java:72)
at org.springframework.web.portlet.DispatcherPortlet.doRenderService(DispatcherPortlet.java:739)
at org.springframework.web.portlet.FrameworkPortlet.processRequest(FrameworkPortlet.java:537)
The exact cause is impossible to pinpoint with the information you give. However, the class of problem is easy to identify:
java.lang.ClassCastException:
com.fsvps.clientPortal.domain.common.UserContext cannot be cast to
com.fsvps.clientPortal.domain.common.UserContext
(separated to lines to illustrate the identical class name)
Whenever a class can't be cast to itself or to a legitimate superclass/interface, you're dealing with duplicate code: there are two versions of the class with the same name available to the classloaders, and the system is picking up both.
Because the error message only contains the name of the class, not its classloader, it doesn't make sense at first glance. Knowing that a class is uniquely identified by its package, its name, and its classloader leads you to the root cause.
Identify your modules and make sure that there's only one version of com.fsvps.clientPortal.domain.common.UserContext available.
Edit: Answering your comments: without knowing your deployment details, there's no way to help you other than wild guesses. Please add more information to your question if the next wild guess doesn't help.
The name of the class, UserContext, suggests that you might store it somewhere, e.g. in a session. Doing so will prevent the original class from unloading when you undeploy your plugin. Note that there is a huge difference between undeploying code and garbage-collecting objects: GC can only happen when there are no more references.
If you deploy an updated version of your plugin, the old, existing objects still reference the previously loaded UserContext class, while the new code tries to assign them to the newly loaded UserContext type. Even though both might be identical in implementation, they are different classes that merely share a name.
You can't keep long-lived references to code that might be undeployed and expect them to stay usable. A quick fix (if you're deploying OSGi modules) might be to extract stable, long-used classes into their own bundle that you won't redeploy. Or replace session-stored objects (assuming that is the case here) with Java runtime classes, e.g. a Map of built-in types, and build a UserContext object from those types whenever you need it (see the sketch below).
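For illustration, a hypothetical sketch of that last suggestion; only the ipAddress property is taken from the stack trace above, while everything else (the helper name, the session key, a no-arg UserContext constructor, and a getIpAddress() accessor) is assumed:
import java.util.HashMap;
import java.util.Map;

import javax.portlet.PortletRequest;
import javax.portlet.PortletSession;

public final class UserContextSessionHelper {

    private static final String KEY = "userContextData";

    // Store only built-in types in the session, not the UserContext class itself.
    public static void store(PortletRequest request, UserContext context) {
        Map<String, String> data = new HashMap<String, String>();
        data.put("ipAddress", context.getIpAddress());
        // put further String/primitive fields here as needed
        request.getPortletSession().setAttribute(KEY, data, PortletSession.APPLICATION_SCOPE);
    }

    // Rebuild a UserContext from the stored built-in types whenever it is needed.
    @SuppressWarnings("unchecked")
    public static UserContext load(PortletRequest request) {
        Map<String, String> data = (Map<String, String>)
                request.getPortletSession().getAttribute(KEY, PortletSession.APPLICATION_SCOPE);
        UserContext context = new UserContext();
        if (data != null) {
            context.setIpAddress(data.get("ipAddress"));
        }
        return context;
    }
}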

Hazelcast Supplier and Aggregation gives Concurrent Execution Exception

I am trying to get a set of the distinct values of an object's field stored in a Hazelcast map.
This line of Java code:
instructions.aggregate(Supplier.all(value -> value.getWorkArea()), Aggregations.distinctValues());
produces the following stack trace:
java.util.concurrent.ExecutionException: com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.ClassNotFoundException: com.example.instruction.repository.HazelcastInstructionRepository$GeneratedEvaluationClass
com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.ClassNotFoundException: com.example.instruction.repository.HazelcastInstructionRepository$GeneratedEvaluationClass
java.lang.ClassNotFoundException: com.example.instruction.repository.HazelcastInstructionRepository$GeneratedEvaluationClass
If I try this line:
instructions.aggregate(Supplier.all()), Aggregations.distinctValues());
or:
instructions.aggregate(Supplier.fromPredicate(Predicates.and(Predicates.equal("type", "someType"), equal("groupId", null),
Predicates.equal("workArea", "someWorkArea"))), Aggregations.distinctValues());
it just works... Something seems to go wrong when I reference the object's field. (I also tried it with other fields of the object, and the same error is returned.)
This is running in my local environment, and I am sure the objects are being placed correctly in the Hazelcast map, since the other aggregations/predicates work.
Do you have any ideas about what I am doing wrong?
Many Thanks!
Edited: So the problem is the closure: it is not available on all nodes, only on the calling node.
Also, this feature is deprecated. Please use fast aggregations instead:
http://docs.hazelcast.org/docs/latest/manual/html-single/#fast-aggregations
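For reference, a minimal sketch of the fast-aggregation equivalent, assuming Hazelcast 3.8+ (where the com.hazelcast.aggregation API is available), that the value type is called Instruction, and that workArea is a field on it:
import java.util.Map;
import java.util.Set;

import com.hazelcast.aggregation.Aggregators;
import com.hazelcast.core.IMap;

public class DistinctWorkAreas {

    // Returns the distinct workArea values; the attribute path "workArea"
    // must match the field name on the stored value object.
    public static Set<String> fetch(IMap<String, Instruction> instructions) {
        return instructions.aggregate(
                Aggregators.<Map.Entry<String, Instruction>, String>distinct("workArea"));
    }
}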

WebSphere Application Server NoClassDefFoundError while unmarshalling XML with JAXB

I've deployed an EAR on a WebSphere Application Server 7. In my code there is a part where I try to unmarshal an XML file into an object. I get this error when trying to do that:
java.lang.NoClassDefFoundError: com.ibm.xtq.bcel.util.SyntheticRepository (initialization failure)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:140)
at com.ibm.xtq.bcel.classfile.JavaClass.<init>(JavaClass.java:109)
at com.ibm.xtq.bcel.classfile.JavaClass.<init>(JavaClass.java:228)
at com.ibm.xtq.bcel.generic.ClassGen.getJavaClass(ClassGen.java:174)
at com.ibm.fcg.bcel.FcgClassGenBCEL.dump2(Unknown Source)
at com.ibm.fcg.bcel.FcgClassGenBCEL.dump(Unknown Source)
at com.ibm.xml.xlxp2.jaxb.unmarshal.codegen.fcg.FCGDeserializationStubGenerator.generate(FCGDeserializationStubGenerator.java:249)
at com.ibm.xml.xlxp2.jaxb.codegen.AbstractGeneratedStubFactory.generateByteCode(AbstractGeneratedStubFactory.java:96)
at com.ibm.xml.xlxp2.jaxb.unmarshal.codegen.fcg.FCGStubFactory.generateStubByteCode(FCGStubFactory.java:46)
at com.ibm.xml.xlxp2.jaxb.codegen.AbstractGeneratedStubFactory.getStubClassConstructor(AbstractGeneratedStubFactory.java:154)
at com.ibm.xml.xlxp2.jaxb.unmarshal.codegen.AbstractGeneratedDeserializationStubFactory.createStub(AbstractGeneratedDeserializationStubFactory.java:58)
at com.ibm.xml.xlxp2.jaxb.unmarshal.impl.DeserializationContext.startComplexType(DeserializationContext.java:662)
at com.ibm.xml.xlxp2.jaxb.unmarshal.impl.DeserializationContext.handleRootElementEvent(DeserializationContext.java:303)
at com.ibm.xml.xlxp2.jaxb.unmarshal.impl.JAXBDocumentScanner.produceRootElementEvent(JAXBDocumentScanner.java:186)
at com.ibm.xml.xlxp2.scan.DocumentScanner.scanRootElement(DocumentScanner.java:2234)
at com.ibm.xml.xlxp2.scan.DocumentScanner.scanProlog(DocumentScanner.java:1726)
at com.ibm.xml.xlxp2.scan.DocumentScanner.nextEvent(DocumentScanner.java:1316)
at com.ibm.xml.xlxp2.scan.DocumentScanner.parseDocumentEntity(DocumentScanner.java:1168)
at com.ibm.xml.xlxp2.jaxb.unmarshal.impl.JAXBDocumentScanner.unmarshal(JAXBDocumentScanner.java:125)
at com.ibm.xml.xlxp2.jaxb.unmarshal.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:120)
at com.inditex.lois.core.ws.utilidades.services.impl.AdaptadorServiceImpl.transformarXMLenObjeto(AdaptadorServiceImpl.java:137)
As far as I know, that class is part of the IBM JDK and cannot be found at runtime. Is there anything I have to modify in my EAR, or, as I guess, is it all about configuring/modifying the WAS configuration (or even applying a patch if this is a bug)?
Any help? Thanks a lot.
(Sorry for my English :) )
This exception means that the class com.ibm.xtq.bcel.util.SyntheticRepository is found but failed static initialization. If there is no other message in the log about this, then it is time to open a PMR with IBM. Static initializers in internal WebSphere code should never fail in the course of normal usage.
Same here; there is an APAR:
http://www-01.ibm.com/support/docview.wss?uid=swg1IV41639
Problem conclusion:
This defect will be fixed in:
6.0.0 SR14
6.0.1 SR6
7.0.0 SR5
The fix resolved the AccessControlException. The SyntheticRepository class can be initialized properly, hence the NoClassDefFoundError does not occur.
