Hazelcast not injecting Spring dependencies

I'm using Hazelcast 3.8.5 as the store for JCache.
It appears Hazelcast is not injecting @SpringAware dependencies into the CacheLoader.
I took a peek at AbstractCacheRecordStore, and it seems like only HazelcastInstanceAware dependencies are injected, not @SpringAware + @Autowired.
I'm setting up the cluster's managedContext programmatically, like:
config.setManagedContext(springManagedContext);
Update
A workaround I've found is to put the ApplicationContext into the user context of Hazelcast, make the CacheLoader implement HazelcastInstanceAware, pull the context out of there, and finish autowiring the CacheLoader. Not ideal, but it works.

Created https://github.com/hazelcast/hazelcast/issues/11384
The only workaround is getting the Spring application context out of the Hazelcast user context.
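In case it helps anyone, a minimal sketch of that workaround (the user-context key "applicationContext" and the SomeService dependency are mine, purely illustrative). First, stash the Spring context in the user context while building the config:

config.setManagedContext(springManagedContext);
config.getUserContext().put("applicationContext", applicationContext); // applicationContext obtained elsewhere

Then pull it back out inside the loader and finish the wiring by hand:

import java.util.Map;
import javax.cache.integration.CacheLoader;
import javax.cache.integration.CacheLoaderException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceAware;

public class MyCacheLoader implements CacheLoader<String, String>, HazelcastInstanceAware {

    @Autowired
    private transient SomeService someService; // hypothetical Spring bean this loader needs

    @Override
    public void setHazelcastInstance(HazelcastInstance instance) {
        // Hazelcast calls this after deserializing the loader; use it to do
        // the autowiring that JCache/Hazelcast never did for us.
        ApplicationContext ctx =
                (ApplicationContext) instance.getUserContext().get("applicationContext");
        ctx.getAutowireCapableBeanFactory().autowireBean(this);
    }

    @Override
    public String load(String key) throws CacheLoaderException {
        return someService.fetch(key); // hypothetical call
    }

    @Override
    public Map<String, String> loadAll(Iterable<? extends String> keys) throws CacheLoaderException {
        throw new UnsupportedOperationException();
    }
}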

Related

EventHubConsumerClient Apache Qpid memory leak?

I am reading events from an Azure EventHub cluster synchronously via the receiveFromPartition method on the EventHubConsumerClient class.
I create the client once like so:
EventHubConsumerClient eventHubConsumerClient = new EventHubClientBuilder()
    .connectionString(eventHubConnectionString)
    .consumerGroup(consumerGroup)
    .buildConsumerClient();
I then just use a ScheduledExecutorService to retrieve events every 1.5s via:
IterableStream<PartitionEvent> receivedEvents = eventHubConsumerClient.receiveFromPartition(
partitionId, 1, eventPosition);
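(For completeness, a sketch of how that polling might be wired up; the executor setup is assumed and process() is a placeholder for the real handler:)

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.scheduleAtFixedRate(() -> {
    IterableStream<PartitionEvent> receivedEvents = eventHubConsumerClient.receiveFromPartition(
            partitionId, 1, eventPosition);
    receivedEvents.forEach(event -> process(event)); // process() stands in for the real handler
}, 0, 1500, TimeUnit.MILLISECONDS);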
The equivalent logic in V3 of the SDK worked fine (using PartitionReceivers), but now I am seeing OOMs in my JVM.
Running a profiler against a local version of the logic, I see that the majority of the heap (90%, mainly in the old generation) is taken up by byte[]s referenced by org.apache.qpid.proton.codec.CompositeReadableBuffer. This pattern is not present when I profile the V3 logic.
What could be causing a leak of the AMQP messages here? Do I need to interact with the SDK further, for example to close a connection I'm not aware of after each call?
Any advice would be much appreciated, thanks!
Turns out it was a bug, solved here: https://github.com/Azure/azure-sdk-for-java/issues/13775

Submit job to remote Hazelcast cluster

I'm new to Hazelcast Jet and have a very basic question. I have a 3-node Jet cluster set up and some sample code that reads from Kafka and drains to an IMap. When I run it from the command line (using jet-submit.sh and JetBootstrap.getInstance() to acquire the Jet client instance), it works perfectly fine. When I run the same code using Jet.newJetClient() to acquire the instance (Run As -> Java Application in Eclipse), I get:
java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field com.hazelcast.jet.core.ProcessorMetaSupplier.
Could you please let me know where I am going wrong?
One of your lambda functions captures an outside variable, probably one defined at class level, and that class either is not Serializable or was not added to the job config when submitting from a client. (This is done automatically when submitting via the script.)
Please see http://docs.hazelcast.org/docs/jet/0.6.1/manual/#remember-that-a-jet-job-is-distributed
When you use a client instance to submit the job, you have to add all classes that contain the code called by the job to the JobConfig:
JobConfig config = new JobConfig();
config.addClass(...);
config.addJar(...);
...
client.newJob(pipeline, config);
For example, if you use a lambda for stage.map(), the class containing the lambda has to be added.
The jet-submit.sh script makes this easier by automatically adding the entire submitted .jar file.
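Putting that together, a hypothetical client-side submission could look like this (MyMappers is an assumed class that declares the stage.map() lambda; the pipeline is built elsewhere):

import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.config.JobConfig;

JetInstance client = Jet.newJetClient();

JobConfig config = new JobConfig();
// MyMappers declares the lambda passed to stage.map(), so its class must be
// shipped with the job for the cluster to deserialize it.
config.addClass(MyMappers.class);

client.newJob(pipeline, config).join();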

Amazon SQS Outbound channel adapter won't initialize

I'm trying to start up a spring-int-aws flow that is writing to an SQS queue, but the context won't load.
Error starting ApplicationContext. To display the auto-configuration report
re-run your application with 'debug' enabled.
2016-10-03 14:10:51.848 ERROR 3741 --- [ main]
o.s.boot.SpringApplication : Application startup failed
org.springframework.beans.factory.BeanCreationException: Error creating bean
with name 'org.springframework.integration.aws.outbound.SqsMessageHandler#0':
Bean instantiation via constructor failed; nested exception is
org.springframework.beans.BeanInstantiationException: Failed to instantiate
[org.springframework.integration.aws.outbound.SqsMessageHandler]: Constructor
threw exception; nested exception is java.lang.NoSuchMethodError:
org.springframework.cloud.aws.messaging.core.QueueMessagingTemplate.<init>
(Lcom/amazonaws/services/sqs/AmazonSQS;Lorg/springframework/cloud/aws/core/env/ResourceIdResolver;)V
....
Did some research: inside SqsMessageHandler (v1.0.0.RELEASE) it attempts to create a QueueMessagingTemplate using a declared AmazonSQS object; however, QueueMessagingTemplate (v1.1.3.RELEASE) requires an AmazonSQSAsync object in the constructor. Since spring-cloud-aws uses the sub-interface, it seems spring-int-aws should as well.
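For illustration, a minimal snippet of my own (not from either project) that compiles against spring-cloud-aws 1.1.3.RELEASE and shows the async-only signature it declares:

import com.amazonaws.services.sqs.AmazonSQSAsync;
import com.amazonaws.services.sqs.AmazonSQSAsyncClient;
import org.springframework.cloud.aws.messaging.core.QueueMessagingTemplate;

public class ConstructorMismatchDemo {
    public static void main(String[] args) {
        AmazonSQSAsync sqs = new AmazonSQSAsyncClient(); // deprecated no-arg ctor, fine for a demo
        // 1.1.3 only declares constructors taking AmazonSQSAsync; the
        // QueueMessagingTemplate(AmazonSQS, ResourceIdResolver) signature that
        // SqsMessageHandler 1.0.0 was compiled against is gone, hence the
        // NoSuchMethodError at runtime.
        QueueMessagingTemplate template = new QueueMessagingTemplate(sqs);
    }
}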
Am I missing something, or should this be an issue filed against the project?
spring-integration-aws is built against 1.1.1.
Please open an issue there so we can make a compatible build.
That said, it's pretty unusual for Spring projects to break APIs in a point release, so we should raise an issue against spring-cloud-aws too, asking them to restore the other constructor (if possible), and cross-link the issues.
I opened issues.
We would wait for the disposition of that issue before doing anything in spring-integration-aws (but we should still go ahead and support the async option).

Cassandra Cluster Recovery

I have a Spring Boot application that uses Spring Data for Cassandra. One of the requirements is that the application will start even if the Cassandra cluster is unavailable. The application logs the situation, and all its endpoints will not work properly, but the application does not shut down. It should keep retrying to connect to the cluster during this time, and when the cluster becomes available the application should start to operate normally.
If I am able to connect during application start and the cluster becomes unavailable after that, the Cassandra Java driver is capable of managing the retries.
How can I manage the retries during application start and still use Cassandra Repositories from Spring Data?
Thanks
It is possible to start a Spring Boot application when Apache Cassandra is not available, but you need to define the Session and CassandraTemplate beans yourself with @Lazy. The beans are provided out of the box by CassandraAutoConfiguration but are initialized eagerly (the default behavior), which creates a Session. The Session requires a connection to Cassandra, which will prevent startup if it's not initialized lazily.
The following code will initialize the resources lazily:
@Configuration
public class MyCassandraConfiguration {

    @Bean
    @Lazy
    public CassandraTemplate cassandraTemplate(@Lazy Session session, CassandraConverter converter) throws Exception {
        return new CassandraTemplate(session, converter);
    }

    @Bean
    @Lazy
    public Session session(CassandraConverter converter, Cluster cluster,
            CassandraProperties cassandraProperties) throws Exception {

        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster);
        session.setConverter(converter);
        session.setKeyspaceName(cassandraProperties.getKeyspaceName());
        session.setSchemaAction(SchemaAction.NONE);
        session.afterPropertiesSet(); // initialize the factory bean before pulling the Session out
        return session.getObject();
    }
}
One of the requirements is that the application will start even if the Cassandra Cluster is unavailable
I think you should read this section from the Java driver docs: http://datastax.github.io/java-driver/manual/#cluster-initialization
The Cluster object does not connect automatically unless certain calls are executed.
Since you're using Spring Data Cassandra (which I do not recommend, since it has fewer features than the plain mapper module of the Java driver ...), I don't know if the Cluster or Session objects are exposed directly to users ...
For the retry, you can put the cluster.init() call in a try/catch block; if the cluster is still unavailable you'll catch a NoHostAvailableException, according to the docs. Upon the exception, you can schedule a retry of cluster.init() for later.
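A hypothetical shape for that retry (the class name and the 10-second interval are mine):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

public class ClusterConnector {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void connectWithRetry(Cluster cluster) {
        scheduler.submit(() -> tryInit(cluster));
    }

    private void tryInit(Cluster cluster) {
        try {
            cluster.init(); // throws NoHostAvailableException while Cassandra is unreachable
            scheduler.shutdown(); // connected: no more retries needed
        } catch (NoHostAvailableException e) {
            scheduler.schedule(() -> tryInit(cluster), 10, TimeUnit.SECONDS); // try again later
        }
    }
}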

Web application using log4j logs tons of severe error messages

I've written an application that uses a library (Jabber Stream Objects) which internally uses log4j. When I deploy the application, there are no errors; however, after some time I see lots of error messages that look like this:
[#|2013-02-26T12:48:56.147+0000|SEVERE|oracle-glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=365;_ThreadName=SelectWorker 1;|java.lang.IllegalStateException: WEB9031: WebappClassLoader unable to load resource [org.apache.log4j.spi.NOPLoggerRepository], because it has not yet been started, or was already stopped
at org.glassfish.web.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1401)
at org.glassfish.web.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1359)
at org.apache.log4j.LogManager.getLoggerRepository(LogManager.java:197)
at org.apache.log4j.LogManager.getLogger(LogManager.java:228)
at org.apache.log4j.Logger.getLogger(Logger.java:117)
I have log4j.jar inside the WEB-INF/lib directory of my application, along with the external library (JSO.jar).
The issue [1] looked similar, but doesn't seem to be the same.
[1] Web service is not working on GlassFish
I've found that this happens when I redeploy a servlet; it went away once I overrode destroy() in my servlet and did some cleanup there.
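For anyone hitting the same thing, a sketch of that kind of cleanup, assuming log4j 1.x (shutting logging down releases the loggers and appenders that still reference the webapp's classloader):

import javax.servlet.http.HttpServlet;
import org.apache.log4j.LogManager;

public class MyServlet extends HttpServlet {

    @Override
    public void destroy() {
        // Release log4j resources so nothing touches the stopped
        // WebappClassLoader after undeploy.
        LogManager.shutdown();
        super.destroy();
    }
}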
