Logstash + Azure Event Hubs

While following the linked guide to add an Azure Event Hubs input to Logstash, I ran into the issue below:
[2020-02-13T14:06:28,886][ERROR][com.microsoft.azure.eventprocessorhost.PartitionManager] host logstash-5fdbcee8-e368-44de-bc13-c640a36f646f: Exception while initializing stores, not starting partition manager com.microsoft.azure.eventhubs.IllegalEntityException: Failure getting partition ids for event hub
at com.microsoft.azure.eventprocessorhost.PartitionManager.lambda$cachePartitionIds$4(PartitionManager.java:80) ~[azure-eventhubs-eph-2.1.0.jar:?]
at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836) ~[?:1.8.0_242]
at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811) ~[?:1.8.0_242]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456) [?:1.8.0_242]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_242]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_242]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_242]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_242]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_242]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_242]
Can someone help?

I got the hint from this question: it appears the consumer's SAS policy still needs Manage privileges.
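For anyone who wants to confirm the policy before restarting Logstash, below is a minimal sketch (my own addition, using the newer azure-messaging-eventhubs SDK rather than the azure-eventhubs-eph 2.1.0 client the plugin bundles; all placeholder values are assumptions) that performs the same partition-id lookup that fails above. If it fails with a Listen-only policy and succeeds with a Manage policy, the SAS privileges are the problem.

import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubConsumerClient;

public final class CheckPartitionIds {
    public static void main(String[] args) {
        // Placeholder connection string; the SAS policy it names must have Manage rights.
        String connectionString = "Endpoint=sb://<namespace>.servicebus.windows.net/;"
                + "SharedAccessKeyName=<manage-policy>;SharedAccessKey=<key>;"
                + "EntityPath=<event-hub>";
        try (EventHubConsumerClient consumer = new EventHubClientBuilder()
                .connectionString(connectionString)
                .consumerGroup(EventHubClientBuilder.DEFAULT_CONSUMER_GROUP_NAME)
                .buildConsumerClient()) {
            // The same metadata call the partition manager makes at startup.
            consumer.getPartitionIds().forEach(System.out::println);
        }
    }
}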

Related

Using the Pub/Sub Lite library in Spark, getting an error

I am getting an error while publishing messages to GCP Pub/Sub Lite from Spark Structured Streaming.
I cannot use writeStream directly because I want to publish inside a foreachBatch sink in Spark, so I am using foreachPartition and foreach, and publishing a message inside foreach for each DataFrame row.
Below is the error I get; some messages get published, but for others I see this exception:
2022-06-07 10:08:17 WARN PartitionCountWatcherImpl:101 - Failed to refresh partition count
com.google.api.gax.rpc.ApiException:
at com.google.cloud.pubsublite.internal.CheckedApiException.<init>(CheckedApiException.java:51)
at com.google.cloud.pubsublite.internal.CheckedApiException.<init>(CheckedApiException.java:55)
at com.google.cloud.pubsublite.internal.ExtractStatus.toCanonical(ExtractStatus.java:49)
at com.google.cloud.pubsublite.internal.wire.PartitionCountWatcherImpl.pollTopicConfig(PartitionCountWatcherImpl.java:92)
at com.google.cloud.pubsublite.internal.wire.PartitionCountWatcherImpl.onAlarm(PartitionCountWatcherImpl.java:71)
at com.google.cloud.pubsublite.internal.AlarmFactory.lambda$null$0(AlarmFactory.java:41)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:456)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:100)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:73)
at com.google.cloud.pubsublite.internal.wire.PartitionCountWatcherImpl.pollTopicConfig(PartitionCountWatcherImpl.java:81)
... 9 more
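For reference, here is a minimal sketch of the pattern described above (my own reconstruction; the topic path string, the row-to-message mapping, and the class around it are placeholder assumptions): one Pub/Sub Lite publisher per partition, started and stopped cleanly inside foreachPartition, so the client's scheduled background tasks, such as the partition-count watcher in the trace, are not torn down abruptly when the partition task ends.

import com.google.cloud.pubsublite.TopicPath;
import com.google.cloud.pubsublite.cloudpubsub.Publisher;
import com.google.cloud.pubsublite.cloudpubsub.PublisherSettings;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import org.apache.spark.api.java.function.ForeachPartitionFunction;
import org.apache.spark.api.java.function.VoidFunction2;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public final class PublishPerPartition {
    // topicPath like "projects/123/locations/us-central1-a/topics/my-topic" (placeholder).
    static void attach(Dataset<Row> stream, String topicPath) throws Exception {
        stream.writeStream()
            .foreachBatch((VoidFunction2<Dataset<Row>, Long>) (batch, batchId) ->
                batch.foreachPartition((ForeachPartitionFunction<Row>) rows -> {
                    Publisher publisher = Publisher.create(PublisherSettings.newBuilder()
                            .setTopicPath(TopicPath.parse(topicPath))
                            .build());
                    publisher.startAsync().awaitRunning();
                    try {
                        while (rows.hasNext()) {
                            // Placeholder mapping from a DataFrame row to a payload.
                            publisher.publish(PubsubMessage.newBuilder()
                                    .setData(ByteString.copyFromUtf8(rows.next().mkString(",")))
                                    .build());
                        }
                    } finally {
                        // Stop the client before the partition task returns, rather than
                        // letting the executor interrupt its background work mid-poll.
                        publisher.stopAsync().awaitTerminated();
                    }
                }))
            .start();
    }
}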

Got TimeoutException when trying to download a file from Azure Blob Storage

I'm trying to download a file from Azure Blob Storage with the following code:
BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
        .connectionString(connectionString)
        .buildClient();
BlobContainerClient blobContainerClient = blobServiceClient.getBlobContainerClient(containerName);
BlobClient blobClient = blobContainerClient.getBlobClient(remotePath);
blobClient.downloadToFile(localPath, true);
But sometimes I get this exception:
Caused by: java.util.concurrent.TimeoutException: Did not observe any item or terminal signal within 60000ms in 'map' (and no fallback has been configured)
at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.handleTimeout(FluxTimeout.java:288)
at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.doTimeout(FluxTimeout.java:273)
at reactor.core.publisher.FluxTimeout$TimeoutTimeoutSubscriber.onNext(FluxTimeout.java:390)
at reactor.core.publisher.StrictSubscriber.onNext(StrictSubscriber.java:89)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:73)
at reactor.core.publisher.MonoDelay$MonoDelayRunnable.run(MonoDelay.java:117)
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:50)
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:27)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Is there any way to make this stable?
Versions:
<azure-storage-blob.version>12.6.0</azure-storage-blob.version>
<azure-core.version>1.3.0</azure-core.version>
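Not from the original thread, but two mitigations are commonly suggested for this symptom (both are assumptions to verify against your setup): upgrade azure-storage-blob and azure-core to newer releases, and give the client a more generous retry policy, since the 60000 ms in the message points at a per-try timeout inside the pipeline. A minimal sketch of the latter:

import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;
import com.azure.storage.common.policy.RequestRetryOptions;
import com.azure.storage.common.policy.RetryPolicyType;

// Exponential retry, up to 4 tries, 600-second budget per try (values illustrative).
RequestRetryOptions retryOptions = new RequestRetryOptions(
        RetryPolicyType.EXPONENTIAL,
        4,      // maxTries
        600,    // tryTimeout, in seconds
        null,   // retryDelayInMs (SDK default)
        null,   // maxRetryDelayInMs (SDK default)
        null);  // no secondary host

BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
        .connectionString(connectionString)
        .retryOptions(retryOptions)
        .buildClient();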

Dispatcher has no subscribers while trying to gracefully shut down the application

I have the following requirement for my application:
I have an integration flow which takes files from a directory via Files.inboundAdapter, with a polling configuration as follows:
@Bean
public PollerSpec orderOutboundFlowTempFileInPoller() {
    return Pollers
            .fixedDelay(pollerDelay)
            .maxMessagesPerPoll(100)
            .transactional();
}
The files should be transferred to a remote host via RemoteFileTemplate. The application runs in a Docker container which should be stoppable for maintenance or rollout purposes.
When the container is shut down, the flow should finish writing the current file to the remote host and should not accept new incoming files.
Therefore I have implemented a graceful shutdown as follows:
@Override
public void onApplicationEvent(final ContextClosedEvent event) {
    LOG.info("Trying to gracefully shutdown App");
    //CHECKSTYLE:OFF
    allFlowPollers.forEach(
        p -> {
            try {
                p.destroy();
            } catch (final Exception e) {
                LOG.warn("Unable to destroy poller.");
            }
        }
    );
    //CHECKSTYLE:ON
    FLOWS_TO_SHUTDOWN.forEach(GracefulShutdownAware::shutdown);
}
I assumed that once I destroy the pollers, no further files would be read from the source. The RemoteFileTemplate does send the current file correctly; there is no problem there.
But the poller still seems to pick up new files, and when the application is nearly shut down, an exception appears as follows:
timestamp=15:55:56.599, thread=task-scheduler-2, severity=ERROR, class=o.s.i.h.LoggingHandler, message=org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel 'application.flowTempFileIn.channel#0'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage [payload=/servicedata/tmp/1544088550162_57280.xml, headers={file_originalFile=/servicedata/tmp/1544088550162_57280.xml, id=d04473ff-d1bd-173e-d801-b7b9fd31596c, file_name=1544088550162_57280.xml, file_relativePath=1544088550162_57280.xml, timestamp=1544108156591}], failedMessage=GenericMessage [payload=/servicedata/tmp/1544088550162_57280.xml, headers={file_originalFile=/servicedata/tmp/1544088550162_57280.xml, id=d04473ff-d1bd-173e-d801-b7b9fd31596c, file_name=1544088550162_57280.xml, file_relativePath=1544088550162_57280.xml, timestamp=1544108156591}]
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:445)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:394)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:181)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:160)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:108)
at org.springframework.integration.endpoint.SourcePollingChannelAdapter.handleMessage(SourcePollingChannelAdapter.java:227)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:290)
at sun.reflect.GeneratedMethodAccessor292.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:197)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:294)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
at com.sun.proxy.$Proxy138.call(Unknown Source)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller.lambda$run$0(AbstractPollingEndpoint.java:391)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.lambda$execute$0(ErrorHandlingTaskExecutor.java:57)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:55)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller.run(AbstractPollingEndpoint.java:385)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:93)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage [payload=/servicedata/tmp/1544088550162_57280.xml, headers={file_originalFile=/servicedata/tmp/1544088550162_57280.xml, id=d04473ff-d1bd-173e-d801-b7b9fd31596c, file_name=1544088550162_57280.xml, file_relativePath=1544088550162_57280.xml, timestamp=1544108156591}]
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:138)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:105)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:73)
... 33 more
Is there any other way I can achieve this requirement? Which is: stopping the application should let the current file finish, accept no further files, and still close without any weird exceptions.
I assume it's some kind of timing issue, since the integration flow and the reaction to the ContextClosedEvent run on different threads. The poller isn't destroyed completely, but the transform subscriber of the outbound channel is already destroyed.
I also tried to stop the poller via control bus, but the outcome was the same.
Thanks in advance :)
The problem is not about the poller, but the InboundChannelAdapter. It would be great to see your IntegrationFlow definition. It actually must stop during application context shutdown, and you don't need to do anything else on your side.
The point is that all active Spring Integration components implement SmartLifecycle, and they are stop()'ed gracefully during the appropriate application context close phase.
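To illustrate the point, here is a sketch of what such a flow definition might look like (my own, not the asker's actual flow; the directory, pattern, and handler body are placeholder assumptions). With the adapter registered through the DSL like this, the context close phase stops the polling adapter before the downstream subscribers are destroyed, and no manual destroy() is needed:

import java.io.File;

import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.file.dsl.Files;

@Bean
public IntegrationFlow flowTempFileIn() {
    return IntegrationFlows
            .from(Files.inboundAdapter(new File("/servicedata/tmp"))
                            .patternFilter("*.xml"),
                    e -> e.poller(orderOutboundFlowTempFileInPoller()))
            // Hand the file off to the RemoteFileTemplate-based transfer here.
            .handle(message -> { /* remoteFileTemplate.send(...) */ })
            .get();
}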

Nodetool repair erroring on Cassandra 3.9 cluster, complaining of dead nodes

I have a Cassandra 3.9 cluster. I initiated a repair from one of the nodes in the cluster, and the repair went nowhere. The logs on the initiating node are filled with errors like this:
ERROR [GossipTasks:1] 2018-02-16 23:27:36,949 RepairSession.java:347 - [repair #cadf6f11-1342-11e8-8d73-6767c6890f70] session completed with the following error
java.io.IOException: Endpoint /**.**.**.52 died
at org.apache.cassandra.repair.RepairSession.convict(RepairSession.java:346) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.gms.FailureDetector.interpret(FailureDetector.java:306) [apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.gms.Gossiper.doStatusCheck(Gossiper.java:782) [apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.gms.Gossiper.access$800(Gossiper.java:66) [apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:181) [apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118) [apache-cassandra-3.9.jar:3.9]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_91]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_91]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_91]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_91]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
On the other hand, if I look at the logs of the nodes claimed to be dead, I see one of three symptoms:
1. The node claims to have successfully sent the requested Merkle tree over.
2. The node has no trace of the repair session, and thus does not appear to have received any repair request.
3. The node shows an exception like this:
ERROR [ValidationExecutor:3] 2018-02-16 23:29:06,548 Validator.java:261 - Failed creating a merkle tree for [repair #cac2bf50-1342-11e8-8d73-6767c6890f70 on somekeyspace/sometable, [(-3531087107126953137,-3495591103116433105], (1424707151780052485,1425479237398192865], (-3533012126945497873,-3531087107126953137], (1425479237398192865,1429220273719165251], (-4991682772598302168,-4984938905452900436], (-7686750611814623539,-7685228552629222537], (7554301216433235881,7559623046999138658], (334796420453180909,342318143371667659], (-3538876023288368831,-3533012126945497873], (1409514567521922418,1424707151780052485], (5391546013321073004,5393284101537339558], (590921410556013711,593440512568877190]]], /..**.43 (see log for details)
ERROR [ValidationExecutor:3] 2018-02-16 23:29:06,549 CassandraDaemon.java:226 - Exception in thread Thread[ValidationExecutor:3,1,main]
java.lang.RuntimeException: Parent repair session with id = c8bf7540-1342-11e8-8d73-6767c6890f70 has failed.
at org.apache.cassandra.service.ActiveRepairService.getParentRepairSession(ActiveRepairService.java:377) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.compaction.CompactionManager.getSSTablesToValidate(CompactionManager.java:1313) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1222) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:81) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.compaction.CompactionManager$11.call(CompactionManager.java:844) ~[apache-cassandra-3.9.jar:3.9]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_91]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
Is this a known issue?

After creating a new JHipster project, unable to launch the application

After creating a JHipster project, I tried to launch it with the following command:
mvnw
I am getting the following error; I am facing the same issue with an existing project as well.
Error:
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
at com.netflix.discovery.shared.transport.decorator.RetryableEurekaHttpClient.execute(RetryableEurekaHttpClient.java:111)
at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.register(EurekaHttpClientDecorator.java:56)
at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$1.execute(EurekaHttpClientDecorator.java:59)
at com.netflix.discovery.shared.transport.decorator.SessionedEurekaHttpClient.execute(SessionedEurekaHttpClient.java:77)
at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.register(EurekaHttpClientDecorator.java:56)
at com.netflix.discovery.DiscoveryClient.register(DiscoveryClient.java:815)
at com.netflix.discovery.InstanceInfoReplicator.run(InstanceInfoReplicator.java:104)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Your question is not detailed enough, but it is obvious that you are using a microservice architecture and did not start the registry; check the documentation.
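For example, if the project was generated alongside the JHipster Registry, the generator's Docker Compose file (the path below is the JHipster convention; verify it exists in your project) can start the registry before launching the app:

docker-compose -f src/main/docker/jhipster-registry.yml up -d

With the registry up (by default on port 8761), mvnw should register the instance instead of failing with the TransportException above.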
