The Problem:
I'm setting up the Hazelcast auto-discovery feature for use in Azure environments, following the GitHub documentation at https://github.com/hazelcast/hazelcast-azure
But I am getting the following error:
Exception in thread "main" com.hazelcast.config.InvalidConfigurationException: Invalid configuration
at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.loadDiscoveryStrategies(DefaultDiscoveryService.java:147)
.....
com.hazelcast.core.server.StartServer.main(StartServer.java:46)
Caused by: com.hazelcast.config.properties.ValidationException: There is no discovery strategy factory to create 'DiscoveryStrategyConfig{properties={group-name=****, client-secret=****, subscription-id=****, client-id=****, tenant-id=****, cluster-id=****}, className='com.hazelcast.azure.AzureDiscoveryStrategy', discoveryStrategyFactory=null}' Is it a typo in a strategy classname? Perhaps you forgot to include implementation on a classpath?
at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.buildDiscoveryStrategy(DefaultDiscoveryService.java:186)
...
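The hint "Perhaps you forgot to include implementation on a classpath?" makes me suspect the hazelcast-azure jar itself is missing from the member's classpath. For reference, the GitHub README lists the plugin as a separate Maven dependency, roughly as below (the version is illustrative; check the README for the release matching your Hazelcast version):
<dependency>
    <groupId>com.hazelcast.azure</groupId>
    <artifactId>hazelcast-azure</artifactId>
    <!-- illustrative version; use the release the README recommends -->
    <version>2.1</version>
</dependency>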
I have tried the following properties:
<azure enabled="true">
<client-id>****</client-id>
<client-secret>****</client-secret>
<tenant-id>****</tenant-id>
<subscription-id>****</subscription-id>
<cluster-id>****</cluster-id>
<group-name>****</group-name>
</azure>
Since this was not working, I even tried using the discovery-strategies configuration with the following snippet:
<discovery-strategies>
<!-- class equals to the DiscoveryStrategy not the factory! -->
<discovery-strategy enabled="true" class="com.hazelcast.azure.AzureDiscoveryStrategy">
<properties>
<property name="client-id">****</property>
<property name="client-secret">****</property>
<property name="tenant-id">****</property>
<property name="subscription-id">****</property>
<property name="cluster-id">****</property>
<property name="group-name">****</property>
</properties>
</discovery-strategy>
</discovery-strategies>
I even switched the value of class from:
class="com.hazelcast.azure.AzureDiscoveryStrategy"
to
class="com.hazelcast.azure.AzureDiscoveryStrategyFactory"
I have tried the above with Hazelcast versions 3.12.2, 3.12.1, 3.12, and 3.11.4, and all give the same result.
Please suggest what I am doing incorrectly and what else is required.
Exception during Ignite client startup (Spring Boot job):
TcpDiscoverySpi - Failed to get registered addresses from IP finder on start
Server returned HTTP response code: 401 for URL https://
I've created the service account and read the token.
Then I put the value of the token attribute into a file.
Then I tried to connect with ignitevisorcmd, but there seems to be an error I'm not able to identify.
Snippet of my "ignite-config.xml":
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Enables Kubernetes IP finder with default settings.
-->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="MasterUrl" value="https://XXXX-aks-001-ignitecluster-XXXXXXXX.hcp.westeurope.azmk8s.io"/>
<property name="AccountToken" value="C:\Users\XXXXXXXX\Desktop\TOP\token"/>
<property name="ServiceName" value="ignite"/>
</bean>
</property>
</bean>
</property>
What's wrong?
Check whether you have an RBAC-enabled cluster. If you do, you will have to give your Ignite pod permission to access the endpoints resource.
https://kubernetes.io/docs/admin/authorization/rbac/
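A minimal sketch of such a grant, assuming the pods run under a service account named ignite in the default namespace (names here are illustrative; see the linked docs for the authoritative syntax):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ignite-endpoint-access
rules:
- apiGroups: [""]            # core API group
  resources: ["endpoints"]   # the IP finder looks up service endpoints
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ignite-endpoint-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ignite-endpoint-access
subjects:
- kind: ServiceAccount
  name: ignite             # illustrative; the account the question created
  namespace: default       # adjust to the namespace the pods run in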
I have spent a whole day trying to figure out this odd issue. I have a NiFi instance stood up on a Linux server, and I configured the ldap-provider in login-identity-providers.xml as below:
<provider>
<identifier>ldap-provider</identifier>
<class>org.apache.nifi.ldap.LdapProvider</class>
<property name="Authentication Strategy">SIMPLE</property>
<property name="Manager DN"></property>
<property name="Manager Password"></property>
<property name="TLS - Keystore">/Data/ssl/server_keystore.jks</property>
<property name="TLS - Keystore Password">changeit</property>
<property name="TLS - Keystore Type">JKS</property>
<property name="TLS - Truststore">/Data/ssl/server_truststore.jks</property>
<property name="TLS - Truststore Password">changeit</property>
<property name="TLS - Truststore Type">JKS</property>
<property name="TLS - Client Auth"></property>
<property name="TLS - Protocol">TLSv1.2</property>
<property name="TLS - Shutdown Gracefully"></property>
<property name="Referral Strategy">FOLLOW</property>
<property name="Connect Timeout">10 secs</property>
<property name="Read Timeout">10 secs</property>
<property name="Url">ldaps://myserver.hostname:636</property>
<property name="User Search Base">ou=people,dc=xxx,dc=net</property>
<property name="User Search Filter">cn={0}</property>
<property name="Authentication Expiration">12 hours</property>
When I started NiFi, I got a login page prompted first. However, I kept getting:
2016-07-28 00:17:43,527 ERROR [NiFi Web Server-64] org.apache.nifi.ldap.LdapProvider myserver.hostname:636; nested exception is javax.naming.CommunicationException: myserver.hostname:636; [Root exception is javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target]
I then tried adding a JVM argument in bootstrap.conf:
java.arg.15=-Djavax.net.ssl.trustStore=/Data/ssl/server_truststore.jks
It worked perfectly fine.
I also tried SSLPoke with the same truststore VM argument, and it also worked fine:
java -Djavax.net.ssl.trustStore=/Data/ssl/server_truststore.jks SSLPoke myserver.hostname 636
"Successfully connected"
Now my question is: why does my configuration in NiFi's login-identity-providers.xml not work?
Unfortunately, NiFi does not support LDAPS currently. There is a JIRA [1] to build this capability. SIMPLE (plaintext) or START_TLS are the only valid options. Further, the SSL context configuration options are only considered when the Authentication Strategy is START_TLS.
[1] https://issues.apache.org/jira/browse/NIFI-2325
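In other words, with the properties already shown, something along these lines is what exercises the keystore/truststore settings (a sketch, assuming the server accepts StartTLS on the standard port 389):
<property name="Authentication Strategy">START_TLS</property>
<property name="Url">ldap://myserver.hostname:389</property>
<!-- the TLS - Keystore / TLS - Truststore properties are only
     consulted when the Authentication Strategy is START_TLS -->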
@davy_wei,
while Matt's comment is correct, if you are for some reason restricted from using plain LDAP to your LDAP/AD server (e.g. firewall rules), one option is to use stunnel or socat to tunnel between the protected LDAPS endpoint and NiFi's plain LDAP client.
A sample stunnel config would look like:
...
[ldap2ldaps]
accept = 127.0.0.1:whatever_port_you_want
client = yes
connect = your.real.ldaps.fqdn.or.ip:636
...
Remember, this is a basic config. You may want to fine-tune stunnel to match your security requirements (e.g. restrict it to particular ciphers, TLS versions, etc.).
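NiFi would then point its LDAP Url at the local tunnel endpoint over plain LDAP, e.g. (port matching the accept line above):
<property name="Authentication Strategy">SIMPLE</property>
<property name="Url">ldap://127.0.0.1:whatever_port_you_want</property>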
I am maintaining an existing Spring Integration application which is polling a third-party SFTP server for files. It occasionally throws permission or 'not found' errors, which I suspect are caused by transient problems at the remote end. I would like the application to retry on getting these errors, as it will probably resolve the issue. (I also have a requirement to "retry on any problems", which should cover this case.)
e.g.
org.springframework.messaging.MessagingException: Problem occurred while synchronizing remote to local directory; nested exception is org.springframework.messaging.MessagingException: Failure occurred while copying from remote to local directory; nested exception is org.springframework.core.NestedIOException: failed to read file mypath/myfile.csv; nested exception is 3: Permission denied
at [snip]
Caused by: org.springframework.messaging.MessagingException: Failure occurred while copying from remote to local directory; nested exception is org.springframework.core.NestedIOException: failed to read file mypath/myfile.csv; nested exception is 3: Permission denied
at [snip]
Caused by: 3: Permission denied
at com.jcraft.jsch.ChannelSftp.throwStatusError(ChannelSftp.java:2846) [snip]
After extensive googling and going round in circles, I am still unable to figure out how to do this with Spring Integration. Here is the existing config:
<bean id="myAcceptOnceFilter" class="org.springframework.integration.sftp.filters.SftpPersistentAcceptOnceFileListFilter">
<constructor-arg index="0" ref="myLocalFileStore"/>
<constructor-arg index="1" name="prefix" value="myprefix_"/>
<property name="flushOnUpdate" value="true"/>
</bean>
<bean id="myCompositeFilter" class="org.springframework.integration.file.filters.CompositeFileListFilter">
<constructor-arg>
<list>
<bean class="org.springframework.integration.sftp.filters.SftpSimplePatternFileListFilter">
<constructor-arg value="myprefix" />
</bean>
<ref bean="myAcceptOnceFilter"/>
</list>
</constructor-arg>
</bean>
<int-sftp:inbound-channel-adapter id="myInboundChannel"
session-factory="mySftpSessionFactory"
channel="myDownstreamChannel"
remote-directory="blah"
filter="myCompositeFilter"
local-directory="blah"
auto-create-local-directory="true"
>
<int:poller fixed-rate="10000" max-messages-per-poll="-1">
<int:transactional transaction-manager="transactionManager" synchronization-factory="syncFactory" />
</int:poller>
</int-sftp:inbound-channel-adapter>
EDIT: I think the problem lies in myCompositeFilter. It doesn't look like rollback() is being called on myAcceptOnceFilter when the exception is thrown. If I simply use myAcceptOnceFilter without the composite, the code works as intended (i.e. rollback() is called). The question is now: how do I continue to use a composite filter that calls rollback() on all its children?
I've looked into putting retry advice inside the poller (EDIT: I now know this is irrelevant):
<bean id="retryAdvice" class="org.springframework.integration.handler.advice.RequestHandlerRetryAdvice"/>
<int:poller fixed-rate="10000" max-messages-per-poll="-1">
<int:advice-chain>
<tx:advice transaction-manager="transactionManager"/>
<int:ref bean="retryAdvice"/>
</int:advice-chain>
</int:poller>
...but this throws a warning that
This advice org.springframework.integration.handler.advice.RequestHandlerRetryAdvice can only be used for MessageHandlers
In short, I'm stuck. Any help on getting it to retry on this kind of sftp exception would be very gratefully received. Thanks!
EDIT: Added mention of SftpPersistentAcceptOnceFileListFilter.
EDIT: Added discussion of CompositeFileListFilter, which now looks like the location of the problem.
The retry advice is for consuming endpoints (push-retrying).
It's not clear why you need to add retry here - the poller will inherently retry on the next poll.
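If you do want explicit retry, the advice belongs on a consuming endpoint downstream of the adapter rather than on the poller. A sketch, with illustrative endpoint and bean names:
<int:service-activator input-channel="myDownstreamChannel" ref="myService">
    <int:request-handler-advice-chain>
        <!-- reuses the retryAdvice bean defined above -->
        <ref bean="retryAdvice"/>
    </int:request-handler-advice-chain>
</int:service-activator>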
I have an inbound gateway listening to MQ for messages. Whenever the MQ server is brought down, I lose the connection, and in the logs I see that my inbound gateway tries to refresh the connection every 5 seconds. But once the MQ server is back up, I still see the same error in the log. Can you please let me know if I'm missing anything in the config, or whether there is anything to be done on the MQ server?
Spring Config
<bean id="cachingConnectionFactory1" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="mqConnectionFactory1"/>
<property name="reconnectOnException" value="true"/>
</bean>
<bean id="mqConnectionFactory1" class="org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter">
<property name="targetConnectionFactory">
<bean class="com.ibm.mq.jms.MQQueueConnectionFactory">
<property name="hostName" value="${mq.hostname.1}"/>
<property name="port" value="${mq.port}"/>
<property name="queueManager" value="${mq.queuemanager.1}"/>
<property name="transportType" value="${mq.transporttype}"/>
<property name="channel" value="${mq.channel}"/>
</bean>
</property>
<property name="username" value="${mq.username}"/>
</bean>
<int-jms:message-driven-channel-adapter
id="mqEnterpriseRequestAdapter1"
connection-factory="cachingConnectionFactory1"
destination="xyzQueue"
concurrent-consumers="2"
max-concurrent-consumers="5"
cache-level="5"
idle-consumer-limit="5"
max-messages-per-task="2"
channel="xyzReceive"/>
Error Log:
[2015-09-19 01:52:56,267] WARN [org.springframework.jms.listener.DefaultMessageListenerContainer#4-494492] (DefaultMessageListenerContainer.java:842) - Setup of JMS message listener invoker failed for destination 'queue:///queuename' - trying to recover. Cause: MQJMS2002: failed to get message from MQ queue; nested exception is com.ibm.mq.MQException: MQJE001: Completion Code 2, Reason 2019
[2015-09-19 01:52:51,292] WARN [org.springframework.jms.listener.DefaultMessageListenerContainer#4-494037] (DefaultMessageListenerContainer.java:842) - Setup of JMS message listener invoker failed for destination 'queue:///queuename' - trying to recover. Cause: MQJMS2002: failed to get message from MQ queue; nested exception is com.ibm.mq.MQException: MQJE001: Completion Code 2, Reason 2019
[2015-09-19 01:52:51,263] WARN [org.springframework.jms.listener.DefaultMessageListenerContainer#4-494488] (DefaultMessageListenerContainer.java:842) - Setup of JMS message listener invoker failed for destination 'queue:///queuename' - trying to recover. Cause: MQJMS2002: failed to get message from MQ queue; nested exception is com.ibm.mq.MQException: MQJE001: Completion Code 2, Reason 2019
[2015-09-19 01:52:46,291] WARN [org.springframework.jms.listener.DefaultMessageListenerContainer#4-494033] (DefaultMessageListenerContainer.java:842) - Setup of JMS message listener invoker failed for destination 'queue:///queuename' - trying to recover. Cause: MQJMS2002: failed to get message from MQ queue; nested exception is com.ibm.mq.MQException: MQJE001: Completion Code 2, Reason 2019
[2015-09-19 01:52:46,262] WARN [org.springframework.jms.listener.DefaultMessageListenerContainer#4-494485] (DefaultMessageListenerContainer.java:842) - Setup of JMS message listener invoker failed for destination 'queue:///queuename' - trying to recover. Cause: MQJMS2002: failed to get message from MQ queue; nested exception is com.ibm.mq.MQException: MQJE001: Completion Code 2, Reason 2019
I found the answer to your question by googling "MQJE001: Completion Code 2, Reason 2019".
The answer is on IBM's support site.
Reason code 2019 usually occurs after a connection broken error (reason code 2009) occurs. You would see a JMSException with reason code 2009 preceding reason code 2019 in the SystemOut.log.
Reason code 2009 indicates that the connection to the MQ queue manager is no longer valid, usually due to a network or firewall issue.
Reason code 2019 errors will occur when invalid connections remain in the connection pool after the reason code 2009 error occurs. The next time that the application tries to use one of these connections, the reason code 2019 occurs.
Resolved after following the post below; here are the config changes after the modification:
http://forum.spring.io/forum/spring-projects/integration/jms/89532-defaultmessagelistenercontainer-cachingconnectionfactory-tomcat-and-websphere-mq
<bean id="mqContainer1" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="cachingConnectionFactory1" />
<property name="exceptionListener" ref="cachingConnectionFactory1" />
<property name="destinationName" value="${mq.requestqueue}" />
<property name="maxConcurrentConsumers" value="x"/>
<property name="concurrentConsumers" value="x"/>
<property name="maxMessagesPerTask" value="x"/>
<property name="idleConsumerLimit" value="x"/>
</bean>
<int-jms:message-driven-channel-adapter
id="mqEnterpriseRequestAdapter1"
container="cbatsMqContainer1"
channel="mqMessageReceive"/>
While launching a local GridGain instance in a local node for the sake of testing, I'm getting the following:
class org.gridgain.grid.GridException: Failed to start SPI: GridTcpDiscoverySpi [locPort=47500, locPortRange=100, statsPrintFreq=0, netTimeout=5000, sockTimeout=2000, ackTimeout=5000, maxAckTimeout=600000, joinTimeout=0, hbFreq=2000, maxMissedHbs=1, threadPri=10, storesCleanFreq=60000, reconCnt=10, topHistSize=1000, gridName=null, locNodeId=dd235392-85b2-4f13-8a36-c433c5053c84, marsh=GridJdkMarshaller [], gridMarsh=org.gridgain.grid.marshaller.optimized.GridOptimizedMarshaller#56589a42, locNode=GridTcpDiscoveryNode [id=dd235392-85b2-4f13-8a36-c433c5053c84, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500], discPort=47500, order=0, loc=true, ver=GridProductVersion [major=6, minor=1, maintenance=6, revTs=1401961981]], locAddr=null, locHost=0.0.0.0/0.0.0.0, ipFinder=GridTcpDiscoveryVmIpFinder [addrs=[/127.0.0.1:0], super=GridTcpDiscoveryIpFinderAdapter [shared=false]], metricsStore=null, spiState=CONNECTING, ipFinderHasLocAddr=true, recon=false, joinRes=GridTuple [val=null], nodeAuth=org.gridgain.grid.kernal.managers.discovery.GridDiscoveryManager$3#6bdf5fb8, gridStartTime=0]
at org.gridgain.grid.kernal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:221)
at org.gridgain.grid.kernal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:371)
at org.gridgain.grid.kernal.GridKernal.startManager(GridKernal.java:1523)
... 8 more
Caused by: class org.gridgain.grid.spi.GridSpiException: Failed to authenticate local node (will shutdown local node).
at org.gridgain.grid.spi.discovery.tcp.GridTcpDiscoverySpi.joinTopology(GridTcpDiscoverySpi.java:1507)
at org.gridgain.grid.spi.discovery.tcp.GridTcpDiscoverySpi.spiStart0(GridTcpDiscoverySpi.java:994)
at org.gridgain.grid.spi.discovery.tcp.GridTcpDiscoverySpi.spiStart(GridTcpDiscoverySpi.java:916)
at org.gridgain.grid.kernal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:218)
... 10 more
You can find the full stack trace at http://pastebin.com/7D17vuCY.
I've also tried configuring a local IP finder like this, but with no joy:
<property name="discoverySpi">
<bean class="org.gridgain.grid.spi.discovery.tcp.GridTcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.gridgain.grid.spi.discovery.tcp.ipfinder.vm.GridTcpDiscoveryVmIpFinder">
<property name="addresses" value="127.0.0.1"></property>
</bean>
</property>
</bean>
</property>
Any clue what's wrong with it?
OK, I've solved this by setting the localAddress property:
<property name="discoverySpi">
<bean class="org.gridgain.grid.spi.discovery.tcp.GridTcpDiscoverySpi">
<property name="localAddress" value="127.0.0.1"/>
</bean>
</property>
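With localAddress pinned to 127.0.0.1, the SPI no longer advertises the IPv6 loopback as well (note the addrs=[0:0:0:0:0:0:0:1, 127.0.0.1] in the trace above), which appears to be what was breaking the local join.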
Alternatively, the workaround is to run with -Djava.net.preferIPv4Stack=true or upgrade to GridGain 6.2.0.