Spring Integration outbound-gateway with basic authentication

I have seen some examples, but I am unable to use those solutions.
The problem is that I have to pass the basic authentication info as part of my config.
My current request config is below. Can you tell me how to add basic authentication?
<bean id="WSACaoedelen" class="nl.bIntnActiCallback">
<constructor-arg index="0" value="http://enst.nl/kkm/Kkmervice/toest"></constructor-arg>
<constructor-arg index="1" value="${KKSEURL}"></constructor-arg>
</bean>
<int:chain input-channel="kkChannel" output-channel="dest-channel">
<ws:header-enricher>
<ws:soap-action value="http://knst.nl/kkm/KkService/toest"/>
</ws:header-enricher>
<ws:outbound-gateway uri="${GATEWAY}" request-callback="WSACaoedelen"/>
</int:chain>

The point is that basic authentication is part of the HTTP transport.
You should consider using an HttpComponentsMessageSender with the credentials injected via setCredentials(). In your case I guess you can just use UsernamePasswordCredentials:
<bean id="httpComponentsMessageSender" class="org.springframework.ws.transport.http.HttpComponentsMessageSender">
<property name="credentials">
<bean class="org.apache.http.auth.UsernamePasswordCredentials">
<constructor-arg value="userName"/>
<constructor-arg value="password"/>
</bean>
</property>
</bean>
...
<ws:outbound-gateway uri="${GATEWAY}" message-sender="httpComponentsMessageSender"/>
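If you prefer Java configuration, a minimal sketch of the same message sender might look like the following (the bean name and credential values are illustrative, not from the original post):

import org.apache.http.auth.UsernamePasswordCredentials;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.ws.transport.http.HttpComponentsMessageSender;

@Configuration
public class WsClientConfig {

    @Bean
    public HttpComponentsMessageSender httpComponentsMessageSender() {
        HttpComponentsMessageSender sender = new HttpComponentsMessageSender();
        // Basic authentication credentials are applied to the underlying HttpClient.
        sender.setCredentials(new UsernamePasswordCredentials("userName", "password"));
        return sender;
    }
}

The <ws:outbound-gateway> then references this bean through its message-sender attribute, exactly as in the XML above.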

Related

spring integration idempotent metadatastore message going to same store

I am using the S3 module to poll files from S3. It downloads the file to the local system and starts processing it. I am running this on a 3-node cluster with a module count of 1. Now let's assume the file is downloaded to the local system from S3 and XD is processing it. If the XD node goes down, it would have processed half the messages. When the server comes up it will start processing the file again, hence I will get duplicate messages. I am trying to change to the idempotent pattern with a message store so I can change the module count to 3, but this duplicate message issue will still be there.
This configuration worked for me, thanks for the help:
<int:poller fixed-delay="${fixedDelay}" default="true">
<int:advice-chain>
<ref bean="pollAdvise"/>
</int:advice-chain>
</int:poller>
<bean id="pollAdvise" class="org.springframework.integration.scheduling.PollSkipAdvice">
<constructor-arg ref="healthCheckStrategy"/>
</bean>
<bean id="healthCheckStrategy" class="test.ServiceHealthCheckPollSkipStrategy">
<property name="url" value="${url}"/>
<property name="doHealthCheck" value="${doHealthCheck}"/>
</bean>
<bean id="credentials" class="org.springframework.integration.aws.core.BasicAWSCredentials">
<property name="accessKey" value="${accessKey}"/>
<property name="secretKey" value="${secretKey}"/>
</bean>
<bean id="clientConfiguration" class="com.amazonaws.ClientConfiguration">
<property name="proxyHost" value="${proxyHost}"/>
<property name="proxyPort" value="${proxyPort}"/>
<property name="preemptiveBasicProxyAuth" value="false"/>
</bean>
<bean id="s3Operations" class="org.springframework.integration.aws.s3.core.CustomC1AmazonS3Operations">
<constructor-arg index="0" ref="credentials"/>
<constructor-arg index="1" ref="clientConfiguration"/>
<property name="awsEndpoint" value="s3.amazonaws.com"/>
<property name="temporaryDirectory" value="${temporaryDirectory}"/>
<property name="awsSecurityKey" value="${awsSecurityKey}"/>
</bean>
<!-- aws-endpoint="https://s3.amazonaws.com" -->
<int-aws:s3-inbound-channel-adapter aws-endpoint="s3.amazonaws.com"
bucket="${bucket}"
s3-operations="s3Operations"
credentials-ref="credentials"
file-name-wildcard="${fileNameWildcard}"
remote-directory="${remoteDirectory}"
channel="splitChannel"
local-directory="${localDirectory}"
accept-sub-folders="false"
delete-source-files="true"
archive-bucket="${archiveBucket}"
archive-directory="${archiveDirectory}">
</int-aws:s3-inbound-channel-adapter>
<int-file:splitter id="s3splitter" input-channel="splitChannel" output-channel="bridge" markers="false" charset="UTF-8">
<int-file:request-handler-advice-chain>
<bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
<property name="onSuccessExpression" value="payload.delete()"/>
</bean>
</int-file:request-handler-advice-chain>
</int-file:splitter>
I had a few doubts and wanted to clarify if I am doing this right.
1) As you can see, I have ExpressionEvaluatingRequestHandlerAdvice to delete the file. Will the file get deleted after I read the file into Redis, or after the last record is read?
2) I explored Redis using Desktop Manager and I see I have a MetaData key.
Both (file and payload) metadata store keys and values are going to the same table. Is this fine, or should it be a different metadata store?
Can I use a hash of the payload instead of the payload as the key? Is there something like payload.hash?
Sorry, but you still don't show enough config to determine the issue.
There is neither an ExpressionEvaluatingRequestHandlerAdvice, nor an s3splitter, nor even an S3 inbound channel adapter.
Anyway, I'll try to answer your questions.
I'd say that the best place to use ExpressionEvaluatingRequestHandlerAdvice with the file delete is your FileSplitter, and the file is deleted after the last record is read. Actually, the store into Redis via the IdempotentReceiverInterceptor happens even before any splitting logic is involved. That is the premise of the Idempotent Receiver: do not pass the message to the target endpoint if we don't accept duplicate messages:
boolean accept = this.messageSelector.accept(message);
if (!accept) {
    boolean discarded = false;
    .....
Yes, it looks like you should use different RedisMetadataStore instances with different keys (one for files, one for lines), but they may point to the same Redis server, as shown below.
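For example, a minimal Java sketch of two metadata stores backed by the same Redis server but using different hash keys might look like this (the bean and key names are illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.integration.redis.metadata.RedisMetadataStore;

@Configuration
public class MetadataStoreConfig {

    @Bean
    public RedisMetadataStore fileMetadataStore(RedisConnectionFactory connectionFactory) {
        // "Seen file" entries live under their own Redis key instead of the default "MetaData".
        return new RedisMetadataStore(connectionFactory, "s3FileMetadata");
    }

    @Bean
    public RedisMetadataStore lineMetadataStore(RedisConnectionFactory connectionFactory) {
        // "Seen line" entries go to a separate Redis key.
        return new RedisMetadataStore(connectionFactory, "s3LineMetadata");
    }
}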
Not sure what you mean by payload.hash. Any Java object has its own hashCode(), although after splitting what you have there are Strings representing each line of the file. So I think you really should use something like an MD5 hash there:
new BigInteger(1, md5.digest(value.toUpperCase().getBytes("UTF-8"))).toString(16).toUpperCase()
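A self-contained sketch of that idea, assuming the splitter emits each file line as a String payload (the class name and bean usage are illustrative):

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5KeyStrategy {

    public String key(String value) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(value.toUpperCase().getBytes(StandardCharsets.UTF_8));
            // Hex-encode the digest so it can serve as a compact metadata-store key.
            return new BigInteger(1, digest).toString(16).toUpperCase();
        }
        catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}

If such a class is registered as a bean (say md5KeyStrategy), the idempotent receiver could reference it with key-expression="@md5KeyStrategy.key(payload)" instead of the raw payload.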

duplicate message processed when polling files from s3

I am using the S3 module to poll files from S3. It downloads the file to the local system and starts processing it. I am running this on a 3-node cluster with a module count of 1. Now let's assume the file is downloaded to the local system from S3 and XD is processing it. If the XD node goes down, it would have processed half the messages. When the server comes up it will start processing the file again, hence I will get duplicate messages. I am trying to change to the idempotent pattern with a message store so I can change the module count to 3, but this duplicate message issue will still be there.
<int:poller fixed-delay="${fixedDelay}" default="true">
<int:advice-chain>
<ref bean="pollAdvise"/>
</int:advice-chain>
</int:poller>
<bean id="pollAdvise" class="org.springframework.integration.scheduling.PollSkipAdvice">
<constructor-arg ref="healthCheckStrategy"/>
</bean>
<bean id="healthCheckStrategy" class="ServiceHealthCheckPollSkipStrategy">
<property name="url" value="${url}"/>
<property name="doHealthCheck" value="${doHealthCheck}"/>
</bean>
<bean id="credentials" class="org.springframework.integration.aws.core.BasicAWSCredentials">
<property name="accessKey" value="${accessKey}"/>
<property name="secretKey" value="${secretKey}"/>
</bean>
<bean id="clientConfiguration" class="com.amazonaws.ClientConfiguration">
<property name="proxyHost" value="${proxyHost}"/>
<property name="proxyPort" value="${proxyPort}"/>
<property name="preemptiveBasicProxyAuth" value="false"/>
</bean>
<bean id="s3Operations" class="org.springframework.integration.aws.s3.core.CustomC1AmazonS3Operations">
<constructor-arg index="0" ref="credentials"/>
<constructor-arg index="1" ref="clientConfiguration"/>
<property name="awsEndpoint" value="s3.amazonaws.com"/>
<property name="temporaryDirectory" value="${temporaryDirectory}"/>
<property name="awsSecurityKey" value=""/>
</bean>
<!-- aws-endpoint="https://s3.amazonaws.com" -->
<int-aws:s3-inbound-channel-adapter aws-endpoint="s3.amazonaws.com"
bucket="${bucket}"
s3-operations="s3Operations"
credentials-ref="credentials"
file-name-wildcard="${fileNameWildcard}"
remote-directory="${remoteDirectory}"
channel="splitChannel"
local-directory="${localDirectory}"
accept-sub-folders="false"
delete-source-files="true"
archive-bucket="${archiveBucket}"
archive-directory="${archiveDirectory}">
</int-aws:s3-inbound-channel-adapter>
<int-file:splitter input-channel="splitChannel" output-channel="output" markers="false" charset="UTF-8">
<int-file:request-handler-advice-chain>
<bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
<property name="onSuccessExpression" value="payload.delete()"/>
</bean>
</int-file:request-handler-advice-chain>
</int-file:splitter>
<int:idempotent-receiver id="expressionInterceptor" endpoint="output"
metadata-store="redisMessageStore"
discard-channel="nullChannel"
throw-exception-on-rejection="false"
key-expression="payload"/>
<bean id="redisMessageStore" class="o.s.i.redis.store.RedisChannelMessageStore">
<constructor-arg ref="redisConnectionFactory"/>
</bean>
<bean id="redisConnectionFactory"
class="o.s.data.redis.connection.jedis.JedisConnectionFactory">
<property name="port" value="7379" />
</bean>
<int:channel id="output"/>
Update 2
This configuration worked for me, thanks for your help:
<int:idempotent-receiver id="s3Interceptor" endpoint="s3splitter"
metadata-store="redisMessageStore"
discard-channel="nullChannel"
throw-exception-on-rejection="false"
key-expression="payload.name"/>
<bean id="redisMessageStore" class="org.springframework.integration.redis.metadata.RedisMetadataStore">
<constructor-arg ref="redisConnectionFactory"/>
</bean>
<bean id="redisConnectionFactory"
class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory">
<property name="port" value="6379" />
</bean>
<int:bridge id="batchBridge" input-channel="bridge" output-channel="output">
</int:bridge>
<int:idempotent-receiver id="splitterInterceptor" endpoint="batchBridge"
metadata-store="redisMessageStore"
discard-channel="nullChannel"
throw-exception-on-rejection="false"
key-expression="payload"/>
<int:channel id="output"/>
I had a few doubts and wanted to clarify if I am doing this right.
1) As you can see, I have ExpressionEvaluatingRequestHandlerAdvice to delete the file. Will the file get deleted after I read the file into Redis, or after the last record is read?
2) I explored Redis using Desktop Manager and I see I have MetaData as the main key.
Both (file and payload) metadata store keys and values are going to the same table. Is this fine, or should it be a different metadata store?
Can I use a hash of the payload instead of the payload as the key? Is there something like payload.hash?
Looks like this is a continuation of your earlier question about multiple messages being processed, but unfortunately we don't see the <idempotent-receiver> configuration in your case.
According to your comment, it looks like you are still using a SimpleMetadataStore, or are cleaning the shared one (Redis/Mongo) very often.
You should share more info on where to dig. Some logs and a DEBUG investigation would be good, too.
UPDATE
The Idempotent Receiver is exactly for an endpoint. In your config it is applied to a MessageChannel. That's why it doesn't work properly: the MessageChannel is simply ignored by the IdempotentReceiverInterceptor.
You should add an id to your <int-file:splitter> and use that id in the endpoint attribute. Not sure it is a good idea to use the File as a key for idempotency; the name sounds better.
UPDATE 2
If a node goes down, and let's assume a file is downloaded to an XD node (a file with a million records may be gigabytes in size) and I have processed half the records when the node crashes, then when the server comes up I think we will process the same records again?
OK, I got your point finally! You have an issue with the already-split lines from the file.
I'd also use an Idempotent Receiver for the <splitter> itself to avoid duplicate files from S3.
To fix your use case, you should place one more endpoint, a <bridge>, between the <splitter> and the output channel, and skip duplicate lines with the Idempotent Receiver on that bridge.

RequestHandlerRetryAdvice | Service Activator

I want to prevent the retry of my service activator in case any exception is thrown.
Below is my configuration:
<http:inbound-gateway id="inboundGW"
request-channel="getCustInfo" supported-methods="GET"
error-channel="errorHandlerRouterChannel"
path="/Messages/{cust}" reply-channel="msgRetrival">
<http:header name="cust" expression="#pathVariables.cust" />
</http:inbound-gateway>
<int:service-activator input-channel="getCustInfo"
output-channel="msgRetrival" ref="createService" method="processCustInfo">
<int:request-handler-advice-chain>
<ref bean="retryWithBackoffAdviceSession"></ref>
</int:request-handler-advice-chain>
</int:service-activator>
<bean id="retryWithBackoffAdviceSession" class="org.springframework.integration.handler.advice.RequestHandlerRetryAdvice">
<property name="retryTemplate">
<bean class="org.springframework.retry.support.RetryTemplate">
<property name="retryPolicy">
<bean class="org.springframework.retry.policy.SimpleRetryPolicy">
<property name="maxAttempts" value="1" />
</bean>
</property>
</bean>
</property>
<property name="recoveryCallback">
<bean class="org.springframework.integration.handler.advice.ErrorMessageSendingRecoverer">
<constructor-arg ref="errorHandlerRouterChannel"/>
</bean>
</property>
</bean>
The exception thrown from my service activator doesn't get caught by the error-channel specified in the http:inbound-gateway.
The Spring Integration config file was loaded twice in the deployment descriptor; after fixing this the issue was resolved.
Your question is not clear.
If you want to throw the exception after retries are exhausted, remove the recoveryCallback.
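With maxAttempts set to 1 there is only the original call, i.e. no retries at all. A minimal Java-config sketch of the same advice without the recovery callback, so the exception simply propagates once that single attempt fails (the bean names are illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.handler.advice.RequestHandlerRetryAdvice;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

@Configuration
public class RetryConfig {

    @Bean
    public RequestHandlerRetryAdvice retryWithBackoffAdviceSession() {
        RetryTemplate retryTemplate = new RetryTemplate();
        // maxAttempts counts the original call, so 1 means "no retries".
        retryTemplate.setRetryPolicy(new SimpleRetryPolicy(1));
        RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
        advice.setRetryTemplate(retryTemplate);
        return advice;
    }
}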

handling file backup/archive on success and retry on failure

I have set up a simple file integration that reads files from a directory and transfers them via SFTP to a remote system. It is working fine; however, I need to handle archiving after a successful transfer and retrying up to n times after an unsuccessful transfer. Any advice on how this can be done using Spring Integration components?
<bean id="sftpSessionFactory" class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
<property name="host" value="sftp.host"/>
<property name="port" value="22"/>
<property name="user" value="user1"/>
<property name="password" value="pass1"/>
</bean>
<file:inbound-channel-adapter channel="fileInputChannel"
directory="C:/FileServer"
prevent-duplicates="true"
filename-pattern="*.pdf">
<integration:poller fixed-rate="5000"/>
</file:inbound-channel-adapter>
<sftp:outbound-channel-adapter id="sftpOutboundAdapter"
session-factory="sftpSessionFactory"
channel="fileInputChannel"
charset="UTF-8"
remote-directory=".">
<sftp:request-handler-advice-chain>
<bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
<property name="onSuccessExpression" value="payload.renameTo('C:/succeeded/' + payload.name)"/>
<property name="successChannel" ref="afterSuccessDeleteChannel"/>
<property name="onFailureExpression" value="payload.renameTo('C:/failed/' + payload.name)"/>
<property name="failureChannel" ref="afterFailRenameChannel" />
</bean>
<bean id="retryAdvice" class="org.springframework.integration.handler.advice.RequestHandlerRetryAdvice">
<property name="retryTemplate" ref="retryTemplate"/>
</bean>
</sftp:request-handler-advice-chain>
</sftp:outbound-channel-adapter>
Thanks
See the retry-and-more sample. It shows how to use a retry advice and how to take different actions based on success/failure.
For both, you would declare the retry advice after the expression-evaluating advice, so that the expression-evaluating advice is invoked when retries are exhausted.
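The retryTemplate bean referenced from the advice chain above is not shown in the question; a minimal sketch, assuming you want up to n attempts with exponential back-off (the values here are illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

@Configuration
public class SftpRetryConfig {

    @Bean
    public RetryTemplate retryTemplate() {
        RetryTemplate template = new RetryTemplate();
        // Attempt the SFTP transfer at most 5 times in total.
        template.setRetryPolicy(new SimpleRetryPolicy(5));
        ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
        backOff.setInitialInterval(1000);   // wait 1s before the first retry
        backOff.setMultiplier(2.0);         // double the wait after each failure
        backOff.setMaxInterval(10000);      // but never wait more than 10s
        template.setBackOffPolicy(backOff);
        return template;
    }
}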

Spring Security - Retrieving Role of a User

I am using OpenDS for authentication in my application. I am able to authenticate the user successfully but not able to get the roles of the user.
The following is the configuration in the XML file:
<bean id="secondLdapProvider" class="org.springframework.security.ldap.authentication.LdapAuthenticationProvider">
<constructor-arg>
<bean class="org.springframework.security.ldap.authentication.BindAuthenticator">
<constructor-arg ref="contextSource" />
<property name="userSearch">
<bean id="userSearch" class="org.springframework.security.ldap.search.FilterBasedLdapUserSearch">
<constructor-arg index="0" value="ou=people"/>
<constructor-arg index="1" value="(uid={0})"/>
<constructor-arg index="2" ref="contextSource" />
</bean>
</property>
</bean>
</constructor-arg>
<constructor-arg>
<bean class="org.springframework.security.ldap.userdetails.DefaultLdapAuthoritiesPopulator">
<constructor-arg ref="contextSource" />
<constructor-arg value="ou=groups" />
<property name="groupSearchFilter" value="(member={0})"/>
<property name="rolePrefix" value="ROLE_"/>
<property name="searchSubtree" value="true"/>
<property name="convertToUpperCase" value="true"/>
</bean>
</constructor-arg>
</bean>
Please help me to get the roles.
Collection<? extends GrantedAuthority> roles = SecurityContextHolder.getContext().getAuthentication().getAuthorities();
That will return the roles ("authorities") as found by the DefaultLdapAuthoritiesPopulator.
The search filter is "(member={0})" in the "groups" OU, i.e. roles are retrieved by searching for entries in the "groups" OU with a "member" attribute whose value matches the user's DN. In the example LDIF in your comment below, it looks like you use "uniqueMember" instead of "member" as your group membership attribute.
If you read the documentation carefully
(http://static.springsource.org/spring-security/site/docs/3.1.x/apidocs/org/springframework/security/ldap/userdetails/DefaultLdapAuthoritiesPopulator.html) you'll see examples of LDIF and how the different attributes are mapped by the populator.
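As a small usage sketch, this checks whether the authenticated user holds a particular role populated by the DefaultLdapAuthoritiesPopulator (the role name is illustrative):

import java.util.Collection;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.context.SecurityContextHolder;

public class RoleCheck {

    public boolean hasAdminRole() {
        Collection<? extends GrantedAuthority> roles =
                SecurityContextHolder.getContext().getAuthentication().getAuthorities();
        // The prefix and upper-casing match the populator's rolePrefix/convertToUpperCase settings.
        return roles.stream().anyMatch(a -> "ROLE_ADMIN".equals(a.getAuthority()));
    }
}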
