I want to use two AWS outbound adapters in a single integration flow: an outbound S3 adapter and an outbound SQS adapter. I have to move a file from an SMB share to an S3 bucket, then use a transformer, and then use the SQS adapter to send the transformed message to an SQS queue. I can achieve this with two integration flows, but I want to achieve it with only one. If I add both outbound adapters to one flow, only one of them works.
That's true: two outbound channel adapters can't work one after another, because a one-way component doesn't return anything to send as a payload to the output channel.
You might want to familiarize yourself with the publish-subscribe pattern: https://www.enterpriseintegrationpatterns.com/patterns/messaging/PublishSubscribeChannel.html.
In the IntegrationFlow you just need to configure a publishSubscribeChannel with two subscribers as sub-flows for your different Channel Adapters.
See the docs for more info: https://docs.spring.io/spring-integration/docs/5.1.7.RELEASE/reference/html/#java-dsl-subflows
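For example, here is a minimal sketch of a single flow that fans out to both adapters (the smbMessageSource(), s3MessageHandler(), and sqsMessageHandler() bean names and the poller settings are assumptions; the transform() method is the one discussed in the UPDATE below):
@Bean
public IntegrationFlow fileToS3AndSqsFlow() {
    return IntegrationFlows.from(smbMessageSource(),
                    e -> e.poller(Pollers.fixedDelay(5000)))
            .enrichHeaders(h -> h.header("bucketName", "mybucket"))
            .publishSubscribeChannel(c -> c
                    // first subscriber: upload the file to S3
                    .subscribe(s -> s.handle(s3MessageHandler()))
                    // second subscriber: build the SQS payload, then send it
                    .subscribe(s -> s
                            .transform(this, "transform")
                            .handle(sqsMessageHandler())))
            .get();
}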
UPDATE
Since you already have an .enrichHeaders(h -> h.header("bucketName", "mybucket")) before the publishSubscribeChannel(), that header is going to be available for both subscribers downstream.
To access it from the s3MessageHandler, configure it like this:
@Bean
public MessageHandler s3MessageHandler() {
    // evaluate the bucket name from the "bucketName" header set upstream
    return new S3MessageHandler(amazonS3,
            new FunctionExpression<Message<?>>(m -> m.getHeaders().get("bucketName")));
}
To get access to that header in the next SQS subscriber, change your transform() method signature to accept the whole Message<?>, so you again have access to the headers when building a custom message for SQS.
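A minimal sketch of such a transform() (the File payload type and the message being built are assumptions):
public Message<String> transform(Message<File> message) {
    // the bucketName header set before publishSubscribeChannel() is still present here
    String bucket = (String) message.getHeaders().get("bucketName");
    return MessageBuilder
            .withPayload("Uploaded " + message.getPayload().getName() + " to " + bucket)
            .copyHeaders(message.getHeaders())
            .build();
}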
In my former life I worked on a few Apache Camel projects, so I'm not entirely new to EIPs, but I am now trying to learn & understand Spring Integration. I have (what I think is) a small snippet of code for a "flow" that:
Defines a control bus for managing & monitoring the flow
Flow starts by fetching PNG images out of a folder (polling for new ones once a day); then
Uploads them to a directory on an FTP server
FileReadingMessageSource fileSource = new FileReadingMessageSource();
fileSource.setBeanName("fileMessageSource");
fileSource.setDirectory(new File("C:/DestDir"));
fileSource.setAutoCreateDirectory(true);

DefaultFtpSessionFactory ftpSessionFactory = new DefaultFtpSessionFactory();

IntegrationFlow flow = IntegrationFlows
        .from(fileSource, configurer -> configurer.poller(Pollers.cron("0 0 0 * * *"))) // once a day at midnight (Spring cron expressions have six fields)
        .filter(File.class, file -> file.getName().endsWith(".png")) // only allow PNG files through
        .controlBus() // add a control bus
        .handle(Ftp.outboundAdapter(ftpSessionFactory, FileExistsMode.FAIL)
                .useTemporaryFileName(false)
                .remoteDirectory("uploadDir"))
        .get();
Although admittedly I am a little unsure of the differences between "flows" and "channels" in Spring Integration parlance (I believe a flow is a composition of channels, and channels connect individual endpoints, maybe?), I am not understanding how, given my code above, the control bus can be leveraged to turn the fileSource input endpoint on/off.
I know that with control buses you send SpEL messages to channels, and the control bus takes those SpEL messages and uses them to figure out which beans/methods to invoke, but above I am starting my flow from a FileReadingMessageSource. So what is the exact message I would need to send, and to which channel, so that it stops/pauses or starts/restarts the FileReadingMessageSource endpoint?
The idea would be that if I used the control bus to turn the FileReadingMessageSource off, then days and days could pass and no PNG files would ever be uploaded to the FTP server, until I used the control bus to turn it back on.
Thanks for any-and-all help here!
The control bus needs to be in its own flow, not a component in another flow; you send control messages to it to control other endpoints.
See a recent example in this question and answer.
Using Control Bus EIP in Spring Integration to start/stop channels dynamically
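As a minimal sketch (the "controlChannel" name and the "fileSourceEndpoint" id are assumptions; the id would have to be assigned to the polling endpoint, e.g. via configurer.poller(...).id("fileSourceEndpoint")):
@Bean
public IntegrationFlow controlBusFlow() {
    // a dedicated flow whose only job is to execute control messages
    return IntegrationFlows.from("controlChannel")
            .controlBus()
            .get();
}
Then, to pause and later resume the file polling endpoint, send SpEL command messages to that channel:
controlChannel.send(new GenericMessage<>("@fileSourceEndpoint.stop()"));
controlChannel.send(new GenericMessage<>("@fileSourceEndpoint.start()"));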
So, I created a couple of short flows, using SQS as the backing store between processing segments. The basic flow is:
RestController -> SQS Queue1 OCA
SQS Queue1 MDCA -> Service-Adapter -> SQS Queue2 OCA
SQS Queue2 MDCA -> Service Adapter
However, I ran into a couple of problems...
The "SQS Queue1 MDCA" reads messages off the queue with AWS specific Message Headers that end up getting to the outbound adapter writing to Queue2. These message headers specify things like the AWS_queue name, aws message id, etc, which cause the message to be re-delivered to queue1.
Once I filtered these headers off (I used a simple custom transformer to test this), the attributes of the outbound-channel-adapter then worked as expected and the QueueMessagingTemplate delivered the message to the proper queue.
So, my question for the spring-int and spring-aws experts is: "Where is the proper place to filter out any pre-existing SQS headers so that they don't get picked up by any downstream SQS processing?" It seems to me like you'd want to do it in the SQSHandler, since they might be relevant in any base spring-aws-messaging calls.
As a side note, with a message created from an SQS inbound channel adapter and an object-to-json transformer in the flow, 12 message headers ended up on the message, which is 2 more than SQS allows (causing the message delivery to fail).
Here is a sample message, with headers slightly modified to protect the innocent queues. As you can see, aws_queue, lookupDestination, etc. are still in the message from being read from "wrench_workflow-mjg", so when trying to deliver to the next queue those headers overwrote the configuration in the Spring Integration XML and the message never reached the next queue; it just kept cycling through the "wrench_workflow-mjg" SQS queue (after the more-than-10-attributes problem was fixed).
GenericMessage [payload={"eventId":"event-1","eventStartDateTime":1476127827.201000000},
  headers={aws_messageId=db9b6cc0-f133-4182-b79c-4d5d9717a3a9,
           ApproximateReceiveCount=1,
           SentTimestamp=1476127827803,
           id=0b662b72-f149-a970-5e63-64a1b28290fb,
           SenderId=AIDAJOYV7TECZCZCOK22C,
           aws_receiptHandle=AQEBdaToWo9utbjBwADeSuChX/KrY3l8eoSBMZ0RMmbI8ewjJQw6tV74RwSYxWPKZBSzgJhCmfJ8AUX+reOy2yGrmefULU7NS8nqYTdMW6BB4Ug2+mHIY+8Tze+2ws15FB5t96q3iJO8+tP5pl/xuo+CiTDv+L1rlYkVODD0Yv1zosGKx48IhGOXdL8nJ4Im8bujcUNLJy/vEYD8Qcfsi6NHOF3Qn0A4Xw+Nva0wp86zfq,
           aws_queue=wrench_workflow-mjg,
           lookupDestination=wrench_workflow-mjg,
           ApproximateFirstReceiveTimestamp=1476127827886,
           timestamp=1476127836254}]
I can probably pull together an example if necessary, but I'm hoping you understand what I'm getting at without one.
We should probably add header-mapping facilities to the adapters to enable selective mapping of headers like we have for other technologies in Spring Integration.
I have opened a GitHub Issue.
In the meantime, you could put an <int:header-filter.../> (or HeaderFilter transformer) immediately upstream of the outbound adapter to remove the unwanted headers.
You could also add a custom Request Handler Advice to the outbound adapter.
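For example, a minimal sketch in Java configuration (the channel names are assumptions; an <int:header-filter input-channel="..." output-channel="..." header-names="..."/> element is the XML equivalent):
@Bean
@Transformer(inputChannel = "beforeQueue2", outputChannel = "queue2Out")
public HeaderFilter sqsHeaderFilter() {
    // strip the SQS-specific headers picked up by the inbound adapter on queue1
    return new HeaderFilter("aws_queue", "aws_messageId", "aws_receiptHandle",
            "lookupDestination", "ApproximateReceiveCount", "SenderId",
            "SentTimestamp", "ApproximateFirstReceiveTimestamp");
}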
I have the following flow:
1) message-driven-channel-adapter ->
1.1) output-channel connected to -> service-activator -> outbound-channel-adapter (for sending response)
1.2) error-channel connected to -> exception-type-router
1.2.1) message is sent to different queues depending on the exception type using outbound-channel-adapter
The MessageDrivenChannelAdapter uses a DefaultMessageListenerContainer and the outbound adapters use a JmsTemplate.
I have used the same CachingConnectionFactory for the inbound and outbound adapters, and have:
set acknowledge="transacted" on the message-driven-channel-adapter
set the cache level to CACHE_CONSUMER on the DefaultMessageListenerContainer
set cacheProducers=true and cacheConsumers=false on the CachingConnectionFactory
I am confused about how the JMS session/producer/consumer is created and handled in this flow.
1) Are the consumers and producers used by the inbound adapter and the outbound adapters (for the response and the error queues) created from the same session? That is, are the producers and consumers used on a given thread created from the same session?
2) And I just wanted to confirm whether there are any disadvantages/issues in using a CachingConnectionFactory even after setting 1) cacheConsumers to false in the factory and 2) the cache level to CACHE_CONSUMER on the DefaultMessageListenerContainer, because it is confusing to read forum posts saying that a CachingConnectionFactory should not be used.
3) Also, I have a doubt about the execution flow: when will the service activator method execution complete? Will it complete only after the message is sent to the output queue?
Please advise.
It's ok to use a caching connection factory with the listener container, as long as you don't use variable concurrency, or you disable caching consumers in the factory.
In your scenario, you should use a caching connection factory because you really want the producers to be cached for performance reasons.
When the container acknowledge mode is transacted, the container's session is bound to the thread and will be used by any downstream JmsTemplate that is configured to use the same connection factory, as long as there is no async handoff (QueueChannel or ExecutorChannel); the default DirectChannel runs the downstream endpoints on the container thread.
The service activator method is invoked (and "completes") before sending the message to the outbound adapter.
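A minimal sketch of the connection factory part of that setup (the ActiveMQ broker URL and bean name are assumptions):
@Bean
public CachingConnectionFactory connectionFactory() {
    CachingConnectionFactory ccf =
            new CachingConnectionFactory(new ActiveMQConnectionFactory("tcp://localhost:61616"));
    ccf.setCacheProducers(true);   // cache producers for the outbound adapters' JmsTemplate
    ccf.setCacheConsumers(false);  // the listener container caches its own consumer (CACHE_CONSUMER)
    return ccf;
}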
We previously had a Spring Integration flow (XML configuration-based) where we would do an update in a database after sending a message to a JMS queue. To achieve this, the SI flow was configured with a publish-subscribe queue channel as an input to a JMS outbound channel adapter (order 0) and a service activator (order 1). The idea being that after a successful JMS send, the service activator would be called, thus updating the data in the database.
We are now in the process of updating our flows to work with the spring-integration 4.0.x APIs and wanted to use this opportunity to see whether the described pattern is still a good/recommended way of doing a database update after a successful JMS send, or whether there is now a simpler/better way of achieving this. As a side note, our flows are now being implemented using the spring-integration-java-dsl:1.0.0.M3 APIs.
Thanks in advance for any input on this,
PM.
publish-subscribe queue channel
There's no such thing as a pub-sub queue channel; by definition, it's a subscribable channel; so I assume that's what you mean.
It is one of the ways to do what you need, and it's perfectly fine; you can also achieve the same result with a RecipientListRouter. The DSL syntax is quite nice, especially with Java 8; see the SpringOne demo app for an example.
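For example, a minimal sketch in the Java DSL (the bean, channel, and destination names are assumptions); subscribers are invoked in order on the calling thread, so the database update runs only after a successful JMS send:
@Bean
public IntegrationFlow jmsThenDbFlow() {
    return IntegrationFlows.from("input")
            .publishSubscribeChannel(c -> c
                    // order 0: send to the JMS queue
                    .subscribe(s -> s.handle(Jms.outboundAdapter(connectionFactory())
                            .destination("someQueue")))
                    // order 1: update the database after a successful send
                    .subscribe(s -> s.handle("dbUpdater", "update")))
            .get();
}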
I have two identical sites which will consume RabbitMQ messages using the new RabbitMQ client. Ideally, the producer should be able to designate the site either by queue name or by routing key. The former I can do as a Publish parameter, but the latter I have no access to. Furthermore, on the service side, the consumer appears only able to subscribe to convention-based queue names, i.e. mq.myrequest.inq, and I don't seem to be able to take advantage of the routing key.
Is there a way I can publish and subscribe using my own routing key, or register the handler based on an explicit queue name, i.e. mq.myrequest.site1.inq?
There isn't. ServiceStack's RabbitMQ support is convention-based on type names and is opinionated toward functioning as a work queue. It was designed to be config-free and simple to use, so it automatically takes care of the details of which exchanges, routing keys and queue names to use.
If you need advanced or custom configuration it's best to instead use the underlying RabbitMQ.Client directly.