FTP file polling behind a load balancer - spring-integration

I have created FTP polling using Spring Integration (FTP adapters). I would just like to know whether there are any known issues when using it behind a load balancer, i.e. our app is deployed on two server instances behind a load balancer.
I am wondering: if a file is polled by one FTP poller and is being processed, could the other FTP poller poll the same file and process it again?

For exactly this purpose, Spring Integration 3.0 introduced FtpPersistentAcceptOnceFileListFilter.
You just need to back it with a shared, external, persistent MetadataStore - Redis or GemFire.
Or implement your own, for example a JDBC-based one,
and contribute it to Spring Integration - https://jira.spring.io/browse/INT
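A minimal sketch of how the pieces fit together, using a Redis-backed metadata store shared by both instances (all bean names, hosts, and directories here are hypothetical; it assumes spring-integration-ftp and spring-integration-redis are on the classpath):

```xml
<!-- Shared Redis connection; both load-balanced instances point at the same server -->
<bean id="redisConnectionFactory"
      class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory">
    <property name="hostName" value="redis.example.com"/>
</bean>

<!-- Persistent metadata store shared across instances -->
<bean id="metadataStore"
      class="org.springframework.integration.redis.metadata.RedisMetadataStore">
    <constructor-arg ref="redisConnectionFactory"/>
</bean>

<!-- Accepts each remote file exactly once across the whole cluster -->
<bean id="acceptOnceFilter"
      class="org.springframework.integration.ftp.filters.FtpPersistentAcceptOnceFileListFilter">
    <constructor-arg ref="metadataStore"/>
    <constructor-arg value="ftp-poller:"/> <!-- key prefix in the store -->
</bean>

<int-ftp:inbound-channel-adapter id="ftpInbound"
        channel="ftpChannel"
        session-factory="ftpSessionFactory"
        filter="acceptOnceFilter"
        remote-directory="/inbound"
        local-directory="/tmp/ftp-in">
    <int:poller fixed-rate="5000"/>
</int-ftp:inbound-channel-adapter>
```

With this in place, whichever instance lists a file first records it in Redis, so the second poller's filter rejects it.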

Related

How to dynamically detect the web-server nodes in a load-balanced cluster?

I am implementing some real-time, collaborative features in an ASP.NET Web API based application using WebSockets and things are working fine when the app is deployed on a single web server.
When it is deployed on a farm behind a software (or hardware) load balancer, I would like to implement the pub/sub pattern so that any change happening on one web server invokes the same logic to check and push those changes via WebSocket to the clients connected to any of the other web servers.
I understand that this can be done with an additional layer using RabbitMQ, Redis, or some similar pub/sub or messaging component.
But is there a way to use DNS, TCP broadcast, or something else already available on Windows Server/IIS to publish the message to all the other sibling web servers in the cluster?
No.
You could use MSMQ instead of RabbitMQ, but that's not really going to help either, as it's a queue and not pub/sub, so ignore that.
If it's SignalR you're using, there are plenty of docs on how to scale out, such as Introduction to Scaleout in SignalR.
Even if it's not SignalR, you can probably get some ideas from there.

Consuming Amazon SQS (AMQP) from Azure

We need to consume data from a third party that has an Amazon SQS instance set up on top of the AMQP protocol. They have given us the following:
queue name
user name
password
port
virtualhost
host
We are a cloud-born company in which we host everything in the Azure cloud, e.g. web services, web apps, databases, etc.
I would like to find out the following:
What Azure "service" should I design or develop against to consume messages from Amazon SQS?
If Azure Service Bus supports AMQP 1.0 and Amazon SQS supports AMQP 0.9.3, can this be a plausible path?
I guess my question is more about how to architect my solution. I know there are frameworks like RabbitMQ, but I would like to avoid the VM path. If solutions like RabbitMQ are the way to go, can only the "consumer" pieces be used, without implementing the "server" pieces of RabbitMQ?
Any and all advice will be greatly appreciated.

Kafka as Messaging queue in Microservices

To give you the background to the question: I am considering Kafka as a channel for inter-service communication between microservices. Most of my microservices are web-facing (either a web server, REST server, or SOAP server communicating with existing endpoints).
At some point I need an asynchronous channel between microservices, so I am considering Kafka as the message broker.
In my scenario, I have a RESTful microservice that, once its job is done, pushes messages to a Kafka queue. Another microservice, which is also a web server (embedded Tomcat) with a small REST layer, would consume those messages.
The reason for considering a message queue is that even if my receiving microservice is down for some reason, all incoming messages will be added to the queue and the data flow won't be disturbed. Another reason is that Kafka is a persistent queue.
Typically, Kafka consumers are multi-threaded.
The question is: with the receiving microservice being a web server, I am concerned about creating user threads in a servlet-container-managed environment. It might work, but given that user-created threads are considered bad practice within a web application, I am confused. What would be your approach in this scenario?
Please suggest.
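One way to sidestep hand-rolled threads is to let a message-driven adapter own the consumer thread instead of the servlet container. A sketch using spring-integration-kafka's message-driven channel adapter (version-dependent; class locations and all bean, topic, and host names here are assumptions, roughly matching spring-kafka 1.x / spring-integration-kafka 2.x):

```xml
<bean id="consumerFactory"
      class="org.springframework.kafka.core.DefaultKafkaConsumerFactory">
    <constructor-arg>
        <map>
            <entry key="bootstrap.servers" value="kafka.example.com:9092"/>
            <entry key="group.id" value="receiving-service"/>
            <entry key="key.deserializer"
                   value="org.apache.kafka.common.serialization.StringDeserializer"/>
            <entry key="value.deserializer"
                   value="org.apache.kafka.common.serialization.StringDeserializer"/>
        </map>
    </constructor-arg>
</bean>

<!-- The listener container manages its own consumer thread(s) and their lifecycle,
     so the web application never calls new Thread() itself -->
<bean id="listenerContainer"
      class="org.springframework.kafka.listener.KafkaMessageListenerContainer">
    <constructor-arg ref="consumerFactory"/>
    <constructor-arg>
        <bean class="org.springframework.kafka.listener.config.ContainerProperties">
            <constructor-arg value="events"/> <!-- topic name (hypothetical) -->
        </bean>
    </constructor-arg>
</bean>

<int-kafka:message-driven-channel-adapter
        listener-container="listenerContainer"
        channel="fromKafka"/>
```

The container starts and stops with the Spring application context, which is generally considered acceptable in a servlet environment, unlike ad-hoc user threads.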

spring integration jms-outbound-channel-adapter not caching weblogic jms connection

We are using a JMS outbound channel adapter to send the messages arriving on one channel. For that we access the connection factory and the queue using a JndiTemplate. Both configs use cache=true.
But what we have noticed is that a new connection is created for every message sent, instead of one being reused from a cache.
I believe that behind the scenes this outbound channel adapter uses JmsSendingMessageHandler, which internally uses a JmsTemplate to send the message.
Can someone shed some light on how we can cache the WebLogic JMS connection?
accessing connection factory and queue using jndi template
Caching in this context means caching the connection factory object, not its connection(s).
As long as you are not using JTA transactions, you can wrap the CF you obtain from JNDI in a CachingConnectionFactory. With JTA, the app server may require a new connection for each transaction, and you would instead need to configure caching in the app server (if available).
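A sketch of that wrapping, assuming no JTA (the JNDI names and bean ids are hypothetical):

```xml
<!-- Look up the WebLogic connection factory and queue via JNDI -->
<bean id="jndiConnectionFactory"
      class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiTemplate" ref="jndiTemplate"/>
    <property name="jndiName" value="jms/MyConnectionFactory"/>
</bean>

<bean id="myQueue" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiTemplate" ref="jndiTemplate"/>
    <property name="jndiName" value="jms/MyQueue"/>
</bean>

<!-- Caches the single underlying Connection and a pool of Sessions,
     so JmsTemplate's create/close calls become cheap no-ops -->
<bean id="cachingConnectionFactory"
      class="org.springframework.jms.connection.CachingConnectionFactory">
    <property name="targetConnectionFactory" ref="jndiConnectionFactory"/>
    <property name="sessionCacheSize" value="10"/>
</bean>

<int-jms:outbound-channel-adapter channel="outboundChannel"
        connection-factory="cachingConnectionFactory"
        destination="myQueue"/>
```

JmsTemplate deliberately opens and closes a connection per send; it is the CachingConnectionFactory in the middle that turns those close() calls into returns to the cache.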

Connect to multiple ftp servers using Spring integration

I have been using Spring Integration. I want to connect to multiple FTP servers to retrieve files from remote locations. Could anyone give me a good example of how to connect to multiple FTP servers using Spring Integration?
Thank you in advance,
Udeshika
The Dynamic FTP Sample explores a technique for creating multiple parameterized contexts on the outbound side. On the inbound side, you have to make the context a child of the main context so that it has access to the channel to which it sends the files. This is discussed in this Spring Forum thread and other threads linked from there.
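If the set of servers is fixed and known up front, you don't need dynamic contexts at all: you can simply declare one session factory and one inbound adapter per server, all feeding the same channel. A static sketch (hosts, credentials, and directories are hypothetical):

```xml
<bean id="ftpSessionFactory1"
      class="org.springframework.integration.ftp.session.DefaultFtpSessionFactory">
    <property name="host" value="ftp1.example.com"/>
    <property name="username" value="user1"/>
    <property name="password" value="secret1"/>
</bean>

<bean id="ftpSessionFactory2"
      class="org.springframework.integration.ftp.session.DefaultFtpSessionFactory">
    <property name="host" value="ftp2.example.com"/>
    <property name="username" value="user2"/>
    <property name="password" value="secret2"/>
</bean>

<!-- Both adapters deliver downloaded files to the same channel -->
<int-ftp:inbound-channel-adapter session-factory="ftpSessionFactory1"
        channel="ftpFiles"
        remote-directory="/outbox"
        local-directory="/tmp/ftp1">
    <int:poller fixed-rate="5000"/>
</int-ftp:inbound-channel-adapter>

<int-ftp:inbound-channel-adapter session-factory="ftpSessionFactory2"
        channel="ftpFiles"
        remote-directory="/outbox"
        local-directory="/tmp/ftp2">
    <int:poller fixed-rate="5000"/>
</int-ftp:inbound-channel-adapter>
```

The dynamic-context technique from the sample is only needed when servers must be added or removed at runtime.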
