Spring Integration TCP project - spring-integration

I have a project, part of which uses a TCP connection; the case is as described below (I will also include a screenshot).
We have two clients, client 1 and client 2. These are conveyor belts, so if we receive data on client 1's input we should send the reply to client 2's output, and vice versa. I'm sure we can do it using Spring Integration TCP, probably with gateways. Am I approaching TCP integration correctly for this case?
I do not have a code implementation yet, but I have started to put something together.

Sounds like you are implementing a chat (or similar user-to-user) communication.
No, gateways won't help you here.
You need to have a TcpReceivingChannelAdapter and a TcpSendingMessageHandler connected to the same AbstractServerConnectionFactory. The TcpSendingMessageHandler is registered as a TcpSender with that connection factory, and all the open connections are stored in its Map<String, TcpConnection> connections. When we produce a message to this MessageHandler, it consults that registry like this:
private void handleMessageAsServer(Message<?> message) {
    // We don't own the connection, we are asynchronously replying
    String connectionId = message.getHeaders().get(IpHeaders.CONNECTION_ID, String.class);
    TcpConnection connection = null;
    if (connectionId != null) {
        connection = this.connections.get(connectionId);
    }
    if (connection != null) {
        // ... and sends the message over the connection found by that id
    }
    ...
}
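As a rough wiring sketch (the port, channel names, and bean names here are placeholders of mine, not from the question), both endpoints simply point at the same server connection factory:

@Bean
public AbstractServerConnectionFactory serverConnectionFactory() {
    return new TcpNetServerConnectionFactory(9999);
}

@Bean
public TcpReceivingChannelAdapter receivingAdapter(AbstractServerConnectionFactory serverConnectionFactory) {
    TcpReceivingChannelAdapter adapter = new TcpReceivingChannelAdapter();
    adapter.setConnectionFactory(serverConnectionFactory);
    adapter.setOutputChannelName("fromTcp");
    return adapter;
}

@Bean
@ServiceActivator(inputChannel = "toTcp")
public TcpSendingMessageHandler sendingHandler(AbstractServerConnectionFactory serverConnectionFactory) {
    TcpSendingMessageHandler handler = new TcpSendingMessageHandler();
    handler.setConnectionFactory(serverConnectionFactory);
    return handler;
}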
So, on the receiving side (the TcpReceivingChannelAdapter and its sub-flow) you need to ensure that you set a proper IpHeaders.CONNECTION_ID header, so that the eventual reply is produced to the desired client.
You can probably react to the TcpConnectionOpenEvent via an @EventListener and register some business key together with the connectionId for future correlation. When you send a message, you supply the target user's business key; in the TcpReceivingChannelAdapter sub-flow you take that business key, obtain the desired connectionId from your registry, and enrich it into the IpHeaders.CONNECTION_ID header so the automatic logic in the TcpSendingMessageHandler can do the rest.
When a TcpConnectionCloseEvent happens, you have to remove the respective entry from your custom registry.
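A minimal sketch of such a custom registry, assuming a hypothetical clientKey business key and a made-up "targetClient" header (nothing here is provided by the framework out of the box):

@Component
public class TcpConnectionRegistry {

    private final Map<String, String> connectionIdsByClientKey = new ConcurrentHashMap<>();

    // call this once you know which client a connection belongs to
    public void register(String clientKey, String connectionId) {
        this.connectionIdsByClientKey.put(clientKey, connectionId);
    }

    public String connectionIdFor(String clientKey) {
        return this.connectionIdsByClientKey.get(clientKey);
    }

    @EventListener
    public void onClose(TcpConnectionCloseEvent event) {
        this.connectionIdsByClientKey.values().remove(event.getConnectionId());
    }
}

In the receiving sub-flow the correlation then boils down to a header enricher, for example:

.enrichHeaders(h -> h.headerFunction(IpHeaders.CONNECTION_ID,
        m -> registry.connectionIdFor((String) m.getHeaders().get("targetClient"))))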
Since TCP/IP comes without header support, there is no out-of-the-box mechanism to implement such a correlation feature.
That said, the TcpConnectionOpenEvent might not be enough for you, since there is no business info available at the moment the connection is established. Perhaps you would need to implement some handshake logic in the TcpReceivingChannelAdapter flow to distinguish a real message from connection metadata used for registering in the custom registry.
See more info in the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/ip.html#ip-correlation
It might also be better for your use case to look into the WebSocket support: https://docs.spring.io/spring-integration/docs/current/reference/html/web-sockets.html#web-sockets

Related

Using Control Bus EIP in Spring Integration to start/stop channels dynamically

I am interested in using Spring Integration to fetch files from various endpoints (FTP servers, email inboxes, S3, etc.) and load them into my system (essentially, ETL).
There are times when I will want these channels active and running, and other times when I will want them paused/stopped. Meaning, even if there are files available at the source, I do not want the channel consuming the data and doing anything with it.
Is a control bus an appropriate start/stop solution here:
@Bean
public IntegrationFlow controlBusFlow() {
    return IntegrationFlow.from("controlBus")
            .controlBus()
            .get();
}
If so, how would I stop/restart a specific channel (route between an S3 bucket and the rest of my system) using the Java DSL/API? And if not, then what is the recommended practice/EIP to apply here?
Yes, the Control Bus is exactly the pattern and tool designed for your goal: https://www.enterpriseintegrationpatterns.com/ControlBus.html.
To use it, you send messages to the input channel of that control bus endpoint. The payload of the message must be a command to perform some control activity on an endpoint; typically we call start and stop.
So, let's imagine you have an S3 source polling channel adapter:
@Bean
IntegrationFlow s3Flow(S3InboundFileSynchronizingMessageSource s3InboundFileSynchronizingMessageSource) {
    return IntegrationFlow.from(s3InboundFileSynchronizingMessageSource, e -> e.id("myS3SourceEndpoint"))
            ...;
}
So, to stop that myS3SourceEndpoint via the Control Bus, you need to send a message with the payload "@myS3SourceEndpoint.stop()".
Note that we are not talking about message channels or message sources here: the active components in the flow are really endpoints.
UPDATE
The Control Bus component utilizes a Command Message pattern. So, you need to build the respective message and send it to the input channel of that control bus endpoint. Something like this is OK:
@Autowired
MessageChannel controlBus;
...
this.controlBus.send(new GenericMessage<>("@myS3SourceEndpoint.stop()"));
You can use a MessagingTemplate.convertAndSend() if you don't want to create the message yourself. Or you can expose a higher-level API via a @MessagingGateway interface.
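For example, a gateway along these lines (the interface and method names are only an illustration) hides the message creation completely:

@MessagingGateway(defaultRequestChannel = "controlBus")
public interface ControlBusGateway {

    void send(String command);
}

...
controlBusGateway.send("@myS3SourceEndpoint.stop()");
controlBusGateway.send("@myS3SourceEndpoint.start()");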
You can find everything in the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/index.html

Seeking an understanding of ServiceStack.Redis: IRedisClient.PublishMessage vs IMessageQueueClient.Publish

I am having a hard time separating IRedisClient.PublishMessage from IMessageQueueClient.Publish, and I realize I must be mixing something up.
ServiceStack gives us the option to listen for pub/sub broadcasts like this:
static IRedisSubscription _subscription;
static IRedisClient redisClientSub;
static int received = 0;

static void ReadFromQueue()
{
    redisClientSub = redisClientManager.GetClient();
    _subscription = redisClientSub.CreateSubscription();
    _subscription.OnMessage = (channel, msg) =>
    {
        try
        {
            received++;
        }
        catch (Exception ex)
        {
        }
    };
    Task.Run(() => _subscription.SubscribeToChannels("Test"));
}
Looks nice, straightforward. But what about the producer?
When looking at the classes available, I thought that one could use either IRedisClient.PublishMessage(string toChannel, string message) or IMessageQueueClient.Publish(string queueName, IMessage message).
redisClient.PublishMessage("Test", json);
// or:
myMessageQueueClient.Publish("Test", new Message<CoreEvent>(testReq));
In both cases, you specify the channel name yourself. This is the behaviour I am seeing:
- The subscriber above only receives the message if I use IRedisClient.PublishMessage(string toChannel, string message) and never if I use IMessageQueueClient.Publish(string queueName, IMessage message).
- If I publish using IRedisClient.PublishMessage, I expected the "Test" channel to be populated (when viewed with a Redis browser), but it is not. I never see any trace of the queue (say I don't start the subscription, but producers add messages).
- If I publish using IMessageQueueClient.Publish(string queueName, IMessage message), the channel "Test" is created and the messages are persisted there, but never popped/fetched-and-deleted.
I want to understand the difference between the two. I have looked at source code and read all I can about it, but I haven't found any documentation regarding IRedisClient.PublishMessage.
Mythz answered this on the ServiceStack forum, here.
He writes:
These clients should not be used interchangeably; you should only be using ServiceStack MQ clients to send MQ Messages or the Message MQ Message wrapper.
The redis subscription is a low-level API to create a Redis Pub/Sub subscription; a more useful higher-level API is the Managed Pub/Sub Server, which wraps the pub/sub subscription behind a managed thread.
Either way, MQ Server is only designed to process messages from MQ clients; if you're going to implement your own messaging implementation, use your own messages & redis clients, not the MQ clients or MQ Message class.
and
No, IRedisClient (& ServiceStack.Redis) APIs are for Redis Server; the PublishMessage API sends the redis PUBLISH command. IRedisSubscription creates a Redis Pub/Sub subscription; see the Redis docs to learn how Redis Pub/Sub works. The ServiceStack.Redis library and all its APIs are just for Redis Server, it doesn't contain any ServiceStack.Messaging MQ APIs.
So just use ServiceStack.Redis for your custom Redis Pub/Sub subscription implementation, i.e. don't use any ServiceStack.Messaging APIs, which are for ServiceStack MQ only.

ClientWebSocketContainer - can it be used on the client side to create a websocket connection?

The ClientWebSocketContainer Spring class can provide a WebSocket connection session to a remote endpoint. However, if an attempt is made to re-establish a closed connection (after a failed attempt) by using the ClientWebSocketContainer stop(), start(), and then getSession() methods, the connection is established but the ClientWebSocketContainer thinks it isn't connected, because of the openConnectionException set during the failed attempt.
@Override
public void onFailure(Throwable t) {
    logger.error("Failed to connect", t);
    ClientWebSocketContainer.this.openConnectionException = t;
    ClientWebSocketContainer.this.connectionLatch.countDown();
}
Should I be able to use the ClientWebSocketContainer in this fashion or should I create my own client connection manager?
I think it's just a bug, some kind of omission in the ClientWebSocketContainer logic.
I've just raised a JIRA on the matter. Will be fixed today.
Meanwhile, could you give us more information about your task?
The ClientWebSocketContainer is based on the ConnectionManagerSupport, another implementation of which is WebSocketConnectionManager. So, consider using the latter for obtaining the session.
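For example, something along these lines (the URL and handler are placeholders) obtains the session through a plain WebSocketConnectionManager:

StandardWebSocketClient client = new StandardWebSocketClient();

WebSocketHandler handler = new TextWebSocketHandler() {

    @Override
    public void afterConnectionEstablished(WebSocketSession session) {
        // the session is handed to the handler once the connection is open
    }

};

WebSocketConnectionManager connectionManager =
        new WebSocketConnectionManager(client, handler, "ws://localhost:8080/some-endpoint");
connectionManager.start();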
If you use the Spring Integration WebSocket adapters, you don't have a choice other than implementing your own ClientWebSocketContainer variant. It can certainly be based on the existing one.

Spring Integration - TCP - Response Correlation

I'm new to Spring Integration. The situation is that I have to connect to a TCP server dynamically (i.e. the DNS name will be generated at runtime based on some params). Because of this, I'm using a Service Activator to manually create TCP connections and send messages. I've overridden CachingClientConnectionFactory to make use of the shared-connections concept (with single-use='false'). I was listening for messages using a TcpReceivingChannelAdapter, overriding its "onMessage" method. The problem is that the server responds with either a success or a failure (with generic messages) and no correlation ID. Is there any way to correlate the request with the response?
I tried using TcpOutboundGateway, but with this approach I also get the same problem. I used TcpConnectionSupport to send messages:
// Sample code.
final String correlationId = ""; // dynamic unique number
TcpOutboundGateway outboundGateway = new TcpOutboundGateway() {

    public synchronized boolean onMessage(Message<?> message) {
        ByteArrayToStringConverter converter = new ByteArrayToStringConverter();
        String response = converter.convert((byte[]) message.getPayload());
        logger.info(correlationId);
        return false;
    }
};
// DefaultCachingClientConnectionFactory is a subclass of CachingClientConnectionFactory.
DefaultCachingClientConnectionFactory connFactory = new DefaultCachingClientConnectionFactory();
TcpConnectionSupport con = connFactory.obtainConnection();
GenericMessage<String> msg = new GenericMessage<String>("Sample Message" + correlationId);
con.registerListener(outboundGateway);
con.send(msg);
When I send multiple messages, I get the same correlationId printed in the "onMessage" method every time.
I read here that the Outbound Gateway will correlate messages. Please help me; maybe I'm doing something wrong.
Thanks
Unless you include correlation data in the message, you can't correlate a response to a request.
The gateway achieves this by only allowing one outstanding request on a socket at a time; hence the reply has to be for that request. This is not very useful at high volume with a shared connection, which is why the caching client connection factory was introduced. The gateway keeps a map of outstanding requests based on the connection id.
The gateway, in conjunction with the caching client connection factory, should do what you need. However, overriding onMessage is not a good idea, because that's where the reply correlation is done.
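A rough configuration sketch of that combination (the host, port, pool size, and channel name are placeholders, not values from the question):

@Bean
public AbstractClientConnectionFactory clientConnectionFactory() {
    return new TcpNetClientConnectionFactory("some.host", 1234);
}

@Bean
public CachingClientConnectionFactory cachingConnectionFactory(AbstractClientConnectionFactory clientConnectionFactory) {
    // pool of shared connections; the gateway correlates replies per connection
    return new CachingClientConnectionFactory(clientConnectionFactory, 10);
}

@Bean
@ServiceActivator(inputChannel = "toTcp")
public TcpOutboundGateway tcpOutboundGateway(CachingClientConnectionFactory cachingConnectionFactory) {
    TcpOutboundGateway gateway = new TcpOutboundGateway();
    gateway.setConnectionFactory(cachingConnectionFactory);
    return gateway;
}

Messages sent to "toTcp" then get their replies correlated back without any custom onMessage override.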

How can services written in Java communicate with a zeromq broker written in C

I have written a request-reply broker using zeromq and the C programming language. The broker routes client requests to the appropriate services, and then routes the reply back to the client. The services are written in Java.
Can someone please explain how to have the services communicate with the broker? I am sure that this must be a common scenario, but I don't have much experience, so can someone please help me make my code inter-operable?
Please assume that the services will not be zeromq-aware. Is node.js to be used in such a scenario? Will I have to write an HTTP front end?
Here's one way you can do it using async PUSH/PULL sockets. I'm pseudo-coding this, so fill in the blanks yourself.
Assuming the Java services are POJOs residing in their own process, let's say we have a simple service with no zmq dependencies:
public class MyJavaService {

    public Object invokeService(String params) {
        // business logic goes here
        return "some result";
    }
}
Now we build a Java delegate layer that pulls messages in from the broker, delegates requests to the Java service methods, and returns the response on a separate socket:
// receive requests on this socket
Socket pull = ctx.createSocket(ZMQ.PULL);
pull.connect("tcp://localhost:5555");

// respond on this socket
Socket push = ctx.createSocket(ZMQ.PUSH);
push.connect("tcp://localhost:5556");

while (true) {
    ZMsg msg = ZMsg.recvMsg(pull);
    // assume the msg has 2 frames:
    // one for the service to invoke,
    // the other with the arguments
    String svcToInvoke = msg.popString();
    String svcArgs = msg.popString();
    if ("MyJavaService".equals(svcToInvoke)) {
        ZMsg respMsg = new ZMsg();
        respMsg.push(String.valueOf(new MyJavaService().invokeService(svcArgs)));
        respMsg.send(push);
    }
}
On the broker side, just create the PUSH/PULL sockets to communicate with the Java services layer (I'm not a C++ programmer, so forgive me):
int main() {
    zmq::context_t context(1);

    // requests to the Java services layer go out on this socket
    zmq::socket_t push(context, ZMQ_PUSH);
    push.bind("tcp://localhost:5555");

    // responses from the Java services layer come back on this socket
    zmq::socket_t pull(context, ZMQ_PULL);
    pull.bind("tcp://localhost:5556");

    // code here to handle request/response
    // to/from clients
    ...
}
Using PUSH/PULL works for this approach, but the ideal approach is to use ROUTER on the server and DEALER on the client for fully asynchronous communication; an example is here.
Hope it helps!
