I'm new to Spring Integration. The situation is that I have to connect to a TCP server dynamically (i.e. the DNS name is generated at runtime based on some params). Because of this I'm using a Service Activator to manually create TCP connections and send messages. I've overridden CachingClientConnectionFactory to use the shared-connections concept (with single-use='false'). I was listening for messages with a TcpReceivingChannelAdapter by overriding its "onMessage" method. The problem is that the server responds with either a success or a failure (with generic messages) and no correlation ID. Is there any way to correlate a request with its response?
I tried using TcpOutboundGateway, but with that approach I get the same problem. I used TcpConnectionSupport to send messages:
// Sample code
final String correlationId = ""; // dynamic unique number generated at runtime

TcpOutboundGateway outboundGateway = new TcpOutboundGateway() {

    @Override
    public synchronized boolean onMessage(Message<?> message) {
        ByteArrayToStringConverter converter = new ByteArrayToStringConverter();
        String response = converter.convert((byte[]) message.getPayload());
        logger.info(correlationId);
        return false;
    }

};

// DefaultCachingClientConnectionFactory is our subclass of CachingClientConnectionFactory
DefaultCachingClientConnectionFactory connFactory = new DefaultCachingClientConnectionFactory();
TcpConnectionSupport con = connFactory.obtainConnection();
GenericMessage<String> msg = new GenericMessage<>("Sample Message" + correlationId);
con.registerListener(outboundGateway);
con.send(msg);
When I send multiple messages, I get the same correlationId printed in the "onMessage" method every time.
I read here that the outbound gateway will correlate messages. Please help me; maybe I'm doing something wrong.
Thanks
Unless you include correlation data in the message, you can't correlate a response to a request.
The gateway achieves this by only allowing one outstanding request on a socket at a time; hence the reply has to be for the request. That is not very useful at high volume on a shared connection, which is why the caching client connection factory was introduced: the gateway keeps a map of outstanding requests keyed by connection id.
The gateway, in conjunction with the caching client connection factory, should do what you need. However, overriding onMessage is not a good idea, because that is where the reply correlation is done.
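For illustration, a minimal Java-config sketch of that combination; the host, port, pool size, and channel name are placeholders, not values from the question:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.ip.tcp.TcpOutboundGateway;
import org.springframework.integration.ip.tcp.connection.AbstractClientConnectionFactory;
import org.springframework.integration.ip.tcp.connection.CachingClientConnectionFactory;
import org.springframework.integration.ip.tcp.connection.TcpNetClientConnectionFactory;

@Configuration
@EnableIntegration
public class TcpClientConfig {

    @Bean
    public AbstractClientConnectionFactory clientFactory() {
        // Target factory; the caching wrapper below manages connection reuse
        return new TcpNetClientConnectionFactory("some.host", 1234); // placeholders
    }

    @Bean
    public AbstractClientConnectionFactory cachingFactory(AbstractClientConnectionFactory clientFactory) {
        // Pool of shared connections; the gateway tracks outstanding requests per connection id
        return new CachingClientConnectionFactory(clientFactory, 10);
    }

    @Bean
    @ServiceActivator(inputChannel = "toTcp")
    public TcpOutboundGateway tcpOutGateway(AbstractClientConnectionFactory cachingFactory) {
        TcpOutboundGateway gateway = new TcpOutboundGateway();
        gateway.setConnectionFactory(cachingFactory);
        // No onMessage() override: the gateway itself uses it for reply correlation
        return gateway;
    }
}

Messages sent to "toTcp" then come back correlated as the gateway's reply, with no listener of your own.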
Related
We have an MQ request/reply pattern implementation.
We use an IBM MQ cluster host. The request/reply queues on both sides are linked to each other by the MQ cluster environment, as the queue managers of the different systems within the cluster talk to each other.
Our requestor code uses Spring JMS Integration, a JmsOutboundGateway, to send and receive messages.
The service provider is a mainframe application over which we have no control.
public class JmsOutboundGatewayConfig {

    @Bean
    public MessageChannel outboundRequestChannel() {
        return new DirectChannel();
    }

    @Bean
    public QueueChannel outboundResponseChannel() {
        return new QueueChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "outboundRequestChannel")
    public JmsOutboundGateway jmsTestOutboundGateway(ConnectionFactory connectionFactory) {
        JmsOutboundGateway gateway = new JmsOutboundGateway();
        gateway.setConnectionFactory(connectionFactory);
        gateway.setRequestDestinationName("REQUEST.ALIAS.CLUSTER.QUEUE");
        gateway.setReplyDestinationName("REPLY.ALIAS.CLUSTER.QUEUE");
        gateway.setReplyChannel(outboundResponseChannel());
        gateway.setRequiresReply(true);
        return gateway;
    }
}
// Requestor - sendAndReceive code
outboundRequestChannel.send(new GenericMessage<>("payload"));
Message<?> response = outboundResponseChannel.receive(10000);
Issue:
The issue we are facing is that when we send a message, the gateway also passes replyTo = queue://REPLY.ALIAS.CLUSTER.QUEUE.
The mainframe program that consumes this message is then forced to reply to that replyTo queue. It fails on the mainframe side, because the replyTo queue we send is not part of their MQ manager/environment.
I could not find a way to remove the replyTo when sending a message, as JmsOutboundGateway sets it from the "ReplyDestinationName" I configured.
Our requestor needs to set the "ReplyDestinationName", since we listen on this alias-cluster reply queue for the reply.
I looked at the channel interceptor options; I could only intercept the message to alter it, with no option to change the replyTo.
Is there a way to alter the replyTo, i.e. have the replyTo and the reply destination be different?
Is there any way to remove, or not set, the replyTo when sending a message to the request queue?
I am just wondering how to get this working in such an MQ cluster environment, where the replyTo queue has to stay what the mainframe consumer service wants, which is different from the reply destination queue we use.
Note that the replyTo is used by the mainframe service to reply back. If it is not passed, the mainframe service will use its own reply queue, which is linked to our reply-cluster-alias queue.
Any inputs appreciated.
Thanks
Saishm
Further clarification:
In our clustered MQ environment, our Spring JMS outbound gateway writes requests to "REQUEST.ALIAS.CLUSTER.QUEUE" and listens for the reply on "REPLY.ALIAS.CLUSTER.QUEUE".
So the JmsOutboundGateway sets replyTo=REPLY.ALIAS.CLUSTER.QUEUE.
The mainframe service on the other side reads the message from "REQUEST.LOCAL.QUEUE". In the cluster environment, "REQUEST.ALIAS.CLUSTER.QUEUE" and its queue manager are linked to "REQUEST.LOCAL.QUEUE" and its queue manager; this is all managed within the clustered MQ environment.
When the mainframe service consumes the request, it sees that the incoming message has a replyTo and tries to send the response to that replyTo.
The issue is that the mainframe was supposed to reply to "REPLY.LOCAL.QUEUE", which is linked to "REPLY.ALIAS.CLUSTER.QUEUE".
If there were no replyTo, it would have sent the reply to "REPLY.LOCAL.QUEUE".
From the JmsOutboundGateway I don't have any option to remove the replyTo when sending a message, or to change it to "REPLY.LOCAL.QUEUE" while still listening for the reply on "REPLY.ALIAS.CLUSTER.QUEUE".
It doesn't look like a JmsOutboundGateway will fit your IBM MQ cluster configuration and requirements. You just cannot use its replyTo feature, since we cannot bypass the JMS protocol here.
Consider using a pair of JmsSendingMessageHandler and JmsMessageDrivenEndpoint components instead. The JmsSendingMessageHandler will just send a JMS message to the REQUEST.ALIAS.CLUSTER.QUEUE and forget it. All you need is to supply a JmsHeaders.CORRELATION_ID: the JmsHeaderMapper will populate a jmsMessage.setJMSCorrelationID() property, the same way JmsOutboundGateway does by default. In this case your mainframe service is free to use its own reply queue, and presumably it will propagate our correlationId correctly.
A JmsMessageDrivenEndpoint has to be subscribed to the REPLY.ALIAS.CLUSTER.QUEUE, and you do the correlation between request and reply yourself, for example with a Map of correlationId to a Future that is fulfilled when the reply comes back.
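A rough sketch of that pair, reusing the channel and queue names from the question; the pending map, the "jmsReplies" channel, and the sendAndReceive helper are hypothetical glue, not a definitive implementation:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.jms.ChannelPublishingJmsMessageListener;
import org.springframework.integration.jms.JmsHeaders;
import org.springframework.integration.jms.JmsMessageDrivenEndpoint;
import org.springframework.integration.jms.JmsSendingMessageHandler;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;

@Configuration
public class JmsSendAndForgetConfig {

    // Correlation registry: correlationId -> future completed when the reply arrives
    private final Map<String, CompletableFuture<Message<?>>> pending = new ConcurrentHashMap<>();

    @Bean
    @ServiceActivator(inputChannel = "outboundRequestChannel")
    public JmsSendingMessageHandler jmsRequestHandler(ConnectionFactory connectionFactory) {
        // Send-and-forget: no replyTo is set on the outgoing JMS message
        JmsSendingMessageHandler handler = new JmsSendingMessageHandler(new JmsTemplate(connectionFactory));
        handler.setDestinationName("REQUEST.ALIAS.CLUSTER.QUEUE");
        return handler;
    }

    @Bean
    public JmsMessageDrivenEndpoint replyEndpoint(ConnectionFactory connectionFactory) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("REPLY.ALIAS.CLUSTER.QUEUE");
        ChannelPublishingJmsMessageListener listener = new ChannelPublishingJmsMessageListener();
        listener.setRequestChannelName("jmsReplies");
        return new JmsMessageDrivenEndpoint(container, listener);
    }

    @ServiceActivator(inputChannel = "jmsReplies")
    public void completeReply(Message<?> reply) {
        // The header mapper exposes JMSCorrelationID as JmsHeaders.CORRELATION_ID
        String correlationId = reply.getHeaders().get(JmsHeaders.CORRELATION_ID, String.class);
        CompletableFuture<Message<?>> future = correlationId == null ? null : pending.remove(correlationId);
        if (future != null) {
            future.complete(reply);
        }
    }

    // Hypothetical helper: register a future, send with a correlation id, await the reply
    public Message<?> sendAndReceive(MessageChannel outboundRequestChannel, String payload) throws Exception {
        String correlationId = UUID.randomUUID().toString();
        CompletableFuture<Message<?>> future = new CompletableFuture<>();
        pending.put(correlationId, future);
        outboundRequestChannel.send(MessageBuilder.withPayload(payload)
                .setHeader(JmsHeaders.CORRELATION_ID, correlationId)
                .build());
        return future.get(10, TimeUnit.SECONDS); // mirrors the 10-second receive timeout above
    }
}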
I have a project that partly uses TCP connections. The case is as below (I will also include a screen shot).
We have two clients, client 1 and client 2. These are conveyor belts, so if we receive data on client 1's input we should send the reply to client 2's output, and vice versa. I'm sure we can do it using Spring Integration TCP, and probably gateways. Am I approaching TCP integration correctly in this case?
I do not have a code implementation yet, but I have started to put something together.
Sounds like you are implementing a chat (or similar user-to-user) communication.
No, gateways won't help you here.
You need a TcpReceivingChannelAdapter and a TcpSendingMessageHandler connected to the same AbstractServerConnectionFactory. The TcpSendingMessageHandler is registered as a TcpSender with that connection factory, and all the sending connections are stored in its Map<String, TcpConnection> connections. When we produce a message to this MessageHandler, it consults that registry like this:
private void handleMessageAsServer(Message<?> message) {
    // We don't own the connection, we are asynchronously replying
    String connectionId = message.getHeaders().get(IpHeaders.CONNECTION_ID, String.class);
    TcpConnection connection = null;
    if (connectionId != null) {
        connection = this.connections.get(connectionId);
    }
    if (connection != null) {
        // ... and sends the message over the connection found for that id
So, on the receiving side (the TcpReceivingChannelAdapter and its sub-flow) you need to somehow ensure that you set a proper IpHeaders.CONNECTION_ID header, so that the eventual "reply" is produced to the desired client.
You can probably react to the TcpConnectionOpenEvent via an @EventListener and register some business key with the connectionId for future correlation. When you send a message, you supply the target user's business key; in the TcpReceivingChannelAdapter sub-flow you take that business key, obtain the desired connectionId from your registry, and enrich it into the IpHeaders.CONNECTION_ID header for the automatic logic in the TcpSendingMessageHandler. See the sketch below.
When a TcpConnectionCloseEvent happens, you have to remove the respective entry from your custom registry.
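A minimal sketch of such a registry and header enrichment; the channel names and the extractBusinessKey logic are hypothetical, and the hand-shake caveat mentioned below still applies:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.context.event.EventListener;
import org.springframework.integration.annotation.Transformer;
import org.springframework.integration.ip.IpHeaders;
import org.springframework.integration.ip.tcp.connection.TcpConnectionCloseEvent;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;

@Component
public class ConnectionRegistry {

    // businessKey -> connectionId, maintained from connection events / hand-shake
    private final Map<String, String> registry = new ConcurrentHashMap<>();

    public void register(String businessKey, String connectionId) {
        registry.put(businessKey, connectionId);
    }

    public String connectionIdFor(String businessKey) {
        return registry.get(businessKey);
    }

    @EventListener
    public void onClose(TcpConnectionCloseEvent event) {
        // Drop any entry pointing at the closed connection
        registry.values().remove(event.getConnectionId());
    }

    // In the TcpReceivingChannelAdapter sub-flow: target the other client by
    // setting IpHeaders.CONNECTION_ID before the TcpSendingMessageHandler
    @Transformer(inputChannel = "fromTcp", outputChannel = "toTcp")
    public Message<byte[]> route(Message<byte[]> message) {
        String targetKey = extractBusinessKey(message.getPayload());
        return MessageBuilder.fromMessage(message)
                .setHeader(IpHeaders.CONNECTION_ID, connectionIdFor(targetKey))
                .build();
    }

    private String extractBusinessKey(byte[] payload) {
        // Application-specific (hypothetical): derive the target client's key from the payload
        return new String(payload).split(":", 2)[0];
    }
}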
Since TCP/IP comes without header support, there is no out-of-the-box mechanism to implement such a correlation feature.
TcpConnectionOpenEvent alone might not be enough for you, though, since there is no business info available when the connection is established. Perhaps you would need to implement some hand-shake logic in the TcpReceivingChannelAdapter flow to distinguish a real message from the connection metadata used for registering in the custom registry.
See more info in the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/ip.html#ip-correlation
It might be also better for your use-case to look into a WebSocket support: https://docs.spring.io/spring-integration/docs/current/reference/html/web-sockets.html#web-sockets
I'm using vert.x 3.7.1, which depends on netty 4.1.34.Final. HTTP requests are being made through vertx-web.
WebClient client = WebClient.create(vertx);
HttpRequest<JsonArray> request = client.postAbs(uri)
.timeout(timeout)
.basicAuthentication(username, password)
.as(BodyCodec.jsonArray());
// HttpRequest is created once and used for several requests.
request.sendJsonObject(bodyJson, handler);
Sometimes, netty logs the following warning. It repeats every 5 seconds with a different id, and stops when the DNS resolves, I believe.
WARNING [io.netty.resolver.dns.DnsNameResolver] [id: 0xf9a6e215, L:/0.0.0.0:37175] Received a DNS response with an unknown ID: 38649
As I understand it, netty generates a random id for the DNS query datagram and stores it as the key for the DNS query context in a map. When the DNS response datagram is received, the id is retrieved and checked against the map. When the DNS process finishes, the id is removed from the map.
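Conceptually, the bookkeeping I am describing looks something like this (an illustrative sketch only, not netty's actual implementation):

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative model of the query-id table described above
public class DnsQueryTable {

    private final Map<Integer, CompletableFuture<byte[]>> inFlight = new ConcurrentHashMap<>();

    public int registerQuery(CompletableFuture<byte[]> resultFuture) {
        int id;
        do {
            id = ThreadLocalRandom.current().nextInt(0, 65536); // DNS ids are 16-bit
        } while (inFlight.putIfAbsent(id, resultFuture) != null);
        return id;
    }

    public void onResponse(int id, byte[] payload) {
        CompletableFuture<byte[]> future = inFlight.remove(id);
        if (future == null) {
            // No matching query context: the "unknown ID" case, e.g. the query
            // already timed out and was removed, or a late/duplicate datagram arrived
            System.out.println("Received a DNS response with an unknown ID: " + id);
            return;
        }
        future.complete(payload);
    }
}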
What causes a DNS Response datagram to have an unknown id? Does that log indicate some problem? Should I take it as a regular event?
I have a Function app in Azure that is triggered when an item is put on a queue. It looks something like this (greatly simplified):
public static async Task Run(string myQueueItem, TraceWriter log)
{
    using (var client = new HttpClient())
    {
        client.BaseAddress = new Uri(Config.APIUri);
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        StringContent httpContent = new StringContent(myQueueItem, Encoding.UTF8, "application/json");

        HttpResponseMessage response = await client.PostAsync("/api/devices/data", httpContent);
        response.EnsureSuccessStatusCode();

        string json = await response.Content.ReadAsStringAsync();
        ApiResponse apiResponse = JsonConvert.DeserializeObject<ApiResponse>(json);

        log.Info($"Activity data successfully sent to platform in {apiResponse.elapsed}ms. Tracking number: {apiResponse.tracking}");
    }
}
This all works great and runs pretty well. Every time an item is put on the queue, we send the data to some API on our side and log the response. Cool.
The problem happens when there's a big spike in "the thing that generates queue messages" and a lot of items are put on the queue at once. This tends to happen around 1,000 - 1,500 items in a minute. The error log will have something like this:
2017-02-14T01:45:31.692 mscorlib: Exception while executing function:
Functions.SendToLimeade. f-SendToLimeade__-1078179529: An error
occurred while sending the request. System: Unable to connect to the
remote server. System: Only one usage of each socket address
(protocol/network address/port) is normally permitted
123.123.123.123:443.
At first, I thought this was an issue with the Azure Function app running out of local sockets, as illustrated here. However, then I noticed the IP address. The IP address 123.123.123.123 (of course changed for this example) is our IP address, the one that the HttpClient is posting to. So, now I'm wondering if it is our servers running out of sockets to handle these requests.
Either way, we have a scaling issue going on here. I'm trying to figure out the best way to solve it.
Some ideas:
1. If it's a local socket limitation, the article above has an example of increasing the local port range using Req.ServicePoint.BindIPEndPointDelegate. This seems promising, but what do you do when you truly need to scale? I don't want this problem coming back in 2 years.
2. If it's a remote limitation, it looks like I can control how many messages the Functions runtime will process at once. There's an interesting article here that says you can set serviceBus.maxConcurrentCalls to 1 so that only a single message is processed at once. Maybe I could set this to a relatively low number. At some point our queue will fill up faster than we can process it, but at that point the answer is adding more servers on our end.
3. Multiple Azure Functions apps? What happens if I have more than one Azure Functions app and they all trigger on the same queue? Is Azure smart enough to divvy up the work among the Function apps, so that I could have an army of machines processing my queue, scaled up or down as needed?
4. I've also come across keep-alives. It seems to me that if I could somehow keep my socket open as queue messages flood in, it could perhaps help greatly. Is this possible, and any tips on how I'd go about doing it?
Any insight on a recommended (scalable!) design for this sort of system would be greatly appreciated!
I think the error is caused by this line: using (var client = new HttpClient())
Quoted from the Improper Instantiation antipattern article:
"this technique is not scalable. A new HttpClient object is created for each user request. Under heavy load, the web server may exhaust the number of available sockets."
I think I've figured out a solution for this. I've been running these changes for the past 6 hours and have had zero socket errors. Before, I would get these errors in large batches every 30 minutes or so.
First, I added a new class to manage the HttpClient.
public static class Connection
{
    public static HttpClient Client { get; private set; }

    static Connection()
    {
        Client = new HttpClient();
        Client.BaseAddress = new Uri(Config.APIUri);
        Client.DefaultRequestHeaders.Add("Connection", "Keep-Alive");
        Client.DefaultRequestHeaders.Add("Keep-Alive", "timeout=600");
        Client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    }
}
Now we have a static instance of HttpClient that we use for every call to the function. From my research, keeping HttpClient instances around for as long as possible is highly recommended: everything is thread safe, and HttpClient will queue up requests and optimize requests to the same host. Notice I also set the Keep-Alive headers (I think this is the default, but I figured I'd be explicit).
In my function, I just grab the static HttpClient instance like:
var client = Connection.Client;
StringContent httpContent = new StringContent(myQueueItem, Encoding.UTF8, "application/json");
HttpResponseMessage response = await client.PostAsync("/api/devices/data", httpContent);
response.EnsureSuccessStatusCode();
I haven't really done any in-depth analysis of what's happening at the socket level (I'll have to ask our IT guys if they can see this traffic on the load balancer), but I'm hoping it just keeps a single socket open to our server and makes a bunch of HTTP calls as the queue items are processed. Anyway, whatever it's doing seems to be working. Maybe someone has some thoughts on how to improve it.
If you use the Consumption plan instead of Functions on a dedicated web app, idea #3 more or less happens out of the box: Functions will detect that you have a large queue of messages and will add instances until the queue length stabilizes.
maxConcurrentCalls only applies per instance, allowing you to limit per-instance concurrency. Basically, your processing rate is maxConcurrentCalls * instanceCount.
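For reference, that per-instance setting lives in the Function app's host.json; a minimal example (the value 16 is just an illustration, not a recommendation):

{
  "serviceBus": {
    "maxConcurrentCalls": 16
  }
}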
The only way to control global throughput would be to use Functions on dedicated web apps of the size you choose. Each app will poll the queue and grab work as necessary.
The best scaling solution would improve the load balancing on 123.123.123.123 so that it can handle any number of requests from Functions scaling up/down to meet queue pressure.
Keep-alive, AFAIK, is useful for persistent connections, but function executions aren't viewed as a persistent connection. In the future we are trying to add "bring your own binding" to Functions, which would allow you to implement connection pooling if you liked.
I know the question was answered long ago, but in the meantime Microsoft have documented the anti-pattern that you were using.
Improper Instantiation antipattern
I have a website and a webjob, where the website is a one-way client and the webjob is the worker.
I use the Azure Service Bus transport for the queue.
I get the following error:
InvalidOperationException: Cannot use ourselves as timeout manager because we're a one-way client
when I try to call Bus.Defer from the website's bus.
Since Azure Service Bus has built-in support for a timeout manager, should this not work even from a one-way client?
The documentation on Bus.Defer says: "Defers the delivery of the message by attaching a header to it and delivering it to the configured timeout manager endpoint (defaults to be ourselves). When the time is right, the deferred message is returned to the address indicated by the header."
Could I fix this by setting the ReturnAddress like this:
headers.Add(Rebus.Messages.Headers.ReturnAddress, "webjob-worker");
Could I fix this by setting the ReturnAddress like this: headers.Add(Rebus.Messages.Headers.ReturnAddress, "webjob-worker");
Yes :)
The problem is this: when you await bus.Defer a message with Rebus, it defaults to returning the message to the input queue of the sender.
When you're a one-way client, you don't have an input queue, and thus there is no way for you to receive the message after the timeout has elapsed.
Setting the return address fixes this, although I admit the solution does not exactly reek of elegance. A nicer API would be if Rebus had a Defer method on its routing API, which could be called like this:
var routingApi = bus.Advanced.Routing;
await routingApi.Defer(recipient, TimeSpan.FromSeconds(10), message);
but unfortunately it does not have that method at the moment.
To sum it up: Yes, setting the return address explicitly on the deferred message makes a one-way client capable of deferring messages.
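For completeness, a sketch of the workaround from the one-way client, assuming the Defer overload that accepts optional headers (the queue name is the one from the question):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Rebus.Bus;
using Rebus.Messages;

public static class OneWayClientDefer
{
    public static async Task DeferToWorker(IBus bus, object message)
    {
        // Tell the timeout manager to deliver the deferred message to the
        // webjob's input queue instead of back to us (we have no input queue)
        var headers = new Dictionary<string, string>
        {
            [Headers.ReturnAddress] = "webjob-worker"
        };

        await bus.Defer(TimeSpan.FromSeconds(10), message, headers);
    }
}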