Logging in jms:message-driven-channel-adapter and jms:outbound-channel-adapter - spring-integration

I use jms:message-driven-channel-adapter and jms:outbound-channel-adapter in my project to get and put messages from/to IBM MQ. I need to get timestamps before and after each put and get. How could I achieve this? Please advise.
Please see my updated question below:
We need the time taken for each put and get operation. What I believe is that if I could get the timestamps in the following way, I would be able to achieve what I want:
1) At jms:message-driven-channel-adapter: note the timestamp before and after each get -> derive the time taken for each get
2) At jms:outbound-channel-adapter: note the timestamp before and after each put -> derive the time taken for each put
Please advise.
Thanks.

Well, it isn't clear what you want to have, because you can always just use System.currentTimeMillis().
On the other hand, Spring Integration maps the JMSTimestamp property of the JmsMessage to the jms_timestamp header of the incoming Message in the <jms:message-driven-channel-adapter>.
Another point: each Spring Integration Message has its own timestamp header.
So, if you switch on something like this:
<wire-tap channel="logger"/>
<logging-channel-adapter id="logger" log-full-message="true"/>
You will always see the timestamp for each message in the logs before it is sent to the channel.
UPDATE
OK. Thanks. Now it is clearer.
Well, for the outbound part (the put in your case) I can say that your solution lies with a custom ChannelInterceptor:
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptorAdapter;

public class PutTimeInterceptor extends ChannelInterceptorAdapter {

    private final Log logger = LogFactory.getLog(this.getClass());

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        logger.info("preSend time [" + System.currentTimeMillis() + "] for: " + message);
        return message;
    }

    @Override
    public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
        logger.info("postSend time [" + System.currentTimeMillis() + "] for: " + message);
    }
}
<channel id="putToJmsChannel">
    <interceptors>
        <bean class="com.my.proj.integration.PutTimeInterceptor"/>
    </interceptors>
</channel>

<jms:outbound-channel-adapter channel="putToJmsChannel"/>
Keep in mind that a ChannelInterceptor isn't stateful, so you have to correlate the preSend and postSend timestamps yourself to compute the put time for each message.
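For example, a minimal sketch (the putStartTime header name is illustrative, not a Spring Integration convention): stash the start time in a message header in preSend and read it back in postSend, so no state lives in the interceptor itself:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptorAdapter;
import org.springframework.messaging.support.MessageBuilder;

public class PutTimingInterceptor extends ChannelInterceptorAdapter {

    private static final String START_TIME_HEADER = "putStartTime"; // illustrative name

    private final Log logger = LogFactory.getLog(this.getClass());

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        // Carry the start time with the message instead of keeping state in the interceptor
        return MessageBuilder.fromMessage(message)
                .setHeader(START_TIME_HEADER, System.currentTimeMillis())
                .build();
    }

    @Override
    public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
        Long start = message.getHeaders().get(START_TIME_HEADER, Long.class);
        if (start != null) {
            logger.info("Put time: [" + (System.currentTimeMillis() - start) + "ms] for: " + message);
        }
    }
}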
Another option is the <jms:request-handler-advice-chain>, where you implement a custom AbstractRequestHandlerAdvice:
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.integration.handler.advice.AbstractRequestHandlerAdvice;
import org.springframework.messaging.Message;

public class PutTimeRequestHandlerAdvice extends AbstractRequestHandlerAdvice {

    private final Log logger = LogFactory.getLog(this.getClass());

    @Override
    protected Object doInvoke(ExecutionCallback callback, Object target, Message<?> message) throws Exception {
        long before = System.currentTimeMillis();
        Object result = callback.execute();
        logger.info("Put time: [" + (System.currentTimeMillis() - before) + "] for: " + message);
        return result;
    }
}
These are for the put only.
You can't derive the execution time for the get part, because it is a MessageListener, which is an event-driven component. When a message is in the queue you just receive it, and that's all. There is no hook to tell when the listener starts retrieving a message from the queue.
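That said, since the adapter maps JMSTimestamp to the jms_timestamp header (as mentioned above), you can at least approximate how long a message sat between being sent and being received. A minimal sketch, assuming the producer's and consumer's clocks are roughly synchronized (the channel name is illustrative):

import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.jms.JmsHeaders;
import org.springframework.messaging.Message;

public class GetLatencyLogger {

    // e.g. subscribed to a wire-tap channel so the main flow is unaffected
    @ServiceActivator(inputChannel = "getLatencyTap")
    public void logLatency(Message<?> message) {
        Long jmsTimestamp = message.getHeaders().get(JmsHeaders.TIMESTAMP, Long.class);
        if (jmsTimestamp != null) {
            // Approximate send-to-receive latency; accuracy depends on clock synchronization
            System.out.println("Approximate get latency: "
                    + (System.currentTimeMillis() - jmsTimestamp) + "ms");
        }
    }
}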

Related

Java: MQTT MessageProducerSupport to Flux

I have a simple MQTT Client that outputs received messages via IntegrationFlow:
@Bean
public MqttPahoClientFactory mqttClientFactory() {
    DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory();
    MqttConnectOptions options = new MqttConnectOptions();
    options.setServerURIs(new String[] { "tcp://test.mosquitto.org:1883" });
    factory.setConnectionOptions(options);
    return factory;
}

@Bean
public MessageProducerSupport mqttInbound() {
    MqttPahoMessageDrivenChannelAdapter adapter = new MqttPahoMessageDrivenChannelAdapter(
            "myConsumer",
            mqttClientFactory(),
            "/test/#");
    adapter.setCompletionTimeout(5000);
    adapter.setConverter(new DefaultPahoMessageConverter());
    adapter.setQos(1);
    return adapter;
}

@Bean
public IntegrationFlow mqttInFlow() {
    return IntegrationFlows.from(mqttInbound())
            .transform(p -> p + ", received from MQTT")
            .handle(logger())
            .get();
}

private LoggingHandler logger() {
    LoggingHandler loggingHandler = new LoggingHandler("INFO");
    loggingHandler.setLoggerName("siSample");
    return loggingHandler;
}
I need to pipe all received messages into a Flux though for further processing.
public Flux<String> mqttChannel() {
    ...
    return mqttFlux;
}
How can I do that? The LoggingHandler receives all messages from the IntegrationFlow. Couldn't my Flux get its input in a similar fashion, by passing it somehow to the IntegrationFlow's handle function?
The MQTT example code is taken from https://github.com/spring-projects/spring-integration-samples/blob/master/basic/mqtt/src/main/java/org/springframework/integration/samples/mqtt/Application.java
Attempt: following Artem Bilan's advice, I'm now trying to use toReactivePublisher to convert my inbound IntegrationFlow to a Flux.
public Flux<String> mqttChannel() {
    Publisher<Message<Object>> flow = IntegrationFlows.from(mqttInbound())
            .toReactivePublisher();
    Flux<String> mqttFlux = Flux.from(flow)
            .log()
            .map(i -> "TESTING: Received a MQTT message");
    return mqttFlux;
}
Running the example, I get the following error:
10:14:39.541 [MQTT Call: myConsumer] ERROR o.s.i.m.i.MqttPahoMessageDrivenChannelAdapter - Unhandled exception for GenericMessage [payload=OFF,26.70,65.00,663,-62,192.168.2.100,0.026,25,4,6,7,933,278,27,4,1,0,1580496218,730573600,1800000,1980000,1580496218,730573600,10800000,11880000, headers={mqtt_receivedRetained=true, mqtt_id=0, mqtt_duplicate=false, id=3f7565aa-ff4f-c389-d8a9-712d4f06f1cb, mqtt_receivedTopic=/083B7036697886C41D2DF2FD919143EE/MasterBedroom/Sensor/, mqtt_receivedQos=0, timestamp=1602231279537}]
Conclusion: as soon as the first message arrives, it is handled incorrectly and an exception is thrown.
Please, read this doc: https://docs.spring.io/spring-integration/docs/5.3.2.RELEASE/reference/html/reactive-streams.html#reactive-streams
It is not clear what you would like to achieve with that "my flux" and how that could look, but for your current configuration there are a couple of solutions.
You can use a FluxMessageChannel, which is already a Publisher, so you can simply use Flux.from() and subscribe to it for consuming data produced by the mentioned MqttPahoMessageDrivenChannelAdapter.
Another way is to use toReactivePublisher() on the IntegrationFlowBuilder to expose the whole flow as a reactive Publisher source. In this case, of course, you can't use the LoggingHandler, because it is one-way and makes your flow end exactly there. You may consider using a log() operator instead: https://docs.spring.io/spring-integration/docs/5.3.2.RELEASE/reference/html/dsl.html#java-dsl-log
By the way, the FluxMessageChannel is publish-subscribe, so you can have it in the flow for those logs and also use it externally for a Flux.from() subscription. All the subscribers to this channel are going to get the same message.
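Putting the first suggestion together, here is a minimal sketch (assuming the mqttInbound() bean from the question; bean and class names are illustrative) that routes the MQTT flow through a FluxMessageChannel and consumes the same channel externally as a Flux:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.FluxMessageChannel;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import reactor.core.publisher.Flux;

@Configuration
public class MqttFluxConfig {

    @Bean
    public FluxMessageChannel mqttChannel() {
        return new FluxMessageChannel();
    }

    @Bean
    public IntegrationFlow mqttInFlow() {
        return IntegrationFlows.from(mqttInbound()) // the adapter bean from the question
                .transform(p -> p + ", received from MQTT")
                .log("siSample")          // log() operator instead of a terminal LoggingHandler
                .channel(mqttChannel())   // publish into the reactive channel
                .get();
    }

    // Elsewhere: consume the same channel as a Flux
    public Flux<String> mqttFlux() {
        return Flux.from(mqttChannel())
                .map(message -> message.getPayload().toString());
    }
}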

Abstracting Spring Cloud Stream Producer and Consumer code

I have a Service that is producing and consuming messages from different Spring Cloud Stream Channels (bound to EventHub/Kafka topics). There are several such Services which are set up similarly.
The configuration looks like below
public interface MessageStreams {

    String WORKSPACE = "workspace";
    String UPLOADNOTIFICATION = "uploadnotification";
    String BLOBNOTIFICATION = "blobnotification";
    String INGESTIONSTATUS = "ingestionstatusproducer";

    @Input(WORKSPACE)
    SubscribableChannel workspaceChannel();

    @Output(UPLOADNOTIFICATION)
    MessageChannel uploadNotificationChannel();

    @Input(BLOBNOTIFICATION)
    SubscribableChannel blobNotificationChannel();

    @Output(INGESTIONSTATUS)
    MessageChannel ingestionStatusChannel();
}

@EnableBinding(MessageStreams.class)
public class EventHubStreamsConfiguration {
}
The Producer/Publisher code looks like below
@Service
@Slf4j
public class IngestionStatusEventPublisher {

    private final MessageStreams messageStreams;

    public IngestionStatusEventPublisher(MessageStreams messageStreams) {
        this.messageStreams = messageStreams;
    }

    public void sendIngestionStatusEvent() {
        log.info("Sending ingestion status event");
        MessageChannel messageChannel = messageStreams.ingestionStatusChannel();
        boolean messageSent = messageChannel.send(MessageBuilder
                .withPayload(IngestionStatusMessage.builder()
                        .correlationId("some-correlation-id")
                        .status("done")
                        .source("some-source")
                        .eventTime(OffsetDateTime.now())
                        .build())
                .setHeader("tenant-id", "some-tenant")
                .build());
        log.info("Ingestion status event sent successfully {}", messageSent);
    }
}
Similarly, I have multiple other Publishers which publish to different Event Hubs/Topics. Notice that a tenant-id header is set for each published message. This is specific to my multi-tenant application, to track the tenant context. Also notice that I am looking up the channel to publish to while sending the message.
My Consumer code looks like below
@Component
@Slf4j
public class IngestionStatusEventHandler {

    private AtomicInteger eventCount = new AtomicInteger();

    @StreamListener(TestMessageStreams.INGESTIONSTATUS)
    public void handleEvent(@Payload IngestionStatusMessage message,
            @Header(name = "tenant-id") String tenantId) throws Exception {
        log.info("New ingestion status event received: {} in Consumer: {}", message,
                Thread.currentThread().getName());
        // set the tenant context as a thread local from the header.
    }
}
Again I have several such consumers and also there is a tenant context that is set in each consumer based on the incoming tenant-id header that is sent by the Publisher.
My questions are:
How do I get rid of the boilerplate code of setting the tenant-id header in the Publisher and setting the tenant context in the Consumer, by abstracting it into a library that could be included in all the different Services that I have?
Also, is there a way of dynamically identifying the Channel based on the type of the Message being published, e.g. IngestionStatusMessage.class in the given scenario?
To set the tenant-id header in common code and to avoid copy/pasting it in every microservice, you can use a ChannelInterceptor and make it global with @GlobalChannelInterceptor and its patterns option.
See more info in Spring Integration: https://docs.spring.io/spring-integration/docs/5.3.0.BUILD-SNAPSHOT/reference/html/core.html#channel-interceptors
https://docs.spring.io/spring-integration/docs/5.3.0.BUILD-SNAPSHOT/reference/html/overview.html#configuration-enable-integration
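A minimal sketch of that idea (TenantContext is a hypothetical thread-local holder from your shared library; the pattern is deliberately broad and should be narrowed to your channels):

import org.springframework.integration.config.GlobalChannelInterceptor;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptor;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Component;

@Component
@GlobalChannelInterceptor(patterns = "*") // narrow to the channels that need the header
public class TenantHeaderInterceptor implements ChannelInterceptor {

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        // Stamp the tenant-id header once, in shared code, instead of in every publisher
        return MessageBuilder.fromMessage(message)
                .setHeaderIfAbsent("tenant-id", TenantContext.getCurrentTenant())
                .build();
    }
}

Packaged in a common library and picked up by component scanning, this removes the per-publisher header boilerplate.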
You can't make a channel selection by the payload type, because the payload type is really determined from the @StreamListener method signature.
You can try to have a general @Router with a Message<?> expectation and then return a particular channel name to route according to that request message context; see the sketch after the link below.
See https://docs.spring.io/spring-integration/docs/5.3.0.BUILD-SNAPSHOT/reference/html/message-routing.html#messaging-routing-chapter
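A hedged sketch of that general @Router idea (the inputChannel name is illustrative; the channel names come from the MessageStreams interface above):

import org.springframework.integration.annotation.Router;
import org.springframework.messaging.Message;

public class MessageTypeRouter {

    @Router(inputChannel = "outboundRouting")
    public String route(Message<?> message) {
        // Pick the destination channel from the payload type of the request message
        if (message.getPayload() instanceof IngestionStatusMessage) {
            return MessageStreams.INGESTIONSTATUS;
        }
        return MessageStreams.UPLOADNOTIFICATION;
    }
}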

What does @ServiceActivator do exactly?

I wish to understand what the @ServiceActivator annotation does, because I want to modify the message when I receive it through the service activator. In the example below, there is no message parameter I can control. Why can the handler receive a message even though I cannot see any message parameter passed in? What is the principle?
@Bean
@ServiceActivator(inputChannel = "requests")
public MessageHandler jmsMessageHandler(ActiveMQConnectionFactory connectionFactory) {
    JmsSendingMessageHandler handler =
            new JmsSendingMessageHandler(new JmsTemplate(connectionFactory));
    handler.setDestinationName("requests");
    return handler;
}
I wish I could do:
@Bean
@ServiceActivator(inputChannel = "requests")
public MessageHandler jmsMessageHandler(Message message) {
    String new_message = message.split();
}
The @ServiceActivator wraps a call to the consumer endpoint. In the case of a MessageHandler, it is used as-is, and the message from the inputChannel is passed to it. But if your code is not based on a MessageHandler and is a simple POJO method invocation instead, then everything is based on the signature of your method. In the end, that POJO method call is wrapped into a MethodInvokingMessageHandler.
In your case it must be something like this:
@ServiceActivator(inputChannel = "requests", outputChannel = "toJms")
public String jmsMessageHandler(Message<String> message) {
    // e.g. split the payload and take the first token (the Message itself has no split())
    return message.getPayload().split(" ")[0];
}
So, no @Bean, because we deal only with a POJO method invocation. The Message parameter is the incoming request message, and the returned String becomes the payload of the output message, to be processed somewhere downstream on the toJms channel.
See more info in the Reference Manual: https://docs.spring.io/spring-integration/docs/current/reference/html/#annotations

Splitter aborts during exception without processing subsequent messages

I have a requirement to split messages and process them one by one. If any of the messages fails, I would like to report it to the error channel and resume processing the next available messages.
I am using the spring cloud aws stream starter with 1.0.0-SNAPSHOT.
I wrote a sample program using a splitter:
@Bean
public MessageChannel channelSplitOne() {
    return new DirectChannel();
}

@StreamListener(INTERNAL_CHANNEL)
public void channelOne(String message) {
    if (message.equals("l")) {
        throw new RuntimeException("Error due to l");
    }
    System.out.println("Internal: " + message);
}

@Splitter(inputChannel = Sink.INPUT, outputChannel = INTERNAL_CHANNEL)
public List<Message> extractItems(Message<String> input) {
    return Arrays.stream(input.getPayload().split(""))
            .map(s -> MessageBuilder.withPayload(s).copyHeaders(input.getHeaders()).build())
            .collect(Collectors.toList());
}
When I send the message Hello, the expectation is that 'h', 'e', 'o' shall be processed, but 'l' shall be reported as an error. But here, after 'l', the processing is not resumed.
Is there any way to achieve this?
You can do that, but with the @ServiceActivator instead of @StreamListener. The first one has an adviceChain option where you can inject an ExpressionEvaluatingRequestHandlerAdvice: https://docs.spring.io/spring-integration/docs/5.0.4.RELEASE/reference/html/messaging-endpoints-chapter.html#expression-advice.
The problem is that the splitter is like a regular loop in Java, so to continue after an error we would somehow need to add a try...catch there. But that is not a splitter responsibility. Therefore we have to move such logic into the place where the error actually happens, as in the sketch below.
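A minimal sketch of that combination, reusing the question's INTERNAL_CHANNEL constant (the advice bean name and the errorChannel wiring are illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice;

@Bean
public ExpressionEvaluatingRequestHandlerAdvice continueOnErrorAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setOnFailureExpressionString("payload");  // what to send to the failure channel
    advice.setFailureChannelName("errorChannel");    // report the failed item here
    advice.setTrapException(true);                   // swallow the exception so the splitter loop continues
    return advice;
}

@ServiceActivator(inputChannel = INTERNAL_CHANNEL, adviceChain = "continueOnErrorAdvice")
public void channelOne(String message) {
    if (message.equals("l")) {
        throw new RuntimeException("Error due to l");
    }
    System.out.println("Internal: " + message);
}

With this in place, the failing 'l' items are diverted to errorChannel while the remaining characters keep flowing.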

No Messages When Obtaining Input Stream from SFTP Outbound Gateway

Sometimes No Messages When Obtaining Input Stream from SFTP Outbound Gateway
This is a follow-up question to
Use SFTP Outbound Gateway to Obtain Input Stream
The problem I was having in the previous question appears to be that I was not closing the stream, as shown in the int:service-activator. However, when I added the int:service-activator, I seemed to be forced to also add an int:poller.
However, when I added the int:poller, I noticed that sometimes the messages are null when attempting to obtain the stream. I have found that a workaround is to simply retry. I have tested with different files, and it seems that small files are adversely affected while large files are not. So, if I had to guess, there must be a race condition where the int:service-activator closes the session before I call getInputStream(), but I was hoping someone could explain whether that is what is actually going on, and whether there is a better solution than simply retrying.
Thanks!
Here is the outbound gateway configuration:
<int-ftp:outbound-gateway session-factory="ftpClientFactory"
    request-channel="inboundGetStream" command="get" command-options="-stream"
    expression="payload" remote-directory="/" reply-channel="stream">
</int-ftp:outbound-gateway>

<int:channel id="stream">
    <int:queue/>
</int:channel>

<int:poller default="true" fixed-rate="50" />

<int:service-activator input-channel="stream"
    expression="payload.toString().equals('END') ? headers['file_remoteSession'].close() : null" />
Here is the source where I obtain the InputStream:
public InputStream openFileStream(final int retryCount, final String filename, final String directory)
        throws Exception {
    InputStream is = null;
    for (int i = 1; i <= retryCount; ++i) {
        if (inboundGetStream.send(MessageBuilder.withPayload(directory + "/" + filename).build(), ftpTimeout)) {
            is = getInputStream();
            if (is != null) {
                break;
            } else {
                logger.info("Failed to obtain input stream so attempting retry " + i + " of " + retryCount);
                Thread.sleep(ftpTimeout);
            }
        }
    }
    return is;
}

private InputStream getInputStream() {
    Message<?> msgs = stream.receive(ftpTimeout);
    if (msgs == null) {
        return null;
    }
    InputStream is = (InputStream) msgs.getPayload();
    return is;
}
Update: I'll go ahead and accept the only answer, as it helped just enough to find the solution.
The accepted answer to the original question was confusing because it answered a Java question with an XML configuration solution that, while it explained the problem, didn't really provide the necessary Java technical solution. This follow-up question/answer clarifies what is going on within spring-integration and suggests what is necessary to solve it.
Final solution: to obtain and save the stream for later, I had to create a bean to hold the necessary references. The remote session is obtained from the message header.
Note: error checking and getters/setters are left out for brevity.
Use the same XML config as in the question above, but eliminate the poller and service-activator elements, as they are unnecessary and were causing the errors.
Create a new class SftpStreamSession to hold necessary references:
public class SftpStreamSession {

    private final Session<?> session;

    private final InputStream inputStream;

    public SftpStreamSession(Session<?> session, InputStream inputStream) {
        this.session = session;
        this.inputStream = inputStream;
    }

    public void close() throws IOException {
        inputStream.close();
        session.close();
    }
}
Change the openFileStream method to return an SftpStreamSession:
public SftpStreamSession openFileStream(final String filename, final String directory) throws Exception {
    SftpStreamSession sss = null;
    if (inboundGetStream.send(MessageBuilder.withPayload(directory + "/" + filename).build(), ftpTimeout)) {
        Message<?> msgs = stream.receive(ftpTimeout);
        InputStream is = (InputStream) msgs.getPayload();
        MessageHeaders mH = msgs.getHeaders();
        Session<?> session = (Session<?>) mH.get("file_remoteSession");
        sss = new SftpStreamSession(session, is);
    }
    return sss;
}
First of all, you don't need payload.toString().equals('END'), because it looks like you don't use an <int-file:splitter> in your logic.
Second, you don't need that ugly <service-activator>, because you have full access to the message in your code. You can simply obtain that file_remoteSession header, cast it to Session<?>, and call its .close() at the end of your logic.
Yes, there is a race condition, but it happens in your code.
Look: you have the stream QueueChannel. From the beginning you had a single consumer, stream.receive(ftpTimeout). But then you introduced that <int:service-activator input-channel="stream">, hence one more competing consumer. Having such a small polling interval (fixed-rate="50") indeed leads you to unexpected behavior.
