Spring Cloud Stream - Functions - How to manually acknowledge rabbitmq message? - spring-integration

I'm using Spring Cloud Stream with the RabbitMQ binder.
Using @StreamListener, I could manually acknowledge RabbitMQ messages by having the Channel and delivery tag injected into the method as follows:
@StreamListener(target = MySink.INPUT1)
public void listenForInput1(Message<String> message,
        @Header(AmqpHeaders.CHANNEL) Channel channel,
        @Header(AmqpHeaders.DELIVERY_TAG) Long deliveryTag) throws IOException {
    log.info(" received new message [" + message.toString() + "] ");
    channel.basicAck(deliveryTag, false);
}
I am now trying to achieve the same using functions:
@Bean
public Consumer<Message<String>> sink1() {
    return message -> {
        System.out.println("******************");
        System.out.println("At Sink1");
        System.out.println("******************");
        System.out.println("Received message " + message.getPayload());
    };
}
How do I get the Channel object in here so that I can acknowledge the message with the delivery tag?
I am able to get the delivery tag from the headers, but I cannot get the Channel object.

I was able to figure it out:
@Bean
public Consumer<Message<String>> sink1() {
    return message -> {
        System.out.println("******************");
        System.out.println("At Sink1");
        System.out.println("******************");
        System.out.println("Received message " + message.getPayload());
        Channel channel = message.getHeaders().get(AmqpHeaders.CHANNEL, Channel.class);
        Long deliveryTag = message.getHeaders().get(AmqpHeaders.DELIVERY_TAG, Long.class);
        try {
            channel.basicAck(deliveryTag, false);
        } catch (IOException e) {
            e.printStackTrace();
        }
    };
}
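Note that manual acks typically also require the binding's consumer to be configured with acknowledgeMode MANUAL in the RabbitMQ binder properties; otherwise the binder acknowledges on its own. As a hedged variation of the code above (process() is a hypothetical business method, not from the original post), a failed message can be rejected and requeued via basicNack instead of being acked unconditionally:
// Hedged sketch, assuming acknowledgeMode MANUAL on the binding.
@Bean
public Consumer<Message<String>> sink1() {
    return message -> {
        Channel channel = message.getHeaders().get(AmqpHeaders.CHANNEL, Channel.class);
        Long deliveryTag = message.getHeaders().get(AmqpHeaders.DELIVERY_TAG, Long.class);
        try {
            process(message.getPayload());                    // hypothetical business logic
            channel.basicAck(deliveryTag, false);             // acknowledge on success
        } catch (Exception e) {
            try {
                channel.basicNack(deliveryTag, false, true);  // reject and requeue on failure
            } catch (IOException ioe) {
                ioe.printStackTrace();
            }
        }
    };
}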

Related

Spring Integration Default Response for Jms inboundGateway

I am seeing the exception below when trying to send a default (constructed) response from a Jms inboundGateway after an exception from the downstream call. We extract the failedMessage headers from the ErrorMessage and then set the constructed response as the payload. The replyChannel header matches the header of the initially logged message:
2023-01-26 20:34:32,623 [mqGatewayListenerContainer-1] WARN o.s.m.c.GenericMessagingTemplate$TemporaryReplyChannel - be776858594e7c79 Reply message received but the receiving thread has exited due to an exception while sending the request message:
ErrorMessage [payload=org.springframework.messaging.MessageHandlingException: Failed to send or receive; nested exception is java.io.UncheckedIOException: java.net.SocketTimeoutException: Connect timed out, failedMessage=GenericMessage [payload=NOT_PRINTED, headers={replyChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel#2454562d, b3=xxxxxxxxxxxx, nativeHeaders={}, errorChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel#2454562d, sourceTransacted=false, jms_correlationId=ID:xxxxxxxxxx, id=xxxxxxxxxx, jms_expiration=36000, timestamp=1674750867614}]
Code:
return IntegrationFlows.from(Jms.inboundGateway(mqGatewayListenerContainer)
                .defaultReplyQueueName(replyQueue)
                .replyChannel(mqReplyChannel)
                .errorChannel(appErrorChannel)
                .replyTimeout(mqReplyTimeoutSeconds * 1000L))
        // log
        .log(DEBUG, m -> "Request Headers: " + m.getHeaders() + ", Message: " + m.getPayload())
        // transform with required response headers
        .transform(Message.class, m -> MessageBuilder.withPayload(m.getPayload())
                .setHeader(ERROR_CHANNEL, m.getHeaders().get(ERROR_CHANNEL))
                .setHeader(REPLY_CHANNEL, m.getHeaders().get(REPLY_CHANNEL))
                .setHeader(CORRELATION_ID, m.getHeaders().get(MESSAGE_ID))
                .setHeader(EXPIRATION, mqReplyTimeoutSeconds * 1000L)
                .setHeader(MSG_HDR_SOURCE_TRANSACTED, transacted)
                .build())
        .get();

return IntegrationFlows.from(appErrorChannel())
        .publishSubscribeChannel(pubSubSpec -> pubSubSpec
                .subscribe(sf -> sf.channel(globalErrorChannel)))
        .<MessagingException, Message<MessagingException>>transform(AppMessageUtil::getFailedMessageWithoutHeadersAsPayload)
        .transform(p -> "Failure")
        .get();
public static Message<MessagingException> getFailedMessageAsPayload(final MessagingException messagingException) {
    var failedMessage = messagingException.getFailedMessage();
    var failedMessageHeaders = Objects.isNull(failedMessage) ? null : failedMessage.getHeaders();
    return MessageBuilder.withPayload(messagingException)
            .copyHeaders(failedMessageHeaders)
            .build();
}
Since you perform the processing of the request message on the same thread, it is blocked on a send and therefore we just re-throw an exception as is:
try {
    doSend(channel, requestMessage, sendTimeout);
}
catch (RuntimeException ex) {
    tempReplyChannel.setSendFailed(true);
    throw ex;
}
And as you see we mark that tempReplyChannel as failed on a send operation.
So, the replyChannel header correlated with that mqReplyChannel is out of use. If you remove it altogether, everything is OK. But you also cannot reply back with an Exception, since the framework treats it as an error to re-throw back to the listener container:
if (errorFlowReply != null && errorFlowReply.getPayload() instanceof Throwable) {
    rethrow((Throwable) errorFlowReply.getPayload(), "error flow returned an Error Message");
}
So, here is a solution:
@SpringBootApplication
public class So75249125Application {

    public static void main(String[] args) {
        SpringApplication.run(So75249125Application.class, args);
    }

    @Bean
    IntegrationFlow jmsFlow(ConnectionFactory connectionFactory) {
        return IntegrationFlow.from(Jms.inboundGateway(connectionFactory)
                        .requestDestination("testDestination")
                        .errorChannel("appErrorChannel"))
                .transform(payload -> {
                    throw new RuntimeException("intentional");
                })
                .get();
    }

    @Bean
    IntegrationFlow errorFlow() {
        return IntegrationFlow.from("appErrorChannel")
                .transform(So75249125Application::getFailedMessageAsPayload)
                .get();
    }

    public static Message<String> getFailedMessageAsPayload(MessagingException messagingException) {
        var failedMessage = messagingException.getFailedMessage();
        var failedMessageHeaders = failedMessage.getHeaders();
        return MessageBuilder.withPayload("failed")
                .copyHeaders(failedMessageHeaders)
                .build();
    }
}
and unit test:
@SpringBootTest
class So75249125ApplicationTests {

    @Autowired
    JmsTemplate jmsTemplate;

    @Test
    void errorFlowRepliesCorrectly() throws JMSException {
        Message reply = this.jmsTemplate.sendAndReceive("testDestination",
                session -> session.createTextMessage("test"));
        assertThat(reply.getBody(String.class)).isEqualTo("failed");
    }
}
Or even better like this:
public static String getFailedMessageAsPayload(MessagingException messagingException) {
    var failedMessage = messagingException.getFailedMessage();
    return "Request for '" + failedMessage.getPayload() + "' has failed";
}
and this test:
@Test
void errorFlowRepliesCorrectly() throws JMSException {
    String testData = "test";
    Message reply = this.jmsTemplate.sendAndReceive("testDestination",
            session -> session.createTextMessage(testData));
    assertThat(reply.getBody(String.class)).isEqualTo("Request for '" + testData + "' has failed");
}

Receive messages from a channel by some event (Spring Integration DSL) [duplicate]

I have a channel that stores messages. When new messages arrive and the server has not yet processed all the messages still in the queue, I need to clear the queue (for example, by rerouting the old data into another channel). For this I used a router, but the problem is that when a new message arrives, not only the old messages but also the new ones are rerouted into the other channel. New messages must remain in the queue. How can I solve this problem?
This is my code:
@Bean
public IntegrationFlow integerFlow() {
    return IntegrationFlows.from("input")
            .bridge(e -> e.poller(Pollers.fixedDelay(500, TimeUnit.MILLISECONDS, 1000).maxMessagesPerPoll(1)))
            .route(r -> {
                if (flag) {
                    return "mainChannel";
                } else {
                    return "garbageChannel";
                }
            })
            .get();
}

@Bean
public IntegrationFlow outFlow() {
    return IntegrationFlows.from("mainChannel")
            .handle(m -> {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(m.getPayload() + "\tmainFlow");
            })
            .get();
}

@Bean
public IntegrationFlow outGarbage() {
    return IntegrationFlows.from("garbageChannel")
            .handle(m -> System.out.println(m.getPayload() + "\tgarbage"))
            .get();
}
The flag value is changed through a @Gateway by pressing the "q" and "e" keys.
I would suggest you take a look at the purge() API of the QueueChannel:
/**
 * Remove any {@link Message Messages} that are not accepted by the provided selector.
 * @param selector The message selector.
 * @return The list of messages that were purged.
 */
List<Message<?>> purge(@Nullable MessageSelector selector);
This way, with a custom MessageSelector, you can remove the old messages from the queue; consult the timestamp message header to decide which messages count as old. With the result of this method you can then do whatever you need to do with the purged messages.
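For example, a minimal sketch (assuming "input" is a QueueChannel bean and that "old" means enqueued before the moment the purge is triggered; the channel names are taken from the question):
// Hedged sketch: keep only messages newer than the cutoff; everything older is
// purged and rerouted to the garbage channel.
public void purgeOldMessages(QueueChannel input, MessageChannel garbageChannel) {
    long cutoff = System.currentTimeMillis();
    List<Message<?>> purged = input.purge(message -> {
        Long timestamp = message.getHeaders().getTimestamp();
        return timestamp != null && timestamp >= cutoff;   // accepted messages stay in the queue
    });
    purged.forEach(garbageChannel::send);                  // old messages go to the garbage flow
}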

How to handle both String and HttpRequest using a ChannelHandler in Netty in Java?

I want to handle two different clients. One is a simple TCP client that sends String packets; the other is an HTTP client that sends HttpRequest messages. I am a beginner in Netty and don't know how handlers flow through pipelines.
This is my server coding:
public class TCPServer {

    int port;

    public static void main(String[] args) {
        new TCPServer().start();
    }

    public void start() {
        port = 1222;
        EventLoopGroup producer = new NioEventLoopGroup();
        EventLoopGroup consumer = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap()
                    .option(ChannelOption.SO_BACKLOG, 1024)
                    .group(producer, consumer)              // separate event loop groups for parent (acceptor) and child (I/O) channel events
                    .channel(NioServerSocketChannel.class)  // select the type of server channel
                    .handler(new LoggingHandler(LogLevel.INFO))
                    .childHandler(new ServerAdapterInitializer()); // configure the child channel pipeline
            System.out.println("Server started");
            bootstrap.bind(port).sync().channel().closeFuture().sync(); // start the server and block until the server socket is closed
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            producer.shutdownGracefully();
            consumer.shutdownGracefully();
        }
    }
}
This is my serverInitializer:
public class ServerAdapterInitializer extends ChannelInitializer<SocketChannel> { // special handler that configures the registered channel's pipeline

    @Override
    protected void initChannel(SocketChannel channel) throws Exception { // called once the channel has been registered
        ChannelPipeline pipeline = channel.pipeline();
        pipeline.addLast("decoder", new StringDecoder()); // channel inbound handler
        pipeline.addLast("encoder", new StringEncoder());
        pipeline.addLast("handler", new TCPServerHandler());
    }
}
And this is my handler, which is meant to handle both HttpRequest and String messages. But my handler never handles the HttpRequest packets.
class TCPServerHandler extends SimpleChannelInboundHandler<Object> {

    private static final byte[] CONTENT = { 'H', 'e', 'l', 'l', 'o', ' ', 'W', 'o', 'r', 'l', 'd' };
    private static final ChannelGroup channels = new DefaultChannelGroup("tasks", GlobalEventExecutor.INSTANCE);

    @Override
    public void channelRead0(ChannelHandlerContext ctx, Object msg) throws Exception {
        if (msg instanceof HttpRequest) {
            System.out.println("http request");
            HttpRequest req = (HttpRequest) msg;
            boolean keepAlive = HttpUtil.isKeepAlive(req);
            FullHttpResponse response = new DefaultFullHttpResponse(req.protocolVersion(), OK,
                    Unpooled.wrappedBuffer(CONTENT));
            response.headers()
                    .set(CONTENT_TYPE, TEXT_PLAIN)
                    .setInt(CONTENT_LENGTH, response.content().readableBytes());
            if (keepAlive) {
                if (!req.protocolVersion().isKeepAliveDefault()) {
                    response.headers().set(CONNECTION, KEEP_ALIVE);
                }
            } else {
                // Tell the client we're going to close the connection.
                response.headers().set(CONNECTION, CLOSE);
            }
            ChannelFuture f = ctx.write(response);
            if (!keepAlive) {
                f.addListener(ChannelFutureListener.CLOSE);
            }
        }
        if (msg instanceof String) {
            System.out.println("String request");
            String arg1 = (String) msg;
            Channel currentChannel = ctx.channel();
            if (arg1.equals("quit")) {
                System.out.println("[INFO] - " + currentChannel.remoteAddress() + " is quitting... ");
            } else {
                System.out.println("[INFO] - " + currentChannel.remoteAddress() + " - " + arg1);
                currentChannel.writeAndFlush("Server Said Hii " + arg1);
            }
        }
    }
}
I don't think it is possible to configure the same server bootstrap to handle both HTTP requests and raw String messages. You need two server bootstraps (one for HTTP, one for String messages) each with its own pipeline. You already have the decoder/encoder for String message handling.
EventLoopGroup producer = new NioEventLoopGroup();
EventLoopGroup consumer = new NioEventLoopGroup();
ServerBootstrap httpSb = new ServerBootstrap();
ServerBootstrap strSb = new ServerBootstrap();
httpSb.group(producer, consumer).bind(<port for http>).<other methods>...
strSb.group(producer, consumer).bind(<port for strings>).<other methods>...
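Put together, a minimal sketch of that layout could look like the following (a sketch only: the ports 8080/1222 and the HttpServerInitializer name are illustrative assumptions; the String side reuses the ServerAdapterInitializer from the question):
// Hedged sketch of the two-bootstrap layout, e.g. inside a start() method like the question's.
EventLoopGroup boss = new NioEventLoopGroup();
EventLoopGroup worker = new NioEventLoopGroup();
try {
    ServerBootstrap httpSb = new ServerBootstrap()
            .group(boss, worker)
            .channel(NioServerSocketChannel.class)
            .childHandler(new HttpServerInitializer());        // HTTP pipeline (see below)
    ServerBootstrap strSb = new ServerBootstrap()
            .group(boss, worker)
            .channel(NioServerSocketChannel.class)
            .childHandler(new ServerAdapterInitializer());     // String pipeline from the question
    Channel httpChannel = httpSb.bind(8080).sync().channel();  // HTTP port
    Channel strChannel = strSb.bind(1222).sync().channel();    // String port
    httpChannel.closeFuture().sync();
    strChannel.closeFuture().sync();
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
} finally {
    boss.shutdownGracefully();
    worker.shutdownGracefully();
}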
For HTTP, you need to add the HttpServerCodec and HttpObjectAggregator handlers to be able to read FullHttpRequest from the channel and write FullHttpResponse into the channel.
(The aggregator is optional; it saves you the work of combining fragmented incoming HTTP data into a single (full) HTTP request, and of writing a combined (full) HTTP response into the channel.)
In the bootstrap for HTTP:
ch.pipeline().addLast("httpcodec" , new HttpServerCodec());
ch.pipeline().addLast("httpaggregator", new HttpObjectAggregator(512 * 1024));
ch.pipeline().addLast("yourhandler" , new YourHttpRequestHandler());
Example handler for FullHttpRequest processing:
public class YourHttpRequestHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg_arg) {
        FullHttpRequest msg = (FullHttpRequest) msg_arg;
        System.out.println("URI: " + msg.getUri());
        System.out.println("method: " + msg.getMethod().toString());
        System.out.println("protocol version: " + msg.getProtocolVersion());
        System.out.println("header1: " + msg.headers().get("header1"));
        System.out.println("header2: " + msg.headers().get("header2"));
        System.out.println("header3: " + msg.headers().get("header3"));
        System.out.println("content: " + msg.content().toString(CharsetUtil.UTF_8));
    } // end read
} // end handler
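This example handler only logs the request. In practice you would also write a reply and release the aggregated message; a hedged sketch of what the end of channelRead could additionally do (the "ok" body is just a placeholder):
// Sketch only: reply to the client and release the reference-counted request.
FullHttpResponse response = new DefaultFullHttpResponse(
        msg.protocolVersion(), HttpResponseStatus.OK,
        Unpooled.copiedBuffer("ok", CharsetUtil.UTF_8));
response.headers()
        .set(HttpHeaderNames.CONTENT_TYPE, "text/plain")
        .setInt(HttpHeaderNames.CONTENT_LENGTH, response.content().readableBytes());
ctx.writeAndFlush(response);
msg.release(); // the aggregated FullHttpRequest holds a reference-counted buffer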

Stop renaming file if data processing fails while streaming remote directory file

I am reading files from a remote directory using SFTP. I am able to get the file as a stream using the outbound gateway and even move it to an archive folder.
I am processing the data in the file, and if there is an issue in the data I throw an error. I do not want to rename the file if an error is thrown while processing the data; how can I achieve that? It would also be very helpful to get some good practices for error handling with Spring Integration.
.handle(Sftp.outboundGateway(sftpSessionFactory(), GET, "payload.remoteDirectory + payload.filename")
        .options(STREAM)
        .temporaryFileSuffix("_reading"))
.handle(readData(), c -> c.advice(afterReading()))
.enrichHeaders(h -> h
        .headerExpression(FileHeaders.RENAME_TO, "headers[file_remoteDirectory] + 'archive/' + headers[file_remoteFile]")
        .headerExpression(FileHeaders.REMOTE_FILE, "headers[file_remoteFile]")
        .header(FileHeaders.REMOTE_DIRECTORY, "headers[file_remoteDirectory]"))
.handle(Sftp.outboundGateway(sftpSessionFactory(), MV, "headers[file_remoteDirectory]+headers[file_remoteFile]")
        .renameExpression("headers['file_renameTo']"))
.get();
@Bean
public ExpressionEvaluatingRequestHandlerAdvice afterReading() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setSuccessChannelName("successReading.input");
    advice.setOnSuccessExpressionString("payload + ' was successful streamed'");
    advice.setFailureChannelName("failureReading.input");
    advice.setOnFailureExpressionString("payload + ' was bad, with reason: ' + #exception.cause.message");
    advice.setTrapException(true);
    advice.setPropagateEvaluationFailures(true);
    return advice;
}

@Bean
public IntegrationFlow successReading() {
    return f -> f.log();
}

@Bean
public IntegrationFlow failureReading() {
    return f -> f.log(ERROR);
}
public GenericHandler readData() {
    return new GenericHandler() {

        @Override
        public Object handle(Object o, Map map) {
            InputStream file = (InputStream) o;
            String fileName = (String) map.get(REMOTE_FILE);
            try {
                // processing data
            } catch (Exception e) {
                return new SftpException(500, String.format(
                        "Error while processing the file %s because of Error: %s and reason %s",
                        fileName, e.getMessage(), e.getCause()));
            }
            Closeable closeable = (Closeable) map.get(CLOSEABLE_RESOURCE);
            if (closeable != null) {
                try {
                    closeable.close();
                    file.close();
                } catch (Exception e) {
                    logger.error(String.format(
                            "Session didn't get closed after reading the stream data for file %s and error %s",
                            fileName, e.getMessage()));
                }
            }
            return map;
        }
    };
}
Updated
Add an ExpressionEvaluatingRequestHandlerAdvice to the .handle() endpoint: .handle(readData(), e -> e.advice(...)).
The final supplied advice class is the o.s.i.handler.advice.ExpressionEvaluatingRequestHandlerAdvice. This advice is more general than the other two advices. It provides a mechanism to evaluate an expression on the original inbound message sent to the endpoint. Separate expressions are available to be evaluated, after either success or failure. Optionally, a message containing the evaluation result, together with the input message, can be sent to a message channel.
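One point worth noting, as a sketch under assumptions rather than the asker's exact code: the advice only takes the failure path if the handler actually throws. If readData() throws instead of returning the SftpException, the advice evaluates the failure expression, routes to failureReading, and (with trapException(true)) produces no output message, so the MV gateway downstream never renames the file:
// Hedged sketch: throw on bad data so afterReading() routes to failureReading
// and the rename step is skipped. process() is a hypothetical helper.
public GenericHandler<InputStream> readData() {
    return (payload, headers) -> {
        String fileName = (String) headers.get(FileHeaders.REMOTE_FILE);
        try {
            process(payload);                                 // hypothetical data processing
        } catch (Exception e) {
            throw new IllegalStateException(
                    "Error while processing file " + fileName, e);  // triggers the advice's failure channel
        }
        return headers;                                       // success: flow continues to the rename step
    };
}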

JMS Header not getting stored into Spring Integration message header

I have an incoming message from an ActiveMQ queue, and the message is being delivered properly. I need to access the JMS property x-cutoffrule in my Spring Integration flow, but the value of cutoffrule in the handle section always comes back as null. My code is below:
@Bean
public JmsHeaderMapper sampleJmsHeaderMapper() {
    return new DefaultJmsHeaderMapper() {

        @Override
        public Map<String, Object> toHeaders(javax.jms.Message jmsMessage) {
            Map<String, Object> headers = super.toHeaders(jmsMessage);
            try {
                headers.put("cutoffrule", jmsMessage.getStringProperty("x-cutoffrule"));
            } catch (JMSException e) {
                e.printStackTrace();
            }
            return headers;
        }
    };
}
@Bean
public IntegrationFlow jmsMessageDrivenFlow(JmsHeaderMapper sampleJmsHeaderMapper) {
    return IntegrationFlows
            .from(Jms.messageDriverChannelAdapter(jmsMessagingTemplate.getConnectionFactory())
                    .destination(integrationProps.getIncomingRequestQueue())
                    .errorChannel(errorChannel())
                    .setHeaderMapper(sampleJmsHeaderMapper))
            .handle((payload, headers) -> {
                incomingPayload = payload;
                logger.debug("cutoffrule: " + headers.get("cutoffrule"));
                return payload;
            })
            .handle(message -> {
                logger.debug("Message was successfully processed");
            })
            .get();
}
I thought the DefaultJmsHeaderMapper would map all JMS properties into the Spring Integration message. What am I missing?
The best way to understand what's wrong is to debug the code, or at least log everything.
The best place for that is your DefaultJmsHeaderMapper extension.
The DefaultJmsHeaderMapper does map all incoming properties, but it does so with getObjectProperty(), not getStringProperty() as in your code:
Enumeration<?> jmsPropertyNames = jmsMessage.getPropertyNames();
if (jmsPropertyNames != null) {
    while (jmsPropertyNames.hasMoreElements()) {
        String propertyName = jmsPropertyNames.nextElement().toString();
        try {
            String headerName = this.toHeaderName(propertyName);
            headers.put(headerName, jmsMessage.getObjectProperty(propertyName));
        }
        catch (Exception e) {
            if (logger.isWarnEnabled()) {
                logger.warn("error occurred while mapping JMS property '"
                        + propertyName + "' to Message header", e);
            }
        }
    }
}
So, your x-cutoffrule should be mapped exactly into the x-cutoffrule header.
See Andriy's comment, too.
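In other words, with the default mapping you can read the property under its original name; a minimal sketch of the handle step under that assumption:
// Hedged sketch: the property is mapped as-is, so look it up as "x-cutoffrule"
// rather than "cutoffrule".
.handle((payload, headers) -> {
    Object cutoffRule = headers.get("x-cutoffrule");
    logger.debug("cutoffrule: " + cutoffRule);
    return payload;
})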
