I am writing my own MQTT connector for Kafka. I know Confluent has an MQTT connector, but it is not open source, so I decided to write my own. I have a problem with sending messages to Kafka: I can receive MQTT messages but can't send them to Kafka. Basically, when a message comes from the MQTT broker, the messageArrived method is triggered and I add the message to a BlockingQueue, like this:
private final BlockingQueue<SourceRecord> queue = new LinkedBlockingQueue<>();

@Override
public void messageArrived(String topic, MqttMessage message) throws Exception {
    log.info("Message arrived : " + message.toString());
    queue.add(new SourceRecord(null, null, kafkaTopic, null,
            Schema.STRING_SCHEMA, String.valueOf(message.getId()), // key must be a String to match the schema
            Schema.BYTES_SCHEMA, message.getPayload()));
}
And this is my poll method:
@Override
public List<SourceRecord> poll() throws InterruptedException {
    List<SourceRecord> records = new ArrayList<>();
    records.add(queue.take());
    return records;
}
So when I debug this code, the console output repeats as follows:
[2019-03-13 17:45:37,113] INFO WorkerSourceTask{id=mqtt-connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask)
[2019-03-13 17:45:40,385] INFO Message arrived : {"timestamp":1552488339305,"values":[{"id":"Simulation.Simulator.Temperature","v":1,"q":true,"t":1552488336813}]} (deneme.MqttSourceTask)
[2019-03-13 17:45:45,386] INFO Message arrived : {"timestamp":1552488344305,"values":[{"id":"Simulation.Simulator.Temperature","v":2,"q":true,"t":1552488341809}]} (deneme.MqttSourceTask)
[2019-03-13 17:45:48,204] INFO WorkerSourceTask{id=mqtt-connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask)
It seems the MQTT messages arrive but are never sent to Kafka. The two methods are called asynchronously, by different threads.
When I print queue.hashCode() to the console in both poll and messageArrived, the hash codes are different; it acts like two separate queue objects. I don't understand why. What causes this problem?
Can anyone tell me what is wrong?
I am processing messages in a batch. I defined an advice for the MessageHandler:
ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
advice.setFailureChannelName("errorChannel");
Once there is an error processing one or more messages from the payload, the service activator is triggered:
@ServiceActivator(inputChannel = "errorChannel")
public void handleFailure(Message<?> message) {
    ExpressionEvaluatingRequestHandlerAdvice.MessageHandlingExpressionEvaluatingAdviceException adviceException =
            (ExpressionEvaluatingRequestHandlerAdvice.MessageHandlingExpressionEvaluatingAdviceException) message.getPayload();
    // throw CustomException
}
So the above gives me the error, but not which item from the payload caused it (there can be more than one).
What is the correct way to retry the payload (one by one)?
Should I somehow 'split' the payload? If yes, what is the way to do this?
I tried replacing
@ServiceActivator(inputChannel = "errorChannel")
public void handleFailure(Message<?> message) {
with
@Splitter(inputChannel = "errorChannel", outputChannel = "outboundChannel")
public List<Message<?>> handleFailure2(Message<?> message)
But I couldn't split the messages from there, as the message's type is ExpressionEvaluatingRequestHandlerAdvice.MessageHandlingExpressionEvaluatingAdviceException.
Spring Integration deals with a Message as a unit of work. It doesn't matter to the framework whether the payload is a batch of data or not: the service throws an exception, and the whole failed message is sent to the error channel with a single exception.
You may consider using an AggregateMessageDeliveryException in your service to gather all the errors into a single exception and then throw it for processing in that error handler. There you can inspect the aggregate to determine which items in the batch failed.
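For example, a minimal sketch of such a service (the batchChannel name, the Item type, and the process() call are hypothetical; only the aggregate-exception pattern is the point):
@ServiceActivator(inputChannel = "batchChannel")
public void processBatch(Message<List<Item>> message) {
    List<Exception> failures = new ArrayList<>();
    for (Item item : message.getPayload()) {
        try {
            process(item); // hypothetical per-item processing
        }
        catch (Exception e) {
            // Wrap each failed item in its own MessagingException so the
            // error handler can recover exactly which items failed.
            failures.add(new MessagingException(MessageBuilder.withPayload(item).build(), e));
        }
    }
    if (!failures.isEmpty()) {
        throw new AggregateMessageDeliveryException(message,
                failures.size() + " item(s) failed", failures);
    }
}
In handleFailure, that aggregate should then be reachable as the cause of the advice's exception, and getAggregatedExceptions() exposes the per-item failures.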
If you'd like to go the splitter way, then it is going to be a general splitter for your batch, and your service then processes items one by one. Not sure if that is what you'd like to do. However, the splitter can be configured with an ExecutorChannel as an output to process items in parallel.
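A hedged sketch of that splitter route (the channel names are assumptions); with one item per message, any failure now identifies exactly one item:
@Splitter(inputChannel = "batchChannel", outputChannel = "itemChannel")
public List<?> splitBatch(Message<List<?>> message) {
    return message.getPayload(); // each element becomes its own message
}

@ServiceActivator(inputChannel = "itemChannel")
public void handleItem(Object item) {
    // process a single item; an exception here carries exactly one item
}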
I have a Spring Integration flow which produces messages that should be kept around, waiting for an appropriate consumer to come along and consume them.
@Bean
public IntegrationFlow messagesPerCustomerFlow() {
    return IntegrationFlows
            .from(WebFlux.inboundChannelAdapter("/messages/{customer}")
                    .requestMapping(r -> r.methods(HttpMethod.POST))
                    .requestPayloadType(JsonNode.class)
                    .headerExpression("customer", "#pathVariables.customer"))
            .channel(messagesPerCustomerQueue())
            .get();
}

@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerSpec poller() {
    return Pollers.fixedRate(100);
}

@Bean
public QueueChannel messagesPerCustomerQueue() {
    return MessageChannels.queue().get();
}
The messages in the queue should be delivered as server-sent events via http as shown below.
The PublisherSubscription is just a holder for the Publisher and the IntegrationFlowRegistration; the latter is used to destroy the dynamically created flow when it is no longer needed. (Note that the incoming message for the GET has no content, which is not handled properly at the moment by the WebFlux integration, hence a small workaround is necessary to get access to the path variable shoved into the customer header.)
@Bean
public IntegrationFlow eventMessagesPerCustomer() {
    return IntegrationFlows
            .from(WebFlux.inboundGateway("/events/{customer}")
                    .requestMapping(m -> m.produces(TEXT_EVENT_STREAM_VALUE))
                    .headerExpression("customer", "#pathVariables.customer")
                    .payloadExpression("''")) // needed to make handle((p, h) -> ...) work
            .log()
            .handle((p, h) -> {
                String customer = h.get("customer").toString();
                PublisherSubscription<JsonNode> publisherSubscription =
                        subscribeToMessagesPerCustomer(customer);
                return Flux.from(publisherSubscription.getPublisher())
                        .map(Message::getPayload)
                        .doFinally(signalType -> publisherSubscription.unsubscribe());
            })
            .get();
}
The above request for server-sent events dynamically registers a flow which subscribes to the queue channel on demand, with a selective consumer realized by a filter with throwExceptionOnRejection(true). Following the spec for the MessageHandler chain, that should ensure the message is offered to all consumers until one accepts it.
public PublisherSubscription<JsonNode> subscribeToMessagesPerCustomer(String customer) {
    IntegrationFlowBuilder flow = IntegrationFlows.from(messagesPerCustomerQueue())
            .filter("headers.customer=='" + customer + "'",
                    filterEndpointSpec -> filterEndpointSpec.throwExceptionOnRejection(true));
    Publisher<Message<JsonNode>> messagePublisher = flow.toReactivePublisher();
    IntegrationFlowRegistration registration = integrationFlowContext.registration(flow.get())
            .register();
    return new PublisherSubscription<>(messagePublisher, registration);
}
This construct works in principle, but with the following issues:
Messages sent to the queue while there are no subscribers at all lead to a MessageDeliveryException: Dispatcher has no subscribers for channel 'application.messagesPerCustomerQueue'
Messages sent to the queue while no matching subscriber is present yet lead to an AggregateMessageDeliveryException: All attempts to deliver Message to MessageHandlers failed.
What I want is that the message remains in the queue and is repeatedly offered to all subscribers until it is either consumed or expires (a proper selective consumer). How can I do that?
note that the incoming message for the GET has no content, which is not handled properly ATM by the Webflux integration
I don't understand this concern.
The WebFluxInboundEndpoint works with this algorithm:
if (isReadable(request)) {
    ...
}
else {
    return (Mono<T>) Mono.just(exchange.getRequest().getQueryParams());
}
where the GET method really goes to the else branch, and the payload of the message to send is a MultiValueMap. Also, we recently fixed with you the problem for the POST, which is already released as well, in version 5.0.5: https://jira.spring.io/browse/INT-4462
Dispatcher has no subscribers
This can't happen on a QueueChannel in principle. There is no dispatcher there at all; it is just a queue, and the sender offers a message to be stored. You are missing something else to share with us. But let's call things by their proper names: the messagesPerCustomerQueue is not a QueueChannel in your application.
UPDATE
Regarding:
What I want is that the message remains in the queue and is repeatedly offered to all subscribers until it is either consumed or expires (a proper selective consumer)
The only option we see is a PollableJmsChannel based on embedded ActiveMQ, to honor a TTL for messages. As a consumer of this queue you should have a PublishSubscribeChannel with setMinSubscribers(1), to make the MessagingTemplate throw a MessageDeliveryException when there is no subscriber yet. This way the JMS transaction will be rolled back and the message will return to the queue for the next polling cycle.
The problem with the in-memory QueueChannel is that there is no transactional redelivery; a message, once polled from that queue, is going to be lost.
Another option, similar to the (transactional) JMS one, is a JdbcChannelMessageStore for the QueueChannel. Although this way we don't have TTL functionality...
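A minimal sketch of the JMS-backed option (assuming the Spring Integration JMS DSL, an embedded ActiveMQ ConnectionFactory, a hypothetical queue name, and a transactional poller configured elsewhere):
@Bean
public MessageChannel messagesPerCustomerQueue(ConnectionFactory connectionFactory) {
    // Backed by an ActiveMQ queue, so per-message TTL is honored.
    return Jms.pollableChannel(connectionFactory)
            .destination("messagesPerCustomer") // hypothetical queue name
            .get();
}

@Bean
public PublishSubscribeChannel messagesPerCustomerFanout() {
    PublishSubscribeChannel channel = new PublishSubscribeChannel();
    // With no subscriber present, sending throws MessageDeliveryException,
    // the JMS transaction rolls back, and the message stays on the queue.
    channel.setMinSubscribers(1);
    return channel;
}

@Bean
public IntegrationFlow bridgeToSubscribers(ConnectionFactory connectionFactory) {
    return IntegrationFlows.from(messagesPerCustomerQueue(connectionFactory))
            .channel(messagesPerCustomerFanout())
            .get();
}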
I am running the cometd-demo server locally using the Maven Jetty plugin, as shown in the docs (https://docs.cometd.org/current/reference/), and trying to subscribe and publish something on a broadcast channel, using the Groovy script shown below:
ClientSessionChannel.MessageListener mylistener = new Mylistener();
def myurl = "http://localhost:8080/cometd/"
MyHttpClient httpClient = new MyHttpClient();
httpClient.start()
Map<String, Object> options = new HashMap<String, Object>();
ClientTransport transport = new LongPollingTransport(options, httpClient);
BayeuxClient client = new BayeuxClient(myurl, transport)
println 'client started on URL : '+ client.getURL()
client.handshake ( new ClientSessionChannel.MessageListener() {
public void onMessage(ClientSessionChannel channel, Message message) {
if (message.isSuccessful()) {
println 'Handshake Message : ' + message
}
}
})
boolean handshakecheck = client.waitFor(1000, BayeuxClient.State.CONNECTED);
println 'Handshake check : '+ handshakecheck
client.batch( new Runnable() {
public void run() {
client.getChannel("/foo/hello").subscribe(
new ClientSessionChannel.MessageListener() {
public void onMessage(ClientSessionChannel channel,
Message message) {
println "subscribed : "+ message
}
})
}
});
The program output:
client started on URL : http://localhost:8080/cometd/
Handshake Message : [minimumVersion:1.0, clientId:fv0ozxw8cb5e11vtlwpacm7afp, supportedConnectionTypes:[websocket, long-polling, callback-polling], advice:[reconnect:retry, interval:0, maxInterval:10000, timeout:20000], channel:/meta/handshake, id:1, version:1.0, successful:true]
Handshake check : true
Here I can't get the subscribed message as in the code. But the server log prints as shown below:
2018-02-12 20:30:32,687 qtp2069584894-17 [ INFO][examples.CometDDemoServlet] Monitored Subscribe from fv0ozxw8cb5e11vtlwpacm7afp,last=0,expire=0 for /foo/hello
Update 1:
Also, I can't subscribe with the callback method; I get the message [channel:/meta/subscribe, id:4, subscription:/foo/hello, error:403:denied_by_not_granting:create_denied, successful:false]. I don't know what I am doing wrong; I am just following the documentation steps. Thanks in advance.
The ClientSessionChannel.MessageListener that you pass to the subscribe(...) method will be invoked whenever a message is published on channel /foo/hello.
Your program never publishes a message on that channel, so the listener is never invoked, and therefore your code never prints subscribed.
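For example (assuming the client from your script), publishing anything on the channel would make the subscription listener fire:
Map<String, Object> data = new HashMap<String, Object>();
data.put("greeting", "hello");
client.getChannel("/foo/hello").publish(data);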
You want to double-check which version of the subscribe() method you want to use, as there are two.
The single-parameter version takes a listener, while the two-parameter version takes a listener and a callback.
Guessing from your code, you want the subscribed log line to be in the callback, not in the listener, so you just need to change your code to use the two-parameter version of the subscribe() method, as sketched below.
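A sketch of that two-parameter variant (assuming a recent CometD Java client, where the callback is a ClientSession.MessageListener receiving the subscribe reply; check the javadoc of your version):
client.getChannel("/foo/hello").subscribe(
        new ClientSessionChannel.MessageListener() {
            public void onMessage(ClientSessionChannel channel, Message message) {
                // fires for every message published on /foo/hello
                System.out.println("received : " + message);
            }
        },
        new ClientSession.MessageListener() {
            public void onMessage(Message message) {
                // fires once, with the subscribe reply
                System.out.println("subscribed : " + message);
            }
        });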
Also, pay attention to the fact that if the JVM exits at the end of your Groovy script, that client will be gone and will never receive any message.
I'm stuck writing a service that simply receives TCP stream messages using spring-integration. I'm using this class to send a test message:
class TCPClient {

    public static void main(String[] args) throws Exception {
        Socket clientSocket = new Socket("localhost", 9999);
        clientSocket.getOutputStream().write("XYZ".getBytes());
        clientSocket.close();
    }
}
Server code that should receive the message:
@EnableIntegration
@IntegrationComponentScan
@Configuration
public class TcpServer {

    @Bean
    public AbstractServerConnectionFactory serverCF() {
        return new TcpNetServerConnectionFactory(9999);
    }

    @Bean
    public TcpInboundGateway tcpInGate(AbstractServerConnectionFactory conFactory) {
        TcpInboundGateway inGate = new TcpInboundGateway();
        inGate.setConnectionFactory(conFactory);
        SubscribableChannel channel = new DirectChannel();
        // Planning to set a custom message handler here, to process messages later
        channel.subscribe(message -> System.out.println(convertMessage(message)));
        inGate.setRequestChannel(channel);
        return inGate;
    }

    private String convertMessage(Message<?> message) {
        return message == null || message.getPayload() == null
                ? null
                : new String((byte[]) message.getPayload());
    }
}
Problem: when the client code is run, the server logs the following exception:
TcpNetConnection : Read exception localhost:46924:9999:6d00ac25-b5c8-47ac-9bdd-edb6bc09fe55 IOException:Socket closed during message assembly
I am able to receive a message when I send it using telnet, or when I use a simple Java-only TCP server implementation. How can I configure spring-integration so it can read the message sent by this client?
The default deserializer expects messages to be terminated with CRLF (which is what Telnet sends, and why that works).
Send "XYZ\r\n".getBytes().
Or change the deserializer to a ByteArrayRawDeserializer, which uses the socket close to terminate the message.
See the documentation about (de)serializers here.
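For the second option, a minimal sketch of the question's serverCF() bean adjusted to use the raw deserializer (ByteArrayRawDeserializer lives in org.springframework.integration.ip.tcp.serializer):
@Bean
public AbstractServerConnectionFactory serverCF() {
    TcpNetServerConnectionFactory factory = new TcpNetServerConnectionFactory(9999);
    // Treat an orderly socket close as the end of a message, so the
    // client does not need to send a CRLF terminator.
    factory.setDeserializer(new ByteArrayRawDeserializer());
    return factory;
}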
TCP is a streaming protocol; this means that some structure has to be provided to data transported over TCP, so the receiver can demarcate the data into discrete messages. Connection factories are configured to use (de)serializers to convert between the message payload and the bits that are sent over TCP. This is accomplished by providing a deserializer and serializer for inbound and outbound messages respectively. A number of standard (de)serializers are provided.
The ByteArrayCrlfSerializer converts a byte array to a stream of bytes followed by carriage return and linefeed characters (\r\n). This is the default (de)serializer and can be used with telnet as a client, for example.
...
The ByteArrayRawSerializer converts a byte array to a stream of bytes and adds no additional message demarcation data; with this (de)serializer, the end of a message is indicated by the client closing the socket in an orderly fashion. When using this serializer, message reception will hang until the client closes the socket, or a timeout occurs; a timeout will NOT result in a message.
My requirements are stated below:
I have to develop a wrapper service on top of a queue, so I was going through some message queues (ActiveMQ, Apollo, Kafka), but decided to proceed with ActiveMQ to match our use cases. Now the requirements are as follows:
1) A RESTful API through which different publishers will publish to the queue; based on the clientId, a queue will be selected.
2) A consumer will consume messages through the RESTful API in batches, asking for something like "give me 10 messages from the queue".
Now the service should provide 10 messages if there are 10; if there are fewer, or zero, it will respond accordingly. After receiving the messages, the client will process them and send back an acknowledgement through a different RESTful URI. Upon receiving that acknowledgement, the MQService should commit or roll back the messages on the queue.
To do this, in the MQService layer I have used a cache, where I keep the JMS Connection and Session objects until the acknowledgement is received or the TTL expires.
In order to retrieve messages in batches and send them back to the client, I have created a multi-threaded consumer, so that for a 5-message batch request the service layer will create 5 threads, each with its own Connection and Session object (as described in ActiveMQ's multiple consumers on a queue: http://activemq.apache.org/multiple-consumers-on-a-queue.html).
Basic use-case:
MQ (Broker) [A] --> Wrapper (MQService) [B] --> Client [C]
Note: [B] is a RESTful service with a JMS consumer implemented in it. It keeps the Connection and Session objects in a cache.
[C] requests [B] to give 3 messages.
[B] must fetch 3 messages, if available in the queue, wrap them in batchmsgFormat, and send them to [C].
[C] processes the messages and sends a success/failed acknowledgement to [B] through the /send-ack URI.
Upon receiving the ack from [C], [B] will commit the JMS session and close the Session and Connection objects. It will also evict them from the cache.
The above workflow is working fine with single-message fetching.
But the queue hangs on JMS MessageConsumer.receive() when trying to fetch messages with multiple consumers using multithreading. ...
Here is the JMS consumer code in the MQService layer:
public BatchMessageFormat getConsumeMsg(final String clientId, final Integer batchSize) throws Exception {
    BatchMessageFormat batchmsgFormat = new BatchMessageFormat();
    List<MessageFormat> msgdetails = new ArrayList<MessageFormat>();
    List<Future<MessageFormat>> futuremsgdetails = new ArrayList<Future<MessageFormat>>();
    if (batchSize != null) {
        Integer msgCount = getMsgCount(clientId, batchSize);
        for (int batchconnect = 0; batchconnect < msgCount; batchconnect++) {
            FutureTask<MessageFormat> task = new FutureTask<MessageFormat>(new Callable<MessageFormat>() {
                @Override
                public MessageFormat call() throws Exception {
                    return consumeBatchMsg(clientId, batchSize);
                }
            });
            futuremsgdetails.add(task);
            Thread t = new Thread(task);
            t.start();
        }
        for (Future<MessageFormat> msg : futuremsgdetails) {
            msgdetails.add(msg.get());
        }
    }
    batchmsgFormat.setMsgDetails(msgdetails);
    return batchmsgFormat;
}
Message fetching:
private MessageFormat consumeBatchMsg(String clientId, Integer batchSize) throws JMSException, IOException {
    MessageFormat msgFormat = new MessageFormat();
    Connection qC = ConnectionUtil.getConnection();
    qC.start();
    Session session = qC.createSession(true, Session.SESSION_TRANSACTED);
    Destination destination = createQueue(clientId, session);
    MessageConsumer consumer = session.createConsumer(destination);
    Message message = consumer.receive(2000);
    if (message instanceof TextMessage) {
        TextMessage textMessage = (TextMessage) message;
        msgFormat.setMessageID(textMessage.getJMSMessageID());
        msgFormat.setMessage(textMessage.getText());
        CacheObject cacheValue = new CacheObject();
        cacheValue.setConnection(qC);
        cacheValue.setSession(session);
        cacheValue.setJmsQueue(destination);
        MQCache.instance().add(textMessage.getJMSMessageID(), cacheValue);
    }
    consumer.close();
    return msgFormat;
}
Acknowledgement and session closing:
public String getACK(String clientId, String msgId, String ack) throws JMSException {
    if (MQCache.instance().get(msgId) != null) {
        Connection connection = MQCache.instance().get(msgId).getConnection();
        Session session = MQCache.instance().get(msgId).getSession();
        if (ack.equalsIgnoreCase("SUCCESS")) {
            session.commit();
        } else {
            session.rollback();
        }
        session.close();
        connection.close();
        MQCache.instance().evictCache(msgId);
        return "Accepted";
    } else {
        return "Rejected";
    }
}
Has anyone worked on a similar scenario, or can you please shed some light on this? Is there any other way to implement this batch message fetching as well as the client failure handling?
Try setting the prefetch limit to 0, as below:
ConnectionFactory connectionFactory =
        new ActiveMQConnectionFactory("tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=0");
I'll give a few pointers to help code this logic better.
I'm assuming you are using pure JMS 1.1 as much as possible. Ensure that you have one place where you get a connection from a pool or create one; you need not do that inside a thread, you can do it outside. Sessions must be created inside a thread and shouldn't be shared. This will impact the logic in the consumeBatchMsg() function.
Secondly, it's simpler to use one thread to consume all the messages of a given batchSize. I see that you are using a transacted session, so you can do one commit after getting all the messages of the batch, as sketched below.
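A hedged sketch of that simpler shape, reusing the helpers from the question (ConnectionUtil, createQueue(), MessageFormat); note it commits right after receiving the batch, whereas your design would instead keep the session cached until the client's ack:
private List<MessageFormat> consumeBatch(String clientId, int batchSize) throws JMSException {
    List<MessageFormat> batch = new ArrayList<>();
    Connection connection = ConnectionUtil.getConnection(); // assumed pooled/shared
    connection.start();
    Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
    try {
        Destination destination = createQueue(clientId, session);
        MessageConsumer consumer = session.createConsumer(destination);
        for (int i = 0; i < batchSize; i++) {
            Message message = consumer.receive(2000);
            if (!(message instanceof TextMessage)) {
                break; // queue drained, or a non-text message arrived
            }
            TextMessage textMessage = (TextMessage) message;
            MessageFormat msgFormat = new MessageFormat();
            msgFormat.setMessageID(textMessage.getJMSMessageID());
            msgFormat.setMessage(textMessage.getText());
            batch.add(msgFormat);
        }
        consumer.close();
        session.commit(); // single commit for the whole batch
    }
    finally {
        session.close();
    }
    return batch;
}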
If you really want to take the complicated route of having multiple consumers on a queue (probably slightly better performance), you can use Java's CountDownLatch or CyclicBarrier, set to batchSize, as the trigger. Once all the threads have received their messages, each can commit and close its session in its own thread. Never let a Session instance escape the context of the thread that created it.
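And a rough sketch of that latch-based route (connection, destination, and batchSize are assumed from the surrounding code; each worker owns its own Session):
CountDownLatch received = new CountDownLatch(batchSize);
ExecutorService pool = Executors.newFixedThreadPool(batchSize);
for (int i = 0; i < batchSize; i++) {
    pool.submit(() -> {
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        try {
            MessageConsumer consumer = session.createConsumer(destination);
            Message message = consumer.receive(2000);
            received.countDown();
            // Time-boxed await: if the queue holds fewer than batchSize
            // messages, some receives return null and the latch may stall.
            if (message != null && received.await(5, TimeUnit.SECONDS)) {
                // hand the message off to the batch result here, then:
                session.commit();
            }
            else {
                session.rollback();
            }
        }
        finally {
            session.close();
        }
        return null;
    });
}
pool.shutdown();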