With the Kafka binding I am able to read the message content, but I am unable to find a way to retrieve the message header properties.
How can I read message headers using the Kafka binding in .NET?
We have a unique need in an Apache Pulsar solution where we need to filter message content based on who is consuming the message from a given topic. We could solve this by creating a separate topic per consumer, but we would like to know if there is a better way to have a single topic that all consumers connect to, where we filter the message content based on who is receiving it.
I read about the EntryFilter interface in Apache Pulsar, but I'm not sure whether it applies to the producer or the consumer.
Just wondering if I could get some help with Azure IoT Hub and Stream Analytics.
I'm using this guide to build my simulated device: https://learn.microsoft.com/en-us/azure/iot-hub/quickstart-send-telemetry-python
However, whenever I try to extend the JSON telemetry message to include more key-value pairs, Stream Analytics always gives me this error:
Source '<unknown_location>' had 1 occurrences of kind 'InputDeserializerError.InvalidData' between processing times '2020-07-14T02:35:47.4125308Z' and '2020-07-14T02:35:47.4125308Z'. Could not deserialize the input event(s) from resource 'Partition: [2], Offset: [806016], SequenceNumber: [1698], DeviceId: [testdevicereal]' as Json. Some possible reasons: 1) Malformed events 2) Input source configured with incorrect serialization format
I've checked my JSON formatting and it seems fine. Any clues?
Deserialization issues occur when the input stream of your Stream Analytics job contains malformed messages. For example, a message can be malformed by a missing parenthesis or brace in a JSON object, or by an incorrect timestamp format in the time field.
Enable resource logs to view the details of the error and the message (payload) that caused the error. There are multiple reasons why deserialization errors can occur. For more information regarding specific deserialization errors, see Input data errors. If resource logs are not enabled, a brief notification will be available in the Azure portal.
Please see Troubleshoot input connections for more details.
This question is related to "why do I need to poll message hub". The documentation linked from that answer shows that Kafka supports a 'long-poll' concept, but there is no clear way in the existing Node.js support for Message Hub to implement such a mechanism. The demo app provided for Node.js just uses a 250 ms timer interval to retrieve messages from the server. I'd like to replace that with a more sophisticated approach using the long poll that Kafka supports:
To avoid this we have parameters in our pull request that allow the consumer request to block in a "long poll" waiting until data arrives
However, the existing implementation does not appear to allow configuring any such parameter, nor is it clear what the necessary parameter would be. The prototype for the get function is defined as:
MessageHub.ConsumerInstance.prototype.get(topicName[, toValue])
Retrieves a message from the provided topic name.
topicName - (String) (required), the topic to retrieve messages from.
toValue - (Boolean) (optional), unwraps base64 encoded messages, if true. Defaults to true.
Returns a Promise object which will be fulfilled when the request to the service resolves.
So there are no config options. Alternatively, could you provide a link to the documentation that defines the URLs, and the available options for those URLs, which are implemented in the message-hub.js module?
The one you're mentioning is an npm module built on top of the REST API that Message Hub provides, which is essentially the Confluent REST Proxy API minus the Avro support:
http://docs.confluent.io/2.0.1/kafka-rest/docs/api.html
That npm module does not offer you the full Kafka API; it offers just a subset of the REST API.
The long poll that the Kafka docs refer to is the timeout of the Java consumer's poll method:
https://kafka.apache.org/0102/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#poll(long)
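For illustration, here is a minimal sketch of that long poll with the plain Java consumer; the broker address, group id and topic name are placeholders, not anything from Message Hub:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LongPollExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("group.id", "demo-group");                 // placeholder consumer group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));  // placeholder topic
            while (true) {
                // poll(long) blocks for up to the given timeout waiting for records:
                // this is the "long poll" the Kafka documentation describes.
                ConsumerRecords<String, String> records = consumer.poll(5000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}
```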
A good client for Node.js is
https://github.com/Blizzard/node-rdkafka
Please see our coding sample in
https://github.com/ibm-messaging/message-hub-samples
Our application has an amqp inbound-channel-adapter with a listener container where we dynamically add and remove queue names.
We would like to utilize RabbitMQ's BCC (Sender-selected Distribution) feature, where you set the BCC header to a collection of recipient routing keys. It would be beneficial to have RabbitMQ distribute the messages instead of having Spring Integration create copies (potentially thousands) and send them individually.
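For context, a rough sketch of how that header can be set when publishing with Spring AMQP; the exchange, routing key and recipient names here are placeholders, not our real configuration:

```java
import java.util.Arrays;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageBuilder;
import org.springframework.amqp.core.MessageProperties;
import org.springframework.amqp.core.MessagePropertiesBuilder;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class BccSender {

    private final RabbitTemplate rabbitTemplate;  // assumed to be configured elsewhere

    public BccSender(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void sendToRecipients(byte[] body) {
        MessageProperties props = MessagePropertiesBuilder.newInstance()
                // Sender-selected distribution: a list of recipient routing keys.
                // The broker routes a copy per key and strips the BCC header on delivery.
                .setHeader("BCC", Arrays.asList("recipient.one", "recipient.two"))
                .build();
        Message message = MessageBuilder.withBody(body).andProperties(props).build();
        // Exchange and routing key are placeholders for this sketch.
        rabbitTemplate.send("events.exchange", "ignored.routing.key", message);
    }
}
```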
The problem is that when RabbitMQ sends the message it removes the BCC field, as expected, and does not put the recipient's routing key in the received routing key header. Also, there appears to be no way to map the message to the queue that it came from. Therefore the application has no idea "who" the message is intended for (queue-name/routing-key of queue it came from).
Previously we had used the received routing key to identify the recipient.
We have thought about two approaches:
1) Dynamically create an inbound channel adapter for each queue that needs to be listened to, plus a dynamically created header enricher that adds a recipient header set to the queue name it is listening to (a rough sketch follows the list).
2) Dynamically create a subclassed listener container that holds a queue-name property and sends its messages to a gateway to get them back into the integration flow.
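A rough sketch of approach 1, assuming a current Spring Integration version with the Java DSL; the channel name, flow id and registrar class are made up for illustration:

```java
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.integration.amqp.dsl.Amqp;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.context.IntegrationFlowContext;

public class RecipientFlowRegistrar {

    private final ConnectionFactory connectionFactory;
    private final IntegrationFlowContext flowContext;

    public RecipientFlowRegistrar(ConnectionFactory connectionFactory,
                                  IntegrationFlowContext flowContext) {
        this.connectionFactory = connectionFactory;
        this.flowContext = flowContext;
    }

    // Register one adapter per recipient queue and stamp each message
    // with a "recipient" header equal to the queue it was consumed from.
    public void registerQueue(String queueName) {
        IntegrationFlow flow = IntegrationFlows
                .from(Amqp.inboundAdapter(connectionFactory, queueName))
                .enrichHeaders(h -> h.header("recipient", queueName))
                .channel("eventInputChannel")   // hypothetical downstream channel
                .get();
        flowContext.registration(flow).id("flow-" + queueName).register();
    }
}
```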
Could someone help us determine which queue our message came from when there is no received routing key header?
Artem Bilan was correct and I was using an older version of S-AMQP which did not support that attribute. I upgraded to S-AMQP 1.4.2 and my CONSUMER_QUEUE property was there.
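For anyone finding this later, reading that property on the consuming side looks roughly like this (a sketch; the input channel name is made up):

```java
import org.springframework.amqp.support.AmqpHeaders;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.Message;

public class EventHandler {

    // The mapped amqp_consumerQueue header tells us which queue
    // (and therefore which recipient) the message was consumed from.
    @ServiceActivator(inputChannel = "eventInputChannel")
    public void handle(Message<?> message) {
        String consumerQueue = (String) message.getHeaders().get(AmqpHeaders.CONSUMER_QUEUE);
        System.out.println("Received from queue: " + consumerQueue);
    }
}
```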
We use an <int-amqp:publish-subscribe-channel/> as a kind of event bus in our service-based application. The send method as well as the message handler are based on the Message class from spring-messaging (as of spring-integration 4.0+). Events are changes to entities that need to be picked up by other services.
The problem is: the spring-messaging Message class is treated as arbitrary object payload by spring-amqp as it is not recognized as a spring-amqp Message. This causes the following problems:
The default message format is serialized Java objects. spring-amqp serializes not only our original payload object but also the wrapping spring-messaging Message, which is not compatible between Spring Framework 4.0 and 4.1.
Configuring a message converter for JSON (Jackson2JsonMessageConverter, to be exact) doesn't solve the problem, as it also converts the Message instance, which is spring-integration's GenericMessage, and that can't be instantiated from JSON because it lacks an appropriate constructor.
We need to mix Spring versions, as we have services implemented with Grails 2.4 (based on Spring 4.0) and with current Spring Boot (relies on Spring 4.1).
Is there any way out of this, preferably an idiomatic spring-integration way? Is there maybe another abstraction instead of, or in addition to, the PublishSubscribeAmqpChannel? Or any other means of message conversion that we could apply?
Instead of using an amqp-backed channel, use an outbound-channel-adapter to send and an inbound-channel-adapter to receive.
The channel holds the entire message (serialized) whereas the adapters transport the payload as the message body and (optionally) map headers to/from amqp headers.
You will need to configure a fanout exchange for pub/sub (the channel would create one called si.fanout.<channelName> by default). You can then bind a queue for each recipient.
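A minimal sketch of that layout with plain Spring AMQP beans, assuming made-up exchange and queue names; each recipient service would declare and bind its own queue to the fanout exchange:

```java
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.FanoutExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EventBusAmqpConfig {

    // Fanout exchange that the outbound adapter publishes to;
    // every bound queue receives a copy of each event.
    @Bean
    public FanoutExchange eventsExchange() {
        return new FanoutExchange("events.fanout");
    }

    // One queue per recipient service, bound to the fanout exchange
    // (no routing key needed for a fanout binding).
    @Bean
    public Queue serviceAQueue() {
        return new Queue("events.service-a");
    }

    @Bean
    public Binding serviceABinding() {
        return BindingBuilder.bind(serviceAQueue()).to(eventsExchange());
    }
}
```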