Is there any Pulsar UI where messages can be updated/produced? - apache-pulsar

The only UI I found was Pulsar Manager, which doesn't even allow viewing the messages in a topic.
By any chance is there a Pulsar UI with better features where messages can be manipulated?

Related

Publish and consume message on confluent control center (localhost:9021) using Node.js app

I set up Confluent Platform on my local machine (Mac), and its Control Center is running on "localhost:9021".
Now I want to create a simple Node.js app so that I can publish and consume messages and see them in the Confluent GUI running on "localhost:9021".
How can I connect a Node.js app with this GUI so that I can publish and consume messages through it?
I know I can set up Apache Kafka and connect it with Node, but I want to know how I can use the Confluent Control Center (localhost:9021) GUI with Node.js to publish and consume messages.
Kafka isn't running on localhost:9021; that is Confluent Control Center, which has no Node.js communication capabilities for getting data in or out of Kafka.
If you want to communicate with Apache Kafka (which is part of Confluent Platform), its default port is 9092, not 9021, and you don't require Confluent Platform for this.
After you connect to the actual broker, you can use Control Center to monitor the cluster health, read the produced messages for topics, or set up Kafka Connectors / KSQL queries.
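As a rough sketch of "talk to the broker, not the GUI": a minimal producer pointed at the default broker port, shown here with the Java client (a Node.js client such as kafkajs follows the same pattern of broker list, producer, send). The topic name test-topic is a placeholder:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MinimalProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Connect to the Kafka broker itself (default port 9092),
        // not to Control Center (9021), which is only a monitoring UI.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "test-topic" is a placeholder; any existing topic works.
            producer.send(new ProducerRecord<>("test-topic", "key", "hello"));
        }
    }
}

Once messages land on the topic this way, they become visible in Control Center's topic view.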

Multi thread transactional Kafka producer and consumer with Spring Boot

I have a project which uses Spring Boot. I want to use the transactional feature of the Kafka consumer and producer in the project. I need to produce a lot of messages to Kafka as efficiently as possible, so I need multi-threaded consuming and producing. How can I use Spring Boot to develop a multi-threaded consumer and producer?
See the listener concurrency Boot property:
spring.kafka.listener.concurrency
The topic must have at least as many partitions as the concurrency.
https://docs.spring.io/spring-kafka/docs/2.6.1/reference/html/#message-listener-container
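A minimal sketch of a concurrent listener, assuming spring.kafka.listener.concurrency=3 is set in application.properties and a topic named orders with at least three partitions (both names are placeholders):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {

    // With spring.kafka.listener.concurrency=3, Spring creates three listener
    // threads, each assigned a share of the topic's partitions, so the topic
    // needs at least three partitions for every thread to receive work.
    @KafkaListener(topics = "orders", groupId = "order-processors")
    public void listen(String message) {
        System.out.println("Received on " + Thread.currentThread().getName() + ": " + message);
    }
}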

Spring Batch Remote Partitioning with Kafka as middleware

I was checking Spring Batch remote partitioning for loading data from RDBMS sources as well as from a multi-partitioned Kafka topic. My problem is that I cannot have RabbitMQ or JMS as the middleware between master and worker nodes; I can only have Kafka as the channel between the master and the workers.
In all the documentation I can see that it supports JMS and AMQP.
Can anyone tell me how we can use remote partitioning with Kafka as middleware? If anyone has a working example as well, it would be a great help.
spring-integration-kafka provides endpoints similar to those used for JMS and RabbitMQ, so it shouldn't be difficult to apply the concepts in that documentation to Kafka.
The latest spring-integration-kafka version is 3.3.1 (it is moving into the core spring-integration project in 5.4.0).
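A minimal sketch, under assumed channel and topic names, of the two spring-integration-kafka endpoints you would substitute for the JMS/AMQP ones in the remote-partitioning documentation (the wiring into the partition handler and the channel bean definitions are omitted):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.expression.common.LiteralExpression;
import org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.MessageHandler;

@Configuration
public class KafkaPartitioningChannels {

    // Master side: publish partition step-execution requests to Kafka.
    // The topic name "partition-requests" is a placeholder.
    @Bean
    public MessageHandler partitionRequestsOut(KafkaTemplate<String, Object> template) {
        KafkaProducerMessageHandler<String, Object> handler =
                new KafkaProducerMessageHandler<>(template);
        handler.setTopicExpression(new LiteralExpression("partition-requests"));
        return handler;
    }

    // Worker side: consume those requests from the same topic and hand
    // them to the worker step via a channel.
    @Bean
    public KafkaMessageDrivenChannelAdapter<String, Object> partitionRequestsIn(
            ConcurrentMessageListenerContainer<String, Object> container,
            MessageChannel inboundRequests) {
        KafkaMessageDrivenChannelAdapter<String, Object> adapter =
                new KafkaMessageDrivenChannelAdapter<>(container);
        adapter.setOutputChannel(inboundRequests);
        return adapter;
    }
}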

Clustered app - only one server at a time reads from kafka, what am I missing?

I have a clustered application built around Spring tooling, using Kafka as the message layer for the fabric. At a high level, its architecture is a master process that parcels out work to slave processes running on separate hardware/VMs.
Master
 ├─ slave1
 ├─ slave2
 └─ slave3
What I expect to happen is, if I throw 100 messages at Kafka, each of the slaves (three in this example) will pick up a proportionate number of messages and execute a proportionate amount of the work (about 1/3rd in this example).
What really happens is that one slave picks up all of the messages and executes all of the work. It is indeterminate which slave will pick up the messages, but it is guaranteed that once a slave starts picking up messages, the others will not until that slave has finished its work.
To me, it looks like the read from Kafka is pulling all of the messages from the queue rather than one at a time. This leads me to believe I missed a configuration, either on Kafka or in Spring Kafka.
I think you are missing a conceptual understanding of what Apache Kafka is and how it works.
First of all, there are no queues. Messages are stored in a topic, and everybody subscribed can get the same message. However, there is a concept of a consumer group: independently of the number of subscribers, only one of them will read a given message if they share the same consumer group.
There is another feature in Kafka called partitions. With it you can distribute your messages into different partitions, or they will be assigned automatically (evenly, by default). The partitions feature has another use: when there are several subscribers to the same topic in the same consumer group, the partitions are distributed between them. So, you may reconsider your logic in favor of the built-in features of Apache Kafka.
There is nothing to do from the Spring Kafka perspective, though. You only need to configure your topic with a reasonable number of partitions and give all your "slaves" the same consumer group.
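A minimal sketch of the "same consumer group on every slave" idea, assuming a topic work-topic with three partitions and a group id work-fabric (both placeholders); each slave runs this same loop and Kafka assigns each one roughly a third of the partitions:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SlaveWorker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // The same group.id on every slave: Kafka then splits the topic's
        // partitions among them, so with 3 partitions and 3 slaves each
        // slave processes about one third of the messages.
        props.put("group.id", "work-fabric");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("work-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("Processing: " + record.value());
                }
            }
        }
    }
}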

Can a Spring Kafka consumer run on multiple machines for the same group?

Kafka says that the offset is managed by consumers and that there should be as many consumers as partitions for the same group.
Spring Integration says that the number of consumer streams in the high-level consumer is the number of partitions for the same group.
So, can the Spring Kafka consumer code run on multiple servers for the same group? If yes, how do the offsets avoid conflicting between servers?
According to the Kafka docs (http://kafka.apache.org/documentation.html#introduction), when consumer groups are used, each message is consumed by exactly one consumer in the group. Each consumer can run on its own machine; two consumers can also run on the same machine, in which case each consumer is its own process.
One group can contain multiple consumers, and partitions are distributed among all the consumers in a group by an assignment algorithm. The number of consumers can be larger or smaller than the number of partitions.
Offsets can be managed with the aid of ZooKeeper, but not all clients have implemented this fully so far.
As for your use case: in practice, Kafka is an "at-least-once delivery system" by default. It can be made at-most-once by disabling retries on the producer or by committing offsets before processing a batch of messages. An "exactly-once delivery system" is very difficult to implement and requires cooperation from the application, but Kafka does expose offsets, so it may be possible. For more details, see http://kafka.apache.org/documentation.html#semantics, http://ben.kirw.in/2014/11/28/kafka-patterns/, https://dzone.com/articles/kafka-clients-at-most-once-at-least-once-exactly-o and so on.
Based on my personal experience, I spent a lot of time making sure my Kafka system was exactly-once delivery, but when the server went down, some messages were still consumed twice. My testing was done on a standalone Kafka server, whereas in production a Kafka cluster is always used, so I think it may be possible to treat it as an exactly-once system.
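To make the at-most-once vs. at-least-once distinction above concrete, here is a sketch of the commit ordering inside a poll loop, assuming manual commits (enable.auto.commit=false) and placeholder topic/group names; process() stands in for whatever the application does with a record:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CommitOrdering {
    // Hypothetical handler for a consumed record.
    static void process(ConsumerRecord<String, String> record) {
        System.out.println("Handling " + record.value());
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo-group");          // placeholder
        props.put("enable.auto.commit", "false");     // manual offset commits
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("demo-topic")); // placeholder
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                // At-most-once: commit BEFORE processing; a crash mid-batch
                // loses the unprocessed messages instead of redelivering them.
                consumer.commitSync();
                for (ConsumerRecord<String, String> record : records) {
                    process(record);
                }
                // For at-least-once, swap the order: process first and call
                // consumer.commitSync() afterwards, accepting that a crash
                // before the commit redelivers the whole batch.
            }
        }
    }
}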