How to test Apache Kafka for multiple clients? - node.js

I've created one producer script and one consumer script to publish and receive data from a topic. But how can I do the same for multiple clients? I'm using Node.js for the implementation.

Your question needs a bit more explanation, but maybe you can find what you are looking for here.
Can you please describe your issue in a bit more detail?
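In the meantime, here is a minimal sketch of exercising one topic with several Node.js clients, assuming the kafkajs npm package (the question does not say which client library is in use, so the package, broker address, and topic name are assumptions). Consumers that share a groupId split the topic's partitions between them; consumers in different groups each receive every message:

    const { Kafka } = require('kafkajs');

    // Broker address and topic name are placeholders for this sketch.
    const kafka = new Kafka({ clientId: 'multi-client-test', brokers: ['localhost:9092'] });

    // One producer publishing a test message.
    async function produce() {
      const producer = kafka.producer();
      await producer.connect();
      await producer.send({
        topic: 'test-topic',
        messages: [{ key: 'k1', value: 'hello from producer' }],
      });
      await producer.disconnect();
    }

    // Start any number of consumers: same groupId => partitions are shared,
    // different groupId => each group gets its own copy of every message.
    async function consume(groupId, name) {
      const consumer = kafka.consumer({ groupId });
      await consumer.connect();
      await consumer.subscribe({ topic: 'test-topic', fromBeginning: true });
      await consumer.run({
        eachMessage: async ({ message }) => {
          console.log(`${name} received: ${message.value.toString()}`);
        },
      });
    }

    consume('group-a', 'client-1');
    consume('group-a', 'client-2'); // shares partitions with client-1
    consume('group-b', 'client-3'); // receives every message independently
    produce();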

Related

What’s the best way to architect an MQTT broker and display data on a web page?

I'm not sure whether it's appropriate to post this question here, but I couldn't find anyone, or any article, that could clarify this for me.
Anyway, the question is the following: I'm currently working on a project that monitors a bunch of data streaming over the CAN protocol, sent by several devices. I came up with the idea of monitoring and displaying this data using the MQTT protocol along with Node.js.
I managed to get a broker running using Mosquitto in a Docker container, and I can receive the messages and print them in my terminal. In addition, I found out that I could use express-ws to create a WebSocket, so I could push the data to my web page; so far it seems to be working as expected. The problem is I don't know if that's the right way to do it. I've got this concept in mind, but it seems too simple.
The questions are:
How should I manage many publishers with just one broker?
Should I have an MQTT broker for the server and another one for the client?
Is it right to consume the data through a WebSocket?
I apologize in advance for this whole question; I just want some guidance.
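For what it's worth, here is a minimal sketch of the MQTT-to-WebSocket bridge described above, assuming the mqtt and express-ws npm packages and a local Mosquitto broker (the package names, ports, and topic filter are assumptions, not from the question):

    const express = require('express');
    const expressWs = require('express-ws');
    const mqtt = require('mqtt');

    const app = express();
    expressWs(app);

    const sockets = new Set();

    // Browsers connect here to receive live data.
    app.ws('/live', (ws) => {
      sockets.add(ws);
      ws.on('close', () => sockets.delete(ws));
    });

    // Subscribe to the broker and forward every message to all connected browsers.
    // 'can/#' is a placeholder topic filter for the CAN data.
    const client = mqtt.connect('mqtt://localhost:1883');
    client.on('connect', () => client.subscribe('can/#'));
    client.on('message', (topic, payload) => {
      for (const ws of sockets) {
        ws.send(JSON.stringify({ topic, payload: payload.toString() }));
      }
    });

    app.listen(3000, () => console.log('Web server on http://localhost:3000'));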

Social network app architecture with React+Nodejs and Kafka

I have an idea for a social network website, and as I'm currently learning web development I thought it would be a great project to practice on. I've already worked out the business logic and the front-end design with React. Now it's time for the backend, and I'm struggling.
I want to create a React+Nodejs event-driven app. It seems logical to use Kafka right away. I looked through various Kafka architecture examples and have several questions:
Is it possible to create an app that moves data through Kafka, via API calls from Node.js to React and vice versa, and use a relational database only for long-term storage?
Or is it better to use Kafka to handle all events but keep the data in a NoSQL database like Cassandra or HBase? Then it seems that Node.js would have to make API calls to that database and send the data to React.
Am I completely missing the point and talking nonsense?
Any answer to this question is going to be quite opinion based, so I'm just going to try to stick to the facts.
It is totally possible to create such an application. Moreover, Kafka is basically a distributed log, so you can use it as an event store and build your state from that.
That mainly depends on your architecture, and there are too many gaps here to answer this with any certainty - what kind of data are you saving? What kind of questions will you need answered? What does your domain model look like? You could use Kafka as a store, or as a persistent messaging service.
I don't think you're off the mark, but perhaps you're going for the big guns when in reality you don't really need them. Kafka is great for a very large volume of events going through. If you're building something new, you don't have that volume. Perhaps start with something simpler that doesn't require so much operational complexity.
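As a rough illustration of the "build your state from the log" idea in Node.js, here is a sketch assuming the kafkajs npm package (the package, topic name, and event shape are assumptions, not part of the answer):

    const { Kafka } = require('kafkajs');

    const kafka = new Kafka({ clientId: 'social-app', brokers: ['localhost:9092'] });

    // Append domain events to the log instead of mutating state directly.
    async function recordFollow(followerId, followeeId) {
      const producer = kafka.producer();
      await producer.connect();
      await producer.send({
        topic: 'user-events',
        messages: [{
          key: followerId,
          value: JSON.stringify({ type: 'USER_FOLLOWED', followerId, followeeId }),
        }],
      });
      await producer.disconnect();
    }

    // Rebuild an in-memory projection by replaying the topic from the beginning.
    async function buildFollowerCounts() {
      const counts = new Map();
      const consumer = kafka.consumer({ groupId: 'follower-count-projection' });
      await consumer.connect();
      await consumer.subscribe({ topic: 'user-events', fromBeginning: true });
      await consumer.run({
        eachMessage: async ({ message }) => {
          const event = JSON.parse(message.value.toString());
          if (event.type === 'USER_FOLLOWED') {
            counts.set(event.followeeId, (counts.get(event.followeeId) || 0) + 1);
          }
        },
      });
      return counts; // the Node.js API layer could serve this to the React front end
    }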

How to build an MQTT broker from scratch?

I need to build an MQTT broker with basic functionality, but I cannot find any documentation about building an MQTT broker.
Anyone have any idea how to do this? What do I need to read?
To start, I just want the broker to be able to accept a connection using CONNECT and CONNACK.
The MQTT specification is available here; it outlines the protocol you will need to implement.
If your question is, more generically, "How do I implement a network protocol?", then I would have to ask why you think you need to write your own broker rather than just use one of the existing ones. Even if the existing open source brokers don't do exactly what you want, adapting one of them will be much easier than starting from scratch. Brokers like Mosca and Moquette allow themselves to be embedded into other applications.
If you still feel you need to write your own then I would start by picking one of the existing open source brokers and see how they have gone about it, picking one in a language similar to the one you intend to use would be the best bet.
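As a toy illustration of that first step (CONNECT/CONNACK only), here is a sketch assuming Node.js with the built-in net module and the mqtt-packet npm package for parsing and generating packets (the library choice is an assumption; the spec remains the authoritative reference):

    const net = require('net');
    const mqttPacket = require('mqtt-packet');

    const server = net.createServer((socket) => {
      const parser = mqttPacket.parser({ protocolVersion: 4 }); // MQTT 3.1.1

      parser.on('packet', (packet) => {
        if (packet.cmd === 'connect') {
          // Reply with CONNACK; return code 0 means "connection accepted".
          const connack = mqttPacket.generate({
            cmd: 'connack',
            returnCode: 0,
            sessionPresent: false,
          });
          socket.write(connack);
        }
        // A real broker would also handle SUBSCRIBE, PUBLISH, PINGREQ, DISCONNECT, ...
      });

      parser.on('error', () => socket.destroy());
      socket.on('data', (chunk) => parser.parse(chunk));
    });

    server.listen(1883, () => console.log('Toy broker listening on 1883'));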

Thrift publish subscribe

I'm evaluating Thrift as an RPC framework. I want to be able to do publish/subscribe logic with Thrift and was wondering how to do this.
A few different answers may help:
Is there a canonical way to do publish/subscribe with thrift?
Is there a way to stream results of a call (similar to zerorpc streaming)?
How do you solve this problem?
I've done my own research, and it looks like with Thrift you are expected to serialize the messages yourself and do pub/sub over some kind of message queue like ZeroMQ or Redis.
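A rough sketch of that approach in Node.js, assuming the thrift and redis npm packages and a struct generated by the Thrift compiler (the Event struct, its fields, the IDL file name, and the channel name are all hypothetical):

    const thrift = require('thrift');
    const { createClient } = require('redis');
    // Assumed to be generated by `thrift --gen js:node events.thrift` (hypothetical IDL).
    const { Event } = require('./gen-nodejs/events_types');

    // Serialize a Thrift struct to a Buffer using the binary protocol.
    function serializeThrift(struct) {
      let out;
      const transport = new thrift.TBufferedTransport(null, (buf) => { out = buf; });
      const protocol = new thrift.TBinaryProtocol(transport);
      struct.write(protocol);
      transport.flush();
      return out;
    }

    async function publishEvent() {
      const publisher = createClient(); // node-redis v4 client, local Redis assumed
      await publisher.connect();
      const event = new Event({ name: 'user_signed_up', payload: 'id=42' });
      // Base64-encode so the binary payload survives the string channel.
      await publisher.publish('events', serializeThrift(event).toString('base64'));
      await publisher.quit();
    }

    publishEvent();

Subscribers would do the reverse: decode the base64 payload and read the struct back with the same binary protocol.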
Have you taken a look at DDS? Data Distribution Service is a full standard for doing publish / subscribe communications via topics.

Creating temp queues in Grails, creating a lot of temp queues

I can't seem to find any samples on this. Can someone help?
And is this good design?
In my Grails app, every user can create their own queue (a temp queue, assuming I know how to create one). So let's assume there will be 100,000 users using the web app. The consumer will be a standalone Java app that consumes a permanent queue/topic; that queue/topic will carry the "commands" to create an object that consumes the temporarily created queues. The users will then send/receive messages (I might use the ActiveMQ examples as a template for the code; I need to implement them as runnables for each user).
And is having a lot of temp queues OK?
Thanks!
A good example for implementing a Request/Reply Scenario using JMS is in the ActiveMQ documentation as you already stated: http://activemq.apache.org/how-should-i-implement-request-response-with-jms.html
However, if you are really talking about 100,000 users, you should do some performance and stability testing ahead of time. Maybe it would be a better idea to pool the temporary queues and reuse them each time. Another possibility might be to use only a few queues and select the appropriate messages with a MessageSelector, by assigning some unique id as a message property.
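To illustrate the MessageSelector idea from a non-Java client, here is a sketch using the stompit npm package against ActiveMQ's STOMP port (the package, port, queue name, and userId property are assumptions; a JMS consumer in the standalone Java app would use the same selector string):

    const stompit = require('stompit');

    stompit.connect({ host: 'localhost', port: 61613 }, (connectError, client) => {
      if (connectError) return console.error('connect failed', connectError);

      // One shared reply queue instead of one temp queue per user; each user
      // only receives messages whose userId property matches their selector.
      client.subscribe(
        { destination: '/queue/replies', selector: "userId = 'user-42'", ack: 'auto' },
        (subscribeError, message) => {
          if (subscribeError) return console.error(subscribeError);
          message.readString('utf-8', (readError, body) => {
            if (!readError) console.log('reply for user-42:', body);
          });
        }
      );

      // Producer side: tag each message with the user it belongs to.
      const frame = client.send({ destination: '/queue/replies', userId: 'user-42' });
      frame.write('hello user-42');
      frame.end();
    });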
