A serverless solution to run a Kafka consumer/producer in node? - node.js

I'm having a hard time figuring out a serverless runtime for the Kafka consumers that process my events, and I want my producer to be a constant listener that picks up events and ingests them into the corresponding topics.
I'm using Upstash Kafka (serverless), but I don't know where to run my consumer code. I tried AWS Lambda to consume messages, even though I don't think it's the right approach.
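For context, a minimal sketch of such a consumer with KafkaJS, assuming Upstash's standard Kafka protocol endpoint over SASL_SSL (the broker address, SASL mechanism, credentials, and topic name below are placeholders, not real values); the open question is where to host this long-lived process:

const { Kafka } = require('kafkajs');

const kafka = new Kafka({
  clientId: 'event-processor',
  brokers: ['my-cluster-region-kafka.upstash.io:9092'], // placeholder endpoint
  ssl: true,
  sasl: {
    mechanism: 'scram-sha-256', // check the Upstash console for the exact mechanism
    username: process.env.KAFKA_USERNAME,
    password: process.env.KAFKA_PASSWORD,
  },
});

const consumer = kafka.consumer({ groupId: 'event-processors' });

async function run() {
  await consumer.connect();
  await consumer.subscribe({ topics: ['events'] }); // kafkajs v2 syntax
  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      // process the event here
      console.log(`${topic}[${partition}] ${message.value.toString()}`);
    },
  });
}

run().catch(console.error);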

Related

KafkaJS consume from multiple topics with one consumer

I want to implement retry logic while consuming from a Kafka topic using KafkaJS. Basically, I will have two topics, main-topic and retry-topic, and I will
read from -> main-topic
if processing fails -> retry-topic
So is it bad practice to use one consumer to listen to both topics (main and retry), given that Kafka allows listening to multiple topics with the same consumer?
It's not a bad practice at all.
The only problem you may run into using one consumer is that the topics may need different configurations (connection settings, deserializer, etc.). In that case, you can create two separate Consumer instances rather than one subscribing to both.
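As an illustration, a single KafkaJS consumer subscribed to both topics can branch on the topic name inside eachMessage and forward failed messages from the main topic to the retry topic (broker address, group id, and the handleMessage function are assumptions for this sketch):

const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'retry-example', brokers: ['localhost:9092'] });
const consumer = kafka.consumer({ groupId: 'order-processors' });
const producer = kafka.producer();

async function handleMessage(message) {
  // your actual processing logic; throws on failure
}

async function run() {
  await producer.connect();
  await consumer.connect();
  await consumer.subscribe({ topics: ['main-topic', 'retry-topic'] }); // kafkajs v2 syntax
  await consumer.run({
    eachMessage: async ({ topic, message }) => {
      try {
        await handleMessage(message);
      } catch (err) {
        if (topic === 'main-topic') {
          // first failure: forward to the retry topic
          await producer.send({ topic: 'retry-topic', messages: [{ value: message.value }] });
        } else {
          // failed again on the retry topic: dead-letter it, log it, etc.
        }
      }
    },
  });
}

run().catch(console.error);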

Reading messages in bulk through a Pulsar consumer

I am using node pulsar client to consume messages from a Pulsar topic. The consumer is subscribed to the topic using a shared subscription mode. Currently, each call to receive gets a single message from the topic. Is there a way to receive messages in bulk?
The fact that you get messages one by one doesn't mean that the Pulsar client doesn't use batching and other optimization techniques in the background. The official documentation for the Pulsar Java consumer describes the receiverQueueSize parameter, which controls how many messages the consumer accumulates (prefetches) ahead of your receive calls. By default, the Pulsar consumer uses reasonable values for its parameters, and it should perform quite well for most applications. Are you experiencing any issues or slow performance?
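For reference, the Node.js pulsar-client exposes the same prefetch queue as a consumer option; a minimal sketch, with the service URL, topic, and subscription names as assumptions:

const Pulsar = require('pulsar-client');

(async () => {
  const client = new Pulsar.Client({ serviceUrl: 'pulsar://localhost:6650' });
  const consumer = await client.subscribe({
    topic: 'persistent://public/default/my-topic',
    subscription: 'my-shared-sub',
    subscriptionType: 'Shared',
    receiverQueueSize: 1000, // how many messages the client prefetches in the background
  });

  const msg = await consumer.receive(); // still handed to you one by one
  console.log(msg.getData().toString());
  consumer.acknowledge(msg);

  await consumer.close();
  await client.close();
})();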
Update
Since Apache Pulsar 2.4.1 it is possible to receive messages in batches using the consumer. First, create the consumer with a BatchReceivePolicy (change the values to whatever is more appropriate for your use case):
Consumer<GenericRecord> consumer = pulsarClient
        .newConsumer(Schema.AUTO_CONSUME())
        .batchReceivePolicy(BatchReceivePolicy.builder()
                .maxNumMessages(5000)
                .maxNumBytes(10 * 1024 * 1024)
                .timeout(1, TimeUnit.SECONDS)
                .build())
        // ... other configuration such as topic and subscription
        .subscribe();
Second, use the batchReceive method to get a batch of messages:
Messages<GenericRecord> messages = consumer.batchReceive();
When all messages are processed, simply acknowledge all of them:
consumer.acknowledge(messages);

Design Question Kafka Consumer/Producer vs Kafka Stream

I'm working with Node.js microservices; so far they communicate through the Kafka Consumer/Producer APIs. Now I need to build a Logger microservice which must record all the messages and do some processing (parse and save to DB), but I'm not sure whether the current approach could be improved using Kafka Streams or if I should continue using Consumers.
The Streams API is a higher level abstraction that sits on top of the Consumer/Producer APIs. The Streams API allows you to filter and transform messages, and build a topology of processing steps.
For what you're describing, if you're just picking up messages and doing a single processing step, the Consumer API is probably fine. That said, you could do the same thing with the Streams API too and not use the other features.
build a Logger MS which must record all the messages and do some processing (parse and save to db)
I would suggest using something like the Streams API or a Node.js producer + consumer to parse the messages and write them back to Kafka.
From your parsed/filtered/sanitized messages, you can then run a Kafka Connect cluster to sink the data into a DB.
could be improved using Kafka Stream or if I should continue using Consumers
Ultimately, it depends on what you need. The peek and foreach methods of the Streams DSL are functionally equivalent to a plain Consumer.
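For the Node.js producer + consumer route suggested above, a consume-parse-produce loop with KafkaJS looks roughly like this (broker address, topic names, and the JSON parsing step are placeholders); a Kafka Connect sink can then move the parsed topic into the DB:

const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'logger-ms', brokers: ['localhost:9092'] });
const consumer = kafka.consumer({ groupId: 'logger' });
const producer = kafka.producer();

async function run() {
  await producer.connect();
  await consumer.connect();
  await consumer.subscribe({ topics: ['raw-events'] });
  await consumer.run({
    eachMessage: async ({ message }) => {
      // parse/sanitize, then write the result back to Kafka
      const parsed = JSON.parse(message.value.toString());
      await producer.send({
        topic: 'parsed-events',
        messages: [{ value: JSON.stringify(parsed) }],
      });
    },
  });
}

run().catch(console.error);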

Spark Streaming from Kafka Consumer

I might need to work with Kafka and I am absolutely new to it. I understand that there are Kafka producers which publish the logs (called events, messages, or records in Kafka) to Kafka topics.
I will need to work on reading from Kafka topics via a consumer. Do I need to set up the consumer API first and then stream using the Spark Streaming context (PySpark), or can I directly use the KafkaUtils module to read from Kafka topics?
In case I need to set up the Kafka consumer application, how do I do that? Please can you share links to the right docs.
Thanks in Advance!!
Spark provides an internal Kafka stream, so you don't need to create a custom consumer. There are two approaches to connect with Kafka: 1. with a receiver, 2. the direct approach.
For more detail, go through this link: http://spark.apache.org/docs/latest/streaming-kafka-integration.html
There's no need to set up a Kafka consumer application; Spark itself creates a consumer, with two approaches. One is the receiver-based approach, which uses the KafkaUtils class, and the other is the direct approach, which uses the createDirectStream method.
In any case of failure in Spark Streaming there is no loss of data; it starts again from the offset where you left off.
For more details, use this link: http://spark.apache.org/docs/latest/streaming-kafka-integration.html

Using Apache Kafka for log aggregation

I am learning Apache Kafka from their quickstart tutorial: http://kafka.apache.org/documentation.html#quickstart. Up to now, I have done the setup as follows: a producer node, where a web server is running on port 8888, and a Kafka server (broker), consumer, and ZooKeeper instance on another node. I have tested the default console/file-enabled producer and consumer with 3 partitions. The setup is perfect, and I am able to see the messages I sent in the order they were created (within each partition).
Now, I want to send the logs generated by the web server to the Kafka broker. These messages will be processed by a consumer later. Currently I am using syslog-ng to capture server logs to a text file. I have come up with 3 rough ideas on how to implement the producer to use Kafka for log aggregation:
Producer Implementations
First Kind:
Listen to the TCP port of syslog-ng, fetch each message, and send it to the Kafka server. Here we have two middle processes: the producer and syslog-ng.
Second Kind: Use syslog-ng as the producer. I would have to find a way to send messages to the Kafka server instead of writing them to a file. Syslog-ng, the producer, is the middle process.
Third Kind: Configure the web server itself as the producer.
Am I correct in my thinking? In the last case we don't have any middle process, but I am worried that implementing it this way will affect server performance. Can anyone let me know the best way of using Apache Kafka (if the above 3 are not good) and guide me through the appropriate server configuration?
P.S.: I am using node.js for my web server
Thanks,
Sarath
Since you specify that you wish to send the generated logs to the Kafka broker, it indeed looks as if running an extra process just to listen and resend messages mainly creates another point of failure with no additional value (unless you need a specific syslog-ng capability).
Syslog-ng can send messages to external applications using the program() destination: http://www.balabit.com/sites/default/files/documents/syslog-ng-ose-3.4-guides/en/syslog-ng-ose-v3.4-guide-admin/html/configuring-destinations-program.html. I don't know if there are other ways to do that.
For the third option, I am not sure whether Kafka can easily be integrated into Node.js, as it requires a C++ producer, and when I last looked for one I was not able to find it. However, an easy alternative could be to have Kafka read the log file created by the server and send those logs (using the console producer provided with Kafka). This is usually a good way, as it completely removes dependencies between Kafka and the web server (embedding the producer in the server would require error handling, configuration, etc.). It requires the use of tail --follow, and it works very well for us. If you want more details on that, I can include them as well. You would still need to supervise the Kafka execution to make sure messages are not lost (and provide a recovery option to resend messages that failed). But the good thing about this method is that there are no dependencies between the tools.
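If you do end up embedding the producer in the Node.js web server (the third option), a pure-JavaScript client such as kafkajs can publish each log line directly; a minimal sketch, with broker address and topic name as assumptions. The error handling and buffering for broker outages mentioned above would still need to be added:

const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'web-server', brokers: ['kafka-broker:9092'] });
const producer = kafka.producer();

async function start() {
  await producer.connect();
}

// call this from the web server's logging hook for each log line
async function logToKafka(line) {
  await producer.send({
    topic: 'weblogs',
    messages: [{ value: line }],
  });
}

start().catch(console.error);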
Hope it helps...
Eran
