Following on the discussion here, I used the following steps to enable an external client (based on kafkajs) to connect to Strimzi on OpenShift. These steps are from here.
Enable external route
The kafka-persistent-single.yaml is edited as shown below.
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 2.3.0
    replicas: 1
    listeners:
      plain: {}
      tls: {}
      external:
        type: route
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      log.message.format.version: "2.3"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 5Gi
        deleteClaim: false
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 5Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
Extract certificate
To extract the certificate and use it in the client, I ran the following command:
kubectl get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -D > ca.crt
Note that I had to use base64 -D on my macOS, and not base64 -d as shown in the documentation.
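As an aside (an assumption about the BSD base64 that ships with macOS), the long flag --decode appears to be accepted by both the macOS and GNU versions, so the following should work on either platform:
kubectl get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.crt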
Kafkajs client
This is the client, adapted from the kafkajs npm page and documentation.
const fs = require('fs')
const { Kafka } = require('kafkajs')

const kafka = new Kafka({
  clientId: 'my-app',
  brokers: ['my-cluster-kafka-bootstrap-messaging-os.192.168.99.100.nip.io'],
  ssl: {
    rejectUnauthorized: false,
    ca: [fs.readFileSync('ca.crt', 'utf-8')]
  }
})

const producer = kafka.producer()
const consumer = kafka.consumer({ groupId: 'test-group' })

const run = async () => {
  // Producing
  await producer.connect()
  await producer.send({
    topic: 'test-topic',
    messages: [
      { value: 'Hello KafkaJS user!' },
    ],
  })

  // Consuming
  await consumer.connect()
  await consumer.subscribe({ topic: 'test-topic', fromBeginning: true })
  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      console.log({
        partition,
        offset: message.offset,
        value: message.value.toString(),
      })
    },
  })
}

run().catch(console.error)
Question
When I run node sample.js from the folder containing ca.crt, I get a connection refused message.
{"level":"ERROR","timestamp":"2019-10-05T03:22:40.491Z","logger":"kafkajs","message":"[Connection] Connection error: connect ECONNREFUSED 192.168.99.100:9094","broker":"my-cluster-kafka-bootstrap-messaging-os.192.168.99.100.nip.io:9094","clientId":"my-app","stack":"Error: connect ECONNREFUSED 192.168.99.100:9094\n at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1113:14)"}
What am I missing?
I guess that the problem is that you are missing the right port, 443, on the broker address, so you have to use
brokers: ['my-cluster-kafka-bootstrap-messaging-os.192.168.99.100.nip.io:443']
otherwise it is trying to connect to the default port 80 on the OpenShift route.
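Applied to the client above, only the broker entry changes (a minimal sketch; everything else stays as in the question):
const kafka = new Kafka({
  clientId: 'my-app',
  // OpenShift routes terminate TLS on port 443, so the port has to be given explicitly
  brokers: ['my-cluster-kafka-bootstrap-messaging-os.192.168.99.100.nip.io:443'],
  ssl: {
    rejectUnauthorized: false,
    ca: [fs.readFileSync('ca.crt', 'utf-8')]
  }
})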
After an extended discussion with @ppatierno, I feel that the Strimzi cluster works well with the Kafka console clients. The kafkajs package, on the other hand, keeps failing with NOT_LEADER_FOR_PARTITION.
UPDATE: The Python client seems to be working without a fuss, so I am abandoning kafkajs.
Related
This question already has answers here:
Connect to Kafka running in Docker
Kafka access inside and outside docker
I'm working on a Node.js and Kafka app, and I put both of the services in Docker Compose.
The problem is that my Node.js app connects to the Kafka broker, but when I produce messages or try to consume I get this error:
connect ECONNREFUSED 127.0.0.1:9092","broker":"127.0.0.1:9092","clientId":"myapp","stack":"Error: connect ECONNREFUSED 127.0.0.1:9092\
I'm not sure, but from what I read I guess the problem is KAFKA_CFG_ADVERTISED_LISTENERS.
docker-compose.yml
version: '3.8'
services:
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    container_name: zookeeper
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'bitnami/kafka:latest'
    container_name: kafka
    ports:
      - '9092:9092'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      # I THINK THE PROBLEM IS HERE
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
  emailing-service:
    build:
      context: ./emailing-service
      dockerfile: Dockerfile
    image: emailing-services-image
    ports:
      - "6000:6000"
    container_name: emailing-service-container
    volumes:
      - ./emailing-service/:/app:ro
My node app
import { Kafka, Partitioners } from "kafkajs";

const kafka = new Kafka({
  clientId: 'myapp',
  brokers: ['kafka:9092']
})

const producer = kafka.producer({ createPartitioner: Partitioners.LegacyPartitioner })
const consumer = kafka.consumer({ groupId: 'test-group' })

const run = async () => {
  await producer.connect()
  console.log('Connecting');
  await producer.send({
    topic: 'test-topic',
    messages: [
      { value: 'Hello KafkaJS user!' },
    ],
  })

  await consumer.connect()
  await consumer.subscribe({ topic: 'test-topic', fromBeginning: true })
  console.log('CONSUMER subscribed')
  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      console.log({
        topic,
        partition,
        offset: message.offset,
        value: message.value?.toString(),
      })
    },
  })
}

run().catch(console.error)
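The guess about KAFKA_CFG_ADVERTISED_LISTENERS looks right: the broker advertises PLAINTEXT://127.0.0.1:9092, so after the client bootstraps via kafka:9092 it is told to reconnect to 127.0.0.1:9092, which inside the app container points at the container itself and explains the ECONNREFUSED. A minimal sketch of the usual fix, assuming the app only needs to reach Kafka from inside the Compose network, is to advertise the service name instead:
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
If the broker must also be reachable from the host machine, a second, separately advertised listener would be needed rather than a single one.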
My subscriber is being triggered when I publish a message but the data payload seems to be empty. I'm following the steps in the pub/sub subscription methods documentation.
This is the app endpoint code:
import express, { Request, Response } from 'express';
import cors from 'cors';

const app = express();
app.use(cors())

app.get('/dapr/subscribe', (_req, res) => {
  res.json([
    {
      pubsubname: "order-pub-sub",
      topic: "orders",
      route: "api/deliveries",
    }
  ]);
});

app.post('/api/deliveries', async (req: Request, res: Response) => {
  const rawBody = JSON.stringify(req.body);
  console.log(`Data received: ${rawBody}`)
  res.status(200).send({ status: "SUCCESS" });
});
Starting the app:
docker run -d -p 5672:5672 --name dtc-rabbitmq rabbitmq
dapr run --app-id delivery --app-port 3100 --app-protocol http --dapr-http-port 3501 --components-path ../components npm run dev
Publishing to a topic:
dapr publish --publish-app-id order --pubsub order-pub-sub --topic orders --data '{"orderId": "100"}'
Here is the console output where the payload is empty. The endpoint is triggered, but no payload.
== APP == Server started on port 3100
== APP == Data received: {}
My pubsub.yaml file:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: order-pub-sub
spec:
  type: pubsub.rabbitmq
  version: v1
  metadata:
  - name: host
    value: "amqp://localhost:5672"
  - name: durable
    value: "false"
  - name: deletedWhenUnused
    value: "false"
  - name: autoAck
    value: "false"
  - name: reconnectWait
    value: "0"
  - name: concurrency
    value: parallel
scopes:
- order
- delivery
I didn't notice that I should expect a CloudEvents payload with my configuration.
Modifying my code to use the appropriate body parser solved the issue:
import bodyParser from 'body-parser';
const app = express();
app.use(bodyParser.json({ type: 'application/*+json' }));
It would also be possible to use the raw data, but for my case I'll keep it as it is.
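With that parser in place, the endpoint receives the whole CloudEvents envelope, and the published JSON sits under its data field. A minimal sketch of reading it (assuming Dapr's default CloudEvents wrapping):
app.post('/api/deliveries', async (req: Request, res: Response) => {
  // req.body is the CloudEvent envelope; the published payload is in req.body.data
  console.log(`Data received: ${JSON.stringify(req.body.data)}`); // e.g. {"orderId":"100"}
  res.status(200).send({ status: "SUCCESS" });
});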
Right now what I'm trying to do is that, every time a request is made, a query is made to the Redis service. The problem is that with a basic configuration it does not work. The error is the following:
INFO Redis Client Error Error: connect ...
    at TCPConnectWrap.afterConnect [as oncomplete] (node:...) {
  ...
  address: '127.0.0.1',
  port: 6379
}
As always, I have redis-server running with its corresponding credentials, listening on 127.0.0.1:6379. I know that AWS SAM runs in a container, and the issue is probably due to network configuration, but the only option that the AWS SAM CLI provides me is --host. How could I fix this?
My code is the following, although it is not very relevant:
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import { createClient } from 'redis';
import processData from './src/lambda-data-dictionary-read/core/service/controllers/processData';

export async function lambdaHandler(event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> {
  const body: any = await processData(event.queryStringParameters);
  const url = process.env.REDIS_URL || 'redis://127.0.0.1:6379';
  const client = createClient({
    url,
  });
  client.on('error', (err) => console.log('Redis Client Error', err));
  await client.connect();
  await client.set('key', 'value');
  const value = await client.get('key');
  console.log('----', value, '----');
  const response: APIGatewayProxyResult = {
    statusCode: 200,
    body,
  };
  if (body.error) {
    return {
      statusCode: 404,
      body,
    };
  }
  return response;
}
My template.yaml:
Transform: AWS::Serverless-2016-10-31
Description: >
  lambda-data-dictionary-read
  Sample SAM Template for lambda-data-dictionary-read
Globals:
  Function:
    Timeout: 0
Resources:
  IndexFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: app/
      Handler: index.lambdaHandler
      Runtime: nodejs16.x
      Timeout: 10
      Architectures:
        - x86_64
      Environment:
        Variables:
          ENV: !Ref develope
          REDIS_URL: !Ref redis://127.0.0.1:6379
      Events:
        Index:
          Type: Api
          Properties:
            Path: /api/lambda-data-dictionary-read
            Method: get
    Metadata:
      BuildMethod: esbuild
      BuildProperties:
        Minify: true
        Target: 'es2020'
        Sourcemap: true
        UseNpmCi: true
I'm using:
"scripts": {
"dev": "sam build --cached --beta-features && sam local start-api --port 8080 --host 127.0.0.1"
}
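A note on what is probably happening (an assumption based on how sam local works): the handler runs inside a Docker container, so 127.0.0.1 there refers to the Lambda container itself, not to the machine where redis-server is listening. On Docker Desktop, one common workaround is to point REDIS_URL at host.docker.internal via an env-vars file passed to sam local; the file name env.json below is illustrative, while IndexFunction matches the logical ID in the template:
{
  "IndexFunction": {
    "REDIS_URL": "redis://host.docker.internal:6379"
  }
}
sam local start-api --port 8080 --host 127.0.0.1 --env-vars env.json
Separately, REDIS_URL: !Ref redis://127.0.0.1:6379 in the template would not resolve as intended, since !Ref expects a parameter or resource name rather than a literal value.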
I built an HTTP app and 2 microservices using the TCP protocol.
This is my application diagram.
// Http App/app.service.ts
constructor() {
  this.accountService = ClientProxyFactory.create({
    transport: Transport.TCP,
    options: {
      host: 'localhost',
      port: 8877,
    },
  });
  this.friendService = ClientProxyFactory.create({
    transport: Transport.TCP,
    options: {
      host: 'localhost',
      port: 8080,
    },
  });
}
I tried to send a message from Account Service to Friend Service with @MessagePattern().
A ClientProxy is set up in each service, but it doesn't work.
I read the official @nestjs/microservices documentation, but I don't know which approach is appropriate.
Is there a right way to send a message from one microservice to another microservice?
You need to set up a message broker, something like RabbitMQ or Kafka. For example, for RabbitMQ, enter the command below to create a RabbitMQ container.
docker run -it --rm --name rabbitmq -p 0.0.0.0:5672:5672 -p 0.0.0.0:15672:15672 -d rabbitmq:3-management
Then pass RabbitMQ options to your main.ts bootstrap function:
import { Logger } from '@nestjs/common';
import { NestFactory } from '@nestjs/core';
import { MicroserviceOptions, Transport } from '@nestjs/microservices';
import { AppModule } from './app.module';

const logger = new Logger('Bootstrap');

async function bootstrap() {
  const rabbitmqPort = 5672;
  const rabbitmqHost = '127.0.0.1';
  const app = await NestFactory.create(AppModule);
  app.connectMicroservice<MicroserviceOptions>({
    transport: Transport.RMQ,
    options: {
      urls: [
        `amqp://${rabbitmqHost}:${rabbitmqPort}`,
      ],
      queue: 'myqueue',
      queueOptions: {
        durable: false,
      },
    },
  });
  await app.startAllMicroservices();
  logger.log('Microservice is listening!');
  await app.listen(3000);
  logger.log('Api Server is listening on 3000');
}
bootstrap();
For receiving messages:
@MessagePattern('my_pattern')
async myController(
  @Payload() data: MyDto,
): Promise<MyType> {
  return await this.accountService.myFunction(data);
}
Now, when a client sends a message on myqueue with the my_pattern pattern, the data that the client sends arrives via the @Payload() decorator.
For sending messages on any queue, you need to add the RabbitMQ configuration to your application module, i.e. account.module.ts, assuming that you want to send a message to FriendService:
import { Module } from '@nestjs/common';
import { ClientsModule, Transport } from '@nestjs/microservices';
import { AccountController } from './account.controller';
import { AccountService } from './account.service';

const rabbitmqPort = 5672;
const rabbitmqHost = '127.0.0.1';

@Module({
  imports: [
    ClientsModule.registerAsync([
      {
        name: 'Friend',
        useFactory: () => ({
          transport: Transport.RMQ,
          options: {
            urls: [
              `amqp://${rabbitmqHost}:${rabbitmqPort}`,
            ],
            queue: 'friend_queue',
            queueOptions: {
              durable: false,
            },
          },
        }),
      },
    ]),
  ],
  controllers: [AccountController],
  providers: [AccountService],
})
export class AccountModule {}
Then inject the Friend client into your service constructor like this:
@Inject('Friend')
private friendClient: ClientProxy,
Send messages like this:
const myVar = await this.friendClient.send('Some_pattern', {SOME DATA}).toPromise();
Set up all of the above configuration for both of your microservices and it should work.
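Putting the injection and the send together, the account service could look roughly like this (a sketch; the method name is illustrative, and toPromise() could be replaced by firstValueFrom on newer RxJS versions):
import { Inject, Injectable } from '@nestjs/common';
import { ClientProxy } from '@nestjs/microservices';

@Injectable()
export class AccountService {
  constructor(
    @Inject('Friend')
    private friendClient: ClientProxy,
  ) {}

  async notifyFriend(data: Record<string, unknown>) {
    // send() returns an Observable with the reply from the 'Some_pattern' handler
    return await this.friendClient.send('Some_pattern', data).toPromise();
  }
}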
I want to connect to an external Kafka topic provided by a vendor, as we have already developed a service on top of Node.js.
So I am looking for a Node.js Kafka consumer with SSL set up, as the Kafka server needs the details during the handshake.
This is what I tried with the kafkajs module already:
var fs = require('fs');
var Kafka = require('kafkajs').Kafka;
var logLevel = require('kafkajs').logLevel;

var _kafka = new Kafka({
  clientId: 'my-app',
  brokers: ['broker:9093'],
  logLevel: logLevel.DEBUG,
  ssl: {
    rejectUnauthorized: false,
    ca: [fs.readFileSync('./cert/ca.trust.certificate.pem', 'utf-8')],
    cert: fs.readFileSync('./cert/client-cert-signed.pem', 'utf-8'),
  }
});

try {
  const consumer = _kafka.consumer({ groupId: 'test-group' }, { maxWaitTimeInMs: 3000 });
  consumer.connect();
  consumer.subscribe({ topic: 'external-topic', fromBeginning: true });
  consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      console.log({
        partition: 2,
        offset: message.offset,
        value: message.value.toString(),
      })
    },
  })
} catch (err) {
  console.log('Error while connect : ' + err);
}
It is giving the following error while connecting:
Connection error: 101057795:error:1408E0F4:SSL routines:ssl3_get_message:unexpected message:openssl\ssl\s3_both.c:408:\
Could you please help me with a resolution, or suggest an npm module that I can try? Examples are very welcome.
I forgot to add the key attribute in the ssl options; it is mandatory for the handshake.
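For reference, the ssl options then carry the client key alongside the certificate (a sketch; the key file path is an assumption, following the other ./cert paths in the question):
ssl: {
  rejectUnauthorized: false,
  ca: [fs.readFileSync('./cert/ca.trust.certificate.pem', 'utf-8')],
  cert: fs.readFileSync('./cert/client-cert-signed.pem', 'utf-8'),
  // private key matching the signed client certificate, required for the TLS handshake
  key: fs.readFileSync('./cert/client-key.pem', 'utf-8'),
}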