Dapr pub/sub empty subscriber payload - dapr

My subscriber is being triggered when I publish a message but the data payload seems to be empty. I'm following the steps in the pub/sub subscription methods documentation.
This is the app endpoint code:
import express, { Request, Response } from 'express';
import cors from 'cors';

const app = express();
app.use(cors());

// Programmatic subscription: tells the Dapr sidecar which topic to deliver to which route.
app.get('/dapr/subscribe', (_req, res) => {
  res.json([
    {
      pubsubname: "order-pub-sub",
      topic: "orders",
      route: "api/deliveries",
    }
  ]);
});

app.post('/api/deliveries', async (req: Request, res: Response) => {
  const rawBody = JSON.stringify(req.body);
  console.log(`Data received: ${rawBody}`);
  res.status(200).send({ status: "SUCCESS" });
});
Starting the app:
docker run -d -p 5672:5672 --name dtc-rabbitmq rabbitmq
dapr run --app-id delivery --app-port 3100 --app-protocol http --dapr-http-port 3501 --components-path ../components npm run dev
Publishing to a topic:
dapr publish --publish-app-id order --pubsub order-pub-sub --topic orders --data '{"orderId": "100"}'
Here is the console output: the endpoint is triggered, but the payload is empty.
== APP == Server started on port 3100
== APP == Data received: {}
My pubsub.yaml file:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: order-pub-sub
spec:
  type: pubsub.rabbitmq
  version: v1
  metadata:
    - name: host
      value: "amqp://localhost:5672"
    - name: durable
      value: "false"
    - name: deletedWhenUnused
      value: "false"
    - name: autoAck
      value: "false"
    - name: reconnectWait
      value: "0"
    - name: concurrency
      value: parallel
scopes:
  - order
  - delivery

I didn't realize that, with this configuration, the published message arrives wrapped in a CloudEvents envelope, delivered with the Content-Type application/cloudevents+json, which the default JSON body parser doesn't handle.
Modifying my code to use the appropriate body parser solved the issue:
import bodyParser from 'body-parser';
const app = express();
app.use(bodyParser.json({ type: 'application/*+json' }));
It would also be possible to use raw data (bypassing the CloudEvents envelope), but for my case I'll keep it as it is.
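For completeness, a minimal sketch of how the handler can then pull the published payload out of the CloudEvents envelope (assuming the default CloudEvents wrapping, where the original message sits under the data field):

app.post('/api/deliveries', async (req: Request, res: Response) => {
  // req.body is the CloudEvents envelope; the published JSON is under req.body.data
  const order = req.body.data; // e.g. { orderId: "100" }
  console.log(`Order received: ${JSON.stringify(order)}`);
  res.status(200).send({ status: "SUCCESS" });
});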

Related

Use Redis with AWS SAM (Redis Client Error)

Right now, what I'm trying to do is query the Redis service every time a request is made. The problem is that, with a basic configuration, it does not work. The error is the following:
INFO Redis Client Error Error: connect ECONNREFUSED 127.0.0.1:6379 at TCPConnectWrap.afterConnect [as oncomplete] (node:net) { port: 6379, address: '127.0.0.1' }
I always have redis-server running with its corresponding credentials, listening on 127.0.0.1:6379. I know that AWS SAM runs in a container, and the issue is probably due to a network configuration, but the only relevant option the AWS SAM CLI provides me is --host. How could I fix this?
My code is the following, although it is not very relevant:
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import { createClient } from 'redis';
import processData from './src/lambda-data-dictionary-read/core/service/controllers/processData';

export async function lambdaHandler(event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> {
  const body: any = await processData(event.queryStringParameters);
  const url = process.env.REDIS_URL || 'redis://127.0.0.1:6379';
  const client = createClient({
    url,
  });
  client.on('error', (err) => console.log('Redis Client Error', err));
  await client.connect();
  await client.set('key', 'value');
  const value = await client.get('key');
  console.log('----', value, '----');
  const response: APIGatewayProxyResult = {
    statusCode: 200,
    body,
  };
  if (body.error) {
    return {
      statusCode: 404,
      body,
    };
  }
  return response;
}
My template.yaml:
Transform: AWS::Serverless-2016-10-31
Description: >
  lambda-data-dictionary-read
  Sample SAM Template for lambda-data-dictionary-read
Globals:
  Function:
    Timeout: 0
Resources:
  IndexFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: app/
      Handler: index.lambdaHandler
      Runtime: nodejs16.x
      Timeout: 10
      Architectures:
        - x86_64
      Environment:
        Variables:
          ENV: !Ref develope
          REDIS_URL: !Ref redis://127.0.0.1:6379
      Events:
        Index:
          Type: Api
          Properties:
            Path: /api/lambda-data-dictionary-read
            Method: get
    Metadata:
      BuildMethod: esbuild
      BuildProperties:
        Minify: true
        Target: 'es2020'
        Sourcemap: true
        UseNpmCi: true
I'm using:
"scripts": {
  "dev": "sam build --cached --beta-features && sam local start-api --port 8080 --host 127.0.0.1"
}
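For what it's worth, a likely culprit is that sam local start-api runs the handler inside a Docker container, so 127.0.0.1 there refers to the container itself rather than the machine running redis-server. A minimal sketch of one workaround, assuming Docker Desktop (where host.docker.internal resolves to the host machine); on Linux, the --docker-network flag can instead attach the function to a network that can reach Redis:

// Sketch: inside the Lambda container, 127.0.0.1 is the container, not the host.
// Assumption: Docker Desktop, where host.docker.internal resolves to the host machine.
const url = process.env.REDIS_URL || 'redis://host.docker.internal:6379';
const client = createClient({ url });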

NestJS - Communication between 2 microservices

I built an HTTP app and 2 microservices using the TCP protocol.
This is my application diagram.
// Http App/app.service.ts
constructor() {
  this.accountService = ClientProxyFactory.create({
    transport: Transport.TCP,
    options: {
      host: 'localhost',
      port: 8877,
    },
  });
  this.friendService = ClientProxyFactory.create({
    transport: Transport.TCP,
    options: {
      host: 'localhost',
      port: 8080,
    },
  });
}
I tried to send a message from Account Service to Friend Service with @MessagePattern().
A ClientProxy is set up in each service, but it doesn't work.
I read the official @nestjs/microservices documentation, but I don't know which approach is appropriate.
Is there a right way to send messages from one microservice to another?
You need to set up a message broker, something like RabbitMQ or Kafka. For example, for RabbitMQ, run the command below to create a RabbitMQ container:
docker run -it --rm --name rabbitmq -p 0.0.0.0:5672:5672 -p 0.0.0.0:15672:15672 -d rabbitmq:3-management
Then pass RabbitMQ options to your main.ts bootstrap function:
import { Logger } from '@nestjs/common';
import { NestFactory } from '@nestjs/core';
import { MicroserviceOptions, Transport } from '@nestjs/microservices';
import { AppModule } from './app.module';

const logger = new Logger('Bootstrap');

async function bootstrap() {
  const rabbitmqPort = 5672;
  const rabbitmqHost = '127.0.0.1';
  const app = await NestFactory.create(AppModule);
  app.connectMicroservice<MicroserviceOptions>({
    transport: Transport.RMQ,
    options: {
      urls: [
        `amqp://${rabbitmqHost}:${rabbitmqPort}`,
      ],
      queue: 'myqueue',
      queueOptions: {
        durable: false,
      },
    },
  });
  await app.startAllMicroservices();
  logger.log('Microservice is listening!');
  await app.listen(3000);
  logger.log('Api Server is listening on 3000');
}
bootstrap();
For receiving messages:
@MessagePattern('my_pattern')
async myController(
  @Payload() data: MYDTO,
): Promise<MY TYPE> {
  return await this.accountService.myFunction(data);
}
Now, when a client sends a message on myqueue with the my_pattern pattern, the data that the client sends arrives through the @Payload() decorator.
For sending messages on any queue, you need to add the RabbitMQ configuration to your application module, e.g. account.module.ts, assuming you want to send a message to FriendService:
import { Module } from '@nestjs/common';
import { ClientsModule, Transport } from '@nestjs/microservices';

const rabbitmqPort = 5672;
const rabbitmqHost = '127.0.0.1';

@Module({
  imports: [
    ClientsModule.registerAsync([
      {
        name: 'Friend',
        useFactory: () => ({
          transport: Transport.RMQ,
          options: {
            urls: [
              `amqp://${rabbitmqHost}:${rabbitmqPort}`,
            ],
            queue: 'friend_queue',
            queueOptions: {
              durable: false,
            },
          },
        }),
      },
    ]),
  ],
  controllers: [AccountController],
  providers: [AccountService],
})
export class AccountModule {}
And then inject the Friend client into your service constructor like this:
@Inject('Friend')
private friendClient: ClientProxy,
Send messages like this:
const myVar = await this.friendClient.send('Some_pattern', {SOME DATA}).toPromise();
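As a side note, toPromise() is deprecated in recent RxJS versions; an equivalent using firstValueFrom (assuming RxJS 7+) would be:

import { firstValueFrom } from 'rxjs';

const myVar = await firstValueFrom(this.friendClient.send('Some_pattern', { /* SOME DATA */ }));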
Set up all of the above configuration for both of your microservices and it should work.
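For reference, a minimal sketch of what the Friend service bootstrap might look like, consuming the same friend_queue (the module and file names here are illustrative assumptions):

// friend/main.ts (sketch; FriendModule is assumed to exist)
import { NestFactory } from '@nestjs/core';
import { MicroserviceOptions, Transport } from '@nestjs/microservices';
import { FriendModule } from './friend.module';

async function bootstrap() {
  const app = await NestFactory.createMicroservice<MicroserviceOptions>(FriendModule, {
    transport: Transport.RMQ,
    options: {
      urls: ['amqp://127.0.0.1:5672'],
      queue: 'friend_queue',
      queueOptions: { durable: false },
    },
  });
  await app.listen();
}
bootstrap();

A handler decorated with @MessagePattern('Some_pattern') in a Friend controller will then receive the messages sent above.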

How to programmatically create a Cloud Run service from Node.js?

I'm trying to create a new Cloud Run service from firebase functions using the googleapis client library. The following code:
const auth = new google.auth.GoogleAuth({
  projectId,
  scopes: ['https://www.googleapis.com/auth/cloud-platform']
});
const authClient = await auth.getClient();
const result = await google.run({
  version: 'v1',
  auth: authClient
}).namespaces.services.create({
  parent: `namespaces/${projectId}`,
  requestBody: {
    metadata: {
      name: 'asdf'
    },
    spec: {
      template: {
        spec: {
          containers: [
            {
              image: 'gcr.io/graph-4d1ec/graph@sha256:80c764961657d7e2fe548b3886c4662c55c9b5ac881aad5a74cce2d1f97895b8',
              env: [
                { name: 'URL', value: url }
              ]
            }
          ]
        }
      },
      traffic: [{ percent: 100, latestRevision: true }]
    }
  }
}, {})
Produces an error:
Error: The request has errors
at Gaxios._request (/srv/node_modules/gaxios/build/src/gaxios.js:85:23)
at <anonymous>
at process._tickDomainCallback (internal/process/next_tick.js:229:7)
No further information is provided as to what is wrong with this request.
What am I doing wrong?
Most notably, the API client library you're using points to run.googleapis.com by default.
However, when using namespaces.services.create, you need a regional API endpoint, such as us-central1-run.googleapis.com. I'm not familiar with Node.js, but you need to change the API endpoint from the default to this value.
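If I read the Node.js client correctly, the endpoint can be overridden with the rootUrl option when constructing the API object; treat the following as a sketch rather than a verified call:

// Sketch: point the client at a regional endpoint instead of run.googleapis.com.
const run = google.run({
  version: 'v1',
  auth: authClient,
  rootUrl: 'https://us-central1-run.googleapis.com'
});

await run.namespaces.services.create({ /* same request body as above */ });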
You are in super luck: I published a blog post just 5 minutes ago explaining how gcloud run deploy works under the covers, with details on the API calls, how updates are made, etc.: https://ahmet.im/blog/gcloud-run-deploy/ It has sample Go code linked at the end that you can study. Note that "updating" Cloud Run services has several other intricacies to understand, so make sure to check out the blog post.
Furthermore, to debug the issue you are having, I'm assuming (again, I know nothing about Node.js) you might find more info in the result object, which may be storing an error value, an HTTP response code, or a response body.
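For example, a hedged sketch of how that response body might be inspected (the exact error shape depends on the googleapis/gaxios version in use):

try {
  const result = await google.run({ version: 'v1', auth: authClient })
    .namespaces.services.create(/* ...same request as above... */);
  console.log(result.data);
} catch (err) {
  // Gaxios errors usually carry the API's response body, which spells out which field was rejected.
  console.error(err.response ? err.response.data : err);
}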

Strimzi - Connecting external clients

Following on a discussion here, I used the following steps to enable an external client (based on kafkajs) to connect to Strimzi on OpenShift. These steps are from here.
Enable external route
The kafka-persistent-single.yaml is edited as shown below.
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 2.3.0
    replicas: 1
    listeners:
      plain: {}
      tls: {}
      external:
        type: route
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      log.message.format.version: "2.3"
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 5Gi
          deleteClaim: false
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 5Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
Extract certificate
To extract the certificate and use it in the client, I ran the following command:
kubectl get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -D > ca.crt
Note that I had to use base64 -D on my macOS, and not base64 -d as shown in the documentation.
Kafkajs client
This is the client adapted from their npm page and their documentation.
const fs = require('fs')
const { Kafka } = require('kafkajs')

const kafka = new Kafka({
  clientId: 'my-app',
  brokers: ['my-cluster-kafka-bootstrap-messaging-os.192.168.99.100.nip.io'],
  ssl: {
    rejectUnauthorized: false,
    ca: [fs.readFileSync('ca.crt', 'utf-8')]
  }
})

const producer = kafka.producer()
const consumer = kafka.consumer({ groupId: 'test-group' })

const run = async () => {
  // Producing
  await producer.connect()
  await producer.send({
    topic: 'test-topic',
    messages: [
      { value: 'Hello KafkaJS user!' },
    ],
  })

  // Consuming
  await consumer.connect()
  await consumer.subscribe({ topic: 'test-topic', fromBeginning: true })
  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      console.log({
        partition,
        offset: message.offset,
        value: message.value.toString(),
      })
    },
  })
}

run().catch(console.error)
Question
When I run node sample.js from the folder having ca.crt, I get a connection refused message.
{"level":"ERROR","timestamp":"2019-10-05T03:22:40.491Z","logger":"kafkajs","message":"[Connection] Connection error: connect ECONNREFUSED 192.168.99.100:9094","broker":"my-cluster-kafka-bootstrap-messaging-os.192.168.99.100.nip.io:9094","clientId":"my-app","stack":"Error: connect ECONNREFUSED 192.168.99.100:9094\n at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1113:14)"}
What am I missing?
I guess the problem is that you are missing the right port, 443, on the broker address, so you have to use
brokers: ['my-cluster-kafka-bootstrap-messaging-os.192.168.99.100.nip.io:443']
otherwise it is trying to connect to the default port 80 on the OpenShift route.
After an extended discussion with @ppatierno, I feel that the Strimzi cluster works well with the Kafka console clients. The kafkajs package, on the other hand, keeps failing with NOT_LEADER_FOR_PARTITION.
UPDATE The Python client seems to be working without a fuss; so, I am abandoning kafkajs.

NODEJS Marklogic - Write document list cannot process response with 404 status when using documents.write()

I'm new to nodejs and MarkLogic, and I'm following a tutorial for a simple app. I have set up and configured my MarkLogic login credentials.
When I run this sample code with node sample.js,
the output is: write document list cannot process response with 404 status.
I wonder why I'm encountering this error.
Here is the code from the tutorial:
my-connection.js
module.exports = {
  connInfo: {
    host: '127.0.0.1',
    port: 8001,
    user: 'user',
    password: 'password'
  }
};
sample.js
const marklogic = require('marklogic');
const my = require('./my-connection.js');
const db = marklogic.createDatabaseClient(my.connInfo);
const documents = [
  { uri: '/gs/aardvark.json',
    content: {
      name: 'aardvark',
      kind: 'mammal',
      desc: 'The aardvark is a medium-sized burrowing, nocturnal mammal.'
    }
  },
  { uri: '/gs/bluebird.json',
    content: {
      name: 'bluebird',
      kind: 'bird',
      desc: 'The bluebird is a medium-sized, mostly insectivorous bird.'
    }
  },
  { uri: '/gs/cobra.json',
    content: {
      name: 'cobra',
      kind: 'mammal',
      desc: 'The cobra is a venomous, hooded snake of the family Elapidae.'
    }
  },
];

db.documents.write(documents).result(
  function(response) {
    console.log('Loaded the following documents:');
    response.documents.forEach( function(document) {
      console.log('  ' + document.uri);
    });
  },
  function(error) {
    console.log('error here');
    console.log(JSON.stringify(error, null, 2));
  }
);
I hope someone can tell me what is wrong with the code.
Thank you!
The MarkLogic Node.js Client library is meant to run against a so-called MarkLogic REST API instance. There is typically one running at port 8000, but you can also deploy others at different ports by issuing a POST call to :8002/v1/rest-apis, as described here:
http://docs.marklogic.com/REST/POST/v1/rest-apis
Port 8001, however, is reserved for the MarkLogic Admin UI, which doesn't understand the REST calls that the Node.js Client library is trying to invoke, hence the 404 (not found).
HTH!
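In other words, pointing the connection at a REST API instance should resolve the 404. A minimal sketch, assuming the default REST instance on port 8000 and the same credentials:

// my-connection.js (sketch): use the REST API port, not the Admin UI port 8001
module.exports = {
  connInfo: {
    host: '127.0.0.1',
    port: 8000,
    user: 'user',
    password: 'password'
  }
};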
