I built an HTTP app and two microservices using the TCP transport.
This is my application setup:
// Http App/app.service.ts
constructor() {
  this.accountService = ClientProxyFactory.create({
    transport: Transport.TCP,
    options: {
      host: 'localhost',
      port: 8877,
    },
  });
  this.friendService = ClientProxyFactory.create({
    transport: Transport.TCP,
    options: {
      host: 'localhost',
      port: 8080,
    },
  });
}
I tried to send a message from the Account Service to the Friend Service with @MessagePattern().
A ClientProxy is set up in each service, but it doesn't work.
I read the official documentation for @nestjs/microservices, but I don't know which approach is appropriate.
Is there a right way to send a message from one microservice to another microservice?
You need to set up a message broker, something like RabbitMQ or Kafka. For RabbitMQ, for example, run the command below to create a RabbitMQ container:
docker run -it --rm --name rabbitmq -p 0.0.0.0:5672:5672 -p 0.0.0.0:15672:15672 -d rabbitmq:3-management
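Once the container is up, the management UI bundled with the rabbitmq:3-management image should be reachable at http://localhost:15672 (default credentials guest/guest), which is handy for verifying that the queues below actually get created.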
Then pass RabbitMQ options to your main.ts bootstrap function:
import { NestFactory } from '@nestjs/core';
import { Logger } from '@nestjs/common';
import { MicroserviceOptions, Transport } from '@nestjs/microservices';
import { AppModule } from './app.module';

const logger = new Logger('Bootstrap');

async function bootstrap() {
  const rabbitmqPort = 5672;
  const rabbitmqHost = '127.0.0.1'; // note: the host must be a string
  const app = await NestFactory.create(AppModule);
  app.connectMicroservice<MicroserviceOptions>({
    transport: Transport.RMQ,
    options: {
      urls: [`amqp://${rabbitmqHost}:${rabbitmqPort}`],
      queue: 'myqueue',
      queueOptions: {
        durable: false,
      },
    },
  });
  await app.startAllMicroservices();
  logger.log('Microservice is listening!');
  await app.listen(3000);
  logger.log('Api Server is listening on 3000');
}
bootstrap();
For receiving messages:
@MessagePattern('my_pattern')
async myController(
  @Payload() data: MyDto,
): Promise<MyType> {
  return await this.accountService.myFunction(data);
}
Now when a client sends a message on myqueue with the my_pattern pattern, the data the client sent arrives in the data parameter via the @Payload() decorator.
For sending messages on any queue, you need to add the RabbitMQ configuration to your application module, e.g. account.module.ts, assuming you want to send a message to the Friend Service:
// account.module.ts
import { Module } from '@nestjs/common';
import { ClientsModule, Transport } from '@nestjs/microservices';
import { AccountController } from './account.controller';
import { AccountService } from './account.service';

const rabbitmqPort = 5672;
const rabbitmqHost = '127.0.0.1';

@Module({
  imports: [
    ClientsModule.registerAsync([
      {
        name: 'Friend',
        // useFactory must be a function returning the client options
        useFactory: () => ({
          transport: Transport.RMQ,
          options: {
            urls: [`amqp://${rabbitmqHost}:${rabbitmqPort}`],
            queue: 'friend_queue',
            queueOptions: {
              durable: false,
            },
          },
        }),
      },
    ]),
  ],
  controllers: [AccountController],
  providers: [AccountService],
})
export class AccountModule {}
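Since the options here are static, the synchronous ClientsModule.register variant would work just as well; registerAsync is mainly useful when the options must be built from injected dependencies such as a ConfigService.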
Then inject the Friend client into your service constructor like this:
constructor(
  @Inject('Friend')
  private readonly friendClient: ClientProxy,
) {}
Send messages like this:
const myVar = await this.friendClient.send('Some_pattern', { /* some data */ }).toPromise();
Set up all of the above for both of your microservices and it should work.
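For completeness, here is a minimal sketch of what the receiving side could look like. The FriendModule / FriendController names and the reply shape are placeholders, not taken from the question; only the queue name and pattern match the snippets above.
// friend/main.ts — hypothetical Friend Service bootstrap listening on friend_queue
import { NestFactory } from '@nestjs/core';
import { MicroserviceOptions, Transport } from '@nestjs/microservices';
import { FriendModule } from './friend.module';

async function bootstrap() {
  const app = await NestFactory.createMicroservice<MicroserviceOptions>(FriendModule, {
    transport: Transport.RMQ,
    options: {
      urls: ['amqp://127.0.0.1:5672'],
      queue: 'friend_queue', // must match the queue the Account module sends to
      queueOptions: { durable: false },
    },
  });
  await app.listen();
}
bootstrap();

// friend/friend.controller.ts — handler matching the pattern used by friendClient.send()
import { Controller } from '@nestjs/common';
import { MessagePattern, Payload } from '@nestjs/microservices';

@Controller()
export class FriendController {
  @MessagePattern('Some_pattern')
  handle(@Payload() data: unknown) {
    return { received: data }; // whatever is returned here resolves friendClient.send()
  }
}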
Related
I'm trying to implement microservices with gRPC and Kafka, but when I add both options in main.ts the gRPC client doesn't load in the onModuleInit method.
This is my main.ts:
async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.connectMicroservice<MicroserviceOptions>({
    transport: Transport.GRPC,
    options: {
      url: '0.0.0.0:50053',
      package: protobufPackage,
      protoPath: join('./order.proto'),
    },
  });
  app.connectMicroservice<MicroserviceOptions>({
    transport: Transport.KAFKA,
    options: {
      client: {
        brokers: ['localhost:29092'],
      },
      consumer: {
        groupId: 'orders-consumer',
      },
    },
  });
  await app.startAllMicroservices();
}
and my client in the provider:
public onModuleInit(): void {
  this.service = this.clientService.getService<ProductService>(PRODUCT_SERVICE_NAME);
}
At this point this.service is undefined, but only when the Kafka configuration is set. I don't know if I'm missing something or misconfigured it.
Any suggestion would be very helpful!
Call await app.init() after app.startAllMicroservices(). The bootstrap above never calls app.listen() (which triggers init() internally), so without an explicit init() the lifecycle hooks, including onModuleInit, never run.
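Applied to the bootstrap above, the tail of the function would look like this (a minimal sketch; the gRPC and Kafka options stay exactly as posted):
async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // ... the two connectMicroservice<MicroserviceOptions>() calls from above ...
  await app.startAllMicroservices();
  await app.init(); // runs lifecycle hooks (onModuleInit), so the gRPC client gets created
}
bootstrap();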
I am setting up an Astro site which will display data fetched from a simple service running on the same host but a different port.
The service is a simple Express app.
server.js:
const express = require('express')
const app = express()
const port = 3010

const response = {
  message: "hello"
}

app.get('/api/all', (_req, res) => {
  res.send(JSON.stringify(response))
})

app.listen(port, () => {
  console.log(`listening on port ${port}`)
})
Since the service is running on port 3010, which is different from the Astro site, I configure a server proxy at the Vite level.
astro.config.mjs:
import { defineConfig } from 'astro/config';
import react from '@astrojs/react';

export default defineConfig({
  integrations: [react()],
  vite: {
    optimizeDeps: {
      esbuildOptions: {
        define: {
          global: 'globalThis'
        }
      }
    },
    server: {
      proxy: {
        '/api/all': 'http://localhost:3010'
      }
    }
  },
});
Here is where I am trying to invoke the service.
index.astro:
---
const response = await fetch('/api/all');
const data = await response.json();
console.log(data);
---
When I run yarn dev I get this console output:
Response {
  size: 0,
  [Symbol(Body internals)]: {
    body: Readable {
      _readableState: [ReadableState],
      _events: [Object: null prototype],
      _eventsCount: 1,
      _maxListeners: undefined,
      _read: [Function (anonymous)],
      [Symbol(kCapture)]: false
    },
    stream: Readable {
      _readableState: [ReadableState],
      _events: [Object: null prototype],
      _eventsCount: 1,
      _maxListeners: undefined,
      _read: [Function (anonymous)],
      [Symbol(kCapture)]: false
    },
    boundary: null,
    disturbed: false,
    error: null
  },
  [Symbol(Response internals)]: {
    type: 'default',
    url: undefined,
    status: 404,
    statusText: '',
    headers: { date: 'Tue, 02 Aug 2022 19:41:02 GMT' },
    counter: undefined,
    highWaterMark: undefined
  }
}
It looks like the network request is returning a 404.
I'm not seeing much more in the docs about server configuration.
Am I going about this the right way?
I have this working correctly with a vanilla Vite app and the same config/setup.
How can I proxy local service calls for an Astro application?
Short Answer
You cannot proxy service calls with Astro, but you also don't have to.
For the direct resolution, see the section "Functional test without proxy" below.
Details
Astro does not forward the server.proxy config to Vite (unless you patch your own version of Astro); the proxy entry in the Vite server config that Astro builds can be seen left empty:
proxy: {
  // add proxies here
},
Reference: https://github.com/withastro/astro/blob/8c100a6fe6cc652c3799d1622e12c2c969f30510/packages/astro/src/core/create-vite.ts#L125
There is a merge of the Astro server config with the Astro vite.server config, but it does not take the proxy param. This is not obvious from the code alone; see the tests below.
let result = commonConfig;
result = vite.mergeConfig(result, settings.config.vite || {});
result = vite.mergeConfig(result, commandConfig);
Reference: https://github.com/withastro/astro/blob/8c100a6fe6cc652c3799d1622e12c2c969f30510/packages/astro/src/core/create-vite.ts#L167
Tests
Config tests
I tried all possible combinations of how to pass the config to Astro, using a different port number in each location to show which one takes the override:
A vite.config.js file at the root with:
export default {
  server: {
    port: 6000,
    proxy: {
      '/api': 'http://localhost:4000'
    }
  }
}
In two locations in the root file astro.config.mjs:
server
vite.server
export default defineConfig({
  server: {
    port: 3000,
    proxy: {
      '/api': 'http://localhost:4000'
    }
  },
  integrations: [int_test()],
  vite: {
    optimizeDeps: {
      esbuildOptions: {
        define: {
          global: 'globalThis'
        }
      }
    },
    server: {
      port: 5000,
      proxy: {
        '/api': 'http://localhost:4000'
      }
    }
  }
});
In an Astro integration
Astro has so-called integrations that can update the config (a sort of Astro plugin). An integration helps identify what was finally kept in the config, and also gives a last chance to update it.
integration-test.js
async function config_setup({ updateConfig, config, addPageExtension, command }) {
  green_log(`astro:config:setup> running (${command})`)
  updateConfig({
    server: { proxy: { '/api': 'http://localhost:4000' } },
    vite: { server: { proxy: { '/api': 'http://localhost:4000' } } }
  })
  console.log(config.server)
  console.log(config.vite)
  green_log(`astro:config:setup> end`)
}
This is the output log:
astro:config:setup> running (dev)
{ host: false, port: 3000, streaming: true }
{
  optimizeDeps: { esbuildOptions: { define: [Object] } },
  server: { port: 5000, proxy: { '/api': 'http://localhost:4000' } }
}
astro:config:setup> end
The proxy parameter is removed from the Astro server config; the Vite config is visible but has no effect, as it is overridden and not forwarded to Vite.
Test results
The dev server runs on port 3000, which comes from the Astro server config; all other port settings are overridden.
The fetch call fails with the error:
error Failed to parse URL from /api
File:
D:\dev\astro\astro-examples\24_api-proxy\D:\dev\astro\astro-examples\24_api-proxy\src\pages\index.astro:15:20
Stacktrace:
TypeError: Failed to parse URL from /api
at Object.fetch (node:internal/deps/undici/undici:11118:11)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
Functional test without proxy
Given that Astro front matter runs on the server side (in SSG mode during build, and in SSR mode on page load, before the server sends the resulting HTML), Astro has access to all host ports and can use the service port directly, like this:
const response = await fetch('http://localhost:4000/api');
const data = await response.json();
console.log(data);
The code above runs as expected without errors
Reference Example
All tests and files mentioned above are available in the reference example GitHub repo: https://github.com/MicroWebStacks/astro-examples/tree/main/24_api-proxy
You can add your own proxy middleware with the astro:server:setup hook.
For example, use http-proxy-middleware in the server setup hook:
// plugins/proxy-middleware.mjs
import { createProxyMiddleware } from "http-proxy-middleware"

export default (context, options) => {
  const apiProxy = createProxyMiddleware(context, options)
  return {
    name: 'proxy',
    hooks: {
      'astro:server:setup': ({ server }) => {
        server.middlewares.use(apiProxy)
      }
    }
  }
}
Usage:
// astro.config.mjs
import { defineConfig } from 'astro/config';
import proxyMiddleware from './plugins/proxy-middleware.mjs';

// https://astro.build/config
export default defineConfig({
  integrations: [
    proxyMiddleware("/api/all", {
      target: "http://localhost:3010",
      changeOrigin: true,
    }),
  ],
});
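With this in place, any request that reaches the Astro dev server under /api/all is forwarded to the Express service on port 3010, which also covers fetches issued by the browser. Note that a relative fetch('/api/all') executed in the front matter still runs on the server side, where it fails to resolve, as the other answer's tests show.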
I have a Node/Express server which is used to serve streams from IP cameras to a website. Everything is working well. I run that web server with PM2 on a Windows server.
The problem: for each stream, a Windows console window opens with nothing logged in it. The console reopens when I try to close it.
Is there a way to prevent those consoles from opening?
Here is the related Node.js code:
const { NodeMediaServer } = require('node-media-server');

private _initiate_streams(): void {
  DatabaseProvider.instance.camerasDao.getCamerasList().pipe(
    take(1)
  ).subscribe(
    (databaseReadOperationResult: DatabaseReadOperationResult<ICamera[]>) => {
      if (databaseReadOperationResult.successful === true) {
        const cameras = databaseReadOperationResult.result;
        const tasks = [];
        cameras.forEach(camera => {
          tasks.push({
            app: config.get('media_server.app_name'),
            mode: 'static',
            edge: camera.rtsp_url,
            name: camera.stream_name,
            rtsp_transport: 'tcp'
          });
        });
        const configMediaServer = {
          logType: 3, // 3 - Log everything (debug)
          rtmp: {
            port: 1935,
            chunk_size: 60000,
            gop_cache: true,
            ping: 60,
            ping_timeout: 30
          },
          http: {
            port: config.get('media_server.port'),
            allow_origin: '*'
          },
          auth: {
            play: true,
            api: true,
            publish: true,
            secret: config.get('salt'),
            api_user: 'user',
            api_pass: 'password',
          },
          relay: {
            ffmpeg: 'C:\\FFmpeg\\bin\\ffmpeg.exe',
            tasks: tasks
          }
        };
        const nms = new NodeMediaServer(configMediaServer);
        nms.run();
      } else {
        // catch exception
      }
    }
  );
}
I'm working on a Node.js application with NestJS. I need to communicate with two other apps.
The first one over WebSockets (Socket.io) and the other one over TCP sockets with the net module.
Is it possible to use two gateways with specific adapters, one based on Socket.io and the other one on the net module, or do I have to split this application?
You don't need to split the application.
You can define your module as:
@Module({
  providers: [
    MyGateway,
    MyService,
  ],
})
export class MyModule {}
with the gateway being in charge of the WebSockets channel:
import { SubscribeMessage, WebSocketGateway } from '@nestjs/websockets'
import { Socket } from 'socket.io'
...

@WebSocketGateway()
export class MyGateway {
  constructor(private readonly myService: MyService) {}

  @SubscribeMessage('MY_MESSAGE')
  public async sendMessage(socket: Socket, data: IData): Promise<IData> {
    socket.emit(...)
  }
}
and the service being in charge of the TCP channel:
import { Client, ClientProxy, Transport } from '@nestjs/microservices'
import { HttpException, Injectable } from '@nestjs/common'
...

@Injectable()
export class MyService {
  @Client({
    options: { host: 'MY_HOST', port: MY_PORT },
    transport: Transport.TCP,
  })
  private client: ClientProxy

  public async myFunction(): Promise<IData> {
    return this.client
      .send<IData>({ cmd: 'MY_MESSAGE' }, {}) // send() requires a payload; {} as a placeholder
      .toPromise()
      .catch(error => {
        throw new HttpException(error, error.status)
      })
  }
}
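To tie the two channels together, the gateway handler can simply delegate to the injected service. A minimal sketch, reusing the names from the snippets above (the reply event name is a placeholder, not part of the original answer):
@SubscribeMessage('MY_MESSAGE')
public async sendMessage(socket: Socket, data: IData): Promise<IData> {
  // forward the WebSocket request over the TCP channel, then emit the reply
  const result = await this.myService.myFunction()
  socket.emit('MY_MESSAGE_REPLY', result) // placeholder event name
  return result
}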
I'm currently studying Node.js and MarkLogic. I'm running a sample, but I cannot make it work: I get ECONNREFUSED whenever I run the code.
Here is my code.
my-connection.js
module.exports = {
  connInfo: {
    host: 'localhost',
    port: 8008,
    user: 'user',
    password: 'password'
  }
};
sample.js
const marklogic = require('marklogic');
const my = require('./my-connection.js');

const db = marklogic.createDatabaseClient(my.connInfo);

const documents = [
  {
    uri: '/gs/aardvark.json',
    content: {
      name: 'aardvark',
      kind: 'mammal',
      desc: 'The aardvark is a medium-sized burrowing, nocturnal mammal.'
    }
  },
  {
    uri: '/gs/bluebird.json',
    content: {
      name: 'bluebird',
      kind: 'bird',
      desc: 'The bluebird is a medium-sized, mostly insectivorous bird.'
    }
  },
  {
    uri: '/gs/cobra.json',
    content: {
      name: 'cobra',
      kind: 'mammal',
      desc: 'The cobra is a venomous, hooded snake of the family Elapidae.'
    }
  },
];

db.documents.write(documents).result(
  function(response) {
    console.log('Loaded the following documents:');
    response.documents.forEach(function(document) {
      console.log(' ' + document.uri);
    });
  },
  function(error) {
    console.log('error here');
    console.log(JSON.stringify(error, null, 2));
  }
);
I'm running it by typing node sample.js, and I'm using MarkLogic for the database. Can someone help me identify the problem here?
I get ECONNREFUSED upon running the app. Thank you!
ECONNREFUSED indicates no TCP listener process is running behind localhost:8008. That could mean MarkLogic is not running on your localhost, or it has no app-server configured at port 8008.
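A quick way to confirm that, independent of the MarkLogic driver, is to open a raw TCP connection to the port (a minimal sketch; the file name is arbitrary):
// check-port.ts — probes whether anything accepts TCP connections on 8008
import { createConnection } from 'net';

const socket = createConnection({ host: 'localhost', port: 8008 }, () => {
  console.log('Something is listening on 8008');
  socket.end();
});

socket.on('error', (err) => {
  // ECONNREFUSED lands here when no listener is behind the port
  console.log(`No listener on 8008: ${err.message}`);
});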
Check if http://localhost:8001 works on your machine, and brings up the MarkLogic Admin UI. If so, check the app-servers to see if you actually have one configured for 8008.
HTH!