I have a Node 14 server which is initialized like this:
import express, { Express } from 'express';
import kafkaConsumer from './modules/kafkaConsumer';
async function bootstrap(): Promise<Express> {
  kafkaConsumer();

  const app = express();
  app.get('/health', (_req, res) => {
    res.send('ok');
  });

  return app;
}

export default bootstrap;
kafkaConsumer code:
import logger from './logger.utils';
import KafkaConnector from '../connectors/kafkaConnector';

// singleton
const connectorInstance: KafkaConnector = new KafkaConnector('kafka endpoints', 'consumer group name');

// creating the consumer and producer outside of the main function so we don't
// initialize a new consumer/producer on every call.
(async () => {
  await connectorInstance.createConsumer('consumer group name');
})();

const kafkaConsumer = async (): Promise<void> => {
  const consumer = connectorInstance.getConsumer();
  await consumer.connect();
  await consumer.subscribe({ topic: 'topic1', fromBeginning: true });
  await consumer.run({
    autoCommit: false, // disable auto commit so committing stays under our control
    eachMessage: async ({ topic, partition, message }) => {
      const messageContent = message.value ? message.value.toString() : '';
      logger.info('received message', {
        partition,
        offset: message.offset,
        value: messageContent,
        topic
      });
      // commit once all processing is finished; KafkaJS expects the offset of the next message to read
      await consumer.commitOffsets([{ topic, partition, offset: (Number(message.offset) + 1).toString() }]);
    }
  });
};

export default kafkaConsumer;
You can see that in the kafkaConsumer module there's an async IIFE which runs at the beginning to initialize the consumer instance.
How can I guarantee that it completed successfully when importing the module?
In addition, when importing the module, does that mean the kafkaConsumer default function is automatically called? Won't that cause the server to be essentially stuck at startup?
Would appreciate some guidance here, thanks in advance.
Tweaked the Kafka initialization and tested with a local Kafka instance. Everything works as expected.
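For reference, a minimal sketch of the kind of tweak that makes the initialization awaitable, reusing the KafkaConnector API from the snippet above: keep the promise returned by createConsumer instead of firing it inside a floating IIFE, and await it at the top of the exported function.

// kafkaConsumer.ts — sketch: store the init promise so callers can await it
// and observe createConsumer failures instead of losing them in an IIFE.
import KafkaConnector from '../connectors/kafkaConnector';

const connectorInstance = new KafkaConnector('kafka endpoints', 'consumer group name');
const ready: Promise<void> = connectorInstance.createConsumer('consumer group name');

const kafkaConsumer = async (): Promise<void> => {
  await ready; // rethrows here if createConsumer failed
  const consumer = connectorInstance.getConsumer();
  await consumer.connect();
  await consumer.subscribe({ topic: 'topic1', fromBeginning: true });
  // ...consumer.run() exactly as above
};

export default kafkaConsumer;

bootstrap can then await kafkaConsumer() before returning the app. Note that importing a module never calls its default export automatically, and KafkaJS's consumer.run() resolves once the consumer has started (message processing continues in the background), so awaiting it does not leave the server stuck at startup.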
I want to perform a database migration from an old model to a new one.
I have a MongoDB database with two collections: the first one represents the old model and the other one represents the model the data will be transferred to.
Thank you for sharing if there is a way to get the job done, perhaps a link or anything that could help.
Note: for example, I get the data from large (an instance of the old model); I must take it and transfer it to another instance, step by step, until I have the second model.
import dotenv from 'dotenv';
import { MongoClient } from 'mongodb';

dotenv.config();

async function main() {
  const MONGO_URL = process.env.MONGO_URL || '';
  const client = new MongoClient(MONGO_URL);

  try {
    await client.connect();
    console.log('server is connected');
    await listDatabases(client);
  } finally {
    await client.close();
  }
}

main().catch(console.error);

async function listDatabases(client: MongoClient) {
  const databases = await client.db('park').admin().listDatabases();
  console.log('Databases:');
  databases.databases.forEach((db: { name: string }) => console.log(` - ${db.name}`));
}
async function ...{
}
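A minimal migration sketch for the two-collection transfer described above; the collection names (oldModel, newModel) and the transform function are placeholders to be replaced with the real schemas:

import { MongoClient, Document } from 'mongodb';

// Hypothetical reshaping from the old document shape to the new one;
// fill in the real field mapping step by step.
function transform(oldDoc: Document): Document {
  const { _id, ...rest } = oldDoc;
  return { ...rest };
}

async function migrate(): Promise<void> {
  const client = new MongoClient(process.env.MONGO_URL || '');
  try {
    await client.connect();
    const db = client.db('park');
    // Assumed collection names; replace with the real ones.
    const cursor = db.collection('oldModel').find();
    for await (const oldDoc of cursor) {
      await db.collection('newModel').insertOne(transform(oldDoc));
    }
  } finally {
    await client.close();
  }
}

migrate().catch(console.error);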
I'm writing pact integration tests which require performing actual calls to a specific mock server while the tests run.
I cannot find a way to change the RTK Query baseUrl after the api has been initialised.
it('works with rtk', async () => {
  // ... setup pact expectations

  const reducer = {
    [rtkApi.reducerPath]: rtkApi.reducer,
  };

  // proxy call to configureStore()
  const { store } = setupStoreAndPersistor({
    enableLog: true,
    rootReducer: reducer,
    isProduction: false,
  });

  // eslint-disable-next-line @typescript-eslint/no-explicit-any
  const dispatch = store.dispatch as any;
  dispatch(rtkApi.endpoints.GetModules.initiate());

  // sleep for 1 second to let the query settle
  await new Promise((resolve) => setTimeout(resolve, 1000));

  const data = store.getState().api;
  expect(data.queries['GetModules(undefined)']).toEqual({ modules: [] });
});
Base api
import { createApi } from '@reduxjs/toolkit/query/react';
import { graphqlRequestBaseQuery } from '@rtk-query/graphql-request-base-query';
import { GraphQLClient } from 'graphql-request';

export const client = new GraphQLClient('http://localhost:12355/graphql');

export const api = createApi({
  baseQuery: graphqlRequestBaseQuery({ client }),
  endpoints: () => ({}),
});
The query is very basic:
query GetModules {
  modules {
    name
  }
}
I tried digging into customizing baseQuery but was not able to get it working.
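One approach that may work here: since the GraphQLClient instance is exported from the base api module, the test can re-point it at the pact mock server before dispatching anything. setEndpoint is part of graphql-request's GraphQLClient API; the import path and mock URL below are assumptions:

import { client } from './base-api';

beforeEach(() => {
  // re-point the shared client at the pact mock server for this test run
  client.setEndpoint('http://127.0.0.1:1234/graphql');
});

This changes the URL without touching the RTK Query api object at all, since graphqlRequestBaseQuery holds a reference to the same client instance.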
I have an express app which uses bullmq queues, schedulers and workers. Even after pressing Ctrl + C I can still see the node process running in my Activity Monitor, although the server in the terminal shuts down. I know this because the bullmq tasks keep printing console.log statements to the terminal even after the server is down.
This is what my server.js file looks like
// eslint-disable-next-line import/first
import http from 'http';
import { app } from './app';
import { sessionParser } from './session';
import { websocketServer } from './ws';
import 'jobs/repeatable';

const server = http.createServer(app);

server.on('upgrade', (request, socket, head) => {
  sessionParser(request, {}, () => {
    websocketServer.handleUpgrade(request, socket, head, (ws) => {
      websocketServer.emit('connection', ws, request);
    });
  });
});

server.on('listening', () => {
  websocketServer.emit('listening');
});

server.on('close', () => {
  websocketServer.emit('close');
});

// https://stackoverflow.com/questions/18692536/node-js-server-close-event-doesnt-appear-to-fire
process.on('SIGINT', () => {
  server.close();
});

export { server };
Notice that I have a SIGINT handler defined above. Is this the reason my jobs are not exiting? Do I have to manually close every queue, worker and scheduler inside my SIGINT handler? My jobs/repeatable.js file looks as shown below:
const { scheduleJobs } = require('jobs');

if (process.env.ENABLE_JOB_QUEUE === 'true') {
  scheduleJobs();
}
Here is my jobs.js file
import { scheduleDeleteExpiredTokensJob } from './delete-expired-tokens';
import { scheduleDeleteNullVotesJob } from './delete-null-votes';

export async function scheduleJobs() {
  await scheduleDeleteExpiredTokensJob();
  await scheduleDeleteNullVotesJob();
}
Here is my delete-expired-tokens.js file; the other one is quite similar:
import { processor as deleteExpiredTokensProcessor } from './processor';
import { queue as deleteExpiredTokensQueue } from './queue';
import { scheduler as deleteExpiredTokensScheduler } from './scheduler';
import { worker as deleteExpiredTokensWorker } from './worker';

export async function scheduleDeleteExpiredTokensJob() {
  const jobId = process.env.QUEUE_DELETE_EXPIRED_TOKENS_JOB_ID;
  const jobName = process.env.QUEUE_DELETE_EXPIRED_TOKENS_JOB_NAME;

  await deleteExpiredTokensQueue.add(jobName, null, {
    repeat: {
      cron: process.env.QUEUE_DELETE_EXPIRED_TOKENS_FREQUENCY,
      jobId,
    },
  });
}

export {
  deleteExpiredTokensProcessor,
  deleteExpiredTokensQueue,
  deleteExpiredTokensScheduler,
  deleteExpiredTokensWorker,
};
How do I shutdown bullmq task queues gracefully?
You have to call the close() method on the workers:
server.on('close', async () => {
  websocketServer.emit('close');

  // Close the workers
  await worker.close();
});
Docs
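Beyond the workers, BullMQ queues and queue schedulers expose close() as well; a fuller SIGINT sketch, using the names from the question's modules:

process.on('SIGINT', async () => {
  server.close();
  // close everything that holds a Redis connection, then exit
  await Promise.all([
    deleteExpiredTokensWorker.close(),
    deleteExpiredTokensScheduler.close(),
    deleteExpiredTokensQueue.close(),
  ]);
  process.exit(0);
});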
With top-level await accepted into ES2022, I wonder if it is safe to assume that await import("./path/to/module") has no timeout at all.
Here is what I’d like to do:
// src/commands/do-a.mjs
console.log("Doing a...");
await doSomethingThatTakesHours();
console.log("Done.");
// src/commands/do-b.mjs
console.log("Doing b...");
await doSomethingElseThatTakesDays();
console.log("Done.");
// src/commands/do-everything.mjs
await import("./do-a");
await import("./do-b");
And here is what I expect to see when running node src/commands/do-everything.mjs:
Doing a...
Done.
Doing b...
Done.
I could not find any mentions of top-level await timeout, but I wonder if what I’m trying to do is a misuse of the feature. In theory Node.js (or Deno) might throw an exception after reaching some predefined time cap (say, 30 seconds).
Here is how I’ve been approaching the same task before TLA:
// src/commands/do-a.cjs
import { autoStartCommandIfNeeded } from "@kachkaev/commands";

const doA = async () => {
  console.log("Doing a...");
  await doSomethingThatTakesHours();
  console.log("Done.");
};

export default doA;
autoStartCommandIfNeeded(doA, __filename);

// src/commands/do-b.cjs
import { autoStartCommandIfNeeded } from "@kachkaev/commands";

const doB = async () => {
  console.log("Doing b...");
  await doSomethingThatTakesDays();
  console.log("Done.");
};

export default doB;
autoStartCommandIfNeeded(doB, __filename);
// src/commands/do-everything.cjs
import { autoStartCommandIfNeeded } from "@kachkaev/commands";
import doA from "./do-a";
import doB from "./do-b";

// `await` requires an async wrapper
const doEverything = async () => {
  await doA();
  await doB();
};

export default doEverything;
autoStartCommandIfNeeded(doEverything, __filename);
autoStartCommandIfNeeded() executes the function if __filename matches require.main?.filename.
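For context, a minimal sketch of what such a helper could look like based on that description (the real @kachkaev/commands implementation may differ):

export function autoStartCommandIfNeeded(
  command: () => Promise<void>,
  filename: string
): void {
  // only run the command when this file is the process entry point
  if (require.main?.filename === filename) {
    command().catch((error) => {
      console.error(error);
      process.exit(1);
    });
  }
}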
Answer: No, there is no timeout on a top-level await.
This feature is actually being used in Deno, for example for a webserver:
import { serve } from "https://deno.land/std@0.103.0/http/server.ts";

const server = serve({ port: 8080 });
console.log(`HTTP webserver running. Access it at: http://localhost:8080/`);

console.log("A");

for await (const request of server) {
  let bodyContent = "Your user-agent is:\n\n";
  bodyContent += request.headers.get("user-agent") || "Unknown";
  request.respond({ status: 200, body: bodyContent });
}

console.log("B");
In this example, "A" gets printed to the console, and "B" isn't printed until the webserver is shut down (which doesn't happen automatically).
As far as I know, there is no timeout by default in async/await. There is, for example, the await-timeout package, which adds timeout behavior. Example:
import Timeout from 'await-timeout';

const timer = new Timeout();
try {
  await Promise.race([
    fetch('https://example.com'),
    timer.set(1000, 'Timeout!')
  ]);
} finally {
  timer.clear();
}
Taken from the docs: https://www.npmjs.com/package/await-timeout
As you can see, a Timeout is instantiated and its set method defines the timeout and the timeout message.
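The same race pattern can also be written without the helper package; a dependency-free sketch:

// race the work against a timer, clearing the timer either way
function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error('Timeout!')), ms);
  });
  return Promise.race([work, timeout]).finally(() => clearTimeout(timer));
}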
I would like to process scheduled jobs using node.js bull. Basically I have two processors that handle two types of jobs, and one configurator that configures the jobs which will be added to the bull queue using cron.
The scheduler will be in one microservice, and each of the processors will be a separate microservice, so I will have three microservices in total.
My question is: am I using the correct pattern with bull?
index.js
const Queue = require('bull');
const fetchQueue = new Queue('MyScheduler');
fetchQueue.add("fetcher", {name: "earthQuakeAlert"}, {repeat: {cron: '1-59/2 * * * *'}, removeOnComplete: true});
fetchQueue.add("fetcher", {name: "weatherAlert"}, {repeat: {cron: '3-59/3 * * * *'}, removeOnComplete: true});
processor-configurator.js
const Queue = require('bull');
const scheduler = new Queue('MyScheduler');
scheduler.process("processor", __dirname + "/alert-processor");
fetcher-configurator.js
const Queue = require('bull');
const scheduler = new Queue('MyScheduler');
scheduler.process("fetcher", __dirname+"/fetcher");
fetcher.js
const Queue = require('bull');
const moment = require('moment');

// create the queue once per process instead of once per job
const scheduler = new Queue('MyScheduler');

module.exports = function (job) {
  console.log('Inside fetcher', job.data, moment().format('YYYY-MM-DD hh:mm:ss'));
  scheduler.add('processor', { name: 'Email needs to be sent' }, { removeOnComplete: true });
  return Promise.resolve();
};
alert-processor.js
const Queue = require('bull');
const moment = require('moment');

const scheduler = new Queue('MyScheduler');

module.exports = function (job) {
  console.log('Inside processor', job.data, moment().format('YYYY-MM-DD hh:mm:ss'));
  scheduler.add('processor', { name: 'Email needs to be sent' }, { removeOnComplete: true });
  return Promise.resolve();
};
There will be three microservices:
node index.js
node fetcher-configurator.js
node processor-configurator.js
I see inconsistent behavior from bull. Sometimes I get the error "Missing process handler for job type".
Quoting myself in the hope that this will be helpful for someone else:
This is because both workers use the same queue. A worker tries to get the next job from the queue, receives a job of the wrong type (e.g. "fetcher" instead of "processor") and fails, because it only knows how to handle "processor" jobs and doesn't know what to do with "fetcher" jobs. Bull doesn't allow a worker to take only compatible jobs from a queue; both workers should be able to process all types of jobs. The simplest solution is to use two different queues, one for processors and one for fetchers. Then you can remove the names from jobs and processors; they won't be needed anymore, since the job type is defined by the queue.
https://github.com/OptimalBits/bull/issues/1481
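A sketch of the two-queue layout that quote describes (queue names are assumptions):

const Queue = require('bull');

// one queue per job type, so a worker never receives an incompatible job
const fetchQueue = new Queue('fetch');
const processQueue = new Queue('process');

// scheduler: no job names needed anymore
fetchQueue.add({ name: 'earthQuakeAlert' }, { repeat: { cron: '1-59/2 * * * *' }, removeOnComplete: true });

// fetcher microservice
fetchQueue.process(__dirname + '/fetcher');

// processor microservice
processQueue.process(__dirname + '/alert-processor');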
The Bull queues:
expiration-queue.js
import Queue from 'bull';
import { ExpirationCompletePublisher } from '../events/publishers/expiration-complete-publisher';
import { natsWrapper } from '../nats-wrapper';
interface Payload {
  orderId: string;
}

const expirationQueue = new Queue<Payload>('order:expiration', {
  redis: {
    host: process.env.REDIS_HOST,
  },
});

expirationQueue.process(async (job) => {
  console.log('Expiring order id', job.data.orderId);

  new ExpirationCompletePublisher(natsWrapper.client).publish({
    orderId: job.data.orderId,
  });
});

export { expirationQueue };
promotionEndQueue.js
import Queue from 'bull';
import { PromotionEndedPublisher } from '../events/publishers/promotion-ended-publisher';
import { natsWrapper } from '../nats-wrapper';
interface Payload {
  promotionId: string;
}

const promotionEndQueue = new Queue<Payload>('promotions:end', {
  redis: {
    host: process.env.REDIS_HOST, // look at expiration-depl.yaml
  },
});

promotionEndQueue.process(async (job) => {
  console.log('Expiring promotion id', job.data.promotionId);

  new PromotionEndedPublisher(natsWrapper.client).publish({
    promotionId: job.data.promotionId,
  });
});

export { promotionEndQueue };
order-created-listener.js
import { Listener, OrderCreatedEvent, Subjects } from '@your-lib/common';
import { queueGroupName } from './queue-group-name';
import { Message } from 'node-nats-streaming';
import { expirationQueue } from '../../queues/expiration-queue';
export class OrderCreatedListener extends Listener<OrderCreatedEvent> {
  subject: Subjects.OrderCreated = Subjects.OrderCreated;
  queueGroupName = queueGroupName;

  async onMessage(data: OrderCreatedEvent['data'], msg: Message) {
    // delay = expiredTime - currentTime
    const delay = new Date(data.expiresAt).getTime() - new Date().getTime();
    // console.log("delay", delay)

    await expirationQueue.add(
      {
        orderId: data.id,
      },
      {
        delay,
      }
    );

    msg.ack();
  }
}
promotion-started-listener.js
import {
  Listener,
  PromotionStartedEvent,
  Subjects,
} from '@your-lib/common';
import { queueGroupName } from './queue-group-name';
import { Message } from 'node-nats-streaming';
import { promotionEndQueue } from '../../queues/promotions-end-queue';
export class PromotionStartedListener extends Listener<PromotionStartedEvent> {
  subject: Subjects.PromotionStarted = Subjects.PromotionStarted;
  queueGroupName = queueGroupName;

  async onMessage(data: PromotionStartedEvent['data'], msg: Message) {
    // delay = endTime - currentTime
    const delay = new Date(data.endTime).getTime() - new Date().getTime();
    // console.log("delay", delay)

    await promotionEndQueue.add(
      {
        promotionId: data.id,
      },
      {
        delay,
      }
    );

    msg.ack();
  }
}