Bull queue is getting added but never completed - node.js

I'm working on an express app that uses several Bull queues in production. We created a wrapper around BullQueue (a stripped-down version of it is included below):
import logger from '~/libs/logger'
import BullQueue from 'bull'
import Redis from '~/libs/redis'
import {report} from '~/libs/sentry'
import {ValidationError, RetryError} from '~/libs/errors'

export default class Queue {
  constructor(opts = {}) {
    if (!opts.label) {
      throw new ValidationError('Cannot create queue without label')
    }
    if (!this.handler) {
      throw new ValidationError(`Cannot create queue ${opts.label} without handler`)
    }
    this.label = opts.label
    this.jobOpts = Object.assign({
      attempts: 3,
      backoff: {
        type: 'exponential',
        delay: 10000,
      },
      concurrency: 1,
      // clean up jobs on completion - prevents redis slowly filling
      removeOnComplete: true
    }, opts.jobOpts)
    const queueOpts = Object.assign({
      // `client` and `subscriber` are presumably shared Redis connections defined
      // at module scope; they are not shown in this stripped-down version
      createClient: function (type) {
        switch (type) {
          case 'client':
            return client
          case 'subscriber':
            return subscriber
          default:
            return new Redis().client
        }
      }
    }, opts.queueOpts)
    this.queue = new BullQueue(this.label, queueOpts)
    if (process.env.NODE_ENV === 'test') {
      return
    }
    logger.info(`Queue:${this.label} created`)
  }

  add(data, opts = {}) {
    const jobOpts = Object.assign(this.jobOpts, opts)
    return this.queue.add(data, jobOpts)
  }
}
Then I created a queue that is supposed to send a GET request using node-fetch
import Queue from '~/libs/queue'
import Sentry from '~/libs/sentry'
import fetch from 'node-fetch'

class IndexNowQueue extends Queue {
  constructor(options = {}) {
    super({
      label: 'documents--index-now'
    })
  }

  async handler(job) {
    Sentry.addBreadcrumb({category: 'async'})
    const {permalink} = job.data
    const res = await fetch(`https://www.bing.com/indexnow?url=${permalink}&key=${process.env.INDEX_NOW_KEY}`)
    if (res.status === 200) {
      return
    }
    throw new Error(`Failed to submit url '${permalink}' to IndexNow. Status: ${res.status} ${await res.text()}`)
  }
}

export default new IndexNowQueue()
And then a job is added to this queue in the relevant endpoint:
indexNowQueue.add({permalink: document.permalink})
In the logs I can see that the job is added; however, unlike the other queues (for instance aggregate-feeds), it never moves forward.
No error is thrown, and any debugger breakpoint I added in there never gets reached. I also tried isolating the handler function outside of the queue, and it works as I would expect. What could be causing this issue? What other ways do I have to debug Bull?
It's worth mentioning that there are half a dozen queues in the project and they are all working as expected.
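For reference, this is the kind of instrumentation that can be bolted on while debugging (a sketch using stock Bull lifecycle events; the import path is made up, and indexNowQueue.queue is the underlying Bull instance exposed by the wrapper above):

// debug-index-now.js - hypothetical file, illustrative only
import indexNowQueue from '~/queues/index-now' // made-up path to the queue module

const queue = indexNowQueue.queue // the raw Bull queue inside the wrapper

queue.on('error', (err) => console.error('queue error', err))
queue.on('waiting', (jobId) => console.log('job waiting', jobId))
queue.on('active', (job) => console.log('job active', job.id))
queue.on('stalled', (job) => console.warn('job stalled', job.id))
queue.on('failed', (job, err) => console.error('job failed', job.id, err))
queue.on('completed', (job) => console.log('job completed', job.id))

// snapshot of waiting/active/completed/failed/delayed counts
queue.getJobCounts().then((counts) => console.log('job counts', counts))

If 'waiting' fires but 'active' never does, no worker has registered a processor for this queue, which narrows the problem down to wherever the wrapper is supposed to call queue.process().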

Node - async function inside an imported module

I have a Node 14 server which is initialized like this:
import express, { Express } from 'express';
import kafkaConsumer from './modules/kafkaConsumer';

async function bootstrap(): Promise<Express> {
  kafkaConsumer();
  const app = express();
  app.get('/health', (_req, res) => {
    res.send('ok');
  });
  return app;
}

export default bootstrap;
kafkaConsumer code:
import logger from './logger.utils';
import KafkaConnector from '../connectors/kafkaConnector';

// singleton
const connectorInstance: KafkaConnector = new KafkaConnector('kafka endpoints', 'consumer group name');

// creating consumer and producer outside of main function in order to not initialize a new consumer/producer per each new call.
(async () => {
  await connectorInstance.createConsumer('consumer group name');
})();

const kafkaConsumer = async (): Promise<void> => {
  const kafkaConsumer = connectorInstance.getConsumer();
  await kafkaConsumer.connect();
  await kafkaConsumer.subscribe({ topic: 'topic1', fromBeginning: true });
  await kafkaConsumer.run({
    autoCommit: false, // cancel auto commit in order to control committing
    eachMessage: async ({ topic, partition, message }) => {
      const messageContent = message.value ? message.value.toString() : '';
      logger.info('received message', {
        partition,
        offset: message.offset,
        value: messageContent,
        topic
      });
      // commit message once finished all processing
      await kafkaConsumer.commitOffsets([{ topic, partition, offset: message.offset }]);
    }
  });
};

export default kafkaConsumer;
You can see that in the kafkaConsumer module there is an async IIFE which is called at the beginning to initialize the consumer instance.
How can I guarantee that it has completed successfully by the time the module is imported?
In addition, when importing the module, does that mean the kafkaConsumer default function is automatically called? Won't it cause the server to be essentially stuck at startup?
Would appreciate some guidance here, thanks in advance.
Tweaked the Kafka initialization and tested with a local Kafka. Everything works as expected.
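One way to make the initialization awaitable, rather than firing it off in a floating IIFE, is to keep the promise around and await it inside the consumer function. This is a sketch; KafkaConnector and its methods are assumed to behave as in the question:

import KafkaConnector from '../connectors/kafkaConnector';

const connectorInstance: KafkaConnector = new KafkaConnector('kafka endpoints', 'consumer group name');

// keep the promise instead of discarding it, so callers can await it
const consumerReady: Promise<void> = connectorInstance.createConsumer('consumer group name');

const kafkaConsumer = async (): Promise<void> => {
  // guarantees createConsumer() has finished (or rethrows its error) before continuing
  await consumerReady;
  const consumer = connectorInstance.getConsumer();
  await consumer.connect();
  // ...subscribe and run as before
};

export default kafkaConsumer;

As for the second question: importing the module only runs its top-level statements (including the IIFE). The exported function is not called until bootstrap invokes it, and since bootstrap doesn't await it, the server won't be stuck at startup.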

Mocha, Supertest and Mongo Memory server, cannot setup hooks properly

I have a Node.js Express server that uses MongoDB as the database. I am trying to write some tests but I cannot get them configured properly. I need to create a mongoose connection once for each block (.test.ts file) and then clean up the db for the other tests. Depending on how I approach this I get two different behaviours. But first, my setup.
user.controller.test.ts
import { suite, test, expect } from "../utils/index";
import expressLoader from "#/loaders/express";
import { Logger } from "winston";
import express from "express";
import request from "supertest";
import { UserAccountModel } from "#/models/UserAccount";
import setup from "../setup";

setup();

import { app } from "#/server";

let loggerMock: Logger;

describe("POST /api/user/account/", function () {
  it("it should have status code 200 and create an user account", async function () {
    //GIVEN
    const userCreateRequest = {
      email: "test@gmail.com",
      userId: "testUserId",
    };
    //WHEN
    await request(app)
      .post("/api/user/account/")
      .send(userCreateRequest)
      .expect(200);
    //SHOULD
    const cnt: number = await UserAccountModel.count();
    expect(cnt).to.equal(1);
  });
});
And my other test post.repository.test.ts
import { suite, test, expect } from "../../utils/index";
import { PostModel } from "#/models/Posts/Post";
import PostRepository from "#/repositories/posts.repository";
import { PostType } from "#/interfaces/Posts";
import { Logger } from "winston";

@suite
class PostRepositoryTests {
  private loggerMock: Logger;
  private SUT: PostRepository = new PostRepository({});

  @test async "Should create two posts"() {
    //GIVEN
    const given = {
      id: "jobId123",
    };
    //WHEN
    await this.SUT.CreatePost(given);
    //SHOULD
    const cnt: number = await PostModel.count();
    expect(cnt).to.equal(1);
  }
}
And my setup
setup.ts
import { MongoMemoryServer } from "mongodb-memory-server";
import mongoose from "mongoose";

export = () => {
  let mongoServer: MongoMemoryServer;

  before(function () {
    console.log("Before");
    return MongoMemoryServer.create().then(function (mServer) {
      mongoServer = mServer;
      const mongoUri = mongoServer.getUri();
      return mongoose.connect(mongoUri);
    });
  });

  after(function () {
    console.log("After");
    return mongoose.disconnect().then(function () {
      return mongoServer.stop(true);
    });
  });
};
With the above setup I get
1) "before all" hook in "{root}":
MongooseError: Can't call `openUri()` on an active connection with different connection strings. Make sure you aren't calling `mongoose.connect()` multiple times. See: https://mongoosejs.com/docs/connections.html#multiple_connections
But if I don't import setup.ts and instead rename it to setup.test.ts, then it works, but the data isn't cleared between runs, so I actually end up with 10 new users created instead of 1. Every time I run it, it passes but never clears the data when it's finished.
I also have a big issue where the tests hang and don't seem to finish. I am guessing that is because of the async/await in the tests, or because the hooks hang.
What I want to happen is:
1) Each test file should have its own setup, and the mongo memory server should be clean every time.
2) The tests should use async/await and not hang.
3) Somehow export the setup from 1) as a utility function so that I can reuse it in my code (see the sketch below).
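A setup along these lines should cover all three points. It is a sketch that assumes the file is loaded exactly once for the whole run (for example via Mocha's --file option), so mongoose.connect() is only ever called once, and an afterEach wipes the collections between tests:

// test/setup.ts - load once per run, e.g. with: mocha --file test/setup.ts
import { MongoMemoryServer } from "mongodb-memory-server";
import mongoose from "mongoose";

let mongoServer: MongoMemoryServer;

before(async function () {
  mongoServer = await MongoMemoryServer.create();
  await mongoose.connect(mongoServer.getUri());
});

afterEach(async function () {
  // wipe every collection so each test starts from an empty database
  for (const collection of Object.values(mongoose.connection.collections)) {
    await collection.deleteMany({});
  }
});

after(async function () {
  // closing the connection is what lets the mocha process exit instead of hanging
  await mongoose.disconnect();
  await mongoServer.stop();
});

Because the hooks are registered at the root level and the file is loaded only once, the "Can't call openUri() on an active connection" error from importing the setup into several test files goes away.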

Does top-level await have a timeout?

With top-level await accepted into ES2022, I wonder if it is safe to assume that await import("./path/to/module") has no timeout at all.
Here is what I’d like to do:
// src/commands/do-a.mjs
console.log("Doing a...");
await doSomethingThatTakesHours();
console.log("Done.");
// src/commands/do-b.mjs
console.log("Doing b...");
await doSomethingElseThatTakesDays();
console.log("Done.");
// src/commands/do-everything.mjs
await import("./do-a");
await import("./do-b");
And here is what I expect to see when running node src/commands/do-everything.mjs:
Doing a...
Done.
Doing b...
Done.
I could not find any mentions of top-level await timeout, but I wonder if what I’m trying to do is a misuse of the feature. In theory Node.js (or Deno) might throw an exception after reaching some predefined time cap (say, 30 seconds).
Here is how I’ve been approaching the same task before TLA:
// src/commands/do-a.cjs
import { autoStartCommandIfNeeded } from "@kachkaev/commands";

const doA = async () => {
  console.log("Doing a...");
  await doSomethingThatTakesHours();
  console.log("Done.");
};

export default doA;
autoStartCommandIfNeeded(doA, __filename);

// src/commands/do-b.cjs
import { autoStartCommandIfNeeded } from "@kachkaev/commands";

const doB = async () => {
  console.log("Doing b...");
  await doSomethingThatTakesDays();
  console.log("Done.");
};

export default doB;
autoStartCommandIfNeeded(doB, __filename);

// src/commands/do-everything.cjs
import { autoStartCommandIfNeeded } from "@kachkaev/commands";
import doA from "./do-a";
import doB from "./do-b";

// note: this has to be async for the awaits inside to be valid
const doEverything = async () => {
  await doA();
  await doB();
};

export default doEverything;
autoStartCommandIfNeeded(doEverything, __filename);
autoStartCommandIfNeeded() executes the function if __filename matches require.main?.filename.
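A minimal sketch of what such a helper might look like (the real @kachkaev/commands implementation may differ):

// illustrative only - not the actual package source
const autoStartCommandIfNeeded = (command, filename) => {
  // run the command only when the file is executed directly, not when imported
  if (require.main && require.main.filename === filename) {
    command().catch((error) => {
      console.error(error);
      process.exit(1);
    });
  }
};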
Answer: No, there is no timeout on a top-level await.
This feature is actually used in Deno, for example for a web server:
import { serve } from "https://deno.land/std@0.103.0/http/server.ts";

const server = serve({ port: 8080 });
console.log(`HTTP webserver running. Access it at: http://localhost:8080/`);
console.log("A");

for await (const request of server) {
  let bodyContent = "Your user-agent is:\n\n";
  bodyContent += request.headers.get("user-agent") || "Unknown";
  request.respond({ status: 200, body: bodyContent });
}

console.log("B");
In this example, "A" gets printed in the console and "B" isn't until the webserver is shut down (which doesn't automatically happen).
As far as I know, there is no timeout by default in async/await. There is the await-timeout package, for example, which adds timeout behaviour. Example:
import Timeout from 'await-timeout';

const timer = new Timeout();
try {
  await Promise.race([
    fetch('https://example.com'),
    timer.set(1000, 'Timeout!')
  ]);
} finally {
  timer.clear();
}
Taken from the docs: https://www.npmjs.com/package/await-timeout
As you can see, a Timeout is instantiated and its set method defines the timeout and the timeout message.
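The same idea can be sketched without a dependency, using a bare Promise.race and setTimeout (illustrative only):

// a dependency-free sketch of the same race-based timeout
const withTimeout = (promise, ms, message = 'Timeout!') => {
  let timer;
  const timeout = new Promise((_resolve, reject) => {
    timer = setTimeout(() => reject(new Error(message)), ms);
  });
  // clear the timer either way so it doesn't keep the event loop alive
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
};

// usage: rejects if the fetch takes longer than one second
// await withTimeout(fetch('https://example.com'), 1000);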

Job processing microservices using bull

I would like to process scheduled jobs using node.js bull. Basically I have two processors that handle two types of jobs. There is one configurator that configures the jobs, which will be added to the bull queue using cron.
The scheduler will be in one microservice and each of the processors will be a separate microservice, so I will have three microservices in total.
My question is: am I using the correct pattern with bull?
index.js
const Queue = require('bull');
const fetchQueue = new Queue('MyScheduler');
fetchQueue.add("fetcher", {name: "earthQuakeAlert"}, {repeat: {cron: '1-59/2 * * * *'}, removeOnComplete: true});
fetchQueue.add("fetcher", {name: "weatherAlert"}, {repeat: {cron: '3-59/3 * * * *'}, removeOnComplete: true});
processor-configurator.js
const Queue=require('bull');
const scheduler = new Queue("MyScheduler");
scheduler.process("processor", __dirname + "/alert-processor");
fetcher-configurator.js
const Queue=require('bull');
const scheduler = new Queue("MyScheduler");
scheduler.process("fetcher", __dirname+"/fetcher");
fetcher.js
const Queue = require('bull');
const moment = require('moment');

module.exports = function (job) {
  const scheduler = new Queue('MyScheduler');
  console.log("Insider processor ", job.data, moment().format("YYYY-MM-DD hh:mm:ss"));
  scheduler.add('processor', {'name': 'Email needs to be sent'}, {removeOnComplete: true});
  return Promise.resolve();
};
alert-processor.js
const Queue = require('bull');
const moment = require('moment');

module.exports = function (job) {
  const scheduler = new Queue('MyScheduler');
  console.log("Insider processor ", job.data, moment().format("YYYY-MM-DD hh:mm:ss"));
  scheduler.add('processor', {'name': 'Email needs to be sent'}, {removeOnComplete: true});
  return Promise.resolve();
};
There will be three microservices -
node index.js
node fetcher-configurator.js
node processor-configurator.js
I see inconsistent behavior from bull. Sometimes I am getting the error "Missing process handler for job type".
Quoting myself in the hope this will be helpful for someone else:
This is because both workers use the same queue. A worker tries to get the next job from the queue, receives a job with the wrong type (e.g. "fetcher" instead of "processor") and fails, because it only knows how to handle "processor" and doesn't know what to do with "fetcher". Bull doesn't allow you to take only compatible jobs from a queue; both workers should be able to process all types of jobs. The simplest solution would be to use two different queues, one for processors and one for fetchers. Then you can remove the names from jobs and processors; they won't be needed anymore, since the name is defined by the queue.
https://github.com/OptimalBits/bull/issues/1481
Here is an example that uses two separate Bull queues in this way:
expiration-queue.js
import Queue from 'bull';
import { ExpirationCompletePublisher } from '../events/publishers/expiration-complete-publisher';
import { natsWrapper } from '../nats-wrapper';

interface Payload {
  orderId: string;
}

const expirationQueue = new Queue<Payload>('order:expiration', {
  redis: {
    host: process.env.REDIS_HOST,
  },
});

expirationQueue.process(async (job) => {
  console.log('Expiries order id', job.data.orderId);
  new ExpirationCompletePublisher(natsWrapper.client).publish({
    orderId: job.data.orderId,
  });
});

export { expirationQueue };
promotionEndQueue.js
import Queue from 'bull';
import { PromotionEndedPublisher } from '../events/publishers/promotion-ended-publisher';
import { natsWrapper } from '../nats-wrapper';

interface Payload {
  promotionId: string;
}

const promotionEndQueue = new Queue<Payload>('promotions:end', {
  redis: {
    host: process.env.REDIS_HOST, // look at expiration-depl.yaml
  },
});

promotionEndQueue.process(async (job) => {
  console.log('Expiries promotion id', job.data.promotionId);
  new PromotionEndedPublisher(natsWrapper.client).publish({
    promotionId: job.data.promotionId,
  });
});

export { promotionEndQueue };
order-created-listener.js
import { Listener, OrderCreatedEvent, Subjects } from '@your-lib/common';
import { queueGroupName } from './queue-group-name';
import { Message } from 'node-nats-streaming';
import { expirationQueue } from '../../queues/expiration-queue';

export class OrderCreatedListener extends Listener<OrderCreatedEvent> {
  subject: Subjects.OrderCreated = Subjects.OrderCreated;
  queueGroupName = queueGroupName;

  async onMessage(data: OrderCreatedEvent['data'], msg: Message) {
    // delay = expiredTime - currentTime
    const delay = new Date(data.expiresAt).getTime() - new Date().getTime();
    // console.log("delay", delay)
    await expirationQueue.add(
      {
        orderId: data.id,
      },
      {
        delay,
      }
    );
    msg.ack();
  }
}
promotion-started-listener.js
import {
  Listener,
  PromotionStartedEvent,
  Subjects,
} from '@your-lib/common';
import { queueGroupName } from './queue-group-name';
import { Message } from 'node-nats-streaming';
import { promotionEndQueue } from '../../queues/promotions-end-queue';

export class PromotionStartedListener extends Listener<PromotionStartedEvent> {
  subject: Subjects.PromotionStarted = Subjects.PromotionStarted;
  queueGroupName = queueGroupName;

  async onMessage(data: PromotionStartedEvent['data'], msg: Message) {
    // delay = expiredTime - currentTime
    const delay = new Date(data.endTime).getTime() - new Date().getTime();
    // console.log("delay", delay)
    await promotionEndQueue.add(
      {
        promotionId: data.id,
      },
      {
        delay,
      }
    );
    msg.ack();
  }
}

Need to find the error with connecting subscription with schema stitching

I am using apollo-server-express for my GraphQL back-end. I am going to process only mutations there, but I want to redirect queries and subscriptions to hasura by means of schema stitching with introspection. Queries through apollo-server to hasura are working fine and returning the expected data.
But subscriptions are not working, and I am getting this error: "Expected Iterable, but did not find one for field subscription_root.users".
Besides, the hasura server is receiving the events.
But apollo-server rejects the answer from hasura. I have been struggling with this for days and I cannot understand what the problem is.
In the hasura editor, subscriptions work.
Link to full code
If you need any additional info, I will gladly provide it to you.
import {
  introspectSchema,
  makeExecutableSchema,
  makeRemoteExecutableSchema,
  mergeSchemas,
  transformSchema,
  FilterRootFields
} from 'graphql-tools';
import { HttpLink } from 'apollo-link-http';
import nodeFetch from 'node-fetch';
import { resolvers } from './resolvers';
import { hasRoleResolver } from './directives';
import { typeDefs } from './types';
import { WebSocketLink } from 'apollo-link-ws';
import { split } from 'apollo-link';
import { getMainDefinition } from 'apollo-utilities';
import { SubscriptionClient } from 'subscriptions-transport-ws';
import * as ws from 'ws';
import { OperationTypeNode } from 'graphql';

interface IDefinitionsParams {
  operation?: OperationTypeNode,
  kind: 'OperationDefinition' | 'FragmentDefinition'
}

const wsurl = 'ws://graphql-engine:8080/v1alpha1/graphql';

const getWsClient = function (wsurl: string) {
  const client = new SubscriptionClient(wsurl, {
    reconnect: true,
    lazy: true
  }, ws);
  return client;
};

const wsLink = new WebSocketLink(getWsClient(wsurl));

const createRemoteSchema = async () => {
  const httpLink = new HttpLink({
    uri: 'http://graphql-engine:8080/v1alpha1/graphql',
    fetch: (nodeFetch as any)
  });
  const link = split(
    ({ query }) => {
      const { kind, operation }: IDefinitionsParams = getMainDefinition(query);
      console.log('kind = ', kind, 'operation = ', operation);
      return kind === 'OperationDefinition' && operation === 'subscription';
    },
    wsLink,
    httpLink,
  );
  const remoteSchema = await introspectSchema(link);
  const remoteExecutableSchema = makeRemoteExecutableSchema({
    link,
    schema: remoteSchema
  });
  const renamedSchema = transformSchema(
    remoteExecutableSchema,
    [
      new FilterRootFields((operation, fieldName) => {
        return (operation === 'Mutation') ? false : true; // && fieldName === 'password'
      })
    ]
  );
  return renamedSchema;
};

export const createNewSchema = async () => {
  const hasuraExecutableSchema = await createRemoteSchema();
  const apolloSchema = makeExecutableSchema({
    typeDefs,
    resolvers,
    directiveResolvers: {
      hasRole: hasRoleResolver
    }
  });
  return mergeSchemas({
    schemas: [
      hasuraExecutableSchema,
      apolloSchema
    ]
  });
};
Fixed by installing version 4 of graphql-tools. It turns out the editor did not even notice that I did not have this dependency and simply took the version from node_modules that had been installed by some other package. The problem was with version 3.x. Pull request is where the bug was fixed.
I had the same problem, different cause and solution.
My subscription was working well until I introduced the 'resolve' key in my subscription resolver.
Here is the 'Subscription' part of my resolver:
Subscription: {
  mySubName: {
    resolve: (payload) => {
      console.log('In mySubName resolver, payload:', payload)
      return payload;
    },
    subscribe: () => pubSub.asyncIterator(['requestsIncomplete']),
  },
},
The console.log proved the resolve() function was being called with a well-structured payload (shaped the same as my schema definition, specifically an object with a key named after the graphQL subscription, pointing to an array, and an array is an iterable):
In mySubName resolver, payload: { mySubName:
  [ { id: 41,
      ...,
    },
    {...},
    {...}
    ...
    ...
  ] }
Even though I was returning that same unadulterated object, it caused the error "Expected Iterable, but did not find one for field Subscription.mySubName".
When I commented out that resolve function altogether, the subscription worked, which is further evidence that my payload was well structured, with the right key pointing to an iterable.
I must be misusing the resolve field. From https://www.apollographql.com/docs/graphql-subscriptions/subscriptions-to-schema/:
When using subscribe field, it's also possible to manipulate the event
payload before running it through the GraphQL execution engine.
Add resolve method near your subscribe and change the payload as you wish
So I am not sure how to properly use that function, and specifically I don't know what shape of object to return from it, but using it as above breaks the subscription in the same manner you describe in your question.
I was already using graphql-tools 4.0.0, I upgraded to 4.0.8 but it made no difference.
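For what it's worth, given how GraphQL's default field resolver works (when no resolve is given, it looks up payload[fieldName]), a plausible reading is that a custom resolve has to return the field's value rather than the whole payload. A sketch, not verified against this exact setup:

Subscription: {
  mySubName: {
    // return the field value (the iterable) rather than the wrapper object;
    // this mirrors what the default resolver would have done
    resolve: (payload) => payload.mySubName,
    subscribe: () => pubSub.asyncIterator(['requestsIncomplete']),
  },
},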
