I use PM2 to handle each worker (consumer) in a separate process, e.g. ecosystem.config.js:
module.exports = {
  apps: [
    {
      name: "Nest App",
      script: "./dist/main.js",
      instances: "3",
      autorestart: false,
      watch: true,
      max_memory_restart: "1G",
      exec_mode: "cluster",
      env: {
        NODE_ENV: "development",
      },
      env_production: {
        NODE_ENV: "production",
      },
    },
    {
      name: "Consumer",
      script: "./dist/message-queue/consumer.service.js",
      instances: 1,
    },
  ],
};
I also created a message-queue module that includes message-queue.service.ts, which is used to set up the queue and publish messages; that part works fine.
My structure looks like:
src
  message-queue
    consumer.service.ts
    message-queue.module.ts
    message-queue.service.ts
The problem comes when dealing with the consumer (worker): the channel consumes messages correctly, but I can't get any data from the database or call any model/service. My consumer.service.ts looks like this:
import * as amqp from 'amqp-connection-manager';
import { Channel } from 'amqplib';
import { Constants } from '../utils/constants';

// consume messages from RabbitMQ
async function testConsumer(): Promise<void> {
  const connection: amqp.AmqpConnectionManager = await amqp.connect('');
  const channel: amqp.ChannelWrapper = await connection.createChannel({
    setup: function (channel: Channel) {
      return Promise.all([channel.prefetch(2)]);
    },
  });

  connection.on('connect', function () {
    console.log('Connected from test consumer');
  });

  return new Promise((resolve, reject) => {
    channel.consume(Constants.MessageQueues.TEST, async function (msg) {
      // parse message
      // ...
    });
  });
}

testConsumer();
testConsumer();
Can I query data from the database, call a service (e.g. authService), or use a database model (e.g. userModel) from this consumer, and if so, how?
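One pattern that makes this possible (not shown in the question) is to bootstrap a Nest application context inside the worker script, so the DI container and any database providers are available in that process. A rough sketch, where AppModule, AuthService and the import paths are illustrative placeholders:
// consumer bootstrap (sketch): AppModule, AuthService and the paths are placeholders
import { NestFactory } from '@nestjs/core';
import { AppModule } from '../app.module';
import { AuthService } from '../auth/auth.service';

async function bootstrapConsumer(): Promise<void> {
  // createApplicationContext wires up the DI container (including database
  // providers registered in AppModule) without starting an HTTP listener
  const appContext = await NestFactory.createApplicationContext(AppModule);
  const authService = appContext.get(AuthService);

  // ...set up the RabbitMQ connection/channel exactly as in testConsumer()
  // and call authService (or any other provider) inside the consume callback...
}

bootstrapConsumer();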
Related
I'm proving out an integration test with NestJS/KafkaJS.
I have everything implemented, except that the event listener (consumer) function for the topic I'm emitting to is never called.
I read somewhere that you can't consume a message until the consumer's GROUP_JOIN event has completed; I'm not sure whether that's right, or how I could get my e2e test to wait for it.
Here's the setup of the e2e test -
describe('InventoryController(e2e)', () => {
let app: INestApplication
let client: ClientKafka
beforeAll(async () => {
const moduleFixture: TestingModule = await Test.createTestingModule({
imports: [InventoryModule, KafkaClientModule],
}).compile()
app = moduleFixture.createNestApplication()
await app.connectMicroservice({
transport: Transport.KAFKA,
options: {
client: {
clientId: 'test_clientid',
brokers: process.env.KAFKA_BROKERS.split(' '), // []
ssl: true,
sasl: {
mechanism: 'plain',
username: process.env.KAFKA_CLUSTER_APIKEY,
password: process.env.KAFKA_CLUSTER_SECRET,
},
},
consumer: {
groupId: 'test_consumerids',
},
},
})
await app.startAllMicroservices()
await app.init()
client = moduleFixture.get<ClientKafka>('test_name')
await client.connect()
await app.listen(process.env.port || 3000)
})
afterAll(async () => {
await app.close()
await client.close()
})
it('/ (GET)', async () => {
return request(app.getHttpServer()).get('/inventory/kafka-inventory-test')
})
it('Emits a message to a topic', async () => {
await client.emit('inventory-test', { foo: 'bar' })
  })
})
The client is emitting the message fine.
In my controller I have the event handler for the topic 'inventory-test':
@EventPattern('inventory-test')
async consumeInventoryTest(
  // eslint-disable-next-line
  @Payload() inventoryMessage: any,
  @Ctx() context: KafkaContext,
): Promise<void> {
  console.log('inventory-test consumer')
}
I have also logged the microservice with the app.getMicroservices() method and can see that the messageHandlers map contains 'inventory-test', which maps to a function:
server: ServerKafka {
messageHandlers: Map(1) { 'inventory-test' => [Function] }
The message handler also works when I run the app locally.
I've searched Google a lot, as well as the docs for both KafkaJS and Nest, but there isn't much information out there.
Thanks for any advice/help!
I finally solved this: you need to await a new Promise after your client emits a message, so the handler has time to read it.
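Concretely, the emitting test ends up looking something like this (the 5000 ms delay is an arbitrary figure and may need tuning; you may also have to raise Jest's timeout for this spec):
it('Emits a message to a topic', async () => {
  await client.emit('inventory-test', { foo: 'bar' })
  // give the consumer time to finish GROUP_JOIN and invoke the handler
  // before the test (and the app) is torn down
  await new Promise(resolve => setTimeout(resolve, 5000))
})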
This is my index.js file, located in the ./src directory:
import { MongoClient } from "mongodb";
import CharacterDAO from "./dao/character";
import GearDAO from "./dao/gear";
import { startServer } from "./server";
import { seedData } from "./dataSeed";
// connect mongoDb, seed data if needed, run fastify server
export const runServer = async ({ dbUrl, dbName, environment, port }) => {
// test seed data when starting server if running a test suite
if (environment === "test") {
await seedData({
hostUrl: dbUrl,
databaseName: dbName
});
}
await MongoClient.connect(dbUrl, {
poolSize: 50,
useNewUrlParser: true,
useUnifiedTopology: true,
wtimeout: 2500
})
.then(async conn => {
const database = await conn.db(dbName);
// inject database connection into DAO objects
CharacterDAO.injectDB(database);
GearDAO.injectDB(database);
// start the fastify server
startServer(port);
})
.catch(err => {
console.log(err.stack);
// process.exit(1);
});
};
const serverArguments = process.argv.slice(2).map(arg => {
return arg.split("=")[1];
});
const serverOptions = {
dbUrl: serverArguments[0],
dbName: serverArguments[1],
environment: serverArguments[2],
port: serverArguments[3]
};
runServer({
...serverOptions
});
jestconfig.json
{
  "transform": {
    "^.+\\.(t|j)sx?$": "ts-jest"
  },
  "testEnvironment": "node",
  "testRegex": "(/__tests__/.*|(\\.|/)(test|spec))\\.(jsx?|tsx?)$",
  "moduleFileExtensions": ["ts", "tsx", "js", "jsx", "json", "node"]
}
Test script from package.json used to run the test (db credentials are omitted)
"test": "dbUrl=mongodb+srv://sdaw-dsawdad-dsadsawd#cluster0-jopi5.mongodb.net dbName=untitled-combat-game-test environment=test port=4000 jest --config jestconfig.json"
My test file:
import { runServer } from "../index";
beforeAll(async () => {
const serverOptions = {
dbUrl: process.env.dbUrl,
dbName: process.env.dbName,
environment: process.env.environment,
port: process.env.port
};
console.log(serverOptions);
await runServer({
...serverOptions
});
});
describe("mock test", () => {
it("should run a basic test", () => {
expect(true).toBe(true);
});
});
What happens when I run the test:
the test script runs runServer
the index.js file runs runServer
This causes an invalid URI error (since the process.argv referenced in index.js does not include a valid MongoDB URI). I double-checked this by commenting out the runServer call at the bottom of my index.js file, and everything ran fine.
Moving the runServer function to a different file and importing it from there also solves the issue. So importing in both index.js and the test file does not result in multiple calls.
What am I doing wrong?
Importing/requiring a file evaluates the code inside of it (read: runs the code inside of it). You're not technically doing anything wrong, but for the purpose of your tests the code as you have written it won't work.
In your index.js file you are executing runServer(). Whenever that file is imported/required, that function call is also run.
Having a start.js file or similar which will actually start your server is a common pattern. This will help you avoid the issue you're experiencing.
I would split the definition of your server and the invocation of your server into two different files, say server.js and index.js. I will leave fixing up the imports to you, but this is the idea:
server.js
// connect mongoDb, seed data if needed, run fastify server
export const runServer = async ({ dbUrl, dbName, environment, port }) => {
// test seed data when starting server if running a test suite
if (environment === "test") {
await seedData({
hostUrl: dbUrl,
databaseName: dbName
});
}
await MongoClient.connect(dbUrl, {
poolSize: 50,
useNewUrlParser: true,
useUnifiedTopology: true,
wtimeout: 2500
})
.then(async conn => {
const database = await conn.db(dbName);
// inject database connection into DAO objects
CharacterDAO.injectDB(database);
GearDAO.injectDB(database);
// start the fastify server
startServer(port);
})
.catch(err => {
console.log(err.stack);
// process.exit(1);
});
};
index.js
import { runServer } from './server';
const serverArguments = process.argv.slice(2).map(arg => {
return arg.split("=")[1];
});
const serverOptions = {
dbUrl: serverArguments[0],
dbName: serverArguments[1],
environment: serverArguments[2],
port: serverArguments[3]
};
runServer({
...serverOptions
});
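An alternative, if you prefer to keep a single file, is to start the server only when index.js is the entry point rather than an import. This is a sketch and assumes the code ultimately runs as CommonJS (as it does under Babel or ts-jest):
import { runServer } from "./server";

const serverArguments = process.argv.slice(2).map(arg => arg.split("=")[1]);

// only auto-start when executed directly (e.g. node dist/index.js),
// not when the file is imported by a test
if (require.main === module) {
  runServer({
    dbUrl: serverArguments[0],
    dbName: serverArguments[1],
    environment: serverArguments[2],
    port: serverArguments[3]
  });
}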
I'm implementing a web application that calls a Lambda function to get data from a database.
I chose Serverless Aurora and wrote the code below, but I get the exception "Error: Received packet in the wrong sequence." in the query method.
I googled this issue, but almost all of the results are quite old.
One article said it is a problem with Browserify, but I don't use it.
I'm using the Serverless Framework with TypeScript.
const mysql = require('serverless-mysql')({
config: {
host: process.env.DB_HOST,
database: process.env.DB_NAME,
user: process.env.DB_USER,
password: process.env.DB_PASSWORD
}
});
export async function query(sql: string, param?: Array<string>): Promise<any> {
const results = await mysql.query(sql).catch(e => {
console.log(e); // Error: Received packet in the wrong sequence
throw new Error(e);
});
await mysql.end();
return results;
}
The following also does not work:
export async function query(sql: string, param?: Array<string>): Promise<any> {
const connQueryPromisified = util
.promisify(connection.query)
.bind(connection);
const result = await connQueryPromisified(sql, param)
.then(row => {
console.log(row);
return row;
})
.catch(err => {
console.log(err); // getting Error: Received packet in the wrong sequence
throw err;
});
return result;
}
I also tried to use the RDS Data Service, but it is not available in my region.
export async function query(sql: string, param?: Array<string>): Promise<any> {
const params: aws.RDSDataService.Types.ExecuteSqlRequest = {
awsSecretStoreArn: '***',
dbClusterOrInstanceArn: '***',
database: '***',
schema: '***',
sqlStatements: sql
};
console.log(params);
try {
const rdsService = new aws.RDSDataService({
apiVersion: '2018-08-01',
region: 'ap-northeast-1'
});
return rdsService
.executeSql(params)
.promise()
.then(d => {
return d;
})
.catch(e => {
throw new Error(e);
});
} catch (err) {
console.log(err); // nothing to say
throw new Error(err);
}
}
And here are my configurations:
webpack.config.js
const path = require('path');
const slsw = require('serverless-webpack');
module.exports = {
mode: slsw.lib.webpack.isLocal ? 'development' : 'production',
entry: slsw.lib.entries,
devtool: 'source-map',
resolve: {
extensions: ['.js', '.jsx', '.json', '.ts', '.tsx'],
},
output: {
libraryTarget: 'commonjs',
path: path.join(__dirname, '.webpack'),
filename: '[name].js',
},
target: 'node',
module: {
rules: [
// all files with a `.ts` or `.tsx` extension will be handled by `ts-loader`
{ test: /\.tsx?$/, loader: 'ts-loader' },
],
},
};
tsconfig.json
{
"compilerOptions": {
"lib": [
"es2017"
],
"moduleResolution": "node",
"sourceMap": true,
"target": "es2017",
"outDir": "lib"
},
"exclude": [
"node_modules"
]
}
I just want to query records from Serverless Aurora.
Can anybody help me?
The reason this is happening is because Webpack (in production mode) is putting your code through a minimiser, and the mysql module that serverless-mysql is using is not compatible with minimising.
You can see the issue here: https://github.com/mysqljs/mysql/issues/1655. It's quite a common problem with Node modules that rely on function names, as uglifiers/minifiers try to obfuscate and save space by changing function names to single letters.
The simplest fix would be to switch off minimising in your webpack config by adding:
optimization: {
  minimize: false
}
There is some discussion in the linked issue on configuring various other minimising plugins (like terser) to not mangle names, which would allow you to get some of the benefit of minimising if you need it.
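If you want to keep minification, a sketch of that approach with terser-webpack-plugin (assuming you add it as a dev dependency) looks like this:
// webpack.config.js (sketch): minify but keep function/class names intact,
// since mysql relies on them at runtime
const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  // ...the rest of the config shown above...
  optimization: {
    minimize: true,
    minimizer: [
      new TerserPlugin({
        terserOptions: {
          keep_fnames: true,
          keep_classnames: true,
        },
      }),
    ],
  },
};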
You can use mysql2, which supports promises:
https://stackoverflow.com/a/62255084/1031304
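A minimal sketch of the same query helper on top of mysql2/promise (it reuses the DB_* environment variables from the question; pooling and retries are left out):
import { createConnection } from 'mysql2/promise';

export async function query(sql: string, params?: Array<string>): Promise<any> {
  // one connection per invocation keeps the example simple
  const connection = await createConnection({
    host: process.env.DB_HOST,
    database: process.env.DB_NAME,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
  });
  try {
    // execute() resolves to [rows, fields]
    const [rows] = await connection.execute(sql, params || []);
    return rows;
  } finally {
    // close the connection so the Lambda can finish cleanly
    await connection.end();
  }
}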
exports.sayHelloAction = {
name: 'sayHelloAction',
description: '',
outputExample: {},
version: 1,
inputs: {},
run: function (api, data, next) {
// Enqueue the task now, and process it ASAP
// api.tasks.enqueue(nameOfTask, args, queue, callback)
api.tasks.enqueue("sayHello", null, 'default', function (error, toRun) {
next(error)
});
}
};
and my task looks like this, but when I run the task from my action I can't see the log (">>>>>>>>>>") in my console :(
const sayHello = {
name: 'sayHello',
description: 'I say hello',
queue: "default",
plugins: [],
pluginOptions: [],
frequency: 1000,
run: function(api, params, next){
console.log(">>>>>>>>>>>>>>>>>>>>>>>>>>")
next(true);
}
};
exports.task = sayHello
Versions: Node.js 7.7, ActionHero.js 17
You are enqueuing a task, not running it. You need to enable some task workers on your server.
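For reference, the number of in-process workers (and the scheduler for periodic tasks) is configured in config/tasks.js. A sketch of the relevant settings; double-check the key names against the config file ActionHero generated for your version:
// config/tasks.js (sketch): enable at least one task processor so that
// enqueued tasks are actually worked, plus the scheduler for frequency-based tasks
exports['default'] = {
  tasks: function (api) {
    return {
      scheduler: true,        // enqueue periodic tasks (frequency > 0)
      queues: ['*'],          // which queues this server's workers drain
      minTaskProcessors: 1,   // 0 means no workers, so nothing ever runs
      maxTaskProcessors: 1,
      checkTimeout: 500
    };
  }
};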
I'm running Nightwatch after launching a child process that starts up my local servers. Nightwatch runs the tests, they complete successfully, and the browser windows all close, but the nightwatch process continues to run after printing the message "OK. 10 total assertions passed.".
I thought it may have something to do with how I'm watching events on the nightwatch process, but as far as I can tell I am watching all events that would indicate Nightwatch is exiting.
The method shutdown() in runner.js is never called. How do I get Nightwatch to terminate when the tests finish?
Update
If I remove the last test in sign-in.js then Nightwatch exits as expected.
runner.js
import spawn from 'cross-spawn'
// 1. start the dev server using production config
process.env.NODE_ENV = 'testing'
let servers
function shutdown (result) {
console.log('HERE', result)
try {
// Passing a negative PID to kill will terminate all child processes, not just the parent
if (servers) process.kill(-servers.pid)
} catch (e) {
console.error('Unable to shutdown servers, may need to be killed manually')
}
if (result) {
console.error(result)
process.exit(1)
} else {
process.exit(0)
}
}
function watch (child) {
child.on('close', shutdown)
child.on('disconnect', shutdown)
child.on('error', shutdown)
child.on('exit', shutdown)
child.on('uncaughtException', shutdown)
}
try {
servers = spawn('yarn', ['run', 'dev-all'], { cwd: '..', stdio: 'inherit', detached: true })
watch(servers)
// 2. run the nightwatch test suite against it
// to run in additional browsers:
// 1. add an entry in test/e2e/nightwatch.conf.json under "test_settings"
// 2. add it to the --env flag below
// or override the environment flag, for example: `npm run e2e -- --env chrome,firefox`
// For more information on Nightwatch's config file, see
// http://nightwatchjs.org/guide#settings-file
var opts = process.argv.slice(2)
if (opts.indexOf('--config') === -1) {
opts = opts.concat(['--config', 'e2e/nightwatch.conf.js'])
}
if (opts.indexOf('--env') === -1) {
opts = opts.concat(['--env', 'chrome'])
}
var runner = spawn('./node_modules/.bin/nightwatch', opts, { stdio: 'inherit' })
watch(runner)
watch(process)
} catch (error) {
shutdown(error)
}
nightwatch.conf.js
require('babel-register')
var config = require('../../frontend/config')
// http://nightwatchjs.org/guide#settings-file
module.exports = {
src_folders: ['e2e/specs'],
output_folder: 'e2e/reports',
custom_assertions_path: ['e2e/custom-assertions'],
selenium: {
start_process: true,
server_path: 'node_modules/selenium-server/lib/runner/selenium-server-standalone-3.0.1.jar',
host: '127.0.0.1',
port: 4444,
cli_args: {
'webdriver.chrome.driver': require('chromedriver').path
}
},
test_settings: {
default: {
selenium_port: 4444,
selenium_host: 'localhost',
silent: true,
globals: {
devServerURL: 'http://localhost:' + (process.env.PORT || config.dev.port)
}
},
chrome: {
desiredCapabilities: {
browserName: 'chrome',
javascriptEnabled: true,
acceptSslCerts: true
}
},
firefox: {
desiredCapabilities: {
browserName: 'firefox',
javascriptEnabled: true,
acceptSslCerts: true
}
}
}
}
sign-in.js (one of the tests)
import firebase from 'firebase-admin'
import uuid from 'uuid'
import * as firebaseSettings from '../../../backend/src/firebase-settings'
const PASSWORD = 'toomanysecrets'
function createUser (user) {
console.log('Creating user', user.uid)
let db = firebase.database()
return Promise.all([
firebase.auth().createUser({
uid: user.uid,
email: user.email,
emailVerified: true,
displayName: user.fullName,
password: PASSWORD
}),
db.ref('users').child(user.uid).set({
email: user.email,
fullName: user.fullName
}),
db.ref('roles').child(user.uid).set({
instructor: false
})
])
}
function destroyUser (user) {
if (!user) return
console.log('Removing user', user.uid)
let db = firebase.database()
try { db.ref('roles').child(user.uid).remove() } catch (e) {}
try { db.ref('users').child(user.uid).remove() } catch (e) {}
try { firebase.auth().deleteUser(user.uid) } catch (e) {}
}
module.exports = {
'Sign In links exist': browser => {
// automatically uses dev Server port from /config.index.js
// default: http://localhost:8080
// see nightwatch.conf.js
const devServer = browser.globals.devServerURL
browser
.url(devServer)
.waitForElementVisible('#container', 5000)
browser.expect.element('.main-nav').to.be.present
browser.expect.element('.main-nav a[href^=\'https://oauth.ais.msu.edu/oauth/authorize\']').to.be.present
browser.expect.element('.main-nav a[href^=\'/email-sign-in\']').to.be.present
browser.end()
},
'Successful Sign In with Email shows dashboard': browser => {
const devServer = browser.globals.devServerURL
firebase.initializeApp(firebaseSettings.appConfig)
let userId = uuid.v4()
let user = {
uid: userId,
email: `${userId}@test.com`,
fullName: 'Test User'
}
createUser(user)
browser.url(devServer)
.waitForElementVisible('.main-nav a[href^=\'/email-sign-in\']', 5000)
.click('.main-nav a[href^=\'/email-sign-in\']')
.waitForElementVisible('button', 5000)
.setValue('input[type=text]', user.email)
.setValue('input[type=password]', PASSWORD)
.click('button')
.waitForElementVisible('.main-nav a[href^=\'/sign-out\']', 5000)
.end(() => {
destroyUser(user)
})
}
}
After the tests complete successfully, I see the following:
grimlock:backend egillespie$ ps -ef | grep nightwatch
501 13087 13085 0 1:51AM ttys000 0:02.18 node ./node_modules/.bin/nightwatch --presets es2015,stage-0 --config e2e/nightwatch.conf.js --env chrome
I was not explicitly closing the Firebase connection. This caused the last test to hang indefinitely.
Here's how I am closing the connection after doing test cleanup:
browser.end(() => {
  destroyUser(user).then(() => {
    firebase.app().delete()
  })
})
The destroyUser function now looks like this:
function destroyUser (user) {
  if (!user) return Promise.resolve()
  let db = firebase.database()
  return Promise.all([
    db.ref('roles').child(user.uid).remove(),
    db.ref('users').child(user.uid).remove(),
    firebase.auth().deleteUser(user.uid)
  ])
}
In my case (Nightwatch with Vue/Vuetify), I run the following after each test:
afterEach: function (browser, done) {
  done();
},
AfterAll(async () => {
  await closeSession();
  await stopWebDriver();
})
Place this in the config file @Erik Gillespie
Nightwatch still has this issue with browser.end().
If you run Nightwatch with Node.js, you can stop the process by doing something like this:
browser.end(() => {
  process.exit();
});
It will close the browser and end the process.
I have tried the following method:
In "nightwatch.conf.js",
"test_settings" {
"default" {
"silent": true,
...
},
...
}
I set "silent" from true to false.
It lead to becoming verbose in the console. And the chromedriver.exe will exit peacefully after running the tests
I was using the vue template from: https://github.com/vuejs-templates/pwa
My platform:
Windows 7 (64bit)
node v8.1.3
"nightwatch": "^0.9.16",
"selenium-server": "^3.6.0",
"chromedriver": "^2.33.1"