Connecting Redis with Docker, Bull, Throng, and Node - node.js

I have a Heroku app that has a single process. I'm trying to change it so that it has several worker processes in a dedicated queue to handle incoming webhooks. To do so, I am using a Node.js backend with the Bull and Throng packages, which use Redis. All of this is deployed on Docker.
I've found various tutorials that cover parts of this combination, but not all of it, so I'm not sure how to continue. When I spin up Docker, the main server runs, but when the worker process tries to start, it just logs Killed, which isn't a very detailed error message.
Most of the information I found is here.
My worker process file is worker.ts:
import { bullOptions, RedisData } from '../database/redis';
import throng from 'throng';
import { Webhooks } from '@octokit/webhooks';
import config from '../config/main';
import { configureWebhooks } from '../lib/github/webhooks';
import Bull from 'bull';

// Spin up multiple processes to handle jobs to take advantage of more CPU cores
// See: https://devcenter.heroku.com/articles/node-concurrency for more info
const workers = 2;

// The maximum number of jobs each worker should process at once. This will need
// to be tuned for your application. If each job is mostly waiting on network
// responses it can be much higher. If each job is CPU-intensive, it might need
// to be much lower.
const maxJobsPerWorker = 50;

const webhooks = new Webhooks({
  secret: config.githubApp.webhookSecret,
});

configureWebhooks(webhooks);

async function startWorkers() {
  console.log('starting workers...');
  const queue = new Bull<RedisData>('work', bullOptions);
  try {
    await queue.process(maxJobsPerWorker, async (job) => {
      console.log('processing...');
      try {
        await webhooks.verifyAndReceive(job.data);
      } catch (e) {
        console.error(e);
      }
      return job.finished();
    });
  } catch (e) {
    console.error(`Error processing worker`, e);
  }
}

throng({ workers: workers, start: startWorkers });
In my main server, I have the file Redis.ts:
import Bull, { QueueOptions } from 'bull';
import { EmitterWebhookEvent } from '@octokit/webhooks';

export const bullOptions: QueueOptions = {
  redis: {
    port: 6379,
    host: 'cache',
    tls: {
      rejectUnauthorized: false,
    },
    connectTimeout: 30_000,
  },
};

export type RedisData = EmitterWebhookEvent & { signature: string };

let githubWebhooksQueue: Bull.Queue<RedisData> | undefined = undefined;

export async function addToGithubQueue(data: RedisData) {
  try {
    await githubWebhooksQueue?.add(data);
  } catch (e) {
    console.error(e);
  }
}

export function connectToRedis() {
  githubWebhooksQueue = new Bull<RedisData>('work', bullOptions);
}
(Note: I invoke connectToRedis() before the worker process begins)
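For reference, the wiring looks roughly like this (index.ts is an illustrative name, and the webhook route handler is sketched rather than reproduced):

// index.ts (illustrative) - the main server connects to Redis before
// any webhook traffic is enqueued for the worker process.
import { connectToRedis, addToGithubQueue } from './database/redis';

connectToRedis();

// Later, inside the incoming-webhook route handler (sketch):
// await addToGithubQueue({ id, name, payload, signature });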
My Dockerfile is
# We can change the version of node by replacing `lts` with anything found here: https://hub.docker.com/_/node
FROM node:lts
ENV PORT=80
WORKDIR /usr/src/app
# Install dependencies
COPY package*.json ./
COPY yarn.lock ./
RUN yarn
RUN yarn global add npm-run-all
# Bundle app source
COPY . .
# Expose the web port
EXPOSE 80
EXPOSE 9229
EXPOSE 6379
CMD npm-run-all --parallel start start-notification-server start-github-server
and my docker-compose.yml is
version: '3.7'
services:
  redis:
    image: redis
    container_name: cache
    expose:
      - 6379
  api:
    links:
      - redis
    image: instantish/api:latest
    environment:
      REDIS_URL: redis://cache
    command: npm-run-all --parallel dev-debug start-notification-server-dev start-github-server-dev
    depends_on:
      - mongo
    env_file:
      - api/.env
      - api/flags.env
    ports:
      - 2000:80
      - 9229:9229
      - 6379:6379
    volumes:
      # Activate if you want your local changes to update the container
      - ./api:/usr/src/app:cached
Finally, the relevant NPM scripts for my project are
"dev-debug": "nodemon --watch \"**/**\" --ext \"js,ts,json\" --exec \"node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js\"",
"start-github-server-dev": "MONGOOSE_DEBUG=false nodemon --watch \"**/**\" --ext \"js,ts,json\" --exec \"ts-node ./scripts/worker.ts\"",
The docker container logs are:
> instantish@1.0.0 start-github-server-dev /usr/src/app
> MONGOOSE_DEBUG=false nodemon --watch "**/**" --ext "js,ts,json" --exec "ts-node ./scripts/worker.ts"
> instantish@1.0.0 dev-debug /usr/src/app
> nodemon --watch "**/**" --ext "js,ts,json" --exec "node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js"
[nodemon] 1.19.1
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: **/**
[nodemon] starting `ts-node ./scripts/worker.ts`
[nodemon] 1.19.1
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: **/**
[nodemon] starting `node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js`
worker.ts
Killed
[nodemon] app crashed - waiting for file changes before starting...

Related

Update source code of server app inside docker container without touching the existing saved data

I have the following Docker Compose project (file contents provided below):
.
|-- .dockerignore
|-- Dockerfile
|-- docker-compose.yml
|-- messages
|   |-- 20221120-010625.txt
|   |-- 20221120-010630.txt
|   `-- 20221120-010641.txt
|-- package.json
`-- server.js
When you run the Docker Compose project with the following command:
$ docker-compose up -d
you can go to the url: http://localhost/?message=<message> and record multiple messages on the server.
Here you have an example:
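(A rough text equivalent of the original screenshot, assuming the server is mapped to host port 80:)

$ curl "http://localhost/?message=this+is+a+test"
<pre><div>/messages/20221120-010641.txt -> this is a test</div>
Created file: "/var/www/html/messages/20221120-010641.txt" with content: "this is a test".</pre>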
So far so good, but...
My use case is: sometimes I need to update the source code of the website. For example, imagine I need to prefix the page text shown above, Created file ..., with ### like:
### Created file: "/var/www/html/messages/20221120-010641.txt" with content: "this is a test".
BUT I cannot mess with the existing messages because that's valuable data for the server app.
I tried with the following commands:
$ docker-compose down --volumes
$ docker-compose up -d --force-recreate --build
My problem is: after updating the source code accordingly, even though the page text got properly updated, all the messages got lost, which is not good.
Could you please indicate how I can achieve this?
I tried by defining a named volume inside the docker-compose.yml like:
services:
  serverapp:
    ...
    volumes:
      - messages:/var/www/html/messages
volumes:
  messages:
... expecting that if I destroy the server app, the messages would persist. But that didn't work, because the named volume is owned by the root user while the messages are created by the node user, which doesn't have permission to create files in that directory, causing an error.
Here is the content of the involved files:
.dockerignore
/node_modules/
/messages/
/npm-debug.log
Dockerfile
FROM node:16-alpine
RUN mkdir -p /var/www/html && chown -R node:node /var/www/html
WORKDIR /var/www/html
COPY --chown=node:node . .
USER node
RUN npm i
EXPOSE 8080
CMD [ "npm", "run", "start" ]
# ENTRYPOINT ["tail", "-f", "/dev/null"]
docker-compose.yml
version: '3'
services:
  serverapp:
    image: alpine:3.14
    build:
      dockerfile: Dockerfile
    container_name: serverapp
    restart: unless-stopped
    ports:
      - "80:80"
package.json
{
  "name": "docker-compose-tester",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "cross-env NODE_ENV=debug nodemon --exec babel-node server.js"
  },
  "dependencies": {
    "express": "^4.16.1",
    "moment": "^2.29.4"
  },
  "devDependencies": {
    "@babel/node": "^7.20.2",
    "cross-env": "^7.0.3",
    "nodemon": "^2.0.20"
  }
}
server.js
const express = require('express');
const moment = require('moment');
const path = require('path');
const fs = require('fs');

const PORT = 80;
const app = express();

app.use('/messages/', express.static(path.join(__dirname, 'messages')));

app.get('/', (req, res) => {
  const message = req.query.message;
  if (!message) {
    return res.send('<pre>Please use a query like: "/?message=Hello+World"</pre>');
  }
  const dirPathMessages = path.join(__dirname, 'messages');
  const date = moment(new Date()).format('YYYYMMDD-HHmmss');
  const fileNameMessage = `${date}.txt`;
  const filePathMessage = path.join(dirPathMessages, fileNameMessage);
  fs.mkdirSync(dirPathMessages, { recursive: true });
  fs.writeFileSync(filePathMessage, message);
  const filesList = fs.readdirSync(dirPathMessages);
  const filesListStr = filesList.reduce((output, fileNameMessage) => {
    const filePathMessage = path.join(dirPathMessages, fileNameMessage);
    const message = fs.readFileSync(filePathMessage);
    return output + `<div>/messages/${fileNameMessage} -> ${message}</div>` + "\n";
  }, '');
  res.send(`<pre>${filesListStr}\nCreated file: "${filePathMessage}" with content: "${message}".</pre>`);
});

app.listen(PORT, () => {
  console.log(`TCP Server is running on port: ${PORT}`);
});
Your approach with the named volume is correct. To fix the permission problem, change the owner of the messages folder in the Dockerfile, before switching to the node user.
FROM node:16-alpine
RUN mkdir -p /var/www/html && chown -R node:node /var/www/html
WORKDIR /var/www/html
COPY --chown=node:node . .
RUN mkdir -p messages && chown node:node messages
USER node
...
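On the compose side, this pairs with the named volume the question already sketches, e.g. (a minimal sketch, keeping the serverapp service name):

services:
  serverapp:
    build:
      dockerfile: Dockerfile
    ports:
      - "80:80"
    volumes:
      - messages:/var/www/html/messages
volumes:
  messages:

Because the directory is created and chowned in the image before the volume is first mounted, Docker copies that ownership into the new named volume, so the node user can write to it. The messages then survive docker-compose up -d --force-recreate --build, as long as you don't run docker-compose down --volumes.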

Why can't I access my docker node app via browser, when it works within the container?

my docker-compose.yml
version: "3"
services:
  client:
    ports:
      - "3000:3000"
    restart: always
    container_name: thread_client
    build:
      context: .
      dockerfile: ./client/client.Dockerfile
    volumes:
      - ./client/src:/app/client/src
      - /app/client/node_modules
    depends_on:
      - api
  api:
    build:
      context: .
      dockerfile: ./server/server.Dockerfile
    container_name: thread_api
    restart: always
    ports:
      - "3001:3001"
      - "3002:3002"
    volumes:
      - ./server/src:/app/server/src
      - /app/server/node_modules
  pg_db:
    image: postgres:14-alpine
    container_name: thread_db
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: thread
      POSTGRES_USER: postgres
    volumes:
      - pg_volume:/var/lib/postgresql/data
  adminer:
    image: adminer
    restart: always
    depends_on:
      - pg_db
    ports:
      - "9090:8080"
volumes:
  pg_volume:
client.Dockerfile
FROM node:16-alpine
WORKDIR /app
COPY .editorconfig .
COPY .eslintrc.yml .
COPY .lintstagedrc.yml .
COPY .ls-lint.yml .
COPY .npmrc .
COPY .nvmrc .
COPY .prettierrc.yml .
COPY .stylelintrc.yml .
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY ./shared ./shared
RUN npm run install:shared
WORKDIR /app/client
COPY ./client/package.json .
COPY ./client/package-lock.json .
COPY ./client/.eslintrc.yml .
COPY ./client/.npmrc .
COPY ./client/.stylelintrc.yml .
COPY ./client/jsconfig.json .
COPY ./client/.env.example .env
RUN npm install
COPY ./client .
RUN npm run build
EXPOSE 3000
CMD ["npm", "run", "start"]
server.Dockerfile
FROM node:16-alpine
WORKDIR /app
COPY .editorconfig .
COPY .eslintrc.yml .
COPY .lintstagedrc.yml .
COPY .ls-lint.yml .
COPY .npmrc .
COPY .nvmrc .
COPY .prettierrc.yml .
COPY .stylelintrc.yml .
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY ./shared ./shared
RUN npm run install:shared
WORKDIR /app/client
COPY ./client/package.json .
COPY ./client/package-lock.json .
COPY ./client/.eslintrc.yml .
COPY ./client/.npmrc .
COPY ./client/.stylelintrc.yml .
COPY ./client/jsconfig.json .
COPY ./client/.env.example .env
RUN npm install
COPY ./client .
RUN npm run build
WORKDIR /app/server
COPY ./server/package.json .
COPY ./server/package-lock.json .
COPY ./server/.env.example .env
RUN npm install
COPY ./server .
EXPOSE 8654
CMD ["npm", "start"]
The client app is accessed in the browser easily, but the API service is not, and I don't understand why.
server.js
import fastify from 'fastify';
import cors from '@fastify/cors';
import fastifyStatic from '@fastify/static';
import http from 'http';
import Knex from 'knex';
import { Model } from 'objection';
import qs from 'qs';
import { Server as SocketServer } from 'socket.io';

import knexConfig from '../knexfile.js';
import { initApi } from './api/api.js';
import { ENV, ExitCode } from './common/enums/enums.js';
import { socketInjector as socketInjectorPlugin } from './plugins/plugins.js';
import { auth, comment, image, post, user } from './services/services.js';
import { handlers as socketHandlers } from './socket/handlers.js';

const app = fastify({
  querystringParser: str => qs.parse(str, { comma: true })
});
const socketServer = http.Server(app);
const io = new SocketServer(socketServer, {
  cors: {
    origin: '*',
    credentials: true
  }
});

const knex = Knex(knexConfig);
Model.knex(knex);

io.on('connection', socketHandlers);

app.register(cors, {
  origin: "*"
});
app.register(socketInjectorPlugin, { io });
app.register(initApi, {
  services: {
    auth,
    comment,
    image,
    post,
    user
  },
  prefix: ENV.APP.API_PATH
});

const staticPath = new URL('../../client/build', import.meta.url);
app.register(fastifyStatic, {
  root: staticPath.pathname,
  prefix: '/'
});

app.setNotFoundHandler((req, res) => {
  res.sendFile('index.html');
});

const startServer = async () => {
  try {
    await app.listen(ENV.APP.PORT);
    console.log(`Server is listening port: ${ENV.APP.PORT}`);
  } catch (err) {
    app.log.error(err);
    process.exit(ExitCode.ERROR);
  }
};
startServer();

socketServer.listen(ENV.APP.SOCKET_PORT);
So, I have tried curl localhost:3001 in the API container and it works, but I have no idea why the client works fine via the browser while the API doesn't.
How do I debug this to find the right solution?
UPD:
docker inspect (API service container)
"Ports": {
"3001/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "3001"
},
{
"HostIp": "::",
"HostPort": "3001"
}
],
"3002/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "3002"
},
{
"HostIp": "::",
"HostPort": "3002"
}
]
},
Looking at your comment stating:
i am trying to access to that app via browser by localhost:3001
And the ports part of your docker-compose.yaml.
ports:
  - "8654:3001"
  - "3002:3002"
You are trying to access the application on the wrong port.
With - "8654:3001" you are telling docker-compose to map port 3001 of the container to port 8654 on your host. (documentation)
Try to open http://localhost:8654 in your browser, or change 8654 in the docker-compose.yaml to 3001.
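For example, to keep using port 3001 in the browser, the mapping would become (a sketch against the compose file quoted in this answer):

api:
  ports:
    - "3001:3001" # host:container -> browse http://localhost:3001
    - "3002:3002"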

amqplib Error: "Frame size exceeds frame max" inside docker container

I am trying to build a simple application with a backend on Node.js + TS and RabbitMQ, based on Docker. There are 2 containers: a rabbitmq container and a backend container with 2 servers running - producer and consumer. Now I am trying to get access to the rabbitmq server, but I get this error: "Frame size exceeds frame max".
The full code is:
My producer server code is:
import express from 'express';
import amqplib, { Connection, Channel, Options } from 'amqplib';

const producer = express();

const sendRabbitMq = () => {
  amqplib.connect('amqp://localhost', function(error0: any, connection: any) {
    if (error0) {
      console.log('Some error...')
      throw error0
    }
  })
}

producer.post('/send', (_req, res) => {
  sendRabbitMq();
  console.log('Done...');
  res.send("Ok")
})

export { producer };
It is connected to main file index.ts and running inside this file.
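For reference, amqplib's default export is promise-based and does not take a callback (the callback signature used above belongs to amqplib/callback_api). A minimal promise-based sketch, which also assumes the compose service name rabbitmq as the hostname and the user/user credentials from the compose file below:

import amqplib from 'amqplib';

// Connect via the compose service name, not localhost: inside the
// backend container, localhost is the backend container itself.
const connection = await amqplib.connect('amqp://user:user@rabbitmq:5672');
const channel = await connection.createChannel();
await channel.assertQueue('messages');
channel.sendToQueue('messages', Buffer.from('hello'));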
Also, maybe I have some bad configuration inside Docker. My Dockerfile is
FROM node:16
WORKDIR /app/backend/src
COPY *.json ./
RUN npm install
COPY . .
And my docker-compose includes this code:
version: '3'
services:
  backend:
    build: ./backend
    container_name: 'backend'
    command: npm run start:dev
    restart: always
    volumes:
      - ./backend:/app/backend/src
      - ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.config
    ports:
      - 3000:3000
    environment:
      - PRODUCER_PORT=3000
      - CONSUMER_PORT=5672
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3.9.13
    container_name: 'rabbitmq'
    ports:
      - 5672:5672
      - 15672:15672
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=user
I would really appreciate your help.

docker-compose: nodejs container not communicating with postgres container

I did find a few people with a slightly different setup but with the same issue, so I hope this doesn't feel like a duplicate question.
My setup is pretty simple and straight-forward. I have a container for my node app and a container for my Postgres database. When I run docker-compose up and I see the log both containers are up and running. The problem is my node app is not connecting to the database.
I can connect to the database using Postbird and it works as it should.
If I create a docker container only for the database and run the node app directly on my machine, everything works fine. So it's not an issue with the DB or the app but with the setup.
Here's some useful information:
Running a container just for the DB (connects and works perfectly):
> vigna-backend@1.0.0 dev /Users/lucasbittar/Dropbox/Code/vigna/backend
> nodemon src/server.js
[nodemon] 2.0.2
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node -r sucrase/register src/server.js`
Initializing database...
Connecting to DB -> vignadb | PORT: 5432
Executing (default): SELECT 1+1 AS result
Connection has been established successfully -> vignadb
Running a container for each using docker-compose:
Creating network "backend_default" with the default driver
Creating backend_db_1 ... done
Creating backend_app_1 ... done
Attaching to backend_db_1, backend_app_1
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2020-07-24 13:23:32.875 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-07-24 13:23:32.876 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-07-24 13:23:32.876 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-07-24 13:23:32.881 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-07-24 13:23:32.955 UTC [27] LOG: database system was shut down at 2020-07-23 13:21:09 UTC
db_1 | 2020-07-24 13:23:32.999 UTC [1] LOG: database system is ready to accept connections
app_1 |
app_1 | > vigna-backend@1.0.0 dev /usr/app
app_1 | > npx sequelize db:migrate && npx sequelize db:seed:all && nodemon src/server.js
app_1 |
app_1 |
app_1 | Sequelize CLI [Node: 14.5.0, CLI: 5.5.1, ORM: 5.21.3]
app_1 |
app_1 | Loaded configuration file "src/config/database.js".
app_1 |
app_1 | Sequelize CLI [Node: 14.5.0, CLI: 5.5.1, ORM: 5.21.3]
app_1 |
app_1 | Loaded configuration file "src/config/database.js".
app_1 | [nodemon] 2.0.2
app_1 | [nodemon] to restart at any time, enter `rs`
app_1 | [nodemon] watching dir(s): *.*
app_1 | [nodemon] watching extensions: js,mjs,json
app_1 | [nodemon] starting `node -r sucrase/register src/server.js`
app_1 | Initializing database...
app_1 | Connecting to DB -> vignadb | PORT: 5432
My database class:
class Database {
  constructor() {
    console.log('Initializing database...');
    this.init();
  }

  async init() {
    let retries = 5;
    while (retries) {
      console.log(`Connecting to DB -> ${databaseConfig.database} | PORT: ${databaseConfig.port}`);
      const sequelize = new Sequelize(databaseConfig);
      try {
        await sequelize.authenticate();
        console.log(`Connection has been established successfully -> ${databaseConfig.database}`);
        models
          .map(model => model.init(sequelize))
          .map(model => model.associate && model.associate(sequelize.models));
        break;
      } catch (err) {
        console.log(`Error: ${err.message}`);
        retries -= 1;
        console.log(`Retries left: ${retries}`);
        // Wait 5 seconds before trying again
        await new Promise(res => setTimeout(res, 5000));
      }
    }
  }
}
Dockerfile:
FROM node:alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3333
CMD ["npm", "start"]
docker-compose.yml:
version: "3"
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: vignadb
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  app:
    build: .
    depends_on:
      - db
    ports:
      - "3333:3333"
    volumes:
      - .:/usr/app
    command: npm run dev
package.json (scripts only):
"scripts": {
"dev-old": "nodemon src/server.js",
"dev": "npx sequelize db:migrate && npx sequelize db:seed:all && nodemon src/server.js",
"build": "sucrase ./src -d ./dist --transforms imports",
"start": "node dist/server.js"
},
.env:
# Database
DB_HOST=db
DB_USER=postgres
DB_PASS=postgres
DB_NAME=vignadb
DB_PORT=5432
database config:
require('dotenv/config');

module.exports = {
  dialect: 'postgres',
  host: process.env.DB_HOST,
  username: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME,
  port: process.env.DB_PORT,
  define: {
    timestamp: true,
    underscored: true,
    underscoredAll: true,
  },
};
I know I'm messing something up, I just don't know where.
Let me know if I can provide more information.
Thanks!
You should put your 2 containers in the same network https://docs.docker.com/compose/networking/
And use your db service name inside your Node.js connection string.
Something like: postgres://db:5432/vignadb
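With the compose file above, both services are already on the default network, so the service name db resolves from the app container. A minimal sketch (credentials taken from the compose environment):

const Sequelize = require('sequelize');

// 'db' is the compose service name; it resolves inside the compose
// network, whereas 'localhost' would point at the app container itself.
const sequelize = new Sequelize('postgres://postgres:postgres@db:5432/vignadb');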

How to watch and reload an ExpressJS app with pm2

I'm developing an ExpressJS app.
I use pm2 to load it:
myapp$ pm2 start bin/www
This works fine, except that adding the --watch flag doesn't seem to work; every time I change the JS source I need to explicitly restart it for my changes to take effect:
myapp$ pm2 restart www
What am I doing wrong? I've tried the --watch flag with a non-ExpressJS app and it worked as expected.
See this solution on Stack Overflow.
The problem is related to the path pm2 watches, and whether it is resolved relative to the executed file or to the actual root path of the project.
2021 Feb.
Things have changed a bit now. Below is a full working example from my project:
1. Create a config file: ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'api',
      script: './bin/www', // --------------- our node start script here like index.js
      // ------------------------------------ watch options - begin
      watch: ['../'],
      watch_delay: 1000,
      ignore_watch: ['node_modules'],
      watch_options: {
        followSymlinks: false,
      },
      // ------------------------------------ watch options - end
      env: {
        NODE_ENV: 'development',
        PORT: 3001,
        DEBUG: 'api:*',
        MONGODB_URI:
          'mongodb://localhost:27017/collection1?readPreference=primary&ssl=false',
      },
      env_production: {
        NODE_ENV: 'production',
      },
    },
  ],
  deploy: {
    production: {
      // user: "SSH_USERNAME",
      // host: "SSH_HOSTMACHINE",
    },
  },
};
2. Run the server (dev/prod):
pm2 start ecosystem.config.js
pm2 start ecosystem.config.js --env production
3. More information:
https://pm2.keymetrics.io/docs/usage/watch-and-restart/
You need to specify the app location in the --watch option:
myapp$ pm2 start bin/www --watch /your/location/to/app
I never managed to make the default watch settings work on Ubuntu; however, using polling via the advanced watch options worked:
"watch": true,
"ignore_watch" : ["node_modules"],
"watch_options": {
"usePolling": true,
"interval": 1000
}
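For context, these keys go inside an app definition; a minimal JSON ecosystem file (the process.json filename is an assumption) would look like:

{
  "apps": [{
    "name": "www",
    "script": "bin/www",
    "watch": true,
    "ignore_watch": ["node_modules"],
    "watch_options": {
      "usePolling": true,
      "interval": 1000
    }
  }]
}

Start it with pm2 start process.json.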
More info:
https://github.com/buunguyen/PM2/blob/master/ADVANCED_README.md#watch--restart
https://github.com/paulmillr/chokidar#api
