Update the source code of a server app inside a Docker container without touching the existing saved data - node.js

I have the following Docker Compose project (file contents provided below):
.
|-- .dockerignore
|-- Dockerfile
|-- docker-compose.yml
|-- messages
|   |-- 20221120-010625.txt
|   |-- 20221120-010630.txt
|   `-- 20221120-010641.txt
|-- package.json
`-- server.js
When you run the Docker Compose project with the following command:
$ docker-compose up -d
you can go to the URL http://localhost/?message=<message> and record multiple messages on the server.
Here is an example:
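For illustration (the original screenshot is not reproduced here), a request and the raw response the server sends back, assuming this is the only message recorded so far:
$ curl "http://localhost/?message=this+is+a+test"
<pre><div>/messages/20221120-010641.txt -> this is a test</div>

Created file: "/var/www/html/messages/20221120-010641.txt" with content: "this is a test".</pre>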
So far so good, but...
My use case is: sometimes I need to update the source code of the website. For example, imagine I need to prefix the page text shown in the screenshot above, Created file ..., with ###, like:
### Created file: "/var/www/html/messages/20221120-010641.txt" with content: "this is a test".
BUT I cannot mess with the existing messages because that's valuable data for the server app.
I tried with the following commands:
$ docker-compose down --volumes
$ docker-compose up -d --force-recreate --build
My problem is: after updating the source code accordingly, even though the page text got properly updated, all the messages got lost, which is not good.
Could you please tell me how I can achieve this?
I tried by defining a named volume inside the docker-compose.yml like:
services:
  serverapp:
    ...
    volumes:
      - messages:/var/www/html/messages
volumes:
  messages:
... expecting that the messages would persist even if I destroy the server app. That didn't work, though, because the named volume was owned by the user root, while the messages are created by the user node, which doesn't have permission to create files in that directory; this causes an error.
Here is the content of the involved files:
.dockerignore
/node_modules/
/messages/
/npm-debug.log
Dockerfile
FROM node:16-alpine
RUN mkdir -p /var/www/html && chown -R node:node /var/www/html
WORKDIR /var/www/html
COPY --chown=node:node . .
USER node
RUN npm i
EXPOSE 8080
CMD [ "npm", "run", "start" ]
# ENTRYPOINT ["tail", "-f", "/dev/null"]
docker-compose.yml
version: '3'
services:
  serverapp:
    image: alpine:3.14
    build:
      dockerfile: Dockerfile
    container_name: serverapp
    restart: unless-stopped
    ports:
      - "80:80"
package.json
{
  "name": "docker-compose-tester",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "cross-env NODE_ENV=debug nodemon --exec babel-node server.js"
  },
  "dependencies": {
    "express": "^4.16.1",
    "moment": "^2.29.4"
  },
  "devDependencies": {
    "@babel/node": "^7.20.2",
    "cross-env": "^7.0.3",
    "nodemon": "^2.0.20"
  }
}
server.js
const express = require('express');
const moment = require('moment');
const path = require('path');
const fs = require('fs');

const PORT = 80;
const app = express();

app.use('/messages/', express.static(path.join(__dirname, 'messages')));

app.get('/', (req, res) => {
  const message = req.query.message;
  if (!message) {
    return res.send('<pre>Please use a query like: "/?message=Hello+World"</pre>');
  }
  const dirPathMessages = path.join(__dirname, 'messages');
  const date = moment(new Date()).format('YYYYMMDD-HHmmss');
  const fileNameMessage = `${date}.txt`;
  const filePathMessage = path.join(dirPathMessages, fileNameMessage);
  fs.mkdirSync(dirPathMessages, { recursive: true });
  fs.writeFileSync(filePathMessage, message);
  const filesList = fs.readdirSync(dirPathMessages);
  const filesListStr = filesList.reduce((output, fileNameMessage) => {
    const filePathMessage = path.join(dirPathMessages, fileNameMessage);
    const message = fs.readFileSync(filePathMessage);
    return output + `<div>/messages/${fileNameMessage} -> ${message}</div>` + "\n";
  }, '');
  res.send(`<pre>${filesListStr}\nCreated file: "${filePathMessage}" with content: "${message}".</pre>`);
});

app.listen(PORT, () => {
  console.log(`TCP Server is running on port: ${PORT}`);
});

Your approach with the named volume is correct. To fix the permission problem, change the owner of the messages folder in the Dockerfile before switching to the node user:
FROM node:16-alpine
RUN mkdir -p /var/www/html && chown -R node:node /var/www/html
WORKDIR /var/www/html
COPY --chown=node:node . .
RUN mkdir -p messages && chown node:node messages
USER node
...
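For completeness, a sketch of the matching docker-compose.yml with the named volume from your attempt added (everything else mirrors the compose file from the question):
version: '3'
services:
  serverapp:
    image: alpine:3.14
    build:
      dockerfile: Dockerfile
    container_name: serverapp
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - messages:/var/www/html/messages
volumes:
  messages:
With this in place, rebuilding with docker-compose up -d --force-recreate --build keeps the recorded messages; just avoid docker-compose down --volumes, because --volumes deletes named volumes and with them the saved data.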

Related

Differences in code between local project and Dockerized project break the app

I'm trying to dockerize my current pet project, in which I use Node.js (Express) as the backend, React as the frontend and PostgreSQL as the database. On both backend and frontend I use TypeScript instead of JavaScript. I'm also using Prisma as the ORM for my database. I decided to have a standard three-container architecture: one for the backend, one for the database and one for the frontend app. My Dockerfiles are as follows:
Frontend's Dockerfile
FROM node:alpine
WORKDIR /usr/src/frontend
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "start"]
Backend's Dockerfile
FROM node:lts
WORKDIR /usr/src/backend
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8000
RUN npx prisma generate
CMD ["npm", "run", "dev"]
there's also a .dockerignore file in the backend folder:
node_modules/
and my docker-compose.yml looks like this:
version: '3.9'
services:
  db:
    image: 'postgres'
    ports:
      - '5432:5432'
    environment:
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: 'postgres'
      POSTGRES_DB: 'hucuplant'
  server:
    build:
      context: ./backend_express
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: 'postgresql://postgres:postgres@localhost:5432/hucuplant?schema=public'
  client:
    build:
      context: ./frontend
    ports:
      - "3000:3000"
After doing docker-compose up --build, everything starts well, but when I try to register a new user on my site I get the following error:
Error:
hucuplant-server-1 | Invalid `prisma.user.findUnique()` invocation in
hucuplant-server-1 | /usr/src/backend/src/routes/Auth.ts:44:57
hucuplant-server-1 |
hucuplant-server-1 | 41 auth.post("/register", async (req: Request, res: Response) => {
hucuplant-server-1 | 42 const { email, username, password } = req.body;
hucuplant-server-1 | 43
hucuplant-server-1 | → 44 const usernameResult: User | null = await prisma.user.findUnique({
hucuplant-server-1 | where: {
hucuplant-server-1 | ? username?: String,
hucuplant-server-1 | ? id?: Int,
hucuplant-server-1 | ? email?: String
hucuplant-server-1 | }
hucuplant-server-1 | })
However, the existing code in my Auth.ts file on line 44 looks like this:
auth.post("/register", async (req: Request, res: Response) => {
  const { email, username, password } = req.body;

  const usernameResult: User | null = await prisma.user.findUnique({
    where: {
      username: username,
    },
  });
When I run my project locally, everything works just fine, but when I run the containerized app, these things break. What is causing this, and how do I fix it?

Why can't I access my Docker Node app via the browser, when it works within the container?

my docker-compose.yml
version: "3"
services:
client:
ports:
- "3000:3000"
restart: always
container_name: thread_client
build:
context: .
dockerfile: ./client/client.Dockerfile
volumes:
- ./client/src:/app/client/src
- /app/client/node_modules
depends_on:
- api
api:
build:
context: .
dockerfile: ./server/server.Dockerfile
container_name: thread_api
restart: always
ports:
- "3001:3001"
- "3002:3002"
volumes:
- ./server/src:/app/server/src
- /app/server/node_modules
pg_db:
image: postgres:14-alpine
container_name: thread_db
restart: always
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_DB: thread
POSTGRES_USER: postgres
volumes:
- pg_volume:/var/lib/postgresql/data
adminer:
image: adminer
restart: always
depends_on:
- pg_db
ports:
- "9090:8080"
volumes:
pg_volume:
client.Dockerfile
FROM node:16-alpine
WORKDIR /app
COPY .editorconfig .
COPY .eslintrc.yml .
COPY .lintstagedrc.yml .
COPY .ls-lint.yml .
COPY .npmrc .
COPY .nvmrc .
COPY .prettierrc.yml .
COPY .stylelintrc.yml .
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY ./shared ./shared
RUN npm run install:shared
WORKDIR /app/client
COPY ./client/package.json .
COPY ./client/package-lock.json .
COPY ./client/.eslintrc.yml .
COPY ./client/.npmrc .
COPY ./client/.stylelintrc.yml .
COPY ./client/jsconfig.json .
COPY ./client/.env.example .env
RUN npm install
COPY ./client .
RUN npm run build
EXPOSE 3000
CMD ["npm", "run", "start"]
server.Dockerfile
FROM node:16-alpine
WORKDIR /app
COPY .editorconfig .
COPY .eslintrc.yml .
COPY .lintstagedrc.yml .
COPY .ls-lint.yml .
COPY .npmrc .
COPY .nvmrc .
COPY .prettierrc.yml .
COPY .stylelintrc.yml .
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY ./shared ./shared
RUN npm run install:shared
WORKDIR /app/client
COPY ./client/package.json .
COPY ./client/package-lock.json .
COPY ./client/.eslintrc.yml .
COPY ./client/.npmrc .
COPY ./client/.stylelintrc.yml .
COPY ./client/jsconfig.json .
COPY ./client/.env.example .env
RUN npm install
COPY ./client .
RUN npm run build
WORKDIR /app/server
COPY ./server/package.json .
COPY ./server/package-lock.json .
COPY ./server/.env.example .env
RUN npm install
COPY ./server .
EXPOSE 8654
CMD ["npm", "start"]
The client app can be accessed in the browser easily, but the API service cannot, and I don't understand why.
server.js
import fastify from 'fastify';
import cors from '@fastify/cors';
import fastifyStatic from '@fastify/static';
import http from 'http';
import Knex from 'knex';
import { Model } from 'objection';
import qs from 'qs';
import { Server as SocketServer } from 'socket.io';
import knexConfig from '../knexfile.js';
import { initApi } from './api/api.js';
import { ENV, ExitCode } from './common/enums/enums.js';
import { socketInjector as socketInjectorPlugin } from './plugins/plugins.js';
import { auth, comment, image, post, user } from './services/services.js';
import { handlers as socketHandlers } from './socket/handlers.js';

const app = fastify({
  querystringParser: str => qs.parse(str, { comma: true })
});
const socketServer = http.Server(app);
const io = new SocketServer(socketServer, {
  cors: {
    origin: '*',
    credentials: true
  }
});
const knex = Knex(knexConfig);
Model.knex(knex);

io.on('connection', socketHandlers);

app.register(cors, {
  origin: "*"
});
app.register(socketInjectorPlugin, { io });
app.register(initApi, {
  services: {
    auth,
    comment,
    image,
    post,
    user
  },
  prefix: ENV.APP.API_PATH
});

const staticPath = new URL('../../client/build', import.meta.url);
app.register(fastifyStatic, {
  root: staticPath.pathname,
  prefix: '/'
});

app.setNotFoundHandler((req, res) => {
  res.sendFile('index.html');
});

const startServer = async () => {
  try {
    await app.listen(ENV.APP.PORT);
    console.log(`Server is listening port: ${ENV.APP.PORT}`);
  } catch (err) {
    app.log.error(err);
    process.exit(ExitCode.ERROR);
  }
};
startServer();

socketServer.listen(ENV.APP.SOCKET_PORT);
I have tried curl localhost:3001 inside the API container and it works, but I have no idea why the client works fine via the browser while the API doesn't.
How should I debug this to find the right solution?
UPD:
docker inspect (API service container)
"Ports": {
"3001/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "3001"
},
{
"HostIp": "::",
"HostPort": "3001"
}
],
"3002/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "3002"
},
{
"HostIp": "::",
"HostPort": "3002"
}
]
},
Looking at your comment stating:
i am trying to access to that app via browser by localhost:3001
And the ports part of your docker-compose.yaml.
ports:
  - "8654:3001"
  - "3002:3002"
You are trying to access the application on the wrong port.
With - "8654:3001" you are telling docker-compose to map port 3001 of the container to port 8654 on your host. (documentation)
Try opening http://localhost:8654 in your browser, or change 8654 in the docker-compose.yaml to 3001, as sketched below.
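As a sketch of the second option (assuming the api service from the compose file above), the mapping would become:
api:
  ports:
    - "3001:3001"   # host port 3001 -> container port 3001
    - "3002:3002"
after which the API is reachable at http://localhost:3001.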

Vite: Could not resolve entry module (index.html)

I am new to OpenShift 3.11 deployment. I created a multistage Dockerfile for a React application; the build works correctly on my local machine, but when I run it on the OpenShift cluster I get the error below:
> kncare-ui@0.1.0 build
> tsc && vite build
vite v2.9.9 building for production...
✓ 0 modules transformed.
Could not resolve entry module (index.html).
error during build:
Error: Could not resolve entry module (index.html).
at error (/app/node_modules/rollup/dist/shared/rollup.js:198:30)
at ModuleLoader.loadEntryModule (/app/node_modules/rollup/dist/shared/rollup.js:22680:20)
at async Promise.all (index 0)
error: build error: running 'npm run build' failed with exit code 1
and this is my Dockerfile
FROM node:16.14.2-alpine as build-stage
RUN mkdir -p /app/
WORKDIR /app/
RUN chmod -R 777 /app/
COPY package*.json /app/
COPY tsconfig.json /app/
COPY tsconfig.node.json /app/
RUN npm ci
COPY ./ /app/
RUN npm run build
FROM nginxinc/nginx-unprivileged
#FROM bitnami/nginx:latest
COPY --from=build-stage /app/dist/ /usr/share/nginx/html
#CMD ["nginx", "-g", "daemon off;"]
ENTRYPOINT ["nginx", "-g", "daemon off;"]
EXPOSE 80
Vite by default uses an HTML page as the entry point, so you either need to create one or, if you don't have an HTML page, you can use Vite in "Library Mode".
https://vitejs.dev/guide/build.html#library-mode
From the docs:
// vite.config.js
const path = require('path')
const { defineConfig } = require('vite')

module.exports = defineConfig({
  build: {
    lib: {
      entry: path.resolve(__dirname, 'lib/main.js'),
      name: 'MyLib',
      fileName: (format) => `my-lib.${format}.js`
    },
    rollupOptions: {
      // make sure to externalize deps that shouldn't be bundled
      // into your library
      external: ['vue'],
      output: {
        // Provide global variables to use in the UMD build
        // for externalized deps
        globals: {
          vue: 'Vue'
        }
      }
    }
  }
})
If you're using ES Modules (i.e., import syntax), look in your package.json to confirm the type field is set to module:
// vite.config.js
import * as path from 'path';
import { defineConfig } from "vite";

const config = defineConfig({
  build: {
    lib: {
      entry: path.resolve(__dirname, 'lib/main.js'),
      name: 'MyLib',
      fileName: (format) => `my-lib.${format}.js`
    },
    rollupOptions: {
      // make sure to externalize deps that shouldn't be bundled
      // into your library
      external: ['vue'],
      output: {
        // Provide global variables to use in the UMD build
        // for externalized deps
        globals: {
          vue: 'Vue'
        }
      }
    }
  }
});

export default config;
I had the same issue because of .dockerignore. Make sure your index.html is not ignored.
If you are ignoring everything (**), you can add !index.html on the next line and try again; see the sketch below.
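For illustration, a minimal .dockerignore along those lines (the catch-all pattern and the exception are an assumed example, not the asker's actual file):
# ignore everything in the build context...
**
# ...except the Vite entry point
!index.html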

Connecting Redis with Docker with Bull with Throng with Node

I have a Heroku app that has a single process. I'm trying to change it so that it has several worker processes in a dedicated queue to handle incoming webhooks. To do so, I am using a Node.JS backend with the Bull and Throng packages, which use Redis. All of this is deployed on Docker.
I've found various tutorials that cover some of this combination, but not all of it so I'm not sure how to continue. When I spin up Docker, the main server runs, but when the worker process tries to start, it just logs Killed, which isn't that detailed of an error message.
Most of the information I found is here
My worker process file is worker.ts:
import { bullOptions, RedisData } from '../database/redis';
import throng from 'throng';
import { Webhooks } from '@octokit/webhooks';
import config from '../config/main';
import { configureWebhooks } from '../lib/github/webhooks';
import Bull from 'bull';

// Spin up multiple processes to handle jobs to take advantage of more CPU cores
// See: https://devcenter.heroku.com/articles/node-concurrency for more info
const workers = 2;

// The maximum number of jobs each worker should process at once. This will need
// to be tuned for your application. If each job is mostly waiting on network
// responses it can be much higher. If each job is CPU-intensive, it might need
// to be much lower.
const maxJobsPerWorker = 50;

const webhooks = new Webhooks({
  secret: config.githubApp.webhookSecret,
});

configureWebhooks(webhooks);

async function startWorkers() {
  console.log('starting workers...');
  const queue = new Bull<RedisData>('work', bullOptions);
  try {
    await queue.process(maxJobsPerWorker, async (job) => {
      console.log('processing...');
      try {
        await webhooks.verifyAndReceive(job.data);
      } catch (e) {
        console.error(e);
      }
      return job.finished();
    });
  } catch (e) {
    console.error(`Error processing worker`, e);
  }
}

throng({ workers: workers, start: startWorkers });
In my main server, I have the file Redis.ts:
import Bull, { QueueOptions } from 'bull';
import { EmitterWebhookEvent } from '@octokit/webhooks';

export const bullOptions: QueueOptions = {
  redis: {
    port: 6379,
    host: 'cache',
    tls: {
      rejectUnauthorized: false,
    },
    connectTimeout: 30_000,
  },
};

export type RedisData = EmitterWebhookEvent & { signature: string };

let githubWebhooksQueue: Bull.Queue<RedisData> | undefined = undefined;

export async function addToGithubQueue(data: RedisData) {
  try {
    await githubWebhooksQueue?.add(data);
  } catch (e) {
    console.error(e);
  }
}

export function connectToRedis() {
  githubWebhooksQueue = new Bull<RedisData>('work', bullOptions);
}
(Note: I invoke connectToRedis() before the worker process begins)
My Dockerfile is
# We can change the version of node by replacing `lts` with anything found here: https://hub.docker.com/_/node
FROM node:lts
ENV PORT=80
WORKDIR /usr/src/app
# Install dependencies
COPY package*.json ./
COPY yarn.lock ./
RUN yarn
RUN yarn global add npm-run-all
# Bundle app source
COPY . .
# Expose the web port
EXPOSE 80
EXPOSE 9229
EXPOSE 6379
CMD npm-run-all --parallel start start-notification-server start-github-server
and my docker-compose.yml is
version: '3.7'
services:
  redis:
    image: redis
    container_name: cache
    expose:
      - 6379
  api:
    links:
      - redis
    image: instantish/api:latest
    environment:
      REDIS_URL: redis://cache
    command: npm-run-all --parallel dev-debug start-notification-server-dev start-github-server-dev
    depends_on:
      - mongo
    env_file:
      - api/.env
      - api/flags.env
    ports:
      - 2000:80
      - 9229:9229
      - 6379:6379
    volumes:
      # Activate if you want your local changes to update the container
      - ./api:/usr/src/app:cached
Finally, the relevant NPM scripts for my project are
"dev-debug": "nodemon --watch \"**/**\" --ext \"js,ts,json\" --exec \"node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js\"",
"start-github-server-dev": "MONGOOSE_DEBUG=false nodemon --watch \"**/**\" --ext \"js,ts,json\" --exec \"ts-node ./scripts/worker.ts\"",
The docker container logs are:
> instantish@1.0.0 start-github-server-dev /usr/src/app
> MONGOOSE_DEBUG=false nodemon --watch "**/**" --ext "js,ts,json" --exec "ts-node ./scripts/worker.ts"
> instantish@1.0.0 dev-debug /usr/src/app
> nodemon --watch "**/**" --ext "js,ts,json" --exec "node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js"
[nodemon] 1.19.1
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: **/**
[nodemon] starting `ts-node ./scripts/worker.ts`
[nodemon] 1.19.1
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: **/**
[nodemon] starting `node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js`
worker.ts
Killed
[nodemon] app crashed - waiting for file changes before starting...

How to define port dynamically in proxy.conf.json or proxy.conf.js

I have an Angular 4 application, and for development purposes I start it with npm run start, with start defined as "start": "ng serve --proxy=proxy.conf.json". Also I have
{
  "/api/**": {
    "target": "http://localhost:8080/api/"
  }
}
defined in proxy.conf.json.
Is there a way to define the port, or the whole URL, dynamically, like npm run start --port=8099?
PS: http://localhost:8080/api/ is the URL of my backend API.
I also did not find a way to interpolate the .json file, but you can use a .js file instead, e.g.:
proxy.conf.js
var process = require("process");
var BACKEND_PORT = process.env.BACKEND_PORT || 8080;

const PROXY_CONFIG = [
  {
    context: [
      "/api/**"
    ],
    target: "http://localhost:" + BACKEND_PORT + "/api/"
  },
];

module.exports = PROXY_CONFIG;
Add a new command to package.json (notice .js, not .json):
"startOn8090": "BACKEND_PORT=8090 && ng serve --proxy=proxy.conf.js"
or simply set the environment variable in your shell and call npm run-script start, as in the example below.
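A minimal sketch of that variant (assuming the start script has been switched to --proxy=proxy.conf.js so the variable is actually read):
$ BACKEND_PORT=8090 npm run start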
In .angular-cli.json:
....
"defaults": {
  "serve": {
    "port": 9000
  }
}
....
makes the proxy available on port 9000 rather than the default 4200.
