Skaffold syncs files but pod doesn't refresh - node.js

Kubernetes newbie here.
I'm seeing some strange Skaffold/Kubernetes behavior. I'm working in Google Cloud, but I've switched to a local environment just to test and it's the same, so it's probably me who's doing something wrong. The problem is that although I see Skaffold syncing changes, those changes aren't reflected: all the files inside the pods are still the old ones.
Skaffold.yaml:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  # local:
  #   push: false
  googleCloudBuild:
    projectId: ts-maps-286111
  artifacts:
    - image: us.gcr.io/ts-maps-286111/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
    - image: us.gcr.io/ts-maps-286111/client
      context: client
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: '**/*.js'
            dest: .
    - image: us.gcr.io/ts-maps-286111/tickets
      context: tickets
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
    - image: us.gcr.io/ts-maps-286111/orders
      context: orders
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
    - image: us.gcr.io/ts-maps-286111/expiration
      context: expiration
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
When a file inside one of the directories is changed, I see the following logs:
time="2020-09-05T01:24:06+03:00" level=debug msg="Change detected notify.Write: \"F:\\projects\\lrn_microservices\\complex\\expiration\\src\\index.ts\""
time="2020-09-05T01:24:06+03:00" level=debug msg="Change detected notify.Write: \"F:\\projects\\lrn_microservices\\complex\\expiration\\src\\index.ts\""
time="2020-09-05T01:24:06+03:00" level=debug msg="Change detected notify.Write: \"F:\\projects\\lrn_microservices\\complex\\expiration\\src\\index.ts\""
time="2020-09-05T01:24:06+03:00" level=debug msg="Change detected notify.Write: \"F:\\projects\\lrn_microservices\\complex\\expiration\\src\\index.ts\""
time="2020-09-05T01:24:06+03:00" level=debug msg="Change detected notify.Write: \"F:\\projects\\lrn_microservices\\complex\\expiration\\src\\index.ts\""
time="2020-09-05T01:24:06+03:00" level=debug msg="Change detected notify.Write: \"F:\\projects\\lrn_microservices\\complex\\expiration\\src\\index.ts\""
time="2020-09-05T01:24:07+03:00" level=debug msg="Found dependencies for dockerfile: [{package.json /app true} {. /app true}]"
time="2020-09-05T01:24:07+03:00" level=debug msg="Skipping excluded path: node_modules"
time="2020-09-05T01:24:07+03:00" level=debug msg="Found dependencies for dockerfile: [{package.json /app true} {. /app true}]"
time="2020-09-05T01:24:07+03:00" level=debug msg="Skipping excluded path: .next"
time="2020-09-05T01:24:07+03:00" level=debug msg="Skipping excluded path: node_modules"
time="2020-09-05T01:24:07+03:00" level=debug msg="Found dependencies for dockerfile: [{package.json /app true} {. /app true}]"
time="2020-09-05T01:24:07+03:00" level=debug msg="Skipping excluded path: node_modules"
time="2020-09-05T01:24:07+03:00" level=debug msg="Found dependencies for dockerfile: [{package.json /app true} {. /app true}]"
time="2020-09-05T01:24:07+03:00" level=debug msg="Skipping excluded path: node_modules"
time="2020-09-05T01:24:07+03:00" level=debug msg="Found dependencies for dockerfile: [{package.json /app true} {. /app true}]"
time="2020-09-05T01:24:07+03:00" level=debug msg="Skipping excluded path: node_modules"
time="2020-09-05T01:24:07+03:00" level=info msg="files modified: [expiration\\src\\index.ts]"
Syncing 1 files for us.gcr.io/ts-maps-286111/expiration:2aae7ff-dirty#sha256:2e31caedf3d9b2bcb2ea5693f8e22478a9d6caa21d1a478df5ff8ebcf562573e
time="2020-09-05T01:24:07+03:00" level=info msg="Copying files: map[expiration\\src\\index.ts:[/app/src/index.ts]] to us.gcr.io/ts-maps-286111/expiration:2aae7ff-dirty#sha256:2e31caedf3d9b2bcb2ea5693f8e22478a9d6caa21d1a478df5ff8ebcf562573e"
time="2020-09-05T01:24:07+03:00" level=debug msg="getting client config for kubeContext: ``"
time="2020-09-05T01:24:07+03:00" level=debug msg="Running command: [kubectl --context gke_ts-maps-286111_europe-west3-a_ticketing-dev exec expiration-depl-5cb997d597-p49lv --namespace default -c expiration -i -- tar xmf - -C / --no-same-owner]"
time="2020-09-05T01:24:09+03:00" level=debug msg="Command output: [], stderr: tar: removing leading '/' from member names\n"
Watching for changes...
time="2020-09-05T01:24:11+03:00" level=info msg="Streaming logs from pod: expiration-depl-5cb997d597-p49lv container: expiration"
time="2020-09-05T01:24:11+03:00" level=debug msg="Running command: [kubectl --context gke_ts-maps-286111_europe-west3-a_ticketing-dev logs --since=114s -f expiration-depl-5cb997d597-p49lv -c expiration --namespace default]"
[expiration]
[expiration] > expiration@1.0.0 start /app
[expiration] > ts-node-dev --watch src src/index.ts
[expiration]
[expiration] ts-node-dev ver. 1.0.0-pre.62 (using ts-node ver. 8.10.2, typescript ver. 3.9.7)
[expiration] starting expiration!kdd
[expiration] Connected to NATS!
The Node.js server inside the pod restarts. Sometimes I see this line, sometimes not; the overall result is the same:
[expiration] [INFO] 22:23:42 Restarting: src/index.ts has been modified
But no changes actually show up. If I cat the changed file inside the pod, it's the old version; if I delete the pod, it starts again with the old version.
My folder structure:
+---auth
| \---src
| +---models
| +---routes
| | \---__test__
| +---services
| \---test
+---client
| +---.next
| | +---cache
| | | \---next-babel-loader
| | +---server
| | | \---pages
| | | +---auth
| | | \---next
| | | \---dist
| | | \---pages
| | \---static
| | +---chunks
| | | \---pages
| | | +---auth
| | | \---next
| | | \---dist
| | | \---pages
| | +---development
| | \---webpack
| | \---pages
| | \---auth
| +---api
| +---components
| +---hooks
| \---pages
| \---auth
+---common
| +---build
| | +---errors
| | +---events
| | | \---types
| | \---middlewares
| \---src
| +---errors
| +---events
| | \---types
| \---middlewares
+---config
+---expiration
| \---src
| +---events
| | +---listeners
| | \---publishers
| +---queue
| \---__mocks__
+---infra
| \---k8s
+---orders
| \---src
| +---events
| | +---listeners
| | | \---__test__
| | \---publishers
| +---models
| +---routes
| | \---__test__
| +---test
| \---__mocks__
+---payment
\---tickets
\---src
+---events
| +---listeners
| | \---__test__
| \---publishers
+---models
| \---__test__
+---routes
| \---__test__
+---test
\---__mocks__
Would be grateful for any help!

What worked for me was using the --poll flag with ts-node-dev.
My start script looks like this:
"start": "ts-node-dev --respawn --poll --inspect --exit-child src/index.ts"

For your start script, try enabling polling. For example, if your start script is "start": "nodemon src/index.js", change it to "nodemon --legacy-watch src/index.js" (--legacy-watch is nodemon's polling mode).
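As a concrete sketch (the entry file src/index.ts and the ts-node-dev options are taken from the answer above; adjust to your own package.json), the scripts block for one of the services might look like this with polling enabled:

{
  "scripts": {
    "start": "ts-node-dev --poll --respawn src/index.ts"
  }
}

With --poll, ts-node-dev checks files for changes on a timer instead of relying on filesystem events, which can be more reliable inside a container where files arrive via tar-based sync.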

It looks like there's no major error in the logs; my guess is that the files are actually being put in another directory. You can try running, inside the container:
find / -name "index.ts"
to see whether the file landed somewhere else.
Another thing to check is the WORKDIR value in your container(s). Check what directory you land in when you run:
kubectl exec -it <pod-name> -c <container-name> -- sh
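Putting those two checks together, something like this shows the working directory and whether the synced file actually landed under the expected path (pod and container names are copied from the Skaffold logs above; substitute your own):

# Pod/container names taken from the logs above -- adjust to your deployment.
kubectl exec expiration-depl-5cb997d597-p49lv -c expiration -- pwd              # the container's WORKDIR
kubectl exec expiration-depl-5cb997d597-p49lv -c expiration -- ls -l /app/src   # the destination Skaffold reported syncing to
kubectl exec expiration-depl-5cb997d597-p49lv -c expiration -- find / -name "index.ts"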
✌️

Related

Path to file not found for postgres script `COPY <table_name>(<columns>) FROM '<path>' ...` within docker container using linux distribution

This issue has to do with the fact that the file exists on the backend container but not the postgres container. How could I transfer the file between containers automatically?
I am currently trying to execute the following script:
COPY climates(
    station_id,
    date,
    element,
    data_value,
    m_flag,
    q_flag,
    s_flag,
    obs_time
)
FROM '/usr/api/2017.csv'
DELIMITER ','
CSV HEADER;
within a docker container running a sequelize backend connecting to a postgres:14.1-alpine container.
The following error is returned:
db_1 | 2022-08-30 04:23:58.358 UTC [29] ERROR: could not open file "/usr/api/2017.csv" for reading: No such file or directory
db_1 | 2022-08-30 04:23:58.358 UTC [29] HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
db_1 | 2022-08-30 04:23:58.358 UTC [29] STATEMENT: COPY climates(
db_1 | station_id,
db_1 | date,
db_1 | element,
db_1 | data_value,
db_1 | m_flag,
db_1 | q_flag,
db_1 | s_flag,
db_1 | obs_time
db_1 | )
db_1 | FROM '/usr/api/2017.csv'
db_1 | DELIMITER ','
db_1 | CSV HEADER;
ebapi | Unable to connect to the database: MigrationError: Migration 20220829_02_populate_table.js (up) failed: Original error: could not open file "/usr/api/2017.csv" for reading: No such file or directory
ebapi | at /usr/api/node_modules/umzug/lib/umzug.js:151:27
ebapi | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
ebapi | at async Umzug.runCommand (/usr/api/node_modules/umzug/lib/umzug.js:107:20)
ebapi | ... 2 lines matching cause stack trace ...
ebapi | at async start (/usr/api/index.js:14:3) {
ebapi | cause: Error
ebapi | at Query.run (/usr/api/node_modules/sequelize/lib/dialects/postgres/query.js:50:25)
ebapi | at /usr/api/node_modules/sequelize/lib/sequelize.js:311:28
ebapi | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
ebapi | at async Object.up (/usr/api/migrations/20220829_02_populate_table.js:10:5)
ebapi | at async /usr/api/node_modules/umzug/lib/umzug.js:148:21
ebapi | at async Umzug.runCommand (/usr/api/node_modules/umzug/lib/umzug.js:107:20)
ebapi | at async runMigrations (/usr/api/util/db.js:52:22)
ebapi | at async connectToDatabase (/usr/api/util/db.js:32:5)
ebapi | at async start (/usr/api/index.js:14:3) {
ebapi | name: 'SequelizeDatabaseError',
...
Here is my docker-compose.yml
# set up a postgres database
version: "3.8"
services:
  db:
    image: postgres:14.1-alpine
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - "5432:5432"
    volumes:
      - db:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
  api:
    container_name: ebapi
    build:
      context: ./energybot
    depends_on:
      - db
    ports:
      - 3001:3001
    environment:
      DATABASE_URL: postgres://postgres:postgres@db:5432/postgres
      DB_HOST: db
      DB_PORT: 5432
      DB_USER: postgres
      DB_PASSWORD: postgres
      DB_NAME: postgres
    links:
      - db
    volumes:
      - "./energybot:/usr/api"
volumes:
  db:
    driver: local
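For illustration only (this isn't from the original thread): the server-side COPY runs inside the db container, so the CSV has to be visible on that container's filesystem. One sketch, reusing the paths from the compose file above, is to mount the same host directory into the db service as well:

  db:
    volumes:
      - db:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
      - ./energybot:/usr/api   # assumes 2017.csv sits in ./energybot on the host, matching the api mount

Alternatively, psql's \copy (the client-side facility mentioned in the HINT) reads the file on the client side, so it could stay in the api container.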

prisma can't find database url

I'm using Prisma in Node and I'm using Docker.
When I try to create a new user in my database, it says that DATABASE_URL was not found, but I declared it inside my .env, so I don't really know where this error comes from.
server_container | Server is running on port 4000
server_container |
server_container | Invalid `prisma.user.create()` invocation in
server_container | /app/src/app.ts:18:36
server_container |
server_container | 15 const prisma = new PrismaClient();
server_container | 16
server_container | 17 const main = async () => {
server_container | → 18 const user = await prisma.user.create(
server_container | error: Environment variable not found: DATABASE_URL.
server_container | --> schema.prisma:10
server_container | |
server_container | 9 | provider = "postgresql"
server_container | 10 | url = env("DATABASE_URL")
server_container | |
server_container |
server_container | Validation Error Count: 1
My docker-compose.yml looks like this:
version: '3.8'
services:
  api:
    build: .
    container_name: server_container
    ports:
      - "4000:4000"
    volumes:
      - ./:/app
    networks:
      - api-pokemon-network
    depends_on:
      - db
  db:
    image: postgres:14
    restart: always
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_PORT: ${POSTGRES_PORT}
    volumes:
      - data:/var/lib/postgresql/data
    env_file:
      - .env
    command: -p ${POSTGRES_PORT}
    networks:
      - api-pokemon-network
    ports:
      - '${POSTGRES_PORT}:${POSTGRES_PORT}'
volumes:
  data:
networks:
  api-pokemon-network:
and this is my .env
POSTGRES_DB=postgres
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres_docker
POSTGRES_HOST=db
POSTGRES_PORT=54320
DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost:${POSTGRES_PORT}/${POSTGRES_DB}?schema=public
Just run npx prisma generate. This will re-establish the link between schema.prisma and the .env file. Also make sure you wrap your DB URL in backticks ``.
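For reference, a sketch of the fully expanded value using the settings from this question (db is the compose service name; use localhost instead if the app runs outside Docker, and the quoting style depends on your tooling):

# regenerate the Prisma client after changing schema.prisma or .env
npx prisma generate

# .env -- values expanded from the question's variables
DATABASE_URL="postgresql://postgres:postgres_docker@db:54320/postgres?schema=public"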

Error: Cannot find module '/app/wait-for-it.sh"'

I am trying to dockerize my backend server.
My stack is Node.js/NestJS with Redis and Postgres.
Here is my Dockerfile:
FROM node:15
WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig.json ./
COPY wait-for-it.sh ./
COPY . .
RUN npm install -g npm@7.22.0
RUN npm install
RUN npm run build
RUN chmod +x ./wait-for-it.sh .
EXPOSE 3333
CMD [ "sh", "-c", "npm run start:prod"]
and here is my docker-compose file:
version: '3.2'
services:
  redis-service:
    image: "redis:alpine"
    container_name: redis-container
    ports:
      - 127.0.0.1:6379:6379
    expose:
      - 6379
  postgres:
    image: postgres:14.1-alpine
    container_name: postgres-container
    restart: always
    environment:
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=1234
      - DB_NAME = db
    ports:
      - 127.0.0.1:5432:5432
    expose:
      - 5432
    volumes:
      - db:/var/lib/postgresql/data
  oms-be:
    build: .
    ports:
      - 3333:3333
    links:
      - postgres
      - redis-service
    depends_on:
      - postgres
      - redis-service
    environment:
      - DB_HOST=postgres
      - POSTGRES_PASSWORD = 1234
      - POSTGRES_USER=root
      - AUTH_REDIS_HOST=redis-service
      - DB_NAME = db
    command: ["./wait-for-it.sh", "postgres:5432", "--", "sh", "-c", "npm run start:prod"]
volumes:
  db:
    driver: local
However, when I run docker-compose up, I get this error:
Attaching to oms-be-oms-be-1, postgres-container, redis-container
redis-container | 1:C 05 Jun 2022 00:35:16.730 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis-container | 1:C 05 Jun 2022 00:35:16.730 # Redis version=7.0.0, bits=64, commit=00000000, modified=0, pid=1, just started
redis-container | 1:C 05 Jun 2022 00:35:16.730 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis-container | 1:M 05 Jun 2022 00:35:16.731 * monotonic clock: POSIX clock_gettime
redis-container | 1:M 05 Jun 2022 00:35:16.731 * Running mode=standalone, port=6379.
redis-container | 1:M 05 Jun 2022 00:35:16.731 # Server initialized
redis-container | 1:M 05 Jun 2022 00:35:16.731 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis-container | 1:M 05 Jun 2022 00:35:16.732 * The AOF directory appendonlydir doesn't exist
redis-container | 1:M 05 Jun 2022 00:35:16.732 * Ready to accept connections
postgres-container |
postgres-container | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres-container |
postgres-container | 2022-06-05 00:35:16.824 UTC [1] LOG: starting PostgreSQL 14.1 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20211027) 10.3.1 20211027, 64-bit
postgres-container | 2022-06-05 00:35:16.824 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres-container | 2022-06-05 00:35:16.824 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres-container | 2022-06-05 00:35:16.827 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres-container | 2022-06-05 00:35:16.833 UTC [21] LOG: database system was shut down at 2022-06-05 00:34:36 UTC
postgres-container | 2022-06-05 00:35:16.836 UTC [1] LOG: database system is ready to accept connections
oms-be-oms-be-1 | internal/modules/cjs/loader.js:905
oms-be-oms-be-1 | throw err;
oms-be-oms-be-1 | ^
oms-be-oms-be-1 |
oms-be-oms-be-1 | Error: Cannot find module '/app/wait-for-it.sh"'
oms-be-oms-be-1 | at Function.Module._resolveFilename (internal/modules/cjs/loader.js:902:15)
oms-be-oms-be-1 | at Function.Module._load (internal/modules/cjs/loader.js:746:27)
oms-be-oms-be-1 | at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:75:12)
oms-be-oms-be-1 | at internal/main/run_main_module.js:17:47 {
oms-be-oms-be-1 | code: 'MODULE_NOT_FOUND',
oms-be-oms-be-1 | requireStack: []
oms-be-oms-be-1 | }
oms-be-oms-be-1 exited with code 1
I tried building it without wait-for-it.sh and it complained that the server cannot connect to the Postgres DB and Redis, so I added the wait-for-it.sh file to make it wait until Redis and the Postgres DB are up, but then I got the above error.
Can anyone tell me what I am doing wrong?
I've simplified your Dockerfile and docker-compose.yaml in order to test things out on my system. I have this package.json:
{
  "name": "example",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build": "echo \"Example build command\"",
    "start:prod": "sleep inf"
  },
  "author": "",
  "license": "ISC"
}
And this Dockerfile:
FROM node:15
WORKDIR /usr/src/app
COPY package*.json ./
COPY wait-for-it.sh ./
RUN chmod +x ./wait-for-it.sh .
RUN npm install
RUN npm run build
EXPOSE 3333
CMD [ "sh", "-c", "npm run start:prod"]
And this docker-compose.yaml:
version: '3.2'
services:
  postgres:
    image: docker.io/postgres:14
    environment:
      POSTGRES_PASSWORD: secret
  oms-be:
    build: .
    ports:
      - 3333:3333
    command: [./wait-for-it.sh", "postgres:5432", "--", "sh", "-c", "npm run start:prod"]
Note that the command: on the final line reproduces the missing quote. If I try to bring this up using docker-compose up, I see:
oms-be_1 | node:internal/modules/cjs/loader:927
oms-be_1 | throw err;
oms-be_1 | ^
oms-be_1 |
oms-be_1 | Error: Cannot find module '/usr/src/app/wait-for-it.sh"'
oms-be_1 | at Function.Module._resolveFilename (node:internal/modules/cjs/loader:924:15)
oms-be_1 | at Function.Module._load (node:internal/modules/cjs/loader:769:27)
oms-be_1 | at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:76:12)
oms-be_1 | at node:internal/main/run_main_module:17:47 {
oms-be_1 | code: 'MODULE_NOT_FOUND',
oms-be_1 | requireStack: []
oms-be_1 | }
If I correct the syntax so that we have:
version: '3.2'
services:
  postgres:
    image: docker.io/postgres:14
    environment:
      POSTGRES_PASSWORD: secret
  oms-be:
    build: .
    ports:
      - 3333:3333
    command: ["./wait-for-it.sh", "postgres:5432", "--", "sh", "-c", "npm run start:prod"]
Then it runs successfully:
oms-be_1 | wait-for-it.sh: waiting 15 seconds for postgres:5432
oms-be_1 | wait-for-it.sh: postgres:5432 is available after 0 seconds
oms-be_1 |
oms-be_1 | > example@1.0.0 start:prod
oms-be_1 | > sleep inf
oms-be_1 |
The difference in behavior is due to the ENTRYPOINT script in the underlying node:15 image, which includes this logic:
if [ "${1#-}" != "${1}" ] || [ -z "$(command -v "${1}")" ]; then
  set -- node "$@"
fi
That says, essentially:
IF the first parameter starts with -
OR There is no command matching $1
THEN try starting the command with node
With the missing ", the first argument doesn't match any valid command, so the entrypoint falls back to prefixing node, and node then fails trying to load the wait-for-it.sh script as a module.
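You can reproduce that fallback on its own (a sketch; any nonexistent script name will do):

# The first argument isn't a known command, so the entrypoint runs `node ./no-such-script.sh`,
# and node reports MODULE_NOT_FOUND -- the same failure mode as the broken command: line above.
docker run --rm node:15 ./no-such-script.sh
# => Error: Cannot find module '/no-such-script.sh'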

Problem dockerizing a NestJS app with Prisma and PostgreSQL

I am trying to build a NestJS app with Prisma and PostgreSQL. I want to use docker; however, I got an error when I sent the request to the backend.
Here is my Dockerfile:
FROM node:14 AS builder
WORKDIR /app
COPY package*.json ./
COPY prisma ./prisma/
RUN npm install
RUN npx prisma generate
COPY . .
RUN npm run build
FROM node:14
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD [ "npm", "run", "start:prod" ]
Here is my docker-compose.yml
version: '3.8'
services:
  nest-api:
    container_name: nest-api
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3000:3000
    depends_on:
      - postgres
    env_file:
      - .env
  postgres:
    image: postgres:13
    container_name: postgres
    restart: always
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: task-management
    env_file:
      - .env
Here is my schema.prisma
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema

generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
  // url   = "postgresql://postgres:postgres@localhost:5432/task-management?schema=public"
}

model Task {
  id          Int        @id @default(autoincrement())
  title       String
  description String
  status      TaskStatus @default(OPEN)
}

enum TaskStatus {
  OPEN
  IN_PRO
  DOooNE
}
Here is the .env
# Environment variables declared in this file are automatically made available to Prisma.
# See the documentation for more detail: https://pris.ly/d/prisma-schema#using-environment-variables
# Prisma supports the native connection string format for PostgreSQL, MySQL, SQLite, SQL Server and MongoDB (Preview).
# See the documentation for all the connection string options: https://pris.ly/d/connection-strings
DATABASE_URL=postgresql://postgres:postgres@postgres:5432/task-management?schema=public
After I run docker-compose up, everything is fine. However, if I send a request to the app, I get the following error:
nest-api | [Nest] 19 - 11/02/2021, 5:52:43 AM ERROR [ExceptionsHandler]
nest-api | Invalid `this.prisma.task.create()` invocation in
nest-api | /dist/tasks/tasks.service.js:29:33
nest-api |
nest-api | 26 return found;
nest-api | 27 }
nest-api | 28 async creatTask(data) {
nest-api | → 29 return this.prisma.task.create(
nest-api | The table `public.Task` does not exist in the current database.
nest-api | Error:
nest-api | Invalid `this.prisma.task.create()` invocation in
nest-api | /dist/tasks/tasks.service.js:29:33
nest-api |
nest-api | 26 return found;
nest-api | 27 }
nest-api | 28 async creatTask(data) {
nest-api | → 29 return this.prisma.task.create(
nest-api | The table `public.Task` does not exist in the current database.
nest-api | at cb (/node_modules/@prisma/client/runtime/index.js:38537:17)
nest-api | at async /node_modules/@nestjs/core/router/router-execution-context.js:46:28
nest-api | at async /node_modules/@nestjs/core/router/router-proxy.js:9:17
What changes should I make in the Dockerfile to solve the problem?

Docker cannot run Angular container on a different port than the one Node Express starts on

I'm trying to run an Angular app with SSR on Docker.
My Dockerfile is:
FROM node:10-alpine as build-stage
ENV PROD true
WORKDIR /app
COPY ./package.json ./package-lock.json /app/
RUN npm install
COPY . /app
RUN npm run build:ssr
# stage 2
FROM node:10-alpine
WORKDIR /app
# Copy dependency definitions
COPY --from=build-stage /app/package.json /app
# Get all the code needed to run the app
COPY --from=build-stage /app/dist /app/dist
ADD ./build.js /app
EXPOSE 4200
CMD ["/bin/sh", "-c", "node build.js && npm run serve:ssr"]
My docker-compose file:
version: "3.8"
services:
frontend:
build:
context: ../../frontend/
dockerfile: Dockerfile
container_name: "frontend"
ports:
- 4500:4200
networks:
- external-network
networks:
external-network:
external: true
OK, now when I try to open localhost:4500 I get this error:
frontend |
frontend | > frontend#0.0.3 serve:ssr /app
frontend | > node --max_old_space_size=4096 dist/apps/frontend/server/server
frontend |
frontend | Node Express server listening on http://localhost:4200
frontend | (node:26) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
frontend | ERROR Failed to load the config file
frontend | ERROR { Error: Uncaught (in promise): Failed to load the config file
frontend | at resolvePromise (/app/dist/apps/frontend/server/server.js:1028:31)
frontend | at resolvePromise (/app/dist/apps/frontend/server/server.js:985:17)
frontend | at /app/dist/apps/frontend/server/server.js:1089:17
frontend | at ZoneDelegate.invokeTask (/app/dist/apps/frontend/server/server.js:599:31)
frontend | at Object.onInvokeTask (/app/dist/apps/frontend/server/server.js:192931:33)
frontend | at ZoneDelegate.invokeTask (/app/dist/apps/frontend/server/server.js:598:60)
frontend | at Zone.runTask (/app/dist/apps/frontend/server/server.js:371:47)
frontend | at drainMicroTaskQueue (/app/dist/apps/frontend/server/server.js:777:35)
frontend | at ZoneTask.invokeTask (/app/dist/apps/frontend/server/server.js:678:21)
frontend | at ZoneTask.invoke (/app/dist/apps/frontend/server/server.js:663:48)
frontend | rejection: 'Failed to load the config file',
frontend | promise:
frontend | ZoneAwarePromise [Promise] {
frontend | __zone_symbol__state: 0,
frontend | __zone_symbol__value: 'Failed to load the config file' },
frontend | zone:
frontend | Zone {
frontend | _parent:
frontend | Zone {
frontend | _parent: null,
frontend | _name: '<root>',
frontend | _properties: {},
frontend | _zoneDelegate: [ZoneDelegate] },
frontend | _name: 'angular',
frontend | _properties: { isAngularZone: true },
frontend | _zoneDelegate:
frontend | ZoneDelegate {
frontend | _taskCounts: [Object],
frontend | zone: [Circular],
frontend | _parentDelegate: [ZoneDelegate],
frontend | _forkZS: null,
frontend | _forkDlgt: null,
frontend | _forkCurrZone: [Zone],
frontend | _interceptZS: null,
frontend | _interceptDlgt: null,
frontend | _interceptCurrZone: [Zone],
frontend | _invokeZS: [Object],
frontend | _invokeDlgt: [ZoneDelegate],
frontend | _invokeCurrZone: [Circular],
frontend | _handleErrorZS: [Object],
frontend | _handleErrorDlgt: [ZoneDelegate],
frontend | _handleErrorCurrZone: [Circular],
frontend | _scheduleTaskZS: [Object],
frontend | _scheduleTaskDlgt: [ZoneDelegate],
frontend | _scheduleTaskCurrZone: [Circular],
frontend | _invokeTaskZS: [Object],
frontend | _invokeTaskDlgt: [ZoneDelegate],
frontend | _invokeTaskCurrZone: [Circular],
frontend | _cancelTaskZS: [Object],
frontend | _cancelTaskDlgt: [ZoneDelegate],
frontend | _cancelTaskCurrZone: [Circular],
frontend | _hasTaskZS: [Object],
frontend | _hasTaskDlgt: [ZoneDelegate],
frontend | _hasTaskDlgtOwner: [Circular],
frontend | _hasTaskCurrZone: [Circular] } },
frontend | task:
frontend | ZoneTask {
frontend | _zone:
frontend | Zone {
frontend | _parent: [Zone],
frontend | _name: 'angular',
frontend | _properties: [Object],
frontend | _zoneDelegate: [ZoneDelegate] },
frontend | runCount: 0,
frontend | _zoneDelegates: null,
frontend | _state: 'notScheduled',
frontend | type: 'microTask',
frontend | source: 'Promise.then',
frontend | data:
frontend | ZoneAwarePromise [Promise] {
frontend | __zone_symbol__state: 0,
frontend | __zone_symbol__value: 'Failed to load the config file' },
frontend | scheduleFn: undefined,
frontend | cancelFn: undefined,
frontend | callback: [Function],
frontend | invoke: [Function] } }
frontend | Failed to load the config file
But why can't I run it on another port?
When I run my docker-compose like this:
version: "3.8"
services:
frontend:
build:
context: ../../frontend/
dockerfile: Dockerfile
container_name: "frontend"
ports:
- 4200:4200
networks:
- external-network
networks:
external-network:
external: true
everything runs perfectly.
Why can I only run the app on the default Node Express port?
