Cannot find module - Docker / Babel config in Node.js

I have a "Cannot find module" error when using Docker, and I'm not sure what is going on. I've tried deleting 'volumes' in the docker-compose file. I have also tried removing the image (docker rmi) and running docker-compose up again. I'm really at a loss as to what is happening here. Any help would be appreciated.
docker-compose
version: '2'
services:
  nginx:
    build: "./nginx"
    links: ["node1", "node2"]
    ports: ["80:80"]
  node1:
    build:
      context: "./node"
      args:
        http_proxy: "${http_proxy}"
        https_proxy: "${https_proxy}"
    environment:
      http_proxy: "${http_proxy}"
      https_proxy: "${https_proxy}"
      NODE_PATH: "lib"
      NODE_ENV: "production"
      POSTGRES_USER: "admin"
      POSTGRES_PASSWORD: "password"
    links: ["postgres", "mongo"]
    ports: ["5000:5000"]
  node2:
    build:
      context: "./node"
      args:
        http_proxy: "${http_proxy}"
        https_proxy: "${https_proxy}"
    environment:
      http_proxy: "${http_proxy}"
      https_proxy: "${https_proxy}"
      NODE_PATH: "lib"
      NODE_ENV: "production"
      POSTGRES_USER: "admin"
      POSTGRES_PASSWORD: "password"
    links: ["postgres", "mongo"]
    ports: [5000]
  postgres:
    image: "postgres"
    environment:
      POSTGRES_USER: "admin"
      POSTGRES_PASSWORD: "password"
    ports: ["5432:5432"]
  mongo:
    image: mongo
    ports: ['27017:27017']
Dockerfile
FROM node
# Set up environment
RUN npm config set proxy $http_proxy
RUN npm config set https-proxy $https_proxy
# Install app
ENV INSTALL_PATH="/opt/node"
RUN ["mkdir", "-p", "$INSTALL_PATH"]
ADD package.json $INSTALL_PATH/package.json
ADD index.js $INSTALL_PATH/index.js
# Define working directory
WORKDIR $INSTALL_PATH
# Install dependencies
RUN npm install -g nodemon
RUN npm install
# Expose port
EXPOSE 5000
# Run app
ENTRYPOINT npm start
index.js
require('babel-core/register')()
require('babel-polyfill')
require('./bin/server.js')
package.json
{
  "name": "no-commerce",
  "version": "0.0.1",
  "description": "API for No-Commerce",
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "dev": "./node_modules/.bin/nodemon index.js",
    "test": "NODE_ENV=test ./node_modules/.bin/mocha --compilers js:babel-register --require babel-polyfill",
    "lint": "eslint src/**/*.js",
    "docs": "./node_modules/.bin/apidoc -i src/ -o docs"
  },
Error: Cannot find module './bin/server.js'
File Structure:
- Root
  - docker-compose
  - node
    - package.json
    - bin
      - server.js
    - index.js
    - Dockerfile
  - nginx

For starters, you are only adding these two files to the container in your Dockerfile:
ADD package.json $INSTALL_PATH/package.json
ADD index.js $INSTALL_PATH/index.js
You also need to add server.js to $INSTALL_PATH/bin.
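A minimal sketch of the fix, assuming bin/ sits next to index.js inside the node/ build context (as the file structure above shows):
# copy the bin directory into the image so require('./bin/server.js') resolves
ADD bin $INSTALL_PATH/bin
Alternatively, ADD . $INSTALL_PATH copies the whole build context in one step; placing it after the npm install lines keeps the dependency layer cached when only source files change.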

Related

Cloud Run: The user-provided container failed to start and listen on the port defined provided by the PORT=8080

I am trying to deploy a containerized node-typescript-express app to Cloud Run, but I am unable to do so, receiving the following error:
The user-provided container failed to start and listen on the port defined provided by the PORT=8080
Here is my Dockerfile config:
FROM node:18.13.0 as base
WORKDIR /home/node/app
COPY package*.json ./
RUN npm i
COPY . .
FROM base as production
ENV NODE_PATH=./dist
RUN npm run build
In my code, I'm declaring port as
const PORT = process.env.PORT || 8080;
I also have a .env file where I was setting port, but I deleted the port key - as far as I know, GCP cloud run injects the port variable anyway.
Here is a screenshot from my project settings on GCP. I uploaded my image by building it locally with docker-compose build, tagging it, and uploading it to the GCP container repository.
I've tried manually setting the port in the code, removing the env file completely, specifying a different port, etc. I'm not even sure if the port is specifically the error and it's just some kind of catch-all.
Here's my package.json:
{
  "name": "weather-service",
  "version": "0.0.0",
  "description": "small node server that fetches openweather api data",
  "engines": {
    "node": ">= 18.12 <19"
  },
  "scripts": {
    "start": "NODE_PATH=./dist node dist/src/index.js",
    "clean": "rimraf coverage dist tmp",
    "dev": "ts-node-dev -r tsconfig-paths/register src/index.ts",
    "prebuild": "npm run lint",
    "build": "ttsc -p tsconfig.release.json",
    "build:watch": "ttsc -w -p tsconfig.release.json",
    "build:release": "npm run clean && ttsc -p tsconfig.release.json",
    "test": "jest --coverage --detectOpenHandles --forceExit",
    "test:watch": "jest --watch --detectOpenHandles --forceExit",
    "lint": "eslint . --ext .ts --ext .mts && tsc",
    "lint:fix": "eslint . --ext .ts --ext .mts",
    "prettier": "prettier --config .prettierrc --write .",
    "prepare": "husky install",
    "pre-commit": "lint-staged"
And lastly, here are my docker-compose files and how I'm executing the commands:
docker-compose.yml
version: '3.7'
services:
  weather-service:
    build:
      context: .
      dockerfile: Dockerfile
      target: base
    volumes:
      - ./src:/home/node/app/src
    container_name: weather-service
    expose:
      - '8080'
    ports:
      - '8080:8080'
    command: npm run dev
docker-compose.prod.yml
version: '3.7'
services:
  weather-service:
    build:
      target: production
    command: npm run start
docker-compose.dev.yml
version: '3.7'
services:
  weather-service:
    env_file:
      - .env
    environment:
      - ${PORT}
      - ${WEATHER_API_KEY}
Makefile
up:
	docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
up-prod:
	docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
down:
	docker-compose down
build:
	docker-compose build
If you are using a MacBook, the answer from Bk Lim at the link below might help you:
Cloud Run: "Failed to start and then listen on the port defined by the PORT environment variable." When I use 8080
Update: I managed to get it successfully deployed by changing my docker-compose files to a template I found on GitHub, here
My docker knowledge is minimal so if anyone has any idea why my old docker-compose wasn't working, I'd love to know.
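One likely cause, offered as a hedged guess since only compose supplies a start command here: Cloud Run runs the container image by itself and never reads docker-compose files, so the command: npm run start from docker-compose.prod.yml is simply lost at deploy time. The production stage would then need its own CMD, roughly:
FROM base as production
ENV NODE_PATH=./dist
RUN npm run build
# start the compiled server; without a CMD, the image falls back to the base image's default command
CMD ["npm", "run", "start"]
The server must also bind to 0.0.0.0 rather than 127.0.0.1, or Cloud Run cannot detect that it is listening on $PORT.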

Running Nodemon in Docker Container

I'm trying to run the Nodemon package in a Docker network with two services, Node.js and MongoDB.
When I run the app with npm start, nodemon works: I can connect to localhost:3000 and I see changes in real time. But as soon as I run npm run docker (docker-compose up --build), I can still connect to localhost:3000, but I'm not able to see real-time changes in my application or in the console.
docker-compose.yml
version: '3.7'
services:
  app:
    container_name: NodeJs
    build: .
    volumes:
      - "./app:/usr/src/app/app"
    ports:
      - 3000:3000
  mongo_db:
    container_name: MongoDB
    image: mongo
    volumes:
      - mongo_db:/data/db
    ports:
      - 27017:27017
volumes:
  mongo_db:
dockerfile
FROM node:alpine
WORKDIR /app
COPY package.json /.
RUN npm install
COPY . /app
CMD ["npm", "run", "dev"]
package.json
{
  "name": "projectvintedapi",
  "version": "0.0.0",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "node index.js",
    "dev": "nodemon index.js",
    "docker": "docker-compose up --build"
  },
  "license": "ISC",
  "dependencies": {
    "dotenv": "^16.0.0",
    "express": "^4.17.3",
    "mongodb": "^4.5.0",
    "nodemon": "^2.0.15"
  }
}
docker-compose.yml tells Docker to boot a container for the Node.js application. It also tells Docker to mount a host volume:
volumes:
  - "./app:/usr/src/app/app"
As a result, Docker will mount the ./app directory on your laptop, which contains your code, into the container at /usr/src/app/app.
Once you’ve changed your code on your laptop/desktop, nodemon detects that change and restarts the process without rebuilding the container. To make this happen, you need to tell Docker to set the entrypoint to nodemon. Do that in the Dockerfile:
FROM node:alpine
WORKDIR /app
COPY . /app
RUN npm install -g nodemon
RUN npm install
# Give the path of your entry point here; with an ENTRYPOINT set, a CMD would just be
# appended to nodemon as extra arguments, so no CMD is needed
ENTRYPOINT ["nodemon", "/usr/src/app/server.js"]
With host volumes and nodemon, your code sync is almost instantaneous.
Fixed it: the problem was the bind mount.
dockerfile
FROM node:16.13.2
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . ./
CMD ["npm", "run", "start"]
docker-compose.yml
version: '3.7'
services:
  app:
    container_name: NodeJs
    build: .
    command: npm run dev
    volumes:
      - .:/app
    ports:
      - 3000:3000
  mongo_db:
    container_name: MongoDB
    image: mongo
    volumes:
      - mongo_db:/data/db
    ports:
      - 27017:27017
volumes:
  mongo_db:
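If restarts still fail to trigger after this change, one hedged suggestion: on macOS and Windows hosts, filesystem events do not always propagate across a bind mount, and nodemon offers a polling fallback for exactly that case. A minimal sketch, assuming index.js is the entry point:
"scripts": {
  "dev": "nodemon --legacy-watch index.js"
}
The --legacy-watch (or -L) flag makes nodemon poll for changes instead of relying on filesystem events.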

docker-compose setup node app - deep dive

I am currently getting an npm ERR! enoent ENOENT: no such file or directory, open '/usr/src/app/package.json' error with the Docker setup below, or an error TS2307: Cannot find module 'Actions' or its corresponding type declarations. I think either the paths are not being resolved via tsconfig.json during the build, or I am not COPYing the correct directory/volume in the Dockerfile. I have spent multiple days working through different path configs and setups; any help getting this to build would be greatly appreciated.
I would love to see a Node / TS / Docker / MySQL project example if there are any in the community to share. I have found it difficult to find open-source projects to compare this to for hints.
...
"paths": {
"Actions/*": [
"Actions/*"
],
}
docker-compose
version: '3.8'
services:
  app:
    image: app:latest
    container_name: balanced-money-backend
    build:
      context: .
      dockerfile: Dockerfile
    # TODO investigate uid and gid, how does it get in - from a startup script? Think it needs to be added like user: $UID:$GID if my cmd calls a setup to id on the host machine. Needs more investigation.
    depends_on:
      db:
        condition: service_healthy
    env_file:
      - .env
    restart: always
    volumes:
      - .:/var/www/
    command: npm start
    ports:
      - $NODE_LOCAL_PORT:$NODE_DOCKER_PORT
    environment:
      - DB_HOST=$MYSQL_HOST
      - DB_USERNAME=$MYSQL_USER
      - DB_PORT=$MYSQL_DOCKER_PORT
      - DB_PASSWORD=$MYSQL_PASSWORD
      - DB_DATABASE=$MYSQL_DATABASE
  db:
    image: mysql:5.7
    restart: always
    container_name: balanced-money-database
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
      - MYSQL_USER=$MYSQL_USER
      - MYSQL_PASSWORD=$MYSQL_PASSWORD
      - MYSQL_DATABASE=$MYSQL_DATABASE
    ports:
      - $MYSQL_LOCAL_PORT:$MYSQL_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql
    healthcheck: # mysql does not start immediately; the app needs to wait for mysql, and condition: service_healthy on app plus this healthcheck makes sure db has started before app... I think.
      test: mysqladmin ping -h 127.0.0.1 -u $$MYSQL_USER --password=$$MYSQL_PASSWORD
      timeout: 20s
      retries: 10
volumes:
  db:
Dockerfile
############### Stage 1 - build the project
# use the alpine version of node to keep the image size as small as possible
FROM node:16-alpine AS build
# node docs recommend this
WORKDIR /usr/src/app
# docker caches per row as it builds, so copy those files which do not change often to the container first and following builds will not need copy as they are already cached by Docker.
COPY package*.json ./
COPY src tsconfig.json ./
RUN npm install
RUN npm run build
# TODO not sure about the stages - can i have a test / dev stage so test / dev is run in docker too.
############### Stage 2 - run the project
FROM build AS prod
EXPOSE 4000
# from stage 1 (build), take the compiled code in dist/ and the package.json files and copy them into this stage
COPY --from=build /usr/src/app/dist ./dist/
COPY --from=build /usr/src/app/package*.json ./
# npm ci will install exact versions from a package-lock file, and --production will only install dependencies, not dev dependencies.
RUN npm ci --production && npm cache clean --force
# make sure user is not root which could have security consequences.
USER node
CMD ["node", "dist/index.js"]
package.json scripts
"scripts": {
"build": "tsc",
"start": "node ./dist/index.js",
"node": "./dist/index.js",
"dev": "NODE_ENV=development DOTENV_CONFIG_PATH=.env.dev nodemon ts-node src/index.ts",
"format:prettier": "prettier --config .prettierrc 'src/**/*.ts' --write",
"lint": "eslint . --ext .ts",
"lint:fix": "eslint . --ext .ts --fix",
"test": "DOTENV_CONFIG_PATH=.env.test NODE_ENV=test jest --runInBand",
"test:coverage": "DOTENV_CONFIG_PATH=.env.test NODE_ENV=test jest --coverage",
},
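One hedged observation on the build stage: with Docker's COPY, copying a directory copies its contents rather than the directory itself, so COPY src tsconfig.json ./ drops the files inside src/ directly into /usr/src/app instead of /usr/src/app/src. If tsconfig.json expects sources under src/, that alone can break both the build and the Actions/* path aliases. A sketch of the stage with the layout preserved:
FROM node:16-alpine AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# copy the directory itself so the image layout matches what tsconfig.json expects
COPY src ./src
COPY tsconfig.json ./
RUN npm run build
Separately, the bind mount - .:/var/www/ points at /var/www rather than the /usr/src/app the image uses, so it is unlikely to be what npm start sees at runtime.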

Docker compose MongoDB with NodeJS using yarn

I'm trying to use Docker Compose to put a Node.js server and a MongoDB database in containers. I'm also trying to use yarn as my package manager. My project is coded in TypeScript, and MongoDB is used as a service for my server.
What I have at the moment works just fine, but I'm not sure my implementation is even good. When I run docker-compose up, the containers and images for Node.js and MongoDB are created, and the containers are running on the specified ports.
The problem is that when I kill the MongoDB container, my Node.js server still has access to the database, as if the MongoDB container were still running. I would guess that it's because the MongoDB server is also running in the same container as my Node.js server.
This is my Dockerfile:
FROM node:lts-alpine
ADD package.json /app/package.json
ADD yarn.lock /app/yarn.lock
WORKDIR /app
# Installing packages
RUN yarn
ADD . /app
ENV NODE_ENV=production
# Building TypeScript files
RUN yarn build
CMD ["node", "./dist/app.js"]
These are the scripts in my package.json file:
"scripts": {
"lint": "eslint src/**/*.ts",
"format": "eslint src/**/*.ts --fix",
"start": "tsc && node --unhandled-rejections=strict ./dist/app.js",
"debug": "export DEBUG=* && npm run start",
"test": "echo \"Error: no test specified\" && exit 1",
"dev": "ts-node src/app.ts",
"build": "rm -rf build && tsc -p ."
},
And this is my docker-compose.yml file:
version: "3"
services:
app:
container_name: app
restart: always
build:
context: .
dockerfile: Dockerfile
env_file: .env
volumes:
- ./src:/app/src
environment:
- COSMOSDB_HOST=${COSMOSDB_HOST}
- COSMOSDB_PORT=${COSMOSDB_PORT}
- COSMOSDB_DBNAME=${COSMOSDB_DBNAME}
- COSMOSDB_USER=${COSMOSDB_USER}
- COSMOSDB_PASSWORD=${COSMOSDB_PASSWORD}
ports:
- "4000:4000"
links:
- mongo
mongo:
container_name: mongo
image: mongo
ports:
- "27017:27017"
volumes:
- ./data:/data/db
I don't seem to understand what's wrong with my files...
Can you help me? Also, would you have any recommendations for optimizing the build, or anything else? Thank you!
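A hedged observation rather than a definitive answer: inside the compose network, the app reaches the database at the service name mongo, not localhost, and the COSMOSDB_* variables suggest the connection string may point at a remote Cosmos DB endpoint, which would explain why killing the mongo container changes nothing. A quick check, sketched with the official mongodb driver and a hypothetical buildUri helper:
import { MongoClient } from "mongodb";

// hypothetical helper: log where the app actually connects
function buildUri(): string {
  const host = process.env.COSMOSDB_HOST ?? "mongo"; // "mongo" is the compose service name
  const port = process.env.COSMOSDB_PORT ?? "27017";
  return `mongodb://${host}:${port}`;
}

async function main(): Promise<void> {
  const uri = buildUri();
  console.log(`Connecting to ${uri}`); // if this is not the mongo service, that container is never used
  const client = new MongoClient(uri);
  await client.connect();
  console.log("Connected");
  await client.close();
}

main().catch(console.error);
If the logged host is anything other than mongo, the local container is not the database the server is talking to.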

Docker running both Yarn update and npm update on docker-compose up

I am running a Docker container that mounts my backend code as a volume:
docker-compose.yml
version: '3'
services:
  auth:
    container_name: ${AUTH_CONTAINER}
    build: ./modules/auth
    working_dir: /usr/app
    command: "npm install && npm start"
    volumes:
      - ../backend/modules/auth:/usr/app
    expose:
      - 9229
    ports:
      - 9229:9229
Dockerfile:
FROM node:alpine
WORKDIR /usr/app
COPY entrypoint.sh .
ENTRYPOINT ["sh", "/entrypoint.sh"]
Entrypoint:
#!/usr/bin/env bash
exec "$#"
When I run docker-compose up, the first step is always to run yarn. I get an error saying that I should not use both npm and yarn, but I am currently using npm for the project. Is there a way to have node:alpine use npm only (I know it's installed) and then call npm install when docker-compose up is run?
EDIT: image of console output (screenshot not included).
EDIT2: Package.json
{
  "name": "auth",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.17.1"
  }
}
For anyone who gets to this point and is wondering why this is happening: make sure you run docker-compose build FIRST, before running docker-compose up. That was the problem I was running into.
Dockerfile
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY entrypoint.sh /
ENTRYPOINT ["sh", "/entrypoint.sh"]
entrypoint:
# hand off to whatever command docker-compose passes in ("$@" expands to those arguments)
exec "$@"
docker-compose:
version: '3'
services:
  auth:
    container_name: "Auth"
    build: "./auth"
    working_dir: /usr/app
    command: "npm start"
    environment:
      - NODE_ENV=local
    volumes:
      - ../backend/modules/auth:/usr/app
    expose:
      - 9299
    ports:
      - 9229:9229
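As a short usage note on that fix: either sequence below rebuilds the image before starting the container, so stale layers (such as one that still invoked yarn) are not reused:
docker-compose build
docker-compose up
# or, equivalently, in one step:
docker-compose up --build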
