How to import modules into a file in docker-compose - node.js

Currently, I am getting this error when I run my server in docker:
import {dataSchema} from "./data-model.js"
> ^^^^^^^^^^
SyntaxError: The requested module './data-model.js' does not provide an export named 'dataSchema'
Despite having exported it like so:
module.exports = {
dataSchema,
}
And importing like so:
import {dataSchema} from "./data-model.js"
My dockerfile looks like this:
FROM node:12
WORKDIR /usr/src/server
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
The file that imports dataSchema is in the same directory as the file that exports it. I cannot use the CJS syntax for this.
Currently I'm just trying to console.log() the dataSchema; I'm assuming this isn't working because of problems with my Dockerfile. My next suspicion is that I'm supposed to copy the data-model.js file explicitly, but I don't see why the COPY . . in my Dockerfile wouldn't already do that.
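For reference, the error means the import statement can't find an ESM named export called dataSchema; module.exports only creates CommonJS exports, which the ESM import { ... } form doesn't pick up here. An ESM-style export would look something like this (a minimal sketch, assuming dataSchema is already defined in data-model.js):
// data-model.js — ESM sketch (assumes dataSchema is defined above in this file)
export { dataSchema };
// or, equivalently, declare and export in one step:
// export const dataSchema = /* ...schema definition... */ {};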

Related

What is the command needed for an API backend Dockerfile?

I am new to creating Dockerfiles and cannot figure out what command to use to start up the API backend application. I know that backend applications don't use Angular and that the command to start it is not "CMD ng serve --host 0.0.0.0".
I am attaching the code of the backend Dockerfile and also providing the errors that I am getting when trying to run the container in Docker Desktop below.
I have looked at Docker documentation and Node commands but cannot figure out what command to use to make the API backend run. What am I doing wrong?
Code:
# using Node v10
FROM node:10
# Create app directory
WORKDIR /usr/src/lafs
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
# Expose port 3000 outside container
EXPOSE 3000
# Command used to start application
CMD ng serve --host 0.0.0.0
Errors that I am receiving in Docker Desktop:
/bin/sh: 1: ng: not found
From your original screenshot, it looks like you've got a server directory. Assuming that's where your Express app lives, try something like this:
# node:12 and older are EOL, node:14 is in maintenance
FROM node:16
WORKDIR /usr/src/lafs
# assuming this is your server port
EXPOSE 3000
# copy package.json and package-lock.json
COPY server/package*.json ./
# install production dependencies
RUN npm ci --only=production
# copy source code
COPY server .
# start the Express server
CMD ["npm", "start"]
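Note that CMD ["npm", "start"] assumes the server's package.json defines a start script; a minimal sketch of what that might look like (the entry file name is an assumption, not taken from the question):
{
  "scripts": {
    "start": "node index.js"
  }
}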

React app is not loading from docker image in local

My Dockerfile:
# FROM node:16.14.2
FROM node:alpine
ENV NODE_ENV=production
WORKDIR /app
COPY ["package.json", "package-lock.json", "./"]
RUN npm install
COPY . .
CMD [ "npm", "start"]
Command to run image: docker run -it -d -p 4001:4001 react-app:test2
(Screenshots of the project structure and the output after docker run were attached to the original question.)
Based on this context, one possible mistake is that the rest of the source code isn't being copied correctly.
Try to be more consistent in the Dockerfile, and also have a look at multi-stage Docker builds (within the same file) to optimise the image; a sketch is shown after the Dockerfile below.
Anyway, your file should be something like:
FROM node:16-alpine
ENV NODE_ENV=production
WORKDIR /app
COPY ["package.json", "package-lock.json", "./"]
RUN npm install
COPY . ./
CMD [ "npm", "start"]
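As an illustration of the multi-stage build mentioned above, a sketch along these lines builds the React app in one stage and serves only the static output in another (the build output directory and the nginx base image are assumptions, not from the original answer):
# build stage: install dependencies and produce the static bundle
FROM node:16-alpine AS build
WORKDIR /app
COPY ["package.json", "package-lock.json", "./"]
RUN npm install
COPY . ./
RUN npm run build
# serve stage: ship only the build output, keeping the final image small
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80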
Based on the code in the repo, I managed to spot the following problem. It's neither the Dockerfile nor the code itself, though it does throw some warnings.
By default the application runs on port 3000 unless that is changed manually at some point (and this project only uses the default settings). So the application starts correctly on port 3000, but you publish 4001:4001, and according to this Dockerfile nothing is listening on that port.
Try using port 3000 instead and it should work just fine:
docker run -it -d -p 3000:3000 <image-name>:<image-tag>
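Alternatively, if you want to keep using port 4001 on the host, map it onto the container's port 3000; with -p the left-hand number is the host port and the right-hand number is the container port:
docker run -it -d -p 4001:3000 <image-name>:<image-tag>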

How do I make Docker run one script only once and the other every time?

I want to run init.js (which will populate the Mongo database) only once, during the initial build, and run app.js every time the docker compose up command is given.
My directory looks like this:
backend/init.js
app.js
My Dockerfile:
FROM node:alpine
WORKDIR /app
COPY package*.json .
RUN npm install
COPY . .
CMD ["npm","start"]
Currently npm start only runs node app.js. If someone else pulls my repo and runs docker compose up, is there any way to run init.js only once and app.js every single time?
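One common pattern (a sketch, not from the original thread; the connection URL variable, the collection name, and the assumption that init.js exports a seed() function are all hypothetical) is to have app.js check whether the database has already been seeded before starting the server:
// app.js (sketch): seed the database only if it hasn't been seeded yet
const { MongoClient } = require('mongodb');

async function seedIfNeeded() {
  const client = await MongoClient.connect(process.env.MONGO_URL); // hypothetical env var
  const db = client.db();
  // "users" is a hypothetical collection that init.js would populate
  const count = await db.collection('users').countDocuments();
  if (count === 0) {
    await require('./backend/init').seed(db); // assumes init.js exports a seed() function
  }
  await client.close();
}

seedIfNeeded().then(() => {
  // start the HTTP server here, as app.js already does
});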

Docker: ENTRYPOINT can't execute command because it doesn't find the file

I'm trying to create a container from the Node.js image and I have configured my Dockerfile as shown:
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app
RUN npm install
# Bundle app source
COPY . /usr/src/app
VOLUME ./:/usr/src/app
ENTRYPOINT [ "npm run watch" ]
In the package.json I have a script called watch that runs the gulp task named watch-less.
If I run npm run watch in my local environment the command works, but when I try running the container it doesn't and shows the following error:
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"npm run watch\": executable file not found in $PATH".
ENTRYPOINT [ "npm run watch" ]
This is incorrect JSON (exec-form) syntax: it looks for an executable literally named npm run watch, not the executable npm with the arguments run and watch. With the JSON syntax you need to separate each argument. Alternatively, you can use the shell syntax:
ENTRYPOINT npm run watch
Or you can fix the JSON syntax like this (assuming npm is installed in /usr/bin):
ENTRYPOINT [ "/usr/bin/npm", "run", "watch" ]
You also have an incorrect volume definition:
VOLUME ./:/usr/src/app
Dockerfiles cannot specify how a volume is mounted from the host, only that an anonymous volume is defined at a specific directory location, with a syntax like:
VOLUME /usr/src/app
I've got strong opinions against using a volume definition inside the Dockerfile, described in this blog post. In short, you can define the volume better in a docker-compose.yml; all a Dockerfile can do is create anonymous volumes that you'd still need to redefine elsewhere if you want to reuse them easily later.
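For example, the bind mount intended above would be declared in a docker-compose.yml roughly like this (a sketch; the service name is an assumption):
services:
  app:
    build: .
    volumes:
      # mount the project directory over /usr/src/app in the container
      - ./:/usr/src/app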
If you use the list notation for ENTRYPOINT, that is, with the [brackets], you must separate the arguments properly.
ENTRYPOINT ["npm", "run", "watch"]
Right now it is trying to find a file literally named "npm run watch" and that does not exist.

gcloud App Engine Flexible Strangeness with Docker and Babel

I've been deploying a server-side Node application to a custom App Engine runtime for a few months without any problems. The only half-interesting thing about it is that I run Babel against the source when I build the container.
In the last few weeks this has been failing intermittently with an error to this effect in the remote build log:
import * as deps from './AppFactory';
SyntaxError: Unexpected token import
This led me to believe that the Babel transpilation wasn't happening, though the gcloud CLI indicates it is:
> node_modules/babel-cli/bin/babel.js src/ -d dist/
src/AppFactory.js -> dist/AppFactory.js
src/Ddl.js -> dist/Ddl.js
src/Helpers.js -> dist/Helpers.js
src/MemoryResolver.js -> dist/MemoryResolver.js
src/Mysql.js -> dist/Mysql.js
src/Schema.js -> dist/Schema.js
src/index.js -> dist/index.js
---> 0282c805d5c9
In desperation, I cat out the dist/index.js file in the Dockerfile. When I do, I see that indeed no transpilation has occurred.
When I create a docker image locally, everything works perfectly.
My Dockerfile follows:
# Set the base image to Ubuntu
FROM gcr.io/google_appengine/nodejs:latest
ENV NODE_ENV production
# File Author / Maintainer
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /src && cp -a /tmp/node_modules /src/
# Define working directory
WORKDIR /src
ADD . /src
RUN npm run deploy
RUN cat /src/dist/index.js
CMD ["npm", "start"]
Below is my .babelrc file:
{
  "presets": ["es2015"]
}
And my vanilla yaml file:
service: metrics-api-test
runtime: custom
env: flex
env_variables:
  NODE_ENV: 'production'
  NODEPORT: '8080'
beta_settings:
  cloud_sql_instances: pwc-sales-demos:us-east1:pawc-sales-demos-sql
I've been trying all sorts of variations with babel-register, babel-node. They all work perfectly when I build a local docker image. They all fail when I deploy to the app engine.
I posted this a few months ago and the issue is starting to plague me again. It started off as an intermittent problem and now it happens every time. It happens across services and even on different gcloud projects.
Any insight into this gets my appreciation and 150 points.
So finally getting back to this; it was completely my fault.
I had thought that I had moved all the Babel dependencies into the runtime dependencies section of package.json (with NODE_ENV set to production, npm install skips devDependencies, so Babel has to live under dependencies), like so:
"dependencies": {
"babel-cli": "^6.24.1",
"babel-preset-es2015": "^6.24.1"....
But I must not have. Everything works perfectly with the above and this Dockerfile:
FROM gcr.io/google_appengine/nodejs:latest
ENV NODE_ENV production
# File Author / Maintainer
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /src && cp -a /tmp/node_modules /src/
# Define working directory
WORKDIR /src
ADD . /src
RUN node_modules/babel-cli/bin/babel.js src/ -d dist/
RUN cat dist/index.js
CMD ["npm", "start"]
No more manually building the file!
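For completeness, the relevant part of package.json would look roughly like this (a sketch; the deploy script is taken from the build output shown above, while the start script and the exact layout are assumptions, not quoted from the original post):
{
  "scripts": {
    "deploy": "node_modules/babel-cli/bin/babel.js src/ -d dist/",
    "start": "node dist/index.js"
  },
  "dependencies": {
    "babel-cli": "^6.24.1",
    "babel-preset-es2015": "^6.24.1"
  }
}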
