When I run my Node.js application in a Docker container, the Parcel build step fails with an error message that isn't helpful. The app runs fine locally without Docker.
I created a simple app to reproduce the problem.
My package.json is
{
  "name": "test",
  "version": "1.0.0",
  "scripts": {
    "start": "parcel index.html"
  },
  "dependencies": {
    "parcel": "^2.0.0-beta.2"
  }
}
and my Dockerfile is
FROM alpine:latest
RUN apk update && \
    apk upgrade && \
    apk add nodejs npm
# Install app dependencies
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 80
EXPOSE 1234
CMD [ "npm", "start" ]
The error message I get is [Error: Invalid argument]
More detailed steps:
The command I run to build the Docker image is docker build .
Then I run the app with docker run test_app
The build step works fine and creates the image.
When I run the container, this is the entire output
docker run test_app
> test@1.0.0 start
> parcel index.html
[Error: Invalid argument]
I have tried the following:
Using node:13, node:14, and node:16; my latest attempt was with alpine as the Docker image
Overriding as many of Parcel's default options as I could, to see if I could work around the problem
Again, the issue only happens inside a Docker container, so I'm not sure whether I'm doing something wrong in my Docker setup. Thanks in advance!
Are you running Parcel from /?
Try adding a WORKDIR and then running Parcel; that works for me.
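For reference, a minimal sketch of the question's Dockerfile with a working directory added (the /app path is my assumption; any non-root directory should do):
FROM alpine:latest
RUN apk update && \
    apk upgrade && \
    apk add nodejs npm
# Run from a dedicated directory instead of /
WORKDIR /app
# Install app dependencies
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 80
EXPOSE 1234
CMD [ "npm", "start" ]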
I'm getting an error while starting the Docker container. I am using nodemon to listen for file changes.
Dockerfile
FROM node:alpine
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
CMD ["npm","run","serve"]
package.json
{
  "dependencies": {
    "express": "*",
    "nodemon": "*"
  },
  "scripts": {
    "serve": "nodemon index.js",
    "start": "node index.js"
  }
}
build command
docker build -f Dockerfile.dev -t test/nodeapp1 .
run command
docker run -p 3000:8080 -v /app/node_modules -v pwd:/app test/nodeapp1
I am new to Docker and am not able to figure out the cause.
Make these changes in your Dockerfile:
FROM node:alpine
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV HOME=/home/node/app
ENV PATH="/home/node/.npm-global/bin:${PATH}"
USER node
RUN npm install -g nodemon
RUN mkdir -p ${HOME}
WORKDIR ${HOME}
ADD package.json ${HOME}
RUN cd ${HOME} && npm install
CMD [ "npm" ,"run", "serve" ]
Build the Docker image (note that image names must be lowercase)
docker build -f Dockerfile -t prac/nodeapp .
Run the Docker container
docker run -p 3000:8080 -v /app/node_modules -v $(pwd):/app prac/nodeapp
Changing the WORKDIR to a new value worked.
FROM node:alpine
WORKDIR '/dir'
COPY package.json .
RUN npm install
COPY . .
CMD [ "npm" ,"run", "serve" ]
Your docker run -v options are wrong. You probably actually meant to write
docker run ... -v $PWD:/app ...
docker run ... -v $(pwd):/app ...
to use the current directory (from the PWD environment variable or from the pwd command, respectively) as a bind mount.
I tend to not recommend this pattern, especially for Node applications where the host dependencies are minimal and you're not interacting much with other containers. It's probably easier to just install Node locally (if you don't already have it) and do live development against that; when you want to use Docker to deploy your application, use the version you've COPYed into the image, and don't separately use a -v option to inject your code over it.
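For illustration, a deploy-style run under that approach uses only what was COPYed into the image, with no bind mount (image name and port mapping taken from the question):
docker build -t test/nodeapp1 .
docker run -p 3000:8080 test/nodeapp1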
I'm trying to use nodemon inside a Docker container:
Dockerfile
FROM node:carbon
RUN npm install -g nodemon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "nodemon" ]
Build/Run command
docker build -t tag/apt .
docker run -p 49160:8080 -v /local/path/to/apt:/usr/src/app -d tag/apt
Attaching a local volume to the container to watch for code changes results in an override, and nodemon complains that it can't find node modules (any of them). How can I solve this?
In your Dockerfile, you run npm install after copying your package*.json files. A node_modules directory is correctly created in /usr/src/app and you're good to go.
When you mount your local directory on /usr/src/app, though, the contents of that directory inside your container are overridden with your local version of the Node project, which apparently lacks the node_modules directory, causing the error you are experiencing.
You need to run npm install in the running container after you have mounted your directory. For example, you could run something like:
docker exec -ti <containername> npm install
Please note that you'll have to temporarily change your CMD instruction to something like:
CMD ["sleep", "3600"]
in order to keep the container running long enough to enter it.
This will cause a node_modules directory to be created in your local directory and your container should run nodemon correctly (after switching back to your current CMD).
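Put together, a sketch of the whole sequence (the container name apt-dev is hypothetical, and the image is assumed to temporarily use the sleep CMD):
docker build -t tag/apt .
docker run -d --name apt-dev -v /local/path/to/apt:/usr/src/app tag/apt
docker exec -ti apt-dev npm install
# then restore the original CMD, rebuild, and run as usual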
TL;DR: npm install in a sub-folder, while moving the node_modules folder to the root.
Try this configuration; it should help you.
FROM node:carbon
RUN npm install -g nodemon
WORKDIR /usr/src/app
COPY package*.json /usr/src/app/
RUN npm install && mv /usr/src/app/node_modules /node_modules
COPY . /usr/src/app
EXPOSE 8080
CMD [ "nodemon" ]
As the other answer said, even though you ran npm install in your WORKDIR, when you mount the volume the contents of the WORKDIR are temporarily replaced by your mounted folder, in which npm install never ran.
Since Node searches for required packages in several locations, a workaround is to move the 'installed' node_modules folder to the root, which is one of those search locations.
This way you can still update code; only when you require a new package does the image need another build.
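To see why the root location works, you can print Node's module search paths from inside the container; run from /usr/src/app it prints something like the following (output abridged, exact list varies by setup):
node -e "console.log(module.paths)"
# [ '/usr/src/app/node_modules', '/usr/src/node_modules', '/usr/node_modules', '/node_modules', ... ]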
I reference the Dockerfile from this docker sample project.
In a JavaScript or Node.js application, when we bind-mount source files into a Docker container, whether with a docker command or docker-compose, we end up overriding the node_modules folder. To overcome this issue you need to use an anonymous volume. With an anonymous volume, we provide only the destination folder path, as opposed to a bind volume where we specify a source:destination folder path.
General syntax
--volume <container file system directory absolute path>:<read write access>
An example docker run command
docker container run \
--rm \
--detach \
--publish 3000:3000 \
--name hello-dock-dev \
--volume $(pwd):/home/node/app \
--volume /home/node/app/node_modules \
hello-dock:dev
For further reference, check this handbook by Farhan Hasin Chowdhury.
Maybe there's no need to mount the whole project. In this case, I would only mount the directory where I put all the source files, e.g. src/.
This way you won't have any problem with the node_modules/ directory.
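For example, adapting the run command from the question to mount only the source directory (the src/ paths are my assumption):
docker run -p 49160:8080 -v /local/path/to/apt/src:/usr/src/app/src -d tag/apt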
Also, if you are using Windows, you may need to add the -L (--legacy-watch) option to the nodemon command, as described in this issue. So it would be nodemon -L.
I created a new Angular 2 app with angular-cli and ran it in Docker.
First, I initialized the app on my local machine:
ng new project && cd project && "put my Dockerfile there" && docker build -t my-ui . && docker run
My Dockerfile
FROM node
RUN npm install -g angular-cli@v1.0.0-beta.24 && npm cache clean && rm -rf ~/.npm
RUN mkdir -p /opt/client-ui/src
WORKDIR /opt/client-ui
COPY package.json /opt/client-ui/
COPY angular-cli.json /opt/client-ui/
COPY tslint.json /opt/client-ui/
ADD src/ /opt/client-ui/src
RUN npm install
RUN ng build --prod --aot
EXPOSE 4200
ENV PATH="$PATH:/usr/local/bin/"
CMD ["npm", "start"]
Everything is OK; the problem is the size of the image: 939 MB! I tried using FROM ubuntu:16.04 and installing Node.js on it (it works), but the image is still ~450 MB. I know that node:alpine exists, but I am not able to install angular-cli in it.
How can I shrink the image size? Is it necessary to run "npm install" and "ng build" in the Dockerfile? I would expect to build the app on localhost and copy it into the image. I tried copying the dist dir and package.json etc. files, but it does not work (the app fails to start). Thanks.
You can certainly use my alpine-ng image if you like.
You can also check out the dockerfile, if you want to try and modify it in some way.
I regret to inform you that even based on alpine, it is still 610MB. An improvement to be sure, but there is no getting around the fact that the angular compiler is grossly huge.
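If you can use Docker 17.05 or newer, a multi-stage build avoids shipping the compiler at all: compile with the full toolchain, then copy only the build output into a small web-server image. A rough sketch based on the question's Dockerfile (the nginx stage and the dist/ output path are my assumptions):
FROM node AS build
RUN npm install -g angular-cli@v1.0.0-beta.24
WORKDIR /opt/client-ui
COPY package.json angular-cli.json tslint.json ./
RUN npm install
ADD src/ ./src/
RUN ng build --prod --aot

FROM nginx:alpine
# ship only the compiled assets; the Angular toolchain stays in the build stage
COPY --from=build /opt/client-ui/dist /usr/share/nginx/html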
For production, you do not need to distribute an image with Node.js, NPM dependencies, etc. You simply need an image that can be used to start a data volume container providing the compiled sources, release source maps and other assets, effectively no more than what you would redistribute with a package via NPM, which you can attach to your webserver.
So, for your CI host, you can pick one of the node:alpine distributions, copy in the sources and install the dependencies, then re-use the image for running containers that test the builds, until you finally run a container that performs a production compilation, which you can name.
docker run --name=compile-${RELEASE} ci-${RELEASE} npm run production
After you have finished compiling the sources within a container, run a container that has the volumes from the compilation container attached and copy the sources to a volume on the container and push that to your Docker upstream:
docker run --name=release-${RELEASE} --volumes-from=compile-${RELEASE} -v /srv/public busybox cp -R /myapp/dist /srv/public
docker commit release-${RELEASE} myapp:${RELEASE}
Try FROM mhart/alpine-node:base-6; maybe it will work.
I have a Node application that I want to host in a Docker container, which should be straightforward, as seen in this article:
https://nodejs.org/en/docs/guides/nodejs-docker-webapp/
In my project, however, the sources cannot be run directly; they must be compiled from ES6 and/or TypeScript. I use gulp to build with babel, browserify and tsify, with different setups for browser and server.
What would be the best workflow for building and automating Docker images in this case? Are there any resources on the web that describe such a workflow? Should the Docker image do the building after npm install, or should I create a shell script to do all this and simply have the Dockerfile pack it all together?
If the Dockerfile should do the build, the image would need to contain all the dev-dependencies, which is not ideal?
Note: I have been able to set up a docker container, and run it - but this required all files to be installed and built beforehand.
The modern recommendation for this sort of thing (as of Docker 17.05) is to use a multi-stage build. This way you can use all your dev/build dependencies in the one Dockerfile but have the end result optimised and free of unnecessary code.
I'm not so familiar with TypeScript, but here's an example implementation using Yarn and Babel. Using this Dockerfile, we can build a development image (with docker build --target development .) for running nodemon, tests, etc. locally; but with a straight docker build . we get a lean, optimised production image, which runs the app with pm2.
# common base image for development and production
FROM node:10.11.0-alpine AS base
WORKDIR /app
# dev image contains everything needed for testing, development and building
FROM base AS development
COPY package.json yarn.lock ./
# first set aside prod dependencies so we can copy in to the prod image
RUN yarn install --pure-lockfile --production
RUN cp -R node_modules /tmp/node_modules
# install all dependencies and add source code
RUN yarn install --pure-lockfile
COPY . .
# builder runs unit tests and linter, then builds production code
FROM development AS builder
RUN yarn lint
RUN yarn test:unit --colors
RUN yarn babel ./src --out-dir ./dist --copy-files
# release includes bare minimum required to run the app, copied from builder
FROM base AS release
COPY --from=builder /tmp/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./
CMD ["yarn", "pm2-runtime", "dist/index.js"]
One possible solution is to wrap your build procedure in a special Docker image, often referred to as a builder image. It should contain all your build dependencies: nodejs, npm, gulp, babel, tsc, and so on. It encapsulates the whole build process, removing the need to install these tools on the host.
First you run the builder image, mounting the source code directory as a volume. The same or a separate volume can be used as the output directory.
The builder image takes your code and runs all build commands.
As a second step, you take the built code and pack it into a production Docker image, as you do now.
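For illustration, a throwaway builder run using the stock node image (the npm script names here are assumptions):
docker run --rm -v "$(pwd)":/src -w /src node:8 sh -c "npm install && npm run build"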
Here is an example of docker builder image for TypeScript: https://hub.docker.com/r/sandrokeil/typescript/
It is OK to have the same Docker builder for several projects, as it is typically designed to be a general-purpose wrapper around some common tools.
But it is also OK to build your own that describes a more complicated procedure.
The good thing about a builder image is that your host environment remains unpolluted, and you are free to try newer compiler versions, different tools, a different order, or parallel tasks, just by modifying the Dockerfile of your builder image. And at any time you can roll back your experiments with the build procedure.
I personally prefer to just remove dev dependencies after running babel during build:
FROM node:7
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Copy app source
COPY src /usr/src/app/src
# Compile app sources
RUN npm run compile
# Remove dev dependencies
RUN npm prune --production
# Expose port and CMD
EXPOSE 8080
CMD [ "npm", "start" ]
Follow these steps:
Step 1: Make sure your Babel dependencies are inside dependencies, not devDependencies, in package.json. Also add a deploy script that references babel from the node_modules folder; you will be calling this script from within Docker.
This is what my package.json file looks like
{
  "name": "tmeasy_api",
  "version": "1.0.0",
  "description": "Trade made easy Application",
  "main": "build/index.js",
  "scripts": {
    "build": "babel -w src/ -d build/ -s inline",
    "deploy": "node_modules/babel-cli/bin/babel.js src/ -d build/"
  },
  "devDependencies": {
    "nodemon": "^1.9.2"
  },
  "dependencies": {
    "babel-cli": "^6.10.1",
    "babel-polyfill": "^6.9.1",
    "babel-preset-es2015": "^6.9.0",
    "babel-preset-stage-0": "^6.5.0",
    "babel-preset-stage-3": "^6.22.0"
  }
}
build is for development purposes on your local machine, and deploy is to be called from within your Dockerfile.
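In other words, a usage sketch of the two scripts above:
npm run build     # local development: babel watches src/ and rebuilds on change
npm run deploy    # one-shot transpile, invoked from inside the Docker build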
Step 2: Since we want to do the Babel transformation ourselves, make sure to add a .dockerignore file excluding the build folder you use during development.
This is what my .dockerignore file looks like.
build
node_modules
Step 3: Construct your Dockerfile. Below is a sample of my Dockerfile.
FROM node:6
MAINTAINER stackoverflow
ENV NODE_ENV=production
ENV PORT=3000
# use changes to package.json to force Docker not to use the cache
# when we change our application's nodejs dependencies:
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /var/www && cp -a /tmp/node_modules /var/www
# copy current working directory into docker; but it first checks for
# .dockerignore so build will not be included.
COPY . /var/www/
WORKDIR /var/www/
# remove any previous builds and create a new build folder and then
# call our node script deploy
RUN rm -rf build
RUN mkdir build
RUN chmod 777 /var/www/build
RUN npm run deploy
VOLUME /var/www/uploads
EXPOSE $PORT
ENTRYPOINT ["node","build/index.js"]
I just released a great seed app for TypeScript and Node.js using Docker.
You can find it on GitHub.
The project explains all of the commands that the Dockerfile uses and it combines tsc with gulp for some added benefits.
If you don't want to check out the repo, here's the details:
Dockerfile
FROM node:8
ENV USER=app
ENV SUBDIR=appDir
RUN useradd --user-group --create-home --shell /bin/false $USER && \
    npm install --global tsc-watch npm ntypescript typescript gulp-cli
ENV HOME=/home/$USER
COPY package.json gulpfile.js $HOME/$SUBDIR/
RUN chown -R $USER:$USER $HOME/*
USER $USER
WORKDIR $HOME/$SUBDIR
RUN npm install
CMD ["node", "dist/index.js"]
docker-compose.yml
version: '3.1'
services:
  app:
    build: .
    command: npm run build
    environment:
      NODE_ENV: development
    ports:
      - '3000:3000'
    volumes:
      - .:/home/app/appDir
      - /home/app/appDir/node_modules
package.json
{
  "name": "docker-node-typescript",
  "version": "1.0.0",
  "description": "",
  "scripts": {
    "build": "gulp copy; gulp watch & tsc-watch -p . --onSuccess \"node dist/index.js\"",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "Stephen Gardner (opensourceaugie@gmail.com)",
  "license": "ISC",
  "dependencies": {
    "express": "^4.10.2",
    "gulp": "^3.9.1",
    "socket.io": "^1.2.0"
  },
  "devDependencies": {
    "@types/express": "^4.11.0",
    "@types/node": "^8.5.8"
  }
}
tsconfig.json
{
  "compileOnSave": false,
  "compilerOptions": {
    "outDir": "./dist/",
    "sourceMap": true,
    "declaration": false,
    "module": "commonjs",
    "moduleResolution": "node",
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "target": "ES6"
  },
  "include": [
    "**/*.ts"
  ],
  "exclude": [
    "node_modules",
    "**/*.spec.ts"
  ]
}
To get more towards the answer of your question: the TS is compiled when the docker-compose.yml file calls npm run build, which in turn calls tsc. tsc then emits the compiled files into the dist folder, and a simple node dist/index.js command runs the app. Instead of using nodemon, we use tsc-watch and gulp.watch to watch for changes in the app and fire node dist/index.js again after every re-compilation.
Hope that helps :) If you have any questions, let me know!
For the moment, I'm using a workflow where:
npm install and tsd install locally
gulp build locally
In the Dockerfile, copy all program files, but not typings/node_modules, to the Docker image
In the Dockerfile, run npm install --production
This way I get only the wanted files in the image, but it would be nicer if the Dockerfile could do the build itself.
Dockerfile:
FROM node:5.1
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Bundle app
COPY package.json index.js /usr/src/app/
COPY views/ /usr/src/app/views/
COPY build/ /usr/src/app/build/
COPY public/ /usr/src/app/public/
# Install app dependencies
RUN npm install --production --silent
EXPOSE 3000
CMD [ "node", "index.js" ]
I guess the "imaging process" could be fully automated by building inside the Dockerfile and then deleting the unwanted files before installing production dependencies again.
In my project, however, the sources cannot be run directly; they must be compiled from ES6 and/or TypeScript. I use gulp to build with babel, browserify and tsify, with different setups for browser and server. What would be the best workflow for building and automating Docker images in this case?
If I understand you right, you want to deploy your web app inside a Docker container and provide different flavours for different target environments (you mentioned different browser and server setups). (1)
If the Dockerfile should do the build, the image would need to contain all the dev-dependencies, which is not ideal?
It depends. If you want to provide a ready-to-go image, it has to contain everything your web app needs to run. One advantage is that you then only need to start the container, pass some parameters, and you are ready to go.
During the development phase, that image is not really necessary, because your dev environment is usually pre-defined. It costs time and resources to generate such an image after each change.
Suggested approach: I would suggest a two-way setup:
During development: Use a fixed environment to develop your app. All software can run locally or inside a Docker container/VM. I suggest using a Docker container with your dev setup, especially if you work in a team and everybody needs the same development base.
Deploy web app: As I understood you (1), you want to deploy the app to different environments and therefore need to create/provide different configurations. To realize something like that, you could start with a shell script that packages your app into different Docker containers; see the sketch below. You run the script before you deploy. If you have Jenkins running, it can call your shell script after each commit, once all tests have passed.
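As a sketch of such a script (the target names and per-target Dockerfiles are hypothetical):
#!/bin/sh
set -e
# build one image per target environment
for target in browser server; do
    docker build -f Dockerfile.$target -t myapp-$target:latest .
done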
Docker container for both development and deploy phases: I would like to refer to a project of mine and a colleague: https://github.com/k00ni/Docker-Nodejs-environment
This Docker image provides a whole development and deploy environment by maintaining:
Node.js
NPM
Gulp
Babel (auto transpiling from ECMA6 to JavaScript on a file change)
Webpack
and other JavaScript helpers inside the Docker container. You just link your project folder via a volume inside the Docker container. It initializes your environment (e.g. installs all dependencies from package.json) and you are good to go.
You can use it for development purposes so that you and your team use the same environment (Node.js version, NPM version, ...). Another advantage is that file changes lead to a re-compilation of ECMA6/ReactJS/... files to JavaScript files (no need to do this by hand after each change). We use Babel for that.
For deployment purposes, just extend this Docker image and change the required parts. Instead of linking your app inside the container, you can pull it via Git (or something similar). You will use the same base for all your work.
I found this article that should guide you through both the development and production phases: https://www.sentinelstand.com/article/docker-with-node-in-development-and-production
In this article we'll create a production Docker image for a Node/Express app. We'll also add Docker to the development process using Docker Compose so we can easily spin up our services, including the Node app itself, on our local machine in an isolated and reproducible manner.
The app will be written using newer JavaScript syntax to demonstrate how Babel can be included in the build process. Your current Node version may not support certain modern JavaScript features, like ECMAScript modules (import and export), so Babel will be used to convert the code into a backwards-compatible version.