Docker error: Parcel (Parceljs) won't work in Docker

When I run my nodejs application in a docker container, the parcel build step fails with an error message that isn't helpful. The app runs fine locally without Docker.
I created a simple app to test my problem out.
My package.json is
{
  "name": "test",
  "version": "1.0.0",
  "scripts": {
    "start": "parcel index.html"
  },
  "dependencies": {
    "parcel": "^2.0.0-beta.2"
  }
}
and my Dockerfile is
FROM alpine:latest
RUN apk update &&\
    apk upgrade &&\
    apk add nodejs npm
# Install app dependencies
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 80
EXPOSE 1234
CMD [ "npm", "start" ]
The error message I get is [Error: Invalid argument]
More detailed steps:
The command I run to build the Docker image is docker build .
Then I run the app with docker run test_app
The build step works fine and creates the image.
When I run the container, this is the entire output
docker run test_app
> test@1.0.0 start
> parcel index.html
[Error: Invalid argument]
I have tried the following:
Using node:13, node:14, node:16, and most recently alpine as the base image
I tried overriding as many of Parcel's default options as I could to see whether I could work around the problem.
Again, the issue only happens inside a Docker container, so I'm not sure whether I'm doing something wrong in my Docker setup. Thanks in advance!

Are you running Parcel from /?
Try adding a WORKDIR and then running Parcel; that works for me.
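For example, a minimal variation of the Dockerfile from the question with a WORKDIR added (the directory name /app is an arbitrary choice):
FROM alpine:latest
RUN apk update &&\
    apk upgrade &&\
    apk add nodejs npm
# Run everything from a dedicated directory instead of /
WORKDIR /app
# Install app dependencies
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 80
EXPOSE 1234
CMD [ "npm", "start" ]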

Related

How do I resolve this docker container error in my node and reactjs application

I don't know how else to resolve this issue. I keep seeing this error after building my Docker container. When I try to access the application from my web browser, I get this error:
Error: ENOENT: no such file or directory, stat '/app/server/public/index.html'
It is obvious something is not getting copied. I am still learning Docker and was following a tutorial. I have cross-checked the tutorial's code against mine; they are exactly the same. His code worked, but mine refuses to.
Here is my Dockerfile:
FROM node:lts-alpine
WORKDIR /app
COPY package*.json ./
COPY client/package*.json client/
RUN npm run install-client --only=production
COPY server/package*.json server/
RUN npm run install-server --only=production
COPY client/ client/
RUN npm run build --prefix client
COPY server/ server/
USER node
CMD [ "npm", "start", "--prefix", "server" ]
EXPOSE 5000
My reactjs package.json build script:
"build": "set BUILD_PATH=../server/public && react-scripts build",
I ran both the docker build and docker run commands in my NASA project directory:
Desktop\Node js master class\NASA>
It seems nothing is getting copied into /app after docker build. I don't know exactly what I need to do to resolve this. Here is the error again, from when I tried to access the application via my web browser:
Error: ENOENT: no such file or directory, stat '/app/server/public/index.html'
Since I don't see any answer, just in case people are having the same issue: it's not working because your React build script is written for Windows. Since you use lts-alpine (Linux), you should use the macOS/Linux form of the command:
"build": "BUILD_PATH=../server/public react-scripts build",

Docker - build fails with operating system is not supported and The command '/bin/sh -c npm install' returned a non-zero code

I am trying to build an image for a client app (a Next.js app), but the build keeps failing.
This is the Dockerfile:
FROM node:12.18.3
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install
COPY . /app
RUN npm build
# start app
CMD [ "npm", "start" ]
It fails on the first step with this error:
Step 1/9 : FROM node:12.18.3
operating system is not supported
I followed this post https://stackoverflow.com/a/51071057/9608006 , changed the experimental settings to true, and it did pass the failing step.
But now it fails on the npm install step:
npm notice
The command '/bin/sh -c npm install' returned a non-zero code: 4294967295: failed to shutdown container: container c425947f7f17ed39ed51ac0a67231f78ba7239ad199c7df979b3b442969a0a57 encountered an error during hcsshim::System::waitBackground: failure in a Windows system call: The virtual machine or container with the specified identifier is not running. (0xc0370110): subsequent terminate failed container c425947f7f17ed39ed51ac0a67231f78ba7239ad199c7df979b3b442969a0a57 encountered an error during hcsshim::System::waitBackground: failure in a Windows system call: The virtual machine or container with the specified identifier is not running. (0xc0370110)
I also get this warning at the start of this step:
Step 6/9 : RUN npm install
---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (windows/amd64) and no specific platform was requested
I use Windows 10, Docker v20.10.5.
What is the issue?
EDIT 1 - Folder structure
The following is the base folder layout of the client app:
.next
.vercel
components
enums
hooks
node_modules
pages
public
store
styles
utils
.dockerIgnore
.env.local
next.config.js
package.json
server.js
You are trying to build a Linux-based image under Windows.
It seems there is a problem with the multi-arch Node.js images tagged with version 12.
Try the answer under the post that you already followed:
Click on the Docker icon in the tray and switch to Linux containers.
https://stackoverflow.com/a/57548944/3040844
If you are using Docker Desktop, just change the default from Windows containers to Linux containers and run your Dockerfile again.
I think the problem is related to your base image. I used this Dockerfile for a Next.js app on my side and it works correctly:
# Dockerfile
# base image
FROM node:alpine
# create & set working directory
RUN mkdir -p /app
WORKDIR /app
# copy source files
COPY . /app
# install dependencies
RUN npm install
# start app
RUN npm run build
EXPOSE 3000
CMD npm run start
I hope that helps you resolve your issue.
According to your Dockerfile:
FROM node:12.18.3
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install
COPY . /app
RUN npm build
# start app
CMD [ "npm", "start" ]
You picked the wrong base image with FROM node:12.18.3.
The correct way to do this is FROM node:alpine3.12 or FROM ubuntu:18.04.
FROM: the FROM directive is probably the most crucial of all the Dockerfile instructions. It defines the base image to use to start the build process. It can be any image, including ones you have created previously. If a FROM image is not found on the host, Docker will try to find it (and download it) from Docker Hub or another container registry. It needs to be the first command declared inside a Dockerfile.
Simplest Dockerfile with Node Image
FROM node:alpine3.12
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD npm run start
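Building and running that image would then look roughly like this (the image tag is arbitrary; the app is assumed to listen on port 3000, matching the EXPOSE above):
docker build -t nextjs-app .
docker run -p 3000:3000 nextjs-app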

Docker image unable to run postinstall script with error my_pck@2.0.0~postinstall: cannot run in wd my_pck@2.0.0 node symLink.js (wd=/build)

I am building a Docker image. My Dockerfile is pretty simple, as shown below:
FROM node:10.15.3
RUN mkdir /build
WORKDIR /build
COPY package.json .
COPY . .
RUN ["npm", "install"]
EXPOSE 3000
CMD [ "npm", "run", "start" ]
The only extra thing I have is a postinstall script in package.json which does some extra work. The problem is that when I run the build, npm install fails with the error below:
postinstall script error: my_pck@2.0.0~postinstall: cannot run in wd my_pck@2.0.0 node symLink.js (wd=/build)
I have looked into various posts and questions reporting the same issue, where the suggestions are:
Using --unsafe-perm with npm install does not help.
I also tried adding it in package.json, but that does not help.
I also tried adding it in the .npmrc file, but unfortunately that is not helping either.
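For reference, the usual form of those suggestions is the --unsafe-perm flag on npm install and the corresponding .npmrc entry, along these lines (neither helped here):
npm install --unsafe-perm
# in .npmrc
unsafe-perm=true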
I am running the following command to build the Docker image: docker build -t my_img:1.0 .
I am using a Windows system.
Any suggestions on how I can solve this?

Installing npm dependencies inside docker and testing from volume

I want to use Docker to create development environments for a simple node.js project. I'd like to install my project's dependencies (they are all npm packages) inside the docker container (so they won't touch my host) and still mount my code using a volume. So, the container should be able
to find the node_modules folder at the path where I mount the volume, but I should not see it from the host.
This is my Dockerfile:
FROM node:6
RUN mkdir /code
COPY package.json /code/package.json
WORKDIR /code
RUN npm install
This is how I run it:
docker build --tag my-dev-env .
docker run --rm --interactive --tty --volume $(pwd):/code my-dev-env npm test
And this is my package.json:
{
  "private": true,
  "name": "my-project",
  "version": "0.0.0",
  "description": "My project",
  "scripts": {
    "test": "jasmine"
  },
  "devDependencies": {
    "jasmine": "2.4"
  },
  "license": "MIT"
}
It fails because it can't find jasmine, so it's not really installing it:
> jasmine
sh: 1: jasmine: not found
Can what I want be accomplished with Docker? An alternative would be to install the packages globally. I also tried npm install -g to no avail.
I'm on Debian with Docker version 1.12.1, build 23cf638.
The solution is to also declare /code/node_modules as a volume, only without bind-mounting it to any directory in the host. Like this:
docker run --rm --interactive --tty --volume /code/node_modules --volume $(pwd):/code my-dev-env npm test
As indicated by @JesusRT, npm install was working just fine but bind-mounting $(pwd) to /code alone was shadowing the existing contents of /code in the image. We can recover whatever we want from /code in the container by declaring it as a data volume -- in this case, just /code/node_modules, as shown above.
A very similar problem is already discussed in Docker-compose: node_modules not present in a volume after npm install succeeds.
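For reference, the docker-compose equivalent of that pattern is an anonymous volume layered over the bind mount; the service name here is just a placeholder:
version: '2'
services:
  dev:
    build: .
    command: npm test
    volumes:
      - .:/code
      - /code/node_modules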
The issue here is that you are overwriting the /code folder.
Note that you are executing the npm install command at building time, so the image that is created has in the /code folder the node_modules folder. The problem is that you are mounting a volume in the /code folder when executing the docker run command, so this folder will be overwritten with the content of your local machine.
One approach could be executing npm install before the npm test command, wrapping both in a single shell command so that they run inside the container:
docker run --rm --interactive --tty --volume $(pwd):/code my-dev-env sh -c "npm install && npm test"
Also, in order for the jasmine command to execute properly, you will have to modify your package.json as shown below:
"scripts": {
  "test": "./node_modules/.bin/jasmine"
}

Node and docker - how to handle babel or typescript build?

I have a node application that I want to host in a Docker container, which should be straightforward, as seen in this article:
https://nodejs.org/en/docs/guides/nodejs-docker-webapp/
In my project, however, the sources can not be run directly, they must be compiled from ES6 and/or Typescript. I use gulp to build with babel, browserify and tsify - with different setups for browser and server.
What would be the best workflow for building and automating docker images in this case? Are there any resources on the web that describes such a workflow? Should the Dockerimage do the building after npm install or should I create a shell script to do all this and simply have the Dockerfile pack it all together?
If the Dockerfile should do the build, the image would need to contain all the dev dependencies, which is not ideal?
Note: I have been able to set up a Docker container and run it, but this required all files to be installed and built beforehand.
The modern recommendation for this sort of thing (as of Docker 17.05) is to use a multi-stage build. This way you can use all your dev/build dependencies in the one Dockerfile but have the end result optimised and free of unnecessary code.
I'm not so familiar with typescript, but here's an example implementation using yarn and babel. Using this Dockerfile, we can build a development image (with docker build --target development .) for running nodemon, tests etc locally; but with a straight docker build . we get a lean, optimised production image, which runs the app with pm2.
# common base image for development and production
FROM node:10.11.0-alpine AS base
WORKDIR /app
# dev image contains everything needed for testing, development and building
FROM base AS development
COPY package.json yarn.lock ./
# first set aside prod dependencies so we can copy in to the prod image
RUN yarn install --pure-lockfile --production
RUN cp -R node_modules /tmp/node_modules
# install all dependencies and add source code
RUN yarn install --pure-lockfile
COPY . .
# builder runs unit tests and linter, then builds production code
FROM development as builder
RUN yarn lint
RUN yarn test:unit --colors
RUN yarn babel ./src --out-dir ./dist --copy-files
# release includes bare minimum required to run the app, copied from builder
FROM base AS release
COPY --from=builder /tmp/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./
CMD ["yarn", "pm2-runtime", "dist/index.js"]
One possible solution is to wrap your build procedure in a special Docker image, often referred to as a builder image. It should contain all your build dependencies: Node.js, npm, gulp, babel, tsc, and so on. It encapsulates the whole build process, removing the need to install these tools on the host.
First, you run the builder image, mounting the source code directory as a volume. The same or a separate volume can be used as the output directory.
The builder image takes your code and runs all the build commands.
Then you take your built code and pack it into a production Docker image, as you do now.
Here is an example of docker builder image for TypeScript: https://hub.docker.com/r/sandrokeil/typescript/
It is fine to use the same Docker builder for several projects, as it is typically designed to be a general-purpose wrapper around some common tools.
But it is also fine to build your own that describes a more complicated procedure.
The good thing about a builder image is that your host environment remains unpolluted and you are free to try newer compiler versions, different tools, a different order, or tasks in parallel just by modifying the Dockerfile of your builder image. And at any time you can roll back your experiments with the build procedure.
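A minimal sketch of that workflow, assuming a hypothetical builder image named my-ts-builder that contains the project's build toolchain:
# run the build inside the builder image, with the source mounted as a volume
docker run --rm -v "$(pwd)":/src -w /src my-ts-builder npm run build
# then pack the build output into the production image as before
docker build -t my-app .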
I personally prefer to just remove dev dependencies after running babel during build:
FROM node:7
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Copy app source
COPY src /usr/src/app/src
# Compile app sources
RUN npm run compile
# Remove dev dependencies
RUN npm prune --production
# Expose port and CMD
EXPOSE 8080
CMD [ "npm", "start" ]
Follow these steps:
Step 1: Make sure your Babel dependencies are in dependencies, not devDependencies, in package.json. Also add a deploy script that references Babel from the node_modules folder; you will be calling this script from within Docker.
This is what my package.json file looks like
{
  "name": "tmeasy_api",
  "version": "1.0.0",
  "description": "Trade made easy Application",
  "main": "build/index.js",
  "scripts": {
    "build": "babel -w src/ -d build/ -s inline",
    "deploy": "node_modules/babel-cli/bin/babel.js src/ -d build/"
  },
  "devDependencies": {
    "nodemon": "^1.9.2"
  },
  "dependencies": {
    "babel-cli": "^6.10.1",
    "babel-polyfill": "^6.9.1",
    "babel-preset-es2015": "^6.9.0",
    "babel-preset-stage-0": "^6.5.0",
    "babel-preset-stage-3": "^6.22.0"
  }
}
build is for your development purposes on your local machine, and deploy is to be called from within your Dockerfile.
Step 2: Since we want to do the Babel transformation ourselves, make sure to add a .dockerignore file listing the build folder that you use during development.
This is what my .dockerignore file looks like.
build
node_modules
Step 3: Construct your Dockerfile. Below is a sample of my Dockerfile:
FROM node:6
MAINTAINER stackoverflow
ENV NODE_ENV=production
ENV PORT=3000
# use changes to package.json to force Docker not to use the cache
# when we change our application's nodejs dependencies:
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /var/www && cp -a /tmp/node_modules /var/www
# copy current working directory into docker; but it first checks for
# .dockerignore so build will not be included.
COPY . /var/www/
WORKDIR /var/www/
# remove any previous builds and create a new build folder and then
# call our node script deploy
RUN rm -rf build
RUN mkdir build
RUN chmod 777 /var/www/build
RUN npm run deploy
VOLUME /var/www/uploads
EXPOSE $PORT
ENTRYPOINT ["node","build/index.js"]
I just released a great seed app for Typescript and Node.js using Docker.
You can find it on GitHub.
The project explains all of the commands that the Dockerfile uses and it combines tsc with gulp for some added benefits.
If you don't want to check out the repo, here's the details:
Dockerfile
FROM node:8
ENV USER=app
ENV SUBDIR=appDir
RUN useradd --user-group --create-home --shell /bin/false $USER &&\
    npm install --global tsc-watch npm ntypescript typescript gulp-cli
ENV HOME=/home/$USER
COPY package.json gulpfile.js $HOME/$SUBDIR/
RUN chown -R $USER:$USER $HOME/*
USER $USER
WORKDIR $HOME/$SUBDIR
RUN npm install
CMD ["node", "dist/index.js"]
docker-compose.yml
version: '3.1'
services:
  app:
    build: .
    command: npm run build
    environment:
      NODE_ENV: development
    ports:
      - '3000:3000'
    volumes:
      - .:/home/app/appDir
      - /home/app/appDir/node_modules
package.json
{
  "name": "docker-node-typescript",
  "version": "1.0.0",
  "description": "",
  "scripts": {
    "build": "gulp copy; gulp watch & tsc-watch -p . --onSuccess \"node dist/index.js\"",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "Stephen Gardner (opensourceaugie@gmail.com)",
  "license": "ISC",
  "dependencies": {
    "express": "^4.10.2",
    "gulp": "^3.9.1",
    "socket.io": "^1.2.0"
  },
  "devDependencies": {
    "@types/express": "^4.11.0",
    "@types/node": "^8.5.8"
  }
}
tsconfig.json
{
  "compileOnSave": false,
  "compilerOptions": {
    "outDir": "./dist/",
    "sourceMap": true,
    "declaration": false,
    "module": "commonjs",
    "moduleResolution": "node",
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "target": "ES6"
  },
  "include": [
    "**/*.ts"
  ],
  "exclude": [
    "node_modules",
    "**/*.spec.ts"
  ]
}
To get closer to the answer to your question: the TypeScript is compiled when docker-compose.yml runs npm run build, which in turn calls tsc. tsc emits our files to the dist folder, and a simple node dist/index.js command runs the app. Instead of using nodemon, we use tsc-watch and gulp.watch to watch for changes in the app and fire node dist/index.js again after every recompilation.
Hope that helps :) If you have any questions, let me know!
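For context, the copy and watch gulp tasks referenced by the build script could look roughly like this with the gulp 3 API used here (the src/dist globs are assumptions; the repository contains the real gulpfile.js):
var gulp = require('gulp');
// copy non-TypeScript assets into dist
gulp.task('copy', function () {
  return gulp.src(['src/**/*', '!src/**/*.ts'])
    .pipe(gulp.dest('dist'));
});
// re-run the copy task whenever those assets change
gulp.task('watch', function () {
  gulp.watch(['src/**/*', '!src/**/*.ts'], ['copy']);
});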
For the moment, I'm using a workflow where:
npm install and tsd install locally
gulp build locally
In the Dockerfile, copy all program files, but not typings/node_modules, to the image
In the Dockerfile, run npm install --production
This way I get only the wanted files in the image, but it would be nicer if the Dockerfile could do the build itself.
Dockerfile:
FROM node:5.1
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Bundle app
COPY package.json index.js /usr/src/app/
COPY views/ /usr/src/app/views/
COPY build/ /usr/src/app/build/
COPY public/ /usr/src/app/public/
# Install app dependencies
RUN npm install --production --silent
EXPOSE 3000
CMD [ "node", "index.js" ]
I guess the "imaging process" could be fully automated by doing the build inside the Docker image and then deleting the unwanted files before installing again.
In my project, however, the sources can not be run directly, they must be compiled from ES6 and/or Typescript. I use gulp to build with babel, browserify and tsify - with different setups for browser and server. What would be the best workflow for building and automating docker images in this case?
If I understand you right, you want to deploy your web app inside a Docker container and provide different flavours for different target environments (you mentioned different setups for browser and server). (1)
If the Dockerfile should do the build - the image would need to contain all the dev-dependencies, which are not ideal?
It depends. If you want to provide a ready-to-go image, it has to contain everything your web app needs to run. One advantage is that you later only need to start the container, pass some parameters, and you are ready to go.
During the development phase, that image is not really necessary, because you usually have a pre-defined dev environment. It costs time and resources if you generate such an image after each change.
Suggested approach: I would suggest a two-part setup:
During development: Use a fixed environment to develop your app. All software can run locally or inside a Docker container/VM. I suggest using a Docker container with your dev setup, especially if you work in a team and everybody needs to have the same development base.
Deploying the web app: As I understood it (1), you want to deploy the app for different environments and therefore need to create/provide different configurations. To realize something like that, you could start with a shell script which packages your app into different Docker containers. You run the script before you deploy. If you have Jenkins running, it can call your shell script after each commit, once all tests have passed.
Docker container for both the development and deploy phases: I would like to refer to a project of mine and a colleague's: https://github.com/k00ni/Docker-Nodejs-environment
This Docker image provides a whole development and deployment environment by maintaining:
Node.js
NPM
Gulp
Babel (automatic transpiling from ECMAScript 6 to plain JavaScript on file changes)
Webpack
and other JavaScript helpers inside the Docker container. You just link your project folder via a volume into the Docker container. It initializes your environment (e.g. installs all dependencies from package.json) and you are good to go.
You can use it for development purposes so that you and your team are using the same environment (Node.js version, npm version, ...). Another advantage is that file changes lead to a recompilation of ECMAScript 6/ReactJS/... files to plain JavaScript files (no need to do this by hand after each change). We use Babel for that.
For deployment purposes, just extend this Docker image and change the required parts. Instead of linking your app inside the container, you can pull it via Git (or something like that). You will use the same base for all your work.
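Extending such an environment image for deployment could look roughly like this (the base image name and repository URL are placeholders, and it assumes git is available in the base image):
# hypothetical base: the development/deployment environment image described above
FROM my-nodejs-environment
# pull the app via Git instead of bind-mounting it
RUN git clone https://github.com/example/my-app.git /app
WORKDIR /app
RUN npm install && npm run build
CMD ["node", "dist/index.js"]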
I found this article that should guide you in both development and production phases: https://www.sentinelstand.com/article/docker-with-node-in-development-and-production
In this article we'll create a production Docker image for a Node/Express app. We'll also add Docker to the development process using Docker Compose so we can easily spin up our services, including the Node app itself, on our local machine in an isolated and reproducible manner.
The app will be written using newer JavaScript syntax to demonstrate how Babel can be included in the build process. Your current Node version may not support certain modern JavaScript features, like ECMAScript modules (import and export), so Babel will be used to convert the code into a backwards-compatible version.
