I have a Next.js app that I want to build into a Docker image and run as a container later. I'm using the Dockerfile from https://nextjs.org/docs/deployment#docker-image.
When I run docker build ., everything works fine until Step 10/23:
yarn run v1.22.15
$ next build
info - Checking validity of types...
info - Creating an optimized production build...
Error: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /app/node_modules/@next/swc-linux-x64-gnu/next-swc.linux-x64-gnu.node)
I found out that this is caused by SWC and alpine, but does anyone know how to solve this?
Maybe this can help: https://github.com/vercel/next.js/issues/30713
RUN rm -r node_modules/@next/swc-linux-x64-gnu
Adding that line, followed by yarn install, actually fixes that bug.
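For context, here is a minimal sketch of how that line could slot into the builder stage of the Next.js example Dockerfile; the base image, stage name, and surrounding commands are assumptions based on that example, not the exact file:
FROM node:16-alpine AS builder
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
# workaround: remove the glibc-linked SWC binary that cannot load on Alpine (musl), then reinstall
RUN rm -r node_modules/@next/swc-linux-x64-gnu && yarn install
RUN yarn build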
For us, some of the team members had older versions of npm, and that created the problem in package-lock.json.
The solution is to delete node_modules and package-lock.json from the project and run npm install.
Note: if you are building a Docker image and your Dockerfile has a COPY package*.json ./ line, then the new package-lock.json has to be pushed to the repository from which the build will happen.
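A minimal sketch of that sequence on the command line (the commit message is just an example):
rm -rf node_modules package-lock.json
npm install                     # regenerates package-lock.json with your current npm version
git add package-lock.json
git commit -m "Regenerate package-lock.json"
git push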
Related
I have an issue with one dependency in my yarn.lock file. The issue is with ldapjs: the latest version has a bug with special characters in the user or password, so I want to freeze it at the latest working version, which is 1.0.2.
When I committed my code to the master branch, the build step of this project started to fail with the message in the title.
Here is my Dockerfile:
FROM repository/node-oracle:10.15.3
LABEL maintainer="Me"
RUN yarn cache clean
# Add Tini
ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
WORKDIR /usr/src/auth
COPY . .
RUN yarn install --frozen-lockfile --non-interactive --silent
ENV PATH /usr/src/auth/node_modules/.bin:$PATH
EXPOSE 3000
CMD ["node", "./bin/www"]
Is there any workaround to make this work?
Also, as extra info: I was able to run the pipeline with this step in a feature branch; the message only started appearing in the develop and master branches.
[UPDATE]
These are the dependencies that were updated and frozen in my yarn.lock file:
activedirectory@^0.7.2:
  version "0.7.2"
  resolved "https://registry.yarnpkg.com/activedirectory/-/activedirectory-0.7.2.tgz#19286d10c6b24a98cc906dc638256191686fa91f"
  integrity sha1-GShtEMaySpjMkG3GOCVhkWhvqR8=
  dependencies:
    async ">= 0.1.22"
    bunyan ">= 1.3.5"
    ldapjs "=1.0.2"
    underscore ">= 1.4.3"

ldapjs@1.0.2:
  version "1.0.2"
  resolved "https://registry.yarnpkg.com/ldapjs/-/ldapjs-1.0.2.tgz#346e040a95a936e90c47edd6ede5df257dd21ee6"
  integrity sha512-XzF2BEGeM/nenYDAJvkDMYovZ07fIGalrYD+suprSqUWPCWpoa+a4vWl5g8o/En85m6NHWBpirDFNClWLAd77w==
  dependencies:
    asn1 "0.2.1"
    assert-plus "0.1.5"
    bunyan "0.22.1"
    nopt "2.1.1"
    pooling "0.4.6"
  optionalDependencies:
    dtrace-provider "0.2.8"
I was stuck on the same error, and the issue was that my yarn.lock file was not up to date. I followed the following link and it fixed my issue.
Apparently, I just had to run yarn install to update my yarn.lock file and push to the repository.
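A quick sketch of that fix (the commit message is just an example):
yarn install                    # brings yarn.lock back in sync with package.json
git add yarn.lock
git commit -m "Update yarn.lock"
git push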
Just an update: after a few attempts I was finally able to do what I wanted. Removing the ^ from ldapjs and from activedirectory (which pulls in the ldapjs library) did the job as expected.
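For reference, the pinned entries in package.json would then look roughly like this (a sketch; the versions are taken from the lockfile excerpt above, and the rest of the dependency list is omitted):
"dependencies": {
  "activedirectory": "0.7.2",
  "ldapjs": "1.0.2"
}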
Sometimes the error occurs if yarn install is run from a folder that contains no yarn.lock file, for example when building inside a Docker image that has separate frontend and backend folders.
Solution 1
In that case, go to the frontend folder that contains the package.json and yarn.lock files and run yarn install from there.
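A minimal sketch, assuming the folder is named frontend (the name is just an example):
cd frontend        # the folder that actually contains package.json and yarn.lock
yarn install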
Solution 2
Run yarn add <package>, which will generate a yarn.lock file in the project base folder if the command is run from there. Copy the contents of that file into the existing yarn.lock. This should solve the problem. Here is a link for yarn add.
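A rough sketch of Solution 2 on the command line (the frontend path is a hypothetical example):
yarn add <package>                  # run from the project base folder; generates/updates yarn.lock there
cp yarn.lock frontend/yarn.lock     # copy its contents over the frontend's existing lockfile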
I have my website wrapped up and wanted to containerize it for experience as I've never used Docker before. It's built on Gatsby. I did a fresh install of Docker and am running into two issues:
If I try to create an image in a Linux container, it seems to work, but I can't actually run it. I get the following error: "Error in "/app/node_modules/gatsby-transformer-sharp/gatsby-node.js": 'win32-x64' binaries cannot be used on the 'linuxmusl-x64' platform. Please remove the 'node_modules/sharp' directory and run 'npm install' on the 'linuxmusl-x64' platform."
I tried the above, uninstalling and reinstalling sharp in my project, to no avail. I'm not even using sharp, nor do I know what it is, though.
If I switch to Windows containers, I can't even create an image as I get the following:
"no matching manifest for windows/amd64 10.0.18363 in the manifest list entries"
My Dockerfile is as follows:
FROM node:13.12.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install --silent
RUN npm install react-scripts#3.4.1 -g --silent
# add app
COPY . ./
# start app
CMD ["npm", "start"]
and my .dockerignore contains
node_modules
build
Dockerfile
Dockerfile.prod
.git
Things I've tried:
This tutorial > https://mherman.org/blog/dockerizing-a-react-app/ (Where I got the Dockerfile text)
This tutorial >https://www.robinwieruch.de/docker-create-react-app-development (And its Dockerfile at one point)
Changing the FROM node: version to 14.4.0 and 14, with and without -alpine.
Uninstalling and re-installing sharp
Uninstalling sharp entirely and trying to run it that way (I still get the sharp error for some reason)
Reading the documentation, which for whatever reason only tells you how to launch a default application (such as create-react-app) or one pulled from somewhere, but not how to do so for your own website.
Thanks
I deploy my Node.js app via an AWS ECS Docker container using CircleCI.
However, each time I build a new image it runs npm install (because it's in my Dockerfile) and downloads and builds all the node modules again, then uploads a new image to the AWS ECR repository.
As my environment stays the same, I don't want it to rebuild those packages every time. So do you think it is possible for Docker to actually update an existing image rather than building a new one from scratch with all the modules every time? Is this generally a good practice?
I was thinking the following workflow:
Check if there are any new Node packages compared to the previous image
If yes, run npm build
If not, just keep the old node_modules folder, don't run build and simply update the code
Deploy
What would be the best way to do that?
Here's my Dockerfile
FROM node:12.18.0-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . .
COPY package.json package-lock.json* ./
RUN npm install
RUN npm install pm2 -g
EXPOSE 3000
CMD [ "pm2-runtime", "ecosystem.config.js"]
My CircleCI workflow (from .circleci/config.yml):
workflows:
  version: 2.1
  test:
    jobs:
      - test
      - aws-ecr/build-and-push-image:
          create-repo: true
          no-output-timeout: 10m
          repo: 'stage-instance'
Move the COPY . . line after the RUN npm install line.
The way Docker's layer caching works, it will skip re-running a RUN line if it knows that it's already run it. So given this Dockerfile fragment:
FROM node:12.18.0-alpine
WORKDIR /usr/src/app
COPY package.json package-lock.json* ./
RUN npm install
Docker keeps track of the hashes of the files it COPYs in. When it gets to the RUN line, if the image up to this point is identical to one it's previously built, it will also skip over the RUN line.
If you have COPY . . first, then if any file in your source tree changes, it will invalidate the layer cache for everything afterwards. If you only copy package.json and the lock file first, then npm install only gets re-run if either of those two files change.
(CircleCI may or may not perform the same layer caching, but "install dependencies, then copy the application in" is a typical Docker-level optimization.)
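For reference, here is a sketch of the question's Dockerfile with the lines reordered this way (base image and commands are kept from the question):
FROM node:12.18.0-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# copy only the dependency manifests first so the npm install layers stay cached
COPY package.json package-lock.json* ./
RUN npm install
RUN npm install pm2 -g
# copy the application source last; source changes no longer invalidate the install layers
COPY . .
EXPOSE 3000
CMD [ "pm2-runtime", "ecosystem.config.js"]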
I installed all the npm dependencies inside the container, so I don't want to install dependencies on my host machine. Everything is okay, it works. But there is a problem with WebStorm.
It says "Unresolved function" for npm dependencies.
How do I fix that problem? How can I say, "Hey WebStorm, the node_modules directory is inside the container :)"?
WebStorm expects node_modules to be located in the project folder.
You can try setting NODE_PATH in the Node.js run configuration template: Run | Edit Configurations..., expand the Templates node, select the Node.js configuration, and specify NODE_PATH in the Environment variables field.
Please see comments in https://youtrack.jetbrains.com/issue/WEB-19476.
But I'm not sure it will work for modules installed in a container...
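If you do try it, the value entered in the Environment variables field would look something like the line below; the path is a hypothetical example and should point at wherever node_modules actually lives for your setup.
NODE_PATH=/usr/src/app/node_modules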
Even if you expose the container's node_modules folder, it's likely not to work as expected, because npm dependencies are built for their host environment, which will not be the same as your local dev machine. This applies even more strongly if you want to run CLI development tools, which are sometimes compiled binaries.
TL;DR
The trick is to update the path inside the Docker container from /your-path to /opt/project.
Detailed solution
The issue with WebStorm is that it doesn't let you choose the path it picks node_modules up from, but it does have a default path it looks in. I was facing the same issue while setting up remote debugging for a backend Node service running inside Docker.
You need to update your Dockerfile. Suppose your Dockerfile was something like this:
# pull official base image
FROM node:12.11-buster
# set working directory
WORKDIR /app
COPY ./package.json ./package-lock.json /app/
RUN npm install
With this setup, WebStorm won't detect node_modules for remote debugging or any other integration you need.
But if you update the Dockerfile to something like this:
# pull official base image
FROM node:12.11-buster
# set working directory
# This is done specifically to enable debugging in WebStorm.
WORKDIR /opt/project
COPY ./package.json ./package-lock.json ./.npmrc /opt/project/
RUN npm install
Then WebStorm is able to detect everything as expected.
The trick is to update the path from /your-path to /opt/project.
And your docker-compose file should look something like this:
version: "3.7"
services:
backend-service:
build:
dockerfile: ./Dockerfile.local
context: ./
command: nodemon app.js
volumes:
- ./:/opt/project
- /opt/project/node_modules/
ports:
- 6060:6060
You can find more details around this in this blog post.
I'm trying to use Docker as a dev environment in Windows.
The app I'm developing uses Node, npm and Bower for setting up the dev tools, and Grunt for its task running, and includes a live reload so the app updates when the code changes. Pretty standard. It works fine outside of Docker, but I keep running into the Grunt error Fatal error: Unable to find local grunt no matter how I try to do it inside Docker.
My latest effort involves installing all the npm and bower dependencies to an app directory in the image at build time, as well as copying the app's Gruntfile.js to that directory.
Then in Docker Compose I create a volume that is linked to the host app and ask Grunt to watch that volume using Grunt's --base option. It still won't work; I still get the fatal error.
Here are the Docker files in question:
Dockerfile:
# Pull base image.
FROM node:5.1
# Setup environment
ENV NODE_ENV development
# Setup build folder
RUN mkdir /app
WORKDIR /app
# Build apps
#globals
RUN npm install -g bower
RUN echo '{ "allow_root": true }' > /root/.bowerrc
RUN npm install -g grunt
RUN npm install -g grunt-cli
RUN apt-get update
RUN apt-get install ruby-compass -y
#locals
ADD package.json /app/
ADD Gruntfile.js /app/
RUN npm install
ADD bower.json /app/
RUN bower install
docker-compose.yml:
angular:
  build: .
  command: sh /host_app/startup.sh
  volumes:
    - .:/host_app
  net: "host"
startup.sh:
#!/bin/bash
grunt --base /host_app serve
The only way I can actually get the app to run at all in Docker is to copy all the files over to the image at build time, create the dev dependencies there and then, and run Grunt against the copied files. But then I have to run a new build every time I change anything in my app.
There must be a way? My Django app is able to do a live reload in Docker with no problems, as per Docker's own Django quick-start instructions. So I know live reload can work with Docker.
PS: I have tried leaving the Gruntfile on the Volume and using Grunt's --gruntfile option but it still crashes. I have also tried creating the dependencies at Docker-Compose time, in the shared Volume, but I run into npm errors to do with unpacking tars. I get the impression that the VM can't cope with the amount of data running over the shared file system and chokes, or maybe that the Windows file system can't store the Linux files properly. Or something.